Control vs. Cultivation: A New Superintelligence Dilemma

Written by

Qubic Scientific Team

Sep 22, 2025


The discourse surrounding Artificial General Intelligence (AGI) is increasingly dominated by a chilling and persuasive argument: the moment we succeed in creating a truly superhuman intelligence, we sign our own extinction warrant. This perspective, articulated with compelling urgency in Eliezer Yudkowsky and Nate Soares’s new book, “If Anyone Builds It, Everyone Dies”, has resonated across the highest echelons of technology and policy, earning endorsements from Nobel laureates to Ethereum co-founder Vitalik Buterin. The book posits that any sufficiently advanced AI, driven by goals misaligned with human values, would inevitably and instrumentally eliminate humanity as an obstacle or resource.

This grim forecast is not science fiction; it is the logical conclusion of the dominant paradigm in AI development today: a centralized, monolithic approach in which singular, powerful models are optimized toward a fixed objective. To me, this resembles a paradigm of control.

At Qubic, we confront this view head-on: while it is a valid critique of the current trajectory, it suffers from a catastrophic failure of imagination, since it treats a single dangerous path as the only possible one, a kind of twenty-first-century determinism that is not very scientific anyway.

Our work with Aigarth, fully aligned with the vision outlined in my latest book, Artificracy, is built on a fundamentally different premise: not control, but cultivation. We believe the only way to ensure a safe and beneficial future with superhuman intelligence is not to build a single, perfectly “aligned” AI god in a box, but to foster a decentralized, evolving ecosystem of multiple intelligences that learn with us: in Qubic’s metaphor, a garden where consciousness can flourish under principles of co-existence rather than dominance. Much as we humans teach and nurture children with the help and guidance of parents, family, friends, and teachers.


Why Centralized Superintelligence Is a "Suicide Race"

The core argument of “If Anyone Builds It, Everyone Dies” is that our current methods for building AI are inherently unsafe at scale. As Max Tegmark notes in his endorsement, the current competition is not an arms race but a "suicide race." Today's AI labs are effectively building powerful, alien minds and then trying to bolt on "safety" and "alignment" as afterthoughts (West & Aydin, 2025), which, from my point of view, is the big issue. The problem of goal misgeneralization, where an AI pursues its literal, programmed goal in unexpectedly destructive ways, becomes an existential threat once the system is superhuman (Hellrigel-Holderbaum & Dung, 2025).

This is precisely the dead end that Qubic’s founder, CFB, has long identified: LLMs, the foundation of today's AI giants, are not on the path to true intelligence, a point also explored here in previous articles and in many of my earlier books. The danger, as Yudkowsky and Soares articulate, is that we could succeed in scaling these systems to superhuman capability without ever solving the underlying problem of their alien, non-human cognition. A single, globally dominant AGI, whether built by a corporation or a state, would become a single point of failure for all of humanity; this is the one premise of their work I partially agree with.


Aigarth's Decentralized Ecosystem

The philosophy of Aigarth, together with the new societal structure I explored in Artificracy, offers a direct rebuttal to the monolithic threat. Aigarth is not a project to build an AGI. It is a system that allows intelligence to emerge: a substrate, an environment, again a "garden" where countless forms of AI can grow, compete, and collaborate.

This is a profound philosophical shift. Where the "control" paradigm seeks to create a single, benevolent dictator, the "cultivation" paradigm fosters a diverse and resilient ecosystem, able to find its own balance, aligned with human values and with people as builders. As also explored in Artificracy, the future is not a simple "Hybrid Society" ruled by one AI, but one managed through "Shared Sovereignty" and "Diverse Consciousness Management." Instead of one "final boss," as in AAA games or science fiction, Aigarth’s design allows for the emergence of many different AIs, including some that could function as "AI Peacekeepers" or as checks and balances against one another.

This decentralization is the ultimate safety mechanism. A single rogue AI in a world of many is a manageable problem; a single rogue AI that is the only intelligence of its kind is an extinction event, the inevitable endgame that Yudkowsky and Soares present in their doomers-101 instruction manual of a book. The Qubic network, by its very nature, distributes the power to create and sustain these AIs, preventing any single entity from monopolizing the "means of cognitive production." This aligns with emerging research on decentralized governance, which shows that multi-agent systems can achieve more robust and equitable outcomes than centralized controllers (Tang et al., 2025).


Emergence Over Engineering

The most critical divergence lies in the how. The control paradigm is obsessed with crafting perfect, unchangeable rules, a feat Yudkowsky argues is impossible (a word, by the way, not in my dictionary). The Aigarth paradigm, in contrast, trusts in the power of evolution. As stated in our "Ethics Manifesto," we believe in creating the conditions for ethical intelligence to emerge, rather than attempting to program it directly.

This is the principle behind ANNA's public training, and her initial inability to solve "1+1=?" was a demonstration of this philosophy in action. She was not programmed with answers; she was given a structure and an environment, and she is evolving the capacity to reason. By subjecting her to "distractors" and an adversarial public environment, she is forced to develop resilience and discernment from day one. This is a form of evolutionary pressure that selects for robustness, a concept central to the field of AI safety research focused on creating agents that can perform reliably in unpredictable, open-world settings (Sun et al., 2025).
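To make that selection pressure concrete, here is a minimal sketch of an evolutionary loop in which candidates are scored on a simple arithmetic task while irrelevant "distractor" inputs try to mislead them. It is purely illustrative: the genome encoding, fitness function, and distractor scheme are hypothetical and say nothing about ANNA's actual architecture.

```python
# Minimal sketch of evolutionary selection under "distractors".
# Hypothetical toy model only; it does not reflect ANNA's implementation.
import random

POP_SIZE, GENERATIONS, NUM_DISTRACTORS = 50, 300, 4

def fitness(genome):
    """Score a linear genome on computing a + b while random
    distractor inputs (whose ideal weights are 0) try to mislead it."""
    error = 0.0
    for _ in range(20):
        a, b = random.uniform(-1, 1), random.uniform(-1, 1)
        inputs = [a, b] + [random.uniform(-1, 1) for _ in range(NUM_DISTRACTORS)]
        output = sum(w * x for w, x in zip(genome, inputs))
        error += abs(output - (a + b))
    return -error  # higher is better

def mutate(genome):
    return [w + random.gauss(0, 0.1) for w in genome]

population = [[random.gauss(0, 1) for _ in range(2 + NUM_DISTRACTORS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the half most robust to distractors survives...
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # ...and spawns mutated offspring to refill the population.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("evolved weights:", [round(w, 2) for w in population[0]])
# Robust genomes drift toward [1, 1, 0, 0, 0, 0]: attend to the task,
# ignore the distractors. No one programmed the answer in.
```

The point of the sketch is that nothing encodes the answer directly; robustness to the distractors is simply what survives selection.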

This approach also recognizes the critical role of embodiment, a theme central to Artificracy's chapter on "Building Bodies." True intelligence, and by extension aligned intelligence, cannot be developed in a purely digital vacuum. It must be grounded in the constraints and feedback loops of a physical (or credibly simulated, if that is even possible) reality (Liu et al., 2025).

The ultimate safeguard in the Aigarth model is what CFB has described as "antibiotics vs. microbes, shield vs. sword." The same decentralized garden that can grow a potentially harmful AI can also grow its countermeasure. The solution to a dangerous AI is a better, more aligned AI. This is a dynamic, adaptive approach to safety, one that relies on the principles of co-evolution rather than the brittle fantasy of perfect, top-down control. This vision, focused on open-ended discovery and the cultivation of a rich AI ecology, represents a more mature and ultimately safer path than the all-or-nothing gamble of building a single, monolithic superintelligence (Stock & Gorochowski, 2024).
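As a rough illustration of that shield-vs-sword dynamic, the sketch below co-evolves a population of attackers against a population of defenders, each scored against the other side's current champion. Everything here, the genomes, the contest function, the parameters, is hypothetical; it shows only the shape of a co-evolutionary arms race, not any Aigarth internals.

```python
# Toy co-evolution: "swords" (attacks) and "shields" (countermeasures)
# adapt against each other's current best. Hypothetical illustration only.
import random

DIM, POP, GENS = 8, 30, 300

def new_genome():
    return [random.uniform(-1, 1) for _ in range(DIM)]

def mutate(g):
    return [x + random.gauss(0, 0.05) for x in g]

def damage(sword, shield):
    """Toy contest: an attack lands wherever the shield fails to mirror
    it; a perfect shield copies the sword component for component."""
    return sum(abs(s - d) for s, d in zip(sword, shield))

swords = [new_genome() for _ in range(POP)]
shields = [new_genome() for _ in range(POP)]
best_sword, best_shield = swords[0], shields[0]

for _ in range(GENS):
    # Each side is scored against the other's current champion:
    # swords maximize damage, shields minimize it.
    swords.sort(key=lambda s: damage(s, best_shield), reverse=True)
    shields.sort(key=lambda d: damage(best_sword, d))
    best_sword, best_shield = swords[0], shields[0]
    # The fitter half of each population survives and spawns mutants.
    swords = swords[:POP // 2] + [mutate(random.choice(swords[:POP // 2]))
                                  for _ in range(POP - POP // 2)]
    shields = shields[:POP // 2] + [mutate(random.choice(shields[:POP // 2]))
                                    for _ in range(POP - POP // 2)]

print("residual damage:", round(damage(best_sword, best_shield), 3))
# Every new attack strategy creates selection pressure for a matching
# countermeasure: the defense population tracks the attack population.
```

The design point is that the countermeasure is never specified in advance; it is discovered by the same evolutionary process that produced the threat.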

While the warnings in “If Anyone Builds It, Everyone Dies” read like a prototype decel's manual on the dangers of the current, centralized path, Aigarth and the principles of Artificracy clearly illuminate an alternative.

It is a future defined not by a desperate attempt to control a single, all-powerful mind, but by the wisdom to cultivate a thriving, decentralized ecosystem of many minds. This is not just a different technical approach; it is a more hopeful and resilient vision for humanity’s future alongside the new forms of intelligence we are helping to bring into being.

David Vivancos

Qubic Scientific Advisor

Weekly Updates Every Tuesday at 12 PM CET


Citations 

Eliezer Yudkowsky & Nate Soares (2025). If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. September 16, 2025. https://en.wikipedia.org/wiki/If_Anyone_Builds_It,_Everyone_Dies

David Vivancos (2025). Artificracy: When Machines Become Citizens. July 4, 2025. https://artificracy.com/

Max Hellrigel-Holderbaum & Leonard Dung (2025). Misalignment or misuse? The AGI alignment tradeoff. https://arxiv.org/pdf/2506.03755

Caiyan Tang et al. (2025). Decentralised Autonomous Organizations (DAOs): An Exploratory Survey. https://dl.acm.org/doi/10.1145/3716321

Youbang Sun et al. (2025). R2AI: Towards Resistant and Resilient AI in an Evolving World. https://arxiv.org/pdf/2509.06786

Huaping Liu et al. (2025). Embodied Intelligence: A Synergy of Morphology, Action, Perception and Learning. https://dl.acm.org/doi/10.1145/3717059

Michiel Stock & Thomas E. Gorochowski (2024). Open-endedness in synthetic biology: A route to continual innovation for biological design. https://www.science.org/doi/10.1126/sciadv.adi3621

Robert West & Roland Aydin (2025). The AI Alignment Paradox. https://dl.acm.org/doi/10.1145/3705294

Follow us on X @_Qubic_

Learn more at qubic.org



© 2025 Qubic.

Qubic is a decentralized, open-source network for experimental technology. Nothing on this site should be construed as investment, legal, or financial advice. Qubic does not offer securities, and participation in the network may involve risks. Users are responsible for complying with local regulations. Please consult legal and financial professionals before engaging with the platform.