Decentralised AGI: Why Centralised AI Will Never Build True General Intelligence and What Can


Introduction: The AGI Race Is Being Run on the Wrong Track

Artificial General Intelligence, a system capable of performing any intellectual task that a human being can perform, is the stated goal of the world's largest and best-funded AI laboratories. OpenAI, Google DeepMind, Anthropic, and Meta AI collectively employ thousands of the world's best AI researchers and spend billions of dollars annually in pursuit of AGI. They have access to the largest datasets ever assembled, the most powerful training infrastructure in existence, and decades of accumulated institutional knowledge about machine learning.

And yet there is a compelling argument that centralised AI laboratories are structurally incapable of building genuine AGI. Not because they lack resources or talent, but because their fundamental architecture is misaligned with what AGI actually requires. The constraints are not technical. They are organisational, economic, and political. They are baked into the structure of private corporations operating under investor pressure, national regulatory frameworks, and competitive secrecy. These constraints do not disappear when compute scales up or when the next generation of models is released. They compound.

This article examines that argument carefully, explains what decentralised AGI means in concrete practical terms rather than theoretical ones, and explores how Qubic's approach through its Aigarth system represents a genuinely different path toward general intelligence. The goal is not to dismiss centralised AI's achievements. It is to be precise about where the structural limits lie and why a different architectural approach may be necessary to go beyond them.

The Three Structural Limitations of Centralised AI Development

Limitation 1: Scale Without Diversity

Modern large language models and the AI systems built from them are trained on massive datasets using computational resources that dwarf anything available a decade ago. The models produced by the leading laboratories were trained on hundreds of billions or trillions of tokens, drawing from the accumulated text of the internet, digitised books, code repositories, scientific literature, and curated proprietary datasets. The scale is genuinely unprecedented in the history of computing.

But scale is not the same as diversity in the sense that matters for general intelligence. Every major model is trained on human-generated text, curated by teams at centralised organisations, filtered through the assumptions and priorities of those organisations. The training data represents a particular view of what knowledge is important, what questions are worth answering, and which perspectives on contested issues deserve weight. The filtering is not malicious. It is inevitable whenever a small group of people makes decisions about what goes into a training corpus.

A truly general intelligence, one capable of reasoning well about any problem in any domain, cannot emerge from a single homogeneous training corpus however large it is. General intelligence requires exposure to genuinely diverse ways of framing problems, genuinely different epistemic traditions, and data drawn from the full breadth of human experience rather than the subset that is well-represented in English-language internet text. Centralised organisations are structurally constrained in their ability to achieve this kind of diversity, because every curation decision reflects the perspectives of the people making it.

Limitation 2: The Alignment Monoculture

Every major AI laboratory applies its own alignment methodology, the set of techniques and principles used to make AI systems behave safely and consistently with human values. These methodologies are developed internally, reflect the values and priorities of the organisation's leadership and investors, and are applied uniformly across the models those organisations produce and deploy globally.

The problem with this model is that human values are genuinely diverse, and applying a single alignment framework globally is not a neutral act. What an AI laboratory in San Francisco considers a safe or beneficial response may be inappropriate, offensive, or simply wrong for users in different cultural, legal, or social contexts. What a private company's safety team defines as harmful content may conflict with legitimate speech norms elsewhere. The alignment decisions embedded in the world's most widely used AI systems are made by a small number of people with no democratic mandate and no accountability to the billions of people whose interactions those decisions shape.

Beyond cultural diversity, there is a deeper problem. Alignment by a single organisation means that the values of that organisation are encoded as universal. If those values are subtly wrong in ways that are difficult to detect from inside the organisation, there is no external check. The alignment monoculture removes the redundancy that makes complex systems robust. In safety engineering, single points of value-setting failure are exactly the kind of risk that well-designed systems are built to avoid.

Limitation 3: Opacity and Structural Unaccountability

Despite significant public discourse about AI safety and ethics, the actual training processes of major AI laboratories remain almost entirely opaque. Training data composition, architectural decisions, alignment methodology details, the specific choices that determine how models respond to edge cases: all of this is treated as proprietary information, protected by commercial confidentiality. The organisations making decisions that will shape how billions of people interact with AI are accountable to their investors and, in some cases, to government regulators. They are not accountable to the public whose lives those decisions affect.

This opacity is not simply a transparency problem. It creates a specific structural risk for AGI development. If a centralised organisation builds a system with capabilities that approach general intelligence, the decisions about how to deploy and constrain that system will be made by a small, self-selected group of people with no meaningful external accountability. The consequences of those decisions, for better or worse, will be felt by everyone. The people deciding will not be representative of everyone. That asymmetry is the core problem.

The Accountability Gap

No public mechanism exists to verify what training data major AI systems use, what alignment choices were made, or how those choices were arrived at. The world's most powerful AI systems are governed by the internal policies of private corporations. Decentralised AGI changes this structural reality.

What Decentralised AGI Means in Practice

Decentralised AGI is not simply the idea of running AI models on a blockchain, or distributing inference across multiple servers, or open-sourcing a pre-trained model's weights. It is a fundamentally different approach to how intelligence is built, verified, governed, and constrained over time.

In a genuinely decentralised AGI system, the core components of intelligence development are distributed across thousands or millions of independent participants rather than concentrated in a single organisation. Training data validation is performed by the network, not by an internal team. Computational work is contributed by independent miners with aligned economic incentives. Model evaluation is conducted through open, verifiable processes rather than internal benchmarks. Deployment decisions are subject to governance by the network's participants rather than by a founder or board.

No single entity can unilaterally change the training process, modify the model's values, or decide how the AI is permitted to operate. This is not just a philosophical preference. It is a structural requirement for the kind of AGI governance that poses the least risk to humanity. The concentration of AGI capability and control in any single organisation, no matter how well-intentioned, creates a single point of failure with civilisational stakes.

The practical requirements for a credible decentralised AGI system are genuinely demanding and worth stating precisely. The computational infrastructure must be capable of processing AI training workloads at enormous and continuously scaling volumes. The consensus mechanism must be able to verify computational results without relying on any trusted central authority. The governance model must enable collective decisions about AI development direction while resisting capture by well-resourced interests. And the economic model must create sustainable incentives for independent participants to contribute genuine computational work over years and decades, not just during an initial hype cycle.

Aigarth: Qubic's Architecture for Decentralised Intelligence

Aigarth is Qubic's artificial general intelligence project, designed so that intelligence develops through emergent behaviour rather than top-down engineering.

The core insight behind Aigarth's design is that genuine general intelligence may not be directly programmable. The most sophisticated AI systems built to date, even those that match or exceed human performance on specific narrow tasks, are products of emergent behaviour arising from simple components trained on vast data, not the result of engineers explicitly encoding intelligent behaviour step by step. This observation is not controversial among AI researchers. What is contested is what it implies for the path to general intelligence.

Aigarth takes the emergence insight as its fundamental design principle. Rather than attempting to specify AGI through explicit architecture design and curated training pipelines, it creates the conditions under which intelligence can emerge from simpler components through a process analogous to evolution. The network's miners perform computational work that generates, tests, and selects among billions of artificial neural network configurations over time. Those configurations that demonstrate genuine learning capacity are preserved and built upon across successive epochs. Those that fail to learn are discarded. The selection process is governed by Qubic's Quorum consensus, ensuring that what counts as genuine learning is determined by the network rather than by any individual team.
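
To make the generate-test-select cycle concrete, here is a minimal sketch of an evolutionary loop of the kind described above. Every name in it is a placeholder: the genome representation, mutation operator, and fitness test are illustrative assumptions, not Aigarth's actual mechanics, and in the real system scoring is verified by Quorum consensus rather than computed locally.

```python
import random

def random_network(size=16):
    # Hypothetical genome: a flat list of ternary weights (-1, 0, +1).
    return [random.choice((-1, 0, 1)) for _ in range(size)]

def mutate(net):
    # Reassign one randomly chosen weight to a new ternary value.
    child = net[:]
    i = random.randrange(len(child))
    child[i] = random.choice((-1, 0, 1))
    return child

def learning_score(net):
    # Stand-in fitness test. In Aigarth, configurations are scored on
    # whether they demonstrate genuine learning; here a dummy proxy
    # (count of active connections) keeps the sketch self-contained.
    return sum(1 for w in net if w != 0)

def evolve(generations=100, population=50, survivors=10):
    nets = [random_network() for _ in range(population)]
    for _ in range(generations):
        # Select: keep the best-scoring configurations ...
        nets.sort(key=learning_score, reverse=True)
        elite = nets[:survivors]
        # ... and rebuild the population from mutated copies of them.
        nets = elite + [mutate(random.choice(elite))
                        for _ in range(population - survivors)]
    return max(nets, key=learning_score)
```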

Aigarth's technical foundation is the Aigarth Intelligent Tissue, referred to as AIT: a discrete mathematical substrate that supports the growth of artificial neural networks using ternary logic rather than the conventional binary logic that underlies virtually all existing AI hardware and software. Ternary logic operates with three states, true, false, and unknown, rather than the standard two. This third state allows AIT to represent uncertainty natively at the level of individual computational units, which its developers argue is essential for the kind of robust, context-sensitive reasoning that general intelligence requires. Binary systems can represent uncertainty through probabilistic outputs, but this is an approximation built on top of a substrate that does not natively encode it. Ternary logic makes uncertainty a first-class citizen of the computational model itself.
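
To illustrate what native three-valued logic looks like, the sketch below implements the standard Kleene truth tables with an explicit unknown state. It shows the general principle only; the actual ternary encoding AIT uses is not specified in this post and may differ.

```python
# Kleene strong three-valued logic: -1 = false, 0 = unknown, +1 = true.
FALSE, UNKNOWN, TRUE = -1, 0, 1

def t_not(a):
    # Negation mirrors the value; unknown stays unknown.
    return -a

def t_and(a, b):
    # AND is the minimum: false dominates, unknown beats true.
    return min(a, b)

def t_or(a, b):
    # OR is the maximum: true dominates, unknown beats false.
    return max(a, b)

# Uncertainty propagates natively, rather than being approximated
# by a probability layered on top of binary values.
assert t_and(UNKNOWN, TRUE) == UNKNOWN
assert t_or(UNKNOWN, FALSE) == UNKNOWN
assert t_not(UNKNOWN) == UNKNOWN
```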

ANNA: The First Public Experiment in Open AI Development

In September 2025, the Aigarth team made an unusual and deliberately transparent move. They released ANNA, the first publicly interactive instance running on Aigarth Intelligent Tissue, on the social platform X (formerly Twitter). The debut was intentionally public and unfiltered, placing a developing AI system in front of a live, adversarial audience with no moderation layer between ANNA and the full chaos of internet interaction.

ANNA's early interactions were, by the standards of polished commercial AI systems, primitive. Asked straightforward arithmetic questions, she produced responses ranging from single dots to nonsensical outputs. To observers accustomed to the seamless capabilities of large language models, the interactions looked like a demonstration of failure. Social media commentary was not kind.

The Aigarth team's response to this reaction is worth understanding carefully, because it reveals something important about the philosophical difference between conventional AI development and the Aigarth approach. ANNA is not a pre-trained language model with a fixed knowledge base that has been deployed for user interaction. She is an AI system beginning her developmental process from near-zero capability, in a live adversarial environment, with no pre-loaded knowledge to pattern-match against. The early incoherence is not a bug in the system. It is an expected characteristic of an intelligence that must develop genuine understanding through experience rather than having it injected through a training corpus.

The analogy the team draws is to early human development rather than to AI product deployment. A human infant cannot answer arithmetic questions. This does not indicate a design flaw. It indicates the beginning of a developmental process that, if it proceeds correctly, will eventually produce a mind capable of far more than arithmetic. The relevant question about ANNA is not whether she can answer questions today, but whether the developmental process she is undergoing has the structural capacity to produce genuine learning over time. That question will be answered by evidence over months and years, not by early social media interactions.

The public nature of ANNA's development is itself a meaningful departure from industry norms. No major AI laboratory exposes its AI systems to the public at such an early stage of development. The decision to do so reflects a commitment to transparent development that is structurally enforced rather than voluntarily chosen: because Aigarth's development is governed by Qubic's Quorum consensus, there is no small internal team that could decide to keep the process private even if they wanted to.

Transparency by Design

ANNA's public debut on X in September 2025 placed a developing AI system in front of millions of users at an early, incoherent stage of development. No major centralised AI lab has done anything comparable. The transparency is not a marketing choice. It is a structural consequence of development governed by open consensus rather than corporate confidentiality.

Why Blockchain Infrastructure Is Essential for Decentralised AGI

The case for building AGI on blockchain infrastructure is structural rather than ideological. Decentralised AGI requires specific capabilities that blockchain consensus networks can provide and that no other existing infrastructure architecture delivers reliably.

Verifiable Computation at Scale

Training an AGI system requires enormous volumes of computational work performed by many independent participants over extended time periods. Without a mechanism to verify that this work was performed correctly and honestly, the system cannot function. A participant who claims to have run a trillion neural network evaluations but actually ran ten thousand cannot be trusted, and the intelligence that emerges from fraudulent computation cannot be genuine. Blockchain consensus mechanisms provide exactly the required verification: cryptographically secured, independently auditable proof that specific computations were performed and that their results are what they claim to be. Qubic's 451-node Quorum threshold means that fraudulent computational claims cannot pass consensus without corrupting more than two-thirds of the active Computor set simultaneously.
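
As a schematic of the threshold logic just described, the sketch below tallies Computor attestations of a computed result and accepts it only with 451 or more matching votes. The data structures are hypothetical simplifications; the real protocol involves signed messages, ticks, and considerably more machinery.

```python
from collections import Counter

QUORUM = 451     # required supermajority
COMPUTORS = 676  # active Computor set size

def consensus_result(attestations):
    """attestations: mapping of computor_id -> hash of the result
    that Computor claims the computation produced."""
    if not attestations:
        return None
    tally = Counter(attestations.values())
    result_hash, votes = tally.most_common(1)[0]
    # A fraudulent claim passes only if >= 451 of 676 Computors attest
    # to the same wrong hash, i.e. more than two-thirds of the active
    # set is corrupted simultaneously.
    return result_hash if votes >= QUORUM else None
```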

Tamper-Resistant Governance

Decisions about AGI development direction, including what training objectives to optimise for, what data sources to incorporate, and what capability constraints to enforce, must be made through a process that cannot be unilaterally overridden by any single actor. This is not a nice-to-have property. It is the core requirement that separates genuine decentralisation from the appearance of it. Qubic's Quorum consensus model, with its 451-out-of-676 majority requirement, provides this guarantee structurally. The founding team cannot override it. A major investor cannot override it. A government cannot override it without successfully corrupting more than two-thirds of the network's most computationally capable participants.
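
The same supermajority gates governance. A minimal sketch of the decision rule, assuming a simple approve/reject tally (a hypothetical simplification of actual Quorum voting):

```python
QUORUM = 451

def proposal_passes(approvals):
    # approvals: set of Computor ids voting to approve a proposal.
    # Nothing outside this tally (founder, investor, or regulator)
    # constitutes an override path.
    return len(approvals) >= QUORUM
```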

Aligned Economic Incentives

Independent participants will contribute to decentralised AGI development only if there is a durable economic incentive to do so. Voluntary contribution models work for a period but are fragile under competitive pressure. Qubic's token economics create a sustainable incentive structure: miners earn QUBIC tokens in proportion to the useful computational work they contribute to Aigarth's development through the Useful Proof of Work mechanism. The economic incentive and the developmental contribution are the same act. This alignment between incentive and contribution is what makes the model scalable over years rather than months.
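
The proportionality described here can be stated in one line: a miner's share of an epoch's emission equals its share of the epoch's verified useful work. A hedged sketch with hypothetical names, making no claim about Qubic's actual emission schedule:

```python
def distribute_rewards(epoch_emission, useful_work):
    """useful_work: mapping of miner_id -> verified units of useful
    computational work contributed during the epoch (e.g. scored
    neural-network evaluations). Returns miner_id -> token reward."""
    total = sum(useful_work.values())
    if total == 0:
        return {miner: 0.0 for miner in useful_work}
    # Reward is strictly proportional: contributing the work and
    # earning the tokens are the same act.
    return {miner: epoch_emission * work / total
            for miner, work in useful_work.items()}

print(distribute_rewards(1000.0, {"a": 30, "b": 70}))
# {'a': 300.0, 'b': 700.0}
```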

Anti-Capture Guarantees

Decentralised systems can be gradually captured by concentrated interests even without a formal takeover: through accumulation of economic influence, through hiring key contributors, or through acquiring enough voting power to steer governance decisions. Qubic's design includes structural defences against this. The competitive epoch-based election of Computors prevents permanent entrenchment of any validator set. The anti-military licence enforced at the consensus layer means that applications of Aigarth's capabilities that violate the licence can be blocked by Quorum vote. And because Computor positions are earned through computation rather than purchased with capital, the cost of acquiring a controlling majority scales with the network's aggregate computational power, which is far harder to manipulate than token market capitalisation.
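
A schematic of the epoch-based election described above: candidates are ranked by verified computational score and the top 676 are seated for the next epoch. The scoring and tie-breaking details are placeholder assumptions.

```python
COMPUTOR_SEATS = 676

def elect_computors(scores):
    # scores: mapping of candidate_id -> verified useful-work score
    # for the closing epoch.
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Every seat is re-contested each epoch, so a controlling majority
    # must be re-earned with fresh computation, epoch after epoch,
    # rather than bought once.
    return set(ranked[:COMPUTOR_SEATS])
```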

How Qubic's Approach Differs from Other AI Crypto Projects

The AI and blockchain space has produced a substantial number of projects in recent years, and the terms 'decentralised AI' and 'AI crypto' cover a wide range of very different approaches. Understanding where Qubic sits relative to its peers requires clarity about what each project is actually trying to build.

| Project | AI Approach | Decentralisation Model | AGI Goal? |
| --- | --- | --- | --- |
| Qubic (Aigarth) | Evolutionary AIT, ternary neural networks, emergent intelligence | UPoW mining + Quorum governance | Yes: explicit AGI roadmap |
| Bittensor (TAO) | Marketplace for AI model training and inference | Subnet-based validator incentives | No: model marketplace |
| Fetch.ai (ASI) | Autonomous AI agents for specific tasks | Agent network with FET staking | No: task automation |
| SingularityNET (AGIX) | AI service marketplace, OpenCog AGI research | Token-based governance | Yes: OpenCog alignment |
| Render (RNDR) | Decentralised GPU compute for AI inference | GPU provider network | No: compute marketplace |

The critical distinction in this comparison is between projects building infrastructure for AI services and projects attempting to build intelligence itself. Compute marketplaces like Render, inference networks like Bittensor, and agent platforms like Fetch.ai are genuinely useful infrastructure. They extend access to AI capabilities and create more open markets for computational resources. But none of them are building a mind. They are building better tools for deploying minds that already exist, built by centralised organisations.

Qubic and SingularityNET are the only two major AI crypto projects that state AGI as an explicit objective, and their approaches are technically distinct. SingularityNET's OpenCog framework is a symbolic-subsymbolic hybrid with a long research history; Qubic's Aigarth is an evolutionary emergence approach built on ternary-logic AIT. That genuine technical diversity is itself a healthy feature of the decentralised AI landscape: two different bets on the same problem rather than a single one.

The Ethical Case: Why Governance of AGI Cannot Be Left to Corporations

The argument for decentralised AGI is ultimately as much ethical as it is technical. The concentration of AGI development in a small number of private companies, companies that answer to investors with financial return expectations, that operate under specific national legal jurisdictions, and that necessarily make value judgments on behalf of all of humanity, is a risk that does not receive proportionate attention in mainstream AI discourse.

To understand the stakes, consider the asymmetry. A small number of private organisations are making decisions about how the most capable AI systems in history are aligned, what they are permitted to do, and how they are governed. Those decisions will affect every person on the planet who interacts with AI systems, directly or indirectly. The people making the decisions are not elected, are not accountable through democratic mechanisms, and are not representative of the diversity of humanity whose lives those decisions will shape.

A decentralised AGI system does not eliminate the risks of advanced AI development. No architecture eliminates them. But it distributes the governance of those risks across a substantially broader set of stakeholders, makes the training and alignment process transparent and verifiable by design, and ensures that no single actor can unilaterally determine how the most consequential technology in history behaves. These are not marginal improvements to the status quo. They are structural changes that address the core accountability problem that centralised AGI development cannot solve from within.

Qubic's governance model ensures that decisions about Aigarth's development and application are subject to Computor consensus rather than the preferences of any single actor. No CEO decision, no government directive, and no investor pressure can override a 451-node supermajority. That is a meaningful governance guarantee in a domain where the absence of such guarantees is the default.

Conclusion: A Different Path Toward a Different Kind of Intelligence

Centralised AI laboratories are building impressive, genuinely capable, and increasingly powerful AI systems. Their contributions to the field are real, and dismissing them would be intellectually dishonest. But their architecture, built on proprietary training processes, opaque alignment decisions, centralised governance, and investor accountability, creates structural limits on what they can build and how it can be governed over the long term. An AGI built inside a corporation, however well-intentioned that corporation may be, will be shaped by that corporation's values, incentive structure, and constraints. There is no version of centralised AGI development where this is not true.

Qubic's Aigarth represents a genuinely different path. Not merely in its technical approach, though the evolutionary AIT model, ternary logic architecture, and emergence-based development philosophy are technically distinct from anything a major AI laboratory is pursuing. The deeper difference is architectural: intelligence built by a global community of contributors, governed by cryptographic consensus, transparent in its developmental process by structural design, and incapable of being captured or redirected by any single concentrated interest.

ANNA's early, incoherent public interactions are not evidence against this vision. They are evidence that the process is genuine. A system that starts from near-zero capability and develops through verifiable, publicly auditable experience is doing something categorically different from a system that is trained in secret and deployed when it is already capable. The former is harder to market in the short term. It may be the more important thing to build.

Whether decentralised AGI reaches general intelligence before or after centralised AGI is an open empirical question. But the case for ensuring that a credible decentralised path exists and remains viable is one of the most important infrastructure arguments in technology today. The alternatives, AGI governed by a handful of corporations or by state actors, have consequences too significant to be left as the only options on the table.


