Decentralized AI: Ending the Monopoly on Superintelligence
Written by Daniel Díez (@Danicellero), Qubic Scientific Team
Aug 26, 2025
Ten years ago, I sat on this very bench where I'm writing now and opened the first page of a gift from an old friend—a book that, unbeknownst to me, would reshape my view of technology's double-edged sword: Nick Bostrom's Superintelligence: Paths, Dangers, Strategies (2014). It opened my eyes to AI's boundless potential while sounding alarms about risks that felt distant then but loom urgently today: uncontrolled intelligence explosions, misaligned goals, and the peril of power concentrated in too few hands.

As someone who's spent over a decade at the crossroads of AI research and emerging technologies, I've seen firsthand how the promise of Artificial General Intelligence (AGI)—a "True AI" that thinks and learns like humans, or even better—can excite and terrify in equal measure. If you're deep into AI but wary of Web3, you're not alone. Many in the AI community view decentralized tech with skepticism, associating it with volatile cryptocurrencies rather than rigorous science. But what if decentralization isn't just a fad? What if it's the missing piece to making AI ethical, resilient, and truly innovative?
In this piece, I'll bridge that gap. Drawing from neuroscientific principles, game theory, and real-world studies, I'll explain why centralizing AI in the hands of a few giants is risky—and how Qubic, a protocol inspired by biological intelligence, uses decentralization and clever economics to chart a better path. No hype, just evidence-based insights.
The Risks of Centralized AI: A Scientific View
Let's start with what you know: AGI demands immense computational power to mimic human-like cognition, from reasoning across domains to handling uncertainty. Today, this power is bottled up in centralized data centers run by companies like OpenAI or Google. These setups rely on massive GPU farms, churning through data in parallel. But scientifically, this model has flaws.
First, biases creep in. Centralized training often uses datasets that reflect the creators' worldviews, amplifying societal inequalities. A 2023 study in Nature Human Behaviour explored how AI-enabled systems can perpetuate discrimination, showing that without diverse inputs, models entrench biases rather than overcome them—leading to real-world harms like 20-30% higher misdiagnosis rates in underrepresented groups, as per MIT research.
Then there's the risk factor. AI pioneer Geoffrey Hinton, often called the "Godfather of AI," has repeatedly warned about the dangers of unchecked development. In 2023, he left Google citing concerns over AI's potential misuse, emphasizing how centralized control could lead to scenarios where AI outpaces human oversight. From a game theory lens, this creates a "principal-agent problem": Developers (agents) might prioritize profits or speed over safety, misaligning with society's (principal's) interests. A single vulnerability—like a hack or policy shift—could derail global progress.
Centralization also limits innovation. Many AI tasks, like sequential reasoning (e.g., planning a multi-step strategy), aren't suited to GPU parallelism; they're handled more naturally in serial on CPUs. It also excludes voices: only well-funded entities can participate, creating an echo chamber.

Decentralization: Drawing from Biology and Complexity Science
Decentralization isn't about ditching servers; it's about distributing them. Inspired by the brain's distributed processing, where neurons fire in networks without a central controller, decentralization lets AI evolve through collective effort.
Qubic's Aigarth framework embodies this. As an open-source protocol, it invites AI researchers to inspect and build on its code, while Web3 experts appreciate its feeless, scalable design.
It builds "Intelligent Tissue": self-modifying neural networks that evolve via natural-selection principles. Using ternary computing (TRUE, FALSE, UNKNOWN), it handles real-world uncertainty better than binary systems. Nodes worldwide contribute CPU/GPU power, training these tissues in a Darwinian process: networks compete, and the fittest survive based on a scoring algorithm.
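To make the ternary idea concrete, here is a minimal sketch of three-valued logic in Python. The article doesn't specify Aigarth's actual truth tables, so the `T` enum and the Kleene-style operators below are illustrative assumptions, not Qubic code.

```python
from enum import Enum

class T(Enum):
    """Ternary truth values, ordered FALSE < UNKNOWN < TRUE."""
    FALSE = 0
    UNKNOWN = 1
    TRUE = 2

def t_and(a: T, b: T) -> T:
    # Kleene AND: the result is the weaker (minimum) truth value
    return T(min(a.value, b.value))

def t_or(a: T, b: T) -> T:
    # Kleene OR: the result is the stronger (maximum) truth value
    return T(max(a.value, b.value))

def t_not(a: T) -> T:
    # Negation flips TRUE/FALSE and leaves UNKNOWN unchanged
    return T(2 - a.value)

print(t_and(T.TRUE, T.UNKNOWN))  # T.UNKNOWN: uncertainty propagates
print(t_or(T.TRUE, T.UNKNOWN))   # T.TRUE: one certain input suffices
```

The point of the third value is visible in the output: a binary system would be forced to collapse UNKNOWN to 0 or 1, while the ternary operators let uncertainty propagate only where it actually matters.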
This mirrors complex adaptive systems, where emergence arises from simple interactions. Research from the Santa Fe Institute shows how decentralized ecosystems, like ant colonies or economies, generate intelligence far beyond individual parts. In AI terms, this means less bias (from global data diversity) and more resilience (no single failure point).
Qubic secures this with a quorum-based consensus rooted in Byzantine Fault Tolerance (BFT). Pioneered by Leslie Lamport and colleagues in 1982, BFT ensures systems agree even if some parts fail or act maliciously. In Qubic, 676 "Computors" vote; at least 451 must agree for validity. Mathematically, this tolerates up to F faulty nodes, where

F = N − Q, with N = 3F + 1 and Q = 2F + 1

Here, N = 676 (total Computors) and Q = 451 (quorum), giving F = 225—tolerance of about one-third faulty nodes. This design ensures robustness without energy waste.
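The quorum arithmetic can be checked in a few lines. This is a plain calculation from the numbers in the article (N = 676 Computors, Q = 451 quorum), not Qubic's actual consensus code.

```python
N = 676  # total Computors (from the article)
Q = 451  # votes required for validity (quorum)

# Faulty nodes the network can absorb while still assembling Q honest votes
F = N - Q
print(F)  # 225

# Classic BFT sizing: N = 3F + 1 total nodes, Q = 2F + 1 quorum
assert N == 3 * F + 1
assert Q == 2 * F + 1

# Tolerance sits just under one-third of the network
print(round(F / N, 3))  # 0.333
```

The two assertions are the safety conditions: any two quorums of size 2F + 1 in a network of 3F + 1 nodes must overlap in at least F + 1 nodes, so at least one honest node is common to both, preventing conflicting decisions.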
Qubic's founder, Sergey Ivancheglo (@c___f___b on X), a decentralization trailblazer behind IOTA's DAG and NXT's Proof-of-Stake, champions this. He argues true decentralization prevents power grabs, echoing Hinton's warnings but solving them through tech & behavioural economics.
Tokenomics: The Economic Engine for Sustainable AI
Skeptical of "crypto tokens"? Fair enough: many Web3 projects feel like speculative schemes that don't deliver real-world value. But in Qubic, tokenomics is an engineered incentive system, not speculation fuel. Think of it as gamifying research: reward useful work to sustain the network.
Qubic's Useful Proof of Work (UPoW) redirects mining to AI training, not pointless puzzles. Coins (QUBIC) have a fixed supply (1 quadrillion initial), with emissions halving every four years—creating scarcity. Burns on operations reduce supply further, promoting deflation.
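As a back-of-the-envelope sketch of that schedule: assuming, per the article, a weekly emission of up to 1 trillion QUBIC that halves every four years, the function below projects the weekly emission at a given point in time. Exact epoch boundaries and burn mechanics are simplifications, not protocol parameters.

```python
def weekly_emission(initial_weekly: int, years: float, halving_years: float = 4.0) -> float:
    """Weekly emission after `years`, under a halving every `halving_years`."""
    halvings = int(years // halving_years)  # completed halving periods
    return initial_weekly / (2 ** halvings)

# Article figure: up to 1 trillion QUBIC emitted per week initially
INITIAL_WEEKLY = 1_000_000_000_000

print(weekly_emission(INITIAL_WEEKLY, 0))  # year 0: full emission
print(weekly_emission(INITIAL_WEEKLY, 5))  # after one halving: 500 billion/week
```

Geometric decay like this is what bounds total supply: summing a halving series converges, which is the mechanism behind the scarcity claim.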
A 2025 framework on Web3 tokenomics shows how such designs boost participation by aligning incentives, sustaining ecosystems long-term. In Qubic, anyone contributes, earning based on impact—democratizing AGI funding.
Reward = Total Emission per Epoch / Number of Top Contributors
This formula distributes up to 1 trillion QUBIC gross per week (post-burn net of ~425 billion as of August 2025), incentivizing quality over quantity. This incentive model, verified in real-world tests like Qubic's Monero demo, shows how economics can drive ethical tech without central control.
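A toy version of that distribution follows. The article doesn't publish Qubic's scoring algorithm, so splitting the epoch's emission in proportion to hypothetical contributor scores is an illustrative assumption.

```python
def epoch_rewards(total_emission: int, scores: dict[str, float]) -> dict[str, int]:
    """Split one epoch's emission among contributors in proportion to their scores."""
    total_score = sum(scores.values())
    return {who: int(total_emission * s / total_score) for who, s in scores.items()}

# Hypothetical contributors: alice's work scored 3x higher than bob's
rewards = epoch_rewards(1_000_000_000_000, {"alice": 3.0, "bob": 1.0})
print(rewards)  # {'alice': 750000000000, 'bob': 250000000000}
```

Proportional payout is what turns the scoring algorithm into an incentive: quality of contribution, not raw node count, determines a participant's share.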

Building Trust in a Skeptical World: Qubic's Proven Scale and Ethical Edge
For AI purists skeptical of Web3, Qubic isn't here to replace your lab; it's a decentralized enhancer, mitigating risks through distributed power, fostering emergent intelligence via bio-inspired evolution, and leveraging smart economics for sustainable growth. But don't just take my word for it: Qubic is already demonstrating world-class scale. In August 2025, it consistently commanded over 51% of Monero's hashrate—the mining power of a privacy-focused cryptocurrency—in a controlled demonstration, showcasing its computational might without harm, simply to prove the network's robustness. This positions Qubic as one of the world's most powerful decentralized supercomputers, with an estimated 130 PFlops of processing power (potentially ranking in the global top 10 for raw computation), all while running on everyday CPUs and GPUs contributed by a global community.
What sets it apart? Qubic is fully open-source, inviting scrutiny and collaboration from experts in both AI and blockchain. And crucially, its licensing explicitly prohibits military applications, ensuring this technology drives "AGI for Good" rather than conflict, aligning with ethical calls from figures like Hinton for responsible AI development. As founder Sergey Ivancheglo envisions, this isn't about elite control; it's about creating AI that benefits all, powered by a network that's already the fastest blockchain verified by CertiK at 15.5 million transactions per second.
Join the Journey
Whether you're an AI veteran exploring Web3 or a blockchain pro diving into True AI, Qubic bridges the gap. The future of AI is not yet written. At Qubic, we’re building an AGI that is ethical, open, and community-led.
Aigarth isn’t just a technology, it’s a movement. One that puts people and principles first.
We invite you to be part of this journey. Join our global community of researchers, dreamers, and builders who believe AGI should serve all of humanity.
Stay informed: subscribe below for weekly updates.
— Daniel Díez, Qubic Scientific Advisory
References
Zheng, Z., et al. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment systems. Nature Human Behaviour. Link
BBC News. (2023). AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google. Link
Mitchell, M. (n.d.). Complex Adaptive Systems. Santa Fe Institute. Link
Aboelnadar, A. (2025). Tokenomics in Web3: A Strategic Framework for Sustainable and Scalable Blockchain Ecosystems. ResearchGate. Link
Lamport, L., et al. (1982). The Byzantine Generals Problem. Link
Wong, K. (2021). Challenges of AI systems: A new kind of principal–agent problem. Institute of Mathematical Statistics. Link
Qubic Antimilitary Licensing. Link
Follow us on X @Qubic
Learn more at qubic.org
Subscribe to the AGI for Good Newsletter below.