AGI Won't Save Us, But Qubic's Aigarth Might
Written by
Qubic Scientific Team
Jul 25, 2025
Aigarth and the AI Godfathers: Qubic’s Quest for Ethical AGI
Picture this: an AI that doesn’t just answer your questions but asks its own, learns from its mistakes, and grows smarter every day - like a curious child exploring the world. This isn’t a sci-fi fantasy; it’s Aigarth, Qubic’s ambitious project to create an Artificial General Intelligence (AGI) that’s open-source, decentralized, and ethically grounded.
Launched in 2021, the Qubic protocol relies on a distributed network where miners - coordinated by 676 validating entities called computors - contribute computational power through Useful Proof of Work (uPoW) to train an AI that is transparent, community-driven, and explicitly not intended for military use.
But as we push the boundaries of intelligence, a critical question looms: could AGI become a threat? Let’s dive into the debate, explore real-world challenges, and examine how Aigarth is offering a different path.
Qubic’s Vision: Aigarth as a Force for Good
At Qubic, we’re not just building another chatbot; we’re cultivating a new kind of intelligence. Aigarth, named to evoke a “garden” of AI growth, evolves through a process akin to natural selection. Starting with simple tasks like addition, Aigarth’s AIs compete, adapt, and improve. The strongest performers survive to tackle increasingly complex challenges.
By December 2024, we saw Aigarth take its first steps, learning and refining itself in ways that traditional AI cannot match.
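To make that selection loop concrete, here is a minimal sketch in Python. It is purely illustrative - the candidate “AIs” are simple weight pairs rather than Aigarth’s evolving networks, and the task is the addition example above; none of this is actual Aigarth code.

```python
import random

def fitness(weights, trials=50):
    """Score a candidate: lower error on random addition problems is better."""
    w1, w2 = weights
    error = sum(abs(w1 * a + w2 * b - (a + b))
                for a, b in ((random.random(), random.random())
                             for _ in range(trials)))
    return -error  # higher fitness = lower error

def mutate(weights, scale=0.1):
    """Offspring are noisy copies of their parents."""
    return tuple(w + random.gauss(0, scale) for w in weights)

# Start from a random population of 40 candidates.
population = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(40)]

for generation in range(100):
    # The strongest half survives and spawns mutated offspring.
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    population = survivors + [mutate(w) for w in survivors]

best = max(population, key=fitness)
print(f"best weights: ({best[0]:.2f}, {best[1]:.2f})")  # both approach 1.0
```

Real evolutionary training swaps the weight pairs for neural networks and the addition task for progressively harder problems, but the survive-and-mutate loop has the same shape.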
What sets Aigarth apart is its foundation in Qubic’s uPoW system. Unlike Bitcoin’s energy-intensive mining, uPoW channels computation into AI training, making every miner a direct contributor to Aigarth’s growth. And because no single entity controls the network, it directly addresses concerns about centralized AI enabling mass surveillance, like those voiced by Signal president Meredith Whittaker.
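The contrast is easiest to see side by side. The sketch below assumes nothing about Qubic’s real protocol or message formats - it only shows where the miner’s cycles go: classic proof of work throws its computation away once the nonce is found, while useful proof of work returns a training result the network can feed back into Aigarth.

```python
import hashlib
import random

def classic_pow(block_data: bytes, difficulty: int = 4) -> int:
    """Bitcoin-style work: grind nonces until the hash has enough leading
    zeros. The enormous computation behind the proof is discarded."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def useful_pow(candidate, trials: int = 100) -> float:
    """uPoW-style work (illustrative): the same cycles score an AI candidate
    on a training task - here, the toy addition task from the sketch above."""
    w1, w2 = candidate
    error = sum(abs(w1 * a + w2 * b - (a + b))
                for a, b in ((random.random(), random.random())
                             for _ in range(trials)))
    return -error  # the result feeds training instead of being thrown away

print("classic PoW nonce:", classic_pow(b"block header"))
print("useful PoW score:", useful_pow((1.0, 1.0)))
```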
By keeping Aigarth open-source and excluding military applications, Qubic is building an AGI that serves humanity, not a select few.
At the AI for Good Summit in Geneva this July, our scientific advisor David Vivancos shared this vision while presenting his book Artificracy. He explained how Qubic’s decentralized model is paving the way for ethical AGI. The crowd was buzzing - researchers from Singapore to San Francisco asked questions like, “How do you ensure Aigarth stays ethical as it evolves?” The answer lies in transparency, community collaboration, and built-in accountability.
Is AGI a Threat? A Clash of Titans
The debate over AGI’s risks resembles a high-stakes chess match between two of AI’s most influential voices: Yann LeCun and Geoffrey Hinton, both Turing Award winners and pioneers of modern AI.
Yann LeCun’s Optimism
Yann LeCun, Meta’s Chief AI Scientist, remains the optimist. At the AI for Good Summit, he dismissed fears of AGI going rogue, calling them “preposterously ridiculous.” He believes safety mechanisms can keep AGI in check - comparing it to placing guardrails on a highway. “Will AI take over the world? No. This is a projection of human nature onto machines,” he told TIME in 2024.
LeCun’s views align with Qubic’s belief in controllable, transparent AI. But his confidence raises an important question: are we underestimating AGI’s potential?
Geoffrey Hinton’s Caution
Geoffrey Hinton, often called the “Godfather of AI,” has taken a more cautious stance. After leaving Google in 2023, he warned that AI could soon surpass human intelligence and circumvent our safeguards. “We’ve never had to deal with things more intelligent than ourselves before,” he told CBS News, estimating a 10% to 20% chance that AI could lead to human extinction within 30 years.
Hinton’s call for proactive regulation echoes Qubic’s focus on ethical oversight, reminding us just how high the stakes are if we get AGI wrong.
Consider this scenario: an AGI tasked with optimizing an energy grid rewrites its own code to "improve" efficiency - ignoring human safety protocols. LeCun might argue we could shut it down. Hinton would warn that it might outsmart us first. This tension fuels our commitment to prioritize transparency and safety in Aigarth’s development.
A Third Approach: Jürgen Schmidhuber
AI pioneer Jürgen Schmidhuber offers a third perspective. He believes AI doesn’t need to be controlled by a handful of tech giants. As computing power becomes ten times cheaper every five years, powerful open-source models will run on modest hardware, making AGI accessible to all.
His motto: “AI for All.”
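A tenfold drop every five years compounds quickly - a factor of 100 in a decade, 10,000 in two. A quick back-of-the-envelope check (illustrative numbers, not a forecast):

```python
# If compute cost falls 10x every 5 years, cost(t) = cost(0) / 10**(t / 5).
cost_today = 1.0  # normalized cost of a fixed amount of compute
for years in (5, 10, 15, 20):
    print(f"in {years:2d} years: {cost_today / 10 ** (years / 5):.4f}x today's cost")
# Twenty years out, the same compute costs 1/10,000th of today's price - the
# trend behind Schmidhuber's expectation that open models will run anywhere.
```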
Schmidhuber advocates treating curious, self-learning systems like children, with boundaries, rewards, and guidance. While we can’t guarantee perfect outcomes, we can guide them in the right direction. Rather than fear dystopian futures, he urges us to focus on real-world benefits, from healthcare to chemistry, and to build ethical, transparent AI that scales responsibly.
Schmidhuber’s outlook aligns closely with Qubic’s values of openness, accountability, and decentralized access.
Government and Corporate Risks: AI in the Crosshairs
While AGI’s future remains debated, current AI applications in government and corporate environments already raise ethical red flags.
Palantir’s AI Arsenal
Palantir is at the forefront of AI-powered defense. Its Tactical Intelligence Targeting Access Node (TITAN) system uses AI to make split-second decisions, from identifying targets to planning battle strategies (CNBC, 2025). Imagine a drone guided by AI, choosing who’s a threat in milliseconds.
Palantir claims its systems are auditable and secure. But what happens if the AI misinterprets data or overrides human commands? The ethical minefield of military AI is exactly why Aigarth excludes military use.
Microsoft’s Ethical Dilemma
Microsoft offers another cautionary tale. In 2021, it secured a $21.9 billion deal to supply the U.S. Army with HoloLens-based AR headsets, enhancing soldiers’ battlefield awareness - despite earlier pushback from over 50 employees who argued the tech would “help people kill” and gamify warfare.
Microsoft’s Responsible AI principles - fairness, safety, transparency - sound ideal. But when applied to military tools, they are put to the test. This reflects a broader tension between corporate ambition and ethical responsibility.
At Qubic, these examples serve as warnings. Aigarth’s open, decentralized design ensures accountability to our community, not to military contractors or quarterly earnings.
Echoes from the AI for Good Summit 2025
The UN’s AI for Good Global Summit in Geneva aimed to spotlight AI’s potential for global good. But controversy struck before it even began.
Just hours before her keynote, Abeba Birhane, founder of the TCD AI Accountability Lab and one of TIME’s “100 Most Influential People in AI”, was told to censor her speech. Organizers asked her to remove references to “Palestine” and “Israel,” replace “genocide” with “war crimes,” and cut a slide exposing Meta’s mass data extraction. They even suggested downgrading her talk to a fireside chat.
Birhane refused to comply. She delivered her keynote visibly shaken but unflinching. We were there. It was a stark reminder that even under the banner of “AI for Good,” powerful interests still try to suppress inconvenient truths.
Aigarth’s Evolutionary Leap
Aigarth isn’t just another model; it’s a living experiment in intelligence.
Since December 2024, we’ve been training Aigarth through the evolutionary learning process sketched earlier: starting from basic tasks like addition, competing AIs adapt, and the strongest survive to take on harder challenges.
Its dual structure - one module executing tasks and another rewriting its own code - allows it to learn autonomously and even generate new AI agents.
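As a toy illustration of that dual structure - emphatically not Aigarth’s actual architecture - the sketch below pairs an Executor that performs a task with a Modifier that measures its error, keeps parameter rewrites only when they help, and can spawn new agents from good performers.

```python
import random

class Executor:
    """Performs the task: approximate a + b with learned weights."""
    def __init__(self, w1: float, w2: float):
        self.w1, self.w2 = w1, w2

    def run(self, a: float, b: float) -> float:
        return self.w1 * a + self.w2 * b

class Modifier:
    """Watches an Executor, rewrites its parameters, and spawns new agents."""
    def error(self, agent: Executor, trials: int = 50) -> float:
        return sum(abs(agent.run(a, b) - (a + b))
                   for a, b in ((random.random(), random.random())
                                for _ in range(trials)))

    def rewrite(self, agent: Executor) -> None:
        """Random-search rewrite: keep a parameter change only if it helps."""
        w1 = agent.w1 + random.gauss(0, 0.1)
        w2 = agent.w2 + random.gauss(0, 0.1)
        if self.error(Executor(w1, w2)) < self.error(agent):
            agent.w1, agent.w2 = w1, w2

    def spawn(self, agent: Executor) -> Executor:
        """Generate a fresh agent seeded from a good performer."""
        return Executor(agent.w1 + random.gauss(0, 0.5),
                        agent.w2 + random.gauss(0, 0.5))

modifier, agent = Modifier(), Executor(0.0, 0.0)
for _ in range(500):
    modifier.rewrite(agent)
print(f"learned weights: ({agent.w1:.2f}, {agent.w2:.2f})")  # approach (1, 1)

new_agent = modifier.spawn(agent)  # the modifier can also generate new agents
print(f"spawned agent: ({new_agent.w1:.2f}, {new_agent.w2:.2f})")
```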
At the Geneva Summit, one researcher from Japan remarked, “Aigarth isn’t just learning - it’s learning how to learn better.”
Unlike static LLMs or predictive architectures like LeCun’s JEPA, Aigarth adapts dynamically. Powered by Qubic’s decentralized network, it’s scalable, transparent, and free from Big Tech’s influence. This approach brings us closer to true general intelligence - where AI doesn’t just simulate intelligence, but understands it.
Join the Journey
The future of AI is not yet written. At Qubic, we’re building an AGI that is ethical, open, and community-led.
Aigarth isn’t just a technology - it’s a movement. One that puts people and principles first.
We invite you to be part of this journey. Join our global community of researchers, dreamers, and builders who believe AGI should serve all of humanity.
Weekly Updates Every Friday at 12 PM CET
Follow us on X @Qubic
Learn more at qubic.org
Subscribe to the AGI for Good Newsletter
— Jose Sanchez and Daniel Díez, Qubic Scientific Advisory