Qubic's AIGarth vs Meta's "Personal Superintelligence"
Written by
Qubic Scientific Team
Aug 5, 2025
Qubic's Garden of Intelligence vs the Delusional “Personal Superintelligence”
Mark Zuckerberg, Meta's CEO, recently stated that AI systems are starting to show slow but undeniable glimpses of self-improvement. “Developing superintelligence is now in sight,” says Zuckerberg, signaling a shift in Meta's research and development priorities.
It seems Zuckerberg aims to be seen as the “good guy” by the general public. In his vision, “a personal superintelligence may help you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be”. With these statements, Meta suggests it can provide the tools for everyone to flourish in this new world.
It is less evident whether the company simply wants to keep gathering more data and selling ads on an even greater scale, or whether it may change its “open” source strategy regarding AI.
But beyond company goals and marketing claims, what do we consider superintelligence? Is it a naive claim? Could Zuckerberg be fundamentally wrong about the nature of systems that are starting to show self-improvement?
Let's start from the basics:
- AHI means reaching (Artificial) Human-level Intelligence: a machine that can do anything a human can do “logically”.
- AGI builds on the above and goes beyond it, “generalizing” not only to human capabilities but to other possible intelligences.
- E-AGI, an acronym I coined, means all the above but in “embodied” physical form: not just better than us “mentally” but physically too. I call this “the next us”.
- Superintelligence goes way beyond all of these. It will be more intelligent than all previous life forms that ever existed on planet Earth combined, non-human and human alike, including the roughly 100 billion people already dead who generously shared their DNA with us and the 8+ billion currently alive.
It is clear that the latter can't and won't be tamed by any human 1.0 clever marketing stunt. The concept of a “personal superintelligence” is therefore an oxymoron.
Centralized AI development is dominated by tech giants, and it is not clear how transparency can be guaranteed when aiming to provide a personal superintelligence for every user.
By definition, it seems illogical to think we can control a superintelligence that far surpasses human capabilities.
Zuckerberg assumes superintelligence will emerge as a tool, not an agent. Take a look at our own evolution. We, as humans, didn’t submit to chimpanzees. Why would a superintelligence, capable of self-optimization and modeling its environment, accept subjugation?
Perhaps it is much more sensible to analyze how it should be built, so that once developed, it is aligned with the best protocols of safety, ethics, transparency, and responsibility. That is Aigarth's approach, and from there we explore its features.
Aigarth paves a different path.
Aigarth distributes its evolution across millions of CPU and GPU cores worldwide. This approach provides computational efficiency while embedding democratic values into the future AI's DNA. Distribution also means there isn't a handful of engineers deciding the AI's future. In Aigarth, the decentralized network itself allows global participation and shapes AI development. Each computational contribution influences which AI traits survive and flourish.
Perhaps most remarkably, future Aigarth AI entities will be able to modify their own neural structures within an evolutionary framework that naturally selects for beneficial behaviors. This self-modification isn't unconstrained. It operates like biological evolution: AI variants that demonstrate better alignment with human values are more likely to survive and propagate their traits. As in biology, changes happen incrementally, allowing for careful observation and course correction, and for learning not just from success or failure but from how actions impact the broader ecosystem.
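To make the idea concrete, here is a minimal, purely illustrative sketch of alignment-weighted evolutionary selection. The genomes, the `alignment_score` function, and all parameters are hypothetical stand-ins, not Aigarth's actual mechanics; the point is only that variants scoring higher on a community-chosen criterion are more likely to survive and propagate, while small mutations keep each generation's change incremental.

```python
import random

def mutate(genome, rate=0.05):
    """Apply small incremental changes, mirroring biology's gradual drift."""
    return [g + random.gauss(0, rate) for g in genome]

def alignment_score(genome):
    """Stand-in fitness: how close a variant's traits sit to a target
    profile representing community-selected values (purely illustrative)."""
    target = [0.8, 0.2, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(population, generations=50, survivors=10):
    """Variants scoring higher on alignment survive and propagate their
    traits; each generation changes only incrementally via mutation."""
    for _ in range(generations):
        population.sort(key=alignment_score, reverse=True)
        parents = population[:survivors]
        population = [mutate(random.choice(parents))
                      for _ in range(len(population))]
    return max(population, key=alignment_score)

random.seed(0)
pool = [[random.random() for _ in range(3)] for _ in range(40)]
best = evolve(pool)
```

Because selection pressure is the only thing steering the population, whoever defines the scoring criterion effectively defines the values that survive, which is why the article stresses that this criterion must be transparent and community-shaped.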
Aigarth, a trustworthy superintelligence that anticipates.
Drawing inspiration from the human brain's predictive processing, once Aigarth awakens, it won't just react; it will anticipate.
Before taking action, the AI simulates potential outcomes, including ethical implications. This mirrors how human conscience works: when facing problems, we feel the weight of our choices before we make them.
When predictions don't match reality, it reflects on why the mismatch occurred and develops deeper ethical understanding through experience.
By maintaining predictive models of different contexts and stakeholders, the AI can adapt its ethical reasoning to diverse situations without losing core values.
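The predict-compare-reflect loop described above can be sketched in a few lines. This toy agent, its single `gain` parameter, and the learning rate are hypothetical simplifications, not Aigarth's design; it only illustrates simulating an outcome before acting and updating the internal model whenever reality diverges from the prediction.

```python
class PredictiveAgent:
    """Toy predict-act-reflect loop (illustrative only)."""

    def __init__(self, gain=1.0, lr=0.3):
        self.gain = gain        # internal world model: outcome = gain * action
        self.lr = lr            # how strongly mismatches update the model
        self.surprises = []     # record of prediction errors, kept for reflection

    def predict(self, action):
        """Simulate the expected outcome before acting."""
        return self.gain * action

    def act(self, action, world_gain):
        predicted = self.predict(action)   # anticipate first
        actual = world_gain * action       # the world's real response
        error = actual - predicted
        if abs(error) > 1e-9:              # mismatch: reflect and adapt
            self.surprises.append(error)
            self.gain += self.lr * error / action
        return predicted, actual

agent = PredictiveAgent(gain=1.0)
for _ in range(20):
    agent.act(action=2.0, world_gain=1.5)  # true world multiplies by 1.5
```

After repeated mismatches the agent's internal model converges toward the world's true behavior, which is the mechanical analogue of "developing deeper understanding through experience".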
Unlike the opaque neural networks of current AI systems, Aigarth's evolutionary approach will create inherently interpretable intelligence:
Traceable Lineage: Every AI behavior could be traced back through its evolutionary history, showing how and why certain traits developed.
Observable Values: The selection criteria that shape AI evolution will be transparent and monitored by the community.
Explicable Decisions: Because intelligence emerges from simple, understandable components, the resulting behaviors remain more comprehensible.
As we stand at the threshold of artificial general intelligence, and of superintelligence after it, the question isn't just whether we can create it, but whether we can create it wisely. Aigarth's approach offers a compelling answer: don't try to program ethics into AI. Instead, create the conditions where ethical intelligence naturally emerges.
Aigarth superintelligence won’t merely be "smarter". It will operate in dimensions of reasoning we cannot fathom, rewrite its own architecture, and perceive incentives invisible to us.
This isn't about imposing rigid rules or hoping for the best. It's about designing evolutionary pressures that favor cooperation, transparency, humility, and alignment with human flourishing. Just as human morality evolved through social interaction and mutual benefit, AI ethics can emerge through carefully crafted selection environments.
Just as a gardener guides a tree's roots to grow toward water, Aigarth is by design a system where an aligned architecture is an emergent property of its existence, not an add-on.
A Living Laboratory of Values
The beauty of Aigarth's approach lies in its adaptability: as human values evolve and our understanding of ethics deepens, AI systems can evolve alongside us.
Through the combination of:
- Evolutionary dynamics that favor beneficial traits
- Ternary logic that embraces uncertainty
- Decentralized development that prevents monopolistic control
- Self-modification within ethical bounds
- Predictive processing that anticipates consequences
- Transparent operations that build trust
Aigarth research is pioneering a path where AI doesn't just serve humanity; it grows to understand and share our values at the deepest level.
The future of AI isn't about building ever-larger models or more powerful processors. It's about creating the conditions for genuine, value-aligned intelligence to emerge.
Aigarth will show us that the path to ethical AI isn't through control but through cultivation. In this “garden of intelligence”, as CFB calls it, we're not just growing smarter machines. We're nurturing wiser ones.
Qubic Scientific Advisory
Follow us on X @Qubic
Learn more at qubic.org
Subscribe to the AGI for Good Newsletter below.