AGI for Good 3:
Beyond the Illusion of Thinking – Qubic’s Path to True Intelligence

Written by

Qubic Scientific Team

Jul 31, 2025

Imagine an AI that doesn’t just spit out answers but ponders, learns, and grows like a curious child exploring a new world. That’s the dream of Artificial General Intelligence (AGI), a machine that thinks, reasons, and adapts like a human. Today’s AI, from chatbots to specialized tools, is stuck in a rut, mimicking intelligence without truly understanding. But Qubic’s Aigarth is different.

The rapid evolution of artificial intelligence has ushered us into a transformative era, where the boundaries of machine cognition are continually tested.

Let’s dive into the limits of current AI models, explore Aigarth’s revolutionary approach, and see how it’s paving the way for an ethical AI future.

Beyond Parrots and Patterns: The Limits of Current AI

Imagine a parrot that’s read every book in the library, capable of reciting poetry, explaining physics, or writing code with uncanny accuracy. That’s the essence of Large Language Models (LLMs) like GPT-4. They generate text by weaving patterns from massive datasets, but they don’t truly understand the words they produce. As Yann LeCun, Meta’s Chief AI Scientist, noted at the AI for Good Summit, LLMs lack internal world models, leaving them unable to grasp the physical or social context behind their outputs (Qubic Blog, 2025). Apple’s 2025 paper, "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity", further exposes these flaws, revealing that even advanced Large Reasoning Models (LRMs), designed to enhance reasoning through techniques like Chain-of-Thought with self-reflection, fail to generalize beyond certain complexity thresholds (Apple Research, 2025).

Apple’s findings are striking: LRMs overthink simple problems, wasting computational resources; delay correct solutions in medium-complexity tasks; and collapse entirely in high-complexity scenarios, such as the Tower of Hanoi (a mathematical puzzle involving three rods and several disks of different sizes, where the goal is to move all disks to another rod following specific rules: only one disk at a time, and never a larger disk on a smaller one) or River Crossing (a logic puzzle where you must transport people or items across a river using a boat, following specific rules and constraints). For example, while the models can produce over 100 correct moves for the Tower of Hanoi, they fail within the first five moves of the River Crossing puzzle. Even when provided with explicit algorithms, LRMs falter in exact computation, suggesting an inherent scaling limit in their reasoning capabilities.
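To see why the Tower of Hanoi is such a punishing benchmark, consider a minimal Python sketch of the classic recursive solution (our own illustration, not code from the Apple study). The number of required moves grows as 2^n − 1, so producing a long, exactly correct move sequence is a genuine test of step-by-step reasoning rather than pattern recall:

```python
def hanoi(n, source, target, spare, moves=None):
    """Recursively solve Tower of Hanoi, recording each (disk, from, to) move."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller disks away
    moves.append((n, source, target))            # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top
    return moves

for disks in (3, 7, 10):
    print(disks, "disks ->", len(hanoi(disks, "A", "C", "B")), "moves")
# Move counts follow 2**n - 1 (7, 127, 1023): exponential growth in the
# length of the required exact solution, even though the rule set is tiny.
```

A model that "understands" the recursive rule scales to any disk count; a pattern-matcher that has merely memorized short solutions collapses once the sequence length exceeds what it has seen.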

Apple’s research has sparked heated debate. Some critics argue that the study’s artificial puzzle environments and restrictive parameters may exaggerate the models’ limitations. Others, like Maria Sukhareva, a computational linguist at Siemens, support Apple’s call for deeper architectural shifts, emphasizing that true reasoning requires more than token generation (Sukhareva, 2025). Either way, this inefficiency underscores a critical gap between current AI and human-like intelligence, highlighting these models’ reliance on pattern-matching rather than genuine reasoning.

Small Language Models (SLMs), like Microsoft’s Phi-3, are akin to specialized tools in a toolbox. With fewer parameters, they excel in tasks like medical diagnostics or financial analysis but lack the breadth for general reasoning, sharing LLMs’ core flaw of being pattern-matchers, not thinkers. Meanwhile, Yann LeCun’s Joint-Embedding Predictive Architecture (JEPA) aims to move beyond pattern-matching by predicting abstract world representations, mimicking human mental models. Yet, JEPA remains a prototype, limited by its centralized design and reliance on text and image inputs (Meta AI Blog, 2024). Like a toddler learning to walk, it’s promising but not yet ready to run.

These models invert the human learning process. Babies learn through sensory experiences (seeing, touching, feeling), building intelligence before language (Decety, 2003). Language then enhances pre-existing cognition, shaped by social interaction (Tomasello, 2003). Yet LLMs, LRMs, SLMs, and JEPA start with language, generating text without real-world grounding, trapped in a linguistic bubble where they correlate words they don’t truly comprehend (Bender & Koller, 2020). This controversy highlights the urgent need for innovative AI paradigms that transcend the current limitations of pattern-based systems.


Aigarth: The Dawn of Emergent Intelligence

Now, meet Aigarth, Qubic’s bold leap toward true AGI. Unlike its predecessors, Aigarth doesn’t mimic; it evolves. Built on the Qubic network’s decentralized platform, Aigarth uses a modular structure where “digital neurons” (functional agents) work together like a living system. Each agent has a specific role, collaborating or competing to solve problems, much like cells in a body adapt to new challenges. This allows Aigarth to reorganize itself, learn in real-time, and tackle new tasks without retraining (Qubic Blog, 2025).
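Aigarth’s internals are not fully specified in public sources, so the following Python toy is purely illustrative: every name and parameter here (`evolve`, `genome_len`, the mutation scale) is our own assumption, not Qubic’s code. It only sketches the general principle the paragraph describes: a population of simple agents competes on a task, the fittest survive, and their mutated variants form the next generation.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, rng=None):
    """Toy evolutionary loop: score agents, keep the fittest, mutate survivors."""
    rng = rng or random.Random(42)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)             # competition: rank agents
        survivors = pop[: pop_size // 2]                # selection: top half lives
        children = [[g + rng.gauss(0, 0.1) for g in p]  # mutation: small variations
                    for p in survivors]
        pop = survivors + children                      # next generation
    return max(pop, key=fitness)

# Example task: evolve a genome whose values approach a target vector.
target = [0.5] * 8
best = evolve(lambda g: -sum((a - b) ** 2 for a, b in zip(g, target)))
```

No agent is ever told the answer; better solutions simply out-survive worse ones, which is the sense in which such systems "emerge" rather than being trained on labeled data.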

Sergey Ivancheglo, Qubic’s founder and a pioneer behind IOTA and NXT, envisions Aigarth as more than a programmed tool.

“Aigarth’s main goal is to discover a new paradigm allowing to create true AI. I have just seen the wireframe of this paradigm, and I have realized that Artificial Intelligence will not be created; it will emerge,” he shared on X (Ivancheglo, 2025).

This vision drives Aigarth’s design, which leverages Qubic’s Useful Proof of Work (uPoW). Unlike Bitcoin’s energy-heavy mining, uPoW channels global computational power into training Aigarth, making every miner a contributor to its growth (Ivancheglo, 2024).

Ivancheglo recently praised the Qubic mining algorithm powering Aigarth, calling it “terrific” for its simplicity and power in constructing artificial neural networks (ANNs). “I’ve never seen such a powerful yet very simple construction of an ANN!” he tweeted, highlighting Aigarth’s potential to redefine AI efficiency (Ivancheglo, 2024). This decentralized, community-driven approach ensures transparency and aligns with Qubic’s anti-military, open-source ethos, setting Aigarth apart from centralized models like JEPA.
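Qubic’s actual uPoW protocol is more involved than this, but the core idea can be pictured with a heavily simplified, hypothetical sketch (function names and the scoring rule below are our own illustration, not Qubic’s algorithm): mining in which a nonce only counts as a valid share if the candidate network it deterministically generates scores well on a training task, so hash-searching effort doubles as a search over network weights.

```python
import hashlib
import random

def candidate_from_nonce(seed_data, nonce, n_weights=16):
    """Derive a deterministic candidate weight vector from a mining nonce."""
    digest = hashlib.sha256(seed_data + nonce.to_bytes(8, "big")).digest()
    rng = random.Random(digest)          # same nonce -> same weights, verifiable
    return [rng.uniform(-1, 1) for _ in range(n_weights)]

def mine_useful(seed_data, score, threshold, max_nonce=10_000):
    """Search nonces, but the 'work' is scoring candidate networks:
    a nonce is a valid share only if its candidate beats the threshold."""
    for nonce in range(max_nonce):
        weights = candidate_from_nonce(seed_data, nonce)
        if score(weights) >= threshold:
            return nonce, weights        # submit: proof of *useful* work
    return None, None

# Toy training objective: find weights whose mean is near zero.
nonce, w = mine_useful(b"task-v1", lambda ws: -abs(sum(ws) / len(ws)), -0.05)
```

Because the mapping from nonce to weights is deterministic, any node can cheaply verify a submitted share by regenerating and rescoring the candidate, which is what lets wasted hash power be redirected into a verifiable search for better networks.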


Comparing the Contenders

Here’s how Aigarth stacks up against LLMs, SLMs, and JEPA:

| Aspect | LLMs | SLMs | JEPA | Aigarth |
| --- | --- | --- | --- | --- |
| Parameter Count | Billions to trillions | Millions to a few billion | Variable, often smaller | Dynamic, evolves with tasks |
| Training Approach | Pattern-based on vast datasets | Fine-tuned for specific tasks | Predicts abstract world representations | Evolutionary, self-improving via decentralized uPoW |
| Strengths | Versatile, handles complex tasks like translation | Efficient, excels in specialized tasks | Aims for contextual understanding | Adaptive, learns like a living system, ethically focused |
| Weaknesses | Energy-intensive, prone to hallucinations and biases | Limited scope, less nuanced reasoning | Experimental, unproven at scale | Early-stage, requires community support to reach full potential |
| Applications | Customer support, content generation | Healthcare, finance, edge devices | Potential in vision, reasoning tasks | General intelligence, transparent and community-driven projects |

LLMs falter with hallucinations (15% of GPT-3’s responses are nonsensical) and biases (19% of its political outputs are skewed) (ProjectPro, 2024). SLMs are efficient but lack breadth. JEPA’s contextual ambitions are exciting but unproven. Aigarth, though early in development, shines with its ability to evolve, adapt, and stay true to ethical principles, powered by Qubic’s global network.


A Vision for the Future

While LLMs, SLMs, and JEPA are stepping stones, they’re tethered to static data and centralized systems. Aigarth, inspired by Ivancheglo’s vision, is a living experiment in intelligence, growing through community contributions and decentralized computing. By 2027, Qubic aims to see Aigarth emerge as a true AGI, capable of reasoning, adapting, and understanding in ways that mirror human cognition.

This is the heart of AGI for Good: an AI that evolves alongside humanity, guided by transparency and ethics. As Ivancheglo puts it:

“Artificial Intelligence will not be created; it will emerge” (Ivancheglo, 2025).

With Aigarth, that emergence is already taking shape.

Join the Journey

The future of AI is not yet written. At Qubic, we’re building an AGI that is ethical, open, and community-led.

Aigarth isn’t just a technology, it’s a movement. One that puts people and principles first.

We invite you to be part of this journey. Join our global community of researchers, dreamers, and builders who believe AGI should serve all of humanity.


Weekly Updates Every Thursday at 12 PM CET

Follow us on X @Qubic
Learn more at qubic.org
Subscribe to the AGI for Good Newsletter below.


- Jose Sanchez and Daniel Díez, Qubic Scientific Advisory


Citations:

●      Qubic Blog on AI for Good Summit: https://qubic.org/blog-detail/qubic-at-the-largest-ethical-ai-conference-ai-for-good-geneva

●      Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. Apple Machine Learning Research. https://machinelearning.apple.com/research/illusion-of-thinking

●      Decety, J. (2003). The neurophysiological basis of emotion regulation. Oxford University Press.

●      Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Harvard University Press.

●      Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. ACL.

●      Lake, B. M., et al. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences.

●      Meta AI Blog on JEPA: https://about.fb.com/news/2025/06/our-new-model-helps-ai-think-before-it-acts/


Sign up for Qubic Scientific Team Newsletter Here:

© 2025 Qubic.

Qubic is a decentralized, open-source network for experimental technology. Nothing on this site should be construed as investment, legal, or financial advice. Qubic does not offer securities, and participation in the network may involve risks. Users are responsible for complying with local regulations. Please consult legal and financial professionals before engaging with the platform.
