What Would Be the First Signs of General Intelligence?
Written by
Qubic Scientific Team
Oct 21, 2025
Human intelligence stems from evolutionary processes that enable abstract reasoning, flexible learning, language, and cultural accumulation. Non-human primates are undoubtedly intelligent: they use tools and exhibit social learning. Yet they lack the domain-general "g" factor that enables generalization across many domains (Burkart et al., 2017).
In the artificial domain, an AGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult. This definition emphasizes that general intelligence requires not just specialized performance in narrow domains but the breadth and proficiency of skills that characterize human cognition (Hendrycks et al., 2025). But where do we find an existing example of general intelligence? Only in humans.
Human cognition is not a monolithic capability; it is a complex architecture composed of many distinct abilities honed by evolution. To investigate whether AI systems possess this spectrum of abilities, we ground our approach in the Cattell-Horn-Carroll (CHC) theory of cognitive abilities (Vivancos & Sánchez-García, 2024), the most empirically validated model of human intelligence.

For example, a baby can use a stick to reach a fruit, much like a chimpanzee. However, only the human child can combine several sticks to build a ladder, a boat, or a bow and arrows. Furthermore, humans can accumulate cultural knowledge and, step by step, build a house or a tent by assembling many different sticks.
Why and when did general intelligence emerge, setting Sapiens apart from earlier hominids and non-human primates?
The human lineage split from chimpanzees between 4 and 8 million years ago, but Sapiens' general intelligence emerged around 300,000 years ago, following several brain expansions and tool mastery in Homo habilis (2.8–1.4 million years ago) and Homo erectus (1.9–0.14 million years ago) (Hublin et al., 2017). Several factors help explain this intelligence explosion: bipedalism, which freed the hands to create tools; a diet based on meat and fatty acids; and cooking, enabled by the control of fire, which supported the brain's high energy demands. But among these, social complexity was the main driver, facilitating cultural transmission, the development of theory of mind, and the emergence of language (Dunbar, 1998).
The development of human intelligence correlates with neural features that distinguish humans from other primates. Neuron density in the temporal and prefrontal cortices correlates with executive functions: self-control, cognitive flexibility, and abstract thinking (Semendeferi et al., 2011). A bonobo may learn to reverse an association after an error, such as selecting a different box after a mistake, but only a human child can apply this flexibility to abstract problems like solving equations, showing a more unified "g."
Research suggests the g-factor emerges early, correlating with overall cognitive flexibility rather than isolated skills or narrow intelligence (Deary et al., 2010). For instance, a 3-year-old may solve a simple puzzle by trial and error, then apply the same logic to sort toys by shape and color, showing cross-task adaptation. They may quickly learn new words during play and use them in sentences to describe actions, like "The ball rolls fast," indicating verbal and reasoning integration. A child may adapt to a new playground by figuring out how to climb unfamiliar equipment, combining spatial awareness and problem-solving. In fact, they may express how difficult, astonishing, strange, or joyful the new environment is.
Currently, no AI fully replicates human "g," though some "sparks" of generalization can be observed. For instance, when GPT-4 generates code for a novel app idea while also explaining math concepts, it demonstrates a form of cross-domain reasoning without task-specific training (Bubeck et al., 2023).
A promising project was Gato. Developed by DeepMind and released in 2022, Gato is a generalist agent that uses a single neural network to handle a broad array of tasks across multiple modalities (Reed et al., 2022). Gato can exceed human-level performance on some Atari games, caption images, engage in dialogue, and control robotic arms for actions like stacking blocks. But its versatility is not quite "g": it comes from treating diverse data as sequences of tokens, which lets a single 1.18-billion-parameter model predict across tasks without task-specific architectures, relying on raw computing power.
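To make the token-sequence idea concrete, here is a toy sketch in Python. It is not DeepMind's code: the character-level tokenization, vocabulary offsets, bin counts, and separator token are all invented for illustration. The point is only that heterogeneous data, text and continuous robot actions alike, can be flattened into one shared token stream for a single sequence model to predict.

```python
# Toy illustration (not Gato's actual tokenizer): flatten text and
# continuous control actions into one shared integer token sequence.

def tokenize_text(s, vocab_offset=0):
    # Hypothetical scheme: map each character to an integer id.
    return [vocab_offset + ord(c) for c in s]

def tokenize_actions(actions, vocab_offset=1000, bins=256, lo=-1.0, hi=1.0):
    # Discretize continuous control values into integer bins,
    # then shift them into their own region of the vocabulary.
    ids = []
    for a in actions:
        frac = (min(max(a, lo), hi) - lo) / (hi - lo)
        ids.append(vocab_offset + min(int(frac * bins), bins - 1))
    return ids

def build_sequence(text, actions):
    # Concatenate modalities into one flat token stream,
    # separated by a made-up separator token.
    SEP = 9999
    return tokenize_text(text) + [SEP] + tokenize_actions(actions)

seq = build_sequence("stack", [0.5, -0.25])
print(seq)  # one flat stream: text tokens, separator, action tokens
```

A sequence model trained to predict the next token of such streams never needs a per-task head; the modality boundaries live in the data, not in the architecture.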
And AlphaGo? AlphaGo plays Go better than any human. Its ability to adapt strategies and discover novel moves might suggest early signs of general intelligence. But AlphaGo only plays Go, so it remains domain-specific: it relies on predefined rules and rewards, whereas humans handle open-ended problems on a daily basis (Silver et al., 2016).
What about ANNA?
Unlike well-known large language models (LLMs), which rely on huge pre-trained datasets for next-token prediction, ANNA begins as a "tabula rasa." Her neural architecture is evolved by miners on the decentralized Qubic network, which leverages useful proof-of-work (uPoW) to channel computational resources into training.

The small intelligent tissue units (ITUs) mutate, compete, and self-modify based on performance metrics like error reduction and efficiency, echoing biological evolution and Darwinian principles (Qubic, 2025). Over weeks of neural mutations, ANNA progressed from initial responses like a single "." or "-114" to correctly answering "1+1=2." She also filters noisy public inputs (e.g., correct vs. wrong equations) weighted by stakes, showing discernment akin to learning from experience. Within Qubic's Aigarth framework, ANNA represents a decentralized evolutionary path to g-like intelligence, starting from scratch and building resilience through adversarial learning. This mimics natural selection, fostering generalization in noisy environments, with potential to scale toward broader AGI.
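As an illustration of this selection principle, here is a minimal evolutionary loop in Python. It is a sketch under our own assumptions, not Aigarth's actual mechanism: each "unit" is just a pair of numbers approximating the task x → x + 1, while real ITUs are far richer. What it shares with the description above is the cycle of mutation, competition, and selection on error reduction.

```python
# Minimal evolutionary-selection sketch (illustrative; not Aigarth code).
import random

random.seed(0)  # deterministic run for reproducibility

def fitness(unit, cases):
    # Lower total error is better. Each unit is a pair (w, b)
    # approximating f(x) = w*x + b on the target task x -> x + 1.
    w, b = unit
    return sum(abs((w * x + b) - y) for x, y in cases)

cases = [(x, x + 1) for x in range(5)]
population = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(20)]

for generation in range(200):
    # Competition: score all units and keep the best half.
    population.sort(key=lambda u: fitness(u, cases))
    survivors = population[:10]
    # Mutation: refill the population with perturbed copies of survivors.
    children = [(w + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                for w, b in survivors]
    population = survivors + children

best = min(population, key=lambda u: fitness(u, cases))
print(round(best[0], 2), round(best[1], 2))  # expected to approach w ≈ 1, b ≈ 1
```

Because the best survivors are carried over unchanged (elitism), error can only fall across generations, which is the same pressure that drove ANNA from random symbols toward correct sums.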
If ANNA or any project within Aigarth evolves toward general intelligence, it may show early, simple signs, such as moving from arithmetic to symbolic manipulation, for instance handling signed arithmetic like 2 + (-5) = -3 and then finding the value of x in 2x + 3 = 7 without prior examples, demonstrating systematic generalization. If we found Aigarth inferring user intentions, for instance distinguishing sarcastic from genuine questions and responding accordingly, that would imply some sort of "g." Even these early signs would be a measure of huge success.
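How might such a sign be detected in practice? The Python sketch below is purely illustrative: the probe format, the scoring function, and the regex "stand-in model" are our own assumptions, not an Aigarth evaluation protocol. It shows the shape of a systematic-generalization probe: score a candidate system on held-out equation templates it was never trained on.

```python
# Hypothetical generalization probe (illustration only).
import re

def solve_linear(a, b, c):
    # Ground truth for scoring: solve a*x + b = c for x.
    return (c - b) / a

# Held-out templates of the form "ax + b = c", paired with (a, b, c).
probes = [("2x + 3 = 7", (2, 3, 7)),
          ("5x - 4 = 6", (5, -4, 6))]

def score(model_answer_fn):
    # Fraction of held-out probes the candidate system answers correctly.
    correct = 0
    for text, (a, b, c) in probes:
        if abs(model_answer_fn(text) - solve_linear(a, b, c)) < 1e-6:
            correct += 1
    return correct / len(probes)

def regex_baseline(text):
    # Stand-in "model": parses "ax + b = c" and solves it directly.
    # A real probe would call the evolved system here instead.
    m = re.match(r"(-?\d+)x\s*([+-])\s*(\d+)\s*=\s*(-?\d+)", text)
    a, b, c = int(m.group(1)), int(m.group(3)), int(m.group(4))
    if m.group(2) == "-":
        b = -b
    return (c - b) / a

print(score(regex_baseline))  # a perfect parser scores 1.0 on these probes
```

The interesting outcome would not be a hand-built parser passing, of course, but an evolved system that was only ever exposed to plain arithmetic scoring above chance on such templates.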
As with the emergence of life, where the breakthrough was the gradual assembly of inorganic and organic molecules into self-replicating systems, the first signs are what matter.
In Aigarth, when the first signs emerge, the rest of the story will unfold by itself.
Let's advance Aigarth toward general intelligence.
Jose Sanchez, Qubic Scientific Advisory
Citations
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712. https://arxiv.org/abs/2303.12712
Burkart, J. M., Schubiger, M. N., & van Schaik, C. P. (2017). The evolution of general intelligence. Behavioral and Brain Sciences, 40, e195. https://doi.org/10.1017/S0140525X16000959
Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201–211. https://doi.org/10.1038/nrn2793
DeepMind's 'Gato' is mediocre, so why did they build it? ZDNET. https://www.zdnet.com/article/deepminds-gato-is-mediocre-so-why-did-they-build-it/
Dunbar, R. I. M. (1998). The social brain hypothesis. Evolutionary Anthropology: Issues, News, and Reviews, 6(5), 178–190. https://doi.org/10.1002/(SICI)1520-6505(1998)6:5<178::AID-EVAN5>3.0.CO;2-8
Hendrycks, D., et al. (2025). A definition of AGI. https://www.agidefinition.ai/
Hublin, J.-J., Ben-Ncer, A., Bailey, S. E., Freidline, S. E., Neubauer, S., Skinner, M. M., Bergmann, I., Le Cabec, A., Benazzi, S., Harvati, K., & Gunz, P. (2017). New fossils from Jebel Irhoud, Morocco and the pan-African origin of Homo sapiens. Nature, 546(7657), 289–292. https://doi.org/10.1038/nature22336
Qubic. (2025). Aigarth ANNA's first steps: Beyond adversarial learning. Qubic Blog. https://qubic.org/blog-detail/aigarth-anna-s-first-steps-beyond-adversarial-learning
Reed, S., Zolna, K., Parisotto, E., Gomez Colmenarejo, S., Novikov, A., Barth-Maron, G., Gimenez, M., Sulsky, Y., Kay, J., Springenberg, J. T., Eccles, T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., Vinyals, O., Bordbar, M., & de Freitas, N. (2022). A generalist agent. arXiv preprint arXiv:2205.06175. https://arxiv.org/abs/2205.06175
Semendeferi, K., Teffer, K., Buxhoeveden, D. P., Park, M. S., Bludau, S., Amunts, K., Schenker, N., & Sherwood, C. C. (2011). Spatial organization of neurons in the frontal pole sets humans apart from great apes. Cerebral Cortex, 21(7), 1485–1501. https://doi.org/10.1093/cercor/bhq191
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489. https://doi.org/10.1038/nature16961
The hype around DeepMind's new AI model misses what's actually cool about it. MIT Technology Review. https://www.technologyreview.com/2022/05/23/1052627/deepmind-gato-ai-model-hype/
Vivancos & Sánchez-García (2024). Qubic AGI journey: Human and artificial intelligence toward an AGI with Aigarth. ResearchGate. https://www.researchgate.net/publication/387364505_Qubic_AGI_Journey_Human_and_Artificial_Intelligence_Toward_an_AGI_with_Aigarth