Consciousness in Humans and Machines

Written by

Qubic Scientific Team

Aug 13, 2025


“What is it like to be a bat?” is the question the philosopher Thomas Nagel posed in 1974 to highlight the subjective aspect of consciousness (1). Consciousness is, above all, having subjective experience, the “what it feels like” to be in a state. If there is nothing (or no one) with an inner sense of being and existing, there is no consciousness, no matter how sophisticated the behavior might be.

When analyzing this subjective experience, it helps to distinguish between level of consciousness (arousal/wakefulness) and content (which specific experience is present). Variations in arousal and in content help us map the brain regions and networks involved. In other words, they allow us to study the neural correlates of consciousness (NCC) (2).


Arousal (wakefulness)

What happens in the brain when something becomes conscious? First, a neural signal is amplified and its activity is sustained in the cortex for hundreds of milliseconds. We also observe coordination and coupling in the beta and gamma frequency bands between sensory areas (which encode a stimulus) and association areas (which integrate it and place it in the here-and-now context). In addition, there is a kind of “ignition”: information is no longer confined to a local module but spreads across multiple networks. This allows us to keep it in working memory (our capacity to manipulate information in the present moment) and use it to decide. Finally, when consciousness disappears, as in deep anesthesia or during non-REM (NREM) slow-wave sleep, these patterns collapse: there is no sustained activity, coupling breaks down, and there is no global broadcasting of content (3).

Returning to Nagel, these signals are not consciousness itself; they are its neurophysiological basis. This is the well-known hard problem of consciousness, set out by philosopher David Chalmers three decades ago and still unresolved (4). The hard problem concerns the leap from the activation of neural networks to subjective experience: how do the neurons for “blue” produce the felt quality of that color? However many philosophers and scientists study it, the jump from the objective, cerebral, measurable domain to the subjective, personal, inner experience of consciousness remains a mystery. Through that gap slip dualist views, which place body, matter, and brain on one side and mind, “soul,” and consciousness on the other. Dualism does not solve the hard problem; it adds another: if the mind does not need a brain, why does the brain exist, and why do brain lesions or direct interventions instantaneously change consciousness?

Historically, dualism led us to think and accept that animals are not conscious (“they don’t go to heaven”). In 2012, the Cambridge Declaration on Consciousness stated that mammals and birds possess the neural substrates of consciousness (5). More recently, in April 2024, the New York Declaration on Animal Consciousness updated the consensus, citing solid evidence of consciousness in mammals and birds and a realistic possibility of it in reptiles, amphibians, fish, and various invertebrates (cephalopods, decapods, and even insects) (6). To make such claims, scientists examine comparative brain anatomy, neurophysiology, and specific behaviors that are only possible if there is an internal representation of the world, beyond simple reward-and-punishment mechanisms (7).

If consciousness thus sits almost on a continuum within the evolution of life, why did it arise? What advantages did it offer? Its characteristics suggest the answer. Having conscious, subjective experience is advantageous when an organism must integrate diverse information, resolve conflicting decisions, plan for the long term, and learn flexibly. In a brain that processes massively in parallel, having a “meeting point”, a kind of workspace, helps with control and with social coordination.

When we think about machines, the idea of self-awareness inevitably comes up, probably more because of Hollywood than because of scientific evidence. Think of Terminator, The Matrix, 2001: A Space Odyssey, I, Robot, or Ex Machina. When we imagine artificial consciousness, we project a human distrust of the unknown, the new, and the technological.

In reality, if an artificial system reproduced certain organizational properties we observe in conscious brains, it could be conscious. To do so, it should exhibit recurrence: the system not only receives inputs and produces outputs, it also feeds back on itself (returns to its own states) and maintains state memories that influence what it does next (8). It should also show global diffusion, a global workspace in which information is widely shared so that other modules (perception, memory, language, action) can use it. Perhaps the hardest requirement is causal integration, where interconnected parts produce effects that cannot be explained by any part in isolation. Another feature would be metarepresentation, an internal model of itself that tracks attention and detects errors. Most fundamentally, it should have the capacity to report what it is processing so that, from the outside, we can suspect the existence of an internal, subjective experience (9).
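To make the vocabulary of recurrence and global broadcast concrete, here is a deliberately toy Python sketch, our own illustration rather than an implementation from the literature: each module keeps a recurrent internal state, candidate signals compete, and the winner is rebroadcast to every module. The module structure, dimensions, and dynamics are all hypothetical.

```python
import numpy as np

# A toy sketch (not a real consciousness model): specialist modules
# write candidate signals; a "workspace" selects one and broadcasts
# it back to every module. All dynamics here are illustrative.

class Module:
    def __init__(self, dim, rng):
        self.W_in = rng.normal(size=(dim, dim)) * 0.1   # input weights
        self.W_rec = rng.normal(size=(dim, dim)) * 0.1  # recurrent weights
        self.state = np.zeros(dim)                      # persistent state

    def step(self, broadcast):
        # Recurrence: the new state depends on the module's own past
        # state as well as on the globally broadcast content.
        self.state = np.tanh(self.W_rec @ self.state + self.W_in @ broadcast)
        return self.state

def workspace_cycle(modules, stimulus, steps=5):
    broadcast = stimulus
    for _ in range(steps):
        candidates = [m.step(broadcast) for m in modules]
        # "Ignition": the strongest candidate wins the workspace and
        # is shared with all modules on the next step.
        winner = max(candidates, key=lambda s: np.linalg.norm(s))
        broadcast = winner
    return broadcast

rng = np.random.default_rng(0)
modules = [Module(dim=8, rng=rng) for _ in range(4)]  # e.g. perception, memory, ...
print(workspace_cycle(modules, stimulus=rng.normal(size=8)))
```

Nothing in this toy system is conscious, of course; it only illustrates what recurrence, competition, and broadcast mean as architectural ingredients.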

If we look at current AI (artificial intelligence) systems, it is clear there is no self-consciousness. Some achieve fragments of it, such as short-term memory, but they lack strong recurrence, stable global diffusion, robust causal integration, and metacognition that works beyond narrow cases. Of course, a Large Language Model’s (LLM’s) resemblance to consciousness is not consciousness: its apparent intelligence is the product of next-token probability modeling learned from vast amounts of text. The appearance of consciousness is not the thing itself. Because humans so easily anthropomorphize, we should be cautious about claims, benchmarks, and hype surrounding “machine consciousness”. Needless to say, the same applies to “quantum consciousness”.
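To see what “next-token probability modeling” means in practice, here is a minimal, hypothetical Python sketch of the sampling step: scores (logits) over a small made-up vocabulary are turned into probabilities, and the next token is drawn from that distribution. The vocabulary and numbers are invented; a real LLM produces the logits with billions of learned parameters.

```python
import numpy as np

# A tiny illustration of next-token probability modeling. The vocabulary
# and logits below are made up for the example.
vocab = ["the", "bat", "flies", "sleeps", "."]
logits = np.array([0.2, 1.5, 2.3, 0.7, 0.1])  # hypothetical scores

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(logits)
rng = np.random.default_rng(42)
next_token = rng.choice(vocab, p=probs)  # stochastic next-token choice
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

However fluent the resulting text, the mechanism is this distribution-and-sample loop repeated, which is precisely why fluency alone is weak evidence of inner experience.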

When might it be achieved? There is no reliable date, and each company will offer a timeline aligned with its interests. A cautious scenario is years to decades. AI will certainly keep impressing us more and more, but it is far less certain that it will be conscious.

It may be useful to distinguish a few terms about machines. Intelligence is a general ability to achieve goals in diverse environments, make decisions, and solve problems. It can exist without consciousness. In fact, much of our perception, motor control, and day-to-day decision-making is unconscious yet clearly intelligent. The reverse also happens: there can be conscious experience with very limited intelligence, as in dreaming or in many cognitive impairments. Consciousness relates to flexibility, reportability of subjective representations, deliberate planning, and metacognition. It is not the same as intelligence.

In this sense, AGI (artificial general intelligence), defined as the capacity to solve a wide range of tasks with systematic generalization and flexible reasoning beyond human level, could be implemented using non-conscious architectures. We can imagine multiple modules for learning, planning, and memory without a “conscious” global workspace or internal metamodels.
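For contrast with the workspace sketch above, a non-conscious modular pipeline might look like the following hypothetical Python sketch: modules hand results forward point-to-point, with no global broadcast and no self-model. All names and behaviors are invented for illustration.

```python
# A contrasting toy sketch: a purely feed-forward pipeline of modules
# with no global broadcast and no metamodel. Everything is illustrative.

def perceive(observation):
    return {"features": observation}  # stand-in for a learned encoder

def plan(features, memory):
    # Toy "planner": pick an option from memory; no workspace involved.
    return max(memory.get("options", ["wait"]), key=len)

def act(plan_step):
    return f"executing {plan_step}"

memory = {"options": ["wait", "explore", "recharge"]}
state = perceive("low battery")
step = plan(state["features"], memory)
print(act(step))  # modules hand results forward; nothing is broadcast back
```

Such a system could in principle be made arbitrarily capable at its tasks while having no candidate mechanism for subjective experience at all.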

If an AGI were conscious, that consciousness could endow the system with models of its own attention and state, which would make explainability easier (by providing truthful internal reports), improve safety (by detecting its own errors), and, above all, enhance social cognition (by attributing states to other agents).

We will very likely see major progress on the path to AGI with Aigarth. It’s going to be a real breakthrough. Meanwhile, the prospect of self-awareness will remain more distant, at least from a neuroscientific perspective.

Jose Sanchez. Qubic Scientific Advisory

Weekly Updates Every Tuesday at 12 PM CET

---

Citations 

1 Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.2307/2183914

2 Koch, C., Massimini, M., Boly, M., & Tononi, G. (2016). Neural correlates of consciousness: Progress and problems. Nature Reviews Neuroscience, 17(5), 307–321. https://doi.org/10.1038/nrn.2016.22

3 Boly, M., Massimini, M., Tsuchiya, N., Postle, B. R., Koch, C., & Tononi, G. (2017). Are the neural correlates of consciousness in the front or in the back of the cerebral cortex? Journal of Neuroscience, 37(40), 9603–9613. https://doi.org/10.1523/JNEUROSCI.3218-16.2017

4 Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

5 The Cambridge Declaration on Consciousness. (2012, July). University of Cambridge.

6 New York Declaration on Animal Consciousness. (2024, April). New York University.

7 Birch, J., Schnell, A. K., & Clayton, N. S. (2020). Dimensions of animal consciousness. Trends in Cognitive Sciences, 24(10), 789–801. https://doi.org/10.1016/j.tics.2020.07.007

8 Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871

9 Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., VanRullen, R., et al. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness (preprint). arXiv:2308.08708.


Follow us on X @Qubic
Learn more at qubic.org
Subscribe to the AGI for Good Newsletter below.



© 2025 Qubic.

Qubic is a decentralized, open-source network for experimental technology. Nothing on this site should be construed as investment, legal, or financial advice. Qubic does not offer securities, and participation in the network may involve risks. Users are responsible for complying with local regulations. Please consult legal and financial professionals before engaging with the platform.
