Who am I? Am I just a biological machine with a limited lifespan? What happens to my awareness when I die? Will it just fade away? Are my fellow humans as aware of themselves as I am?
These are the questions one poses to oneself when grappling with probably the most enigmatic riddle of human existence: consciousness, the state of being aware of one’s surroundings and of oneself.
This mystifying phenomenon, debated by philosophers for centuries, has entered the public conversation in recent years, and the discussion has grown steadily in intensity and ubiquity, not only in outlets such as TED talks and podcasts but also in the mainstream media. What has propelled this highly philosophical topic into the mainstream debate? While there is, in my opinion, no single inflection point, four distinct reasons explain why it has gone mainstream.
- First reason: Creative work, starting with the 1999 movie “The Matrix” and followed by many films and series, made people contemplate the nature of reality and the emergence of consciousness. This trend in popular culture, and the public thirst for knowledge it created, likely sparked a movement in scholarly circles.
- Second reason: Popular science books addressing consciousness, perception, and related topics, among them Daniel Dennett’s “Consciousness Explained”, David Chalmers’s “Reality+”, Anil Seth’s “Being You”, and Yuval Noah Harari’s “Homo Deus”, raised public awareness further. While this scholarly work expanded public understanding, developments in medicine and psychiatry opened another fascinating dimension.
- Third reason: What I call the Psychedelic Renaissance showed that hard-to-treat mental health conditions such as post-traumatic stress disorder (PTSD) can be treated with psychedelic and psychedelic-adjacent substances like ketamine, LSD, psilocybin, and MDMA. Simultaneously, the mind-altering effects of these substances raised further questions about the malleability of consciousness and called the accuracy of our perception of reality into question. These intriguing insights were augmented by advances in technology.
- Fourth reason, likely the most critical: Enormous advances in machine learning, followed by the creation of LLMs (software architectures that arguably pass the Turing Test with flying colors), have transformed consciousness from a philosophical puzzle into an urgent technological and ethical challenge. This is no longer just a thought experiment; policymakers are now grappling with its profound implications for society.
Against this backdrop, I was pleased to read Peter D’Autry’s article “Why Computers Can’t Be Conscious” in this magazine, which discussed the possibility of machine consciousness but ultimately concluded that it will not materialize. The article resonated deeply with my own thinking, which I had put into words and published at about the same time. There is, however, one argument I wholeheartedly disagree with: the matter of the substrate independence of consciousness.
The substrate independence of consciousness — is biology mandatory?
Substrate independence is the question of whether consciousness could potentially emerge from nonbiological entities such as silicon-based systems. I agree with the previous article that consciousness is highly unlikely to emerge from current AI architectures, which are fundamentally pattern-matching and statistical inference engines — statistics on steroids essentially. These systems, while impressive in their output, lack the intrinsic qualities we associate with consciousness such as self-awareness, subjective experience (qualia), and genuine understanding. Yet, I disagree with the categorical dismissal of the possibility of artificial consciousness. Given our limited understanding of how consciousness emerges — even in biological systems — and considering the rapid evolution of both software (e.g. AI architectures & quantum algorithms) and hardware (quantum computing), it seems rather premature to rule out the possibility entirely.
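To make the “statistics on steroids” characterization concrete, here is a deliberately toy sketch of the core loop of a language model: repeatedly sampling the next token from a learned probability distribution. The bigram table below is a hypothetical stand-in for the billions of learned parameters in a real model, not any actual architecture; it simply illustrates that generation is pattern completion, with no understanding required.

```python
import random

# Hypothetical bigram "model": P(next word | current word).
# In a real LLM these probabilities come from billions of learned
# parameters; this lookup table is a stand-in for illustration only.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "mind": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 1.0},
    "mind": {"wanders": 1.0},
}

def next_token(current: str) -> str:
    """Sample the next token from the learned probability distribution."""
    dist = BIGRAMS.get(current)
    if dist is None:
        return "<end>"
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

# Generate text by chaining statistical predictions: no self-awareness,
# no subjective experience, just pattern completion at scale.
sentence = ["the"]
for _ in range(4):
    token = next_token(sentence[-1])
    if token == "<end>":
        break
    sentence.append(token)
print(" ".join(sentence))
```

However impressive the output becomes as the table of statistics grows into a deep network, the mechanism remains statistical inference, which is why I agree that consciousness is unlikely to emerge from architectures of this kind.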
To be clear: I am not arguing that consciousness will eventually be created in a nonbiological entity, but I am staying open to the possibility. In what follows, I will lay out the reasoning behind this view.
Consciousness — can our minds figure themselves out?
What is consciousness? Despite all the scientific progress made in neuroscience, we still do not know what consciousness really is or how it emerges; there is, however, no lack of hypotheses, and they are hotly debated. David Chalmers coined the term “Hard Problem of Consciousness” for precisely this: the question of how and why subjective experiences, our inner, conscious awareness, arise from the physical processes of the brain. It is a mystery that challenges our understanding of reality itself. This riddle remains unsolved, and while biological neural networks might be one path to consciousness, they may not be the only one.
The challenge begins with a profound question: since we cannot define consciousness with precision, how can we judge whether another entity, biological or artificial, is conscious? The only consciousness we can truly confirm is our own. Are other humans as conscious as we are? Perhaps they are slightly less aware, or even more so. And what about my dog: is he merely a “beast machine,” as Descartes might argue, or does he possess a form of consciousness that is simply less developed than ours? These questions highlight our first massive hurdle: what exactly is consciousness, and how can we determine whether something else shares this mystical, undefinable property?
Consciousness, as fragile as it is enigmatic, can dissolve under certain conditions. Real-world events such as trauma, coma, or general anesthesia starkly demonstrate its impermanence. Under anesthesia, a cocktail of chemicals can effectively fade consciousness into oblivion, albeit, thankfully, only temporarily. Invasive procedures like lobotomies, or vascular incidents such as strokes, have drastically altered self-awareness and consciousness. Psychedelics offer yet another perspective, showing how minute amounts of chemical compounds can profoundly alter perception and qualia, the subjective experience of reality. These examples underscore the profound link between consciousness and biological processes.
In addition, the brain’s remarkable ability to rearrange itself, known as neuroplasticity, suggests an even deeper complexity. After a stroke or severe injury, the brain can forge new neural pathways to restore at least some lost function and even recover aspects of consciousness. This ability not only highlights the resilience of the human mind but also suggests that consciousness is far more dynamic than we might assume.
Hence, there is undeniably a physical and biochemical foundation to consciousness, deeply rooted in the brain’s processes.
Another crucial consideration is embodied cognition, the theory that consciousness emerges not just from brain activity but from the body’s interactions with the world. Studies of infant development suggest that consciousness develops through physical interaction with the environment, implying that a purely digital system might face fundamental limitations in achieving consciousness without some form of physical embodiment.
However, neither the correlation between biological mechanisms and consciousness nor the concept of embodied cognition precludes the possibility that there is more to it. William James and Aldous Huxley famously likened the brain to a “reducing valve,” filtering a broader reality into a manageable form of perception. If true, this means that consciousness might extend beyond the confines of biological functions, raising even bigger questions about its origins.
Essentially, there are three broad schools of thought in the philosophy of mind regarding consciousness: dualism, functionalism, and idealism. Dualism posits that consciousness and physical matter are two fundamentally different substances that cannot be reduced to each other; strange quantum effects are sometimes cited in its support. Functionalism holds that mental states are defined by their functional organization (such as neural networks) and not by the physical substrate; the possibility, noted above, of chemically altering or fading out consciousness supports this view. Idealism is the philosophical view that reality is fundamentally mental and hence immaterial: consciousness gives rise to the mind, which in turn gives rise to the material world. Mind and matter are referred to as the ontological essentials, and the graphic below visualizes which of them each school of thought treats as fundamental.
As a consequence, dualism would imply that computers are unlikely to become conscious, while functionalism implies that machines will eventually be conscious; it is really just a matter of time. Idealism presents a third perspective: if consciousness is indeed the fundamental reality from which all else emerges, then the question of machine consciousness becomes not one of emergence but of manifestation, namely whether artificial systems could serve as interfaces for the expression of pre-existing consciousness, just as biological entities do.
Will we ever solve this riddle? Perhaps not, at least not alone. Our limited minds perceive reality through something like a virtual reality system, a simplified interface designed to help us survive and procreate. Albert Einstein is often credited with the observation that problems cannot be solved at the same level of thinking that created them. This insight applies perfectly to the Hard Problem of Consciousness: is our limited mind equipped to understand itself and deduce the origins of consciousness? Or will we need the assistance of a higher intelligence, perhaps even advanced AI, to finally glimpse the full picture? Just as we sometimes need a therapist or coach to tell us what is going wrong with our own thoughts, we may need an outside perspective to understand consciousness.
In conclusion, our limited understanding of consciousness — the Hard Problem — makes it impossible to definitively verify or falsify whether any entity is truly conscious. This leads us on to the next two topics, namely the possibility of machine consciousness and the consequent ethical dilemma we face.
The possibility of machine consciousness: creepy, complex, and completely uncertain
The creepy notion of humans creating artificial life, and potentially conscious beings, has haunted literature and philosophy for centuries. Mary Shelley’s “Frankenstein” probed the ethics of creating life in a laboratory, while E.T.A. Hoffmann’s “The Sandman” (a delightfully chilling German Gothic short story) explored the confusion between a human and an automaton. Indeed, the possibility of machine consciousness entered philosophical discourse before machines as we know them even existed!
The four developments outlined above quite rightly extended and intensified the discussion of machine consciousness. “Why Computers Can’t Be Conscious” maintains that computers cannot and will never be conscious. The reasoning? Consciousness, not matter, is fundamental; everything we perceive as reality, including matter, emerges from consciousness, not the other way around. Hence, material systems cannot create consciousness, as they are merely a product of consciousness: a viewpoint clearly rooted in philosophical idealism. Scholars arguing for dualism (matter and mind as separate entities) likewise essentially rule out consciousness in machines, holding that it requires a biological basis and that silicon systems will never give rise to a conscious entity.
I partially agree: today’s hardware and software architectures, no matter how advanced, still pale in comparison to even the most basic biological systems (including chicken brains), and hence these systems will by no means give rise to a conscious entity. However, I remain open to the possibility and would not categorically rule it out.
Why? Because history has shown us that underestimating technological advancement is a risky bet. Take chess: according to Murray Head’s 1980s hit “One Night in Bangkok”, it is the ultimate test of cerebral fitness, and experts long maintained that chess computers could never beat a true grandmaster. These predictions were nullified when Garry Kasparov lost to IBM’s Deep Blue in 1997. Still, techno-pessimists followed up with the bold claim that AI would never master the ancient board game of Go, as it is far more intuitive, and intuition is clearly not a machine’s strength! Fast forward to 2016, and DeepMind’s AlphaGo not only beat a champion but went on to consistently outperform the world’s best players. And now we are no longer talking about performance in games or scientific endeavors, but about the most innate human competency: our conscious mind. Well, I guess it is a case of “never say never”.
The UX Magazine article “Why Computers Can’t Be Conscious” argues that consciousness must remain embodied in biological systems, while I entertain, but do not claim, the idea that it might emerge independently of biology and may not be fundamental. I am agnostic in the debate between idealism, functionalism, and dualism; I will wait and see who wins the race.
To complicate matters a bit further, there is another technological game changer in the pipeline: quantum computers and algorithms, which will eventually make their mark, further transforming computing and amplifying both its benefits and its risks. This raises profound questions about their implications for consciousness and AI. Roger Penrose, for instance, proposed that consciousness might arise from quantum effects in microtubules, tiny tubular structures inside the brain’s neurons. If he is right, what happens if an advanced AI is run on a quantum computer? Will we accidentally build a conscious machine?
Before a conscious machine intelligence is created, even accidentally, the moral and ethical questions have to be tackled head-on, considering the far-reaching ramifications of such a creation. The next section delves into this critical ethical consideration: even if the probability of creating a conscious machine is minimal, should we still prevent it by all means?
Ethical dimensions: could machine consciousness signal the Novacene or the end of us?
The debate on consciousness is fascinating, but when it comes to AI, the stakes are much higher. Beyond theoretical musings, AI poses real and escalating risks. Just as industrialization displaced blue-collar jobs, AI is now set to disrupt white-collar professions. Add to that the dangers of losing human control, amplifying bias and discrimination, and the existential risk highlighted by Nick Bostrom in “Superintelligence”: the control problem — how to ensure a superintelligent AI aligns with human values.
But here’s where the ethical dimension becomes truly mind-bending. Imagine a conscious machine is, perhaps accidentally, created. Would we then be morally obligated to keep it turned on forever? After all, turning it off could be akin to killing it, and that opens a Pandora’s box of ethical and moral dilemmas. The twist? We know so little about consciousness that we wouldn’t even be able to confirm whether the machine is truly conscious, as discussed above. That uncertainty alone creates a formidable ethical challenge. Idealist perspectives would frame the act of creating conscious machines not as generating something entirely new, but as engaging with a reality fundamentally rooted in consciousness itself.
Personally, I believe we must prevent machine consciousness at all costs. The control problem alone is daunting, but there’s also a deeply humanist argument: if we create an entity as conscious as, or even more conscious than, ourselves, we’d bear the ethical responsibility for its existence. Moreover, creating conscious machines would violate core humanist values by altering humanity’s unique position and burdening us with godlike responsibilities we’re not prepared for. Could humanity handle that burden? I doubt it, and we certainly should not take it on without prior consultation, not only with science and philosophy but also with leaders from faith systems encompassing both organized religions and spiritual practices.
Faith adds another critically important aspect to this dilemma: if we were able to falsify the idea of the substrate independence of consciousness, we would essentially be disproving functionalism. Such a revelation could lend support to belief systems that rationalized our reality long before modern science. However, this is very risky, as it could also exacerbate the already fragile relationships between differing belief systems. In essence, the question of whether artificial consciousness is possible might simultaneously be the question of whether a higher power exists: a profound and potentially divisive implication. This is a topic I plan to explore further in future publications.
Policymakers need to act urgently. Tackling AI’s risks isn’t optional; it’s imperative. Regulation must prioritize ethical and moral considerations alongside safeguarding measures. AI, when harnessed responsibly, has extraordinary potential — I’ve edited a book on how it’s helping reduce emissions and combat climate change. But left unchecked, AI could usher in a future that resembles either “Brave New World” or “1984” — neither of which we want to live in.
The late Jim Lovelock took a relaxed view in his final book “Novacene”, arguing that machines taking over is simply the next step in evolution: inevitable and unstoppable. Humanists, however, might beg to differ. Ultimately, the advent of superintelligent machines would take from us our position as the apex creation of evolution, and that would be a hard pill to swallow. More importantly, the possibility of machine consciousness, however small, could shatter belief systems that have existed for millennia; the consequences of such an event are as frightening as they are unpredictable.
Conclusions
The question of consciousness remains one of humanity’s most profound puzzles and is at the very heart of the human condition. While we’ve made remarkable strides in understanding both the brain and artificial intelligence, we still lack a definitive understanding of consciousness itself. With many hypotheses but little concrete evidence, we find ourselves in the uncomfortable position of potentially creating something we can neither understand nor control. Since quantum computers will inevitably enter the scene in the foreseeable future, another totally unpredictable variable will be added to an already highly complex problem. This unprecedented situation forces us to confront not just technological challenges but the very foundations of human existence and consciousness itself.
History cautions us against categorical dismissals of AI’s potential; we’ve consistently underestimated its capabilities, from chess to language understanding, from image recognition to scientific discovery, from medical diagnosis to creative expression. Yet the question of machine consciousness transcends mere technological achievement. It touches upon fundamental questions about the nature of reality, consciousness, and even the existence of a higher power. If we prove that consciousness can emerge from nonbiological systems, we might inadvertently answer age-old questions about functionalism versus other philosophical schools of thought, with far-reaching implications for both science and faith. Such a discovery could profoundly shift perspectives on materialism and lend significant support to philosophical and religious views that place consciousness at the foundation of existence. The consequences of further questioning of faith and belief systems, in particular, are too frightening for me to contemplate.
However, before we venture further down this path, we face critical ethical challenges. If we create a conscious machine, even accidentally, how would we verify its consciousness? More importantly, what moral obligations would we have toward it? Could we ethically “turn off” an entity possessing human-level consciousness? Could we accept deliberately creating something that takes our apex position of conscious existence? Jim Lovelock sees it as the next step in the evolutionary process; I, as a humanist, beg to differ: machines were created to serve humanity, not to replace it. These aren’t just philosophical thought experiments anymore; they’re pressing questions that demand immediate attention from policymakers and ethicists.
Given these profound implications and our current inability to definitively answer these questions, we must proceed with extreme caution. While we shouldn’t categorically deny the possibility of machine consciousness, we should prevent its development until we have robust ethical frameworks and a deeper understanding of consciousness itself. The stakes are simply too high to rush forward without careful consideration of the philosophical, ethical, and societal implications.
Featured image courtesy: Oliver Inderwildi.