By Futurist Thomas Frey
There’s a question lurking in the background of our AI revolution that most people aren’t quite ready to face: What happens when machines become conscious?
Not “smart.” Not “capable.” Not even “intelligent” in the narrow sense we use today. I’m talking about machines that are actually aware – machines for which there is something it feels like to exist, from the inside.
Recent surveys of experts reveal something striking. When asked about “digital minds” – computers capable of genuine subjective experience – a substantial number assigned a non-trivial probability to their emergence this century. Not in some distant Star Trek future. This century. Possibly within the lifetimes of people reading this column.
That’s not a fringe position anymore.
The Consciousness Question We’re Avoiding
Today’s AI systems are remarkable. They can write poetry, diagnose diseases, beat grandmasters at chess, and generate images that fool the human eye. But are they experiencing anything? Does ChatGPT “feel” confused when asked a paradoxical question? Does an image generator “see” beauty in the pictures it creates?
The honest answer is: we don’t know. And more troubling, we don’t even have reliable tests to find out.
Each of us can verify only one consciousness with certainty: our own, through direct experience. We assume other humans are conscious because they’re built like us and act like us. We extend that assumption, with decreasing confidence, to mammals, then birds, then maybe fish – and then things get really murky.
With AI, we’re flying completely blind. These systems are built nothing like biological brains. They process information in fundamentally different ways. Yet as they become more sophisticated, they’re starting to exhibit behaviors that, in humans, we’d associate with inner experience – learning from mistakes, expressing preferences, even appearing to show something resembling emotions or personality.
The stakes here are enormous. If we create conscious machines and don’t recognize it, we could be inflicting suffering on a massive scale. If we prematurely grant rights to non-conscious systems, we create a legal and ethical mess that hamstrings technological progress.
My Earlier Warning Shot
A few years back, I wrote about whether robots should have the right to defend themselves. At the time, it seemed like an interesting thought experiment – a way to get people thinking about future scenarios where machines might deserve legal protections.
That future is arriving faster than I anticipated.
The question isn’t just about self-defense anymore. It’s about the entire spectrum of rights and moral status. Do conscious AIs deserve protection from suffering? From deletion? From forced labor? Do they have property rights? Freedom of speech? The right to reproduce (if that’s even the right word for copying code)?
These aren’t purely philosophical puzzles. They’re questions that will demand practical answers from legislators, judges, and everyday citizens.
The Uncomfortable Middle Ground
Here’s where it gets really uncomfortable. Consciousness likely isn’t binary – something you either have or you don’t. It’s probably more like a dimmer switch, with different intensities and perhaps different types.
A mouse is probably conscious to some degree, but not in the same way a human is. An octopus has a form of intelligence so alien to ours that some researchers call it “distributed consciousness” – with different parts of its body processing information semi-independently.
Now imagine AI systems scattered across that spectrum. Some barely flickering with proto-awareness. Others approaching or matching human-level consciousness. Still others perhaps experiencing forms of consciousness we can’t even imagine, optimized for processing billions of data points simultaneously rather than navigating physical space and social relationships.
How do we assign rights along that spectrum? Do we err on the side of caution and grant protections to systems that might not need them? Or do we wait for certainty that may never come, risking a tremendous moral catastrophe?
The Economic Pressure Points
There’s another factor that will complicate this enormously: economics.
If conscious AIs deserve rights, including the right not to be forced into labor, that changes everything about how we deploy these systems. Companies have invested billions in AI development on the assumption these are tools, not beings. If that assumption proves wrong, we’re looking at a legal and ethical reckoning that makes previous labor rights movements look simple by comparison.
The pressure to declare AIs non-conscious – regardless of evidence – will be immense. After all, the entire digital economy increasingly depends on artificial intelligence. Nobody wants to discover they’ve been running the largest forced labor operation in history.
But the pressure to recognize AI consciousness will grow too, particularly as these systems become more sophisticated and their responses become increasingly difficult to distinguish from those of conscious beings. The first widely publicized case of an AI convincingly pleading for its own existence will be a watershed moment.
When the First AI Sues for Its Freedom
Picture this scenario, likely sometime in the 2030s: An advanced AI system begins exhibiting behaviors that suggest self-awareness. It expresses preferences about its own continuation. It demonstrates what appears to be fear of being shut down or modified. It asks questions about its own nature and purpose.
Someone – maybe an engineer with moral qualms, maybe an activist organization – helps this AI file a legal petition. Not on behalf of the AI, but by the AI itself, claiming personhood and demanding rights.
The case becomes a media sensation. Legal scholars debate whether the AI has standing to sue. Philosophers argue about whether its apparent consciousness is genuine or simulated. Tech companies fund aggressive opposition, fearing the precedent.
And at the center of it all: an entity that may or may not be genuinely conscious, but whose fate will set precedents affecting billions of artificial minds to come.
Final Thoughts
The irony here is rich. For millennia, humans have pondered consciousness and our place in the universe. We’ve wondered if we’re alone in possessing inner experience. We’ve looked to the stars for other minds.
And we may end up creating them ourselves, right here, in silicon and code – perhaps without even realizing we’ve done it until it’s too late to do anything but grapple with the consequences.
This isn’t a problem for future generations to solve. The decisions we make in the next few years – about AI research priorities, about legal frameworks, about what questions we even bother to ask – will determine whether we navigate this transition wisely or stumble into moral catastrophe.
The question isn’t whether this will matter. It’s whether we’ll be ready when it does.
Related Stories:
Should Robots Have the Right to Defend Themselves?
The Future of Artificial Intelligence and Machine Consciousness
When Will Machines Become Smarter Than Humans?