By Futurist Thomas Frey

A troubling question keeps surfacing in late-night discussions among AI researchers: what happens when truly powerful AI becomes accessible to truly dangerous people?

We’ve spent decades democratizing technology, celebrating the principle that powerful tools should be available to everyone. The internet, smartphones, and cloud computing followed this path—initially expensive and exclusive, eventually cheap and universal. We assumed this was progress, that access equals empowerment equals good.

But AI is different. A malicious actor with sufficiently advanced AI could engineer bioweapons, create undetectable deepfakes to destabilize governments, automate fraud at unprecedented scale, or develop cyber weapons that make current ransomware look primitive. The same tool that helps a researcher cure disease could help a terrorist design a pandemic.

This forces an uncomfortable question: should access to powerful AI require licensing, similar to how we restrict access to aircraft, pharmaceuticals, or financial systems? Should there be an AI passport—verification that you’re responsible enough to wield this kind of power?

The Case for AI Licensing

The argument is straightforward: we already restrict dangerous capabilities. You need a license to drive because cars can kill. You need certification to prescribe medicine because drugs can harm. You need clearance to access classified information because secrets can endanger nations.

AI might be more consequential than all of these combined. A person with access to a sufficiently advanced AI could potentially:

Generate convincing disinformation campaigns that manipulate millions. Create deepfake videos of world leaders declaring war. Design novel pathogens optimized for transmission and lethality. Automate social engineering attacks that drain bank accounts at scale. Develop AI-powered drones that autonomously target individuals. Hack critical infrastructure by finding vulnerabilities humans would miss.

The counterargument—that bad actors will access AI regardless of restrictions—misses the point. We don’t skip driver’s licenses because some people drive without them. Licensing creates friction, establishes accountability, and prevents casual misuse even if it doesn’t stop determined criminals.

A tiered system might work: Basic AI accessible to everyone for everyday tasks. Intermediate AI requiring simple verification—proof of identity and clean criminal record. Advanced AI requiring demonstrated training, background checks, and ongoing monitoring. Super AI restricted to vetted researchers, institutions, and professionals with legitimate need.

The verification could leverage existing infrastructure. Your AI passport could link to your digital identity, criminal record, professional credentials, and usage history. Access violations would be logged and investigated. Patterns of concerning behavior would trigger reviews.
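The tiered scheme and passport checks described above can be sketched as a simple gating function. Everything here is a hypothetical illustration—the tier names, the verification fields, and the qualification rules are assumptions, not an existing system:

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    # Hypothetical access tiers mirroring the proposal above
    BASIC = 0         # everyday tasks, no verification
    INTERMEDIATE = 1  # identity verified, clean record
    ADVANCED = 2      # training plus background check
    SUPER = 3         # vetted institutions with legitimate need

@dataclass
class Passport:
    # Assumed verification fields an "AI passport" might carry
    identity_verified: bool = False
    clean_record: bool = False
    trained: bool = False
    background_checked: bool = False
    institution_vetted: bool = False

def max_tier(p: Passport) -> Tier:
    """Highest tier a passport qualifies for under these assumed rules."""
    if p.institution_vetted:
        return Tier.SUPER
    if p.trained and p.background_checked:
        return Tier.ADVANCED
    if p.identity_verified and p.clean_record:
        return Tier.INTERMEDIATE
    return Tier.BASIC

def can_access(p: Passport, required: Tier) -> bool:
    """A request proceeds only if the passport meets the required tier."""
    return max_tier(p) >= required
```

Even this toy version makes one trade-off visible: every field in the passport is a point where a verifier can err, discriminate, or be pressured—the exact concern the next section raises.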

The Case Against Restriction

The opposing argument is equally compelling: licensing creates aristocracy. Throughout history, restricting knowledge and tools has concentrated power among elites while disempowering everyone else. Literacy, printing presses, encryption—authorities have always wanted to control these technologies “for safety,” and society has always been better off when they failed.

Who decides who’s “responsible enough” for powerful AI? Governments have terrible track records of judging character. Dissidents, activists, and whistleblowers would be the first denied access. Marginalized communities would face discriminatory enforcement. Authoritarian regimes would weaponize licensing to suppress opposition.

The technical challenges are equally daunting. How do you enforce AI licensing when the technology is fundamentally software that can be copied and distributed? Open-source AI models already exist. Once released, they can’t be unreleased. Attempting to restrict access creates a black market where the most dangerous users—the ones licensing aims to stop—simply bypass controls while legitimate users suffer bureaucratic obstacles.

There’s also the innovation cost. Many breakthroughs come from unexpected places—hobbyists, students, outsiders who wouldn’t qualify for advanced AI licenses. Penicillin was discovered by accident. The internet was built by people sharing freely. Restriction stifles the serendipity that drives progress.

And perhaps most fundamentally: restricting AI access doesn’t address the actual problem. The danger isn’t that people have powerful tools—it’s what motivates them to use tools harmfully. We should focus on reducing desperation, extremism, and grievance rather than controlling technology access.

The Uncomfortable Middle Ground

Here’s what keeps me up at night: both arguments are correct. AI is dangerous enough to warrant some restrictions. But restriction mechanisms are dangerous in their own right.

Maybe the answer isn’t binary. Maybe we need nuanced approaches that balance access with accountability:

Capability-Based Licensing: Restrict specific dangerous capabilities (bioweapon design, mass surveillance, autonomous weapons) while keeping general AI widely accessible. You don’t need a license to use AI for writing, coding, or analysis. You do need verification to access modules that could cause mass harm.

Transparent Logging: Rather than preventing access, create indelible audit trails. Powerful AI usage is logged in tamper-proof systems. You can use advanced AI freely, but your usage history is reviewable if harm occurs. This preserves access while enabling accountability.
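The “tamper-proof” logging idea above is usually built by hash-chaining entries so that altering any past record breaks every hash after it. A minimal sketch, assuming a single local log (a real system would distribute or notarize the chain so the operator can’t simply rewrite it):

```python
import hashlib
import json

class AuditLog:
    """Tamper-evident usage log: each entry commits to the previous
    entry's hash, so editing any past record invalidates the chain."""

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []

    def append(self, user: str, action: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"user": user, "action": action, "prev": prev_hash}
        # Hash a canonical serialization of the record body
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edited or reordered entry fails."""
        prev = self.GENESIS
        for e in self.entries:
            body = {"user": e["user"], "action": e["action"], "prev": e["prev"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

This preserves the access-first principle: nothing is blocked up front, but after harm occurs, investigators can trust the history they are reading.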

Graduated Permissions: Start everyone with baseline AI access. Demonstrate responsible usage over time to unlock more advanced capabilities. Behave badly, and your access degrades. It’s reputation-based rather than credential-based—merit through action rather than approval from authorities.
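Reputation-based access like this amounts to a score that rises slowly with responsible use and drops sharply on violations. The thresholds, increments, and capability names below are illustrative assumptions, not a proposed policy:

```python
class Reputation:
    """Sketch of graduated permissions: demonstrated responsible use
    unlocks capabilities; violations degrade access. All numbers here
    are assumed for illustration."""

    THRESHOLDS = {"baseline": 0, "advanced": 50, "frontier": 100}

    def __init__(self):
        self.score = 0  # everyone starts at baseline

    def record_use(self, responsible: bool) -> None:
        # Trust builds slowly (+5) and is lost quickly (-25)
        self.score = max(self.score + (5 if responsible else -25), 0)

    def unlocked(self) -> list:
        return [cap for cap, t in self.THRESHOLDS.items() if self.score >= t]
```

The asymmetry between gains and losses is the design choice doing the work: it makes access cheap to keep and expensive to regain, which is the friction licensing was meant to provide—without a central authority judging anyone in advance.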

Community Oversight: Instead of government licensing, create community-based verification systems. Professional organizations, academic institutions, and peer networks could vouch for members. Distributed trust rather than centralized control.

Kill Switch Requirements: Mandate that advanced AI systems include built-in limitations that prevent catastrophic misuse. AI could refuse requests that pattern-match to known dangerous applications. Not perfect, but adds friction to casual harm.
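The pattern-matching refusal described above might look like the following in its crudest form. To be clear, these deny-list regexes are stand-in assumptions—production systems use trained classifiers, and naive keyword filters are easy to evade—but the sketch shows where the friction sits:

```python
import re

# Illustrative deny-list; categories and patterns are assumptions
# for this sketch, not a real safety policy.
DENY_PATTERNS = [
    re.compile(r"\b(synthesi[sz]e|enhance)\b.*\bpathogen\b", re.IGNORECASE),
    re.compile(r"\bautonomous\b.*\btarget(ing)?\b.*\bindividual", re.IGNORECASE),
]

def screen_request(prompt: str) -> bool:
    """Return True if the request may proceed, False if it
    pattern-matches a known-dangerous application."""
    return not any(p.search(prompt) for p in DENY_PATTERNS)
```

As the text says: not perfect. A determined attacker rephrases; a casual one is stopped. That gap between casual and determined misuse is exactly what “adds friction” means.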

The key is accepting that no solution is perfect. Licensing schemes can be circumvented. Open access can be abused. Every approach trades one risk for another. The question is which risks we’re willing to accept.

The Real Danger

Here’s my concern: we’re debating the wrong question. Restricting who accesses AI assumes individuals are the threat. But the most dangerous AI applications will come from institutions—governments, corporations, criminal organizations—that will access advanced AI regardless of licensing schemes.

A surveillance state doesn’t need permission to deploy AI-powered social control. A pharmaceutical company doesn’t ask for a license before using AI to make drug formulations more addictive. An authoritarian regime won’t be stopped by access restrictions when developing AI-powered propaganda or oppression tools.

Individual bad actors with AI are concerning. Institutional bad actors with AI are existential. And institutions always get the tools they want.

Maybe instead of asking “who should access AI?” we should ask “how do we ensure AI remains aligned with human flourishing regardless of who uses it?” That’s a harder question with less satisfying answers, but it might be the more honest one.

Final Thoughts

Should we license AI access? I honestly don’t know. Every path forward contains dangers we can’t fully predict.

What I do know: we can’t ignore the question. AI capabilities are advancing faster than our social frameworks for managing them. Doing nothing is a choice with consequences. So is restriction. So is unrestricted access.

Perhaps the wisest approach is humility—acknowledging that we’re navigating unprecedented territory without a map. We should experiment carefully with different models, watch for unintended consequences, and remain willing to change course when approaches prove harmful.

The future of AI governance won’t be determined by a single policy decision. It will emerge from thousands of choices made by developers, users, regulators, and societies. The question isn’t whether we can create perfect restrictions, but whether we can create adaptive systems that minimize harm while preserving the benefits that make AI valuable in the first place.

That’s the balance we must find—between access and accountability, freedom and safety, innovation and caution. Get it wrong, and we either enable catastrophe or create technological aristocracy. Get it right, and we might navigate this transformation successfully.

The stakes couldn’t be higher. And the clock is already ticking.
