Senators Richard Blumenthal, a Democrat, and Josh Hawley, a Republican, have jointly proposed a legislative framework intended to shape future AI regulation. The framework calls for establishing a new government body to regulate artificial intelligence and for restricting development of advanced language models like OpenAI’s GPT-4 to companies granted licenses for the purpose.
Under the proposal, companies intending to develop “high-risk” AI applications, including face recognition, must obtain a government license. To qualify, companies would be required to test AI models for potential harm before deployment, publicly disclose incidents of AI-related harm after launch, and allow independent third-party audits of their models. They would also need to disclose the training data used to build their models, and individuals harmed by AI would gain the right to sue the responsible company.
This framework, introduced by Blumenthal and Hawley, is likely to have a significant impact on ongoing debates about AI regulation in Washington. A Senate subcommittee hearing, overseen by the two senators, is scheduled to discuss how to hold businesses and governments accountable for AI systems causing harm or infringing on rights. Microsoft President Brad Smith and Nvidia’s Chief Scientist William Dally are among the witnesses expected to testify.
Additionally, Senator Chuck Schumer is organizing a series of meetings to explore the challenges of regulating AI, with input from key tech executives and experts in AI ethics and human rights. However, some remain skeptical that a new AI oversight body could muster the technical and legal expertise needed to oversee a technology that spans sectors from autonomous vehicles to healthcare and housing.
The concept of licensing for AI development has garnered attention within the industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during Senate testimony in May. A separate bill introduced by Senators Lindsey Graham and Elizabeth Warren would require government AI licenses for tech companies above a certain size.
While Blumenthal and Hawley’s framework represents a significant step toward AI regulation, it leaves several questions unanswered. The framework does not specify whether AI oversight would come from a newly created federal agency or a unit within an existing one, and it sets out no criteria for defining which high-risk AI use cases would require licensing.
Critics, including libertarian-leaning groups and digital rights organizations, argue that AI licensing could stifle innovation and lead to industry capture by influential companies. The framework does, however, include provisions for strong conflict-of-interest rules within the AI oversight body.
Ultimately, the framework signals Congress’s inclination toward stricter AI regulation than the federal government’s current voluntary risk-management framework and nonbinding AI Bill of Rights. While the White House has brokered voluntary agreements with major AI companies, it acknowledges that legislative action is needed to protect the public from AI harms.
By Impact Lab