OpenAI, the organization behind the widely used AI chatbot ChatGPT, has announced plans to establish a team dedicated to mitigating the risks posed by superintelligent AI systems, which it expects could emerge within the next decade.
In a blog post on July 5, OpenAI emphasized the need for this new team to “steer and control AI systems much smarter than us.” The organization firmly believes that superintelligence will be a groundbreaking technology, capable of solving numerous challenges, but acknowledges the potential risks that come with it.
To address these concerns, OpenAI has committed 20% of the compute it has secured to date to this effort and aims to develop a “human-level” automated alignment researcher. The purpose of this automated researcher is to assist the team in ensuring the safety of superintelligence and aligning it with human intent.
The initiative will be led by Ilya Sutskever, Chief Scientist at OpenAI, and Jan Leike, Head of Alignment at the organization. OpenAI is also extending an open invitation to machine learning researchers and engineers, urging them to join the team in this crucial endeavor.
OpenAI’s announcement comes at a time when governments worldwide are contemplating measures to regulate the development, deployment, and use of AI systems. In the European Union, regulators have made significant strides with the EU AI Act, which would require transparency around AI-generated content, including output from systems such as ChatGPT.
Similarly, in the United States, the proposed National AI Commission Act would establish a commission to shape the nation’s approach to regulating AI. OpenAI, among other tech companies, has been vocal about the risks of over-regulation, advocating for balanced policies that promote innovation while addressing potential harms.
With its proactive approach, OpenAI aims to pave the way for responsible AI development and use, fostering a collaborative effort to ensure the safe and beneficial implementation of superintelligent AI systems in the near future.
By Impact Lab