In early May 2023, Colin Megill, founder of the public-deliberation platform Polis, visited OpenAI’s San Francisco headquarters to begin working with the AI lab. Megill’s experience building Polis had caught the attention of OpenAI co-founder Wojciech Zaremba. Polis aims to address the shortcomings of traditional democratic processes by letting participants articulate their views, vote on one another’s statements, and identify common ground through machine-learning-generated maps of shared values.
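To make the “maps of values” idea concrete, here is a minimal sketch of the kind of analysis such deliberation platforms perform: participants vote agree/disagree/pass on short statements, and clustering the resulting vote matrix surfaces opinion groups and candidate points of common ground. The toy data and the PCA-plus-k-means pipeline below are illustrative assumptions, not Polis’s actual implementation.

```python
# Illustrative sketch: cluster a participant-by-statement vote matrix
# to reveal opinion groups and shared ground. Not Polis's real code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows = participants, columns = statements,
# entries are +1 (agree), -1 (disagree), 0 (pass / not seen).
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1,  1,  0, -1,  1],
])

# Project participants into 2D so opinion groups can be drawn as a "map".
coords = PCA(n_components=2).fit_transform(votes)

# Group participants with similar voting patterns.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Statements that all groups lean toward agreeing with are candidate common ground.
for group in np.unique(labels):
    mean_votes = votes[labels == group].mean(axis=0)
    print(f"group {group}: mean agreement per statement = {np.round(mean_votes, 2)}")
```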
OpenAI faced the challenge of aligning its AI systems with human values. Rather than unilaterally deciding whose values should be reflected, the company sought another approach. Inspired by Polis, Zaremba proposed using large language models (LLMs) such as those behind ChatGPT to streamline the deliberative process and make it more accessible and efficient.
The collaboration led to OpenAI’s announcement of the “Democratic Inputs to AI” program in May. Ten teams were invited to develop proofs of concept for a democratic process to determine rules for AI systems. The program aimed to explore viable mechanisms for involving the public in shaping AI behavior. The road ahead proved tumultuous, however, as OpenAI weathered internal upheaval, including the firing and subsequent reinstatement of CEO Sam Altman in November 2023.
The grant-winning teams had presented their work in September, emphasizing the difficulty of designing a system that accurately reflects the public’s will. The presentations highlighted the importance of ensuring diverse representation and of preventing AI systems from benefiting some communities more than others.
Andrew Konya, founder of Remesh, explored the potential of AI-powered mass-scale direct democracy. His team received a grant to test whether a GPT-4-powered version of Remesh could gather public input, distill it into a policy document, and refine it through human and public consultation.
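As a rough illustration of the gather-distill-refine loop Konya’s team tested, the sketch below uses the OpenAI Python SDK (v1+) to condense a set of responses into a draft policy and then revise it against a round of feedback. The prompts, function names, and placeholder inputs are assumptions for illustration, not Remesh’s actual pipeline.

```python
# Illustrative sketch of an LLM-assisted consultation loop: collect public
# input, have a model draft a policy summary, then refine it against feedback.
# The prompts and structure are assumptions, not Remesh's real implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def draft_policy(responses: list[str]) -> str:
    """Distill raw public input into a short draft policy document."""
    prompt = (
        "Summarize the following public responses into a short draft policy "
        "on AI behavior, noting points of consensus and disagreement:\n\n"
        + "\n".join(f"- {r}" for r in responses)
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


def refine_policy(draft: str, feedback: list[str]) -> str:
    """Revise the draft in light of a new round of public feedback."""
    prompt = (
        f"Revise this draft policy:\n\n{draft}\n\n"
        "to address the following feedback, keeping areas of consensus intact:\n\n"
        + "\n".join(f"- {f}" for f in feedback)
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


# One round of the gather -> distill -> refine cycle with placeholder input.
initial_responses = [
    "AI should refuse to impersonate real people.",
    "Models should explain refusals in plain language.",
]
policy = draft_policy(initial_responses)
policy = refine_policy(policy, ["The refusal rule needs an exception for satire."])
print(policy)
```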
While acknowledging the limitations and risks of AI systems, Konya argued that the mere act of seeking consensus is itself a powerful step. Still, questions lingered over whether consulting the public should produce binding decisions, and over the broader implications for AI governance.
OpenAI’s “Democratic Inputs to AI” program signals a willingness to involve the public in shaping the behavior of powerful AI systems, yet whether those inputs will be advisory or binding remains unanswered. The company’s newly established “collective alignment” team aims to collect democratic public input on AI behavior, though how much influence that input will ultimately have on the values its models encode remains to be seen.
The effort to democratize AI governance is underway, with OpenAI among those testing how it might work in practice. As competition for AI dominance intensifies among tech giants, the public’s role in influencing AI behavior is emerging as a critical part of responsible and inclusive AI development.
By Impact Lab