The European Commission has taken a significant step in regulating artificial intelligence by appointing 13 experts from the EU, US, and Canada to draft a Code of Practice for General Purpose AI (GPAI) systems, including popular models like ChatGPT. This initiative is connected to the recently passed AI Act, which sets strict rules for AI technology and will be enforced starting in August 2025.
The group of professionals includes Nuria Oliver (Spain), who holds a PhD in AI from MIT. Alexander Peukert (Germany) is a Professor of Civil, Commercial, and Information Law at Goethe University Frankfurt. Rishi Bommasani (US) is the Society Lead at the Stanford Center for Research on Foundation Models, focusing on the societal impact of general-purpose AI models. Céline Castets-Renard (France) is an expert in AI law, digital copyright law, and technology regulation.
Matthias Samwald (Austria) is an Associate Professor at the Medical University of Vienna specializing in AI for scientific research and medicine. Marta Ziosi (Italy) is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative. Alexander Zacherl (Germany) previously worked at DeepMind on AI safety and simulations. Yoshua Bengio (Canada) is a pioneer in deep learning.
Daniel Privitera (Italy and Germany) is an expert in AI policy and safety. Nitarshan Rajkumar (Canada) is a PhD candidate at the University of Cambridge. Marietje Schaake (Netherlands) is a Fellow at Stanford’s Cyber Policy Center. Markus Anderljung (Sweden) is the Director of Policy and Research at the Centre for the Governance of AI. Anka Reuel (Germany) is a PhD candidate at Stanford University specializing in technical AI governance and trustworthy AI research.
The main goal of the Code of Practice is to help AI developers comply with the AI Act. It will focus on key areas like transparency, risk assessment, and governance.
The Commission has stated that the drafting process will be collaborative, involving meetings with various stakeholders such as industry leaders, academics, and civil society groups. This inclusive approach is intended to produce a more comprehensive Code by incorporating different viewpoints.
The drafting will be divided into four workstreams, each concentrating on a specific aspect of the Code. The experts aim to complete the Code by April 2025, leaving time for revisions based on feedback before the AI Act's GPAI obligations take effect.
Once completed, the Code of Practice will guide providers of AI models in complying with the AI Act and promoting the ethical use of AI technology. This matters as AI becomes increasingly integrated into daily life, shaping both personal decisions and broader societal outcomes.