
Europe's Initiative to Rein In AI Models Such as ChatGPT

Ahead of the stricter rules of the EU AI Act taking effect, a voluntary code of conduct aims to help companies such as OpenAI and Google achieve compliance.


The European Union has introduced a voluntary code of conduct for AI providers, designed to help companies comply with the upcoming obligations of the EU AI Act. The non-binding code offers practical guidance that goes beyond the mandatory requirements, helping AI developers and providers prepare for the Act's rules.

The focus areas of the code include transparency, copyright, safety, and user protection. These focus areas align with the AI Act's goal to build trustworthy AI that respects fundamental rights and safety. The code is part of the EU's comprehensive approach to AI governance, which includes binding legislation (the AI Act), liability directives, safety reviews, and voluntary standards.

Adoption of the code can help businesses navigate the complex legal landscape, providing clarity and fostering trust with users and regulators. Providers following the code may benefit from easier market acceptance and potentially smoother regulatory oversight, as it signals proactive alignment with EU standards. The code acts as a supplement to existing regulations, pushing for higher standards of transparency and accountability that might go beyond minimum legal requirements.

The code is aimed at providers of general-purpose AI systems, including powerful models with potential risks, such as OpenAI's GPT-4 or Google's Gemini. For existing systems like these, the new rules will apply from next year. Providers will receive practical guidelines on implementing EU copyright provisions, including respecting websites that opt out of automated content scraping and setting up contact points for rights holders.
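The code of conduct does not prescribe a specific technical mechanism for honoring scraping opt-outs. As a hedged illustration only, one common machine-readable signal that crawlers can check is a site's robots.txt file; the sketch below shows that check in Python. The bot name `ExampleAIBot` and the helper `may_fetch` are hypothetical and are not taken from the code itself.

```python
# Minimal sketch: check a site's robots.txt before fetching a page.
# This is one common opt-out signal, not the mechanism mandated by the
# EU code of conduct; names below are illustrative assumptions.
from urllib import robotparser
from urllib.parse import urlsplit

def may_fetch(url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True if the site's robots.txt permits fetching the given URL."""
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse robots.txt
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    # example.com is a placeholder domain.
    print(may_fetch("https://example.com/some-article"))
```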

Enhanced requirements will apply to particularly powerful AI models with potential risks, such as enabling the development of new chemical or biological weapons or a loss of control over the technology. Providers who voluntarily sign the code can document their "good intentions" and benefit from a reduced administrative burden and greater legal certainty.

The code provides a framework for providers to better prepare for their future obligations under the new European regulatory framework. It includes a form that helps providers record technical details so they are more easily accessible to supervisory authorities and downstream AI developers. The EU Commission sees the code as an important tool to help companies transition to the new rules, even though it is voluntary.
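To make "technical details" more concrete, the sketch below shows the kind of record such a documentation form might capture. The field names and values are hypothetical illustrations and are not reproduced from the actual form in the code of conduct.

```python
# Hypothetical illustration of a provider's technical documentation record;
# the real form in the code of conduct defines its own fields.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    provider: str
    model_name: str
    release_date: str
    intended_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    copyright_contact_point: str = ""

doc = ModelDocumentation(
    provider="Example AI Ltd.",
    model_name="example-model-1",
    release_date="2026-01-15",
    intended_uses=["text generation", "summarisation"],
    training_data_summary="Publicly available web text and licensed corpora",
    copyright_contact_point="rights@example.com",
)
print(doc)
```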

Providers who do not adopt the code will have to develop their own approach to demonstrate their legal compliance, which may involve greater effort. However, the code offers a practical and streamlined approach to meeting the AI Act's requirements, making it an attractive option for many AI providers. The code includes chapters on transparency, copyright, and system risks, offering a comprehensive guide for AI providers aiming to ensure responsible AI deployment in Europe.

In short, the voluntary code of conduct targets providers of general-purpose AI systems such as OpenAI's GPT-4 and Google's Gemini, offering practical guidelines on implementing EU copyright provisions and advocating higher standards of transparency and accountability in line with the EU AI Act.
