Skip to content

European AI Safeguards and Confidence-Building Framework

Compulsory Safety Examination by Autonomous Experts

European AI's foundation for security and reputable relations
European AI's foundation for security and reputable relations

European AI Safeguards and Confidence-Building Framework

The European Quality Infrastructure (EQI) is set to play a pivotal role in ensuring the safety and reliability of artificial intelligence (AI) products and services. This comprehensive system, which already guarantees the safety of various products and services, can be adapted to address the unique challenges posed by AI.

The EQI's core principles, including standardization, metrology, accreditation, conformity assessment, and market surveillance, can be applied to AI foundation models. These models can be subjected to common technical and ethical standards that define safety, reliability, transparency, and fairness requirements.

Risk management will be integral to AI development and deployment, with ongoing identification, assessment, and control of risks. This means systematically evaluating how foundation models might cause harm or behave unpredictably, and implementing measures to mitigate such risks early on.

Providers and deployers of foundation AI models can undergo independent auditing and certification processes to verify compliance with regulatory standards and ethical norms. This process provides trust and ensures accountability for AI safety before the models enter the market or are widely deployed.

When issues arise in AI model behaviour, root cause analysis, corrective measures, and preventive controls can ensure continuous improvement of AI safety protocols. This mirrors the CAPA process used in medical device quality management.

Post-market surveillance will involve continuous monitoring of AI models after deployment, collecting real-world performance data, detecting emerging risks, and implementing updates or recalls. This process keeps AI systems responsive to safety concerns that evolve from actual use.

Applying these EQI methods to foundation AI models involves a holistic quality infrastructure approach, setting European-wide standards, accrediting conformity, enforcing risk management, and establishing ongoing surveillance and correction mechanisms. This would bring structure, transparency, and trust similar to traditional high-risk product safety systems into the AI domain, fostering both innovation and public confidence in foundational AI technologies.

Product testing, including independent examinations of products, is also essential for AI. Periodical inspections of AI products on the market are becoming essential due to the potential for foundation models to develop new capabilities or deficiencies post-deployment. Making third-party testing compulsory for AI products would ensure impartial product testing and motivate AI companies to fund risk assessment measurement units.

Three approaches to testing and certification are commonly used: certification of quality management systems, product testing, and adversarial testing (also known as 'red-teaming'). Adversarial testing actively exploits product vulnerabilities to evaluate their safety, like crash tests in the automotive sector and penetration testing in cybersecurity. This approach can help uncover potentially dangerous features in AI models and identify how malevolent actors could misuse them.

The European Quality Infrastructure fosters competitiveness among businesses, particularly in the automotive, industrial technology, and manufacturing sectors, by enhancing consumer trust through independent conformity assessments. Mandatory third-party conformity assessment services in the EU include testing, inspection, and certification (TIC) activities.

Periodical inspections ensure safety and proper functioning after commercial distribution, particularly relevant for commodities such as cars and industrial installations. Audits or evaluations in the AI realm assess factors like data quality, model robustness, accuracy, and bias.

Mandating independent scrutiny would allow European ecosystems of auditors to adapt their services to the AI sector, utilizing the competitive advantage of established safety cultures. This would create a level playing field for small AI companies that may lack expertise, contributing to a fair allocation of costs for building an assessment ecosystem.

The EU AI Act, the White House Executive Order, the G7 Hiroshima Process, and the Bletchley Declaration have made commitments to some external scrutiny and testing for the most advanced AI products. However, these are not yet mandatory or consistently applied. The adaptation of the European Quality Infrastructure to AI could fill this gap, ensuring a safer and more reliable AI landscape for all.

  1. The European Quality Infrastructure (EQI) could expand its scope to include data-and-cloud-computing and cybersecurity, applying its principles to foundation AI models, thus ensuring the safety and reliability of AI within these domains.
  2. Policymakers and regulators can leverage the EQI's existing processes, such as standardization, accreditation, and market surveillance, to establish policy-and-legislation that governs technology in the AI sector, promoting transparency, fairness, and accountability.
  3. In line with the growth of AI in general-news and politics, the European ecosystem of auditors can adapt their certified conformity assessment services to AI, creating a level playing field for small AI companies and ensuring a safe and reliable AI landscape for all.

Read also:

    Latest