European Struggle to Strike a Balance with AI Development

Artificial Intelligence's Tightrope Walk in Europe

The European Union (EU) has taken a significant step towards regulating artificial intelligence (AI) with the EU AI Act. This comprehensive framework, whose first provisions took effect on February 2, 2025, governs AI systems across sectors including healthcare, education, and politics, and covers applications such as deepfakes.

The Act is being implemented in phases. The initial phase, which commenced in February 2025, banned AI systems that pose unacceptable risks. The next phase, scheduled for August 2, 2025, will introduce specific rules for general-purpose AI models, including transparency, documentation, and risk management requirements.

The EU AI Act employs a risk-based approach, categorizing AI systems into unacceptable, high-risk, limited-risk, and minimal-risk categories. Prohibited AI systems include those using manipulative subliminal techniques, capturing real-time biometric data in public spaces, and engaging in social scoring. These are banned outright as they undermine human dignity or democratic principles.

High-risk AI systems, often used in critical infrastructure, healthcare, and financial applications, must comply with strict requirements such as detailed risk management, robust data governance, human oversight, and rigorous testing.

Starting August 2, 2025, general-purpose AI (GPAI) models will need to adhere to transparency requirements and provide technical documentation. GPAI models deemed to pose systemic risk will face additional obligations such as model evaluations and incident reporting.

In the healthcare sector, high-risk AI systems, such as those used for diagnosis or treatment, will be subject to strict regulations to ensure safety and reliability. Healthcare providers must ensure these systems are transparently documented and undergo rigorous testing to maintain regulatory compliance.

While specific regulations for education are not extensively detailed, AI systems used in this sector must comply with the broader risk-based framework of the Act. Systems with minimal impact would likely face fewer obligations.

The Act does not specifically address AI use in politics, but it targets manipulative AI systems that could influence human behavior or undermine democratic processes, which may indirectly apply to political contexts.

The Act addresses deepfakes primarily through transparency obligations: AI-generated or manipulated images, audio, and video must generally be disclosed as such. Beyond labelling, a system used to create deepfakes would be categorized based on its potential risk to individual rights and public safety, and could be classed as high-risk or even prohibited if it manipulates or deceives in ways that pose unacceptable risks.

Emotion recognition technologies are permitted for migration control purposes despite being banned in general contexts, leading to concerns about surveillance abuses against marginalized communities and people on the move.

The AI Act's numerous exemptions, particularly concerning transparency and oversight for high-risk AI systems used by law enforcement and migration authorities, risk undermining the Act's objectives.

The AI Act's final form adopts a "risk-based approach" to products or services. Real-time biometric facial recognition is prohibited in public areas, but exceptions are made for law enforcement in scenarios involving severe crimes like terrorism or the search for missing persons, contingent upon judicial approval.

The European Parliament approved the AI Act, the world's first comprehensive regulation of artificial intelligence, in March 2024. The final text makes major concessions to both corporate interests and law enforcement authorities, and European policymaking has been criticized for prioritizing the interests of national governments, law enforcement agencies, and Big Tech lobbies over the public interest and human rights.

Full application of the AI Act is expected by mid-2026, with immediate actions focusing on the review and potential prohibition of certain AI applications, particularly biometric identification. Ahead of the 2024 EU elections, concerns were raised that a major shift to the right and growing opposition to restrictions and oversight of AI could delay the Act further.

France has voiced concern that the AI Act could undermine European digital sovereignty by pushing reliance towards non-European AI solutions. The Act targets the general-purpose "models" that power AI tools like ChatGPT or Google's Bard, requiring developers to maintain detailed technical documentation and comply with transparency regulations. High-risk applications, such as those in medical devices and critical infrastructure like water or electricity, as well as in education and employment, must meet rigorous requirements.

In conclusion, the EU AI Act represents a significant step towards regulating AI systems across various sectors. While it faces challenges in implementation and criticism for potential loopholes, it aims to ensure the safety, reliability, and ethical use of AI systems in the EU.

As the world's first comprehensive AI regulation, the EU AI Act spans sectors including politics, healthcare, and education, and covers applications such as deepfakes. General-purpose AI models must meet its transparency and technical-documentation requirements starting August 2, 2025, and manipulative systems that could influence human behavior or undermine democratic processes are banned outright.
