Regulation (EU) 2024/1689 Deep Dive
Subject Matter & Scope
Article 2: Scope (The Brussels Effect)
The Act applies to providers that place AI systems on the EU market or put them into service in the Union, regardless of where they are established. It also applies to providers and deployers located in third countries where the output produced by the system is used in the Union.
Article 3: Definitions
AI System: A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Prohibited AI Practices
Practices carrying Unacceptable Risk are banned outright:
Subliminal Manipulation
Subliminal or purposefully manipulative techniques that materially distort behavior and impair informed decision-making, causing significant harm.
Social Scoring
Evaluating or classifying persons based on social behavior or personal characteristics, leading to detrimental treatment in unrelated contexts.
Real-time Remote Biometric ID
Use in publicly accessible spaces for law enforcement purposes (narrow exceptions apply).
Emotion Recognition
Banned in the workplace and in education institutions (except for medical or safety reasons).
Predictive Policing
Assessing the risk of a person committing a criminal offence based solely on profiling or personality traits.
Untargeted Scraping
Building or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
High-Risk AI Systems
Classification (Art 6)
1. Safety Components: AI used as a safety component of products covered by EU product legislation (toys, cars, medical devices).
2. Annex III Systems: Stand-alone systems in listed areas such as biometrics, critical infrastructure, education, employment, and law enforcement (see the sketch after this list).
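The two-branch test in Article 6 lends itself to a simple first-pass screening helper. The sketch below is a hypothetical illustration, not an official tool: the `AISystemProfile` fields and the area names are assumptions made for the example, and the Article 6(3) derogations are deliberately ignored.

```python
from dataclasses import dataclass

# Illustrative subset of the Annex III areas (names chosen for this example).
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "law_enforcement",
}

@dataclass
class AISystemProfile:
    """Hypothetical description of an AI system for a first-pass screening."""
    is_safety_component: bool        # Branch 1: safety component of a regulated product
    regulated_product: str | None    # e.g. "toys", "cars", "medical_devices"
    annex_iii_area: str | None       # Branch 2: e.g. "employment"

def is_high_risk(profile: AISystemProfile) -> bool:
    """Apply the two Article 6 classification branches.

    Branch 1: the system is a safety component of a product covered by
    EU harmonisation legislation. Branch 2: the system falls under one
    of the Annex III areas.
    """
    if profile.is_safety_component and profile.regulated_product is not None:
        return True
    return profile.annex_iii_area in ANNEX_III_AREAS

# Example: a CV-screening tool used in recruitment falls under Annex III (employment).
cv_screener = AISystemProfile(
    is_safety_component=False,
    regulated_product=None,
    annex_iii_area="employment",
)
print(is_high_risk(cv_screener))  # True
```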
Core Obligations (Art 8-15)
High-risk systems must meet requirements covering:
- Risk management system (Art 9)
- Data and data governance (Art 10)
- Technical documentation (Art 11)
- Record-keeping through automatic logging (Art 12), sketched below
- Transparency and information to deployers (Art 13)
- Human oversight (Art 14)
- Accuracy, robustness, and cybersecurity (Art 15)
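Article 12's record-keeping duty requires high-risk systems to automatically log events over their lifetime. The snippet below is a minimal, hypothetical sketch of such event logging using Python's standard logging module; the event fields are assumptions for illustration, not a schema prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured logger for Article 12-style automatic event recording.
logger = logging.getLogger("ai_system.audit")
handler = logging.FileHandler("decision_log.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(input_ref: str, output: str, model_version: str) -> None:
    """Append one decision event as a JSON line (fields are illustrative)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_ref": input_ref,   # reference to the input data, not the data itself
        "output": output,
        "model_version": model_version,
    }
    logger.info(json.dumps(event))

log_decision(input_ref="application-4711", output="shortlisted", model_version="2024.06")
```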
General Purpose AI Models (GPAI)
Chapter V regulates general-purpose (foundation) models such as GPT-4, Gemini, and Claude. It distinguishes standard GPAI models from those posing Systemic Risk, which is presumed where training compute exceeds 10^25 FLOPs. Both sets of duties are sketched in the data-structure example after the lists below.
All GPAI Models
- Maintain technical documentation
- Comply with EU copyright law
- Publish a summary of training content
Systemic Risk Models
- Perform model evaluations (red teaming)
- Assess and mitigate systemic risks
- Report serious incidents to the AI Office
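As a thought experiment, the documentation and training-content-summary duties can be tied together in a small data structure. Everything below (class names, fields, example values) is a hypothetical sketch, not a format prescribed by the Act or by any AI Office template.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingContentSummary:
    """Hypothetical fields for a public summary of training content."""
    data_sources: list[str] = field(default_factory=list)  # e.g. web crawls, licensed corpora
    copyright_policy_url: str = ""                          # how EU copyright opt-outs are honoured
    collection_period: str = ""                             # e.g. "2019-2024"

@dataclass
class GPAIComplianceRecord:
    """Hypothetical checklist tying the Chapter V obligations together."""
    model_name: str
    technical_documentation_ref: str                        # pointer to internal technical docs
    training_summary: TrainingContentSummary
    systemic_risk: bool = False                             # triggers the extra obligations below
    red_team_reports: list[str] = field(default_factory=list)
    incident_reports: list[str] = field(default_factory=list)

record = GPAIComplianceRecord(
    model_name="example-foundation-model",
    technical_documentation_ref="docs/model_card_v3.pdf",
    training_summary=TrainingContentSummary(
        data_sources=["public web crawl", "licensed news archive"],
        copyright_policy_url="https://example.com/copyright-optout",
        collection_period="2019-2024",
    ),
)
print(record.systemic_risk)  # False: standard GPAI obligations only
```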
Penalties
Each tier is capped at the higher of a fixed amount and a percentage of total worldwide annual turnover (Art 99); a worked example follows below.
Prohibited Practices (Art 5)
Up to €35 million or 7% of worldwide annual turnover, whichever is higher.
High-Risk AI Obligations
Up to €15 million or 3% of worldwide annual turnover, whichever is higher.
Incorrect Information
Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of worldwide annual turnover, whichever is higher.
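The cap rule is simple arithmetic: take the higher of the fixed amount and the turnover percentage. The sketch below just applies that max() rule; the tier names and the function are assumptions made for this example, while the figures mirror the tiers above.

```python
# Maximum administrative fine per tier: the higher of a fixed cap and a share
# of total worldwide annual turnover (Article 99).
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # €35M or 7%
    "high_risk_obligations": (15_000_000, 0.03),  # €15M or 3%
    "incorrect_information": (7_500_000, 0.01),   # €7.5M or 1%
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the applicable maximum fine: whichever of the two caps is higher."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Example: a provider with €2bn worldwide turnover breaching Article 5.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0 (7% exceeds €35M)
```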