MEOK.AI


EU AI Act Article 5 — Prohibited AI Practices

IN FORCE since 2 Feb 2025 — €35M / 7% turnover penalty

Article 5 is the EU AI Act's hard floor. Eight categories of AI practice are banned outright, regardless of whether you are a provider or a deployer. Its penalty ceiling is the highest in the regulation, and it is already fully applicable: there is no Digital Omnibus delay for this article.

The 8 prohibitions (Article 5(1)(a)-(h))

  1. (a) Subliminal / manipulative techniques — that distort behaviour materially impairing informed decision-making, causing significant harm.
  2. (b) Exploitation of vulnerabilities — due to age, disability, or specific social/economic situation, with material distortion + significant harm.
  3. (c) Social scoring — by public OR private actors, evaluating/classifying based on social behaviour OR personal characteristics, leading to detrimental treatment that is disproportionate or unrelated to the context in which the data was generated.
  4. (d) Predictive policing solely from profiling — assessing risk of criminal offence based solely on profiling or personality traits/characteristics. (Carve-out for analytical tools supporting human assessment based on objective + verifiable facts directly linked to criminal activity.)
  5. (e) Untargeted facial scraping — to build facial-recognition databases from internet or CCTV imagery.
  6. (f) Emotion recognition in workplace + education — except for medical or safety purposes.
  7. (g) Biometric categorisation inferring sensitive attributes — race, political opinions, trade union membership, religious/philosophical beliefs, sex life, sexual orientation. (Carve-out for lawful labelling/filtering of lawfully acquired biometric datasets in law enforcement.)
  8. (h) Real-time remote biometric identification — in publicly accessible spaces for law enforcement, except for the narrow exceptions in 5(2)-(7) (search for victims, prevention of imminent threat to life or substantial physical safety, identification of suspects of certain serious offences) — and only with prior judicial/administrative authorisation.
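For teams building an automated pre-screen (as the tooling further down does), the eight categories can be encoded as a simple checklist. A minimal illustrative sketch in Python; the enum names and the `screen` helper are invented for this example, not part of any official schema or MEOK API:

```python
from enum import Enum

class Art5Prohibition(Enum):
    """The eight prohibited practices of Article 5(1)(a)-(h)."""
    SUBLIMINAL_MANIPULATION = "a"
    VULNERABILITY_EXPLOITATION = "b"
    SOCIAL_SCORING = "c"
    PREDICTIVE_POLICING_PROFILING = "d"
    UNTARGETED_FACIAL_SCRAPING = "e"
    EMOTION_RECOGNITION_WORK_EDU = "f"
    BIOMETRIC_CATEGORISATION_SENSITIVE = "g"
    REALTIME_REMOTE_BIOMETRIC_ID = "h"

def screen(flags: set[Art5Prohibition]) -> list[str]:
    """Turn flagged prohibitions into human-readable hits."""
    return [
        f"Article 5(1)({p.value}): {p.name}"
        for p in sorted(flags, key=lambda p: p.value)
    ]

# A hiring tool with an emotion-recognition layer trips 5(1)(f):
hits = screen({Art5Prohibition.EMOTION_RECOGNITION_WORK_EDU})
print(hits)  # ['Article 5(1)(f): EMOTION_RECOGNITION_WORK_EDU']
```

Any non-empty result means the system description needs legal review before deployment, not just documentation.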

What this means in practice

  • Recruitment AI — emotion-recognition layers in interview-scoring AI are now banned outside medical/safety context. Sentiment analysis of free-text answers is borderline; document carefully.
  • Workforce monitoring — keystroke + screen analytics that infer emotion are banned. Pure productivity metrics are fine.
  • EdTech — emotion recognition in classroom AI is banned. Engagement detection through camera proxies is risky.
  • Marketing AI — manipulative dark-pattern targeting of vulnerable groups (debt-stressed, addictively-vulnerable) is banned.
  • Insurance AI — pricing models that score based on inferred sensitive attributes (Article 5(1)(g)) are banned.
  • Law enforcement AI — pure-profile predictive policing is banned. Facial scraping for databases is banned.

How MEOK helps you stay clear

  • /scorecard (free) — flags Article 5 risk in 90 seconds. Generates signed attestation.
  • meok-eu-ai-act-mcp — runs Article 5 prohibition check against your system description in your agent stack.
  • /audit-prep-bundle (£4,950) — Article 5 + Articles 9-15 + Article 27 FRIA in a 14-day signed evidence pack.
  • /consulting (£950/day) — borderline-case review (workplace AI, marketing personalisation, biometric tools).

Frequently asked

When did Article 5 become binding?

Article 5 prohibitions have been fully applicable since 2 February 2025 with no grace period. They were always Phase 1 of the EU AI Act timeline. The Digital Omnibus did NOT delay Article 5 — only the Annex III high-risk obligations and Annex I product-safety obligations got pushed.

What are the 8 prohibited categories?

Article 5(1)(a)-(h) bans: (a) subliminal / manipulative techniques causing significant harm, (b) exploitation of vulnerabilities (age, disability, social/economic situation), (c) social scoring by public + private actors that leads to detrimental treatment, (d) predictive policing based solely on profiling natural persons, (e) untargeted scraping of facial images from internet/CCTV to build facial-recognition databases, (f) emotion recognition in workplace + education (with narrow medical/safety exceptions), (g) biometric categorisation inferring sensitive attributes (race, political views, sexuality, religion), (h) real-time remote biometric identification in publicly accessible spaces for law enforcement (with the narrow law-enforcement exceptions of Article 5(2)-(7) and prior judicial or administrative authorisation).

What's the penalty?

Article 99(3) sets the maximum fine for Article 5 violations at €35M or 7% of total worldwide annual turnover (whichever is higher). This is the highest band in the regulation. Sanctions are imposed by national market surveillance authorities; the EU AI Office enforces the rules applying to providers of general-purpose AI models.
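The "whichever is higher" rule in Article 99(3) is a simple max: the €35M floor governs up to €500M turnover, above which the 7% figure takes over. A sketch with hypothetical turnover figures:

```python
def article5_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Article 99(3) ceiling: EUR 35M or 7% of total worldwide
    annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a firm with EUR 1bn turnover, 7% (EUR 70M) exceeds the EUR 35M floor:
print(article5_max_fine(1_000_000_000))  # 70000000.0
# Below EUR 500M turnover, the absolute EUR 35M figure is the ceiling:
print(article5_max_fine(200_000_000))    # 35000000.0
```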

Who enforces?

Each member state designates a national supervisory authority. France: CNIL + Arcom + ANSSI sharing competence. Germany: BNetzA + BSI + Datenschutzbehörden. Italy: ACN + Garante. The EU AI Office (DG CONNECT) coordinates cross-border + GPAI matters.

What about emotion recognition for safety?

Article 5(1)(f) bans emotion recognition in workplace + education, with an explicit carve-out 'where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.' Driver-fatigue detection in commercial vehicles is the canonical safety case. Plant-floor anomaly detection that incidentally observes worker emotion is borderline: document the intended purpose carefully and prefer non-emotion-based proxies.
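The intended-purpose test above can be framed as a small decision rule. This is an illustrative sketch only, with invented field names; genuinely borderline cases like the plant-floor example still need legal review:

```python
def emotion_recognition_permitted(context: str, intended_purpose: str) -> bool:
    """Article 5(1)(f) sketch: emotion recognition is banned in workplace
    and education contexts unless the system's *intended* purpose is
    medical or safety-related. Incidental observation does not qualify
    as an intended purpose."""
    if context not in {"workplace", "education"}:
        return True  # 5(1)(f) only covers these two contexts
    return intended_purpose in {"medical", "safety"}

print(emotion_recognition_permitted("workplace", "safety"))      # True (driver-fatigue case)
print(emotion_recognition_permitted("workplace", "engagement"))  # False
```

Note the rule keys on the declared intended purpose, which is exactly why the documentation of that purpose matters.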

Run free Article 5 check → · £4,950 Audit-Prep Bundle →

Source: EU AI Act Regulation 2024/1689 Art. 5 · MEOK AI Labs · CSOAI LTD · UK Companies House 16939677