EU AI Act for Legal Tech
Annex III(8) for judicial AI. Article 14 oversight is the hard part.
Judicial AI (sentencing aids, judgment prediction, court research) is Annex III(8) high-risk. Article 14 oversight requires that the human reviewer can meaningfully override and intervene. Confidentiality, legal professional privilege, and GDPR Article 22 all stack on top.
The legal tech compliance stack
- EU AI Act Annex III(8) — judicial AI high-risk classification.
- Article 14 — effective human oversight, automation-bias awareness training, override capability.
- Article 26(9) FRIA — mandatory for public-sector deployers (courts, ministries of justice, ADR providers).
- GDPR Article 22 — solely-automated-decision protection.
- Legal professional privilege — preserved end-to-end; AI must not break confidentiality.
- Bar association rules — varies by member state. Some require disclosure to clients of AI use in legal advice.
- Charter of Fundamental Rights Articles 6 + 47 — liberty, effective remedy, and fair trial — these bite particularly hard in sentencing-aid contexts.
Frequently asked
Is legal tech AI high-risk?
Annex III(8) covers AI 'intended to be used by a judicial authority or on its behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or used in a similar way in alternative dispute resolution.' Sentencing aids, judgment-prediction tools, and AI-assisted case research deployed in court contexts are explicitly captured. Purely law-firm-internal research tools are NOT high-risk unless they make decisions affecting fundamental rights.
What about contract analysis AI?
Generally NOT Annex III high-risk for B2B contract analysis. BUT: if used to analyse contracts that affect natural persons (employment, consumer credit, insurance), Article 26(9) FRIA may apply downstream when the analysis feeds decisions. And GDPR + confidentiality + legal-professional privilege still apply throughout. Article 13 transparency to deployers is mandatory regardless of risk tier.
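One way to make the Article 13 transparency obligation operational is to ship deployers a machine-readable disclosure record alongside the instructions for use. The sketch below is illustrative only: the field names and vendor are hypothetical assumptions, not a format mandated by the Act.

```python
# Hypothetical Article 13 disclosure record a provider might ship to deployers.
# Field names are illustrative assumptions, not mandated by the AI Act.
ARTICLE_13_DISCLOSURE = {
    "provider": "ExampleVendor Ltd",  # hypothetical provider name
    "intended_purpose": "B2B contract clause extraction and risk flagging",
    "capabilities": ["clause classification", "obligation extraction"],
    "known_limitations": [
        "not trained on consumer-credit agreements",
        "accuracy degrades on scanned documents below 300 dpi",
    ],
    "human_oversight_measures": (
        "outputs are suggestions only; reviewer sign-off is required"
    ),
    "downstream_note": (
        "if outputs feed decisions about natural persons, the deployer "
        "may need an Article 26(9) FRIA"
    ),
}

def missing_fields(disclosure: dict) -> list[str]:
    """Flag any disclosure fields that are absent or empty."""
    required = ["provider", "intended_purpose", "capabilities",
                "known_limitations", "human_oversight_measures"]
    return [k for k in required if not disclosure.get(k)]
```

A deployer-side check like `missing_fields()` lets procurement teams reject incomplete disclosures before a tool enters the workflow.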
What's special about Article 14 oversight?
Article 14 requires high-risk AI systems to be designed and developed so they can be effectively overseen by natural persons during use. For legal tech that means the human reviewer must be able to:
- fully understand the system's capabilities and limitations,
- remain aware of automation bias,
- correctly interpret outputs in context,
- decide not to use the AI in a particular case,
- override or reverse outputs,
- intervene in or interrupt operation.
These must be real UI affordances, not just policy.
How does GDPR Article 22 stack?
Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects on natural persons, with narrow exceptions (consent, contract necessity, EU/MS law authorisation). For most legal-tech use cases, the practical effect is that the human-in-the-loop must be meaningful, not just a rubber-stamp. Legal-tech vendors should show that their UI actively facilitates substantive human review.
What does MEOK ship for legal tech?
Article 14 oversight templates + decision-trace logging at /transparency (£399/mo). Article 13 transparency-to-deployer disclosure templates. /audit-prep-bundle (£4,950) for full Annex IV + FRIA where needed. Bias detection (£299/mo) for any tool ranking judgments or precedents.
Win the procurement, don't lose it to the AI Act
EU justice ministries + bar associations are putting Article 14 + Article 26(9) into RFPs. Ship signed evidence and you get past the gate.
MEOK AI Labs · CSOAI LTD · UK Companies House 16939677