EU AI Act · Reg. 2024/1689

EU AI Act compliance.
Articles 5–56, in one brick.

Modular EU AI Act software covering 24 operator obligations across prohibited practices, high-risk AI requirements, provider obligations, deployer obligations, FRIA, EU database registration, transparency for chatbots and deepfakes, and General-Purpose AI model obligations.

24 controls. Four milestones. All Articles you'll act on.

No promises — every control here is verified live in production. Same depth as the ISO 27001 brick: 5-level maturity rubrics, evidence templates, gap detection, and coaching tips per control.

Scope & Prohibited Practices (Art. 5–7) — 3 controls

Screen every AI system against the eight Article 5 prohibited categories. Classify high-risk per Annex I (product safety component) or Annex III (use case) with Article 6(3) derogation analysis. Track Annex III amendments under Article 7.
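
As a minimal sketch of what that screen looks like in code (the category labels paraphrase the Act; the capability-flag intake is a hypothetical form, not BrickGRC's actual schema):

```python
# Illustrative sketch of the Article 5 screen. Category labels paraphrase
# the Act; the capability-flag intake is hypothetical, not BrickGRC's schema.
ARTICLE_5_PROHIBITED = {
    "subliminal_manipulation": "Subliminal or purposefully manipulative techniques",
    "vulnerability_exploitation": "Exploitation of vulnerabilities",
    "social_scoring": "Social scoring",
    "crime_risk_profiling": "Crime-risk prediction based solely on profiling",
    "face_scraping": "Untargeted facial-image scraping for face databases",
    "emotion_recognition_work_edu": "Emotion recognition in workplace or education",
    "biometric_categorisation": "Biometric categorisation for protected characteristics",
    "realtime_rbi_public": "Real-time remote biometric identification in public spaces",
}

def screen_article_5(capability_flags: dict[str, bool]) -> list[str]:
    """Return the prohibited categories a system's declared capabilities hit."""
    return [label for key, label in ARTICLE_5_PROHIBITED.items()
            if capability_flags.get(key)]
```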

High-Risk AI Requirements (Art. 8–15) — 8 controls

Per high-risk AI system: risk management (Art. 9), data governance (Art. 10), Annex IV technical documentation (Art. 11), event logs (Art. 12), instructions for use to deployers (Art. 13), human oversight (Art. 14), accuracy / robustness / cybersecurity (Art. 15).
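
A per-system checklist over those requirement areas might look like this (a sketch with our own names, not BrickGRC's schema):

```python
from dataclasses import dataclass, field

# Sketch only: one record per high-risk AI system, tracking the Article
# 9-15 requirement areas as named controls. Names are ours, not BrickGRC's.
HIGH_RISK_CONTROLS = {
    "art_9_risk_management": "Risk management system",
    "art_10_data_governance": "Data and data governance",
    "art_11_technical_docs": "Annex IV technical documentation",
    "art_12_logging": "Event logs / record-keeping",
    "art_13_instructions": "Instructions for use to deployers",
    "art_14_human_oversight": "Human oversight",
    "art_15_robustness": "Accuracy, robustness, cybersecurity",
}

@dataclass
class HighRiskSystem:
    name: str
    evidence: dict[str, bool] = field(
        default_factory=lambda: dict.fromkeys(HIGH_RISK_CONTROLS, False)
    )

    def gaps(self) -> list[str]:
        """Controls with no evidence attached yet."""
        return [HIGH_RISK_CONTROLS[k] for k, done in self.evidence.items() if not done]
```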

Provider Obligations (Art. 16–22) — 7 controls

General provider obligations + CE marking + EU declaration of conformity (Art. 16), quality management system (Art. 17), 10-year documentation retention (Art. 18), 6-month log retention (Art. 19), corrective actions (Art. 20), authority cooperation (Art. 21), authorised representative for non-EU providers (Art. 22).
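
The two retention clocks reduce to simple date arithmetic (a sketch with our own function names; Article 19's six months is a floor, not a ceiling):

```python
from datetime import date, timedelta

# Sketch only: Article 18 keeps technical documentation for 10 years after
# the system is placed on the market; Article 19 keeps automatically
# generated logs for at least six months (longer where other law requires).
def doc_retention_until(placed_on_market: date) -> date:
    # leap-day edge cases ignored in this sketch
    return placed_on_market.replace(year=placed_on_market.year + 10)

def log_retention_floor(log_created: date) -> date:
    return log_created + timedelta(days=183)  # ~six months: a minimum, not a maximum
```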

Deployer / Special / GPAI (Art. 26, 27, 49–56) — 6 controls

Deployer obligations including human oversight + log retention + affected-person notifications (Art. 26), Fundamental Rights Impact Assessment (Art. 27), EU database registration (Art. 49), transparency for chatbots / deepfakes / synthetic content / public-interest text (Art. 50), General-Purpose AI model obligations (Art. 51–53), and systemic-risk GPAI obligations (Art. 55–56).

BYO LLM keys + EU-resident hosting

AI Act readiness narratives, control maturity scoring, and gap detection run on your OpenAI / Anthropic / Azure OpenAI / local model — your provider, your billing, your audit trail. Hosted in the EU (Frankfurt) by default. Per-tenant isolation.
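
Configuration is along these lines (an illustrative shape, not BrickGRC's actual config format; the endpoint is a placeholder):

```python
# Illustrative BYO-key setup, not BrickGRC's actual config format.
# The point: provider, key, and region stay under your control; only the
# readiness-narrative and scoring prompts run through it.
llm_config = {
    "provider": "azure_openai",               # or "openai", "anthropic", "local"
    "api_key_env": "TENANT_LLM_API_KEY",      # key stays in your secret store
    "endpoint": "https://your-resource.example.com",  # hypothetical endpoint
    "data_residency": "eu-central-1",         # Frankfurt by default
}
```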

Stack with ISO 42001 + NIST AI RMF

Most teams need at least two AI governance frameworks — ISO 42001 for the certificate, the EU AI Act for the regulator. BrickGRC tracks all three (ISO 42001, EU AI Act, NIST AI RMF) with a shared AI system inventory; evidence collected for one rolls into the others where controls overlap.
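
Conceptually, that roll-forward is a many-to-many map from evidence artefacts to framework controls (the control IDs here are illustrative, not BrickGRC's real identifiers):

```python
# Conceptual sketch: one evidence artefact satisfies overlapping controls
# in several frameworks. IDs are illustrative, not BrickGRC's real ones.
EVIDENCE_MAP = {
    "ai-risk-assessment-2025.pdf": [
        ("eu_ai_act", "art_9_risk_management"),
        ("iso_42001", "clause_6_risk"),
        ("nist_ai_rmf", "map_function"),
    ],
}

def satisfied_controls(framework: str) -> set[str]:
    """Controls in one framework already covered by collected evidence."""
    return {ctrl for refs in EVIDENCE_MAP.values()
            for fw, ctrl in refs if fw == framework}
```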

What is the EU AI Act?

The EU AI Act — Regulation (EU) 2024/1689 — is the first horizontal regulation of artificial intelligence in any major jurisdiction. Adopted on 13 June 2024 and entering into force on 1 August 2024, it applies to providers, deployers, importers, distributors, and authorised representatives of AI systems placed on the EU market or put into service in the Union, regardless of where the operator is established. Enforcement is staggered.

The Act takes a risk-based approach with four tiers (a code sketch follows the list):

  • Unacceptable risk (Article 5): prohibited outright. Subliminal or manipulative techniques, exploitation of vulnerabilities, social scoring, crime-risk prediction of individuals based solely on profiling, untargeted facial-image scraping for face databases, emotion recognition in workplace and education (with narrow exceptions), biometric categorisation for protected characteristics, and real-time remote biometric identification in public spaces (with law-enforcement exceptions).
  • High risk (Articles 6–49): permitted but heavily regulated. Two routes: AI systems that are safety components of products covered by EU harmonisation legislation listed in Annex I, and AI systems used in the use cases listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum/border, justice, democratic processes). The bulk of the operational obligations live here.
  • Limited risk (Article 50): transparency obligations regardless of high-risk classification. Chatbot disclosure, deepfake disclosure, AI-generated synthetic content marking, AI-generated public-interest text disclosure.
  • Minimal risk: no specific obligations. Most AI systems sit here.
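
Here is the promised sketch: a minimal encoding of the tier ordering. Real classification needs the Annex I / Annex III legal analysis, and Article 50 duties can stack on top of high risk; this just returns the dominant tier.

```python
from enum import Enum

# Minimal encoding of the four-tier ordering described above.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "high risk (Arts. 6-49)"
    LIMITED = "transparency only (Art. 50)"
    MINIMAL = "no specific obligations"

def classify(prohibited: bool, annex_i: bool, annex_iii: bool,
             derogation_6_3: bool, art_50_trigger: bool) -> RiskTier:
    if prohibited:
        return RiskTier.UNACCEPTABLE
    if annex_i or (annex_iii and not derogation_6_3):
        return RiskTier.HIGH
    if art_50_trigger:  # note: Art. 50 duties also apply alongside high risk
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```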

Plus general-purpose AI (GPAI) models: Articles 51–56 establish a separate regime for foundation models and a sub-regime for those with systemic risk (currently models trained with more than 10²⁵ FLOPs of compute, or designated by the Commission).
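
The compute route reduces to a threshold test; the 10²⁵ figure is in the Act, while Commission designation is the other route and cannot be computed locally:

```python
# Sketch of the Article 51(2) compute-threshold route to systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_systemic_risk(cumulative_training_flops: float,
                       commission_designated: bool = False) -> bool:
    return commission_designated or cumulative_training_flops > SYSTEMIC_RISK_FLOPS
```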

BrickGRC's EU AI Act brick maps directly onto the operator-obligation surface. The brick is honest about what it covers — Articles 5–22, 26–27, 49–50, 51–56 — and what it doesn't (notifying authorities, conformity assessment, market surveillance, fines apparatus under Article 99, all of which are regulator-side activities or one-time events). The intent is to drive operational adoption: get providers and deployers to "I have evidence of every required obligation" with as little friction as possible.

If you also pursue ISO 42001, most controls overlap meaningfully — risk management, data governance, technical documentation, record-keeping, human oversight, post-market monitoring. The two bricks are designed to stack.

EU AI Act with BrickGRC — common questions

Which Articles of the EU AI Act does the brick cover?

24 controls across Articles 5–56: scope and prohibited practices (5–7), high-risk AI requirements (8–15), provider obligations (16–22), and deployer / FRIA / EU-database / transparency / GPAI (26, 27, 49, 50, 51–56).

When does the EU AI Act apply to me?

Entered into force 1 August 2024. Phased: prohibited practices from 2 February 2025; governance, notified bodies, and GPAI obligations from 2 August 2025; bulk of high-risk obligations from 2 August 2026; Annex I high-risk products from 2 August 2027. Verify your timeline against the Official Journal text.
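
As a convenience, the phases reduce to a date lookup (an illustrative table; always verify against the Official Journal):

```python
from datetime import date

# Illustrative lookup of the staggered application dates named above.
AI_ACT_PHASES = [
    (date(2025, 2, 2), "Prohibited practices (Art. 5)"),
    (date(2025, 8, 2), "Governance, notified bodies, GPAI obligations"),
    (date(2026, 8, 2), "Bulk of high-risk obligations"),
    (date(2027, 8, 2), "Annex I high-risk products"),
]

def obligations_in_force(today: date) -> list[str]:
    return [label for start, label in AI_ACT_PHASES if today >= start]
```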

Am I a provider, a deployer, or both?

Provider: you place an AI system on the market or put it into service under your own name or trademark. Deployer: you use an AI system under your authority in the course of a professional activity. The same organisation can be both. The brick walks you through this determination at project setup.
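
A toy determination along the lines the setup wizard asks (purely illustrative; real scoping also covers importers, distributors, and authorised representatives):

```python
# Toy role determination mirroring the two definitions above.
def operator_roles(places_on_market_under_own_name: bool,
                   uses_under_own_authority: bool) -> set[str]:
    """Empty set means: check the importer/distributor definitions instead."""
    roles = set()
    if places_on_market_under_own_name:
        roles.add("provider")
    if uses_under_own_authority:
        roles.add("deployer")
    return roles
```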

What is a Fundamental Rights Impact Assessment (FRIA)?

Article 27 requires deployers that are public bodies, private operators providing public services, and deployers of certain Annex III point 5(b)–(c) systems to perform a FRIA before first deployment. It covers the deployer's process, period and frequency of use, categories of affected persons, specific risks of harm, human oversight measures, and mitigations. The market surveillance authority must be notified of the results.
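
Structurally, a FRIA record mirrors that list (the field names are ours, not a statutory form):

```python
from dataclasses import dataclass

# Hypothetical FRIA record mirroring the Article 27 contents listed above.
@dataclass
class FRIARecord:
    deployer_process: str               # how the system will be used
    period_and_frequency: str           # intended period and frequency of use
    affected_categories: list[str]      # categories of persons affected
    specific_risks: list[str]           # specific risks of harm
    oversight_measures: list[str]       # human oversight arrangements
    mitigations: list[str]              # measures if risks materialise
    authority_notified: bool = False    # Art. 27(3): notify results to authority
```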

Does the brick cover GPAI obligations?

Yes — Articles 51–53 (technical documentation per Annex XI, downstream-provider information per Annex XII, EU copyright policy, training-data public summary), Article 54 (authorised representative), and Articles 55–56 (systemic-risk GPAI: model evaluation, mitigation, serious incident reporting, cybersecurity).

How does this relate to ISO 42001?

ISO 42001 is a certifiable management-system standard; the EU AI Act is binding regulation. Many controls overlap. Most teams pursue both in parallel. BrickGRC stacks the bricks; evidence collected for one rolls into the other automatically. See our AI governance page for the full three-brick stack (ISO 42001 + EU AI Act + NIST AI RMF).

Get ahead of August 2026.

Pick the EU AI Act brick, classify your systems, generate Annex IV technical documentation, and let the AI Coach handle the evidence work. AI-Act-ready in weeks, not months.

Book a Demo · Start Free Trial