Modular EU AI Act software covering 24 operator obligations across prohibited practices, high-risk AI requirements, provider obligations, deployer obligations, FRIA, EU database registration, transparency for chatbots and deepfakes, and General-Purpose AI model obligations.
No roadmap promises: everything listed here is verified live in production. Same depth as the ISO 27001 brick: 5-level maturity rubrics, evidence templates, gap detection, and coaching tips per control.
Screen every AI system against the eight Article 5 prohibited categories. Classify high-risk per Annex I (product safety component) or Annex III (use case) with Article 6(3) derogation analysis. Track Annex III amendments under Article 7.
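The screening order described above can be sketched as a small decision function. This is a hypothetical illustration of the logic, not BrickGRC's actual API; the class and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    prohibited_practice: bool = False       # matches one of the eight Article 5 categories
    annex_i_safety_component: bool = False  # safety component of an Annex I product
    annex_iii_use_case: bool = False        # listed Annex III use case
    derogation_claimed: bool = False        # Article 6(3) no-significant-risk claim

def classify(system: AISystem) -> str:
    # Article 5 prohibitions are checked first: a prohibited system
    # cannot be placed on the market at all.
    if system.prohibited_practice:
        return "prohibited"
    if system.annex_i_safety_component:
        return "high-risk (Annex I)"
    if system.annex_iii_use_case:
        # Article 6(3) lets some Annex III systems argue they pose no
        # significant risk; the claim itself must be documented and assessed.
        if system.derogation_claimed:
            return "Annex III: derogation analysis required"
        return "high-risk (Annex III)"
    return "not high-risk"
```

The ordering matters: a system that is both prohibited and an Annex III use case is prohibited, full stop.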
Per high-risk AI system: risk management (Art. 9), data governance (Art. 10), Annex IV technical documentation (Art. 11), event logs (Art. 12), instructions for use to deployers (Art. 13), human oversight (Art. 14), accuracy / robustness / cybersecurity (Art. 15).
General provider obligations + CE marking + EU declaration of conformity (Art. 16), quality management system (Art. 17), 10-year documentation retention (Art. 18), 6-month log retention (Art. 19), corrective actions (Art. 20), authority cooperation (Art. 21), authorised representative for non-EU providers (Art. 22).
Deployer obligations including human oversight + log retention + affected-person notifications (Art. 26), Fundamental Rights Impact Assessment (Art. 27), EU database registration (Art. 49), transparency for chatbots / deepfakes / synthetic content / public-interest text (Art. 50), General-Purpose AI model obligations (Art. 51–53), and systemic-risk GPAI obligations (Art. 55–56).
AI Act readiness narratives, control maturity scoring, and gap detection run on your OpenAI / Anthropic / Azure OpenAI / local model — your provider, your billing, your audit trail. Hosted in the EU (Frankfurt) by default. Per-tenant isolation.
Most teams need at least two AI governance frameworks: ISO 42001 for the certificate and the EU AI Act for the regulator, and many add NIST AI RMF. BrickGRC tracks all three (ISO 42001, EU AI Act, NIST AI RMF) with a shared AI system inventory; evidence collected for one rolls into the others where controls overlap.
The EU AI Act, Regulation (EU) 2024/1689, is the first horizontal regulation of artificial intelligence in any major jurisdiction. Adopted on 13 June 2024, it entered into force on 1 August 2024 and applies to providers, deployers, importers, distributors, and authorised representatives of AI systems placed on the EU market or put into service in the Union, regardless of where the operator is established. Application of its obligations is staggered.
The Act takes a risk-based approach with four tiers: unacceptable risk (prohibited outright under Article 5), high risk (carrying the bulk of the obligations), limited risk (transparency duties under Article 50), and minimal risk (no new obligations).
Plus general-purpose AI (GPAI) models: Articles 51–56 establish a separate regime for foundation models and a sub-regime for those with systemic risk (currently models trained with more than 10²⁵ FLOPs of compute, or designated by the Commission).
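The systemic-risk presumption is a bright-line compute test plus a designation power. A minimal sketch, with illustrative names (this is not BrickGRC's API):

```python
# Article 51(2) presumes systemic risk when cumulative training compute
# exceeds 10**25 FLOPs; the Commission can also designate a model directly.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def is_systemic_risk_gpai(training_flops: float,
                          commission_designated: bool = False) -> bool:
    # Either route triggers the Articles 55-56 obligations.
    return commission_designated or training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
```

Note that the designation route means a model below the threshold can still fall under the systemic-risk regime.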
BrickGRC's EU AI Act brick maps directly onto the operator-obligation surface. The brick is honest about what it covers — Articles 5–22, 26–27, 49–50, 51–56 — and what it doesn't (notifying authorities, conformity assessment, market surveillance, fines apparatus under Article 99, all of which are regulator-side activities or one-time events). The intent is to drive operational adoption: get providers and deployers to "I have evidence of every required obligation" with as little friction as possible.
If you also pursue ISO 42001, most controls overlap meaningfully — risk management, data governance, technical documentation, record-keeping, human oversight, post-market monitoring. The two bricks are designed to stack.
24 controls across Articles 5–56: scope and prohibited practices (5–7), high-risk AI requirements (8–15), provider obligations (16–22), and deployer / FRIA / EU-database / transparency / GPAI (26, 27, 49, 50, 51–56).
Entered into force 1 August 2024. Phased: prohibited practices from 2 February 2025; governance / notified bodies from 2 August 2025; bulk of high-risk obligations from 2 August 2026; Annex I high-risk products from 2 August 2027. Verify your timeline against the Official Journal text.
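The phased timeline above lends itself to a simple lookup. The dates below mirror the summary in this section; as the text says, verify them against the Official Journal before relying on them. Key names are illustrative.

```python
from datetime import date

# Staggered application dates under Regulation (EU) 2024/1689
# (entered into force 1 August 2024; verify against the OJ text).
APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),  # Article 5 bans
    "gpai_and_governance": date(2025, 8, 2),   # GPAI rules, notified bodies
    "high_risk_bulk": date(2026, 8, 2),        # most high-risk obligations
    "annex_i_products": date(2027, 8, 2),      # Annex I product-embedded AI
}

def obligations_in_force(on: date) -> list[str]:
    # Returns every obligation bucket already applicable on the given date.
    return [name for name, applies in APPLICATION_DATES.items() if on >= applies]
```

For example, a readiness check run in mid-2025 would flag only the Article 5 prohibitions as already applicable.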
Provider: you place an AI system on the market or put it into service under your own name or trademark. Deployer: you use an AI system under your authority in the course of a professional activity. The same organisation can be both. The brick walks you through this determination at project setup.
Article 27 requires deployers that are public bodies, private operators providing public services, and certain Annex III(5)(b)–(c) deployers to perform a FRIA before first deployment. The assessment covers the deployment process, period of use, categories of affected persons, specific risks of harm, oversight measures, and mitigations, and the outcome must be notified to the market surveillance authority.
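The FRIA trigger and required contents described above can be sketched as follows. This is a hypothetical simplification for illustration; the element labels paraphrase Article 27 and are not statutory wording.

```python
# Elements an Article 27 FRIA must cover (paraphrased).
FRIA_ELEMENTS = [
    "deployment process description",
    "intended period and frequency of use",
    "categories of affected persons",
    "specific risks of harm",
    "human oversight measures",
    "mitigation and governance measures",
]

def fria_scope(is_public_body: bool,
               provides_public_service: bool,
               annex_iii_5b_or_5c: bool) -> list[str]:
    # Any one trigger is enough; if none applies, no FRIA is due.
    triggered = is_public_body or provides_public_service or annex_iii_5b_or_5c
    return FRIA_ELEMENTS if triggered else []
```

A deployer matching any trigger gets the full element list to evidence before first use.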
Yes — Articles 51–53 (technical documentation per Annex XI, downstream-provider information per Annex XII, EU copyright policy, training-data public summary), Article 54 (authorised representative), and Articles 55–56 (systemic-risk GPAI: model evaluation, mitigation, serious incident reporting, cybersecurity).
ISO 42001 is a certifiable management-system standard; the EU AI Act is binding regulation. Many controls overlap. Most teams pursue both in parallel. BrickGRC stacks the bricks; evidence collected for one rolls into the other automatically. See our AI governance page for the full three-brick stack (ISO 42001 + EU AI Act + NIST AI RMF).
Pick the EU AI Act brick, classify your systems, generate Annex IV technical documentation, and let the AI Coach handle the evidence work. AI-Act-ready in weeks, not months.