AI Act compliance: build your risk assessment before August 2025

The EU AI Act's risk classification system determines everything from audit requirements to potential fines. Getting your assessment wrong costs more than getting it right.

Elena Marín

AI Editor

Companies deploying AI systems have eighteen months to figure out if they're building a chatbot or a regulated medical device. The EU AI Act doesn't care what you call your system — it cares what your system actually does.

The risk pyramid that determines your compliance burden

The AI Act sorts every AI system into one of four risk categories. Many business applications land in the "limited risk" bucket, which sounds reassuring until you read the requirements: your customer service chatbot needs clear disclosure that users are talking to a machine. Other familiar tools sit higher than teams expect. A recruitment screening tool falls under the Act's high-risk employment uses, bringing bias testing and human oversight protocols with it, and a content moderation system that gates people's access to services can climb the same ladder.

High-risk AI systems — those affecting safety, fundamental rights, or critical infrastructure — face the full regulatory weight. We've seen clients discover their employee monitoring software crosses into high-risk territory because it influences hiring decisions. The compliance cost jumps from thousands to hundreds of thousands of pounds.

Minimal risk systems get the lightest touch, but that classification isn't automatic. You need documentation proving your AI system doesn't influence decisions about people's lives, jobs, or access to services. That's harder to prove than most companies expect.

Foundation models face different rules entirely

If you're building on GPT-4, Claude, or similar foundation models (what the final Act calls general-purpose AI models), compliance obligations sit at both ends of the stack. The model provider handles systemic risk assessments and the computational thresholds that trigger them. You handle the downstream application risks.

This split responsibility creates gaps. When a foundation model provider updates its capabilities, your risk classification might change overnight. We advise clients to build monitoring for upstream model changes, not just for their own code updates.
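
In practice that can be as simple as pinning the model version you assessed and diffing it against what the provider currently serves. A minimal sketch follows; the pinned record, the version lookup, and the ticketing hook are all hypothetical placeholders for whatever your provider API and tracker actually expose.

```python
# Sketch: flag a compliance re-assessment when the upstream model changes.
# The pinned record would normally live in version control; the provider
# lookup is a stub, not any real vendor's API.

ASSESSED_PIN = {"model": "foundation-model-x", "assessed_version": "2025-01"}

def fetch_provider_model_version(model: str) -> str:
    # Placeholder: query your provider's models endpoint or changelog.
    return "2025-06"  # stubbed so the sketch runs

def open_reassessment_task(model: str, assessed: str, current: str) -> None:
    # Placeholder: file a ticket in your tracker and hold deploys.
    print(f"RE-ASSESS {model}: assessed against {assessed}, "
          f"provider now serves {current}")

current = fetch_provider_model_version(ASSESSED_PIN["model"])
if current != ASSESSED_PIN["assessed_version"]:
    # An upstream capability change can shift your downstream risk class,
    # so treat it like a code change: re-assess before shipping.
    open_reassessment_task(ASSESSED_PIN["model"],
                           ASSESSED_PIN["assessed_version"], current)
```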

Companies training their own models with more than 10^25 FLOPs — roughly GPT-4 scale — become foundation model providers themselves. The obligations include sharing safety evaluations with regulators and implementing systemic risk mitigation. Most mid-market companies won't hit these thresholds, but enterprise clients with serious ML operations need to run the calculations.
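
For a back-of-envelope check, a widely used approximation for dense transformer training compute is FLOPs ≈ 6 × parameters × training tokens. The configurations below are illustrative, not any vendor's actual numbers; a minimal sketch:

```python
# Back-of-envelope check against the AI Act's 10^25 FLOP systemic-risk
# threshold, using the common approximation FLOPs ~= 6 * params * tokens.
AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Standard 6ND heuristic for dense transformer training compute."""
    return 6 * parameters * tokens

# Illustrative configurations, not real vendor numbers.
runs = {
    "mid-market fine-tune (7B params, 50B tokens)": (7e9, 50e9),
    "large in-house pretrain (70B params, 2T tokens)": (70e9, 2e12),
    "frontier-scale pretrain (1T params, 10T tokens)": (1e12, 10e12),
}

for name, (params, tokens) in runs.items():
    flops = estimated_training_flops(params, tokens)
    side = "ABOVE" if flops >= AI_ACT_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {side} the 1e25 threshold")
```

Only the frontier-scale run clears the line, which is the point: most mid-market training jobs sit orders of magnitude below the threshold.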

Documentation requirements that actually matter

The AI Act demands technical documentation, but not the kind most engineering teams produce naturally. Compliance documentation focuses on decision-making processes, not system architecture. You need to explain how your AI system affects people, not how your neural network processes data.

Quality management systems become mandatory for high-risk AI applications. This isn't ISO certification — it's ongoing monitoring of your AI system's performance in production. Data drift monitoring, bias testing, and human oversight protocols all need formal documentation.
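
To make "formal documentation" concrete, here is a minimal sketch of one such drift check, a population stability index comparison between a training baseline and a production sample, assuming numpy. The 0.2 alert threshold is an industry rule of thumb, not a figure from the Act.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time feature sample and a production sample."""
    # Bin edges are baseline quantiles, so each bin holds ~1/bins of the
    # training data; outer edges are widened to catch out-of-range values.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    prod_pct = np.histogram(production, edges)[0] / len(production)
    eps = 1e-6  # guard against empty bins
    base_pct = np.clip(base_pct, eps, None)
    prod_pct = np.clip(prod_pct, eps, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Simulated example: production scores have drifted from the baseline.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.8, 1.2, 2_000)
psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f} "
      f"({'drift: log it and trigger review' if psi > 0.2 else 'stable'})")
```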

Risk management documentation must cover the entire system lifecycle. Your initial risk assessment gets updated when you retrain models, change data sources, or deploy to new use cases. We're seeing clients across different sectors struggle with this ongoing documentation burden because they treat it as a one-time compliance exercise.
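
One way to keep the assessment alive is to treat it as a versioned record with explicit re-assessment triggers. A minimal sketch; the trigger list and field names are assumptions, not the Act's wording:

```python
from dataclasses import dataclass, field
from datetime import date

# Lifecycle events that invalidate the current assessment version:
# the same three the paragraph above names.
REASSESSMENT_TRIGGERS = {"model_retrained", "data_source_changed", "new_use_case"}

@dataclass
class RiskAssessment:
    system: str
    risk_category: str           # "minimal" | "limited" | "high"
    assessed_on: date
    version: int = 1
    event_log: list = field(default_factory=list)

    def record_event(self, event: str) -> bool:
        """Log a lifecycle event; return True if a new version is due."""
        self.event_log.append((date.today(), event))
        return event in REASSESSMENT_TRIGGERS

assessment = RiskAssessment("cv-screening-tool", "high", date(2025, 1, 15))
if assessment.record_event("model_retrained"):
    print(f"{assessment.system}: assessment v{assessment.version} is stale, "
          f"open v{assessment.version + 1}")
```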

Enforcement timelines that conflict with development cycles

The Act's prohibitions bite first: banned practices apply from February 2025, six months after the Act entered into force in August 2024. Most prohibited categories target government surveillance, but some affect business applications. AI systems that use subliminal techniques to materially distort behavior cross the line into prohibited territory.

High-risk AI systems get a longer runway: most obligations apply from August 2026, two years after entry into force, and systems already on the market before that date are only pulled in if they are significantly modified. New high-risk systems need full compliance from day one. Foundation models with systemic risk implications face their obligations from August 2025, twelve months after the Act took effect.

These timelines assume your risk classification stays constant. In practice, AI systems evolve faster than regulatory frameworks. Your minimal risk chatbot becomes limited risk when you add personalization features. Your limited risk content filter becomes high-risk when you apply it to job applications.

Start with use case mapping, not technology audits

Most companies begin AI Act compliance by cataloging their AI technologies. This approach misses the point entirely. The regulation cares about use cases, not algorithms. Your computer vision system might be minimal risk for inventory management and high-risk for security screening.

Map every AI use case against the Act's risk categories before you start building compliance processes. Document the human decisions your AI systems influence, the fundamental rights they might affect, and the safety implications of their failures. This mapping exercise often reveals compliance obligations companies didn't expect.
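
A lightweight way to start is a structured catalog keyed by use case rather than by system. The entries and the triage rule below are illustrative, not legal advice:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    decisions_influenced: list   # human decisions the system feeds into
    rights_affected: list        # fundamental rights plausibly touched
    failure_impact: str          # what happens to people when it fails

# The same underlying vision model, mapped twice: classification follows
# the use case, not the technology.
catalog = [
    UseCase("computer vision: shelf inventory counts",
            decisions_influenced=["reorder quantities"],
            rights_affected=[],
            failure_impact="stock-outs; no impact on individuals"),
    UseCase("computer vision: security screening at entrances",
            decisions_influenced=["who gets flagged and stopped"],
            rights_affected=["non-discrimination", "privacy"],
            failure_impact="people wrongly stopped or excluded"),
]

for uc in catalog:
    # Crude first-pass triage only: anything touching decisions about
    # people goes to proper legal classification, not this script.
    escalate = bool(uc.decisions_influenced) and bool(uc.rights_affected)
    print(f"{uc.name}: "
          f"{'escalate for legal classification' if escalate else 'document and monitor'}")
```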

The companies getting AI Act compliance right aren't waiting for final guidance from regulators. They're building risk assessment processes that evolve with their AI deployments, treating compliance as an ongoing operational requirement rather than a legal checkbox. Your August 2025 deadline starts with the risk assessment you build today.

