AI & Automation 4 min read 14 May 2026

AI Act fines start at 1% of turnover: how to budget for compliance

The EU's AI Act penalty structure scales with company size, not system complexity. Here's how finance teams should model the real costs.

Elena Marín

AI Editor

The EU's AI Act fines start at €7.5 million or 1% of worldwide annual turnover for the least serious violations, whichever is higher. Most CFOs haven't run these numbers yet, but they should.

Unlike GDPR, where penalties felt theoretical until the first big cases landed, AI Act enforcement targets are already visible. The regulation's tiered penalty system creates a clear cost hierarchy that finance teams can model today. The question isn't whether fines will materialise—it's whether your compliance budget reflects the actual risk exposure.

The penalty pyramid: three tiers, three budgets

The AI Act structures penalties across three tiers, each with different cost implications for your business. Prohibited AI practices trigger the highest fines: up to €35 million or 7% of global annual turnover, whichever is higher. Breaches of the obligations on high-risk systems, including documentation and transparency requirements, sit at €15 million or 3%. Supplying incorrect, incomplete or misleading information to regulators and notified bodies carries €7.5 million or 1%.

Because the fine is whichever of the two figures is higher, the fixed amounts dominate at mid-market scale: a software company with €50 million turnover still faces a cap of €7.5 million for the lowest-tier violations, although SMEs benefit from the lower of the two figures instead, which brings that cap down to €500,000. The percentage only takes over for the largest businesses: past roughly €750 million in turnover, 1% of revenue exceeds the €7.5 million floor, so a €2 billion enterprise is looking at up to €20 million for the same lowest-tier breach.
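This exposure model is simple enough to put in a spreadsheet or a few lines of code. A minimal sketch follows; the tier labels and function name are ours, and the SME rule (lower of the two figures applies) is per Article 99 of the Act:

```python
# Tiered AI Act penalty caps: (fixed amount in EUR, share of worldwide
# annual turnover). For most undertakings the HIGHER of the two applies;
# for SMEs the LOWER applies. Tier names are illustrative labels.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_exposure(tier: str, annual_turnover: float, sme: bool = False) -> float:
    """Worst-case fine for one violation category at a given turnover."""
    fixed, pct = TIERS[tier]
    pick = min if sme else max
    return pick(fixed, pct * annual_turnover)
```

For example, `max_exposure("incorrect_information", 50_000_000)` returns the €7.5 million fixed cap, while the same call with `sme=True` returns €500,000.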

The smart money isn't just budgeting for compliance costs. It's factoring in the penalty exposure when calculating ROI on AI system development projects. If your high-risk AI system could trigger a 3% penalty, the business case for robust compliance infrastructure changes dramatically.

Insurance gaps: why your current policy won't help

Most commercial insurance policies explicitly exclude regulatory fines from coverage. We've reviewed dozens of tech company policies over the past year, and fewer than 20% include regulatory defence cost coverage—let alone penalty payments.

The AI Act creates a specific problem for insurance underwriters: unlike data breaches, where liability estimates draw on established actuarial models, AI system failures have no precedent data. Early conversations with specialist insurers suggest AI liability policies will price in the full penalty exposure, making them prohibitively expensive for most businesses.

This insurance gap forces companies to self-insure against AI Act penalties through cash reserves or compliance investment. The arithmetic is stark: setting aside funds to cover potential 1-7% penalties, or spending that money upfront on systems that prevent violations. Most finance directors we speak to prefer the certainty of compliance costs over penalty reserves.

Revenue recognition: when AI systems become liabilities

The AI Act's risk classification system creates an accounting problem that few businesses have recognised yet. High-risk AI systems require ongoing compliance monitoring, which generates recurring costs that accounting teams must provision against future revenue.

Under IFRS 15, if your software includes high-risk AI components, the compliance costs become part of your contract fulfilment obligations. For SaaS businesses, this means calculating AI Act compliance expenses across multi-year customer contracts and recognising them as deferred costs.

One logistics platform we worked with discovered their route optimisation algorithm qualified as high-risk under the AI Act. The compliance monitoring costs, roughly €50,000 annually per deployment, had to be provisioned against three-year contracts worth €200,000 a year. That changed their contribution margin from 60% to 35% overnight.
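The margin arithmetic behind that example can be sketched as follows. The figures come from the article; our assumption is that the €200,000 contract value is per year, with €80,000 of annual variable costs, which is what reconciles the 60% to 35% shift:

```python
# Back-of-envelope contribution margin after provisioning recurring AI Act
# compliance costs. All inputs are per-year figures; the €80,000 variable
# cost is an illustrative assumption implied by the 60% starting margin.

def contribution_margin(annual_contract_value: float,
                        variable_costs: float,
                        annual_compliance_cost: float) -> float:
    """Share of contract value left after variable and compliance costs."""
    profit = annual_contract_value - variable_costs - annual_compliance_cost
    return profit / annual_contract_value

before = contribution_margin(200_000, 80_000, 0)       # 0.60
after = contribution_margin(200_000, 80_000, 50_000)   # 0.35
```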

The revenue recognition impact varies by business model, but B2B software companies with AI components need to model these costs into their pricing from day one. Retrofitting compliance provisions into existing contracts rarely works.

Budget allocation: compliance as product feature

Most companies we encounter treat AI Act compliance as overhead—a necessary evil that sits outside core product development. The businesses getting ahead flip this thinking: they budget for compliance as a product feature that creates competitive differentiation.

The numbers support this approach. Building AI Act compliance into your product architecture from the start costs roughly 15-20% more than retrofitting after launch. But compliant systems command premium pricing in enterprise markets, often 25-30% above non-compliant alternatives.
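Under the article's own ranges, the trade-off can be sketched in a few lines. Everything here is an illustrative assumption (a 20% build-cost premium, a 25% price premium), not a benchmark:

```python
# Build-early vs retrofit: extra revenue from compliant-product pricing
# minus the upfront cost of building compliance into the architecture.
# Defaults use the article's ranges (15-20% build premium, 25-30% price
# premium); all inputs are illustrative assumptions.

def net_benefit(base_build_cost: float, annual_revenue: float, years: int,
                build_premium: float = 0.20, price_premium: float = 0.25) -> float:
    """Premium-pricing upside minus the upfront compliance build premium."""
    extra_build_cost = base_build_cost * build_premium
    extra_revenue = annual_revenue * price_premium * years
    return extra_revenue - extra_build_cost

# A €1m build with €500k annual product revenue over 3 years:
# net_benefit(1_000_000, 500_000, 3) -> 175_000.0 in favour of building early
```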

Smart budget allocation puts compliance engineering inside product teams, not legal departments. The technical choices that determine AI Act risk classification happen during system design, not contract negotiation. Engineering teams that understand the penalty structure build different systems than those treating compliance as someone else's problem.

The key budget decision isn't how much to spend on compliance—it's whether to fund reactive penalty management or proactive compliance engineering. The penalty structure makes that choice for you: prevention costs less than cure, and the gap widens with company size.

Finance teams should model AI Act costs as operational risk, not project overhead. The regulation creates ongoing compliance obligations that scale with business growth, not one-time implementation expenses. Budget accordingly, because the alternative is budgeting for fines instead.

