AI & Automation 4 min read 9 March 2026

The EU AI Act comes into force: what mid-market companies must know

Europe's AI regulation is now law. Here's what it means for your business in practice, not theory.

Elena Marín

AI Editor

August 2024 marked the end of the Wild West era for AI in Europe. The EU AI Act officially came into force, creating the world's first comprehensive legal framework for artificial intelligence systems. If you're running AI projects at a mid-market company, this isn't abstract regulatory theatre anymore.

The risk pyramid that determines your obligations

The Act works on a risk-based approach that sorts AI systems into four categories. Minimal risk systems face almost no requirements. Limited risk systems need transparency disclosures. High-risk systems must jump through extensive compliance hoops. Prohibited systems are banned outright.

Most businesses we work with fall into the first two categories. Your customer service chatbot? Limited risk. Your recruitment screening tool? That's high-risk territory. The difference determines whether you need a basic disclosure notice or a full conformity assessment that can run to tens of thousands of euros.

Foundation models like GPT-4 or Claude get special treatment if they exceed 10^25 floating-point operations during training. This threshold captures the major models but leaves smaller, specialised systems alone. It's a sensible line that avoids crushing innovation while targeting the systems that could genuinely reshape society.

Timeline pressure is building faster than you think

Don't be fooled by the August start date. Different provisions kick in at different times, and some deadlines arrive quickly. The bans on prohibited AI practices took effect in February 2025, six months after entry into force. High-risk systems have until August 2026 for full compliance, but preparatory work needs to start now.

We've seen this pattern before with GDPR. Companies that waited until the last minute faced rushed implementations and unnecessary costs. The smart money maps its AI inventory now, not in the scramble before the August 2026 deadline.

General-purpose AI models have their own timeline. Systems exceeding the computational threshold must comply with transparency and risk management requirements by August 2025. If you're building custom AI solutions on top of these models, you need to understand what obligations pass down to you.

What compliance actually looks like on Monday morning

Theory is nice, but engineering managers need practical steps. For limited risk systems, compliance means clear disclosure that users are interacting with AI. A simple notice in your chat interface usually suffices.
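As a rough sketch of what that disclosure can look like in practice (the function and field names here are our own, purely illustrative; the Act requires that users are informed they're interacting with AI, not any particular wording or API):

```python
# Minimal sketch of a limited-risk transparency disclosure.
# The Act mandates informing the user, not this specific mechanism.

AI_DISCLOSURE = "You are chatting with an automated AI assistant."

def wrap_chat_reply(reply_text: str, first_turn: bool) -> dict:
    """Attach the disclosure to the first message of a session,
    so the user is informed before the interaction proceeds."""
    payload = {"role": "assistant", "content": reply_text}
    if first_turn:
        payload["disclosure"] = AI_DISCLOSURE
    return payload

msg = wrap_chat_reply("How can I help?", first_turn=True)
```

Surfacing the disclosure on the first turn (rather than burying it in terms of service) is the pattern we'd expect regulators to find persuasive, though the Act leaves presentation details to you.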

High-risk systems demand much more. You'll need risk management systems, data governance procedures, human oversight protocols, and accuracy documentation. Think ISO certification process, not weekend project.

The documentation requirements alone can overwhelm smaller teams. You need to track training data, model performance, testing procedures, and deployment decisions. One client spent three months retroactively documenting their recruitment AI because they'd built first and planned compliance later.
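A lightweight record per model goes a long way here. This is an illustrative sketch of the kind of structure that covers those four tracking duties; the field names are our own, not lifted from the regulation's annexes:

```python
# Sketch of per-model record-keeping covering training data,
# performance, limitations, and deployment decisions.
# Field names are illustrative, not prescribed by the AI Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    purpose: str
    training_data_sources: list[str]
    evaluation_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)
    deployment_decisions: list[str] = field(default_factory=list)
    last_reviewed: date = date(2026, 3, 1)

record = ModelRecord(
    name="cv-screener-v2",  # hypothetical system
    purpose="Rank incoming CVs for recruiter review",
    training_data_sources=["internal-hiring-2019-2023"],
    evaluation_metrics={"accuracy": 0.87, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for non-English CVs"],
)
```

Filling records like this in as you build is the cheap path; reconstructing them retroactively, as our client discovered, is the three-month one.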

Quality management systems must be proportionate to your organisation size. A 50-person company won't need the same infrastructure as a multinational, but you still need demonstrable processes. The regulators understand resource constraints, but ignorance isn't a defence.

Practical steps for mid-market companies

Start with an AI audit of your current systems. List every AI tool, custom model, and automated decision system. Include the obvious ones like chatbots and the hidden ones like fraud detection algorithms.

Assess each system's risk category using the Act's criteria. When in doubt, err on the side of caution. Reclassifying from high-risk to limited risk is easier than the reverse.

In regulated sectors like finance and healthcare, many common use cases, such as credit scoring or insurance pricing, land squarely in the high-risk category. Your compliance burden just got heavier, but the framework provides clarity you didn't have before.
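A first-pass triage can be automated. The sketch below is not legal advice and the keyword rules are deliberately simplified stand-ins we invented for the Act's actual Annex III criteria; a real classification needs the use-case list and legal review. It does encode the two rules above: err toward the stricter tier, and give user-facing systems at least limited-risk treatment.

```python
# Rough, assumption-laden triage of an AI inventory entry into the
# Act's four tiers. Keyword rules are illustrative placeholders for
# the Act's real criteria -- always confirm with legal counsel.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "medical triage",
                  "education scoring", "critical infrastructure"}

def triage(use_case: str, interacts_with_users: bool) -> RiskTier:
    use = use_case.lower()
    if any(u in use for u in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(u in use for u in HIGH_RISK_USES):
        return RiskTier.HIGH
    if interacts_with_users:  # e.g. chatbots: disclosure duties apply
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Running every inventory entry through even a crude function like this gives you a defensible starting list to hand to counsel, which beats classifying ad hoc in a spreadsheet.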

  • Document your AI systems' purpose, capabilities, and limitations
  • Implement human oversight where required
  • Establish procedures for monitoring AI performance post-deployment
  • Create incident response plans for AI system failures
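The last two items, post-deployment monitoring and incident response, can share one mechanism. Here's a bare-bones sketch (names and thresholds are our own assumptions): compare live metrics against the baselines you documented, and log anything that drifts so there's an audit trail when regulators or enterprise clients ask.

```python
# Sketch of post-deployment monitoring with incident logging.
# Tolerance and record shape are illustrative choices, not mandated.
from datetime import datetime, timezone

incident_log: list[dict] = []

def check_metric(system: str, metric: str, value: float,
                 baseline: float, tolerance: float = 0.05) -> bool:
    """Log an incident when a live metric drifts beyond tolerance;
    return True if drift was detected."""
    drifted = abs(value - baseline) > tolerance
    if drifted:
        incident_log.append({
            "system": system,
            "metric": metric,
            "observed": value,
            "baseline": baseline,
            "at": datetime.now(timezone.utc).isoformat(),
        })
    return drifted

# Hypothetical example: live accuracy has slipped below its baseline.
check_metric("cv-screener-v2", "accuracy", 0.79, baseline=0.87)
```

The point isn't the plumbing; it's that monitoring writes to the same log your incident response plan reads from, so "detect" and "respond" aren't separate bureaucracies.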

The bigger picture beyond compliance

Compliance isn't just about avoiding fines. The Act creates competitive advantages for companies that get it right. Demonstrable AI governance becomes a selling point with enterprise clients who face their own compliance pressures.

EU market access now requires AI Act compliance. If you're selling AI-powered products to European customers, this isn't optional. The extraterritorial reach means non-EU companies deploying AI systems that affect EU citizens must also comply.

The Act will influence global standards. California is already drafting similar legislation. Getting ahead of EU requirements positions you for a world where AI regulation becomes the norm, not the exception.

Smart companies are treating this as an opportunity to build better AI systems, not just compliant ones. The risk management and documentation requirements force good engineering practices that improve reliability and user trust. Contact us if you need help turning regulatory requirements into competitive advantages.

The compliance clock started ticking in August. Companies that act now have time to build thoughtful, proportionate responses. Those who wait will find themselves scrambling to meet deadlines while competitors have already turned good governance into market differentiation.
