
Table of contents

Building AI under the EU AI Act: How to stay agile, compliant and competitive?
Why is there an AI Act?
The AI Act in brief: What should business leaders really know about the EU AI Act?
AI Act timeline: some key milestones to keep in mind
Compliance is strategic, not just legal
Navigating the new landscape: How will your AI projects and partnerships evolve?
How will the AI Act impact your AI projects?
What changes in AI partnerships under the EU AI Act?
How can AI Act compliance become a competitive advantage?
What are the practical steps to take right now for the AI Act?


Building AI under the EU AI Act: How to stay agile, compliant and competitive?


Published on:


07 Jul 2025

AI is transforming business, and the EU AI Act is changing the game. As the first major AI law, it sets strict rules for how AI is built, used, and governed. Learn what it means for your business, how to stay compliant and agile, and why early movers can turn regulation into advantage.

Artificial Intelligence is no longer a futuristic promise; it's a defining force in today’s business landscape. From boosting operational efficiency to enabling new customer experiences, AI is already reshaping whole industries. But with great power comes the need for clear governance. That’s where the EU AI Act comes in.

The EU AI Act, proposed by the European Commission and adopted by the European Parliament and the Council, is the first comprehensive legal framework of its kind. Its core goal? To balance innovation with the protection of fundamental rights and safety. But just as importantly, it aims to improve the functioning of the internal market, creating a predictable and harmonized set of rules for anyone developing or deploying AI across all 27 member states. At its heart, the Act is about building trustworthy AI systems that are safe, lawful, and aligned with European values like privacy, fairness, and democracy.

Why is there an AI Act?

Three big drivers shaped this legislation:

  • Protecting people and rights: AI can cause real harm in high-impact sectors like healthcare, finance, law enforcement, education, and public services, especially as many of its risks are not yet fully understood. The Act bans certain prohibited practices outright, such as manipulative techniques and social scoring, and imposes strict requirements and obligations on a separate category of high-risk AI systems. These are not banned, but they are subject to extensive regulation due to their potential impact.

  • Creating clarity and trust: The Act’s primary economic goal is to prevent legal fragmentation. A single, clear EU-wide framework means fewer legal grey areas and ambiguities. For businesses, this reduces compliance risk and creates a stable environment to build innovative AI responsibly.

  • Leading by example: Similar to GDPR’s influence on global data privacy practices, the EU aims to set a global standard for ethical AI. The EU AI Act positions Europe as a frontrunner in “human-centric AI”.

The AI Act in brief: What should business leaders really know about the EU AI Act?

The EU AI Act isn’t just a legal update; it’s a strategic shift. The regulation introduces a risk-based framework that establishes several distinct categories of AI with different obligations and requirements. Getting these categories right is the first step to compliance.

  • Prohibited AI systems: These are banned outright due to unacceptable risks. This includes social scoring by public authorities and manipulative techniques that materially distort a person’s behavior in a way that is likely to cause significant physical or psychological harm (defined in Article 5).

  • High-risk AI systems: Systems used in critical areas like employment, creditworthiness assessments, medical diagnostics, or law enforcement are permitted under the EU AI Act, but they face strict regulatory requirements, including risk management, data governance, transparency, human oversight, and quality assurance obligations.

  • General-Purpose AI (GPAI) Models: This is a critical category that many guides dangerously oversimplify. GPAI models are not a minor footnote; they are a central pillar of the EU AI Act with their own dedicated chapter. This category covers foundation models (like LLMs) that can be adapted to a wide range of tasks.

    • Crucially, the Act creates a sub-category for GPAI models that pose “systemic risk”. If your business develops or heavily relies on a state-of-the-art GPAI model, you will face some of the most stringent obligations in the entire Act, including mandatory model evaluations, adversarial testing, and serious incident reporting.

Other AI systems (like simple chatbots) face lighter obligations, such as transparency notices.
The message is clear: the more impact your AI has, the more closely it should be watched.
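
To make these tiers easier to work with internally, here is a minimal sketch in Python of how an AI inventory could encode them alongside the example obligations summarised above. The tier names and obligation lists are simplified illustrations drawn from this article, not a legal checklist.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk tiers (illustrative, not exhaustive)."""
    PROHIBITED = "prohibited"        # banned outright (Article 5)
    HIGH_RISK = "high_risk"          # permitted, but heavily regulated
    GPAI = "general_purpose_ai"      # foundation models; extra duties if systemic risk
    MINIMAL = "minimal"              # e.g. simple chatbots: transparency notices

# Example obligations per tier, as summarised in this article (simplified).
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED: ["must not be placed on the EU market"],
    RiskTier.HIGH_RISK: [
        "risk management", "data governance", "technical documentation",
        "logging", "transparency / instructions for use", "human oversight",
    ],
    RiskTier.GPAI: [
        "model evaluations", "adversarial testing",
        "serious incident reporting (systemic-risk models)",
    ],
    RiskTier.MINIMAL: ["transparency notice to users"],
}

if __name__ == "__main__":
    for tier, duties in OBLIGATIONS.items():
        print(f"{tier.value}: {', '.join(duties)}")
```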

AI Act timeline: some key milestones to keep in mind

  • Feb 2nd, 2025: The ban on prohibited AI systems takes effect.

  • Aug 2nd, 2025: Obligations for the distinct category of General-Purpose AI (GPAI) models apply. The governance structure (AI Office, Board) and framework for Notifying Authorities and Bodies also become operational.

  • Aug 2nd, 2026: The regulation fully applies. Most high-risk AI systems must comply with all requirements including conformity assessments, detailed technical documentation, and affixing the CE marking to prove compliance.

Compliance is strategic, not just legal

Non-compliance isn’t an option. Penalties can reach up to €35 million or 7% of global annual turnover (whichever is higher), a clear signal of the Act’s seriousness. But the AI Act also presents strategic opportunities:

  • More reliable procurement and vendor assessments

  • Preferred status for compliant solution providers (such as Superlinear)

  • Greater trust among customers, partners and regulators

Forward-looking companies see compliance not just as a regulatory burden, but as a signal of quality, professionalism and readiness to scale responsibly.

Navigating the new landscape: How will your AI projects and partnerships evolve?

How will the AI Act impact your AI projects?

The AI Act introduces requirements and obligations that fundamentally change how AI projects are planned, built, and maintained. Here’s what most guides miss: compliance is not a death sentence for agility if you understand the rules.

  • Early-stage planning is more critical: You’ll need to clearly define use cases, assess risk levels, and document intended purposes right from the start. These steps lay the foundation for compliance and reduce surprises later on.

  • Documentation is non-negotiable: For high-risk systems, thorough technical documentation, risk assessments, and continuous logging are mandatory. Think of them as a “paper trail” for accountability.

  • Human oversight is a design requirement: The EU AI Act mandates that high-risk AI systems must be designed with safeguards for effective human oversight, allowing humans to monitor and, where necessary, intervene or override decisions. 

  • “Substantial Modification” is the key to agility, and most people get it wrong: While many believe any significant update to a high-risk AI system triggers a full re-certification, the reality is more nuanced and crucial for innovation. 

    • The General Rule: A high-risk system must undergo a new conformity assessment procedure if it undergoes a “substantial modification”.

    • The Critical Exception: For AI systems that “continue to learn after being placed on the market”, changes “shall not constitute a substantial modification”, provided they were “pre-determined by the provider at the moment of the initial conformity assessment”, and documented in the technical file.

    • Strategic Impact: This means that you can build for continuous improvement and stay agile, but only if your upfront design and documentation rigorously define the boundaries of that evolution. Getting this right from day one is critical: it’s the difference between innovating freely and getting stuck in regulatory loops. A hypothetical sketch of such a pre-determined change envelope follows below.
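
To illustrate what “pre-determined at the moment of the initial conformity assessment” can look like in an engineering workflow, here is a hypothetical Python sketch of a machine-readable change envelope a provider might keep alongside its technical documentation. All field names and thresholds are invented for illustration; the real boundaries belong in your conformity assessment and technical file.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PredeterminedChangeEnvelope:
    """Hypothetical change envelope, declared up front in the technical file.

    Updates that stay inside these bounds are treated as pre-determined
    continuous learning rather than a substantial modification
    (illustrative thresholds, not legal advice).
    """
    system_name: str
    intended_purpose: str
    allowed_retraining: list[str] = field(default_factory=list)  # declared retraining paths
    frozen_components: list[str] = field(default_factory=list)   # must not change without reassessment
    min_validation_accuracy: float = 0.90                        # re-release performance gate

    def is_within_envelope(self, change_description: str, validation_accuracy: float) -> bool:
        """Rough check: the change must be a declared retraining path
        and still meet the declared performance gate."""
        return (change_description in self.allowed_retraining
                and validation_accuracy >= self.min_validation_accuracy)


envelope = PredeterminedChangeEnvelope(
    system_name="cv-screening-assistant",
    intended_purpose="rank job applications for human review",
    allowed_retraining=["monthly retraining on newly labelled applications"],
    frozen_components=["feature schema", "decision threshold"],
)
print(envelope.is_within_envelope("monthly retraining on newly labelled applications", 0.93))  # True
```

Keeping such an envelope machine-readable has a practical benefit: every release candidate can be checked against the declared boundaries automatically, so drift outside the documented scope is caught before it becomes a regulatory question.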

What changes in AI partnerships under the EU AI Act?

AI development and deployment often involve multiple actors across a value chain. The AI Act assigns distinct legal obligations to specific roles like Provider, Deployer, Importer, and Distributor.

  • You must define roles clearly: Who’s the provider? The deployer? The importer? The distributor? The legal responsibilities vary significantly for each role.

  • Due diligence is essential: Vet your partners, and be ready to be vetted in return.

  • Formalize compliance across your supply chain: Your standard supplier contracts are no longer sufficient. A crucial, but often overlooked detail of the Act mandates that you must establish a formal written agreement with any third party supplying tools, services, or components integrated into your high-risk AI system. This agreement must specify the exact “information, capabilities, technical access, and other assistance” they will provide, based on the “generally acknowledged state of the art”, to enable you to fully comply with your obligations under the AI Act.

    • Crucially, while the Act exempts most tools provided under a free and open-source license, your commercial partnerships are directly targeted. This means your supplier contracts transform from routine business documents into essential tools for proving your compliance and managing regulatory risk.

How can AI Act compliance become a competitive advantage?

Here’s the upside: companies that embrace the AI Act’s principles early are already gaining ground.

Organizations that align early with the AI Act’s requirements can benefit:

  • Preferred vendor status: Governments and large enterprises are already prioritizing AI providers who demonstrate compliance.

  • Stronger partnerships: Ethical, compliant AI is becoming a key selection criterion for investors, customers, and collaborators.

  • Reputation and brand equity: Building trustworthy AI reinforces your reputation, and earns public credibility.

What are the practical steps to take right now for the AI Act?

The EU AI Act is rolling out in phases, but the time to prepare is now. Here’s a practical starting point:

  1. Map your AI and perform a high-risk triage: Inventory and catalog all AI systems you’re currently using or developing. For each, conduct a triage to determine its risk level by explicitly checking it against the prohibited practices in Article 5 and the high-risk use cases listed in Annex III of the EU AI Act (a minimal illustrative triage sketch follows after this list).

  2. Conduct a detailed gap analysis for high-risk systems: If high-risk systems are identified, assess your current practices against the full suite of requirements in the AI Act. This goes beyond just data and transparency. You must specifically benchmark against the Act’s mandates for:

    1. Risk management

    2. Data governance

    3. Technical documentation

    4. Record-keeping and logging

    5. Transparency and instructions for use

    6. Human oversight

    This assessment must also cover your organizational processes, such as your Quality Management System. From this, identify gaps and build a comprehensive and actionable compliance roadmap, noting key AI Act deadlines.

  3. Formalize roles and review contracts: With legal input, clearly determine if your organization is a Provider, Deployer, Importer, or Distributor. Then, review all AI-related supplier contracts to incorporate the critical supply chain cooperation clauses we analyzed earlier.

  4. Foster true AI Literacy and Internal Awareness: Train your teams not just on the “what”, but the “why”. They need to understand their specific roles and how their daily work in development, legal, or procurement connects to the Act’s requirements. 
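
To make step 1 concrete, the sketch below shows a hypothetical lightweight AI register with a naive keyword triage against the tiers discussed in this article. The keyword lists are heavily abridged placeholders; a real triage must check each system against Article 5 and Annex III themselves, ideally with legal review.

```python
from dataclasses import dataclass

# Heavily abridged placeholder lists -- a real triage must use Article 5 and
# Annex III of the AI Act, ideally with legal review.
PROHIBITED_KEYWORDS = ["social scoring", "manipulative techniques"]
HIGH_RISK_KEYWORDS = ["employment", "creditworthiness", "medical diagnostics", "law enforcement"]

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    owner: str

def triage(record: AISystemRecord) -> str:
    """Naive keyword triage into the tiers used in this article (illustrative only)."""
    purpose = record.intended_purpose.lower()
    if any(keyword in purpose for keyword in PROHIBITED_KEYWORDS):
        return "prohibited - escalate immediately"
    if any(keyword in purpose for keyword in HIGH_RISK_KEYWORDS):
        return "potentially high-risk - start gap analysis"
    return "lower risk - check transparency obligations"

inventory = [
    AISystemRecord("resume-ranker", "Shortlist candidates for employment interviews", "HR"),
    AISystemRecord("faq-bot", "Answer customer questions about opening hours", "Support"),
]
for record in inventory:
    print(f"{record.name}: {triage(record)}")
```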

Author(s):

Jan-Willem Denys

AI Analyst
