Table of contents

EU AI Act Compliance: A practical guide for engineers building AI systems
AI Act compliance: what is prohibited and what is regulated?
Optimize for data quality and mitigate bias
Ensure transparency and user awareness
AI Act technical documentation: keep clean and up-to-date technical documentation and recordkeeping
Allow for human oversight, monitoring, and accountability 
Regulations on LLMs to comply with the EU AI Act
AI literacy
Conclusions
Q&A
1. What are the main obligations for AI systems under the AI Act?
2. How does the AI Act impact engineers using pre-trained models and third-party APIs?
3. What are the prohibitions of the EU AI Act?
4. What is bias in the EU AI Act?
5. When do companies need to comply with the AI Act?


EU AI Act Compliance: A practical guide for engineers building AI systems


11 Apr 2025

Is your AI system prepared for Europe’s new regulations? The EU AI Act is now in effect, transforming how engineers must design, test, and deploy AI. From prohibited practices to high-risk system requirements, this practical guide outlines everything developers need to know to ensure AI Act compliance before the 2026 deadline.

On August 1, 2024, the European Artificial Intelligence Act (AI Act) officially came into force, setting a new regulatory framework for responsible AI development and deployment across the EU. While certain AI applications have been banned since February 2, most of the Act’s rules and requirements will take effect starting August 2, 2026.

AI Act compliance is now a critical priority. Non-compliance comes at a steep cost, with AI Act fines ranging from 1.5% to 7% of a company's global annual turnover. Given these high stakes, anyone involved in developing, distributing, or deploying AI systems must understand the Act's implications to mitigate risks. This article highlights key aspects that engineers and developers should be aware of to ensure AI models align with the new regulations.

AI Act compliance: what is prohibited and what is regulated?

The first step towards understanding what the AI Act entails is understanding the four risk categories of AI systems that the Act defines and how each is regulated:

  1. Unacceptable risks: The Act outright prohibits certain AI uses considered most harmful to people's rights and safety. This includes social scoring, deceptive or manipulative AI, predictive policing, and indiscriminate scraping and handling of private and sensitive data (such as biometrics, emotions, or surveillance footage). Engineers must be aware that building or enabling any of these functionalities can bar products from entering the EU market or lead to huge fines. The only exception is for systems intended solely for national defence and security.

  2. High-risk applications: This category includes AI applications that could significantly impact health, safety, or fundamental rights. Examples include credit scoring algorithms that determine access to loans or insurance premiums, AI in education that scores exams or college admissions, and medical AI (e.g. in surgery or diagnosis). There are strict rules and compliance steps that companies and developers need to adhere to while developing these systems. The key requirements are covered in the sections below.

  3. Specific transparency risk: Systems like chatbots must explicitly inform users that they are interacting with a machine. Similarly, AI-generated content (such as deepfake images or synthetic videos) must be labeled accordingly to prevent misinformation.

  4. Minimal risk: Most AI systems such as spam filters and AI-enabled video games are not regulated by the AI Act.

[Figure: the AI Act's risk-based approach. Source: European Commission]

Let’s now cover the key requirements for a high-risk application.

Optimize for data quality and mitigate bias

The Act places a strong emphasis on data governance for AI, requiring that high-risk systems be trained on high-quality, representative data to prevent discrimination. In practice, this requires development teams to rigorously assess datasets for bias, inaccuracies, and demographic coverage. To comply with these requirements, teams can leverage several techniques and tools:

  • Ensuring data balance and fair representation

    • Reweighing & resampling: Adjusting dataset distribution by oversampling underrepresented groups or undersampling overrepresented ones to achieve fairness.

    • Feature scaling & normalization: Preventing disparities in model outcomes due to feature scale imbalances across demographic groups.

    • Duplicate & anomaly detection: Using tools like Pandas or PyOD to identify and remove duplicate entries and outliers that could introduce bias.

  • Auditing and bias detection

    • Fairlearn: Assesses fairness metrics such as demographic parity and equalized odds, flagging disparities in model outcomes across different groups.

    • SHAP (SHapley Additive exPlanations): Helps in model explainability by highlighting which features influence predictions the most. If certain attributes disproportionately impact outcomes, this may indicate bias.

  • Post-training explainability & bias detection

    • Custom tools: For example, Superlinear Conformal Tight enables visualization of model uncertainties, which can often be traced back to underlying data issues.

    • Cloud-based solutions: Platforms like Azure offer responsible AI guidelines and tools that use counterfactuals, what-if analysis, causal inference, and fairness assessments to detect and mitigate biases.

By integrating these strategies, development teams can proactively identify and mitigate biases, ensuring AI systems align with ethical and regulatory standards. For a practical example of mitigating bias in ML, have a look at this article about bias mitigation in the HR world.
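As a concrete illustration of the auditing step, the sketch below computes the demographic parity difference by hand: the largest gap in positive-prediction rates across groups. Libraries such as Fairlearn provide this metric out of the box; this pure-Python version only shows the concept, and the loan-approval data is entirely hypothetical.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across sensitive groups.

    A value near 0 suggests the model treats groups similarly on this
    metric; larger values flag a disparity worth investigating.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approved) per applicant group.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 here would be a strong signal to revisit the training data (e.g. via the reweighing or resampling techniques above) before deployment.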

Ensure transparency and user awareness

If an AI system interacts directly with users, it must be made clear that they are engaging with AI and what role the AI plays in the interaction. This applies to chatbots, deepfake images and videos, and synthetic voices, where users need to understand the nature of the content they are consuming.

This can be achieved in different ways:

  • Via visible disclosures before interaction, such as pop-ups, labels, or product descriptions. In chatbots, this can be done at the start of a conversation or within message prompts to clarify that responses are AI-generated.

  • For AI-generated images and videos, disclosure can instead be embedded in the content itself via watermarks or metadata. This both identifies the generated content and makes it possible to trace the original creator if it is distributed. It can be done through visual watermarks, like those used by Sora on AI-generated videos (notice the small GIF animation at the bottom right of the video), or more advanced methods such as C2PA metadata and transformation-resistant watermarks like Meta’s AudioSeal and VideoSeal.
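For the chatbot case, the disclosure pattern is simple to build in: show a clear notice before the first AI-generated message. The sketch below is a minimal illustration; `start_conversation` and the message format are hypothetical, not any specific chat framework's API.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are generated automatically and may contain mistakes."
)

def start_conversation(bot_name: str) -> list[dict]:
    """Open a chat session with the AI disclosure shown before any AI output."""
    return [
        {"role": "system", "text": f"{bot_name}: {AI_DISCLOSURE}"},
    ]

history = start_conversation("SupportBot")
```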

AI Act technical documentation: keep clean and up-to-date technical documentation and recordkeeping

The Act requires maintaining technical documentation that outlines the AI system’s purpose, development, data sources, monitoring, and risk management strategies. In addition, AI systems must generate logs to ensure traceability and accountability throughout their lifecycle. Some official requirements that the documentation needs to include are:

  • Description of the AI system and its components: A well-documented AI system should clearly describe its architecture, purpose, and development process. For this purpose, Model Cards can be used, which provide a structured way to outline the model’s intended use, limitations, and ethical considerations. This ensures that end users understand what the system is designed to do and where it might fall short. Complementing this, system architecture diagrams created with tools like Draw.io or Mermaid.js can visually represent the AI pipeline.

  • Detailed dataset documentation: Datasheets for Datasets can be used to standardize documentation of dataset origin, collection methods, intended use, and ethical considerations. For traceability, Data Version Control (DVC) can track changes to datasets over time.

  • Model monitoring: A model can be monitored and logged during training via tools like TensorBoard, MLflow, or Weights & Biases. Once deployed, Grafana can track key performance indicators (KPIs) and data shifts, while Sentry can log error traces for debugging.

  • Tracked metrics: The system could log various metrics, such as bias metrics (to ensure fairness), performance metrics (to evaluate accuracy and efficiency), operational metrics (for uptime and resource use), and drift detection (to identify changes in data distribution).
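To make the first two points concrete, a Model Card can start as something as simple as a structured record serialized alongside the model. The field names below follow the spirit of Model Cards rather than any mandated AI Act schema, and every value (model name, version, metrics) is hypothetical.

```python
import json

# A minimal Model Card sketch; all values are illustrative placeholders.
model_card = {
    "model_name": "credit-risk-classifier",
    "version": "1.2.0",
    "intended_use": "Pre-screening of consumer loan applications; "
                    "final decisions are reviewed by a human officer.",
    "out_of_scope": ["Mortgage underwriting", "Applicants under 18"],
    "training_data": {"source": "internal-loans-2019-2023", "dvc_rev": "a1b2c3"},
    "metrics": {"accuracy": 0.91, "demographic_parity_difference": 0.04},
    "limitations": "Performance degrades for thin-file applicants.",
}

def save_model_card(card: dict, path: str) -> None:
    """Persist the card next to the model artifact for auditability."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(card, f, indent=2)
```

Versioning this file together with the dataset (e.g. in the same DVC-tracked repository) keeps the documentation and the artifacts it describes in sync.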

Allow for human oversight, monitoring, and accountability 

When dealing with high-impact AI systems, human oversight is mandatory, meaning that any critical decision shouldn’t be left to the algorithm alone without the possibility for human intervention. This can take different forms, such as manual override mechanisms, review workflows, or fail-safe protocols that pause or stop automated decisions under certain conditions. 

Automated systems that can significantly affect individuals (jobs, health, legal status, etc.) are expected to have either:

  • Human-in-the-loop (HITL): A person actively reviews and approves AI decisions before they take effect. This is common in high-stakes applications like medical diagnosis, credit scoring, or luggage screening. Here, the AI is not allowed to take the final decision on its own.

  • Human-on-the-loop (HOTL): The AI operates autonomously, but a human monitors its behavior and can step in if needed. A clear example is autonomous driving, where humans must always be able to oversee and overrule the vehicle’s commands.
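A HITL gate can be as simple as a routing rule that never lets the model finalize a consequential decision. The sketch below is one possible design, not a prescribed pattern; the `Decision` type, statuses, and the 0.90 threshold are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    ai_verdict: str       # e.g. "approve" / "reject"
    confidence: float

def route_decision(decision: Decision, threshold: float = 0.90) -> str:
    """Human-in-the-loop gate: the AI never finalizes a consequential call.

    High-confidence outputs go to a (faster) human confirmation queue;
    low-confidence outputs are escalated for full manual review.
    """
    if decision.confidence >= threshold:
        return "queued_for_human_confirmation"
    return "escalated_for_manual_review"

status = route_decision(Decision("app-42", "reject", 0.72))
```

Note that both branches end with a human: the threshold only changes how much scrutiny the case receives, never whether a person is involved.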

Regulations on LLMs to comply with the EU AI Act

The use of general-purpose AI, such as large language models and image generators, is also covered by the Act. Providers of these models are required to assess and mitigate risks of misuse and to disclose key information to downstream developers. This ensures that those integrating these models can comply with the EU AI Act regulations.

For AI/ML engineers leveraging pre-trained models or third-party APIs (e.g., OpenAI’s GPT-4 or Midjourney), this means paying closer attention to model integration, training, and deployment. Engineers must consult documentation and guidelines from upstream providers to ensure EU AI Act compliance. Additionally, as an integrator, you bear responsibility for how these models are used within your product. This may include:

  • Implementing content filters to prevent harmful or misleading outputs.

  • Defining clear usage policies to align with ethical and legal standards.

  • Applying additional fine-tuning or safety mechanisms to mitigate risks specific to your application.
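A basic version of the first point is a filter wrapped around the upstream model call. The sketch below uses a naive keyword blocklist purely for illustration; real deployments would rely on the provider's moderation endpoint or a dedicated classifier, and `call_model` is a hypothetical stand-in for a third-party API.

```python
# Hypothetical blocklist of disallowed use cases (illustrative only).
BLOCKED_TOPICS = ("social scoring", "facial recognition scraping")

def call_model(prompt: str) -> str:
    """Stand-in for a third-party LLM API call (e.g. a chat-completion endpoint)."""
    return f"Model answer to: {prompt}"

def safe_generate(prompt: str) -> str:
    """Refuse requests that match the usage policy before calling the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Request declined: this use case is not permitted by our usage policy."
    return call_model(prompt)
```

Filtering the model's output with the same check (not only the prompt) is usually needed as well, since prohibited content can appear in a response to an innocuous request.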

AI literacy

The European AI Act mandates that both providers and deployers of AI systems, regardless of risk level, ensure that anyone dealing with the operation and use of these systems has adequate AI literacy. This means having the essential knowledge to use, understand, and critically assess AI, covering basic concepts, technical skills, ethical and legal awareness, and practical know-how. Here, we explore practical steps to build team-wide understanding, comply with regulations, and confidently navigate the complexities of AI integration.

Conclusions

In this article we’ve covered some of the key topics in the AI Act that engineers need to be aware of, but this is just a glimpse of the aspects included within the legislation. While the full details go much deeper, knowing the Act exists and starting to adapt before the August 2, 2026 deadline is crucial.

Q&A

1. What are the main obligations for AI systems under the AI Act?

The Act categorizes AI systems by risk levels, with high-risk AI requiring strict compliance in areas like data governance, transparency, human oversight, and continuous monitoring. Providers must also maintain technical documentation and log activities for traceability.

2. How does the AI Act impact engineers using pre-trained models and third-party APIs?

Engineers integrating general-purpose AI (GPAI), like GPT-4 or Midjourney, must review provider documentation, apply content filtering, and define clear usage policies to prevent misuse. Compliance is a shared responsibility between model providers and downstream users.

3. What are the prohibitions of the EU AI Act?

The EU AI Act explicitly bans certain AI practices that are considered to pose unacceptable risks to fundamental rights and public safety. These are eight practices:

  • Harmful AI-based manipulation and deception

  • Harmful AI-based exploitation of vulnerabilities

  • Social scoring

  • Individual criminal offence risk assessment or prediction

  • Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases

  • Emotion recognition in workplaces and education institutions

  • Biometric categorisation to deduce certain protected characteristics

  • Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

These practices are strictly prohibited, and using or developing systems that fall into these categories is not allowed under any circumstances within the EU.

4. What is bias in the EU AI Act?

Bias refers to unfair or discriminatory outcomes from AI systems, especially against protected groups. The Act requires developers to use representative data and actively detect and mitigate bias, particularly in high-risk applications.

5. When do companies need to comply with the AI Act?

The AI Act entered into force on August 1, 2024, with phased enforcement depending on the AI system’s risk level until August 2, 2026. Companies should assess their AI use cases now and start implementing the necessary transparency, monitoring, and compliance measures to avoid last-minute adjustments.

Author(s):

Mattia Molon

Computer Vision Team Lead & Machine Learning Engineer



Locations

Brussels HQ

Central Gate

Cantersteen 47



1000 Brussels

Ghent

Planet Group Arena

Ottergemsesteenweg-Zuid 808 b300

9000 Gent

© 2024 Superlinear. All rights reserved.
