

EU AI Act Compliance


The European Union’s Artificial Intelligence Act (EU AI Act) is one of the world’s first legal frameworks designed to regulate artificial intelligence. It aims to ensure that AI systems developed, marketed, or used within the EU are safe, respect fundamental rights, and promote trustworthy innovation.

A critical part of the European Union’s overall digital strategy, the act entered into force on August 1, 2024. With the European AI Office and the European Artificial Intelligence Board activated on August 2, 2025, to handle enforcement and coordination across member states, now is the time to make sure you understand the act and how it affects your organization. Additional provisions take effect in 2026 and 2027.

Penalty regime now in effect

The most important takeaway is this: The EU AI Act’s penalty regime is now in effect. Organizations that fail to comply with the act, or whose compliance is found to be insufficient, may be fined according to the following tiers:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for infringements relating to prohibited AI practices
  • Up to €15 million or 3% of global annual turnover for infringements of certain other obligations under the act
  • Up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to public authorities (DLA Piper)

Note that penalties applicable to GPAI model providers will only apply as of August 2, 2026.
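To make the scale of these fines concrete, here is a minimal sketch (in Python, illustrative only, not legal guidance) of how each cap combines a fixed amount with a percentage of turnover. The amounts and percentages come from the tiers above; the function name, tier labels, and example turnover figure are assumptions added for illustration.

    # A minimal sketch of how the EU AI Act's fine caps scale with company
    # size. Tier amounts and percentages come from the article above; the
    # function name and tier labels are illustrative assumptions.

    def max_fine_eur(annual_turnover_eur: float, tier: str) -> float:
        """Return the maximum possible fine for a given infringement tier.

        The cap is the higher of a fixed amount and a percentage of global
        annual turnover, per the penalty tiers listed above.
        """
        tiers = {
            "prohibited_practices": (35_000_000, 0.07),    # Article 5 violations
            "other_obligations": (15_000_000, 0.03),
            "misleading_information": (7_500_000, 0.01),
        }
        fixed_cap, pct_of_turnover = tiers[tier]
        return max(fixed_cap, pct_of_turnover * annual_turnover_eur)

    # Example: a firm with €2 billion in global annual turnover
    print(max_fine_eur(2_000_000_000, "prohibited_practices"))  # 140000000.0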

Levels of risk

The act assigns different requirements for different levels of risk, with more requirements assigned to AI systems that pose higher risks.

Minimal risk: AI systems such as spam filters and AI-enabled video games are considered to present minimal risk, so the act imposes no requirements on them.

Limited risk: AI systems in the limited-risk category have specific obligations to inform users about what they are interacting with. For example, content that’s generated through an AI platform, such as images, videos, and audio files that mimic real events or people, must be labeled as such. And chatbots and digital assistants must clearly inform users that they are interacting with a machine and give them the option to opt out.

High risk: High-risk AI systems are those with the potential to cause harm or to impact human rights, safety, or health. Systems in this category include biometric and biometrics-based systems; AI used in education, employment, and worker management; law enforcement, asylum, and migration control programs; and systems used for critical infrastructure management.

While all high-risk AI applications have to be assessed before being placed on the market and throughout their life cycle, certain high-risk AI systems need to be registered in an EU database.

Unacceptable risk: Systems that pose an unacceptable risk are those that clearly pose a danger to the rights and safety of people. An example is a facial recognition system used in a public space. Programs that manipulate the behavior of people or specific vulnerable groups — for example, a voice-activated toy that encourages a child to engage in dangerous behavior — are also included. While there are some exceptions for law enforcement purposes, most of these AI applications are now banned in the EU. These prohibited AI practices are covered in Article 5 of the EU AI Act.
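As a quick summary of the taxonomy above, the sketch below encodes the four risk tiers and their headline obligations as a simple data structure. The example systems are drawn from the article’s descriptions; the tier names, mapping, and structure are illustrative assumptions, not an official classification tool.

    # A hypothetical sketch of the act's four-tier risk taxonomy.
    # Tiers and obligations summarize the article; this is not an
    # official or complete classification of any real system.

    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "minimal"            # e.g., spam filters, AI-enabled video games
        LIMITED = "limited"            # transparency duties (disclose AI, label content)
        HIGH = "high"                  # assessed before market entry and over the life cycle
        UNACCEPTABLE = "unacceptable"  # prohibited under Article 5

    OBLIGATIONS = {
        RiskTier.MINIMAL: "No requirements imposed.",
        RiskTier.LIMITED: "Inform users they are interacting with AI; label generated content.",
        RiskTier.HIGH: "Pre-market assessment, life-cycle monitoring, and, for certain "
                       "systems, registration in an EU database.",
        RiskTier.UNACCEPTABLE: "Banned in the EU, with narrow law-enforcement exceptions.",
    }

    # Illustrative examples based on the article's descriptions
    EXAMPLES = {
        "spam filter": RiskTier.MINIMAL,
        "customer-service chatbot": RiskTier.LIMITED,
        "CV-screening tool for hiring": RiskTier.HIGH,
        "facial recognition in a public space": RiskTier.UNACCEPTABLE,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.value} -> {OBLIGATIONS[tier]}")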

General-purpose AI models

High-impact general-purpose AI models that can pose systemic risk, such as GPT-4, must be evaluated, and any serious incidents must be reported to the European Commission.

The EU AI Act timeline

The timeline below lists the act’s key provisions and dates, including the two remaining provisions that take effect in 2026 and 2027.

Key provisions of the EU Artificial Intelligence Act

  • August 1, 2024: EU AI Act enters into force. Through this act, the EU Parliament aims to ensure AI systems are “safe, transparent, traceable, non-discriminatory, and environmentally friendly.”

  • February 2, 2025: Ban on prohibited AI practices (applications that pose unacceptable risk) takes effect. Applies to providers and deployers of AI systems. Fines for noncompliance can reach €35 million or 7% of total worldwide annual turnover, whichever figure is higher.

  • February 2, 2025: AI literacy requirements take effect. Providers and deployers of AI systems are obligated to ensure that their staff have the skills, knowledge, and understanding to operate the relevant AI systems.

  • August 2, 2025: Obligations for general-purpose AI (GPAI) models and new governance structures take effect. The European AI Office and European Artificial Intelligence Board are activated to oversee enforcement and coordination across member states. National authorities must be designated by this date. Transparency, documentation, and copyright compliance obligations kick in for GPAI model providers.

  • August 2, 2026: Most provisions become fully applicable. The broader framework, including obligations for high-risk AI systems, comes into effect.

  • August 2, 2027: Provisions for a specific type of high-risk AI system take effect. This is the final compliance deadline; obligations for high-risk systems will be fully enforceable as of this date.

Sources: European Parliament, Mayer Brown, Vinson & Elkins LLP
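For teams tracking these dates programmatically, here is a minimal sketch that reports which milestones from the timeline above apply as of a given date. The dates are taken from the timeline; the function and data structure are illustrative assumptions.

    # A minimal sketch: which EU AI Act milestones apply as of a given date?
    # Dates come from the timeline above; everything else is illustrative.

    from datetime import date

    MILESTONES = [
        (date(2024, 8, 1), "Act enters into force"),
        (date(2025, 2, 2), "Ban on prohibited AI practices; AI literacy requirements"),
        (date(2025, 8, 2), "GPAI model obligations; AI Office and AI Board activated"),
        (date(2026, 8, 2), "Most provisions, including high-risk rules, fully applicable"),
        (date(2027, 8, 2), "Final deadline; remaining high-risk obligations enforceable"),
    ]

    def provisions_in_effect(as_of: date) -> list[str]:
        """Return the milestones whose effective dates have passed."""
        return [label for effective, label in MILESTONES if effective <= as_of]

    for provision in provisions_in_effect(date(2025, 9, 1)):
        print(provision)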

Codes of Practice

Those who provide and deploy AI applications should be aware of the Codes of Practice (“the Code”) outlined in Article 56 of the act. These guidelines help ensure compliance with the act between the time the GPAI model provider obligations come into effect and the time that harmonized European standards are adopted in August 2027. The Code is broken down into three sections: Transparency, Copyright, and Safety and Security.

Transparency: Signatories to the Code commit to developing and maintaining up-to-date model documentation and to ensuring the quality, security, and integrity of the documented information. Information must be provided to downstream users upon request.

Copyright: Signatories commit to creating, implementing, and keeping up to date a copyright policy. The policy must ensure that web crawlers used by AI models reproduce and extract only lawfully accessible, copyright-protected content. Signatories must also comply with rights reservations (one possible technical approach is sketched after this section), mitigate the risk of producing output that infringes copyrights, designate a point of contact, and allow complaints of noncompliance to be submitted.

Safety and Security: This applies only to GPAI models with systemic risk and commits providers to adopting and implementing a Safety and Security Framework that lays out the details for risk assessment, mitigation, and governance to keep systemic risks within acceptable levels.
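On the copyright commitments above, one commonly used machine-readable rights-reservation signal is a site’s robots.txt file. The sketch below shows one way a training-data crawler might check it before fetching a page; the user agent string “ExampleAIBot” and the check itself are illustrative assumptions, not a mechanism mandated by the Code.

    # A minimal sketch of honoring robots.txt before crawling a page for
    # AI training data. Illustrative only; the Code does not prescribe this
    # exact mechanism, and real crawlers layer in other rights signals.

    from urllib import robotparser
    from urllib.parse import urlparse

    def may_crawl(url: str, user_agent: str = "ExampleAIBot") -> bool:
        """Check the target site's robots.txt before fetching content."""
        parts = urlparse(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()  # fetches and parses the site's robots.txt
        return rp.can_fetch(user_agent, url)

    if may_crawl("https://example.com/articles/some-page"):
        print("Fetching is permitted by robots.txt")
    else:
        print("Rights reservation detected: skip this URL")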

The scope of the EU AI Act

The EU AI Act can apply to companies and individuals outside the EU. According to coverage from Cooley, the act applies to “providers, deployers, providers and deployers, importers and distributors, and product manufacturers,” which can include organizations based outside the EU.

For more information about other regulations that may impact your organization, visit our Regulatory Hub.


Disclaimer: The information provided on or through this website is for informational purposes only and does not constitute legal advice. Safeguard Global expressly disclaims any liability with respect to any warranty or representation concerning the information contained herein, including its interpretation, accuracy, and/or completeness and any loss of meaning in transmission or translation.
