LEGAL NODES

Become compliant with legal guidance from AI Act experts


The EU AI Act, the world's first comprehensive regulatory framework for AI, prohibits certain AI uses outright and imposes strict governance, risk management and transparency requirements on AI operators.

The Act's penalty provisions apply from 2 August 2025. Fines range from EUR 7.5 million, or 1% of total worldwide annual turnover, up to EUR 35 million, or 7% of total worldwide annual turnover, depending on the type of infringement; for most infringements, the higher of the fixed amount and the turnover percentage applies.
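To illustrate how these caps combine, the applicable maximum is generally the higher of the fixed amount and the turnover percentage. A minimal sketch (the function name is hypothetical; the figures shown for the prohibited-practices tier are taken from the text above):

```python
def fine_cap_eur(fixed_cap_eur: float, turnover_pct: float,
                 worldwide_turnover_eur: float) -> float:
    """Return the maximum possible fine for an undertaking: the higher of
    the fixed cap and the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, worldwide_turnover_eur * turnover_pct / 100)

# Prohibited-practices tier: up to EUR 35 million or 7% of turnover.
# For a company with EUR 1 billion turnover, 7% (EUR 70 million)
# exceeds the fixed cap, so the turnover-based amount applies.
cap = fine_cap_eur(35_000_000, 7, 1_000_000_000)
```

For a smaller company (say EUR 100 million turnover), 7% is only EUR 7 million, so the EUR 35 million fixed cap would be the relevant ceiling instead.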

To support businesses in addressing these regulatory obligations, we have prepared this article presenting a step-by-step roadmap to compliance, designed to guide organisations through the key measures required under the EU AI Act and help them systematically align their practices with the Regulation.

Step 1: AI systems identification and mapping 

Businesses should conduct an internal audit of all AI technologies used within the organisation. 

This includes internally developed systems, third-party AI solutions, embedded AI components in products, and general-purpose AI models used in business operations.

For in-house developments, it is essential to fully document each AI system or model.

For governance and compliance teams, thorough documentation provides evidence that AI systems have been properly designed, tested, and monitored. It supports audits, handovers, and updates under the EU AI Act and standards such as ISO/IEC 42001.

The documents used for audit may vary depending on the organisation, AI system, or regulatory context, but they should generally include items such as:

- Model documentation – detailing the purpose, design, training data, performance metrics, limitations, and risk assessments of each AI system;

- Security certifications – for example, SOC 2 or ISO 27001, demonstrating that information security and data protection measures are in place;

- AI governance disclosures – outlining policies, oversight structures, and procedures for responsible AI deployment;

- Terms of use and indemnity clauses – specifying the rights and responsibilities of users, limits of liability, and legal safeguards related to AI outputs.

The internal audit of AI systems does not require a specific form and may be maintained in any structured or flexible format, as long as it clearly documents all AI technologies, their purposes, risks, and compliance measures.
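One way to keep such an audit structured is a simple machine-readable inventory, one record per AI system. The sketch below is purely illustrative: the class and field names are assumptions, not fields prescribed by the AI Act or ISO/IEC 42001, but they mirror the documentation items listed above.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    origin: str                 # e.g. "in-house", "third-party", "embedded", "GPAI"
    purpose: str
    training_data: str
    performance_metrics: dict = field(default_factory=dict)
    limitations: list = field(default_factory=list)
    risk_assessment: str = "pending"          # updated once Step 3 is complete
    security_certifications: list = field(default_factory=list)  # e.g. ["ISO 27001"]

# Example inventory with one hypothetical system
inventory = [
    AISystemRecord(
        name="support-chatbot",
        origin="third-party",
        purpose="customer support automation",
        training_data="vendor-provided, undisclosed",
    ),
]
```

Whether a spreadsheet, a register like this, or a dedicated governance tool is used matters less than capturing each system's purpose, risks, and compliance status consistently.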

Step 2: Determining the organisation's role under the AI Act

Once AI systems are identified, organisations should determine their role within the AI value chain. Under the AI Act, an entity may act as a provider, deployer, importer, distributor, or authorised representative, and each role triggers different regulatory obligations. To determine this, the company should first consider the material scope of Regulation (EU) 2024/1689, which defines the roles of entities that develop, distribute, deploy or place AI systems on the market.

Next, the territorial scope should be considered: the AI Act applies to providers, deployers, importers, distributors, and authorised representatives operating within the EU, as well as to providers and deployers located outside the EU if the outputs of the AI system are used in the EU. For example, a company established in a third country that places an AI system on the Union market, or whose system produces outputs used by EU customers, is subject to the Regulation.

By systematically applying these criteria, an organisation can identify its responsibilities under the AI Act and further document its role.
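The two checks described above can be sketched as a rough triage helper. This is a simplification for illustration, not a legal test; the function, its parameters, and the role summaries are hypothetical paraphrases of the Regulation's definitions:

```python
# Condensed paraphrases of the main value-chain roles (not the Act's full definitions)
ROLES = {
    "provider": "develops an AI system (or has one developed) and places it "
                "on the market or puts it into service under its own name",
    "deployer": "uses an AI system under its own authority in a professional context",
    "importer": "places on the EU market an AI system from a non-EU entity",
    "distributor": "makes an AI system available on the EU market "
                   "without being its provider or importer",
}

def in_scope(established_in_eu: bool, places_on_eu_market: bool,
             output_used_in_eu: bool) -> bool:
    """Rough territorial-scope triage: the AI Act reaches EU-based operators
    and non-EU providers/deployers whose systems or outputs reach the EU."""
    return established_in_eu or places_on_eu_market or output_used_in_eu
```

A third-country company whose system's outputs are used by EU customers would, under this triage, fall within scope even without any EU establishment.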

Step 3: Classification and verification of AI systems against prohibited AI practices

Once applicability is confirmed, each system should be evaluated against the AI Act's risk-based framework. The Regulation classifies AI systems according to the level of risk they pose to fundamental rights, safety, and public interests, and distinguishes four categories of risk:

i. Unacceptable risk – AI systems that pose a clear threat to fundamental rights and are therefore prohibited under Article 5 (Chapter II);

ii. High risk – AI systems that may significantly impact health, safety, or fundamental rights and are subject to stringent ex-ante and ex-post compliance obligations pursuant to Article 6 and Annex III of Regulation (EU) 2024/1689;

iii. Limited risk – AI systems that are permitted but subject to specific transparency obligations, as set out in Article 50 (Chapter IV);

iv. Minimal or no risk – AI systems that do not fall within the preceding categories and are generally not subject to regulatory obligations under the AI Act. 

Systems posing an unacceptable risk, such as those using manipulative subliminal techniques, social scoring, or real-time biometric identification in public spaces, are prohibited.

High-risk systems, including safety components in products, remote biometric identification, or AI affecting critical infrastructure, education, or employment, require strict compliance measures such as risk management, human oversight, and conformity assessments.

Limited-risk systems are allowed but subject to transparency obligations, while minimal or no-risk systems generally do not trigger regulatory requirements.

The assessment involves reviewing the intended purpose of the AI system, the type of data it processes, and the outputs it generates to determine whether it falls under prohibited or high-risk categories. For example, a system that processes machine operational data but does not monitor or evaluate individual workers would not be classified as high-risk for employment purposes. Each criterion must be individually evaluated, and the organisation should record whether the system meets, partially meets, or does not meet the thresholds for each risk category.
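The triage order described above, prohibited practices first, then high-risk uses, then transparency-relevant cases, can be sketched as a toy decision function. This is an illustration of the classification logic only, not a substitute for assessing each system against the Act's detailed criteria:

```python
def classify(prohibited_practice: bool, high_risk_use: bool,
             transparency_relevant: bool) -> str:
    """Toy triage in the order the AI Act applies its categories:
    prohibited practices trump everything, then high-risk uses,
    then transparency-only cases, defaulting to minimal risk."""
    if prohibited_practice:
        return "unacceptable"
    if high_risk_use:
        return "high"
    if transparency_relevant:
        return "limited"
    return "minimal"

# The machine-data example from the text: no prohibited practice, no
# monitoring of individual workers, no transparency trigger.
machine_data_system = classify(False, False, False)
```

Recording the answer to each criterion (met, partially met, not met) alongside the resulting category gives auditors a traceable rationale for the classification.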

Step 4: Compliance measures implementation

Once AI systems have been classified according to their risk level, organisations must implement the appropriate compliance measures to meet the obligations under the AI Act. Compliance requirements vary depending on the system’s risk category, and the measures must align with the scope, purpose, and use of each AI system.

For unacceptable-risk AI systems, use is strictly prohibited within the European Union. Applications such as government social scoring, subliminal behaviour manipulation, or real-time biometric surveillance in public spaces (unless explicitly authorised by judicial authorities) are deemed incompatible with fundamental rights and democratic values. Any attempt to deploy these systems exposes the organisation to severe penalties, and therefore they must be removed from use or disabled entirely.

High-risk AI systems are subject to the strictest regulatory obligations. Organisations must implement a comprehensive set of measures, including: establishing a documented AI risk management system, maintaining technical documentation, ensuring transparency and traceability of outputs, assigning competent human oversight, and monitoring systems post-deployment. Additional obligations may include conducting impact assessments, performing conformity assessments, and obtaining certification from notified bodies.

For limited-risk AI systems, transparency measures are the primary compliance requirement. Users must be informed clearly when interacting with AI, particularly in cases of automated content generation, emotion recognition, or chatbots. Organisations should implement clear user notifications, labels, or other mechanisms to indicate that content is AI-generated, ensuring informed consent and preventing deception.

Minimal-risk AI systems, which do not materially affect safety, rights, or transparency, are largely exempt from binding obligations. However, organisations are encouraged to follow best practices, such as maintaining basic oversight, monitoring system performance, and ensuring accountability for AI outputs. Examples include recommendation engines, spam filters, logistics optimization tools, and machine translation software.
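The per-category obligations discussed above can be condensed into a simple lookup that serves as a starting checklist once a system's risk category is known. The measure lists below are shorthand paraphrases of the text above, not the Regulation's exhaustive requirements:

```python
# Condensed per-category checklist (illustrative, not exhaustive)
MEASURES = {
    "unacceptable": ["remove from use or disable entirely"],
    "high": ["documented risk management system", "technical documentation",
             "transparency and traceability of outputs", "human oversight",
             "post-deployment monitoring", "conformity assessment"],
    "limited": ["notify users they are interacting with AI",
                "label AI-generated content"],
    "minimal": ["voluntary best practices and basic oversight"],
}

def required_measures(risk_category: str) -> list:
    """Return the baseline checklist for a classified system."""
    return MEASURES[risk_category]
```

Pairing this lookup with the inventory from Step 1 makes it straightforward to report, per system, which measures are in place and which remain outstanding.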

Across all categories, organisations – including those outside the EU – must consider the extraterritorial reach of the AI Act. Non-European providers placing AI systems on the EU market must comply with the same obligations and, if necessary, appoint a local legal representative.

Get EU AI Act Compliant with Legal Guidance

With the penalty provisions applying from August 2025, businesses in scope of the EU AI Act must get their affairs in order and comply or risk sanctions.

Legal Nodes supports you with transparent, end-to-end guidance on EU AI Act compliance, from assessing your organisation's role under the EU AI Act to drafting and reviewing your system incident response policy.

Schedule a free discovery call to speak with our EU AI Act experts.
