Understanding the fundamentals of the AI Act: An introductory guide for businesses

In a world where artificial intelligence is rapidly revolutionizing all sectors, businesses face a major new challenge: understanding and complying with European AI regulation. The AI Act represents the world’s first comprehensive legislation specifically dedicated to artificial intelligence. For French organizations, this new regulation adds to existing obligations such as the GDPR, with which it shares several fundamental principles.

This new regulation is profoundly reshaping the European and global digital landscape, establishing strict standards for the use and development of AI systems. Whether you are an innovative startup or a large enterprise, understanding this legal framework is essential to ensure your competitiveness while respecting the fundamental rights of European citizens.

The European AI Act: Origins and Strategic Objectives

The European AI Act, officially Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence, marks a decisive step in the global governance of AI technologies. The result of a legislative process launched in 2021 and concluded in 2024, this regulation is part of the broader European digital strategy, alongside other initiatives such as the Data Governance Act and the Digital Services Act.

The Ambitions of the European Legislator

The AI Regulation pursues several complementary objectives:

  • Make the European Union a global leader in the development of ethical and responsible AI
  • Protect the fundamental rights and safety of European citizens
  • Strengthen trust in AI technologies
  • Establish a harmonized legal framework across Europe
  • Encourage innovation while ensuring appropriate safeguards

This balanced approach between innovation and protection reflects the European vision of “human-centered AI.” Unlike jurisdictions that either prioritize innovation at all costs or impose blanket restrictions, Europe has chosen a middle path: obligations proportionate to each system’s assessed level of risk.

A Global Strategic Positioning

With the AI Act, the EU positions itself as a pioneer in the regulation of artificial intelligence. This proactive approach aims to influence global standards, potentially creating a “Brussels effect” similar to what was seen following the adoption of the GDPR. For businesses, this dynamic offers an opportunity to turn regulatory compliance into a competitive advantage in international markets, where business ethics is becoming a key differentiating factor.

A Risk-Based Approach: The Core of the Framework

The AI Act adopts a tiered approach based on risk levels, establishing a framework proportional to the potential dangers of each AI system.

The Four Risk Levels

The regulation distinguishes four main categories of AI systems:

1. Unacceptable Risk

These systems are considered a clear threat to fundamental rights and are strictly prohibited. They include:

  • Systems exploiting individuals’ vulnerabilities
  • General-purpose social scoring systems
  • Real-time biometric identification technologies in public spaces (with certain exceptions)
  • Manipulative or exploitative AI systems

For companies, it is therefore crucial to identify, as early as the design phase, any feature that would place a system in this category.

2. High Risk

This category includes AI systems used in critical areas such as:

  • Essential infrastructures
  • Education and vocational training
  • Employment and workforce management
  • Access to essential services
  • Law enforcement
  • Migration management
  • Administration of justice

High-risk systems are subject to strict obligations, particularly in terms of conformity assessments, similar to Data Protection Impact Assessments (DPIA) required by the GDPR.

3. Limited Risk

These systems must meet specific transparency obligations. For example, companies using chatbots or deepfakes must clearly inform users that they are interacting with an AI system or that the content is artificially generated.
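As an illustration, the disclosure itself can be a small engineering task. The Python sketch below shows one possible way to prepend an AI notice to a chatbot’s first reply; the function name, message wording, and first-turn logic are illustrative assumptions, since the regulation mandates the disclosure but not its form.

```python
# Minimal sketch of the "limited risk" transparency obligation for a
# chatbot. The AI Act requires that users be informed they are talking
# to an AI system; the message text and API below are illustrative
# assumptions, not prescribed by the regulation.

AI_DISCLOSURE = (
    "Please note: you are interacting with an automated AI assistant, "
    "not a human agent."
)

def wrap_reply(reply: str, is_first_turn: bool) -> str:
    """Prepend the AI disclosure to the first message of a conversation."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_reply("Hello! How can I help you today?", is_first_turn=True))
```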

4. Minimal Risk

Most AI applications fall into this category and are only subject to minimal requirements or voluntary codes of conduct. This approach helps avoid overregulation of low-risk technologies.

Implications for Business Strategy

This tiered approach offers companies a clear framework for assessing their existing and future AI systems. Early identification of the risk level associated with each application allows organizations to anticipate regulatory obligations and incorporate compliance requirements from the design phase (compliance by design).
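To make this concrete, here is a minimal sketch of what an internal AI inventory keyed by risk tier might look like. The tier names follow the regulation, but the example systems and their assigned tiers are hypothetical: a real classification requires legal analysis of the prohibited practices (Article 5) and the high-risk use cases (Annex III).

```python
# Hypothetical AI inventory mapped to the AI Act's four risk tiers.
# Tier names come from the regulation; the example systems and their
# assignments are illustrative only and do not replace legal analysis.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

AI_INVENTORY = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,  # Article 5 practice
    "cv-screening-tool": RiskTier.HIGH,              # employment, Annex III
    "customer-chatbot": RiskTier.LIMITED,            # disclosure required
    "inbox-spam-filter": RiskTier.MINIMAL,
}

for system, tier in AI_INVENTORY.items():
    print(f"{system:24} -> {tier.name}: {tier.value}")
```

Maintaining such an inventory from the design phase makes it easier to trigger the right obligation (prohibition, conformity assessment, disclosure) before a system ships.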

For organizations already working with an external DPO, expanding this role to also cover AI Act compliance is an effective strategic approach, creating synergies between different regulatory requirements.

Specific Obligations for High-Risk AI Systems

AI systems classified as high risk are subject to a comprehensive set of requirements aimed at ensuring their safety and respect for fundamental rights.

Enhanced Governance and Documentation

Providers and deployers (to use the regulation’s terms) of high-risk systems must implement:

  • A rigorous risk management system
  • Comprehensive technical documentation
  • Detailed activity logs
  • Effective human oversight
  • Robust cybersecurity measures

These requirements echo those of the DORA (Digital Operational Resilience Act) for the financial sector, creating regulatory synergies for companies affected by both texts.
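The “detailed activity logs” requirement in the list above lends itself to a simple illustration. The sketch below appends one structured, traceable record per automated decision to an append-only file; the field names and JSON Lines format are assumptions, as the regulation mandates logging and traceability but leaves the concrete schema to the provider.

```python
# Sketch of an append-only decision log for a high-risk AI system.
# The AI Act requires traceable activity logs; this record schema
# (field names, JSON Lines file) is an illustrative assumption.

import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # hypothetical log location

def log_decision(model_version: str, inputs: dict, output: str,
                 overseer: str) -> dict:
    """Write one traceable record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_overseer": overseer,  # who supervises this decision
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision("credit-model-1.4.2", {"income": 42000}, "approved", "j.doe")
```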

Transparency and Human Oversight

The AI Act places particular emphasis on:

  • Explaining decisions made by AI
  • Ensuring adequate human supervision
  • Providing users with the ability to understand and, if necessary, contest automated decisions

These principles align with data subjects’ rights under the GDPR, including the right to human intervention and the right to an explanation of automated decisions.
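One common way to operationalize these principles is a human-review gate: low-confidence or contested decisions are routed to a person rather than applied automatically. The sketch below shows one such mechanism under assumed names and thresholds; the AI Act requires effective human oversight but does not mandate this particular design.

```python
# Sketch of a human-review gate illustrating human oversight and the
# ability to contest automated decisions. Threshold, queue, and function
# names are illustrative assumptions, not requirements of the AI Act.

REVIEW_QUEUE: list[dict] = []

def decide(application_id: str, score: float, threshold: float = 0.8) -> str:
    """Auto-apply only confident decisions; escalate the rest to a human."""
    if score >= threshold:
        return "auto-approved (user may still appeal)"
    REVIEW_QUEUE.append({"id": application_id, "score": score,
                         "reason": "low model confidence"})
    return "escalated to human reviewer"

def contest(application_id: str) -> str:
    """Honor a user's appeal: queue the decision for human re-examination."""
    REVIEW_QUEUE.append({"id": application_id, "reason": "user appeal"})
    return "queued for human re-examination"

print(decide("APP-001", score=0.92))
print(contest("APP-001"))
```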

Mandatory Conformity Assessment

Before being placed on the market, high-risk systems must undergo a rigorous conformity assessment. Depending on the system type, this assessment can be conducted internally or may require the involvement of a notified body.
