As artificial intelligence profoundly transforms our society, Europe is once again positioning itself as a pioneer in digital regulation. Following the success of the General Data Protection Regulation (GDPR), the European Union has adopted the AI Act, the world’s first legislation specifically regulating artificial intelligence systems. For companies already engaged in GDPR compliance, this new regulation raises many questions. How do these two regulatory frameworks interact? What are their similarities and differences? How can you optimize your compliance strategy to effectively meet both requirements? Our GDPR audit experts offer a detailed analysis to help you understand the challenges and prepare your organization for this new regulatory era.
The foundations and objectives of the two regulations
GDPR: A revolution in personal data protection
The GDPR, applicable since May 25, 2018, marked a major turning point in personal data protection. This regulation pursues several key objectives:
- Strengthening individual rights over their personal data
- Holding data-processing organizations accountable
- Harmonizing the legal framework across the European Union
- Creating a climate of digital trust
The GDPR applies to any organization, public or private, that processes the personal data of individuals in the European Union, regardless of where the organization itself is located. Its core principle is accountability: organizations must be able to demonstrate compliance at any time.
The AI Act: A risk-based approach to regulating AI
Adopted in 2024, the AI Act is part of Europe’s broader strategy for ethical, human-centered digital transformation. This regulation aims to:
- Ensure the safety and fundamental rights of European citizens regarding AI systems
- Foster innovation and AI development in Europe
- Prevent fragmentation of the internal market
- Establish a framework of trust for AI adoption
Unlike the GDPR, which applies uniformly to all personal data processing, the AI Act follows a risk-based approach. It categorizes AI systems into different risk levels: unacceptable, high, limited, or minimal.
Convergence points between the GDPR and the AI Act
A shared European vision of digital technologies
Both regulations reflect a consistent European vision for digital technologies, placing human rights and dignity at the core. They share several key principles:
Transparency is a fundamental pillar of both texts. The GDPR requires organizations to clearly inform individuals about how their data is used, while the AI Act mandates transparency in how AI systems function, especially when interacting with humans.
Documented governance is another major point of convergence. The GDPR requires a record of processing activities and data protection impact assessments (DPIAs) for high-risk processing. Similarly, the AI Act requires full technical documentation and conformity assessments for high-risk AI systems.
Comparable oversight and sanction mechanisms
Both regulations establish supervisory authorities to oversee their enforcement. For the GDPR, these are national data protection authorities (such as the CNIL in France). The AI Act relies on national AI oversight authorities and introduces a European AI Board.
The administrative penalty regimes are also similar, with significant fines for non-compliance. The GDPR allows fines of up to €20 million or 4% of global annual turnover, whichever is higher. The AI Act provides for penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches.
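Purely by way of illustration, this "fixed amount or share of worldwide turnover, whichever is higher" logic can be sketched in a few lines of Python. Only the caps and percentages come from the two regulations as cited above; the helper function and the example turnover figure are hypothetical.

```python
def max_fine_eur(annual_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Upper bound of an administrative fine: the fixed cap or a share of
    worldwide annual turnover, whichever is higher (illustrative only)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical company with €2 billion worldwide annual turnover
turnover = 2_000_000_000

gdpr_cap = max_fine_eur(turnover, 20_000_000, 0.04)    # €80 million
ai_act_cap = max_fine_eur(turnover, 35_000_000, 0.07)  # €140 million (most serious breaches)
print(f"GDPR cap: €{gdpr_cap:,.0f} | AI Act cap: €{ai_act_cap:,.0f}")
```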
Key differences between the GDPR and the AI Act
Distinct scopes and protection targets
The first key difference lies in their respective protection targets. The GDPR focuses exclusively on protecting personal data and individual privacy. The AI Act, on the other hand, has a broader scope, aiming to protect citizens’ safety, health, and fundamental rights in relation to AI use, whether or not personal data is involved.
The material scope also differs. While the GDPR applies to all personal data processing, the AI Act specifically targets AI systems, especially those deemed high-risk. A company may thus fall under the AI Act even if its AI systems do not process personal data.
Regulatory approaches and specific obligations
The regulatory approach is another major difference. The GDPR follows a relatively uniform approach, with obligations applying across all personal data processing (with some variation based on risk). The AI Act, by contrast, implements a tiered regime based on each AI system's risk level, as sketched just after the list below:
- Unacceptable risk: prohibited systems
- High risk: strict and comprehensive obligations
- Limited risk: transparency obligations
- Minimal risk: few or no obligations
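As a minimal sketch of this tiered logic, the mapping below uses the tier names from the list above; the obligation summaries and the lookup helper are simplified paraphrases for illustration, not the regulation's wording.

```python
# Illustrative sketch only: obligation summaries are simplified paraphrases.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "strict, comprehensive obligations (documentation, conformity assessment, human oversight)",
    "limited": "transparency obligations (e.g. telling users they are interacting with AI)",
    "minimal": "few or no specific obligations",
}

def obligations_for(tier: str) -> str:
    """Look up the broad obligation level for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier: the system must be classified first")

print(obligations_for("High"))  # strict, comprehensive obligations (...)
```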
This difference is also reflected in the specific obligations imposed by the AI Act, such as training data quality, technical robustness, conformity assessments, and human oversight of high-risk AI systems. These requirements have no direct equivalent in the GDPR, although they may intersect with principles like data accuracy or processing security.
Interactions between GDPR and AI Act: Towards integrated compliance
Overlapping obligations for AI systems processing personal data
For organizations using AI systems that process personal data, both regulations will apply simultaneously, resulting in overlapping obligations—sometimes complementary, sometimes redundant.
For instance, a high-risk AI system analyzing personal data will have to comply with both:
- GDPR requirements on legal basis, data minimization, data subject rights, etc.
- AI Act obligations related to risk assessment, system quality, human oversight, etc.
This overlap calls for an integrated compliance approach, as offered by our GDPR outsourcing service, which can be tailored to include AI Act requirements.
Impact assessment: a shared tool to optimize
The data protection impact assessment (DPIA) required by the GDPR and the conformity assessment required by the AI Act for high-risk systems share many similarities. Both mechanisms aim to identify, assess, and mitigate risks related to the use of technology.
Organizations may consider developing an integrated analysis methodology covering both GDPR and AI Act requirements to improve efficiency and coherence. Our experts in GDPR support in Toulouse are trained to guide you through this integration process.
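Purely as a sketch of what such an integrated methodology might track internally, the record structure, field names, and example system below are assumptions rather than a prescribed template; the idea is simply that one description of the system drives both assessments.

```python
from dataclasses import dataclass, field

@dataclass
class IntegratedAssessment:
    """Hypothetical record for a combined DPIA / AI Act assessment."""
    system_name: str
    processes_personal_data: bool
    ai_risk_tier: str                                  # e.g. "high", "limited", "minimal"
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)

    def required_workstreams(self) -> list[str]:
        """Return the assessments triggered by the overlap described above."""
        workstreams = []
        if self.processes_personal_data:
            workstreams.append("GDPR DPIA: legal basis, minimization, data subject rights")
        if self.ai_risk_tier == "high":
            workstreams.append("AI Act conformity assessment: data quality, robustness, human oversight")
        return workstreams

assessment = IntegratedAssessment("credit scoring model", processes_personal_data=True, ai_risk_tier="high")
print(assessment.required_workstreams())
```

The design choice is deliberately simple: a single record answers both "does this processing call for a DPIA?" and "does this system call for a conformity assessment?", which is the essence of an integrated methodology.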
Compliance strategies under the dual regulatory framework
Adapting data and AI governance
To effectively meet the requirements of the GDPR and AI Act, organizations must rethink their data and AI governance. This involves:
- Integrating ethical and legal considerations from the design stage of AI systems (privacy and ethics by design)
- Establishing enhanced technical documentation (a possible sketch follows this list)
- Adopting a cross-functional approach involving various departments of the organization
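As one possible sketch of what "enhanced technical documentation" could capture for each AI system, the template below is an illustrative assumption, not the AI Act's exact documentation requirements; the field names and example values are hypothetical.

```python
# Illustrative documentation template; field names and values are assumptions.
ai_system_record = {
    "system_name": "triage assistant",                  # hypothetical example
    "intended_purpose": "support, not replace, clinical triage decisions",
    "risk_tier": "high",
    "training_data_description": "de-identified historical triage records",
    "human_oversight_measures": "a clinician reviews every recommendation",
    "linked_dpia": "reference to the related GDPR impact assessment, if any",
    "last_governance_review": "2025-01-15",
}
```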
Appointing an external DPO in Paris or an external DPO in Lyon can be a major asset in this integrated governance strategy, providing combined expertise in data protection and AI compliance.
Fostering a culture of digital ethics
Beyond mere regulatory compliance, organizations have every interest in fostering a genuine digital ethics culture. This approach involves:
- Training employees on data protection and responsible AI issues
- Establishing ethical charters to guide technology use
- Implementing ongoing processes for evaluating societal impacts
This initiative aligns with a broader vision of ethics in business and represents a competitive differentiator in the era of responsible digital transformation.
Pooling compliance investments
With the proliferation of digital regulations (GDPR, AI Act, and also DORA for the financial sector), organizations should consider pooling their compliance investments. This can take several forms:
- Using a GDPR software solution that gradually integrates AI Act features
- Establishing multidisciplinary teams dedicated to global digital compliance
- Developing audit methodologies that cover all regulatory requirements
This integrated approach not only reduces costs but also ensures greater coherence in managing digital risks.
Specific challenges across different sectors
The healthcare sector facing dual challenges
The healthcare sector, already heavily impacted by the GDPR due to the processing of sensitive data, will face new challenges under the AI Act. AI systems used for medical diagnosis, health risk prediction, or therapeutic decision support will generally be classified as high-risk systems, resulting in enhanced obligations.
Healthcare institutions and healthtech companies must pay close attention to:
- Ensuring algorithm transparency while preserving medical confidentiality
- Providing adequate human oversight of automated decisions
- Thoroughly documenting clinical validation measures for AI systems
Our GDPR support service in French Guiana has developed particular expertise for the healthcare sector, which will naturally extend to AI Act-related challenges.
The specificities of the financial sector
The financial sector, already subject to a strict regulatory framework including the GDPR, will also need to comply with the AI Act for its many AI applications: credit risk assessment, fraud detection, algorithmic trading, etc.
Financial institutions will need to pay particular attention to:
- Explainability of decisions made by AI systems, which is especially crucial in this sector
- Managing algorithmic bias that could lead to discrimination
- Resilience of systems against cyber threats
These requirements will be added to those of the GDPR and other sector-specific regulations like DORA, requiring a truly integrated compliance approach.
Conclusion: Toward a global digital compliance strategy
The emergence of the AI Act alongside the GDPR marks a new stage in building a coherent European regulatory framework for the digital era. Rather than seeing these regulations as separate constraints, organizations should develop a global digital compliance strategy that integrates all regulatory requirements.
This holistic approach not only helps optimize resources and investments but also turns compliance into a genuine driver of trust and differentiation. By placing ethics, transparency, and respect for fundamental rights at the heart of their digital strategy, organizations can better prepare for a future in which advanced technologies like AI will play an increasingly central role.
To support you in this effort, My Data Solution offers tailored GDPR consulting and support, continuously evolving to integrate new regulatory requirements like the AI Act. Our experts help you build lasting digital compliance tailored to your industry and aligned with your strategic objectives.
Feel free to visit our main website to discover all our services and our unique approach to digital compliance.