Agentic AI and the GDPR: Analyzing the AEPD's Groundbreaking Guidance
Generative AI is now widely adopted across industries. But the next wave is already here, and it changes the compliance equation fundamentally: agentic AI. These systems do not just generate content in response to prompts. They autonomously plan, reason, access data, call external services, and take actions in the real world. They book flights, negotiate with other agents, query databases, and execute multi-step workflows with minimal human oversight.
The AEPD (Agencia Española de Protección de Datos) has published what is, to my knowledge, the first comprehensive supervisory authority guidance specifically addressing agentic AI from a data protection perspective. It is a significant document — not because it creates new rules, but because it maps existing GDPR obligations onto a technology paradigm that the regulation was not designed to anticipate.
From Generative to Agentic: Why the Shift Matters
The distinction between generative AI and agentic AI is not merely technical — it has direct regulatory consequences.
A generative AI system responds to a prompt and produces output. The data flows are relatively predictable: input goes in, output comes out. The controller can, in principle, oversee what the system does.
An agentic AI system operates differently. Given a high-level objective, it decomposes the task into sub-goals, reasons about the best approach, calls external tools and services, processes data from multiple sources, and takes autonomous actions — potentially across multiple interactions and over extended time periods. The data flows become exponentially more complex, less predictable, and harder to oversee.
Consider the AEPD's illustrative example: a travel agent AI. A user asks it to plan a business trip. The agent autonomously searches flight databases, compares hotel options, checks calendar availability, accesses the user's payment information, communicates with booking APIs, and potentially interacts with other AI agents representing airlines or hotels. Each of these interactions involves personal data processing. Each creates potential GDPR obligations. And the user may have limited visibility into what the agent is actually doing.
Key GDPR Challenges Identified by the AEPD
Controller Liability Under Art. 5(2) and Art. 24
The AEPD is clear: the organization deploying an agentic AI system bears controller responsibility for all processing the agent carries out, including actions the agent takes autonomously. Under Art. 5(2) GDPR (accountability) and Art. 24 (controller obligations), the deploying organization must be able to demonstrate compliance with all GDPR principles — even for processing steps it did not explicitly instruct.
This is a significant challenge. If an AI agent autonomously decides to query an external service, share personal data with a third-party API, or store information for future use, the controller is responsible. The traditional model of controller oversight — where a human defines the processing purposes and means — becomes strained when the agent itself is making operational decisions about how to achieve its objectives.
Transparency Gaps Under Art. 13/14
GDPR's transparency requirements assume that the controller can describe, in advance, what personal data will be processed, for what purposes, and who the recipients will be. Agentic AI disrupts this assumption. When an agent dynamically decides which external services to call, what data to share, and how to combine information from multiple sources, providing meaningful advance transparency becomes genuinely difficult.
The AEPD notes that organizations must find ways to provide at least general transparency about the categories of processing an agent may perform, while acknowledging that the specific processing activities may not be fully predictable in advance.
Art. 22 and Automated Decision-Making
Art. 22 GDPR restricts decisions based solely on automated processing that produce legal or similarly significant effects. The AEPD's analysis raises a critical question: when an agentic AI system makes decisions with real-world consequences — booking a flight, approving a request, negotiating terms — does this constitute automated decision-making under Art. 22?
The answer depends on the degree of human oversight and the significance of the decision. But the AEPD suggests that many agentic AI use cases will fall uncomfortably close to the Art. 22 threshold, particularly when agents operate with high autonomy and their decisions directly affect individuals.
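One practical mitigation is to make the degree of human oversight explicit in the agent's architecture. The following is a minimal sketch, assuming the deployer has classified certain actions as producing legal or similarly significant effects; the action names and the `execute_action` helper are illustrative, not from the AEPD guidance.

```python
# Hypothetical Art. 22 guardrail: actions the deployer has classified as
# producing legal or similarly significant effects are blocked unless a
# human has reviewed them. Routine actions pass through unchanged.

SIGNIFICANT_ACTIONS = {"approve_loan", "deny_claim", "sign_contract"}

def execute_action(action: str, human_approved: bool = False) -> str:
    if action in SIGNIFICANT_ACTIONS and not human_approved:
        raise PermissionError(
            f"'{action}' may fall under Art. 22: human review required")
    return f"executed: {action}"
```

The design choice here is deliberate: the default path fails closed for significant actions, so the agent cannot cross the Art. 22 threshold simply because nobody anticipated a particular workflow.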
Agent Memory as a Compliance Minefield
This is perhaps the most underappreciated challenge. Agentic AI systems often maintain persistent memory — storing information from previous interactions to improve performance in future ones. The AEPD identifies this as creating direct tension with several GDPR principles:
- Data minimization (Art. 5(1)(c)) — Agent memory may accumulate personal data far beyond what is necessary for any individual task
- Right to erasure (Art. 17) — Deleting personal data from agent memory systems is technically complex and may not be fully achievable
- Data protection by design (Art. 25) — Memory systems must be designed with privacy principles embedded, not bolted on after deployment
Organizations deploying agents with persistent memory need clear retention policies, technical mechanisms for data deletion, and honest assessments of whether their memory architectures can actually comply with erasure requests.
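What such memory governance can look like in practice is sketched below: every stored entry carries a retention deadline and is tagged with the data subject it concerns, so expiry and Art. 17 erasure are both mechanical operations. The `MemoryEntry` and `AgentMemory` names are illustrative assumptions, not part of the AEPD guidance or any standard library.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class MemoryEntry:
    subject_id: str       # whose personal data this entry concerns
    content: str
    stored_at: datetime
    retention: timedelta  # how long the entry may be kept

    def expired(self, now: datetime) -> bool:
        return now >= self.stored_at + self.retention

@dataclass
class AgentMemory:
    entries: list[MemoryEntry] = field(default_factory=list)

    def remember(self, subject_id: str, content: str,
                 retention: timedelta) -> None:
        self.entries.append(MemoryEntry(
            subject_id, content, datetime.now(timezone.utc), retention))

    def purge_expired(self) -> int:
        """Drop entries past their retention deadline; return count removed."""
        now = datetime.now(timezone.utc)
        before = len(self.entries)
        self.entries = [e for e in self.entries if not e.expired(now)]
        return before - len(self.entries)

    def erase_subject(self, subject_id: str) -> int:
        """Art. 17-style erasure: remove all entries about one data subject."""
        before = len(self.entries)
        self.entries = [e for e in self.entries if e.subject_id != subject_id]
        return before - len(self.entries)
```

The harder, honest question the AEPD raises remains: this only works if *all* memory layers (vector stores, caches, fine-tuned weights) are addressable this way, which many current architectures cannot guarantee.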
The "Rule of 2" and Interaction Complexity
The AEPD introduces a useful conceptual framework: the "Rule of 2." When two or more AI agents interact — for example, a user's travel agent negotiating with an airline's pricing agent — the number of data processing relationships, controller-processor determinations, and potential data flows multiplies rapidly.
Each agent-to-agent interaction requires analysis of who controls what processing, what data is shared, and on what legal basis. In a system with multiple interacting agents, the governance complexity can quickly become unmanageable without deliberate architectural design.
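The combinatorial growth can be made concrete: with n interacting agents, the number of unordered agent pairs, each of which needs at least one controller/processor and legal-basis analysis, grows quadratically.

```python
def pairwise_relationships(n_agents: int) -> int:
    # Each unordered pair of agents is at least one processing
    # relationship requiring its own governance analysis.
    return n_agents * (n_agents - 1) // 2
```

Two agents produce one relationship to analyze; six agents already produce fifteen, before counting the multiple data flows each pair may involve.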
International Transfers Under Chapter V
Agentic AI amplifies international transfer risks. When an agent autonomously calls external APIs and services, some of those services may be located outside the EU/EEA. Each such call potentially constitutes an international data transfer under Chapter V GDPR, requiring transfer impact assessments and appropriate safeguards.
The challenge is that the controller may not know in advance which external services the agent will choose to use, making it difficult to conduct transfer impact assessments proactively. The AEPD suggests that organizations must either restrict agents to pre-approved services within acceptable jurisdictions or implement real-time transfer controls.
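The pre-approved-services approach can be sketched as a registry gate that every outbound tool call must pass. The registry contents, the `ApprovedService` type, and the jurisdiction labels below are illustrative assumptions; a real implementation would reflect the organization's actual transfer impact assessments.

```python
from dataclasses import dataclass

# Jurisdiction labels treated as needing no additional transfer safeguards.
EEA_OK = {"EU/EEA", "adequacy-decision"}

@dataclass(frozen=True)
class ApprovedService:
    name: str
    jurisdiction: str        # e.g. "EU/EEA", "adequacy-decision", "third-country"
    has_sccs: bool = False   # standard contractual clauses in place?

REGISTRY = {
    "flights-api": ApprovedService("flights-api", "EU/EEA"),
    "hotels-api": ApprovedService("hotels-api", "third-country", has_sccs=True),
    "pricing-api": ApprovedService("pricing-api", "third-country"),
}

def may_call(service_name: str) -> bool:
    """Gate outbound tool calls: unknown or unsafeguarded services are blocked."""
    svc = REGISTRY.get(service_name)
    if svc is None:
        return False              # not pre-approved at all
    if svc.jurisdiction in EEA_OK:
        return True               # no Chapter V transfer issue
    return svc.has_sccs           # third country: require safeguards
```

The key property is that the allowlist fails closed: a service the agent discovers at runtime is blocked until a human has assessed it, which is exactly the proactive control the transfer rules assume.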
The Positive Angle: Agents as Privacy Enhancing Technology
The AEPD's guidance is not entirely cautionary. It also recognizes that agentic AI can function as a Privacy Enhancing Technology (PET). An agent acting as an intermediary between a user and external services can, in principle, minimize the personal data shared with those services — negotiating on the user's behalf without disclosing unnecessary information, aggregating results without exposing raw data, and implementing privacy preferences automatically.
This is an important recognition. The technology itself is not inherently privacy-hostile. The compliance outcome depends entirely on how agents are designed, deployed, and governed.
What Organizations Should Do Now
The AEPD's guidance is non-binding, but it signals the direction in which supervisory authorities are thinking. Organizations deploying or planning to deploy agentic AI systems should take several immediate steps:
- Map agent data flows — understand what data your agents access, process, share, and store, including interactions with external services
- Define agent boundaries — establish clear limits on what autonomous actions agents can take, which services they can call, and what data they can share
- Assess Art. 22 exposure — determine whether your agent use cases involve automated decisions with significant effects on individuals
- Design memory governance — implement retention policies and deletion mechanisms for agent memory systems
- Pre-approve external services — restrict agent interactions to vetted services with known data protection standards and transfer safeguards
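Mapping agent data flows, the first step above, can start with something as simple as an audit record for every agent action, capturing the recipient and the categories of personal data involved. The field names below are assumptions for illustration, not a standard schema.

```python
from datetime import datetime, timezone

def log_agent_action(log: list, agent_id: str, action: str,
                     service: str, data_categories: list[str]) -> None:
    # Append one structured audit record per agent action, so data flows
    # (who shared what, with whom, when) can be reconstructed later.
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,                        # e.g. "query", "share", "store"
        "service": service,                      # external recipient, if any
        "data_categories": sorted(data_categories),
    })

audit_log: list[dict] = []
log_agent_action(audit_log, "travel-agent-1", "share",
                 "booking-api", ["name", "payment-details"])
```

Even this minimal record supports the accountability obligation under Art. 5(2): the controller can demonstrate, after the fact, exactly which autonomous actions involved personal data.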
Agentic AI is not a future scenario — it is being deployed today. The organizations that address these data protection challenges proactively will have a significant advantage over those that wait for enforcement actions to clarify the rules.