The compliance question with AI agents is not "is it allowed" but "under which article". An AI agent that reads customer email is processing personal data. An agent that auto-declines an application is automated decision-making. An agent that logs every action for 18 months is a retention question. GDPR has specific rules for each. This post walks through the five articles that actually bite when you run AI agents on European data, what a compliant architecture looks like, and the mistakes that bring the regulator to your door.
What counts as personal data when an agent is involved
GDPR Article 4 defines personal data broadly: any information relating to an identified or identifiable natural person. Agents typically touch several categories at once: email contents, IP logs, form submissions, customer service transcripts, CRM records. Pseudonymised data is still in scope (only fully anonymised data sits outside). When your agent processes any of this, your organisation is either the data controller or a processor. The distinction matters because the contractual obligations you owe to customers, and the responsibilities you take on toward sub-processors like LLM vendors, depend on which role you occupy. Get the mapping right before anything else.
Articles 5 and 6: lawful basis and minimisation
Article 5 lists the processing principles: lawfulness, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability. In agent terms this means picking an Article 6 lawful basis (usually contract or legitimate interest for business operations), not giving the agent access to more data than the task requires, and documenting why you chose that basis. The documentation step is the one most teams skip. Under the accountability principle you must be able to prove the decision, not just make it. Example: an email-sorting agent only needs the sender and subject line to triage, not the body of the email. Giving it the body anyway violates minimisation and weakens your position if a regulator audits the governance of the processing.
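The email-sorting case can be sketched in a few lines. This is a minimal illustration, not a real library: the `Email` type and `minimise_for_triage` helper are assumed names, and the point is simply that minimisation happens in code, before the agent call, rather than in a policy document.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str  # personal data the triage task does not need

def minimise_for_triage(email: Email) -> dict:
    """Expose only the fields the triage purpose requires (Article 5(1)(c))."""
    return {"sender": email.sender, "subject": email.subject}

email = Email("anna@example.eu", "Invoice question", "Hi, my card ending in ...")
payload = minimise_for_triage(email)
assert "body" not in payload  # the agent never sees the message body
```

A helper like this also gives the auditor something concrete to point at: the code path itself demonstrates that the body field cannot reach the model.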
Article 22: automated decisions that bite customers
This is the article most business owners do not know exists. Individuals have the right not to be subject to a decision based solely on automated processing where it produces legal effects concerning them or similarly significantly affects them. Credit decisions, job-application rejections, insurance pricing tiers, and personalisation at a level that affects access to a service are all in scope. The exceptions are narrow: necessary for entering into or performing a contract, authorised by EU or member-state law, or based on the individual's explicit consent. Even under an exception, you owe the person the right to obtain human intervention, to express their point of view, and to contest the decision. The practical answer is human-in-the-loop review on any agent output that touches access to a service, pricing, or an individual's opportunities. Build the review process before you deploy the agent, not after the first complaint.
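A review gate for Article 22 decisions can be expressed as a routing rule. The sketch below is illustrative: the categories, the `Decision` type, and the `route`/`human_review` functions are assumptions, but the pattern is the one described above, where decisions with legal or similarly significant effect are never auto-finalised.

```python
from dataclasses import dataclass
from typing import Optional

# Categories where a solely automated decision would fall under Article 22.
SIGNIFICANT_EFFECTS = {"credit", "employment", "insurance_pricing", "service_access"}

@dataclass
class Decision:
    subject_id: str
    category: str
    agent_outcome: str
    status: str = "pending_review"
    reviewer: Optional[str] = None

def route(decision: Decision) -> Decision:
    """Auto-finalise only decisions outside the Article 22 scope."""
    if decision.category in SIGNIFICANT_EFFECTS:
        decision.status = "pending_review"  # queued for a qualified human
    else:
        decision.status = "final"
    return decision

def human_review(decision: Decision, reviewer: str, confirm: bool) -> Decision:
    """Record the human intervention the individual is entitled to request."""
    decision.reviewer = reviewer
    decision.status = "final" if confirm else "overturned"
    return decision
```

The useful property of a gate like this is that the agent physically cannot finalise an in-scope decision: the review step is a code path, not a policy promise.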
Articles 30 and 32: records and security
Article 30 requires Records of Processing Activities (RoPA). For every processing operation, including each AI agent, you document purpose, data categories, recipients, retention periods, and security measures. Article 32 requires appropriate technical and organisational measures based on the risk of the processing: encryption, access controls, incident detection, and an audit trail. Agents are naturally well-suited here because every call is loggable by design: input, model output, action taken, timing, outcome. Alignment with ISO 27001 covers most of the Article 32 surface area, though the certification alone is not an automatic GDPR pass (see the FAQ below).
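One way to make every call loggable by design is a structured audit record per agent invocation. The field names below are assumptions, not a standard, but the shape covers what Articles 30 and 32 care about: purpose (tying back to the RoPA entry), timing, action, and outcome, with payloads hashed so the log itself does not accumulate extra personal data.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(tenant: str, purpose: str, input_text: str,
                 output_text: str, action: str, outcome: str) -> str:
    """Build one JSON audit entry for a single agent call."""
    record = {
        "tenant": tenant,
        "purpose": purpose,  # ties back to the RoPA entry for this processing
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash the payloads: the trail proves what happened without
        # duplicating the personal data it describes.
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "action": action,
        "outcome": outcome,
    }
    return json.dumps(record)
```

Hashing rather than storing raw text is a deliberate trade-off: you lose the ability to replay the exact input, but the audit log stops being a second copy of the personal data with its own retention problem.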
Chapter V: why your hosting choice decides half the compliance burden
Schrems II (2020) invalidated Privacy Shield. Transfers of personal data to the US are still possible under Standard Contractual Clauses plus supplementary measures, but the documentation and risk-assessment burden is real. The cleanest path is processing everything in the EU. If your agent sends customer data to a US-based LLM provider, you are making a Chapter V transfer and you have to document it, justify it, and revisit it whenever adequacy decisions change. A local model running on EU infrastructure (Ollama on Hetzner, for example) removes the transfer question entirely for the majority of requests. Frontier models remain available for the hard cases, and each such call becomes a discrete, logged, justifiable event rather than a default pattern.
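The routing logic can be sketched directly. The `local_model` and `frontier_model` clients here are hypothetical stand-ins, but the control flow is the point: escalation to a US-hosted model is an explicit branch that appends a logged, justified transfer event, rather than the default path.

```python
def handle(request: str, local_model, frontier_model, transfer_log: list) -> str:
    """Answer locally when possible; log any frontier call as a transfer event."""
    answer, confident = local_model(request)  # EU-hosted model, no transfer
    if confident:
        return answer
    # Escalation to a US-hosted frontier model is a discrete Chapter V event.
    transfer_log.append({
        "event": "chapter_v_transfer",
        "justification": "local model could not answer; SCCs in place",
        "request_hash": hash(request),  # reference without storing the content
    })
    return frontier_model(request)
```

Because the transfer log grows only on escalation, it doubles as evidence for the risk assessment: you can show the regulator exactly how often, and why, data left the EU.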
A compliant architecture, concretely
The pattern we ship looks like this: EU-only hosting on Hetzner Cloud in Finland and Germany, a local model handling the majority of requests, frontier model calls only for cases the local model cannot answer (each one logged as a Chapter V event with justification), a per-tenant audit trail with documented retention, automatic deletion at end of window, ISO 27001-aligned controls, and cookieless analytics so that visitor data does not create a separate consent burden on top of the agent-processing burden. Every link in this chain is a defensible Article 32 measure. This is also the architecture we ship as part of our agent systems service so clients do not have to reinvent the compliance posture.
Frequently asked questions
Q: Do we need a DPIA if we deploy an AI agent? A: Article 35 requires a DPIA when processing is likely to result in a high risk to individuals. Automated decision-making with legal or similarly significant effects, systematic large-scale monitoring, and processing of special-category data all trigger the obligation. An email-sorting agent that does not make user-facing decisions usually does not. A credit-scoring agent usually does. When uncertain, document the reasoning for why a DPIA was not conducted. The documentation itself is a compliance artefact under Article 24 accountability.
Q: Can we use ChatGPT with customer data? A: You can, with a Data Processing Addendum in place with the vendor, a lawful basis under Article 6, and a documented Chapter V transfer. Most European companies find the paperwork and supplementary-measures burden heavier than the alternative: a local model for routine processing and the frontier model only where the local model cannot answer. That hybrid architecture avoids most of the transfer documentation and keeps customer data in the EU for the default case.
Q: What happens if the agent makes an incorrect automated decision? A: Under Article 22, individuals have the right to human review of significant automated decisions. You need a defined process: the individual requests review, a qualified human examines the case, the decision is reconsidered or confirmed, and the individual is notified with reasoning. If the agent affects access to credit, jobs, or services, this process is built before deployment, not after the first complaint. Failing to provide it is one of the most commonly sanctioned patterns in recent enforcement actions.
Q: How long can we retain AI agent logs? A: Only as long as necessary for the documented purpose. Audit logs for security incident investigation are commonly retained for 12 months. Operational debugging logs often need only 30 days. Model output logs tied to individual customers usually align with the retention window of the underlying customer record. The essential requirement is a documented retention window in your RoPA and automatic deletion at end of window, not indefinite retention "in case it is useful later".
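Automatic deletion at end of window reduces to a date comparison against the documented retention table. A minimal sketch, where the window values mirror the RoPA retention column and the `expired` helper is an assumed name:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_DAYS = {          # mirrors the retention column in the RoPA
    "security_audit": 365,  # incident investigation
    "debug": 30,            # operational debugging
}

def expired(log_type: str, created_at: datetime,
            now: Optional[datetime] = None) -> bool:
    """True when a log entry has outlived its documented retention window."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > timedelta(days=RETENTION_DAYS[log_type])
```

A scheduled job that deletes everything for which `expired` returns true turns the retention policy from a document into a running control, which is exactly what Article 32 asks for.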
Q: Is ISO 27001 enough for GDPR Article 32 compliance? A: ISO 27001 covers most of the technical and organisational measures Article 32 expects, but it is not an automatic pass. Article 32 requires measures appropriate to the risk of the specific processing. The certification demonstrates a management system is in place, which takes you most of the way. Processing-specific documentation, data subject request procedures, and breach notification processes still need separate attention. ISO 27001 does not directly certify any of them.
If you are starting before the compliance question is in scope, the primer on what AI agents actually do covers the ground first. When you are ready to design a GDPR-compliant deployment for your own operation, book an intro call and we will walk you through it.
---
*Written by the Leap Laboratory team. Not legal advice; consult a GDPR practitioner for decisions that affect production data. Updated April 2026.*