This is a revised version of this blog, incorporating the latest updates as of January 2026.
As enterprises across the EU and UK increasingly adopt generative AI solutions, ensuring GDPR compliance and appropriate data residency has become a crucial factor. This blog provides a clear, side-by-side comparison of four leading Large Language Model (LLM) tools: Microsoft M365 Copilot (Full and Chat), ChatGPT, Claude, and Google’s Gemini. It focuses on GDPR considerations and the data residency options available to EU and UK customers.
The General Data Protection Regulation (GDPR) imposes stringent rules for handling personal data within the EU and UK. It requires transparency, user consent, robust security, and clear arrangements for where data is stored and processed. For organisations adopting cloud-based AI, compliance with GDPR and local data protection laws is essential.
Before comparing model providers, it’s important to understand that there is a big difference between personal/consumer AI products and business/enterprise offerings, and that difference is often where the GDPR risk sits. Consumer tools are typically designed for individuals: they may permit model training by default and usually lack the enterprise contractual controls (such as DPAs, admin governance, retention configuration, and residency options) that organisations rely on for compliance.
This blog focuses on business/enterprise tiers. If your team is using personal tiers (e.g., ChatGPT Free/Plus, Gemini Apps with a personal Google Account, Microsoft Copilot for individuals, Claude Free/Pro/Max), assume they come with different privacy terms and controls; EU/UK data residency commitments are usually an enterprise feature. Be careful entering confidential data into personal-tier tools (even accounts you pay for) unless you have specifically confirmed it is safe to do so.
| Features | Microsoft Copilot M365 (GPT models) | OpenAI ChatGPT | Anthropic Claude | Google Gemini |
| --- | --- | --- | --- | --- |
| Data processing & storage | Processed within the Microsoft 365 service boundary using Microsoft Graph permissions. Prompts, responses, and accessed tenant data aren’t used to train foundation models. Data is protected by Microsoft 365 security/compliance controls (e.g., auditing/eDiscovery) and encryption. | For business plans (Enterprise/Business/Edu), inputs and outputs are encrypted in transit and at rest and aren’t used to train models by default. Enterprise admins can control retention (and related governance settings) for workspace data. | For commercial offerings (Claude for Work/Enterprise, API), inputs/outputs aren’t used for model training by default. Data may be retained for operations. If users submit feedback, Anthropic may retain the related conversation (up to 5 years) and may use it for improvement/training as described in their policy. | Enterprise usage is encrypted in transit and at rest. For Gemini for Google Cloud, prompts and responses aren’t used to train models. In Google Workspace, Gemini prompts/responses can be covered by Data Regions controls (e.g., EU/US regionalisation depending on edition/settings). |
| User consent & transparency | Governed by Microsoft 365 admin controls and contractual terms (DPA). Microsoft publishes Transparency Notes and detailed privacy/security documentation for Copilot. Copilot interactions can be audited and governed via Microsoft Purview (audit, retention, eDiscovery). | Clear enterprise privacy commitments: business data is not used for training by default, with admin controls for access, connectors, and retention. Enterprise owners can set workspace retention policies (including a minimum retention setting) and manage governance features. | Commercial plans provide transparent privacy controls: inputs/outputs aren’t used for model training by default, and organisations can configure custom retention periods. Admins can manage data controls (including deletion/retention settings), with retention-related changes tracked in audit logs. | Enterprise privacy controls are documented in Google’s Workspace with Gemini Privacy Hub: interactions stay within the organisation and aren’t used for model training outside the domain without permission. Admins can manage access and review usage via Gemini audit logs/investigation tools and apply Workspace security controls (including data regions where configured). |
| Data residency (EU, EEA & UK) | Copilot interaction content (prompts/responses and related Copilot data) is stored at rest in the tenant’s relevant Local Region Geography (including the EU and UK), aligned to Microsoft 365 data residency commitments. Additional options exist via Advanced Data Residency (ADR) where applicable. | At-rest data residency is available for new ChatGPT Enterprise/Edu workspaces in selectable regions (including Europe and the UK). For eligible API projects, customers can select a region and have requests handled in-region with zero data retention (per OpenAI’s residency controls). | Anthropic doesn’t offer a simple “pick EU/UK” control for first-party Claude in the way some platforms do. Anthropic also distinguishes processing location vs storage (with storage remaining US-based by default unless otherwise agreed). For strict EU/UK locality, many organisations deploy Claude via cloud hosts with regional controls (e.g., Bedrock/Vertex) and avoid “global” routing. | For Google Workspace with Gemini, Data Regions allow controlling where covered data (including Gemini prompts/responses) is stored/processed; region options are Europe or the United States (the UK is not a separate Workspace data region). For Google Cloud deployments, UK-region processing/residency options exist for certain Gemini/ML services. |
| Security & auditing | Built on Microsoft 365’s enterprise security/compliance foundation (e.g., Office 365 ISO 27001 and SOC 2 reporting). Copilot interactions are auditable via Microsoft Purview (Unified Audit Log) with Copilot-specific audit records and governance capabilities (eDiscovery/retention/DLP via Purview). | Enterprise security posture includes SOC 2 (Type 2) audit coverage and encryption at rest/in transit (AES-256, TLS 1.2+). Enterprise admins can apply governance controls including retention settings; Enterprise supports compliance-focused controls (incl. Compliance API considerations for retention). Optional Enterprise Key Management (EKM) is available for at-rest content. | Commercial offerings list SOC 2 Type II and ISO 27001/42001 certifications in Anthropic’s Trust Centre. Enterprise plans support configurable retention and track retention-related actions in audit logs. Anthropic has introduced enterprise admin controls including a Compliance API for governance/observability. | Workspace security controls extend to Gemini. Admins can review Gemini activity via audit logs (including Drive ‘item content accessed’ events) and access Gemini audit logs via the Reporting API / investigation tools. Customer-managed keys are supported through Google Workspace client-side encryption (CSE) with external key services (Assured Controls/Plus). |
| Regulatory engagement (EU AI Act and readiness) | Publishes EU AI Act compliance resources and governance guidance via the Microsoft Trust Centre, alongside Transparency Notes for Copilot. Participates in EU implementation initiatives such as the EU AI Pact and has signed the EU General-Purpose AI Code of Practice. | Engages with EU/UK regulatory expectations through published enterprise privacy/compliance materials and participation in EU implementation initiatives (e.g., the EU AI Pact). OpenAI has faced GDPR scrutiny in Europe (including action by Italy’s data protection authority) and has made policy/product updates in response to regulatory requirements. | Partnered with the UK government via an MoU to explore responsible public-sector use of Claude, with ongoing research alongside the UK AI Security Institute. Anthropic has signed the EU General-Purpose AI Code of Practice and maintains a recognised AI governance posture (e.g., ISO/IEC 42001 certification). | Deep engagement with EU regulatory and public-sector requirements: Google provides EU AI Act compliance resources and has signed/committed to EU initiatives such as the EU AI Pact and the EU General-Purpose AI Code of Practice. Supports customers with DPIA/DTIA resources and sovereignty controls (e.g., Assured Controls, Data Regions, client-side encryption) referenced in EU public-sector assurance work. |
| GDPR compliance | Supports GDPR compliance under Microsoft’s enterprise contractual framework: Microsoft 365 Copilot is covered by Microsoft’s Data Protection Addendum and aligns with Microsoft 365 commercial privacy/security commitments (including GDPR and the EU Data Boundary). Prompts/responses and Graph-accessed data aren’t used to train foundation models. | Supports GDPR compliance via OpenAI’s enterprise privacy commitments and a published Data Processing Addendum (processor terms, including EU/UK GDPR references). Enterprise plans include security/retention controls and exclude business data from training by default. | GDPR-aligned for enterprise use through Anthropic’s Data Processing Addendum (with SCCs) incorporated into the Commercial Terms. Commercial products don’t use inputs/outputs for model training by default; retention and governance controls are configurable for enterprise plans. | Supports GDPR compliance through Google Workspace/Cloud contractual commitments, including Google’s Cloud Data Processing Addendum. Workspace with Gemini applies existing Workspace security/privacy controls; the Workspace Privacy Hub outlines how Gemini data is handled for business and public-sector customers. |
| Can data remain entirely within the UK? | Yes, at rest: Copilot interaction content (prompts/responses and related Copilot data) is stored in the tenant’s Local Region Geography, including the United Kingdom for UK-provisioned tenants. ‘Entirely in the UK’ for all processing can depend on specific Copilot capabilities and configuration, but data residency at rest aligns with Microsoft’s published Copilot residency commitments. | UK at-rest residency is available for new ChatGPT Enterprise/Edu workspaces (when the UK region is selected). However, OpenAI states inference residency (GPU execution in-region) is currently only available in the United States (and requires US data residency), so UK-only processing isn’t a blanket guarantee. | Not by default. ‘UK-only’ typically requires a regional deployment via a cloud host with UK regions (e.g., Amazon Bedrock in Europe (London) or Google Cloud Vertex AI via a regional endpoint). Avoid ‘global’ routing options if strict UK processing is required. | Depends on the product. Google Cloud can support UK-only storage and ML processing for certain Gemini services (e.g., Gemini processing ‘entirely within the UK’ for specific offerings announced for UK customers). Google Workspace Data Regions don’t offer a ‘UK’ region (options are United States or Europe), so Workspace alone isn’t a UK-only guarantee. |
If your organisation already uses Microsoft 365, Copilot works inside that same compliance framework. It follows Microsoft’s Data Protection Addendum (DPA) and enterprise security commitments. In simple terms: prompts and responses stay under Microsoft 365’s contractual and security controls and are not used to train public AI models.
Data residency: Copilot stores interaction content at rest in your tenant’s regional setup (for example, UK if your tenant is UK-provisioned). You can manage usage with familiar tools like audit logs, retention policies and eDiscovery. The safest way to describe this is “within the Microsoft 365 service boundary”, because while data is protected, some processing may still occur within Microsoft’s controlled infrastructure.
Microsoft recently added support for Anthropic’s Claude models (including Opus). Claude was introduced to Copilot in September 2025, which required users to accept Anthropic’s data processing terms. In January 2026, Anthropic moved under Microsoft’s subprocessor framework. However, these models are currently excluded from the EU Data Boundary and in-country processing commitments, so EU/UK-only processing can’t be guaranteed. As a result, EU and UK users must opt in to use these models; they are not enabled by default.
OpenAI now offers UK and other regional data residency options for eligible Enterprise and Education workspaces, plus certain API setups. This means saved workspace content can be stored at rest in a chosen region, such as the UK. For API use, eligible customers can create region-specific projects, so requests are handled in-region without storing request or response data.
Key point: Residency is strongest for stored data. If you need strict UK-only processing for everything, check how OpenAI handles inference (the step where the model processes your input), because not all workflows can guarantee UK-only processing.
Claude is designed with privacy in mind: enterprise inputs and outputs aren’t used for training by default. However, data residency depends on how you deploy it. Claude doesn’t offer a simple “choose UK/EU region” option like Microsoft does. If you need strict UK/EU locality, you’ll likely host Claude through a cloud provider that supports regional control and use UK/EU endpoints.
Bottom line: Claude can meet compliance needs, but UK/EU-only depends on your hosting setup.
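As a sketch of that “regional deployment” pattern, the snippet below pins Claude inference to an approved set of UK/EU cloud regions before any client is created. The allow-list, helper names, and the Bedrock model ID in the usage comment are illustrative assumptions, not Anthropic-documented values; verify region availability against your cloud provider’s current documentation.

```python
# Hypothetical guardrail: refuse to create an inference client unless the
# region is in an explicitly approved UK/EU set. Region codes follow AWS
# naming; adjust the set to your own data-protection assessment.
ALLOWED_REGIONS = {"eu-west-2", "eu-west-1", "eu-central-1"}  # London, Ireland, Frankfurt

def validate_region(region: str) -> str:
    """Reject 'global' or non-EU/UK routing before any client exists."""
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"Region {region!r} is outside the approved EU/UK set")
    return region

def make_claude_client(region: str):
    """Create a Bedrock runtime client bound to an approved region.

    Requires boto3 and AWS credentials; imported lazily so the validation
    logic above stays testable without cloud access."""
    import boto3
    return boto3.client("bedrock-runtime", region_name=validate_region(region))

# Usage (requires AWS access; model ID is illustrative):
# client = make_claude_client("eu-west-2")  # Europe (London)
# resp = client.converse(modelId="anthropic.claude-...", messages=[...])
```

The design choice here is simply to fail closed: locality mistakes surface as a raised error at client-construction time rather than as data quietly routed out of region.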
Gemini’s approach varies by product. In Google Workspace, Gemini follows the same admin and security model as Workspace, governed by contractual terms and admin controls. Workspace offers EU regionalisation, but not UK-only by default.
For strict UK-only requirements, look at Google Cloud. Google has announced UK-specific measures for certain machine learning processing, including some Gemini services. So: Workspace = EU regionalisation; UK-only = usually a Google Cloud configuration.
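To illustrate the Google Cloud route, the helper below constructs a Vertex AI regional endpoint from an explicit region and refuses “global” routing. The host pattern reflects Vertex AI’s per-region endpoints, but treat the exact string, and whether a given Gemini service is available in `europe-west2`, as things to confirm against Google’s current documentation.

```python
def vertex_regional_endpoint(region: str) -> str:
    """Build a Vertex AI API host pinned to one region.

    Vertex AI exposes per-region endpoints of the form
    '{region}-aiplatform.googleapis.com'; constructing the host from an
    explicit region avoids accidental 'global' routing."""
    if not region or region == "global":
        raise ValueError("an explicit region is required for UK/EU locality")
    return f"{region}-aiplatform.googleapis.com"

# europe-west2 is Google Cloud's London region:
# vertex_regional_endpoint("europe-west2")
# -> "europe-west2-aiplatform.googleapis.com"
```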
When choosing an LLM for regulated work, treat ‘GDPR safe’ as a four-part check rather than a yes or no badge.
For regulated organisations, the headline isn’t “which model is most compliant”; it’s which option gives you the clearest evidence trail with the least operational risk.
1. Residency clarity: confirm both where data is stored (at rest) and where it is processed (in use) and which of those is guaranteed vs configurable.
2. Contractual cover: ensure you have a clear DPA (and, where relevant, SCCs) that matches your regulatory needs.
3. Governance you can prove: prioritise solutions with strong audit logs, retention controls, and eDiscovery support, because regulated compliance is about evidence, not marketing language.
4. Deployment fit: if you need strict UK/EU locality, be wary of approaches that rely on bespoke arrangements or unclear routing; “UK/EU-ready” often depends on product scope and configuration.
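One way to make this four-part check operational is to record it per vendor in a simple structure your governance team can review and evidence. Everything below, the field names and the pass rule, is a hypothetical sketch of such a record, not a formal assessment methodology.

```python
from dataclasses import dataclass

@dataclass
class ResidencyCheck:
    stored_at_rest_region: str  # e.g. "UK", "EU", "US"
    processed_in_region: bool   # is inference guaranteed in-region?
    guaranteed: bool            # contractual guarantee vs merely configurable

@dataclass
class VendorAssessment:
    vendor: str
    residency: ResidencyCheck   # check 1: residency clarity
    dpa_signed: bool            # check 2: contractual cover (DPA / SCCs)
    audit_logs: bool            # check 3: governance you can prove
    retention_controls: bool    # check 3 continued
    deployment_fit_notes: str   # check 4: deployment fit (free text)

    def passes_uk_strict(self) -> bool:
        """All four checks satisfied for a strict UK-only requirement."""
        r = self.residency
        return (r.stored_at_rest_region == "UK"
                and r.processed_in_region and r.guaranteed
                and self.dpa_signed and self.audit_logs
                and self.retention_controls)
```

A record like this turns “GDPR safe” from a badge into a reviewable artefact: each field maps to a document (DPA, residency commitment, audit-log export) that an assessor can actually inspect.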
Where you already operate within a mature compliance environment (for example, Microsoft 365 or Google Workspace), choosing an AI capability that inherits your existing identity, permissions, retention and auditing controls can materially reduce rollout friction and audit effort. In Microsoft 365 environments specifically, Copilot is often simpler to evaluate because it sits under the same Microsoft 365 contractual and governance model many organisations have already assessed, which can shorten security review cycles when you’re extending workflows with agents.
Ensuring GDPR compliance and proper data residency isn’t optional; it’s essential for enterprises adopting AI across Europe and the UK. The side-by-side comparison provided here helps decision-makers quickly assess which LLM solutions align best with their compliance and business needs.
For tailored advice on GDPR compliance, data residency, and choosing the right AI partner for your organisation, contact our experts for a personalised consultation.
Microsoft Copilot M365 and Chat: Enterprise Data Protection
OpenAI ChatGPT: Security & Privacy Overview
Anthropic Claude: Approach to GDPR and Related Issues
Google Gemini: Gemini Apps Privacy Hub