THE CORE ISSUE
Data sovereignty means the right to control where your data lives, who can access it, and what it is used for, together with the ability to delete it completely. In 2026, none of ChatGPT, Claude, or Replika gives you full data sovereignty by default. This post explains what each platform actually does, what your legal rights are under UK GDPR, and what genuine sovereignty looks like in practice.
Does ChatGPT use your conversations to train its AI?
By default, OpenAI may use conversations from free and ChatGPT Plus web users to improve its models. The opt-out is in Settings > Data Controls > Improve the model for everyone. If you do not actively disable it, your conversations are eligible for training. ChatGPT API users and Enterprise customers are protected by default — OpenAI's API usage policy states that API-submitted data is not used to train models without explicit consent.
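The access route matters in code as much as in policy. A minimal sketch of the API route, using OpenAI's official Python SDK; it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the model name is only an example:

```python
# Minimal sketch: the API route, where submitted data is not used
# for training by default. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```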
Conversations are retained for safety monitoring regardless of the training opt-out, and are stored on Microsoft Azure infrastructure predominantly in the United States. The same underlying model therefore operates under different data policies depending entirely on which access route you use — a distinction the vast majority of users are unaware of.
Short answer: ChatGPT may train on free and Plus user conversations by default. Opt out via Settings > Data Controls. API and Enterprise users are excluded by default, but all users' data is retained for safety purposes.
Does Anthropic train Claude on your conversations?
Anthropic's privacy policy for Claude.ai states that conversations from free-tier users may be used to train and improve Claude models unless the user opts out. The opt-out is in Account Settings > Privacy > Use my data to improve Claude. Claude Pro subscribers and API customers are not included in training data by default — Anthropic's developer documentation explicitly confirms this distinction, mirroring OpenAI's tiered approach.
Anthropic runs its infrastructure on AWS, primarily in the United States, and states that it complies with the GDPR for European users. For UK residents, compliance is governed by the UK GDPR, which operates alongside the Data Protection Act 2018.
Short answer: Anthropic may train on free-tier Claude.ai conversations by default. Claude Pro and API users are excluded. The opt-out is in account settings. Data is stored on US-based AWS infrastructure.
What happened to Replika user data, and why does it matter?
In 2023, Replika updated its privacy policy to permit sharing of user data with third parties for advertising and analytics purposes. Reports indicated that anonymised behavioural data was being shared with Meta via its advertising SDK embedded in the Replika app. This was a deliberate policy decision, not a breach or hack.
The Italian data protection regulator, the Garante per la protezione dei dati personali, blocked Replika from processing Italian residents' data in February 2023, citing insufficient safeguards around minors' data and emotionally vulnerable users. In response, Replika made abrupt changes to the AI model — removing romantic and intimate features overnight — causing significant distress to users who had built long-term emotional relationships with their companion. Users received no warning and had no way to export what they had built over months or years.
Short answer: Replika updated its policy in 2023 to permit third-party data sharing including with Meta for advertising. Italy's regulator then blocked Replika, prompting abrupt model changes that ended users' emotional relationships overnight with no recourse and no export.
What are your UK GDPR rights over AI conversation data?
The UK GDPR, EU law retained after Brexit and supplemented by the Data Protection Act 2018, gives UK residents specific, enforceable rights over their personal data. Two articles are directly relevant to AI conversations.
UK GDPR Article 17 — Right to Erasure
Also known as the “right to be forgotten.” You can request that an AI provider delete all personal data it holds about you. The controller must comply without undue delay, generally within one calendar month. Data already incorporated into model weights presents a practical complication regulators are still working through, but the right to have stored conversation records deleted is clear and enforceable.
UK GDPR Article 20 — Right to Data Portability
You have the right to receive your personal data in a structured, commonly used, machine-readable format (such as JSON), and to transmit it to another controller. In practice, AI companies must give you your conversation history as JSON or CSV — not just a printable PDF. This right applies where processing is based on your consent or a contract and is carried out by automated means.
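Machine-readable is the operative phrase: a real export can be processed programmatically. A short sketch, assuming a ChatGPT-style export containing a conversations.json array of conversation objects with a title field (other platforms' exports will differ, so treat the names as assumptions):

```python
# Hedged sketch: summarise a machine-readable conversation export.
# Assumes a ChatGPT-style conversations.json (an array of objects
# with a "title" field); other platforms will differ.
import json
from pathlib import Path

export = json.loads(Path("conversations.json").read_text(encoding="utf-8"))

print(f"Conversations exported: {len(export)}")
for conversation in export[:5]:  # peek at the first five titles
    print("-", conversation.get("title") or "(untitled)")
```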
Submit a Subject Access Request or erasure request to the company's data protection contact. They have one month to respond. If they refuse without valid grounds, escalate to the Information Commissioner's Office (ICO).
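If a starting point helps, the sketch below fills in a plain-text Article 17 erasure request from a template. The wording is illustrative, not legal advice, and the placeholder values are yours to replace:

```python
# Illustrative template for a UK GDPR Article 17 erasure request.
# The wording is a starting point, not legal advice.
from datetime import date

request = f"""\
Subject: UK GDPR Article 17 erasure request

To the Data Protection Officer,

Under Article 17 of the UK GDPR, I request the erasure of all personal
data you hold about me, including stored conversation history, linked
to the account below.

Account email: YOUR_EMAIL_HERE
Date of request: {date.today().isoformat()}

Please confirm completion within one calendar month, as required.
"""
print(request)
```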
How do ChatGPT, Claude, Replika, and MEOK compare on data policy?
A side-by-side comparison across key data sovereignty dimensions, based on each platform's published privacy policies as of March 2026.
| Policy dimension | ChatGPT | Claude | Replika | MEOK |
|---|---|---|---|---|
| Trains on free-tier conversations | Yes (opt-out available) | Yes (opt-out available) | Yes (policy updated 2023) | Never |
| Trains on paid-tier conversations | No (Plus / API excluded) | No (Pro / API excluded) | Yes | Never |
| Encryption at rest | Yes (OpenAI-held keys) | Yes (Anthropic-held keys) | Yes (Replika-held keys) | AES-256 user-held keys |
| User-controlled encryption keys | No | No | No | Yes |
| Data portability / export | JSON export via Settings | Formal request required | No structured export | Full JSON export always |
| Right to erasure (Art. 17) | Account deletion available | Account deletion available | Account deletion available | Delete-everything endpoint |
| Third-party data sharing | Specified partners / safety | Specified partners / safety | Meta advertising (2023) | None |
| Server location | USA (Microsoft Azure) | USA (AWS) | USA | EU / UK (user-selectable) |
Sources: OpenAI Privacy Policy, Anthropic Privacy Policy, Replika Privacy Policy, MEOK Privacy Covenant. Verified March 2026. Policies may change.
What does genuine AI data sovereignty look like in practice?
The phrase “your data is encrypted” means almost nothing on its own. The question that matters is: who holds the keys? When ChatGPT, Claude, or Replika encrypt your data, they hold the encryption keys — meaning anyone who compromises their systems, or any government issuing a lawful intercept order, can read your conversations. Encryption you do not control is security, not sovereignty.
Genuine sovereignty requires four properties:
User-held encryption keys
The company cannot decrypt your data even if compelled. AES-256 with user-generated keys stored only on your device is the baseline standard; a short code sketch after the fourth property shows what this looks like in practice.
Full portability at any time
Export everything in structured JSON whenever you choose — not a summary, not a PDF, with no form submission or multi-day processing delay.
Verified deletion
A delete-everything endpoint you can audit — an immediate, verifiable action with a timestamped receipt, not a promise of deletion within 90 days.
No training on your data
Your conversations are never used to improve the model for other users. Your vulnerability and intimacy are not a training contribution.
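To make the first property concrete, here is a minimal sketch of client-side AES-256-GCM encryption with a user-held key, using the Python cryptography package. It illustrates the technique in general, not any platform's actual implementation:

```python
# Hedged sketch of user-held-key, client-side encryption using
# AES-256-GCM via the `cryptography` package (pip install cryptography).
# Illustrates the property, not any platform's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stays on the user's device
aesgcm = AESGCM(key)

plaintext = b"I am worried about my health."
nonce = os.urandom(12)  # 96-bit nonce, unique per message
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only nonce + ciphertext ever leave the device; without `key`,
# the server (or anyone who compromises it) cannot decrypt.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```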
How does MEOK approach data sovereignty differently?
MEOK AI LABS was founded by Nicholas Templeman after observing the Replika 2023 incident and recognising that every major AI companion platform had made the same structural error: building the user relationship on infrastructure the user does not own.
| MEOK guarantee | Detail |
|---|---|
| Encryption standard | AES-256, user-generated keys |
| Data export | Full JSON, available any time |
| Training on your data | Never (Privacy Covenant) |
| Deletion | Immediate, verified endpoint |
MEOK's Privacy Covenant is an architectural commitment, not a policy document. On Pro and Sovereign tiers, memory is encrypted client-side before transmission — the server stores ciphertext it cannot read. MEOK cannot train on your conversations because the architecture physically prevents the server from seeing them in plaintext.
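What could a "verified" deletion endpoint look like in practice? One plausible shape is a signed, timestamped receipt you can check yourself. The sketch below uses invented field names and a simple HMAC scheme; it illustrates the idea and is not MEOK's documented API:

```python
# Hedged sketch: auditing a timestamped deletion receipt. The fields
# and HMAC scheme are invented for illustration, not a documented API.
import hashlib
import hmac
import json

def sign(payload: dict, secret: bytes) -> str:
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, blob, hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, secret: bytes) -> bool:
    payload = {"account": receipt["account"], "deleted_at": receipt["deleted_at"]}
    return hmac.compare_digest(sign(payload, secret), receipt["signature"])

# Example: a receipt a service might return after a delete request.
secret = b"secret-established-when-you-opened-the-account"
payload = {"account": "user@example.com", "deleted_at": "2026-03-01T12:00:00Z"}
receipt = {**payload, "signature": sign(payload, secret)}

print(verify_receipt(receipt, secret))  # True: the receipt checks out
```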
MEOK also provides multi-model portability: switch from Claude to GPT-4 or to a locally run model (available in the desktop OS release, Summer 2026) and your entire memory and companion configuration migrate with you. No other platform offers this. Your relationship is not bound to a single model vendor.
Why does AI data sovereignty matter more than general data privacy?
When you share data with a social media platform, you share what you chose to post publicly. When you share data with a search engine, you share your queries. When you share data with an AI companion, you share something categorically different: your fears, your relationships, your health concerns, your grief, your secrets. The intimacy of AI conversation data exceeds almost any other data category.
This intimacy creates three risks that do not exist in the same form elsewhere:
Psychological leverage
A company with access to your most vulnerable conversations has insight into your psychology that can be used for targeted advertising, dark pattern design, or coercive subscription retention.
Relationship discontinuity
As the Replika 2023 incident demonstrated, a single regulatory action or business pivot can instantly alter or delete the AI relationship you have invested in — with no warning and no recourse.
Training amplification
Your most personal conversations, if used for training, become embedded in a model used by millions of other people. Your private disclosures shape how an AI responds to strangers.
This is why MEOK frames data sovereignty not as a feature but as a foundational ethical requirement. The question is not whether you want privacy — it is whether the platform you trust with your inner life has structured itself so that sovereignty is the default, not an opt-in.
Can you delete your data from ChatGPT, Claude, or Replika right now?
Yes, but the process and completeness vary significantly. Here are the steps for each platform, based on their interfaces as of March 2026.
ChatGPT
- Opt out of training: Settings > Data Controls > toggle off "Improve the model for everyone"
- Delete individual conversations: hover a conversation in the sidebar > three-dot menu > Delete
- Export your data: Settings > Data Controls > Export data (JSON delivered to your email)
- Delete your account: Settings > Data Controls > Delete account
Note: Data already used in training cannot be removed from model weights after the fact.
Claude (Anthropic)
- Opt out of training: Claude.ai Account Settings > Privacy > disable "Use my data to improve Claude"
- Delete conversations: available via the conversation interface
- Request data export: submit a UK GDPR portability request to privacy@anthropic.com
- Delete your account: Account Settings > Delete account
Note: No one-click JSON export exists in the UI — a formal email request is required for data portability.
Replika
- Review data sharing: Privacy settings within the app
- Request deletion: email privacy@replika.ai or use in-app account deletion
- Data export: not available — Replika provides no structured JSON download
- Account deletion: Settings > My Account > Delete my account
Note: No structured export exists. This is a potential UK GDPR Article 20 compliance gap.
YOUR DATA. YOUR KEYS. YOUR AI.
Start with data sovereignty as the default
MEOK encrypts your memory with keys only you hold. Your conversations never train our models. Full JSON export and immediate deletion are available from day one on every tier.
Begin your sovereign AI
Free tier available. No credit card required. Delete everything at any time.