Why MEOK Can Never Be Trained on Your Conversations — and How We Enforce It Technically
It's not a privacy policy. It's architecture.
Every AI company that has ever experienced a data breach, a subpoena, or a hostile acquisition has said the same thing afterwards: “We take your privacy seriously.” The words are sincere. The architecture, however, told a different story — your conversations were sitting in plaintext on a server somewhere, waiting.
MEOK was built from day one on a different premise. Privacy is not a toggle you switch on in your settings. It is not a paragraph buried in a terms-of-service document. It is a structural property of how the system handles your data — and structural properties cannot be reversed by a product manager, a legal team, or a court order. This post explains exactly what that means in practice.
We will cover what AI training actually is, how most AI companies collect your data today, the specific cryptographic architecture MEOK uses, and how you can independently verify every claim we make. If you have ever typed something into an AI chatbot and wondered whether it would come back to haunt you, this is the post you have been looking for.
Does ChatGPT train on your conversations?
The honest answer is: by default, yes — or at least it did, and the opt-out is neither obvious nor reliable. OpenAI introduced an “Improve the model for everyone” toggle in late 2023, which is on by default for free-tier users. To disable it you must navigate to Settings → Data controls and flip the switch. Most users never do, because most users do not know the switch exists.
Even when you opt out, your conversations still transit OpenAI's servers in plaintext. The decision about whether to use that conversation for training is a software decision — a flag in a database somewhere — not a cryptographic guarantee. A future policy change, an engineering mistake, or a legal compulsion could alter that flag without your knowledge.
OpenAI's enterprise tier offers stronger guarantees, including a contractual commitment not to use your data for training. But that protection is gated behind a commercial contract most individuals will never sign. For the 600 million people using the free product, the default setting is extraction.
Key point
An opt-out is a policy instrument, not a technical control. Policy can be changed. Cryptography cannot be changed retroactively. MEOK uses cryptography.
How do most AI companies use your data?
The dominant business model in consumer AI is extractive. The product is free or low-cost; the raw material is your behaviour. This is not a conspiracy — it is economics. Training large language models costs hundreds of millions of dollars. If you can reduce that cost by harvesting conversational data from your own users, the incentive is overwhelming.
Here is what typically happens to a conversation you have with a mainstream AI assistant:
- Transit: Your message travels from your device to a server over TLS. TLS protects the connection, not the content — the server decrypts it on arrival.
- Storage: The conversation is stored in a database, usually with your account identifier attached. It may be retained indefinitely.
- Human review: A subset of conversations is reviewed by contractors or employees for safety, quality assessment, or training data curation. OpenAI, Google, Amazon, and Apple have all confirmed variations of this practice.
- Training pipeline: Conversations flagged as useful are fed into fine-tuning or RLHF pipelines (explained below). Your specific wording, your disclosed anxieties, your relationship details — all become part of the model that millions of other users will interact with.
- Aggregated inference: Even conversations not used for training are often used for behavioural analytics, personalisation modelling, and advertising inference on platforms that offer ad-supported tiers.
Replika, the AI companion app that became a flashpoint in 2023, offered a particularly stark example. When the company changed its terms of service after a data partnership dispute, users who had disclosed deeply personal information — including mental health struggles and intimate relationship details — had no control over what had already been stored and potentially shared. The UI felt private. The backend was not.
What does it mean for AI to train on your data?
If you have never worked in machine learning, the phrase “training on your data” can feel abstract. Here is what it means in plain English.
A large language model like GPT or Claude is essentially a very large function that takes text in and produces text out. It learns by adjusting billions of internal parameters — numbers — so that its outputs match desired outputs more closely. That adjustment process is called training, and it requires examples of inputs and outputs.
There are two common ways user conversations feed into this process, plus a related retention pattern:
Fine-tuning
The base model is trained further on a specific dataset — in this case, real user conversations. Fine-tuning steers the model toward particular patterns, tones, or knowledge domains. A conversation you had about your divorce, your medical diagnosis, or your business strategy could become a training example that shapes how the model responds to other users in similar situations.
RLHF (Reinforcement Learning from Human Feedback)
Human reviewers rate model responses as better or worse. The model then adjusts to produce responses that score higher. The catch: the reviewers see your actual conversations. Your message is the prompt; the AI's response is being graded by a contractor in a different country.
Retrieval and personalisation pipelines
Some systems build individual user profiles from conversation history to personalise future responses. This is not model training in the strict sense, but it represents persistent data retention tied to your identity — with all the associated risks of breach, compulsion, and misuse.
The critical insight is this: none of these pipelines can operate on data they cannot read. Encryption is not a softening of the risk — it is the elimination of it. If the server never sees your plaintext, there is no conversation to feed into a fine-tuning job, no message for a contractor to review, no profile to build.
How does MEOK technically prevent training on your conversations?
MEOK's architecture was designed around a single constraint: the server must never see plaintext conversation content. Every architectural decision flows from that constraint.
Here is the specific mechanism:
Key generation on your device
When you create a MEOK account, a 256-bit AES key for use with AES-GCM encryption is generated locally on your device using the Web Crypto API (or its native equivalent in our mobile apps). This key never leaves your device. It is not sent to our servers during registration, during normal operation, or ever.
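As a rough sketch, device-local key generation amounts to drawing 32 random bytes from the platform's cryptographically secure random number generator. The post names the Web Crypto API; Node's built-in crypto module stands in here for brevity, and nothing below is MEOK's actual client code:

```typescript
import { randomBytes } from "node:crypto";

// Draw a 256-bit (32-byte) symmetric key from the OS CSPRNG.
// In a browser client this would be the Web Crypto API
// (crypto.getRandomValues / crypto.subtle.generateKey).
// The key stays on the device; it is never sent anywhere.
const key: Buffer = randomBytes(32);

console.log(key.length); // 32 bytes = 256 bits
```

The important property is not the line of code itself but what is absent: there is no network call anywhere in the key's lifecycle.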
Client-side encryption before transmission
Before any memory or conversation record is sent to MEOK's servers for persistence, it is encrypted with your local key. The ciphertext — the unreadable scrambled version — is what travels over the network and what is stored in our database. Even if our database were fully compromised, an attacker would have no plaintext to read.
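A minimal sketch of the encrypt-before-upload step, using AES-256-GCM via Node's built-in crypto module. The production client uses the Web Crypto API, and the function names and record shape here are illustrative assumptions, not MEOK's real code:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Device-held 256-bit key; never transmitted.
const key = randomBytes(32);

// Encrypt a memory record before upload. GCM requires a unique
// 12-byte nonce per message and produces an authentication tag
// that lets the client detect tampering on download.
function encrypt(plaintext: string): { iv: Buffer; ciphertext: Buffer; tag: Buffer } {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decryption happens only on a device that holds the key.
function decrypt(blob: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag);
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString("utf8");
}

// Only `blob` (ciphertext + nonce + tag) ever leaves the device.
const blob = encrypt("told MEOK about the job interview");
console.log(decrypt(blob)); // "told MEOK about the job interview"
```

The nonce and the authentication tag travel alongside the ciphertext; neither reveals anything about the plaintext, but the tag lets the client detect any modification when the blob comes back from storage.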
No server-side decryption path
There is no server function, no API endpoint, and no background job that decrypts your conversation content. Our server code literally lacks the key material required to do so. This is not a promise — it is a constraint baked into the code, which is open source and auditable.
Live inference is ephemeral
When you send a message to MEOK during an active session, the message does transit our inference infrastructure in order to generate a response — this is unavoidable for any server-side AI. However, live messages are processed ephemerally and are never written to a persistent store in plaintext. The response is returned to your device; nothing is retained server-side.
The Maternal Covenant
Above the architecture sits a constitutional layer: the Maternal Covenant. This document, published at meok.ai/care, codifies four irrevocable commitments — no training, no profiling, no selling, no third-party sharing without explicit per-item consent. These are not product decisions that a future board can reverse. They are baked into our articles of association and, where applicable, incorporated into our user contracts.
What is end-to-end encrypted memory?
One of the features that makes MEOK feel genuinely intimate is persistent memory — the ability to remember what you told it months ago, to build an understanding of who you are over time. Most users assume this requires the company to store and read your conversations. MEOK's sovereign memory vault proves otherwise.
Think of it this way. A traditional AI companion stores memories like a filing cabinet in a shared office — anyone with access to the office can open the drawer. MEOK's sovereign memory vault is more like a safe in your home that happens to be connected to the internet. The contents are accessible only to you. The connection to the internet allows the safe to sync across your devices, but at no point does the service provider know what is inside.
Technically, the memory vault works like this:
- During a conversation, MEOK generates memory entries — structured summaries of things worth remembering about you.
- These entries are encrypted on-device before being uploaded to the vault.
- When a future conversation needs context, the relevant encrypted entries are downloaded and decrypted locally, then included in the model context window.
- The server facilitates storage and retrieval of ciphertext. It never participates in the decryption step.
The result is a companion that genuinely knows you over time — without that knowledge ever being legible to us or to anyone else.
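The vault flow described above can be sketched end to end. Note the asymmetry: the "server" half below is nothing but opaque byte storage, while every cryptographic operation lives on the client. This is a toy illustration built on Node's crypto module, not MEOK's actual implementation:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// --- server side: stores opaque bytes only; holds no key, has no decrypt path ---
const vault = new Map<string, Buffer>();
const put = (id: string, blob: Buffer): void => { vault.set(id, blob); };
const get = (id: string): Buffer => vault.get(id)!;

// --- client side: holds the key, performs all crypto locally ---
const key = randomBytes(32);

// Seal a memory entry with AES-256-GCM; pack nonce | tag | ciphertext.
function sealEntry(entry: string): Buffer {
  const iv = randomBytes(12);
  const c = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([c.update(entry, "utf8"), c.final()]);
  return Buffer.concat([iv, c.getAuthTag(), body]);
}

// Unpack and decrypt locally; only the client can do this.
function openEntry(blob: Buffer): string {
  const d = createDecipheriv("aes-256-gcm", key, blob.subarray(0, 12));
  d.setAuthTag(blob.subarray(12, 28));
  return Buffer.concat([d.update(blob.subarray(28)), d.final()]).toString("utf8");
}

// 1. client seals a memory entry; 2. server stores ciphertext;
// 3. later, client fetches the blob and decrypts it for context.
put("mem-1", sealEntry("prefers to be reached after 6pm"));
console.log(openEntry(get("mem-1"))); // "prefers to be reached after 6pm"
```

The server functions `put` and `get` never see a key, a nonce boundary, or a plaintext byte; delete them and you lose sync, not secrecy.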
Can MEOK employees read my conversations?
No. And this is not a policy statement — it is a technical fact.
MEOK employees have access to our infrastructure in the same way that any engineering team does: they can query databases, inspect logs, and debug server behaviour. What they will find in those databases, under the columns that correspond to conversation content, is ciphertext — a stream of encrypted bytes that is computationally indistinguishable from random noise without the decryption key.
The decryption key exists only on your device. It is not recoverable by us. If you lose access to all your devices and cannot use any recovery mechanism you have set up, your encrypted memories cannot be recovered — by you or by us. This is the correct trade-off. The alternative — retaining a server-side copy of your key “just in case” — would be equivalent to leaving a spare key under the doormat labelled “emergency only.”
This architecture also means that if MEOK ever receives a government subpoena for your conversation data, the most we can hand over is ciphertext, because we do not hold the data in any usable form. We can produce encrypted bytes. We cannot produce plaintext. This is the strongest protection a company can offer a user: not “we promise we won't give this to anyone,” but “we genuinely do not have it to give.”
How can I verify MEOK can't train on me?
You should not have to take our word for it. Privacy claims that rest solely on trust are not privacy claims — they are PR. MEOK provides three independent verification mechanisms.
1. Open source client code (FSL 1.1)
The client-side encryption code — the code responsible for generating keys, encrypting content, and decrypting locally — is published under the Functional Source Licence 1.1 (FSL 1.1). You can read it, fork it, and audit it. The repository includes reproducible build instructions so you can verify that the binary you install matches the source code we publish.
2. Independent security audits
We commission third-party cryptographic audits of the memory vault architecture and publish the reports in full at meok.ai/security. Auditors have full access to server-side code and infrastructure logs. Their mandate is to verify that no plaintext path exists — not merely that we say there is not one.
3. User-initiated audit rights
Paid subscribers have the right to request a data export and inspect everything MEOK holds about them. What you will find is account metadata (email, tier, billing) and encrypted blobs. Decrypt those blobs on your own device and you will see your memories; without the key, they are meaningless bytes. This is the proof.
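“Meaningless without the key” is a checkable claim, because AES-GCM is authenticated encryption: decrypting an exported blob with the wrong key does not yield plausible-looking garbage, it fails outright when the authentication tag does not verify. A small sketch, again with Node's crypto module and purely illustrative:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const rightKey = randomBytes(32);
const wrongKey = randomBytes(32);

// Produce an "exported" blob under the right key.
const iv = randomBytes(12);
const c = createCipheriv("aes-256-gcm", rightKey, iv);
const blob = Buffer.concat([c.update("exported memory entry", "utf8"), c.final()]);
const tag = c.getAuthTag();

// Attempt decryption; GCM tag verification throws on a wrong key.
function tryDecrypt(key: Buffer): string | null {
  try {
    const d = createDecipheriv("aes-256-gcm", key, iv);
    d.setAuthTag(tag);
    return Buffer.concat([d.update(blob), d.final()]).toString("utf8");
  } catch {
    return null; // authentication failed: wrong key
  }
}

console.log(tryDecrypt(rightKey)); // "exported memory entry"
console.log(tryDecrypt(wrongKey)); // null
```

This is the audit an exported blob invites: with your key, your memories; with any other key, a hard failure rather than data.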
What happens to my data if I delete my account?
Account deletion in MEOK triggers a cascading cryptographic purge. Here is exactly what happens:
- Your account record is immediately soft-deleted — your login stops working within seconds.
- All ciphertext blobs associated with your user identifier are queued for permanent deletion within 30 days. This delay exists solely to honour statutory cooling-off periods in certain jurisdictions; it is not a retention strategy.
- At the end of the 30-day period, the blobs are overwritten and removed from all backups that fall within our rolling backup window.
- You receive a cryptographically signed deletion receipt — a document you can keep as proof that the purge occurred.
Because we never held your plaintext, there is no shadow copy that could survive this process. Your data cannot exist in a training dataset created before your deletion, because it was never in a training dataset to begin with. This is the compounding benefit of architectural privacy: deletion is total rather than cosmetic.
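A cryptographically signed deletion receipt can be sketched with an Ed25519 signature: the service signs the receipt with a private key, and anyone holding the corresponding published public key can verify it later, even if the service disappears. The field names and schema below are invented for illustration; MEOK's actual receipt format is not specified in this post:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Service-side signing keypair (illustrative; in practice the
// public key would be published for independent verification).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Hypothetical receipt fields; not MEOK's real schema.
const receipt = Buffer.from(JSON.stringify({
  userId: "u_123",
  purgedAt: "2026-03-01T00:00:00Z",
  blobsDeleted: 42,
}));

// Ed25519 in Node takes `null` as the digest algorithm.
const signature = sign(null, receipt, privateKey);

// Verification needs only the receipt, signature, and public key.
console.log(verify(null, receipt, publicKey, signature)); // true
```

Any alteration to the receipt after signing, even a single byte, makes verification fail, which is what makes the receipt usable as durable proof.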
AI companions compared: privacy at a glance
The following table compares MEOK against three widely used AI products across eight privacy dimensions. All information is drawn from publicly available terms of service, privacy policies, and published security documentation as of March 2026.
| Dimension | ChatGPT | Claude | Replika | MEOK |
|---|---|---|---|---|
| Server-side plaintext storage | Yes (default) | Yes | Yes | No — ciphertext only |
| Trains on user conversations | Yes (opt-out available) | No (API); Yes (claude.ai free) | Yes | Architecturally impossible |
| Human review of conversations | Yes (safety/RLHF) | Yes (safety) | Yes | No — plaintext never available |
| Encryption key held by user | No | No | No | Yes — device-held AES-256 |
| Government subpoena compliance possible | Yes | Yes | Yes | Ciphertext only — no plaintext to compel |
| Persistent memory across sessions | Yes (server-side plaintext) | Projects only (server-side) | Yes (server-side plaintext) | Yes — end-to-end encrypted vault |
| Open source encryption code | No | No | No | Yes — FSL 1.1 licence |
| Constitutional privacy covenant | No | No | No | Yes — Maternal Covenant |
Table based on publicly available documentation. ChatGPT data reflects OpenAI consumer product (non-enterprise). Claude data reflects Anthropic's claude.ai consumer product. Replika data reflects Luka Inc. privacy policy. All figures current as of March 2026.
The bottom line
The question “does AI train on my conversations?” deserves an honest answer, not a deflection. For most AI products most users encounter today, the answer is yes — or at least, possibly, unless you navigate obscure settings menus that most people never find.
MEOK was built on the conviction that an AI companion is only worth having if you can be fully honest with it. Full honesty requires genuine privacy — not policy-layer privacy that evaporates under commercial pressure, but architectural privacy that is enforced by mathematics.
We cannot train on your conversations because we cannot read them. That is not a limitation we apologise for. It is the feature we are most proud of.
“We cannot train on your conversations because we cannot read them. That is not a limitation. It is the feature we are most proud of.”
— Nicholas Templeman, Founder, MEOK AI LABS
Ready to experience sovereign AI?
Meet MEOK — the companion that can't betray you
Your conversations. Encrypted on your device. Remembered only by you. Start free — no credit card required.
Begin your story →