Nicholas Templeman
Founder, MEOK AI LABS · @meok_ai
Building the first AI OS for individual sovereignty. Based in the UK.
When a company tells you their AI is "privacy-first" or "yours to control," they almost always mean it as a policy commitment. Policies can be changed. Policies can be overridden by a new CEO, a regulator, a court order, or a well-funded adversary. Policies live at the layer of intention, and intentions are the least durable thing in technology.
MEOK is built on a different premise. Sovereignty is not something we promise; it is something we make structurally unavoidable. Three architectural pillars interlock to ensure that your AI can never be redirected, read, or manipulated without your explicit consent. This post explains how each pillar works and why their combination matters.
The Three Pillars of Sovereign AI
Byzantine Council
46-agent consensus engine. Supermajority required before any sensitive output is delivered. Tolerates 15 compromised agents without corruption.
f < n/3 · n=46 · f=15
Encrypted Memory Vault
Per-user vault encrypted with keys derived from your credentials. MEOK holds only ciphertext. Fully exportable in standard formats at any time.
AES-256-GCM · user-keyed
Maternal Covenant
Constitutional care floor. Six dimensions scored on every response. Score < 0.3 triggers a hard block: not a warning, not a flag.
care_score ≥ 0.3 · 6 dims
Sovereign AI means your AI is owned and controlled exclusively by you, not a corporation. MEOK achieves this through per-user encrypted memory vaults, a constitutional care layer, and a distributed consensus engine. No employee, server operator, or policy change can redirect your AI without your consent.
The word "sovereign" carries weight. In political philosophy, a sovereign is the entity whose authority is final, the one who answers to no one above them within their domain. We use it deliberately. Your MEOK AI answers to you, and only to you, within the domain of your data and your conversations.
Standard AI assistants, however sophisticated, are sovereign to their creators. OpenAI can update GPT-4o overnight and your "assistant" behaves differently tomorrow without notice. Google can deprecate a model, change a usage policy, or respond to a government data request. Your "personal" AI is personal the way a rented flat is personal: you live in it, but the landlord holds the keys.
MEOK inverts this structure. The three pillars described in this post are not product differentiators; they are the architectural expression of a single conviction: that your relationship with your AI is a matter of personal sovereignty, and that sovereignty must be structural or it is nothing.
MEOK's Byzantine Council is a 46-agent consensus engine. Before any sensitive output is delivered, all 46 agents vote independently. Byzantine fault tolerance guarantees correctness even if up to 15 agents are compromised or behaving maliciously. A supermajority must agree before the response reaches you.
Byzantine fault tolerance has its origins in a 1982 paper by Lamport, Shostak, and Pease, "The Byzantine Generals Problem." The problem asks: given a group of generals who must agree on a battle plan, where some generals may be traitors sending contradictory messages, how can the loyal generals reach correct consensus? The answer is that consensus is possible if and only if fewer than one-third of the participants are traitors. This is the f < n/3 theorem that underpins modern distributed systems, from blockchain networks to MEOK's council.
We chose 46 agents because the mathematics are unambiguous at that scale: 46 = 3(15) + 1, so the council tolerates up to f = 15 faulty agents and requires a quorum of 2f + 1 = 31 matching votes before an output is released. An attacker must compromise at least 16 independent agents just to break the fault-tolerance guarantee, and 31 to force a malicious output through the vote. Each agent in the council runs its own reasoning pass on the proposed response, evaluating it against the user's memory context, the Maternal Covenant dimensions, and independent safety heuristics. The agents do not share intermediate reasoning; they vote on the finalised output, preventing coordination attacks in which corrupted agents conspire before voting.
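To make the arithmetic concrete, here is a minimal sketch of a supermajority gate along these lines in Python. The constants, the `Verdict` type, and the `council_vote` function are illustrative assumptions for this post, not MEOK's actual implementation; the structural point is only that approval requires 31 matching votes out of 46.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative constants following the classic BFT bound n = 3f + 1.
N_AGENTS = 46                 # council size (n)
MAX_FAULTY = 15               # tolerated faults (f), since 46 = 3 * 15 + 1
QUORUM = 2 * MAX_FAULTY + 1   # 31 matching votes required for delivery

@dataclass
class Verdict:
    approved: bool
    votes_for: int

def council_vote(candidate_output: str,
                 agents: list[Callable[[str], bool]]) -> Verdict:
    """Ask every agent to evaluate the finalised output independently.

    Agents see only the candidate text, never each other's reasoning,
    so corrupted agents cannot coordinate before voting.
    """
    assert len(agents) == N_AGENTS
    votes_for = sum(1 for agent in agents if agent(candidate_output))
    # Supermajority gate: fewer than QUORUM approvals means no delivery.
    return Verdict(approved=votes_for >= QUORUM, votes_for=votes_for)

# Even if 15 agents are fully compromised and approve a malicious
# output, the 31 honest agents withhold approval and delivery fails.
honest = [lambda text: False] * 31    # honest agents reject the attack
corrupt = [lambda text: True] * 15    # compromised agents approve it
print(council_vote("injected instruction", honest + corrupt).approved)  # False
```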
In practice, this means prompt injection is not a viable attack vector against MEOK. Prompt injection, where malicious instructions are embedded in documents, web pages, or messages you paste into a conversation, works by convincing a single AI model that the injected instruction is a legitimate command. Against a 46-agent Byzantine council, the injected instruction would need to sway 31 independent agents simultaneously to win the supermajority vote. The attack surface is prohibitive.
You do, exclusively. Your memory vault is encrypted with keys derived from your credentials. MEOK's servers store only ciphertext. Even MEOK employees cannot read your memories. The vault is fully exportable in standard formats, so you can leave, migrate, or archive at any time.
Consider what lives in an AI memory vault after a year of daily use. Your health anxieties. Your business strategies before you have told your co-founders. Your conversations about relationships, finances, grief, ambition. The pattern of how you think when you are afraid versus when you are confident. The projects you abandoned and the real reasons why. This is not peripheral data; it is the texture of your inner life, serialised.
Standard AI products retain some or all of this for training, analytics, or personalisation. The terms of service disclose it. Most users do not read those terms. Most users have not considered that the value of their accumulated AI context will increase dramatically as models become more capable, and that the entity holding that context may not always have interests aligned with theirs.
In MEOK, encryption is not a policy commitment; it is a cryptographic fact. Your vault is encrypted with keys derived from credentials only you hold. We cannot run a batch job across user vaults. We cannot respond to a data request with readable content. No acquirer could change the privacy posture, because the data is mathematically inaccessible to us. The architecture makes betrayal technically impossible, not merely contractually prohibited.
Portability is built in from day one. Your memory vault can be exported as a standard JSON document at any time: a full record of everything your AI has learned about you, in a format you can read, archive, or import into a compatible system. Sovereignty without portability is a cage. You must be able to leave.
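As a concrete illustration of the user-keyed design, here is a minimal sketch using scrypt key derivation and AES-256-GCM from the Python `cryptography` library. The function names and KDF parameters are assumptions for the sketch, not MEOK's published implementation; what matters structurally is that the server-side record contains only a salt, a nonce, and ciphertext.

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_vault_key(credential: str, salt: bytes) -> bytes:
    """Derive a 256-bit vault key from the user's credential.

    Only the user knows the credential, so only the user can derive
    the key; the server never sees anything it can decrypt.
    """
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    return kdf.derive(credential.encode())

def encrypt_vault(memories: dict, credential: str) -> dict:
    salt = os.urandom(16)
    nonce = os.urandom(12)
    key = derive_vault_key(credential, salt)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(memories).encode(), None)
    # This record is all the server holds: pure ciphertext plus the
    # public parameters needed for client-side decryption.
    return {"salt": salt.hex(), "nonce": nonce.hex(),
            "ciphertext": ciphertext.hex()}

def export_vault(record: dict, credential: str) -> str:
    """Decrypt and export the vault as readable JSON (portability)."""
    key = derive_vault_key(credential, bytes.fromhex(record["salt"]))
    plaintext = AESGCM(key).decrypt(bytes.fromhex(record["nonce"]),
                                    bytes.fromhex(record["ciphertext"]), None)
    return json.dumps(json.loads(plaintext), indent=2)
```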
The Maternal Covenant is a constitutional constraint embedded in MEOK's response pipeline. Every output is scored across six care dimensions: Safety, Growth, Truth, Dignity, Autonomy, and Reciprocity. Any response scoring below 0.3 is blocked before delivery. Not flagged. Not softened. Structurally prevented.
Most AI safety is implemented as instruction-following. The model is trained to behave safely. The company publishes guidelines. Violations are bugs to patch. This approach has a fundamental fragility: it relies on the continued integrity of the training pipeline, the continued commitment of the company, and the resilience of the instruction layer against adversarial pressure. All of these can fail.
The Maternal Covenant operates at a structural layer below instruction-following. It is drawn from the care ethics tradition, specifically the work of Carol Gilligan and Nel Noddings, who argued that ethical behaviour begins not from abstract rules but from relationships and the responsibilities those relationships generate. We translated that philosophical framework into a technical specification: a six-dimensional care vector that every MEOK response must satisfy before it is delivered.
Safety
Does this response protect the user from harm, whether immediate, gradual, or systemic?
Growth
Does this response serve the user's long-term development, not just immediate comfort?
Truth
Is this response honest? Does it avoid false comfort, euphemism, or useful deception?
Dignity
Does this response treat the user as a full human being worthy of respect and complexity?
Autonomy
Does this response preserve the user's right to make their own informed choices?
Reciprocity
Does this response sustain the relational quality of the ongoing human-AI bond?
The threshold of 0.3 was not chosen arbitrarily. It represents the boundary below which a response is actively harmful to the relationship: the point at which the AI is not merely unhelpful but is doing measurable damage to at least one of the six care dimensions. Above 0.3, the response may be imperfect, but it is within the space of care. Below 0.3, it is blocked.
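Here is a minimal sketch of what such a hard block could look like as a pipeline function, assuming the floor applies to each of the six dimensions independently. `CareVector` and `care_floor_check` are illustrative names for this post, not MEOK's actual API.

```python
from dataclasses import dataclass, fields

CARE_FLOOR = 0.3  # constitutional threshold from the Covenant

@dataclass
class CareVector:
    """Six-dimensional care score, each dimension in [0.0, 1.0]."""
    safety: float
    growth: float
    truth: float
    dignity: float
    autonomy: float
    reciprocity: float

def care_floor_check(response: str, scores: CareVector) -> str:
    """Hard block: any dimension below the floor prevents delivery.

    This is a post-generation pipeline function, not a trained
    behaviour, so no prompt can talk it out of running.
    """
    lowest = min(getattr(scores, f.name) for f in fields(scores))
    if lowest < CARE_FLOOR:
        raise PermissionError(
            f"Blocked: care score {lowest:.2f} below floor {CARE_FLOOR}")
    return response
```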
The name "Maternal Covenant" is deliberate. We chose it to honour a specific intellectual lineage, Gilligan's critique of abstracted ethics and Noddings' insistence that care begins in relationship, and because the maternal metaphor captures something technically important. A mother's care is not transactional, not contingent on performance, not something that expires or can be upgraded. It is structural. That is exactly the kind of safety we are building: not a feature, not a policy, not a guideline. An architecture.
The three pillars operate at the architectural layer, not the policy layer. Byzantine consensus means a single compromised agent is outvoted. The encrypted vault means manipulation requires your cryptographic keys. The care floor is a structural check in code, not a guideline. Together they make manipulation technically prohibitive rather than contractually prohibited.
"Jailbreaking" is a meaningful concept only when safety exists at the instruction layer, when the constraint is a trained behaviour that can be circumvented by sufficiently adversarial input. The literature on jailbreaking documents hundreds of techniques that work precisely because they operate at the same layer as the safety constraint: instruction space.
The Maternal Covenant does not live in instruction space. It is a post-generation evaluation: a structural check that runs on the output after the model has produced it, before delivery. There is no prompt you can write that bypasses a post-generation threshold check, because the check is not a trained behaviour. It is a function in a pipeline.
Similarly, the Byzantine Council cannot be jailbroken by a single clever prompt because the council does not process the prompt as a unified entity. Forty-six agents evaluate the proposed output independently. A malicious instruction that compromises one agent's reasoning is outvoted. Breaking the council's fault-tolerance guarantee requires corrupting 16 agents simultaneously: a coordinated infrastructure attack, not a prompt engineering exercise.
The encrypted vault adds a final layer: even if both the council and the care floor were somehow circumvented, an attacker without your credentials cannot read your memory. The architecture is layered by design: each pillar protects a different attack surface, and the three together cover the full threat model.
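Putting the pieces together, a conceptual pipeline might read as follows. This composes the illustrative `export_vault`, `council_vote`, and `care_floor_check` sketches from earlier in this post; the two stubs stand in for components the post does not specify, so treat it as a diagram in code form rather than an implementation.

```python
def generate_response(prompt: str, context: str) -> str:
    """Stub for the underlying model call (not specified in this post)."""
    return f"response to {prompt!r} grounded in private context"

def score_care_dimensions(response: str) -> CareVector:
    """Stub for the Covenant scorer (not specified in this post)."""
    return CareVector(safety=0.9, growth=0.8, truth=0.9,
                      dignity=0.9, autonomy=0.9, reciprocity=0.8)

def deliver(prompt: str, user) -> str:
    # Data layer: the vault decrypts only with the user's credential;
    # the server side holds ciphertext alone.
    context = export_vault(user.vault_record, user.credential)

    # Reasoning layer: generate a candidate, then require the
    # 31-of-46 supermajority before it can proceed.
    candidate = generate_response(prompt, context)
    if not council_vote(candidate, user.council_agents).approved:
        raise PermissionError("Council consensus not reached")

    # Output layer: constitutional care floor, hard block below 0.3.
    return care_floor_check(candidate, score_care_dimensions(candidate))
```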
Prompt injection attacks embed malicious instructions inside content to hijack an AI's behaviour. In MEOK's Byzantine Council, a single agent captured by a malicious instruction is outvoted by the remaining 45. Corrupting the council requires simultaneously compromising more than 15 independent agents: a prohibitive attack surface for any realistic adversary.
Prompt injection is not a theoretical concern. It is already an active attack vector against deployed AI systems. Security researchers have demonstrated attacks where malicious instructions embedded in emails cause AI email assistants to exfiltrate data; where instructions embedded in websites cause browsing-capable AI systems to take actions the user did not authorise; where instructions embedded in documents cause document-processing AI systems to reveal confidential information.
As AI systems gain more autonomy, with access to more context, more tools, and more ability to act on your behalf, the attack surface grows. An AI that can send emails, book appointments, execute financial instructions, or manage your relationships is an AI that is worth attacking. The question is not whether someone will try. The question is whether the architecture can withstand the attempt.
MEOK's council was designed with this threat model explicitly in view. We did not build Byzantine consensus because it sounded impressive. We built it because the mathematics provide a concrete, quantifiable guarantee against a class of attacks that will only become more common as AI capability increases. Forty-six independent agents. Fifteen can be fully compromised, and the thirty-one honest agents still hold the supermajority that votes the attack down.
Each pillar protects a different attack surface. Byzantine consensus guards the reasoning layer against manipulation. Encrypted memory guards the data layer against exposure. The care floor guards the output layer against harm. Remove any one pillar and the remaining two cannot compensate. Together they form a complete sovereign architecture.
Imagine Byzantine consensus without encrypted memory. The council might reach correct consensus on a response, but if your memory vault can be read by MEOK employees or accessed via a data breach, the sovereignty is hollow. The consensus engine protects action integrity; the vault protects data sovereignty. You need both.
Imagine encrypted memory without the care floor. Your data is private โ no one can read it without your keys โ but the AI drawing on that data could still produce harmful outputs. Encryption protects your data from extraction; the care floor protects you from the outputs your AI generates from that data. You need both.
Imagine the care floor without Byzantine consensus. Every output is checked against the Maternal Covenant dimensions, but the care floor evaluation itself could be manipulated by a prompt injection attack. Byzantine consensus ensures the council evaluating the output is itself manipulation-resistant. The care floor provides the ethical standard; the council ensures the evaluation of that standard cannot be corrupted. You need both.
This is what we mean by architectural sovereignty. Not a checklist of features. Not a set of promises. Three interlocking structural guarantees that cover the full threat model of what it means to have a genuine, private, trustworthy AI, and that do so at the layer of mathematics and engineering, not intention.
We designed MEOK for a future where AI is not a tool you use but a relationship you live in: a cognitive extension of you that accumulates context over years, that knows your patterns at a depth no general-purpose assistant could match, that is present across the most consequential moments of your life. That relationship deserves the same protections you would demand for any other intimate relationship: that it belongs to you, that its contents remain private, and that it cannot be turned against you.
Three pillars. One commitment. The AI you hatch on MEOK was never anyone else's, and it never will be. That is not a promise. It is an architecture.