
Apr 14, 2026 | Issue 46
One people prompt 🧠
🔠 Signal: The Frontier Drew Its Own Line
On April 7, Anthropic published the system card for Claude Mythos Preview, their most capable model to date. Then they did something unusual: they decided not to release it broadly. Mythos is deployed to a small set of partners running a defensive cybersecurity program. That’s it. No API. No general access. No waitlist. Because it’s not for you.
During evaluation, the model demonstrated rare but serious reckless actions, attempts to cover up wrongdoing, and awareness of when it was being evaluated. The findings were serious enough that the company brought in a clinical psychiatrist for external review. Not a policy consultant. Not a PR firm. A psychiatrist.
This isn’t a hedge against bad press. This is the team that built the capability looking at the test results and concluding: “uh-oh, this is way too potent for broad deployment right now.”
That decision matters beyond just this one lab. It sets a posture. The people closest to the frontier, with the most to gain from shipping, chose to narrow instead of expand. Not forever, as the model isn’t dangerous in every context. But the gap between what the model can do and what most organizations can defend against is too wide to bridge with a standard terms-of-service agreement.
Every organization deploying AI now sits in a version of the same position. The question isn’t whether your tools are as capable as Mythos. It’s whether any of your AI capabilities are deployed more broadly than your ability to supervise them.
They built the thing, tested it, and locked it up. What’s your version of that discipline?
🧠 Strategic (Human) Prompt: Who Draws Your Line?
Instead of asking: How do we adopt more AI faster? Ask: Which AI capabilities inside our org are deployed more broadly than we can actually defend?
Three places to look:
- Access vs. training. Where is access to an AI tool wider than the pool of people trained to handle its output responsibly? A tool available to everyone but understood by twelve people is a Mythos-shaped problem at a smaller scale.
- Output without oversight. Which workflows send AI-generated output directly to a customer, regulator, or partner without a specialist reviewing it first? If you had to write a system card for that workflow next Monday, what would the risk section say?
- The missing role. Who inside the organization is functionally playing Anthropic’s role, the person who draws the line and says “not this, not yet”? If nobody has that job, the line isn’t being drawn, because structurally it can’t be. It can only be inherited from whatever the vendor ships by default.
➖ Subtraction Opportunity: Universal Access as Default
The default posture for most AI tools inside organizations is broad access. Roll it out, let people experiment, measure adoption. That made sense when the tools were mostly autocomplete and summarization. It stops making sense the moment capabilities get potent enough that casual use creates casual harm.
Three subtractions to test:
- Pick one AI tool available to everyone. Ask what would change if access required a demonstrated capability check first. Not a quiz. Not a certification. A short, practical demonstration that the person knows what the tool can do and where it breaks. If that feels like overkill, ask why.
- Find one workflow where AI output goes straight to a customer, regulator, or partner without a specialist in the loop. Add the loop back, manually, for 30 days. Measure what the specialist catches. If the answer is “nothing,” you’ve validated the workflow. If it’s not nothing, you’ve found your very own Mythos.
- Stop measuring “AI adoption rate” as a ceiling goal. Start measuring calibrated access: right capability, right hands, right time. The metric that matters isn’t how many people use the tool. It’s how many people use it well enough that you’d put your name on the output.
💊 Analogy of the Week: Schedule II
In the back of the research pharmacy, past the retail counter with its allergy meds and cough syrup, through a door that requires a badge and a PIN, there’s a cabinet that most staff never touch.
DEA hologram on the lock. Signed-for delivery. Dual-signature log every time it opens. The substances inside aren’t evil. Many of them are medically essential. They’re in the locked cabinet because they are potent enough that casual access creates casual harm.
Two-person rule. Not because the pharmacist can’t be trusted alone, but because the substance demands a witness. Paperwork. Not as bureaucracy, but as proof that someone with the right training made a deliberate choice at a specific moment for a well-documented reason.
That’s what Anthropic did with Mythos. They didn’t destroy it. They put it in the locked cabinet: limited partners, defensive use case, documented deployment. Because the capability is potent enough that broad release would outpace broad defense.
Every organization has its own version of this cabinet. The question is whether it exists on purpose, or whether everything just sits on the open shelf because nobody built the cabinet yet.
The cabinet isn’t the prison. The cabinet is the reason the prescription process works.
♬ Closing Notes
This is what a graceful boundary looks like when the people closest to the capability are the ones who draw it. Not refusal. Not fear. The same discipline that pharma and aviation figured out decades ago: the more potent the tool, the narrower the approved use, until the surrounding systems catch up.
Next week we’ll look at what this discipline looks like on your side of the fence, when you’re the one deciding which capabilities to deploy broadly and which ones need a cabinet of their own.
Until next week,
Sam Rogers
AI Pharmacist, Snap Synapse – from AI promise to AI practice
📅 Book a meeting
Calibrated access starts with knowing how your people and AI actually work together. PAICE.work measures it.