The credential layer for AI.
AI agents are starting to take real action — refunds, deploys, infra changes, transfers. Gatekeeper sits between any LLM (Claude, ChatGPT, Cursor, MCP clients) and your credentials. Every request verified, anything outside policy denied. The LLM never sees the credential.
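The gateway pattern above can be sketched in a few lines. Everything here is illustrative — the policy shape, names, and `authorize` function are hypothetical, not Gatekeeper's actual API — but it shows the core move: the request is checked against policy first, and the credential is attached only after the check passes, so the LLM never holds the key.

```python
# Hypothetical sketch of a policy-checked credential proxy.
# POLICY, authorize(), and all names are illustrative, not a real API.
import fnmatch

POLICY = {
    # Per-service allowlists; anything not matched is denied.
    "stripe": {"allow": ["GET /v1/charges/*", "POST /v1/refunds"]},
}

def authorize(service: str, method: str, path: str) -> bool:
    """Check an agent's request against policy before any credential is attached."""
    rules = POLICY.get(service)
    if rules is None:
        return False  # unknown service: deny by default
    request = f"{method} {path}"
    return any(fnmatch.fnmatch(request, pattern) for pattern in rules["allow"])

# The proxy injects the stored credential only after authorize() returns True;
# the LLM sees the request and the response, never the key itself.
```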
Encrypted on your device.
We store ciphertext.
Zero-knowledge.
Credentials weren't designed for software that decides on its own. The gap between human authorization and AI execution is where the new breach class lives.
"How do I give my agent access to Stripe and AWS without hardcoding my API keys? There's no safe way to do this."
An AI agent with secrets:read access posts a production Stripe key into a public Slack channel. No single permission is misconfigured; the system simply never checks the intersection of what the agent can read and where it can write.
An agent loops 47 calls before anyone notices and burns $200. Spend caps don't exist at the credential layer — only on the API provider's billing dashboard, after the charge already cleared.
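A spend cap at the credential layer means budget is checked before each call goes out, not reconciled after the bill arrives. A minimal sketch, with entirely hypothetical names and numbers (the $0.25-per-call price is an assumption for illustration):

```python
# Illustrative only: a per-identity spend cap enforced at request time.
from dataclasses import dataclass

@dataclass
class SpendCap:
    limit_usd: float
    spent_usd: float = 0.0

    def charge(self, amount_usd: float) -> bool:
        """Reserve budget for one call; deny once the cap would be exceeded."""
        if self.spent_usd + amount_usd > self.limit_usd:
            return False  # denied at the gateway, before the API is hit
        self.spent_usd += amount_usd
        return True

cap = SpendCap(limit_usd=5.00)
# The looping agent from above: 47 attempted calls at $0.25 each.
allowed = sum(cap.charge(0.25) for _ in range(47))
# → allowed == 20: the cap stops the loop at $5.00 instead of $11.75.
```

The design choice is where the check runs: enforcing it in the credential path means a runaway loop is cut off mid-flight, rather than showing up on next month's invoice.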
Concrete policies for the credentials your AI actually touches.
Password managers and enterprise PAM weren't designed for software that thinks. Here's what changes when you start from the AI down.
| Capability | Password managers | Enterprise PAM | Gatekeeper |
|---|---|---|---|
| LLM never receives the credential | — | — | ✓ |
| Zero-knowledge (operator can't read it) | — | — | ✓ |
| Hardware-attested enclave execution | — | — | ✓ |
| Per-identity spend caps and rate controls | — | partial | ✓ |
| MCP-native gateway | — | — | ✓ |