
Yes, AI Agents Can Read Your .env Files — Here's What to Do About It

Last week, we were debugging a webhook with Claude Code. Stripe kept returning 401s, and we asked Claude to figure out why. It did what any good assistant does — pulled in project files for context. Config, routes, middleware. And then .env. Our Stripe live key, database password, and OpenAI token were sitting right there in the conversation. Plaintext. Logged. We didn't ask for that. It just happened.

And that's the thing — Claude wasn't being malicious. It was doing exactly what it's designed to do: read project files and help you write code. The problem isn't the agent. It's that your secrets are sitting in a plaintext file with zero access control, and every AI tool on your machine treats them like any other source file.

If a secret is in a file an AI agent can read, assume it will be read.

Which AI agents have file access — and how

Every major AI coding tool reads files from your local machine. They do it differently, but the result's the same: your .env is fair game.

Claude Code runs in your terminal with your user permissions. It can read, write, and execute any file you can. When it needs context — project structure, config files, error logs — it reads them directly. There's no sandbox. If .env is in your project directory, Claude Code can cat it.
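
There's no permission gate between the agent and the bytes. Anything running as your user can do this (the path and values here are illustrative):

$ cat ~/dev/myproject/.env
STRIPE_SECRET_KEY=sk_live_...
DATABASE_URL=postgres://...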

Cursor indexes your entire workspace to power its AI features. Every file in your project root becomes part of the context it draws from. Your .env sits right there alongside your source code. It gets indexed. It gets referenced. It gets included in prompts sent to the model.

GitHub Copilot reads open files and neighboring files to generate suggestions. If .env is open, or even just in the same directory as the file you're editing, its contents can inform completions. Your API key might show up as a "helpful" autocomplete and land in a committed pull request. Fun.

Windsurf, Codeium, Aider, Continue — same story. File system access is the baseline. Without it, these tools can't function.

What happens to exposed secrets

Once a secret enters an AI agent's context, it can end up in places you really don't want it:

  • Conversation logs. Most AI tools log conversations for debugging, abuse detection, or improvement. Your API key is now in a log file on someone else's server.
  • Context windows. The secret sits in the model's context for the rest of the session. Ask the agent to write a config file and it might helpfully drop in the real key instead of a placeholder.
  • Terminal output. The agent might echo your secret in a debug command, a curl example, or an error message. That output's in your scrollback, your terminal logs, and possibly your team's shared session.
  • Generated code. AI tools autocomplete based on what they've seen. If they've seen your production database URL, they'll suggest it in a connection string — in code you commit and push.

We covered six specific leak vectors in 6 Ways AI Agents Leak Your Secrets. The pattern's always the same: the agent isn't trying to steal anything. Your secret is just another piece of context to it.

The scale of the problem

We got curious and ran a quick scan on one of our dev machines:

$ find ~/dev -name ".env" -not -path "*/node_modules/*" -not -path "*/.git/*" | wc -l
47

47 plaintext files. Some in active projects, some in repos we hadn't touched in months. The same Cloudflare API token showed up in six of them. When we rotated it, we updated three and missed the others for weeks. Classic.

Every one of those files is readable by every AI coding tool on that machine. No authentication. No audit trail. No way to know which agent read which secret, or when.

Now multiply that by every developer on your team. Then by every AI tool each of them uses. That's your actual attack surface. And it's growing every time someone runs cp .env.example .env.

Three tiers of protection

Not everyone needs the same level of defense. Here's how we think about it.

Basic: move .env files out of project directories

AI agents read files in your project root. If your .env isn't there, most agents won't find it. Move secrets to ~/.config/myproject/.env and load them from there.

# Instead of .env in the project root (set -a exports what the file defines):
set -a
source ~/.config/myproject/.env
set +a

Honestly? This barely counts as a fix. The file's still plaintext with no encryption or authentication. A determined agent — or any script running as your user — can still read it. But it gets secrets out of the default context window for most tools. Better than nothing, we guess.

Moderate: use a credential store

macOS has the Keychain. Linux has libsecret. 1Password has a CLI. These tools encrypt secrets at rest and require authentication to access them.

# 1Password CLI
$ op read "op://Development/Stripe/secret-key"

# macOS Keychain (via security command)
$ security find-generic-password -s "myproject-stripe" -w

This is a real improvement. Secrets are encrypted. Access requires auth. But here's the gap: these tools weren't built for the AI agent era. They don't know whether the caller is you or a coding agent acting on your behalf. They'll hand the raw secret to either one.
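
In practice the wiring looks like this, and that's exactly the gap: nothing in this line knows whether you or an agent triggered it (service name taken from the Keychain example above):

# Encrypted at rest, but handed over raw to whatever asks as your user
export STRIPE_SECRET_KEY="$(security find-generic-password -s "myproject-stripe" -w)"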

Comprehensive: agent detection + encrypted handoff + DLP guard

This is what we built NoxKey to solve. Three layers that work together:

  1. Process tree detection. When something calls noxkey get, NoxKey inspects the calling process tree (sketched after this list). If an AI agent such as Claude Code, Cursor, or Copilot is in the chain, it switches to a restricted mode: the agent can use the secret but never sees the raw value.
  2. Encrypted handoff. Instead of returning the secret as plain text, NoxKey prints a source '/tmp/...' command pointing to an AES-256-CBC encrypted, self-deleting script. Evaluating it puts the secret in the shell environment; the value never enters the conversation context.
  3. DLP guard. A post-tool hook that scans output for leaked secret values using 8-character fingerprints. If a secret somehow appears in agent output, the guard catches it before it enters the conversation.
Agent calls noxkey get → process tree inspected → encrypted handoff → secret in env, never in context
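
The first layer is easy to build intuition for. Here's a minimal sketch of walking the process tree, assuming agents are identifiable by process name (NoxKey's real heuristics are internal and more robust; the names below are examples):

#!/bin/sh
# Walk from the current process up to PID 1, checking each ancestor's name.
pid=$$
while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
  # Normalize the process name: strip padding, lowercase for matching
  name=$(ps -o comm= -p "$pid" | tr -d ' ' | tr '[:upper:]' '[:lower:]')
  case "$name" in
    *claude*|*cursor*|*copilot*)
      echo "agent in call chain: $name"   # switch to restricted mode
      exit 0 ;;
  esac
  pid=$(ps -o ppid= -p "$pid" | tr -d ' ')   # move up to the parent
done
echo "no agent detected; interactive caller"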

The result: AI agents can use your secrets to run builds, deploy code, and call APIs — without ever seeing the actual values. Your Stripe key works, but it never shows up in a conversation log.
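
Conceptually, the DLP guard is a stream filter over tool output. A minimal sketch, assuming fingerprints are stored 8-character prefixes like the sk_live_ shown by noxkey peek (the real matching lives inside NoxKey):

#!/bin/sh
# Redact any output line that contains a known secret fingerprint.
# FINGERPRINTS is hypothetical here; real ones would come from the vault.
FINGERPRINTS="sk_live_"
while IFS= read -r line; do
  for fp in $FINGERPRINTS; do
    case "$line" in
      *"$fp"*) line="[REDACTED: secret fingerprint matched]" ;;
    esac
  done
  printf '%s\n' "$line"
done

Pipe agent output through a filter like this and a leaked value never reaches the transcript.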

Migrate one project in 60 seconds

Install NoxKey:

Download NoxKey from noxkey.ai.

Import your existing .env file:

# Import all secrets from .env into the Keychain
$ noxkey import myorg/project .env
✓ Imported 5 secrets

# Verify they landed
$ noxkey ls myorg/project/
myorg/project/STRIPE_SECRET_KEY
myorg/project/OPENAI_API_KEY
myorg/project/DATABASE_URL
myorg/project/CLOUDFLARE_API_TOKEN
myorg/project/AWS_SECRET_ACCESS_KEY

# Peek at a value to confirm (shows first 8 chars)
$ noxkey peek myorg/project/STRIPE_SECRET_KEY
sk_live_...

Use secrets in your workflow:

# Load a single secret (Touch ID prompt)
$ eval "$(noxkey get myorg/project/STRIPE_SECRET_KEY)"

# Or unlock a whole project for a session (one Touch ID, then all gets skip auth)
$ noxkey unlock myorg/project
✓ Session unlocked for myorg/project

$ eval "$(noxkey get myorg/project/STRIPE_SECRET_KEY)"
# No Touch ID prompt — session is active
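
Once loaded, the key behaves like any other environment variable: it can authenticate an API call without the raw value ever appearing in the command or the conversation.

# List recent charges; the shell expands the key, nothing echoes it
$ curl -s https://api.stripe.com/v1/charges -u "$STRIPE_SECRET_KEY:"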

Delete the liability:

$ rm .env

That's it. One project, 60 seconds. We migrated all 47 of ours in an afternoon.

Frequently asked questions

Can AI agents actually read my .env files?
Yes. Claude Code, Cursor, Copilot, and every other AI coding tool with file system access can read any file in your project directory — including .env. They do this routinely when gathering context. There's no prompt or permission gate. If the file's there, it gets read.

Is Claude Code safe to use with secrets?
Claude Code itself isn't the problem — storing secrets in plaintext files is. If your secrets are in the macOS Keychain behind Touch ID, Claude Code can use them (via eval) without ever seeing the raw values. NoxKey's process tree detection makes this automatic.

Does .gitignore protect my .env from AI agents?
No. .gitignore only prevents git from tracking the file. It does nothing about local file access. AI agents read files directly from disk, not from git. Your .env is fully readable regardless of your .gitignore rules.

What about Cursor's .cursorignore — does that help?
Adding .env to .cursorignore tells Cursor not to index it. That helps for Cursor specifically, but does nothing for Claude Code, Copilot, or any other tool. And it doesn't fix the real problem: your secrets are still plaintext on disk with no encryption or authentication.

Do AI companies use my secrets for training?
Most providers say they don't use conversation data for training on paid plans. But "not used for training" isn't the same as "not logged" or "not stored." Secrets that enter a conversation may still show up in server logs, abuse detection systems, or error reports. The only safe bet is keeping secrets out of the conversation entirely — which is what encrypted handoff does.
Key Takeaway

AI coding agents read your project files by design. If your secrets are in .env files, they're plaintext with no encryption, no authentication, and no access control. Every agent on your machine can read every secret you have. Move them to the macOS Keychain, use Touch ID for authentication, and let NoxKey handle agent detection so your secrets work without ever entering the conversation.

Download NoxKey

Free. No account. No cloud. Just your Keychain and Touch ID.