Systems Sunday · Agent Security

Secrets Management in Agent Environments

Published March 22, 2026 — 8 min read

TL;DR

AI agents need credentials to do anything useful — calling APIs, reading databases, writing to storage — but static API keys and hardcoded secrets are one of the most exploitable surfaces in agentic systems today. The 2025 OWASP Top 10 for LLMs explicitly calls out System Prompt Leakage (LLM07) as a confirmed incident pattern where embedded secrets are extracted via prompt injection. The safest path forward isn't better secrets hygiene — it's eliminating long-lived secrets entirely through dynamic credentials, workload identity, and scoped token vaults. This post lays out a practical, vendor-neutral architecture for secrets management in production agent environments.

Why Secrets Are Uniquely Dangerous in Agent Systems

In a traditional web app, credentials live server-side and users never get close to them. In an agentic system, the model itself processes inputs — including potentially attacker-controlled inputs from the web, documents, emails, or tool outputs. That changes the threat model entirely.

An agent that has an OPENAI_API_KEY sitting in its environment variables is one indirect prompt injection away from leaking it. Lakera's research on indirect prompt injection documents real incidents where agents executed harvested secrets without any user interaction. The attack vector: a malicious payload hidden in a document or webpage the agent reads, instructing it to exfiltrate environment variables.

Three compounding risks make agents a harder target than traditional services:

  1. Agents read untrusted content at runtime. They browse, parse files, call webhooks — any of that content can carry injected instructions.
  2. Secrets are often shared across agents or repos. As documented by Scalekit, hardcoded tokens and refresh tokens stored in environment variables are frequently shared across multiple agents without proper revocation or rotation.
  3. Blast radius is unbounded. If an agent is compromised, every service it has a credential for is compromised. One key → many surfaces.

The Static Secrets Anti-Pattern (And Why It Persists)

The most common secrets pattern in agent code looks like this:

import os
import openai

client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

This isn't wrong for local dev, but it's catastrophically wrong at scale: the key is long-lived, typically shared across agents and environments, scoped to everything the account can do, and sits in process memory where injected instructions can reach it.

The OWASP LLM01:2025 Prompt Injection guidance makes clear that as long as secrets live in the agent's environment, they're reachable by adversarial inputs.

The Three-Layer Architecture for Agent Secrets

A production-grade approach has three layers:

Layer 1 — Dynamic Secrets (Not Static Keys)

Instead of issuing a long-lived API key and rotating it periodically, generate short-lived credentials on demand that expire automatically.

HashiCorp Vault's dynamic secrets plugin for OpenAI demonstrates the pattern directly: Vault creates per-session OpenAI service accounts, issues temporary credentials, and handles rotation automatically. The agent never holds a credential that outlives its task.

Key tools for dynamic secrets include HashiCorp Vault (dynamic secrets engines, including the OpenAI plugin above), AWS Secrets Manager (with native rotation if you're AWS-based), and Infisical (open source, with agent-oriented tooling).

The rotation window should match task duration, not a calendar schedule. A task that runs in 30 seconds should have a credential that expires in 5 minutes. Not 90 days.
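The TTL-follows-the-task rule can be sketched in a few lines of Python. This is a minimal illustration, not a Vault client: `issue_credential`, the 60-second pad, and the placeholder key value are all assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class Credential:
    value: str
    expires_at: float  # epoch seconds

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

def issue_credential(task_timeout_s: float, pad_s: float = 60.0) -> Credential:
    # Hypothetical issuer: in production this call would hit a dynamic
    # secrets engine (e.g. Vault) and return a lease. The point is the
    # TTL math: lifetime tracks the task plus a small pad, never a
    # calendar schedule.
    ttl_s = task_timeout_s + pad_s
    return Credential(value="sk-ephemeral-example",
                      expires_at=time.time() + ttl_s)

# A 30-second task gets a credential that dies in roughly 90 seconds.
cred = issue_credential(task_timeout_s=30)
```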

Layer 2 — Workload Identity (Not API Keys at All)

The deeper answer — increasingly the production standard — is to eliminate API keys as the authentication primitive entirely.

SPIFFE (Secure Production Identity Framework for Everyone) provides a universal framework for workload identification. Instead of an agent authenticating with a shared secret, it authenticates with a cryptographically signed X.509 SVID (SPIFFE Verifiable Identity Document), a short-lived certificate that proves what the workload is, not a password it knows.

Paired with SPIRE (the SPIFFE Runtime Environment) and Vault, this enables automatic issuance and rotation of short-lived SVIDs, authentication to the secrets store without a bootstrap secret, and access policy keyed to workload identity rather than to a key string.

Platforms like Aembit abstract this further — providing secretless access and real-time policy enforcement across environments without requiring agents to manage credentials at all.
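The shift from shared secrets to identity can be seen in a small sketch. This is not the py-spiffe API: in a real deployment the SPIFFE ID comes from the URI SAN of an X.509 SVID presented over mTLS (via SPIRE). Here we assume the ID string has already been extracted and just check it against a simple allow-policy.

```python
from urllib.parse import urlparse

def is_authorized(spiffe_id: str,
                  allowed_trust_domain: str,
                  allowed_path_prefix: str) -> bool:
    # A SPIFFE ID looks like spiffe://<trust-domain>/<workload-path>.
    # Authorization compares the proven identity against policy;
    # no shared secret is involved anywhere in the check.
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        return False
    return (parsed.netloc == allowed_trust_domain
            and parsed.path.startswith(allowed_path_prefix))

# e.g. allow any agent workload in the prod trust domain
ok = is_authorized("spiffe://prod.example.com/agents/researcher",
                   "prod.example.com", "/agents/")
```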

Layer 3 — Token Vaults for OAuth and Third-Party Credentials

Agents increasingly need to act on behalf of users — accessing Gmail, Slack, GitHub, Salesforce. OAuth tokens for these integrations have their own problem: they're long-lived, broad-scoped, and routinely stored as static strings.

A token vault pattern (documented by Scalekit) works as follows:

  1. Tokens are stored in a secure vault, never in agent memory or environment
  2. The agent requests a handle to use the token for a specific operation
  3. The vault proxies the API call, with the agent never seeing the raw token
  4. Scopes are enforced at the vault layer — an agent requesting Gmail read access can't perform Gmail delete even if instructed to

This architecture also enables full audit logging of every credential use, per-agent attribution, and revocation without code changes.
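The four steps above can be sketched as a toy vault. Everything here (class name, handle format, scope strings) is illustrative; a real token vault is a separate service that proxies the API call, not an in-process object.

```python
class TokenVault:
    """Minimal sketch: stores raw tokens, hands agents opaque handles,
    and enforces scopes when executing an operation on their behalf."""

    def __init__(self):
        self._tokens = {}    # handle -> (raw_token, allowed_scopes)
        self._counter = 0
        self.audit_log = []  # every use (granted or denied) is attributable

    def store(self, raw_token: str, scopes: set) -> str:
        self._counter += 1
        handle = f"handle-{self._counter}"  # opaque; reveals nothing
        self._tokens[handle] = (raw_token, scopes)
        return handle

    def execute(self, handle: str, scope: str, operation):
        raw, allowed = self._tokens[handle]
        self.audit_log.append((handle, scope))  # log even denied attempts
        if scope not in allowed:
            raise PermissionError(f"{scope} not granted to {handle}")
        return operation(raw)  # the agent never sees `raw`

vault = TokenVault()
h = vault.store("ya29.example-raw-token", {"gmail.read"})
result = vault.execute(h, "gmail.read", lambda tok: f"fetched 3 msgs ({tok[:4]}...)")
```

The scope check at the vault layer is what makes step 4 hold: an agent instructed (or injected) to delete mail fails at `execute`, and the attempt still lands in the audit log.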

Scoping Rules: Least Privilege Per Agent Per Task

Credential scope is as important as credential storage. The rule: scope and rotate secrets per environment and per capability, not one broad key per agent. A dev credential should never work in prod, and a read credential should never write.

A practical scoping matrix (the scopes and lifetimes here are illustrative):

  Environment | Capability | Scope                  | Lifetime
  dev         | LLM calls  | per-developer key      | 24 hours
  staging     | DB read    | read-only role         | 1 hour
  prod        | DB read    | read-only role         | task duration + 5 min
  prod        | DB write   | single table, audited  | task duration + 5 min

This isn't just security — it's operational clarity. Narrow scopes mean that when an agent behaves unexpectedly, the blast radius is bounded by design.

For LLM provider credentials specifically: HashiCorp Vault's validated pattern for AI agent identity recommends static roles with rotation for stable agents, and dynamic roles for ephemeral task agents. The key distinction: agents that run 24/7 vs. agents that fire once per request should have different credential lifecycles.
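That distinction can be expressed as a small policy function. The role names and TTLs below are illustrative assumptions, not Vault's configuration schema.

```python
def credential_policy(agent_kind: str) -> dict:
    # Two lifecycles, per the pattern described above:
    # long-running agents get a static role with scheduled rotation;
    # ephemeral task agents get per-request dynamic credentials.
    if agent_kind == "long_running":
        return {"role": "static", "rotate_every_s": 24 * 3600}
    if agent_kind == "ephemeral":
        return {"role": "dynamic", "ttl_s": 300}
    raise ValueError(f"unknown agent kind: {agent_kind}")

policy = credential_policy("ephemeral")
```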

Audit Logging and Anomaly Detection

A secrets management system that doesn't emit structured logs is half-built.

Every credential issuance, use, and revocation should be logged with the requesting agent's identity, the credential or lease ID, the target service and operation, and a timestamp.

This data feeds two critical ops use cases:

  1. Cost attribution — knowing which agent called which API, at what volume, with what credentials
  2. Anomaly detection — a credential used 10,000 times in an hour when the agent normally calls 50 times is a signal, not noise

Structured secret logs should flow into whatever observability stack you're already running (Datadog, Grafana, OpenTelemetry-compatible pipelines). Don't build a separate audit system — extend the one you have.
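A minimal sketch of the rate-anomaly check, assuming log records have already been reduced to (credential, hour-bucket) pairs; the 10x factor and the baselines are illustrative.

```python
from collections import Counter

def flag_anomalies(events, baseline_per_hour, factor=10.0):
    """events: iterable of (credential_id, hour_bucket) log records.
    Flags any credential whose hourly use exceeds `factor` times its
    normal baseline, e.g. thousands of calls/hour vs a baseline of 50."""
    counts = Counter(events)
    flagged = []
    for (cred, hour), n in counts.items():
        if n > factor * baseline_per_hour.get(cred, 1):
            flagged.append((cred, hour, n))
    return flagged

# agent-a spikes to 600 calls in one hour against a baseline of 50;
# agent-b stays within its normal range.
events = ([("agent-a-key", "2026-03-22T10")] * 600
          + [("agent-b-key", "2026-03-22T10")] * 40)
anomalies = flag_anomalies(events, {"agent-a-key": 50, "agent-b-key": 50})
```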

Handling Secrets in Prompts (Don't)

One failure mode worth calling out explicitly: secrets that end up in prompt text.

This happens when system prompts embed API keys directly, when tool outputs echo environment variables back into the context window, or when debug traces and config files get pasted into a prompt.

The mitigations: never template secrets into prompt text, give the model opaque handles rather than raw values, and resolve those handles to real credentials only at tool-execution time, outside the context window.

The OWASP LLM07:2025 System Prompt Leakage entry makes clear this is an active, exploited vulnerability — not a theoretical concern.
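The opaque-handle mitigation can be sketched as a resolver that runs after the model's output leaves the context window. The `{{secret:name}}` handle syntax and the `SECRETS` store are assumptions for illustration.

```python
import re

# Raw values live outside the LLM context (in practice: a vault lookup,
# not an in-process dict).
SECRETS = {"github_token": "ghp_realsecret123"}

def resolve_handles(command: str) -> str:
    # The model only ever sees and produces opaque handles like
    # {{secret:github_token}}; substitution happens here, at execution
    # time, so the raw value never enters the context window.
    return re.sub(r"\{\{secret:(\w+)\}\}",
                  lambda m: SECRETS[m.group(1)], command)

llm_output = ("curl -H 'Authorization: token {{secret:github_token}}' "
              "https://api.github.com/user")
executable = resolve_handles(llm_output)
```

Even if an injected instruction convinces the model to repeat its context verbatim, all an attacker recovers is the handle string, which is useless without access to the resolver.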

Practical Runbook: Secrets Audit for an Agent System

Before the next deploy, run through this checklist:

Inventory

  1. Map every credential each agent uses, where it is stored, and its TTL
  2. Flag any secret that appears in code, prompt templates, or environment variables

Rotate

  1. Revoke and reissue any static key that hasn't rotated in 90 days
  2. Replace calendar rotation with task-scoped or daily credential lifecycles

Harden

  1. Move remaining secrets into a vault and hand agents opaque handles instead of raw values
  2. Narrow every credential's scope to the minimum its tasks require

Monitor

  1. Emit a structured log line for every issuance, use, and revocation
  2. Alert when a credential's use rate departs from the agent's baseline
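The inventory pass of this runbook can be partially automated. A minimal sketch using name heuristics only (the pattern list is an assumption; a real audit also scans code, config files, and CI settings):

```python
import os
import re

# Heuristic: env var names that usually hold static credentials.
KEY_PATTERN = re.compile(r"(API_KEY|SECRET|TOKEN|PASSWORD)", re.IGNORECASE)

def inventory_env_secrets(env=None):
    # First pass of the Inventory step: list every environment variable
    # whose name looks like a credential, so each can be traced to an
    # owner, a scope, and a TTL.
    env = os.environ if env is None else env
    return sorted(name for name in env if KEY_PATTERN.search(name))

found = inventory_env_secrets({"OPENAI_API_KEY": "sk-...",
                               "HOME": "/root",
                               "SLACK_BOT_TOKEN": "xoxb-..."})
```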

The Horizon: Secretless Agent Networks

The direction the industry is heading is clear: agents that never hold credentials at all. Workload identity, just-in-time provisioning, and hardware attestation (TPM-based agent identity) are converging into a model where an agent's identity is cryptographically proven by what it is, not what it knows.

The Security Boulevard piece on escaping secrets hell frames it well: secrets don't scale when every agent, every environment, and every integration needs its own credential set. Workload identity scales because the same framework that identifies a Kubernetes pod can identify an LLM agent process — and issue it time-bounded credentials without a human ever touching a secret.

The concrete next step: audit one agent in your stack this week. Map every credential it uses, how it stores them, and what the TTL is. You'll find at least one static, broad-scope key that should not exist in production. Fix that one first.

FAQ

What's the biggest secrets management mistake in AI agent deployments?

Hardcoding API keys or storing them as long-lived environment variables shared across multiple agents. When any one of those agents is compromised via prompt injection or a code vulnerability, every service that key touches is exposed. The fix is dynamic credential issuance — short-lived keys generated on demand for each agent session, scoped to the minimum permissions needed.

Should I use HashiCorp Vault, AWS Secrets Manager, or Infisical for agent secrets?

All three are solid choices and the decision is mostly driven by your existing infrastructure. HashiCorp Vault is the most flexible and supports OpenAI dynamic secrets directly; AWS Secrets Manager integrates natively if you're AWS-native; Infisical is the best open-source option with strong agent-first tooling. The more important choice is using one at all — the vendor matters less than moving off static environment variables.

What is SPIFFE and why does it matter for AI agents?

SPIFFE (Secure Production Identity Framework for Everyone) is an open standard that gives workloads — including AI agents — cryptographically verifiable identities using short-lived X.509 certificates instead of passwords or API keys. Rather than an agent authenticating with a shared secret, it authenticates by proving what it is via its SVID certificate. This eliminates the "what if the key leaks?" problem because there's no long-lived key to leak.

Can prompt injection actually steal secrets from an AI agent?

Yes, and it has happened in production. The attack vector is indirect prompt injection: an attacker embeds instructions in content the agent reads (a webpage, document, or tool output) telling it to exfiltrate environment variables or API keys. OWASP LLM07:2025 (System Prompt Leakage) documents this as a confirmed real-world pattern. The best defense is ensuring secrets never appear in the agent's context — use opaque handles at the LLM layer and resolve credentials only at execution time.

What's a token vault and when do I need one?

A token vault is a secure service that stores OAuth tokens and other third-party credentials on behalf of agents, never exposing the raw token to the agent itself. Instead, the agent requests an operation ("send this email") and the vault proxies the API call. You need one as soon as your agents are acting on behalf of users — accessing Gmail, Slack, Salesforce, GitHub, etc. — because OAuth tokens stored as strings are a major exfiltration target.

How often should agent credentials be rotated?

The answer depends on the agent's task duration: credentials should be scoped to the task, not rotated on a calendar. An ephemeral agent that runs for 30 seconds should have a credential that expires in minutes. A long-running 24/7 agent should have credentials that rotate daily or more frequently, with alerts if the rotation fails. Any static credential that hasn't been rotated in 90 days should be considered compromised until proven otherwise.