In early 2025, a supply chain attack hit several popular GitHub Actions — including actions in the Trivy/Aqua Security ecosystem — by injecting code that printed runner process environments and the GITHUB_ENV file to workflow logs. Any secrets accessible to those workflows were exfiltrated to the attacker. AWS credentials, npm tokens, Artifactory passwords, database connection strings — everything that had been set as an environment variable or loaded from GitHub Secrets.
The affected organizations scrambled to rotate credentials. Some rotated within hours. Some took days. During that window, the attacker had live credentials with full access.
Here's the thing: if those credentials had been dynamic — generated on demand, scoped to a single workflow run, with a 1-hour TTL — they would have expired long before most teams knew there was an incident. The attacker exfiltrates a token that's valid for 59 more minutes, not indefinitely.
That's the dynamic secrets pitch, and it's not theoretical.
What static secrets cost you
Static secrets are credentials that exist independently of their use: a database password you set once and store forever, an AWS access key you copy into a .env file, an npm token you paste into GitHub Secrets. They have two problems:
They compound over time. Every system that needs the secret gets a copy. GitHub Secrets, CI runners, developer laptops, Kubernetes secrets, config files, deployment scripts. When the secret leaks, all of those copies are compromised simultaneously. Rotation means finding every copy and updating it — which almost never happens completely.
Their blast radius is unbounded. A static database credential has no expiry. If an attacker gets it in January and you don't discover the breach until April, they've had three months of database access. The credential doesn't know it was stolen.
What dynamic secrets are
A dynamic secret is generated on demand, scoped to a specific requestor, and has a hard expiry (TTL) after which it's automatically revoked. When your CI pipeline needs database access:
- The pipeline authenticates to Vault using an identity it already has (a Kubernetes service account token, a GitHub Actions OIDC token, an AppRole credential)
- Vault generates a unique database user with a random password, valid for 1 hour
- The pipeline uses those credentials for the job
- When the TTL expires, Vault drops the database user and the credentials stop working — regardless of whether anyone knew they leaked
Nobody stores the database password. It never goes into GitHub Secrets, never touches a developer's laptop, never appears in a config file. Each pipeline run gets a fresh credential. The attacker who exfiltrated it has a ticking clock, not a standing invitation.
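To make the lifecycle concrete, here's a toy sketch in Python — a simulation of the issue-then-expire flow, not Vault's implementation. The broker class and names are invented for illustration:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Lease:
    username: str
    password: str
    expires_at: float

class DynamicCredentialBroker:
    """Toy model of the Vault flow: each request gets a unique,
    randomly generated credential with a hard TTL; expired leases
    are revoked (dropped) on the next validity check."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.leases = {}

    def issue(self, requestor):
        # A fresh user and password per request — nothing is reused.
        lease = Lease(
            username=f"v-{requestor}-{secrets.token_hex(4)}",
            password=secrets.token_urlsafe(16),
            expires_at=time.monotonic() + self.ttl,
        )
        self.leases[lease.username] = lease
        return lease

    def is_valid(self, username):
        lease = self.leases.get(username)
        if lease is None or time.monotonic() >= lease.expires_at:
            self.leases.pop(username, None)  # revoke: drop the user
            return False
        return True

broker = DynamicCredentialBroker(ttl_seconds=0.05)
lease = broker.issue("my-app-prod")
print(broker.is_valid(lease.username))  # fresh lease: True
time.sleep(0.1)
print(broker.is_valid(lease.username))  # past TTL: False
```

The point of the sketch is the last two lines: validity is a property of time, not of whether anyone noticed a leak.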
HashiCorp Vault
Vault is the most widely deployed open source secrets management platform. It has two relevant features for this pattern:
Secret engines — backends that generate dynamic credentials for specific systems. Database credentials, AWS IAM credentials, GCP OAuth tokens, PKI certificates, SSH signed certificates. The secret engine connects to the target system, generates a time-limited credential, and tracks the lease.
Auth methods — how callers authenticate to get a Vault token before they can request secrets. Kubernetes (pod service account JWT), JWT/OIDC (GitHub Actions tokens), AppRole (machine-to-machine), GitHub (personal access tokens for humans), cloud IAM (AWS, GCP, Azure instance identity).
The combination: GitHub Actions authenticates to Vault with its OIDC token, gets a Vault token scoped by policy to what that repo is allowed to request, uses the token to get a dynamic database credential, uses the credential, credential expires.
Self-hosted vs HCP Vault
You can run Vault yourself (open source, free) or use HCP Vault (HashiCorp's managed offering). Self-hosted gives you full control but you own the HA setup, storage backend, and unsealing. HCP Vault manages the infrastructure. For most teams, HCP Vault is the right starting point — the operational complexity of running Vault reliably is significant.
Setting up Vault for GitHub Actions
The JWT auth method
Vault's JWT auth method works with GitHub's OIDC tokens the same way GCP Workload Identity Federation does. Configure it with GitHub's JWKS endpoint and define role bindings that map claim conditions to Vault policies.
# Enable the JWT auth method
vault auth enable jwt

# Configure it to trust GitHub's OIDC tokens
vault write auth/jwt/config \
    oidc_discovery_url="https://token.actions.githubusercontent.com" \
    bound_issuer="https://token.actions.githubusercontent.com"
Define a role that maps a specific repository and environment to a Vault policy:
vault write auth/jwt/role/my-app-prod \
    role_type="jwt" \
    bound_audiences="https://vault.mycompany.com" \
    user_claim="sub" \
    bound_claims_type="glob" \
    bound_claims='{
      "sub": "repo:myorg/my-app:environment:prod-deploy",
      "repository_owner_id": "987654321"
    }' \
    policies="my-app-prod" \
    ttl="1h"
The bound_claims here restrict which tokens can assume this role — same principle as GCP WIF attribute conditions. Only tokens with the matching sub claim (specific repo + environment) and repository_owner_id can get the my-app-prod Vault token.
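The matching principle is easy to model. Below is a simplified Python sketch of the claim check using glob matching — an illustration of the idea, not Vault's actual JWT plugin logic; the claim values are the example ones from above:

```python
from fnmatch import fnmatchcase

# Claims from a (hypothetical) GitHub Actions OIDC token.
token_claims = {
    "sub": "repo:myorg/my-app:environment:prod-deploy",
    "repository_owner_id": "987654321",
}

# The role's bound_claims, interpreted as globs (bound_claims_type="glob").
bound_claims = {
    "sub": "repo:myorg/my-app:environment:prod-deploy",
    "repository_owner_id": "987654321",
}

def claims_match(token, bound):
    # Every bound claim must be present and glob-match the token's value.
    return all(
        key in token and fnmatchcase(token[key], pattern)
        for key, pattern in bound.items()
    )

print(claims_match(token_claims, bound_claims))  # True

# A token minted for another repo fails, even with a matching owner ID.
forked = dict(token_claims, sub="repo:attacker/my-app:environment:prod-deploy")
print(claims_match(forked, bound_claims))  # False
```

Pinning `repository_owner_id` (a numeric ID, not a name) also defends against the case where an org name is deleted and re-registered by someone else.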
The workflow side
permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: prod-deploy
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/vault-action@v3
        id: secrets
        with:
          url: https://vault.mycompany.com
          role: my-app-prod
          method: jwt
          jwtGithubAudience: https://vault.mycompany.com
          secrets: |
            database/creds/my-app-prod username | DB_USERNAME ;
            database/creds/my-app-prod password | DB_PASSWORD ;
            aws/creds/my-app-prod access_key | AWS_ACCESS_KEY_ID ;
            aws/creds/my-app-prod secret_key | AWS_SECRET_ACCESS_KEY

      - run: ./deploy.sh
        env:
          DB_HOST: ${{ vars.DB_HOST }}
          DB_USERNAME: ${{ steps.secrets.outputs.DB_USERNAME }}
          DB_PASSWORD: ${{ steps.secrets.outputs.DB_PASSWORD }}
hashicorp/vault-action handles the OIDC token exchange and returns the dynamic credentials as step outputs. The credentials exist for the duration of the job's lease TTL. When the TTL expires, the database user is dropped and the AWS credentials are revoked.
Dynamic database credentials
The database secrets engine is the one I implement most often. It maintains a connection to your database, and when a credential is requested it creates a new user with a randomized password and a bounded lease.
Setup on GCP Cloud SQL (PostgreSQL)
# Enable the database secrets engine
vault secrets enable database

# Configure a connection to Cloud SQL via the Cloud SQL Auth Proxy
vault write database/config/my-app-prod \
    plugin_name="postgresql-database-plugin" \
    connection_url="postgresql://{{username}}:{{password}}@127.0.0.1:5432/myapp" \
    allowed_roles="my-app-prod,my-app-readonly" \
    username="vault-admin" \
    password="initial-password-rotated-after-this"

# Have Vault rotate the root credential immediately (so nobody knows it)
vault write -force database/rotate-root/my-app-prod
That last command is important. Vault generates a new random password for the vault-admin user and stores it internally. Nobody — not even the person who set this up — knows the current vault-admin password. The only way to get database credentials is through Vault.
Define the roles (SQL statement templates Vault uses to create ephemeral users):
# Read-write role for application deploys — 1 hour TTL
vault write database/roles/my-app-prod \
    db_name="my-app-prod" \
    creation_statements="
      CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
      GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";
      GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";
    " \
    revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
    default_ttl="1h" \
    max_ttl="4h"

# Read-only role for CI test runs — 30 minute TTL
vault write database/roles/my-app-readonly \
    db_name="my-app-prod" \
    creation_statements="
      CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
      GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";
    " \
    revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
    default_ttl="30m" \
    max_ttl="1h"
The {{name}}, {{password}}, and {{expiration}} placeholders are filled by Vault at issuance time. Every pipeline run gets a credential like v-jwt-my-app-prod-xK2mNq9L that exists for exactly 1 hour and is then dropped.
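You can see what Vault effectively executes by filling the placeholders yourself. A sketch with illustrative stand-in values — Vault generates its own username format, password, and expiry:

```python
import secrets
from datetime import datetime, timedelta, timezone

# The creation_statements template, as configured on the Vault role.
template = """
CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}";
"""

# Stand-ins for the values Vault generates at issuance time.
values = {
    "name": f"v-jwt-my-app-prod-{secrets.token_hex(4)}",
    "password": secrets.token_urlsafe(24),
    "expiration": (datetime.now(timezone.utc) + timedelta(hours=1))
                  .strftime("%Y-%m-%d %H:%M:%S+00"),
}

# Substitute each {{placeholder}} with its generated value.
sql = template
for key, value in values.items():
    sql = sql.replace("{{" + key + "}}", value)

print(sql)
```

The rendered SQL is an ordinary `CREATE ROLE` with a built-in expiry (`VALID UNTIL`), so even if Vault's revocation statement never ran, PostgreSQL itself would stop accepting that password.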
Look in pg_roles after a deploy:
SELECT rolname, rolvaliduntil FROM pg_roles WHERE rolname LIKE 'v-%';
You'll see the ephemeral users. After the TTL, they're gone.
Dynamic AWS credentials
The AWS secrets engine generates temporary IAM credentials with a scoped policy document per role:
vault secrets enable aws

vault write aws/config/root \
    access_key="${VAULT_AWS_ACCESS_KEY}" \
    secret_key="${VAULT_AWS_SECRET_KEY}" \
    region="us-east-1"

# Rotate the root credential immediately
vault write -force aws/config/rotate-root

# Define a role for ECR push + ECS deploy
vault write aws/roles/my-app-ecr-deploy \
    credential_type="iam_user" \
    policy_document='{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["ecr:GetAuthorizationToken"],
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "ecr:BatchCheckLayerAvailability",
            "ecr:PutImage",
            "ecr:InitiateLayerUpload",
            "ecr:UploadLayerPart",
            "ecr:CompleteLayerUpload"
          ],
          "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-app"
        },
        {
          "Effect": "Allow",
          "Action": ["ecs:UpdateService", "ecs:DescribeServices"],
          "Resource": "arn:aws:ecs:us-east-1:123456789012:service/prod/my-app"
        }
      ]
    }' \
    default_ttl="1h" \
    max_ttl="4h"
The generated IAM user has exactly those permissions — nothing else — and is deleted when the TTL expires. If that credential is exfiltrated, the attacker can push container images to exactly one ECR repository and update exactly one ECS service, for at most 1 hour.
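A habit worth adopting: lint the policy document before handing it to Vault. A minimal sketch — the `WILDCARD_OK` allowlist is an assumption (some actions, like `ecr:GetAuthorizationToken`, don't support resource-level scoping and must use `"*"`):

```python
import json

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["ecr:GetAuthorizationToken"],
     "Resource": "*"},
    {"Effect": "Allow",
     "Action": ["ecr:PutImage", "ecr:InitiateLayerUpload"],
     "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-app"}
  ]
}""")

# Actions that legitimately require Resource "*" (no resource-level support).
WILDCARD_OK = {"ecr:GetAuthorizationToken"}

def overly_broad(policy):
    """Flag actions granted on Resource "*" that could be resource-scoped."""
    flagged = []
    for stmt in policy["Statement"]:
        if stmt["Resource"] == "*":
            flagged += [a for a in stmt["Action"] if a not in WILDCARD_OK]
    return flagged

print(overly_broad(policy))  # [] — nothing unnecessarily wildcarded
```

Running a check like this in review keeps the dynamic-credential policy as tight as the TTL makes it short.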
Compare that to a static IAM user credential where someone gave it AmazonEC2ContainerRegistryFullAccess and AmazonECS_FullAccess because those were the managed policies they found in the console.
Dynamic GCP credentials
The GCP secrets engine generates short-lived OAuth access tokens or service account keys for GCP service accounts:
vault secrets enable gcp

# Configure with a key file for Vault's own service account
vault write gcp/config \
    credentials=@/path/to/vault-admin-sa-key.json

# Better: run Vault on GCE/GKE and omit credentials entirely, so Vault
# uses Application Default Credentials and never holds a key file itself.

# Roleset that generates access tokens (preferred — no key file is created).
# Bindings are written in HCL:
cat > bindings.hcl <<'EOF'
resource "//cloudresourcemanager.googleapis.com/projects/my-gcp-project" {
  roles = ["roles/artifactregistry.writer"]
}
EOF

vault write gcp/roleset/my-app-artifact-push \
    project="my-gcp-project" \
    secret_type="access_token" \
    token_scopes="https://www.googleapis.com/auth/cloud-platform" \
    bindings=@bindings.hcl
Requesting credentials returns an OAuth access token valid for 1 hour. No service account key is created, stored, or ever touches the filesystem — a significant improvement over the static SA key JSON that usually ends up committed somewhere.
Vault in Kubernetes
Applications running in Kubernetes can authenticate to Vault using their pod service account JWT:
# Enable Kubernetes auth
vault auth enable kubernetes

# Configure with the cluster's API server
vault write auth/kubernetes/config \
    kubernetes_host="https://kubernetes.default.svc" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Define a role: pods running as this service account get this Vault policy
vault write auth/kubernetes/role/my-app \
    bound_service_account_names="my-app" \
    bound_service_account_namespaces="production" \
    policies="my-app-prod" \
    ttl="1h"
The Vault Agent sidecar or the Vault Secrets Operator handles the authentication and credential injection into pods transparently. Your application reads a credential from a file or environment variable; it never knows Vault exists.
# Pod annotation approach (Vault Agent injector)
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/role: "my-app"
  vault.hashicorp.com/agent-inject-secret-db: "database/creds/my-app-prod"
  vault.hashicorp.com/agent-inject-template-db: |
    {{- with secret "database/creds/my-app-prod" -}}
    DB_USERNAME={{ .Data.username }}
    DB_PASSWORD={{ .Data.password }}
    {{- end }}
The Vault Agent sidecar writes those credentials to /vault/secrets/db inside the pod. When the lease TTL approaches, the agent renews it automatically. When the pod terminates, the lease is revoked.
The breach math: TTLs and the detection window
The Trivy/Actions supply chain incident is a good case study. The malicious code exfiltrated secrets by printing them to workflow logs — which were publicly visible for a period before the repos were taken down or logs were cleared.
With static secrets, the attacker's timeline is: exfiltrate → use until discovered and rotated. In a complex environment with many credentials, discovery can take hours to days. Rotation of all affected credentials can take longer.
With dynamic secrets (1-hour TTL), the timeline is: exfiltrate → use for at most 1 hour → credentials expire automatically. Even if the team never detects the breach, the credentials are dead.
A 1-hour TTL means 1 hour of access, not zero. An attacker who gets a database credential and immediately exfiltrates the data or creates a backdoor account still causes damage. TTL reduces exposure time; it doesn't replace breach detection and response.
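The math is worth writing down. A small sketch with illustrative detection and rotation times (the three-day and one-day figures below are assumptions for the example, not data from the incident):

```python
from datetime import timedelta
from typing import Optional

def exposure_window(credential_ttl: Optional[timedelta],
                    time_to_detect: timedelta,
                    time_to_rotate: timedelta) -> timedelta:
    """How long an exfiltrated credential stays usable.

    A static credential (ttl=None) lives until detection plus rotation.
    A dynamic credential dies at min(TTL, detection + rotation) —
    the TTL caps exposure even if the breach is never detected.
    """
    response_time = time_to_detect + time_to_rotate
    if credential_ttl is None:
        return response_time
    return min(credential_ttl, response_time)

# Illustrative numbers: 3 days to detect, 1 day to rotate everything.
detect, rotate = timedelta(days=3), timedelta(days=1)

print(exposure_window(None, detect, rotate))                # 4 days, 0:00:00
print(exposure_window(timedelta(hours=1), detect, rotate))  # 1:00:00
```

Under these assumptions the dynamic credential shrinks the attacker's window by roughly two orders of magnitude, without the team doing anything faster.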
The other advantage: Vault's audit log. Every credential request, issuance, and revocation is logged with the entity that requested it. If your workflow suddenly requests a database credential from an IP in Eastern Europe, that's in the audit log. With static secrets distributed everywhere, you have no equivalent signal.
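Audit entries are JSON, so that signal is straightforward to act on. A sketch against a trimmed, synthetic entry — the real audit log schema has many more fields, and the trusted address prefix here is an invented placeholder:

```python
import json

# A simplified, synthetic Vault audit log entry (illustrative schema).
entry = json.loads("""{
  "type": "request",
  "time": "2025-03-15T09:12:44Z",
  "request": {
    "operation": "read",
    "path": "database/creds/my-app-prod",
    "remote_address": "203.0.113.50"
  },
  "auth": {"display_name": "jwt-repo:myorg/my-app:environment:prod-deploy"}
}""")

def check_entry(entry, trusted_prefix="10.0."):
    """Return an alert for credential reads from unexpected addresses."""
    req = entry["request"]
    if (req["path"].startswith("database/creds/")
            and not req["remote_address"].startswith(trusted_prefix)):
        return (f"ALERT: {entry['auth']['display_name']} read "
                f"{req['path']} from {req['remote_address']}")
    return None

print(check_entry(entry))
```

In practice you'd ship the audit device's output to your SIEM and alert there, but the shape of the rule is the same: credential issuance is an event you can observe, which static secrets never give you.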
Immediate revocation on suspected compromise
If you detect a supply chain compromise in a workflow that used Vault:
# List all active leases for the database role
vault list sys/leases/lookup/database/creds/my-app-prod

# Revoke a specific lease immediately (lease IDs are full paths)
vault lease revoke database/creds/my-app-prod/<lease_id>

# Revoke ALL leases under the database role (break glass)
vault lease revoke -prefix database/creds/my-app-prod

# Revoke a specific workflow's Vault token — and with it, every lease it holds
vault token revoke -accessor <accessor_id>
With static credentials, "revoke everything" means rotating every password in every system simultaneously and hoping you don't miss any. With Vault, it's one command.
The tradeoff: operational complexity
Vault is not simple infrastructure. Running it reliably means:
- HA deployment (3+ nodes, ideally with Raft integrated storage)
- Unseal key management (or auto-unseal with KMS)
- TLS everywhere
- Backup and recovery procedures for the Vault storage backend
- Monitoring unsealed status, lease counts, error rates
This is why HCP Vault is worth considering for teams who want the security benefits without owning the operational burden. The trade is cost and some configuration flexibility.
The simpler starting point: use OIDC federation for cloud provider authentication — the same GitHub OIDC tokens shown above, exchanged directly with AWS or GCP instead of Vault — which eliminates the biggest category of static secrets in CI/CD pipelines (cloud credentials). Then adopt Vault for database credentials, where dynamic issuance has the clearest value.
You don't have to migrate everything at once. Each static credential you replace with a dynamic one is a credential that can't be exfiltrated for indefinite use. Start with the credentials that have the widest blast radius and work down.