In my post on CI/CD supply chain risks, I called out overly broad OIDC trust as one of the patterns I keep finding in production pipelines. But the right answer isn't to avoid OIDC — it's to use it correctly and use it everywhere you currently have a long-lived secret.
Here's the problem with long-lived credentials in GitHub secrets: they never expire, they're accessible to anyone who can run a workflow on that repository, they show up in leaked environment dumps, and revoking them requires finding every place they're used. A GCP_SA_KEY JSON file in your GitHub secrets is a standing invitation.
OIDC replaces that model. Your workflow requests a short-lived signed JWT from GitHub's OIDC provider. Your cloud provider (or npm registry, or PyPI) verifies the signature against GitHub's public keys, checks the claims in the token against your trust policy, and issues a short-lived access token if everything matches. No stored credentials anywhere in the chain.
The OIDC token GitHub issues
When a workflow requests an OIDC token, GitHub signs a JWT containing claims about the workflow execution context. These claims are what you use to restrict access. Here's what's available:
{
"iss": "https://token.actions.githubusercontent.com",
"aud": "https://iam.googleapis.com/projects/123/locations/global/...",
"sub": "repo:myorg/myrepo:ref:refs/heads/main",
"repository": "myorg/myrepo",
"repository_id": "123456789",
"repository_owner": "myorg",
"repository_owner_id": "987654321",
"ref": "refs/heads/main",
"ref_type": "branch",
"sha": "abc123...",
"workflow": "Deploy to Production",
"workflow_ref": "myorg/myrepo/.github/workflows/deploy.yml@refs/heads/main",
"job_workflow_ref": "myorg/shared-workflows/.github/workflows/deploy.yml@refs/heads/main",
"job_workflow_sha": "def456...",
"environment": "production",
"actor": "patrick",
"actor_id": "11111111",
"event_name": "push",
"runner_environment": "github-hosted"
}
The sub claim is the one most people configure their trust policy on, but you can restrict on any of these. That flexibility is what makes OIDC genuinely powerful when used correctly.
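Curious what's actually inside one of these tokens? A JWT's payload is just base64url-encoded JSON, so a small helper can dump the claims for inspection (a sketch; assumes GNU base64 and does no signature verification):

```shell
# Print the claims (payload segment) of a JWT without verifying it --
# useful for checking what a trust policy could match on.
decode_jwt_claims() {
  # Take segment 2 of header.payload.signature, map base64url back to base64
  payload=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  # Restore the padding that base64url strips
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# In a workflow step with id-token: write, the raw token comes from:
#   curl -sH "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
#     "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=sts.amazonaws.com"
# and the .value field of that response is what you'd pass in here.
```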
What you can restrict on
By repository and branch (most common):
repo:myorg/myrepo:ref:refs/heads/main
This is the baseline. Only the main branch of a specific repo can assume this role.
By environment (best for prod/staging separation):
repo:myorg/myrepo:environment:production
Only workflows running against a GitHub Environment named "production" can assume the role. Environments support required reviewers and deployment protection rules — this is the pattern I recommend for anything touching production infrastructure.
By tag (for release deployments):
repo:myorg/myrepo:ref:refs/tags/v*
Only workflows triggered by a tag matching v* can assume the role. Useful for publish workflows.
By reusable workflow (powerful, underused):
repo:myorg/myrepo:job_workflow_ref:myorg/shared-workflows/.github/workflows/deploy.yml@refs/heads/main
The job_workflow_ref claim is set to the reusable workflow's ref when a job calls a reusable workflow. This lets you create a deploy role that only workflows calling your org's blessed reusable deploy workflow can assume — regardless of what repo is calling it.
By runner environment:
"token.actions.githubusercontent.com:runner_environment": "github-hosted"
Add this condition to prevent self-hosted runners from assuming sensitive roles. Self-hosted runners have different security properties — you probably don't want them using your production cloud credentials.
By actor (use sparingly):
"token.actions.githubusercontent.com:actor": "deploy-bot"
Restricts to workflows triggered by a specific user or bot. Useful when deployments should only ever be initiated by a dedicated bot account; fragile for humans, whose usernames change and whose roles turn over.
Repository and org names can be renamed or transferred, and a freed-up name can later be claimed by someone else. The repository and repository_owner string claims always reflect the current name, so if you're being paranoid (good), restrict on repository_owner_id and repository_id instead: the numeric IDs never change.
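The numeric IDs come from the GitHub API: gh api repos/myorg/myrepo returns them as .id and .owner.id. A sketch of extracting them with jq, run here against a canned response (the IDs are placeholders):

```shell
# In real use: repo_json=$(gh api repos/myorg/myrepo)
repo_json='{"id": 123456789, "name": "myrepo", "owner": {"login": "myorg", "id": 987654321}}'

# The two stable identifiers to pin in the trust policy
ids=$(printf '%s' "$repo_json" | jq -r '"repository_id=\(.id) repository_owner_id=\(.owner.id)"')
echo "$ids"
```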
GCP: Workload Identity Federation
Workload Identity Federation is GCP's implementation of this pattern. You create a Workload Identity Pool (a trust boundary), add an OIDC provider (GitHub), define attribute mappings and conditions, and then bind a service account to allow federation.
1. Create the pool and provider
export PROJECT_ID="my-project"
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
# Create the pool
gcloud iam workload-identity-pools create "github-actions" \
--project="${PROJECT_ID}" \
--location="global" \
--display-name="GitHub Actions"
# Create the OIDC provider
gcloud iam workload-identity-pools providers create-oidc "github" \
--project="${PROJECT_ID}" \
--location="global" \
--workload-identity-pool="github-actions" \
--display-name="GitHub" \
--issuer-uri="https://token.actions.githubusercontent.com" \
--attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository=assertion.repository,attribute.repository_owner=assertion.repository_owner,attribute.repository_owner_id=assertion.repository_owner_id,attribute.job_workflow_ref=assertion.job_workflow_ref,attribute.runner_environment=assertion.runner_environment" \
--attribute-condition="assertion.repository_owner_id == '987654321'"
The --attribute-condition is the most important flag here. It filters which tokens the pool will even accept — before any service account binding is checked. Setting it to your org's numeric ID means tokens from other orgs are rejected at the pool level.
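The condition is a CEL expression over the raw assertion, so it composes. For example, to also reject self-hosted runners at the pool level (same placeholder org ID as above):

```shell
--attribute-condition="assertion.repository_owner_id == '987654321' && assertion.runner_environment == 'github-hosted'"
```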
2. Bind a service account
gcloud iam service-accounts add-iam-policy-binding \
"deploy-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
--project="${PROJECT_ID}" \
--role="roles/iam.workloadIdentityUser" \
--member="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/github-actions/attribute.repository/myorg/myrepo"
For environment-based restriction, bind on the subject instead. When a job runs against a GitHub Environment, the sub claim becomes repo:OWNER/REPO:environment:NAME, so a principal (not principalSet) member pins both the repo and the environment in one identifier:
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/github-actions/subject/repo:myorg/myrepo:environment:production"
IAM conditions can't see the token's claims, so the restriction has to live in the member identifier itself.
3. The workflow
permissions:
  id-token: write # required to request the OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production # ties to the environment claim
    steps:
      - uses: actions/checkout@v4
      - id: auth
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: "projects/123456789/locations/global/workloadIdentityPools/github-actions/providers/github"
          service_account: "deploy-sa@my-project.iam.gserviceaccount.com"
      - uses: google-github-actions/setup-gcloud@v2
      - run: gcloud run deploy my-service --image gcr.io/my-project/my-image:latest
The id-token: write permission is required. Without it, the workflow can't request an OIDC token, and the auth step fails at the token exchange.
AWS: IAM OIDC Identity Provider
AWS uses a similar model. You register GitHub's OIDC provider with IAM, create a role with a trust policy that restricts on the token claims, and then use aws-actions/configure-aws-credentials to exchange the OIDC token for temporary credentials.
1. Register the OIDC provider
You only need to do this once per AWS account. You can do it through the console (IAM → Identity Providers → Add Provider) or via CLI:
aws iam create-open-id-connect-provider \
--url "https://token.actions.githubusercontent.com" \
--client-id-list "sts.amazonaws.com" \
--thumbprint-list "6938fd4d98bab03faadb97b34396831e3780aea1"
(AWS now validates this issuer against its own library of trusted root CAs, so the thumbprint is largely vestigial, but the flag is still required.)
2. Create the IAM role
The trust policy is where you restrict access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
"token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:environment:production"
}
}
}
]
}
Don't use StringLike with broad wildcards here. Use StringEquals and enumerate exactly what you want to allow. If multiple repos need to share a role, StringEquals accepts a list of values, matched as an OR; enumerate them explicitly rather than reaching for a wildcard.
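A role shared by two repos' production environments would look like this (hypothetical repo names); AWS treats the array as an OR:

```json
"Condition": {
  "StringEquals": {
    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
    "token.actions.githubusercontent.com:sub": [
      "repo:myorg/repo-a:environment:production",
      "repo:myorg/repo-b:environment:production"
    ]
  }
}
```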
For reusable workflow restriction on AWS there's a catch: with a GitHub OIDC provider, IAM trust policies can effectively only match on the aud and sub claims, and job_workflow_ref isn't part of the default sub. The fix is GitHub's sub-claim customization API, which folds extra claims into the sub itself:
# One-time setup per repo: make sub include job_workflow_ref
curl -X PUT \
-H "Authorization: Bearer $GH_TOKEN" \
https://api.github.com/repos/myorg/myrepo/actions/oidc/customization/sub \
-d '{"use_default": false, "include_claim_keys": ["repo", "job_workflow_ref"]}'
The customized sub then has the shape repo:myorg/myrepo:job_workflow_ref:WORKFLOW_REF, and the trust policy matches it directly:
"Condition": {
"StringEquals": {
"token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
"token.actions.githubusercontent.com:sub": "repo:myorg/myrepo:job_workflow_ref:myorg/shared-workflows/.github/workflows/deploy.yml@refs/heads/main"
}
}
This is one place where GCP's attribute mapping approach is cleaner; restricting on anything beyond the default sub contents is noticeably clunkier on AWS.
3. The workflow
permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsDeployRole
          aws-region: us-east-1
      - run: aws ecs update-service --cluster prod --service my-service --force-new-deployment
Reusable workflow restrictions: the underused pattern
This is the one I wish more teams knew about. The job_workflow_ref claim identifies the reusable workflow file that a job is calling. Combined with strict trust policies, this lets you centralize your deploy logic and enforce that any deployment to production must go through your org's blessed workflow — not a one-off script someone wrote inline.
The setup:
# .github/workflows/shared-deploy.yml in myorg/shared-workflows
on:
  workflow_call:
    inputs:
      service:
        required: true
        type: string
      environment:
        required: true
        type: string

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    steps:
      - id: auth
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: "..."
          service_account: "deploy-sa@my-project.iam.gserviceaccount.com"
      # ... deploy steps
And in the trust policy, restrict job_workflow_ref to this specific workflow:
myorg/shared-workflows/.github/workflows/shared-deploy.yml@refs/heads/main
Now the service account can only be assumed by jobs that call that reusable workflow. A developer can't bypass the shared deploy process by writing their own deploy job inline — the OIDC token won't satisfy the trust condition.
Pin the ref by SHA for extra safety:
myorg/shared-workflows/.github/workflows/shared-deploy.yml@abc123def456...
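For completeness, a caller repo's workflow is then just a thin wrapper. A sketch with hypothetical repo and service names; note the caller still grants id-token: write, which flows through to the reusable workflow's token request:

```yaml
# .github/workflows/deploy.yml in myorg/my-service (hypothetical caller repo)
on:
  push:
    branches: [main]

permissions:
  id-token: write   # passed through to the reusable workflow
  contents: read

jobs:
  deploy:
    uses: myorg/shared-workflows/.github/workflows/shared-deploy.yml@refs/heads/main
    with:
      service: my-service
      environment: production
```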
npm: Package provenance
npmjs.com supports OIDC through package provenance — a cryptographic attestation that links a published package to the specific GitHub Actions run that built it. The OIDC token is exchanged for a short-lived Sigstore signing certificate, creating a verifiable chain from the package in the registry back to the source commit.
permissions:
  id-token: write # required for provenance signing
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
The --provenance flag is the key part. npm still needs an authentication token for the publish itself, but provenance adds a signed attestation using the OIDC token. Anyone who depends on your package can verify it; run in a project that has it installed, npm audit signatures checks the signatures and provenance attestations of everything in node_modules:
npm audit signatures
This matters for supply chain trust. If your package is provenance-signed, a user can verify that version 1.0.0 was built from commit abc123 in your repository by a specific GitHub Actions run — not by someone who compromised an npm token.
PyPI: Trusted Publishers (fully keyless)
PyPI's Trusted Publishers is the cleanest implementation of this pattern — it's genuinely keyless. You configure the trusted publisher in your PyPI project settings, and then the GitHub Actions OIDC token is used directly for authentication with no stored token anywhere.
Configure in PyPI project settings → Publishing → Add a new publisher:
- Owner: myorg
- Repository: myrepo
- Workflow filename: publish.yml
- Environment (optional but recommended): pypi
permissions:
  id-token: write # required for PyPI OIDC

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi # ties to the environment claim PyPI validates
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - run: pip install build
      - run: python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
        # No password or token needed — OIDC handles it
No NPM_TOKEN equivalent. No secret to rotate. No credential to exfiltrate. PyPI validates the OIDC token claims against your trusted publisher configuration and issues a short-lived upload token for that specific publish operation.
Least privilege: one identity per trust context
Here's the mistake I see after teams successfully migrate to OIDC: they replace one overpermissive long-lived key with one overpermissive service account mapped to everything. The credentials are short-lived now, which is better — but the blast radius of a compromised workflow is the same.
OIDC lets you do something more precise. Because each workflow context (repository, branch, environment, operation type) produces a distinct set of claims, you can map different workflow contexts to different cloud identities, each scoped to only what that specific operation needs.
The pattern: define your trust contexts as the combination of environment × permission level, then create one identity per context.
The Terraform example
Terraform is a good illustration because it has two distinct operations with very different permission requirements. plan needs read access to detect drift — it should never be able to change anything. apply needs write access, but only to the specific resources Terraform manages.
Here's the identity matrix for a dev/prod Terraform setup:
| Operation | Environment | GitHub environment claim | What it can do |
|---|---|---|---|
| tf plan | dev | dev-plan | Read all resources in the dev project |
| tf apply | dev | dev-apply | Write specific resources in the dev project |
| tf plan | prod | prod-plan | Read all resources in the prod project |
| tf apply | prod | prod-apply | Write specific resources in prod, requires reviewer approval |
That's four service accounts (GCP) or four IAM roles (AWS), each with different permissions, each only accessible to a specific workflow context.
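The four GitHub Environments these contexts key on can be created up front; the gh CLI can call the environments endpoint directly (repo name is illustrative):

```shell
# PUT /repos/{owner}/{repo}/environments/{name} creates or updates an environment
for env in dev-plan dev-apply prod-plan prod-apply; do
  gh api -X PUT "repos/myorg/infra-repo/environments/${env}"
done
```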
GCP: Four service accounts
# Create the four service accounts (dev SAs in the dev project, prod SAs in prod)
for sa in tf-plan-dev tf-apply-dev; do
gcloud iam service-accounts create $sa \
--project="my-project-dev" \
--display-name="Terraform ${sa}"
done
for sa in tf-plan-prod tf-apply-prod; do
gcloud iam service-accounts create $sa \
--project="my-project-prod" \
--display-name="Terraform ${sa}"
done
Plan accounts — read-only:
# tf-plan-dev: read everything in dev, read state bucket
gcloud projects add-iam-policy-binding "my-project-dev" \
--member="serviceAccount:tf-plan-dev@my-project-dev.iam.gserviceaccount.com" \
--role="roles/viewer"
gcloud projects add-iam-policy-binding "my-project-dev" \
--member="serviceAccount:tf-plan-dev@my-project-dev.iam.gserviceaccount.com" \
--role="roles/iam.securityReviewer" # adds the getIamPolicy reads that roles/viewer doesn't fully cover
# State bucket read access (plan reads state, doesn't write)
gsutil iam ch \
serviceAccount:tf-plan-dev@my-project-dev.iam.gserviceaccount.com:roles/storage.objectViewer \
gs://my-tfstate-dev
Repeat for tf-plan-prod on the prod project and prod state bucket.
Apply accounts — scoped write access:
Don't reach for roles/editor here. Look at what your Terraform actually manages and grant only those roles:
# tf-apply-dev: write access to what TF actually manages in dev
# Example: Cloud Run + GKE + Artifact Registry
for role in \
roles/run.admin \
roles/container.admin \
roles/artifactregistry.writer \
roles/secretmanager.admin \
roles/resourcemanager.projectIamAdmin; do # if TF manages IAM bindings
gcloud projects add-iam-policy-binding "my-project-dev" \
--member="serviceAccount:tf-apply-dev@my-project-dev.iam.gserviceaccount.com" \
--role="$role"
done
# State bucket write access (apply reads and writes state)
gsutil iam ch \
serviceAccount:tf-apply-dev@my-project-dev.iam.gserviceaccount.com:roles/storage.objectAdmin \
gs://my-tfstate-dev
If your Terraform manages IAM bindings, the apply service account needs resourcemanager.projectIamAdmin — but that role can grant any role to any principal, which makes it a privilege-escalation path. Consider attaching the grant with an IAM condition on the iam.googleapis.com/modifiedGrantsByRole attribute, which limits the set of roles the service account is allowed to grant or revoke.
GCP: Binding each SA to its workflow context
Each service account gets a separate WIF binding scoped to its specific GitHub environment:
PROJECT_NUMBER=$(gcloud projects describe my-project-dev --format='value(projectNumber)')
POOL="projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/github-actions"
# tf-plan-dev: accessible only from the dev-plan environment
gcloud iam service-accounts add-iam-policy-binding \
"tf-plan-dev@my-project-dev.iam.gserviceaccount.com" \
--role="roles/iam.workloadIdentityUser" \
--member="principal://iam.googleapis.com/${POOL}/subject/repo:myorg/infra-repo:environment:dev-plan"
# tf-apply-dev: accessible only from the dev-apply environment
gcloud iam service-accounts add-iam-policy-binding \
"tf-apply-dev@my-project-dev.iam.gserviceaccount.com" \
--role="roles/iam.workloadIdentityUser" \
--member="principal://iam.googleapis.com/${POOL}/subject/repo:myorg/infra-repo:environment:dev-apply"
# Repeat for prod-plan and prod-apply on the prod project's pool binding
AWS: Four IAM roles
Same structure, AWS syntax. Each role has a trust policy scoped to a specific environment claim:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::DEV-ACCOUNT-ID:oidc-provider/token.actions.githubusercontent.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
"token.actions.githubusercontent.com:sub": "repo:myorg/infra-repo:environment:dev-apply"
}
}
}]
}
Four trust policies, four roles:
| Role | Trust condition | Attached policies |
|---|---|---|
| GitHubActions-TF-Plan-Dev | environment:dev-plan | ReadOnlyAccess + state bucket read |
| GitHubActions-TF-Apply-Dev | environment:dev-apply | Specific service policies on dev account |
| GitHubActions-TF-Plan-Prod | environment:prod-plan | ReadOnlyAccess on prod account |
| GitHubActions-TF-Apply-Prod | environment:prod-apply | Specific service policies on prod account |
For the plan roles, AWS ReadOnlyAccess managed policy is a reasonable baseline — it gives read access to almost everything without any write permissions. For apply, enumerate specific managed policies (AmazonEKSClusterPolicy, AmazonEC2FullAccess, etc.) matching what Terraform actually manages in that account. Never attach AdministratorAccess.
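Provisioning one of these is two CLI calls: create the role with its trust policy, then attach the policy. A sketch for the dev plan role (role and file names are illustrative):

```shell
# Create the plan role; trust-dev-plan.json is the trust policy above,
# with the sub condition set to repo:myorg/infra-repo:environment:dev-plan
aws iam create-role \
  --role-name GitHubActions-TF-Plan-Dev \
  --assume-role-policy-document file://trust-dev-plan.json

# Plan is read-only, so the AWS-managed ReadOnlyAccess policy is the baseline
aws iam attach-role-policy \
  --role-name GitHubActions-TF-Plan-Dev \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
```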
The workflow that uses all four identities
name: Terraform

on:
  push:
    branches: [main]
  pull_request:

jobs:
  plan-dev:
    runs-on: ubuntu-latest
    environment: dev-plan # maps to tf-plan-dev SA — read only
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ vars.WIF_PROVIDER }}
          service_account: "tf-plan-dev@my-project-dev.iam.gserviceaccount.com"
      - uses: hashicorp/setup-terraform@v3
      - run: terraform plan
        working-directory: environments/dev

  apply-dev:
    runs-on: ubuntu-latest
    needs: plan-dev
    if: github.ref == 'refs/heads/main'
    environment: dev-apply # maps to tf-apply-dev SA — write to dev only
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ vars.WIF_PROVIDER }}
          service_account: "tf-apply-dev@my-project-dev.iam.gserviceaccount.com"
      - uses: hashicorp/setup-terraform@v3
      - run: terraform apply -auto-approve
        working-directory: environments/dev

  plan-prod:
    runs-on: ubuntu-latest
    needs: apply-dev
    if: github.ref == 'refs/heads/main'
    environment: prod-plan # maps to tf-plan-prod SA — read only on prod
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ vars.WIF_PROVIDER_PROD }}
          service_account: "tf-plan-prod@my-project-prod.iam.gserviceaccount.com"
      - uses: hashicorp/setup-terraform@v3
      - run: terraform plan
        working-directory: environments/prod

  apply-prod:
    runs-on: ubuntu-latest
    needs: plan-prod
    if: github.ref == 'refs/heads/main'
    environment: prod-apply # maps to tf-apply-prod SA — write to prod, requires reviewer
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ vars.WIF_PROVIDER_PROD }}
          service_account: "tf-apply-prod@my-project-prod.iam.gserviceaccount.com"
      - uses: hashicorp/setup-terraform@v3
      - run: terraform apply -auto-approve
        working-directory: environments/prod
Set the prod-apply GitHub Environment to require a reviewer approval before the job runs. That human gate, combined with the restricted OIDC identity, means a compromised pipeline can't silently apply changes to production — it still needs someone to click approve.
The principle applied elsewhere
The same pattern works beyond Terraform:
Container build vs deploy: A build job that pushes to Artifact Registry only needs artifactregistry.writer on the registry — not deploy rights. A deploy job that updates Cloud Run needs run.admin — not registry write access. Separate identities, no overlap.
Read-only audit workflows: If you have a workflow that runs terraform plan to detect drift, or kubectl get to audit cluster state, scope it to a read-only identity. It has no legitimate reason to mutate anything.
Cross-environment contamination: A dev deploy identity should have zero permissions in prod. Not reduced permissions — zero. If a dev workflow is compromised, it should be physically incapable of touching prod resources.
The pattern is always the same: ask what the minimum permissions are for this specific operation in this specific environment, create an identity with exactly those permissions, and bind it to exactly the workflow context that needs it.
What to migrate first
If you're sitting on a pile of long-lived cloud credentials in GitHub secrets, the priority order:
1. Production deploy credentials — highest blast radius, migrate first. The GCP WIF or AWS OIDC setup takes about 30 minutes.
2. Container registry push credentials — often scoped to your cloud provider anyway, covered by the same WIF setup.
3. npm/PyPI publish tokens — provenance signing and Trusted Publishers are straightforward for new releases.
4. Terraform state backend credentials — these are often GCS or S3, covered by the cloud provider OIDC setup.
What you can't replace with OIDC yet: third-party services that don't support it (databases, some SaaS APIs). Those still need secrets. Keep them scoped and rotated, and make sure they're not accessible to workflows that don't need them.
The bottom line
Every long-lived credential you remove from your GitHub secrets is a credential that can't be exfiltrated, can't expire silently, and doesn't need a rotation process. OIDC is well-supported across GCP, AWS, npm, and PyPI — there's no good reason not to use it for cloud and registry authentication.
The implementation work is a few hours. The security improvement is permanent.