Kubernetes Security Checklist: RBAC, Network Policies, and Pod Security
Kubernetes clusters have a large attack surface. This checklist covers RBAC, network policies, pod security standards, secrets management, image policies, and admission controllers to secure your K8s deployment.
Kubernetes is powerful but ships with insecure defaults. A default cluster has no network policies (all pods can talk to each other), permissive RBAC bindings, no pod security restrictions, and secrets stored in etcd merely base64-encoded, not encrypted. Security must be explicitly configured.
This checklist covers the most important Kubernetes security controls, organized by the CIS Kubernetes Benchmark categories.
1. RBAC (Role-Based Access Control)
Cluster-Wide Rules
□ RBAC enabled (default in Kubernetes 1.6+, but verify)
□ No wildcard permissions in production: avoid `verbs: ["*"]` and `resources: ["*"]`
□ No `cluster-admin` ClusterRoleBinding for service accounts
□ Avoid binding `cluster-admin` to groups; use specific roles
□ Regularly audit ClusterRoleBindings and RoleBindings
```shell
# Audit who has cluster-admin
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects'

# List all service accounts with cluster-admin
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects[] | select(.kind=="ServiceAccount")'
```
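If you want to sanity-check the jq filter without a live cluster, you can run it against a small hand-written sample of the ClusterRoleBinding list. The `ci-deployer` service account below is made up for illustration:

```shell
# Hypothetical sample of `kubectl get clusterrolebindings -o json` output
cat > /tmp/crbs.json <<'EOF'
{"items": [
  {"roleRef": {"name": "cluster-admin"},
   "subjects": [{"kind": "ServiceAccount", "name": "ci-deployer", "namespace": "ci"}]},
  {"roleRef": {"name": "view"},
   "subjects": [{"kind": "Group", "name": "devs"}]}
]}
EOF

# Same filter as above: only the ServiceAccount bound to cluster-admin matches
jq -r '.items[] | select(.roleRef.name=="cluster-admin")
       | .subjects[] | select(.kind=="ServiceAccount") | .name' /tmp/crbs.json
# → ci-deployer
```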
Service Account Configuration
```yaml
# ✅ Disable automounting of service account tokens when not needed
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  automountServiceAccountToken: false  # Don't mount SA token
  containers:
    - name: app
      image: myapp:1.0.0
```
```yaml
# ✅ Minimal RBAC for a typical read-only service
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # Only what's needed
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
2. Network Policies
By default, all pods in a Kubernetes cluster can communicate with all other pods. Network policies are the firewall rules for pod-to-pod communication.
□ Default deny policy in every namespace
□ Explicit allow rules for required communication only
□ No policy allows access to the Kubernetes API from application pods (unless required)
□ Egress policies restrict outbound internet access from sensitive pods
Default deny all ingress and egress
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Applies to all pods
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules = deny all
```
Allow specific communication
```yaml
# Allow the web tier to talk to the API tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 8080
```
Allow DNS resolution (required for all pods)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
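The egress item in the checklist above (restricting outbound access from sensitive pods) can be sketched like this. The `app: payment` label and the `10.0.0.0/8` internal CIDR are illustrative assumptions:

```yaml
# Hypothetical example: payment pods may only reach the internal
# network over HTTPS; general internet egress is denied
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-payment-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8  # internal network only
      ports:
        - protocol: TCP
          port: 443
```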
3. Pod Security Standards
Pod Security Admission, which enforces the Pod Security Standards (PSS), is beta in Kubernetes 1.23 and stable in 1.25, the same release that removed the deprecated PodSecurityPolicy. PSS defines three levels:
| Level | Description |
|---|---|
| `privileged` | No restrictions (don't use in production) |
| `baseline` | Prevents known privilege escalations |
| `restricted` | Highly restricted, follows pod hardening best practices |
```shell
# Apply restricted PSS to a namespace (Kubernetes 1.25+)
kubectl label namespace production pod-security.kubernetes.io/enforce=restricted
kubectl label namespace production pod-security.kubernetes.io/enforce-version=v1.29
kubectl label namespace production pod-security.kubernetes.io/warn=restricted
kubectl label namespace production pod-security.kubernetes.io/audit=restricted
```
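The same labels can also be set declaratively on the Namespace manifest, which is easier to keep in version control:

```yaml
# Declarative equivalent of the kubectl label commands above
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: "v1.29"
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```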
Pod Security Context
```yaml
spec:
  securityContext:
    runAsNonRoot: true       # Must run as non-root user
    runAsUser: 1000          # Specific UID
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault   # Enable seccomp filtering
  containers:
    - name: app
      image: myapp:1.0.0
      securityContext:
        allowPrivilegeEscalation: false  # Prevent setuid escalation
        readOnlyRootFilesystem: true     # Immutable filesystem
        capabilities:
          drop: ["ALL"]                  # Drop all Linux capabilities
        seccompProfile:
          type: RuntimeDefault
      resources:
        limits:
          memory: "256Mi"
          cpu: "500m"
        requests:
          memory: "128Mi"
          cpu: "250m"
```
4. Secrets Management
Kubernetes Secrets are base64-encoded (not encrypted) by default in etcd. This means anyone with etcd access can read all secrets.
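The "encoded, not encrypted" distinction is easy to demonstrate locally:

```shell
# base64 is a reversible encoding, not encryption: anyone who can read
# a Secret manifest or etcd can recover the plaintext in one command
encoded=$(printf 's3cr3t-password' | base64)
echo "$encoded"                      # the "protected" form stored in etcd
printf '%s' "$encoded" | base64 -d   # recovers the plaintext instantly
```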
□ Encrypt secrets at rest in etcd using KMS encryption provider
□ Prefer external secret stores (AWS Secrets Manager, Vault) with External Secrets Operator
□ Avoid putting secrets in environment variables — use mounted secret files
□ Don't log secret values (prevent accidental exposure in logs)
□ Rotate secrets regularly
□ Use RBAC to restrict which pods can read which secrets
etcd encryption at rest
```yaml
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}  # Fallback for unencrypted secrets during migration
```
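The checklist above recommends a KMS encryption provider over static `aescbc` keys, since a static key sits on the control-plane disk next to the data it protects. With a KMS v2 plugin the same file might look like this (the plugin name and socket path are illustrative assumptions):

```yaml
# Sketch of a KMS v2 provider configuration; "my-kms-plugin" and the
# socket path are hypothetical and depend on your KMS plugin deployment
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin
          endpoint: unix:///var/run/kmsplugin/socket.sock
          timeout: 3s
      - identity: {}  # Fallback for reading unencrypted secrets during migration
```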
External Secrets Operator
```yaml
# Sync from AWS Secrets Manager
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: app-secrets  # Kubernetes Secret name
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: production/app/database-url
    - secretKey: API_KEY
      remoteRef:
        key: production/app/api-key
```
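The `secretStoreRef` above points at a SecretStore that must exist separately. A minimal sketch, assuming IRSA-based authentication and a hypothetical `external-secrets-sa` service account:

```yaml
# Illustrative SecretStore backing the ExternalSecret above;
# region, auth method, and service account name are assumptions
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa  # SA annotated with an IAM role (IRSA)
```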
5. Image Security
□ Only deploy images from trusted registries
□ Use image digest pinning (not just tags): myapp@sha256:abc123...
□ Scan images for CVEs before deployment (Trivy, Snyk, Grype)
□ Admission controller rejects images with critical vulnerabilities
□ Sign images with cosign (supply chain security)
□ Never use :latest tag in production
```yaml
# Pin to digest instead of tag
spec:
  containers:
    - name: app
      image: myregistry.com/myapp@sha256:abc123def456...  # Immutable
```
Image scanning with admission controller (Kyverno)
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "myregistry.com/myapp:*"
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/my-org/my-repo/.github/workflows/build.yaml@refs/heads/main"
                    issuer: "https://token.actions.githubusercontent.com"
```
6. API Server Security
□ API server not exposed publicly (behind VPN or private subnet)
□ Anonymous authentication disabled: --anonymous-auth=false
□ Audit logging enabled on API server
□ Admission controllers enabled: NodeRestriction, AlwaysPullImages, PodSecurity
□ API server authorization mode includes RBAC: --authorization-mode=Node,RBAC
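As one concrete piece of the checklist above, a minimal audit policy might look like the following sketch (the file path is conventional for kubeadm clusters; wire it up with `--audit-policy-file` and `--audit-log-path` on the kube-apiserver):

```yaml
# /etc/kubernetes/audit-policy.yaml — illustrative sketch, not a complete policy
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record access to Secrets and ConfigMaps at Metadata level only,
  # so secret values never end up in the audit log
  - level: Metadata
    resources:
      - group: ""  # core API group
        resources: ["secrets", "configmaps"]
  # Record everything else including request bodies
  - level: Request
```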
7. Running kube-bench
kube-bench runs the CIS Kubernetes Benchmark checks automatically:
```shell
# Run against current cluster
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs -f job/kube-bench

# Or run locally
docker run --rm \
  -v $(which kubectl):/usr/local/mount-from-host/bin/kubectl \
  -v ~/.kube:/root/.kube \
  aquasec/kube-bench:latest \
  --config-dir /opt/kube-bench/cfg \
  --config /opt/kube-bench/cfg/config.yaml
```
kube-bench outputs pass/fail/warn for each CIS benchmark check with remediation instructions. Run it after initial deployment and after major upgrades.