Amazon EKS Security: IRSA, Network Policies, and Logging

A comprehensive guide to securing Amazon EKS clusters — IAM Roles for Service Accounts (IRSA), EKS control plane logging, IMDSv2 enforcement, VPC CNI security, and pod-level security controls.

February 1, 2026 · 8 min read · ShipSafer Team

Amazon EKS blends Kubernetes security complexity with AWS IAM complexity, and understanding where one ends and the other begins is critical for securing EKS workloads. Pods running on EKS nodes can access the EC2 Instance Metadata Service, assume IAM roles, and interact with AWS services — creating an attack surface that doesn't exist in on-premises Kubernetes deployments.

The EKS Credential Chain

Understanding how pods get AWS credentials is foundational to EKS security. There are three mechanisms, in order of preference:

1. IAM Roles for Service Accounts (IRSA) — Preferred

IRSA uses OIDC federation to give each Kubernetes service account its own IAM role, scoped to exactly the permissions that workload needs. This is the recommended approach for all new workloads.

How IRSA works:

  1. EKS creates an OIDC identity provider endpoint for the cluster
  2. You create an IAM role with a trust policy that allows assumption by a specific Kubernetes service account
  3. The EKS pod identity webhook injects the OIDC token as a projected service account token
  4. AWS STS validates the token against the OIDC endpoint and issues temporary credentials

Setting up IRSA:

# Get the OIDC issuer URL for your cluster
OIDC_URL=$(aws eks describe-cluster \
  --name production-cluster \
  --query 'cluster.identity.oidc.issuer' \
  --output text | sed 's|https://||')

# Create the OIDC provider in IAM. (eksctl automates this step:
#   eksctl utils associate-iam-oidc-provider --cluster production-cluster --approve)
OIDC_HOST=${OIDC_URL%%/*}   # strip the /id/... path before connecting
aws iam create-open-id-connect-provider \
  --url "https://$OIDC_URL" \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list "$(echo | openssl s_client -servername "$OIDC_HOST" -connect "$OIDC_HOST:443" 2>/dev/null | \
    openssl x509 -fingerprint -sha1 -noout | sed 's/://g' | awk -F= '{print tolower($2)}')"

# Create IAM role with trust policy scoped to specific service account
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

cat > trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_URL}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_URL}:sub": "system:serviceaccount:production:my-app",
          "${OIDC_URL}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
  --role-name my-app-eks-role \
  --assume-role-policy-document file://trust-policy.json

# Attach minimal permissions
aws iam put-role-policy \
  --role-name my-app-eks-role \
  --policy-name s3-read-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }]
  }'
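Before creating the role, it is worth sanity-checking the trust policy locally: the `:sub` condition pinning a single namespace and service account is what prevents every other workload in the cluster from assuming the role. A minimal check, using a hypothetical trust policy with the OIDC URL already substituted in:

```shell
# Hypothetical resolved trust policy (placeholder account and OIDC IDs)
cat > /tmp/trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"},
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {"StringEquals": {
      "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:production:my-app",
      "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:aud": "sts.amazonaws.com"
    }}
  }]
}
EOF

# Extract the sub and aud conditions from the policy document
SUB=$(python3 -c "
import json
d = json.load(open('/tmp/trust-policy.json'))
c = d['Statement'][0]['Condition']['StringEquals']
print(next(v for k, v in c.items() if k.endswith(':sub')))
")
AUD=$(python3 -c "
import json
d = json.load(open('/tmp/trust-policy.json'))
c = d['Statement'][0]['Condition']['StringEquals']
print(next(v for k, v in c.items() if k.endswith(':aud')))
")
echo "sub=$SUB aud=$AUD"

# Fail loudly if the sub condition is a wildcard (would allow any service account)
case "$SUB" in *'*'*) echo 'ERROR: wildcard sub condition' >&2; exit 1;; esac
```

A `StringLike` condition with a `*` in the sub is a common IRSA misconfiguration; this check is the kind of guardrail worth putting in CI before `aws iam create-role` runs.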

Annotate the Kubernetes service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-eks-role
    eks.amazonaws.com/token-expiration: "86400"  # 24 hours (default)
automountServiceAccountToken: false  # Skip the default API token; the IRSA webhook still injects its own projected token

Pod configuration to use IRSA:

apiVersion: v1
kind: Pod
spec:
  serviceAccountName: my-app
  containers:
  - name: app
    image: my-app:latest
    env:
    - name: AWS_REGION
      value: us-east-1
    # AWS SDK automatically discovers credentials from:
    # $AWS_WEB_IDENTITY_TOKEN_FILE and $AWS_ROLE_ARN (injected by webhook)
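A quick way to confirm IRSA works end to end (the pod name `my-app-pod` is a placeholder): check that the webhook injected the token environment variables, then ask STS who the pod is.

```shell
# Confirm the webhook injected the projected token and role ARN
kubectl exec -n production my-app-pod -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'

# If the image ships the AWS CLI, the pod should identify as the
# IRSA role, not the node instance role
kubectl exec -n production my-app-pod -- aws sts get-caller-identity
```

If `get-caller-identity` returns the node role ARN instead, the pod is falling through to IMDS, which usually means the service account annotation or trust policy condition does not match.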

2. EKS Pod Identity (Newer Alternative to IRSA)

AWS released EKS Pod Identity in late 2023. It removes the per-cluster OIDC provider entirely: you install the eks-pod-identity-agent add-on, create an IAM role whose trust policy allows the pods.eks.amazonaws.com service principal to call sts:AssumeRole and sts:TagSession, and then associate the role with a service account through the EKS API:

aws eks create-pod-identity-association \
  --cluster-name production-cluster \
  --namespace production \
  --service-account my-app \
  --role-arn arn:aws:iam::123456789012:role/my-app-eks-role

No service account annotation is required, and because the association lives in the EKS API rather than in each role's trust policy, reusing one role across many clusters is simpler.

3. Node IAM Role — Avoid for Application Credentials

Every EKS node has an IAM role (the node IAM role) that grants permissions needed for cluster operation — joining the cluster, pulling ECR images, reading EBS volumes. If pods use the node IAM role for application credentials (the old pattern), any pod compromise grants access to everything the node can do.

Block pod access to the node IMDS to prevent pods from stealing node credentials:

# During node group creation, enforce IMDSv2 with hop limit of 1 via a
# launch template (subnet IDs and node role ARN are placeholders)
aws eks create-nodegroup \
  --cluster-name production-cluster \
  --nodegroup-name production-nodes \
  --subnets subnet-aaaa subnet-bbbb \
  --node-role arn:aws:iam::123456789012:role/EKSNodeRole \
  --launch-template id=lt-12345,version=1

In the launch template, configure:

{
  "MetadataOptions": {
    "HttpEndpoint": "enabled",
    "HttpTokens": "required",
    "HttpPutResponseHopLimit": 1
  }
}

HttpPutResponseHopLimit: 1 means the response to the IMDSv2 token PUT request is sent with an IP TTL of 1, so it cannot traverse a network hop. It reaches processes on the node itself, but the extra hop into a container's network namespace decrements the TTL to zero, so pods (other than hostNetwork pods) never receive the session token.

Verify IMDSv2 is enforced on all node groups:

aws ec2 describe-instances \
  --filters "Name=tag:eks:cluster-name,Values=production-cluster" \
  --query 'Reservations[].Instances[].{ID:InstanceId,HttpTokens:MetadataOptions.HttpTokens,HopLimit:MetadataOptions.HttpPutResponseHopLimit}'
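To confirm the hop limit blocks pods in practice and not just on paper, request an IMDSv2 token from a throwaway pod; with a hop limit of 1 the request should time out rather than return a token. The curl image and timeout values are illustrative:

```shell
# From a non-hostNetwork pod, the IMDSv2 token request should fail:
# the PUT response (TTL 1) is dropped crossing into the pod's network namespace
kubectl run imds-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sS -m 5 -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"
# Expected: a timeout, not a token
```

A successful token response here means a compromised pod could read the node's IAM credentials from IMDS, so this check belongs in any cluster hardening audit.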

EKS Control Plane Logging

EKS control plane logs are not enabled by default. There are five log types:

Log Type           What It Contains              Recommended?
api                All API server requests       Yes
audit              Kubernetes audit log          Yes (critical)
authenticator      AWS IAM Authenticator logs    Yes
controllerManager  Controller manager logs       Recommended
scheduler          Scheduler decisions           Optional

Enable all critical log types:

aws eks update-cluster-config \
  --name production-cluster \
  --logging '{"clusterLogging":[{
    "types":["api","audit","authenticator","controllerManager","scheduler"],
    "enabled":true
  }]}'

Logs flow to CloudWatch Logs under /aws/eks/production-cluster/cluster. Query the audit log for suspicious activity:

# CloudWatch Logs Insights query for audit log
fields @timestamp, user.username, verb, objectRef.resource, objectRef.name, responseStatus.code
| filter ispresent(user.username)
| filter verb in ["create","update","patch","delete"]
| filter objectRef.resource in ["secrets","rolebindings","clusterrolebindings","serviceaccounts"]
| sort @timestamp desc
| limit 200

Alert on pod exec sessions (frequently used during active intrusions):

fields @timestamp, user.username, objectRef.namespace, objectRef.name
| filter verb = "create"
| filter objectRef.subresource = "exec"
| sort @timestamp desc
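These queries can also run headlessly, for example from a scheduled alerting job, via the CloudWatch Logs Insights CLI. A sketch, with the log group name following the pattern above and a one-hour window:

```shell
LOG_GROUP=/aws/eks/production-cluster/cluster
QUERY='fields @timestamp, user.username, objectRef.namespace, objectRef.name
| filter verb = "create" and objectRef.subresource = "exec"
| sort @timestamp desc'

# Start the query over the last hour, then poll for results
QUERY_ID=$(aws logs start-query \
  --log-group-name "$LOG_GROUP" \
  --start-time "$(($(date +%s) - 3600))" \
  --end-time "$(date +%s)" \
  --query-string "$QUERY" \
  --query queryId --output text)

sleep 10
aws logs get-query-results --query-id "$QUERY_ID"
```

`get-query-results` returns a `status` field; production tooling should poll until it reads `Complete` rather than relying on a fixed sleep.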

Network Policies with VPC CNI

AWS VPC CNI Security Groups for Pods

Standard Kubernetes network policies operate at the IP layer. AWS VPC CNI's "Security Groups for Pods" feature attaches AWS security groups directly to pods via branch ENIs (supported on most Nitro-based instance types), enabling native AWS firewall rules:

# Enable security groups for pods: the cluster IAM role needs the
# AmazonEKSVPCResourceController managed policy (role name is a placeholder)
aws iam attach-role-policy \
  --role-name production-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController

# Turn on pod ENIs in the VPC CNI
kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true

# Create a SecurityGroupPolicy resource
kubectl apply -f - <<EOF
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-app-sg-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0  # Security group for my-app pods
EOF

The pod security group can then have rules like:

# Allow my-app pods to reach RDS on port 5432 (egress rules require
# --ip-permissions; the shorthand flags only apply to ingress)
aws ec2 authorize-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=sg-rds-group}]'

This provides security-group-referenced access control to RDS without managing IP CIDR ranges; the RDS security group needs a matching ingress rule from sg-0123456789abcdef0.
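When a SecurityGroupPolicy matches, the VPC resource controller attaches a branch ENI to each matching pod and records it in a pod annotation, which gives a quick way to verify the feature is actually in effect:

```shell
# Pods with a branch ENI carry a vpc.amazonaws.com/pod-eni annotation
# describing the attached ENI; label selector matches the earlier example
kubectl get pods -n production -l app=my-app \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.vpc\.amazonaws\.com/pod-eni}{"\n"}{end}'
```

An empty annotation column means the pod was scheduled without a branch ENI (for example on an unsupported instance type) and is not getting the security group rules.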

Kubernetes Network Policies for East-West Traffic

For pod-to-pod traffic not covered by security groups for pods, use standard Kubernetes network policies. Note that NetworkPolicy objects are only enforced if an engine is present: VPC CNI v1.14+ can enforce them natively (via the add-on's enableNetworkPolicy setting), or you can install a policy engine such as Calico:

# Default deny all in production namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# Allow frontend to call backend API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
      protocol: TCP
---
# Matching egress: the default deny also blocks frontend's outbound
# traffic, so frontend needs an explicit egress allowance to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - port: 8080
      protocol: TCP
---
# Allow DNS (required)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
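With the policies applied, connectivity is easy to verify from throwaway probe pods. The service name `backend`, the path, and the images here are illustrative:

```shell
# Without a frontend label, the request should be blocked (curl times out)
kubectl run probe --rm -it --restart=Never -n production \
  --image=curlimages/curl --command -- \
  curl -sS -m 5 http://backend:8080/healthz

# With the frontend label, the same request should succeed
kubectl run probe --rm -it --restart=Never -n production \
  --labels app=frontend \
  --image=curlimages/curl --command -- \
  curl -sS -m 5 http://backend:8080/healthz
```

Probing both the deny and allow paths catches the common failure mode where no policy engine is installed and every NetworkPolicy silently no-ops.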

Cluster Endpoint Access

By default, the EKS cluster API endpoint is publicly accessible. This is convenient for development but means the Kubernetes API is reachable from the internet:

# Restrict API endpoint access to your corporate CIDR ranges
aws eks update-cluster-config \
  --name production-cluster \
  --resources-vpc-config \
    endpointPublicAccess=true,\
    publicAccessCidrs=203.0.113.0/24,\
    endpointPrivateAccess=true

# For highest security: disable public endpoint entirely
aws eks update-cluster-config \
  --name production-cluster \
  --resources-vpc-config \
    endpointPublicAccess=false,\
    endpointPrivateAccess=true

With endpointPublicAccess=false, kubectl access requires VPN or AWS Systems Manager Session Manager. This is the recommended configuration for production clusters.
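One VPN-free access pattern for a private-only endpoint is SSM port forwarding through an instance inside the VPC. A sketch, assuming the Session Manager plugin is installed locally; the instance ID and endpoint hostname are placeholders:

```shell
# Forward local port 8443 to the private EKS API endpoint via an in-VPC instance
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["ABCDEF.gr7.us-east-1.eks.amazonaws.com"],"portNumber":["443"],"localPortNumber":["8443"]}'
```

kubectl can then talk to https://localhost:8443, though the API server certificate is issued for the endpoint hostname, so you either map that hostname to 127.0.0.1 in /etc/hosts or adjust TLS server-name verification in the kubeconfig.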

RBAC for EKS Users

EKS maps IAM identities to Kubernetes RBAC via the aws-auth ConfigMap (newer clusters can use EKS access entries instead, but aws-auth remains widespread):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKSNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes
    - rolearn: arn:aws:iam::123456789012:role/DeveloperRole
      username: developer
      groups:
      - developers  # Maps to a Kubernetes RBAC group
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/admin-user
      username: admin-user
      groups:
      - system:masters  # Full admin access

Minimize the number of principals in system:masters. Use granular RBAC roles for most users:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# Developers can exec into pods (for debugging) but not create/delete
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]

Secrets Management in EKS

Never rely on default Kubernetes Secrets handling for sensitive data: base64 encoding is not encryption. Use the Secrets Store CSI Driver to mount AWS Secrets Manager or SSM Parameter Store values into pods as files (or sync them into Kubernetes Secrets for use as environment variables):

# Install the CSI driver and provider
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=true \
  --set enableSecretRotation=true

helm repo add aws-secrets-manager https://aws.github.io/secrets-store-csi-driver-provider-aws
helm install --namespace kube-system aws-provider aws-secrets-manager/secrets-store-csi-driver-provider-aws

# Create SecretProviderClass
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
  namespace: production
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/my-app/database-url"
        objectType: "secretsmanager"
      - objectName: "/prod/my-app/api-key"
        objectType: "ssmparameter"
  secretObjects:
  - secretName: app-secrets
    type: Opaque
    data:
    - objectName: "prod/my-app/database-url"
      key: DATABASE_URL

Then mount in the pod:

volumes:
- name: secrets
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: "app-secrets"
containers:
- volumeMounts:
  - name: secrets
    mountPath: "/mnt/secrets"
    readOnly: true
  env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: DATABASE_URL

Secrets are mounted as files or synced to Kubernetes Secrets, with automatic rotation when the source secret changes.

The combination of IRSA (pod-scoped credentials), IMDSv2 with hop limit 1 (blocking credential theft), private API endpoint (limiting API surface), network policies (east-west segmentation), and Secrets Store CSI (no plaintext in etcd) addresses the majority of EKS-specific attack vectors.

Tags: EKS, Kubernetes, IRSA, AWS, network policies, IMDSv2, VPC CNI, pod security
