Cloud-Native Security: Securing Containers, Orchestrators, and Microservices

Learn defense-in-depth strategies for Kubernetes-based apps — from container hardening to service mesh policies and runtime threat detection.

March 9, 2026 · 5 min read · ShipSafer Team

Cloud-native architectures unlock developer velocity, but they also expand the attack surface in ways that traditional perimeter security cannot address. When an application is decomposed into dozens of microservices running across ephemeral containers, every deployment pipeline, image registry, service mesh, and orchestrator configuration becomes a potential entry point.

Defense in depth for Kubernetes-based applications means layering controls at every tier: the image, the container runtime, the pod, the cluster, the network, and the application code itself.

Layer 1: Secure Container Images

The foundation of cloud-native security is a minimal, verified image.

Use distroless or minimal base images. Google's distroless images contain only the application runtime and its dependencies — no shell, no package manager, no utilities an attacker can abuse post-exploitation.

FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app
CMD ["/app/server.js"]

Scan images in CI before they reach the registry. Tools like Trivy, Grype, and Snyk Container analyze layers for known CVEs and secrets.

trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest

Sign images with cosign so Kubernetes admission controllers can verify provenance before scheduling:

cosign sign --key cosign.key myregistry/myapp:v1.2.3

Never run containers as root. Set a non-root user in the Dockerfile:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

Note that distroless images have no shell, so RUN is unavailable there; use the :nonroot image variant instead, which runs as UID 65532.

Layer 2: Kubernetes RBAC and Admission Control

Kubernetes Role-Based Access Control (RBAC) is your primary authorization layer.

  • Create a dedicated ServiceAccount for each workload. Never use the default service account.
  • Scope roles to the minimum required verbs and resources.
  • Audit existing bindings regularly with kubectl auth can-i --list --as system:serviceaccount:default:myapp.
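A least-privilege setup following the steps above might look like this — a dedicated ServiceAccount, a Role scoped to two read verbs on one resource, and the binding between them (the names, namespace, and resource choices are illustrative):

```yaml
# Illustrative names; scope verbs and resources to exactly what the workload needs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-reader-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: myapp
    namespace: production
roleRef:
  kind: Role
  name: myapp-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the Role grants only get and list on ConfigMaps, a compromised pod using this ServiceAccount cannot read Secrets, create workloads, or escalate its own permissions.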

Admission controllers enforce policy before objects are persisted to etcd. Key ones to enable:

Controller     | What it enforces
PodSecurity    | Pod Security Standards (restricted/baseline)
OPA/Gatekeeper | Custom Rego policies
Kyverno        | YAML-native policy engine

A Kyverno policy requiring non-root containers:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-non-root
spec:
  rules:
    - name: check-runAsNonRoot
      match:
        resources:
          kinds: ["Pod"]
      validate:
        message: "Containers must not run as root."
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true

Layer 3: Network Policies and Service Mesh

By default, all pods in a Kubernetes cluster can communicate with each other. Network Policies change that to deny-by-default.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Then explicitly allow only the traffic your application needs.
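For instance, a policy admitting only frontend traffic to the API pods on one port might look like this (the pod labels and port are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-service      # illustrative label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # illustrative label
      ports:
        - protocol: TCP
          port: 8080         # assumed application port
```

Combined with the deny-all policy above, any pod without the frontend label is blocked from reaching the API service at all.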

Service meshes like Istio and Linkerd add mutual TLS (mTLS) between every service pair, so even east-west traffic is encrypted and authenticated without changing application code. They also provide fine-grained authorization policies:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: frontend-to-api
  namespace: production
spec:
  selector:
    matchLabels:
      app: api-service
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/production/sa/frontend"]

Layer 4: Secrets Management

Kubernetes Secrets are base64-encoded, not encrypted, by default. Anyone with etcd access can read them.
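Decoding takes a single command; the encoded value below is a made-up example of what a Secret's data field might contain:

```shell
# Secret values are base64-encoded, not encrypted: decoding recovers the plaintext.
encoded="cGFzc3dvcmQxMjM="             # hypothetical value copied from a Secret's data field
plaintext=$(printf '%s' "$encoded" | base64 -d)
echo "$plaintext"                      # prints: password123
```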

Enable encryption at rest for etcd via the EncryptionConfiguration API. Better yet, use an external secrets operator:

  • External Secrets Operator syncs secrets from AWS Secrets Manager, GCP Secret Manager, or HashiCorp Vault into Kubernetes Secrets.
  • Vault Agent Injector injects secrets directly into pod filesystems as in-memory files, never persisting them to etcd.

An ExternalSecret that syncs a database password from AWS Secrets Manager:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: prod/db/password
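The etcd encryption-at-rest option mentioned earlier is configured with a file passed to the API server via --encryption-provider-config. A minimal sketch, with placeholder key material:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # placeholder; generate your own
      - identity: {}  # fallback so existing, not-yet-encrypted data stays readable
```

Provider order matters: the first provider encrypts new writes, while later ones are tried when reading older data.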

Layer 5: Runtime Threat Detection

Static controls are necessary but not sufficient. You need runtime visibility to detect threats that slip past admission controls.

Falco watches kernel system calls and alerts on suspicious behavior — a container spawning a shell, reading /etc/shadow, making unexpected outbound connections.

A sample Falco rule:

- rule: Shell in container
  desc: A shell was spawned in a container
  condition: >
    spawned_process and container and
    proc.name in (shell_binaries)
  output: >
    Shell spawned in container (container=%container.name
    image=%container.image.repository proc=%proc.name)
  priority: WARNING

eBPF-based tools (Tetragon, Cilium Hubble) provide even deeper kernel-level visibility with lower overhead than sidecar-based approaches.

Layer 6: Supply Chain and CI/CD Security

The build pipeline is increasingly the target of supply chain attacks.

  • Pin all dependencies — both direct and transitive — using lockfiles and digest-based image references (image@sha256:...).
  • Use ephemeral, isolated build environments. Builds should not have access to production credentials.
  • Implement SLSA levels. Start at SLSA Level 1 (provenance generation) and work toward Level 3 (isolated, reproducible builds) as your maturity increases.
  • Sign commits and tags so the pipeline can verify that code originates from authorized contributors.
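Digest-based references from the first bullet look like this in a pod spec (the digest value is a placeholder, not a real image):

```yaml
# Pin by digest so the reference is immutable; a tag like :v1.2.3 can be re-pushed
# to point at different content, but a digest cannot.
containers:
  - name: myapp
    image: myregistry/myapp@sha256:<digest>  # placeholder digest
```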

Putting It Together: The Defense-in-Depth Stack

A mature cloud-native security posture looks like this:

  1. Image scanning catches known CVEs and secrets before the image is pushed
  2. Image signing ensures only verified images run in the cluster
  3. Admission control enforces security standards at scheduling time
  4. RBAC limits what each workload can do in the cluster
  5. Network policies + mTLS restrict lateral movement
  6. External secrets management keeps credentials out of etcd
  7. Runtime detection alerts on anomalous behavior in running containers
  8. Audit logging provides a trail for forensic investigation

No single layer is sufficient. An attacker who exploits a vulnerability in your application can bypass image scanning entirely — but network policies and runtime detection will limit the blast radius and surface the intrusion.

Cloud-native security is not a one-time configuration exercise. As your cluster configuration drifts, new images are deployed, and team membership changes, continuous scanning and policy enforcement are what keep controls effective over time.
