Top 10 Cloud Misconfigurations That Lead to Breaches

The most common cloud misconfigurations that cause real breaches: public S3 buckets, open security groups, IMDSv1 abuse, overprivileged IAM, and more.

March 9, 2026 · 6 min read · ShipSafer Team

The majority of cloud breaches do not involve sophisticated zero-day exploits. They happen because something was misconfigured. An S3 bucket left public. A security group that allows inbound traffic from 0.0.0.0/0 on port 22. An IAM role with AdministratorAccess attached to a Lambda function that reads RSS feeds. These are the misconfigurations that show up repeatedly in post-incident reports — and all of them are preventable.

1. Public S3 Buckets (and GCS/Azure Blob Equivalents)

The S3 public bucket is the most documented cloud misconfiguration in history, yet it continues to cause breaches. When a bucket's Block Public Access settings are disabled and the bucket policy allows s3:GetObject for Principal: "*", any object in that bucket is readable by anyone on the internet.

// Dangerous bucket policy — never use this
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}

Fix: Enable S3 Block Public Access at the account level. AWS now enables this by default for new accounts, but legacy accounts and explicitly overridden buckets remain a risk. Run aws s3api get-public-access-block --bucket BUCKET for every bucket in your inventory.
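An account-wide audit can be scripted with a simple loop — a sketch, assuming the AWS CLI is configured with s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock permissions:

```shell
# Audit Block Public Access for every bucket in the account.
# A bucket with no configuration at all returns an error, which
# also means Block Public Access is not enforced for it.
for bucket in $(aws s3api list-buckets --query "Buckets[].Name" --output text); do
  echo "== $bucket"
  aws s3api get-public-access-block --bucket "$bucket" \
    --query "PublicAccessBlockConfiguration" 2>/dev/null \
    || echo "   NO public access block configured"
done
```

Any bucket that prints all-false settings, or no configuration at all, needs a second look at its bucket policy and ACLs.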

2. Open Security Groups (0.0.0.0/0 Inbound)

Security groups that allow all inbound traffic on administrative ports (22/SSH, 3389/RDP, 5432/Postgres, 27017/MongoDB) are a critical finding. Even with strong authentication, exposing these services to the entire internet dramatically expands your attack surface: internet-wide scanners find and probe open administrative ports within minutes of exposure.

# Find security groups with 0.0.0.0/0 inbound rules
aws ec2 describe-security-groups \
  --query "SecurityGroups[?IpPermissions[?IpRanges[?CidrIp=='0.0.0.0/0']]].{ID:GroupId,Name:GroupName}" \
  --output table

Fix: Replace 0.0.0.0/0 rules with specific IP ranges (your office CIDR, VPN endpoints). For services that do need internet access, use a load balancer and restrict the EC2 instances to private subnets.
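Remediating a single offending rule looks roughly like this — the group ID and replacement CIDR below are placeholders:

```shell
# Remove the open SSH rule...
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# ...and replace it with one scoped to a known network (e.g. office/VPN CIDR)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24
```

Run the revoke and authorize back to back so there is no window where legitimate access is also blocked.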

3. IMDSv1 Enabled (Metadata Service Abuse)

The EC2 Instance Metadata Service (IMDS) provides IAM credentials to applications running on the instance. IMDSv1 responds to any HTTP request from the instance — including requests made by SSRF vulnerabilities in your application code. An attacker who can make your app issue HTTP requests can steal the instance's IAM credentials.

The 2019 Capital One breach was enabled by an IMDS credential theft via SSRF.

# Check if IMDSv2 is enforced on an instance
aws ec2 describe-instances \
  --instance-ids i-1234567890abcdef0 \
  --query "Reservations[].Instances[].MetadataOptions"

Fix: Require IMDSv2 (token-based) for all instances. IMDSv2 requires a PUT request with a TTL header before GET requests will work, breaking the simple SSRF exploit pattern.

# Enforce IMDSv2 on a running instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-1234567890abcdef0 \
  --http-tokens required \
  --http-endpoint enabled
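To see why the token requirement defeats the basic SSRF pattern, here is the handshake IMDSv2 enforces (this only works when run from an EC2 instance):

```shell
# 1. A PUT request with a custom TTL header obtains a session token.
#    Typical SSRF primitives can only trigger GET requests, so they
#    never get this far.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# 2. Every subsequent GET must present the token.
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
```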

4. Overprivileged IAM Roles and Policies

The most common IAM misconfiguration is attaching AdministratorAccess (or policies with "Action": "*") to roles, users, or Lambda functions that only need narrow permissions. Compromising any such principal gives an attacker full control of the account.

// Bad — wildcard permissions
{
  "Effect": "Allow",
  "Action": "*",
  "Resource": "*"
}

// Good — least privilege for a Lambda reading from DynamoDB
{
  "Effect": "Allow",
  "Action": ["dynamodb:GetItem", "dynamodb:Query"],
  "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders"
}

Use AWS IAM Access Analyzer and aws iam generate-service-last-accessed-details to identify permissions that have never been used and can be safely removed.
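The last-accessed API is an asynchronous two-step job. A sketch — the role ARN is a placeholder:

```shell
# Step 1: start a last-accessed report for the principal
JOB_ID=$(aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::123456789012:role/my-lambda-role \
  --query JobId --output text)

# Step 2: fetch the results; services with zero authenticated entities
# have never been used by this role and are candidates for removal
aws iam get-service-last-accessed-details --job-id "$JOB_ID" \
  --query "ServicesLastAccessed[?TotalAuthenticatedEntities==\`0\`].ServiceNamespace"
```

The job usually completes within seconds, but you may need to poll if the first get returns an IN_PROGRESS status.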

5. Exposed Kubernetes Dashboard and API Server

The Kubernetes dashboard deployed without authentication and exposed to the internet has enabled numerous cryptocurrency mining attacks. The Kubernetes API server on port 6443, if exposed without network controls, can be enumerated and exploited even with authentication configured.

# Check if the API server is publicly reachable
curl -k https://YOUR_K8S_API:6443/api/v1/namespaces --max-time 5

# Test for unauthenticated (anonymous) access — point kubectl directly
# at the API server so it does not use your kubeconfig credentials
kubectl -s https://YOUR_K8S_API:6443 --insecure-skip-tls-verify \
  get pods --all-namespaces 2>/dev/null

Fix: Place the Kubernetes API server behind a private load balancer. Restrict access with network policies and authorized networks. Remove the Kubernetes dashboard entirely, or deploy it with authentication and no public ingress.
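On a cluster you administer, a quick authorization spot-check confirms that anonymous and default service accounts cannot read sensitive objects:

```shell
# Both of these should print "no" on a correctly locked-down cluster
kubectl auth can-i list secrets --as=system:anonymous
kubectl auth can-i list secrets --as=system:serviceaccount:default:default
```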

6. Exposed Elasticsearch and OpenSearch Clusters

Elasticsearch clusters without authentication, exposed on port 9200, have resulted in some of the largest data leaks in history. By default, older Elasticsearch versions have no authentication at all.

# Check if your cluster requires authentication
curl http://your-elasticsearch:9200/_cat/indices?v

If that returns an index listing without credentials, you have a critical exposure. Fix: Enable X-Pack security (free in Elasticsearch 7.1+), require TLS for all connections, and never expose Elasticsearch directly to the internet. Place it in a private subnet with access only from application servers.
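On self-managed clusters, the relevant elasticsearch.yml settings look roughly like this — verify the exact option names against the documentation for your Elasticsearch version:

```yaml
# elasticsearch.yml — enable authentication and TLS
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true

# Bind to a private interface, never 0.0.0.0 (address is a placeholder)
network.host: 10.0.1.15
```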

7. Publicly Accessible RDS Instances

AWS RDS databases with PubliclyAccessible: true are reachable from the internet. Even with a strong password, internet-exposed databases are subject to brute force, credential stuffing, and vulnerability exploitation.

# Find all publicly accessible RDS instances
aws rds describe-db-instances \
  --query "DBInstances[?PubliclyAccessible==\`true\`].{ID:DBInstanceIdentifier,Engine:Engine}" \
  --output table

Fix: Set PubliclyAccessible: false and access the database through a bastion host, VPN, or AWS Systems Manager Session Manager.
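Flipping the flag on an existing instance is one command — the instance identifier below is a placeholder:

```shell
# Remove public accessibility. --apply-immediately avoids waiting for
# the next maintenance window, but the change can briefly interrupt
# in-flight connections.
aws rds modify-db-instance \
  --db-instance-identifier my-prod-db \
  --no-publicly-accessible \
  --apply-immediately
```

Before applying it, confirm that everything which legitimately connects to the database already does so from inside the VPC, or you will cut off your own access.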

8. CloudTrail Disabled or Incomplete

This is not a misconfiguration that directly enables attacks, but one that guarantees you cannot detect or investigate them. CloudTrail, AWS Config, and VPC Flow Logs must be enabled in all regions to provide the audit trail required for incident response.

# Check if CloudTrail is enabled in all regions
aws cloudtrail describe-trails --include-shadow-trails \
  --query "trailList[].{Name:Name,Region:HomeRegion,MultiRegion:IsMultiRegionTrail}"

Fix: Enable a multi-region CloudTrail trail with log file integrity validation. Ship logs to a centralized S3 bucket in a separate security account with SCPs preventing deletion.
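Creating such a trail takes two commands — the trail and bucket names below are placeholders, and the destination bucket needs a policy that allows CloudTrail to write to it:

```shell
# Multi-region trail with log file integrity validation
aws cloudtrail create-trail \
  --name org-audit-trail \
  --s3-bucket-name central-audit-logs \
  --is-multi-region-trail \
  --enable-log-file-validation

# Trails do not record anything until logging is started
aws cloudtrail start-logging --name org-audit-trail
```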

9. Lambda Functions with Excessive Environment Variable Exposure

Lambda functions often store database credentials, API keys, and other secrets in environment variables. These are visible in the AWS console, CLI, and SDK to anyone with IAM access to the Lambda configuration — a much broader group than those who need the actual secrets.

Fix: Store secrets in AWS Secrets Manager or Parameter Store (SecureString). Retrieve them at runtime via SDK calls rather than environment variables. Restrict Lambda configuration read access using IAM.

import boto3

# Create the client once per execution environment, not per invocation
_client = boto3.client('secretsmanager')
_cache = {}

def get_db_password():
    # Cache the secret so warm invocations skip the network round trip
    if 'db' not in _cache:
        response = _client.get_secret_value(SecretId='prod/database/password')
        _cache['db'] = response['SecretString']
    return _cache['db']

10. Service Account Keys with No Expiry (GCP) / Long-Lived Access Keys (AWS)

GCP service account keys and AWS IAM access keys that never expire are a persistent credential risk. If a key is leaked (in a Git commit, an environment variable in a public container image, a Slack message), it remains valid indefinitely.

# Find active AWS access keys older than 90 days
# (credential report column 9 = access_key_1_active,
#  column 10 = access_key_1_last_rotated; `date -d` requires GNU date)
aws iam generate-credential-report
aws iam get-credential-report --output text --query Content | base64 -d | \
  awk -F, -v cutoff="$(date -d '-90 days' +%Y-%m-%d)" \
    '$9 == "true" && substr($10, 1, 10) < cutoff {print $1, $10}'

Fix: Rotate access keys every 90 days. Prefer short-lived credentials (IAM roles, Workload Identity Federation in GCP) over static keys entirely. Enable AWS Config rules access-keys-rotated and iam-user-no-policies-check.
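A zero-downtime rotation follows a create / deploy / deactivate / delete sequence — the user name and key ID below are placeholders:

```shell
# 1. Create the replacement key (note the new AccessKeyId and secret)
aws iam create-access-key --user-name deploy-bot

# 2. Deploy the new key everywhere the old one is used, then
#    deactivate the old key so any missed consumer fails loudly
aws iam update-access-key --user-name deploy-bot \
  --access-key-id AKIAOLDKEYEXAMPLE --status Inactive

# 3. Once nothing breaks, delete the old key for good
aws iam delete-access-key --user-name deploy-bot \
  --access-key-id AKIAOLDKEYEXAMPLE
```

The deactivation step is the safety net: an inactive key can be re-enabled in seconds if a forgotten consumer surfaces, whereas a deleted key cannot.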

The Common Thread

Every one of these misconfigurations is detectable with automated scanning — AWS Security Hub, GCP Security Command Center, and third-party tools like ShipSafer flag them continuously. The problem is not visibility; it is remediation velocity. Build processes that surface these findings to the people who can fix them and track them to closure, and your cloud security posture will measurably improve.

Check Your Security Score — Free

See exactly how your domain scores on DMARC, TLS, HTTP headers, and 25+ other automated security checks in under 60 seconds.