Terraform Security Best Practices: State, Modules, and Secrets
Secure your infrastructure-as-code: remote state with encryption, state locking, keeping secrets out of .tf files, IaC scanning with tfsec and Checkov, module pinning, and Sentinel policies.
Infrastructure-as-code introduces a new attack surface: your .tf files, state files, and module dependencies are now security artifacts as much as they are operational ones. A leaked Terraform state file can expose production credentials, database connection strings, and private keys. Overly permissive modules can create security holes at scale. This guide covers the security practices that matter most for teams using Terraform seriously.
Remote State: Never Store State Locally
Terraform state (.tfstate) is a JSON file that contains the complete representation of your managed infrastructure, including sensitive values like database passwords, generated certificates, and resource IDs.
The risks of local state:
- Files on developer laptops are outside your security controls
- No audit trail of who accessed or modified state
- No locking leads to concurrent modifications corrupting state
- Sensitive values are stored in plaintext by default
Use remote state with encryption at rest:
AWS S3 + DynamoDB
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-prod"
    key            = "services/api/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-east-1:123456789:key/abc-def"
    dynamodb_table = "terraform-state-locks"
    # Force TLS for all S3 API calls
    # (enforced via bucket policy on the S3 side)
  }
}
```
The S3 bucket should have:
- Versioning enabled (to roll back accidental state corruption)
- Server-side encryption with KMS (not just SSE-S3)
- A bucket policy that denies non-TLS access and restricts access to your Terraform execution role
- No public access
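The TLS-only requirement from the list above can be expressed as a bucket policy. A minimal sketch in Terraform, reusing the bucket name from the backend example:

```hcl
resource "aws_s3_bucket_policy" "state" {
  bucket = "my-terraform-state-prod"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        "arn:aws:s3:::my-terraform-state-prod",
        "arn:aws:s3:::my-terraform-state-prod/*",
      ]
      # Reject any request made over plain HTTP
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}
```

The explicit Deny on aws:SecureTransport = "false" rejects plaintext requests regardless of what IAM would otherwise allow.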
The DynamoDB table provides state locking — Terraform acquires a lock before any plan/apply operation and releases it when done. This prevents two engineers from running terraform apply simultaneously.
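The lock table itself is a one-time setup. A minimal sketch; the table name must match the backend's dynamodb_table setting, and the hash key must be named exactly LockID, which is the attribute the S3 backend expects:

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST" # Lock traffic is tiny; on-demand is cheapest
  hash_key     = "LockID"          # Exact attribute name required by the S3 backend

  attribute {
    name = "LockID"
    type = "S"
  }
}
```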
GCP GCS
```hcl
terraform {
  backend "gcs" {
    bucket = "my-terraform-state-prod"
    prefix = "services/api"
    # GCS encrypts at rest by default using Google-managed keys.
    # Use CMEK for customer-managed encryption.
  }
}
```
Terraform Cloud / HCP Terraform
HashiCorp's managed service handles state storage, locking, and encryption automatically. Remote execution means cloud credentials live in the workspace rather than on CI runners. A free tier is available for small teams.
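Pointing a configuration at HCP Terraform takes a cloud block in place of a backend block (Terraform 1.1+); the organization and workspace names here are placeholders:

```hcl
terraform {
  cloud {
    organization = "my-org" # Placeholder organization name

    workspaces {
      name = "api-prod" # Placeholder workspace name
    }
  }
}
```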
Keep Secrets Out of .tf Files
Terraform configurations are typically checked into version control. Any value written directly in a .tf file is visible to everyone with repo access and lives forever in git history.
Never do this:
```hcl
# BAD: Hardcoded credential
resource "aws_db_instance" "main" {
  username = "admin"
  password = "my-super-secret-password" # Visible in git, in state, in plan output
}
```
Use AWS Secrets Manager or SSM Parameter Store:
```hcl
# Fetch the secret at plan/apply time — not stored in the .tf file
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/database/password"
}

resource "aws_db_instance" "main" {
  username = "admin"
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```
Note: Even with this approach, the secret value will appear in Terraform state. The state file itself must be encrypted (KMS) and access-controlled.
For variables passed to modules, use sensitive = true:
```hcl
variable "db_password" {
  type      = string
  sensitive = true # Redacts the value in plan/apply output
}
```
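Sensitivity propagates: an output that references a sensitive variable must itself be marked sensitive, or terraform plan will refuse to proceed. For example:

```hcl
output "db_connection_password" {
  value     = var.db_password
  sensitive = true # Required because the value derives from a sensitive variable
}
```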
Use HashiCorp Vault for dynamic credentials:
```hcl
provider "vault" {}

data "vault_aws_access_credentials" "creds" {
  backend = "aws"
  role    = "my-terraform-role"
}

provider "aws" {
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}
```
Vault issues short-lived credentials that expire after a configurable TTL — far safer than static access keys.
IaC Scanning with tfsec and Checkov
Static analysis tools catch common misconfigurations before your infrastructure is deployed.
tfsec
tfsec (now maintained as part of Aqua's Trivy scanner) analyzes your Terraform code for security issues:
```shell
# Install
brew install tfsec

# Scan
tfsec . --format=sarif --out=tfsec-results.sarif

# Or with a minimum severity
tfsec . --minimum-severity=HIGH
```
Example findings tfsec catches:
- S3 buckets without encryption or a public access block
- Security groups with 0.0.0.0/0 ingress rules on sensitive ports
- CloudTrail without log file validation
- RDS instances without encryption
- IAM policies with wildcard (*) actions or resources
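When a finding is a deliberate, accepted risk, tfsec supports inline ignore comments so you can suppress one rule on one resource instead of disabling the check globally. A sketch; the rule ID below is illustrative, so use the ID from the actual finding:

```hcl
# The ignore comment suppresses a single rule for this resource only
#tfsec:ignore:aws-ec2-no-public-ingress-sgr
resource "aws_security_group_rule" "public_https" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"] # Intentionally public: HTTPS on a public LB
  security_group_id = "sg-0123456789abcdef0"
}
```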
Checkov
Checkov is broader — it scans Terraform, CloudFormation, Kubernetes manifests, Dockerfiles, and more:
```shell
# Install
pip install checkov

# Scan Terraform
checkov -d . --framework terraform --output sarif > checkov-results.sarif

# Scan with custom policies
checkov -d . --external-checks-dir ./custom-policies
```
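Checkov's equivalent suppression is a checkov:skip comment inside the resource block, ideally with a justification (the check ID below is illustrative):

```hcl
resource "aws_s3_bucket" "demo_site" {
  # checkov:skip=CKV_AWS_18: Access logging is unnecessary for this static demo
  bucket = "my-public-demo-site"
}
```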
CI/CD Integration (GitHub Actions):
```yaml
- name: Run Checkov
  uses: bridgecrewio/checkov-action@v12
  with:
    directory: .
    framework: terraform
    output_format: sarif
    output_file_path: checkov-results.sarif
    soft_fail: false # Fail the pipeline on findings
    check: CKV_AWS_* # Only AWS checks

- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: checkov-results.sarif
```
Module Pinning
Using Terraform modules from the registry or GitHub without version pinning is a supply chain risk — an update to the module could introduce breaking changes or, in a worst case, malicious code.
Always pin modules to a specific version:
```hcl
# BAD: Unpinned — pulls latest on next terraform init
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
}

# BAD: Branch pinned — branch content can change
module "vpc" {
  source = "git::https://github.com/my-org/terraform-modules.git//vpc?ref=main"
}

# GOOD: Version-pinned from the registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.1"
}

# GOOD: Git tag — tags are immutable on well-managed repos
module "vpc" {
  source = "git::https://github.com/my-org/terraform-modules.git//vpc?ref=v2.3.0"
}

# BEST: Git SHA — truly immutable
module "vpc" {
  source = "git::https://github.com/my-org/terraform-modules.git//vpc?ref=abc123def456"
}
```
Also pin provider versions:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Any 5.x release (minor and patch updates, never 6.0)
    }
  }

  required_version = ">= 1.6.0"
}
```
terraform init records provider checksums in a .terraform.lock.hcl file, and terraform providers lock can pre-populate checksums for every platform your team uses. Commit this file so all team members and CI verify identical provider binaries.
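A sketch of what the lock file contains; the version and hash values below are illustrative, not real checksums:

```hcl
# .terraform.lock.hcl — generated by terraform init / terraform providers lock.
# Commit this file so everyone verifies the same provider binaries.
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:illustrative-hash-value=",
    "zh:illustrative-hash-value",
  ]
}
```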
Drift Detection
Configuration drift occurs when someone makes a manual change to infrastructure outside of Terraform, causing the actual state to diverge from the declared state. Drift is a security problem: an undeclared security group rule or IAM policy change could be a backdoor.
Detect drift regularly:
```shell
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes detected
terraform plan -detailed-exitcode
```

In CI/CD as a scheduled job:

```yaml
- name: Drift Detection
  run: |
    set +e # GitHub Actions runs bash with -e; exit code 2 means drift, not an error
    terraform plan -detailed-exitcode -out=plan.out
    EXIT_CODE=$?
    if [ $EXIT_CODE -eq 2 ]; then
      echo "DRIFT DETECTED"
      # Send alert to Slack/PagerDuty
    fi
```
Set up a weekly or daily scheduled pipeline job that runs terraform plan and alerts on any drift. Pair this with AWS Config (or GCP's Cloud Asset Inventory) for real-time change detection at the resource level.
Sentinel Policies (Terraform Enterprise/Cloud)
HashiCorp Sentinel is a policy-as-code framework for Terraform Cloud and Enterprise. It allows you to define compliance rules that must pass before a terraform apply can execute — enforcing security requirements as an automated gate.
Example Sentinel policy — prevent unrestricted SSH:
```sentinel
import "tfplan/v2" as tfplan

# Get all security group rules being created
sgs = filter tfplan.resource_changes as _, changes {
    changes.type is "aws_security_group_rule" and
    changes.change.actions contains "create"
}

# Deny any rule allowing 0.0.0.0/0 on port 22
no_unrestricted_ssh = rule {
    all sgs as _, sg {
        sg.change.after.from_port is not 22 or
        sg.change.after.cidr_blocks not contains "0.0.0.0/0"
    }
}

main = rule { no_unrestricted_ssh }
```
Policy enforcement levels:
- advisory: Logs policy failures but allows the apply to proceed
- soft-mandatory: Blocks the apply unless overridden by a privileged user
- hard-mandatory: Always blocks the apply; cannot be overridden
For security-critical policies (no public S3 buckets, no unencrypted RDS instances), use hard-mandatory. For guidelines (tagging requirements, naming conventions), use advisory or soft-mandatory.
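Enforcement levels are set where the policy is registered. In a policy set's sentinel.hcl, that looks like the following sketch (policy and file names are placeholders):

```hcl
policy "no-unrestricted-ssh" {
  source            = "./no-unrestricted-ssh.sentinel"
  enforcement_level = "hard-mandatory"
}

policy "require-cost-center-tag" {
  source            = "./require-cost-center-tag.sentinel"
  enforcement_level = "advisory"
}
```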
Open-source alternatives to Sentinel for policy-as-code include OPA (Open Policy Agent) with conftest, which works with any CI system:
```shell
# Write Rego policies, then validate Terraform plans
terraform plan -out=plan.bin
terraform show -json plan.bin > plan.json
conftest test plan.json --policy ./policies/
```
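A Rego policy against that plan JSON might look like the following sketch; it mirrors the Sentinel SSH rule above and assumes the resource_changes layout of Terraform's JSON plan format:

```rego
package main

# Deny security group rules that open SSH to the world
deny[msg] {
    rc := input.resource_changes[_]
    rc.type == "aws_security_group_rule"
    rc.change.after.from_port == 22
    rc.change.after.cidr_blocks[_] == "0.0.0.0/0"
    msg := sprintf("%s allows SSH from 0.0.0.0/0", [rc.address])
}
```

conftest evaluates deny rules in package main by default and fails the test when any rule fires.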
This gives you the same enforcement capability without requiring Terraform Enterprise licensing.