Compliance frameworks like SOC 2, ISO 27001, and PCI DSS define controls your organisation must implement. Traditionally, proving compliance means spreadsheets, screenshots, and annual audits. Compliance as Code turns those controls into automated, testable, continuously verified guardrails that live alongside your application code.
## Why compliance as code
| Traditional compliance | Compliance as Code |
|---|---|
| Annual audit (point-in-time) | Continuous verification |
| Screenshots and documents as evidence | Automated test results as evidence |
| Manual evidence collection (weeks) | Evidence generated automatically |
| Drift detected at next audit | Drift detected in minutes |
| "We checked the box" | "We can prove it right now" |
The shift is from proving you were compliant at audit time to proving you are compliant at all times.
## Common frameworks and their controls

### SOC 2
SOC 2 covers five Trust Service Criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy. Example controls:
| Control area | Example requirement | Automated check |
|---|---|---|
| Access control | Users have minimum necessary access | IAM policy audit, RBAC verification |
| Change management | Changes are reviewed before deployment | PR review enforcement, branch protection |
| Encryption | Data encrypted in transit and at rest | TLS configuration check, storage encryption audit |
| Logging | Security events are logged and monitored | Log pipeline health check, alert coverage audit |
| Incident response | Incidents are detected and responded to | Alert SLA monitoring, runbook existence check |
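To make the first row concrete: a least-privilege audit can be as simple as scanning IAM policy documents for wildcard grants. The sketch below is a minimal, illustrative check over an inline sample policy; a real job would pull policy documents from the IAM API.

```python
import json

def wildcard_findings(policy_doc: dict) -> list[str]:
    """Flag Allow statements that grant overly broad access (Action or Resource = '*')."""
    findings = []
    statements = policy_doc.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may appear un-wrapped
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions:
            findings.append(f"statement {i}: wildcard Action")
        if "*" in resources:
            findings.append(f"statement {i}: wildcard Resource")
    return findings

# Sample policy document for illustration:
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-data/*"},
    {"Effect": "Allow", "Action": "*", "Resource": "*"}
  ]
}""")
print(wildcard_findings(policy))
```

Each finding is itself evidence: a dated, machine-generated record that the control was checked and what it found.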
### ISO 27001
ISO 27001 defines an Information Security Management System (ISMS) with controls from Annex A. Many overlap with SOC 2:
| Annex A control | Automated check |
|---|---|
| A.8.9 — Configuration management | IaC drift detection |
| A.8.24 — Use of cryptography | TLS and encryption policy checks |
| A.8.25 — Secure development lifecycle | CI pipeline security gates |
| A.8.28 — Secure coding | SAST scan results |
| A.5.15 — Access control | IAM policy audit |
### PCI DSS
PCI DSS applies to organisations handling cardholder data. It has specific, prescriptive requirements:
| Requirement | Automated check |
|---|---|
| 2.2 — Secure system configurations | CIS benchmark scans |
| 6.2 — Protect against known vulnerabilities | Dependency scan results, patching SLAs |
| 6.5 — Train developers in secure coding | Training completion tracking |
| 8.3 — MFA for all access | MFA enforcement policy check |
| 10.2 — Audit trail for all access to cardholder data | Log completeness verification |
| 11.3 — Penetration testing | Pen test scheduling and results tracking |
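As one concrete example, the MFA check (requirement 8.3) can be driven off the IAM credential report, which includes `password_enabled` and `mfa_active` columns per user. The sketch below works on an inline sample report; a real job would download the report with `aws iam get-credential-report` first.

```python
import csv
import io

def users_without_mfa(report_csv: str) -> list[str]:
    """Return console users with no active MFA device, from an IAM credential report."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [
        r["user"]
        for r in rows
        if r.get("password_enabled") == "true" and r.get("mfa_active") != "true"
    ]

# Trimmed sample report; service accounts without console passwords are not flagged.
report = """user,password_enabled,mfa_active
alice,true,true
bob,true,false
ci-bot,false,false
"""
print(users_without_mfa(report))
```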
## Implementing compliance as code

### Policy as code with OPA
Open Policy Agent (OPA) lets you define policies in Rego and enforce them across your infrastructure:
```rego
# policy/require_encryption.rego
package aws.s3

deny[msg] {
  bucket := input.resource.aws_s3_bucket[name]
  not bucket.server_side_encryption_configuration
  msg := sprintf("S3 bucket '%s' does not have encryption enabled", [name])
}

deny[msg] {
  bucket := input.resource.aws_s3_bucket[name]
  bucket.acl == "public-read"
  msg := sprintf("S3 bucket '%s' has public read access", [name])
}
```
Run OPA against Terraform plans in CI:
```yaml
- name: Check compliance policies
  run: |
    terraform plan -out=tfplan
    terraform show -json tfplan > tfplan.json
    opa eval --data policy/ --input tfplan.json "data.aws.s3.deny" --fail-defined
```
### Infrastructure compliance with Checkov
Checkov scans Terraform, CloudFormation, Kubernetes manifests, and Dockerfiles against hundreds of built-in compliance checks:
```bash
# Scan Terraform with specific framework mapping
checkov -d . --framework terraform --check CKV_AWS_18,CKV_AWS_19,CKV_AWS_145

# Scan with SOC 2 mapping
checkov -d . --compliance-framework soc2
```
Example output:
```text
Passed checks: 47
Failed checks: 3

FAILED: CKV_AWS_18 — Ensure the S3 bucket has access logging enabled
  File: /main.tf:23-35
FAILED: CKV_AWS_145 — Ensure S3 bucket is encrypted with KMS
  File: /main.tf:23-35
FAILED: CKV_AWS_19 — Ensure the S3 bucket has server-side encryption
  File: /storage.tf:12-20
```
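To turn these results into a hard CI gate, Checkov can emit machine-readable output (`-o json`) that a small script consumes. The sketch below assumes a report shape with a `results.failed_checks` list (the exact schema may vary across Checkov versions) and uses a trimmed inline sample.

```python
import json

def failed_check_summary(checkov_json: dict) -> list[str]:
    """Summarise failed checks from a parsed Checkov JSON report."""
    failed = checkov_json.get("results", {}).get("failed_checks", [])
    return [f"{c['check_id']}: {c['file_path']}" for c in failed]

# Trimmed-down sample, shaped like the assumed `checkov -o json` output:
report = json.loads("""{
  "results": {
    "failed_checks": [
      {"check_id": "CKV_AWS_18", "file_path": "/main.tf"},
      {"check_id": "CKV_AWS_145", "file_path": "/main.tf"}
    ]
  }
}""")
summary = failed_check_summary(report)
print(summary)
exit_code = 1 if summary else 0  # a non-zero exit code fails the CI job
```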
### Kubernetes compliance with Kyverno
Kyverno enforces policies on Kubernetes resources at admission time:
```yaml
# Require resource limits on all containers (availability control)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-limits
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "All containers must have CPU and memory limits."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```

```yaml
# Require non-root containers (security control)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-non-root
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-non-root
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Containers must run as non-root."
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true
```
### Git-based compliance controls
Branch protection rules and CI requirements enforce change management controls:
```hcl
# GitHub branch protection as code (via Terraform)
resource "github_branch_protection" "main" {
  repository_id = github_repository.app.node_id
  pattern       = "main"

  required_pull_request_reviews {
    required_approving_review_count = 1
    dismiss_stale_reviews           = true
    require_code_owner_reviews      = true
  }

  required_status_checks {
    strict   = true
    contexts = ["unit-tests", "security-scan", "compliance-check"]
  }

  enforce_admins = true
}
```
This enforces:
- All changes require a PR with at least one approval (change management)
- Security scans must pass before merge (secure development)
- Even admins must follow the process (no bypass)
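An audit job can then verify that every repository actually carries these settings. The sketch below checks pre-fetched per-repo protection settings against the controls above; the field names are illustrative rather than the exact GitHub API schema, and a real audit would fetch them from the API.

```python
# Status checks every protected branch must require (assumed names from this article's CI):
REQUIRED_CHECKS = {"unit-tests", "security-scan", "compliance-check"}

def non_compliant_repos(repos: list[dict]) -> list[str]:
    """Return repos whose main-branch protection misses a required control."""
    bad = []
    for repo in repos:
        protection = repo.get("protection") or {}
        ok = (
            protection.get("required_approving_review_count", 0) >= 1
            and protection.get("enforce_admins", False)
            and REQUIRED_CHECKS <= set(protection.get("required_status_checks", []))
        )
        if not ok:
            bad.append(repo["name"])
    return bad

# Illustrative pre-fetched settings for two repositories:
repos = [
    {"name": "app", "protection": {
        "required_approving_review_count": 1,
        "enforce_admins": True,
        "required_status_checks": ["unit-tests", "security-scan", "compliance-check"],
    }},
    {"name": "legacy-tool", "protection": None},  # no branch protection at all
]
print(non_compliant_repos(repos))
```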
## Evidence collection

### Automated evidence generation
Instead of collecting screenshots before an audit, generate evidence continuously:
```yaml
# CI job that generates compliance evidence
compliance-evidence:
  schedule: "0 0 * * *"  # Daily
  steps:
    - name: IAM audit
      run: |
        aws iam generate-credential-report
        aws iam get-credential-report --output json > evidence/iam-report.json
    - name: Encryption audit
      run: |
        # text output plus tr gives one bucket name per line for xargs -I{}
        aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n' | \
          xargs -I{} aws s3api get-bucket-encryption --bucket {} > evidence/s3-encryption.json
    - name: MFA compliance
      run: |
        aws iam list-users --query 'Users[].UserName' --output text | tr '\t' '\n' | \
          xargs -I{} aws iam list-mfa-devices --user-name {} > evidence/mfa-status.json
    - name: Store evidence
      run: |
        aws s3 cp evidence/ s3://compliance-evidence/$(date +%Y-%m-%d)/ --recursive
```
### Evidence storage
| Requirement | Implementation |
|---|---|
| Immutable | Write-once S3 bucket with Object Lock enabled |
| Timestamped | Each evidence run tagged with ISO 8601 timestamp |
| Tamper-evident | SHA-256 hash of each evidence file stored separately |
| Retained | Retention policy matching compliance framework (typically 1–7 years) |
| Accessible | Auditors get read-only access to the evidence bucket |
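The tamper-evident row can be implemented by hashing every evidence file and storing the resulting manifest somewhere the evidence writers cannot reach: a changed file then no longer matches its recorded digest. A minimal sketch, demonstrated against a throwaway directory:

```python
import hashlib
import tempfile
from pathlib import Path

def evidence_manifest(evidence_dir: str) -> dict[str, str]:
    """Map each evidence file (relative path) to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(evidence_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(evidence_dir))] = digest
    return manifest

# Demo: hash one evidence file in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "iam-report.json").write_text('{"users": []}')
    print(evidence_manifest(d))
```

Sorting the paths keeps the manifest deterministic, so two runs over identical evidence produce byte-identical manifests that can themselves be compared or hashed.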
## Continuous compliance monitoring

### Drift detection
Infrastructure drifts from its declared state. Someone makes a manual change in the console, and suddenly your encrypted-by-policy S3 bucket has encryption disabled.
```yaml
# Scheduled Terraform drift detection
drift-check:
  schedule: "0 */6 * * *"  # Every 6 hours
  steps:
    - run: terraform plan -detailed-exitcode
      # Exit code 2 = changes detected (drift)
    - if: exitCode == 2
      run: |
        echo "Infrastructure drift detected"
        # Alert security team, create ticket
```
### Compliance dashboards
Build dashboards that show real-time compliance posture:
| Control | Status | Last checked | Evidence |
|---|---|---|---|
| S3 encryption | ✅ 100% compliant | 2 hours ago | [Report link] |
| MFA enforcement | ⚠️ 98% (2 users without MFA) | 6 hours ago | [Report link] |
| Branch protection | ✅ All repos compliant | 1 hour ago | [Report link] |
| Vulnerability SLAs | ❌ 3 P2s past SLA | Real-time | [Dashboard link] |
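The status cells in such a dashboard can be generated directly from raw check results rather than maintained by hand. A minimal sketch that turns per-control `(passed, total)` counts into dashboard rows:

```python
def posture(results: dict[str, tuple[int, int]]) -> dict[str, str]:
    """Render a status cell per control from (passed checks, total checks) counts."""
    rows = {}
    for control, (passed, total) in results.items():
        if passed == total:
            rows[control] = "✅ 100% compliant"
        else:
            pct = 100 * passed / total
            failing = total - passed
            rows[control] = f"⚠️ {pct:.0f}% ({failing} failing)"
    return rows

# Counts as they might come out of the day's automated checks:
print(posture({"S3 encryption": (12, 12), "MFA enforcement": (98, 100)}))
```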
### Mapping controls to evidence
Create a control matrix that maps each compliance requirement to:
- The automated check that verifies it
- The evidence artefact that proves it
- The remediation runbook if the check fails
```yaml
# compliance-matrix.yaml
controls:
  - id: SOC2-CC6.1
    description: "Logical access to information assets is restricted"
    checks:
      - name: "IAM policy audit"
        tool: "aws iam"
        schedule: daily
        evidence: "s3://evidence/iam-report-{date}.json"
      - name: "Branch protection audit"
        tool: "GitHub API"
        schedule: daily
        evidence: "s3://evidence/branch-protection-{date}.json"
    remediation: "https://wiki.internal/runbooks/access-control-remediation"
```
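A small script can expand the matrix into the concrete evidence objects expected for a given day, which a verification job (or an auditor) can then look up in the evidence bucket. The sketch below hard-codes one control entry shaped like the YAML above; a real pipeline would load the file with a YAML parser such as PyYAML.

```python
from datetime import date

# One control entry, shaped like compliance-matrix.yaml:
control = {
    "id": "SOC2-CC6.1",
    "checks": [
        {"name": "IAM policy audit",
         "evidence": "s3://evidence/iam-report-{date}.json"},
        {"name": "Branch protection audit",
         "evidence": "s3://evidence/branch-protection-{date}.json"},
    ],
}

def expected_evidence(control: dict, day: date) -> list[str]:
    """Expand each check's evidence template into the object expected for a day."""
    stamp = day.isoformat()  # ISO 8601, matching the evidence timestamping convention
    return [c["evidence"].replace("{date}", stamp) for c in control["checks"]]

print(expected_evidence(control, date(2024, 6, 1)))
```

A missing object for any scheduled day is itself a finding: either the check did not run or its evidence was not stored.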
## Getting started
You do not need to automate everything on day one. Start with the controls that:
- Fail most often — if MFA compliance is your biggest audit finding, automate MFA checks first
- Are most painful to evidence — if evidence collection takes weeks, automate the collection
- Have the highest impact — encryption, access control, and change management affect every framework
### Phased approach
Phase 1 (week 1–2): Automate evidence collection for your top 5 failing controls. Store evidence in an immutable bucket.
Phase 2 (week 3–4): Add compliance checks to CI/CD. Fail builds that violate critical policies (encryption, public access, missing authentication).
Phase 3 (month 2): Build a compliance dashboard. Set up drift detection. Map all controls to automated checks.
Phase 4 (month 3+): Achieve continuous compliance. Auditors review dashboards and evidence artefacts instead of requesting screenshots.
## Summary
Compliance as Code transforms audit preparation from a manual, periodic exercise into continuous, automated verification. Use OPA, Checkov, and Kyverno to enforce policies. Generate evidence automatically with scheduled CI jobs and store it immutably. Detect drift with regular Terraform plan checks. Start with your most painful controls and expand incrementally. The goal is not to eliminate audits but to make them trivial — because the evidence is always current and always available.
