Automated Remediation Pipelines in AWS: Closing the Loop on Continuous Compliance: Part 3


Table of Contents
- Introduction
- Remediation Pipelines
- Ready to Build More? DIY the Remaining Remediation Pipelines!
- Challenge Yourself: Build the Next Two Remediations
- Why These Pipelines Matter
- Conclusion
- What’s Next
- About the Author
Introduction
Building on our previously established continuous compliance framework (Part 1) and service integration architecture (Part 2), this blog post introduces the third crucial layer — remediation. While detection and visibility are vital, real value comes from automating corrective action. In this post, we outline 10 production-ready remediation pipelines using native AWS services like Config, Security Hub, Macie, GuardDuty, and IAM Access Analyzer.
Each remediation pipeline includes the necessary components: IAM roles and permissions, AWS Config rules (where applicable), Lambda/SSM code, triggering mechanisms, monitoring integration, and CloudWatch alarms. For each fully detailed remediation, we provide:
- A consolidated explanation of its importance and the AWS best practice it supports
- The source of the finding
- Complete working code (Lambda or SSM)
- Required IAM roles
- EventBridge triggers
- CloudWatch alarms
Note: These remediation pipelines vary by use case and environment. Exercise caution before enabling automatic remediation in production workloads. For example, actions like stopping or starting EC2 instances should go through proper approval workflows to avoid disruption. Please update the Lambda functions and SSM documents to suit your requirements.
Remediation Pipelines
EC2 Instance Type Restriction
Standardizing EC2 instance types helps organizations maintain cost efficiency, performance predictability, and easier troubleshooting. Enforcing instance type policies ensures teams do not launch large or non-standard instances, aligning with the AWS Well-Architected Framework's cost optimization and governance best practices.
- Finding Source: AWS Config rule: approved-instance-types (backed by the managed rule identifier DESIRED_INSTANCE_TYPE)
- Config Rule:
{
  "ConfigRuleName": "approved-instance-types",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "DESIRED_INSTANCE_TYPE"
  },
  "InputParameters": {
    "instanceType": "t3.micro,t3.small"
  }
}
- IAM Role for Automation:
{
"Version": "2012-10-17",
"Statement": [
{"Effect": "Allow", "Action": ["ec2:StopInstances"], "Resource": "*"},
{"Effect": "Allow", "Action": ["ssm:StartAutomationExecution"], "Resource": "*"}
]
}
- SSM Document:
description: Stop EC2 instance with unapproved type
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
parameters:
  AutomationAssumeRole:
    type: String
  InstanceId:
    type: String
mainSteps:
  - name: stopInstance
    action: aws:changeInstanceState
    inputs:
      InstanceIds:
        - '{{ InstanceId }}'
      DesiredState: stopped
- Trigger: Config → EventBridge → SSM Automation (see the event pattern after this list)
- CloudWatch Alarms:
- Alarm on Config rule non-compliance count > 0
- Alarm on SSM Automation execution status = failed
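To wire up the trigger, an EventBridge rule can match the Config compliance-change event and start the SSM Automation document as its target. A minimal event pattern, assuming the rule name above (the automation target and the role EventBridge assumes to start it are environment-specific):
{
  "source": ["aws.config"],
  "detail-type": ["Config Rules Compliance Change"],
  "detail": {
    "configRuleName": ["approved-instance-types"],
    "newEvaluationResult": { "complianceType": ["NON_COMPLIANT"] }
  }
}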
Macie: Remediate Publicly Accessible Sensitive Data in S3
Macie helps detect sensitive data exposure in S3 buckets. This remediation revokes public access to prevent accidental data leaks and protect PII/PHI, eliminating exposure of regulated data types and aligning with GDPR, HIPAA, and general data classification best practices. It helps you detect and remediate compliance violations in near real time.
- Finding Source: Amazon Macie – Sensitive data discovery with public accessibility
- Lambda Function:
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    # Macie findings expose the bucket under resourcesAffected.s3Bucket.name
    bucket_name = event['detail']['resourcesAffected']['s3Bucket']['name']
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            'BlockPublicAcls': True,
            'IgnorePublicAcls': True,
            'BlockPublicPolicy': True,
            'RestrictPublicBuckets': True
        }
    )
- IAM Role:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutPublicAccessBlock"],
      "Resource": "*"
    }
  ]
}
- Trigger: EventBridge rule for Macie finding
- CloudWatch Alarms:
- Alarm on number of sensitive data detections in public S3 buckets
- Alarm on Lambda errors
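As a concrete example of the Lambda error alarm above, here is a minimal boto3 sketch; the alarm and function names are hypothetical placeholders:
import boto3

cloudwatch = boto3.client('cloudwatch')

# Hypothetical names; substitute your remediation function's actual name.
cloudwatch.put_metric_alarm(
    AlarmName='macie-remediation-lambda-errors',
    Namespace='AWS/Lambda',
    MetricName='Errors',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'macie-s3-remediation'}],
    Statistic='Sum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    TreatMissingData='notBreaching'
)
The same pattern works for every Lambda-backed pipeline in this post.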
IAM Password Policy Compliance
Enforcing strong password policies aligns with compliance requirements like CIS Benchmarks and supports identity protection.
- Finding Source: AWS Config managed rule: iam-password-policy
- Config Rule: Enabled with strong parameter values.
- SSM Automation Document:
description: Enforce strong IAM password policy
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
parameters:
  AutomationAssumeRole:
    type: String
mainSteps:
  - name: updatePolicy
    action: aws:executeAwsApi
    inputs:
      Service: iam
      Api: UpdateAccountPasswordPolicy
      MinimumPasswordLength: 14
      RequireSymbols: true
      RequireNumbers: true
      RequireUppercaseCharacters: true
      RequireLowercaseCharacters: true
      AllowUsersToChangePassword: true
      MaxPasswordAge: 90
- Trigger: Config → EventBridge → SSM Automation
- CloudWatch Alarms:
- Alarm on policy compliance status
- Alarm on automation execution failure
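Before wiring up the EventBridge trigger, you can validate the document by starting an execution manually; a minimal sketch, assuming hypothetical document and role names:
import boto3

ssm = boto3.client('ssm')

# Hypothetical document name and role ARN; replace with your own.
ssm.start_automation_execution(
    DocumentName='EnforceIamPasswordPolicy',
    Parameters={
        'AutomationAssumeRole': ['arn:aws:iam::111122223333:role/RemediationAutomationRole']
    }
)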
GuardDuty: Remediate Unauthorized EC2 Port Scans
Unauthorized port scanning indicates reconnaissance activity, often a prelude to intrusion attempts. This remediation automatically isolates the affected EC2 instance to contain the risk, fulfilling principles of incident response, least privilege, and zero trust networking.
- Finding Source: GuardDuty Finding Type: Recon:EC2/PortProbeUnprotectedPort
- Lambda Function: Tags the EC2 instance and modifies Security Group to isolate
- IAM Role:
{
"Statement": [
{"Effect": "Allow", "Action": ["ec2:CreateTags", "ec2:ModifyInstanceAttribute", "ec2:RevokeSecurityGroupIngress"], "Resource": "*"}
]
}
- Lambda Snippet:
import boto3

def lambda_handler(event, context):
    instance_id = event['detail']['resource']['instanceDetails']['instanceId']
    ec2 = boto3.client('ec2')
    # Tag the instance so responders can see it has been isolated
    ec2.create_tags(Resources=[instance_id], Tags=[{'Key': 'Isolated', 'Value': 'true'}])
    # Additional isolation logic here (e.g., the quarantine sketch shown later in this post)
- Trigger: GuardDuty finding via EventBridge (see the event pattern after this list)
- CloudWatch Alarms:
- Alarm on number of PortProbeUnprotectedPort findings > threshold
- Alarm on Lambda execution error count
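The EventBridge rule for this pipeline matches on the GuardDuty finding type; a minimal event pattern (the Lambda target configuration is omitted):
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "type": ["Recon:EC2/PortProbeUnprotectedPort"]
  }
}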
Security Hub: Disable Insecure TLS Policy on CloudFront
Using outdated TLS policies on CloudFront distributions weakens encryption strength and exposes data in transit to potential threats. AWS recommends enforcing TLS 1.2 or later to align with compliance standards such as PCI DSS, ISO 27001, and NIST. This remediation reduces reliance on deprecated protocols and strengthens the protection of data in transit.
- Finding Source: AWS Security Hub – AWS Foundational Security Best Practices, CloudFront controls for encryption in transit
- Lambda Function:
import boto3

def lambda_handler(event, context):
    cf = boto3.client('cloudfront')
    # Security Hub events carry findings under detail.findings; the resource
    # Id is the distribution ARN, so take the ID after the final '/'
    finding = event['detail']['findings'][0]
    dist_id = finding['Resources'][0]['Id'].split('/')[-1]
    dist_config = cf.get_distribution_config(Id=dist_id)
    config = dist_config['DistributionConfig']
    config['ViewerCertificate']['MinimumProtocolVersion'] = 'TLSv1.2_2021'
    cf.update_distribution(
        DistributionConfig=config,
        Id=dist_id,
        IfMatch=dist_config['ETag']
    )
- IAM Role:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudfront:GetDistributionConfig", "cloudfront:UpdateDistribution"],
      "Resource": "*"
    }
  ]
}
- Trigger: EventBridge rule for Security Hub finding (see the event pattern after this list)
- CloudWatch Alarms:
- Alarm on recurrence of insecure TLS protocol usage
- Alarm on Lambda function failure
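For the trigger, an EventBridge rule can match imported Security Hub findings and filter down to failed checks; a minimal pattern (in practice you would narrow it further, for example on the finding's Title or GeneratorId, so only the CloudFront TLS control invokes this Lambda):
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Compliance": { "Status": ["FAILED"] }
    }
  }
}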
Ready to Build More? DIY the Remaining Remediation Pipelines!
We’ve walked you through the first five remediation pipelines in detail, from best practices to Lambda and IAM implementations. For the remaining pipelines, we invite you to dive deeper and build them yourself using the foundations we’ve already covered. To help you get started, we’ve included skeleton code, event structure hints, and the IAM permissions you’ll need.
The next two pipelines, covering IAM Access Analyzer (externally shared roles) and GuardDuty (crypto-mining activity), come with partial implementations below; two more, for Security Hub (unused access keys) and Macie (untagged buckets holding sensitive data), are posed as challenges at the end of this post.
This is a great opportunity to:
- Practice remediation logic
- Strengthen your AWS automation skills
- Customize responses to your environment
Whether you’re a cloud beginner or an experienced engineer, these exercises will equip you to design resilient, self-healing cloud environments tailored to your security posture.
IAM Access Analyzer: Revoke Public or Cross-Account Access to IAM Roles
IAM Access Analyzer identifies roles that are shared with external accounts or exposed publicly. This remediation restricts such unintended access, enforcing the principle of least privilege and preventing external misuse of overly permissive IAM configurations.
- Finding Source: IAM Access Analyzer – Policy finding on externally shared roles
- Lambda Function:
import boto3

def lambda_handler(event, context):
    iam = boto3.client('iam')
    # Access Analyzer events carry the analyzed resource's ARN in detail.resource
    role_name = event['detail']['resource'].split('/')[-1]
    # ...your logic goes here (see the starter sketch below)...
- Trigger: EventBridge rule for Access Analyzer finding
- CloudWatch Alarms:
- Alarm when a public/shared role is detected
- Alarm on failed Lambda execution
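If you want a starting point for the logic above, one option is to overwrite the role's trust policy so that only a principal you control can assume it. A minimal sketch, assuming a hypothetical trusted principal ARN you replace with your own (note this overwrites the existing trust policy, so review it first):
import json
import boto3

iam = boto3.client('iam')

# Hypothetical trusted principal; replace with your own account's root or role ARN.
TRUSTED_PRINCIPAL = 'arn:aws:iam::111122223333:root'

def lambda_handler(event, context):
    # Access Analyzer events carry the analyzed resource's ARN in detail.resource
    role_name = event['detail']['resource'].split('/')[-1]

    # Replace the trust policy so only the trusted principal can assume the role
    trust_policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': TRUSTED_PRINCIPAL},
            'Action': 'sts:AssumeRole'
        }]
    }
    iam.update_assume_role_policy(
        RoleName=role_name,
        PolicyDocument=json.dumps(trust_policy)
    )
The function's execution role would also need iam:UpdateAssumeRolePolicy for this approach.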
GuardDuty: Quarantine EC2 Instance Compromised by Crypto Mining
GuardDuty detects threats like crypto mining on EC2. This remediation isolates the instance by detaching network interfaces and removing its public IP to reduce the blast radius. This action addresses AWS best practices around incident containment and resource isolation, ensuring infected instances are blocked from communicating with external nodes.
- Finding Source: Amazon GuardDuty – CryptoCurrency:EC2/BitcoinTool.B!DNS
- Lambda Function:
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    instance_id = event['detail']['resource']['instanceDetails']['instanceId']
    eni_response = ec2.describe_instances(InstanceIds=[instance_id])
    interfaces = eni_response['Reservations'][0]['Instances'][0]['NetworkInterfaces']
    # ...your logic goes here (see the quarantine sketch below)...
- Trigger: EventBridge rule for GuardDuty finding
- CloudWatch Alarms:
- Alarm on GuardDuty findings related to crypto mining
- Alarm when the instance has zero ENIs post remediation
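As a starting point for the isolation logic, note that an instance's primary network interface cannot be detached while it is running, so a common alternative is to move every attached ENI into an empty "quarantine" security group. A minimal sketch, assuming a pre-created quarantine group (the sg- value is a placeholder):
import boto3

ec2 = boto3.client('ec2')

# Hypothetical quarantine security group with no inbound or outbound rules.
QUARANTINE_SG = 'sg-0123456789abcdef0'

def lambda_handler(event, context):
    instance_id = event['detail']['resource']['instanceDetails']['instanceId']
    resp = ec2.describe_instances(InstanceIds=[instance_id])
    interfaces = resp['Reservations'][0]['Instances'][0]['NetworkInterfaces']

    # Swap every attached ENI onto the quarantine group, cutting off all traffic
    for eni in interfaces:
        ec2.modify_network_interface_attribute(
            NetworkInterfaceId=eni['NetworkInterfaceId'],
            Groups=[QUARANTINE_SG]
        )
The function's execution role would also need ec2:DescribeInstances and ec2:ModifyNetworkInterfaceAttribute for this approach.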
Challenge Yourself: Build the Next Two Remediations
Now that you’ve explored the fully detailed remediation pipelines and the two partial implementations above, it’s time to test your skills! We encourage you to implement the next two pipelines independently, using the patterns and structures established earlier. These two use cases are critical in real-world environments and will further solidify your understanding of AWS security automation:
- Security Hub: Disable Unused Access Keys
- Finding Type: IAM credentials unused for 90+ days
- Hint: Use Lambda to disable the key via the UpdateAccessKey API. Use CloudWatch to monitor key activity.
- Macie: Auto-tag S3 Buckets with Sensitive Data for Governance
- Finding Type: Sensitive data detected in untagged S3 buckets
- Hint: Use the S3 and Macie APIs to fetch the bucket name and apply tags via PutBucketTagging. A starter sketch for both API calls follows this list.
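If you get stuck, here is a minimal starter sketch for the two API calls involved; the user, key ID, bucket, and tag values are hypothetical placeholders you would derive from the finding event (note that PutBucketTagging replaces a bucket's entire tag set, so merge with existing tags in practice):
import boto3

iam = boto3.client('iam')
s3 = boto3.client('s3')

# Challenge 1: deactivate an unused access key (values are placeholders)
iam.update_access_key(
    UserName='example-user',
    AccessKeyId='AKIAIOSFODNN7EXAMPLE',
    Status='Inactive'
)

# Challenge 2: tag a bucket Macie flagged as holding sensitive data
s3.put_bucket_tagging(
    Bucket='example-bucket',
    Tagging={'TagSet': [{'Key': 'DataClassification', 'Value': 'Sensitive'}]}
)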
Why These Pipelines Matter
Each remediation addresses a high-severity risk and is mapped to a corresponding AWS security best practice. These pipelines embody the shift from reactive detection to proactive self-healing infrastructure. Their modular design allows you to start small and scale with confidence.
Conclusion
In this third installment of our security automation journey, we moved from visibility and integration to actionable, automated remediation — a critical leap toward a self-healing cloud environment. By implementing ten diverse, production-ready remediation pipelines across key AWS services like Config, Security Hub, Macie, GuardDuty, and IAM Access Analyzer, we’ve created a resilient framework that doesn’t just detect risks, but responds to them intelligently.
These remediations are not one-size-fits-all scripts; they are modular, extensible solutions backed by AWS best practices. With integrated CloudWatch alarms, secure IAM roles, and clear triggering logic, each pipeline is designed for real-world scalability and operational safety.
Whether you’re just beginning to operationalize security in AWS or are refining an existing setup, this foundation sets the stage for a proactive, policy-driven security posture. And as we prepare for the next step — automation at scale — you’ll be equipped with the tools to go from alert to action, without human bottlenecks.
Stay tuned for Part 4, where we’ll automate the deployment of these pipelines using Infrastructure as Code, ensuring repeatability and consistency across environments.
What’s Next
- Deployment automation using IaC
- Extend to cross-account and multi-region detection
- Add manual approval stages
- Integrate remediation logs with Security Lake and SIEM solutions
About the Author
Deepali Sonune is a DevOps engineer with 12+ years of industry experience. She has been developing high-performance DevOps solutions with stringent security and governance requirements in AWS for 9+ years. She also works with developers and IT to oversee code releases, combining an understanding of both engineering and programming.