Building an Enterprise-Grade AWS CI/CD Pipeline with Terraform
How I automated AWS deployments with CodePipeline, CodeCommit, and CodeBuild for under $5/month
The DevOps Challenge
Let me take you on a technical adventure that recently consumed my weekends and late nights. While I’ve mastered the arts of Jenkins, GitHub Actions, and Azure DevOps over the years, AWS’s native CI/CD services remained unexplored territory in my professional journey. When tasked with implementing a fully AWS-native DevOps pipeline for a crucial enterprise SSO project, I knew I was in for both a challenge and a revelation.
Truth be told, I approached this mission with equal parts excitement and skepticism. Would AWS’s homegrown CI/CD solutions match the maturity of the standalone counterparts I’ve grown to love? Spoiler alert: they don’t — at least not yet — but they certainly bring their own flavor of magic to the table.
The goal seemed straightforward at first: automate the deployment of AWS SSO configurations through a fully managed CI/CD pipeline. But as with most interesting DevOps problems, the devil was in the details. I would need to navigate the peculiarities of AWS’s native services, overcome their integration quirks, and piece together a solution that was both robust and maintainable. The journey took me through CodeCommit’s sparse interface, CodeBuild’s container idiosyncrasies, and CodePipeline’s rather opinionated workflow design — all while maintaining Terraform as my infrastructure orchestration tool of choice.
The Mission: Infrastructure as Code Automation
Before diving into code snippets and configuration files, let’s understand what I was building. My task centered around AWS Single Sign-On (SSO) automation for a large enterprise with dozens of AWS accounts and multiple teams requiring different levels of access:
Create a system to provision Permission Sets (essentially IAM roles on steroids) into AWS SSO
Establish automated linking between these Permission Sets, user Groups, and AWS accounts
Set up a CI/CD pipeline to deploy changes automatically whenever configurations are updated
Package everything neatly as infrastructure-as-code using Terraform
Ensure the entire process was auditable, reproducible, and compliant with security best practices
In essence, I needed to build a self-service platform where different teams could access their assigned AWS accounts with appropriate permission levels, all managed through a central AWS SSO portal. This would replace our existing manual process where identity and access management changes required tickets, manual approvals, and direct console work — a process that typically took days and was prone to human error.
The catch? Making this process as automated and hands-off as possible while maintaining appropriate security controls. As AWS SSO lacks certain API capabilities for user and group provisioning (as of this writing), this part would remain a console operation performed by our security team. However, everything else was fair game for automation.
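For context, the permission set side of the Terraform code looks roughly like the sketch below. This is a minimal illustration using the AWS provider's SSO Admin resources; the resource names, group name, and account ID are placeholders rather than the project's actual configuration:
data "aws_ssoadmin_instances" "this" {}

# A permission set is essentially a reusable role template managed by AWS SSO
resource "aws_ssoadmin_permission_set" "readonly" {
  name             = "ReadOnlyAccess"
  instance_arn     = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  session_duration = "PT8H"
}

resource "aws_ssoadmin_managed_policy_attachment" "readonly" {
  instance_arn       = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  managed_policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
  permission_set_arn = aws_ssoadmin_permission_set.readonly.arn
}

# Look up a group that the security team provisioned manually in the console
data "aws_identitystore_group" "platform_team" {
  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]

  filter {
    attribute_path  = "DisplayName"
    attribute_value = "platform-team" # placeholder group name
  }
}

# Link the group, the permission set, and a target AWS account
resource "aws_ssoadmin_account_assignment" "platform_readonly" {
  instance_arn       = aws_ssoadmin_permission_set.readonly.instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.readonly.arn
  principal_id       = data.aws_identitystore_group.platform_team.group_id
  principal_type     = "GROUP"
  target_id          = "111122223333" # placeholder account ID
  target_type        = "AWS_ACCOUNT"
}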
The challenge was further complicated by our organization’s strict security requirements:
All infrastructure changes must be tracked and auditable
Deployments must require explicit approvals
The entire solution must be version-controlled
No permanent admin credentials should be used in the pipeline
This is precisely the type of complex, multi-faceted DevOps problem I live for.
Assembling My AWS DevOps Arsenal
For this operation, I enlisted four core AWS services to create a seamless CI/CD experience:
CodeCommit: My code repository (AWS’s equivalent of GitHub)
CodeBuild: My execution environment for running Terraform commands
CodePipeline: The orchestrator connecting all the pieces
S3: My artifact storage system
Each of these services has its own strengths and limitations that influenced my architectural decisions:
CodeCommit offers tight integration with other AWS services but lacks many developer-friendly features found in GitHub or GitLab. While it supports branch policies and approval rules, its web interface for code reviews is quite basic. However, it does provide IAM-based access control, which aligns perfectly with our security requirements and eliminates the need for SSH keys or personal access tokens.
CodeBuild runs your build commands in isolated Docker containers, which is excellent for consistency but introduces challenges with filesystem operations like symbolic links (which I’ll discuss later). It supports various compute types ranging from 3GB to 64GB of memory, though I found the smallest tier more than sufficient for our Terraform operations.
CodePipeline orchestrates the entire process but follows a linear execution model with limited branching capabilities. It allows for manual approval steps, which was crucial for our compliance requirements, but lacks some advanced features like parallel execution paths or conditional stages.
S3 serves as a simple yet effective artifact repository, with versioning capabilities that create an audit trail of all our infrastructure changes.
Let’s see how I assembled these pieces into a coherent infrastructure automation puzzle.
First Steps: Setting Up CodeCommit
While my organization already uses GitHub for most projects, this particular project needed to live within our AWS ecosystem for security and compliance reasons. Setting up a CodeCommit repository is straightforward through the AWS console — much simpler than configuring a full-featured GitHub organization with proper permissions.
After creating a new repository (I named mine sso-permission-sets), the critical step is capturing the Clone URL for local development:
Click on the repository name in the CodeCommit console
Open the “Clone URL” dropdown in the upper right
Select “Clone HTTPS (GRC)” — this is important as it uses the git-remote-codecommit helper
The resulting URL contains your region and repository name, looking something like https://git-codecommit.ap-south-1.amazonaws.com/v1/repos/sso-permission-sets. This format threw me off initially; it's not the standard Git URL format you might be used to. Later, you'll notice we use a different format altogether with the git-remote-codecommit helper.
One nice aspect of CodeCommit is that it leverages your existing AWS authentication mechanisms rather than requiring separate SSH keys or personal access tokens. This simplifies credential management, especially in enterprise environments where key rotation policies are strictly enforced.
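Although I created the repository through the console, it can just as easily be declared in Terraform. Here's a hedged sketch, including one of the approval rules mentioned earlier; the template name, branch, and approver pool are illustrative placeholders:
resource "aws_codecommit_repository" "sso" {
  repository_name = "sso-permission-sets"
  description     = "Terraform configurations for AWS SSO permission sets"
}

resource "aws_codecommit_approval_rule_template" "require_review" {
  name        = "require-one-approval" # placeholder name
  description = "Require at least one approval before merging"

  content = jsonencode({
    Version               = "2018-11-08"
    DestinationReferences = ["refs/heads/main"]
    Statements = [{
      Type                    = "Approvers"
      NumberOfApprovalsNeeded = 1
      ApprovalPoolMembers     = ["arn:aws:sts::111122223333:assumed-role/Reviewers/*"] # placeholder pool
    }]
  })
}

resource "aws_codecommit_approval_rule_template_association" "require_review" {
  approval_rule_template_name = aws_codecommit_approval_rule_template.require_review.name
  repository_name             = aws_codecommit_repository.sso.repository_name
}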
Creating My Local Development Environment
To effectively work with this setup, I needed a well-configured local development environment with several tools properly installed and configured. Here’s my detailed setup process:
Installing and Configuring Required Tools
First, I made sure I had AWS CLI v2 installed, as v1 lacks some of the SSO functionality needed:
aws --version
If you’re on macOS, you can install it with:
brew install awscli
Next came the most crucial part — configuring SSO access for seamless authentication. This was essential as my organization had moved away from long-lived IAM access keys to temporary credentials via SSO:
aws configure sso --profile terraform-deployer
This interactive command prompted me for:
The SSO start URL (our company portal)
AWS region for SSO (ap-south-1 in our case)
Default output format (json)
Default AWS account and permission set (I selected our infrastructure account and PowerUser role)
Once completed, a browser window opened for SSO authentication. With this configuration saved, I could now authenticate simply by running:
aws sso login --profile terraform-deployer
This generates temporary credentials valid for 8 hours — perfect for a development session without compromising security.
Setting Up Terraform and Version Management
For Terraform, I needed version 0.15 or higher to leverage newer features like improved plan file handling:
terraform --version
# Should return: Terraform v0.15.5 or higher
Since I juggle multiple client projects with different Terraform version requirements, I use tfenv, which is essentially nvm but for Terraform:
brew install tfenv

# Install specific Terraform version
tfenv install 0.15.5

# Set as default
tfenv use 0.15.5

# Verify installed versions
tfenv list
# Shows: * 0.15.5 (set by /Users/shivanshu.sharma/.tfenv/version)
#          0.14.11
#          0.13.7
Connecting to CodeCommit
Finally, to work with CodeCommit, I needed Git and the specialized CodeCommit helper that integrates with AWS authentication:
# Verify Git version
git --version
# Should be 1.7.9 or higher
# Install the CodeCommit helper
python3 -m pip install git-remote-codecommit
The git-remote-codecommit package is critical — it enables Git to understand and use the special codecommit:// URL scheme that integrates with AWS authentication. Without it, connecting to CodeCommit repositories becomes much more complicated.
With VS Code (plus the HashiCorp Terraform extension) as my IDE, I cloned the empty repository using the special codecommit format:
git clone codecommit::ap-south-1://terraform-deployer@sso-permission-sets
cd sso-permission-sets && code .
Notice the URL format: codecommit::<region>://<profile>@<repository-name>. This leverages the AWS credentials associated with the specified profile, eliminating the need for separate Git credentials.
Validating My Setup
To ensure everything was working correctly, I created a simple README.md file and pushed it to the repository:
echo "# AWS SSO Permission Sets" > README.md
git add README.md
git commit -m "Initial commit"
git push
After refreshing the CodeCommit console, I could see my README appeared correctly — confirmation that my local development environment was properly configured and ready for the real work ahead.
Repository Structure and Organization
Let me share how I organized my code:
sso-permission-sets/
├── configurations/
│   ├── permission_sets_pipeline/
│   │   ├── codepipeline.tf
│   │   ├── iam.tf
│   │   ├── codebuild.tf
│   │   ├── s3.tf
│   │   ├── buildspec-plan.yml
│   │   └── buildspec-apply.yml
│   ├── global.tf
│   ├── output.tf
│   ├── provider.tf
│   ├── version.tf
│   └── state.tf
I placed common configuration files at the root level and linked them into specific configuration directories. One quirk I discovered: AWS CodeBuild doesn’t handle symbolic links well, so I ultimately copied these files directly instead of linking them.
Orchestrating the CI/CD Pipeline
The heart of my solution lives in codepipeline.tf, where I defined a four-stage pipeline:
resource "aws_codepipeline" "codepipeline" {
name = var.pipeline_name
role_arn = aws_iam_role.codepipeline_role.arn
artifact_store {
location = aws_s3_bucket.codepipeline_bucket.bucket
type = "S3"
} # Stage 1: Clone the repository
stage {
name = "Clone" action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["CodeWorkspace"] configuration = {
RepositoryName = var.repository_name
BranchName = var.branch_name
PollForSourceChanges = "true"
}
}
} # Stage 2: Run terraform plan
stage {
name = "Plan" action {
name = "Plan"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["CodeWorkspace"]
output_artifacts = ["TerraformPlanFile"]
version = "1" configuration = {
ProjectName = aws_codebuild_project.plan_project.name
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
value = "#{codepipeline.PipelineExecutionId}"
type = "PLAINTEXT"
}
])
}
}
} # Stage 3: Require manual approval
stage {
name = "Manual-Approval" action {
name = "Approval"
category = "Approval"
owner = "AWS"
provider = "Manual"
version = "1"
}
} # Stage 4: Apply the terraform changes
stage {
name = "Apply" action {
name = "Deploy"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["CodeWorkspace", "TerraformPlanFile"]
version = "1" configuration = {
ProjectName = aws_codebuild_project.apply_project.name
PrimarySource = "CodeWorkspace"
EnvironmentVariables = jsonencode([
{
name = "PIPELINE_EXECUTION_ID"
value = "#{codepipeline.PipelineExecutionId}"
type = "PLAINTEXT"
}
])
}
}
}
}
The pipeline flow is elegantly simple:
Clone: Fetch the latest code from CodeCommit
Plan: Run terraform plan and save the output as an S3 artifact
Manual-Approval: Allow human verification of proposed changes
Apply: Run terraform apply using the saved plan file
A few implementation insights I discovered along the way:
Artifact names should avoid hyphens to prevent ambiguous Terraform errors (use camelCase or snake_case instead)
CodePipeline variables can be accessed using the #{codepipeline.VariableName} syntax
For the Apply stage, since it uses multiple input artifacts, I had to specify which one should be the primary source
Setting Up CodeBuild Projects
For each CodeBuild stage (Plan and Apply), I needed to define a build project:
resource "aws_codebuild_project" "plan_project" {
name = "${var.pipeline_name}-plan"
description = "Plan stage for ${var.pipeline_name}"
build_timeout = "5"
service_role = aws_iam_role.codebuild_role.arn
artifacts {
type = "CODEPIPELINE"
} environment {
compute_type = var.build_compute_type
image = var.build_image
type = var.build_container_type
image_pull_credentials_type = "CODEBUILD"
} source {
type = "CODEPIPELINE"
buildspec = file("${path.module}/buildspec-plan.yml")
}
}resource "aws_codebuild_project" "apply_project" {
name = "${var.pipeline_name}-apply"
description = "Apply stage for ${var.pipeline_name}"
build_timeout = "5"
service_role = aws_iam_role.codebuild_role.arn artifacts {
type = "CODEPIPELINE"
} environment {
compute_type = var.build_compute_type
image = var.build_image
type = var.build_container_type
image_pull_credentials_type = "CODEBUILD"
} source {
type = "CODEPIPELINE"
buildspec = file("${path.module}/buildspec-apply.yml")
}
}
The build environment settings come from variables defined in my global.tf:
variable "build_compute_type" {
description = "CodeBuild compute type"
type = string
default = "BUILD_GENERAL1_SMALL"
}
variable "build_image" {
description = "CodeBuild container image"
type = string
default = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
}variable "build_container_type" {
description = "CodeBuild container type"
type = string
default = "LINUX_CONTAINER"
}
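The remaining inputs referenced by the pipeline (its name, the source repository and branch, and the artifact bucket) are declared the same way; the defaults below are illustrative rather than the exact values from the project:
variable "pipeline_name" {
  description = "Name of the CodePipeline"
  type        = string
  default     = "sso-permission-sets-pipeline"
}

variable "repository_name" {
  description = "CodeCommit repository the pipeline watches"
  type        = string
  default     = "sso-permission-sets"
}

variable "branch_name" {
  description = "Branch that triggers the pipeline"
  type        = string
  default     = "main"
}

variable "artifact_bucket" {
  description = "S3 bucket name for pipeline artifacts"
  type        = string
}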
Crafting the BuildSpec Files
The real magic happens in the buildspec files that tell CodeBuild what to do. Here’s my plan buildspec:
version: 0.2

env:
  variables:
    TF_VERSION: "0.15.5"
    PERMISSION_SETS_DIR: "configurations/permission_sets_pipeline"

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - echo "Installing terraform version ${TF_VERSION}"
      - curl -s -qL -o terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
      - unzip terraform.zip
      - mv terraform /usr/bin/terraform
      - rm terraform.zip
  build:
    commands:
      - echo "Starting build phase"
      - cd ${CODEBUILD_SRC_DIR}
      # Copy linked files to make sure they're available
      - cp ${CODEBUILD_SRC_DIR}/configurations/global.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/provider.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/version.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/state.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/output.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cd ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}
      - terraform init
      - terraform validate
      - terraform plan -out=tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID}

artifacts:
  files:
    - ${PERMISSION_SETS_DIR}/tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID}
  name: TerraformPlanFile
And the apply buildspec:
version: 0.2

env:
  variables:
    TF_VERSION: "0.15.5"
    PERMISSION_SETS_DIR: "configurations/permission_sets_pipeline"

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - echo "Installing terraform version ${TF_VERSION}"
      - curl -s -qL -o terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
      - unzip terraform.zip
      - mv terraform /usr/bin/terraform
      - rm terraform.zip
  build:
    commands:
      - echo "Starting build phase"
      - cd ${CODEBUILD_SRC_DIR}
      # Copy linked files to make sure they're available
      - cp ${CODEBUILD_SRC_DIR}/configurations/global.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/provider.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/version.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/state.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cp ${CODEBUILD_SRC_DIR}/configurations/output.tf ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}/
      - cd ${CODEBUILD_SRC_DIR}/${PERMISSION_SETS_DIR}
      - terraform init
      - cp ${CODEBUILD_SRC_DIR_TerraformPlanFile}/configurations/permission_sets_pipeline/tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID} .
      - terraform apply -auto-approve tfplan_commitid_${CODEBUILD_RESOLVED_SOURCE_VERSION}_pipelineid_${PIPELINE_EXECUTION_ID}
A critical discovery I made: when dealing with multiple artifacts in CodeBuild, you need to use CODEBUILD_SRC_DIR_ArtifactName to access secondary artifacts. This was the trickiest part of the whole setup!
Creating the S3 Artifact Bucket
Finally, I needed a place to store my pipeline artifacts:
resource "aws_s3_bucket" "codepipeline_bucket" {
bucket = var.artifact_bucket
}
resource "aws_s3_bucket_acl" "codepipeline_bucket_acl" {
bucket = aws_s3_bucket.codepipeline_bucket.id
acl = "private"
}resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
bucket = aws_s3_bucket.codepipeline_bucket.id rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
Nothing fancy here — just a private, encrypted bucket for storing our Terraform plans and other artifacts.
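Since the audit trail mentioned earlier relies on S3 versioning, the bucket also gets a versioning resource. A minimal sketch, assuming AWS provider v4-style split bucket resources:
resource "aws_s3_bucket_versioning" "codepipeline_bucket_versioning" {
  bucket = aws_s3_bucket.codepipeline_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}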
Final Result and Cost Analysis
After applying all these configurations, I had a fully functional AWS-native CI/CD pipeline that would:
Detect changes to the repository
Plan the Terraform changes
Wait for human approval
Apply the changes automatically
The best part? The entire solution costs approximately $3.01 per month if you run it once daily. That’s less than a fancy coffee!
Overcoming Unexpected Challenges
While the solution appears straightforward now, I encountered several unexpected obstacles along the way:
Dynamic State Management
My first attempt used a local state file, which quickly proved problematic in a CI/CD environment. I pivoted to using S3 backend with DynamoDB locking to ensure state consistency:
terraform {
  backend "s3" {
    bucket         = "terraform-state-xxcloud-infra"
    key            = "sso-permission-sets/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
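The backend expects the state bucket and lock table to exist before the first terraform init. Here's a hedged sketch of the lock table (the state bucket itself is created the same way and omitted here):
resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # Terraform's S3 backend locks on this key

  attribute {
    name = "LockID"
    type = "S"
  }
}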
Handling Permission Boundaries
Our organization requires permission boundaries on all IAM roles. The IAM roles created by CodeBuild and CodePipeline needed these boundaries applied, which required some additional configurations:
resource "aws_iam_role" "codebuild_role" {
name = "${var.pipeline_name}-codebuild-role"
assume_role_policy = data.aws_iam_policy_document.codebuild_assume_policy.json
permissions_boundary = "arn:aws:iam::${local.account_id}:policy/StandardPermissionsBoundary"
}
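The assume-role policy document referenced above isn't shown in full in this post, but it looks roughly like the sketch below; the CodePipeline role uses the same pattern with codepipeline.amazonaws.com as the service principal:
data "aws_iam_policy_document" "codebuild_assume_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["codebuild.amazonaws.com"]
    }
  }
}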
Cross-Account Pipeline Considerations
While my initial implementation worked within a single account, we later expanded it to deploy permission sets across multiple accounts. This required adjusting the trust relationships and adding cross-account role assumptions to the pipeline.
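The mechanics of that expansion boil down to a provider alias that assumes a role in the target account. A hedged sketch, with a placeholder role name and account ID:
provider "aws" {
  alias  = "member_account"
  region = "ap-south-1"

  assume_role {
    # Placeholder role; its trust policy must allow the pipeline's CodeBuild role to assume it
    role_arn = "arn:aws:iam::111122223333:role/TerraformDeployRole"
  }
}
Resources destined for that account then reference the alias explicitly with provider = aws.member_account.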
Security Considerations
Security was paramount in this implementation:
Least Privilege Access: The IAM roles for CodeBuild and CodePipeline have tightly scoped permissions (a sketch follows this list)
Artifact Encryption: All pipeline artifacts are encrypted at rest in S3
Manual Approval Gates: Changes require explicit human approval before deployment
Auditable History: Every change is trackable through Git history and pipeline execution logs
Temporary Credentials: No long-lived access keys are used in the pipeline
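To give a flavor of those tightly scoped permissions, here's a trimmed, hedged sketch of the kind of policy document attached to the CodePipeline role; the real iam.tf is more exhaustive, and the account ID is a placeholder:
data "aws_iam_policy_document" "codepipeline_policy" {
  statement {
    sid       = "ArtifactBucketAccess"
    actions   = ["s3:GetObject", "s3:GetObjectVersion", "s3:PutObject"]
    resources = ["${aws_s3_bucket.codepipeline_bucket.arn}/*"]
  }

  statement {
    sid = "SourceRepoAccess"
    actions = [
      "codecommit:GetBranch",
      "codecommit:GetCommit",
      "codecommit:UploadArchive",
      "codecommit:GetUploadArchiveStatus",
    ]
    resources = ["arn:aws:codecommit:ap-south-1:111122223333:sso-permission-sets"]
  }

  statement {
    sid     = "RunBuilds"
    actions = ["codebuild:StartBuild", "codebuild:BatchGetBuilds"]
    resources = [
      aws_codebuild_project.plan_project.arn,
      aws_codebuild_project.apply_project.arn,
    ]
  }
}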
Cost Analysis and Optimization
One pleasant surprise was the cost-effectiveness of this solution. The entire infrastructure costs approximately $4 to $5 per month with daily executions.
We could further optimize by using EventBridge to trigger the pipeline only on actual changes rather than polling for changes, but the cost savings would be minimal.
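For completeness, here's a hedged sketch of that EventBridge wiring; it assumes PollForSourceChanges is flipped to "false" in the source stage, and the rule name, branch, and account ID are placeholders:
resource "aws_cloudwatch_event_rule" "repo_change" {
  name        = "sso-permission-sets-change" # placeholder name
  description = "Start the pipeline when the tracked branch is updated"

  event_pattern = jsonencode({
    source        = ["aws.codecommit"]
    "detail-type" = ["CodeCommit Repository State Change"]
    resources     = ["arn:aws:codecommit:ap-south-1:111122223333:sso-permission-sets"] # placeholder account ID
    detail = {
      event         = ["referenceCreated", "referenceUpdated"]
      referenceName = ["main"] # placeholder branch
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule     = aws_cloudwatch_event_rule.repo_change.name
  arn      = aws_codepipeline.codepipeline.arn
  role_arn = aws_iam_role.events_role.arn # hypothetical role allowed to call codepipeline:StartPipelineExecution
}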
Conclusion and Key Takeaways
Building this AWS-native CI/CD pipeline was a fascinating journey that expanded my DevOps toolkit. While AWS’s CI/CD services don’t yet match the polish of dedicated solutions like Jenkins or GitHub Actions, they integrate beautifully with other AWS services and provide a cost-effective approach to automating infrastructure deployments.
The most valuable lessons I learned:
Filesystem Quirks: CodeBuild has peculiar behavior with symbolic links and secondary artifacts
Environment Variable Magic: Use pipeline execution IDs and commit hashes to create unique artifact names
This infrastructure-as-code approach to managing AWS SSO permission sets has dramatically improved our operational efficiency. What used to take days of manual work and coordination now happens automatically in minutes with full traceability and security controls.
Have you built similar pipelines with AWS’s native services? What challenges did you encounter?
I’d love to hear about your experiences and any optimization tips you might have!