Secure and Fast Static Website Deployment on AWS using Pulumi
This is a submission for the Pulumi Deploy and Document Challenge: Fast Static Website Deployment
What I Built
In this project, I used Pulumi as an Infrastructure as Code (IaC) tool to create a secure and scalable static website infrastructure on AWS. The solution sets up an Amazon S3 bucket to host static site files and integrates it with Amazon CloudFront for fast, global content delivery. To ensure secure HTTPS access, I restricted permissions to only CloudFront, obtained an SSL certificate using AWS Certificate Manager (ACM), and set up a custom domain with Route 53.
Everything was written in Python using Pulumi's Python SDK, which makes the codebase easy to read and maintain. The result is a modern React static website setup that is ready for production, cost-effective, and secure, all fully automated with Pulumi.
Live Demo Link
CloudFront Links
https://d8fuic0sibn8l.cloudfront.net/
https://d8fuic0sibn8l.cloudfront.net/nonexistent-page
Custom Domain Links
https://challenge.drintech.online
https://challenge.drintech.online/nonexistent-page
Project Repo
DrInTech22 / Pulumi_Static_Website
Secure and Fast Static Website Deployment on AWS using Pulumi
This is a repository to set up a secure and fast static website deployment using Pulumi.
My Journey
I started this project by exploring the Pulumi documentation since it was my first time using Pulumi. It turned out to be a great experience because the Pulumi documentation is developer-friendly and easy to follow. Being a fan of Python and quite familiar with it, I chose to use the Pulumi Python SDK to set up the static website infrastructure and deploy a static React website to it.
My development journey was rewarding because I gained a better understanding of the technologies, but it was also filled with unexpected challenges. At one point, I even thought I had downloaded malware on my machine, lol. I began the journey using the helpful static website template provided by the Pulumi documentation. Let's go over the steps and the challenges I faced during development.
Summary of Technologies Used
Pulumi: An Infrastructure as Code (IaC) tool that provisions and manages cloud resources using Python.
Python: A programming language that provides modules to interact with Pulumi and AWS resources.
S3 Bucket: An object storage service that stores static website files and assets, serving as the origin for CloudFront.
CloudFront: A Content Delivery Network (CDN) that caches and securely delivers website content globally with low latency. Content is cached at edge locations close to your users.
Route 53: A service that manages DNS records, enabling the use of a custom domain for the website.
Architecture
Prerequisites
To get started, the following are required:
- Python3, pip, and python3-venv installed
- A basic understanding of Python
- AWS CLI installed and configured
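If the AWS CLI isn't configured yet, the two commands below (assuming you already have an IAM access key pair) set up and verify the credentials Pulumi will use:
aws configure                 # prompts for access key, secret key, default region, and output format
aws sts get-caller-identity   # confirms the CLI can authenticate to your AWS account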
Step 1: Pulumi Installation
The commands below install Pulumi on a Linux machine, refresh the shell so the changes take effect, and confirm the installation.
curl -fsSL https://get.pulumi.com | sh
source ~/.bashrc
pulumi version
Step 2: Setup Project Template
- Next, I created a new folder website/ and set up the static website template. The static website template bundles sample files and configurations to deploy a static website with S3 and CloudFront.
mkdir website && cd website
pulumi new static-website-aws-python
Since I was using the pulumi new command for the first time, I needed to sign in to Pulumi Cloud to authenticate my CLI. I found this interesting because, after authentication, I could immediately start building the infrastructure. Pulumi automatically manages the state file remotely for me, unlike other IaC tools where I have to manually set up a backend to manage the state file remotely before focusing on automation.
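For reference, the same authentication can also be done explicitly before running pulumi new; both are standard Pulumi CLI commands:
pulumi login    # authenticates the CLI against Pulumi Cloud (prompts for an access token or opens a browser)
pulumi whoami   # confirms which Pulumi Cloud account the CLI is authenticated as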
- After authentication, I configured the project and stack for the website deployment. This included:
- setting the project name, description, and stack name.
- selecting pip as the toolchain for installing dependencies.
- setting us-east-1 as the region to deploy our AWS resources, such as S3 buckets.
- confirming the index document, error document, and website path.
After completing the configuration, Pulumi proceeded to install the dependencies defined in requirements.txt, but it failed due to a Python environment error: I did not have python3-venv installed. Luckily for me, Python provided the right command to install the correct version in the error message.
Pulumi manages the infrastructure setup as a project and stacks. Projects represent the entire infrastructure definition. They include metadata, runtime, and some configuration to manage and execute your infrastructure code. A project is defined in the pulumi.yaml file.
A stack is an instance of the project with its own configuration and state. Stacks include environment-specific configurations that allow us to deploy the infrastructure to different environments such as dev, staging, and prod. A stack is defined in the pulumi.dev.yaml file.
After setting up the project and stack, Pulumi generated the following files for me.
- pulumi.yaml: This represents the project. It includes metadata, runtime, and configs to manage the infrastructure code.
name: website
description: A Python program to deploy a static website on AWS
runtime:
name: python
options:
toolchain: pip
virtualenv: venv
config:
pulumi:tags:
value:
pulumi:template: static-website-aws-python
- pulumi.dev.yaml: This file represents the stack for the development environment. It includes environment-specific configurations such as the AWS region, the path to static files, and HTML documents.
config:
aws:region: us-east-1
website:indexDocument: index.html
website:errorDocument: error.html
website:path: ./www
- __main__.py: This is the main file that contains the code defining the infrastructure. It specifies the resources to be deployed and their configurations.
- requirements.txt: The Python dependencies needed to deploy the infrastructure using Pulumi are listed here.
pulumi>=3.0.0,<4.0.0
pulumi-aws>=6.0.2,<7.0.0
pulumi-synced-folder>=0.0.0,<1.0.0
- venv/: This folder manages the Python dependencies for this project in an isolated virtual environment.
- www/: This folder contains sample static web files, including index.html and error.html.
- .gitignore: Files listed here are not tracked by Git. This prevents us from pushing dependencies to our repository.
A breakdown of the __main__.py file
This file is where all the main operations happen. The breakdown highlights the different parts of the code that work together to set up the static website infrastructure and how I addressed deprecated aspects of the code. After the detailed breakdown, I will adjust these components to deploy a more secure, personalized, and fast static website infrastructure.
- Import the required modules
import pulumi
import pulumi_aws as aws
import pulumi_synced_folder as synced_folder
This imports the necessary modules to create AWS resources and sync files to an S3 bucket.
- Read Configuration Settings
config = pulumi.Config()
path = config.get("path") or "./www"
index_document = config.get("indexDocument") or "index.html"
error_document = config.get("errorDocument") or "error.html"
This reads environment-specific settings from the stack using pulumi.Config and assigns them to variables.
- Create an S3 Bucket & Enable Static Website Hosting
bucket = aws.s3.BucketV2(
"bucket",
# website={
# "index_document": index_document,
# "error_document": error_document,
# },
)
bucket_website = aws.s3.BucketWebsiteConfigurationV2(
"bucket",
bucket=bucket.bucket,
index_document={"suffix": index_document},
error_document={"key": error_document},
)
This creates an S3 bucket and sets it up for static website hosting. It also specifies the index and error documents. The BucketV2 class no longer accepts the website argument for configuring the bucket as a website, as it would cause errors. Instead, the BucketWebsiteConfigurationV2 class is used to configure the bucket for website hosting.
- Set Bucket Ownership Controls
ownership_controls = aws.s3.BucketOwnershipControls(
"ownership-controls",
bucket=bucket.bucket,
rule={
"object_ownership": "ObjectWriter",
},
)
This sets permissions that define who owns newly uploaded objects. "ObjectWriter" means the uploader retains ownership. I will modify this later on for enhanced security.
- Allow Public Access to Objects
public_access_block = aws.s3.BucketPublicAccessBlock(
"public-access-block",
bucket=bucket.bucket,
block_public_acls=False,
)
This allows public read access to objects by turning off the bucket's public ACL blocking. In other words, it makes the bucket public so anyone can access it. However, this goes against security best practices and will be adjusted later, because it lets anyone access our static files insecurely over HTTP.
- Upload static files to S3 bucket
bucket_folder = synced_folder.S3BucketFolder(
"bucket-folder",
acl="public-read",
bucket_name=bucket.bucket,
path=path,
opts=pulumi.ResourceOptions(depends_on=[ownership_controls, public_access_block]),
)
This uses pulumi_synced_folder to upload the static files located at path to the S3 bucket. It applies the public-read ACL to the uploaded objects, allowing public users to access the website files. It also ensures that ownership_controls and public_access_block are set first. This order is important to avoid permission conflicts and ensure that the object ACLs align with the bucket ownership settings.
- Create a CloudFront CDN for Caching & Distribution
cdn = aws.cloudfront.Distribution(
"cdn",
enabled=True,
origins=[
{
"origin_id": bucket.arn,
"domain_name": bucket_website.website_endpoint,
"custom_origin_config": {
"origin_protocol_policy": "http-only",
"http_port": 80,
"https_port": 443,
"origin_ssl_protocols": ["TLSv1.2"],
},
}
],
This creates a CloudFront distribution to serve the website using a CDN. The origin (source) is set to the S3 bucket website endpoint and uses "http-only" because S3 website hosting does not support HTTPS natively. This means CloudFront will internally fetch the static files from S3 over HTTP.
- Configure CloudFront Cache Behavior
default_cache_behavior={
"target_origin_id": bucket.arn,
"viewer_protocol_policy": "redirect-to-https",
"allowed_methods": [
"GET",
"HEAD",
"OPTIONS",
],
"cached_methods": [
"GET",
"HEAD",
"OPTIONS",
],
"default_ttl": 600,
"max_ttl": 600,
"min_ttl": 600,
"forwarded_values": {
"query_string": True,
"cookies": {
"forward": "all",
},
},
},
This setup forces users to always access the website through CloudFront using HTTPS (redirect-to-https). It also includes request methods, cookies, caching rules, and TTL (cache time) settings. The default TTL caches your static files for 600 seconds (10 minutes), which is suitable for content that changes often.
For example, if your static website is still in the early stages of development and you release several builds a day, this setting works well. Use 3600 seconds (1 hour) or 86400 seconds (24 hours) if your static website is more stable and rarely updated.
If you're using a longer TTL and you update any or all objects in the bucket, CloudFront offers an option to invalidate the cache for specific or all objects. This ensures that your latest changes are visible to users.
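For example, an invalidation can be triggered from the AWS CLI (the distribution ID below is a placeholder):
aws cloudfront create-invalidation --distribution-id E1ABCDEXAMPLE --paths "/*"   # "/*" invalidates everything; list specific paths like "/index.html" to limit cost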
- Configure CloudFront Error Handling & Security
price_class="PriceClass_100",
custom_error_responses=[
{
"error_code": 404,
"response_code": 404,
"response_page_path": f"/{error_document}",
}
],
restrictions={
"geo_restriction": {
"restriction_type": "none",
},
},
viewer_certificate={
"cloudfront_default_certificate": True,
},
PriceClass_100 limits the CloudFront distribution to edge locations in the U.S., Canada, and Europe to save on costs. If you need wider coverage for your users, you can look into other CloudFront price classes.
404 errors from the S3 bucket are handled by serving the errorDocument (error.html). You can use geo_restriction to control regional access or keep your content available worldwide. CloudFront uses its default SSL certificate, enabling public users to securely access your content over HTTPS.
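As an illustration (not part of this deployment), restricting viewers to a few countries only requires changing the restrictions block to a whitelist:
restrictions={
    "geo_restriction": {
        "restriction_type": "whitelist",   # only the listed countries can access the content
        "locations": ["US", "CA", "GB"],   # ISO 3166-1 alpha-2 country codes
    },
},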
- Export Public URLs & Endpoint
pulumi.export("originURL", pulumi.Output.concat("http://", bucket_website.website_endpoint))
pulumi.export("originHostname", bucket.website_endpoint)
pulumi.export("cdnURL", pulumi.Output.concat("https://", cdn.domain_name))
pulumi.export("cdnHostname", cdn.domain_name)
This exports the necessary URL and hostname for S3 and CloudFront to access the newly deployed static website.
Step 3: Deploy the static website
Now that we understand how the infrastructure code works, let's deploy the infrastructure and access the sample static website.
Run pulumi up. This command validates and sets up the infrastructure. After a successful validation, it asks me to confirm whether I would like to deploy the infrastructure.
After I select 'yes', Pulumi deploys the infrastructure. I can track the deployed stack on the Pulumi console. The console offers an easy-to-use dashboard where I can monitor deployments, view logs (updates), and see the resources deployed to a stack.
The CloudFront distribution took the longest time to create. Once the deployment was complete, I was able to access the sample static website over HTTP using the S3 originURL and over HTTPS using the CloudFront cdnURL.
The error page is viewed by adding a nonexistent path to the URL, for example, https://cloudfronturl.net/nonexistent-page.
During deployment, Pulumi displayed a warning: "website_endpoint deprecated." The website_endpoint output property is no longer supported. Pulumi Copilot, available on the console, was helpful in fixing the warning.
To resolve it, I replaced the website_endpoint property with the bucket_regional_domain_name property, as suggested by Pulumi Copilot, in the CDN configuration and in the export section at the bottom of the code. Then I used the pulumi preview command to confirm that the warning was resolved.
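A minimal sketch of that swap, using the same resource names as above (shown as a fragment of the origin definition and the exports):
# "domain_name": bucket_website.website_endpoint,     # old, deprecated output property
"domain_name": bucket.bucket_regional_domain_name,     # replacement suggested by Pulumi Copilot

pulumi.export("originHostname", bucket.bucket_regional_domain_name)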
To deploy any custom website to S3, you need to update the path in the stack file (pulumi.dev.yaml) to point to the location of the static website files.
I wanted to make things more interesting by deploying a React application I built to the S3 bucket. I moved the react_website/ folder to the root of the repository. The production build, which contains the static files, is located in the react_website/build/ directory. The configuration in the dev stack was updated to deploy the React application.
config:
aws:region: us-east-1
website:indexDocument: index.html
website:errorDocument: error.html
website:path: ./react_website/build
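The same value can also be set from the CLI instead of editing the stack file by hand; pulumi config set writes it into the current stack's file (pulumi.dev.yaml):
pulumi config set website:path ./react_website/build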
- The pulumi up command deploys the React application. The newly deployed static website can be accessed using any of the provided URL links. I tested the error page by adding a non-existent path to the URL and it worked fine. The next phase is where things get tricky and almost messy.
Step 4: Secure the website deployment
This step was the most challenging, requiring the most troubleshooting. I frequently switched between the Pulumi documentation and the AWS console to resolve unexpected issues.
Currently, the website can be accessed via HTTP using the bucket endpoint and via HTTPS using the cdnURL. Disabling BucketPublicAccessBlock makes the bucket and its objects publicly accessible, which poses a security risk. I set up a secure deployment to ensure users can only access the static website securely through the CloudFront URL.
To enhance security, I restricted the bucket permissions so that only CloudFront can fetch objects from the bucket. I modified the public_access_block
settings to block all public access to the S3 bucket. This ensures that no one can access the S3 bucket or read the objects unless explicitly permitted. In other words, the bucket is now private.
public_access_block = aws.s3.BucketPublicAccessBlock(
"public-access-block",
bucket=bucket.bucket,
block_public_acls=True,
block_public_policy=True,
ignore_public_acls=True,
restrict_public_buckets=True,
)
- Next, I set object_ownership to BucketOwnerEnforced. This setting disables ACLs and ensures the bucket owner owns all objects. With this configuration, access to the objects can only be explicitly granted using bucket policies.
ownership_controls = aws.s3.BucketOwnershipControls(
"ownership-controls",
bucket=bucket.bucket,
rule={
"object_ownership": "BucketOwnerEnforced",
},
)
- I created a CloudFront Origin Access Identity (OAI). An OAI is a virtual identity that can be associated with a CloudFront distribution to secure access to a private S3 bucket.
oai = aws.cloudfront.OriginAccessIdentity("oai")
- Then I attached the OAI to the CloudFront distribution by switching from custom_origin_config to s3_origin_config. The s3_origin_config sets up CloudFront to access the S3 bucket through the OAI.
cdn = aws.cloudfront.Distribution(
...
origins=[
{
"origin_id": bucket.arn,
"domain_name": bucket.bucket_regional_domain_name,
"s3_origin_config": {
"origin_access_identity": oai.cloudfront_access_identity_path,
},
}
],
...
)
- I created an S3 bucket policy that allows only the CloudFront OAI to access the S3 bucket. The policy grants limited permission, allowing CloudFront to only fetch objects from the bucket. The code below creates the policy and attaches it to the bucket.
bucket_policy = aws.s3.BucketPolicy(
"bucketPolicy",
bucket=bucket.id,
policy=pulumi.Output.all(bucket.arn, oai.iam_arn).apply(lambda args: f"""{{
"Version": "2012-10-17",
"Statement": [
{{
"Effect": "Allow",
"Principal": {{"AWS": "{args[1]}"}},
"Action": "s3:GetObject",
"Resource": "{args[0]}/*"
}}
]
}}""")
)
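An equivalent way to build the same policy, shown here as an alternative sketch, is to construct it as a Python dict and serialize it with json.dumps inside the apply, which avoids escaping the f-string braces (the import json line goes at the top of __main__.py):
import json

bucket_policy = aws.s3.BucketPolicy(
    "bucketPolicy",
    bucket=bucket.id,
    policy=pulumi.Output.all(bucket.arn, oai.iam_arn).apply(
        lambda args: json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": args[1]},   # the OAI's IAM ARN
                "Action": "s3:GetObject",        # read-only access to objects
                "Resource": f"{args[0]}/*",      # every object in the bucket
            }],
        })
    ),
)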
- When I ran pulumi up to test the new security changes, I encountered an 'AccessControlListNotSupported: The bucket does not allow ACLs' error for all the objects. This error occurred because the synced_folder module attaches ACLs to objects when uploading them to the S3 bucket. However, the BucketOwnerEnforced setting disables ACLs, which conflicts with synced_folder.
Although the Pulumi documentation specified that the synced_folder ACL property can be set to None, I still encountered errors asking for the value when I executed the code. So I manually wrote the code to upload objects to the S3 bucket, using the BucketObject class from the S3 submodule and a nested for loop to iterate through all the files. Pulumi Copilot was helpful for this.
I then replaced the synced_folder code with the newly generated one.
for root, _, files in os.walk(path):
for file in files:
file_path = os.path.join(root, file)
key = os.path.relpath(file_path, path)
aws.s3.BucketObject(
key,
bucket=bucket.bucket,
source=pulumi.FileAsset(file_path),
opts=pulumi.ResourceOptions(depends_on=[ownership_controls, public_access_block]),
)
In the code above, I used the os module to get the paths of the static files on my machine. I imported the module at the beginning of the code.
import os
Now, when I ran pulumi up and tried to access the website, I got an access denied error. Checking my S3 bucket, it looked like a disaster had struck: nothing was there. So, I ran pulumi destroy && pulumi up to rebuild the infrastructure from scratch. While this brought the objects back, the access denied error was still there.
I reviewed all the configurations in the code and double-checked the settings for CloudFront and the S3 bucket in the console, but everything seemed correct. While going through the CloudFront distribution dashboard for the third time, I noticed that the default root object option was not set. It didn't seem like a problem because it was optional, and the setup worked fine without it until I set up the OAI. When you set up an OAI, some optional settings become mandatory.
The default root object tells CloudFront which object to return when users access the CloudFront URL; in this case, it's index.html. I asked Pulumi Copilot for the right argument to set it in the CDN configuration.
cdn = aws.cloudfront.Distribution(
"cdn",
enabled=True,
default_root_object="index.html",
...
)
I deployed the new changes, expecting everything to work smoothly. But when I tried accessing the website using the CloudFront URL, a weird file named "download" with no extension got downloaded to my computer. I deleted it right away. For a moment, I worried that I had downloaded malware and that my secure website was under attack, after just setting up enhanced security for it.
I realized I needed to see what was happening in my infrastructure to answer questions like: Did my request reach the CloudFront distribution? Was it successful? Did it return an error, and if so, what was the error?
I switched to the CloudFront dashboard, navigated to the logging tab, and set up a logging destination for CloudFront to send logs to a CloudWatch log group I had created for a Lambda function in my previous project.
Then I tried accessing the link again so the request and response would be logged, allowing me to gather information to troubleshoot this unexpected behavior. I accessed the log from CloudWatch, but I couldn't understand it right away. I pasted it into Pulumi Copilot for analysis.
With the response from Pulumi Copilot, I realized that the code I used to upload objects didn't set the correct content type for them. Instead, it set the content type of all objects to application/octet-stream instead of text/html, which is needed to display index.html.
This explained why a strange file was downloaded: I suspected it was my index.html, and I could confidently open it to confirm.
I asked Pulumi Copilot again for a solution to set the content type in the nested for loop that uploads the objects.
import mimetypes
...
for root, _, files in os.walk(path):
for file in files:
file_path = os.path.join(root, file)
key = os.path.relpath(file_path, path)
content_type, _ = mimetypes.guess_type(file_path)
print(f"Uploading: {file_path} as {key}")
aws.s3.BucketObject(
key,
bucket=bucket.bucket,
source=pulumi.FileAsset(file_path),
content_type=content_type or "application/octet-stream",
opts=pulumi.ResourceOptions(depends_on=[ownership_controls, public_access_block]),
)
The nested loop now uses the mimetypes module to set the correct content type for each uploaded object. I ran pulumi up, and it finally succeeded, or so I thought. My React static website was now securely accessible through the CloudFront URL. When I tried the origin URL, it returned an access denied error, which was expected. However, when I tried to access the error page using the CloudFront URL, I also received an access denied error, which was unexpected.
So, I turned to the trusty old troubleshooting library (Stack Overflow). I discovered that because CloudFront and the S3 bucket are secured with OAI, the S3 bucket sends a 403 error code to CloudFront for pages that don't exist, instead of a 404. I tweaked the CloudFront settings to show the error page and send a 404 error response to users when it receives a 403 error code from the S3 bucket.
...
custom_error_responses=[
{
"error_code": 403,
"response_code": 404,
"response_page_path": f"/{error_document}",
}
]
...
I deployed the latest changes, and now I can access both the main page and the error page of the static website. The next step is to personalize it with a custom domain.
Setup a Custom Domain
Sharing the CloudFront URL with my friends to access my secure, fast, and beautiful static website felt boring. I wanted a custom domain that reflects a bit about me and what the website is about. I followed the steps in the Pulumi documentation to set up a custom domain for the static website.
The steps include:
- Adding the domain and subdomain configuration to the stack.
- Importing the new configuration into my code.
- Creating and validating a new SSL/TLS certificate with ACM. I used the DNS validation method, which involves creating a DNS record in the hosted zone to confirm domain ownership.
- Adjusting the CloudFront code to handle requests for the custom domain using the aliases argument.
- Creating a Route 53 alias A record that maps our subdomain to the CloudFront domain (a sketch of these pieces follows below).
You might be wondering why we use an alias A record instead of a CNAME record, which is commonly used for mapping one domain to another. The Route 53 alias A record is recommended for mapping domains to AWS services like CloudFront. With an alias A record, DNS queries are resolved internally by Route 53, making them faster and more efficient.
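Below is a condensed sketch of those pieces, assuming a Route 53 hosted zone already exists for the apex domain; the config keys (domain, subdomain) and resource names are illustrative, and cdn refers to the distribution defined earlier:
config = pulumi.Config()
domain = config.get("domain") or "drintech.online"     # illustrative config keys
subdomain = config.get("subdomain") or "challenge"
full_domain = f"{subdomain}.{domain}"

# Look up the existing Route 53 hosted zone for the apex domain.
zone = aws.route53.get_zone(name=domain)

# Request a certificate with DNS validation. CloudFront requires the certificate
# to live in us-east-1, which is already this stack's region.
certificate = aws.acm.Certificate(
    "certificate",
    domain_name=full_domain,
    validation_method="DNS",
)

# DNS record that proves domain ownership to ACM.
validation_record = aws.route53.Record(
    "certificate-validation-record",
    zone_id=zone.zone_id,
    name=certificate.domain_validation_options[0].resource_record_name,
    type=certificate.domain_validation_options[0].resource_record_type,
    records=[certificate.domain_validation_options[0].resource_record_value],
    ttl=300,
)

certificate_validation = aws.acm.CertificateValidation(
    "certificate-validation",
    certificate_arn=certificate.arn,
    validation_record_fqdns=[validation_record.fqdn],
)

# In the CloudFront distribution, serve the custom domain with the ACM certificate:
#   aliases=[full_domain],
#   viewer_certificate={
#       "acm_certificate_arn": certificate_validation.certificate_arn,
#       "ssl_support_method": "sni-only",
#       "minimum_protocol_version": "TLSv1.2_2021",
#   },

# Alias A record that points the subdomain at the CloudFront distribution.
alias_record = aws.route53.Record(
    "alias-record",
    zone_id=zone.zone_id,
    name=full_domain,
    type="A",
    aliases=[{
        "name": cdn.domain_name,
        "zone_id": cdn.hosted_zone_id,
        "evaluate_target_health": True,
    }],
)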
After setting up the custom domain, I was able to access my static website using the personalized domain. When I create a new production build for the React application, all I need to do is run pulumi up. Pulumi handles the rest by automating the deployment to the secure S3 bucket.
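So a typical release now looks like this (assuming the React app uses the standard npm build script):
cd react_website && npm run build && cd ..
pulumi up   # Pulumi diffs the BucketObject resources and re-uploads only the files whose contents changed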
Using Pulumi
This is my first time using Pulumi as an IaC tool, and I find it amazing. Pulumi lets me use Python, which is my favorite programming language. This made Pulumi feel familiar, like it was already part of my workflow. Like other IaC tools, Pulumi saves me the hassle of clicking through the AWS console to set up infrastructure and allows me to develop repeatable infrastructure. But unlike other IaC tools, Pulumi lets me automate right away without first manually configuring a remote state backend.
The Pulumi console and Pulumi Copilot (my personal assistant for this project) were very helpful in troubleshooting, referencing, and successfully completing this project. The Pulumi console provided updates (logs) on the changes I made, making it easy to track changes and reference past execution outputs. Pulumi Copilot, my intelligent personal assistant, helped me fix deprecated arguments, generate bucket policies, analyze logs, and provided possible solutions.
In fact, when I wanted to secure my static website infrastructure, I copied the entire code provided by the template and gave prompts to Pulumi Copilot to set up a secure website infrastructure. This simplified the process, allowing me to focus on adjusting and reviewing the code against the Pulumi documentation to make a few tweaks. It's incredible to have an AI assistant that understands the infrastructure tool I'm using. It makes infrastructure provisioning easy.
Below are some of my key prompts:
Prompt 1
can you adjust this code to set the content type for each file uploaded?
for root, _, files in os.walk(path):
for file in files:
file_path = os.path.join(root, file)
key = os.path.relpath(file_path, path)
aws.s3.BucketObject(
key,
bucket=bucket.bucket,
source=pulumi.FileAsset(file_path),
)
Prompt 2
The option acl=None is not working. It still ask for acl value on runtime. Is there any good alternative to S3BucketFolder?
bucket_folder = synced_folder.S3BucketFolder(
"bucket-folder",
acl=None,
bucket_name=bucket.bucket,
path=path,
opts=pulumi.ResourceOptions(depends_on=[ownership_controls, public_access_block]),
)
Prompt 3
this is the cloudfront log I got from cloudwatch. Any reason why I can't access my static files via cloudfront URL after using OAI?
{
"date": "2025-04-02",
"time": "18:01:10",
"x-edge-location": "LOS50-P2",
"sc-bytes": "594",
"c-ip": "102.89.44.119",
"cs-method": "GET",
"cs(Host)": "d39ymmysanotpr.cloudfront.net",
"cs-uri-stem": "/",
"sc-status": "200",
"cs(Referer)": "-",
"cs(User-Agent)": "Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/135.0.0.0%20Safari/537.36",
"cs-uri-query": "-",
"cs(Cookie)": "-",
"x-edge-result-type": "Miss",
"x-edge-request-id": "-lB3Ff7SJ8HEdpSZeS8d8AwV8kjflOVlQdgaI_-yqDYQ-TenMjY2nA==",
"x-host-header": "d39ymmysanotpr.cloudfront.net",
"cs-protocol": "https",
"cs-bytes": "486",
"time-taken": "0.480",
"x-forwarded-for": "-",
"ssl-protocol": "TLSv1.3",
"ssl-cipher": "TLS_AES_128_GCM_SHA256",
"x-edge-response-result-type": "Miss",
"cs-protocol-version": "HTTP/2.0",
"fle-status": "-",
"fle-encrypted-fields": "-",
"c-port": "17055",
"time-to-first-byte": "0.480",
"x-edge-detailed-result-type": "Miss",
"sc-content-type": "application/octet-stream",
"sc-content-len": "238",
"sc-range-start": "-",
"sc-range-end": "-"
}
Kindly note this submission was published April 06, 8:42PM PDT.
Thanks for reading. Your suggestions and feedback are highly welcome. Kindly drop them in the comment section.