Astro on AWS
Let’s deploy
Build yourself an Astro site. I’ll wait…
So you’ve set up your sweet new project and now want to host it. And you want to host it on AWS? I can help!
So now that we have the gist of how this works, let's go over the components of a deployment.
The Code
So you have an Astro project all put together. It's in git, right?
CI/CD
Always keep your code in a Version Control System (VCS, but practically synonymous with “git” nowadays).
That does not mean we have any kind of testing or deployment automation set up yet.
However, setting up deployment automation is a different topic than setting up a deployment. There are many different git hosting providers and deployment automation services.
Pick one. GitHub Actions is easy to use.
You may even want to do some testing.
Deploying your site
The good bit!
Here’s how we’ll deploy our site:
An S3 Bucket
Hang with me, there is a bit more to it!
To serve a static site from that S3 bucket, we need to use a service to answer HTTP requests. That service is Cloudfront, the managed Content Delivery Network (CDN) offering from AWS.
But that Cloudfront distribution needs HTTPS for security. To encrypt and decrypt traffic to your site, Cloudfront needs a key and a certificate. We’ll use AWS Certificate Manager (ACM) for certificate management. A neat feature of ACM is that AWS will automatically rotate and update these certificates for us (the most annoying parts of managing HTTPS certificates, by the way).
Lastly, we want our users to be able to find our site at a name. Something like blog.mysite.com.
By default, Cloudfront will host your site at a name like dg13n55k73mbx.cloudfront.net, for example. To help users resolve blog.mysite.com to Cloudfront, we can use Route53, the AWS managed DNS service.
DNS can be tricky. But we’ll keep things easy and minimal.
A note: we will also need Route53 to host "challenge" records in order to get our HTTPS certificate. Using DNS, we are able to prove that we own the domain mysite.com. That's enough for AWS to trust us and provide a certificate. It's our name, but signed with their reputation.
Aside: Terraform
I’ll be showing the relevant configurations for these resources with Terraform. Terraform is an Infrastructure-as-Code (IaC) tool for declaring and managing cloud resources.
If you like Terraform, please use it. If not, there are many other IaC tools available.
The important part is that we can declare what infrastructure we need, and how it is configured. The resources and configurations will be roughly the same, regardless of which tool you choose to use for managing them.
Let’s get to it.
Infrastructure
A Bucket
First we need a bucket:
```hcl
resource "aws_s3_bucket" "site_s3_bucket" {
  bucket = "my-site-bucket"
}
```
Not much to see there.
Some things to be aware of:
- Bucket names are globally unique (for the whole wide world)
- Buckets are regional in AWS
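Since Cloudfront will be the only thing reading from this bucket (we grant it access later), a sensible hardening step is to explicitly block all public access. This is a sketch of an optional extra, not something the rest of the setup depends on:

```hcl
# Explicitly block any public access to the site bucket.
# Cloudfront will get read access later through an Origin Access Control,
# so the bucket itself never needs to be public.
resource "aws_s3_bucket_public_access_block" "site_s3_bucket" {
  bucket = aws_s3_bucket.site_s3_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Note that `block_public_policy` only blocks *public* bucket policies; the Cloudfront service-principal policy we attach later is scoped to our distribution, so it is unaffected.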
A Certificate
Next we want a certificate:
```hcl
provider "aws" {
  alias  = "aws-east"
  region = "us-east-1"
}

resource "aws_acm_certificate" "blog_cert" {
  provider                  = aws.aws-east
  domain_name               = "mysite.com"
  subject_alternative_names = ["www.mysite.com", "blog.mysite.com"]
  validation_method         = "DNS"
}
```
Notice you'll want to create your certificate in the AWS us-east-1 region. This is the "main" AWS region, the original region, and Cloudfront certificates must be created and managed there.
DNS Validation
Once you’ve created the ACM certificate, you need to validate it with DNS challenges.
If using Route53, AWS has made this easy. Browse to ACM in the AWS console, and follow the instructions and prompts.
It will take a bit for AWS to pick up the challenge records and generate the certificate. Check back every few minutes; it usually takes around two to five.
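If you'd rather manage validation in Terraform too, the standard pattern is to create the challenge records from the certificate's `domain_validation_options` and then wait on an `aws_acm_certificate_validation` resource. A sketch, assuming a Route53 hosted zone declared elsewhere as `aws_route53_zone.site` (a hypothetical name):

```hcl
# One validation CNAME per domain on the certificate.
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.blog_cert.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id         = aws_route53_zone.site.zone_id # hypothetical hosted zone
  name            = each.value.name
  type            = each.value.type
  records         = [each.value.record]
  ttl             = 60
  allow_overwrite = true
}

# Completes only once ACM has seen the records and issued the certificate.
resource "aws_acm_certificate_validation" "blog_cert" {
  provider                = aws.aws-east
  certificate_arn         = aws_acm_certificate.blog_cert.arn
  validation_record_fqdns = [for r in aws_route53_record.cert_validation : r.fqdn]
}
```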
Cloudfront
Let’s set up that CDN. This one is a bit involved.
```hcl
resource "aws_cloudfront_distribution" "site_cloudfront" {
  origin {
    domain_name = aws_s3_bucket.site_s3_bucket.bucket_regional_domain_name
    origin_id   = "siteS3Bucket"
  }

  aliases             = ["www.mysite.com", "blog.mysite.com"]
  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "siteS3Bucket"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 3600
    default_ttl            = 3600
    max_ttl                = 86400
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.blog_cert.arn
    ssl_support_method  = "sni-only"
  }
}
```
It's quite an involved service. Let's go over this piecemeal.

- The "origin" for a CDN is the upstream HTTP server that it will fetch content from. For our case, that is our S3 bucket, holding the data for our site.
- `aliases` configures alternate names that we want Cloudfront to use, instead of just dg13n55k73mbx.cloudfront.net
- We say we want it enabled, and to serve the path `/` from `/index.html`
- The default cache behavior configures how we want Cloudfront to handle proxying and caching
- We allow read-only HTTP methods, and only cache responses for GET and HEAD requests
- By default, fetch everything from our S3 bucket origin
- When forwarding requests to our S3 bucket, do not send query strings or cookies. These won't be used, and dropping them improves caching
- All requests are to be over HTTPS; if not, they will be redirected to HTTPS
- We set some nice long Time To Live (TTL) cache times, as this is static content
- The price class decides which edge locations host your site. `PriceClass_All` means every region, but could be a bit more expensive
- `restrictions` let you allowlist or denylist certain geographical regions; `none` means allow global access
- The viewer certificate sets up our ACM certificate for HTTPS serving. Note that we probably want to use SNI
Security Alert!
Cloudfront can’t actually access S3 just yet!
We do not want to expose our S3 bucket for public access. A quick search for “S3 bucket leak” will show you why.
To give Cloudfront authenticated access, we can set up an Origin Access Control (OAC).
First we will need to define an S3 IAM bucket policy:
```hcl
data "aws_iam_policy_document" "cloudfront_oac_access" {
  statement {
    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    actions = [
      "s3:GetObject"
    ]

    resources = [
      aws_s3_bucket.site_s3_bucket.arn,
      "${aws_s3_bucket.site_s3_bucket.arn}/*"
    ]

    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.site_cloudfront.arn]
    }
  }
}
```
If we read this policy top to bottom, it says:

- the principal service cloudfront.amazonaws.com
- can perform the `s3:GetObject` action
- on our S3 bucket, and objects within it
- if called from our Cloudfront distribution
We attach that policy to our bucket:
```hcl
resource "aws_s3_bucket_policy" "cf_oac_access" {
  bucket = aws_s3_bucket.site_s3_bucket.id
  policy = data.aws_iam_policy_document.cloudfront_oac_access.json
}
```
Cloudfront uses an `origin_access_control` resource to connect that policy:
```hcl
resource "aws_cloudfront_origin_access_control" "cf_oac" {
  name                              = "site-s3-cloudfront-oac"
  description                       = "Grant cloudfront access to s3 bucket ${aws_s3_bucket.site_s3_bucket.id}"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
```
Lastly, we need to update our Cloudfront configuration so the S3 bucket origin uses this OAC:
```hcl
resource "aws_cloudfront_distribution" "site_cloudfront" {
  origin {
    domain_name = aws_s3_bucket.site_s3_bucket.bucket_regional_domain_name
    # Add `origin_access_control_id`:
    origin_access_control_id = aws_cloudfront_origin_access_control.cf_oac.id
    origin_id                = "siteS3Bucket"
  }

  # ... the rest unchanged ...
}
```
DNS
One last step: add a CNAME DNS record pointing from blog.mysite.com to the DNS name of our Cloudfront distribution. The CDN will have its own name, like foobarbaz1234.cloudfront.net.
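If your zone lives in Route53, an alias A record is the idiomatic equivalent of a CNAME here (and, unlike a CNAME, it also works at the zone apex). A sketch, again assuming a hosted zone declared elsewhere as the hypothetical `aws_route53_zone.site`:

```hcl
resource "aws_route53_record" "blog" {
  zone_id = aws_route53_zone.site.zone_id # hypothetical hosted zone
  name    = "blog.mysite.com"
  type    = "A"

  # Alias records point at the Cloudfront distribution's own DNS name.
  alias {
    name                   = aws_cloudfront_distribution.site_cloudfront.domain_name
    zone_id                = aws_cloudfront_distribution.site_cloudfront.hosted_zone_id
    evaluate_target_health = false
  }
}
```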
Go live!
That’s it! Cloudfront will now serve our website content from our S3 bucket.
To deploy our Astro site to the bucket, we can run something like this:
```bash
#!/bin/bash
npm install
npm run build
aws s3 sync --delete ./dist s3://my-site-bucket/
```
From GitHub Actions, that might look something like this:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Assuming an OIDC role here; configure-aws-credentials needs
    # this permission to assume it.
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 22
      - name: Install
        run: npm ci
      - name: Build
        run: npm run build
      - name: AWS login
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-west-1
          role-to-assume: arn:aws:iam::123456789:role/my-site-deploy
      - name: Update site
        run: aws --region us-west-1 s3 sync --delete ./dist s3://my-site-bucket/
      - name: Invalidate Cloudfront
        run: aws cloudfront create-invalidation --distribution-id DISTRIBUTIONID --paths "/*"
```
Happy Hacking
With this setup, you can automate deploys, from GitHub for example, for continuous deployment.
Further, using Cloudfront and S3 allows for a very fast and cost effective hosting solution.
You can further customize Cloudfront as needed. It is a very powerful and flexible service!
If we need to serve more than just static content, we can add dynamic backend origins.
We can add functions to check or change requests on the fly. We can add additional logging and monitoring, as well as additional security settings, like bot protection.
The sky is the limit! Best of luck with your new website!