How to deploy a Postgres continuous backup solution in under 5 minutes using Docker

[Song of the day: HELL ON ME — Johnny Huynh]

Hey there! I was recently looking to help a collaborator out with backing up a Postgres database and thought it would be a great idea for a blog post. So, let’s begin.

Table of contents

  • Deployment Method

  • General Prerequisites

  • Step 1: Gathering S3-Compatible Bucket Credentials

  • Step 2: Get your schedule in Cron format

  • Step 3: (Optional) Gather Sentry Credentials

  • Step 4: Convert your postgres credentials to a connection URL

  • Step 5: Filling out the .env file

  • Step 6: Deploying

  • Building the image for a version other than 14, 15 and 16

  • GitHub

  • Regarding stricter permissions for AWS

Deployment Method

There are 2 ways to deploy the backup solution:

  • Using the Railway Template

  • Using Docker on your machine / cloud

We will go over both, step by step, so we don’t miss anything.

General Prerequisites — Required regardless of the method

  • A Cloudflare / Backblaze / AWS Account

  • An accessible PostgreSQL database, version 14 to 16

  • (Optional) A Sentry Account

Step 1: Gathering S3-Compatible Bucket Credentials

Provider 1: AWS

  1. Head over to console.aws.amazon.com and login

  2. Navigate over to the S3 Dashboard in the region of your choice

3. Create or choose a bucket — Note down the bucket name and its region

4. Navigate over to AWS IAM

AWS IAM Dashboard

5. Click on Users then “Create User”

6. Give it the name “pg-backup” or whatever else you’d like

7. Assign the permission “AmazonS3FullAccess”

(Note: As the permission’s name suggests, this grants the user FULL access to ALL S3 buckets, so a more restricted permission set is advised. Check the end of the post for detailed instructions.)

AWS IAM User creation flow

8. Proceed to the next page and confirm the user creation.

9. You should now be on the “Users” page. Select the one you just created.

10. Navigate over to “Security Credentials”

AWS IAM User page

11. Select “Create Access Key” under “Access Keys”

  • Choose “Other” on the Use Case

  • Skip the description (or don’t?)

12. Save both the Access Key and Secret Access Key

Provider 2: Cloudflare (R2)

  1. Head over to dash.cloudflare.com and login

  2. Choose “R2 Object Storage” from the sidebar

Cloudflare Dashboard

3. Create or choose a bucket and note down its name

4. Click on API > Manage API Tokens (See picture)

Cloudflare R2 Dashboard

5. Create an “Account Token” like below:

  • Set its name

  • Change permissions to “Object Read & Write”

  • Specify your backup bucket

  • Click “Create”

Cloudflare R2 Account token creation flow

6. Of the values shown, copy:

  • The Access Key ID

  • The Secret Access Key

  • The Jurisdiction-Specific Endpoint

Account token creation, credentials page

Provider 3: Backblaze B2

  1. Head over to https://secure.backblaze.com and sign in

  2. Click “Create a Bucket” and choose the following:

  • Private files

  • No encryption

  • No Object Lock

3. From the info of the bucket, note:

  • The Endpoint

  • The Bucket name

  • The Region (it’s the first subdomain of the endpoint. For example, the region in the picture is eu-central-003)

Backblaze bucket image

4. Now, head over to https://secure.backblaze.com/app_keys.htm

5. Create an Application Key with the following properties:

  • A Name

  • Access to ONLY the backup bucket

  • Read & Write scopes

6. Note down the “Key ID” and “Application Key” as “Access Key ID” and “Secret Access Key” respectively

Step 2: Get your schedule in Cron format

Backups run on a cron job, so we need the schedule, for example “Every day at midnight”, expressed in cron format: “0 0 * * *”

Your options are:

  • Use the prompt “cron schedule {schedule}” on ChatGPT

  • Use crontab.guru

To freshen up your memory on Cron:

It uses the format [second] [minute] [hour] [day_of_month] [month] [day_of_week], so

* * * * * * runs every second. Also:

  • * means “Every {minutes / hours / days etc}”

  • */a (where a is a number) means “Every a {minutes / hours / days etc}”
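For reference, a few common schedules in the 5-field, minute-first form that crontab.guru produces (the same form as the “0 0 * * *” example above):

# every day at midnight
0 0 * * *

# every 6 hours, on the hour
0 */6 * * *

# every Monday at 03:30
30 3 * * 1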

Step 3: (Optional) Gather Sentry Credentials

If you want to check the status of your backups, you will need Sentry for monitoring. So let’s begin.

  1. Head over to sentry.io and login

  2. If you haven’t yet, create a project by selecting “Projects” in your sidebar, then clicking “Create new project” at the top right

  3. Now, head over to “Crons” then “Add Monitor”

Sentry Cron Monitor Dashboard

4. Add a name, set it to your desired project, and add the cron schedule from the previous step. You can, of course, mess around with the other settings as well. Then press Create.

5. Copy the monitor slug as shown below and store it:

Sentry cron monitor details

6. We now need the Sentry DSN:

  • Go to Settings

  • Select Projects

  • Choose the project you used when creating the monitor

Sentry project selection flow

  • Go to Client Keys

  • Copy and store the DSN

Sentry DSN flow

And that’s all for Sentry.

Step 4: Convert your postgres credentials to a connection URL

That’s one of the easy steps. If you already have a connection URL, you can skip this step. Anyway, to convert connection credentials to a URL, use this format:

postgresql://[username]:[password]@[db-host]:[port]/[database]
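For example, with hypothetical credentials (user backup_user, password s3cret, host db.example.com, port 5432, database app_db), the resulting URL would be:

postgresql://backup_user:s3cret@db.example.com:5432/app_db

If your password contains special characters, remember to URL-encode it.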

A note for Supabase users:

The image doesn’t support IPv6, so please use the session pooler instead.

Supabase connection pooler showcase

Step 5: Filling out the .env file

You will need to pass the backup config to the container somehow. Here’s how to do that:

  1. Download the .env.preset from GitHub:
curl -o .env "https://raw.githubusercontent.com/andriotisnikos1/pg-backup/refs/heads/main/.env.preset"

2. Fill out the env vars. Let’s go over them one by one:

S3-Compatible ENVs

  • S3_REGION is the S3-API compatible region. Fill it with auto if using Cloudflare, or with the region you copied otherwise

  • S3_ENDPOINT is the endpoint of your bucket. Leave it empty if using AWS, use the “Jurisdiction-Specific Endpoint” if using Cloudflare, and the bucket “Endpoint” if using Backblaze

  • S3_AUTH_KEY_ID is the “Access Key ID”

  • S3_AUTH_KEY_SECRET is the “Secret Access Key”

  • S3_BUCKET is the bucket name

Backups Configuration ENVs

  • BACKUPS_CRON_SCHEDULE is the schedule from Step 2

  • BACKUPS_MAX_KEEP_COUNT (optional, defaults to 5) is the number of newest backups to keep before older ones are deleted

  • BACKUPS_FILE_IDENTIFIER (optional) is a string appended to the backup file names to distinguish them from others. For example, setting the identifier to “andriotis” results in file names like {date}-{random 32 chars}.andriotis.dump. If left unset, the backup files follow the {date}-{random 32 chars}.dump convention

  • BACKUPS_USE_PG_DUMPALL (optional, defaults to “false”) activates pg_dumpall instead of pg_dump. Use it for dumping an entire cluster instead of a single DB with its tables and data

Postgres Configuration ENVs

  • POSTGRES_URL is the URL constructed in Step 4

(Optional) Sentry Configuration

  • SENTRY_ENABLED “true” or “false”, enables Sentry. Defaults to “false”

  • SENTRY_DSN is the DSN of your project, copied in Step 3

  • SENTRY_MONITOR_SLUG is the slug of the monitor created in Step 3
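Putting it all together, a filled-out .env might look like the sketch below. Every value is a placeholder and the exact layout of .env.preset may differ slightly, so treat this as an illustration rather than a copy-paste target:

# S3-compatible storage (all placeholder values)
S3_REGION=auto
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_AUTH_KEY_ID=AKIAXXXXXXXXEXAMPLE
S3_AUTH_KEY_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxEXAMPLE
S3_BUCKET=my-pg-backups

# Backups (schedule from Step 2; match the quoting style used in .env.preset)
BACKUPS_CRON_SCHEDULE="0 0 * * *"
BACKUPS_MAX_KEEP_COUNT=5
BACKUPS_FILE_IDENTIFIER=myproject
BACKUPS_USE_PG_DUMPALL=false

# Postgres (URL from Step 4)
POSTGRES_URL=postgresql://backup_user:s3cret@db.example.com:5432/app_db

# Sentry (optional, Step 3)
SENTRY_ENABLED=false
SENTRY_DSN=
SENTRY_MONITOR_SLUG=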

Step 6: Deploying

A note before deploying:

Postgres 16 was released in 2023 and Postgres 15 in 2022, so if your DB is recent, you’re most probably on PG 16. The backup will error out if you’re using an incompatible version, so check it before deploying. You can check your DB’s version by running the query:

select version();
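If you have psql installed, you can run that query directly against the connection URL from Step 4 (the URL below is a placeholder):

psql "postgresql://backup_user:s3cret@db.example.com:5432/app_db" -c "select version();"
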
  1. Docker (see the verification sketch after this list)

  • For Postgres 16:

docker run -d \
    -v "./.env:/app/.env" \
    andriotisnikos1/pg-backup

  • For Postgres 15:

docker run -d \
    -v "./.env:/app/.env" \
    andriotisnikos1/pg-backup:pg-15

  • For Postgres 14:

docker run -d \
    -v "./.env:/app/.env" \
    andriotisnikos1/pg-backup:pg-14

2. Railway

  • Go to the template and click Deploy Now — Postgres 16 is required!

  • Input the .env file contents from the previous step

  • Click deploy

3. Paid Deployment (Hire me!)

If deployment is a hassle, I can do it for you, guide you through collecting the right ENV values, and answer any of your questions, all for a coffee ($5). You can find me on Fiverr: https://www.fiverr.com/s/BRadeNz
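If you deployed with Docker (option 1), a minimal way to confirm the container is up and the first run looks healthy (replace <container-id> with your own):

# confirm the container is running
docker ps

# follow its logs to watch the first backup fire
docker logs -f <container-id>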

Building the image for a version other than 14, 15 and 16

If you want to build for another version, first make sure it supports pg_dump and/or pg_dumpall, then:

  • Clone the repo

  • Build the image for your version

# clone repo
git clone https://github.com/andriotisnikos1/pg-backup.git

# cd
cd pg-backup

# build
docker build -t pg-backup --build-arg POSTGRES_VERSION=[your-version] .
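Once built, you can run the local image the same way as the published ones; this sketch assumes the pg-backup tag from the build command above:

docker run -d \
    -v "./.env:/app/.env" \
    pg-backup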

GitHub

https://github.com/andriotisnikos1/pg-backup

Regarding stricter permissions for AWS

  1. Go to the AWS S3 dashboard in your desired region

  2. Select your bucket, then click on “Copy ARN”

AWS S3 dashboard

3. Go back to IAM

4. Click on Policies, then Create Policy

AWS IAM Policies dashboard

5. Switch to JSON mode and paste in the following:

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Sid": "Statement1",
   "Effect": "Allow",
   "Action": [
    "s3:ListBucket",
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
   ],
   "Resource": [
    "[ARN from previously]",
    "[ARN from previously]/*"
   ]
  }
 ]
}

Don’t forget to replace your ARN in both Resource entries! ListBucket applies to the bucket itself, while the object-level actions (GetObject, PutObject, DeleteObject) apply to the objects inside it, which is why the second entry ends in /*.
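For example, with a hypothetical bucket named my-pg-backups, the Resource array would read:

"Resource": [
 "arn:aws:s3:::my-pg-backups",
 "arn:aws:s3:::my-pg-backups/*"
]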

This restricts permissions for your user to just the following on your specified bucket:

  • Retrieving objects

  • Listing objects

  • Adding objects

  • Deleting objects

6. Click Continue, give your policy a name (I named it pg-backup), then click Create policy.

7. Now, when creating your user, you can attach the policy to it, giving it just enough permissions to do its job.

AWS IAM User creation flow with custom policies

From here, the AWS credentials steps continue normally with the user creation flow, as in Step 1!
