Declarative Schemas for Simpler Database Management

Today we’re releasing declarative schemas to simplify managing and maintaining complex database schemas. With declarative schemas, you can define your database structure in a clear, centralized, and version-controlled manner.

What are declarative schemas?

Declarative schemas store the final desired state of the database in .sql files that can be saved and versioned alongside a project. For example, here is the declarative schema for a classic products table:

create table "products" (
  "id" serial primary key,
  "name" text not null,
  "description" text,
  "price" numeric(10,2) not null,
  "created_at" timestamp default now()
);

alter table "products"
enable row level security;

Declarative schemas offer numerous benefits over making changes to your database schema directly:

  • Single pane of glass. Maintain your entire database schema in one place, reducing redundancy and potential errors.
  • Versioned migrations. Automatically generate migration files, ensuring consistent schema updates across environments. Store your declarative schema files alongside your project files in your version control system.
  • Concise code reviews. Easily review changes to tables, views, and functions without manually repeating complex migration scripts.

Declarative schemas vs migrations

It's best practice to use Migrations to track and apply changes to databases. Every time you make a change, you create a new file with the changes, keeping them versioned and reproducible.
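
For instance, a traditional migration is a standalone file containing just the change, applied in order after all previous migrations (the file name below is illustrative):

-- supabase/migrations/20250403000000_add_discount_to_products.sql
alter table "products"
add column "discount" numeric(4,2);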

However, as the complexity of a database schema grows, it becomes increasingly difficult to develop against versioned migrations alone, as there isn't a single place to see the entire database schema.

For example, at Supabase we have a complex and frequently updated projects table. Here's part of what it looks like with RLS enabled:

create table private.projects (
  id              bigint    not null,
  name            text      not null,
  organization_id bigint    not null,
  inserted_at     timestamp not null,
  updated_at      timestamp not null
);

alter table private.projects
enable row level security;

create policy projects_insert
  on private.projects
  for insert
  to authenticated
  with check ( auth.can_write(id) );

create policy projects_select
  on private.projects
  for select
  to authenticated
  using ( auth.can_read(id) );

-- Users can only view the projects that they have access to
create view public.projects as select
  projects.id,
  projects.name,
  projects.organization_id,
  projects.inserted_at,
  projects.updated_at
from private.projects
where auth.can_read(projects.id);

The projects table is created in a private schema, with a public view exposed for reads. Attribute-based access control (ABAC) is implemented on top of RLS policies to ensure queries only return projects that the user has access to.
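
The auth.can_read and auth.can_write helpers are internal and not shown in this post. As a rough sketch only, assuming a hypothetical private.project_members table mapping users to projects, such a helper could look like:

create function auth.can_read(project_id bigint)
returns boolean
language sql
stable
security definer -- runs with owner privileges so it can read the private schema
as $$
  select exists (
    select 1
    from private.project_members m
    where m.project_id = can_read.project_id -- qualify to disambiguate from the column
      and m.user_id = auth.uid()             -- the currently authenticated user
  );
$$;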

Since Postgres views are not updatable by default, we defined trigger functions to cascade writes to the underlying table when a Supabase user creates a new project. This makes development easier because the projects view accepts inserts via regular PostgREST calls while still invoking the appropriate RLS policies on the underlying table.

-- Triggers to update views from PostgREST: select('projects').insert({ ... })
create function public.public_projects_on_insert() returns trigger
as $$
begin
  insert into private.projects(
    name,
    organization_id,
    inserted_at,
    updated_at
  ) values (
    NEW.name,
    NEW.organization_id,
    coalesce(NEW.inserted_at, now()),
    coalesce(NEW.updated_at, now())
  ) returning * into NEW;
  return NEW;
end
$$ language plpgsql;

create trigger public_projects_on_insert
  instead of insert
  on public.projects
  for each row
execute function public.public_projects_on_insert();
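
With the trigger in place, an insert against the view behaves like an insert against the underlying table, with the caller's RLS policies applied. For example (this assumes private.projects.id is populated by a default or identity, which is omitted from the trimmed definition above):

-- Fires public_projects_on_insert, which writes to private.projects
insert into public.projects (name, organization_id)
values ('my new project', 1);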

This complexity slows down development velocity, as changes to the table might break other views or functions. Back in early 2022, a simple change to add a new column involved the following steps.

  1. Find the latest schema for the projects table in our migration files or by querying our database.
  2. Write the alter table statement in a new migration file.
  3. Copy and update the projects view definition to include the new column.
  4. Copy and update the trigger function definition to include the new column. (Steps 2-4 are sketched after this list.)
  5. Add new pgTAP tests and verify that existing tests pass.
  6. Submit the new migration file for review, which would be at least a few hundred lines.
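
A heavily trimmed sketch of steps 2-4, bundled into one migration file and using a metadata column as the example change:

-- Step 2: alter the table
alter table private.projects
add column metadata jsonb;

-- Step 3: repeat the entire view definition to expose the new column
create or replace view public.projects as select
  projects.id,
  projects.name,
  projects.organization_id,
  projects.inserted_at,
  projects.updated_at,
  projects.metadata
from private.projects
where auth.can_read(projects.id);

-- Step 4: repeat the entire trigger function to cascade the new column
create or replace function public.public_projects_on_insert() returns trigger
as $$
begin
  insert into private.projects(
    name,
    organization_id,
    inserted_at,
    updated_at,
    metadata
  ) values (
    NEW.name,
    NEW.organization_id,
    coalesce(NEW.inserted_at, now()),
    coalesce(NEW.updated_at, now()),
    NEW.metadata
  ) returning * into NEW;
  return NEW;
end
$$ language plpgsql;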

This process was tedious, and it was frustrating when multiple engineers worked on the projects table concurrently: merging PRs would result in merge conflicts that had to be resolved by repeating steps 1-5.

Using declarative schemas in production

Adopting declarative schemas gave our engineers a single pane of glass when updating database schemas. Instead of manually duplicating affected Postgres entities in a migration file, we only need to change the schema definition in one place.

We then use a schema diff tool, like migra, to figure out the necessary updates to views and functions when generating the migration file.
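
migra compares two live databases and prints the SQL statements needed to migrate one to the other. An invocation looks roughly like this (the connection URLs are placeholders):

migra postgresql:///current_state postgresql:///desired_state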

For example, adding a new metadata column to the projects table now becomes a single-line diff.

--- a/supabase/schemas/projects.sql
+++ b/supabase/schemas/projects.sql
@@ -2,6 +2,7 @@ create table private.projects (
   id              bigint    not null,
   name            text      not null,
   organization_id bigint    not null,
+  metadata        jsonb,
   inserted_at     timestamp not null,
   updated_at      timestamp not null
 );
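
From that one-line diff, the schema diff tool emits only the corresponding DDL into the generated migration file, something like the following (exact output varies by tool). Compare this with the multi-step sketch earlier: dependent views and functions are only regenerated when their definitions actually change.

alter table "private"."projects"
add column "metadata" jsonb;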

The same process also applies to views, database functions, RLS policies, role grants, custom types, and constraints. While manual review is still required on the generated migration file, this approach has cut our development time from hours to minutes. It's also much easier to rebase on merge conflicts introduced by other PRs.

Get started with declarative schemas

Declarative schemas are available today on Supabase.

We've added the same set of tools that we used internally for the last two years to the Supabase CLI. Whether you're just getting started with migrations or already fed up with managing hundreds of migration files, give declarative schemas a try; they will likely simplify your development process.
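
A minimal CLI workflow looks roughly like this (the schema file path follows the supabase/schemas convention shown in the diff above; the migration name is illustrative, and the CLI docs cover the exact setup):

# 1. Declare the desired end state in supabase/schemas/*.sql

# 2. Generate a migration from the difference between the declared
#    schema and the current migration history
supabase db diff -f add_metadata

# 3. Review the generated migration, then apply it
supabase db push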

Check out our blog post on Postgres Language Server for better tooling and IDE integration when developing with declarative schemas.

Launch Week 14

Main Stage

Day 1 - Supabase UI Library
Day 2 - Supabase Edge Functions: Deploy from the Dashboard + Deno 2.1
Day 3 - Realtime: Broadcast from Database
Day 4 - Declarative Schemas for Simpler Database Management

Build Stage

Community Meetups