Introduction
Linter for dangerous Postgres migration patterns in Diesel and SQLx. Prevents downtime caused by unsafe schema changes.
✓ Detects operations that lock tables or cause downtime
✓ Provides safe alternatives for each blocking operation
✓ Works with both Diesel and SQLx migration frameworks
✓ Supports safety-assured blocks for verified operations
✓ Extensible with custom checks
⭐ If this looks useful, a star on GitHub helps more developers find it.
How It Works
diesel-guard analyzes your migration SQL using Postgres’s own parser (pg_query via libpg_query) and checks each statement against a set of safety rules. When it finds a potentially dangerous operation, it reports:
- The operation — what SQL was detected
- The problem — what lock it acquires, why it’s dangerous, and under what conditions
- A safe alternative — a step-by-step approach that achieves the same goal without the risk
Installation
From crates.io
cargo install diesel-guard
Prebuilt Binaries
macOS and Linux:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/ayarotsky/diesel-guard/releases/latest/download/diesel-guard-installer.sh | sh
Windows (PowerShell):
powershell -ExecutionPolicy Bypass -c "irm https://github.com/ayarotsky/diesel-guard/releases/latest/download/diesel-guard-installer.ps1 | iex"
Homebrew:
brew install ayarotsky/tap/diesel-guard
pre-commit
Add diesel-guard as a pre-commit hook to catch unsafe migrations before they’re committed.
In your .pre-commit-config.yaml:
repos:
- repo: https://github.com/ayarotsky/diesel-guard
rev: v0.8.0
hooks:
- id: diesel-guard
The hook triggers whenever a .sql file is staged and runs diesel-guard check against your migrations directory (as configured in diesel-guard.toml).
If diesel-guard is already installed (via Homebrew, cargo, or the shell installer) and you don’t want to use the Rust toolchain, change
language to system in your .pre-commit-config.yaml:
hooks:
  - id: diesel-guard
    language: system
If your migrations live outside the default migrations/ path, pass the path via args:
repos:
- repo: https://github.com/ayarotsky/diesel-guard
rev: v0.8.0
hooks:
- id: diesel-guard
args: [db/migrate/]
Verify Installation
diesel-guard --version
Initialize Configuration
Generate a documented configuration file in your project root:
diesel-guard init
This creates a diesel-guard.toml with all available options and their descriptions. See Configuration for full details.
Quick Start
Check a Single Migration File
diesel-guard check migrations/2024_01_01_create_users/up.sql
Check All Migrations in a Directory
diesel-guard check # defaults to ./migrations/
diesel-guard check migrations/ # or specify an explicit path
Pipe SQL Directly
cat migrations/2024_01_01_create_users/up.sql | diesel-guard check -
Example Output
When diesel-guard finds an unsafe operation:
❌ Unsafe migration detected in migrations/2024_01_01_create_users/up.sql
❌ ADD COLUMN with DEFAULT
Problem:
Adding column 'admin' with DEFAULT on table 'users' requires a full table rewrite on Postgres < 11,
which acquires an ACCESS EXCLUSIVE lock. On large tables, this can take significant time and block all operations.
Safe alternative:
1. Add the column without a default:
ALTER TABLE users ADD COLUMN admin BOOLEAN;
2. Backfill data in batches (outside migration):
UPDATE users SET admin = <value> WHERE admin IS NULL;
3. Add default for new rows only:
ALTER TABLE users ALTER COLUMN admin SET DEFAULT <value>;
Note: For Postgres 11+, this is safe if the default is a constant value.
Parse Errors
If a migration file contains invalid SQL, diesel-guard reports the file name and points to the exact failing statement:
× Failed to parse SQL: Invalid statement: syntax error at or near "@"
╭─[migrations/2024_01_01_create_users/up.sql:3:1]
2 │ CREATE TABLE b ();
3 │ CREATE TABLE @bad;
· ▲
· problematic SQL
╰─
help: Check that your SQL syntax is valid
JSON Output
For CI/CD or programmatic processing:
diesel-guard check migrations/ --format json
Inspect the AST
Use dump-ast to see the pg_query AST as JSON — essential for writing custom checks:
diesel-guard dump-ast --sql "CREATE INDEX idx_users_email ON users(email);"
diesel-guard dump-ast --file migration.sql
Example output:
[
{
"IndexStmt": {
"access_method": "btree",
"concurrent": false,
"idxname": "idx_users_email",
"if_not_exists": false,
"index_params": [
{
"node": {
"IndexElem": {
"name": "email",
...
}
}
}
],
"relation": {
"relname": "users",
"relpersistence": "p",
...
},
"unique": false,
...
}
}
]
The AST structure shown here tells you exactly which fields are available when writing custom Rhai checks. For example, node.IndexStmt.concurrent maps to the concurrent field above.
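For scripting, the dumped AST is ordinary JSON and can be inspected with any JSON tooling. A minimal Python sketch (the JSON literal below is a hand-trimmed fragment of the dump-ast output above, not verbatim tool output):

```python
import json

# A fragment mirroring the dump-ast output shown above
ast = json.loads('[{"IndexStmt": {"concurrent": false, "idxname": "idx_users_email"}}]')

# node.IndexStmt.concurrent in a custom check corresponds to this lookup
stmt = ast[0]["IndexStmt"]
print(stmt["idxname"])     # idx_users_email
print(stmt["concurrent"])  # False
```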
Configuration
Create a diesel-guard.toml file in your project root to customize behavior.
Initialize
Generate a documented configuration file:
diesel-guard init
Use --force to overwrite an existing file:
diesel-guard init --force
All Options
# Framework configuration (REQUIRED)
# Specify which migration framework you're using
# Valid values: "diesel" or "sqlx"
framework = "diesel"
# Skip migrations before this timestamp
# Accepts: YYYYMMDDHHMMSS, YYYY_MM_DD_HHMMSS, or YYYY-MM-DD-HHMMSS
# Works with any migration directory format
start_after = "2024_01_01_000000"
# Also check down.sql files (default: false)
check_down = true
# Disable specific checks (blacklist)
disable_checks = ["AddColumnCheck"]
# Run only specific checks (whitelist). Cannot be used with disable_checks.
enable_checks = ["AddIndexCheck", "AddNotNullCheck"]
# Downgrade specific checks to warnings instead of errors.
# Warnings are reported in output but do not cause a non-zero exit code.
# Useful for checks like TruncateTableCheck that are context-dependent.
warn_checks = ["TruncateTableCheck"]
# Directory containing custom Rhai check scripts
custom_checks_dir = "checks"
# Target Postgres major version.
# When set, version-aware checks adjust their behavior accordingly.
# Example: setting 11 allows ADD COLUMN with constant DEFAULT (safe on PG 11+),
# but still warns for volatile defaults like DEFAULT now() on all versions.
postgres_version = 16
Available Check Names
Use these names in disable_checks (blacklist), enable_checks (whitelist), or warn_checks (downgrade to warning):
| Check Name | Operation |
|---|---|
AddColumnCheck | ADD COLUMN with DEFAULT |
AddIndexCheck | CREATE INDEX without CONCURRENTLY; CONCURRENTLY inside a transaction |
AddJsonColumnCheck | ADD COLUMN with JSON type |
AddNotNullCheck | ALTER COLUMN SET NOT NULL |
AddPrimaryKeyCheck | ADD PRIMARY KEY to existing table |
AddSerialColumnCheck | ADD COLUMN with SERIAL |
AddUniqueConstraintCheck | ADD UNIQUE constraint via ALTER TABLE |
AlterColumnTypeCheck | ALTER COLUMN TYPE |
CharTypeCheck | CHAR/CHARACTER column types |
CreateExtensionCheck | CREATE EXTENSION |
CreateTableSerialCheck | CREATE TABLE with SERIAL |
DropColumnCheck | DROP COLUMN |
DropDatabaseCheck | DROP DATABASE |
DropIndexCheck | DROP INDEX without CONCURRENTLY; CONCURRENTLY inside a transaction |
DropPrimaryKeyCheck | DROP PRIMARY KEY |
DropTableCheck | DROP TABLE |
GeneratedColumnCheck | ADD COLUMN with GENERATED STORED |
ReindexCheck | REINDEX without CONCURRENTLY; CONCURRENTLY inside a transaction |
RenameColumnCheck | RENAME COLUMN |
RenameTableCheck | RENAME TABLE |
ShortIntegerPrimaryKeyCheck | SMALLINT/INT/INTEGER primary keys |
TimestampTypeCheck | TIMESTAMP without time zone |
TruncateTableCheck | TRUNCATE TABLE |
UnnamedConstraintCheck | Unnamed constraints (UNIQUE, FOREIGN KEY, CHECK) |
WideIndexCheck | Indexes with 4+ columns |
Custom check names are the filename stem of the .rhai file (e.g., require_concurrent_index.rhai → require_concurrent_index).
Framework Adapters
diesel-guard supports both Diesel and SQLx Postgres migrations. The framework is configured via diesel-guard.toml (see Configuration).
Diesel
Diesel uses a directory-based migration structure:
migrations/
└── 2024_01_01_000000_create_users/
├── up.sql
├── down.sql
└── metadata.toml (optional)
Supported timestamp formats for directory names:
- YYYY_MM_DD_HHMMSS (e.g., 2024_01_01_000000)
- YYYY-MM-DD-HHMMSS
- YYYYMMDDHHMMSS
SQLx
SQLx supports multiple migration file formats. diesel-guard handles all of them.
Format 1: Suffix-based (recommended)
Most common SQLx format with separate up/down files:
migrations/
├── 20240101000000_create_users.up.sql
└── 20240101000000_create_users.down.sql
Format 2: Single file (up-only)
Single migration file without rollback:
migrations/
└── 20240101000000_create_users.sql
SQLx versions are any positive integer (e.g., 1, 001, 42, 20240101000000). Short numeric versions use numeric comparison for start_after filtering; 14-digit timestamps use string comparison.
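One way to model that comparison rule is sketched below. This is an illustration of the behavior described above, not diesel-guard's actual implementation:

```python
def version_is_after(version: str, start_after: str) -> bool:
    """Sketch of the start_after rule: 14-digit timestamp versions
    compare as strings; short numeric versions compare as integers."""
    if len(version) == 14 and len(start_after) == 14:
        # Lexicographic comparison is safe for fixed-width timestamps
        return version > start_after
    return int(version) > int(start_after)

print(version_is_after("001", "42"))                          # False (1 < 42)
print(version_is_after("20240102000000", "20240101000000"))   # True
```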
Framework Configuration
diesel-guard requires explicit framework configuration in diesel-guard.toml:
# Framework configuration (REQUIRED)
framework = "diesel" # or "sqlx"
Generate a config file with:
diesel-guard init
SQLx Metadata Directives
SQLx uses comment directives for migration metadata. diesel-guard recognizes these and validates their usage:
-- no-transaction
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
diesel-guard will warn you if you use CONCURRENTLY operations without the -- no-transaction directive, since CONCURRENTLY cannot run inside a transaction block.
Diesel Metadata
For Diesel, use a metadata.toml file in the migration directory to run outside a transaction:
# migrations/2024_01_01_add_user_index/metadata.toml
run_in_transaction = false
This is required for any migration using CONCURRENTLY operations (CREATE INDEX CONCURRENTLY, DROP INDEX CONCURRENTLY, REINDEX CONCURRENTLY).
Checks
diesel-guard ships safety checks covering the most common Postgres migration hazards.
| Check | Operation | Lock Type |
|---|---|---|
| Add Check Constraint | ALTER TABLE ... ADD CONSTRAINT ... CHECK without NOT VALID | ACCESS EXCLUSIVE |
| ADD COLUMN with DEFAULT | ALTER TABLE ... ADD COLUMN ... DEFAULT | ACCESS EXCLUSIVE + table rewrite |
| Adding an EXCLUDE Constraint | ALTER TABLE ... ADD CONSTRAINT ... EXCLUDE | SHARE ROW EXCLUSIVE + full table scan |
| Add Foreign Key | ALTER TABLE ... ADD FOREIGN KEY without NOT VALID | SHARE ROW EXCLUSIVE |
| Adding an Index | CREATE INDEX without CONCURRENTLY; CREATE INDEX CONCURRENTLY inside a transaction | SHARE |
| Adding a UNIQUE Constraint | ALTER TABLE ... ADD UNIQUE | ACCESS EXCLUSIVE |
| Changing Column Type | ALTER TABLE ... ALTER COLUMN ... TYPE | ACCESS EXCLUSIVE + table rewrite |
| CHAR Fields | CHAR/CHARACTER column types | — (best practice) |
| Create Table with SERIAL | SERIAL/BIGSERIAL/SMALLSERIAL in CREATE TABLE | — (best practice) |
| Creating an Extension | CREATE EXTENSION | — (requires superuser) |
| Domain CHECK Constraint | ALTER DOMAIN ... ADD CONSTRAINT ... CHECK without NOT VALID | ACCESS EXCLUSIVE |
| Dropping a Column | ALTER TABLE ... DROP COLUMN | ACCESS EXCLUSIVE |
| Dropping a Constraint | Unnamed UNIQUE/FOREIGN KEY/CHECK constraints | — (best practice) |
| Dropping a Database | DROP DATABASE | Exclusive access |
| Dropping an Index | DROP INDEX without CONCURRENTLY; DROP INDEX CONCURRENTLY inside a transaction | ACCESS EXCLUSIVE |
| Dropping a Primary Key | ALTER TABLE ... DROP CONSTRAINT ... pkey | ACCESS EXCLUSIVE |
| Dropping a Table | DROP TABLE | ACCESS EXCLUSIVE |
| Generated Columns | ADD COLUMN ... GENERATED ALWAYS AS ... STORED | ACCESS EXCLUSIVE + table rewrite |
| JSON Fields | ADD COLUMN ... JSON | — (best practice) |
| Mutation without WHERE | DELETE FROM table or UPDATE table SET ... without WHERE | ACCESS EXCLUSIVE / ROW EXCLUSIVE |
| Wide Indexes | CREATE INDEX with 4+ columns | — (best practice) |
| REFRESH MATERIALIZED VIEW | REFRESH MATERIALIZED VIEW without CONCURRENTLY; REFRESH MATERIALIZED VIEW CONCURRENTLY inside a transaction | ACCESS EXCLUSIVE |
| Renaming a Column | ALTER TABLE ... RENAME COLUMN | ACCESS EXCLUSIVE |
| Renaming a Schema | ALTER SCHEMA ... RENAME TO | ACCESS EXCLUSIVE |
| Renaming a Table | ALTER TABLE ... RENAME TO | ACCESS EXCLUSIVE |
| REINDEX | REINDEX without CONCURRENTLY; REINDEX CONCURRENTLY inside a transaction | ACCESS EXCLUSIVE |
| SERIAL Primary Keys | ADD COLUMN ... SERIAL/BIGSERIAL | ACCESS EXCLUSIVE + table rewrite |
| SET NOT NULL | ALTER TABLE ... ALTER COLUMN ... SET NOT NULL | ACCESS EXCLUSIVE |
| Short Primary Keys | SMALLINT/INT primary keys | — (best practice) |
| TIMESTAMP Fields | TIMESTAMP without time zone | — (best practice) |
| TRUNCATE TABLE | TRUNCATE TABLE | ACCESS EXCLUSIVE |
| Unnamed Constraints | Constraints without explicit names | — (best practice) |
Need project-specific rules beyond these? See Custom Checks.
Adding Check Constraint without NOT VALID
Check Name: AddCheckConstraintCheck
Lock Type: ACCESS EXCLUSIVE
Bad
Adding a CHECK constraint without NOT VALID scans the entire table to validate existing rows while holding an ACCESS EXCLUSIVE lock, which can block autovacuum. On larger tables this can cause performance issues.
ALTER TABLE orders ADD CONSTRAINT check_amount CHECK (amount > 0);
Good
Add the check first without validation using the NOT VALID clause. Validate the check later in a separate migration.
-- Step 1 (no table scans; lock acquired momentarily)
ALTER TABLE orders ADD CONSTRAINT check_amount CHECK (amount > 0) NOT VALID;
-- Step 2 (separate migration, acquires ShareUpdateExclusiveLock only)
ALTER TABLE orders VALIDATE CONSTRAINT check_amount;
ADD COLUMN with DEFAULT
Check name: AddColumnCheck
Lock type: ACCESS EXCLUSIVE + full table rewrite (Postgres < 11)
Bad
In Postgres versions before 11, adding a column with a default value requires a full table rewrite. This acquires an ACCESS EXCLUSIVE lock and can take hours on large tables, blocking all reads and writes.
ALTER TABLE users ADD COLUMN admin BOOLEAN DEFAULT FALSE;
Good
Add the column first, backfill the data separately, then add the default:
-- Migration 1: Add column without default
ALTER TABLE users ADD COLUMN admin BOOLEAN;
-- Outside migration: Backfill in batches
UPDATE users SET admin = FALSE WHERE admin IS NULL;
-- Migration 2: Add default for new rows only
ALTER TABLE users ALTER COLUMN admin SET DEFAULT FALSE;
Note: For Postgres 11+, adding a column with a constant default value is instant and safe. Set postgres_version = 11 (or higher) in diesel-guard.toml to suppress this check for constant defaults.
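The "backfill in batches" step above can be sketched like this. The batch size and the use of id as the batch key are illustrative assumptions:

```sql
-- Illustrative batched backfill (run outside the migration, e.g. from a script).
-- 10000 is an arbitrary batch size; tune it for your table and workload.
UPDATE users
SET admin = FALSE
WHERE id IN (
    SELECT id FROM users
    WHERE admin IS NULL
    LIMIT 10000
);
-- Re-run this statement until it affects 0 rows.
```

Batching keeps each transaction short, so row locks are held briefly and replication lag stays manageable.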
Adding an EXCLUDE Constraint
Check Name: AddExcludeConstraintCheck
Lock Type: SHARE ROW EXCLUSIVE
Bad
Adding an exclusion constraint scans the entire table to validate existing rows while holding a SHARE ROW EXCLUSIVE lock for the full duration. Unlike CHECK or FOREIGN KEY constraints, there is no NOT VALID escape hatch — exclusion constraints must be validated immediately.
ALTER TABLE meeting_rooms
ADD CONSTRAINT no_double_booking
EXCLUDE USING gist (room_id WITH =, during WITH &&);
Good
There is no non-blocking path for adding an exclusion constraint to an existing populated table. Options:
- Add during a low-traffic window and accept the full-table scan cost.
- Define the constraint at table creation time to avoid scanning existing rows:
CREATE TABLE meeting_rooms (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
room_id INT NOT NULL,
during TSTZRANGE NOT NULL,
CONSTRAINT no_double_booking EXCLUDE USING gist (room_id WITH =, during WITH &&)
);
- Use application-level enforcement if the table is too large to lock safely during the migration.
Escape Hatch
If you have reviewed this operation and confirmed it is safe (e.g., the table is empty or traffic is negligible), wrap it in a safety-assured block:
-- safety-assured:start
ALTER TABLE meeting_rooms
ADD CONSTRAINT no_double_booking
EXCLUDE USING gist (room_id WITH =, during WITH &&);
-- safety-assured:end
Adding Foreign Key without NOT VALID
Check Name: AddForeignKeyCheck
Lock Type: ShareRowExclusiveLock
Bad
Adding a foreign key with validation requires a ShareRowExclusiveLock, which blocks writes on the table.
On large tables, this can cause outages.
ALTER TABLE orders ADD CONSTRAINT fk_user_id
FOREIGN KEY (user_id) REFERENCES users(id);
Good
Add the foreign key first without validation using the NOT VALID clause. Validate the foreign key later in a separate
migration.
-- Step 1 (no validation scan; short metadata lock)
ALTER TABLE orders ADD CONSTRAINT fk_user_id
FOREIGN KEY (user_id) REFERENCES users(id) NOT VALID;
-- Step 2 (separate migration, acquires ShareUpdateExclusiveLock only)
ALTER TABLE orders VALIDATE CONSTRAINT fk_user_id;
Adding an Index
Check name: AddIndexCheck
Lock type: SHARE (blocks all writes)
Bad
Creating an index without CONCURRENTLY acquires a SHARE lock, blocking all write operations (INSERT, UPDATE, DELETE) for the duration of the index build.
CREATE INDEX idx_users_email ON users(email);
CREATE UNIQUE INDEX idx_users_username ON users(username);
Good
Use CONCURRENTLY to allow concurrent writes during the index build:
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
CREATE UNIQUE INDEX CONCURRENTLY idx_users_username ON users(username);
Important: CONCURRENTLY cannot run inside a transaction block.
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_add_user_index/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
Adding a UNIQUE Constraint
Check name: AddUniqueConstraintCheck
Lock type: ACCESS EXCLUSIVE (blocks all reads and writes)
Bad
Adding a UNIQUE constraint via ALTER TABLE acquires an ACCESS EXCLUSIVE lock, blocking all reads and writes during index creation. This is worse than CREATE INDEX without CONCURRENTLY.
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);
ALTER TABLE users ADD UNIQUE (email); -- Unnamed is also bad
Good
Use CREATE UNIQUE INDEX CONCURRENTLY, then optionally add the constraint:
-- Step 1: Create the unique index concurrently
CREATE UNIQUE INDEX CONCURRENTLY users_email_idx ON users(email);
-- Step 2 (Optional): Add constraint using the existing index
-- This is instant since the index already exists
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE USING INDEX users_email_idx;
Important: Requires a migration without a transaction block:
For Diesel:
# migrations/2024_01_01_add_unique/metadata.toml
run_in_transaction = false
For SQLx:
-- no-transaction
CREATE UNIQUE INDEX CONCURRENTLY users_email_idx ON users(email);
Adding a Primary Key
Adding a primary key constraint to an existing table acquires an ACCESS EXCLUSIVE lock, blocking all operations (reads and writes). The operation must also create an index to enforce uniqueness.
Check name: AddPrimaryKeyCheck
Bad
-- Blocks all operations while creating index and adding constraint
ALTER TABLE users ADD PRIMARY KEY (id);
ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY (id);
Good
Use CREATE UNIQUE INDEX CONCURRENTLY first, then add the primary key constraint using the existing index:
-- Step 1: Create unique index concurrently (allows concurrent operations)
CREATE UNIQUE INDEX CONCURRENTLY users_pkey ON users(id);
-- Step 2: Add PRIMARY KEY using the existing index (fast, minimal lock)
ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY USING INDEX users_pkey;
Important: The CONCURRENTLY approach requires a migration without a transaction block (same metadata.toml or -- no-transaction directive as above).
Note: This approach requires Postgres 11+. For earlier versions, you must use the unsafe ALTER TABLE ADD PRIMARY KEY during a maintenance window.
Changing Column Type
Check name: AlterColumnTypeCheck
Lock type: ACCESS EXCLUSIVE + full table rewrite
Bad
Changing a column’s type typically requires an ACCESS EXCLUSIVE lock and triggers a full table rewrite, blocking all operations.
ALTER TABLE users ALTER COLUMN age TYPE BIGINT;
ALTER TABLE users ALTER COLUMN data TYPE JSONB USING data::JSONB;
Good
Use a multi-step approach with a new column:
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN age_new BIGINT;
-- Outside migration: Backfill in batches
UPDATE users SET age_new = age::BIGINT;
-- Migration 2: Swap columns
ALTER TABLE users DROP COLUMN age;
ALTER TABLE users RENAME COLUMN age_new TO age;
Safe Type Changes
These type changes do not trigger a table rewrite on Postgres 9.2+ and are safe:
- Increasing VARCHAR length: VARCHAR(50) → VARCHAR(100)
- Converting to TEXT: VARCHAR(255) → TEXT
- Increasing numeric precision
CHAR Fields
Check name: CharTypeCheck
Lock type: None (best practice warning)
Bad
CHAR and CHARACTER types are fixed-length and padded with spaces. This wastes storage and can cause subtle bugs with string comparisons and equality checks.
ALTER TABLE users ADD COLUMN country_code CHAR(2);
CREATE TABLE products (sku CHARACTER(10) PRIMARY KEY);
Good
Use TEXT or VARCHAR instead:
-- For ALTER TABLE
ALTER TABLE users ADD COLUMN country_code TEXT;
ALTER TABLE users ADD COLUMN country_code VARCHAR(2);
-- For CREATE TABLE
CREATE TABLE products (sku TEXT);
CREATE TABLE products (sku VARCHAR(10));
-- Or TEXT with CHECK constraint for length validation
ALTER TABLE users ADD COLUMN country_code TEXT CHECK (length(country_code) = 2);
CREATE TABLE products (sku TEXT CHECK (length(sku) <= 10));
Why CHAR Is Problematic
- Fixed-length padding wastes storage
- Trailing spaces affect equality comparisons ('US' != 'US ')
- DISTINCT, GROUP BY, and joins may behave unexpectedly
- No performance benefit over VARCHAR or TEXT in Postgres
Create Table with SERIAL
Check name: CreateTableSerialCheck
Lock type: —
Bad
SERIAL / BIGSERIAL / SMALLSERIAL are PostgreSQL pseudo-types (not standard SQL).
They create separately-owned sequence objects, which can complicate permissions,
dump/restore, and replication workflows.
CREATE TABLE events (id BIGSERIAL PRIMARY KEY);
CREATE TABLE users (id SERIAL PRIMARY KEY);
CREATE TABLE users (external_id SERIAL NOT NULL UNIQUE, id BIGINT PRIMARY KEY);
Good
Prefer SQL-standard identity columns:
CREATE TABLE events (
id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
CREATE TABLE users (
id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY
);
Key insight: For new schemas on PostgreSQL 10+, identity columns avoid SERIAL pseudo-type drawbacks while keeping auto-increment semantics.
Creating an Extension
Check name: CreateExtensionCheck
Lock type: Requires superuser privileges
Bad
Creating an extension in migrations often requires superuser privileges, which application database users typically don’t have in production environments.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION "uuid-ossp";
Good
Install extensions outside of application migrations:
-- For local development: add to database setup scripts
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- For production: use infrastructure automation
-- (Ansible, Terraform, or manual DBA installation)
Best Practices
- Document required extensions in your project README
- Include extension installation in database provisioning scripts
- Use infrastructure automation (Ansible, Terraform) for production
- Have your DBA or infrastructure team install extensions before deployment
Common extensions that require this approach: pg_trgm, uuid-ossp, hstore, postgis, pg_stat_statements.
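To see what is already available before writing provisioning scripts, the standard Postgres catalogs can be queried directly (plain SQL, not a diesel-guard feature):

```sql
-- Extensions currently installed in this database
SELECT extname, extversion FROM pg_extension;

-- Extensions available on the server but not yet installed
SELECT name, default_version
FROM pg_available_extensions
WHERE installed_version IS NULL;
```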
CREATE TABLE without PRIMARY KEY
Operation: CREATE TABLE without PRIMARY KEY
Why this is dangerous
Tables without a primary key cannot participate in logical replication. PostgreSQL replication slots require each replicated table to have either a primary key or an explicit REPLICA IDENTITY setting (FULL or USING INDEX). Without one, any attempt to replicate the table will fail or require manual intervention.
Beyond replication, a primary key provides:
- A guaranteed unique row identifier for updates, deletes, and ON CONFLICT clauses
- A natural target for foreign key references from other tables
- An implicit index that speeds up single-row lookups
What diesel-guard checks
Any CREATE TABLE statement that defines no primary key — neither inline on a column nor as a separate table-level constraint — is flagged.
Exceptions (not flagged):
- CREATE TEMP TABLE — temporary tables are session-scoped and never replicated
- CREATE TABLE (LIKE other) — PK inheritance depends on INCLUDING CONSTRAINTS, which cannot be determined at parse time
Bad
CREATE TABLE events (
name TEXT,
payload JSONB
);
Good
-- Option 1: identity column (recommended)
CREATE TABLE events (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name TEXT,
payload JSONB
);
-- Option 2: UUID
CREATE TABLE events (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT,
payload JSONB
);
-- Option 3: separate constraint
CREATE TABLE events (
id BIGINT GENERATED ALWAYS AS IDENTITY,
name TEXT,
payload JSONB,
PRIMARY KEY (id)
);
Escape hatch
If the table is intentionally without a primary key (for example, a log table where you plan to set REPLICA IDENTITY FULL), wrap the statement in a safety-assured block:
-- safety-assured:start
CREATE TABLE audit_log (
recorded_at TIMESTAMPTZ,
message TEXT
);
-- safety-assured:end
Domain CHECK Constraint without NOT VALID
Check Name: AddDomainCheckConstraintCheck
Lock Type: ACCESS EXCLUSIVE
Bad
Adding a CHECK constraint to an existing domain without NOT VALID causes Postgres to validate all columns that use that domain across all tables. This is a potentially slow, lock-holding full-scan operation.
ALTER DOMAIN positive_int ADD CONSTRAINT pos_check CHECK (VALUE > 0);
Good
Use NOT VALID to add the constraint without scanning existing data, then validate in a separate migration — the same two-step pattern used for table-level CHECK constraints.
-- Step 1 (fast, no full scan; lock acquired momentarily)
ALTER DOMAIN positive_int ADD CONSTRAINT pos_check CHECK (VALUE > 0) NOT VALID;
-- Step 2 (separate migration, acquires ShareUpdateExclusiveLock only)
ALTER DOMAIN positive_int VALIDATE CONSTRAINT pos_check;
Note: CREATE DOMAIN ... CHECK (...) is always safe. The domain is new, so no table columns reference it yet — there is no existing data to scan.
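For completeness, the always-safe creation-time form looks like this (the domain name is illustrative):

```sql
-- Safe: the domain is brand new, so no existing data is scanned
CREATE DOMAIN positive_int AS INTEGER CHECK (VALUE > 0);
```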
Dropping a Column
Check name: DropColumnCheck
Lock type: ACCESS EXCLUSIVE (blocks all operations)
Bad
Dropping a column acquires an ACCESS EXCLUSIVE lock, blocking all operations while the lock is held, and causes runtime errors if application code is still referencing the column.
ALTER TABLE users DROP COLUMN email;
Good
Remove references from application code first, then drop the column in a later migration:
-- Step 1: Mark column as unused in application code
-- Deploy application code changes first
-- Step 2: (Optional) Null out values; vacuum reclaims the space later
ALTER TABLE users ALTER COLUMN email DROP NOT NULL;
UPDATE users SET email = NULL;
-- Step 3: Drop in later migration after confirming it's unused
ALTER TABLE users DROP COLUMN email;
Postgres doesn’t support DROP COLUMN CONCURRENTLY, so the ACCESS EXCLUSIVE lock is unavoidable. Staging the removal minimizes risk by ensuring no running application code depends on the column at the time of the drop.
Dropping a Constraint (Unnamed Constraints)
Check name: UnnamedConstraintCheck
Lock type: None (best practice warning)
This check flags constraints added without an explicit name. Auto-generated names vary between databases and make future DROP CONSTRAINT operations unpredictable.
Bad
Without an explicit name, Postgres auto-generates one. These generated names vary between databases and make future migrations difficult.
-- Unnamed UNIQUE constraint
ALTER TABLE users ADD UNIQUE (email);
-- Unnamed FOREIGN KEY constraint
ALTER TABLE posts ADD FOREIGN KEY (user_id) REFERENCES users(id);
-- Unnamed CHECK constraint
ALTER TABLE users ADD CHECK (age >= 0);
Good
Always name constraints explicitly using the CONSTRAINT keyword:
-- Named UNIQUE constraint
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);
-- Named FOREIGN KEY constraint
ALTER TABLE posts ADD CONSTRAINT posts_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);
-- Named CHECK constraint
ALTER TABLE users ADD CONSTRAINT users_age_check CHECK (age >= 0);
Naming Conventions
- UNIQUE: {table}_{column}_key or {table}_{column1}_{column2}_key
- FOREIGN KEY: {table}_{column}_fkey
- CHECK: {table}_{column}_check or {table}_{description}_check
Named constraints make future migrations predictable:
-- Easy to reference in later migrations
ALTER TABLE users DROP CONSTRAINT users_email_key;
Dropping a Database
Check name: DropDatabaseCheck
Lock type: Exclusive access (all connections must be terminated)
Bad
Dropping a database permanently deletes the entire database including all tables, data, and objects. This operation is irreversible. Postgres requires exclusive access to the target database — all active connections must be terminated before the drop can proceed. The command cannot be executed inside a transaction block.
DROP DATABASE mydb;
DROP DATABASE IF EXISTS testdb;
Good
DROP DATABASE should almost never appear in application migrations. Database lifecycle should be managed through infrastructure automation or DBA operations.
-- For local development: use database setup scripts
-- For production: use infrastructure automation (Terraform, Ansible)
-- For test cleanup: coordinate with DBA or use dedicated test infrastructure
-- If absolutely necessary (e.g., test cleanup), use a safety-assured block:
-- safety-assured:start
DROP DATABASE test_db;
-- safety-assured:end
Important Considerations
- Database deletion should be handled by DBAs or infrastructure automation, not application migrations
- Ensure complete backups exist before proceeding
- Verify all connections to the database are terminated
- Consider using infrastructure tools (Terraform, Ansible) instead of migrations
Note: Postgres 13+ supports DROP DATABASE ... WITH (FORCE) to terminate active connections automatically, but this makes the operation even more dangerous and should be used with extreme caution.
Dropping an Index
Check name: DropIndexCheck
Lock type: ACCESS EXCLUSIVE (blocks all queries)
Bad
Dropping an index without CONCURRENTLY acquires an ACCESS EXCLUSIVE lock on the table, blocking all queries (SELECT, INSERT, UPDATE, DELETE) until the drop operation completes.
DROP INDEX idx_users_email;
DROP INDEX IF EXISTS idx_users_username;
Good
Use CONCURRENTLY to drop the index without blocking queries:
DROP INDEX CONCURRENTLY idx_users_email;
DROP INDEX CONCURRENTLY IF EXISTS idx_users_username;
Important: CONCURRENTLY requires Postgres 9.2+ and cannot run inside a transaction block.
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_drop_user_index/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
DROP INDEX CONCURRENTLY idx_users_email;
Note: Dropping an index concurrently takes longer than a regular drop and uses more resources, but allows concurrent queries to continue. If it fails, the index may be left in an “invalid” state and should be dropped again.
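To find indexes left in that invalid state after a failed CONCURRENTLY operation, the standard catalogs can be queried directly (plain SQL, not a diesel-guard command):

```sql
-- Lists indexes marked invalid after a failed CONCURRENTLY build or drop
SELECT c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
```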
DROP NOT NULL
Check name: DropNotNullCheck
Bad
Removing a NOT NULL constraint changes a contract that application code may depend on. Once NULL values are written to this column, any code that reads it without handling NULL will fail at runtime.
ALTER TABLE users ALTER COLUMN email DROP NOT NULL;
Good
Ensure this change is intentional and coordinated with application changes. Update all code paths that read this column to handle NULL before or alongside the migration.
If the change has been reviewed and confirmed safe, suppress it explicitly:
-- safety-assured:start
ALTER TABLE users ALTER COLUMN email DROP NOT NULL;
-- safety-assured:end
Dropping a Primary Key
Check name: DropPrimaryKeyCheck
Lock type: ACCESS EXCLUSIVE (blocks all operations)
Bad
Dropping a primary key removes the critical uniqueness constraint and breaks foreign key relationships in other tables that reference this table. It also acquires an ACCESS EXCLUSIVE lock, blocking all operations.
-- Breaks foreign keys that reference users(id)
ALTER TABLE users DROP CONSTRAINT users_pkey;
Good
If you must change your primary key strategy, use a multi-step migration approach:
-- Step 1: Identify all foreign keys that reference users' primary key
SELECT
tc.table_name, kcu.column_name, tc.constraint_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.referential_constraints rc ON tc.constraint_name = rc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY' AND rc.unique_constraint_name = 'users_pkey';
-- Step 2: Create the new primary key FIRST (if migrating to a new key)
ALTER TABLE users ADD CONSTRAINT users_new_pkey PRIMARY KEY (uuid);
-- Step 3: Update all foreign keys to reference the new key
-- (This may require adding new columns to referencing tables)
ALTER TABLE posts ADD COLUMN user_uuid UUID;
UPDATE posts SET user_uuid = users.uuid FROM users WHERE posts.user_id = users.id;
ALTER TABLE posts ADD CONSTRAINT posts_user_uuid_fkey FOREIGN KEY (user_uuid) REFERENCES users(uuid);
-- Step 4: Only after all foreign keys are migrated, drop the old key
ALTER TABLE users DROP CONSTRAINT users_pkey;
-- Step 5: Clean up old columns
ALTER TABLE posts DROP COLUMN user_id;
Important Considerations
- Review ALL tables with foreign keys to this table
- Consider a transition period where both old and new keys exist
- Update application code to use the new key before dropping the old one
- Test thoroughly in a staging environment first
Limitation: This check relies on Postgres naming conventions (e.g., users_pkey). It may not detect primary keys with custom names. Future versions will support database connections for accurate verification.
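If your primary key may have a custom name, one way to look up the actual constraint name is to query pg_constraint directly ('users' is a placeholder):

```sql
-- Find the actual primary key constraint name for a table
SELECT conname
FROM pg_constraint
WHERE conrelid = 'users'::regclass AND contype = 'p';
```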
Dropping a Table
Check name: DropTableCheck
Lock type: ACCESS EXCLUSIVE (blocks all operations)
Bad
Dropping a table permanently deletes all data, indexes, triggers, and constraints. This operation acquires an ACCESS EXCLUSIVE lock and cannot be undone after the transaction commits. Foreign key relationships in other tables may block the drop or cause cascading deletes.
DROP TABLE users;
DROP TABLE IF EXISTS orders CASCADE;
Good
Before dropping a table in production, take these precautions:
-- Step 1: Verify the table is no longer in use
-- Check application code for references to this table
-- Monitor for queries against the table
-- Step 2: Check for foreign key dependencies
SELECT
tc.table_name, kcu.column_name, rc.constraint_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.referential_constraints rc ON tc.constraint_name = rc.constraint_name
WHERE rc.unique_constraint_schema = 'public'
AND rc.unique_constraint_name IN (
SELECT constraint_name FROM information_schema.table_constraints
WHERE table_name = 'users' AND constraint_type IN ('PRIMARY KEY', 'UNIQUE')
);
-- Step 3: Ensure backups exist or data has been migrated
-- Step 4: Drop the table (use safety-assured if intentional)
-- safety-assured:start
DROP TABLE users;
-- safety-assured:end
Important Considerations
- Verify all application code references have been removed and deployed
- Check for foreign keys in other tables that reference this table
- Ensure data backups exist before dropping
- Consider renaming the table first (e.g., users_deprecated) and waiting before dropping
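The rename-first approach from the last bullet might look like this (a sketch with illustrative names; the two statements belong in separate migrations, days or weeks apart):

```sql
-- Migration 1: park the table under a deprecated name and watch for errors
ALTER TABLE users RENAME TO users_deprecated;

-- Migration 2 (much later): drop it once nothing has referenced it
-- safety-assured:start
-- Safe because: renamed weeks ago, no errors observed since
DROP TABLE users_deprecated;
-- safety-assured:end
```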
Generated Columns
Check name: GeneratedColumnCheck
Lock type: ACCESS EXCLUSIVE + full table rewrite
Bad
Adding a GENERATED ALWAYS AS ... STORED column acquires an ACCESS EXCLUSIVE lock and triggers a full table rewrite because Postgres must compute and store the expression value for every existing row.
ALTER TABLE products ADD COLUMN total_price INTEGER GENERATED ALWAYS AS (price * quantity) STORED;
Good
-- Step 1: Add a regular nullable column
ALTER TABLE products ADD COLUMN total_price INTEGER;
-- Step 2: Backfill in batches (outside migration)
UPDATE products SET total_price = price * quantity WHERE total_price IS NULL;
-- Step 3: Optionally add NOT NULL constraint
ALTER TABLE products ALTER COLUMN total_price SET NOT NULL;
-- Step 4: Use a trigger for new rows
CREATE FUNCTION compute_total_price() RETURNS TRIGGER AS $$
BEGIN
NEW.total_price := NEW.price * NEW.quantity;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_total_price
BEFORE INSERT OR UPDATE ON products
FOR EACH ROW EXECUTE FUNCTION compute_total_price();
Note: Through Postgres 17, only STORED generated columns are supported (VIRTUAL generated columns arrive in Postgres 18). For new, empty tables, GENERATED STORED columns are acceptable because there are no existing rows to rewrite.
JSON Fields
Check name: AddJsonColumnCheck
Lock type: None (best practice warning)
Bad
In Postgres, the json type has no equality operator, which breaks existing SELECT DISTINCT queries and other operations that require comparing values.
ALTER TABLE users ADD COLUMN properties JSON;
Good
Use jsonb instead of json:
ALTER TABLE users ADD COLUMN properties JSONB;
Benefits of JSONB over JSON
- Has proper equality and comparison operators (supports DISTINCT, GROUP BY, UNION)
- Supports indexing (GIN indexes for efficient queries)
- Faster to process (binary format, no reparsing)
- Generally better performance for most use cases
Note: The only advantage of JSON over JSONB is that it preserves exact formatting and key order, which is rarely needed in practice.
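For example, the GIN support mentioned above enables fast containment queries (column and table names are illustrative; CONCURRENTLY requires running outside a transaction block):

```sql
-- GIN index on a jsonb column supports the @> containment operator
CREATE INDEX CONCURRENTLY idx_users_properties ON users USING GIN (properties);

-- Served by the index above
SELECT id FROM users WHERE properties @> '{"plan": "pro"}';
```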
Mutation without WHERE
Check name: MutationWithoutWhereCheck
Lock type: ROW EXCLUSIVE (DELETE and UPDATE)
When this fires
A DELETE FROM table or UPDATE table SET ... with no WHERE clause affects every row in the table. In a migration this is almost always a mistake — a forgotten filter rather than an intentional bulk mutation.
-- Fires on DELETE with no WHERE
DELETE FROM users;
-- Fires on UPDATE with no WHERE
UPDATE users SET active = false;
Why it’s dangerous
Both hold a ROW EXCLUSIVE lock and touch every row in the table. On large tables this can take minutes and queue up other transactions. UPDATE also rewrites every row, causing table bloat.
Good alternative
Add a WHERE clause to target only the rows you intend to modify:
-- Targeted delete
DELETE FROM users WHERE deactivated_at < '2020-01-01';
-- Targeted update
UPDATE users SET active = false WHERE last_login < '2020-01-01';
If a backfill must touch every row, do it in batches outside the migration to avoid holding the lock for extended periods.
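One common batching pattern (a sketch; the batch size and predicate are illustrative) is to repeat a bounded UPDATE until it reports zero rows:

```sql
-- Run repeatedly (outside the migration) until 0 rows are updated
UPDATE users
SET active = false
WHERE id IN (
    SELECT id FROM users
    WHERE active AND last_login < '2020-01-01'
    LIMIT 1000
);
```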
Escape hatch
When the full-table mutation is intentional (e.g. a one-time seed reset or known-empty table), use a safety-assured block:
-- safety-assured:start
-- Safe because: lookup table, always fewer than 100 rows
DELETE FROM countries;
-- safety-assured:end
-- safety-assured:start
-- Safe because: one-time backfill on a table with fewer than 5k rows
UPDATE feature_flags SET enabled = false;
-- safety-assured:end
Wide Indexes
Check name: WideIndexCheck
Lock type: None (best practice warning)
Bad
Indexes with 4 or more columns are rarely effective. Postgres can only use multi-column indexes efficiently when filtering on the leftmost columns in order. Wide indexes also increase storage costs and slow down write operations (INSERT, UPDATE, DELETE).
-- 4+ columns: rarely useful
CREATE INDEX idx_users_search ON users(tenant_id, email, name, status);
CREATE INDEX idx_orders_composite ON orders(user_id, product_id, status, created_at);
Good
Use narrower, more targeted indexes based on actual query patterns:
-- Option 1: Partial index for specific query pattern
CREATE INDEX idx_users_active_email ON users(email)
WHERE status = 'active';
-- Option 2: Separate indexes for different queries
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_status ON users(status);
-- Option 3: Covering index with INCLUDE (Postgres 11+)
-- Includes extra columns for SELECT without adding them to index keys
CREATE INDEX idx_users_email_covering ON users(email)
INCLUDE (name, status);
-- Option 4: Two-column composite (still useful for some patterns)
CREATE INDEX idx_users_tenant_email ON users(tenant_id, email);
When Wide Indexes Might Be Acceptable
- Composite foreign keys matching the referenced table’s primary key
- Specific, verified query patterns that need all columns in order
- Use safety-assured if you’ve confirmed the index is necessary
Performance tip: Postgres can combine multiple indexes using bitmap scans. Two separate indexes often outperform one wide index.
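You can verify this with EXPLAIN: with the two single-column indexes from Option 2 in place, and depending on predicate selectivity, the plan may show a BitmapAnd combining both indexes (the query is illustrative):

```sql
-- With idx_users_email and idx_users_status in place, Postgres can
-- combine them via bitmap index scans instead of needing one wide index
EXPLAIN SELECT * FROM users
WHERE email = 'alice@example.com' AND status = 'active';
```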
REFRESH MATERIALIZED VIEW
Check name: RefreshMatViewCheck
Lock type: ACCESS EXCLUSIVE (blocks all reads on the view)
Bad
Refreshing a materialized view without CONCURRENTLY acquires an ACCESS EXCLUSIVE lock on the view, blocking all reads (SELECT) until the refresh completes. Duration depends on view complexity and underlying data size.
REFRESH MATERIALIZED VIEW my_view;
Good
Use CONCURRENTLY to refresh the view without blocking reads:
REFRESH MATERIALIZED VIEW CONCURRENTLY my_view;
Important: CONCURRENTLY cannot run inside a transaction block, and requires a unique index on the materialized view.
Create the unique index before using the concurrent refresh:
CREATE UNIQUE INDEX ON my_view(id);
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_refresh_my_view/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
REFRESH MATERIALIZED VIEW CONCURRENTLY my_view;
Note: If CONCURRENTLY fails, the view data remains unchanged — there is no partial update. Check that a unique index exists on the view before using this option.
Renaming a Column
Check name: RenameColumnCheck
Lock type: ACCESS EXCLUSIVE (brief, but breaks running app instances)
Bad
Renaming a column breaks running application instances immediately. Any code that references the old column name will fail after the rename is applied, causing downtime.
ALTER TABLE users RENAME COLUMN email TO email_address;
Good
Use a multi-step migration to maintain compatibility during the transition:
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN email_address VARCHAR(255);
-- Outside migration: Backfill in batches
UPDATE users SET email_address = email;
-- Migration 2: Add NOT NULL if needed
ALTER TABLE users ALTER COLUMN email_address SET NOT NULL;
-- Update application code to use email_address
-- Migration 3: Drop old column after deploying code changes
ALTER TABLE users DROP COLUMN email;
Important: The RENAME COLUMN operation itself is fast (brief ACCESS EXCLUSIVE lock), but the primary risk is application compatibility, not lock duration. All running instances must be updated to reference the new column name before the rename is applied.
Renaming a Schema
Check name: RenameSchemaCheck
Lock type: ACCESS EXCLUSIVE (blocks on busy databases; breaks all references to schema objects)
Bad
Renaming a schema breaks all application code, ORM models, and connection strings that reference any object within the schema. Unlike renaming a single table or column, a schema rename invalidates every qualified reference of the form old_schema.table, old_schema.function, old_schema.type, and so on — the blast radius is as wide as the schema itself.
ALTER SCHEMA myschema RENAME TO newschema;
Good
Avoid renaming schemas in production. If a rename is unavoidable, use a search_path alias to maintain compatibility while migrating references:
-- Step 1: Add a search_path alias so both names resolve
ALTER DATABASE mydb SET search_path TO newschema, myschema;
-- Step 2: Rename the schema
ALTER SCHEMA myschema RENAME TO newschema;
-- Step 3: Update all application code, ORM models, and connection strings
-- to use the new schema name, then deploy.
-- Step 4: Remove the search_path alias after all references are updated
ALTER DATABASE mydb RESET search_path;
Important: This is a high-risk operation. Coordinate with all teams that own code referencing the schema before proceeding.
Renaming a Table
Check name: RenameTableCheck
Lock type: ACCESS EXCLUSIVE (blocks on busy tables; breaks running app instances)
Bad
Renaming a table breaks running application instances immediately. Any code that references the old table name will fail after the rename is applied. Additionally, this operation requires an ACCESS EXCLUSIVE lock which can block on busy tables.
ALTER TABLE users RENAME TO customers;
Good
Use a multi-step dual-write migration to safely rename the table:
-- Migration 1: Create new table
CREATE TABLE customers (LIKE users INCLUDING ALL);
-- Update application code to write to BOTH tables
-- Migration 2: Backfill data in batches
INSERT INTO customers
SELECT * FROM users
WHERE id > last_processed_id
ORDER BY id
LIMIT 10000;
-- Update application code to read from new table
-- Deploy updated application
-- Update application code to stop writing to old table
-- Migration 3: Drop old table
DROP TABLE users;
Important: This multi-step approach avoids the ACCESS EXCLUSIVE lock issues on large tables and ensures zero downtime. The migration requires multiple deployments coordinated with application code changes.
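If coordinating dual writes in application code is impractical, a trigger can mirror inserts during the transition. This is a sketch, not the tool's prescribed method: it assumes both tables have identical column layouts, and UPDATE/DELETE would need similar triggers.

```sql
-- Mirror inserts from the old table into the new one during the transition
CREATE FUNCTION mirror_users_insert() RETURNS trigger AS $$
BEGIN
    -- NEW.* expands to the inserted row; layouts must match exactly
    INSERT INTO customers VALUES (NEW.*);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_mirror_users_insert
AFTER INSERT ON users
FOR EACH ROW EXECUTE FUNCTION mirror_users_insert();
```

Drop the function and trigger once the cutover to the new table is complete.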
REINDEX
Check name: ReindexCheck
Lock type: ACCESS EXCLUSIVE (blocks all operations)
Bad
Reindexing without CONCURRENTLY acquires an ACCESS EXCLUSIVE lock on the table, blocking all operations until complete. Duration depends on index size.
REINDEX INDEX idx_users_email;
REINDEX TABLE users;
Good
Use CONCURRENTLY to reindex without blocking operations:
REINDEX INDEX CONCURRENTLY idx_users_email;
REINDEX TABLE CONCURRENTLY users;
Important: CONCURRENTLY requires Postgres 12+ and cannot run inside a transaction block.
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_reindex_users/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
REINDEX INDEX CONCURRENTLY idx_users_email;
Note: REINDEX CONCURRENTLY rebuilds the index without locking out writes. If it fails, the index may be left in an “invalid” state — check with \d tablename and run REINDEX again if needed.
SERIAL Primary Keys
Check name: AddSerialColumnCheck
Lock type: ACCESS EXCLUSIVE + full table rewrite
Bad
Adding a SERIAL column to an existing table triggers a full table rewrite because Postgres must populate sequence values for all existing rows. This acquires an ACCESS EXCLUSIVE lock and blocks all operations.
ALTER TABLE users ADD COLUMN id SERIAL;
ALTER TABLE users ADD COLUMN order_number BIGSERIAL;
Good
Create the sequence separately, add the column without a default, then backfill:
-- Step 1: Create a sequence
CREATE SEQUENCE users_id_seq;
-- Step 2: Add the column WITHOUT default (fast, no rewrite)
ALTER TABLE users ADD COLUMN id INTEGER;
-- Outside migration: Backfill existing rows in batches
UPDATE users SET id = nextval('users_id_seq') WHERE id IS NULL;
-- Step 3: Set default for future inserts only
ALTER TABLE users ALTER COLUMN id SET DEFAULT nextval('users_id_seq');
-- Step 4: Set NOT NULL if needed (this scans the table; see the SET NOT NULL check for a lock-safe approach)
ALTER TABLE users ALTER COLUMN id SET NOT NULL;
-- Step 5: Set sequence ownership
ALTER SEQUENCE users_id_seq OWNED BY users.id;
Key insight: Adding a column with DEFAULT nextval(...) on an existing table still triggers a table rewrite. The solution is to add the column first without any default, backfill separately, then set the default for future rows only.
SET NOT NULL
Check name: AddNotNullCheck
Lock type: ACCESS EXCLUSIVE (blocks all operations while scanning table)
Bad
Adding a NOT NULL constraint requires scanning the entire table to verify all values are non-null. This acquires an ACCESS EXCLUSIVE lock and blocks all operations.
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
Good
For large tables, use a CHECK constraint approach that allows concurrent operations:
-- Step 1: Add CHECK constraint without validating existing rows
ALTER TABLE users ADD CONSTRAINT users_email_not_null_check CHECK (email IS NOT NULL) NOT VALID;
-- Step 2: Validate separately (uses SHARE UPDATE EXCLUSIVE lock)
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null_check;
-- Step 3: Add NOT NULL constraint (Postgres 12+ skips the scan because the validated CHECK exists)
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
-- Step 4: Optionally drop redundant CHECK constraint
ALTER TABLE users DROP CONSTRAINT users_email_not_null_check;
The VALIDATE step allows concurrent reads and writes, blocking only other schema changes. Note that Step 3 relies on a Postgres 12+ optimization: SET NOT NULL skips the full-table scan when a validated CHECK (column IS NOT NULL) constraint already exists. On older versions, SET NOT NULL always rescans the table under an ACCESS EXCLUSIVE lock.
Short Primary Keys
Check name: ShortIntegerPrimaryKeyCheck
Lock type: None (best practice warning)
Bad
Using SMALLINT or INT for primary keys risks ID exhaustion. SMALLINT maxes out at ~32,767 records, and INT at ~2.1 billion. While 2.1 billion seems large, active applications can exhaust this faster than expected, especially with high-frequency inserts, soft deletes, or partitioned data.
Changing the type later requires an ALTER COLUMN TYPE operation with a full table rewrite and ACCESS EXCLUSIVE lock.
-- SMALLINT exhausts at ~32K records
CREATE TABLE users (id SMALLINT PRIMARY KEY);
-- INT exhausts at ~2.1B records
CREATE TABLE posts (id INT PRIMARY KEY);
CREATE TABLE events (id INTEGER PRIMARY KEY);
-- Composite PKs with short integers still risky
CREATE TABLE tenant_events (
tenant_id BIGINT,
event_id INT, -- Will exhaust per tenant
PRIMARY KEY (tenant_id, event_id)
);
Good
Use BIGINT for all primary keys to avoid exhaustion:
-- BIGINT: effectively unlimited (~9.2 quintillion)
CREATE TABLE users (id BIGINT PRIMARY KEY);
-- Identity-backed BIGINT: auto-incrementing and SQL-standard
CREATE TABLE posts (id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY);
-- Composite PKs with all BIGINT
CREATE TABLE tenant_events (
tenant_id BIGINT,
event_id BIGINT,
PRIMARY KEY (tenant_id, event_id)
);
Storage overhead: BIGINT uses 8 bytes vs INT’s 4 bytes — only 4 extra bytes per row. For a 1 million row table, this is ~4MB of additional storage, which is negligible compared to the operational cost of changing column types later.
Identity columns: On PostgreSQL 10+, prefer GENERATED ... AS IDENTITY over SERIAL/BIGSERIAL. Use BIGSERIAL only when you must support older PostgreSQL versions.
Safe exceptions: Small, finite lookup tables with <100 entries (e.g., status codes, country lists) can safely use smaller types. Use safety-assured to bypass the check for these cases.
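For instance, a static lookup table (illustrative) can keep a small key with the check suppressed:

```sql
-- safety-assured:start
-- Safe because: static lookup table, bounded well under SMALLINT's range
CREATE TABLE order_statuses (
    id SMALLINT PRIMARY KEY,
    name TEXT NOT NULL
);
-- safety-assured:end
```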
TIMESTAMP Fields
Check name: TimestampTypeCheck
Lock type: None (best practice warning)
Bad
TIMESTAMP (or TIMESTAMP WITHOUT TIME ZONE) stores values without timezone context, which can cause issues in multi-timezone applications, during DST transitions, and makes it difficult to determine the actual point in time represented.
-- ALTER TABLE
ALTER TABLE events ADD COLUMN created_at TIMESTAMP;
ALTER TABLE events ADD COLUMN updated_at TIMESTAMP WITHOUT TIME ZONE;
-- CREATE TABLE
CREATE TABLE events (
id SERIAL PRIMARY KEY,
created_at TIMESTAMP,
updated_at TIMESTAMP WITHOUT TIME ZONE
);
Good
Use TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE) instead:
-- ALTER TABLE
ALTER TABLE events ADD COLUMN created_at TIMESTAMPTZ;
ALTER TABLE events ADD COLUMN updated_at TIMESTAMP WITH TIME ZONE;
-- CREATE TABLE
CREATE TABLE events (
id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
created_at TIMESTAMPTZ,
updated_at TIMESTAMP WITH TIME ZONE
);
Why TIMESTAMPTZ Is Better
- Stores values in UTC internally and converts on input/output based on session timezone
- Provides consistent behavior across different timezones and server environments
- Handles DST transitions correctly
- Makes it clear what point in time is represented
When TIMESTAMP Without Time Zone Might Be Acceptable
- Storing dates that are inherently timezone-agnostic (e.g., birth dates stored as midnight)
- Legacy systems where all data is known to be in a single timezone
- Use safety-assured if you’ve confirmed timezone-naive timestamps are appropriate
TRUNCATE TABLE
Check name: TruncateTableCheck
Lock type: ACCESS EXCLUSIVE (blocks all operations)
When this fires
TRUNCATE TABLE acquires an ACCESS EXCLUSIVE lock, blocking all reads and writes on the table. Unlike DELETE, it cannot be batched or throttled, making it risky on large tables in production.
TRUNCATE TABLE users;
TRUNCATE TABLE orders, order_items;
When TRUNCATE is actually fine
TRUNCATE is a legitimate operation in many migration contexts:
- Lookup / seed tables — small, fast to re-populate, often truncated before re-seeding
- Staging or test environments — no live traffic, large-table risk doesn’t apply
- Known-empty tables — e.g. clearing a table that was just created in the same migration
- Maintenance windows — explicit downtime where locking is acceptable
In these cases, the check produces noise. Use one of the escape hatches below.
Good alternative (large production tables)
Use batched DELETE to remove rows incrementally while allowing concurrent access:
-- Delete rows in small batches to allow concurrent access
DELETE FROM users WHERE id IN (
SELECT id FROM users LIMIT 1000
);
-- Repeat the batched DELETE until all rows are removed
-- Optional: Reset sequences if needed
ALTER SEQUENCE users_id_seq RESTART WITH 1;
-- Optional: Reclaim space
VACUUM users;
Escape hatches
Per-statement: safety-assured block
Use this when TRUNCATE is intentional in a specific migration:
-- safety-assured:start
-- Safe because: lookup table, always fewer than 100 rows
TRUNCATE TABLE countries;
-- safety-assured:end
Project-wide: downgrade to a warning
Report TRUNCATE but don’t fail CI. Useful when your project uses TRUNCATE routinely (e.g. seeding migrations) but you still want visibility:
# diesel-guard.toml
warn_checks = ["TruncateTableCheck"]
Warnings appear in output with ⚠️ and do not cause a non-zero exit code.
Project-wide: disable entirely
# diesel-guard.toml
disable_checks = ["TruncateTableCheck"]
Unnamed Constraints
Check name: UnnamedConstraintCheck
Lock type: None (best practice warning)
Bad
Adding constraints without explicit names results in auto-generated names from Postgres. These names vary between databases and make future migrations difficult — you can’t reliably reference a constraint you don’t know the name of.
-- Unnamed UNIQUE constraint
ALTER TABLE users ADD UNIQUE (email);
-- Unnamed FOREIGN KEY constraint
ALTER TABLE posts ADD FOREIGN KEY (user_id) REFERENCES users(id);
-- Unnamed CHECK constraint
ALTER TABLE users ADD CHECK (age >= 0);
Good
Always name constraints explicitly using the CONSTRAINT keyword:
-- Named UNIQUE constraint
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);
-- Named FOREIGN KEY constraint
ALTER TABLE posts ADD CONSTRAINT posts_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);
-- Named CHECK constraint
ALTER TABLE users ADD CONSTRAINT users_age_check CHECK (age >= 0);
Naming Conventions
- UNIQUE: {table}_{column}_key or {table}_{column1}_{column2}_key
- FOREIGN KEY: {table}_{column}_fkey
- CHECK: {table}_{column}_check or {table}_{description}_check
Named constraints make future migrations predictable:
-- Easy to reference in later migrations
ALTER TABLE users DROP CONSTRAINT users_email_key;
Custom Checks
Built-in checks cover common Postgres migration hazards, but every project has unique rules — naming conventions, banned operations, team policies. Custom checks let you enforce these with simple Rhai scripts.
Write your checks as .rhai files, point custom_checks_dir at the directory in diesel-guard.toml, and diesel-guard will run them alongside the built-in checks.
Quick Start
- Create a directory for your checks:
mkdir checks
- Write a check script (e.g., checks/require_concurrent_index.rhai):
let stmt = node.IndexStmt;
if stmt == () { return; }
if !stmt.concurrent {
let idx_name = if stmt.idxname != "" { stmt.idxname } else { "(unnamed)" };
return #{
operation: "INDEX without CONCURRENTLY: " + idx_name,
problem: "Creating index '" + idx_name + "' without CONCURRENTLY blocks writes on the table.",
safe_alternative: "Use CREATE INDEX CONCURRENTLY:\n CREATE INDEX CONCURRENTLY " + idx_name + " ON ...;"
};
}
// CONCURRENTLY cannot run inside a transaction block
if ctx.run_in_transaction {
let idx_name = if stmt.idxname != "" { stmt.idxname } else { "(unnamed)" };
let hint = if ctx.no_transaction_hint != "" { ctx.no_transaction_hint } else { "Run this migration outside a transaction block." };
#{
operation: "INDEX CONCURRENTLY inside a transaction: " + idx_name,
problem: "CREATE INDEX CONCURRENTLY cannot run inside a transaction block. PostgreSQL will raise an error at runtime.",
safe_alternative: hint
}
}
- Add to diesel-guard.toml:
custom_checks_dir = "checks"
- Run as usual:
diesel-guard check
How It Works
- Each .rhai script is called once per SQL statement in the migration
- The node variable contains the pg_query AST for that statement (a nested map)
- The config variable exposes the current diesel-guard.toml settings (e.g., config.postgres_version)
- The ctx variable exposes per-migration metadata (e.g., ctx.run_in_transaction)
- Scripts match on a specific node type: let stmt = node.IndexStmt;
- If the node doesn’t match, node.IndexStmt returns () — early-return with if stmt == () { return; }
- Return () for no violation, a map for one, or an array of maps for multiple
- Map keys: operation, problem, safe_alternative (all required strings)
The config Variable
config gives scripts access to the user’s configuration. Use it to make version-aware checks:
// Only flag this on Postgres < 14
if config.postgres_version != () && config.postgres_version >= 14 { return; }
Available fields:
| Field | Type | Description |
|---|---|---|
config.postgres_version | integer or () | Target PG major version, or () if unset |
config.check_down | bool | Whether down migrations are checked |
config.disable_checks | array | Check names that are disabled |
The ctx Variable
ctx gives scripts access to per-migration metadata extracted by the framework adapter. Use it to condition on whether a migration runs inside a transaction:
// Flag CONCURRENTLY inside a transaction — PostgreSQL will error at runtime
if stmt.concurrent && ctx.run_in_transaction {
let hint = if ctx.no_transaction_hint != "" {
ctx.no_transaction_hint
} else {
"Run this migration outside a transaction block."
};
return #{
operation: "INDEX CONCURRENTLY inside a transaction",
problem: "CREATE INDEX CONCURRENTLY cannot run inside a transaction block.",
safe_alternative: hint
};
}
Available fields:
| Field | Type | Description |
|---|---|---|
ctx.run_in_transaction | bool | Whether the migration runs inside a transaction. Defaults to true. |
ctx.no_transaction_hint | string | Framework-specific instruction for opting out of transactions (empty string when unavailable). |
The hint is framework-specific:
- Diesel: "Add run_in_transaction = false to the migration's metadata.toml."
- SQLx: "Add -- no-transaction at the top of the migration file."
- check_sql / no framework: empty string — provide your own fallback.
Using dump-ast
Use dump-ast to inspect the AST for any SQL statement. This is the easiest way to discover which fields are available:
diesel-guard dump-ast --sql "CREATE INDEX idx_users_email ON users(email);"
Key fields and how they map to Rhai (using IndexStmt as an example):
| JSON path | Rhai access | Description |
|---|---|---|
IndexStmt.concurrent | stmt.concurrent | Whether CONCURRENTLY was specified |
IndexStmt.idxname | stmt.idxname | Index name |
IndexStmt.unique | stmt.unique | Whether it’s a UNIQUE index |
IndexStmt.relation.relname | stmt.relation.relname | Table name |
IndexStmt.index_params | stmt.index_params | Array of indexed columns |
Return Values
No violation — return () (either explicitly or by reaching the end of the script):
let stmt = node.IndexStmt;
if stmt == () { return; }
if stmt.concurrent {
return; // All good, CONCURRENTLY is used
}
Single violation — return a map with operation, problem, and safe_alternative:
#{
operation: "INDEX without CONCURRENTLY: idx_users_email",
problem: "Creating index without CONCURRENTLY blocks writes on the table.",
safe_alternative: "Use CREATE INDEX CONCURRENTLY."
}
Multiple violations — return an array of maps:
let violations = [];
for rel in stmt.relations {
violations.push(#{
operation: "TRUNCATE: " + rel.node.RangeVar.relname,
problem: "TRUNCATE acquires ACCESS EXCLUSIVE lock.",
safe_alternative: "Use batched DELETE instead."
});
}
violations
Common AST Node Types
| SQL | Node Type | Key Fields |
|---|---|---|
CREATE TABLE | CreateStmt | relation.relname, relation.relpersistence, table_elts |
CREATE INDEX | IndexStmt | idxname, concurrent, unique, relation, index_params |
ALTER TABLE | AlterTableStmt | relation, cmds (array of AlterTableCmd) |
DROP TABLE/INDEX/... | DropStmt | remove_type, objects, missing_ok, behavior |
ALTER TABLE RENAME | RenameStmt | rename_type, relation, subname, newname |
TRUNCATE | TruncateStmt | relations (array of Node-wrapped RangeVar) |
CREATE EXTENSION | CreateExtensionStmt | extname, if_not_exists |
REINDEX | ReindexStmt | kind, concurrent, relation |
Note: Column definitions (ColumnDef) are nested inside CreateStmt.table_elts and AlterTableCmd.def, not top-level nodes. Use dump-ast to explore the nesting for ALTER TABLE ADD COLUMN statements.
pg:: Constants
Protobuf enum fields like DropStmt.remove_type and AlterTableCmd.subtype are integer values. Instead of hard-coding magic numbers, use the built-in pg:: module:
// Instead of: stmt.remove_type == 42
if stmt.remove_type == pg::OBJECT_TABLE { ... }
ObjectType
Used by DropStmt.remove_type, RenameStmt.rename_type, etc.
| Constant | Description |
|---|---|
pg::OBJECT_INDEX | Index |
pg::OBJECT_TABLE | Table |
pg::OBJECT_COLUMN | Column |
pg::OBJECT_DATABASE | Database |
pg::OBJECT_SCHEMA | Schema |
pg::OBJECT_SEQUENCE | Sequence |
pg::OBJECT_VIEW | View |
pg::OBJECT_FUNCTION | Function |
pg::OBJECT_EXTENSION | Extension |
pg::OBJECT_TRIGGER | Trigger |
pg::OBJECT_TYPE | Type |
AlterTableType
Used by AlterTableCmd.subtype.
| Constant | Description |
|---|---|
pg::AT_ADD_COLUMN | ADD COLUMN |
pg::AT_COLUMN_DEFAULT | SET DEFAULT / DROP DEFAULT |
pg::AT_DROP_NOT_NULL | DROP NOT NULL |
pg::AT_SET_NOT_NULL | SET NOT NULL |
pg::AT_DROP_COLUMN | DROP COLUMN |
pg::AT_ALTER_COLUMN_TYPE | ALTER COLUMN TYPE |
pg::AT_ADD_CONSTRAINT | ADD CONSTRAINT |
pg::AT_DROP_CONSTRAINT | DROP CONSTRAINT |
pg::AT_VALIDATE_CONSTRAINT | VALIDATE CONSTRAINT |
ConstrType
Used by Constraint.contype.
| Constant | Description |
|---|---|
pg::CONSTR_NOTNULL | NOT NULL |
pg::CONSTR_DEFAULT | DEFAULT |
pg::CONSTR_IDENTITY | IDENTITY |
pg::CONSTR_GENERATED | GENERATED |
pg::CONSTR_CHECK | CHECK |
pg::CONSTR_PRIMARY | PRIMARY KEY |
pg::CONSTR_UNIQUE | UNIQUE |
pg::CONSTR_EXCLUSION | EXCLUSION |
pg::CONSTR_FOREIGN | FOREIGN KEY |
DropBehavior
Used by DropStmt.behavior.
| Constant | Description |
|---|---|
pg::DROP_RESTRICT | RESTRICT (default) |
pg::DROP_CASCADE | CASCADE |
Examples
The examples/ directory contains ready-to-use scripts covering common patterns — naming conventions, banned operations, version-aware checks, and more. Browse them to get started or use as templates for your own checks.
Disabling Custom Checks
Custom checks can be disabled in diesel-guard.toml using the filename stem as the check name:
# Disables checks/require_concurrent_index.rhai
disable_checks = ["require_concurrent_index"]
safety-assured blocks also suppress custom check violations — any SQL inside a safety-assured block is skipped by all checks, both built-in and custom.
Debugging Tips
- Inspect the AST: Use diesel-guard dump-ast --sql "..." to see exactly what fields are available
- Runtime errors: Invalid field access or type errors produce stderr warnings — the check is skipped but other checks continue
- Compilation errors: Syntax errors in .rhai files are reported at startup
- Infinite loops: Scripts that exceed the operations limit are terminated safely with a warning
Safety-Assured Blocks
When you’ve manually verified an operation is safe, use safety-assured comment blocks to bypass checks:
-- safety-assured:start
ALTER TABLE users DROP COLUMN deprecated_column;
ALTER TABLE posts DROP COLUMN old_field;
-- safety-assured:end
All statements between the start and end markers are skipped by all checks — both built-in and custom.
Multiple Blocks
-- safety-assured:start
ALTER TABLE users DROP COLUMN email;
-- safety-assured:end
-- This will be checked normally
CREATE INDEX users_email_idx ON users(email);
-- safety-assured:start
ALTER TABLE posts DROP COLUMN body;
-- safety-assured:end
When to Use Safety-Assured
Only use when you’ve taken proper precautions:
- For DROP COLUMN:
  - Stopped reading/writing the column in application code
  - Deployed those changes to production
  - Verified no references remain in your codebase
- For other operations, state your reasoning in a comment inside the block:

-- safety-assured:start
-- Safe because: table is empty, deployed in maintenance window
ALTER TABLE new_table ADD COLUMN status TEXT DEFAULT 'pending';
-- safety-assured:end
Error Handling
diesel-guard will error if blocks are mismatched:
Error: Unclosed 'safety-assured:start' at line 1
Rules:
- Blocks cannot be nested — a second `start` before `end` is an error
- Unclosed blocks are errors
- Unmatched `end` directives (without a preceding `start`) are errors
- The directives are case-insensitive
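For example, an end directive with no preceding start fails validation:

```sql
ALTER TABLE users DROP COLUMN email;
-- safety-assured:end   (error: no preceding safety-assured:start)
```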
CI/CD Integration
GitHub Actions
Option 1: GitHub Action (Recommended)
Use the official GitHub Action:
name: Check Migrations
on: [pull_request]
jobs:
  check-migrations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: ayarotsky/diesel-guard-action@v1
        with:
          path: migrations/
This will:
- Install diesel-guard
- Check your migrations for unsafe patterns
- Display detailed violation reports in workflow logs
- Fail the workflow if violations are detected
Inputs:
| Input | Description | Default |
|---|---|---|
| `path` | Path to migrations directory or a single .sql file | `migrations/` |
| `version` | diesel-guard binary version to install (e.g. 0.9.0) | `latest` |
Pin the diesel-guard binary version (recommended for reproducible builds):
- uses: ayarotsky/diesel-guard-action@v1
  with:
    version: '0.10.0'
Option 2: Manual Installation
For more control or custom workflows:
name: Check Migrations
on: [pull_request]
jobs:
  check-migrations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust toolchain
        uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
          toolchain: stable
      - name: Install diesel-guard
        run: cargo install diesel-guard
      - name: Check DB migrations
        run: diesel-guard check
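For reproducible builds with manual installation, you can pin the crate version as well, since `cargo install` accepts a `--version` flag (0.10.0 mirrors the earlier pinning example; adjust to the release you target):

```yaml
- name: Install diesel-guard
  run: cargo install diesel-guard --version 0.10.0
```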