Your app is slow. Users are complaining. You suspect it’s the database. Here’s how to find the problem and fix it.
Step 1: Find which queries are slow
Before you can fix slow queries, you need to know which ones they are. PostgreSQL’s pg_stat_statements extension tracks query performance across all executions.
Enable pg_stat_statements
-- Check if it's already enabled
SELECT * FROM pg_extension WHERE extname = 'pg_stat_statements';
-- Enable it (requires superuser; the library must be preloaded,
-- which needs a full restart — a reload is not enough)
-- In postgresql.conf, add:
--   shared_preload_libraries = 'pg_stat_statements'
-- Then: sudo systemctl restart postgresql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Find the slowest queries
-- Top 10 slowest queries by total time
SELECT
round(total_exec_time::numeric, 2) AS total_ms,
calls,
round(mean_exec_time::numeric, 2) AS avg_ms,
round((total_exec_time / sum(total_exec_time) OVER ()) * 100, 2) AS pct,
query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
This shows you what’s actually costing the most time in production — not what you guessed.
Step 2: Run EXPLAIN ANALYZE on the problem query
Once you know which query is slow, run EXPLAIN ANALYZE to see the query plan. Note that ANALYZE actually executes the query, so wrap data-modifying statements in BEGIN ... ROLLBACK:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.*, u.name, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending'
ORDER BY o.created_at DESC
LIMIT 50;
Read the output
Sort  (cost=5432.10..5432.23 rows=50) (actual time=234.567..234.589 rows=50 loops=1)
  Sort Key: o.created_at DESC
  Sort Method: top-N heapsort  Memory: 34kB
  ->  Hash Join  (cost=1842.00..5421.00 rows=4484) (actual time=45.123..230.456 rows=4484 loops=1)
        Hash Cond: (o.user_id = u.id)
        ->  Seq Scan on orders o  (cost=0.00..3200.00 rows=4484) (actual time=0.021..180.234 rows=4484 loops=1)
              Filter: ((status)::text = 'pending'::text)
              Rows Removed by Filter: 95516
        ->  Hash  (cost=1200.00..1200.00 rows=50000) (actual time=40.000..40.000 rows=50000 loops=1)
              ->  Seq Scan on users u  (cost=0.00..1200.00 rows=50000) (actual time=0.008..18.000 rows=50000 loops=1)
What to look for:
| Red flag | Meaning |
|---|---|
| Seq Scan on large table | No index — full table scan |
| Rows Removed by Filter: 95000 | Scanning 100k rows to get 4k — needs an index |
| actual rows far from estimated rows | Stale stats — run ANALYZE tablename |
| Nested Loop with high row counts | Join might be missing an index |
| Sort Method: external merge Disk | Sort spilling to disk — increase work_mem |
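If you check plans often, you can automate the triage. Below is a minimal sketch (the patterns are heuristics over the text output above, not a real plan parser — `findRedFlags` is a hypothetical helper, adjust the patterns for your own plans):

```javascript
// Scan EXPLAIN ANALYZE text output for the red flags in the table above.
// Each check is a simple regex heuristic, not a full plan parser.
function findRedFlags(planText) {
  const checks = [
    { pattern: /Seq Scan on \w+/,
      flag: 'sequential scan: possible missing index' },
    { pattern: /Rows Removed by Filter: \d{4,}/,
      flag: 'filter discards many rows: index the filtered column' },
    { pattern: /Sort Method: external merge/,
      flag: 'sort spilled to disk: consider raising work_mem' },
  ];
  // Return the human-readable flag for every pattern that matched.
  return checks.filter(c => c.pattern.test(planText)).map(c => c.flag);
}
```

Feed it the raw text of a plan and it returns a list of matched warnings; an empty list means none of these three heuristics fired.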
Step 3: Fix the most common problems
Missing index — the most common cause
-- The query is doing Seq Scan + Filter on status
-- Fix: add an index
CREATE INDEX CONCURRENTLY ON orders(status);
-- Better: add index on status + created_at (covers the ORDER BY too)
CREATE INDEX CONCURRENTLY ON orders(status, created_at DESC);
-- Now EXPLAIN ANALYZE shows Index Scan instead of Seq Scan
CONCURRENTLY builds the index without locking the table — safe for production.
N+1 queries — the silent killer
This isn’t in one SQL query — it’s in your application code:
// This fires 1 + N queries (one for the orders, then one per order's user)
const orders = (await db.query('SELECT * FROM orders WHERE status = $1', ['pending'])).rows;
for (const order of orders) {
  order.user = (await db.query('SELECT * FROM users WHERE id = $1', [order.user_id])).rows[0];
}
Fix: join at the database level:
SELECT o.*, u.name, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending';
Or use WHERE id IN (...) to batch load:
const userIds = [...new Set(orders.map(o => o.user_id))];
const users = (await db.query('SELECT * FROM users WHERE id = ANY($1)', [userIds])).rows;
const userMap = Object.fromEntries(users.map(u => [u.id, u]));
orders.forEach(o => o.user = userMap[o.user_id]);
Stale statistics — when the plan is just wrong
PostgreSQL’s query planner uses statistics about your data to choose query plans. If the statistics are old, it makes bad decisions.
-- Update statistics for a specific table
ANALYZE orders;
-- Update all tables
ANALYZE;
-- Check when statistics were last updated
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_analyze NULLS FIRST;
Functions in the WHERE clause — when the index can't be used
-- Slow: function on indexed column breaks the index
SELECT * FROM users WHERE LOWER(email) = 'alice@example.com';
-- (index on email can't be used because of LOWER())
-- Fast option 1: functional index
CREATE INDEX ON users(LOWER(email));
-- Fast option 2: a case-insensitive type or collation
-- (the citext extension, or a nondeterministic ICU collation);
-- note that plain ILIKE cannot use a b-tree index on email
-- Fast option 3: store emails lowercase at write time
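Option 3 can be sketched at the application layer. This assumes a hypothetical `normalizeEmail` helper applied on every write path, so a plain b-tree index on `email` works for lookups:

```javascript
// Normalize an email before it is written to the database, so that
// equality lookups never need LOWER() and a plain index on email works.
function normalizeEmail(email) {
  return email.trim().toLowerCase();
}
```

The trade-off: you lose the original casing, which is almost always fine for emails (the domain part is case-insensitive by spec, and in practice the local part is too).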
Slow JOIN — missing index on the foreign key
-- Seq Scan on orders for every user (classic)
SELECT u.name, COUNT(o.id)
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id;
-- Fix: index on the foreign key
CREATE INDEX CONCURRENTLY ON orders(user_id);
Step 4: Check for locks and blocking queries
Sometimes the query itself is fast — it’s just waiting for a lock.
-- Show queries waiting for locks
SELECT
pid,
now() - query_start AS wait_duration,
state,
wait_event_type,
wait_event,
query
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
AND state != 'idle'
ORDER BY wait_duration DESC;
-- Show the full lock dependency chain
SELECT
blocked.pid AS blocked_pid,
blocked.query AS blocked_query,
blocking.pid AS blocking_pid,
blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;
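If one backend is blocked by another that is itself blocked, you may want the whole chain, not just one hop. A minimal sketch, assuming you have already fetched a map of pid → blocking pids (one row per backend, from `pg_blocking_pids`) into the application:

```javascript
// Follow a blocked pid up its chain of blockers.
// `blockers` maps each pid to the array of pids blocking it.
function blockingChain(pid, blockers) {
  const chain = [pid];
  const seen = new Set(chain);
  let current = pid;
  while (blockers[current] && blockers[current].length > 0) {
    const next = blockers[current][0]; // follow the first blocker only
    if (seen.has(next)) break;         // cycle: likely a deadlock
    chain.push(next);
    seen.add(next);
    current = next;
  }
  return chain; // [blocked, blocker, blocker's blocker, ...]
}
```

The last pid in the chain is the root blocker — the one worth cancelling first.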
-- Cancel the blocking query gracefully (substitute the blocking_pid value from above)
SELECT pg_cancel_backend(blocking_pid);
-- Terminate the backend if cancel doesn't work
SELECT pg_terminate_backend(blocking_pid);
Step 5: Look at connection counts
Too many connections can slow everything down — PostgreSQL has overhead per connection.
-- Current connections
SELECT count(*), state FROM pg_stat_activity GROUP BY state;
-- Max connections
SHOW max_connections;
-- If you're close to the limit, you need a connection pooler
-- (PgBouncer is the standard solution)
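For sizing the pool, a widely cited starting point is the rule of thumb from the HikariCP "About Pool Sizing" write-up: roughly twice the core count plus the number of effective disk spindles. A sketch (treat the result as a starting point to benchmark, not a law):

```javascript
// Rule-of-thumb pool size: (physical cores * 2) + effective spindles.
// With SSDs, an effective spindle count of 1 is a common assumption.
function suggestedPoolSize(cores, spindles = 1) {
  return cores * 2 + spindles;
}
```

For example, an 8-core box with one SSD suggests a pool of 17 connections — far below a typical `max_connections` of 100, which is exactly why a pooler in front of PostgreSQL helps.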
Quick diagnostic checklist
-- 1. Top slow queries
SELECT round(mean_exec_time::numeric, 1) AS avg_ms, calls, query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC LIMIT 10;
-- 2. Tables without autovacuum running recently
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE last_autovacuum < NOW() - INTERVAL '1 day'
OR last_autovacuum IS NULL;
-- 3. Table sizes (are any unexpectedly huge?)
SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
-- 4. Unused indexes (candidates for removal)
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
-- 5. Cache hit ratio (should be > 99%)
SELECT
round(sum(heap_blks_hit) * 100.0 / (sum(heap_blks_hit) + sum(heap_blks_read)), 2) AS cache_hit_pct
FROM pg_statio_user_tables;
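The ratio in query 5 is simple enough to recompute in the application, e.g. for a dashboard. A sketch mirroring the SQL above (`cacheHitPct` is a hypothetical helper; it guards the 0/0 case that the raw SQL would turn into a division-by-zero error):

```javascript
// Cache hit percentage from pg_statio totals: hits vs. disk reads.
// Returns null when there has been no block activity yet (0/0).
function cacheHitPct(heapBlksHit, heapBlksRead) {
  const total = heapBlksHit + heapBlksRead;
  if (total === 0) return null;
  // Round to two decimals, matching round(..., 2) in the SQL.
  return Math.round((heapBlksHit * 100 / total) * 100) / 100;
}
```

Anything persistently below about 99% on a read-heavy workload suggests the working set no longer fits in shared_buffers.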
The process in one sentence
Find the slow query with pg_stat_statements → understand why it’s slow with EXPLAIN ANALYZE → add an index, fix an N+1, or rewrite the query → verify with EXPLAIN ANALYZE again.
Most slow queries have one of four causes: missing index, N+1 pattern, stale statistics, or lock contention. Work through them in that order.
Keep these open while you debug: PostgreSQL Cheat Sheet | Run PostgreSQL Locally with Docker