Postgres Partitioning for Time-Series Data in 2026: Range, List, and Hash Compared
A 2026 guide to Postgres partitioning for time-series workloads — range vs list vs hash strategies, automation with pg_partman, and the traps to avoid.
Your events table has 1.8 billion rows. A query that used to return in 40ms now takes eight seconds, and EXPLAIN ANALYZE shows a sequential scan that reads half the table because the planner can no longer trust the statistics. You’ve tuned indexes, vacuumed, raised work_mem, and considered sharding. Before you reach for the nuclear options, there’s a feature Postgres has quietly matured across versions 11 through 17 that was built exactly for this shape of problem: Postgres partitioning for time-series data.
This piece walks through when partitioning actually pays off in 2026, the practical differences between range, list, and hash partitioning, how pg_partman automates the lifecycle, and the four traps that bite teams rolling their own partitioning for the first time.
TL;DR
- Postgres partitioning splits a logical table into physical child tables; the planner prunes irrelevant partitions at query time.
- Range partitioning on a timestamp column is the default answer for time-series workloads — events, logs, metrics, orders.
- List partitioning fits when rows cleanly bucket by categorical value (tenant ID, region).
- Hash partitioning trades pruning for balanced write distribution — use it when you can’t partition on a time or categorical column.
- Use pg_partman for automated creation and retention of time-based partitions — do not hand-roll this long-term.
- The big wins are query pruning, faster VACUUM, and bulk DROP instead of DELETE. The big traps are misaligned primary keys, foreign keys across partitions, and forgetting to partition-prune in every query.
Deep Dive: When Partitioning Actually Pays Off
The Rows-vs-Workload Question
Raw row count is a weak signal. A 500M-row table with queries that always filter by primary key can run fine forever. A 50M-row table with analytical queries that scan ranges of a timestamp column is already a candidate.
The clearer signals that Postgres partitioning will help:
- Queries filter on a natural bucketing column (most often created_at or event_time).
- Old data is read rarely but present in every scan — you’re paying I/O for cold rows on every query.
- VACUUM takes hours on the table and blocks autovacuum cycles elsewhere.
- You need bulk deletion of old data without a DELETE that rewrites indexes.
Our database design patterns SQL NoSQL piece covers the schema-level question; partitioning enters after that decision, not instead of it.
Range Partitioning (The Common Case)
CREATE TABLE events (
id bigserial,
user_id bigint NOT NULL,
event_time timestamptz NOT NULL,
payload jsonb
) PARTITION BY RANGE (event_time);
CREATE TABLE events_2026_04 PARTITION OF events
FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');
Any query with WHERE event_time >= '2026-04-01' AND event_time < '2026-05-01' scans only the April partition. Queries that omit the partition key fall back to scanning all partitions — a silent performance cliff.
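The quickest way to catch that cliff is to check the plan directly. A sketch, assuming the events table and events_2026_04 partition defined above:

```sql
-- Prunes: only events_2026_04 should appear in the plan.
EXPLAIN (COSTS OFF)
SELECT count(*) FROM events
WHERE event_time >= '2026-04-01' AND event_time < '2026-05-01';

-- Does not prune: every partition is scanned, because the planner
-- cannot map a user_id filter to any time range.
EXPLAIN (COSTS OFF)
SELECT count(*) FROM events WHERE user_id = 42;
```

Running EXPLAIN on every hot query after partitioning is the cheapest insurance against the silent full-scan fallback.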
List and Hash
List partitioning buckets by known discrete values: PARTITION BY LIST (region) with partitions per region. Natural fit for multi-tenant SaaS with a tenant ID that is hot in every query.
Hash partitioning splits evenly across N partitions by hash of the key: PARTITION BY HASH (user_id). Use when the bucketing column has no natural ranges or lists, but you still want to break a monolithic table into smaller physical files. Hash partitioning does not help with pruning on arbitrary filters — every partition is scanned unless the query includes the exact partition key.
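For concreteness, here is what both look like as DDL. The tenants and sessions tables are hypothetical examples, not part of the schema above:

```sql
-- List: one partition per known set of region values.
CREATE TABLE tenants (
  tenant_id bigint NOT NULL,
  region    text   NOT NULL,
  data      jsonb
) PARTITION BY LIST (region);

CREATE TABLE tenants_us PARTITION OF tenants
  FOR VALUES IN ('us-east', 'us-west');
CREATE TABLE tenants_eu PARTITION OF tenants
  FOR VALUES IN ('eu-central');

-- Hash: four evenly sized buckets keyed on user_id.
CREATE TABLE sessions (
  user_id    bigint      NOT NULL,
  started_at timestamptz NOT NULL
) PARTITION BY HASH (user_id);

CREATE TABLE sessions_0 PARTITION OF sessions
  FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE sessions_1 PARTITION OF sessions
  FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE sessions_2 PARTITION OF sessions
  FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE sessions_3 PARTITION OF sessions
  FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```

Note that a row arriving with a region value no list partition covers will error unless you also create a DEFAULT partition.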
Primary Key and Index Constraints
A partitioned table’s primary key must include the partition key. If your events table has a simple id primary key today, partitioning on event_time forces it to (id, event_time). This is the single biggest migration gotcha and why hand-rolling partitioning in production is a minefield.
Unique constraints have the same rule. Plan the key shape up front.
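In concrete terms, the events table from earlier has to carry a composite key like this (a sketch; note that id alone no longer guarantees uniqueness across partitions, only the pair does):

```sql
CREATE TABLE events (
  id         bigint GENERATED ALWAYS AS IDENTITY,
  user_id    bigint      NOT NULL,
  event_time timestamptz NOT NULL,
  payload    jsonb,
  -- The partition key must be part of the primary key.
  PRIMARY KEY (id, event_time)
) PARTITION BY RANGE (event_time);
```

If application code relies on looking rows up by id alone, it will still work, but without the partition key in the WHERE clause that lookup scans every partition's index.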
Pros & Cons
| | Postgres Partitioning | Unpartitioned Table |
|---|---|---|
| Query pruning on partition key | Automatic | N/A |
| VACUUM duration | Per-partition, parallel-friendly | Monolithic |
| Bulk-delete old data | DROP TABLE in ms | DELETE rewrites indexes |
| Schema migration complexity | Higher — constraint rules | Simple |
| Cross-partition queries | Full scan unless key pruned | Always scanned |
| Index size | Smaller per partition | One large index |
| Operational complexity | Higher — lifecycle automation needed | Lower |
The trade: partitioning buys pruning and operational leverage at the cost of schema rigidity. For any table where data naturally ages out or where analytics dominate, the trade is worth making.
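The bulk-delete row in the table above deserves a concrete shape. A retention sketch, assuming a January 2025 partition named events_2025_01:

```sql
-- Detach first so the drop needs only a brief lock on the parent;
-- CONCURRENTLY requires Postgres 14 or later.
ALTER TABLE events DETACH PARTITION events_2025_01 CONCURRENTLY;

-- The detached table can now be archived, dumped, or simply dropped.
DROP TABLE events_2025_01;
```

Compare that with DELETE FROM events WHERE event_time < '2025-02-01', which touches every index entry for the deleted rows and leaves bloat for VACUUM to reclaim.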
Who Should Use This
Reach for Postgres partitioning when:
- You have a time-series workload (events, logs, metrics, orders, audit trails) with retention rules that delete or archive old data.
- VACUUM is falling behind on a single large table and blocking autovacuum for the rest of the database.
- You want to tier storage by partition — recent partitions on fast SSD, older partitions on cheaper storage via tablespaces.
- You’re already using patterns like Redis caching to reduce hot-path load — see our Redis caching strategies for backend guide — and now need to fix the cold-read problem at the database layer.
Don’t partition yet if:
- Your table is under 100M rows and queries reliably use selective indexes.
- Your query patterns do not filter on a natural bucketing column. Hash partitioning without selective filters is mostly operational overhead.
- You have heavy foreign key relationships into the table. Cross-partition FKs have historically been a rough edge and require careful planning.
pg_partman: Do Not Hand-Roll Lifecycle
The pg_partman extension automates monthly/daily partition creation, retention, and default-partition handling. In a production setup:
SELECT partman.create_parent(
p_parent_table => 'public.events',
p_control => 'event_time',
p_type => 'range',
    p_interval => '1 day',
p_premake => 7
);
Set a cron or pg_cron job to run partman.run_maintenance() nightly. Forgetting this is the canonical production incident — queries start landing in the default partition and performance degrades silently.
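With pg_cron, the schedule can live inside the database itself. A sketch, assuming pg_cron is installed in the same database as pg_partman (pg_partman also ships a procedure form, run_maintenance_proc(), which its docs recommend for background jobs):

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Nightly at 01:00: create upcoming partitions and apply any
-- configured retention policy.
SELECT cron.schedule(
  'partman-maintenance',
  '0 1 * * *',
  $$CALL partman.run_maintenance_proc()$$
);
```

Whatever the scheduler, alert on the job failing — a silently skipped run is exactly how rows start landing in the default partition.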
FAQ
What Postgres version do I need?
Declarative partitioning landed in Postgres 10 and matured meaningfully through 15 and 16. On 16 or 17 you get logical replication of partitioned tables, parallel scans within partitions, and improved planner pruning. Below 13, upgrade before you partition — older versions have sharp edges around partition-wise joins.
Does partitioning help write throughput?
Indirectly. Each partition has smaller indexes, so index maintenance on inserts is cheaper. But raw insert throughput to a single partition is the same as to an unpartitioned table. For write-heavy workloads, partitioning helps more by keeping VACUUM tractable than by accelerating individual inserts.
How many partitions is too many?
Planner overhead grows with partition count. Under 1,000 partitions, modern Postgres handles it fine. Between 1,000 and 10,000, you start to see planner slowdowns on some workloads. Above 10,000, subpartitioning or aggregating old partitions is the escape hatch.
Can I change the partition key later?
Not without rebuilding the table. The migration is painful: create a new partitioned table with the desired key, copy data in, swap names. Plan the partition key during design, not after.
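The rebuild follows a standard shape. A sketch only — on a live system the copy step needs batching or a logical-replication cutover rather than one giant INSERT:

```sql
-- New parent with the desired partition key.
CREATE TABLE events_new (LIKE events INCLUDING DEFAULTS)
  PARTITION BY RANGE (event_time);

-- ...create partitions covering the existing data range...

-- Copy, then swap names in one transaction.
INSERT INTO events_new SELECT * FROM events;

BEGIN;
ALTER TABLE events RENAME TO events_old;
ALTER TABLE events_new RENAME TO events;
COMMIT;
```

Keep events_old around until the new table has survived a full query cycle, then drop it.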
Does partitioning work with logical replication?
Yes, on recent versions. Initial support arrived in Postgres 13, and a long tail of edge cases was tightened through 16. For replicating to a read replica or downstream analytics store, this works.
Bottom Line
Postgres partitioning for time-series data is the right tool for tables where time is the natural axis and retention matters. Range partition by timestamp with pg_partman handling the lifecycle, plan the primary key to include the partition column from day one, and verify that every hot query actually filters on the partition key. Done right, it turns a table that was growing into a scaling problem into one that vacuums, drops, and plans faster every month.
Product recommendations are based on independent research and testing. We may earn a commission through affiliate links at no extra cost to you.