Five years ago, the database question was genuinely hard. MySQL for web apps, MongoDB if you wanted flexibility, maybe Cassandra if you needed to scale horizontally. Today, the answer to “what database should I use?” is almost always PostgreSQL - and the exceptions are getting rarer.

This isn’t hype. The 2025 Stack Overflow Developer Survey showed PostgreSQL at 49% usage, up from 36% in 2021, while MySQL dropped from 55% to 41%. MongoDB held steady around 24%. The shift is real and it accelerated fast.

What Changed

PostgreSQL always had a reputation for correctness and standards compliance. What changed is that it became genuinely competitive at everything else too.

The tipping point was JSON. When Postgres added jsonb in version 9.4, it quietly gutted the “flexible schema” argument for MongoDB - a built-in column type now covers document storage for most use cases. You get document-style storage when you need it, with the full power of SQL joins and transactions when you need those too. The @> containment operator, GIN indexes on jsonb columns, and SQL/JSON path queries are legitimately good.
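A minimal sketch of what that looks like in practice - table and column names here are hypothetical, chosen for illustration:

```sql
CREATE TABLE events (
    id   bigserial PRIMARY KEY,
    data jsonb NOT NULL
);

-- A single GIN index accelerates containment and path queries
CREATE INDEX events_data_idx ON events USING GIN (data);

-- Containment: rows whose document includes this key/value pair
SELECT id FROM events WHERE data @> '{"type": "signup"}';

-- SQL/JSON path query (Postgres 12+)
SELECT id FROM events WHERE data @@ '$.user.age > 21';
```

The same table can still participate in ordinary joins and transactions, which is the part MongoDB cannot match.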

Then came the extension ecosystem. Here is what Postgres can do today that would have required separate databases five years ago:

| Capability | PostgreSQL Extension | Standalone Alternative |
|---|---|---|
| Vector search | pgvector | Pinecone, Weaviate |
| Time series | TimescaleDB | InfluxDB |
| Column-oriented analytics | pg_analytics / DuckDB FDW | Redshift, BigQuery |
| Graph queries | Apache AGE | Neo4j |
| Full-text search | built-in tsvector | Elasticsearch |
| Geospatial | PostGIS | specialized GIS DBs |

Each row there is a use case that used to require a separate database. Now it’s an extension you install in five minutes.
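“Install in five minutes” is barely an exaggeration - extensions are enabled per-database with one statement each (assuming the packages are present on the server, which managed providers handle for you):

```sql
-- One statement per capability
CREATE EXTENSION IF NOT EXISTS postgis;   -- geospatial
CREATE EXTENSION IF NOT EXISTS vector;    -- pgvector

-- Full-text search needs no extension at all
SELECT to_tsvector('english', 'PostgreSQL full-text search is built in')
       @@ to_tsquery('search');
```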

The MVCC Advantage

PostgreSQL’s multiversion concurrency control is genuinely better than MySQL’s InnoDB implementation for read-heavy workloads with occasional writes. Readers never block writers, and writers never block readers. Under the default READ COMMITTED isolation level you get consistent reads without taking locks.

MySQL has MVCC too, but InnoDB keeps old row versions in the undo log, where long-running transactions can make history pile up and slow purging. Postgres keeps old versions in the table itself and reclaims them with autovacuum, which is tunable and predictable, and the query planner rewards you for understanding it. This matters at scale - not theoretical scale, but the kind of scale a funded startup hits at 50 million rows.

The foreign key and constraint story is also cleaner. Postgres enforces NOT NULL, CHECK, and FOREIGN KEY constraints reliably, whereas MySQL has historically been more lenient depending on the SQL mode - it silently parsed and ignored CHECK constraints until version 8.0.16. Data integrity by default is not a luxury.
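A quick sketch of what enforcement-by-default looks like (table names hypothetical):

```sql
CREATE TABLE customers (
    id bigserial PRIMARY KEY
);

CREATE TABLE orders (
    id          bigserial PRIMARY KEY,
    customer_id bigint  NOT NULL REFERENCES customers (id),
    quantity    integer NOT NULL CHECK (quantity > 0)
);

-- Both of these fail with an error in Postgres, never a silent coercion:
-- INSERT INTO orders (customer_id, quantity) VALUES (1, 0);    -- CHECK violation
-- INSERT INTO orders (customer_id, quantity) VALUES (999, 5);  -- FK violation
```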

pgvector and the AI Era

If there is one reason Postgres wins in 2026 specifically, it is pgvector. Every AI application needs vector similarity search. The instinct is to reach for Pinecone or Weaviate, spin up another service, add another bill, add another failure point.

With pgvector, you store embeddings in the same database as your application data. A query like this:

```sql
SELECT id, title, embedding <=> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 10;
```

With an HNSW index on the embedding column, that query returns in milliseconds on tables of up to a few million rows. For most applications pgvector is more than sufficient, and you save the $300-500/month a managed vector database would cost.
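Setting that up is a few statements. The vector dimension below is an assumption (1536 matches common embedding models); note that the cosine-distance operator class must match the <=> operator used in the query:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    title     text,
    embedding vector(1536)  -- dimension must match your embedding model
);

-- HNSW index with cosine distance, pairing with the <=> operator
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);
```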

Supabase made this accessible to developers who aren’t database experts. Neon, a serverless Postgres provider, has seen 3x growth in 2025 partly because it ships pgvector out of the box.

Where Postgres Still Loses

Horizontal write scaling is the honest answer. If you need to shard writes across multiple machines, Postgres doesn’t have a clean native story. Citus (Microsoft-acquired) helps, but it adds complexity. CockroachDB and Spanner exist for a reason.

For pure append-only time series at massive ingest rates - think Prometheus-scale metrics - TimescaleDB is the right choice, even though it is itself built on Postgres. The hypertable partitioning is worth it.
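And because TimescaleDB is an extension, adopting it is just another Postgres migration - a sketch with a hypothetical metrics table:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
    time  timestamptz NOT NULL,
    name  text        NOT NULL,
    value double precision
);

-- Convert the plain table into a hypertable,
-- automatically partitioned into chunks on the time column
SELECT create_hypertable('metrics', 'time');
```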

And if your team is deeply invested in MongoDB and your data is genuinely document-shaped with no relational structure anywhere, migration costs are real. Don’t rewrite what works.

The Managed Postgres Ecosystem

The managed options have become excellent. Compare the main players:

| Provider | Strength | Starting Cost |
|---|---|---|
| Supabase | Full-stack BaaS, great DX | Free tier |
| Neon | Serverless, branching | Free tier |
| PlanetScale Postgres | Branching, DX | $29/mo |
| RDS PostgreSQL | AWS integration | ~$15/mo |
| Cloud SQL | GCP integration | ~$10/mo |
| Aiven | Multi-cloud, no lock-in | $19/mo |

Neon’s database branching feature is worth highlighting - you can branch your database like you branch code in git, which makes staging environments trivially cheap.

Why Engineers Default to It

The meta-reason Postgres wins is trust. When you’ve been burned by MongoDB’s document-level atomicity limitations, or MySQL’s utf8 (really utf8mb3) character-set footguns, or DynamoDB’s eventual consistency surprises, you want a database with nearly four decades of battle-testing and a reputation for doing exactly what you told it to do.

PostgreSQL has exactly one job: store and retrieve data correctly. It has never pivoted, never been acquired by a company with conflicting interests, and the core developers have resisted feature creep while welcoming extension points.

That predictability compounds over years of operating a production system.

Bottom Line

PostgreSQL wins in 2026 because it stopped being just a relational database and became a platform. The extension ecosystem covers vector search, time series, full-text, and geospatial - use cases that used to require separate infrastructure. The managed providers (Supabase, Neon, Aiven) removed the operational burden. And pgvector specifically made it the default for AI applications.

Start with Postgres unless you have a specific reason not to. The bar for “specific reason” is high and getting higher.