You’re three months into a new project. You have a PostgreSQL database for your core data, Redis for caching, Elasticsearch for search, MongoDB for “flexible” documents, and TimescaleDB for metrics. Five different systems, five different failure modes, five different backup strategies. Your on-call rotation is a nightmare.

Here’s the thing - Postgres alone could have handled four of those five jobs. Maybe all five.

I’m not saying Postgres is the only database you’ll ever need. I’m saying it should be the last database you add to your stack, not the first one you try to replace. There’s a reason it keeps winning.

What Postgres Handles Natively

Most teams underestimate what modern Postgres can do out of the box.

Relational OLTP - This is the obvious one. ACID transactions, complex joins, foreign keys, constraints. If your data has relationships - and it almost always does - Postgres is the natural home for it. The query planner is remarkably sophisticated once you learn how to work with it rather than against it.
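The transactional core is easy to take for granted. A minimal sketch, assuming hypothetical `inventory` and `orders` tables, of what "ACID with constraints" buys you:

```sql
-- Hypothetical schema: an order that must stay consistent with stock.
BEGIN;

UPDATE inventory
   SET quantity = quantity - 1
 WHERE product_id = 42
   AND quantity > 0;          -- refuse to oversell; the app checks the row count

INSERT INTO orders (user_id, product_id, placed_at)
VALUES (7, 42, now());

COMMIT;  -- both changes become visible atomically, or neither does
```

If the `UPDATE` touches zero rows, the application rolls back instead of committing. No saga, no compensation logic, no eventual consistency.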

JSON and document storage - jsonb in Postgres isn’t a toy. It supports GIN indexes, partial indexes on JSON paths, and full query capabilities. You get the schema flexibility of a document store with the transactional guarantees of a relational database. For most teams that think they need MongoDB, jsonb already covers the use case. The document model sounds appealing until you hit the antipatterns that show up at scale.
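A sketch of what that looks like in practice, assuming a hypothetical `events` table:

```sql
-- Hypothetical events table: flexible payloads, relational guarantees.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- GIN index makes containment queries (@>) fast.
CREATE INDEX events_payload_gin ON events USING gin (payload);

-- Partial index on one JSON path, scoped to a single event type.
CREATE INDEX events_checkout_email
    ON events ((payload->>'email'))
    WHERE payload->>'type' = 'checkout';

-- Query documents like a document store, inside transactions.
SELECT payload->>'email'
  FROM events
 WHERE payload @> '{"type": "checkout", "status": "paid"}';
```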

Full-text search - tsvector, tsquery, ranking, highlighting, language-aware stemming. Is it Elasticsearch? No. But for 80% of search use cases - product search, blog search, filtering with keyword matching - it’s more than enough. You avoid an entire separate cluster, a sync pipeline, and the operational overhead that comes with it.
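Here is a minimal version of that setup, assuming a hypothetical `articles` table and Postgres 12+ for the generated column:

```sql
-- Hypothetical articles table; the tsvector is maintained automatically.
CREATE TABLE articles (
    id     bigserial PRIMARY KEY,
    title  text NOT NULL,
    body   text NOT NULL,
    search tsvector GENERATED ALWAYS AS (
        setweight(to_tsvector('english', title), 'A') ||
        setweight(to_tsvector('english', body),  'B')
    ) STORED
);

CREATE INDEX articles_search_gin ON articles USING gin (search);

-- Ranked, stemmed search: 'databases' also matches 'database'.
SELECT id, title,
       ts_rank(search, websearch_to_tsquery('english', 'postgres search')) AS rank
  FROM articles
 WHERE search @@ websearch_to_tsquery('english', 'postgres search')
 ORDER BY rank DESC
 LIMIT 20;
```

`websearch_to_tsquery` accepts Google-style input (quoted phrases, `OR`, `-exclusions`), which is usually what a search box actually needs.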

Time-series data - With extensions like TimescaleDB (which runs inside Postgres), you get time-series capabilities without leaving the Postgres ecosystem. Hypertables, continuous aggregates, compression - all accessible through standard SQL.
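A sketch of the TimescaleDB workflow, assuming the extension is installed and a hypothetical `metrics` table:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
    time   timestamptz NOT NULL,
    device text        NOT NULL,
    value  double precision
);

-- Turn the table into a hypertable, partitioned by time behind the scenes.
SELECT create_hypertable('metrics', 'time');

-- Continuous aggregate: hourly averages, maintained incrementally.
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device,
       avg(value) AS avg_value
  FROM metrics
 GROUP BY time_bucket('1 hour', time), device;
```

Inserts and queries against `metrics` remain plain SQL; the chunking is invisible to the application.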

Geospatial queries - PostGIS turns Postgres into a full GIS database. Distance calculations, polygon containment, spatial indexing. Companies run entire mapping platforms on it.
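A minimal "stores near me" sketch, assuming PostGIS is installed and a hypothetical `stores` table:

```sql
CREATE EXTENSION IF NOT EXISTS postgis;

-- Hypothetical stores table with a geography point per location.
CREATE TABLE stores (
    id       bigserial PRIMARY KEY,
    name     text,
    location geography(Point, 4326)
);

-- Spatial index for fast distance and containment queries.
CREATE INDEX stores_location_gist ON stores USING gist (location);

-- All stores within 5 km of a given point (longitude, latitude).
SELECT name
  FROM stores
 WHERE ST_DWithin(location,
                  ST_SetSRID(ST_MakePoint(-122.4194, 37.7749), 4326)::geography,
                  5000);  -- metres, because the operands are geography
```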

Job queues - SKIP LOCKED and LISTEN/NOTIFY give you a lightweight job queue without Kafka or RabbitMQ. For most applications doing under 10,000 jobs per minute, this is plenty.
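The core of that queue is one statement. A sketch, assuming a hypothetical `jobs` table:

```sql
-- Minimal job table; 'pending' rows are up for grabs.
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    status  text NOT NULL DEFAULT 'pending'
);

-- Worker loop body: claim one job without blocking other workers.
WITH next_job AS (
    SELECT id
      FROM jobs
     WHERE status = 'pending'
     ORDER BY id
     FOR UPDATE SKIP LOCKED   -- rows locked by other workers are skipped, not waited on
     LIMIT 1
)
UPDATE jobs
   SET status = 'running'
  FROM next_job
 WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;
```

Pair this with `NOTIFY` on insert and `LISTEN` in the workers, and you avoid tight polling loops entirely.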

The key insight: Postgres isn’t just a relational database anymore. It’s a data platform with a relational core. Every feature you use within it shares the same backup, replication, monitoring, and transaction model.

When to Add Redis on Top

Okay, so Postgres does a lot. But there’s one thing it doesn’t do well: sub-millisecond reads from memory.

If you need a caching layer, session store, or rate limiter, Redis is the right complement. Not a replacement - a complement. Your source of truth stays in Postgres. Redis sits in front as a read-through or write-behind cache.

This pattern works because the failure modes are simple. If Redis goes down, your app gets slower but doesn’t lose data. If Postgres goes down, you have bigger problems regardless.

Understanding Redis internals - clustering, Sentinel, and pipelining - helps you avoid over-engineering the caching layer. Most startups need a single Redis instance, maybe with a replica. That’s it. And if you’re curious how Redis squeezes so much throughput out of a mostly single-threaded architecture (I/O threads handle network reads and writes; command execution stays on one thread), that’s worth understanding before you start scaling it horizontally.

The “Postgres + Redis” combo genuinely handles 95% of startup workloads. Two systems. Two backup strategies. Two things to monitor. That’s a stack you can operate with a small team.

When to Consider NoSQL

The honest answer: less often than you think.

The SQL vs NoSQL decision isn’t really about SQL syntax versus JSON documents. It’s about access patterns. If your data is truly hierarchical, denormalized by design, and you never need cross-entity transactions, a document store might fit. But “might fit” isn’t “must use.”

Real cases where NoSQL earns its place:

  • Write-heavy workloads at extreme scale - Cassandra-style wide column stores for append-heavy time-series or event logging at millions of writes per second
  • Graph traversals - When your queries are “find all connections within 4 degrees,” a graph database outperforms recursive CTEs in Postgres
  • Truly schemaless ingestion - Logging pipelines where the schema changes with every deployment and you never query across fields
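For the graph case, it helps to see what the Postgres side of that comparison looks like. A sketch over a hypothetical `friendships(user_id, friend_id)` edge table:

```sql
-- Everyone reachable within 4 hops of user 1, via a recursive CTE.
WITH RECURSIVE reachable(user_id, depth) AS (
    SELECT friend_id, 1
      FROM friendships
     WHERE user_id = 1
    UNION                          -- deduplicates and bounds cycle revisits
    SELECT f.friend_id, r.depth + 1
      FROM friendships f
      JOIN reachable r ON f.user_id = r.user_id
     WHERE r.depth < 4
)
SELECT DISTINCT user_id FROM reachable;
```

This works, and for shallow traversals it works well. The problem is fan-out: each extra degree multiplies the join work, which is exactly where index-free adjacency in a graph database pays off.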

Notice that none of these are “I have a web app with users, products, and orders.” That’s relational data. Use Postgres.

When to Actually Shard

Sharding is the nuclear option, and most teams reach for it too early. Before you shard, you should have already done the following.

First, optimize your queries. A single query rewrite can sometimes cut database load by 90%. Second, make sure you’re using the right index types - BRIN indexes for time-ordered data, GIN for arrays and JSONB, partial indexes for filtered queries. Third, use read replicas for read-heavy workloads.
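The index choices above, sketched against hypothetical tables:

```sql
-- BRIN: a tiny index for naturally time-ordered, append-mostly data.
CREATE INDEX events_created_brin ON events USING brin (created_at);

-- GIN: arrays and jsonb containment queries.
CREATE INDEX posts_tags_gin ON posts USING gin (tags);

-- Partial index: only the rows the hot query actually touches.
CREATE INDEX orders_unshipped ON orders (created_at)
    WHERE shipped_at IS NULL;
```

A partial index like the last one can be a fraction of the size of a full index, because shipped orders - usually the vast majority - never enter it.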

Only after all of that should you think about sharding. And when you do, have a real understanding of your ops/sec and where the actual bottleneck sits. Run the back-of-envelope calculations before making architectural commitments. Most Postgres instances handle tens of thousands of transactions per second on modern hardware. If you’re not there yet, you have a query problem, not a database problem.

When Postgres Genuinely Isn’t Enough

I said “until it isn’t” in the title, and I meant it. Here’s when you should genuinely look elsewhere:

  • Sub-millisecond p99 at millions of QPS - You need an in-memory data store. Redis, Memcached, or a purpose-built cache.
  • Multi-region active-active writes - Postgres streaming replication is single-primary. CockroachDB, Spanner, or YugabyteDB handle distributed consensus natively.
  • Petabyte-scale analytics - OLAP workloads at massive scale need columnar storage. ClickHouse, BigQuery, or Redshift will outperform Postgres by orders of magnitude.
  • Embedded/edge deployments - If your database needs to run on a phone or embedded device, SQLite is the right tool and it’s more capable in production than most people realize.
  • Real-time event streaming - Kafka handles durable, ordered, replayable event streams in ways that LISTEN/NOTIFY simply can’t.

The Practical Default

Start with Postgres. Learn it deeply. Use its extensions. Add Redis when you need a caching layer. Question every other addition aggressively.

This isn’t about being a Postgres fanatic. It’s about operational simplicity. Every database you add to your stack is another system that can fail at 3 AM, another backup you need to verify, another technology your team needs to understand.

Two databases, well understood and well operated, will outperform five databases duct-taped together by a team that’s spread thin across all of them.

Postgres is all you need. Until it isn’t. And you’ll know when that day comes - because you’ll have exhausted every option within Postgres first.