Most teams reach for MongoDB because they want schema flexibility - the classic SQL vs NoSQL trade-off. They get it - but they also get eventual consistency gotchas, a separate operational burden, anti-patterns that haunt you at scale, and a query language that feels alien after six months away from it.
Here is the thing: Postgres has had JSONB since 2014. Combined with a handful of extensions that are now production-ready, you can do everything MongoDB does without leaving the relational world.
## What JSONB Actually Gives You
JSONB stores JSON in a decomposed binary format. It is not just text with a JSON label slapped on it. You get:
- GIN indexes on any key path, including nested arrays
- Partial indexes on specific JSON conditions
- Full-text search over JSON values
- Operators like `@>`, `<@`, `#>>`, and `jsonpath` for surgical queries
A query like `WHERE data @> '{"status": "active", "tier": "pro"}'` uses a GIN index and runs in microseconds on millions of rows. This is the same pattern you would write in MongoDB, and the performance is comparable or better.
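As a sketch (the table and column names are illustrative), the index that backs this kind of query looks like:

```sql
-- A documents table with a JSONB payload
CREATE TABLE users (
    id   BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    data JSONB NOT NULL
);

-- GIN index covering containment (@>) queries over the whole document
CREATE INDEX users_data_gin ON users USING GIN (data);

-- Containment query that the GIN index can serve
SELECT id
FROM users
WHERE data @> '{"status": "active", "tier": "pro"}';
```

If you only ever use `@>`, the `jsonb_path_ops` operator class (`USING GIN (data jsonb_path_ops)`) gives a smaller, faster index at the cost of supporting fewer operators.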
## The Extensions That Close the Gap
### pg_jsonschema
Validation was MongoDB’s one legitimate advantage. With pg_jsonschema, you can enforce a JSON Schema against a JSONB column at write time:
```sql
CREATE TABLE events (
    id      UUID DEFAULT gen_random_uuid() PRIMARY KEY,
    payload JSONB NOT NULL,
    CHECK (
        jsonb_matches_schema(
            '{"type": "object", "required": ["event_type", "user_id"]}',
            payload
        )
    )
);
```
You get schema validation with the flexibility to evolve it without migrations that touch every row.
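Continuing the `events` table above, a write that violates the schema is rejected at insert time:

```sql
-- Accepted: contains both required keys
INSERT INTO events (payload)
VALUES ('{"event_type": "signup", "user_id": "u_42"}');

-- Rejected: missing "user_id", so the CHECK constraint fails
-- and the statement errors out
INSERT INTO events (payload)
VALUES ('{"event_type": "signup"}');
```

Loosening the schema later is an `ALTER TABLE ... DROP CONSTRAINT` / `ADD CONSTRAINT` on the check, not a rewrite of existing rows.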
### pg_vectorize
If you are building AI features - and most teams are right now - you need vector search. pg_vectorize wraps pgvector with automatic embedding generation and similarity search:
```sql
SELECT id, title
FROM articles
ORDER BY embedding <-> $1  -- $1 is the query embedding; <-> is L2 distance
LIMIT 10;
```
MongoDB Atlas does this too, but it requires their hosted product. With pgvector, you run this on any Postgres instance.
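A minimal pgvector setup looks like the following (the dimension 1536 is an assumption matching common embedding models, not something the extension requires):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE articles (
    id        BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title     TEXT NOT NULL,
    embedding vector(1536)  -- dimension must match your embedding model
);

-- HNSW index for fast approximate nearest-neighbor search (L2 distance)
CREATE INDEX articles_embedding_hnsw
    ON articles USING hnsw (embedding vector_l2_ops);
```

With the index in place, the `ORDER BY embedding <-> $1 LIMIT 10` query above becomes an approximate index scan instead of a full-table distance computation.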
### Supabase’s pg_graphql
Point pg_graphql at your schema and you get a GraphQL API with zero code. This includes JSONB fields, which are exposed as their actual types. MongoDB Realm/Atlas App Services does the same thing but locks you into their hosting.
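Queries go through the `graphql.resolve` function. The sketch below assumes an `events` table; the `eventsCollection` field name follows pg_graphql's inflection conventions and is illustrative:

```sql
-- Resolve a GraphQL query directly in SQL; pg_graphql exposes each
-- table as a <table>Collection field with Relay-style pagination
SELECT graphql.resolve($$
  query {
    eventsCollection(first: 5) {
      edges {
        node { id payload }
      }
    }
  }
$$);
```

The result comes back as JSONB, so the same API works from any Postgres client, not just an HTTP gateway.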
## The Comparison That Matters
| Feature | MongoDB | Postgres + Extensions |
|---|---|---|
| Flexible schema | Yes | Yes (JSONB) |
| Schema validation | Yes | Yes (pg_jsonschema) |
| Array operations | Yes | Yes (GIN + jsonpath) |
| ACID transactions | Multi-doc since 4.0 | Always |
| Vector search | Atlas only | pgvector, any host |
| Full-text search | Yes | Yes (tsvector) |
| Aggregation pipeline | Yes | Yes (SQL + jsonb_agg) |
| Change streams | Yes | Yes (logical replication) |
| Horizontal sharding | Yes | Citus; pg_partman for partitioning |
MongoDB wins on: horizontal sharding at massive scale and a more ergonomic aggregation syntax for deeply nested documents.
Postgres wins on: everything else, plus you do not operate two databases.
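As a sketch of the change-streams row above: logical replication lets you tail row changes through a replication slot. The slot name is illustrative; the `test_decoding` output plugin and the functions below ship with Postgres:

```sql
-- Create a logical replication slot using the built-in test_decoding plugin
SELECT pg_create_logical_replication_slot('changes_demo', 'test_decoding');

-- Peek at decoded changes without consuming them
SELECT * FROM pg_logical_slot_peek_changes('changes_demo', NULL, NULL);

-- Consume changes; a MongoDB-style change stream is a loop over this
SELECT * FROM pg_logical_slot_get_changes('changes_demo', NULL, NULL);

-- Clean up the slot when done (abandoned slots retain WAL)
SELECT pg_drop_replication_slot('changes_demo');
```

For application code you would typically use a JSON output plugin or a CDC tool on top of the same mechanism rather than `test_decoding`, which is meant for testing.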
## The Operational Reality
Running MongoDB in production means:
- A separate connection pool (`mongos` routers if you are sharded)
- Different backup strategies than your relational data
- Engineers who understand WiredTiger storage engine behavior
- Separate monitoring, alerting, and tuning knobs
If you already run Postgres, adding JSONB is adding a column type. The additional operational surface is close to zero.
## When MongoDB Still Makes Sense
Be honest about the cases where MongoDB is genuinely better:
- You need horizontal write sharding across 10+ nodes - Postgres with Citus can do this, but MongoDB’s native sharding is more mature for document workloads at extreme scale
- Your data is almost entirely documents - If you have no relational data at all and 100% of queries are document-centric, MongoDB’s query planner is more optimized for this shape
- Your team knows Mongoose deeply - Migration costs are real. If you have 200,000 lines of Mongoose code, the calculus changes
These scenarios describe maybe 5% of MongoDB users. The rest chose it for flexibility they could have had in Postgres.
## How to Migrate Off MongoDB
The migration path is less painful than it sounds:
1. Add a JSONB column to a Postgres table for each MongoDB collection
2. Write a one-time script to dump MongoDB documents and insert them as-is
3. Run both systems in parallel, writing to both
4. Verify query parity, then cut over reads
5. Drain writes from MongoDB
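On the Postgres side, the first two steps can be as simple as the following sketch (table name, document shape, and the `orders` collection are illustrative):

```sql
-- One table per MongoDB collection; the document lands in a JSONB column
CREATE TABLE orders (
    id       TEXT PRIMARY KEY,           -- reuse the MongoDB _id as a string
    doc      JSONB NOT NULL,
    migrated TIMESTAMPTZ DEFAULT now()
);

-- The dump script inserts each document as-is; the upsert makes re-runs idempotent
INSERT INTO orders (id, doc)
VALUES ('order-123', '{"status": "paid", "items": [{"sku": "A1", "qty": 2}]}')
ON CONFLICT (id) DO UPDATE SET doc = EXCLUDED.doc;
```

Relational columns can be promoted out of the JSONB later, one field at a time, without blocking the cutover.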
The hardest part is translating aggregation pipelines. MongoDB's `$group`, `$unwind`, and `$lookup` map to SQL `GROUP BY`, `jsonb_array_elements()` (or `unnest()` for native arrays), and `JOIN`, but the mental-model shift takes a few days.
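For example, a `$unwind` + `$group` pipeline that totals quantities per SKU translates as follows, assuming an illustrative `orders` table with a `doc` JSONB column whose `items` key holds an array:

```sql
-- MongoDB: { $unwind: "$items" },
--          { $group: { _id: "$items.sku", total: { $sum: "$items.qty" } } }
SELECT item->>'sku'             AS sku,
       SUM((item->>'qty')::INT) AS total
FROM orders,
     jsonb_array_elements(doc->'items') AS item  -- the $unwind step
GROUP BY item->>'sku';                           -- the $group step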
## Bottom Line
The case for a separate MongoDB deployment has shrunk to near zero for most applications. JSONB gives you document flexibility, pg_jsonschema gives you validation, pgvector handles embeddings, and you keep the ACID guarantees and mature ecosystem that Postgres has accumulated over three decades.
The next time someone proposes adding MongoDB to the stack, ask what it gives you that Postgres with JSONB does not. Nine times out of ten, the answer will be nothing substantive - just familiarity.