The conventional wisdom is simple: SQLite is for development and mobile, Postgres is for production. This was true for a long time. It is becoming less true every year.

SQLite processes more queries per day than all other database engines combined. It ships in every phone, every browser, every Mac, and most Linux distributions. But the interesting shift is not about its ubiquity - it is about a growing ecosystem of tools that solve the three problems that kept SQLite out of production servers: replication, backups, and multi-node access.

Why SQLite Was Never “Just a Toy”

SQLite handles more concurrent reads than most teams will ever need. A single SQLite database in WAL (Write-Ahead Logging) mode can serve tens of thousands of read queries per second on modern hardware. Reads do not block writes. Writes do not block reads. The only serialization point is that writes are single-threaded.

-- Enable WAL mode (do this once, it persists)
PRAGMA journal_mode=WAL;

-- Recommended production pragmas
PRAGMA busy_timeout = 5000;
PRAGMA synchronous = NORMAL;
PRAGMA cache_size = -64000;  -- 64MB cache
PRAGMA foreign_keys = ON;
PRAGMA temp_store = MEMORY;

These pragmas matter. Without busy_timeout, concurrent writes will immediately return SQLITE_BUSY instead of retrying. Without WAL mode, readers block writers. The difference between “SQLite is too slow” and “SQLite handles our production load” is often six lines of configuration.
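Applied from application code, the setup above might look like the following Python sketch (using the standard-library sqlite3 module). One detail worth knowing: journal_mode=WAL persists in the database file, but the other pragmas are per-connection and must be re-applied on every new connection.

```python
import sqlite3

def connect(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    # WAL persists in the database file; the remaining pragmas are
    # per-connection and must be set on every new connection.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout = 5000")
    conn.execute("PRAGMA synchronous = NORMAL")
    conn.execute("PRAGMA cache_size = -64000")  # 64MB page cache
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("PRAGMA temp_store = MEMORY")
    return conn

conn = connect("app.db")
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # wal
```

If your framework or ORM manages connections, hook this into its connection factory so no code path can open an unconfigured connection.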

The Replication Problem - Solved Three Ways

The biggest objection to production SQLite has always been: “What happens when the server dies?” With Postgres, you set up streaming replication or use a managed service. SQLite had nothing comparable until recently.

Litestream - Continuous Replication to S3

Litestream is a standalone process that continuously replicates SQLite databases to S3 (or any S3-compatible storage like R2 or MinIO). It works by reading the WAL file and streaming changes in near-real-time.

# litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - url: s3://my-bucket/app.db
        retention: 72h
        sync-interval: 1s

Recovery is straightforward: Litestream restores the database from S3, replays the WAL, and your application starts with data that is at most a few seconds old. This gives you disaster recovery without a running replica.
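The restore path is a single command. Assuming the litestream CLI is installed on the new machine and the bucket from the config above:

```shell
# Pull the latest snapshot plus WAL segments from S3 and
# rebuild the database file, then start the app against it.
litestream restore -o /data/app.db s3://my-bucket/app.db
```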

The limitation: Litestream is backup and restore, not live replication. You cannot read from the replica while the primary is running. If your server dies, you need to restore to a new server and redirect traffic.

LiteFS - FUSE-Based Replication

LiteFS, built by Fly.io, takes a different approach. It uses a FUSE filesystem to intercept SQLite writes and replicate them to other nodes. You get a primary node that handles writes and read replicas that stay in sync within milliseconds.

Primary Node (writes + reads)
    |
    +--- Replica Node 1 (reads only)
    +--- Replica Node 2 (reads only)

LiteFS gives you horizontal read scaling and failover. Primary election is handled by a distributed lease (typically held in Consul) or a static leader configuration. When the primary dies, a replica can acquire the lease and promote itself.

The tradeoff: it requires a FUSE filesystem, which means it runs on Linux only and adds a layer of complexity to your deployment. It also has a single-writer constraint - all writes go through one node.

Turso - Managed SQLite at the Edge

Turso wraps libSQL (a fork of SQLite) in a managed service with edge replication. Your database has a primary location and read replicas in multiple regions. Writes go to the primary, reads are served from the nearest edge.

import { createClient } from "@libsql/client";

const db = createClient({
  url: "libsql://my-db-my-org.turso.io",
  authToken: process.env.TURSO_AUTH_TOKEN,
});

const result = await db.execute({
  sql: "SELECT * FROM users WHERE id = ?",
  args: [userId],
});

Turso gives you the ergonomics of a managed database service with the performance characteristics of SQLite. Read latency from the edge is typically under 5ms. The free tier includes 9GB of storage and 500 databases, which is generous for most applications.

Performance Comparison

Here is what realistic benchmarks show for a typical web application workload (mixed reads and writes, indexed queries, modest data size):

| Metric                      | SQLite (WAL mode) | PostgreSQL   | MySQL        |
| --------------------------- | ----------------- | ------------ | ------------ |
| Simple SELECT by PK         | ~2 µs             | ~200 µs      | ~200 µs      |
| INSERT single row           | ~50 µs            | ~500 µs      | ~400 µs      |
| Complex JOIN (small tables) | ~100 µs           | ~1 ms        | ~1 ms        |
| Concurrent reads            | Excellent         | Excellent    | Excellent    |
| Concurrent writes           | Single-writer     | Multi-writer | Multi-writer |
| Network round-trip          | 0 (embedded)      | 0.5-5 ms     | 0.5-5 ms     |

The numbers are not close for single-node workloads. SQLite eliminates network round-trips entirely because the database is a file on the same machine as your application. Every query to Postgres involves at least one network hop, typically 0.5-2ms on the same cloud, more across regions.

For a page that runs 10 database queries, that is 5-20ms saved in network time alone. This is why SQLite-backed applications feel noticeably faster.

When SQLite Breaks Down

SQLite is not a universal replacement. It fails clearly in specific scenarios:

Write-heavy workloads. SQLite serializes all writes through a single writer. If your application needs thousands of concurrent write transactions per second, you need Postgres. In practice, the threshold is higher than most people assume - SQLite can handle hundreds of write transactions per second with WAL mode - but it is a hard ceiling.

Multi-process writes. If multiple application processes need to write to the same database simultaneously, you will hit locking contention. SQLite's write lock is database-level - a write locks the entire file, not individual rows. This means your architecture needs to funnel writes through a single process or use an external coordination layer.
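The single-funnel pattern is simple to implement. One common shape, sketched here with a queue and a dedicated writer thread (the same idea works across processes with a small HTTP or socket service in front of the writer):

```python
import queue, sqlite3, threading

# Every request handler enqueues writes; one dedicated thread owns
# the write connection, so SQLite never sees competing writers.
write_q: queue.Queue = queue.Queue()

def writer_loop(path: str) -> None:
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA busy_timeout = 5000")
    while True:
        item = write_q.get()
        if item is None:          # shutdown sentinel
            break
        sql, params = item
        conn.execute(sql, params)
        conn.commit()
    conn.close()

# Usage: start the writer, then enqueue from any thread.
path = "app_writes.db"
sqlite3.connect(path).execute(
    "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")
t = threading.Thread(target=writer_loop, args=(path,))
t.start()
for i in range(100):
    write_q.put(("INSERT INTO events (msg) VALUES (?)", (f"event {i}",)))
write_q.put(None)
t.join()
```

A production version would batch several queued writes into one transaction, which also raises throughput well above one commit per write.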

Large datasets. SQLite’s query planner is simpler than Postgres’s. For datasets over 100GB with complex analytical queries, Postgres’s advanced statistics, parallel query execution, and sophisticated join algorithms will outperform SQLite significantly.

Advanced SQL features. No stored procedures, no materialized views, no LISTEN/NOTIFY, no logical replication, no JSONB indexing (SQLite has JSON functions but indexing is limited). If your application relies on Postgres-specific features, migration is not trivial.
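To make the JSON limitation concrete: SQLite's closest equivalent to JSONB indexing is an expression index on a single extracted path. It works, but unlike a Postgres GIN index over a whole JSONB column, it covers only the one path you name. A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
# Expression index over one JSON path -- queries on any *other*
# path still fall back to a full scan.
conn.execute("CREATE INDEX docs_email ON docs(json_extract(body, '$.email'))")
conn.execute("INSERT INTO docs (body) VALUES (?)",
             ('{"email": "a@example.com", "plan": "pro"}',))

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM docs "
    "WHERE json_extract(body, '$.email') = ?",
    ("a@example.com",)).fetchall()
print(plan)  # plan should mention USING INDEX docs_email
```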

Team familiarity. If your team knows Postgres and has operational playbooks for it, switching to SQLite means rebuilding that knowledge. Operational familiarity has real value.

The Architecture Shift - Embedded vs Client-Server

The deeper point is architectural. Client-server databases (Postgres, MySQL) assume a separation between your application and your data. This made sense when applications ran on separate servers from databases, when you needed connection pooling, when multiple applications shared one database.

Modern deployment patterns look different. Single-purpose containers, edge functions, and single-tenant architectures mean your application and its data often belong on the same machine. In this world, the network hop to a database server is pure overhead.

Traditional:
[App Server] --network--> [Database Server]
   2ms per query, connection pool, failover config

Embedded:
[App + SQLite on same machine]
   2 microseconds per query, no connection pool, Litestream for backup

This is not a regression. It is a simplification that eliminates an entire category of operational problems: connection pool exhaustion, database proxy configuration, network partitions between app and database, and the latency tax on every query.

Who Should Consider SQLite in Production

You should seriously evaluate SQLite if:

  • Your application is read-heavy (90%+ reads)
  • You deploy as a single container or VM per tenant
  • You want sub-millisecond query latency without caching layers
  • You are building at the edge (Cloudflare Workers with D1, Fly.io with LiteFS)
  • Your dataset is under 50GB
  • You want simpler operations (no database server to manage)

You should stick with Postgres if:

  • You need concurrent multi-writer transactions
  • You have complex analytical queries over large datasets
  • You rely on Postgres-specific features (PostGIS, JSONB indexing, logical replication)
  • Multiple services share the same database
  • Your team has deep Postgres expertise

The point is not that SQLite is better than Postgres. It is that the set of use cases where SQLite is the right choice is much larger than the conventional wisdom suggests. Every application that runs a Postgres sidecar just to store a few hundred megabytes of data is paying a complexity tax for capabilities it does not use.