Every few months, a thread goes viral with someone explaining they replaced their Kubernetes cluster with a single Docker Compose file and their life improved dramatically. The thread gets 5,000 likes from people who have been frustrated with Kubernetes and feel validated.

They are not wrong that Kubernetes is hard. They are wrong about why.

The Accidental Complexity (Real and Valid)

Kubernetes has genuine accidental complexity - things that are harder than they need to be:

  • YAML authoring is verbose and error-prone. Helm helps but adds its own complexity layer
  • The networking model (CNI plugins, kube-proxy, iptables rules) is opaque until something breaks
  • RBAC configuration is confusing to new operators
  • The upgrade path between minor versions is non-trivial
  • Certificate management is a constant source of friction
  • The number of objects (Deployment, ReplicaSet, Pod, Service, Ingress, ConfigMap, Secret, ServiceAccount…) is overwhelming for a first-time user

These are real complaints. The Kubernetes community is aware of them. Progress has been made but there is more to do.
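To make the verbosity complaint concrete, here is roughly the minimum manifest for running a single stateless container as a Deployment (a hedged sketch; the names and image are placeholders):

```yaml
# Approximately the smallest useful Deployment for one container.
# Note the app: web label appears three times and must stay in sync.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
```

Twenty-odd lines, with the selector/label duplication being a classic source of silent errors - the kind of boilerplate Helm templates away, at the cost of its own layer.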

The Essential Complexity (Often Misunderstood)

Here is what most of the “Kubernetes is unnecessarily complex” takes miss: some of what feels like complexity is the feature.

What Kubernetes Actually Solves

Consider what you need to run software reliably at scale:

  1. Workload placement - deciding which node runs which container, respecting resource constraints and affinity rules
  2. Health checking - detecting failed instances and replacing them automatically
  3. Rolling updates - updating software with zero downtime, with rollback capability
  4. Resource allocation - preventing one service from starving another of CPU/memory
  5. Service discovery - letting services find each other as they move across nodes
  6. Secret management - injecting credentials without baking them into images
  7. Horizontal scaling - automatically adding replicas as load increases
  8. Storage management - attaching persistent volumes to containers and moving them when containers reschedule
  9. Network policies - controlling which services can talk to which

Each of these items, solved correctly, is non-trivial. The teams that “replace Kubernetes with Docker Compose” are usually not doing all of these things. They are running smaller workloads where some guarantees are not needed.

That is fine. But it is not a simplification of Kubernetes - it is a different tradeoff.
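Several of the items above have a declarative shape in Kubernetes. A hedged sketch showing health checking (item 2), rolling updates (item 3), and resource allocation (item 4) on one Deployment - the image, paths, ports, and numbers are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # item 3: never drop below desired capacity mid-update
      maxSurge: 1         # item 3: add at most one extra pod while rolling
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:2.3.1
          resources:           # item 4: keep this service from starving others
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 512Mi
          livenessProbe:       # item 2: restart the container if this fails
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:      # item 2: only route traffic once this passes
            httpGet:
              path: /ready
              port: 8080
```

Reproducing even these three guarantees on top of Docker Compose means hand-rolling health checks, load-balancer draining, and update orchestration in scripts.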

The Distributed Systems Tax

Kubernetes is a distributed system that manages distributed systems. Distributed systems have inherent complexity that does not go away when you use a simpler tool - it just becomes invisible until it breaks.

Consider a simple question: a container crashes at 2 AM. What happens?

  • Single server with systemd - the service restarts, maybe. If the server itself runs out of memory or goes down, everything on it dies with it
  • Docker Compose - the container may restart (with restart: unless-stopped), but only on the same host; nothing places the workload elsewhere
  • Kubernetes - a new pod is scheduled on a healthy node within seconds, and the failed pod's logs are preserved for debugging

The Kubernetes behavior requires: node health monitoring, scheduler logic, pod lifecycle management, and container runtime integration. All of that complexity is justified by the outcome.
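For contrast, the entire Docker Compose answer to the 2 AM crash fits in a few lines (a hedged sketch; service name and image are placeholders):

```yaml
# The restart policy is enforced by the local Docker daemon only.
# If the container crashes, the daemon restarts it on this host;
# if the host itself dies, nothing reschedules the workload elsewhere.
services:
  web:
    image: example/web:1.0
    restart: unless-stopped
```

That simplicity is genuine - but so is the gap: there is no second node for the workload to move to, and no component watching for one.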

Who Should Not Use Kubernetes

The “Kubernetes is too complex” argument is correct for a specific group of users:

  • Single-server deployments - If you are running on one server, systemd or Docker Compose is sufficient. Kubernetes introduces overhead with no benefit
  • Small teams without a dedicated platform engineer - Operating Kubernetes well takes real expertise. A 3-person startup does not have that resource
  • Applications that never need HA - An internal tool or demo environment does not need pod rescheduling
  • Short-lived batch jobs - A cron job that runs once a day does not need Kubernetes (though Kubernetes handles this use case well once you are already running it)

For these cases, Fly.io, Railway, Render, or a plain VPS with Docker are better options. The problem is that teams at larger scale make the same argument and then spend the next two years rebuilding the features Kubernetes would have given them.

The Managed Kubernetes Improvement

Kubernetes got meaningfully easier in 2023-2025. EKS, GKE, and AKS now handle:

  • Control plane management and upgrades
  • Node provisioning and replacement
  • Certificate rotation
  • Some networking configuration

The accidental complexity of running the Kubernetes control plane is largely gone if you use a managed service. What remains is the essential complexity of defining and operating your workloads.

The Abstraction Layer Option

If raw Kubernetes YAML is still too much, internal developer platforms like Backstage and Humanitec add a layer of abstraction: you define your service in terms your application team understands (memory limit, replicas, environment) and the platform generates the Kubernetes manifests. (Google's Config Connector works in the other direction, exposing cloud resources as Kubernetes objects.)

This is the right answer for organizations where the application team is not the same as the infrastructure team.
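A hypothetical example of what such a platform-level definition might look like - this is not any specific product's schema, just an illustration of the shape, with every field name invented for this sketch:

```yaml
# Hypothetical platform spec: the application team writes this,
# and the platform expands it into Deployment, Service, HPA, and
# Ingress manifests behind the scenes.
service: checkout
image: example/checkout:4.1
replicas: 3
memory: 512Mi
env:
  PAYMENTS_URL: https://payments.internal
expose:
  port: 8080
  public: true
```

Ten lines the application team owns, expanding into a hundred-plus lines of manifests the platform team owns.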

Bottom Line

Kubernetes is hard because container orchestration at scale is hard. The teams that successfully replace it with something simpler are usually running simpler workloads, not solving the same problem more elegantly. For everything else, the complexity is largely justified by the guarantees you get.

Learn to distinguish Kubernetes's accidental complexity (YAML verbosity, RBAC confusion) from its essential complexity (distributed scheduling, declarative reconciliation). Fix the accidental complexity with better tools. Accept the essential complexity as the cost of the reliability you need.