Traditional VPNs are miserable. They funnel all traffic through a central gateway, add latency to every connection, require manual configuration, and break constantly when network conditions change. Tailscale reimagined the entire model by building a mesh network on top of WireGuard, and the result is a VPN that feels invisible.
Here is how it works under the hood and why the architecture is worth understanding even if you never use Tailscale.
WireGuard - Why the Protocol Matters
WireGuard is a kernel-level VPN protocol that does one thing well: encrypted point-to-point tunnels. Its codebase is roughly 4,000 lines, compared to OpenVPN’s 100,000+. This is not just an aesthetic preference - fewer lines of code mean a smaller attack surface, easier auditing, and fewer bugs.
The protocol uses modern cryptography exclusively:
- Noise Protocol Framework for handshake
- Curve25519 for key exchange
- ChaCha20-Poly1305 for symmetric encryption
- BLAKE2s for hashing
There is no cipher negotiation. Both sides must use these exact algorithms. This eliminates the entire class of downgrade attacks that plague TLS and IPsec.
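You can poke at one of these primitives directly: Python's standard library ships BLAKE2s, including the keyed mode WireGuard uses for MACs. This is purely illustrative - it is not WireGuard's actual key schedule:

```python
import hashlib

# BLAKE2s produces a 32-byte digest, the size WireGuard's key
# derivation works with
digest = hashlib.blake2s(b"wireguard handshake data").digest()
print(len(digest))  # 32

# Keyed mode turns BLAKE2s into a MAC (WireGuard uses keyed BLAKE2s
# in its cookie-based DoS mitigation); different keys, different tags
mac1 = hashlib.blake2s(b"msg", key=b"key-one").digest()
mac2 = hashlib.blake2s(b"msg", key=b"key-two").digest()
print(mac1 != mac2)  # True
```

Because there is exactly one hash, one cipher, and one key-exchange function, there is nothing for an attacker to negotiate down to.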
```ini
# A complete WireGuard config - this is the entire thing
[Interface]
PrivateKey = yAnz5TF+lXXJte14tji3zlMNq+hd2rYUIgJBgB3fBmk=
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=
AllowedIPs = 10.0.0.2/32
Endpoint = 203.0.113.1:51820
```
WireGuard’s handshake completes in one round-trip (1-RTT). Compare this to OpenVPN, which requires a full TLS handshake (2-4 round-trips) before any data flows. On a link with 100ms round-trip time, WireGuard establishes a tunnel in roughly 100ms; OpenVPN takes 400-800ms.
The performance difference is also significant. WireGuard operates at the kernel level (or via wireguard-go in userspace), processes packets inline, and achieves near-line-rate throughput. Independent benchmarks show WireGuard sustaining 1+ Gbps with low CPU overhead, while OpenVPN typically peaks at 200-500 Mbps with much higher CPU usage.
Tailscale’s Coordination Server - The Brain
WireGuard gives you point-to-point tunnels. Tailscale’s contribution is everything above that: discovery, authentication, key distribution, and NAT traversal.
The coordination server (Tailscale calls it the “control plane”) does not touch your traffic. It performs three functions:
- Identity. When you install Tailscale, you authenticate with an identity provider (Google, Microsoft, GitHub, OIDC). The coordination server maps your identity to your devices.
- Key distribution. Each device generates a WireGuard keypair. The public key is sent to the coordination server, which distributes it to all devices in your tailnet (network). This is the equivalent of manually adding [Peer] entries to every WireGuard config, but automated.
- Endpoint discovery. Each device reports its network information - public IP, local IP, NAT type - to the coordination server. The server shares this information with peers so they can attempt direct connections.
```
Device A                Coordination Server                 Device B
   |                            |                               |
   |--- "I'm at 192.168.1.5,    |                               |
   |     public 203.0.113.1,    |                               |
   |     my pubkey is X" ------>|                               |
   |                            |<---- "I'm at 10.0.0.50,       |
   |                            |       public 198.51.100.1,    |
   |                            |       my pubkey is Y" --------|
   |                            |                               |
   |<-- "Peer B is at           |  "Peer A is at                |
   |     198.51.100.1,          |   203.0.113.1,                |
   |     pubkey Y" -------------|   pubkey X" ----------------->|
   |                            |                               |
   |========== Direct WireGuard tunnel (no server) ===========>|
```
The critical insight: after the initial handshake, traffic flows directly between devices. The coordination server never sees your data. This is fundamentally different from traditional VPNs where all traffic flows through a central gateway.
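The exchange above can be sketched as a toy in-memory control plane. `ControlPlane` and its methods are hypothetical names for illustration, not Tailscale's actual API:

```python
class ControlPlane:
    """Toy coordination server: stores metadata, never touches traffic."""

    def __init__(self):
        self.devices = {}  # name -> {"pubkey": ..., "endpoints": [...]}

    def register(self, name, pubkey, endpoints):
        # A device reports its public key and candidate endpoints
        self.devices[name] = {"pubkey": pubkey, "endpoints": endpoints}

    def peers(self, name):
        # Everything a device needs to dial its peers directly
        return {n: info for n, info in self.devices.items() if n != name}

cp = ControlPlane()
cp.register("device-a", "pubkey-X", ["192.168.1.5:41641", "203.0.113.1:41641"])
cp.register("device-b", "pubkey-Y", ["10.0.0.50:41641", "198.51.100.1:41641"])

# Device A learns B's public key and endpoints, then connects directly;
# the control plane's job is finished at this point
print(cp.peers("device-a")["device-b"]["pubkey"])  # pubkey-Y
```

Note what the server stores: names, keys, and addresses. No packet payloads ever pass through it.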
NAT Traversal - The Hard Problem
Most devices are behind NAT (Network Address Translation). Your laptop at home has a private IP like 192.168.1.5, and your router translates that to a public IP. Two devices behind different NATs cannot directly connect without help.
Tailscale solves this with a multi-strategy approach:
Strategy 1: Direct connection. If both devices have public IPs or are on the same LAN, connect directly. This is the best case.
Strategy 2: UDP hole punching. Both devices simultaneously send UDP packets to each other’s public IP:port. This “punches holes” in both NATs, allowing the return traffic through. The coordination server orchestrates the timing. This works for most consumer NATs (cone NATs).
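The simultaneous-send pattern looks like this in Python. Both sockets live on localhost here, so there is no real NAT to punch - the point is only to show the choreography the coordination server triggers:

```python
import socket

# Two "devices"; behind real NATs, each outbound sendto() is what
# opens a mapping for the peer's return traffic
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)
a_addr, b_addr = a.getsockname(), b.getsockname()

# Both sides send at (roughly) the same time, on the coordination
# server's cue, to the peer's advertised address
a.sendto(b"punch from A", b_addr)
b.sendto(b"punch from B", a_addr)

# Return traffic now flows through the "holes"
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b)  # b'punch from A'
print(msg_at_a)  # b'punch from B'
```

With real cone NATs, the first packets may be dropped by the far side's NAT, which is why implementations retry the sends until one gets through.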
Strategy 3: STUN. Tailscale uses STUN (Session Traversal Utilities for NAT) servers to help devices discover their public IP and port mapping. This information is shared via the coordination server to improve hole punching success rates.
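A STUN Binding Request is a small, fixed-format packet. Here is a sketch of the 20-byte header defined in RFC 5389 (Tailscale bundles STUN service into its DERP servers; this only shows the wire format, not a full client):

```python
import os
import struct

STUN_BINDING_REQUEST = 0x0001
MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def stun_binding_request():
    """Build the 20-byte STUN header for a Binding Request (no attributes)."""
    transaction_id = os.urandom(12)  # 96-bit random transaction ID
    # !HHI = big-endian: message type, message length, magic cookie
    return struct.pack("!HHI", STUN_BINDING_REQUEST, 0, MAGIC_COOKIE) + transaction_id

pkt = stun_binding_request()
print(len(pkt))        # 20
print(pkt[4:8].hex())  # 2112a442
```

The server's response carries the public IP:port it saw the request arrive from, which is exactly the mapping the client cannot observe on its own.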
Strategy 4: DERP relays. When direct connection and hole punching both fail (symmetric NATs, restrictive firewalls, corporate proxies), traffic flows through DERP (Designated Encrypted Relay for Packets) servers. DERP relays are operated by Tailscale and are essentially encrypted packet forwarders.
When direct connection fails:
Device A ---encrypted--> DERP Relay ---encrypted--> Device B
The relay sees encrypted WireGuard packets.
It cannot decrypt them. It only knows the destination.
DERP is the fallback of last resort. Tailscale’s NAT traversal succeeds in establishing direct connections roughly 94% of the time (per Tailscale’s published statistics). The remaining 6% uses DERP relays, which adds latency but maintains connectivity.
Tailscale runs DERP servers in major regions worldwide. You can also run your own DERP relay if you want to keep relay traffic on your infrastructure.
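Conceptually, a DERP relay is little more than a mailbox keyed by destination public key. A toy sketch (`ToyRelay` is a hypothetical class for illustration, not DERP's real framing, which runs over HTTPS):

```python
from collections import defaultdict, deque

class ToyRelay:
    """Minimal DERP-like forwarder: routes ciphertext by destination
    key and never decrypts anything."""

    def __init__(self):
        self.mailboxes = defaultdict(deque)  # dest pubkey -> queued packets

    def forward(self, dest_pubkey, ciphertext):
        # The relay sees only the destination key and opaque bytes
        self.mailboxes[dest_pubkey].append(ciphertext)

    def receive(self, my_pubkey):
        return self.mailboxes[my_pubkey].popleft()

relay = ToyRelay()
relay.forward("pubkey-B", b"opaque wireguard ciphertext")
print(relay.receive("pubkey-B"))  # b'opaque wireguard ciphertext'
```

Because the payload is already WireGuard-encrypted end to end, trusting the relay only means trusting it with metadata and availability, not with your data.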
Zero-Config Mesh Networking
The user experience is where Tailscale differentiates most sharply from raw WireGuard. With WireGuard, adding a new device to a 10-node network means updating config files on all 10 existing devices. With Tailscale, you run tailscale up and the mesh reconfigures automatically.
Each device in a tailnet gets a stable IP in the 100.x.y.z range (CGNAT space) and a MagicDNS name. You can SSH to my-server.tailnet-name.ts.net from anywhere - home, coffee shop, another continent - and the connection is encrypted end-to-end with no port forwarding, no dynamic DNS, no firewall rules.
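The 100.x.y.z range is the CGNAT block 100.64.0.0/10 (RFC 6598), chosen because it will not collide with the common private ranges already in use on home and office LANs. Python's ipaddress module makes the membership check easy:

```python
import ipaddress

# Tailscale assigns each device a stable address from the CGNAT block
TAILNET_RANGE = ipaddress.ip_network("100.64.0.0/10")

print(ipaddress.ip_address("100.101.102.103") in TAILNET_RANGE)  # True
print(ipaddress.ip_address("192.168.1.5") in TAILNET_RANGE)      # False
```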
Tailscale ACLs (Access Control Lists) let you define who can reach what:
```json
{
  "acls": [
    {
      "action": "accept",
      "src": ["group:engineering"],
      "dst": ["tag:production:443", "tag:staging:*"]
    },
    {
      "action": "accept",
      "src": ["group:ops"],
      "dst": ["*:*"]
    }
  ],
  "tagOwners": {
    "tag:production": ["group:ops"],
    "tag:staging": ["group:engineering"]
  }
}
```
This is zero-trust networking without the enterprise sales pitch. Engineers can reach staging servers. Ops can reach everything. No one else can reach anything. The ACLs are evaluated at the client side - a device will not even accept a connection that is not permitted by the ACL.
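A simplified evaluator shows the semantics of a policy like the one above. This is a hypothetical helper for illustration - real Tailscale compiles ACLs into per-node packet filters:

```python
# Hypothetical group membership; in Tailscale these come from the policy file
GROUPS = {"group:engineering": {"alice"}, "group:ops": {"bob"}}

ACLS = [
    {"action": "accept", "src": ["group:engineering"],
     "dst": ["tag:production:443", "tag:staging:*"]},
    {"action": "accept", "src": ["group:ops"], "dst": ["*:*"]},
]

def allowed(user, dst_tag, dst_port):
    """Default-deny: a connection is permitted only if some rule accepts it."""
    for rule in ACLS:
        src_ok = any(user in GROUPS.get(s, set()) or s == "*" for s in rule["src"])
        for d in rule["dst"]:
            host, _, port = d.rpartition(":")  # split "tag:staging:*" on last colon
            dst_ok = host in (dst_tag, "*") and port in (str(dst_port), "*")
            if src_ok and dst_ok:
                return True
    return False

print(allowed("alice", "tag:staging", 22))     # True  (tag:staging:* matches)
print(allowed("alice", "tag:production", 22))  # False (only port 443 is open)
print(allowed("bob", "tag:production", 22))    # True  (*:* matches everything)
```

The important property is the default: with no matching rule, the answer is deny.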
Self-Hosting with Headscale
Tailscale’s client is open source. The coordination server is not. If you do not want to depend on Tailscale’s infrastructure for the control plane, Headscale is an open-source reimplementation.
```shell
# Install Headscale
wget https://github.com/juanfont/headscale/releases/latest/download/headscale_linux_amd64
chmod +x headscale_linux_amd64
sudo mv headscale_linux_amd64 /usr/local/bin/headscale

# Create a user
headscale users create myuser

# Generate a pre-auth key
headscale preauthkeys create --user myuser --reusable --expiration 24h
```
Headscale gives you full control over the coordination server. Your device metadata, public keys, and network topology never leave your infrastructure. The data plane (actual traffic) was already direct between devices, so Headscale only replaces the control plane.
When to self-host: If you have compliance requirements that prohibit sharing network metadata with a third party, or if you want to run Tailscale-like networking in an air-gapped environment. Headscale supports most Tailscale features including ACLs, MagicDNS, and DERP.
When not to self-host: If you have fewer than 50 devices, the operational overhead of running Headscale is not worth it. Tailscale’s free tier supports up to 100 devices and 3 users. The coordination server sees only metadata (IPs, public keys), not your traffic.
Comparison with Alternatives
| Feature | Tailscale | Plain WireGuard | OpenVPN | Nebula (Slack) | ZeroTier |
|---|---|---|---|---|---|
| Setup complexity | Minimal | High | High | Moderate | Minimal |
| NAT traversal | Automatic | Manual | Requires port forward | Lighthouse-based | Automatic |
| Encryption | WireGuard | WireGuard | TLS/OpenSSL | Noise Protocol | Custom |
| Topology | Mesh | Point-to-point | Hub-and-spoke | Mesh | Mesh |
| Performance | Near line-rate | Near line-rate | 200-500 Mbps | Near line-rate | Good |
| SSO integration | Native | None | Possible | None | Possible |
| Self-hostable | Headscale | Yes | Yes | Yes | Yes (paid) |
| ACLs | Policy-based | iptables | Server-side | Firewall rules | Flow rules |
Why This Matters Beyond VPNs
Tailscale’s architecture - a mesh of encrypted tunnels with a lightweight control plane - is becoming a pattern for distributed systems. Kubernetes networking, service mesh, and multi-cloud connectivity all face the same fundamental problems: identity, encryption, discovery, and NAT traversal.
The ideas behind Tailscale apply whether or not you use the product. WireGuard as a universal transport layer. Identity-based networking instead of IP-based networking. Control plane and data plane separation. These are architectural principles that will shape networking infrastructure for the next decade.
If you are running infrastructure across multiple locations - cloud regions, on-premise servers, developer laptops, edge devices - Tailscale is the fastest path to secure connectivity. Install it on each device, define your ACLs, and the network configures itself. You spend time building your application instead of debugging iptables rules.