
Tunnel Quality

Methodology

This benchmark measures what applications actually experience through real WireGuard tunnels relayed via DERP.

  • Traffic path: app -> WireGuard encrypt -> Tailscale framing -> DERP relay -> Tailscale deframing -> WireGuard decrypt -> app
  • Mesh: Headscale coordination, 4 Tailscale clients, direct UDP blocked (forces DERP)
  • Measurements per run: iperf3 UDP (throughput + loss + jitter), iperf3 TCP (retransmits), ICMP ping (600 samples)
  • Rates: 500M, 1G, 2G, 3G, 5G, 8G offered
  • Runs: 20 per data point
  • Configs: 4, 8, 16 vCPU
  • Total: 720 runs
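The run total follows directly from the matrix above. A quick sanity check in Python (the two-relay comparison, HD vs. TS, is inferred from the result tables; names here are illustrative):

```python
# Benchmark matrix from the methodology above.
rates = ["500M", "1G", "2G", "3G", "5G", "8G"]  # offered rates
configs = [4, 8, 16]                            # vCPU counts
relays = ["HD", "TS"]                           # the two relays under test
runs_per_point = 20

total = len(rates) * len(configs) * len(relays) * runs_per_point
print(total)  # 720
```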

Key Finding: WireGuard Is the Bottleneck

| Config  | HD UDP @ 8G | TS UDP @ 8G | HD Retx | TS Retx | HD Ping | TS Ping |
|---------|-------------|-------------|---------|---------|---------|---------|
| 4 vCPU  | 2,100 Mbps  | 2,115 Mbps  | 4,852   | 5,217   | 0.90 ms | 0.55 ms |
| 8 vCPU  | 2,053 Mbps  | 2,060 Mbps  | 4,552   | 4,484   | 0.98 ms | 0.90 ms |
| 16 vCPU | 2,059 Mbps  | 2,223 Mbps  | 4,291   | 4,617   | 0.91 ms | 1.19 ms |

Both relays deliver near-identical UDP throughput (~2 Gbps) because Tailscale's userspace WireGuard (wireguard-go, ChaCha20-Poly1305) is the throughput ceiling, not the relay. Loss is negligible for both (<0.04%).

TCP retransmits: at maximum load, HD produces about 7% fewer than TS on 4 and 16 vCPU; the two are effectively tied at 8 vCPU.
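Using the retransmit counts from the table above, the relative differences work out as follows (a quick check, not part of the original harness):

```python
# TCP retransmit counts at 8G offered load, keyed by vCPU count: (HD, TS).
retx = {4: (4852, 5217), 8: (4552, 4484), 16: (4291, 4617)}

for vcpu, (hd, ts) in retx.items():
    delta = (ts - hd) / ts * 100  # positive = HD retransmits fewer
    print(f"{vcpu} vCPU: HD {delta:+.1f}% vs TS")
# 4 vCPU: +7.0%, 8 vCPU: -1.5%, 16 vCPU: +7.1%
```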

Rate Scaling

| Rate | HD UDP (4 vCPU) | TS UDP (4 vCPU) | HD TCP (4 vCPU) | TS TCP (4 vCPU) |
|------|-----------------|-----------------|-----------------|-----------------|
| 500M | 500 Mbps        | 500 Mbps        | 3,911 Mbps      | 3,878 Mbps      |
| 1G   | 975 Mbps        | 975 Mbps        | 3,931 Mbps      | 3,937 Mbps      |
| 3G   | 1,025 Mbps      | 1,062 Mbps      | 3,849 Mbps      | 3,895 Mbps      |
| 5G   | 1,318 Mbps      | 1,327 Mbps      | 3,146 Mbps      | 3,154 Mbps      |
| 8G   | 2,100 Mbps      | 2,115 Mbps      | 1,217 Mbps      | 1,118 Mbps      |

UDP throughput plateaus at ~1--2 Gbps regardless of offered rate -- the WireGuard crypto ceiling.
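The plateau shows up clearly as a delivery ratio. A small sketch using the 4 vCPU HD UDP column from the table above (assuming "1G" means 1,000 Mbps, and so on):

```python
# Achieved vs. offered UDP rate (HD relay, 4 vCPU), from the table above.
offered = [500, 1000, 3000, 5000, 8000]   # Mbps offered by iperf3
achieved = [500, 975, 1025, 1318, 2100]   # Mbps measured through the tunnel

for off, ach in zip(offered, achieved):
    print(f"{off} Mbps offered -> {ach} Mbps achieved ({ach / off:.1%})")
```

Delivery is near 100% up to 1G, then collapses toward the ~2 Gbps crypto ceiling as the offered rate grows.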

Interpretation

Switching from TS to HD as the relay is transparent to applications running through WireGuard tunnels. The performance advantages measured in the relay benchmarks represent additional headroom -- capacity to serve more tunnels, more peers, or handle traffic bursts without dropping packets.

With kernel WireGuard (wg.ko) replacing the userspace client, the tunnel ceiling would move from ~2 Gbps to 10+ Gbps, where HD's relay advantage becomes directly visible.