The Architecture Behind Our CDN
Building a CDN is fundamentally a distributed systems problem. Every design decision involves trade-offs between latency, consistency, cost, and operational complexity.
PoP selection and BGP anycast
We use BGP anycast to route users to the nearest PoP: every PoP announces the same IP prefix, and ordinary internet routing delivers each packet to the topologically closest announcer. When a user in Tokyo makes a request, their traffic lands at our Tokyo PoP rather than at a central origin, with no application-level redirection involved.
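To make the mechanism concrete, here is a minimal, hypothetical simulation of why anycast lands Tokyo traffic at the Tokyo PoP. The PoP names and AS-path lengths are invented for illustration, and real BGP best-path selection considers more attributes than path length; this sketch models only the shortest-AS-path tie-break.

```python
# Hypothetical sketch: BGP best-path selection for an anycast prefix.
# Every PoP announces the same prefix, so a user's ISP sees one route
# per PoP and picks the "closest" one by standard BGP rules
# (simplified here to shortest AS path).
ANYCAST_PREFIX = "203.0.113.0/24"  # documentation prefix (RFC 5737)

# AS-path lengths as seen from a Tokyo ISP, one route per PoP
# (the numbers are made up for illustration).
routes_seen_from_tokyo = {
    "tokyo-pop": 2,
    "frankfurt-pop": 5,
    "virginia-pop": 6,
}

def best_path(routes: dict) -> str:
    """Return the PoP whose route has the shortest AS path."""
    return min(routes, key=routes.get)

print(best_path(routes_seen_from_tokyo))  # tokyo-pop
```

The key point the sketch captures: no per-user logic runs anywhere. The "selection" is a side effect of every PoP announcing the identical prefix.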
Cache hierarchy
We operate a two-tier cache: edge nodes (L1) and regional shields (L2). A miss at the edge is served from the regional shield when possible; only a miss at both tiers reaches the origin. This cuts origin traffic by roughly 97%.
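The lookup path can be sketched as follows. This is an illustrative in-memory model, not our actual cache implementation; the class and variable names are invented, and real tiers live on separate machines with eviction policies omitted here.

```python
class TieredCache:
    """Toy two-tier cache: L1 (edge) -> L2 (regional shield) -> origin."""

    def __init__(self, origin_fetch):
        self.l1 = {}              # edge cache
        self.l2 = {}              # regional shield
        self.origin_pulls = 0     # how often we fell through to origin
        self.origin_fetch = origin_fetch

    def get(self, key):
        if key in self.l1:        # edge hit: fastest path
            return self.l1[key]
        if key in self.l2:        # shield hit: fill the edge, skip origin
            self.l1[key] = self.l2[key]
            return self.l1[key]
        # Miss at both tiers: pull from origin and fill both caches.
        self.origin_pulls += 1
        value = self.origin_fetch(key)
        self.l2[key] = value
        self.l1[key] = value
        return value

# Usage: two requests for the same object cause exactly one origin pull.
cache = TieredCache(lambda key: f"body-for-{key}")
cache.get("/index.html")   # cold: origin pull
cache.get("/index.html")   # warm: edge hit
print(cache.origin_pulls)  # 1
```

The shield's job is visible in the middle branch: it absorbs edge misses so that many edges filling independently still produce a single origin fetch per object.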
Cache invalidation
Invalidation propagates through a pub/sub mesh: a purge request fans out to every PoP worldwide within 500 ms. For emergency invalidations there is an out-of-band path that bypasses the normal queue.
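The fan-out shape can be sketched with an in-process bus. This is a hypothetical model only: real PoPs subscribe over a message broker across the network, and the class and variable names below are illustrative.

```python
class PurgeBus:
    """Toy pub/sub bus: one purge message reaches every subscribed PoP."""

    def __init__(self):
        self._pop_caches = []

    def subscribe(self, cache: dict) -> None:
        self._pop_caches.append(cache)

    def purge(self, key: str) -> None:
        # Fan out: every subscribed PoP drops its copy of the object.
        for cache in self._pop_caches:
            cache.pop(key, None)

# Usage: one purge call clears the object from every PoP at once.
bus = PurgeBus()
tokyo = {"/logo.png": b"old"}
frankfurt = {"/logo.png": b"old"}
bus.subscribe(tokyo)
bus.subscribe(frankfurt)
bus.purge("/logo.png")
print(tokyo, frankfurt)  # {} {}
```

The design choice worth noting: the publisher does not track which PoPs hold the object, so purging is O(PoPs) messages regardless of cache state, which keeps the fan-out path simple and bounded.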
TLS termination
We terminate TLS at the edge. Session resumption (TLS session tickets, plus 0-RTT for repeat visitors) cuts handshake latency by about 70 ms for returning users.
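As a minimal sketch of the ticket side, assuming a Python-based edge (our actual edge stack is not specified here), TLS 1.3 session tickets are a standard `ssl.SSLContext` feature. Note that 0-RTT early data is not exposed by Python's `ssl` module, so this sketch covers resumption tickets only.

```python
import ssl

# Server-side TLS 1.3 context with session tickets enabled, so a
# returning client can resume without a full handshake. Requires
# Python 3.8+ with OpenSSL 1.1.1 or newer.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Issue two session tickets per full handshake (the OpenSSL default);
# setting it explicitly documents the resumption policy.
ctx.num_tickets = 2
print(ctx.num_tickets)  # 2
```

A real deployment would also load a certificate chain and rotate ticket keys across edge nodes so a ticket minted at one PoP resumes at another; both are omitted here.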