Back-of-envelope calculations for system design interviews. Pick a scenario, adjust parameters, get instant estimates.
Video upload, streaming, thumbnails
Registered accounts across the platform
Users who interact at least once per day. Typically 10-40% of total users
Reads per write. >1 = read-heavy (feeds ~100:1). <1 = write-heavy (logging ~1:10, IoT ~1:100). 1:1 = balanced (chat)
Average actions per active user daily (tweets, messages, uploads)
Size of each write operation. Tweet ~280B, image ~200KB, video ~50MB
Size of each read response. Often equals write size, larger with metadata
How long data is stored. Affects total storage but not daily bandwidth
% of reads served from cache. Redis/Memcached typically hit 80-95%
Peak vs average ratio. 2x steady, 3x social, 5-10x events (flash sales)
QPS one server handles. Web: 5-20K, DB: 1-10K, Redis: 50-100K
Pick the best database based on your workload characteristics
| Power | Value | Approx |
|---|---|---|
| 2^10 | 1 KB | 1,024 |
| 2^20 | 1 MB | 1,048,576 |
| 2^30 | 1 GB | ~1 Billion |
| 2^40 | 1 TB | ~1 Trillion |
| 2^50 | 1 PB | ~1 Quadrillion |
| Operation | Time |
|---|---|
| L1 cache reference | 0.5 ns |
| L2 cache reference | 7 ns |
| Main memory (RAM) | 100 ns |
| NVMe SSD random read | 15 µs |
| SATA SSD random read | 100 µs |
| HDD seek | 10 ms |
| Send 1 MB over network | 10 ms |
| Read 1 MB from NVMe SSD | 0.2 ms |
| Read 1 MB from HDD | 20 ms |
| Round trip (same DC) | 0.5 ms |
| Round trip (CA → NY) | 40 ms |
| Round trip (CA → EU) | 150 ms |
| SLA | Nines | Downtime/yr |
|---|---|---|
| 99% | Two 9s | 3.65 days/year |
| 99.9% | Three 9s | 8.77 hours/year |
| 99.99% | Four 9s | 52.6 min/year |
| 99.999% | Five 9s | 5.26 min/year |
| 99.9999% | Six 9s | 31.56 sec/year |
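The downtime figures above follow directly from the SLA percentage: allowed downtime per year is `(1 − SLA) × seconds in a year`. A minimal sketch (the loop values are just the table rows):

```python
# Downtime per year implied by an availability SLA.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def downtime_per_year(sla_percent: float) -> float:
    """Seconds of allowed downtime per year for a given SLA percentage."""
    return (1 - sla_percent / 100) * SECONDS_PER_YEAR

for sla in (99, 99.9, 99.99, 99.999):
    print(f"{sla}% -> {downtime_per_year(sla):,.0f} s/year")
```

For example, 99.9% gives 31,536 seconds, i.e. about 8.76 hours per year, matching the table.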
Video platforms like YouTube use database replication for read scaling and need careful consistency trade-offs. See the worked example for a step-by-step walkthrough.
There are 86,400 seconds in a day (24 × 60 × 60). Dividing daily operations by 86,400 gives average operations per second, assuming uniform distribution.
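As a sketch, the conversion from daily activity to average QPS looks like this (the 100M-DAU, 5-actions example is hypothetical, chosen only to show the arithmetic):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def average_qps(daily_active_users: int, actions_per_user: float) -> float:
    """Average operations/second, assuming uniform distribution over the day."""
    return daily_active_users * actions_per_user / SECONDS_PER_DAY

# e.g. 100M DAU, 5 actions each: 500M ops/day -> ~5,787 ops/sec on average
print(average_qps(100_000_000, 5))
```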
The Pareto principle: 20% of data accounts for 80% of requests. Caching the top 20% of daily data in memory can serve most read traffic without hitting the database.
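A rough cache-sizing sketch based on that rule (the 500M-items-at-1-KB example is hypothetical):

```python
def cache_size_bytes(daily_items: int, item_size_bytes: int,
                     hot_fraction: float = 0.2) -> float:
    """Memory needed to hold the hot fraction (Pareto: ~20%) of one day's data."""
    return daily_items * item_size_bytes * hot_fraction

# e.g. 500M items/day at 1 KB each: caching the top 20% needs ~95 GB of RAM
gb = cache_size_bytes(500_000_000, 1024) / 2**30
print(f"{gb:.0f} GB")
```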
Use 2x for steady workloads, 3x for social media with viral spikes, 5-10x for event-driven systems (ticket sales, flash sales, live sports).
In interviews, order of magnitude is sufficient. The difference between 10K and 100K QPS matters; 10K vs 12K does not. Focus on reasoning, not exact numbers.
Web servers: 5K-20K QPS. Database servers: 1K-10K QPS. Cache servers (Redis): 50K-100K QPS. API gateways: 10K-50K QPS. Actual capacity depends on payload size and processing complexity.
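Putting the peak multiplier and per-server capacity together gives a server-count estimate. A minimal sketch, with hypothetical example numbers (58K average QPS, 3x social-media peak, 10K QPS per web server):

```python
import math

def servers_needed(avg_qps: float, peak_factor: float,
                   per_server_qps: float) -> int:
    """Servers required to absorb peak load, rounded up."""
    return math.ceil(avg_qps * peak_factor / per_server_qps)

# 58K avg QPS * 3x peak = 174K peak QPS; at 10K QPS/server -> 18 servers
print(servers_needed(58_000, 3, 10_000))
```

Rounding up matters: 17.4 servers means 18, and in practice you would add headroom beyond that for failures and deploys.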