Interview Framework: RESHADED
TL;DR
RESHADED = Requirements, Estimation, System interface, High-level design, API design, Data model, Elaborate, Discuss. Time budget: 45 minutes. Goal: show your thought process, discuss trade-offs, and ask clarifying questions.
The RESHADED Framework
1. Requirements (5 min)
Ask clarifying questions. Never start designing without understanding scope.
Functional:
- What features? (post tweets, follow users, view timeline?)
- Who are the users? (consumers, businesses, admins?)
- Scale? (DAU, QPS, data volume?)
Non-functional:
- Latency requirements? (< 200ms?)
- Availability? (99.9% or 99.99%?)
- Consistency? (strong or eventual?)
Out of scope:
- What NOT to design? (analytics, recommendations?)
Example questions (Twitter):
Q: Can users post tweets?
A: Yes, 280 characters max
Q: Can users follow others?
A: Yes
Q: Timeline: following only, or recommendations too?
A: Following only (recommendations out of scope)
Q: Scale?
A: 300M DAU, 100M tweets/day
Q: Latency?
A: Timeline < 200ms
Spend 5 minutes here. Interviewers want to see you ask questions, not assume requirements.
2. Estimation (5 min)
Back-of-envelope calculations. Shows you think about scale.
Example (Twitter):
DAU: 300M users
Posts: 100M tweets/day
Reads: 300M × 20 timeline views = 6B/day
QPS (using ~100K seconds/day as a round approximation of 86,400):
- Write: 100M / 100K ≈ 1K QPS
- Read: 6B / 100K ≈ 60K QPS
Storage:
- Tweet: 280 chars × 2 bytes = 560 bytes
- Metadata (user_id, timestamp, etc.): round up to ~1 KB/tweet
- Daily: 100M × 1 KB = 100 GB/day
- 5 years: 100 GB/day × 365 × 5 ≈ 183 TB
Bandwidth:
- Write: 1K QPS × 1 KB = 1 MB/s
- Read: 60K QPS × 10 KB (10 tweets/page) = 600 MB/s
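The arithmetic above is easy to fumble under pressure, so it helps to know the round numbers cold. A minimal sketch that reproduces the estimates, using the same ~100K seconds/day shortcut as the write-up:

```python
# Back-of-envelope check for the Twitter estimates above.
# Uses the same rounding as the text: ~100K seconds/day instead of 86,400.
SECONDS_PER_DAY = 100_000

dau = 300_000_000
tweets_per_day = 100_000_000
timeline_views_per_user = 20

reads_per_day = dau * timeline_views_per_user                # 6B/day
write_qps = tweets_per_day // SECONDS_PER_DAY                # ~1K
read_qps = reads_per_day // SECONDS_PER_DAY                  # ~60K

tweet_size_bytes = 1_000                                     # ~1 KB with metadata
daily_storage_gb = tweets_per_day * tweet_size_bytes / 1e9   # 100 GB/day
five_year_storage_tb = daily_storage_gb * 365 * 5 / 1_000    # ~183 TB

print(write_qps, read_qps, daily_storage_gb, five_year_storage_tb)
```

In the interview, do this on the whiteboard in powers of ten; the point is the method, not the precision.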
3. System Interface (2 min)
Define APIs. Clarify inputs/outputs.
Example (Twitter):
POST /api/v1/tweets
{
"user_id": "123",
"content": "Hello world",
"media_urls": ["https://..."]
}
Response: {"tweet_id": "456", "created_at": "2024-01-15T10:30:00Z"}
GET /api/v1/timeline?user_id=123&limit=20
Response: {
"tweets": [
{"tweet_id": "789", "user_id": "456", "content": "...", "created_at": "..."},
...
]
}
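To show you have thought about inputs and outputs, it can help to sketch the server-side validation implied by this interface. A hypothetical helper (the function name and error strings are illustrative, not part of the spec):

```python
# Hypothetical validation for POST /api/v1/tweets, matching the
# interface above: required user_id and content, 280-char limit,
# optional media_urls list. Names and messages are illustrative.
MAX_TWEET_CHARS = 280

def validate_tweet_payload(payload: dict) -> list[str]:
    """Return a list of validation errors (empty list = valid)."""
    errors = []
    if not payload.get("user_id"):
        errors.append("user_id is required")
    content = payload.get("content", "")
    if not content:
        errors.append("content is required")
    elif len(content) > MAX_TWEET_CHARS:
        errors.append(f"content too long (max {MAX_TWEET_CHARS} chars)")
    if not isinstance(payload.get("media_urls", []), list):
        errors.append("media_urls must be a list")
    return errors

print(validate_tweet_payload({"user_id": "123", "content": "Hello world"}))  # []
```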
4. High-Level Design (10 min)
Draw boxes and arrows. Start simple, iterate.
Step 1: Single server (naive)
Step 2: Add load balancer + replicas
Step 3: Add cache, CDN, message queue
Discuss components:
- Load balancer: Distribute traffic
- Cache: Timeline data (hot data)
- Message queue: Fanout tweets to followers
- CDN: Static assets (images, videos)
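The message-queue component above can be sketched in a few lines: posting a tweet enqueues a fanout job, and a worker copies the tweet into each follower's cached timeline. This is a toy, assuming in-memory stand-ins (a real system would use Kafka/SQS and Redis):

```python
# Sketch of fanout through a message queue. The write path stays fast
# because it only enqueues a job; a worker does the per-follower copies.
# All names (followers, timelines, fanout_worker) are illustrative.
from collections import defaultdict
from queue import Queue

followers = {"alice": ["bob", "carol"]}   # author -> follower list
timelines = defaultdict(list)             # follower -> cached tweet ids
fanout_queue: Queue = Queue()

def post_tweet(author: str, tweet_id: str) -> None:
    # Persist the tweet (omitted), then enqueue the fanout job.
    fanout_queue.put((author, tweet_id))

def fanout_worker() -> None:
    # Runs asynchronously in production; drained inline here for the demo.
    while not fanout_queue.empty():
        author, tweet_id = fanout_queue.get()
        for follower in followers.get(author, []):
            timelines[follower].append(tweet_id)

post_tweet("alice", "t1")
fanout_worker()
print(timelines["bob"])   # ['t1']
```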
5. API Design (3 min)
Detail critical APIs with rate limiting, authentication.
POST /api/v1/tweets
Authorization: Bearer <JWT>
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 85
{
"content": "Hello world",
"media_ids": ["123", "456"]
}
Error responses:
400 Bad Request: {"error": "content too long (max 280 chars)"}
401 Unauthorized: {"error": "invalid token"}
429 Too Many Requests: {"error": "rate limit exceeded"}
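The rate-limit headers above map naturally onto a token bucket. A minimal in-memory sketch (per-user buckets, refill logic, and persistence are omitted; the class and numbers are illustrative):

```python
# Minimal token-bucket sketch behind the X-RateLimit-* headers above:
# 100 requests per window, one token per request, 429 when exhausted.
class TokenBucket:
    def __init__(self, limit: int = 100):
        self.limit = limit
        self.remaining = limit

    def allow(self) -> tuple[int, dict]:
        """Return (status_code, headers) for one request."""
        headers = {"X-RateLimit-Limit": self.limit}
        if self.remaining <= 0:
            headers["X-RateLimit-Remaining"] = 0
            return 429, headers
        self.remaining -= 1
        headers["X-RateLimit-Remaining"] = self.remaining
        return 200, headers

bucket = TokenBucket(limit=100)
for _ in range(15):
    status, headers = bucket.allow()
print(status, headers["X-RateLimit-Remaining"])  # 200 85
```

A production limiter would also refill tokens over time and key buckets by user or API token, typically in Redis.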
6. Data Model (5 min)
Design database schema.
Example (Twitter):
-- Users
CREATE TABLE users (
user_id BIGINT PRIMARY KEY,
username VARCHAR(50) UNIQUE,
email VARCHAR(255),
created_at TIMESTAMP
);
-- Tweets
CREATE TABLE tweets (
tweet_id BIGINT PRIMARY KEY,
user_id BIGINT REFERENCES users(user_id),
content VARCHAR(280),
created_at TIMESTAMP,
INDEX(user_id, created_at) -- For user's tweets
);
-- Relationships
CREATE TABLE follows (
follower_id BIGINT REFERENCES users(user_id),
followee_id BIGINT REFERENCES users(user_id),
created_at TIMESTAMP,
PRIMARY KEY(follower_id, followee_id)
);
Discussion points:
- Use BIGINT for IDs (8 bytes; 2^63 positive values, effectively inexhaustible)
- Index on (user_id, created_at) for timeline queries
- Consider NoSQL if massive scale (Cassandra)
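The schema can be exercised end to end in SQLite (index syntax adapted: SQLite requires a separate CREATE INDEX, and the inline INDEX clause above is MySQL-specific). The query at the end is the fanout-on-read timeline — all tweets by users that user 1 follows, newest first:

```python
# The schema above in SQLite, plus the fanout-on-read timeline query.
# Column types are simplified; data values are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users   (user_id INTEGER PRIMARY KEY, username TEXT UNIQUE);
CREATE TABLE tweets  (tweet_id INTEGER PRIMARY KEY, user_id INTEGER,
                      content TEXT, created_at TEXT);
CREATE INDEX idx_tweets_user ON tweets(user_id, created_at);
CREATE TABLE follows (follower_id INTEGER, followee_id INTEGER,
                      PRIMARY KEY (follower_id, followee_id));
""")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [(1, "alice"), (2, "bob"), (3, "carol")])
db.executemany("INSERT INTO follows VALUES (?, ?)", [(1, 2), (1, 3)])
db.executemany("INSERT INTO tweets VALUES (?, ?, ?, ?)",
               [(10, 2, "hi from bob", "2024-01-15T10:00:00Z"),
                (11, 3, "hi from carol", "2024-01-15T10:05:00Z")])

# Fanout-on-read: join follows -> tweets for one reader.
timeline = db.execute("""
    SELECT t.tweet_id, t.content
    FROM tweets t
    JOIN follows f ON f.followee_id = t.user_id
    WHERE f.follower_id = ?
    ORDER BY t.created_at DESC
    LIMIT 20
""", (1,)).fetchall()
print(timeline)  # [(11, 'hi from carol'), (10, 'hi from bob')]
```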
7. Elaborate (10 min)
Deep dive into 1-2 components. Interviewer will guide.
Example deep dives:
- Timeline generation: Fanout-on-write vs fanout-on-read
- Caching strategy: Cache-aside, TTL, eviction
- Sharding: How to partition users/tweets
- Celebrity problem: Can't fan out to 100M followers
Timeline generation:
Fanout-on-write (pre-compute):
1. User posts tweet
2. Fanout to all followers' timelines (write to cache)
3. Follower reads timeline (cache hit, fast)
Pros: Fast reads
Cons: Slow writes (for celebrities)
Fanout-on-read (compute on demand):
1. User posts tweet (just write to DB)
2. Follower requests timeline
3. Query all followed users, merge tweets
Pros: Fast writes
Cons: Slow reads (many queries)
Hybrid (Twitter's approach):
- Regular users: Fanout-on-write
- Celebrities (>1M followers): Fanout-on-read
- Merge both approaches in timeline
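The hybrid can be sketched in a few lines: non-celebrity tweets are fanned out on write into precomputed timelines, celebrity tweets are pulled and merged at read time. The threshold, data stores, and names (CELEB_THRESHOLD, read_timeline) are illustrative stand-ins:

```python
# Sketch of the hybrid fanout approach described above.
from collections import defaultdict

CELEB_THRESHOLD = 1_000_000

follower_counts = {"alice": 50, "superstar": 100_000_000}
followers = {"alice": ["bob"], "superstar": ["bob"]}
celeb_tweets = defaultdict(list)   # celebrity -> their tweets (DB)
precomputed = defaultdict(list)    # user -> pre-materialized timeline (cache)

def post_tweet(author: str, tweet_id: str) -> None:
    if follower_counts[author] > CELEB_THRESHOLD:
        celeb_tweets[author].append(tweet_id)   # fanout-on-read: just store
    else:
        for f in followers[author]:             # fanout-on-write
            precomputed[f].append(tweet_id)

def read_timeline(user: str, following: list[str]) -> list[str]:
    merged = list(precomputed[user])            # cheap part: cache read
    for author in following:
        if follower_counts.get(author, 0) > CELEB_THRESHOLD:
            merged.extend(celeb_tweets[author]) # pull celebrity tweets
    return merged

post_tweet("alice", "a1")        # fanned out to bob's timeline
post_tweet("superstar", "s1")    # stored once, pulled at read time
print(read_timeline("bob", ["alice", "superstar"]))  # ['a1', 's1']
```

A real implementation would also sort the merged result by timestamp and paginate; that is omitted here.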
8. Discuss (5 min)
Trade-offs, bottlenecks, failure scenarios.
Questions to address:
- Bottlenecks: Database writes (shard), cache misses (add more cache nodes)
- Failure scenarios: DB down (read from replicas), cache down (fallback to DB)
- Security: Rate limiting, authentication (JWT), input validation
- Monitoring: Latency (P95), error rate, QPS
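The cache-down failure mode above pairs naturally with cache-aside reads. A toy sketch, assuming in-memory stand-ins for the cache and database (a real deployment would use Redis/Memcached with a TTL):

```python
# Cache-aside with DB fallback: read the cache first; on a miss, or when
# the cache is down, read the database and repopulate. Illustrative only.
db = {"timeline:123": ["t1", "t2"]}   # stand-in for the database
cache: dict = {}                      # stand-in for the cache

def get_timeline(user_id: str, cache_up: bool = True) -> list[str]:
    key = f"timeline:{user_id}"
    if cache_up and key in cache:
        return cache[key]             # cache hit
    value = db.get(key, [])           # miss or cache down: fall back to DB
    if cache_up:
        cache[key] = value            # repopulate on the way out
    return value

first = get_timeline("123")                          # miss -> DB -> cached
second = get_timeline("123")                         # hit
during_outage = get_timeline("123", cache_up=False)  # DB fallback
print(first, second, during_outage)
```

Note the trade-off: during a cache outage every read lands on the database, so the DB must survive the full read QPS or requests must be shed.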
Trade-offs:
- Consistency vs availability (eventual for timeline is OK)
- Latency vs cost (more cache = faster, but expensive)
- Simplicity vs scalability (start simple, scale later)
Interview Tips
| Do | Don't |
|---|---|
| ✅ Ask clarifying questions | ❌ Jump into design immediately |
| ✅ Think out loud | ❌ Stay silent |
| ✅ Start simple, iterate | ❌ Over-engineer from start |
| ✅ Discuss trade-offs | ❌ Present only one solution |
| ✅ Draw diagrams | ❌ Just talk (visual helps) |
| ✅ Acknowledge unknowns | ❌ Fake knowledge |
Time Management
| Phase | Time | % of Interview |
|---|---|---|
| Requirements | 5 min | 10% |
| Estimation | 5 min | 10% |
| System Interface | 2 min | 5% |
| High-Level Design | 10 min | 20% |
| API Design | 3 min | 5% |
| Data Model | 5 min | 10% |
| Elaborate (deep dive) | 10 min | 20% |
| Discuss (trade-offs) | 5 min | 10% |
| Total | 45 min | ~90% of a 50-min slot (~5 min buffer) |
Common Pitfalls
- Not asking questions: Assuming requirements
- Over-engineering: Adding unnecessary complexity
- Ignoring scale: Not discussing sharding/caching
- No trade-offs: Only presenting one approach
- Weak communication: Not explaining thought process
Quick Reference
RESHADED:
- Requirements - Ask clarifying questions (5 min)
- Estimation - Back-of-envelope math (5 min)
- System Interface - Define APIs (2 min)
- High-Level Design - Draw boxes and arrows (10 min)
- API Design - Detail critical APIs (3 min)
- Data Model - Database schema (5 min)
- Elaborate - Deep dive 1-2 components (10 min)
- Discuss - Trade-offs, bottlenecks (5 min)
Next: Design URL Shortener - Classic starter problem.