It is always a good idea to estimate the scale of the system we're going to design. This will also help later when we focus on scaling, partitioning, load balancing, and caching.
What scale is expected from the system (e.g., number of new tweets, number of tweet views, number of timeline generations per sec., etc.)?
How much storage will we need? We will have different numbers if users can have photos and videos in their tweets.
What network bandwidth usage are we expecting? This will be crucial in deciding how we will manage traffic and balance load between servers.
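These estimates can be worked out with quick back-of-envelope arithmetic. A minimal sketch, using purely illustrative numbers (the daily tweet count and average tweet size below are assumptions, not real Twitter figures):

```python
# Back-of-envelope capacity estimate. All input numbers are
# illustrative assumptions for the sake of the exercise.

SECONDS_PER_DAY = 24 * 60 * 60

new_tweets_per_day = 100_000_000   # assumed
avg_tweet_size_bytes = 300         # ~280 chars of text plus metadata (assumed)

# Write throughput the system must sustain.
tweets_per_second = new_tweets_per_day / SECONDS_PER_DAY

# Raw text storage growth, ignoring photos/videos and replication.
storage_per_day_gb = new_tweets_per_day * avg_tweet_size_bytes / 10**9
storage_5_years_tb = storage_per_day_gb * 365 * 5 / 1000

print(f"~{tweets_per_second:,.0f} new tweets/sec")
print(f"~{storage_per_day_gb:,.0f} GB/day of tweet text")
print(f"~{storage_5_years_tb:,.1f} TB over five years")
```

Note that photos, videos, and replication would multiply these storage numbers, which is exactly why the question above about media matters.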
4. Modelling Data
Defining the data model early will clarify how data will flow among different components of the system. Later, it will guide decisions about data partitioning and management. The candidate should be able to identify the various entities of the system, how they will interact with each other, and different aspects of data management such as storage, transportation, encryption, etc.
UserID, Name, Email, DoB, CreationDate, LastLogin, etc.
TweetID, Content, TweetLocation, NumberOfLikes, TimeStamp, etc.
UserID, TweetID, TimeStamp
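These entities can be sketched as simple record types. A minimal illustration using Python dataclasses (field types are assumptions, and the name of the third entity is assumed to be a user-to-tweet relation such as a favorite):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class User:
    user_id: int
    name: str
    email: str
    dob: datetime
    creation_date: datetime
    last_login: datetime

@dataclass
class Tweet:
    tweet_id: int
    content: str            # tweet text, bounded length
    tweet_location: str
    number_of_likes: int
    timestamp: datetime

@dataclass
class UserTweetLink:
    # A user-to-tweet relation (e.g., a favorite); name assumed,
    # fields taken from the entity list above.
    user_id: int
    tweet_id: int
    timestamp: datetime
```

In a real design these would map onto tables or column families in whichever database we choose below.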
Which database system should we use? Will a NoSQL store like Cassandra best fit our needs, or should we use a MySQL-like solution? What kind of block storage should we use for photos and videos?
5. High-Level Design
Draw a block diagram with 5-6 boxes representing the core components of our system. We should identify enough components to solve the actual problem end to end.
For Twitter, at a high level, we will need multiple application servers to serve all the read/write requests, with load balancers in front of them for traffic distribution. If we assume we will have a lot more read traffic than write traffic, we can decide to have separate servers for handling these scenarios. On the backend, we need an efficient database that can store all the tweets and support a huge number of reads. We will also need a distributed file storage system for storing photos and videos.
# Twitter layout
Clients => Load Balancer => Server <=> Database
                         => Server <=> File System
6. Detailed Design
Dig deeper into two or three components; the interviewer's feedback should always guide us on which parts of the system need further discussion. For each component, we should be able to:
Present different approaches and their pros and cons.
Explain why we prefer one approach over the other.
Remember, there is no single right answer; the only important thing is to consider trade-offs between different options while keeping system constraints in mind.
Example questions to think about
Since we will be storing a massive amount of data, how should we partition our data to distribute it across multiple databases? Should we try to store all of a user's data on the same database? What issues could that cause?
How will we handle hot users who tweet a lot or follow lots of people?
Since users' timelines will contain the most recent (and relevant) tweets, should we try to store our data in a way that is optimized for scanning the latest tweets?
How much and at which layer should we introduce cache to speed things up?
What components need better load balancing?
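To make the partitioning question concrete, one common starting point is hashing on UserID or TweetID. A minimal sketch, assuming simple modulo hashing over a fixed shard count (the shard count and function names are illustrative; a production system would more likely use consistent hashing to allow resharding):

```python
NUM_SHARDS = 16  # assumed number of database shards

def shard_for_user(user_id: int) -> int:
    """Partition by UserID: keeps all of one user's data on one shard,
    making their queries cheap, but a hot user can overload that shard."""
    return user_id % NUM_SHARDS

def shard_for_tweet(tweet_id: int) -> int:
    """Partition by TweetID: spreads even a hot user's tweets across
    shards, but building a timeline now requires querying every shard."""
    return tweet_id % NUM_SHARDS
```

The two functions embody the trade-off in the questions above: user-based partitioning optimizes per-user reads but suffers from hot users, while tweet-based partitioning balances load at the cost of fan-out reads.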
7. Reduce/Resolve/Identify Bottlenecks
Try to discuss as many bottlenecks as possible and different approaches to mitigate them.
Is there any single point of failure in our system? What are we doing to mitigate it?
Do we have enough replicas of the data so that, if we lose a few servers, we can still serve our users?
Similarly, do we have enough copies of different services running such that a few failures will not cause total system shutdown?
How are we monitoring the performance of our service? Do we get alerts whenever critical components fail or their performance degrades?