Multi-Tiered Netflix App Design

Last Updated: November 18, 2021

Back with another system design. For this project I got a little help from an online forum with the idea: design and implement AWS services that come together to mimic a video-streaming system like Netflix or YouTube.

What are we trying to achieve by using VPCs

When we start using AWS, we probably don't want all of our servers, services, etc. thrown into one big melting pot. In that kind of ecosystem:

  • Everything shares the same network
  • Everything can step on everything else's toes
  • Resource management devolves into an army of naming conventions

Now, this can work if you're just using S3, a random EC2 instance, or experimenting. But when we go to build a serious cloud infrastructure, whether for data, for an app, etc., this Wild West ecosystem isn't going to cut it.

That still doesn't answer the question - why do we need a VPC?

The answer lies in control: control over how resources are organized, control over security, control over traffic between services, and control over keeping architectures separate from each other.

This is how a VPC helps us. Without one, all of our services and resources end up more or less anywhere on the same network (granted, AWS places things into a default VPC, but everything there is public). So instead of using an environment everyone shares, let's provision our own.

Multi-Tier Architecture of the VPC

For this project I used a common infrastructure pattern: the three-tier architecture. This pattern divides the infrastructure into three separate layers: one public and two private. The idea is that the public layer acts as a shield for the private layers. Anything in the public layer is publicly accessible, while anything in the private layers is only reachable from inside the network.

In addition to dividing the network into three separate layers, we also want some kind of high availability. AWS lets you achieve this by distributing your application across multiple Availability Zones. Each Availability Zone is made up of one or more physically separate data centers within a region.
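To make the layout concrete, here is a rough sketch of how the subnet plan might be drawn up: one subnet per tier per availability zone, carved out of a VPC CIDR block. The CIDR range and AZ names are hypothetical, not from the original design.

```python
import ipaddress

# Hypothetical VPC CIDR block; real values depend on your network plan.
VPC_CIDR = ipaddress.ip_network("10.0.0.0/16")

TIERS = ["public", "application", "database"]
AZS = ["us-east-1a", "us-east-1b"]  # two AZs for high availability

def plan_subnets(vpc, tiers, azs):
    """Assign one /24 subnet to each (tier, AZ) pair."""
    blocks = vpc.subnets(new_prefix=24)  # generator of /24 subnets
    plan = {}
    for tier in tiers:
        for az in azs:
            plan[(tier, az)] = next(blocks)
    return plan

plan = plan_subnets(VPC_CIDR, TIERS, AZS)
```

Each tier then exists in every zone, so losing one zone never takes out a whole layer.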

The public (top) layer will host an internet-facing Elastic Load Balancer (ELB) and a bastion host. The ELB is the entry point for the application and directs traffic to the application servers. Note that the ELB needs to be available in all three availability zones; this allows for high availability and redundancy in case something happens to an entire availability zone. Our diagram only shows one instance of the ELB, and you will only see one instance in the AWS console. Behind the scenes, however, AWS provisions multiple ELB nodes based on which availability zones have EC2 instances behind that load balancer. The bastion host (also known as a jump host) is the server that lets you connect via SSH to the application servers (or any other servers in the private subnets).
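Hopping through the bastion is typically done with SSH's ProxyJump (`-J`) option. As a small illustration (the hostname, user, and private IP below are made up), a helper that builds the command:

```python
def bastion_ssh_command(bastion_host, target_ip, user="ec2-user"):
    """Build an ssh command that jumps through the bastion host
    (ProxyJump) to reach a server in a private subnet."""
    return ["ssh", "-J", f"{user}@{bastion_host}", f"{user}@{target_ip}"]

cmd = bastion_ssh_command("bastion.example.com", "10.0.2.15")
# equivalent to: ssh -J ec2-user@bastion.example.com ec2-user@10.0.2.15
```

The private servers never need public IPs; only the bastion is exposed, which keeps the SSH attack surface to a single hardened machine.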


Multi-Tier Architecture of the VPC cont.

The second layer is the application layer; this is where my application servers live. In this case I have wrapped the application servers in an Auto Scaling group. This allows the application to scale up when more servers are needed and to recover if one of the availability zones goes out of service. When an entire availability zone fails, the Auto Scaling group launches replacement instances in a healthy availability zone, and the load balancer stops routing traffic to the failed one.

The third and last layer is the database layer. This is where the databases live. The only way to access these databases is by connecting to them from the application layer. In our case we have decided to use Amazon's Relational Database Service (RDS), a managed database service. One advantage of using RDS is that we can have a failover database instance in a separate availability zone. In addition, we can have one or more read replicas to take some of the load off the main database.
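One common way for the application layer to use those read replicas is to route queries by type: writes to the primary, reads round-robin across the replicas. A minimal sketch (the endpoint names are hypothetical; real ones come from the RDS console or API):

```python
import itertools

# Hypothetical endpoints standing in for real RDS hostnames.
PRIMARY = "primary.db.internal"
REPLICAS = itertools.cycle(["replica-1.db.internal", "replica-2.db.internal"])

def endpoint_for(query):
    """Send writes to the primary; rotate reads across replicas."""
    if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return PRIMARY
    return next(REPLICAS)

endpoint_for("SELECT title FROM movies")      # goes to a replica
endpoint_for("INSERT INTO views VALUES (1)")  # goes to the primary
```

Replication from the primary to the replicas is asynchronous, so reads served by a replica can be slightly stale; that tradeoff is usually fine for browse-heavy traffic like a video catalog.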


So when customers type the site's URL, they are routed to the Elastic Load Balancer that sits in our public subnet. The load balancer then directs traffic to one of our provisioned servers in the private application subnet, which is where our web pages are served from.

Our RDS primary and replica instances sit in the private database subnets and handle reads and writes for our customers. We also have a Lambda function that backs up and archives our EC2 instances on a timed schedule, triggered by CloudWatch.
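The backup Lambda might look something like the sketch below. The `Backup=true` tag filter and AMI naming scheme are assumptions of mine, not from the original design; the `ec2` parameter exists so the handler can be exercised without AWS credentials (inside Lambda it falls back to the real boto3 client).

```python
from datetime import date

def backup_name(instance_id, today=None):
    """Deterministic AMI name for a scheduled backup."""
    today = today or date.today()
    return f"backup-{instance_id}-{today.isoformat()}"

def handler(event, context, ec2=None):
    """Runs on a CloudWatch schedule: create an AMI for every
    EC2 instance tagged Backup=true."""
    if ec2 is None:  # inside Lambda, use the real client
        import boto3
        ec2 = boto3.client("ec2")
    created = []
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            name = backup_name(inst["InstanceId"])
            ec2.create_image(InstanceId=inst["InstanceId"], Name=name)
            created.append(name)
    return created
```

The CloudWatch Events (EventBridge) rule supplies the schedule, e.g. a daily cron expression, and invokes the handler with an event payload the function can ignore.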