Load balancing is the methodical distribution of network traffic across multiple clustered servers. A load balancer, often part of an application delivery controller (ADC), sits between client devices and backend servers, receiving incoming requests and distributing them to any available server capable of fulfilling them.

Load balancers can also detect the health of backend servers and avoid sending traffic to servers that are unable to fulfill requests. This allows organizations to programmatically scale their modern applications to meet demand.

In this lab, we will demonstrate load balancing of an application distributed across two web servers.

AWS Load Balancing

Amazon Elastic Compute Cloud (EC2) provides scalable computing capacity in the AWS Cloud. Using EC2 eliminates the need to invest in server and networking hardware up front, so applications can be developed and deployed faster. AWS offers Elastic Load Balancing (ELB) to automatically distribute incoming application traffic across multiple targets in one or more Availability Zones.

An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs offer the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

Amazon will bill you for any applicable AWS resources and time used that are not covered in the AWS Free Tier.

Part 1: Launch an AWS EC2 Instance

  1. In the AWS Management Console page, click Services then EC2. This is the Amazon EC2 dashboard. Click on Launch Instance then Launch Instance.
  2. In Choose an Amazon Machine Image (AMI), scroll through and review the 40 available default AMIs. An Amazon Machine Image (AMI) provides the template information required to launch an instance, which is a virtual server in the cloud. Select the free tier eligible Ubuntu Server 20.04 LTS (HVM).
  3. In Choose an Instance Type, ensure that the instance type is set to t2.micro, which is Free tier eligible. Click Next: Configure Instance Details.
  4. Review the settings in Configure Instance Details, but note that making some changes may incur additional charges.
  5. In Subnet, select the default subnet for us-east-1a.
  6. Under Advanced Details, in User data, enter the script below to install Apache web server, configure and activate the server, and create a simple web page.
     #!/bin/bash
     # Install the Apache web server and unzip
     sudo apt-get update -y
     sudo apt-get install apache2 unzip -y
    
     # Build a simple page that reports where this instance is running,
     # using the EC2 instance metadata service at 169.254.169.254.
     echo "<html><center><body bgcolor='black' text='#39ff14' style='font-family: Arial'>\
     <h1>Load Balancer Demo</h1><h3>Availability Zone: " > /var/www/html/index.html
     curl http://169.254.169.254/latest/meta-data/placement/availability-zone >> /var/www/html/index.html
     echo '</h3> <h3> Instance Id:' >> /var/www/html/index.html
     curl http://169.254.169.254/latest/meta-data/instance-id >> /var/www/html/index.html
     echo '</h3> <h3> Public IP:' >> /var/www/html/index.html
     curl http://169.254.169.254/latest/meta-data/public-ipv4 >> /var/www/html/index.html
     echo '</h3> <h3> Local IP:' >> /var/www/html/index.html
     curl http://169.254.169.254/latest/meta-data/local-ipv4 >> /var/www/html/index.html
     echo '</h3></body></center></html>' >> /var/www/html/index.html
    
  7. Click Next: Add Storage.
  8. In Add Storage, launch the instance with the default 8 GB volume. Click Next: Add Tags.
  9. It is best practice to tag your instances. In Add Tags, click Add Tag. Enter “Name” as the Key and “web-server-1” as the Value. Click Add another tag. Enter “Owner” as the Key and your initials as the Value. Click Next: Configure Security Group.
  10. In Configure Security Group, keep the default setting of Create a new security group. Name it “web-server-sg”. Rules with a source of 0.0.0.0/0 allow all IP addresses to access the instance in that security group over the specified port. Since we will not log in to this instance and it will be exposed to the Internet, delete the SSH rule by clicking the X. Click Add Rule and create a rule for Type HTTP from Source Anywhere-IPv4. Click Review and Launch.
  11. In Review Instance Launch, click Launch. Click Choose an existing key pair then select Proceed without a key pair. Check the acknowledgement that you will not be able to log in to the instance. Click Launch instances.
  12. Click View Instances. The instance will go through several steps as it boots and initializes.
  13. Check the checkbox for web-server-1 and click Actions, then Image and templates, then Launch more like this as a shortcut to repeat the above steps. Name it “web-server-2”.
  14. Edit the instance configuration so that web-server-2 launches in the us-east-1b subnet. When configured, launch the instance as before.
  15. Click View Instances. Copy the public IPv4 address of each web server. Visit each IP in a browser. You may have to wait a few minutes first.
  16. Please write up a paragraph answering the following questions.
    1. What is each line in the bootstrap script doing? You may refer to the Retrieve instance metadata documentation for the curl commands.
    2. What webpages are you presented with in step 15? What do these represent?
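
As an aside, the instance configured above can also be launched from the AWS CLI. This is a minimal sketch, assuming the user data script has been saved locally as userdata.sh; the AMI, subnet, and security group IDs (and the Owner tag value) are placeholders you would replace with your own.

     # Launch web-server-1 with the bootstrap script as user data (sketch, not a lab step).
     # ami-..., subnet-..., and sg-... are placeholders for the Ubuntu 20.04 AMI,
     # the default us-east-1a subnet, and the web-server-sg security group.
     # The default subnet auto-assigns a public IPv4 address.
     aws ec2 run-instances \
       --image-id ami-00000000000000000 \
       --instance-type t2.micro \
       --subnet-id subnet-00000000 \
       --security-group-ids sg-00000000000000000 \
       --user-data file://userdata.sh \
       --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web-server-1},{Key=Owner,Value=ABC}]'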

Part 2: Create an Elastic Load Balancer

  1. Under Load Balancing, click Load Balancers. Click Create Load Balancer.
  2. From Select load balancer type, create an Application Load Balancer.
  3. Expand the How Application Load Balancers work section and review it.
  4. Give the load balancer a Name of “web-server-alb”. Ensure that it is Internet-facing and IPv4.
  5. Under Network mapping, choose the Mappings for us-east-1a and us-east-1b, where you created the web servers.
  6. Under Security groups, click Create new security group. Create a security group “alb-sg” with a rule to allow Type HTTP from Source Anywhere-IPv4. Give it a Description. Click Create security group.
  7. Back at the Create Application Load Balancer page, under Security groups, select alb-sg. You may have to first click the refresh arrow to the right to update the list of security groups. Remove any other security groups.
  8. Under Listeners and routing, click Create target group.
  9. Choose a target type of Instances. Name the target group “web-server-tg”. Notice the Health checks section. Click Next.
  10. Select the web-server-1 and web-server-2 instances. Click Include as pending below. The instances will move to Targets. Click Create target group.
  11. Select the web-server-tg target group. Notice the health status of the registered targets.
  12. Back at the Create Application Load Balancer page, under Default action, select web-server-tg. You may have to first click the rotating arrow that is located to the right to refresh the list of target groups.
  13. Click Create load balancer. Click View load balancer. It may take some time for the load balancer to be provisioned and become active.
  14. Copy the DNS name for the load balancer and browse to it in your browser. Refresh the webpage a few dozen times and notice any changes (a command-line alternative is sketched after this part's questions).
  15. Please write up a paragraph answering the following questions.
    1. What does the Health checks section in step 9 of this part configure? How is it used?
    2. What changes do you observe in step 14? What does this imply, assuming the servers and paths are fully healthy?
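
As a command-line alternative to refreshing the browser in step 14, you can request the page repeatedly and print the Availability Zone that served each response. This is a minimal sketch; the hostname is a placeholder for your load balancer's DNS name, and it assumes both targets are healthy.

     # Send 20 requests to the ALB and show which Availability Zone answered each one.
     ALB=web-server-alb-1234567890.us-east-1.elb.amazonaws.com   # placeholder DNS name
     for i in $(seq 1 20); do
       curl -s "http://$ALB/" | grep -o 'us-east-1[ab]'
     done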

Part 3: Program Network-wide Behavior

  1. Refresh the webpages for the public IP addresses and the load balancer. Notice that they are all reachable. We will now edit the security group inbound rules so that the web servers accept HTTP traffic only from the load balancer.
  2. In the AWS console, under Network & Security, click Security Groups. Select the web-server-sg security group. Click Actions then Edit inbound rules.
  3. Delete the HTTP inbound rules and instead Add rule to allow Type HTTP from Source “alb-sg” security group. Click Save rules. (A CLI sketch of this rule change appears after this part's questions.)
  4. Repeat step 1 of this part (refresh the webpages for the public IP addresses and the load balancer). Notice any differences in the behavior.
  5. Refresh the load balancer webpage a few dozen times. Notice any changes.
  6. In AWS, under Instances, click Instances. Select web-server-1 and click Instance state then Stop instance then Stop. This will shut down the web server operating system.
  7. Under Load balancing, click Target groups. Select the web-server-tg target group. Notice the health status of the registered targets.
  8. Repeat step 5 of this part (refresh the load balancer webpage).
  9. Clean up your AWS environment of all the resources you created for this lab. Terminate the lab instances, delete the load balancer, delete the target group, delete the web-server-sg security group, and delete the alb-sg security group.
  10. Please write up a paragraph answering the following questions.
    1. What is an application load balancer?
    2. What do load balancers and security groups represent?
    3. What match+action rules are being used at the security group? In step 3 of this part, what does ::/0 represent?
    4. What match+action rules do you expect are being used at the load balancer?
    5. What happened in step 4 of this part? About how long after step 3 did it take for the network-wide behavior to change? What happened in step 7?
    6. Draw an architecture diagram of the application including all hosts and resources. You may use Google Drawings or Visio for example, but you must label the resources and use the architecture icons. You may refer to AWS Architecture Icons documentation and Lucidchart AWS diagrams documentation.
    7. How would you improve this architecture for security, resiliency, fault tolerance, etc.?
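
For reference, the security group change in step 3 can also be made with the AWS CLI, which makes the match+action nature of the rules explicit: the match is TCP port 80 from a given source, and the action is allow. The group IDs below are placeholders for your web-server-sg and alb-sg IDs.

     # Remove the open HTTP rule, then allow HTTP only from the load balancer's security group.
     # sg-0aaa... and sg-0bbb... are placeholders for web-server-sg and alb-sg.
     aws ec2 revoke-security-group-ingress \
       --group-id sg-0aaaaaaaaaaaaaaaa --protocol tcp --port 80 --cidr 0.0.0.0/0
     aws ec2 authorize-security-group-ingress \
       --group-id sg-0aaaaaaaaaaaaaaaa --protocol tcp --port 80 --source-group sg-0bbbbbbbbbbbbbbbb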

GCP Load Balancing

Google Cloud Platform (GCP) provides scalable computing capacity in the cloud. Using GCP eliminates the need to invest in server and networking hardware up front, so applications can be developed and deployed faster. GCP offers Cloud Load Balancing to automatically distribute incoming application traffic across multiple targets in one or more regions.

A region is a specific geographic location where you can run your resources. Each region has one or more zones.

Google will bill you for any applicable GCP resources and time used that are not covered in the GCP Free Tier.

Part 1: Launch a GCP VM Instance

  1. In the GCP Console, click Navigation menu then Compute Engine then VM instances. This is the Compute Engine dashboard.
  2. Click Create.
  3. In Boot disk, select Ubuntu 20.04 LTS.
  4. In Firewall, select Allow HTTP traffic.
  5. In Management, under Networking, select Default for network interface and ephemeral internal IP for Primary internal IP.
  6. In Management, under Cloud API access scopes, select Allow full access to all Cloud APIs. To have the instance serve the same demo page as in the AWS section, also paste a startup script under Automation (a GCP-adapted sketch appears at the end of this part).
  7. Click Create.
  8. Repeat steps 2 through 7 to create an identical VM instance.
  9. Click Navigation menu then VPC network then Firewall.
  10. Click Create firewall rule.
  11. Enter “web-server-fw” as the Name. Choose Allow HTTP traffic for the Targets. Choose IP ranges for the Source filter. Enter “0.0.0.0/0” for the Source IP ranges. Click Create.
  12. In the VM instances page, copy the External IP for each VM instance. Visit each IP in a browser. You may have to wait a few minutes first.
  13. Please write up a paragraph answering the following questions.
    1. What is each line in the bootstrap script doing?
    2. What webpages are you presented with in step 12? What do these represent?
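
Question 1 above assumes the instances were bootstrapped with a startup script, pasted under Automation as noted in step 6. GCP exposes instance details through its own metadata server rather than the EC2 endpoint used earlier, so a GCP-adapted version of the AWS script might look like the sketch below. The metadata paths are GCP's documented ones; the page layout simply mirrors the AWS demo page.

     #!/bin/bash
     # Sketch of the AWS bootstrap script adapted for GCP's metadata server.
     # GCP requires the Metadata-Flavor header on metadata requests.
     MD="http://metadata.google.internal/computeMetadata/v1/instance"
     H="Metadata-Flavor: Google"

     apt-get update -y
     apt-get install -y apache2 unzip

     echo "<html><center><body bgcolor='black' text='#39ff14' style='font-family: Arial'>" > /var/www/html/index.html
     echo "<h1>Load Balancer Demo</h1><h3>Zone: " >> /var/www/html/index.html
     curl -s -H "$H" "$MD/zone" >> /var/www/html/index.html
     echo '</h3> <h3> Instance Id:' >> /var/www/html/index.html
     curl -s -H "$H" "$MD/id" >> /var/www/html/index.html
     echo '</h3> <h3> External IP:' >> /var/www/html/index.html
     curl -s -H "$H" "$MD/network-interfaces/0/access-configs/0/external-ip" >> /var/www/html/index.html
     echo '</h3> <h3> Internal IP:' >> /var/www/html/index.html
     curl -s -H "$H" "$MD/network-interfaces/0/ip" >> /var/www/html/index.html
     echo '</h3></body></center></html>' >> /var/www/html/index.html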

Part 2: Create a Network Load Balancer

  1. Under Network Services, click Load balancing then Create a load balancer.
  2. In What are you serving?, choose HTTP(S) traffic.
  3. In Which protocol(s) do you need?, choose TCP.
  4. In Configure the backend, click Backend configuration.
  5. In Backend configuration, click New backend.
  6. Enter “web-server-backend” as the Name. Enter the External IPs for the 2 VM instances. Click Create.
  7. In Backend configuration, click Create.
  8. In Configure the frontend, click Frontend configuration.
  9. In Frontend configuration, click New frontend.
  10. Enter “web-server-frontend” as the Name. Choose TCP Proxy. Leave the IP address as Ephemeral. Enter “80” as the Port number.
  11. In Frontend configuration, click Create.
  12. In Review and finalize, review the configuration and click Create.
  13. In Load Balancers, click the Name of the new load balancer. Copy the IP address and visit it in a browser. Refresh the webpage a few dozen times. Notice any changes.
  14. Please write up a paragraph answering the following questions.
    1. What are the differences between AWS and GCP Load Balancing architecture?
    2. How would you improve this architecture for security, resiliency, fault tolerance, etc.?
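
If you prefer the command line for step 13, the load balancer's external address can be read from its forwarding rule with gcloud and then polled with curl. This is a sketch: the IP address is a placeholder, and the grep pattern assumes the page built by the startup-script sketch in Part 1.

     # List forwarding rules to find the load balancer's external IP address.
     gcloud compute forwarding-rules list

     # Poll the load balancer; replace 203.0.113.10 with the IP address shown above.
     for i in $(seq 1 20); do
       curl -s http://203.0.113.10/ | grep -o 'zones/[a-z0-9-]*'
     done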

Azure Load Balancing

Microsoft Azure provides scalable computing capacity in the cloud. Using Azure eliminates the need to invest in server and networking hardware up front, so applications can be developed and deployed faster. Azure offers Load Balancer to automatically distribute incoming application traffic across multiple targets in one or more regions.

A region is a specific geographic location where you can run your resources. Each region has one or more availability zones.

Microsoft will bill you for any applicable Azure resources and time used that are not covered in the Azure Free Services.

Part 1: Launch an Azure VM Instance

  1. In the Azure portal, click + Create a resource then Virtual machine. This is the Azure VM dashboard.
  2. In Basics, select Ubuntu Server 20.04 LTS as the OS disk image. Select an appropriate subscription and resource group. Name the virtual machine web-server-1. Choose a size appropriate to your needs. Create an administrator account.
  3. In Disks, choose a managed disk.
  4. In Networking, choose a virtual network and subnet. Ensure that Public IP is set to Yes, and allow HTTP (80) in the inbound port rules so the web page will be reachable.
  5. In Management, leave the defaults. To have the VM serve the same demo page as in the AWS section, paste a bootstrap script into Custom data under the Advanced tab (an Azure-adapted sketch appears at the end of this part).
  6. Review and click Create.
  7. Repeat steps 1-6 to launch another virtual machine named web-server-2.
  8. In the Virtual Machines page, copy the public IP for each web server. Visit each IP in a browser. You may have to wait a few minutes first.
  9. Please write up a paragraph answering the following questions.
    1. What is each line in the bootstrap script doing?
    2. What webpages are you presented with in step 8? What do these represent?
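
As with GCP, question 1 assumes a bootstrap script, pasted here into Custom data as noted in step 5. On Azure, instance details come from the Azure Instance Metadata Service (IMDS), so an Azure-adapted version of the AWS script might look like the sketch below. The IMDS paths are standard; the page layout simply mirrors the AWS demo page.

     #!/bin/bash
     # Sketch of the AWS bootstrap script adapted for the Azure Instance Metadata Service.
     # IMDS requires the Metadata:true header and an api-version query parameter.
     MD="http://169.254.169.254/metadata/instance"
     API="api-version=2021-02-01&format=text"

     apt-get update -y
     apt-get install -y apache2 unzip

     echo "<html><center><body bgcolor='black' text='#39ff14' style='font-family: Arial'>" > /var/www/html/index.html
     echo "<h1>Load Balancer Demo</h1><h3>Region: " >> /var/www/html/index.html
     curl -s -H Metadata:true "$MD/compute/location?$API" >> /var/www/html/index.html
     echo '</h3> <h3> VM Id:' >> /var/www/html/index.html
     curl -s -H Metadata:true "$MD/compute/vmId?$API" >> /var/www/html/index.html
     echo '</h3> <h3> Private IP:' >> /var/www/html/index.html
     curl -s -H Metadata:true "$MD/network/interface/0/ipv4/ipAddress/0/privateIpAddress?$API" >> /var/www/html/index.html
     echo '</h3></body></center></html>' >> /var/www/html/index.html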

Part 2: Create a Load Balancer

  1. Under Services, click Load balancers then + Add.
  2. In Basics, name the load balancer “web-server-lb”. Choose the appropriate subscription and resource group. Choose the region closest to your virtual machines.
  3. In Backend pool, click Add.
  4. Name the backend pool “web-server-backend”. Choose Virtual machines as the target type. Select web-server-1 and web-server-2 as the virtual machines.
  5. In Health probes, name the health probe “web-server-health-probe”. Choose the appropriate protocol and port.
  6. In Load balancing rules, name the rule “web-server-lbr”. Choose the appropriate frontend IP and port. Choose the backend pool and health probe created above.
  7. In Tags, add any relevant tags.
  8. Review and click Create.
  9. In the Load Balancers page, copy the public IP for the load balancer. Visit the IP in a browser. Refresh the webpage a few dozen times. Notice any changes.
  10. Please write up a paragraph answering the following questions.
    1. What are the differences between AWS and Azure Load Balancing architecture?
    2. How would you improve this architecture for security, resiliency, fault tolerance, etc.?
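
To script step 9, the load balancer's public IP can be looked up with the Azure CLI and polled with curl. The resource group, public IP name, and address below are placeholders, and the grep pattern assumes the page built by the custom-data sketch in Part 1. Note that Azure Load Balancer distributes new flows using a five-tuple hash by default, so each new connection may land on either backend.

     # Look up the load balancer's public IP address (resource names are placeholders).
     az network public-ip show \
       --resource-group my-resource-group \
       --name web-server-lb-ip \
       --query ipAddress --output tsv

     # Poll the load balancer; replace 203.0.113.20 with the address printed above.
     # The 36-character GUID printed is the vmId from the demo page.
     for i in $(seq 1 20); do
       curl -s http://203.0.113.20/ | grep -oE '[0-9a-f-]{36}'
     done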

More Info