Guest Post: Beginner’s Guide to Terraform AWS Compute (Part 2)

Raphael Socher
FAUN — Developer Community 🐾
7 min read · Mar 14, 2021


Photo by Guillaume Lebelt on Unsplash

Welcome back to our series on Terraform AWS.

In case you missed our previous post, we went over the basics of spinning up your infrastructure using Terraform AWS. We started off by creating a virtual machine — aka a Terraform EC2 instance — and allowed HTTP access to it by defining a security group.

In this post, we'll go over how to create Terraform EC2 instances inside a VPC and how to make them highly available by creating a Terraform AWS load balancer. If you haven't read the first part, we highly recommend doing so before proceeding with the steps here.

Note: This is a guest post by Sumeet Ninawe from Let's Do Tech. Sumeet has experience building multi-cloud management platforms, where he used Terraform to orchestrate customer deployments across major cloud providers such as AWS, Microsoft Azure, and GCP. You can find him on GitHub, Twitter, and his website.

To follow along on GitHub, check out this link: Terraform AWS Compute: GitHub Repo.

Interested in learning more about Terraform? Join our Slack community to connect with DevOps experts and continue the conversation.

In this article, we’ll go over:

  • Step 1: Creating a Terraform VPC for AWS
  • Step 2: Provisioning — How to Install a Web Server
  • Step 3: What to Do If a Terraform EC2 Instance Goes Down
  • Step 4: How to Test Availability

Step 1: Creating a Terraform VPC for AWS

It's best practice to create our resources within a VPC. For the example we're working with, let's create a basic VPC and use it throughout. We'll make use of Terraform's AWS VPC module.

To do so, add the following code to your main.tf file:
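The exact code is in the linked GitHub repo; a minimal sketch using the terraform-aws-modules/vpc/aws registry module looks roughly like this (the variable names vpc_cidr, azs, and public_subnet_cidrs are illustrative):

```hcl
# Sketch of the VPC module configuration; pin a module version in real use.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = var.vpc_cidr                  # e.g. 10.0.0.0/16

  azs            = var.azs             # two availability zones
  public_subnets = var.public_subnet_cidrs

  tags = {
    Environment = "demo"
  }
}
```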

Declare the variables used above in your variables.tf file, and set their values in the variables.tfvars file:
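Sketches of both files, matching the illustrative variable names used above:

```hcl
# variables.tf: declarations for the values the VPC module consumes
variable "vpc_cidr" {
  description = "CIDR range for the VPC"
  type        = string
}

variable "azs" {
  description = "Availability zones to spread the subnets across"
  type        = list(string)
}

variable "public_subnet_cidrs" {
  description = "CIDR ranges for the public subnets, one per AZ"
  type        = list(string)
}
```

```hcl
# variables.tfvars: example values for the us-east-1 region
vpc_cidr            = "10.0.0.0/16"
azs                 = ["us-east-1a", "us-east-1b"]
public_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24"]
```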

Here, we're creating a VPC with the given CIDR range, and within that VPC we're creating two subnets across two availability zones.

Initialize Terraform again in this directory, since we've introduced a new VPC module into our code. Run terraform plan and terraform apply, and verify that a VPC named "my-vpc" has been created.
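For reference, the command sequence is as follows. The -var-file flag is needed here because Terraform only auto-loads terraform.tfvars and *.auto.tfvars files:

```bash
terraform init                               # downloads the newly added VPC module
terraform plan -var-file=variables.tfvars
terraform apply -var-file=variables.tfvars
```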

For more information on AWS VPCs, check out this post.

Step 2: Provisioning — How to Install a Web Server

In this step, we're going to install an Nginx web server using the user_data attribute. User data is used to run shell scripts when the server boots for the first time. When creating an EC2 instance in the AWS console, you can provide user data as plain text in the "Configure Instance Details" step.

If you’re looking for a brief review of Terraform provisioning, here’s a guide.

In our code, we pass user data to the EC2 instance by adding the user_data attribute, and we keep the script itself in a separate file. Create a new file named "install.tpl" in the same directory and add the script below to it.
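A minimal version of the script, assuming an Ubuntu-based AMI (adjust the package manager for other distributions):

```bash
#!/bin/bash
# install.tpl: update package lists, install Nginx, and start it on first boot
apt-get update -y
apt-get install -y nginx
systemctl start nginx
systemctl enable nginx
```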

This script updates the package repositories, installs Nginx, and starts it. In our main.tf file, we create a data source that reads the contents of this file so it can be used in the aws_instance resource block. Add the data source below to the main.tf file:
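A sketch using the template_file data source from the hashicorp/template provider (on newer Terraform versions, the built-in templatefile() function works as well):

```hcl
# Reads install.tpl so its contents can be passed to the instance as user data
data "template_file" "user_data" {
  template = file("${path.module}/install.tpl")
}
```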

Also, add the user_data attribute to the aws_instance resource block to use this data source's rendered script.
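Assuming the instance resource from Part 1 is named my_vm (use whatever name you chose), the addition looks like this:

```hcl
resource "aws_instance" "my_vm" {
  # ...ami, instance_type, and other arguments from Part 1...

  # Run the install script on first boot
  user_data = data.template_file.user_data.rendered
}
```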

If you've followed along with our previous post, we created a Security Group that opened HTTP access to our instance. We need to make one change to its configuration: associate it with our VPC. Add the line below to your Security Group resource block.
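Assuming the security group from Part 1 is named allow_http, the change is a single attribute:

```hcl
resource "aws_security_group" "allow_http" {
  # ...existing ingress/egress rules from Part 1...

  # Attach the security group to the VPC created by the module
  vpc_id = module.vpc.vpc_id
}
```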

Refer to the reference code on GitHub to make sure your configurations are correct.

Run terraform plan and terraform apply, and once they succeed, try to access the EC2 instance over HTTP using its public IP address. You should now see the Nginx home page in your browser.

Step 3: What to Do If a Terraform EC2 Instance Goes Down

Currently, we have one EC2 instance running in one AZ. Let’s say that, for some reason, the instance goes down. In this case, all the traffic being served by this instance will be dropped. It would be desirable to have a backup instance to handle the traffic, perhaps in a different AZ.

The answer to this problem is high availability. It's a broad term that covers many aspects of business continuity, but in our example let's implement a simple form of it: load balancing. In this step, we create an instance in each AZ and a load balancer to route HTTP traffic to both instances.

Modify the aws_instance resource block as below.
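A sketch of the updated block; var.ami_id and the resource name my_vm are placeholders for whatever you used in Part 1:

```hcl
resource "aws_instance" "my_vm" {
  count = length(var.azs)   # one instance per availability zone

  ami                    = var.ami_id
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.allow_http.id]

  # Pick a different public subnet for each instance
  subnet_id = element(module.vpc.public_subnets, count.index)

  # Install Nginx on first boot
  user_data = data.template_file.user_data.rendered

  tags = {
    Name = "my-vm-${count.index}"
  }
}
```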

Note that we've introduced the meta-argument count. Its value is based on the number of AZs; in our case, we're dealing with two AZs in the us-east-1 region, so two VMs will be created. We also want to spread these instances across subnets in different AZs, so we use the element() function to select a subnet based on the count index. Notice that we're also naming our EC2 instances dynamically based on the count index.

Next, let's set up a load balancer. Create one by adding the "aws_lb" block below to your main.tf file. The attributes in this block are quite straightforward.
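A sketch of an internet-facing application load balancer placed in the VPC's public subnets, reusing the security group from earlier:

```hcl
resource "aws_lb" "my_alb" {
  name               = "my-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.allow_http.id]
  subnets            = module.vpc.public_subnets
}
```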

This resource block only creates the load balancer; it does not add the target group or listeners. To create a target group, we use the "aws_lb_target_group" and "aws_lb_target_group_attachment" resource blocks shown below.

EC2 instances are registered as targets in the target group, and the load balancer routes request traffic to the target group via a listener. The target group is also responsible for checking the health of its registered instances. In the Terraform AWS code below, we create a target group and register our instances as targets. Add this code to the main.tf file.
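Roughly, using the resource names from the sketches above:

```hcl
resource "aws_lb_target_group" "my_tg" {
  name     = "my-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id

  health_check {
    path     = "/"
    protocol = "HTTP"
  }
}

# Register each EC2 instance created above as a target
resource "aws_lb_target_group_attachment" "my_tg_attachment" {
  count            = length(aws_instance.my_vm)
  target_group_arn = aws_lb_target_group.my_tg.arn
  target_id        = aws_instance.my_vm[count.index].id
  port             = 80
}
```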

Finally, we need to create a listener using the code below.
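The listener forwards HTTP traffic arriving at the load balancer on port 80 to the target group:

```hcl
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.my_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.my_tg.arn
  }
}
```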

Lastly, let's output the load balancer's DNS name. Add the lines below to the variables.tf file.
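A minimal output block, using the load balancer name from the sketch above:

```hcl
output "alb_dns_name" {
  description = "Public DNS name of the load balancer"
  value       = aws_lb.my_alb.dns_name
}
```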

Save the files and run terraform plan and terraform apply. When all of the resources are created, test whether you can access the Nginx home page via the load balancer's DNS name. If you've followed along, it should be accessible.

Step 4: How to Test Availability

The entire reason for creating a load balancer was to improve availability in case one machine fails. Let's test this by stopping one of the EC2 instances. Navigate to EC2 instances in your AWS console and stop one of the VMs. Try to access the load balancer's DNS name in your browser; it should still work. Do a couple of tests and play around a bit. Once done, don't forget to run terraform destroy.

We hope you’re finding our blog series on Terraform AWS to be helpful. Let us know your thoughts.

We recently published an article on our new tool, InfraSketch. Check out the article (a Faun exclusive) to learn more.

👋 Join FAUN today and receive similar stories each week in your inbox! Get your weekly dose of the must-read tech stories, news, and tutorials.

Follow us on Twitter 🐦 and Facebook 👥 and Instagram 📷 and join our Facebook and Linkedin Groups 💬




Founder at InfraCode — customizable, reliable Infrastructure as Code tools. Simplifying the lives of DevOps professionals. www.infrastructurecode.io