Beginner’s Guide to AWS Storage Using Terraform

Raphael Socher
FAUN — Developer Community 🐾
Mar 28, 2021


Storage is one of the core aspects of cloud computing. Data has become more important than ever, and it is crucial to choose the right storage for it. Cloud providers like AWS offer multiple options to store data in various formats and classes, with a varying range of availability, durability, and accessibility.

In this post, we will provision 3 types of storage on AWS using Terraform. The reference code for this blog can be found in this repo, which contains 3 subdirectories with separate code for S3, EBS, and RDS.

The post below is divided into 3 sections:

1. Provisioning Terraform AWS S3 buckets to create a static website

2. Provisioning EBS volume and attaching it to a Terraform EC2 instance

3. Provisioning RDS instances

The instructions that follow make use of minimal features, both from Terraform and AWS, with the intention of leaving room for you to try out more options on your own.

Each of the subdirectories contains a provider.tf file, which declares the Terraform AWS provider and the intended region. The subdirectories also have variables.tf and variables.tfvars files. We won't go into the details of these, as their purpose is already covered in our previous article, Terraform for AWS Compute.
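For reference, a minimal provider.tf could look like the sketch below. The region variable name here is an assumption and may differ from the repo's actual code:

provider "aws" {
  region = var.region # e.g. "us-east-1"; assumed variable name
}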

Note: This is a guest post by Sumeet Ninawe from Let’s Do Tech. Sumeet has multi-cloud management platform experience where he used Terraform for the orchestration of customer deployments across major cloud providers like AWS, MS Azure, and GCP. You can find him on GitHub, Twitter, and his website.

Interested in learning more about Terraform for AWS? Join our Slack community to connect with DevOps experts and continue the conversation.

Provisioning Terraform AWS S3 buckets to create a static website

AWS S3 (Simple Storage Service) is an object storage service provided by AWS. S3 has a rich set of features and APIs which makes it very easy to use and integrate with other services. S3 offers various storage classes that differ in terms of availability, durability, and accessibility.

One of the most talked-about features of S3 is that you can host a static website in a publicly accessible S3 bucket. Buckets are containers in S3, in which objects are stored. In this part:

1. We will create a Terraform AWS S3 bucket.

2. Then, we will put our static web resources into it (objects).

3. Next, we can configure the bucket and object access appropriately.

4. Finally, we will access our web page from the internet.

Here are the steps:

Create a main.tf file and write the below code to create an S3 bucket. The bucket name has to be globally unique, so you may want to change it to something else. The ‘acl’ attribute describes the accessibility of the bucket itself.

resource "aws_s3_bucket" "b" {
bucket = "s3website100001"
acl = "private"

website {
index_document = "index.html"
error_document = "error.html"
}

We have an additional nested block named "website" in this bucket, where we declare the index_document and error_document attributes. This tells Terraform to enable static website hosting on the bucket and to set the index and error documents to the given values.

Now we need to create these documents (index and error) and make them available in the same S3 bucket. In the cloned repository, we have 2 .html files created already: index.html for valid requests, and error.html for bad requests.

resource "aws_s3_bucket_object" "index" {
bucket = "s3website100001"
key = "index.html"
source = "./index.html"
acl = "public-read"
content_type = "text/html"

}
resource "aws_s3_bucket_object" "error" {
bucket = "s3website100001"
key = "error.html"
source = "./error.html"
acl = "public-read"
content_type = "text/html"

}

Here, we are using 2 “aws_s3_bucket_object” resource blocks for 2 different objects. Each block points at the same bucket we created in the previous code; referencing it as aws_s3_bucket.b.id (rather than repeating the name) also tells Terraform to create the bucket before uploading the objects. We specify a key, which becomes the name of the object once it is uploaded to the S3 bucket.

The source attribute provides the local path from where the index.html and error.html files are to be uploaded. The ‘acl’ attribute is set to “public-read” — this is required since we want users to access the static website. Lastly, we have set the content_type attribute to “text/html” since they are simple HTML files.
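One caveat worth flagging: depending on your account’s settings, S3 Block Public Access can override object ACLs and leave the site inaccessible. If that happens, a resource along the lines of this sketch (not part of the repo’s code) relaxes those settings for the bucket:

resource "aws_s3_bucket_public_access_block" "b" {
  bucket = aws_s3_bucket.b.id

  # Explicitly allow public ACLs and policies on this bucket
  # so the public-read object ACLs above can take effect.
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}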

Save the main.tf file. If this is your first time running this code, you need to initialize the Terraform directory (terraform init) before running terraform apply. After the apply succeeds, log in to your AWS console and check for the newly created bucket. You should be able to see the 2 object files, index.html and error.html.

Navigate to Properties > Static website hosting and notice that it is enabled. Click on the endpoint and, if everything has gone well so far, you should see the message “Hello World!”. Append any random path to the endpoint and you should see the 404 error page.
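As an aside, you can also have Terraform print the endpoint for you instead of hunting for it in the console. This output is a sketch and not part of the repo’s code:

output "website_endpoint" {
  value = aws_s3_bucket.b.website_endpoint # e.g. <bucket>.s3-website-<region>.amazonaws.com
}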

Provisioning an EBS volume and attaching it to a Terraform EC2 instance

Elastic Block Store (EBS) volumes are disk volumes that can be attached to EC2 instances. Imagine EC2 instances as machines with CPUs and RAM, plus some storage capacity. That storage capacity is essentially an EBS volume, unless you chose an AMI backed by an instance store.

Additional EBS volumes can also be attached and mounted to EC2 instances for additional storage. EBS volumes exist independently, meaning their lifecycle is not dependent on the EC2 instance they are attached to.

In this section, we will create an EBS volume using Terraform and attach it to an EC2 instance, again using Terraform. In the end, we will log in to the AWS console to verify that an additional volume is attached to our EC2 instance.

Here’s how:

Create a main.tf file and write the below code to create an EBS volume. We need to specify the availability_zone attribute, as it is mandatory. We have also specified the size of the volume as 40 GiB. In the tags block, we name our volume “MyEBS”; feel free to choose any name here.

resource "aws_ebs_volume" "myebs" {
availability_zone = var.az
size = 40

tags = {
Name = "MyEBS"
}
}

Next, let us write some code for an EC2 instance. Here, we specify the AMI, instance type, and a Name tag. Make sure the AMI exists in the region you are working with, and note that the instance must end up in the same availability zone as the volume for the attachment to succeed. We have covered more compute-related activities in this post.

resource "aws_instance" "compute_nodes" {
ami = var.ami
instance_type = var.instance_type
tags = {
Name = "my-compute-node"
}
}
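If you don’t have a suitable AMI ID handy for your region, a data source along these lines can look one up. The owner and name filter below are assumptions targeting Amazon Linux 2, not something the repo itself defines:

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"] # official Amazon-owned images

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

You could then set ami = data.aws_ami.amazon_linux.id in the instance block instead of var.ami.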

Lastly, we write the code to attach the EBS volume to our EC2 instance, as below. This should be straightforward.

resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.myebs.id
instance_id = aws_instance.compute_nodes.id
}

Go ahead and initialize and apply the Terraform code, then log in to the AWS console after a successful execution. Navigate to EC2 > Instances and look for the instance named “my-compute-node”, or whichever name you chose in the above code. Open the “Storage” tab and you should see that a 40 GiB volume is attached to the instance. Note that attachment happens at the block-device level; to actually use the volume, you still need to create a filesystem on it and mount it from inside the instance.

Provisioning an RDS instance

RDS is a managed relational database service by AWS. It allows us to create a hosted database using engines like MySQL, Aurora, Oracle, MariaDB, MS SQL Server, and PostgreSQL. RDS offers features like Multi-AZ deployments and read replicas which improve the availability and resiliency of databases drastically, as compared to traditional database deployments.

Creating an RDS instance is easy using Terraform. Create a main.tf file and write the below resource block, which describes the parameters of the database you choose.

resource "aws_db_instance" "mydb" {
allocated_storage = 10
engine = "mysql"
engine_version = "8.0"
instance_class = var.instance_type
identifier = "mysqldb"
name = "mydb"
username = "admin"
password = "SuperSecret123"
skip_final_snapshot = true
}

Here, we are using an “aws_db_instance” resource block to create a database with the “mysql” engine and 10 GiB of allocated storage. We also specify the instance class to be used for this database; it is defined in the variables.tf and variables.tfvars files. Feel free to change it to the instance type you desire.

We have also specified an identifier; this is how your database instance will appear in the AWS console. Next, we give our database a name and credentials. This is critical because, without them, there would be no way to access the database. Please note that most of the attributes specified in this code are required by Terraform, which means they are also required by AWS.
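One aside on the hardcoded password: outside of a demo, you would normally pass it in as a variable instead. The sketch below is an assumption, not part of the repo’s code, and the sensitive flag requires Terraform 0.14 or newer:

variable "db_password" {
  type      = string
  sensitive = true # keeps the value out of Terraform's plan/apply output
}

You would then set password = var.db_password in the resource block and supply the actual value through variables.tfvars or a TF_VAR_db_password environment variable.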

In the variables.tf file, we have included an output variable that provides us with the endpoint of the database that is created.
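A minimal version of that output could look like the following; the output name here is a guess, and the repo’s actual name may differ:

output "db_endpoint" {
  value = aws_db_instance.mydb.endpoint # hostname:port of the new database
}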

Initialize and apply this Terraform configuration, and after a successful execution, log in to the AWS console under RDS to verify the result.

Lastly, please treat this as a reminder to destroy the resources you created while following the steps above (terraform destroy), so they don’t keep incurring charges.

Thanks for following along with this post. We covered provisioning Terraform AWS S3 buckets to create a static website, provisioning an EBS volume and attaching it to a Terraform EC2 instance, and provisioning RDS instances. We hope you found value in these 3 AWS storage topics and are ready to try them yourself.

Interested in learning more about AWS and Terraform? Join our Slack community to connect with DevOps experts and continue the conversation.
