Introduction
In this article, we'll explore how to get started with LocalStack and Terraform, two powerful tools for local cloud development and infrastructure automation. Whether you're an enthusiast looking to set up cloud services on your local machine or automate infrastructure management, this guide will walk you through the basics.
What is LocalStack?
LocalStack is an open-source platform that allows you to run AWS services locally on your computer. It mimics real AWS services, letting you test and develop applications in a controlled environment without needing a cloud account or paying for services. This makes it perfect for experimenting with cloud technologies without the cost.
What is Terraform?
Terraform is an Infrastructure as Code (IaC) tool that enables you to define, manage, and automate cloud resources using simple configuration files. With Terraform, you can create and manage everything from virtual machines to databases in a consistent and repeatable way, helping you streamline cloud management across multiple environments.
How to install LocalStack
In this section, we will set up LocalStack using Docker. Please note that Docker and Docker Compose must already be installed, but we will not cover that in this article. Below is the Docker Compose configuration to start LocalStack, and we will also guide you through setting up an account and obtaining a token to use the Hobby plan of LocalStack.
Docker Configuration for LocalStack
Here is the Docker Compose configuration you will use to set up LocalStack:
version: "3.8"
services:
  localstack:
    container_name: "localstack-main"
    image: localstack/localstack-pro   # required for Pro
    ports:
      - "127.0.0.1:4566:4566"             # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"   # external services port range
      - "127.0.0.1:443:443"               # LocalStack HTTPS Gateway (Pro)
    environment:
      - DISABLE_CORS_CHECKS=1
      - PROVIDER_OVERRIDE_APIGATEWAY=next_gen
      - LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?}
      - PERSISTENCE=1
      - ENFORCE_IAM=1                     # hobby plan
      - EC2_DOWNLOAD_DEFAULT_IMAGES=0     # disabled for security; manually download and tag images
    volumes:
      - "/opt/localstack:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - localstack-network
networks:
  localstack-network:
    driver: bridge
Explanation of the Docker Compose File
- Container Name: The container running LocalStack will be named localstack-main.
- Image: The image used is localstack/localstack-pro, required for advanced services in the Pro or Hobby plans.
- Ports:
  - 127.0.0.1:4566:4566: the main LocalStack Gateway where AWS service endpoints are exposed.
  - 127.0.0.1:4510-4559:4510-4559: port range for external AWS services like EC2 and S3.
  - 127.0.0.1:443:443: the HTTPS Gateway used by LocalStack Pro.
- Environment Variables:
  - DISABLE_CORS_CHECKS=1: disables CORS checks, useful for local development.
  - PROVIDER_OVERRIDE_APIGATEWAY=next_gen: enables the next-generation API Gateway implementation.
  - LOCALSTACK_AUTH_TOKEN: a placeholder to be replaced with your actual LocalStack authentication token (acquired in the next section).
  - PERSISTENCE=1: ensures that data persists across container restarts.
  - ENFORCE_IAM=1: required for the Hobby plan to enforce IAM policies.
  - EC2_DOWNLOAD_DEFAULT_IMAGES=0: prevents automatic downloading of EC2 images for security reasons; images are downloaded and tagged manually instead.
- Volumes:
  - /opt/localstack:/var/lib/localstack: stores LocalStack data.
  - /var/run/docker.sock:/var/run/docker.sock: gives the LocalStack container access to Docker.
- Networks: Defines the localstack-network to provide isolation between services.
Creating a LocalStack Account and Acquiring an Auth Token
Since we are using the Hobby plan, you need to create a LocalStack account and acquire an authentication token. Here’s how:
Sign Up for a LocalStack Account:
- Go to the LocalStack website, select Sign In.
- Follow the registration instructions.
Join the Hobby Plan:
- After logging in, visit the subscription page.
- Select the Hobby subscription (at the bottom), which is free and includes access to essential (pro version) services.
Obtain Your Authentication Token:
- Navigate to the Auth Token section in your account Workspace.
- Generate a new token if you don’t already have one.
- Copy the token (you will use it as LOCALSTACK_AUTH_TOKEN in the Docker Compose configuration).
Adding the Token to Docker Compose:
- Replace the placeholder ${LOCALSTACK_AUTH_TOKEN:?} in the Docker Compose file with the token you acquired.
- Alternatively, set it as an environment variable on your system for security.
By completing these steps, you will have LocalStack set up with Docker and configured for the Hobby plan, allowing you to run AWS services locally.
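With the token in hand, you can start LocalStack and confirm that it is healthy. A minimal check, assuming the Compose file above is saved as docker-compose.yml in the current directory and using a placeholder token value, looks like this:
# Export the auth token for this shell session (replace the placeholder value)
export LOCALSTACK_AUTH_TOKEN="ls-xxxx-your-token"
# Start LocalStack in the background
docker compose up -d
# Query the health endpoint to confirm the gateway is up and see which services are available
curl http://localhost:4566/_localstack/health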
Additional Resources
For more information on configuring and installing LocalStack, refer to the official documentation listed in the Reference Links section at the end of this article.
How to Install Terraform
To install Terraform, you will need to use the appropriate method based on your operating system. HashiCorp provides a step-by-step guide to install the Terraform CLI.
You can find the installation instructions for your system by visiting the Terraform installation page. Simply choose your operating system (Windows, macOS, or Linux) and follow the provided instructions.
This page covers:
- Windows: Install using a package manager such as Chocolatey or download the executable.
- macOS: Install using Homebrew or download the binary.
- Linux: Install using package managers like apt or yum, or download the binary manually.
Once Terraform is installed, you can verify the installation by running the following command in your terminal:
terraform -v
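If the installation succeeded, this prints the installed version; the exact output depends on your version and platform, for example:
terraform -v
# Example output (your version and platform will differ):
# Terraform v1.9.0
# on linux_amd64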
Architecture Overview
In this chapter, we will outline the architecture you will be deploying. The setup will involve creating an EC2 instance running Ubuntu Linux, along with a simple web server.
Key Components
- EC2 Instance: This is a virtual server provided by AWS. We will use it to host our web server.
- Ubuntu Linux: The EC2 instance will run the Ubuntu Linux operating system, a popular and stable choice for servers.
- Simple Web Server: A basic web server will be installed on the EC2 instance to serve a web page.
The architecture is simple yet provides a foundation for understanding how to set up cloud-based infrastructure using Terraform and LocalStack.
Preparing Docker Image for LocalStack
In this chapter, we will walk through how to create a Docker image specifically designed for use with LocalStack. This image will be used to emulate an EC2 instance within LocalStack. The image is based on the latest stable version of Ubuntu (Noble) and uses a LocalStack-specific tagging format to ensure compatibility when creating EC2 instances.
Key Features of the Docker Image
- Ubuntu Base Image: The Docker image uses the latest stable version of Ubuntu (noble).
- LocalStack-specific Tag: The image is tagged with a custom name (localstack-ec2/ubuntu-noble-ami:ami-000001) so that it can be recognized and used by LocalStack to emulate an EC2 instance.
- OpenSSH Server: The image installs OpenSSH for secure SSH access, which is necessary for remote management of the instance.
- User ubuntu: The default Ubuntu user is set up with passwordless sudo access, allowing administrative tasks without requiring a password.
- Security Enhancements:
  - Root login is disabled to prevent unauthorized root access.
  - Password-based authentication is disabled to enforce SSH key-based login, which improves security.
  - Proper permissions are set on the ubuntu user's .ssh directory for secure SSH access.
Docker Script Overview
The following script builds the Docker image with these features:
#!/bin/bash
# Set variables
DOCKER_IMAGE="ubuntu:noble"
CUSTOM_TAG="localstack-ec2/ubuntu-noble-ami:ami-000001"
SSH_PORT=22
# Check if the image with the custom tag already exists
if [ -z "$(docker images -q ${CUSTOM_TAG})" ]; then
echo "Pulling Docker image ${DOCKER_IMAGE}..."
docker pull ${DOCKER_IMAGE}
echo "Creating Dockerfile to customize the image..."
# Create Dockerfile for customization
cat <<EOF > Dockerfile
FROM ${DOCKER_IMAGE}
# Update and install OpenSSH server and sudo
RUN apt-get update && \\
apt-get install -y openssh-server sudo && \\
mkdir /var/run/sshd
# Add 'ubuntu' user to the sudoers file
RUN echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Ensure correct permissions for ubuntu's .ssh directory
RUN mkdir -p /home/ubuntu/.ssh && \\
chown -R ubuntu:ubuntu /home/ubuntu/.ssh && \\
chmod 700 /home/ubuntu/.ssh
# Disable root login and password authentication
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin no/' /etc/ssh/sshd_config && \\
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
# Expose SSH port
EXPOSE ${SSH_PORT}
# Start SSH service
CMD ["/usr/sbin/sshd", "-D"]
EOF
echo "Building customized Docker image with tag ${CUSTOM_TAG}..."
# Build the customized image with the new tag
docker build -t ${CUSTOM_TAG} .
echo "Docker image ${CUSTOM_TAG} created successfully!"
rm Dockerfile
else
echo "Docker image ${CUSTOM_TAG} already exists, skipping build..."
fi
Key Steps in the Script
- Base Image: The script pulls the latest stable Ubuntu image (ubuntu:noble).
- Custom Dockerfile:
  - The package index is updated, and the OpenSSH server and sudo are installed.
  - The ubuntu user is given passwordless sudo privileges.
  - SSH security is hardened by disabling root login and password-based authentication.
- Build and Tag: The Docker image is built with the custom tag localstack-ec2/ubuntu-noble-ami:ami-000001, making it recognizable by LocalStack for use as an EC2 AMI.
- Expose SSH Port: The SSH port (22) is exposed to allow SSH access to the instance.
- Run SSH Service: The SSH service is started inside the container to allow remote access.
By following these steps, you can create a secure Docker image that LocalStack will use when creating an EC2 instance.
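As a quick usage sketch, assuming you saved the script above as build-ami.sh (the filename is arbitrary), you can build the image and confirm it exists locally like this:
# Make the script executable and run it
chmod +x build-ami.sh
./build-ami.sh
# Verify that the custom-tagged image is now available locally
docker images localstack-ec2/ubuntu-noble-ami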
Additional Resources for Custom AMIs
For more information about preparing custom AMIs for use in LocalStack, refer to the LocalStack EC2 AMIs documentation listed in the Reference Links section.
Managing Terraform with Commands
In this chapter, we will cover the essential Terraform commands for managing infrastructure, as well as how to configure a central plugin cache directory to optimize the terraform init command.
Essential Terraform Commands
Below are the key Terraform commands you will use to manage your infrastructure:
- terraform init: Initializes the working directory and downloads the required provider plugins. This is the first command you run in a new project.
- terraform plan: Generates and shows an execution plan, detailing the actions Terraform will take to reach the desired state.
- terraform apply: Applies the changes required to achieve the desired state of the configuration, as defined in the plan.
- terraform destroy: Destroys the managed infrastructure defined in the Terraform configuration, reversing the setup.
Configuring a Central Plugin Cache Directory
Terraform downloads provider plugins during the terraform init process. To optimize this, especially when working on multiple projects or machines, you can configure a central plugin cache directory. This helps Terraform avoid re-downloading the same provider plugins multiple times, saving both time and bandwidth.
To configure a central plugin cache directory:
- Create a directory on your system that will act as the plugin cache. For example:
  mkdir -p ~/.terraform.d/plugin-cache
- Configure the plugin cache in Terraform's CLI configuration file (~/.terraformrc on Linux and macOS, or terraform.rc in the %APPDATA% directory on Windows) by adding the following line:
  plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
This configuration directs Terraform to use the specified directory to store and reuse provider plugins, reducing the initialization time for future runs.
By following these steps, you can efficiently manage your Terraform infrastructure and optimize the initialization process with a central plugin cache directory.
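As an alternative to the CLI configuration file, Terraform also honors the TF_PLUGIN_CACHE_DIR environment variable; a minimal sketch for a Unix-like shell:
# Create the cache directory and point Terraform at it for this shell session
mkdir -p ~/.terraform.d/plugin-cache
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
# Add the export line to your shell profile (e.g. ~/.bashrc) to make it permanent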
Configuring main.tf for Terraform Setup
In this chapter, we will configure the main.tf file to integrate the custom Docker image and set up the infrastructure using Terraform. The setup includes configuring endpoints to point to LocalStack, setting up an EC2 instance using a custom AMI, and adding a public key to the ubuntu user's home directory on the instance.
Provider Configuration
We define the aws provider with the necessary configuration to work with LocalStack. Several settings are adjusted to skip certain checks, and the endpoints section is configured to point to LocalStack's local endpoints:
provider "aws" {
access_key = "test"
secret_key = "test"
region = "us-east-1"
s3_use_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true # Note: A warning about this may still appear
endpoints {
ec2 = "http://localhost:4566"
autoscaling = "http://localhost:4566"
iam = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
In this article, we manually configure the endpoints to point to LocalStack services running on localhost. We do not use the tflocal wrapper, although it is an alternative that simplifies the setup.
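For reference, the wrapper approach would look roughly like this; tflocal is a thin wrapper around the terraform command that injects the LocalStack endpoint configuration for you, so the endpoints block above would not be needed (this sketch assumes Python and pip are available):
# Install the LocalStack wrapper for Terraform
pip install terraform-local
# Use tflocal in place of the terraform command
tflocal init
tflocal apply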
Defining a VPC
A Virtual Private Cloud (VPC) is a logically isolated network within AWS that allows us to control network settings such as IP addresses, routing, and security. In our setup, we define a simple VPC with the following block:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
cidr_block: This defines the IP range for the VPC in CIDR (Classless Inter-Domain Routing) notation. In this case, the VPC is assigned the range 10.0.0.0/16, which provides up to 65,536 IP addresses within the network. This VPC will be used to host our EC2 instance.
Using a Custom AMI
We use a custom AMI (ami-000001) in the configuration. This choice gives us more control over the image's software and security. The default AMIs managed by LocalStack are outdated, so using a custom AMI ensures that you have up-to-date software.
resource "aws_instance" "example" {
ami = "ami-000001" # Custom AMI built for LocalStack
instance_type = "t2.nano"
tags = {
Name = "tf-example"
}
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
apt-get update && apt-get install -y busybox
echo "${file("~/.ssh/my-key.pub")}" > /home/ubuntu/.ssh/authorized_keys
sudo -u ubuntu nohup busybox httpd -f -p ${var.server_port} &
EOF
user_data_replace_on_change = true
}
Adding SSH Key to the EC2 Instance
The user_data block provisions the EC2 instance. The script:
- Installs the BusyBox HTTP server.
- Copies the public SSH key from your local system (~/.ssh/my-key.pub) into the authorized_keys file for the ubuntu user. Note that Terraform's file() function does not expand ~, which is why the path is wrapped in pathexpand().
- Starts the web server on the port defined by the server_port variable.
You could also use aws_key_pair to manage SSH keys in Terraform, which works for the default AMIs managed by AWS. However, in this setup with the custom AMI, that method didn't work as expected, so we add the key manually through the user_data script.
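If you do not already have a key pair at that path, you can generate one first; the file name matches what the user_data script expects, while the key type shown here is just one reasonable choice:
# Create a key pair at the path referenced in the Terraform configuration
ssh-keygen -t ed25519 -f ~/.ssh/my-key -N ""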
Security Group Configuration
To allow traffic to and from the EC2 instance, we define an AWS Security Group:
resource "aws_security_group" "instance" {
name = "tf-example-instance"
vpc_id = aws_vpc.example.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Allow SSH from any IP
}
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Allow HTTP from any IP
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"] # Allow all outbound traffic
}
}
This Security Group Allows:
- SSH access (port 22) from any IP address.
- HTTP access on the custom port defined by server_port (default: 8080).
Output and Variable
We define an output to display the public IP address of the EC2 instance:
output "public_ip" {
value = aws_instance.example.public_ip
description = "The public IP address of the web server"
}
We also define a variable for the HTTP server port:
variable "server_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 8080
}
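The default port can be overridden at apply time without editing the file, for example:
# Provision the infrastructure with a different HTTP port (8081 is just an example value)
terraform apply -var="server_port=8081"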
Summary
This setup allows us to provision an EC2 instance on LocalStack using a custom AMI, ensuring up-to-date software and enhanced security configurations. The public SSH key is manually added to the ubuntu user's home directory, and the instance is configured to run a simple HTTP server. You could use aws_key_pair for default AMIs, but since we are using a custom AMI, manual key provisioning through user_data was necessary.
Additional Resources
For more information about using Terraform with LocalStack, refer to the LocalStack Terraform integration guide listed in the Reference Links section.
Running and Verifying the Terraform Setup
In this chapter, we will cover the commands needed to run our Terraform setup, provision the infrastructure in LocalStack, and verify that the website is running inside the EC2 instance.
Running the Setup
To start the setup, navigate to the directory where your Terraform files (including main.tf) are located and follow these steps:
- Initialize Terraform: This step downloads the necessary provider plugins (configured for LocalStack) and initializes your working directory.
  terraform init
- Create an Execution Plan: This command generates a plan of the actions Terraform will take to provision the infrastructure.
  terraform plan
- Apply the Plan: Once the plan is verified, run the following command to provision the infrastructure:
  terraform apply
  Terraform will ask for confirmation, so type yes to proceed. This creates an EC2 instance in LocalStack using the custom AMI, security groups, and the other resources defined in the configuration.
Verifying the EC2 Instance and Website
After the EC2 instance is successfully provisioned, Terraform will output the public IP address of the instance. This IP can be used to verify that the web server inside the instance is running.
To check the website, use the following steps:
- Get the public IP address from the output of the terraform apply command. For example:
  Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
  Outputs:
  public_ip = "123.45.67.89"
- Use a web browser or curl to check the website running on the EC2 instance by visiting the public IP on the port defined in your configuration (default: 8080):
  curl http://123.45.67.89:8080
  You should see the "Hello, World" message that was configured in the user_data script.
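You can also connect to the instance over SSH using the private key whose public half was installed through user_data (the IP below is the example output value):
# Log in as the ubuntu user with the matching private key
ssh -i ~/.ssh/my-key ubuntu@123.45.67.89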
Dedicated Container with LocalStack Pro
Since we are using the Pro version of LocalStack, LocalStack will create a dedicated container to emulate the EC2 service. This allows us to run actual workloads in a more realistic environment, unlike the mock behavior in the Community version. The EC2 instance, security groups, and networking are all handled inside this isolated container, ensuring a more complete emulation of AWS services.
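You can observe this from the host with docker ps: besides the main LocalStack container, a separate container backing the emulated EC2 instance should appear (its exact name depends on the LocalStack version and the instance ID):
# List running containers; expect localstack-main plus one container for the EC2 instance
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"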
Final Thoughts
Throughout this article, we demonstrated how to set up infrastructure using LocalStack Pro and Terraform. By combining the power of Terraform's infrastructure-as-code approach with LocalStack's ability to emulate AWS services locally, we were able to build a complete, testable environment.
Here’s a recap of what we covered:
- Introduction to LocalStack and Terraform: We discussed what LocalStack and Terraform are and how they can be used for local development.
- Installing LocalStack and Terraform: We provided the steps for installing LocalStack using Docker and Terraform based on the operating system.
- Architecture Overview: We outlined the architecture, focusing on deploying an EC2 instance running Ubuntu Linux with a simple web server.
- Preparing a Docker Image for LocalStack: We created a custom Docker image tagged for use in LocalStack, with added security features like disabling root login and enforcing SSH key-based authentication.
- Configuring main.tf for Terraform Setup: We integrated our custom Docker image into the Terraform configuration, defining the necessary resources such as EC2 instances, VPCs, and security groups.
- Managing Terraform with Commands: We learned the essential Terraform commands (init, plan, apply, destroy) and how to optimize the terraform init process by configuring a plugin cache.
- Running and Verifying the Setup: Finally, we applied the configuration and verified the website running inside the EC2 instance using the public IP, leveraging LocalStack Pro to emulate AWS services within a dedicated container.
By following the steps in this article, you can create a complete local AWS environment using LocalStack and Terraform, providing an efficient and cost-effective way to test and develop infrastructure without relying on real AWS resources.
Reference Links
Here is a list of useful resources mentioned throughout the article:
- Terraform CLI Installation Guide - Step-by-step instructions for installing the Terraform CLI based on your operating system.
- LocalStack Configuration Reference - Detailed information on LocalStack configuration, including environment variables and service endpoints.
- LocalStack Docker Installation Guide - Comprehensive guide for setting up LocalStack using Docker and configuring it.
- LocalStack EC2 AMIs Documentation - An overview of how to use custom and default AMIs with LocalStack to simulate EC2 instances.
- LocalStack Terraform Integration Guide - Guide to integrating Terraform with LocalStack for running infrastructure as code locally.