Hey everyone! 👋 Today, I’m sharing my latest hands-on project using Terraform to build infrastructure on AWS. If you're new to Infrastructure as Code (IaC) or looking for a simple project idea, this step-by-step guide will help you get started.
I’ll explain how I created resources like VPCs, subnets, EC2 instances, and even an S3 bucket to host objects. Along the way, I’ll share the challenges I faced and the lessons I learned to help you navigate similar issues.
Let’s dive in! 💡
Project Overview
Here’s what I built using Terraform:
VPC (Virtual Private Cloud)
Public Subnets (Subnet1, Subnet2)
Internet Gateway for connectivity
Route Table and Subnet Association for network routing
EC2 Instances (Server 1 & Server 2)
Target Group for load balancing
Security Group for secure access
S3 Bucket for storage
Objects in the S3 Bucket
Accessing the application through the public site
Step-by-Step Implementation
Output from Terraform:
Let's see what it looks like in the AWS Console.
Step 1: Created a VPC
I started by creating a Virtual Private Cloud (VPC) to host all the resources securely.
📌 Key Configurations:
CIDR Block: `10.0.0.0/16`
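In Terraform, this step is a single resource block. Here's roughly how mine looked (the resource label `main` and the tag value are my own naming choices):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "demo-vpc"
  }
}
```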
Step 2: Created Subnets
I created two public subnets to deploy the EC2 instances. These subnets are designed to provide high availability.
📌 Key Configurations:
Subnet1: `10.0.1.0/24`
Subnet2: `10.0.2.0/24`
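A sketch of the two subnets in HCL, assuming the VPC resource is labeled `main` and picking two availability zones for illustration (yours may differ). `map_public_ip_on_launch` is what makes instances in these subnets reachable with a public IP:

```hcl
resource "aws_subnet" "subnet1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a" # example AZ
  map_public_ip_on_launch = true
}

resource "aws_subnet" "subnet2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b" # example AZ
  map_public_ip_on_launch = true
}
```

Placing the subnets in different availability zones is what gives you the high availability mentioned above.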
Step 3: Added an Internet Gateway
To allow public internet access to my resources, I attached an Internet Gateway to the VPC.
📌 Key Configurations:
Gateway type: Internet Gateway (IGW)
Attached to: My custom VPC
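The gateway itself is tiny in HCL; attaching it to the VPC is just a matter of referencing the VPC's ID (again assuming the VPC resource is labeled `main`):

```hcl
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "demo-igw"
  }
}
```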
Step 4: Set Up Route Table and Subnet Association
I created a Route Table to define how traffic flows in and out of my VPC. Then, I associated the public subnets with this route table.
📌 Key Configurations:
Added a route for `0.0.0.0/0` to the Internet Gateway
Associated Subnet1 and Subnet2 with the Route Table
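Roughly how this looks in HCL, assuming the VPC, subnets, and gateway resources are labeled as in the earlier steps (`main`, `subnet1`/`subnet2`, `igw`):

```hcl
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  # Send all non-local traffic to the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "a1" {
  subnet_id      = aws_subnet.subnet1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "a2" {
  subnet_id      = aws_subnet.subnet2.id
  route_table_id = aws_route_table.public.id
}
```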
Step 5: Launched EC2 Instances
I deployed two EC2 instances to simulate application servers. Each instance was placed in a separate subnet.
📌 Key Configurations:
Instance 1 (Server 1): t2.micro
Instance 2 (Server 2): t2.micro
Connected to the public subnets
Added key pair for secure SSH access
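A sketch of one of the instances (the AMI ID is a placeholder, and `my-key-pair` stands in for a key pair you've already created; the second server is identical except for its subnet):

```hcl
resource "aws_instance" "server1" {
  ami                    = "ami-0abcdef1234567890" # placeholder; use a real AMI ID for your region
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.subnet1.id
  key_name               = "my-key-pair" # pre-existing key pair for SSH access
  vpc_security_group_ids = [aws_security_group.web.id]

  tags = {
    Name = "Server 1"
  }
}
```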
Step 6: Configured Target Group & Load Balancer
I created a Target Group to enable load balancing between my servers. This ensures even traffic distribution.
📌 Key Configurations:
Health checks: HTTP
Instances added: Server 1 and Server 2
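Here's a minimal sketch of the target group, the instance attachments, and an Application Load Balancer with an HTTP listener (names like `demo-tg` and `demo-alb` are illustrative, and the instance/subnet/security-group references assume the labels from the earlier steps):

```hcl
resource "aws_lb_target_group" "app" {
  name     = "demo-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path     = "/"
    protocol = "HTTP"
  }
}

resource "aws_lb_target_group_attachment" "s1" {
  target_group_arn = aws_lb_target_group.app.arn
  target_id        = aws_instance.server1.id
  port             = 80
}
# A second attachment block does the same for server2.

resource "aws_lb" "app" {
  name               = "demo-alb"
  internal           = false
  load_balancer_type = "application"
  subnets            = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]
  security_groups    = [aws_security_group.web.id]
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```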
Step 7: Set Up a Security Group
To secure my instances and resources, I created a Security Group with the following rules:
📌 Key Configurations:
Allowed inbound SSH traffic on port 22
Allowed HTTP traffic on port 80
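The security group with those two inbound rules, plus an allow-all egress rule so the instances can reach the internet (the group name is my own label):

```hcl
resource "aws_security_group" "web" {
  name   = "demo-web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

For anything beyond testing, it's worth restricting the SSH rule to your own IP instead of `0.0.0.0/0`.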
Step 8: Created an S3 Bucket
I added an Amazon S3 Bucket for storage and uploaded some test objects.
📌 Key Configurations:
Bucket name: `my-terraform-demo-bucket`
Access: Public (for testing purposes)
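The bucket and a test object are each one resource block (the object key and local file path below are hypothetical examples):

```hcl
resource "aws_s3_bucket" "demo" {
  bucket = "my-terraform-demo-bucket"
}

resource "aws_s3_object" "test" {
  bucket = aws_s3_bucket.demo.id
  key    = "index.html"          # hypothetical object key
  source = "files/index.html"    # hypothetical local file to upload
}
```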
Accessing the site using the Load Balancer DNS URL
Challenges Faced and How I Solved Them
IAM Role for S3 Access
I had to create an IAM role granting the EC2 instances full access to the S3 bucket, so they could upload and retrieve objects from it.
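A sketch of how such a role can be wired up: a role that EC2 is allowed to assume, the AWS-managed `AmazonS3FullAccess` policy attached to it, and an instance profile (which is what you reference from an `aws_instance` via `iam_instance_profile`). Resource names here are my own:

```hcl
resource "aws_iam_role" "ec2_s3" {
  name = "ec2-s3-access"

  # Trust policy: let EC2 assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "s3_full" {
  role       = aws_iam_role.ec2_s3.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

resource "aws_iam_instance_profile" "ec2_s3" {
  name = "ec2-s3-access"
  role = aws_iam_role.ec2_s3.name
}
```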
Making S3 Bucket Public
While applying a bucket policy to make the S3 bucket public, I hit a persistent error. The setup worked fine on the first run, but re-running the configuration to create new infrastructure triggered the error again.
Solution: I used **ACL (Access Control List) settings** to configure both bucket-level and object-level permissions. AWS has updated its ACL model to separate bucket and object permissions for security reasons.
ACL Ownership Issue
While setting ACLs, I faced another error:
Error: uploading S3 Object (terraform-2.png) to Bucket (demo-my-24-buck): operation error S3: PutObject, https response error StatusCode: 400, ... api error AccessControlListNotSupported: The bucket does not allow ACLs
Root Cause: The bucket didn’t support ACLs due to its ownership control settings.
Solution:
I realized that Terraform was trying to apply the object ACL settings before the bucket's ownership controls were fully propagated. To resolve this, I introduced explicit dependencies in my Terraform configuration, ensuring that the bucket's settings were fully applied before uploading objects. This fix worked, and I could successfully upload objects to the bucket! 🎉
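Here's roughly what that dependency chain looks like in HCL: ownership controls first, then the bucket ACL, then the object upload, each forced into order with `depends_on` (resource labels are illustrative, and I've included the public access block since ACL-based public access also requires those blocks to be disabled):

```hcl
resource "aws_s3_bucket_ownership_controls" "demo" {
  bucket = aws_s3_bucket.demo.id

  rule {
    object_ownership = "BucketOwnerPreferred" # re-enables ACLs on the bucket
  }
}

resource "aws_s3_bucket_public_access_block" "demo" {
  bucket                  = aws_s3_bucket.demo.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_acl" "demo" {
  bucket = aws_s3_bucket.demo.id
  acl    = "public-read"

  # Don't set the ACL until ownership controls and access blocks are in place
  depends_on = [
    aws_s3_bucket_ownership_controls.demo,
    aws_s3_bucket_public_access_block.demo,
  ]
}

resource "aws_s3_object" "image" {
  bucket = aws_s3_bucket.demo.id
  key    = "terraform-2.png"
  source = "terraform-2.png"
  acl    = "public-read"

  # Upload only after the bucket ACL has been applied
  depends_on = [aws_s3_bucket_acl.demo]
}
```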
Modularizing the Architecture
Initially, this project followed a monolithic architecture, with all resources defined in a single configuration file. To improve reusability, maintainability, and testability, I refactored the project into a modular architecture.
Here’s how I organized the modules:
VPC Module: Contains all the VPC-related resources (VPC, subnets, internet gateway, route table).
EC2 Module: Manages the EC2 instances, security groups, and related configurations.
S3 Module: Handles the S3 bucket and object configurations.
This modular approach allows me to:
Reuse the same modules across multiple projects.
Simplify debugging by isolating each component.
Make testing easier by focusing on individual modules.
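To give a feel for the refactor, here's an illustrative layout (directory names are my own) and how the root configuration calls a module:

```
├── main.tf
├── variables.tf
└── modules/
    ├── vpc/
    ├── ec2/
    └── s3/
```

```hcl
# In the root main.tf: variable names here are illustrative,
# defined by each module's variables.tf
module "vpc" {
  source   = "./modules/vpc"
  vpc_cidr = "10.0.0.0/16"
}

module "ec2" {
  source     = "./modules/ec2"
  subnet_ids = module.vpc.public_subnet_ids # assumes the vpc module outputs this
}
```

Each module exposes its inputs via `variables.tf` and its results via `outputs.tf`, which is how the `ec2` module above receives the subnet IDs created by the `vpc` module.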
Key Takeaways
Through this project, I learned:
How to configure AWS resources using Terraform effectively.
How to troubleshoot and resolve issues related to IAM roles, S3 permissions, and ACL settings.
The importance of modularizing infrastructure for better organization and scalability.
Next Steps
I plan to enhance this project by integrating it with a CI/CD pipeline to automate deployment. Stay tuned for updates! 🚀
If you’re working on a similar project or have any questions, feel free to connect with me or share your experiences in the comments. 😊
Let me know your thoughts or suggestions! Happy coding! 💻✨