THIS IS TASK-1 OF HYBRID-MULTI-CLOUD-TRAINING
LAUNCHING A WEB SERVER ON AWS CLOUD USING TERRAFORM AND GITHUB, WITH AUTOMATION
It is simple to do almost anything from a GUI, but there are cases where a GUI is not sufficient.
For example, suppose you have created an environment in AWS using the GUI on your development system. Recreating the same environment by hand on your deployment system is a tiring and tedious job, and we know human nature: we are bound to make errors. Terraform solves these problems. You write the script once, and the environment is created from it. And since code runs fast, the same environment can be set up on the deployment system in seconds, maybe less.
EXPLANATION OF THE HCL SCRIPT I WROTE:
STEP 1:
Configuring the AWS provider so Terraform can access our account. Using a named CLI profile keeps our credentials safe and out of the script itself.
provider "aws" {
  region  = "ap-south-1"
  profile = "Ddhruv"
}
STEP 2:
Creating the key pair that will be used by the instance
variable "key_name" {}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = var.key_name
  public_key = tls_private_key.example.public_key_openssh
}
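Since the key pair is generated inside Terraform, the private key lives only in the state file. As an optional addition (a sketch, assuming the hashicorp/local provider is available; the file name is just an example), it can also be written to disk so you can SSH into the instance manually:

```hcl
# Optional: save the generated private key locally so it can be
# used with a normal ssh client. The filename is an example.
resource "local_file" "private_key" {
  content         = tls_private_key.example.private_key_pem
  filename        = "${var.key_name}.pem"
  file_permission = "0400" # restrict access, as ssh requires
}
```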
STEP 3:
Creating a security group that allows inbound traffic on port 22 (SSH) and port 80 (HTTP)
resource "aws_security_group" "sec_g" {
  name   = "sec_g"
  vpc_id = "vpc-101a0678"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sec_g"
  }
}
STEP 4:
Creating the EC2 instance on which our web server will be deployed, using the key pair and security group created in the steps above; then we install Apache (httpd), PHP and Git on the instance
resource "aws_instance" "ins1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.generated_key.key_name
  # reference the resource (not the bare name) so Terraform creates
  # the security group before the instance
  security_groups = [aws_security_group.sec_g.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    # inside a resource's own block, self must be used instead of
    # aws_instance.ins1 (which would be a dependency cycle)
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "testos"
  }
}
STEP 5:
To see the availability zone of our EC2 instance
output "myloc" {
  value = aws_instance.ins1.availability_zone
}
STEP 6:
Then we will create an EBS volume to store our data
resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.ins1.availability_zone
  size              = 1

  tags = {
    Name = "ebsmy1"
  }
}
STEP 7:
Attaching our storage
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdd"
  volume_id    = aws_ebs_volume.ebs1.id
  instance_id  = aws_instance.ins1.id
  force_detach = true
}
STEP 8:
Getting the ID of the EBS volume
output "myebs" {
  value = aws_ebs_volume.ebs1.id
}
Getting the public IP of the EC2 instance
output "myy_ip" {
  value = aws_instance.ins1.public_ip
}
STEP 9:
Storing our public IP in a file on the local host, which can later be used as a record / for analytics
resource "null_resource" "nulllocal2" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.ins1.public_ip} > publicip.txt"
  }
}
STEP 10:
Here we will format and mount the storage, and also clone the GitHub repository into /var/www/html/
resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.example.private_key_pem
    host        = aws_instance.ins1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # /dev/sdd is exposed inside the instance as /dev/xvdd,
      # so both commands must use the same device
      "sudo mkfs.ext4 /dev/xvdd",
      "sudo mount /dev/xvdd /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Ddhruv-IOT/server-test.git /var/www/html/",
    ]
  }
}
STEP 11:
Here the browser will automatically open the website hosted on the AWS server once the CloudFront distribution is created
resource "null_resource" "nulllocal1" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  provisioner "local-exec" {
    # "start chrome" launches Chrome from the Windows command shell
    command = "start chrome ${aws_instance.ins1.public_ip}"
  }
}
STEP 12:
Creating an S3 bucket and cloning the image repository to the local host (it is deleted again on destroy)
resource "aws_s3_bucket" "image-bucket" {
  bucket = "webserver-images-test-dd-dd"
  acl    = "public-read"

  provisioner "local-exec" {
    command = "git clone https://github.com/Ddhruv-IOT/server-image"
  }

  provisioner "local-exec" {
    when    = destroy
    # /q suppresses the confirmation prompt on Windows
    command = "rmdir /s /q server-image"
  }
}
STEP 13:
Now creating an S3 bucket object and uploading the image from the local host
resource "aws_s3_bucket_object" "image-upload" {
  bucket = aws_s3_bucket.image-bucket.bucket
  key    = "stest.jpg"
  source = "server-image/stest.jpg"
  acl    = "public-read"
}
STEP 14:
Finally, creating the CloudFront distribution and adding the image URL to our PHP / website code
variable "var1" {
  default = "S3-"
}

locals {
  s3_origin_id = "${var.var1}${aws_s3_bucket.image-bucket.bucket}"
  image_url    = "${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.image-upload.key}"
}
resource "aws_cloudfront_distribution" "s3_distribution" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.image-bucket.bucket_domain_name
    origin_id   = local.s3_origin_id
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.ins1.public_ip
    port        = 22
    private_key = tls_private_key.example.private_key_pem
  }

  provisioner "remote-exec" {
    inline = [
      # the heredoc runs the echo as root; self.domain_name is the
      # CloudFront domain of this distribution
      "sudo su << EOF",
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.image-upload.key}' height=200px width=200px >\" >> /var/www/html/index.php",
      "EOF",
    ]
  }
}
Final Outcome: the website will launch automatically as soon as everything is completed
Commands to be used
1. terraform init : to install the required provider plugins
2. terraform validate : to check that the configuration is valid
3. terraform apply : to build the environment
4. terraform destroy : to tear down the environment
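Putting the commands together, a typical run from the project directory looks like this (the key-pair name passed with -var is an example value, since the key_name variable has no default; -auto-approve is optional and skips the interactive confirmation):

```shell
terraform init
terraform validate
terraform apply -var "key_name=mykey1122" -auto-approve
# ... work with the environment, then tear it down:
terraform destroy -var "key_name=mykey1122" -auto-approve
```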
NOTE :
Make sure AWS CLI, Git, and Terraform are all installed on your PC.
FIND THE COMPLETE CODE HERE: GITHUB
CONNECT WITH ME HERE: LinkedIn
Video of final outcome: