Technical Deep Dive: Building a Cost-Effective Ghost Blog on AWS with Terraform

Introduction: Infrastructure as Code for Blog Hosting

  • Why Terraform for infrastructure management
  • Project goals and technical requirements
  • Repository structure and organization

Part 1: AWS Architecture Overview

  • Architectural diagram and component breakdown
  • Resource relationships and dependencies
  • High-level implementation strategy

Part 2: Setting Up the AWS Environment

  • AWS provider configuration
provider "aws" {
  region = var.aws_region
  default_tags {
    tags = var.default_tags
  }
}
  • Variables and locals for flexible configuration
  • Resource naming conventions and organization
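A sketch of how the variables behind the provider block above might be declared, plus a local showing the naming convention (the defaults and the local are assumptions, not the article's exact definitions):
variable "aws_region" {
  description = "Region the blog infrastructure is deployed into"
  type        = string
  default     = "eu-west-1" # assumed default
}

variable "default_tags" {
  description = "Tags applied to every resource via the provider default_tags block"
  type        = map(string)
  default = {
    Project   = "ghost-blog"
    ManagedBy = "terraform"
  }
}

variable "name_prefix" {
  description = "Prefix used to keep resource names consistent"
  type        = string
  default     = "ghost-blog"
}

locals {
  # Derived names live in one place so the convention is defined once
  instance_name = "${var.name_prefix}-ghost"
}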

Part 3: Core Infrastructure Components

  • VPC and Networking
    • Security groups and network ACLs
    • Public/private subnet design
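To illustrate the security-group side of the networking above, a minimal sketch (the aws_vpc.main reference and the rule set are illustrative, not the article's exact configuration):
resource "aws_security_group" "ghost" {
  name        = "${var.name_prefix}-ghost-sg"
  description = "Web traffic to the Ghost host"
  vpc_id      = aws_vpc.main.id # hypothetical VPC resource name

  # HTTP/HTTPS in; in practice this can be narrowed to Cloudflare's published IP ranges
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # No SSH rule: host access goes through SSM Session Manager (Part 4)

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}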
  • IAM Role Configuration
resource "aws_iam_role" "ec2_instance_role" {
  name = "${var.name_prefix}-ec2-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
  })
}
  • S3 Bucket Creation and Security
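A sketch of the bucket plus its public-access lockdown (the bucket name and purpose are assumptions; the article's actual settings may differ):
resource "aws_s3_bucket" "backups" {
  bucket = "${var.name_prefix}-backups" # hypothetical bucket name
}

# Block every form of public access to the bucket
resource "aws_s3_bucket_public_access_block" "backups" {
  bucket = aws_s3_bucket.backups.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Encrypt objects at rest with the S3-managed key
resource "aws_s3_bucket_server_side_encryption_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}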

Part 4: EC2 Instance Management

  • EC2 Instance Type Selection Logic
resource "aws_instance" "ghost" {
  ami                  = data.aws_ami.ubuntu.id
  instance_type        = var.instance_type
  iam_instance_profile = aws_iam_instance_profile.instance_profile.name
  user_data            = data.template_cloudinit_config.config.rendered
  
  # Other configuration...
  
  tags = {
    Name = "${var.name_prefix}-ghost"
  }
}
  • User Data and Bootstrap Process
  • SSM Session Manager for SSH-less Access
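SSH-less access only requires the instance role to carry AWS's managed SSM policy and to be exposed through the instance profile referenced by aws_instance.ghost above; a rough sketch (resource names other than ec2_instance_role and instance_profile are assumptions):
# Attach AWS's managed policy so the SSM agent can register the instance
resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ec2_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# The profile consumed by aws_instance.ghost
resource "aws_iam_instance_profile" "instance_profile" {
  name = "${var.name_prefix}-instance-profile"
  role = aws_iam_role.ec2_instance_role.name
}
With that in place, a shell on the box is just `aws ssm start-session --target <instance-id>`, with no inbound port 22 anywhere.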

Part 5: Ghost Blog Container Deployment

  • Docker Compose Configuration via Ansible
- name: Create docker-compose.yml
  ansible.builtin.copy:
    dest: /opt/ghost/docker-compose.yml
    content: |
      version: '3'
      services:
        ghost:
          image: ghost:5.115.1-alpine
          # Configuration details...
  • Caddy as a Reverse Proxy
  • Volume Management and Data Persistence
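Persistence comes down to mounting Ghost's content directory from the host; a minimal compose fragment, assuming the /opt/ghost layout used above (a named volume would work equally well):
services:
  ghost:
    image: ghost:5.115.1-alpine
    restart: always
    volumes:
      # Bind-mount Ghost's content directory so posts, images and themes
      # survive container recreation and can be backed up to S3
      - /opt/ghost/content:/var/lib/ghost/content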

Part 6: SSL Certificate Handling

  • Cloudflare Origin Certificates with Terraform
resource "tls_private_key" "ghost_private_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}
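
# The CSR consumed by the Cloudflare certificate below. A sketch of the missing
# link: the resource name matches the reference in ghost_cert, but the subject
# details are assumptions.
resource "tls_cert_request" "ghost_csr" {
  private_key_pem = tls_private_key.ghost_private_key.private_key_pem

  subject {
    common_name = "blog.example.org"
  }
}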

resource "cloudflare_origin_ca_certificate" "ghost_cert" {
  csr                = tls_cert_request.ghost_csr.cert_request_pem
  hostnames          = ["blog.example.org"]
  request_type       = "origin-rsa"
  requested_validity = 3650  # 10 years
}
  • Certificate Deployment and Renewal Strategy (one possible pattern sketched below)
  • SSL Configuration Troubleshooting
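One way to get the Terraform-issued certificate onto the instance without baking it into user data is SSM Parameter Store; a sketch under that assumption (the article may deploy it differently, e.g. via Ansible, and the parameter names are hypothetical):
# Store the origin certificate and key where the instance role can read them
resource "aws_ssm_parameter" "origin_cert" {
  name  = "/${var.name_prefix}/ssl/origin_cert"
  type  = "SecureString"
  value = cloudflare_origin_ca_certificate.ghost_cert.certificate
}

resource "aws_ssm_parameter" "origin_key" {
  name  = "/${var.name_prefix}/ssl/origin_key"
  type  = "SecureString"
  value = tls_private_key.ghost_private_key.private_key_pem
}
The bootstrap script can then fetch both with `aws ssm get-parameter --with-decryption` before starting Caddy.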

Part 7: Cost Management and Billing

  • Creating AWS Budgets with Terraform
resource "aws_budgets_budget" "monthly" {
  name              = "monthly-budget"
  budget_type       = "COST"
  time_unit         = "MONTHLY"
  time_period_start = "2025-01-01_00:00"
  limit_amount      = "10.0"
  limit_unit        = "USD"
  
  notification {
    comparison_operator = "GREATER_THAN"
    threshold           = 80
    threshold_type      = "PERCENTAGE"
    notification_type   = "ACTUAL"
    subscriber_email_addresses = ["your@email.com"]
  }
}
  • EC2 Instance Pricing Analysis Script
  • Cost Allocation Tags for Resource Tracking
  • AWS Cost Explorer Integration
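Tag-based cost tracking is mostly a matter of applying consistent tags everywhere (the default_tags block from Part 2 already does this) and then activating those keys as cost allocation tags so Cost Explorer can group spend by them; a sketch, assuming a Project tag key:
# Activate a tag key applied via default_tags so it appears in Cost Explorer.
# Needs a recent AWS provider; the key can also be activated manually in the Billing console.
resource "aws_ce_cost_allocation_tag" "project" {
  tag_key = "Project"
  status  = "Active"
}
Once activated, Cost Explorer can break the bill down by that tag key, separating the blog's spend from anything else in the account.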

Part 8: Backup and Disaster Recovery

  • S3 Lifecycle Policies for Backups (sketched below)
  • Ghost Content Backup Strategy
  • Automated Recovery Procedures
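A sketch of a lifecycle rule that keeps backup storage costs flat (the prefix and the 30/90-day retention are assumptions, not the article's actual policy):
resource "aws_s3_bucket_lifecycle_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id

  rule {
    id     = "expire-old-backups"
    status = "Enabled"

    filter {
      prefix = "ghost-backups/"
    }

    # Move older backups to cheaper storage, then drop them entirely
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 90
    }
  }
}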

Part 9: CI/CD Pipeline Integration

  • GitHub Actions Workflow for Terraform
name: Terraform Apply

on:
  push:
    branches: [ main ]
    
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      # Implementation details...
  • Testing and Validation Steps
  • Safe Deployment Practices
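The validation gate before any apply is typically just fmt/validate/plan; a sketch of the steps (action versions and the saved-plan pattern are assumptions, not the article's exact workflow):
steps:
  - uses: actions/checkout@v4
  - uses: hashicorp/setup-terraform@v3
  # (AWS credentials configuration omitted)

  # Fail fast on formatting or configuration errors
  - run: terraform fmt -check -recursive
  - run: terraform init -input=false
  - run: terraform validate

  # Save the plan and apply exactly that plan, so what was reviewed is what runs
  - run: terraform plan -input=false -out=tfplan
  - run: terraform apply -input=false tfplan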

Part 10: Performance Monitoring and Optimization

  • CloudWatch Metrics and Alarms
resource "aws_cloudwatch_metric_alarm" "cpu_alarm" {
  alarm_name          = "${var.name_prefix}-high-cpu"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "300"
  statistic           = "Average"
  threshold           = "80"
  alarm_description   = "This metric monitors EC2 CPU utilization"
  
  dimensions = {
    InstanceId = aws_instance.ghost.id
  }
}
  • Resource Utilization Analysis
  • Load Testing Results and Tuning

Conclusion: Lessons in Infrastructure as Code

  • Evolution of the Infrastructure
  • Future Technical Improvements
  • Final Cost vs Performance Analysis
  • Resources for Further Learning