Terraform & Ansible

How to combine Ansible with Terraform to provision an AWS EC2 instance.

Provisioning an EC2 Instance

I had the need to provision an EC2 instance, and wanted to use the “infrastructure as code” approach.

I’ve used CloudFormation in the past, but I wanted to experiment with Terraform. I’ve also used cloud-init before, but I found it gets a bit unwieldy, and I didn’t know how well it would mix with Debian rather than Amazon Linux.

As I am familiar with Ansible, I wanted to use it both for the initial provisioning and for subsequent updates whenever I modified the playbooks.

I quickly found out that Terraform does not support Ansible as a first-class provisioner, and that it only runs provisioners at resource creation.

The Terraform module registry lists seven entries when searching for “ansible”, but I had no experience to guide choosing the “best” one.

Approach

This blog post takes you through the pieces I used to run Ansible with an EC2 instance provisioned through Terraform.

The text below excludes some non-essential details for brevity. The full example is available in the ec2-terraform-ansible repository.

Terraform

SSH Deployment Key

An SSH key is dynamically generated to use for deployment. This is both stored as an AWS key pair for use by the EC2 instance, and written to a local file for Ansible to use.

resource "tls_private_key" "deploy" {
  algorithm = "RSA"
}
resource "aws_key_pair" "deploy" {
  key_name_prefix = "deploy-demo_"
  public_key      = tls_private_key.deploy.public_key_openssh
}
resource "local_file" "deploy" {
  filename          = "deploy.pem"
  sensitive_content = tls_private_key.deploy.private_key_pem
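  # ssh refuses a private key with loose permissions, so tighten the file mode.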
  provisioner "local-exec" {
    command = "chmod 600 ${self.filename}"
  }
}

EC2 Instance

The creation of the EC2 instance is straightforward. It uses the “remote-exec” trick, running a trivial inline command to wait for the instance to become reachable over SSH, and then runs ansible-playbook for the initial provisioning. The reason for the PUBLIC_IP environment variable is explained below.

resource "aws_instance" "demo" {
  depends_on = [local_file.deploy]

  ami           = var.ec2_ami_id
  instance_type = "t3a.nano"

  key_name = aws_key_pair.deploy.key_name

  vpc_security_group_ids = [
    aws_security_group.demo.id
  ]

  provisioner "remote-exec" {
    inline = ["hostname"]

    connection {
      type        = "ssh"
      host        = self.public_ip
      user        = "admin"
      private_key = tls_private_key.deploy.private_key_pem
    }
  }

  provisioner "local-exec" {
    command = "ansible-playbook site.yml"
    environment = {
      PUBLIC_IP = self.public_ip
    }
  }
}
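
The security group referenced above is omitted here for brevity. A minimal sketch that simply permits inbound SSH (and all outbound traffic) might look like the following; treat it as an assumption and check the repository for the actual definition.

resource "aws_security_group" "demo" {
  name_prefix = "deploy-demo_"

  # Allow inbound SSH for provisioning.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic, e.g. for package installation.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}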

Ansible

Configuration

The contents of ansible.cfg specify the location of the dynamic inventory script, the SSH key for deployment (generated by Terraform), and the remote user (which is “admin” for Debian). Host key checking is also disabled, since the host key changes whenever the instance is recreated.

[defaults]

inventory = ./inventory

remote_user = admin
private_key_file = deploy.pem
host_key_checking = False

Dynamic Inventory

There is a general-purpose dynamic inventory script for EC2, ec2.py, but at 1,713 lines it felt too general-purpose (read: complicated) for what I needed. I also don’t like copying large chunks of code around – how would I keep them up to date?

The following is my simple inventory script.

#!/usr/bin/python3

from argparse import ArgumentParser
from os import environ
import json
import subprocess

def main():
    parser = ArgumentParser(add_help=False)
    parser.add_argument("--list", action="store_true", required=True)
    args = parser.parse_args()
    if args.list:
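        # Prefer the IP passed in by Terraform at create time; otherwise read it from the state.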
        public_ip = environ.get('PUBLIC_IP') or resource_attribute(tfstate(), "aws_instance", "demo", "public_ip")

        print(json.dumps(inventory(public_ip), indent=2))

def inventory(ip):
    return {
        "_meta" : {
            "hostvars" : {
                "demo": {
                    "ansible_host": ip
                }
            }
        },
        "all" : {
            "hosts" : [
                 "demo"
            ],
            "children": []
        }
    }

def tfstate():
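    # Pull the current state via the Terraform CLI; this also works with remote state.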
    return json.loads(subprocess.check_output(["terraform", "state", "pull"]).decode())

def resource_attribute(tfstate, type, name, attr):
    resource = [x for x in tfstate["resources"] if x["type"] == type and x["name"] == name][0]
    return resource["instances"][0]["attributes"][attr]

if __name__ == '__main__':
    main()

The dynamic inventory script requires you to specify the --list argument, and emits minimal information for the single host demo. Because it returns the _meta information, it does not need to support the --host argument.

When run during Terraform provisioning, the state does not yet contain the details of the EC2 instance, so the public IP has to be passed in via the PUBLIC_IP environment variable.

When you later run ansible-playbook site.yml to perform subsequent updates, the inventory script pulls the Terraform state and finds the public IP there.

If the public IP has changed (for example, after stopping and starting the instance) you need to run terraform refresh first, so that the state contains the current address.

Variables

The variables define the AWS region to launch the EC2 instance in, and the AMI ID of the Debian Stretch AMI to launch.
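
A minimal sketch of the corresponding variables.tf follows; the region variable name and its default are assumptions on my part, so check the repository for the exact definitions.

variable "region" {
  description = "AWS region to launch the EC2 instance in"
  default     = "eu-west-1" # assumed default; pick whichever region suits you
}

variable "ec2_ami_id" {
  description = "AMI ID of the Debian Stretch AMI to launch"
}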

Further Thoughts

You can use this approach with remote state without modification: the inventory script obtains the state via terraform state pull, which works the same way with a remote backend.

As the SSH deployment key is part of the Terraform state, which I store in S3, my playbook modifies authorized_keys to restrict the IP address permitted to use the deployment key, limiting the damage if the state is ever exposed.
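
The playbook itself is not shown in this post, but a task along the following lines illustrates the idea, using the stock authorized_key module. The source address is a hypothetical placeholder, and deriving the public key from the private key with ssh-keygen is my assumption rather than necessarily how the repository does it.

- name: Restrict the deploy key to a single source address
  authorized_key:
    user: admin
    # Derive the public key from the private key that Terraform wrote out.
    key: "{{ lookup('pipe', 'ssh-keygen -y -f deploy.pem') }}"
    # Hypothetical address: only connections from here may use this key.
    key_options: 'from="203.0.113.10"'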