tl;dr – this project can now be deployed automatically with a Terraform script
In the last project update, I introduced my project to leverage AWS resources to identify whether pictures uploaded to an S3 bucket might contain images of credit cards, and in turn need special handling under an organisation's PCI DSS processes. And it worked!
But cranking out serverless environments by hand limits the full power of the cloud (so I'm told), so I looked at ways to automate the process; there are plenty of options, all with various pros and cons:
- CloudFormation (CFN):
- AWS' own native platform for defining Infrastructure as Code (IaC). CFN is one of AWS' 'free' feature sets (be wary: you still pay as usual for the services deployed BY CFN, but CFN itself isn't chargeable).
- Serverless Framework:
- Well recommended, but looking at the pricing model, access to AWS resources such as SNS (used in this project's architecture) crosses the threshold into the paid-for Pro version.
- Terraform:
- Fully featured 3rd party offering, able to manage resources across multiple cloud providers.
- AWS Serverless Application Model (SAM):
- AWS' latest framework for deploying serverless architecture. Being honest, this is probably exactly what I needed for this project, but I'd finished the project before coming across SAM in my research (maybe a follow-up article is required…).
As you've probably guessed if you read the title or tl;dr of this post: for this project, Terraform nosed over the line; largely for the unscientific reasoning that the DevOps people I know use Terraform for their IaC projects – so I could call in support and troubleshooting if needed.
Terraform Build Script
At its heart, Terraform's syntax provides a framework for defining what resources you want deployed to your given environment. Resource configuration can get complex if you need lots of customisation, but a basic resource definition follows a simple format:
resource "$RESOURCE_TYPE" "$INTERNAL_RESOURCE_NAME" {
parameter = "$value"
}
For example, the definition for a new S3 bucket to upload images to for this project was:
resource "aws_s3_bucket" "bucket" {
bucket = "infosanity-aws-card-spotter-tfbuild"
}
As you begin to build up more complex environments, you'll need to reference previously defined resources; this is achieved with the format:
${RESOURCE_TYPE.INTERNAL_RESOURCE_NAME.Property}
For example, when creating an S3 Bucket Notification resource to trigger the Lambda code when a new file is uploaded, the ID of the bucket created above is required, like so:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${aws_s3_bucket.bucket.id}"
<..snip..>
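For reference, a complete version of that notification resource might look something like the sketch below. The Lambda resource name (aws_lambda_function.card_spotter) and the event filter are assumptions for illustration rather than this project's exact configuration, and a matching aws_lambda_permission resource is also needed so that S3 is allowed to invoke the function.
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  lambda_function {
    # Hypothetical Lambda resource name, for illustration only
    lambda_function_arn = "${aws_lambda_function.card_spotter.arn}"
    events              = ["s3:ObjectCreated:*"]
  }
}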
Deployment Process
Terraform has a lot of functionality, much of which I've yet to explore, but for basic usage I've found only 4 commands are required.
terraform init
I'm sure init does all manner of important activity in the background; for now I just know you need to initialise a Terraform project for the first time before doing anything else. Once done, however, you can begin to build the infrastructure you've defined.
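One of those background activities is downloading the provider plugins declared in the configuration, so init needs at least a provider block to work from; a minimal example is below (the region is purely illustrative, not necessarily what this project uses).
# Tells terraform init which provider plugin to download
provider "aws" {
  region = "eu-west-2" # example region only
}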
terraform plan
terraform plan essentially performs a dry-run of the build process, interpreting the required resources, determining the links between resources which dictate the order in which they must be created, and ultimately the changes that will be made to your environment. It's a good idea to review the output, both to find any errors in the terraform files and to ensure that terraform is going to make the changes expected. For example, running plan against this project's terraform file produces:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_iam_role.iam_for_lambda will be created
  + resource "aws_iam_role" "iam_for_lambda" {
      + arn                = (known after apply)
      + assume_role_policy = jsonencode(
            {
              + Statement = [
                  + {
<..SNIP..>
N.B. Whilst I've found plan great for validating changes to a script, some errors, mistakes and omissions will only be caught once the changes are applied. Which leads me to…
terraform apply
As is probably expected, apply does exactly what it says on the tin: it applies the defined infrastructure to your given cloud environment(s). Running this command will change your cloud environment, with all the potential problems you could have caused manually; from breaking production systems, to unexpected vendor costs – but we're all experts, so that's not a concern, right?…
Multiple applies can be used to edit live environments, and terraform will (with both plan and apply) determine exactly what set of changes is required. This can be highly beneficial when iteratively troubleshooting a misconfiguration or adding a new feature to an existing environment.
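As a purely illustrative example, adding a tag to the bucket defined earlier would show up in the next plan/apply as an in-place update to the existing resource, rather than a destroy and recreate:
resource "aws_s3_bucket" "bucket" {
  bucket = "infosanity-aws-card-spotter-tfbuild"

  # Newly added tag: terraform plan reports this as an "update in-place"
  tags = {
    Project = "aws-card-spotter" # example tag, not part of the original build
  }
}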
terraform destroy
destroy's functionality is ultimately exactly why I was originally interested in adding IaC to my toolkit whilst working within AWS. Like apply, destroy does what it says on the tin: destroying all the resources it's previously instantiated.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_iam_role.iam_for_lambda will be destroyed
  - resource "aws_iam_role" "iam_for_lambda" {
      - arn                = "arn:aws:iam::<REDACTED>:role/iam_for_lambda" -> null
      - assume_role_policy = jsonencode(
            {
              - Statement = [
                  - {
From a housekeeping perspective, this removes the potential for unneeded legacy resources being left around forever because no-one's quite sure what they do or what they're connected to: a project environment built and maintained by terraform is gone in one command. From a personal perspective, this removes (reduces?) the potential for leaving expensive resources running in a personal account: finished a development session? terraform destroy and everything(*) is gone.
* Ok, there are some exceptions: for example, if there are any uploaded files left in the created S3 bucket after testing, terraform destroy won't delete the bucket until the contents are manually removed. Which I can definitely see is a sensible precaution to help avoid accidental data loss.
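If that behaviour isn't wanted (for throwaway test environments, say), the bucket resource can be told to empty itself on destroy via the force_destroy argument; not something I used for this project, just an option worth knowing about:
resource "aws_s3_bucket" "bucket" {
  bucket = "infosanity-aws-card-spotter-tfbuild"

  # Allows terraform destroy to delete the bucket even if it still contains objects.
  # Not used in this project; handy for disposable test environments only.
  force_destroy = true
}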
Caveats
Getting my hands dirty, I found a couple of issues which were initially difficult to overcome.
I'd initially naively assumed a little microservice built around a single Lambda would be a simple use-case for a learning project. Turns out, Lambda is one service that doesn't work wonderfully well with terraform thanks to a circular reference: the lambda code needs writing (and packaging into a zip archive) prior to deployment of the infrastructure, but the lambda code will probably need to reference other cloud resources which aren't built/known until after deployment.
This was resolved by having the lambda code (Python, in this case) retrieve the resource references from runtime environment variables, which are populated by terraform at build time. Simple enough once worked out, but it meant that my simple project took longer than initially expected.
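In terraform terms, that looks roughly like the sketch below; the function name, runtime, handler and SNS topic reference are illustrative assumptions rather than the project's exact code.
resource "aws_lambda_function" "card_spotter" {
  # The zip archive must be built before terraform apply is run
  filename      = "lambda_function.zip"
  function_name = "card_spotter" # illustrative name
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.7"

  environment {
    variables = {
      # The Python code reads this at runtime (e.g. os.environ["SNS_TOPIC_ARN"]),
      # so the topic ARN never needs to be hard-coded before deployment.
      SNS_TOPIC_ARN = "${aws_sns_topic.results.arn}"
    }
  }
}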
Secondly, again for ease of proof of concept, the SNS Topic for this project was intended to just be a simple email trigger with the output. Because of AWS' (sensible) requirement for an email subscription to be verified before sending (to prevent abuse, spam, etc.), email endpoints aren't supported by terraform:
These are unsupported because the endpoint needs to be authorized and does not generate an ARN until the target email address has been validated. This breaks the Terraform model and as a result are not currently supported.
https://www.terraform.io/docs/providers/aws/r/sns_topic_subscription.html
This isn't the greatest limitation; it just requires a manual step of subscribing to the created SNS Topic in the AWS console before the deployed pipeline is fully functional. And I'd expect that in a real-world example, the process output would likely trigger further processes, rather than just filling another inbox.
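So the terraform side only goes as far as creating the topic itself, something like the sketch below (the topic name is my assumption); the email subscription is then added, and confirmed, manually in the console.
# Terraform creates the topic...
resource "aws_sns_topic" "results" {
  name = "infosanity-card-spotter-results" # illustrative name
}
# ...but the email subscription to it has to be created (and confirmed)
# manually in the AWS console, as email endpoints aren't supported.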
Summary
I'm aware that this has been a very superficial introduction to the functionality terraform (and other IaC platforms) can provide. I'm glad I took the dive to add it to my AWS toolbox; I can understand why the ability to quickly define, build, and tear down infrastructure is going to be an important foundation for the other projects I've got rattling around my skull. Watch this space…
—
Andrew