Hey, I'm Alexandra White.


January 17, 2018 · Professional Writing, Web Development, Work

Creating and managing instances with Packer and Terraform

Over the last three months, I’ve written a series of four blog posts about creating and managing instances with Packer and Terraform. It’s been a tremendous learning experience, and I couldn’t have done it without the help of some HashiCorp experts (and ex-pats). Thanks to Sean Chittenden, Paul Stack, and Justin Reagor for your advice, critique, and editing.

Below I’ve included excerpts from all of the posts. The source code for all of the exercises is available on GitHub.

Create custom infrastructure images with Packer

There are a number of ways to deploy custom applications on Triton. We’ve talked a lot about how easy it is to dockerize applications. Triton also provides a variety of hardware virtual machine (HVM) and infrastructure images (just run triton images to see what we offer) to meet all of your various application needs.

Although those images can be deployed into containers and customized individually, that extra work is cumbersome and difficult to replicate. With Packer by HashiCorp, it’s quick and easy to automate the customization of a Triton image, which you can then deploy to multiple instances and update as needed.
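To give a sense of what that automation looks like, here is a minimal sketch of a Packer template for Triton in Packer's HCL format. The account name, key fingerprint, package, source image, and the nginx provisioning step are all illustrative placeholders, not the exact configuration from the post:

```hcl
# Sketch of a Packer template that builds a custom Triton image.
# All credentials, names, and UUIDs below are placeholders.
source "triton" "web" {
  triton_account      = "your-account-name"
  triton_key_id       = "25:d4:..."        # fingerprint of your SSH key
  triton_key_material = "~/.ssh/id_rsa"    # path to the matching private key

  source_machine_name    = "packer-builder"
  source_machine_package = "g4-highcpu-512M"      # package to build on
  source_machine_image   = "<source image UUID>"  # base image to customize

  image_name    = "custom-web"
  image_version = "1.0.0"
}

build {
  sources = ["source.triton.web"]

  # Customize the instance before Packer snapshots it into an image.
  provisioner "shell" {
    inline = ["pkgin -y install nginx"]
  }
}
```

Running packer build against a template like this spins up a temporary instance, runs the provisioners, snapshots the result as a reusable image, and tears the builder down.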

Get started with Terraform and a simple application

The three-word roundup that everyone (including HashiCorp) uses to describe Terraform is: infrastructure as code. If that’s too succinct without being informative, let’s add a word and modify it: managing infrastructure with code. More on what that means in a moment.

I’ll be using Terraform to deploy an application that I’ll dub the Cat Randomizer, an HTML+JS application by Bryce Osterhaus. If you like cat GIFs and random meows, you’re in luck. Otherwise, you may want to keep your sound off once you’ve launched your application.
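As a taste of "managing infrastructure with code," here is a simplified sketch of a Terraform configuration that deploys a single Triton instance. The account details, package, image UUID, and output are placeholders, not the exact configuration from the post:

```hcl
# Sketch: deploy one Triton instance with Terraform.
# Account details, package, and image UUID are placeholders.
provider "triton" {
  account = "your-account-name"
  key_id  = "25:d4:..."    # fingerprint of your SSH key
  url     = "https://us-east-1.api.joyent.com"
}

resource "triton_machine" "cat_randomizer" {
  name    = "cat-randomizer"
  package = "g4-highcpu-512M"
  image   = "<custom image UUID>"  # e.g. an image built with Packer

  tags = {
    role = "web"
  }
}

output "primary_ip" {
  value = triton_machine.cat_randomizer.primaryip
}
```

With a file like this in place, terraform init, terraform plan, and terraform apply take you from nothing to a running instance, and the same commands keep that instance in sync with future changes to the code.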

Using Terraform for blue-green deployments on Triton

In this post, you’ll learn how to implement the blue-green deployment model using Terraform as a way to deploy with confidence by reducing risk when updating applications and services. If you haven’t already, read how to create custom infrastructure images with Packer and how to get started with Terraform.

Deployment models let us deploy application infrastructure with confidence. There are a number of common models, including (but not limited to) canary deployment, rolling deployment, and the blue-green deployment model. In the blue-green model, two versions of an application are maintained. One version is live while the other serves as a standby deployment, upon which testing can be done. This allows developers to easily roll back changes if there are problems in a new release, removing the risk and organizational paralysis that keep teams from executing changes quickly.

Blue-green deployments ensure you can predictably release updates without disruption to your customer experience.

One way to make the blue-green deployment model even better is to introduce automation. At Joyent, we have fully automated the deployment of our website. With one click to merge a GitHub pull request, a blog post can be published. The amount of stress relief this has provided me as I’ve needed to make content updates is enormous, and it has given me the desire to automate all the things.

By using Terraform to implement an automated blue-green deployment, you can quickly update your applications, potentially with zero downtime or at the very least a controlled release.
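One way to express the pattern in Terraform (a simplified sketch in modern Terraform syntax, not the exact configuration from the post) is to keep both environments in state and flip a single variable to decide which one is live. All of the names, image UUIDs, and the CNS service naming below are illustrative:

```hcl
# Blue-green sketch: both environments run at once; the "active"
# variable decides which one gets the public service name.
variable "active" {
  description = "Which environment is live: blue or green"
  default     = "blue"
}

variable "app_images" {
  type = map(string)
  default = {
    blue  = "<blue image UUID>"
    green = "<green image UUID>"
  }
}

resource "triton_machine" "app" {
  for_each = var.app_images

  name    = "app-${each.key}"
  package = "g4-highcpu-512M"
  image   = each.value

  # Triton CNS turns service tags into DNS names; only the active
  # environment advertises the public "app" service.
  cns {
    services = each.key == var.active ? ["app"] : ["app-${each.key}"]
  }
}
```

Cutting over is then a one-line change: set active = "green", run terraform apply, and DNS shifts traffic to the new environment while the old one stays up for an easy rollback.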

Using Terraform to deploy in multiple data centers on Triton

In this post, you’ll learn how to implement a blue-green deployment on multiple Triton data centers. Over the past few months, I’ve posted a number of tutorials which give a foundation of knowledge that will help you with the rest of this post: create custom infrastructure images with Packer, get started deploying a simple application with Terraform, and use Terraform for blue-green deployments.

There are a number of reasons to consider deploying your application to more than one data center. For starters, it’s one of the best ways to maintain availability, greatly reducing the risk of events outside of your control, such as data center failure or internet connectivity issues between countries or regions. You can also minimize latency on a global scale by maintaining copies of an application in the data centers nearest the majority of your users.

Grow capacity to serve more visitors by expanding deployments across multiple regions. You could even follow a hybrid approach and split the application between on-premises infrastructure and the cloud. Triton is a public, private, and hybrid compute service, a match for all of those deployment models.
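In Terraform, deploying to more than one data center comes down to provider aliases: one triton provider block per data center, with each resource pointed at the right alias. A simplified sketch, where the endpoint URLs are real Triton data center endpoints but the names, package, and image UUIDs are placeholders (note that image UUIDs are per data center, so the Packer image must be built or copied into each one):

```hcl
# Sketch: the same application deployed to two Triton data centers
# via provider aliases. Names and image UUIDs are placeholders.
provider "triton" {
  alias = "east"
  url   = "https://us-east-1.api.joyent.com"
}

provider "triton" {
  alias = "west"
  url   = "https://us-sw-1.api.joyent.com"
}

resource "triton_machine" "app_east" {
  provider = triton.east
  name     = "app-east"
  package  = "g4-highcpu-512M"
  image    = "<image UUID in us-east-1>"
}

resource "triton_machine" "app_west" {
  provider = triton.west
  name     = "app-west"
  package  = "g4-highcpu-512M"
  image    = "<image UUID in us-sw-1>"
}
```

A single terraform apply then manages both data centers from one configuration, which is what makes combining this with the blue-green workflow practical.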

By adding the blue-green deployment workflow to the mix, you can ensure that updates to those multi-data center deployments go smoothly. After all, not all application requirements are the same, and it’s always better to test before sending an application to production.