On its website, Terraform is described as follows:
Terraform provides a common configuration to launch infrastructure — from physical and virtual servers to email and DNS providers. Once launched, Terraform safely and efficiently changes infrastructure as the configuration is evolved.
Simple file based configuration gives you a single view of your entire infrastructure.
This makes Terraform very appealing to us at Gluo, as it allows us to work with multiple cloud providers while maintaining a single code base.
In the future we plan to work with multiple cloud providers: we see more and more businesses moving to the cloud, and rather than sticking with one provider for all their services, they select for each service the provider that best complements their business needs, staying independent of any single cloud provider.
Therefore, we decided to have a look at how Terraform can help us in these cases.
In our first trial we decided to set up a mixed environment with multiple accounts on AWS and Azure.
AZURE
On the Azure side we have a single virtual instance, which will function as a database server for the application running in AWS.
This server sits behind a security group which only allows MySQL traffic from the AWS instance and SSH traffic from our company network.
As you can see below, the setup of the virtual machine is simple and self-explanatory: you just declare parameters like the instance type, OS, location, credentials, etc.
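As an illustration, a minimal instance definition with the azure_instance resource could look like the sketch below; all names and values are placeholders of our own choosing, and the exact set of required arguments may differ per provider version.

```hcl
# Hypothetical sketch of the database VM (all names/values are placeholders)
resource "azure_instance" "dbserver" {
  name                 = "dbserver"
  image                = "Ubuntu Server 14.04 LTS"
  size                 = "Basic_A1"
  location             = "West Europe"
  storage_service_name = "${azure_storage_service.storage.name}"
  virtual_network      = "${azure_virtual_network.vnet.name}"
  subnet               = "db-subnet"
  security_group       = "${azure_security_group.dbservers.name}"
  username             = "${var.azure_user}"
  password             = "${var.azure_password}"
}
```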
We also had to create a virtual network with a subnet and a security group. The security group is defined using the azure_security_group resource.
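The group itself is little more than a name and a location; a sketch (the name dbservers matches the reference used further on, the rest is a placeholder):

```hcl
# Hypothetical security group for the database server
resource "azure_security_group" "dbservers" {
  name     = "dbservers"
  location = "West Europe"
}
```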
Next we added two rules to that security group: one to allow SSH access from our company network, and one to allow MySQL access from the public IP of the AWS instances.
Again the code explains itself: we use the attribute reference azure_security_group.dbservers.name to attach the rules to the security group, and we allow the public IP of the AWS NAT instance access to the 10.1.2.0/24 range, our Azure subnet.
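A sketch of what those two rules could look like with the azure_security_group_rule resource; the company network CIDR, priorities and rule names are placeholders, and aws_eip.nat is assumed to be the NAT instance's Elastic IP from the AWS side of the configuration:

```hcl
# SSH from our company network (placeholder CIDR) to the Azure subnet
resource "azure_security_group_rule" "ssh" {
  name                       = "ssh-from-office"
  security_group_names       = ["${azure_security_group.dbservers.name}"]
  type                       = "Inbound"
  action                     = "Allow"
  priority                   = 100
  protocol                   = "TCP"
  source_address_prefix      = "203.0.113.0/24"
  source_port_range          = "*"
  destination_address_prefix = "10.1.2.0/24"
  destination_port_range     = "22"
}

# MySQL from the AWS NAT instance's public IP — a cross-provider reference
resource "azure_security_group_rule" "mysql" {
  name                       = "mysql-from-aws-nat"
  security_group_names       = ["${azure_security_group.dbservers.name}"]
  type                       = "Inbound"
  action                     = "Allow"
  priority                   = 110
  protocol                   = "TCP"
  source_address_prefix      = "${aws_eip.nat.public_ip}/32"
  source_port_range          = "*"
  destination_address_prefix = "10.1.2.0/24"
  destination_port_range     = "3306"
}
```

Note how the MySQL rule interpolates an attribute from an AWS resource directly into an Azure resource: this is exactly where Terraform's single configuration across providers pays off.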
Next we also had to define a virtual network; again, the code is self-explanatory: you choose the location, create subnets, etc.
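A sketch of such a virtual network, with the 10.1.2.0/24 subnet mentioned above and the security group attached to it (names are placeholders):

```hcl
# Hypothetical virtual network containing our database subnet
resource "azure_virtual_network" "vnet" {
  name          = "dbnet"
  address_space = ["10.1.0.0/16"]
  location      = "West Europe"

  subnet {
    name           = "db-subnet"
    address_prefix = "10.1.2.0/24"
    security_group = "${azure_security_group.dbservers.name}"
  }
}
```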
The last thing we have to create is a storage service for the VM to be hosted on. This can be done using the azure_storage_service resource.
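A minimal sketch (the storage service name must be globally unique on Azure; the one below is a placeholder):

```hcl
# Hypothetical storage service backing the VM's disk
resource "azure_storage_service" "storage" {
  name         = "gluodbstorage"   # placeholder — must be globally unique
  location     = "West Europe"
  account_type = "Standard_LRS"
}
```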
Below is our configuration. Please note that for this configuration we have used the “Azure Provider” API.
AWS
We will be using two separate AWS accounts to demonstrate Terraform’s ability to use multiple accounts as well as multiple providers. To use two accounts of the same provider, we have to set up our variables file to accept two different sets of keys.
These variables will be filled in using a secret .tfvars file containing both sets of access keys. This file should be named <something>.tfvars and contain at least the following lines:
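A sketch of the two pieces involved; the variable names are our own choice, not anything Terraform prescribes:

```hcl
# variables.tf — declare both sets of credentials (names are placeholders)
variable "aws_main_access_key" {}
variable "aws_main_secret_key" {}
variable "aws_link_access_key" {}
variable "aws_link_secret_key" {}

# The accompanying .tfvars file (keep it out of version control!)
# then contains, at minimum:
#   aws_main_access_key = "..."
#   aws_main_secret_key = "..."
#   aws_link_access_key = "..."
#   aws_link_secret_key = "..."
```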
Next we will set up the first AWS account with a few resources.
Create a first main.tf file to describe the resources for the main account.
The full setup of resources, including the VPC, subnets and instances, can be found here; the eip resource will be referenced later on.
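The parts of main.tf that matter for the cross-account story could be sketched like this (region and resource names are placeholders; aws_instance.nat is assumed to be the NAT instance defined in the full setup):

```hcl
# main.tf — default (main) AWS provider, no alias needed
provider "aws" {
  access_key = "${var.aws_main_access_key}"
  secret_key = "${var.aws_main_secret_key}"
  region     = "eu-west-1"   # placeholder region
}

# Elastic IP of the NAT instance — referenced later from the linked
# account's security group and from the Azure MySQL rule
resource "aws_eip" "nat" {
  instance = "${aws_instance.nat.id}"
  vpc      = true
}
```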
The linked.tf file will be used to reference public IP addresses from both our main AWS account and the Azure account:
We added an alias to differentiate between the accounts; the provider block without an alias remains the default. We can now create resources as usual, but add `provider = "aws.link"` so Terraform knows in which account it should launch the resources.
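A sketch of the aliased provider block and a resource opting in to it (AMI, region and names are placeholders):

```hcl
# linked.tf — second AWS account, distinguished by an alias
provider "aws" {
  alias      = "link"
  access_key = "${var.aws_link_access_key}"
  secret_key = "${var.aws_link_secret_key}"
  region     = "eu-west-1"   # placeholder region
}

# Resources for the linked account name the aliased provider explicitly
resource "aws_instance" "web" {
  provider      = "aws.link"
  ami           = "ami-00000000"   # placeholder AMI
  instance_type = "t2.micro"
}
```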
In this linked account we will create a security group that allows the EIP of the NAT-instance on the main account to access ports 22 and 80 on the instance on the linked account.
Since resource names have to be unique, we can now just reference resources created in other accounts.
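Put together, the security group in the linked account could be sketched as follows, with the main account's EIP interpolated straight into the ingress rules (names are placeholders):

```hcl
# Security group in the linked account admitting the main account's NAT EIP
resource "aws_security_group" "from_main" {
  provider = "aws.link"
  name     = "allow-main-nat"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${aws_eip.nat.public_ip}/32"]   # EIP from the main account
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${aws_eip.nat.public_ip}/32"]
  }
}
```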
Now all that is left is to run `terraform apply` and let Terraform provision it all.