UNIT-2 2.2 Introduction To Terraform

The document provides a comprehensive overview of system provisioning and configuration management, focusing on Terraform as an Infrastructure as Code (IaC) tool. It details Terraform's lifecycle, architecture, commands, variables, and use cases, emphasizing its ability to automate and manage cloud resources across multiple platforms. Additionally, it explains the importance of Terraform variables and their types, highlighting how they enhance the flexibility and scalability of infrastructure configurations.


System Provisioning and Configuration Management - 21CSH-481

UNIT-2
System provisioning and configuration management: States of various tools in provisioning and configuration, Reasons for using provisioning and configuration tools, Examples: Automation, preventing errors, tracking of changes, Examples of tools and their capabilities.
Terraform: Fundamentals, variables, Conditions, loops, TCL, State management, Workspaces, Modules

Introduction to Terraform
• Terraform is an open-source infrastructure as code (IaC) software tool used to provision the infrastructure of a cloud platform.
• Infrastructure is described in HCL scripts: human-readable configuration files that can be versioned, reused, and shared.
• You can provision a wide range of resources in the cloud by using Terraform, such as compute, storage, networking, and application services, across a variety of cloud providers and on-premises environments.
Infrastructure as Code (IaC)
• Infrastructure as Code (IaC) is a method of managing and provisioning
IT infrastructure using code, rather than manual configuration.
• It allows teams to automate the setup and management of their
infrastructure, making it more efficient and consistent.
• This is particularly useful in the DevOps environment, where teams are
constantly updating and deploying software.

Use Cases of Terraform


• The following are some of the use cases of Terraform.
• Provisioning Cloud Resources: Terraform can provision resources on different cloud platforms such as AWS, GCP, and others. The resources managed include compute, storage, networking, and application services.
• Multi-Cloud Management: You can manage the infrastructure of several cloud platforms at the same time, which helps you maintain multi-cloud or hybrid cloud environments.
• Infrastructure Versioning and Collaboration: The scripts written to provision the infrastructure can be stored in a version control system like Git, from where other teams can collaborate on infrastructure changes, track revisions, and roll back to previous states if needed.
• Automation and Continuous Integration/Continuous Deployment (CI/CD): You can also integrate Terraform into your CI/CD pipelines, so that whenever a build is triggered by a change, the infrastructure is updated automatically.

Terraform Lifecycle
• The Terraform lifecycle consists of init, plan, apply, and destroy.
1. terraform init initializes the (local) Terraform environment. It is usually executed only once per session.
2. terraform plan compares the Terraform state with the as-is state in the cloud, then builds and displays an execution plan. This does not change the deployment (read-only).
3. terraform apply executes the plan. This potentially changes the deployment.
4. terraform destroy deletes all resources that are governed by this specific Terraform environment.
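As a concrete sketch of what these lifecycle commands operate on, the minimal configuration below declares a single AWS resource. The provider version constraint, region, and CIDR block are illustrative assumptions, not values taken from this document:

```hcl
# main.tf - a minimal configuration the lifecycle commands act on.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # assumed version constraint
    }
  }
}

provider "aws" {
  region = "us-east-1"     # assumed region
}

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"  # illustrative CIDR
}

# terraform init    -> downloads the hashicorp/aws provider
# terraform plan    -> shows the VPC that would be created
# terraform apply   -> creates the VPC
# terraform destroy -> deletes the VPC
```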

Components of Terraform Architecture


1. Terraform Configuration Files
• These files contain the definition of the infrastructure resources
that Terraform will manage, as well as any input and output
variables and modules.
• The configuration files are written in the HashiCorp Configuration
Language (HCL), which is a domain-specific language designed
specifically for Terraform.
2. Terraform State File
• This file stores the current state of the infrastructure resources
managed by Terraform.
• The state file is used to track the resources that have been created,
modified, or destroyed, and it is used to ensure that the
infrastructure resources match the desired state defined in the
configuration files.
3. Infrastructure as Code
• Terraform allows you to use code to define and manage your
infrastructure, rather than manually configuring resources through
a user interface.
• This makes it easier to version, review, and collaborate on
infrastructure changes.
4. Cloud APIs or other Infrastructure Providers
• These are the APIs or other interfaces that Terraform uses to
create, modify, or destroy infrastructure resources.
• Terraform supports multiple cloud providers, as well as on-
premises and open-source tools.
5. Providers
• Terraform integrates with a wide range of cloud and infrastructure
providers, including AWS, Azure, GCP, and more.
• These providers allow Terraform to create and manage resources
on those platforms.
• Overall, the architecture of a Terraform deployment consists of
configuration files, a state file, and a CLI that interacts with cloud APIs
or other infrastructure providers to create, modify, or destroy
resources.
• This architecture enables users to define and manage infrastructure
resources in a declarative and reusable way.
Terraform Commands
1. Terraform init
• Terraform init command initializes a Terraform working directory
by downloading and installing any required plugins and
dependencies.
• It should be run before any other Terraform commands.
$ terraform init
2. Terraform validate
• The validate command performs precisely what its name implies:
it ensures that the code is internally coherent and examines it for
syntax mistakes.
• Only the configuration files (*.tf) in the active working directory
(and any modules they reference) are examined. Configurations in
other folders (for example, a standalone modules/ directory) must
be validated by running the command in each directory.
$ terraform validate
3. Terraform apply
• Terraform apply command applies the changes defined in the
configuration to your infrastructure.
• It creates or updates the resources according to the configuration,
and it also prompts you to confirm the changes before applying
them.
$ terraform apply
4. Terraform destroy
• Terraform destroy command will destroy all the resources created
by Terraform in the current working directory.
• It is a useful command for tearing down your infrastructure when
you no longer need it.
$ terraform destroy
5. Terraform import
• Imports an existing resource into the Terraform state, allowing it to
be managed by Terraform. It takes the resource address and the
provider-specific resource ID as arguments.
$ terraform import ADDRESS ID
6. Terraform console
• Opens an interactive console for evaluating expressions in the
Terraform configuration.
$ terraform console
7. Terraform refresh
• This command updates the state of your infrastructure to reflect the
actual state of your resources.
• It is useful when you want to ensure that your Terraform state is in
sync with the actual state of your infrastructure.
$ terraform refresh
Core Elements of Terraform
1. Terraform CLI
• Terraform is an open-source tool that is packaged into a single
executable binary, which you can download and run directly from
the command line.
• This tool helps you automate the creation and management of
infrastructure.
• To see a list of available commands in Terraform, you can run:
terraform --help
• This command will display all the available commands, with the
most commonly used ones listed first.
• The primary Terraform commands include:
• init: Prepares your directory to run other Terraform commands.

• validate: Checks if the configuration is valid.

• plan: Shows what changes will be made to your infrastructure.


• apply: Executes the changes to create or modify your
infrastructure.
• destroy: Deletes the infrastructure that was previously created.

• In addition to these, there are other commands for various tasks like
formatting code (fmt), managing state (state), and more.
2. Terraform Language
• Terraform uses HashiCorp Configuration Language (HCL) to
define infrastructure. HCL is designed to be both easy to read by
humans and understandable by machines, making it a great fit for
DevOps tools.
• Infrastructure elements managed by Terraform are
called resources.
• These can include virtual machines, S3 buckets, VPCs, and
databases.
• Each resource is defined in a block, like this example for creating
an AWS VPC:
resource "aws_vpc" "default_vpc" {
  cidr_block = "172.31.0.0/16"
  tags = {
    Name = "example_vpc"
  }
}
3. Terraform Provider
• A Terraform provider is a software component that enables
Terraform to communicate with a particular infrastructure
platform.
• Providers implement the resource types and data sources that
Terraform can manage for that platform.
• Cloud platforms, data centres, network devices, databases, and
other resources inside the target infrastructure or service can all be
defined, configured, and managed through Terraform providers.
4. Terraform Modules
• In Terraform, a module is a container for a set of related resources
that are used together to perform a specific task.
• Modules allow users to organize and reuse their infrastructure code,
making it easier to manage complex infrastructure deployments.
• Modules are defined using the ‘module’ block in Terraform
configuration.
• A module block takes a label (the module name), which is used to
reference the module in other parts of the configuration, along
with the following arguments:
• source: The source location of the module. This can be a local
path or a URL.
• version: The version of the module to use. This is optional and
can be used to pin a specific version of the module.
• The module's source directory defines the resources that make up
the module, as well as any input and output variables that the
module exposes.
• Input variables allow users to pass values into the module when it
is called, and output variables allow the module to return values to
the calling configuration.
• Modules can be nested, allowing users to create complex
infrastructure architectures using a hierarchical structure.
• Modules can also be published and shared on the Terraform
Registry, enabling users to reuse and extend the infrastructure code
of others.
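A sketch of calling a module from a root configuration; the module name, local source path, and the cidr input variable below are hypothetical examples, not taken from this document:

```hcl
# "network" is the block label used to reference the module elsewhere.
module "network" {
  source = "./modules/network"  # local path; a registry address would also allow a version argument
  cidr   = "10.0.0.0/16"        # an input variable assumed to be exposed by the module
}

# An output declared inside the module can then be read as:
#   module.network.vpc_id
```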
5. Terraform Provisioners
• Provisioners are special tools in Terraform that let you execute
commands on your infrastructure after it’s been created.
• For example, you can use provisioners to copy files to a virtual
machine or run scripts for further configuration.
• However, provisioners should be used with caution because they
can complicate your setup and may require higher-level
permissions.
• It’s best to only use them when no other Terraform constructs (like
resources or modules) can achieve the same result.
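As an illustration, Terraform's built-in local-exec provisioner runs a command on the machine executing Terraform after a resource is created. The resource values below reuse the example AMI from this document and are otherwise illustrative:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0d26eb3972b7f8c96"  # example AMI used elsewhere in this document
  instance_type = "t2.micro"

  # Runs locally once the instance has been created; self refers to this resource.
  provisioner "local-exec" {
    command = "echo Instance ${self.id} created >> provision.log"
  }
}
```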
6. Terraform State
• Terraform keeps track of your infrastructure and its current state in
a file called terraform.tfstate.
• This file contains information about your infrastructure resources,
which helps Terraform determine what changes to make during
future operations.
• The state can be stored locally on your machine, but in collaborative
settings, it’s usually better to store it remotely to ensure everyone
on the team is working with the same state information.
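For example, remote state can be configured with a backend block inside the terraform block. The S3 bucket, key, and region below are hypothetical placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "my-team-terraform-state"  # hypothetical bucket name
    key    = "prod/terraform.tfstate"   # path of the state file within the bucket
    region = "us-east-1"                # assumed region
  }
}
```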

Terraform variables
• Terraform variables are placeholders for values that you can use
to make your configurations more dynamic and reusable.
• They let you define values that can be reused throughout your
Terraform configuration, similar to variables in any
programming language.
• Terraform variables are used to accommodate different
configurations without altering your code.
• You can easily change only the values of these variables to
achieve different use cases.
• They make your configuration more dynamic and flexible, and
they enhance the parametrization of your code.

The importance of Terraform variables


• Terraform variables are essential for building scalable,
maintainable, and adaptable infrastructure configurations,
ultimately contributing to efficient infrastructure management
and deployment practices.

Terraform variables types


• Terraform has several types of variables, each designed to
handle different kinds of data.
• Below is the full list of Terraform variable types:

• String — This fundamental type stores text values. Use strings for data that doesn’t require mathematical operations, such as usernames or tags.
• Number — This type is used for numeric values that you
might need to perform calculations on or use in numeric
settings, such as scaling parameters, setting timeouts, and
defining a number of instances to deploy.
• Bool — Short for Boolean, this type is strictly for true or false
values. They are essential for logic and conditional statements
in configurations, such as enabling or disabling resource
provisioning.
• List — A list is a sequence of values of the same type. This
type is ideal for scenarios where you need to manage a
collection of similar items, like multiple configuration tags.
• Map — Maps are collections of key-value pairs, each unique
key mapping to a specific value. This type is useful, for
example, when associating server names with their roles or
configurations.
• Tuple — This type is similar to lists but can contain a fixed
number of elements, each potentially of a different type.
Tuples are suitable when you need to group a specific set of
values with varied types together, like a coordinate of mixed
data types.
• Object — Objects are used to define a structure with named
attributes, each with its own type. They are very flexible,
allowing the definition of complex relationships, like a
configuration block that includes various attributes of
different types.
• Set — Sets are collections of unique values of the same type.
They are useful when you need to ensure no duplicates, such
as a list of unique user identifiers or configurations that must
remain distinct.

Local variables
• Local variables are declared using the locals block.
• It is a group of key-value pairs that can be used in the
configuration.
• The values can be hard-coded or be a reference to another
variable or resource.
• Local variables are accessible within the module/configuration
where they are declared.
• Let us take an example of creating a configuration for an EC2
instance using local variables.
• Add this to a file named main.tf.
locals {
  ami  = "ami-0d26eb3972b7f8c96"
  type = "t2.micro"
  tags = {
    Name = "My Virtual Machine"
    Env  = "Dev"
  }
  subnet = "subnet-76a8163a"
  nic    = aws_network_interface.my_nic.id
}

resource "aws_instance" "myvm" {
  ami           = local.ami
  instance_type = local.type
  tags          = local.tags

  network_interface {
    network_interface_id = local.nic
    device_index         = 0
  }
}

resource "aws_network_interface" "my_nic" {
  description = "My NIC"
  subnet_id   = local.subnet

  tags = {
    Name = "My NIC"
  }
}
• In this example, we have declared all the local variables in the
locals block.
• The variables represent the AMI ID (ami), Instance type (type),
Subnet Id (subnet), Network Interface (nic) and Tags (tags) to be
assigned for the given EC2 instance.
• In the aws_instance resource block, we used these variables to
provide the appropriate values required for the given attribute.
• Notice how the local variables are referenced using the local keyword (without the trailing ‘s’).
• The usage of local variables is similar to data sources. However,
they have a completely different purpose.
• Data sources fetch valid values from the cloud provider based
on the query filters we provide.
• Whereas we can set our desired values in local variables — they
are not dependent on the cloud providers.
• It is indeed possible to assign a value from a data source to a
local variable.
• Similar to how we have done it to create the nic local variable,
it refers to the id argument in the aws_network_interface resource
block.
• As a best practice, try to keep the number of local variables to a
minimum.
• Using many local variables can make the code hard to read.
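To illustrate assigning a data source value to a local variable, the sketch below looks up an AMI and stores its ID in a local. The owner and filter values are illustrative assumptions:

```hcl
# Look up the most recent Amazon Linux 2 AMI; owner and filter values are assumed.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

locals {
  # A local whose value is fetched from the cloud provider via the data source.
  ami = data.aws_ami.amazon_linux.id
}
```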
Terraform input variables
• Terraform input variables are used to pass certain values from
outside of the configuration or module.
• They are used to assign dynamic values to resource attributes.
• The difference between local and input variables is that input
variables allow you to pass values before the code execution.
• Further, the main function of the input variables is to act as
inputs to modules.
• Modules are self-contained pieces of code that perform certain
predefined deployment tasks.
• Input variables declared within modules are used to accept
values from the root directory.
• Additionally, it is also possible to set certain attributes while declaring input variables, as below:
• type — identifies the type of the variable being declared.
• default — default value in case the value is not provided explicitly.
• description — a description of the variable. This description is also used to generate documentation for the module.
• validation — defines validation rules.
• sensitive — a boolean value. If true, Terraform masks the variable’s value anywhere it displays the variable.

Terraform input variable types


• Input variables support multiple data types.
• They are broadly categorized as simple and complex.
• String, number, bool are simple data types,
whereas list, map, tuple, object, and set are complex data types.
• The following snippets provide examples for each of the types
we listed.

String type
• The string type input variables are used to accept values in the
form of Unicode characters.
• The value is usually wrapped in double quotes, as shown below.
variable "string_type" {
  description = "This is a variable of type string"
  type        = string
  default     = "Default string value for this variable"
}
• The string type input variables also support a heredoc style
format, where the value being accepted is a longer string
separated by newline characters.
• The start and end of the value are indicated by a delimiter word,
conventionally “EOF” (End Of File).
• An example of the same is shown below.
variable "string_heredoc_type" {
  description = "This is a variable of type string"
  type        = string
  default     = <<EOF
hello, this is Sumeet.
Do visit my website!
EOF
}

Number type

• The number type input variable enables us to define and accept numerical values as inputs for infrastructure deployments.
• For example, these numeric values can help define the desired number of instances to be created in an auto-scaling group.
• The code below defines a number type input variable in any given Terraform config.
variable "number_type" {
  description = "This is a variable of type number"
  type        = number
  default     = 42
}

Boolean type

• The boolean type input variable is used to define and accept true/false values as inputs for infrastructure deployments, incorporating logic and conditional statements into Terraform configurations.
• Boolean input variables are particularly useful for enabling or disabling certain features or behaviors in infrastructure deployments.
• An example of a boolean variable is below.
variable "boolean_type" {
  description = "This is a variable of type bool"
  type        = bool
  default     = true
}
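One common pattern for "enabling or disabling" a feature, sketched here assuming the boolean_type variable above, is a conditional count expression. The AMI and instance type are illustrative values:

```hcl
# The instance is created only when var.boolean_type is true.
resource "aws_instance" "optional_vm" {
  count         = var.boolean_type ? 1 : 0
  ami           = "ami-0d26eb3972b7f8c96"  # illustrative AMI
  instance_type = "t2.micro"
}
```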

Terraform list variable

• Terraform list variables allow us to define and accept a collection of values as inputs for infrastructure deployments.
• A list is an ordered sequence of elements of a single declared type. The elements can be strings, numbers, or even other complex data structures, but a single list cannot mix multiple data types.
• List type input variables are particularly useful in scenarios where we need to provide multiple values of the same type, such as a list of IP addresses, a set of ports, or a collection of resource names.
• The example below is for an input variable of a type list that contains strings.
variable "list_type" {
  description = "This is a variable of type list"
  type        = list(string)
  default     = ["string1", "string2", "string3"]
}
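A sketch of consuming such a list, creating one resource per element and accessing elements by index; the resource type and AMI are illustrative:

```hcl
# One instance per element of var.list_type; each is named after its element.
resource "aws_instance" "workers" {
  count         = length(var.list_type)
  ami           = "ami-0d26eb3972b7f8c96"  # illustrative AMI
  instance_type = "t2.micro"

  tags = {
    Name = var.list_type[count.index]  # element access by index
  }
}
```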

Map type

• The map type input variable enables us to define and accept a collection of key-value pairs as inputs for our infrastructure deployments.
• A map is a complex data structure that associates values with unique keys, similar to a dictionary or an object in other programming languages.
• For example, a map can be used to specify resource tags, environment-specific settings, or configuration parameters for different modules.
• The example below shows how a map of string type values is defined in Terraform.
variable "map_type" {
  description = "This is a variable of type map"
  type        = map(string)
  default = {
    key1 = "value1"
    key2 = "value2"
  }
}

Object type

• An object represents a complex data structure that consists of multiple key-value pairs, where each key is associated with a specific data type for its corresponding value.
• The object type input variable allows us to define and accept a structured set of properties or attributes as inputs for our infrastructure deployments.
• For example, an object can be used to define a set of parameters for a server configuration.
• The variable below demonstrates how an object type input variable is defined with multi-typed properties.
variable "object_type" {
  description = "This is a variable of type object"
  type = object({
    name    = string
    age     = number
    enabled = bool
  })
  default = {
    name    = "John Doe"
    age     = 30
    enabled = true
  }
}

Tuple type

• A tuple is a fixed-length collection that can contain values of different data types.
• The key differences between tuples and lists are:
1. Tuples have a fixed length, whereas lists can grow or shrink.
2. A tuple can include values of different primitive types, while a list dictates a single type for all of its elements.
3. Both are ordered, but because lists are dynamically sized, their values can be resized and reordered.
• An example of a tuple type input variable:
variable "tuple_type" {
  description = "This is a variable of type tuple"
  type        = tuple([string, number, bool])
  default     = ["item1", 42, true]
}

Set type
• A set is an unordered collection of distinct values.
• Unlike lists, sets enforce uniqueness – each element can appear only once within the set. This is useful when duplicates must be avoided, such as a collection of unique user identifiers.
• Sets support various built-in operations such as union, intersection, and difference, which are used to combine or compare sets.
• An example of a set type input variable is below.
variable "set_example" {
  description = "This is a variable of type set"
  type        = set(string)
  default     = ["item1", "item2", "item3"]
}
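The set operations mentioned above correspond to Terraform's built-in functions setunion, setintersection, and setsubtract. A sketch using hypothetical user names in locals:

```hcl
locals {
  admins    = toset(["alice", "bob"])
  operators = toset(["bob", "carol"])

  all_users   = setunion(local.admins, local.operators)        # union of both sets
  both_roles  = setintersection(local.admins, local.operators) # elements in both
  admins_only = setsubtract(local.admins, local.operators)     # admins not in operators
}
```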

Map of objects

• One of the widely used complex input variable types is map(object).
• It is a data type that represents a map where each key is associated with an object value.
• It allows us to create a collection of key-value pairs, where the values are objects with defined attributes and their respective values.
• When using map(object), we define the structure of the object values by specifying the attributes and their corresponding types within the object type definition.
• Each object within the map can have its own set of attributes, providing flexibility to represent diverse sets of data.
• An example of the same is given below, where the map of objects represents attribute values used for the creation of multiple subnets.
variable "map_of_objects" {
  description = "This is a variable of type Map of objects"
  type = map(object({
    name = string,
    cidr = string
  }))
  default = {
    "subnet_a" = {
      name = "Subnet A",
      cidr = "10.10.1.0/24"
    },
    "subnet_b" = {
      name = "Subnet B",
      cidr = "10.10.2.0/24"
    },
    "subnet_c" = {
      name = "Subnet C",
      cidr = "10.10.3.0/24"
    }
  }
}
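To sketch how such a map drives the creation of multiple subnets, a for_each loop can iterate over it. The aws_vpc.main reference below is a hypothetical placeholder for a VPC defined elsewhere:

```hcl
# One subnet per entry in var.map_of_objects; each.key is the map key,
# each.value is the object. The vpc_id reference is a hypothetical placeholder.
resource "aws_subnet" "example" {
  for_each = var.map_of_objects

  vpc_id     = aws_vpc.main.id
  cidr_block = each.value.cidr

  tags = {
    Name = each.value.name
  }
}
```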

List of objects

• This type of variable is similar to the map of objects, except that the objects are not referred to by any “key”.
• The example used for the map of objects can also be represented in the form of a list of objects, as shown below.
• The list(object) is an ordered list of objects, where each object is referred to using its index.
• On the other hand, map(object) is an unordered collection, and each object is referred to using its key.
variable "list_of_objects" {
  description = "This is a variable of type List of objects"
  type = list(object({
    name = string,
    cidr = string
  }))
  default = [
    {
      name = "Subnet A",
      cidr = "10.10.1.0/24"
    },
    {
      name = "Subnet B",
      cidr = "10.10.2.0/24"
    },
    {
      name = "Subnet C",
      cidr = "10.10.3.0/24"
    }
  ]
}

Terraform input variables example


• Let us work through the same example as before.
• Only this time, we use variables instead of local variables.
• Create a new file to declare input variables as variables.tf and
add the below content to it.
variable "ami" {
  type        = string
  description = "AMI ID for the EC2 instance"
  default     = "ami-0d26eb3972b7f8c96"

  validation {
    condition     = length(var.ami) > 4 && substr(var.ami, 0, 4) == "ami-"
    error_message = "Please provide a valid value for variable AMI."
  }
}

variable "type" {
  type        = string
  description = "Instance type for the EC2 instance"
  default     = "t2.micro"
  sensitive   = true
}

variable "tags" {
  type = object({
    name = string
    env  = string
  })
  description = "Tags for the EC2 instance"
  default = {
    name = "My Virtual Machine"
    env  = "Dev"
  }
}

variable "subnet" {
  type        = string
  description = "Subnet ID for network interface"
  default     = "subnet-76a8163a"
}
• Here, we have declared four variables — ami, type, and subnet with simple data types, and tags with a complex data type object — a collection of key-value pairs with string values.
• Notice how we have made use of attributes
like description and default.
• The ami variable also has validation rules defined for them
to check the validity of the value provided.
• We have also marked the type variable as sensitive.
• Let us now modify main.tf to use the variables declared above.
resource "aws_instance" "myvm" {
  ami           = var.ami
  instance_type = var.type
  tags          = var.tags

  network_interface {
    network_interface_id = aws_network_interface.my_nic.id
    device_index         = 0
  }
}

resource "aws_network_interface" "my_nic" {
  description = "My NIC"
  subnet_id   = var.subnet

  tags = {
    Name = "My NIC"
  }
}
• Within the resource blocks, we have simply used these
variables by using var.<variable name> format.
• When you proceed to plan and apply this configuration, the
variable values will automatically be replaced by default values.
• The following is a sample plan output (truncated).

Plan: 2 to add, 0 to change, 0 to destroy.


• To check how validation works, modify the default value
provided to the ami variable.
• Make sure to change the ami- part since validation rules are
validating the same.
• Run the plan command, and see the output.
• You should see the error message thrown on the console as
below.
│ Error: Invalid value for variable

│ on variables.tf line 1:
│ 1: variable "ami" {

│ Please provide a valid value for variable AMI.

│ This was checked by the validation rule at variables.tf:6,3-13.
• Also, notice how the type value is represented in the plan
output.
• Since we have marked it as sensitive, its value is not shown.
• Instead, it just displays sensitive.
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_state = (known after apply)
+ instance_type = (sensitive)
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
Variable substitution using CLI and .tfvars
• In the previous example, we relied on the default values of the
variables.
• However, variables are generally used to substitute values
during runtime.
• The default values can be overridden in two ways:

• Passing the values in CLI as -var argument.


• Using .tfvars file to set variable values explicitly.
• If we want to initialize the variables using the CLI argument,
we can do so as below.
• Running this command results in Terraform using these values
instead of the defaults.
terraform plan -var "ami=test" -var "type=t2.nano" -var
"tags={\"name\":\"My Virtual Machine\",\"env\":\"Dev\"}"
• While working with plan or apply commands, -var argument
should be used for every variable to be overridden.
• Note how we have provided the value for complex data type
with escaped characters.
• Imagine a scenario where many variables are used in the
configuration.
• Passing the values using CLI arguments can become a tedious
task.
• This is where .tfvars files come into play.
• Create a file with the .tfvars extension and add the below
content to it.
• I have used the name values.tfvars as the file name.
• This way we can organize and manage variable values easily.
ami = "ami-0d26eb3972b7f8c96"
type = "t2.nano"
tags = {
"name" : "My Virtual Machine"
"env" : "Dev"
}
• This time, we should ask Terraform to use the values.tfvars file
by providing its path to -var-file CLI argument.
• The final plan command should look as such:
terraform plan -var-file values.tfvars
• The -var-file argument is great if you have multiple .tfvars files
with variations in values.
• However, if you do not wish to provide the file path every time
you run plan or apply, simply name the file
as <filename>.auto.tfvars.
• This file is then automatically chosen to supply input variable
values.
Environment variables
• Additionally, input variable values can also be set
using Terraform environment variables.
• To do so, simply set the environment variable in the
format TF_VAR_<variable name>.
• The variable name part of the format is the same as the variables
declared in the variables.tf file.
• For example, to set the ami variable run the below command to
set its corresponding value.
export TF_VAR_ami=ami-0d26eb3972b7f8c96
• Apart from the above environment variable, it is important to
note that Terraform also uses a few other environment variables
like TF_LOG, TF_CLI_ARGS, TF_DATA_DIR, etc.
• These environment variables are used for various purposes like
logging, setting default behavior with respect to workspaces,
CLI arguments, etc.
Variable precedence
• As we have seen till now, there are three ways of providing
input values to Terraform configuration using variables.
• Namely—default values, CLI arguments, and .tfvars file.
• The precedence is given to values passed via CLI arguments.
• This is followed by values passed using the .tfvars file and
lastly, the default values are considered.
• In the current example, now that we have the values.tfvars file
saved, try to run a plan command by passing values via CLI -
var arguments.
• Make sure to provide different values as that of .tfvars and
defaults.
• Terraform uses the CLI values and ignores those provided via .tfvars and defaults.
• If the values are not provided as CLI arguments or in a .tfvars file, Terraform falls back on TF_VAR_ environment variables, and only then on the default values.
• Additionally, if we don’t provide the values in any of the forms
discussed above, Terraform would ask for the same in
interactive mode when plan or apply commands are run.
• As a best practice, it is not recommended to store secret and
sensitive information in variable files.
• These values should always be provided via
the TF_VAR_ environment variable.
• This is where Spacelift shines.
• It makes use of these Terraform native environment variables to
manage secrets as well as other attributes that make the most
sense.
• Managing the environment in the Spacelift console is easy,
thanks to a dedicated tab where values can be edited on the go.
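As a quick sketch of the precedence rules above (the variable name and values here are purely illustrative):

```hcl
# variables.tf — declares a variable with a default (lowest precedence)
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

# values.tfvars — overrides the default and any TF_VAR_ value
# instance_type = "t2.small"

# Environment variable — overrides only the default
# export TF_VAR_instance_type="t2.medium"

# CLI argument — wins over all of the above
# terraform plan -var="instance_type=t2.large"
#
# Value Terraform would use in this run: "t2.large"
```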
Variable validation
• In any programming language, you try to catch errors as early
as possible, and even though Terraform uses a declarative
language, the same principle applies here.
• Variable validations ensure that constraints are applied to your
variables.
• Before this feature was even introduced, the only way you could
do validations was by making use of a hacky method leveraging
the file function.
• Let’s see that in action:
locals {
vpc_cidr = "10.0.0.0/16"
vpc_cidr_validation = split("/", local.vpc_cidr)[1] < 16 || split("/",
local.vpc_cidr)[1] > 30 ? file(format("\n\nERROR: The VPC Cidr %s
is not between /16 and /30", local.vpc_cidr)) : null
}
• Basically, in the above example, I’m checking if the network
mask is lower than 16 and greater than 30, and if it is, my
“validation” will try to open a file that doesn’t exist and will
actually print out the error message that I want.
• Otherwise, it won’t print anything:
terraform apply

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your
configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
• Now, let’s change the cidr to receive an error:
locals {
  vpc_cidr = "10.0.0.0/8"
  vpc_cidr_validation = split("/", local.vpc_cidr)[1] < 16 || split("/",
  local.vpc_cidr)[1] > 30 ? file(format("\n\nERROR: The VPC Cidr %s
  is not between /16 and /30", local.vpc_cidr)) : null
}
terraform apply

│ Error: Invalid function argument
│
│ on main.tf line 3, in locals:
│ 3: vpc_cidr_validation = split("/", local.vpc_cidr)[1] < 16 ||
split("/", local.vpc_cidr)[1] > 30 ? file(format("\n\nERROR: The VPC
Cidr %s is not between /16 and /30", local.vpc_cidr)) : null
│ ├────────────────
│ │ while calling file(path)
│ │ local.vpc_cidr is "10.0.0.0/8"
│
│ Invalid value for "path" parameter: no file exists at "\n\nERROR:
The VPC Cidr 10.0.0.0/8 is not between /16 and /30"; this function
works only with files that are distributed as part of the configuration
source code, so if this file will be created by a resource in this
configuration you must instead obtain this result from an attribute of
that resource.
• It does the job, but as you can see, the error message can be
misleading.
• With variable validations, however, things are improved a lot.
• Variable validations are defined in the variable block, and they
receive two parameters:
• condition – what you want to check inside your variable
• error_message – what error message you would like your
users to get if the condition is not fulfilled
• Let’s recreate the above example with a variable validation:

variable "cidr_block" {
  type    = string
  default = "10.0.0.0/8"
  validation {
    condition     = split("/", var.cidr_block)[1] >= 16 && split("/",
    var.cidr_block)[1] <= 30
    error_message = "Your vpc cidr is not between 16 and 30"
  }
}
• This will result in an error because the cidr_block is not between
/16 and /30:
terraform apply

│ Error: Invalid value for variable
│
│ on main.tf line 1:
│ 1: variable "cidr_block" {
│ ├────────────────
│ │ var.cidr_block is "10.0.0.0/8"
│
│ Your vpc cidr is not between 16 and 30
│
│ This was checked by the validation rule at main.tf:4,3-13
• We could take this example even up a notch and verify if the
string we are passing is in a cidr format:
validation {
  condition     = strcontains(var.cidr_block, "/") &&
  length(split(".", var.cidr_block)) == 4
  error_message = "Your vpc cidr doesn't respect the cidr format"
}
• This validation checks if our cidr has a “/” and if it has 3 “dots”.
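A more robust alternative, assuming Terraform's built-in can and cidrhost functions are available (Terraform 0.13 and later), is to let Terraform's own CIDR parser do the checking for us:

```hcl
variable "cidr_block" {
  type    = string
  default = "10.0.0.0/16"

  validation {
    # can() returns true only if cidrhost() succeeds in parsing the
    # prefix, i.e. the string is a syntactically valid CIDR block
    condition     = can(cidrhost(var.cidr_block, 0))
    error_message = "Your vpc cidr doesn't respect the cidr format"
  }
}
```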
Sensitive variables
• Terraform has also the ability to mark variables as sensitive and
will not display their value when you are running terraform
plan and apply, but they will be readable from within the
Terraform state.
• Let’s take a look at an example of a sensitive variable:
variable "my_super_secret_password" {
type = string
default = "super-secret"
sensitive = true
}

output "my_super_secret_password" {
value = var.my_super_secret_password
}
terraform apply

│ Error: Output refers to sensitive values
│
│ on main.tf line 20:
│ 20: output "my_super_secret_password" {
│
│ To reduce the risk of accidentally exporting sensitive data that
was intended to be only internal, Terraform requires that any root
module output containing sensitive data
│ be explicitly marked as sensitive, to confirm your intent.
│
│ If you do intend to export this data, annotate the output value as
sensitive by adding the following argument:
│ sensitive = true
• Now, if we want terraform not to error out and at least show the
output, we should add the sensitive = true to that output:
output "my_super_secret_password" {
value = var.my_super_secret_password
sensitive = true
}
terraform apply

Changes to Outputs:
  + my_super_secret_password = (sensitive value)

You can apply this plan to save these new output values to the
Terraform state, without changing any real infrastructure.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

my_super_secret_password = <sensitive>
• If you want to see the sensitive value in the output too,
Terraform has a mechanism in place for that if you are
leveraging the nonsensitive function:
variable "my_super_secret_password" {
type = string
default = "super-secret"
sensitive = true
}

output "my_super_secret_password" {
value = nonsensitive(var.my_super_secret_password)
}
terraform apply

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your
configuration and found no differences, so no changes are needed.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

my_super_secret_password = "super-secret"
Output variables
• For situations where you deploy a large web application
infrastructure using Terraform, you often need certain
endpoints, IP addresses, database user credentials, and so forth.
• This information is most useful for passing the values to
modules along with other scenarios.
• This information is also available in Terraform state files.
• But state files are large, and normally we would have to perform
an intricate search for this kind of information.
• Output variables in Terraform are used to display the required
information in the console output after a successful application
of configuration for the root module.
• To declare an output variable, write the following configuration
block into the Terraform configuration files.
output "instance_id" {
  value       = aws_instance.myvm.id
  description = "AWS EC2 instance ID"
  sensitive   = false
}
• Continuing with the same example, we would like to display the
instance ID of the EC2 instance that is created.
• So, declare an output variable named instance_id — this could
be any name of our choosing.
• Within this output block, we have used some attributes to
associate this output variable’s value.
• We have used resource reference
for aws_instance.myvm configuration and specified to use
its id attribute.
• Optionally, we can use the description and sensitive flags.
• We have discussed the purpose of these attributes in previous
sections.
• When a plan command is run, the plan output acknowledges the
output variable being declared as below.
Changes to Outputs:
  + instance_id = (known after apply)
• Similarly, when we run the apply command, upon successful
creation of EC2 instance, we would know the instance ID of the
same.
• Once the deployment is successful, output variables can also be
accessed using the output command:
terraform output
• Output:
instance_id = "i-xxxxxxxx"
• Output variables are used by child modules to expose certain
values to the root module.
• The root module does not have access to any other component
being created by the child module.
• So, if some information needs to be made available to the root
module, output variables should be declared for the
corresponding attributes within the child module.
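To illustrate that last point, here is a minimal sketch (the module path and names are hypothetical) of a child module exposing an output that the root module then consumes:

```hcl
# modules/compute/outputs.tf (child module)
output "instance_id" {
  value       = aws_instance.myvm.id
  description = "AWS EC2 instance ID"
}

# main.tf (root module)
module "compute" {
  source = "./modules/compute"
}

# The root module can only reach the child's values through its outputs
output "instance_id" {
  value = module.compute.instance_id
}
```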
Terraform Expressions
• Expressions are the core of HCL itself – the logic muscle of the
entire language.
• Terraform expressions allow you to get a value from
somewhere, calculate or evaluate it.
• You can use them to refer to the value of something, or extend
the logic of a component – for example, make one copy of the
resource for each value contained within a variable, using it as
an argument.
• They are used pretty much everywhere – the most simple type
of expression would be a literal value – so, there is a great
chance that you have already used them before.

1. Operators
• Dedicated to logical comparison and arithmetic operations,
operators are mostly used for doing math and basic Boolean
algebra.
• If you need to know if number A equals number B, add them
together, or determine if both boolean A and boolean B are
“true”, Terraform offers the following operators:

Types of Terraform Operators
• Arithmetic operators – the basic ones for typical math
operations (+, -, *, /) and two special ones: “X % Y” would
return the remainder of dividing X by Y, and “-X”, which would
return X multiplied by -1. Those can only be used with numeric
values.
• Equality operators – “X == Y” is true, if X and Y have both the
same value and type, “X != Y” would return false in this case.
This one will work with any type of value.
• Comparison operators – “<, >, <=, >=” – exclusive to
numbers, returns true or false depending on the condition.
• Logical operators – the Boolean algebra part of the pack, work
only with the boolean values of true and false.
– “X || Y” returns true if either X or Y is true, false if any of
them is false.
– “X && Y” returns true only, if both X and Y are true, false if
any of them is false.
– “!X” is true, if X is false, false if X is true.
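The operators above can be combined freely inside expressions; a small illustrative locals block (values are arbitrary):

```hcl
locals {
  a = 7
  b = 2

  # Arithmetic operators
  sum       = local.a + local.b # 9
  remainder = local.a % local.b # 1
  negated   = -local.a          # -7

  # Equality and comparison operators
  equal    = local.a == local.b          # false
  in_range = local.a > 0 && local.a < 10 # true

  # Logical operators
  either   = local.equal || local.in_range # true
  inverted = !local.equal                  # true
}
```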

2. Conditionals
• Sometimes, you might run into a scenario where you’d want the
argument value to be different, depending on another value.
• The conditional syntax is as such:
condition ? true_val : false_val
• The condition part is constructed using previously described
operators.
• In this example, the bucket_name value is based on the “test”
variable—if it’s set to true, the bucket will be named “dev” and
if it’s false, the bucket will be named “prod”:
bucket_name = var.test == true ? "dev" : "prod"

3. Splat expressions
• Splat expressions are used to extract certain values from
complicated collections – like grabbing a list of attributes from
a list of objects containing those attributes.
• Usually, you would need an “for” expression to do this, but
humans are lazy creatures who like to make complicated things
simpler.
• For example, if you had a list of objects such as these:
test_variable = [
{
name = "Arthur",
test = "true"
},
{
name = "Martha"
test = "true"
}
]

• Instead of using the entire “for” expression:
[for o in var.test_variable : o.name]
• you could go for the splat expression form:
var.test_variable[*].name
• And in both cases, get the same result:
["Arthur", "Martha"]
• Do note that this behavior applies only if splat was used on a
list, set, or tuple.
• Anything else (except null) will be transformed into a tuple
with a single element inside; null will simply stay as is.
• This may be good or bad, depending on your use case.

4. Constraints
• In simple terms, constraints regulate what can be what and
where something can or cannot be used.
• There are two main types of constraints—for types
and versions.
• Type constraints regulate the values of variables and outputs.
• For example, a string is represented by anything enclosed in
quotes, a bool value is either a literal true or false, a list is
always opened with square brackets [ ], and a map is defined
with curly brackets { }.
• Version constraints usually apply to modules and regulate
which versions should or should not be used.
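A brief sketch of both kinds of constraint (the version numbers and module name are illustrative):

```hcl
# Type constraints on variables
variable "subnet_ids" {
  type = list(string)
}

variable "tags" {
  type = map(string)
}

# Version constraint on Terraform itself
terraform {
  required_version = ">= 1.3.0"
}

module "network" {
  source = "./modules/network"
  # version = "~> 2.0" # the version argument applies to registry modules
}
```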


Terraform loops
• Terraform loops are used to handle collections, and to produce
multiple instances of a resource or module without repeating the
code.
• There are three loops provided by Terraform to date:
• Count
• For_each
• For
1. Count
• Count is the most primitive—it allows you to specify a whole
number, and produces as many instances of something as this
number tells it to.
• For example, the following would order Terraform to create ten
S3 buckets:
resource "aws_s3_bucket" "test" {
count = 10
[...]
}
• When count is in use, each instance of a resource or module
gets a separate index, representing its place in the order of
creation.
• To get a value from a single resource created in this way, you
must refer to it by its index value (indices start at zero), e.g. if
you wished to see the ID of the fifth created S3 bucket, you
would need to call it as such:
aws_s3_bucket.test[4].id
• Although this is fine for identical, or nearly identical objects,
as previously mentioned, count is pretty primitive.
• When you need to use more distinct, complex values
– count yields to for_each.
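One way to make count-created resources slightly less identical is the count.index value; for example (the bucket naming scheme is illustrative):

```hcl
resource "aws_s3_bucket" "test" {
  count = 10

  # Each bucket gets its index appended: test-bucket-0 ... test-bucket-9
  bucket = "test-bucket-${count.index}"
}
```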

2. For_each
• As mentioned earlier, sometimes you might want to create
resources with distinct values associated with each one – such
as names or parameters (memory or disk size for example).
• For_each will let you do just that. Merely provide a variable—
map, or a set of strings, and the resources can access values
contained within, via each.key and each.value:
test_map = {
  test1 = "test2",
  test2 = "test4"
}

resource "test_resource" "thing" {
  for_each = var.test_map

  test_attribute_1 = each.key
  test_attribute_2 = each.value
}
• As you can see, for_each is quite powerful, but you haven’t
seen the best yet.
• By constructing a map of objects, you can leverage a resource
or module to create multiple instances of itself, each with
multiple declared variable values:
my_instances = {
instance_1 = {
ami = "ami-00124569584abc",
type = "t2.micro"
},
instance_2 = {
ami = "ami-987654321xyzab",
type = "t2.large"
},
}

resource "aws_instance" "test" {
  for_each = var.my_instances

  ami           = each.value["ami"]
  instance_type = each.value["type"]
}
• Using this approach, you don’t have to touch anything except
the .tfvars file, to provide new instances of resources you have
already declared in your configuration.

3. For
• For is made for picking out, iterating over, and operating on
things from complex collections.
• Imagine that you have a list of words (strings), but
unfortunately, all of them contain newline characters at the end
which you don’t want.
• Like this:
word_list = [
"Critical\n",
"failure\n",
"teapot\n",
"broken\n"
]
• To fix this problem, you could do
[for word in var.word_list : chomp(word)]
• which would result in:
["Critical", "failure", "teapot", "broken"]
• As you can see, a list comes in, a list goes out—but, this is not
a must.
• The type of input, and the brackets which wrap
the for expression, determine the type of output it produces.
• If you had wrapped it with curly brackets, and provided a map
as an input, the output would have been a map.
• But there’s one thing that’s even more interesting—
the for expression can also be used to filter the input as you
please.
• By adding an if clause, you can conditionally operate or not
operate on certain values, depending on your defined terms.
• Take this example, making every word start with a capital
letter… except for the word “teapot”:
[for word in var.word_list : upper(word) if word != "teapot"]
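As mentioned above, wrapping a for expression in curly brackets and feeding it a map produces a map; a small sketch (the variable name is illustrative):

```hcl
variable "word_map" {
  default = {
    first  = "Critical\n"
    second = "failure\n"
  }
}

locals {
  # Curly brackets plus the key => value syntax yield a map:
  # { first = "Critical", second = "failure" }
  cleaned = { for k, v in var.word_map : k => chomp(v) }
}
```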
Terraform State
• At its core, Terraform state provides an overview of your
infrastructure's resources and properties; this represents what
Terraform estimates will exist based on your configuration.
• Terraform uses state information to understand which resources
have been created, updated, or destroyed during each run.
• This information allows Terraform to make intelligent decisions
regarding which resources should be created, updated, or
destroyed during future runs.
• Terraform state is stored in a file named terraform.tfstate in
the root directory of your Terraform project.
• This JSON file stores information about all your resources, such
as IDs, attributes, and dependencies.
Terraform State Importance
• The importance of the Terraform state cannot be overstated.
• Here are some key reasons why it's crucial:
o Tracking Resource State: Terraform uses its state file to
accurately account for your infrastructure's current state.
Terraform automatically tracks the state of your resources to
understand their current configuration and to make informed
decisions when applying changes.
o Resource Dependency Management: The state also helps
Terraform understand the dependencies between resources.
If Resource A depends on Resource B, Terraform needs to
know this relationship to provision them in the correct order.
State information guides Terraform in maintaining this order
during operations.
o Plan and Apply Operations: Before making changes to
your infrastructure, Terraform generates an execution plan
that outlines what it intends to do. This plan compares the
desired state (your configuration) and the current state (the
state file). Terraform wouldn't know what changes to
propose or apply without the state.
o Resource Deletion and Cleanup: Terraform employs the
state to identify resources you no longer require for deletion.
Based on this state information, you only need to pay for
what's being actively utilized, helping manage costs more
effectively.
Managing Terraform State
• Terraform keeps an eye on all resources under its management
and their current states, ensuring accurate infrastructure
updates.
• There are two primary methods for doing so - local and remote.
Local State Management
• Local state management is the default approach employed by
Terraform.
• Under this method, Terraform creates a local state file in the
same directory as your Terraform configuration files to
facilitate state administration locally.
• Here's how you can accomplish it:
o Initializing a Terraform Project
o The first step is to initialize your Terraform project.
o This command sets up the necessary plugins and backend
configurations.
terraform init
o Creating Resources
o Define infrastructure resources in your Terraform
configuration file (e.g., main.tf).
o For example, to launch an AWS EC2 instance:

resource "aws_instance" "demo" {
  ami           = "ami-0c88g674cbfafe1f0"
  instance_type = "t2.micro"
}
o Apply the configuration to create the resources:
terraform apply
o Terraform will add the details of the provisioned
resource in the terraform.tfstate file.
o Modifying Resources
o To modify existing resources, make changes to your
Terraform configuration and apply the changes again.
o For instance, we can add a tag to the resource created in
the previous step.

resource "aws_instance" "demo" {
  ami           = "ami-0c88g674cbfafe1f0"
  instance_type = "t2.micro"
  tags = {
    Name = "MyDemoInstance"
  }
}
o Apply the configuration to modify the resources:
terraform apply
o Terraform will identify any discrepancies between your
desired state in your configuration and what is currently
stored in the local state file and make any necessary
updates.
o Deleting Resources and Cleanup
o To delete resources, remove them from your
configuration file and apply the changes.
o The resources will be destroyed, and the state file will be
updated to reflect the changes.
o Challenges with Local State Management
o Below are the limitations of local state management:
o Limited Collaboration: Local state files are tied to
individual machines, making it more difficult for teams
to collaborate effectively.
o Risk of Data Loss: Local state files are vulnerable to
data loss if a machine crashes or the state file is
accidentally deleted.
o Concurrency Issues: Within teams, multiple members
may attempt to implement changes simultaneously,
leading to conflicts in configuration.
Remote State Management
• Remote state management is a more robust approach suitable
for team environments and production use cases.
• In this method, the Terraform state file is stored remotely in
a shared location that all team members can access.
• Common choices for remote state storage include Amazon
S3, Azure Blob Storage, and Terraform Cloud.
• Here's how to set up remote state management.
o Initialize Remote State
o Initialize your Terraform project:
terraform init
o However, instead of using the default local state
backend, specify a remote state backend in your
configuration.
o Configure Remote State Backend
o In your Terraform configuration (e.g., main.tf), specify
the backend configuration to use the remote state.
o For example, using Amazon S3 as the remote state
backend:
terraform {
  backend "s3" {
    bucket         = "aws-s3-bucket-demo"
    key            = "statefile.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "demo-dynamodb-table"
  }
}
o In this example, we use an S3 bucket to store the state
file and DynamoDB for locking.
o Below are the details of the above configurations:
• bucket: This is the name of the S3 bucket where
Terraform will store its state file and related data.
• key: This is the path within the S3 bucket where
Terraform will store its state file.
• region: This specifies the AWS region in which the S3
bucket is located.
• encrypt: When set to true, it means that Terraform
will encrypt the state file when storing it in S3,
providing additional security.
• dynamodb_table: This setting is used for state
locking, which prevents concurrent state
modifications.
o Apply Changes with Remote State
o Apply your Terraform configuration using the command
below:
terraform apply
o Terraform will store the state remotely in the specified
backend, making it accessible to all team members.
o Benefits of Remote State Management
o Remote state management offers several advantages:
• Improved collaboration: All team members can now
access and update the same state file without manually
sharing and synchronizing state files.
• Increased security: Remote state storage solutions
often have built-in security features, including access
control mechanisms.
• Better data protection: Cloud-based remote state
solutions typically offer data redundancy, backups, and
versioning to protect state data from accidental deletion
or corruption.
o Choose an appropriate remote state management solution
to ensure better collaboration, data integrity, and
concurrency control for your Terraform projects.
Terraform State Management Example
• In this scenario, we want to use Terraform to provision and
manage AWS EC2 instances.
1. Local State Management
Step 1: Set Up Your Project
Create a new directory for your Terraform project and navigate
into it:
mkdir terraform-ec2-demo
cd terraform-ec2-demo
Inside this directory, create a Terraform configuration file named
main.tf with the following content:
# main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-0c88g674cbfafe1f0"
  instance_type = "t2.micro"
}
This configuration defines an AWS EC2 instance resource.
Step 2: Initialize and Apply
Initialize your Terraform project:
terraform init
Now, apply the configuration to create the EC2 instance:
terraform apply
Terraform will generate a state file named terraform.tfstate in the
same directory. This file stores information about the provisioned
EC2 instance.
Step 3: Modify and Update.
Let's make a change to the configuration. Update main.tf by
adding a "Name" tag to the EC2 instance:
# main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-0c88g674cbfafe1f0"
  instance_type = "t2.micro"
  tags = {
    Name = "DemoEC2Instance"
  }
}
Apply the changes:
terraform apply
Terraform will detect the modification and update the EC2
instance accordingly. The state file is also updated to reflect the
new configuration.
Step 4: Delete Resources and Cleanup
To delete the EC2 instance, remove its resource block from main.tf so
that only the provider configuration remains:
# main.tf
provider "aws" {
  region = "us-east-1"
}
Apply the changes to delete the resource:
terraform apply
The EC2 instance will be deleted, and the state file will be updated
accordingly.
2. Remote State Management with Amazon S3 and DynamoDB
Step 1: Create an S3 Bucket and DynamoDB Table
• Log in to the AWS Management Console.
• Create an S3 bucket to store your Terraform state files.
• Enable versioning for the S3 bucket for data protection.
• Create a DynamoDB table to manage state locking. You can
follow the steps mentioned in the previous section.
Step 2: Configure Remote State in Terraform.
Update your main.tf configuration to use remote state
management with S3 and DynamoDB:
# main.tf
provider "aws" {
region = "us-east-1"
}
terraform {
  backend "s3" {
    bucket         = "aws-s3-bucket-demo"
    key            = "demo/statefile.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "demo-dynamodb-table"
  }
}
resource "aws_instance" "demo" {
  ami           = "ami-0c88g674cbfafe1f0"
  instance_type = "t2.micro"
  tags = {
    Name = "DemoEC2Instance"
  }
}
Step 3: Initialize and Apply with Remote State
Initialize your Terraform project to configure the remote state:
terraform init
Now, apply the configuration to create the EC2 instance:
terraform apply
Terraform will provision an EC2 instance and store its state in an
S3 bucket using DynamoDB for locking purposes.
Remote state management enables team members to work
together and ensure the security and integrity of your Terraform
state.
The above examples illustrate how Terraform state can be
managed locally and remotely using AWS EC2 instances as
infrastructure resources. Select an approach suitable for managing
the state of your projects based on your needs.
Best Practices for Managing Terraform State
Here are some best practices for managing the Terraform state:
• Keep state in a remote backend. By default, Terraform
stores state locally on disk; however, for optimal sharing and
recovery, it would be much more helpful to store state
remotely in the Cloud. Doing this allows teammates to share
it more efficiently and allows for easy recovery in case of loss
of state.
• Create separate state files for every environment. It's wise
to keep individual state files for different environments, for
example, production, staging, and development
environments. It prevents a situation where a change in one
environment affects the other environments by mistake.
• Utilize version control with your Terraform
configuration. This allows you to keep track of changes
made and roll back if necessary. It also enables you to keep
an audit trail of each modification that’s made.
• Regularly back up your state file. Regular backups of your
state files can help ensure recovery should they become lost
or deleted unexpectedly.
Terraform Workspace
• Terraform workspaces enable us to manage multiple deployments of
the same configuration.
• When we create cloud resources using the Terraform configuration
language, the resources are created in the default workspace.
• It is a very handy tool that lets us test configurations by giving us
flexibility in resource allocation, regional deployments, multi-account
deployments, and so on.
• The information about all the resources managed by Terraform is
stored in a state file.
• It is important to store this state file in a secure location.
• Every Terraform run is associated with a state file for validation and
reference purposes.
• Any modifications to the Terraform configuration, planned or applied,
are always validated first with references in the state files, and the
execution result is updated back to it.
• If you are not consciously using any workspace, all of this already
happens in a default workspace.
• Workspaces help you isolate independent deployments of the same
Terraform config while using the same state file.

Difference between the Terraform environment and the workspace
• A Terraform environment typically refers to the overall setup of your
infrastructure, including all configurations and resources that define it.
• A workspace, on the other hand, is a named state file that enables you
to manage multiple isolated instances of the same infrastructure
configuration.
• By keeping state files separate, workspaces help prevent conflicts and
simplify the management of distinct deployments.

Terraform workspace vs. Terraform module
• Terraform workspaces and Terraform modules are two different
concepts that serve different purposes in the Terraform ecosystem.
• Workspaces allow users to manage different sets of infrastructure using
the same configuration by isolating state files.
• Modules, on the other hand, are a logical container for multiple
resources that are used together, facilitating reusability and better
organization of your code.

How to use Terraform workspace command
• To begin, let’s look at the options available to us in the help:
terraform workspace --help
Usage: terraform [global options] workspace

new, list, show, select, and delete Terraform workspaces.

Subcommands:
delete Delete a workspace
list List Workspaces
new Create a new workspace
select Select a workspace
show Show the name of the current workspace
• The options are quite straightforward here.
• We can use the workspace command to list all the available
workspaces and show the currently selected one.
• We can also create new workspaces and delete old ones. Finally, to
navigate through workspaces, we use the select command.

1. Create an EC2 instance
• For the sake of this blog post, let us consider a simple Terraform config
that creates an EC2 instance with the configuration below.
• We are currently using three variables for AMI value, the type of
instance to be created and the name tag.
resource "aws_instance" "my_vm" {
ami = var.ami //Ubuntu AMI
instance_type = var.instance_type

tags = {
Name = var.name_tag,
}
}
• If we run terraform plan command at this point, it will show that it
needs to create one resource, i.e. an EC2 instance.
• When the resource is created, the state file is updated with its
information and other attributes.
• Go ahead and create this EC2 instance.
• For reference, I am creating a t2.micro instance with Ubuntu 20.04
image.
Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + instance_id = (known after apply)
  + public_ip   = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.my_vm: Creating...
aws_instance.my_vm: Still creating... [10s elapsed]
aws_instance.my_vm: Still creating... [20s elapsed]
aws_instance.my_vm: Still creating... [30s elapsed]
aws_instance.my_vm: Still creating... [40s elapsed]
aws_instance.my_vm: Creation complete after 42s [id=i-07708992d1d3272c1]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

instance_id = "i-07708992d1d3272c1"
public_ip = "3.73.0.139"
• As we can see from the output, the EC2 instance was successfully
created.
• Run the plan command again, and see if Terraform wants to perform
any additional actions at this point. It probably won’t.

2. Run terraform workspace show
• To check the current workspace we are in, run the command below.
terraform workspace show
default
• The output here shows that we are currently in the workspace named
default.

3. Run terraform workspace list
• To be sure that no other workspaces currently exist, run the list
command as shown below.
terraform workspace list
* default
• The list command lists all the currently created workspaces, including
the default workspace.
• The asterisk (*) beside the default workspace indicates the currently
selected workspace we are in.

4. Create a new workspace


• Let us create another workspace and select the same.
• We can do this by running the new command with the desired name of
the new workspace as below.
terraform workspace new test_workspace
Created and switched to workspace "test_workspace"!

You're now on a new, empty workspace. Workspaces isolate


their state,
so if you run "terraform plan" Terraform will not see any
existing state
for this configuration.
• Here, I have selected the name of the new Terraform workspace as
“test_workspace”.
• Note that running this command has created the new workspace and
switched to it.

5. Verify the setup


• We can verify this selection is made by running the show command as
below.
terraform workspace show
test_workspace
• Of course, another way to verify it would be to run the list command
and see where the asterisk (*) is pointing to.
terraform workspace list
default
* test_workspace
Terraform workspaces and state file
• When we create a new workspace, Terraform creates a corresponding
new state file in the same remote backend that is configured initially.
• It is important to note that the backend being used should also be able
to support the workspaces.
• In this example, I have used the AWS S3 bucket as the remote backend.
• When we look at the contents of the Terraform state S3 bucket, apart
from our default terraform.tfstate file, we can see that a new directory
named “env:/” is created, within which another directory with the name
of our workspace (test_workspace) is created.
• A new terraform.tfstate file is maintained at this location.
• Ignore the other details in the screenshot below.
• The Key column is relevant here.

• Looking closely, the size of the default state file is considerably larger
than that of the custom workspace-specific state file.
• This shows that the new state file is created, but it does not hold any
information from the default state file.
• This is how Terraform creates an isolated environment and maintains
its state file differently.
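For reference, a remote backend like the one used here is declared with a backend block; a minimal sketch is shown below (the bucket name and region are placeholders). The "env:/" prefix seen above comes from the S3 backend's workspace_key_prefix argument, which defaults to "env:".

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder bucket name
    key    = "terraform.tfstate"
    region = "eu-central-1"
    # workspace_key_prefix defaults to "env:", producing keys like
    # env:/test_workspace/terraform.tfstate for non-default workspaces
  }
}
```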
• The contents of the test_workspace state file before running terraform
apply are shown below:
{
"version": 4,
"terraform_version": "1.2.3",
"serial": 0,
"lineage": "c1aa5782-da15-419e-70f8-7024cadd0cfe",
"outputs": {},
"resources": []
}
• As a result of this, if we run the plan command in the same directory
now, Terraform will consider the state file as per the selected
workspace.
• No resources are captured or maintained in this state file, so it will
propose creating a new EC2 instance.
terraform plan

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ instance_id = (known after apply)
+ public_ip = (known after apply)
• Note: The plan output does not specify the workspace information it is
using while planning, so be sure to be very cautious while applying
these changes, as using the wrong workspace may break the existing
working environment.
• Despite creating an EC2 instance using the same configuration in the
default workspace, Terraform disregards its existence in a new
workspace.
• This creates many possibilities for how infrastructure management
may happen in various environments.
• The isolated nature of the Terraform workspace is used to test out
modifications to the existing configuration before applying them to the
critical environment, but this is just one of the use cases.

How to delete a Terraform workspace


• To delete the workspace, first select a different workspace.
• In our case, we go back to the default workspace and run the delete
command.
• Terraform does not let us delete the currently selected workspace.
terraform workspace select default
Switched to workspace "default".

terraform workspace delete test_workspace


Deleted workspace "test_workspace"!
• The corresponding directory structure in our S3 backend is deleted
along with the state file.

• Also, if you attempt to delete a workspace where certain resources are


being managed by Terraform, it will not let you delete that workspace,
suggesting using the -force option instead.
terraform workspace delete test_workspace

│ Error: Workspace is not empty

│ Workspace "test_workspace" is currently tracking the
following resource instances:
│ - aws_instance.my_vm

│ Deleting this workspace would cause Terraform to lose
track of any associated remote objects, which would then
require you to delete them manually outside of Terraform.
You should destroy these objects with
│ Terraform before deleting the workspace.

│ If you want to delete this workspace anyway, and have
Terraform forget about these managed objects, use the -force
option to disable this safety check.
• Using the -force option may not be a good idea as we will lose track of
all the resources being managed by Terraform.
• A better option would be to select that workspace, run the destroy
command, and then attempt to delete the workspace again.
• Note: Default workspace cannot be deleted.
• As an additional point, if you don’t want to manage Terraform state,
Spacelift can help overcome common state management issues and
adds several must-have features for infrastructure management.
• It offers an optional sophisticated state backend synchronized with the
rest of the application to maximize security and convenience.

How to manage variables with Terraform


workspaces
• Managing variables with Terraform workspaces is essential when you
need different configurations for different environments, like dev, test,
stage, and prod.
• First, you need to declare the variables as you would normally do for
any Terraform configuration.
• Providing values to these variables can be done easily by using tfvars
files.
• For each environment, you can declare a tfvars file:
vars_dev.tfvars
vars_test.tfvars
vars_stage.tfvars
vars_prod.tfvars
• Based on the workspace you are on (let’s suppose you are on the dev
workspace), you will run an apply like:
terraform apply –var-file=vars_dev.tfvars
• You can also conditionally assign values to different parameters based
on the workspace. Let’s take a look at an example:
locals {
instance_type = terraform.workspace == “prod” ? “t2.large”
: “t2.micro”
}
• The above code will assign a t2.large value to the instance_type local
variable if the workspace is prod or a t2.micro, if there is any other
workspace selected.
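When more than two workspaces are involved, chaining ternary expressions gets unwieldy; a map keyed by workspace name scales better. A sketch with illustrative sizes:

```hcl
locals {
  # per-workspace instance sizes; the values here are illustrative
  instance_types = {
    prod  = "t2.large"
    stage = "t2.medium"
  }

  # fall back to t2.micro for any workspace not listed (e.g. dev, default)
  instance_type = lookup(local.instance_types, terraform.workspace, "t2.micro")
}
```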
• There is also an option to set up environment variables conditionally
based on the workspace you are on, but this implementation takes
advantage of another scripting language
(Bash/PowerShell/Python), and you will need to create the logic of the
script yourself.
• That’s why using multiple tfvars files makes the most sense in this
approach.
• As a best practice, wherever possible, you should assign default values
to your variables, especially when you are working with workspaces,
to avoid repeating code in the tfvars files.
• This will make your configuration less error-prone.
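One way to keep the selected workspace and the tfvars file in sync is to derive the file name from the workspace. A sketch of a hypothetical wrapper, following the vars_&lt;workspace&gt;.tfvars convention above:

```shell
#!/bin/sh
# tfvars_for: map a workspace name to its vars file,
# following the vars_<workspace>.tfvars convention used above
tfvars_for() {
  echo "vars_$1.tfvars"
}

# in a real wrapper you would then run:
#   terraform apply -var-file="$(tfvars_for "$(terraform workspace show)")"
tfvars_for dev # prints: vars_dev.tfvars
```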
Terraform workspaces interpolation
• With the basics of Terraform workspaces in the background, it makes
sense to use this knowledge within the Terraform configuration objects
to identify the resources belonging to the respective workspaces.
• For example, the EC2 instances created using the same configuration
in the previous example are created with the same name, i.e., whatever
value is specified in the name_tag variable.
• When we look at these instances in the AWS console, it becomes
difficult to quickly identify which EC2 instance belongs to which
workspace.
• Terraform provides an interpolation sequence to reference the value of
the currently selected workspace, as shown below:
${terraform.workspace}
• Let’s use this to set our name tags according to the respective
workspace being selected.
• In the configuration below, we have set our name_tag variable with a
default value of EC2.
• The aws_instance resource block uses this variable in combination
with the workspace interpolation sequence to set different and
respective names.
variable "name_tag" {
type = string
description = "Name of the EC2 instance"
default = "EC2"
}

resource "aws_instance" "my_vm" {


ami = var.ami //Ubuntu AMI
instance_type = var.instance_type
tags = {
Name = format("%s_%s", var.name_tag,
terraform.workspace)
}
}
• Note: the format() function is used to concatenate multiple strings to
form a valid name value.
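The behaviour of format() can be checked interactively with terraform console; for example, assuming the default workspace is selected:

```
> format("%s_%s", "EC2", terraform.workspace)
"EC2_default"
```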
• We deleted the workspace from the previous section, so let’s also
create a new workspace named “test” and create EC2 instances in both
the workspaces – default, and test.
• See the console output below:
terraform workspace list
* default

terraform apply

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ instance_id = (known after apply)
+ public_ip = (known after apply)

Do you want to perform these actions?


Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

aws_instance.my_vm: Creating...
aws_instance.my_vm: Still creating... [10s elapsed]
aws_instance.my_vm: Still creating... [20s elapsed]
aws_instance.my_vm: Still creating... [30s elapsed]
aws_instance.my_vm: Creation complete after 31s [id=i-
0c0a6ffa4405249d7]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

instance_id = "i-0c0a6ffa4405249d7"
public_ip = "3.122.229.252"

terraform workspace new test


Created and switched to workspace "test"!

You're now on a new, empty workspace. Workspaces isolate


their state,
so if you run "terraform plan" Terraform will not see any
existing state
for this configuration.

terraform apply

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ instance_id = (known after apply)
+ public_ip = (known after apply)

Do you want to perform these actions in workspace "test"?


Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

aws_instance.my_vm: Creating...
aws_instance.my_vm: Still creating... [10s elapsed]
aws_instance.my_vm: Still creating... [20s elapsed]
aws_instance.my_vm: Still creating... [30s elapsed]
aws_instance.my_vm: Creation complete after 32s [id=i-
0362373fe324e402f]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

instance_id = "i-0362373fe324e402f"
public_ip = "3.72.73.27"
• Here, two EC2 instances were created using the same configuration but
in different workspaces.
• To validate if the interpolation sequences worked, log in to the AWS
console and verify the names of the newly created EC2 instances.

• As we can see, the names are set as expected, and now we can easily
identify which instance belongs to which Terraform workspace.

Environment-specific resource requirements


using Terraform workspaces
• Using Terraform workspaces enables us to isolate the infrastructure
management of production and sub-production environments; we can
also leverage the workspace interpolation sequence to allocate
appropriate resources to them.
• This helps us avoid the unnecessary costs of creating transient sub-
production environments, as these can be the scaled-down version of
the original configuration.
• With workspace interpolation sequence and conditions, configurations
are improved, as shown below.
• In the example shown below, we have used the workspace
interpolation sequence to define the number of EC2 instances to be
created based on the workspace selected.
• If the default workspace is selected, the given configuration would
create three instances, and for all other workspaces, it would just create
a single instance.
variable "name_tag" {
type = string
description = "Name of the EC2 instance"
default = "EC2"
}

resource "aws_instance" "my_vm" {


count = terraform.workspace == "default" ? 3 : 1
ami = var.ami //Ubuntu AMI
instance_type = var.instance_type

tags = {
Name = format("%s_%s_%s", var.name_tag,
terraform.workspace, count.index)
}
}
• Furthermore, corresponding changes are made to the Name tag to
include the count index to distinguish between multiple instances.
• When we apply this configuration in the default and test workspace
(which we created in the last section), we should then be able to see the
following instances with names:

1. EC2_default_0
2. EC2_default_1
3. EC2_default_2
4. EC2_test_0

• Let’s repeat the steps to apply this configuration in both workspaces as


described in the console output of the previous section.
• The screenshot below shows the corresponding EC2 instances created
in default and test workspaces.

• Thus we have been able to limit the resource utilization of transient


environments using interpolation sequence for Terraform workspaces.
• Similarly, we can leverage the concept of workspaces with more
specific use cases.

Git branches vs. Terraform workspaces


• You shouldn’t confuse branches in the version control systems with
Terraform workspaces.
• Both have different purposes.
• Git branches maintain various versioned copies of the same
configuration used to develop new features or Terraform modules,
whereas workspaces completely depend upon the state file maintained
in the remote backend by Terraform.
• In general, it is not recommended to use feature branches for
deployments in the default workspace.
• The table below summarizes the impact of various combinations.
• It assumes that:

1. The Terraform configuration is maintained in a Git repository


2. Workspaces are used to create replica sets for debugging or
developmental purposes
3. The remote backend is configured for Terraform workflow

main branch:
• Default workspace: This is the desired scenario.
• Test workspace: Use when we want to create a scaled-down replica
of the existing environment for debugging or development purposes.

feature branch:
• Default workspace: Strict no. Feature branches may contain
configurations and modules which are still under development, so
deploying them using the default workspace should be avoided at all
costs.
• Test workspace: May not break the production, but would definitely
interfere with someone else's work in progress. Consider creating a
new workspace instead.
• When working with Terraform, if workspaces are used, they take
precedence over the version control strategy.

Terraform workspaces best practices


• As discussed in the previous section, introducing workspaces in the
Terraform workflow, along with existing Git practices, also increases
the risk of human error.
• If the team is not well-versed in using workspaces and branches in
conjunction, the chances of wrong infrastructure deployments are high.
• As we have seen before, the workspaces create a separate working
directory structure to store state files.
• This also means that the plugins and modules are cached separately for
each workspace.
• In a team where developers may create their own workspaces to test
their changes, this can cause bandwidth and space issues on the remote
backend host.
• Workspaces are best used to test the changes in an isolated replica of
infrastructure just before the production deployment.
• They are meant to be temporary and may not be the best solution to
manage multiple staging environments since organizations usually
want these environments to be strictly separate.
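Because applying in the wrong workspace is the main human-error risk here, teams sometimes add a small guard to their wrapper scripts. A minimal sketch (the function name and messages are made up; in practice you would feed it the output of terraform workspace show):

```shell
#!/bin/sh
# guard_workspace: refuse to proceed when the given workspace is "default"
guard_workspace() {
  if [ "$1" = "default" ]; then
    echo "refusing to apply in the default workspace" >&2
    return 1
  fi
  echo "workspace $1 OK"
}

guard_workspace test_workspace # prints: workspace test_workspace OK
```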
Modules
• Terraform modules are the main feature that allows us to reuse
resource definitions across multiple projects or have a better
organization in a single project.
• This is much like what we do in standard programming: instead of a
single file containing all code, we organize our code across multiple
files and packages.
• A module is just a directory containing one or more resource definition
files.
• Even when we put all our code in a single file/directory, we’re still
using modules — in this case, just one.
• The important point is that sub-directories are not included in a
module.
• Instead, the parent module must explicitly include them using
the module declaration:
module "networking" {
source = "./networking"
create_public_ip = true
}
• Here we’re referencing a module located in the “networking” sub-
directory and passing a single parameter to it — a boolean value.
• It’s important to note that before Terraform v0.13, count and
for_each could not be used to create multiple instances of a module;
current versions support both on module blocks.
Terraform Module
• A Terraform module is a collection of configuration files that
encapsulate resources used together to achieve a specific outcome.
• Modules promote reusability, organization, and maintainability in
infrastructure as code by allowing you to group related resources and
manage them as a single unit.
Understanding the Module Block
• In Terraform, you define a module block to incorporate the contents of
another module into your configuration.
• This allows you to reuse existing configurations and standardize
resource provisioning across your infrastructure.
Syntax of a Module Block
• A typical module block has the following structure:
module "<MODULE_NAME>" {
source = "<SOURCE>"
version = "<VERSION>"
# Additional arguments corresponding to the module's input
variables
}
• <MODULE_NAME>: A unique identifier for the module instance
within your configuration.
• <SOURCE>: Specifies the location of the module’s source code.
This can be a local path, a Git repository, or a Terraform Registry
address.
• <VERSION>: Defines the version of the module to use, applicable
when sourcing modules from registries.
Example: Using a Module from the Terraform Registry
• Suppose you want to deploy an AWS Virtual Private Cloud (VPC)
using a community-maintained module from the Terraform Registry.
• You can define the module block as follows:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.19.0"

name = "my-vpc"
cidr = "10.0.0.0/16"

azs = ["us-west-1a", "us-west-1b", "us-west-1c"]


private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24",
"10.0.103.0/24"]

enable_nat_gateway = true
tags = {
Terraform = "true"
Environment = "dev"
}
}
• In this example:
source: Specifies the module’s location in the Terraform Registry.
• version: Ensures that Terraform uses version 3.19.0 of the module,
maintaining consistency across deployments.
• The subsequent arguments (name, cidr, azs, etc.) correspond to the
input variables defined by the module, allowing customization of the
VPC’s configuration.
Example: Using a Local Module
• You can also create and reference local modules within your project
directory.
• Assume you have a module that sets up an AWS EC2 instance, located
in the modules/ec2-instance directory:
module "web_server" {
source = "./modules/ec2-instance"

instance_type = "t2.micro"
ami_id = "ami-0c55b159cbfafe1f0"
subnet_id = "subnet-abc12345"
}
• Here:
• source: Points to the relative path of the local module.
• The arguments (instance_type, ami_id, subnet_id) are input
variables defined within the ec2-instance module, allowing you to
customize the EC2 instance’s properties.
Key Components of a Module
• A well-structured Terraform module typically includes the following
files:
• main.tf: Contains the primary resource definitions.
• variables.tf: Declares input variables to parameterize the module.
• outputs.tf: Defines output values to expose information about the
resources.
• versions.tf: Specifies the required Terraform version and provider
constraints.
• README.md: Provides documentation on the module’s purpose
and usage.
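Putting these files together, a minimal (hypothetical) ec2-instance module might look like this; variables.tf declares the inputs, main.tf uses them, and outputs.tf exposes results:

```hcl
# modules/ec2-instance/variables.tf
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "ami_id" {
  type = string
}

# modules/ec2-instance/main.tf
resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

# modules/ec2-instance/outputs.tf
output "instance_id" {
  value = aws_instance.this.id
}
```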
Best Practices for Using Modules
• Encapsulation: Modules should encapsulate their resources,
exposing only necessary inputs and outputs.
• Reusability: Design modules to be reusable across different
configurations and environments.
• Versioning: Implement version control for modules to manage
changes and ensure stability.
• Documentation: Provide clear documentation within the module
directory to explain its purpose and usage.
• Consistency: Use consistent naming conventions and file structures
across modules.
• By effectively utilizing module blocks, you can create modular,
reusable, and maintainable infrastructure configurations, enhancing the
scalability and manageability of your Terraform projects.
$ tree minimal-module/
.
├── README.md
├── main.tf
├── variables.tf
└── outputs.tf
module "consul" {
source = "hashicorp/consul/aws"
}
module "moduleName" {
source = "module/path"
}

module "networkModule" {
source = "./module/network"
}

module "s3-bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "1.0.0"
# insert the 6 required variables here
}

Working with Terraform Modules


===================================
$ terraform init
# Add modules in .tf
$ terraform get
$ terraform plan
$ terraform apply
$ terraform show
resource "azurerm_resource_group" "example" {
name = "my-resources66"
location = "West Europe"
}

module "apache" {
  source = "./modules/install_apache"
}

module "nginx" {
  source    = "./modules/install_nginx"
  instances = module.web.instance_ids # assumes a "web" module defined elsewhere
}

module "network" {
source = "Azure/network/azurerm"
resource_group_name = azurerm_resource_group.example.name
address_spaces = ["10.0.0.0/16", "10.2.0.0/16"]
subnet_prefixes = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
subnet_names = ["subnet1", "subnet2", "subnet3"]

subnet_service_endpoints = {
"subnet1" : ["Microsoft.Sql"],
"subnet2" : ["Microsoft.Sql"],
"subnet3" : ["Microsoft.Sql"]
}

tags = {
environment = "dev"
costcenter = "it"
}

depends_on = [azurerm_resource_group.example]
}
key parameters available for a Terraform module block
• The table below summarizes the key parameters available for a
Terraform module block:

• source (Required): Specifies the location of the module’s source
code. This can be a local path, a Git repository, or a Terraform
Registry address. Example: source = "./modules/network" or
source = "terraform-aws-modules/vpc/aws"
• version (Optional): Defines the version of the module to use,
particularly when sourcing modules from registries.
Example: version = "3.0.0"
• providers (Optional): Overrides the default provider configurations
for the module. Useful when you need to specify different provider
settings for a particular module.
Example: providers = { aws = aws.us_east }
• count (Optional): Creates multiple instances of the module. Allows
you to scale resources by specifying the number of instances.
Example: count = 3
• for_each (Optional): Creates multiple instances of the module based
on a map or set of strings. Provides more control compared to count,
especially when each instance requires unique configurations.
Example: for_each = { net1 = "10.0.0.0/16", net2 = "10.1.0.0/16" }
• depends_on (Optional): Specifies dependencies on other resources or
modules. Ensures that the module is provisioned only after certain
resources or modules have been created.
Example: depends_on = [aws_vpc.main]
• *Note: Input variables are user-defined parameters that allow
customization of the module’s behaviour. Each input variable must be
defined within the module’s variables.tf file. When calling the module,
you provide values for these variables.
• By utilizing these parameters, you can effectively manage and
customize the behaviour of your Terraform modules, leading to more
modular and maintainable infrastructure configurations.
• In Terraform, a module block is used to incorporate the configuration
of one module into another, promoting reusability and organization in
your infrastructure as code.
• The module block supports several parameters, allowing you to
customize its behaviour and the resources it provisions.
• Here's a breakdown of the key parameters you can define within
a module block:
1. source (Required):
1. Description: Specifies the location of the module's source code.
2. Usage: This can be a local file path, a URL to a version control
system (like Git), or a reference to a module in the Terraform
Registry.
3. Example:
module "network" {
source = "./modules/network"
}
or
module "network" {
source = "terraform-aws-modules/vpc/aws"
}
2. version (Optional):
1. Description: Specifies the version of the module to use.
2. Usage: Particularly useful when sourcing modules from the
Terraform Registry to ensure compatibility and stability.
3. Example:
module "network" {
source = "terraform-aws-modules/vpc/aws"
version = "3.0.0"
}
3. Input Variables:
1. Description: These are user-defined parameters that allow
customization of the module's behavior.
2. Usage: Each input variable must be defined within the
module's variables.tf file. When calling the module, you provide
values for these variables.
3. Example:
module "network" {
source = "./modules/network"
cidr = "10.0.0.0/16"
region = "us-west-1"
}
Here, cidr and region are input variables defined within
the network module.
4. providers (Optional):
1. Description: Overrides the default provider configurations for the
module.
2. Usage: Useful when you need to specify different provider settings
for a particular module.
3. Example:
provider "aws" {
region = "us-west-1"
}

module "network" {
source = "./modules/network"
providers = {
aws = aws.us_east
}
}

provider "aws" {
alias = "us_east"
region = "us-east-1"
}
5. count (Optional):
1. Description: Creates multiple instances of the module.
2. Usage: Allows you to scale resources by specifying the number of
instances.
3. Example:
module "network" {
source = "./modules/network"
count = 3
}
This will instantiate the network module three times.
6. for_each (Optional):
1. Description: Creates multiple instances of the module based on
a map or set of strings.
2. Usage: Provides more control compared to count, especially
when each instance requires unique configurations.
3. Example:
module "network" {
source = "./modules/network"
for_each = {
net1 = "10.0.0.0/16"
net2 = "10.1.0.0/16"
}
cidr = each.value
}
This will create two instances of the network module with different
CIDR blocks.
7. depends_on (Optional):
1. Description: Specifies dependencies on other resources or
modules.
2. Usage: Ensures that the module is provisioned only after certain
resources or modules have been created.
3. Example:
module "network" {
source = "./modules/network"
depends_on = [aws_vpc.main]
}
Here, the network module will be created only after
the aws_vpc.main resource is provisioned.
• By utilizing these parameters, you can effectively manage and
customize the behaviour of your Terraform modules, leading to more
modular and maintainable infrastructure configurations.
Terraform Module Block Source Arguments Style
• In Terraform, the source argument within a module block specifies the
location of the module's source code.
• Terraform supports various methods to define this source, allowing
flexibility in module sourcing.
• Here are the different ways to specify the source parameter, along with
examples:
1. Local Paths
• You can reference modules stored locally on your filesystem using
relative paths.
module "network" {
source = "./modules/network"
# Additional module arguments
}
• In this example, Terraform will load the module from
the modules/network directory relative to your current working
directory.
2. Terraform Registry
• Modules can be sourced directly from the Terraform Registry, which
hosts a vast collection of publicly available modules.
module "consul" {
source = "hashicorp/consul/aws"
version = "0.0.5"
# Additional module arguments
}
• Here, the Consul module for AWS is sourced from the Terraform
Registry.
• Specifying the version ensures that Terraform uses the desired module
version.
3. GitHub
• Modules can be sourced from GitHub repositories using
the github.com prefix.
module "vpc" {
source = "github.com/terraform-aws-modules/terraform-aws-vpc"
# Additional module arguments
}
• This example fetches the VPC module from the specified GitHub
repository.
4. Generic Git Repositories
• Terraform supports sourcing modules from any Git repository by
specifying the repository URL.
module "vpc" {
source = "git::https://example.com/terraform-
modules.git//vpc?ref=tags/v0.1.0"
# Additional module arguments
}
• In this case, Terraform retrieves the vpc module from the specified Git
repository at the v0.1.0 tag.
5. Bitbucket
• Modules can also be sourced from Bitbucket repositories.
module "vpc" {
source = "bitbucket.org/organization/terraform-
modules.git//vpc?ref=tags/v0.1.0"
# Additional module arguments
}
• This example fetches the vpc module from a Bitbucket repository at
the specified tag.
6. HTTP URLs
• Modules can be downloaded from HTTP URLs pointing to a ZIP
archive of the module.
module "network" {
source = "https://example.com/terraform-modules/network.zip"
# Additional module arguments
}
• Terraform will download and extract the module from the specified
URL.
7. Amazon S3 Buckets
• Modules stored in Amazon S3 buckets can be referenced directly.
module "network" {
source = "s3::https://s3.amazonaws.com/mybucket/terraform-
modules/network.zip"
# Additional module arguments
}
• Here, Terraform retrieves the module from the specified S3 bucket.
8. Google Cloud Storage (GCS) Buckets
• Similarly, modules can be sourced from GCS buckets.
module "network" {
source =
"gcs::https://storage.googleapis.com/mybucket/terraform-
modules/network.zip"
# Additional module arguments
}
• Terraform will download the module from the specified GCS bucket.
Note on Parameterizing the source Argument
• As of Terraform v0.13, the source argument must be a literal string and
cannot directly reference variables or expressions.
• This design ensures that module sources are known before evaluating
the configuration.
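In other words, a configuration like the following fails at terraform init, because the source must be resolvable before variables are evaluated:

```hcl
variable "module_source" {
  type    = string
  default = "./modules/network"
}

module "network" {
  # NOT allowed: terraform init rejects a non-literal source
  source = var.module_source
}
```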
• By utilizing these methods, you can flexibly source Terraform modules
from various locations, tailoring your infrastructure as code to your
project's needs.
• In Terraform, you can specify a module's source using various Git
options to control which version or part of the repository to use.
• Here are examples illustrating different scenarios:
1. Default Branch
• By default, Terraform fetches the module from the repository's default
branch (usually main or master):
module "network" {
source = "git::https://github.com/username/repository.git"
# Additional module arguments
}
2. Specific Branch
• To use a specific branch, append the ref parameter with the branch
name:
module "network" {
source =
"git::https://github.com/username/repository.git?ref=branch-
name"
# Additional module arguments
}
3. Specific Tag
• To use a specific tag, set the ref parameter to the tag name:
module "network" {
source =
"git::https://github.com/username/repository.git?ref=v1.0.0"
# Additional module arguments
}
4. Specific Commit ID
• To use a specific commit, set the ref parameter to the commit hash:
module "network" {
source =
"git::https://github.com/username/repository.git?ref=commit-sha"
# Additional module arguments
}
5. Subdirectory in a Specific Branch
• If the module resides in a subdirectory of a specific branch, specify
both the branch and the subdirectory:
module "network" {
source =
"git::https://github.com/username/repository.git//subdirectory?ref
=branch-name"
# Additional module arguments
}
6. SSH Protocol
• To clone a private repository over SSH, use the SSH URL:
module "network" {
source = "git::ssh://git@github.com/username/repository.git"
# Additional module arguments
}
7. GitHub Shortcut
• For public GitHub repositories, you can use a shorthand notation:
module "network" {
source = "github.com/username/repository"
# Additional module arguments
}
8. Git Over SSH with Subdirectory
• To access a module in a subdirectory over SSH:
module "network" {
source =
"git::ssh://git@github.com/username/repository.git//subdirectory"
# Additional module arguments
}
9. Git Over HTTPS with Authentication
• For private repositories over HTTPS, include the username and token:
module "network" {
source =
"git::https://username:token@github.com/username/repository.git"
# Additional module arguments
}
10. Bitbucket Repository
• To source a module from a Bitbucket repository:
module "network" {
source = "git::https://bitbucket.org/username/repository.git"
# Additional module arguments
}
11. GitLab Repository
• To source a module from a GitLab repository:
module "network" {
source = "git::https://gitlab.com/username/repository.git"
# Additional module arguments
}
12. Using a Depth Parameter for Shallow Clone
• To perform a shallow clone and limit the history depth:
module "network" {
source =
"git::https://github.com/username/repository.git?ref=branch-
name&depth=1"
# Additional module arguments
}
• By utilizing these Git options, you can precisely control which version
or part of a repository Terraform uses for your modules.
