As we navigate the increasingly complex landscape of cloud computing and Infrastructure as Code (IaC), Terraform has emerged as a key player. A majority of large enterprises use Terraform to build and update their cloud infrastructure. This comprehensive guide on Terraform Interview Questions aims to help you prepare for interviews that involve Terraform – its core concepts, use cases, and best practices.
From beginner-level overviews to advanced and scenario-based questions, and even Terraform on AWS, this interview guide has something for every aspiring cloud technologist or infrastructure architect. Let’s get started!
Beginner Level Terraform Interview Questions
Q1. What is Terraform and what are its main uses?
Terraform is an open-source Infrastructure as Code tool developed by HashiCorp. It allows developers and system administrators to describe and provision data center infrastructure using a declarative configuration language. Terraform can manage a wide variety of service providers as well as custom in-house solutions.
The main uses of Terraform are:
- Infrastructure Provisioning
- Change Management and Automation
- Policy Enforcement
- Infrastructure Versioning
Q2. How does Terraform compare to other Infrastructure as Code (IaC) tools like AWS CloudFormation or Ansible?
Terraform, AWS CloudFormation, and Ansible all serve the purpose of automating infrastructure provisioning, but they do so in slightly different ways.
- AWS CloudFormation: It is a service offered by Amazon Web Services (AWS). It is closely integrated with other AWS services but lacks flexibility when it comes to managing infrastructure outside of the AWS ecosystem.
- Ansible: It is a configuration management tool, primarily used for software deployment and configuration. It uses a push-based approach where the Ansible server pushes configuration to the nodes.
- Terraform: It is provider-agnostic and can manage a broad spectrum of resources across AWS, Azure, GCP, and more. It is declarative and agentless: rather than pushing configuration onto nodes, it calls provider APIs directly from wherever Terraform runs.
For a deep dive into the differences and use-cases, you can refer to this detailed cloud architect interview guide.
Related Reading: Terraform vs CDK: An In-depth Comparison, Terraform vs. CloudFormation – Battle of IaC Solutions
Q3. What is the significance of the `terraform init` command?
The `terraform init` command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.
The command performs several actions:
- It downloads the necessary provider plugins.
- It sets up the backend for storing your state file.
- It downloads and installs any modules referenced by the configuration.
```shell
$ terraform init
```
Related Reading: Terraform Init Deep Dive
Q4. Can you briefly describe what a Terraform provider is?
A Terraform provider is responsible for understanding API interactions and exposing resources for a particular platform or service. Providers generally offer a collection of resource definitions, allowing you to manage services and their configurations.
For example, the AWS provider offers resources such as `aws_instance`, `aws_vpc`, and `aws_s3_bucket`, which correspond to the respective AWS services. The provider translates the Terraform configuration into API calls to create, read, update, and delete resources.
Here’s a simple example of declaring a provider in a Terraform configuration:
```hcl
provider "aws" {
  region = "us-west-2"
}
```
Q5. What does idempotency mean in Terraform?
In the context of Terraform, idempotency refers to the property that operations can be applied multiple times without changing the result beyond the initial application. This means that even if you run `terraform apply` multiple times, your infrastructure won’t change after the first run, unless the configuration itself has changed.
This makes Terraform very reliable because it will produce the same infrastructure for the same configuration.
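You can see idempotency in action by applying the same configuration twice; the second run produces a no-op plan (the exact message wording varies slightly between Terraform versions):

```shell
$ terraform apply
...
No changes. Your infrastructure matches the configuration.
```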
Q6. How does Terraform manage the state of your infrastructure?
Terraform manages the state of your infrastructure using a state file (`terraform.tfstate`). The state file maps resources in your configuration to real-world objects, tracks metadata, and optimizes for performance. This file can be stored locally or remotely (in a place like an S3 bucket) for collaborative workflows.
It is crucial to manage and version this state file because it can become the source of truth for your infrastructure.
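As a sketch, a remote backend for the state file can be declared directly in the configuration; the bucket and key names below are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"       # placeholder bucket name
    key    = "global/terraform.tfstate" # placeholder object key
    region = "us-east-1"
  }
}
```

With this in place, `terraform init` configures the backend and subsequent runs read and write state remotely instead of on local disk.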
For more on managing state and cloud resources, consider reading our article on cloud governance.
Q7. Can you explain what a Terraform module is and why you might use one?
A Terraform module is a container for multiple resources that are used together. Modules can be used to create lightweight and reusable abstractions, so you can describe your infrastructure in terms of architecture, rather than directly in terms of physical objects.
In simple terms, modules in Terraform are like functions in a programming language – they encapsulate a certain logic, they have input and output variables, and they can be called from other parts of the code, allowing reusability and keeping your code DRY (Don’t Repeat Yourself).
Here’s a simple example of module usage:
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.77.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
}
```
Q8. What is the purpose of the `terraform plan` command?
The `terraform plan` command creates an execution plan. It determines what actions are necessary to achieve the desired state specified in the configuration files. This is a dry run: it does not make any changes to the actual resources, but shows you what will be done when you run `terraform apply`.
```shell
$ terraform plan
```
It’s a good practice to run `terraform plan` before `terraform apply` to prevent any unintended modifications to your infrastructure.
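A common refinement of this practice is to save the plan to a file and then apply exactly that plan, so what is applied cannot differ from what was reviewed:

```shell
$ terraform plan -out=tfplan   # save the reviewed plan to a file
$ terraform apply tfplan       # apply exactly that plan, no re-planning
```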
Related Reading: Terraform Plan: Deep Dive
Q9. How would you define “Resource” in the context of Terraform?
In Terraform, a “resource” is a component of your infrastructure. This could be a low-level component such as a physical server, virtual machine, network interface, etc., or a high-level component like an email provider setup.
Each resource block describes one or more infrastructure objects. The resource block carries two labels, a type and a name, followed by a block of key-value arguments.
Here’s an example of an AWS instance resource:
```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c94855ba95c574c8"
  instance_type = "t2.micro"

  tags = {
    Name = "Example"
  }
}
```
Q10. What is the use of the `terraform apply` command?
The `terraform apply` command is used to apply the changes required to reach the desired state of your configuration, or the pre-determined set of actions generated by a `terraform plan` execution plan.
This command is a critical part of Terraform and is what actually enables it to automate infrastructure provisioning.
```shell
$ terraform apply
```
After running this command, Terraform will provide a summary of the changes to be made in accordance with your configuration files, and it requires a confirmation input (`yes`) to proceed with the changes.
Intermediate Level Terraform Interview Questions
Q11. How can you use variables in Terraform?
Variables in Terraform act as placeholders for values and can be used to increase the reusability and modularity of your Terraform code. They can be defined using the `variable` block and referenced elsewhere in your configuration with the `var.` prefix.
Here’s an example:
```hcl
variable "image_id" {
  description = "The id of the machine image (AMI) to use for the server."
  type        = string
  default     = "ami-0c94855ba95c574c8"
}

resource "aws_instance" "example" {
  ami           = var.image_id
  instance_type = "t2.micro"

  tags = {
    Name = "Example"
  }
}
```
This sets a variable `image_id` with a default value, which is then used when defining the AWS instance.
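The default can also be overridden at run time, for example on the command line (the AMI ID below is a placeholder):

```shell
$ terraform apply -var="image_id=ami-0123456789abcdef0"
```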
For a deeper dive into the usage of variables and other essential Terraform concepts, you may want to check out our comprehensive guide on cloud architect interview questions and answers.
Q12. Can you explain the use of ‘data sources’ in Terraform?
Data sources allow data to be fetched or computed for use elsewhere in your Terraform configuration. Unlike resources, data sources do not create or manage infrastructure. They help to use information defined outside of Terraform, or defined by another separate Terraform configuration.
Here’s an example of a data source where we fetch information about an AWS AMI:
```hcl
data "aws_ami" "example" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["myami-*"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.example.id
  instance_type = "t2.micro"

  tags = {
    Name = "Example"
  }
}
```
In this case, we are fetching the most recent AMI that belongs to us and has a name that starts with “myami-“. We then use this AMI when creating an instance.
Q13. How does Terraform handle dependencies between resources?
Terraform uses the resource graph to understand dependencies between resources. It can identify the relationships between various resources, which allows it to create resources in a correct order, i.e., it ensures dependent resources are created before the resources that depend on them.
In most cases, Terraform can implicitly understand the dependencies based on resource configurations. For example, if one resource uses the output of another resource as an input, Terraform knows the latter resource must be created first.
However, for explicit dependencies or when Terraform cannot figure out the dependencies, a `depends_on` argument can be used.
Here’s an example where we have a dependency:
```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c94855ba95c574c8"
  instance_type = "t2.micro"

  tags = {
    Name = "Example"
  }
}

resource "aws_eip" "example" {
  vpc      = true
  instance = aws_instance.example.id

  depends_on = [aws_instance.example]
}
```
In this case, the Elastic IP (`aws_eip`) depends on the AWS instance being created first. Strictly speaking, the `instance` reference already implies this, but the `depends_on` argument states the dependency explicitly.
Q14. How do you manage and mitigate state drift in Terraform?
State drift occurs when the real-world resources managed by Terraform drift from the state recorded in the last Terraform apply. This could be due to manual changes made directly to the infrastructure or external factors causing changes that Terraform isn’t aware of.
To manage and mitigate state drift, you can use the `terraform refresh` command to update the state file with the real-world resources. It does not modify infrastructure, but it updates the state file to match the real infrastructure. (In newer Terraform versions, the `-refresh-only` mode of `plan` and `apply` is the recommended replacement for this command.)
However, to prevent state drift, it’s best practice to make all infrastructure changes through Terraform only and not manually. You can also leverage policy as code tools like Open Policy Agent or HashiCorp’s Sentinel to enforce that all changes must go through Terraform.
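A minimal drift check using the refresh-only workflow (available in Terraform 0.15.4 and later) looks like this:

```shell
$ terraform plan -refresh-only   # show how real infrastructure differs from recorded state
$ terraform apply -refresh-only  # accept those differences into the state file
```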
Q15. How do you handle errors in Terraform?
Errors in Terraform can be handled by understanding the error message and adjusting the configuration accordingly. Terraform provides descriptive error messages that you can use to identify what needs to be corrected. Also, using `terraform plan` before `terraform apply` can help detect potential errors before applying changes.
Q16. How would you securely manage secrets in Terraform?
Secrets in Terraform should never be hard-coded in the configuration files. You can use the `sensitive` variable attribute to prevent the value of the variable from showing in the CLI output when running Terraform commands.
```hcl
variable "secret_key" {
  description = "The secret key"
  type        = string
  sensitive   = true
}
```
For managing secrets like API keys, it’s best practice to use a secure secrets management tool like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault. They store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets, and other sensitive data.
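As a sketch, a secret held in AWS Secrets Manager can be read with a data source; the secret name and the `example_resource` consumer below are placeholders:

```hcl
data "aws_secretsmanager_secret_version" "api_key" {
  secret_id = "my-app/api-key" # placeholder secret name
}

# Hypothetical resource consuming the secret value
resource "example_resource" "example" {
  secret = data.aws_secretsmanager_secret_version.api_key.secret_string
}
```

Keep in mind that values read this way are still written to the state file, which is one more reason the state itself must be protected.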
Q17. What is the significance of the `terraform destroy` command?
The `terraform destroy` command is used to destroy the Terraform-managed infrastructure. It’s the opposite of `terraform apply`. When you run it, Terraform determines what resources exist and then deletes them.
```shell
$ terraform destroy
```
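If you only need to tear down part of the configuration, the `-target` option limits the operation to specific resources (the resource address below is illustrative):

```shell
$ terraform destroy -target=aws_instance.example
```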
Q18. Can you elaborate on how Terraform uses ‘provisioners’?
Provisioners in Terraform are used as a last resort option to execute scripts on a local or remote machine as part of resource creation or destruction. Provisioners can be used to bootstrap a resource, cleanup before destroy, run configuration management, etc.
```hcl
resource "aws_instance" "example" {
  # ...

  provisioner "local-exec" {
    command = "echo The server's IP address is ${self.private_ip}"
  }
}
```
Q19. What do you understand by ‘Resource Graph’ in Terraform?
Terraform constructs a resource graph to understand the dependencies between resources. When Terraform creates, updates, or destroys resources, it performs these operations in the correct order according to that dependency graph.
Q20. How would you make use of ‘Output Variables’ in Terraform?
Output variables in Terraform are used to extract information about the infrastructure after deployment. They can be used to present a subset of the resources’ attributes to the user, or to pass information from a child module up to the calling (root) module.
```hcl
output "instance_ip_addr" {
  value       = aws_instance.server.private_ip
  description = "The private IP address of the main server instance."
}
```
In this example, the private IP address of the created EC2 instance will be output when `terraform apply` is run.
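Outputs can also be read back after an apply with the `terraform output` command, which is handy in scripts:

```shell
$ terraform output instance_ip_addr   # print one output value
$ terraform output -json              # print all outputs as JSON
```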
Advanced Terraform Interview Questions
Q21. Can you explain the use of Terragrunt and how it complements Terraform?
Terragrunt is a thin wrapper that provides extra tools for working with multiple Terraform modules, remote state management, and locking. It helps keep your Terraform code DRY (Don’t Repeat Yourself), maintainable, and evolving.
```shell
$ terragrunt plan-all
```

This `plan-all` command allows you to execute `terraform plan` on multiple modules at once, rather than navigating to each directory to run the command separately. (In recent Terragrunt versions, `plan-all` has been superseded by `terragrunt run-all plan`.)
Q22. How can you achieve zero-downtime deployment with Terraform?
Terraform provides several strategies for achieving zero-downtime deployment. One common strategy is the blue-green deployment. In this strategy, you’d set up your Terraform configuration to create a new set of resources (the green environment), then switch traffic from the old (blue) environment to the new one once everything is confirmed working. Once the switch is complete and everything is working as expected, the old resources can be destroyed.
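At the individual resource level, the `create_before_destroy` lifecycle setting supports the same idea: Terraform builds the replacement before tearing down the original, avoiding a gap in service. A minimal sketch (the AMI ID is a placeholder):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t2.micro"

  lifecycle {
    # Create the replacement instance before destroying this one
    create_before_destroy = true
  }
}
```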
Q23. How do you test your Terraform code?
Testing Terraform code can be done in several ways:
- Unit Testing: Use the Terratest Go library to write unit tests for your Terraform code.
- Integration Testing: Use the Kitchen-Terraform framework to test the interaction of multiple components.
- Linting: Use tflint to catch possible issues in your code.
Q24. What strategies can you apply for managing a large-scale Terraform codebase?
When managing a large-scale Terraform codebase, it’s crucial to maintain an efficient and organized structure to enable easy navigation, quick updates, and minimal errors. Here are some strategies that could be beneficial:
Modularize your code: This involves breaking down your code into distinct modules, each serving a specific function. Modular code is reusable, easy to understand, and manage.
For example:
```hcl
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "1.0"

  bucket = "my-s3-bucket"
  acl    = "private"

  versioning = {
    enabled = true
  }
}
```
In the code snippet above, we are using a module for creating an S3 bucket using Terraform.
Use Version Control System (VCS): Implement a VCS, such as Git, to keep track of changes, create different branches for separate features, and enable effective collaboration amongst team members.
Implement a Code Review Process: Make sure that all Terraform scripts go through a stringent code review process before being merged. This process could help identify potential problems or optimizations that could be made.
Automated Testing: Introduce automated testing with tools like Terratest to catch potential issues early in the development cycle.
Continuous Integration/Continuous Deployment (CI/CD): Use CI/CD pipelines to automate the process of testing and deployment. This ensures that all changes are tested and applied consistently.
Q25. How can you manage state locking in a distributed team environment?
State locking in Terraform is crucial in a distributed team environment to avoid conflicts and maintain the integrity of your infrastructure’s state. Here are some ways to manage it:
Use Remote State Storage Backends that Support Locking: Not all backends support state locking. Backends like Amazon S3 (with DynamoDB), Terraform Cloud, and Azure Blob Storage do support locking. So, when working in a team, choose a backend that supports state locking.
For example, when using an S3 bucket as your backend, you can enable state locking by adding a DynamoDB table:
```hcl
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-1"
    dynamodb_table = "mytable" # Enable state locking
  }
}
```
Automate Workflow with a CI/CD Pipeline: Using a CI/CD pipeline ensures that Terraform commands are executed in a controlled environment, reducing the risk of human error. Most CI/CD tools allow you to enforce concurrency, ensuring that only one job (which contains `terraform apply` or `terraform destroy`) runs at a time.
Use Terraform Cloud/Enterprise: These tools come with automatic state locking and unlocking, which can be a hassle-free way to manage state locking in a team.
Scenario-Based Terraform Questions
Q26. Your team has recently faced issues due to state drift. How would you address this situation using Terraform?
State drift occurs when the real-world infrastructure deviates from the state defined in your Terraform configuration. It can be caused by manual changes or updates made outside of Terraform. Here are steps to address state drift:
Identify Drift: Use the `terraform plan` command to compare the real-world infrastructure with the state recorded by Terraform. This can help identify any discrepancies or “drifts” between the two.

```shell
$ terraform plan
```
Use `terraform refresh`: This command updates the local state file against real resources. It doesn’t modify infrastructure, but updates the state file with real-time status. After this, run `terraform plan` again to see if the drift still exists.

```shell
$ terraform refresh
```
Mitigate Drift: Based on the identified drift, discuss with your team and decide whether the manual changes should be maintained or reverted. If they should be maintained, update your Terraform configuration accordingly and apply it. If they should be reverted, you can use `terraform apply` to sync your infrastructure with the Terraform configuration.
Prevent Future Drift: The best way to handle drift is to prevent it. Establish a strong policy that all infrastructure changes should be done through Terraform and not manually. Educate your team about the problems that can occur due to state drift.
Q27. You have been asked to deploy a multi-tier application across multiple cloud providers using Terraform. What would your approach be?
Terraform based orchestration is a great choice for deploying a multi-tier application across multiple cloud providers. The following steps can be a general guide:
Design the Architecture: Understand the application requirements and design your multi-tier architecture accordingly. Identify the components that need to be deployed on different cloud providers.
Identify Providers: Terraform supports multiple cloud providers through providers. Identify the Terraform providers required for your deployment. For instance, if you’re using AWS and GCP, you would need both the AWS and Google providers.
```hcl
provider "aws" {
  region = "us-west-2"
}

provider "google" {
  project = "my-gcp-project"
  region  = "us-central1"
}
```
Organize Configuration: Organize your Terraform configuration files based on the tiers and cloud providers. This could be a multi-folder structure where each tier/provider has its own set of configuration files. This can help manage complexity and isolate changes.
Modularize the Code: Create reusable modules for common tasks that can be used across different tiers or cloud providers. This will help in reducing code duplication and making the code more maintainable.
Manage State Carefully: When working across multiple cloud providers, managing state becomes crucial. Consider using remote state with a backend that supports state locking. You might want to segregate state files based on the tiers and cloud providers.
Implement CI/CD: Implement a Continuous Integration/Continuous Deployment (CI/CD) pipeline to automate the testing, planning, and applying of your Terraform configurations.
Q28. A recent update to your Terraform configuration is causing errors during `terraform apply`. What steps would you take to troubleshoot?
Troubleshooting a Terraform configuration involves the following steps:
Step 1 – Review Error Messages: Start by reviewing the error messages thrown by `terraform apply`. These messages often contain the exact line of code causing the error and give you a good starting point for troubleshooting.
Step 2 – Validate Configuration: Use the `terraform validate` command to check the syntax of your Terraform files. It will point out if there are any syntactical errors in your configuration.

```shell
$ terraform validate
```
Step 3 – Debug Log: Use the `TF_LOG` environment variable to get detailed logs about what Terraform is doing. This can provide additional context about the error.
```shell
$ export TF_LOG=DEBUG
$ terraform apply
```
Step 4 – Examine State: Use `terraform show` to examine the current state and understand what Terraform believes the infrastructure is. This can help identify any discrepancies between your infrastructure and state.

```shell
$ terraform show
```
Step 5 – Rollback If Needed: If the error persists and is causing issues, consider rolling back the change to a previous working version using your version control system.
Q29. You’re working with a team that has a large Terraform codebase that has grown difficult to manage. What steps would you propose to refactor and manage the codebase effectively?
Refactoring a large Terraform codebase involves the following steps:
Code Organization: Organize your code into a clear directory structure. This structure could be based on environments (prod, dev, test), resources (networking, databases, VMs), or cloud providers, depending on what makes sense for your project.
Modularize: Break down your Terraform code into reusable modules. This can help reduce code duplication, improve code readability, and make your codebase easier to manage.
Utilize Workspaces: Use Terraform workspaces to manage multiple environments. This allows you to use the same configuration for different environments (like staging, production) while maintaining separate state files.
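Workspace management is driven from the CLI; a typical flow for a new environment looks like this (the workspace name is illustrative):

```shell
$ terraform workspace new staging     # create and switch to a new workspace
$ terraform workspace select staging  # switch to an existing workspace
$ terraform workspace list            # show all workspaces
```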
Implement Variable Files: Use variable files (`*.tfvars`) to set environment-specific values. This allows the same configuration to be used in different environments with different settings.
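For instance, a per-environment file might pin instance sizes and counts (the file name and values below are illustrative):

```hcl
# prod.tfvars
instance_type  = "m5.large"
instance_count = 4
```

It is then selected at plan or apply time with `terraform plan -var-file="prod.tfvars"`.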
Code Reviews: Implement a rigorous code review process to maintain code quality. This can help catch issues and inconsistencies in your codebase.
Automated Testing: Implement automated testing with a tool like Terratest to ensure your changes are functioning as expected.
Continuous Integration/Continuous Deployment (CI/CD): Use a CI/CD pipeline to automate the process of planning, testing, and applying your Terraform configuration. This can also help catch errors early in the development process.
Q30. You need to introduce a sensitive API key into your Terraform configuration. How would you securely manage this secret while ensuring it is accessible to your Terraform code?
When dealing with sensitive information like API keys in Terraform, it’s important to never hard-code the secret directly into your configuration. Here’s how you can manage it:
Terraform Variables: Use Terraform variables to inject secrets at runtime. This keeps sensitive information out of your code. You can provide the variable values through a separate variables file, through environment variables, or through command line arguments.
Use a Secret Manager: Use a cloud provider’s secret manager service like AWS Secrets Manager or Azure Key Vault. You can store the sensitive data securely in the secret manager, and then access it in your Terraform configuration.
Environment Variables: Terraform will read environment variables that start with `TF_VAR_`. So you can set an environment variable that holds your secret, and Terraform will automatically use it.

```shell
$ export TF_VAR_my_secret="secret_value"
```
In your Terraform configuration:
```hcl
variable "my_secret" {}

resource "example_resource" "example" {
  secret = var.my_secret
}
```
Terraform Cloud: If you’re using Terraform Cloud, you can store your sensitive values as environment variables in your workspace settings. Terraform Cloud will keep these values secure and provide them to your configuration when it runs.
Lastly, ensure your `.tfstate` file is encrypted at rest, since it can contain sensitive data. Remote state backends such as AWS S3 allow for easy encryption and secure access.
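A sketch of an S3 backend with server-side encryption of the state object enabled (bucket and key names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"     # placeholder bucket name
    key     = "prod/terraform.tfstate" # placeholder object key
    region  = "us-east-1"
    encrypt = true # server-side encryption for the state object
  }
}
```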
Terraform on AWS Interview Questions
Q31. How do you define AWS resources in your Terraform configuration files?
You define AWS resources in your Terraform configuration files using the `aws` provider and resource blocks. Each resource type has a specific syntax and set of arguments. Here’s an example of defining an AWS EC2 instance:
```hcl
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-0c94855ba95c574c8"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }
}
```
In the above example, the provider block configures the AWS provider which is required for Terraform to interact with the AWS API. The resource block defines an EC2 instance.
Related Reading: For a detailed, step-by-step guide with code snippets on how to utilize Terraform for EC2 instance creation, check out our in-depth article, “Creating an EC2 Instance Using Terraform”.
Q32. How can you utilize Amazon S3 buckets with Terraform?
You can create, manage, and configure Amazon S3 buckets using the `aws_s3_bucket` resource type in Terraform. Here’s an example:
```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "bucket-name"
  acl    = "private"

  tags = {
    Environment = "Dev"
  }
}
```
In addition to creating buckets, you can also manage other aspects of S3 buckets like bucket policies, CORS rules, versioning, and server-side encryption using various resource types like `aws_s3_bucket_policy`, `aws_s3_bucket_cors_configuration`, etc.
Moreover, S3 buckets can also be used as backends for Terraform remote state storage. When leveraging S3, always be sure to understand the various S3 storage classes and estimate your S3 cost.
Q33. Can you explain how to manage an AWS EC2 instance using Terraform?
Terraform provides the `aws_instance` resource to manage EC2 instances. You can specify details like the instance type, AMI, VPC, security groups, and many other options. Here’s an example:
```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c94855ba95c574c8"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }
}
```
In the above configuration, the AMI and instance type for the EC2 instance are specified. You can also define things like security groups, network interfaces, IAM instance profiles, and more.
Q34. How does Terraform handle AWS IAM roles and permissions?
Terraform handles IAM roles and permissions using the `aws_iam_role` and `aws_iam_role_policy` resources, respectively. You can create a new IAM role and attach a policy to it. Here’s an example:
```hcl
resource "aws_iam_role" "role" {
  name = "example-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "policy" {
  name = "example-policy"
  role = aws_iam_role.role.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
```
In this example, an IAM role is created that allows EC2 instances to assume this role. A policy is then attached to the role that allows it to describe EC2 resources. The JSON policy document for `assume_role_policy` is defined using “here document” syntax (`<<EOF ... EOF`). This policy follows AWS’s standard policy language, stating that the `sts:AssumeRole` action is allowed for the EC2 service, meaning the EC2 service can assume this role.
Related Reading: Top AWS IAM Interview Questions & Answers
Q35. Can you elaborate on how you would set up an AWS VPC (Virtual Private Cloud) with Terraform?
Setting up an AWS VPC involves creating the VPC itself, as well as related resources like subnets, internet gateways, route tables, and security groups. Here’s a basic example of creating a VPC and a subnet:
```hcl
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}
```
In this configuration, a VPC is created with a CIDR block of 10.0.0.0/16, and then a subnet is created within that VPC with a CIDR block of 10.0.1.0/24. You can continue to add additional resources like an internet gateway to provide internet access, route tables to define network routing, and security groups to define network access rules.
For an in-depth understanding and more examples, refer to our detailed guide on setting up AWS VPCs using the Terraform VPC Module.
When creating VPCs through Terraform or any other method, always be sure to follow VPC creation best practices.
Q36. How would you create and manage AWS RDS instances using Terraform?
Terraform provides the `aws_db_instance` resource type for creating and managing AWS RDS instances. Here’s an example:
```hcl
resource "aws_db_instance" "default" {
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  parameter_group_name = "default.mysql5.7"
}
```
In this configuration, an RDS instance with the MySQL 5.7 engine is created. The instance class is `db.t2.micro` and the storage type is `gp2` with 20GB of allocated storage. You can specify many other options, like multi-AZ deployment, security groups, IAM roles, and more.
Q37. How would you configure an autoscaling group on AWS using Terraform?
You can configure an autoscaling group on AWS using the `aws_autoscaling_group` resource in Terraform. Here’s an example:
```hcl
resource "aws_launch_configuration" "example" {
  image_id      = "ami-0c94855ba95c574c8"
  instance_type = "t2.micro"
  name          = "example"
}

resource "aws_autoscaling_group" "example" {
  launch_configuration = aws_launch_configuration.example.id
  min_size             = 1
  max_size             = 5
  desired_capacity     = 3
  vpc_zone_identifier  = ["subnet-abcde012", "subnet-bcde012a"]
}
```
In this example, a launch configuration is defined that specifies what each instance in the autoscaling group should look like. Then, an autoscaling group is created that uses this launch configuration and is set to maintain between 1 and 5 instances, with a desired capacity of 3 instances.
When creating autoscaling groups, be sure to follow AWS Autoscaling Best Practices.
Q38. How can you utilize Terraform to set up an AWS Lambda function?
Terraform provides the `aws_lambda_function` resource for managing AWS Lambda functions. Here’s an example:
```hcl
resource "aws_lambda_function" "test_lambda" {
  filename         = "lambda_function_payload.zip"
  function_name    = "lambda_function_name"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "exports.test"
  source_code_hash = filebase64sha256("lambda_function_payload.zip")
  runtime          = "nodejs12.x"

  environment {
    variables = {
      foo = "bar"
    }
  }
}
```
In this configuration, the function is set to use the “nodejs12.x” runtime. The handler value is the function within your code that Lambda calls to begin execution. The role is the IAM role that you have created that has the necessary permissions for the Lambda function.
Related Reading: Top AWS Lambda Interview Questions
Q39. How would you set up an AWS EKS (Elastic Kubernetes Service) cluster using Terraform?
Terraform provides the `aws_eks_cluster` resource type for managing AWS EKS clusters. Here’s an example:
```hcl
data "aws_ami" "eks_worker" {
  filter {
    name   = "name"
    values = ["amazon-eks-node-1.21-v*"]
  }

  most_recent = true
  owners      = ["602401143452"] # Amazon
}

resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.example.arn

  vpc_config {
    subnet_ids = [aws_subnet.example.id]
  }

  depends_on = [
    aws_iam_role_policy_attachment.example_eks,
    aws_iam_role_policy_attachment.example_eks_nodes,
  ]
}
```
This example creates an EKS cluster named “example” with a dependency on IAM roles for EKS and EKS nodes. The `vpc_config` block assigns the EKS cluster to a specific subnet.
For a detailed step-by-step guide, refer to our article on provisioning EKS Cluster using Terraform.
Q40. How does Terraform handle multi-region deployment in AWS?
Terraform supports multi-region deployment in AWS by allowing you to specify the region in the provider configuration or by using aliases to create multiple AWS provider instances. Here’s an example:
```hcl
provider "aws" {
  region = "us-west-2"
}

provider "aws" {
  alias  = "east"
  region = "us-east-1"
}

resource "aws_instance" "west" {
  provider = aws
  # ... other configuration ...
}

resource "aws_instance" "east" {
  provider = aws.east
  # ... other configuration ...
}
```
In this example, two instances are created, one in the `us-west-2` region and one in the `us-east-1` region. By using provider aliases, you can manage resources in multiple regions within the same Terraform configuration.
Related Reading: Top Cloud Architect Interview Questions