Terraform - Deploying and Managing Azure Log Analytics Workspace
Hi! As cloud architectures become more complex, Infrastructure as Code (IaC) has become increasingly important. IaC tools like Terraform let you manage intricate infrastructures in a text-based, repeatable, and automated manner. This ensures consistency, minimizes mistakes, and greatly accelerates the process of provisioning and scaling resources.
This article will explore how to seamlessly deploy Log Analytics Workspaces directly into your Azure Resource Groups. This approach offers significant advantages, particularly if your organization has already invested in Azure infrastructure.
However, if you’re new to Azure and haven’t set up a resource group yet, check out this link to our previous article on creating Azure resource groups with the Azure CLI.
By using Terraform for your Infrastructure as Code (IaC) needs, you simplify the deployment process and establish a consistent way to manage and configure resources. This reduces discrepancies and improves the reliability and troubleshooting of your architecture.
Prerequisites #
- You need the Terraform CLI on your local machine. If you’re new to using Terraform to deploy Microsoft Azure resources, I recommend you check out this link.
- A text editor or IDE of your choice (Visual Studio Code with the Terraform extension is my recommendation).
Declare Azure Provider in Terraform #
The provider.tf file in Terraform is used to specify and configure the providers used in your Terraform configuration. A provider is a service or platform where the resources will be managed. This could be a cloud provider like Microsoft Azure, AWS, Google Cloud, etc.
This file is important because it tells Terraform which provider’s API to use when creating, updating, and deleting resources. Without it, Terraform wouldn’t know where to manage your resources.
provider "azurerm" {
features {}
}
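If you want to pin the provider explicitly, you can also add a terraform block next to it. This is a minimal sketch; the version constraint shown is only an example assumption, so adjust it to the azurerm release you have tested.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      // Example version constraint; change it to match your tested release
      version = "~> 3.0"
    }
  }
}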
Create Log Analytics Workspaces using Terraform #
In the case of Log Analytics Workspaces deployment, the main.tf file contains the following key components:
- azurerm_log_analytics_workspace: This block creates a series of Azure Log Analytics Workspaces with varying configurations, as defined in the var.log_analytics_workspace_settings variable. It uses for_each to iterate through the settings, allowing for the automated deployment of multiple workspaces with individualized settings for data retention, permissions, data ingestion, and more.
// Create Azure Log Analytics Workspaces based on settings in var.log_analytics_workspace_settings
resource "azurerm_log_analytics_workspace" "law" {
  for_each = { for idx, settings in var.log_analytics_workspace_settings : idx => settings }

  // Basic Information
  name                = each.value.name
  resource_group_name = each.value.resource_group_name
  location            = each.value.location
  sku                 = each.value.sku

  // Data Retention and Quotas
  retention_in_days = each.value.retention_in_days // Data retention period in days
  daily_quota_gb    = each.value.daily_quota_gb    // Daily data ingestion quota in GB

  // Permissions and Authentication
  allow_resource_only_permissions = each.value.allow_resource_only_permissions
  local_authentication_disabled   = each.value.local_authentication_disabled

  // Data Ingestion and Query Settings
  internet_ingestion_enabled = each.value.internet_ingestion_enabled
  internet_query_enabled     = each.value.internet_query_enabled

  // Customer Managed Key for Query
  cmk_for_query_forced = each.value.cmk_for_query_forced

  // Common Tags
  tags = var.tags
}
Declaration of input variables #
The variables.tf file in Terraform defines the variables I will use in the main.tf file. These variables allow for more flexibility and reusability in the code. In this example, the variables defined in variables.tf include:
- log_analytics_workspace_settings: This block declares a list of objects, each containing various attributes related to an Azure Log Analytics Workspace. Attributes include the name, associated resource group, geographical location, SKU, and several optional settings for data retention, quotas, and permissions. Validations are in place to ensure compliance with Azure’s constraints for Log Analytics Workspaces concerning retention days, SKU, and location.
- tags: This block declares a variable named tags, an object of string attributes used to assign tags to the Azure resources being created. For example, you can use a key-value pair such as Terraform = "true" to indicate that the resource was deployed with Terraform.
// Variables for Log Analytics Workspace Settings
variable "log_analytics_workspace_settings" {
  type = list(object({
    name                            = string
    resource_group_name             = string
    location                        = string
    sku                             = string
    retention_in_days               = optional(number)
    daily_quota_gb                  = optional(number)
    allow_resource_only_permissions = optional(bool)
    local_authentication_disabled   = optional(bool)
    internet_ingestion_enabled      = optional(bool)
    internet_query_enabled          = optional(bool)
    cmk_for_query_forced            = optional(bool)
  }))

  // Validation for retention period in days
  validation {
    condition = alltrue([
      for law in var.log_analytics_workspace_settings : (
        law.retention_in_days == 7 || (law.retention_in_days >= 30 && law.retention_in_days <= 730)
      )
    ])
    error_message = "Retention period must be 7 (Free Tier only) or a range between 30 and 730."
  }

  // Validation for SKU
  validation {
    condition = alltrue([
      for law in var.log_analytics_workspace_settings : law.sku == "PerGB2018"
    ])
    error_message = "The only valid value for SKU is PerGB2018."
  }

  // Validation for location
  validation {
    condition = alltrue([
      for law in var.log_analytics_workspace_settings : contains([
        "West Europe",
        "North Europe",
      ], law.location)
    ])
    error_message = "Valid locations are West Europe and North Europe."
  }
}

// Common tags for all Azure resources created
variable "tags" {
  description = "Common tags for all resources"

  # Define the type structure for the tags
  type = object({
    Environment = string
    Terraform   = string
  })

  # Default values for tags
  default = {
    Environment = "www.jorgebernhardt.com"
    Terraform   = "true"
  }
}
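For context, a terraform.tfvars file that feeds these variables might look like the sketch below. The workspace name, resource group, and tag values are purely illustrative placeholders; replace them with your own values.
log_analytics_workspace_settings = [
  {
    name                = "law-demo-weu-001"   // example workspace name
    resource_group_name = "rg-monitoring-demo" // example resource group
    location            = "West Europe"
    sku                 = "PerGB2018"
    retention_in_days   = 30
    daily_quota_gb      = 5
  }
]

tags = {
  Environment = "demo"
  Terraform   = "true"
}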
Declaration of output values #
The output.tf file in Terraform extracts and displays information about the resources created or managed by your Terraform configuration. These outputs are defined with the output keyword and return information that is useful to you, to other Terraform configurations, or to scripts and tools that consume it programmatically.
In this example, the output.tf file returns information about the IDs, Primary Shared Keys, and Secondary Shared Keys of the Azure Log Analytics Workspaces that are created by the Terraform configuration.
To ensure greater security, you can mark certain outputs as sensitive by adding the argument sensitive = true to the output block in your output.tf file. Terraform then prevents these values from being displayed in the standard output, which minimizes the possibility of unintended exposure.
Once Terraform has finished applying your configuration, it will display the defined outputs.
// Output IDs of Log Analytics Workspace
output "log_analytics_workspace_ids" {
  value       = { for idx, law in azurerm_log_analytics_workspace.law : idx => law.id }
  description = "IDs of Log Analytics Workspace"
}

// Output Primary Keys of Log Analytics Workspace (Sensitive)
output "log_analytics_workspace_primary_keys" {
  value       = { for idx, law in azurerm_log_analytics_workspace.law : idx => law.primary_shared_key }
  description = "Primary Shared Keys of Log Analytics Workspace"
  sensitive   = true
}

// Output Secondary Keys of Log Analytics Workspace (Sensitive)
output "log_analytics_workspace_secondary_keys" {
  value       = { for idx, law in azurerm_log_analytics_workspace.law : idx => law.secondary_shared_key }
  description = "Secondary Shared Keys of Log Analytics Workspace"
  sensitive   = true
}
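Because the key outputs are marked as sensitive, Terraform redacts them in the standard apply output. The values are still recorded in the state, and you can query them explicitly after the deployment, for example:
terraform output log_analytics_workspace_ids
terraform output -json log_analytics_workspace_primary_keys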
Executing the Terraform Deployment #
Now that you’ve declared the resources correctly, it’s time to take the following steps to deploy them in your Azure environment; the full command sequence is summarized after the list.
- Initialization: To begin, execute the terraform init command. This initializes the working directory that holds your .tf files, downloads the provider specified in provider.tf, and configures the Terraform backend. I suggest looking at this link if you’re curious about the process.
- Planning: Next, execute the terraform plan command. It creates an execution plan and shows the actions Terraform will take to reach the desired state defined in your .tf files, giving you a chance to review the changes before applying them.
- Apply: When you’re satisfied with the plan, execute the terraform apply command. This makes the required modifications to attain the intended infrastructure state. Before making any changes, Terraform will ask you to confirm your decision.
- Inspection: After applying the changes, you can use the terraform show command to see the current state of your infrastructure.
- Destroy (optional): When a project is no longer needed or resources have become outdated, you can use the terraform destroy command. This removes all the resources that Terraform has created.
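In short, the whole workflow comes down to running these commands from the directory that contains your .tf files:
terraform init      # download the azurerm provider and initialize the backend
terraform plan      # preview the workspaces that will be created
terraform apply     # create the Log Analytics Workspaces (asks for confirmation)
terraform show      # inspect the resulting state
terraform destroy   # optional: remove everything created by this configuration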
References and useful links #
Thank you for taking the time to read my post. I sincerely hope that you find it helpful.