Clone the companion project to follow along…
In a recent multi-part series on Terraforming a simple N-tier stack atop AWS, we provisioned a number of public and private subnets. We used CIDR ranges provided as input variables and the cidrsubnet function to automatically carve out smaller subnets based on the number of AZs in the target region.
As originally mentioned, this was an attempt to follow typical HA best practices (not having all resources in a single AZ) and meet RDS subnet group requirements. In the real world you would need to ensure the selected subnets fit into your IP allocation scheme, and carefully tailor the layout to your context. The approach was an opinionated way to quickly spin up N subnets without too much thought… As is often the case with opinions, this one soon met edge cases where it didn’t make sense.
Strong opinions, weakly held. – Paul Saffo
Here’s part of the related block of code from the original series:
resource "aws_subnet" "public_subnets" {
  count                   = length(data.aws_availability_zones.all.names)
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = cidrsubnet(var.public_cidr, 2, count.index)
  availability_zone       = element(data.aws_availability_zones.all.names, count.index)
  map_public_ip_on_launch = true

  tags = {
    "Name" = "${var.env_name}-public-subnet${count.index}"
  }
}
cidrsubnet works by taking a CIDR range and adding the specified number of netmask bits (the second argument). In our case, we started with /24’s for the public and private CIDRs, which got carved up into four /26’s.
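If you want to sanity-check that carving outside of Terraform, the behavior of cidrsubnet can be roughly mirrored with Python’s standard ipaddress module (this is an illustrative sketch, not part of the original project; the 10.0.1.0/24 range is just an example):

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Rough analogue of Terraform's cidrsubnet(prefix, newbits, netnum):
    extend the netmask by `newbits` bits and return the netnum-th child."""
    network = ipaddress.ip_network(prefix)
    children = list(network.subnets(prefixlen_diff=newbits))
    return str(children[netnum])

# A /24 with 2 extra netmask bits yields four /26's:
for i in range(4):
    print(cidrsubnet("10.0.1.0/24", 2, i))
# 10.0.1.0/26
# 10.0.1.64/26
# 10.0.1.128/26
# 10.0.1.192/26
```

Asking for a fifth child (netnum 4) raises an IndexError here, just as Terraform errors out when netnum exceeds what newbits allows, which is exactly the failure mode described below.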
The original intent was not to be perfectly efficient, since this clearly leaves some address space on the table in a typical three-AZ region… but the real problem was that it doesn’t work at all in larger regions: adding two netmask bits can only produce four subnets, so the count errors out in any region with more than four AZs. Since certain AWS regions have more than four AZs, this opinion needs to be updated to allow deployment in any region.
To avoid hard-coding the second argument and satisfy most cases without adding too much complexity, I used a ternary (a conditional expression in Terraform parlance). This still adds two bits (turning our /24 into four /26’s) in regions with four or fewer AZs, while adding three bits (creating eight /27’s) in larger regions. Here it is in action:
locals {
  # When we have 2, 3 or 4 AZs in a region, divide the public and private
  # CIDR ranges into 4 subnets (add 2 bits to netmask). In larger regions
  # with >4 AZs, divide into 8 subnets (add 3 bits to netmask).
  newbits = length(data.aws_availability_zones.available.names) > 4 ? 3 : 2
}
resource "aws_subnet" "public_subnets" {
  count                   = length(data.aws_availability_zones.available.names)
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = cidrsubnet(var.public_cidr, local.newbits, count.index)
  availability_zone       = element(data.aws_availability_zones.available.names, count.index)
  map_public_ip_on_launch = true

  tags = {
    "Name" = "${var.env_name}-public-subnet${count.index}"
  }
}
resource "aws_subnet" "private_subnets" {
  count             = length(data.aws_availability_zones.available.names)
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = cidrsubnet(var.private_cidr, local.newbits, count.index)
  availability_zone = element(data.aws_availability_zones.available.names, count.index)

  tags = {
    "Name" = "${var.env_name}-private-subnet${count.index}"
  }
}
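The conditional’s effect can be modeled in a few lines of Python (again an illustrative sketch using the standard ipaddress module, with a hypothetical six-AZ region and example CIDR, not code from the project):

```python
import ipaddress

def newbits(az_count: int) -> int:
    # Mirrors the Terraform conditional: >4 AZs adds 3 bits, else 2.
    return 3 if az_count > 4 else 2

def carve(cidr: str, az_count: int) -> list[str]:
    """Return one child subnet per AZ, carved from `cidr`."""
    bits = newbits(az_count)
    children = ipaddress.ip_network(cidr).subnets(prefixlen_diff=bits)
    return [str(s) for s in list(children)[:az_count]]

# A hypothetical region with six AZs now gets six /27's from a /24,
# instead of erroring out:
print(carve("10.0.1.0/24", 6))
# ['10.0.1.0/27', '10.0.1.32/27', '10.0.1.64/27',
#  '10.0.1.96/27', '10.0.1.128/27', '10.0.1.160/27']
```

A three-AZ region still gets /26’s, so existing deployments are unaffected by the change.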
Using conditionals to add flexibility…

This still isn’t optimal in every scenario, but it remains opinionated enough to get the job done in most cases without requiring much thought. It respects CIDR subnet boundaries and future-proofs a bit, allowing larger regions to expand to eight AZs without modification. As before, the top-level CIDR ranges for the public and private subnets can be adjusted as needed.
With just a few more lines we’ve added flexibility, and fixed a bug which led to ugly error messages in larger regions. It’s still not perfect, but perfection is never the goal… it’s all about iterating and learning!
Perfect is the enemy of good. – Voltaire