Terraform antipattern: Resource Modules
by Pedro Santos
March 12, 2026
Say you’re working for a medium to large company that is using infrastructure-as-code (IaC) to manage their infrastructure. Seeing so many different ways of deploying the same resources with conflicting levels of compliance, you come up with a solution. You’d like to create a unified approach to deploy IaC resources by creating resource modules. Resource modules are thin wrappers around the resource following your organization’s best practices. This keeps everyone’s configuration DRY and consistent across your organization.
However, having been through this a few times myself, I’ll make the case that this approach is more work than it is worth.
This isn’t just my experience — the Terraform documentation itself recommends against them. Their argument is that a module must create a new abstraction, something that has a higher meaning than the sum of its resources. A virtual_machine module, for instance, adds nothing beyond what myprovider_virtual_machine already expresses. An nginx_load_balancer module, on the other hand, gives meaning to the underlying resource; the fact that it happens to be a virtual machine is an implementation detail. Without creating a new abstraction, a module only adds unnecessary complexity.
In this post, I’ll expand on the abstraction reasoning and introduce additional arguments for why resource modules may not be the best solution:
- They are not DRY
- They don’t scale
- Their API will grow to mirror the wrapped resource
- They are awkward to build and to use
While this post is centered around Terraform, the arguments and conclusions apply to other IaC languages. Whether you are working in CloudFormation, ARM/Bicep, or Helm, you’ll find this post relevant.
They are not DRY
A common argument in favour of resource modules is that they reduce repetition, thus making the code DRY and promoting code-reuse. If you’re going to write the same resource + private connectivity + RBAC roles over and over again, why not encapsulate the functionality at the level that you need? It’s a fair point, but I believe it’s flawed in subtle ways.
First, we need to make a distinction between duplicated logic and duplicated code. DRY is about duplicated decisions, not duplicated structure. A key difference between IaC and application programming is that cloud resources have a large configuration surface area and the language is verbose. In an idealised scenario, your IaC resources would need only a few configuration parameters. But in reality, that’s not the case.
Take, for example, the ubiquitous virtual machine. It requires configuration for RAM, CPU, NICs, GPUs, storage, etc. We consume the same API to deploy completely different use-cases: a database server, firewall, NFS share, or web server. This is not necessarily an issue with the cloud resource itself, but a consequence of the abstraction level. The resource is just infrastructure; the meaning comes from how we use it.
So when we spell out the same IaC resource in slightly different ways throughout the code, it really feels like a code smell. The obvious solution would be to encapsulate the repeated code of the resource definition as a module. But if we look further, we’re not encapsulating the logic, we’re hiding away the natural verbosity of the IaC domain space. Verbosity in IaC is not a problem to be solved in your consumer code. It is a property of the domain. Forcing all use-cases into a shared module ends up creating accidental coupling.
It’s also worth noting that you don’t need a module to share knowledge. Good Terraform providers ship complete, working examples in their documentation for exactly this reason. A good starter template sometimes beats a leaky abstraction. For IaC, templates preserve discoverability and keep developers fluent in the underlying resource, rather than insulating them from it.
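As a sketch of what such a starter template might look like (the names and values here are illustrative, not a real org’s standard), a documented baseline spells the conventions out directly:

```hcl
# Starter template: the org's recommended baseline spelled out in full,
# ready to copy, adapt, and own. Names and values are illustrative.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorage" # adapt to your naming convention
  resource_group_name      = "example-rg"
  location                 = "westeurope"     # org default region
  account_tier             = "Premium"
  account_replication_type = "GRS"            # org default for production data
}
```

A developer who starts from a template like this sees every decision in plain sight and can deviate deliberately, instead of hunting for an escape hatch in a wrapper.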
They don’t scale
A fair counter-argument to the DRY argument is that even if the module does not remove verbosity, it at least centralises it. But as we’ll see, this only holds in the short term.
When evaluating opportunities to reduce duplication, we must always keep in mind the cost of those choices. It’s a fine line balancing simplicity (KISS) with abstraction (DRY). With experience, you develop an intuition for a complexity budget that helps you balance the two principles. Your complexity budget is determined by both the problem you’re solving and the tools you’re using to solve it.
A lot of the practices and paradigms from software development don’t translate well to Terraform. This is partly because Terraform is a declarative DSL: it describes what the infrastructure should be, not how to deploy it, and was never intended as a general-purpose programming language. That design comes with significant tradeoffs in the configuration language.
To start, loops and conditionals are significantly more complex in Terraform than in general-purpose languages. Then, we have functions. In JavaScript, a function is a couple of lines. In Terraform, it’s a whole module with its own directory, inputs, outputs, state, and a lifecycle of its own. And finally, there are no higher-order semantics. No first-class support for interfaces, inheritance, or composition.
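To illustrate the cost of even a simple conditional (the resource and variable names here are illustrative), the common idiom is to create a resource zero or one times with count, and then guard every reference to it:

```hcl
variable "enable_public_ip" {
  type    = bool
  default = false
}

# "if" in Terraform: a list of zero or one resources...
resource "aws_eip" "this" {
  count = var.enable_public_ip ? 1 : 0
}

# ...which every downstream reference must then guard against being empty.
output "public_ip" {
  value = var.enable_public_ip ? aws_eip.this[0].public_ip : null
}
```

What would be a one-line if statement in a general-purpose language leaks into every consumer of the resource.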
This means that Terraform is verbose not only in its resource definitions, but also in how it expresses abstractions. Yes, we can do a lot with Terraform, but in the same way we can build a web server in Bash.
It follows that, in Terraform, abstractions are expensive and their complexity increases with every layer of nesting. Most of my IaC deployments can tolerate two nested modules before they become too complex to reliably handle. Wrapping all resources in their related resource modules increases the number of nested modules and exhausts the complexity budget of your solution.
Keeping things simple might mean a bit more repeated structure, but it makes your code easier to understand and to change.
This whole realisation that Terraform (and other IaC DSLs) are limited at handling complexity has an interesting side effect: it guides your code, and thus your infrastructure, to be simpler. When conditional logic is expensive, you stop building environments that diverge between Dev and Prod. When your tooling resists complexity, you configure resources the way your use-case demands, rather than forcing them to fit an opinionated abstraction.
Their API will grow to mirror the wrapped resource
No matter your best intentions, your wrapper module will end up with all sorts of escape hatches to represent edge cases. This is the consequence of building a catch-all abstraction, and the effect will compound the bigger your user base is. As a concrete example, say you are creating a module for a storage account. Your company mandates all production data to be geo-replicated and in the West Europe region. So, your module will look something like:
resource "azurerm_storage_account" "this" {
  // [...]
  location                 = "westeurope"
  account_tier             = "Premium"
  account_replication_type = "GRS"
}
Then, as it happens, a member of another team comes to you with the following problem:
- Their application is using a storage account to host large temporary files.
- The architecture and budget were agreed and approved months ago with a locally redundant storage account.
- Changing the replication type this late in the development stage would cause weeks of delay to re-approve the architecture and the increased budget, for no perceived upside.
The situation gets escalated, and the following compromise gets reached:
By default, the replication type will be GRS, but it can be overridden.
The updated module will look something like this:
variable "replication_type" {
  type    = string
  default = "GRS"
}

resource "azurerm_storage_account" "this" {
  // [...]
  location                 = "westeurope"
  account_tier             = "Premium"
  account_replication_type = var.replication_type
}
This situation then repeats for the other hardcoded parameters. A team working on a legacy part of the infrastructure with different naming conventions? Naming can be overridden. Team wants to use a public-facing storage account? Private endpoints become optional. In the end, your module will devolve into the following spaghetti code:
variable "resource_group_name" { /*...*/ }
variable "location"            { /*...*/ }
variable "account_tier"        { /*...*/ }
variable "replication_type"    { /*...*/ }
// [...]

resource "azurerm_storage_account" "this" {
  // [...]
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = var.account_tier
  account_replication_type = var.replication_type
}
At this point, you wonder why you went through all the hassle of creating a module.
They are awkward to build and to use
We just wrote hundreds of lines of code to make the API slightly nicer and to override the defaults of the cloud resources. This code needs to be maintained, followed up on with bug reports and feature requests, deprecated, etc. We’ll likely be wrapping one of several providers (azurerm, aws, google, etc.) that release updates on a frequent, rolling basis. This puts the module in the critical path any time a downstream configuration needs to use a recently added resource option.
At the end of the day, a Terraform developer will have more experience using Terraform resources directly. Learning a wrapper module’s new defaults, new semantics, and new caveats takes time that could be better spent developing your infrastructure. If issues arise, a developer must be comfortable with both the underlying Terraform resource and the wrapper module simultaneously.
So, in my opinion, the cost/benefit is just not there.
What to do instead
The idea of harmonising IaC standards across your infrastructure has merit; I just don’t believe that resource modules are the correct way to achieve it. Instead, I propose the following action points:
Encode standards in data-only modules
With the understanding that hardcoded options will eventually be made into optional defaults, I propose encoding your org’s conventions into separate data-only modules. Unlike a resource module, a data-only module has no Terraform lifecycle, no state, and no API drift problem. Here’s an example of a data-only module encoding Kubernetes deployment standards:
# [...]
output "deployment_standards" {
  value = {
    namespace = var.is_production ? "prod" : "staging"
    replicas  = var.is_production ? 3 : 1
    security_context = {
      run_as_non_root           = true
      read_only_root_filesystem = true
    }
  }
}
Downstream code would consume this module as such:
module "org_standards" { /* ... */ }

resource "kubernetes_deployment" "this" {
  metadata {
    namespace = module.org_standards.deployment_standards.namespace
  }

  spec {
    replicas = module.org_standards.deployment_standards.replicas

    template {
      spec {
        container {
          security_context {
            run_as_non_root           = module.org_standards.deployment_standards.security_context.run_as_non_root
            read_only_root_filesystem = module.org_standards.deployment_standards.security_context.read_only_root_filesystem
          }
        }
      }
    }
  }
}
The main difference from resource modules is that we’re not encoding the resources themselves, but the business logic of the parameters. In OOP terms, the org’s conventions are defined by composition and not by inheritance.
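Applied to the earlier storage account example, the same idea would look something like this (a sketch; the output shape and the is_production variable are assumptions, not a standard):

```hcl
# Data-only module: outputs the org's storage conventions, creates nothing,
# holds no state.
variable "is_production" {
  type = bool
}

output "storage_standards" {
  value = {
    location                 = "westeurope"
    account_tier             = "Premium"
    account_replication_type = var.is_production ? "GRS" : "LRS"
  }
}
```

The consumer still writes the azurerm_storage_account resource itself, but pulls these values from the module; the edge case from before becomes a one-line local override rather than a change to a shared module’s API.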
Enforce compliance with policies
Since data-only modules don’t enforce standardisation, what stops a team from simply ignoring them and hardcoding their own values? Policy tooling like checkov provides exactly this mechanism, enforcing compliance requirements as part of your CI/CD pipeline. It supports built-in rules as well as custom org-specific policies. Some platforms offer additional enforcement layers, rejecting non-compliant resources regardless of whether they are deployed through Terraform, CLI, or the web portal: Azure Policies, Kubernetes Pod Security Admission, and AWS Config all operate at this level.
This will ensure that, for example, a database is deployed with a deny-all network policy by default. A key advantage of this mechanism is that the policy code lifecycle is decoupled from the IaC developer’s application. This way, you don’t have to wait for the module’s consumers to update their versions to allow/disallow a new config parameter.
Make conventions visible
It’s also important that the conventions of your organisation are well understood across all teams. Document the whys and hows and provide ready-to-use snippets for developers. Ensure the documentation is kept up-to-date and is easily available to IaC developers. This way, architects and developers are aware of the standard way of doing things in the organisation. Deviations can be caught earlier and discussed during the initial design phase.
Shift from resource modules to solution modules
Modules should implement a solution that does not exist as a standalone Terraform resource. As an example, the following would be valid module candidates:
- An IAM solution implemented with a Keycloak VM, database, networking, etc.
- A specific Kubernetes application as a collection of deployments, secrets, and config_maps (just like a Helm chart)
So instead of wrapping Terraform resources, create solutions in a purposeful, non-generic way. Ideally, the module architecture should emerge from experience rather than upfront design; premature abstraction is especially costly in IaC, where lifecycle and state management make refactoring expensive. Say you notice that many of your projects involve creating the same 3-tier architecture and that the pattern can be standardised across most of your projects.
This is the perfect candidate for a reusable module. Because we’re implementing a concrete concept, it does not have to be generic, and we can say no to changes outside the module’s scope. Your module should abstract away the specific resources being created — for example, the app layer could be a VM running Docker, an Azure Web App, etc. It’s not meant to be used in place of a Terraform resource, but rather as a fully contained solution template.
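From the consumer’s perspective, such a solution module might be called like this (the module name, source path, and inputs are all hypothetical):

```hcl
module "checkout_stack" {
  source = "./modules/three_tier_app" # hypothetical path

  name        = "checkout"
  environment = "prod"

  # The module decides internally how each tier is implemented
  # (a VM running Docker, an Azure Web App, ...); consumers describe
  # the solution, not the resources.
  app_instance_count = 2
}
```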
Bonus point: Keeping things DRY with for-each
It is still possible to keep things DRY in your solution module, or when using resources directly, by using the for_each pattern.
To deploy multiple resources for the same use-case but with slightly different configurations, consider the following:
resource "aws_sqs_queue" "data" {
  for_each = {
    orders    = { delay_seconds = 0,  message_retention_seconds = 86400 }
    shipments = { delay_seconds = 10, message_retention_seconds = 604800 }
  }

  # [...]
  delay_seconds             = each.value.delay_seconds
  message_retention_seconds = each.value.message_retention_seconds
}

# Different use-case, separate resource block
resource "aws_sqs_queue" "notifications" {
  for_each = {
    email = { delay_seconds = 0 }
    sms   = { delay_seconds = 5 }
  }

  # [...]
  delay_seconds = each.value.delay_seconds
}
Here we assume that the queues within each use-case are used in the same way. If the use-cases start diverging too much, I simply move the resources into separate definitions.
Conclusion
Resource modules are a well-intentioned solution to a real problem: inconsistent, non-compliant infrastructure. But the cure ends up worse than the disease: resource modules introduce API complexity, lifecycle constraints, and a maintenance burden that compounds as your user base grows. The alternative isn’t to abandon standardisation but to apply it at the right level:
- Encode organisational conventions as data-only modules
- Enforce compliance through policy tooling
- Build solution modules that emerge from observed patterns rather than upfront design
The result is infrastructure that is readable, debuggable, and easy to onboard. When in doubt, reach for the Terraform resource directly.