5 Beginner Terraform Mistakes You Must Absolutely Avoid
February 20, 2026

I still remember the first time, a few years ago, that I ran a terraform apply in production. My palms were sweating, my heart was racing, and a little voice in my head kept asking: "Are you really sure about what you're doing?" Spoiler alert: I wasn't, not entirely. Today, as a Lead Cloud Architect, I look back with a mixture of fondness and dread at the monumental mistakes I made when starting out.
Terraform is a fantastic tool. It is the industry standard for Infrastructure as Code (IaC). But it is also a tool that will let you shoot yourself in the foot with surgical precision if you're not careful. Unlike application code, where a bug might "just" crash a feature, a Terraform mistake can wipe a production database or expose your company's secrets to the entire world in a matter of seconds.
The goal of this article is not to scare you, but to offer you the shortcut I never had. We are going to break down five classic technical and architectural mistakes that almost every beginner (and even some seniors) makes. Grab a coffee, get comfortable, and let's look together at how to avoid turning your infrastructure into a pile of rubble.
Storing State Locally — The Recipe for Collaborative Disaster
One of the first things you learn is that Terraform maintains the state of your infrastructure in a file called terraform.tfstate. At first, everything seems simple: you run your code, the file is created on your computer, and everything works. This is exactly where the trap snaps shut.
Why Local Storage is a Ticking Time Bomb
When you're working alone on a personal project, keeping the state on your machine (the local backend) is acceptable. But the moment a second person joins the team, or you want to set up a CI/CD pipeline, local storage becomes your worst nightmare. As this guide on the risks of state file corruption explains, this file is the absolute source of truth for Terraform. If you have it on your computer, your colleague does not. If they try to deploy a change, Terraform will think the resources don't exist and try to recreate them, causing name conflicts or, worse, costly duplications.
Furthermore, if your hard drive dies, you lose the mapping between your code and reality. You then have to do manual imports — a tedious and error-prone task. But the most insidious problem remains the absence of locking. Without a centralized mechanism to say "I am currently modifying the infrastructure," two engineers can launch a deployment simultaneously. The result? A race condition that corrupts the state file, making your infrastructure unmanageable.
The Remote Backend Solution with Locking
The best practice, from day one, is to use a remote backend. On AWS, the historical standard has been an S3 bucket for storage combined with a DynamoDB table for locking (State Locking). Good news: AWS recently simplified things by introducing native locking directly in S3, eliminating the need to manage an additional DynamoDB resource for this critical task and illustrating just how fast managed services evolve at AWS, GCP and Azure.
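For the record, here is a minimal sketch of such a backend block. The bucket name and key are placeholders, and the use_lockfile option assumes a recent Terraform release (1.10 or later); adapt it to your own setup.

terraform {
  backend "s3" {
    bucket       = "my-company-tfstate"             # placeholder bucket name
    key          = "networking/terraform.tfstate"   # one state file per stack
    region       = "eu-west-1"
    encrypt      = true                             # server-side encryption of the state
    use_lockfile = true                             # native S3 locking, no DynamoDB table needed
  }
}

Once this block is in place, a terraform init -migrate-state copies your existing local state into the bucket, and every subsequent apply acquires the lock before touching anything.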
Naively Handling Secrets and Sensitive Data
This is the mistake that makes security managers sweat the most. When starting out, the pressure is to move fast. You need a password for an RDS database or an API key for a third-party service, and the temptation is great to write it "just for testing" in the variables.tf file or, worse, directly in the resource code.
The Persistence of Plaintext Data
Even if you use environment variables to pass your secrets to Terraform, there is a crucial technical detail that many people overlook: the terraform.tfstate file stores resource values in plaintext. Yes, you read that right. Marking a variable as sensitive = true does not change this, because that flag only prevents the value from being displayed in the console output (stdout) during a plan or apply.
If you commit your state file to Git (which you should absolutely never do, see the previous point), or if your S3 bucket is not properly secured, anyone with access to this file can read your database passwords, your TLS private keys, and your access tokens. This is a major security vulnerability.
Securing the End-to-End Chain
To handle this like a pro, you need to adopt a layered approach. First, ban hard-coded secrets in your .tf files. Favor native secret managers like AWS Secrets Manager, Azure Key Vault, or the invaluable HashiCorp Vault. If your needs call for a more "cloud-agnostic" approach, or if you want to keep your secrets versioned directly in your Git repository, never do it in plaintext. Use tools like git-crypt or sops. These solutions encrypt your sensitive files before they are pushed to the server, ensuring that only a user with the decryption key can read them.
Rather than passing passwords via variables, your Terraform code should rely on Data Sources to dynamically retrieve this information at runtime. This way, the secret never appears in your source code. Finally, don't forget the weak link: the state file. Since Terraform inevitably stores data there for its calculations, encrypting your backend (like server-side encryption on S3) and a restrictive IAM policy are non-negotiable. Treat your state file with the same level of paranoia as your private keys.
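To make the Data Source approach concrete, here is a minimal sketch using the AWS provider and Secrets Manager; the secret name and database settings are placeholders, not a prescription.

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/app/db-password"   # placeholder secret name
}

resource "aws_db_instance" "app" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  # The password is resolved at runtime and never appears in the .tf files.
  # It still ends up in the state file, hence the encrypted backend and strict IAM policy.
  password          = data.aws_secretsmanager_secret_version.db_password.secret_string
}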
Monolithic Architecture and the Refusal of Modularity
When starting out, there is a tendency to put everything in a single main.tf file, or at best, to separate files by service (network, compute, database) but in a single root folder. At first, it's fast. But as the infrastructure grows, this monolithic approach becomes an operational nightmare.
The Blast Radius Problem
Imagine you have your entire infrastructure (VPC, production databases, application servers, DNS) in a single Terraform project. Every time you run a terraform plan, the tool must refresh the state of all these resources via the Cloud provider APIs. This poses two major problems. First, slowness: on large infrastructures, a simple plan can take 10 to 15 minutes.
Second, and more seriously, the "Blast Radius." This concept of a 'Terralith' or monolithic architecture presents major risks: if you make a small change to a Security Group but make a typo that affects a global resource, you risk breaking all of production. A monolithic architecture does not allow you to isolate risks.
Divide and Conquer with Modules
Mature architecture relies on extensive use of modules and strict separation of states — essential pillars for a scalable Infrastructure as Code (IaC). Don't duplicate your code for Dev, Staging and Prod environments. Instead, create reusable modules (for example a "standard-app" module grouping ASG, ALB and security rules) and call them with specific variables. To truly isolate risks, you must physically separate state files. Your network (VPC) must have its own lifecycle, independent of your databases or Kubernetes clusters. Here's what a healthy folder hierarchy looks like for a large-scale project:
terraform/
├── networking/    (VPC, Subnets...)
├── eks/           (Kubernetes Cluster)
├── cloudwatch/
├── cloudtrail/
├── app1/
│   ├── rds/
│   ├── asg/
│   └── cloudfront/
└── app2/
    ├── ec2/
    ├── rds/
    └── s3_images/
Thanks to this structure, a change to App 1's CloudFront technically has no chance of impacting App 2's VPC or database. Each folder has its own terraform init and its own remote backend. As mentioned above regarding the Blast Radius, this approach is the only guarantee of a robust infrastructure, where the impact radius is controlled and execution times remain ultra-fast.
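And to illustrate the "reusable module" side of the equation, here is a hedged sketch of what calling a hypothetical standard-app module could look like; the source path and variable names are purely illustrative, and only the values change between environments.

module "standard_app" {
  source = "../../modules/standard-app"   # hypothetical shared module (ASG + ALB + security rules)

  app_name      = "app1"
  environment   = "prod"
  instance_type = "m5.large"              # dev would pass something like t3.small
  min_size      = 3                       # dev would run a single instance
  max_size      = 6
}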
Workspaces vs tfvars: The Environment Isolation Dilemma
A question that invariably comes up once you start structuring your projects: how do you cleanly manage deployment across multiple environments (Dev, Staging, Prod)? The temptation is great to reach for the native Terraform Workspaces feature, but this is often where the first operational friction appears.
The CLI Workspaces Mirage
On paper, workspaces are attractive. They allow you to use the same code and switch from one state to another with a simple terraform workspace select command. They are a powerful tool for testing isolated features or spinning up ephemeral environments. Using them as a long-term way to isolate Production from everything else, however, is a risky practice.
The main danger is invisibility. Unlike a folder structure, nothing in your code editor visually indicates which environment you're working on. A moment of inattention, a terraform destroy launched while you thought you were in 'dev' when you were actually in 'prod', and disaster strikes. Furthermore, since workspaces often share the same backend and source code, you quickly end up polluting your files with complex conditions (count = terraform.workspace == "prod" ? 3 : 1) that make reading painful.
Clarity Through Physical Separation
For long-lived environments, expert recommendations lean heavily toward using dedicated variable files (.tfvars) or, better yet, organizing by directories to favor scalability. By using dev.tfvars, staging.tfvars and prod.tfvars files, you make configurations explicit. Each deployment requires passing the file as a parameter (-var-file="prod.tfvars"), which adds a layer of conscious validation.
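The variable files themselves stay deliberately small and explicit; the values below are purely illustrative.

# dev.tfvars
environment    = "dev"
instance_count = 1
instance_type  = "t3.small"

# prod.tfvars
environment    = "prod"
instance_count = 3
instance_type  = "m5.large"

A terraform apply -var-file="prod.tfvars" then leaves no ambiguity about which environment you are about to modify.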
For total isolation and maximum security, favor distinct folders. This not only isolates state files, but also allows managing different module versions between Dev and Prod, thus avoiding breaking your critical infrastructure during an overly ambitious module update.
Ignoring Refactoring and Moved Blocks
Infrastructure is alive. It evolves. What you named aws_instance.web today might need to be called aws_instance.frontend tomorrow, or moved into a child module for cleanliness. For a beginner, this is often a dead end. You tell yourself: "If I change the name in the code, Terraform will destroy the old resource and create a new one." And that is exactly what it will do by default.
The Fear of Data Loss
For stateless resources like a security group, this is not very serious. But for a production database or an S3 bucket containing petabytes of data, a destroy/recreate is unacceptable. For a long time, the only solution was to use the CLI command terraform state mv. It was a frightening surgical operation: you had to manipulate the state file directly from the command line, often blind, with a high risk of corruption if you got the resource paths wrong.
The Elegance of the Moved Block
Fortunately, modern versions of Terraform (since 1.1) have introduced the moved block. As this article on abandoning the manual terraform state mv command highlights, it is a revelation I wish I'd had sooner. Instead of typing perilous CLI commands, you document the move directly in your HCL code.
Concretely, you write a block that says: "What was previously aws_instance.web is now aws_instance.frontend." On the next terraform plan, the tool will read this block, understand that it is a rename and not a destruction/creation, and propose migrating the state without touching the real infrastructure. This has two immense advantages. First, it is secure because Terraform manages the logic. Second, it allows you to version your migrations, ensuring your colleagues will understand exactly what is happening during code review.
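In practice the block is as short as the explanation; the first example reuses the names from above, and the module address in the second is hypothetical.

moved {
  from = aws_instance.web
  to   = aws_instance.frontend
}

# The same mechanism covers a resource migrating into a child module
moved {
  from = aws_instance.frontend
  to   = module.frontend.aws_instance.this
}

Once the plan has been applied and the state updated, the moved blocks can stay in the code as documentation of the refactor.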
Conclusion
Starting with Terraform is an exciting journey toward automation and Cloud mastery. However, the learning curve is littered with unforgiving pitfalls. By avoiding local state storage, securing your secrets, modularizing your architecture, isolating your environments with explicit .tfvars files or dedicated folders rather than fragile workspace switches, and using moved blocks for your refactors, you are already placing yourself in the top tier of practitioners.
Infrastructure as Code is not just about writing code; it is about managing the lifecycle and operational responsibility. I made these mistakes so you don't have to repeat them. So, configure your remote backend, encrypt your data, and happy deploying!

Bastien Genestal
Lead Cloud Architect @ Log'in Line
As Lead Cloud Architect, I lead a team of architects and SREs who manage our clients' clusters on a daily basis. I am also a big fan of climbing 🧗‍♂️
LinkedIn
