Terraform keeps track of the resources it manages by storing them in state (tfstate) files. This is how terraform plan and terraform apply can find the resources that were created previously and act upon them. By default, the tfstate file is stored in the same directory as your Terraform files (there is a concrete example of this after the list below). That is usually fine if you are the only one who runs Terraform. However, it becomes a problem when you are using Terraform as a team on a real product, where you can run into several issues:

  1. Shared storage for state files: To be able to use Terraform to update your infrastructure, each of your team members needs access to the same Terraform state files. That means you need to store those files in a shared location.
  2. Locking state files: As soon as data is shared, you run into a new problem: locking. Without locking, if two team members are running Terraform at the same time, you may run into race conditions as multiple Terraform processes make concurrent updates to the state files, leading to conflicts, data loss, and state file corruption.
  3. Isolating state files: When making changes to your infrastructure, it’s a best practice to isolate different environments. For example, when making a change in the staging environment, you want to be sure that you’re not going to accidentally break production. But how can you isolate your changes if all of your infrastructure is defined in the same Terraform state file?
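
To make problem 1 concrete: by default, everything lives next to your configuration on whichever machine last ran Terraform. A typical listing looks something like the one below (the file names are Terraform's defaults; the .backup file only shows up once the state has been rewritten at least once):

$ ls
main.tf  terraform.tfstate  terraform.tfstate.backup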

Enough Theory: Just show me the code.

If you already know what you want and just came for an example, below is my main.tf file, which you can adapt for your own projects. My Terraform projects always start with this as the main.tf. If you want to understand what is going on, or want step-by-step instructions, read further down.

1. Create or update file main.tf as shown below.

Note: update the region, bucket and dynamodb_table entries accordingly.

provider "aws" {
  region = "eu-west-1"
}


##
# Create an S3 bucket to store the terraform state and a dynamodb_table
# to store the lock so that only one person can be running
# terraform at any given time.
##

resource "aws_s3_bucket" "terraform_state" {
  bucket = "aws-jnvilo-state"
  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }
  # Enable server-side encryption by default
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "aws-jnvilo-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
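
A note on provider versions: in version 4 of the AWS provider, the inline versioning and server_side_encryption_configuration blocks shown above were deprecated in favour of standalone resources. If terraform plan warns about them, the equivalent configuration looks roughly like the sketch below (same bucket, just split into separate resources):

# Versioning as a standalone resource (AWS provider v4+)
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Server-side encryption as a standalone resource (AWS provider v4+)
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}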

2. Initialize and create the resources

$ terraform init
$ terraform apply

3. Update main.tf to use the S3 bucket and DynamoDB table as the backend

Edit your main.tf and add the following:

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket         = "aws-jnvilo-state"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-west-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "aws-jnvilo-locks"
    encrypt        = true
  }
}
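
The key is also how you deal with problem 3 from the list at the top: every environment (or project) can share the same bucket and lock table but write to its own key, so the state files stay isolated from one another. The key paths below are just examples:

terraform {
  backend "s3" {
    bucket         = "aws-jnvilo-state"
    # One key per environment, e.g.:
    key            = "staging/terraform.tfstate"
    # key          = "production/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "aws-jnvilo-locks"
    encrypt        = true
  }
}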

We need to run terraform init again. If we just ran terraform apply now, Terraform would tell us to run init first because the backend configuration has changed.

$ terraform init
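
If there is already a local terraform.tfstate, Terraform will ask whether to copy it into the new S3 backend; answer yes. On recent Terraform versions you can also state that intent explicitly with the -migrate-state flag:

$ terraform init -migrate-state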

Et voilà: your state is now stored in an S3 bucket and the locks are held in a DynamoDB table.