Provisioning Confluent Cloud Kafka Using Terraform

EDITOR’S NOTE: Confluent’s Terraform provider for this purpose is finally GA. You should probably just use that.
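
If you do go that route, the official provider lives in the confluentinc registry namespace. A minimal, purely illustrative required_providers stanza (unpinned here; pick a version that suits you) looks roughly like this:

terraform {
  required_providers {
    confluent = {
      # Official Confluent-maintained provider; pin a specific version in real code.
      source = "confluentinc/confluent"
    }
  }
}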

You’re probably here via a search engine in a feverish quest for a solution, so I’ll get straight to it: here’s some HCL for provisioning a Confluent Cloud Kafka cluster along with its accompanying resources.

variable "confluentcloud_username" {}
variable "confluentcloud_password" {} 

terraform {
  required_providers {
    confluentcloud = {
      source  = "Mongey/confluentcloud"
      version = "0.0.10"
    }
    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.0"
    }
  }
}

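# Control-plane credentials (your Confluent Cloud login), supplied via the variables above.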
provider "confluentcloud" {
  username = var.confluentcloud_username
  password = var.confluentcloud_password
}

resource "confluentcloud_environment" "environment" {
  name = "environment"
}

resource "confluentcloud_kafka_cluster" "cluster" {
  name             = "cluster"
  service_provider = "aws"
  region           = "us-east-1"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "BASIC"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

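# API key that the kafka provider below uses to authenticate to the cluster and manage topics and ACLs.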
resource "confluentcloud_api_key" "terraform" {
  cluster_id     = confluentcloud_kafka_cluster.cluster.id
  environment_id = confluentcloud_environment.environment.id
}

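# The cluster reports its bootstrap endpoint as "SASL_SSL://host:port"; the kafka provider
# expects a bare host:port, hence the replace().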
provider "kafka" {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.cluster.bootstrap_servers, "SASL_SSL://", "")]
  sasl_username     = confluentcloud_api_key.terraform.key
  sasl_password     = confluentcloud_api_key.terraform.secret
  tls_enabled       = true
}

resource "kafka_topic" "topic" {
  config = {
    "cleanup.policy"                      = "delete"
    "delete.retention.ms"                 = "86400000"
    "max.compaction.lag.ms"               = "9223372036854775807"
    "max.message.bytes"                   = "2097164"
    "message.timestamp.difference.max.ms" = "9223372036854775807"
    "message.timestamp.type"              = "CreateTime"
    "min.compaction.lag.ms"               = "0"
    "min.insync.replicas"                 = "2"
    "retention.bytes"                     = "-1"
    "retention.ms"                        = "-1"
    "segment.bytes"                       = "104857600"
    "segment.ms"                          = "604800000"
  }
  name               = "topic"
  partitions         = 3
  replication_factor = 3
}

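# A separate API key for clients; the ACL below grants it write access to the topic.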
resource "confluentcloud_api_key" "key" {
  cluster_id     = confluentcloud_kafka_cluster.cluster.id
  environment_id = confluentcloud_environment.environment.id
}

resource "kafka_acl" "acl_write" {
  resource_name       = kafka_topic.topic.name
  resource_type       = "Topic"
  acl_principal       = "User:${confluentcloud_api_key.key.id}"
  acl_host            = "*"
  acl_operation       = "Write"
  acl_permission_type = "Allow"
  resource_pattern_type_filter = "Literal"
} 

… and here’s how you import existing resources. Environment and cluster IDs are of the form (lkc|env)-[a-z0-9]{5} and can be found in the URL of the console page for that particular object. User IDs are six digits.

terraform import confluentcloud_environment.environment env-foo42
terraform import confluentcloud_kafka_cluster.cluster lkc-bar69
terraform import confluentcloud_api_key.key 123456 # Manual state editing required to avoid re-creation
terraform import kafka_topic.topic topic
terraform import kafka_acl.acl_write 'User:123456|*|Write|Allow|Topic|topic|Literal'

Note that imported API keys need a bit of manual surgery: do a terraform state pull, fill in the null values that the plan complains about, increment the serial number, and then terraform state push the result. Otherwise Terraform will want to re-create the key because of the unknown values.
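
Concretely, the dance looks roughly like this (the exact attributes to fill in are whatever the plan reports as null):

terraform state pull > state.json
# Edit state.json: replace the null attribute values the plan complains about,
# then increment the top-level "serial" field by one.
terraform state push state.json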

I’m using Conor Mongey’s Kafka and Confluent Cloud providers to accomplish this. Hats off to Conor and the rest of the contributors for making life with Kafka that much easier!
