Merged
Commits (38 total; changes shown from 30 commits)
83ae92e  fix: extract out worker_pool logic (Oct 15, 2025)
657d9bc  Merge branch 'main' of https://github.com/terraform-ibm-modules/terra… (Oct 15, 2025)
5d69d89  resolve pc (iamar7, Oct 15, 2025)
1cc3027  update pr_test (Oct 16, 2025)
bb3ec17  Merge branch '16090-cds' of https://github.com/terraform-ibm-modules/… (Oct 16, 2025)
a4acda2  added moved block (Oct 17, 2025)
134f5c0  Merge branch 'main' into 16090-cds (iamar7, Oct 21, 2025)
49cb75a  update tests (Oct 21, 2025)
01adf44  update readme (iamar7, Oct 21, 2025)
7f9e360  update readme (Oct 21, 2025)
3d4b1bc  add worker pool example (Oct 24, 2025)
7040b3a  update worker pool example (Oct 24, 2025)
0e860f4  update default worker pool (Oct 24, 2025)
a25f222  add workerpool example to test (Oct 24, 2025)
4acdeeb  Merge branch 'main' into 16090-cds (iamar7, Oct 24, 2025)
f68b3bb  resolve comments (Oct 24, 2025)
ac0c59a  Merge branch '16090-cds' of https://github.com/terraform-ibm-modules/… (Oct 24, 2025)
8a2ec73  update example (Oct 24, 2025)
2015d93  Add default value null to prefix variable (iamar7, Oct 24, 2025)
9a354da  Remove validation from prefix variable (iamar7, Oct 24, 2025)
4143f78  resolve comments (Oct 24, 2025)
b22e5c4  Merge branch '16090-cds' of https://github.com/terraform-ibm-modules/… (Oct 24, 2025)
9492774  remove worker pool example (Oct 24, 2025)
9f7fd8a  resolve pc (iamar7, Oct 24, 2025)
bb31898  add worker pool example (Oct 27, 2025)
8f44a14  Merge branch '16090-cds' of https://github.com/terraform-ibm-modules/… (Oct 27, 2025)
8d008dd  resolve comments (Oct 28, 2025)
b7ef0db  update README (Oct 28, 2025)
c900e6d  Merge branch 'main' into 16090-cds (iamar7, Oct 28, 2025)
4004c59  Merge branch 'main' into 16090-cds (ocofaigh, Oct 28, 2025)
2200dcf  update cross kms example (Oct 28, 2025)
2f07b6e  revert changes (iamar7, Oct 28, 2025)
daa451f  remove blank line (iamar7, Oct 28, 2025)
f4421f3  remove changes (iamar7, Oct 28, 2025)
432a02c  Merge branch 'main' into 16090-cds (iamar7, Oct 29, 2025)
16b9e28  update advanced example (iamar7, Oct 29, 2025)
f2ddf66  update example (iamar7, Oct 29, 2025)
3b6c5cd  Merge branch 'main' into 16090-cds (ocofaigh, Oct 29, 2025)
README.md: 5 changes (2 additions, 3 deletions)
@@ -28,6 +28,7 @@ Optionally, the module supports advanced security group management for the worke
* [Submodules](./modules)
* [fscloud](./modules/fscloud)
* [kube-audit](./modules/kube-audit)
* [worker-pool](./modules/worker-pool)
* [Examples](./examples)
* [2 MZR clusters in same VPC example](./examples/multiple_mzr_clusters)
* [Advanced example (mzr, auto-scale, kms, taints)](./examples/advanced)
@@ -296,6 +297,7 @@ Optionally, you need the following permissions to attach Access Management tags
| <a name="module_cbr_rule"></a> [cbr\_rule](#module\_cbr\_rule) | terraform-ibm-modules/cbr/ibm//modules/cbr-rule-module | 1.33.7 |
| <a name="module_cos_instance"></a> [cos\_instance](#module\_cos\_instance) | terraform-ibm-modules/cos/ibm | 10.5.1 |
| <a name="module_existing_secrets_manager_instance_parser"></a> [existing\_secrets\_manager\_instance\_parser](#module\_existing\_secrets\_manager\_instance\_parser) | terraform-ibm-modules/common-utilities/ibm//modules/crn-parser | 1.2.0 |
| <a name="module_worker_pools"></a> [worker\_pools](#module\_worker\_pools) | ./modules/worker-pool | n/a |

### Resources

@@ -308,8 +310,6 @@ Optionally, you need the following permissions to attach Access Management tags
| [ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_cluster) | resource |
| [ibm_container_vpc_cluster.cluster](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_cluster) | resource |
| [ibm_container_vpc_cluster.cluster_with_upgrade](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_cluster) | resource |
| [ibm_container_vpc_worker_pool.autoscaling_pool](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_worker_pool) | resource |
| [ibm_container_vpc_worker_pool.pool](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/container_vpc_worker_pool) | resource |
| [ibm_iam_authorization_policy.ocp_secrets_manager_iam_auth_policy](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/iam_authorization_policy) | resource |
| [ibm_resource_tag.cluster_access_tag](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/resource_tag) | resource |
| [ibm_resource_tag.cos_access_tag](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/resources/resource_tag) | resource |
@@ -322,7 +322,6 @@ Optionally, you need the following permissions to attach Access Management tags
| [ibm_container_addons.existing_addons](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_addons) | data source |
| [ibm_container_cluster_config.cluster_config](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_cluster_config) | data source |
| [ibm_container_cluster_versions.cluster_versions](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_cluster_versions) | data source |
| [ibm_container_vpc_worker_pool.all_pools](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_vpc_worker_pool) | data source |
| [ibm_is_lbs.all_lbs](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/is_lbs) | data source |
| [ibm_is_virtual_endpoint_gateway.api_vpe](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/is_virtual_endpoint_gateway) | data source |
| [ibm_is_virtual_endpoint_gateway.master_vpe](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/is_virtual_endpoint_gateway) | data source |
examples/custom_sg/README.md: 1 change (1 addition, 0 deletions)
@@ -6,6 +6,7 @@ An example showing how to attach additional security groups to the worker pools,
2. A second custom security group, named `custom-worker-pool-sg`, is specified for one of the `custom-sg` worker pools. This security group is not applied to other worker pools.
3. Three custom security groups, named `custom-master-vpe-sg`, `custom-registry-vpe-sg`, and `custom-kube-api-vpe-sg`, are attached to the three VPEs created by the ROKS-stack: the master VPE, the container registry VPE, and the kubernetes API VPE. This is in addition to the IBM-managed security groups that are still attached to those resources.
4. One custom security group, named `custom-lb-sg`, is attached to the LB created out-of-the-box by the IBM stack.
5. An additional worker pool named `workerpool` is created and attached to the cluster using the `worker-pool` submodule.

Furthermore, the default IBM-managed `kube-<clusterId>` security group is linked to all worker nodes of the cluster by utilizing the `attach_ibm_managed_security_group` input variable. It is important to note that, in this configuration, the default VPC security group is not connected to any worker node.
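
As an aside, the `worker_pools` variable (defined in this example's `variables.tf` below) also accepts an `additional_security_group_ids` field, so the additional pool from item 5 could be given its own custom security group in the same way. This is a minimal sketch, not part of the example's defaults, and the `module.custom_sg["custom-worker-pool-sg"]` reference is an assumption based on how the example wires its other custom security groups:

worker_pools = [{
  subnet_prefix                 = "default"
  pool_name                     = "workerpool"
  machine_type                  = "bx2.4x16"
  operating_system              = "REDHAT_8_64"
  workers_per_zone              = 2
  # assumed reference; adjust to however the custom_sg module exposes its security group IDs
  additional_security_group_ids = [module.custom_sg["custom-worker-pool-sg"].security_group_id]
}]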

examples/custom_sg/main.tf: 14 changes (14 additions, 0 deletions)
@@ -117,3 +117,17 @@ module "ocp_base" {
"registry" = [module.custom_sg["custom-registry-vpe-sg"].security_group_id]
}
}


########################################################################################################################
# Worker Pool
########################################################################################################################

module "worker_pool" {
source = "../../modules/worker-pool"
resource_group_id = module.resource_group.resource_group_id
vpc_id = ibm_is_vpc.vpc.id
cluster_id = module.ocp_base.cluster_id
vpc_subnets = local.cluster_vpc_subnets
worker_pools = var.worker_pools
}
examples/custom_sg/variables.tf: 35 changes (35 additions, 0 deletions)
@@ -57,3 +57,38 @@ variable "ocp_entitlement" {
description = "Value that is applied to the entitlements for OCP cluster provisioning"
default = null
}

variable "worker_pools" {
type = list(object({
subnet_prefix = optional(string)
vpc_subnets = optional(list(object({
id = string
zone = string
cidr_block = string
})))
pool_name = string
machine_type = string
workers_per_zone = number
resource_group_id = optional(string)
operating_system = string
labels = optional(map(string))
minSize = optional(number)
secondary_storage = optional(string)
maxSize = optional(number)
enableAutoscaling = optional(bool)
boot_volume_encryption_kms_config = optional(object({
crk = string
kms_instance_id = string
kms_account_id = optional(string)
}))
additional_security_group_ids = optional(list(string))
}))
description = "List of additional worker pools"
default = [{
subnet_prefix = "default"
pool_name = "workerpool"
machine_type = "bx2.4x16"
operating_system = "REDHAT_8_64"
workers_per_zone = 2
}]
}
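
The same object type also carries the autoscaling-related fields. Below is a minimal sketch of an autoscaled pool entry, with illustrative values and assuming the cluster-autoscaler add-on is managed elsewhere in the root module (the `enableAutoscaling`, `minSize`, and `maxSize` fields feed that configuration rather than the worker pool resource itself):

worker_pools = [{
  subnet_prefix     = "default"
  pool_name         = "autoscaled-pool"
  machine_type      = "bx2.4x16"
  operating_system  = "REDHAT_8_64"
  workers_per_zone  = 2
  enableAutoscaling = true
  minSize           = 2
  maxSize           = 6
}]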
main.tf: 134 changes (16 additions, 118 deletions)
@@ -7,9 +7,6 @@
locals {
# ibm_container_vpc_cluster automatically names default pool "default" (See https://github.com/IBM-Cloud/terraform-provider-ibm/issues/2849)
default_pool = element([for pool in var.worker_pools : pool if pool.pool_name == "default"], 0)
# all_standalone_pools are the pools managed by a 'standalone' ibm_container_vpc_worker_pool resource
all_standalone_pools = [for pool in var.worker_pools : pool if !var.ignore_worker_pool_size_changes]
all_standalone_autoscaling_pools = [for pool in var.worker_pools : pool if var.ignore_worker_pool_size_changes]

default_ocp_version = "${data.ibm_container_cluster_versions.cluster_versions.default_openshift_version}_openshift"
ocp_version = var.ocp_version == null || var.ocp_version == "default" ? local.default_ocp_version : "${var.ocp_version}_openshift"
@@ -466,114 +463,15 @@ data "ibm_container_cluster_config" "cluster_config" {
endpoint_type = var.cluster_config_endpoint_type != "default" ? var.cluster_config_endpoint_type : null # null value represents default
}

##############################################################################
# Worker Pools
##############################################################################

locals {
additional_pool_names = var.ignore_worker_pool_size_changes ? [for pool in local.all_standalone_autoscaling_pools : pool.pool_name] : [for pool in local.all_standalone_pools : pool.pool_name]
pool_names = toset(flatten([["default"], local.additional_pool_names]))
}

data "ibm_container_vpc_worker_pool" "all_pools" {
depends_on = [ibm_container_vpc_worker_pool.autoscaling_pool, ibm_container_vpc_worker_pool.pool]
for_each = local.pool_names
cluster = local.cluster_id
worker_pool_name = each.value
}

resource "ibm_container_vpc_worker_pool" "pool" {
for_each = { for pool in local.all_standalone_pools : pool.pool_name => pool }
vpc_id = var.vpc_id
resource_group_id = var.resource_group_id
cluster = local.cluster_id
worker_pool_name = each.value.pool_name
flavor = each.value.machine_type
operating_system = each.value.operating_system
worker_count = each.value.workers_per_zone
secondary_storage = each.value.secondary_storage
entitlement = var.ocp_entitlement
labels = each.value.labels
crk = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.crk
kms_instance_id = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_instance_id
kms_account_id = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_account_id

security_groups = each.value.additional_security_group_ids

dynamic "zones" {
for_each = each.value.subnet_prefix != null ? var.vpc_subnets[each.value.subnet_prefix] : each.value.vpc_subnets
content {
subnet_id = zones.value.id
name = zones.value.zone
}
}

# Apply taints to worker pools i.e. all_standalone_pools
dynamic "taints" {
for_each = var.worker_pools_taints == null ? [] : concat(var.worker_pools_taints["all"], lookup(var.worker_pools_taints, each.value["pool_name"], []))
content {
effect = taints.value.effect
key = taints.value.key
value = taints.value.value
}
}

timeouts {
# Extend create and delete timeout to 2h
delete = "2h"
create = "2h"
}

# The default workerpool has to be imported as it will already exist on cluster create
import_on_create = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
orphan_on_delete = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
}

# copy of the pool resource above which ignores changes to the worker pool for use in autoscaling scenarios
resource "ibm_container_vpc_worker_pool" "autoscaling_pool" {
for_each = { for pool in local.all_standalone_autoscaling_pools : pool.pool_name => pool }
vpc_id = var.vpc_id
resource_group_id = var.resource_group_id
cluster = local.cluster_id
worker_pool_name = each.value.pool_name
flavor = each.value.machine_type
operating_system = each.value.operating_system
worker_count = each.value.workers_per_zone
secondary_storage = each.value.secondary_storage
entitlement = var.ocp_entitlement
labels = each.value.labels
crk = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.crk
kms_instance_id = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_instance_id
kms_account_id = each.value.boot_volume_encryption_kms_config == null ? null : each.value.boot_volume_encryption_kms_config.kms_account_id

security_groups = each.value.additional_security_group_ids

lifecycle {
ignore_changes = [worker_count]
}

dynamic "zones" {
for_each = each.value.subnet_prefix != null ? var.vpc_subnets[each.value.subnet_prefix] : each.value.vpc_subnets
content {
subnet_id = zones.value.id
name = zones.value.zone
}
}

# Apply taints to worker pools i.e. all_standalone_pools

dynamic "taints" {
for_each = var.worker_pools_taints == null ? [] : concat(var.worker_pools_taints["all"], lookup(var.worker_pools_taints, each.value["pool_name"], []))
content {
effect = taints.value.effect
key = taints.value.key
value = taints.value.value
}
}

# The default workerpool has to be imported as it will already exist on cluster create
import_on_create = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
orphan_on_delete = each.value.pool_name == "default" ? var.allow_default_worker_pool_replacement ? null : true : null
module "worker_pools" {
source = "./modules/worker-pool"
vpc_id = var.vpc_id
resource_group_id = var.resource_group_id
cluster_id = local.cluster_id
vpc_subnets = var.vpc_subnets
worker_pools = var.worker_pools
ignore_worker_pool_size_changes = var.ignore_worker_pool_size_changes
allow_default_worker_pool_replacement = var.allow_default_worker_pool_replacement
}

##############################################################################
@@ -605,7 +503,7 @@ resource "null_resource" "confirm_network_healthy" {
# Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
# depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
# 'ibm_container_vpc_cluster' completes, so an explicit depends_on against 'ibm_container_vpc_cluster' is needed here.
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]

provisioner "local-exec" {
command = "${path.module}/scripts/confirm_network_healthy.sh"
@@ -659,7 +557,7 @@ resource "ibm_container_addons" "addons" {
# Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
# depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
# 'ibm_container_vpc_cluster' completes, so an explicit depends_on against 'ibm_container_vpc_cluster' is needed here.
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
cluster = local.cluster_id
resource_group_id = var.resource_group_id

@@ -732,7 +630,7 @@ resource "kubernetes_config_map_v1_data" "set_autoscaling" {
##############################################################################

data "ibm_is_lbs" "all_lbs" {
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
count = length(var.additional_lb_security_group_ids) > 0 ? 1 : 0
}

@@ -768,19 +666,19 @@ locals {

data "ibm_is_virtual_endpoint_gateway" "master_vpe" {
count = length(var.additional_vpe_security_group_ids["master"])
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
name = local.vpes_to_attach_to_sg["master"]
}

data "ibm_is_virtual_endpoint_gateway" "api_vpe" {
count = length(var.additional_vpe_security_group_ids["api"])
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
name = local.vpes_to_attach_to_sg["api"]
}

data "ibm_is_virtual_endpoint_gateway" "registry_vpe" {
count = length(var.additional_vpe_security_group_ids["registry"])
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool, null_resource.confirm_network_healthy]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools, null_resource.confirm_network_healthy]
name = local.vpes_to_attach_to_sg["registry"]
}

@@ -872,7 +770,7 @@ module "existing_secrets_manager_instance_parser" {

resource "ibm_iam_authorization_policy" "ocp_secrets_manager_iam_auth_policy" {
count = var.enable_secrets_manager_integration && !var.skip_ocp_secrets_manager_iam_auth_policy ? 1 : 0
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, ibm_container_vpc_worker_pool.pool, ibm_container_vpc_worker_pool.autoscaling_pool]
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]
source_service_name = "containers-kubernetes"
source_resource_instance_id = local.cluster_id
target_service_name = "secrets-manager"
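
Since this PR moves the standalone `ibm_container_vpc_worker_pool.pool` and `ibm_container_vpc_worker_pool.autoscaling_pool` resources into the new `worker-pool` submodule (see the "added moved block" commit above), existing deployments need `moved` blocks so the pools are refactored in state rather than destroyed and recreated. A minimal sketch, assuming the submodule keeps the same resource names and `for_each` keys:

moved {
  from = ibm_container_vpc_worker_pool.pool
  to   = module.worker_pools.ibm_container_vpc_worker_pool.pool
}

moved {
  from = ibm_container_vpc_worker_pool.autoscaling_pool
  to   = module.worker_pools.ibm_container_vpc_worker_pool.autoscaling_pool
}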