
Commit e2c13db

Merge branch 'master' into timeout-changes
2 parents d1650b5 + f2d3baf commit e2c13db

111 files changed: +2813 additions, -556 deletions


CHANGELOG.md

Lines changed: 68 additions & 0 deletions
@@ -1,3 +1,71 @@
# 1.81.0-beta1 (July 25, 2025)

## Bug Fixes

### VPC Infrastructure
* Added an empty check on allowed_use for is_instance ([6381](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6381))

# 1.81.0-beta0 (July 24, 2025)

## Bug Fixes

### Cloud Logs
* fix dashboard panic ([6374](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6374))

### Key Management
* fix KMS example code and typos ([6337](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6337))

### Direct Link
* update error toolchain changes for direct link ([6272](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6272))

### IAM
* Fix build failure issues ([6367](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6367))
* Fixed documentation on access management templates ([6298](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6298))

### Power Systems
* [Resource] [Datasource] Replace SSH Key API with new API ([6375](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6375))

### Transit Gateway
* update error toolchain changes for transit gateway ([6274](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6274))

### VPC Infrastructure
* handle nil pointer in VPN server route deletion wait ([6369](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6369))

## Enhancements

### CD Tekton Pipeline
* add support for waiting runs limit ([6335](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6335))

### Cloud Internet Services
* Add support for managed and custom lists ([6310](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6310))

### IAM
* Handle failed state assignments ([6372](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6372))

### ODF
* ODF 4.18 initial support ([6348](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6348))

### Power Systems
* [DataSource] [Resource] Add Software Tier support for data sources and resources ([6321](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6321))

### VMware
* update vmware service ([6329](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6329))

### VPC Infrastructure
* added support for vpc-services ([6357](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6357))
* public address range development ([6341](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6341))
* Added support for tags in is_vpn_server resource ([6295](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6295))
* Added support for source_snapshot on instance template ([6364](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6364))
* image capabilities changes ([6366](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6366))
* Added crn to virtual network interface for is_instance ([6297](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6297))

## Deprecations

### IAM
* deprecate iam_service_id and profile_id while creation of policies ([6345](https://github.com/IBM-Cloud/terraform-provider-ibm/pull/6345))

# 1.80.4 (July 15, 2025)

## Bug Fixes

examples/ibm-key-protect/main.tf

Lines changed: 2 additions & 2 deletions
@@ -26,11 +26,11 @@ data "ibm_kp_key" "test" {
   key_protect_id = "${ibm_kp_key.test.key_protect_id}"
 }

-resource "ibm_cos_bucket" "flex-us-south" {
+resource "ibm_cos_bucket" "smart-us-south" {
   depends_on           = [ibm_iam_authorization_policy.policy]
   bucket_name          = var.bucket_name
   resource_instance_id = ibm_resource_instance.cos_instance.id
   region_location      = "us-south"
-  storage_class        = "flex"
+  storage_class        = "smart"
   kms_key_crn          = ibm_kp_key.test.id
 }

examples/ibm-key-protect/variables.tf

Lines changed: 1 addition & 1 deletion
@@ -41,5 +41,5 @@ variable "standard_key" {
 variable "bucket_name" {
   description = "The cos bucket name"
   type        = string
-  default     = "test_buck"
+  default     = "kptestbucket"
 }

examples/ibm-kms/main.tf

Lines changed: 11 additions & 12 deletions
@@ -28,12 +28,12 @@ data "ibm_kms_keys" "test" {
   instance_id = "${ibm_kms_key.test.instance_id}"
 }

-resource "ibm_cos_bucket" "flex-us-south" {
+resource "ibm_cos_bucket" "smart-us-south" {
   depends_on           = [ibm_iam_authorization_policy.policy]
   bucket_name          = var.bucket_name
   resource_instance_id = ibm_resource_instance.cos_instance.id
   region_location      = "us-south"
-  storage_class        = "flex"
+  storage_class        = "smart"
   kms_key_crn          = ibm_kms_key.test.id
 }

@@ -49,21 +49,20 @@ resource "ibm_kms_kmip_adapter" "myadapter" {

 resource "ibm_kms_kmip_client_cert" "mycert" {
   instance_id = "${ibm_kms_key.test.instance_id}"
-  adapter_id  = "${ibm_kms_kmip_adapter.myadapter.id}"
+  adapter_id  = "${ibm_kms_kmip_adapter.myadapter.adapter_id}"
   certificate = file("${path.module}/localhost.crt")
   name        = var.kmip_cert_name
 }

 data "ibm_kms_kmip_adapter" "adapter_data" {
   instance_id = "${ibm_kms_key.test.instance_id}"
   name        = "${ibm_kms_kmip_adapter.myadapter.name}"
-  # adapter_id = "${ibm_kms_kmip_adapter.myadapter.adapter_id}"
 }

 data "ibm_kms_kmip_client_cert" "cert1" {
   instance_id  = "${ibm_kms_key.test.instance_id}"
   adapter_name = "${ibm_kms_kmip_adapter.myadapter.name}"
-  cert_id      = "${ibm_kms_kmip_client_cert.mycert.id}"
+  cert_id      = "${ibm_kms_kmip_client_cert.mycert.cert_id}"
 }

 data "ibm_kms_kmip_adapters" "adapters" {
@@ -77,13 +76,13 @@ data "ibm_kms_kmip_client_certs" "cert_list" {

 data "ibm_kms_kmip_objects" "objects_list" {
   instance_id         = "${ibm_kms_key.test.instance_id}"
-  adapter_id          = "${ibm_kms_kmip_adapter.myadapter.id}"
+  adapter_id          = "${ibm_kms_kmip_adapter.myadapter.adapter_id}"
   object_state_filter = [1,2,3,4]
 }

-data "ibm_kms_kmip_object" "object1" {
-  count = length(data.ibm_kms_kmip_objects.objects_list.objects) > 0 ? 1 : 0
-  instance_id = "${ibm_kms_key.test.instance_id}"
-  adapter_id = "${ibm_kms_kmip_adapter.myadapter.id}"
-  object_id = "${data.ibm_kms_kmip_objects.objects_list.objects.0.object_id}"
-}
+# Note: As object creation is not supported via terraform, the below code attempts to pull the id of the first item from the list of kmip objects
+# data "ibm_kms_kmip_object" "object1" {
+#   instance_id = "${ibm_kms_key.test.instance_id}"
+#   adapter_id = "${ibm_kms_kmip_adapter.myadapter.adapter_id}"
+#   object_id = "${data.ibm_kms_kmip_objects.objects_list.objects.0.object_id}"
+# }

examples/ibm-kms/variables.tf

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ variable "standard_key" {
 variable "bucket_name" {
   description = "The cos bucket name"
   type        = string
-  default     = "test_buck"
+  default     = "kptestbucket"
 }
 variable "kmip_adapter_name" {
   description = "The KMIP adapter name"

examples/openshift-data-foundation/addon/4.17.0/README.md

Lines changed: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folde
 To run this example on your Terminal, first download this directory i.e `examples/openshift-data-foundation/`

 ```bash
-$ cd addon/4.16.0
+$ cd addon/4.17.0
 ```

 ```bash
@@ -75,7 +75,7 @@ cluster = "" # Enter the Cluster ID
 region = "us-south" # Enter the region

 # For add-on deployment
-odfVersion = "4.16.0"
+odfVersion = "4.17.0"

 # For CRD Creation and Management
 autoDiscoverDevices = "false"
@@ -161,7 +161,7 @@ ocsUpgrade = "false" -> "true"
 | ibmcloud_api_key | IBM Cloud API Key | `string` | yes | -
 | cluster | Name of the cluster. | `string` | yes | -
 | region | Region of the cluster | `string` | yes | -
-| odfVersion | Version of the ODF add-on | `string` | yes | 4.12.0
+| odfVersion | Version of the ODF add-on | `string` | yes | 4.16.0
 | osdSize | Enter the size for the storage devices that you want to provision for the Object Storage Daemon (OSD) pods | `string` | yes | 512Gi
 | numOfOsd | The Number of OSD | `string` | yes | 1
 | osdStorageClassName | Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods | `string` | yes | ibmc-vpc-block-metro-10iops-tier
examples/openshift-data-foundation/addon/4.18.0/README.md

Lines changed: 198 additions & 0 deletions
@@ -0,0 +1,198 @@

# Deploying and Managing OpenShift Data Foundation

This example shows how to deploy and manage OpenShift Data Foundation (ODF) on an IBM Cloud VPC-based Red Hat OpenShift cluster. Note that this template is still in development, so exercise caution before using it in production.

This sample configuration deploys, scales, and upgrades ODF using the `ibm_container_addons` resource from the IBM Terraform provider and the `kubernetes_manifest` resource from the Kubernetes provider.

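The split between the two providers is the core of the design: `ibm_container_addons` enables the add-on on the cluster, and `kubernetes_manifest` applies the `OcsCluster` custom resource that the add-on then reconciles into a storage cluster. A minimal sketch of that pattern is shown below; it is illustrative only. The add-on name `openshift-data-foundation`, the `ocs.ibm.io/v1` apiVersion, and the spec field names are assumptions; the real definitions live in the `ibm-odf-addon` and `ocscluster` folders described below.

```hcl
# Sketch only: enable the ODF add-on on an existing cluster.
# The add-on name "openshift-data-foundation" is assumed here.
resource "ibm_container_addons" "odf" {
  cluster = var.cluster

  addons {
    name    = "openshift-data-foundation"
    version = var.odfVersion # e.g. "4.18.0"
  }
}

# Sketch only: create the OcsCluster custom resource once the add-on is enabled.
# The apiVersion and spec field names are illustrative assumptions.
resource "kubernetes_manifest" "ocscluster" {
  manifest = {
    apiVersion = "ocs.ibm.io/v1"
    kind       = "OcsCluster"
    metadata = {
      name = "ocscluster-auto"
    }
    spec = {
      numOfOsd            = var.numOfOsd
      osdSize             = var.osdSize
      osdStorageClassName = var.osdStorageClassName
      billingType         = var.billingType
    }
  }
}
```
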
For more information about:

* ODF deployment, see [Deploying OpenShift Data Foundation on VPC clusters](https://cloud.ibm.com/docs/openshift?topic=openshift-deploy-odf-vpc&interface=ui)
* ODF management, see [Managing your OpenShift Data Foundation deployment](https://cloud.ibm.com/docs/openshift?topic=openshift-ocs-manage-deployment&interface=ui)

#### Folder Structure

```ini
├── openshift-data-foundation
│   ├── addon
│   │   ├── ibm-odf-addon
│   │   │   ├── main.tf
│   │   ├── ocscluster
│   │   │   ├── main.tf
│   │   ├── createaddon.sh
│   │   ├── createcrd.sh
│   │   ├── updatecrd.sh
│   │   ├── updateodf.sh
│   │   ├── deleteaddon.sh
│   │   ├── deletecrd.sh
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── schematics.tfvars
```

* `ibm-odf-addon` - This folder deploys a specific version of OpenShift Data Foundation with the `odfDeploy` parameter set to false, i.e. the add-on is installed without the ocscluster, using the IBM Cloud Terraform provider.
* `ocscluster` - This folder deploys the `OcsCluster` CRD with the parameters set in the `schematics.tfvars` file.
* `addon` - This folder contains scripts to create the CRD and deploy the ODF add-on on your cluster. The `main.tf` file contains the `null_resource` that internally calls the two folders above and performs the required actions, as sketched below.

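A rough sketch of the orchestration in the `addon` folder's `main.tf` follows. It is a sketch only: the exact triggers, arguments, and working directories are assumptions, and the helper scripts listed in the folder structure above do the actual work of applying the `ibm-odf-addon` and `ocscluster` configurations.

```hcl
# Sketch only: the addon-level main.tf drives the helper scripts via null_resource.
# Exact triggers, script arguments, and working directories are assumptions.
resource "null_resource" "enable_odf_addon" {
  provisioner "local-exec" {
    command = "sh ./createaddon.sh"
  }
}

resource "null_resource" "create_ocscluster_crd" {
  # The CRD is applied only after the add-on has been enabled.
  depends_on = [null_resource.enable_odf_addon]

  provisioner "local-exec" {
    command = "sh ./createcrd.sh"
  }
}
```
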
#### Note

You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folders. You just have to enter the required parameters in the `schematics.tfvars` file under the `addon` folder and run terraform.

## Usage

### Option 1 - Command Line Interface

To run this example on your terminal, first download this directory, i.e. `examples/openshift-data-foundation/`

```bash
$ cd addon/4.18.0
```

```bash
$ terraform init
$ terraform plan --var-file schematics.tfvars
$ terraform apply --var-file schematics.tfvars
```

Run `terraform destroy --var-file schematics.tfvars` when you don't need these resources.

### Option 2 - IBM Cloud Schematics

To deploy and manage the OpenShift Data Foundation add-on using `IBM Cloud Schematics`, please follow the documentation below:

https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform

Please note that you have to change the `terraform` keyword in the scripts to `terraform1.x`, where `x` is the version of Terraform you use in IBM Schematics. For example, if you're using Terraform version 1.3 in Schematics, change `terraform` to `terraform1.3` in the .sh files.

## Example usage

### Deployment of ODF

The default `schematics.tfvars` is given below; change the parameter values according to your requirements.

```hcl
ibmcloud_api_key = "" # Enter your API Key
cluster = "" # Enter the Cluster ID
region = "us-south" # Enter the region

# For add-on deployment
odfVersion = "4.18.0"

# For CRD Creation and Management
autoDiscoverDevices = "false"
billingType = "advanced"
clusterEncryption = "false"
hpcsBaseUrl = null
hpcsEncryption = "false"
hpcsInstanceId = null
hpcsSecretName = null
hpcsServiceName = null
hpcsTokenUrl = null
ignoreNoobaa = "false"
numOfOsd = "1"
ocsUpgrade = "false"
osdDevicePaths = null
osdSize = "512Gi"
osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier"
workerPools = null
workerNodes = null
encryptionInTransit = false
taintNodes = false
addSingleReplicaPool = false
disableNoobaaLB = false
enableNFS = false
useCephRBDAsDefaultStorageClass = false
resourceProfile = "balanced"
```

### Scale-Up of ODF

The following variables in the `schematics.tfvars` file can be edited:

* numOfOsd - To scale your storage
* workerNodes - To increase the number of worker nodes with ODF
* workerPools - To increase the number of storage nodes by adding more nodes using a worker pool

```hcl
# For CRD Management
numOfOsd = "1" -> "2"
workerNodes = null -> "worker_1_ID,worker_2_ID"
```
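
A `workerPools` change follows the same pattern as the example above; the pool names here are placeholders for illustration only:

```hcl
# Hypothetical worker pool names, for illustration only.
workerPools = null -> "default,storage-pool-1"
```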

### Upgrade of ODF

The following variables in the `schematics.tfvars` file should be changed in order to upgrade the ODF add-on and the Ocscluster CRD:

* odfVersion - Specify the version you wish to upgrade to
* ocsUpgrade - Must be set to `true` to upgrade the CRD

```hcl
# For ODF add-on upgrade
odfVersion = "4.17.0" -> "4.18.0"

# For Ocscluster upgrade
ocsUpgrade = "false" -> "true"
```

## Examples

* [ODF Deployment & Management](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/deployment)

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

## Requirements

| Name | Version |
|------|---------|
| terraform | ~> 0.14.8 |

## Providers

| Name | Version |
|------|---------|
| ibm | latest |
| kubernetes | latest |

## Inputs

| Name | Description | Type | Required | Default |
|------|-------------|------|----------|---------|
| ibmcloud_api_key | IBM Cloud API Key | `string` | yes | - |
| cluster | Name of the cluster | `string` | yes | - |
| region | Region of the cluster | `string` | yes | - |
| odfVersion | Version of the ODF add-on | `string` | yes | 4.16.0 |
| osdSize | Enter the size for the storage devices that you want to provision for the Object Storage Daemon (OSD) pods | `string` | yes | 512Gi |
| numOfOsd | The number of OSDs | `string` | yes | 1 |
| osdStorageClassName | Enter the storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods | `string` | yes | ibmc-vpc-block-metro-10iops-tier |
| autoDiscoverDevices | Set to true if automatically discovering local disks | `string` | no | true |
| billingType | The billing type for the ODF add-on | `string` | no | advanced |
| clusterEncryption | To enable at-rest encryption of all disks in the storage cluster | `string` | no | false |
| hpcsEncryption | Set to true to enable HPCS Encryption | `string` | no | false |
| hpcsBaseUrl | The HPCS Base URL | `string` | no | null |
| hpcsInstanceId | The HPCS Service ID | `string` | no | null |
| hpcsSecretName | The HPCS secret name | `string` | no | null |
| hpcsServiceName | The HPCS service name | `string` | no | null |
| hpcsTokenUrl | The HPCS Token URL | `string` | no | null |
| ignoreNoobaa | Set to true if you do not want MultiCloudGateway | `bool` | no | false |
| ocsUpgrade | Set to true to upgrade Ocscluster | `string` | no | false |
| osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or standard classic cluster | `string` | no | null |
| workerPools | A list of the worker pool names where you want to deploy ODF. Specify either workerPools or workerNodes to deploy ODF; if neither is specified, ODF is deployed on all nodes | `string` | no | null |
| workerNodes | Provide the names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null |
| encryptionInTransit | To enable in-transit encryption. Enabling in-transit encryption does not affect the existing mapped or mounted volumes. After a volume is mapped/mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted one by one. | `bool` | no | false |
| taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. | `bool` | no | false |
| addSingleReplicaPool | Specify true to create a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability. | `bool` | no | false |
| disableNoobaaLB | Specify true to disable the NooBaa public load balancer. | `bool` | no | false |
| enableNFS | Enabling this allows you to create exports using Network File System (NFS) that can then be accessed internally or externally from the OpenShift cluster. | `bool` | no | false |
| useCephRBDAsDefaultStorageClass | Enable to set the Ceph RADOS block device (RBD) storage class as the default storage class during the deployment of OpenShift Data Foundation | `bool` | no | false |
| resourceProfile | Provides an option to choose a resource profile based on the availability of resources during deployment. Choose between lean, balanced, and performance. | `string` | yes | balanced |

Refer to https://cloud.ibm.com/docs/openshift?topic=openshift-deploy-odf-vpc&interface=ui#odf-vpc-param-ref

## Note

* Users should only change the values of the variables within quotes; variables that are not set should be left untouched at their default values.
* `workerNodes` takes a string containing comma-separated names of the worker nodes on which you wish to enable ODF.
* On `terraform apply --var-file=schematics.tfvars`, the add-on is enabled and the custom resource is created.
* During an ODF update, please do not tamper with the `ocsUpgrade` variable; just change its value to true within quotation marks, without changing the format of the variable.
* During the `Upgrade of ODF` scenario on IBM Schematics, please make sure to change the value of `ocsUpgrade` back to `false` afterwards. Locally, this is handled automatically using `sed`.
