Commit 18ebc96

ODF 4.19 initial support (IBM-Cloud#6527)
Signed-off-by: Vasudha-M <[email protected]>
1 parent 98c97e3 commit 18ebc96

12 files changed: +724 -0
Lines changed: 198 additions & 0 deletions
# Deploying and Managing OpenShift Data Foundation

This example shows how to deploy and manage OpenShift Data Foundation (ODF) on an IBM Cloud VPC-based Red Hat OpenShift cluster. Note that this template is still in development, so exercise caution before using it in production.

This sample configuration deploys, scales, and upgrades ODF using the `ibm_container_addons` resource from the IBM Cloud Terraform provider and the `kubernetes_manifest` resource from the Kubernetes provider.

For more information, see:

* ODF Deployment: [Deploying OpenShift Data Foundation on VPC clusters](https://cloud.ibm.com/docs/openshift?topic=openshift-deploy-odf-vpc&interface=ui)
* ODF Management: [Managing your OpenShift Data Foundation deployment](https://cloud.ibm.com/docs/openshift?topic=openshift-ocs-manage-deployment&interface=ui)
#### Folder Structure

```ini
├── openshift-data-foundation
│   ├── addon
│   │   ├── ibm-odf-addon
│   │   │   ├── main.tf
│   │   ├── ocscluster
│   │   │   ├── main.tf
│   │   ├── createaddon.sh
│   │   ├── createcrd.sh
│   │   ├── updatecrd.sh
│   │   ├── updateodf.sh
│   │   ├── deleteaddon.sh
│   │   ├── deletecrd.sh
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── schematics.tfvars
```
* `ibm-odf-addon` - This folder is used to deploy a specific version of OpenShift Data Foundation with the `odfDeploy` parameter set to `false`, i.e., the add-on is installed without the OcsCluster, using the IBM Cloud Terraform provider.
* `ocscluster` - This folder is used to deploy the `OcsCluster` CRD with the parameters set in the `schematics.tfvars` file.
* `addon` - This folder contains scripts to create the CRD and deploy the ODF add-on on your cluster. The `main.tf` file contains the `null_resource` blocks that internally call the two folders above and perform the required actions (see the sketch after this list).
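The wiring in `addon/main.tf` follows a `null_resource` + `local-exec` pattern. Below is a trimmed sketch of the add-on resource, taken from the full file later in this commit (comments added here for orientation):

```hcl
# Trimmed from addon/main.tf: the add-on lifecycle is driven by the helper scripts.
resource "null_resource" "addOn" {
  provisioner "local-exec" {
    when    = create
    command = "sh ./createaddon.sh" # enables the openshift-data-foundation add-on
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sh ./deleteaddon.sh" # disables the add-on on terraform destroy
  }
}
```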
#### Note

You do not have to change anything in the `ibm-odf-addon` and `ocscluster` folders. Just set the required parameters in the `schematics.tfvars` file under the `addon` folder and run Terraform.
## Usage

### Option 1 - Command Line Interface

To run this example on your terminal, first download this directory, i.e., `examples/openshift-data-foundation/`.

```bash
$ cd addon/4.19.0
```

```bash
$ terraform init
$ terraform plan --var-file schematics.tfvars
$ terraform apply --var-file schematics.tfvars
```

Run `terraform destroy --var-file schematics.tfvars` when you no longer need these resources.
### Option 2 - IBM Cloud Schematics

To deploy and manage the OpenShift Data Foundation add-on using `IBM Cloud Schematics`, follow this documentation:

https://cloud.ibm.com/docs/schematics?topic=schematics-get-started-terraform

Note that you must change the `terraform` keyword in the scripts to `terraform1.x`, where `x` is the Terraform minor version you use in IBM Cloud Schematics. For example, if your Schematics workspace uses Terraform 1.3, change `terraform` -> `terraform1.3` in the .sh files, as shown in the sketch below.
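A minimal sketch of that rename, assuming GNU `sed` and a hypothetical workspace on Terraform 1.5 (adjust the version to match your workspace):

```bash
# Hypothetical one-liner: rewrite every "terraform <subcommand>" call in the
# helper scripts to the versioned binary name used by Schematics.
sed -i 's/terraform /terraform1.5 /g' ./*.sh
```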
## Example usage

### Deployment of ODF

The default `schematics.tfvars` is given below; change the parameter values in accordance with your requirements.
```hcl
ibmcloud_api_key = "" # Enter your API Key
cluster = "" # Enter the Cluster ID
region = "us-south" # Enter the region

# For add-on deployment
odfVersion = "4.19.0"

# For CRD Creation and Management
autoDiscoverDevices = "false"
billingType = "advanced"
clusterEncryption = "false"
hpcsBaseUrl = null
hpcsEncryption = "false"
hpcsInstanceId = null
hpcsSecretName = null
hpcsServiceName = null
hpcsTokenUrl = null
numOfOsd = "1"
ocsUpgrade = "false"
osdDevicePaths = null
osdSize = "512Gi"
osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier"
workerPools = null
workerNodes = null
encryptionInTransit = false
taintNodes = false
addSingleReplicaPool = false
ignoreNoobaa = false
disableNoobaaLB = false
enableNFS = false
useCephRBDAsDefaultStorageClass = false
resourceProfile = "balanced"
```
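At a minimum, fill in the three values marked with comments (`ibmcloud_api_key`, `cluster`, and `region`); the remaining values above already reflect a basic VPC deployment and can be left as they are unless your requirements differ.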
### Scale-Up of ODF

The following variables in the `schematics.tfvars` file can be edited:

* numOfOsd - To scale your storage
* workerNodes - To increase the number of worker nodes with ODF
* workerPools - To increase the number of storage nodes by adding more nodes through a worker pool

```hcl
# For CRD Management
numOfOsd = "1" -> "2"
workerNodes = null -> "worker_1_ID,worker_2_ID"
```
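After editing the values, re-apply the configuration; the changed values surface as modified `triggers` on the `updateCRD` `null_resource` in `addon/main.tf`, which re-runs `updatecrd.sh`:

```bash
terraform apply --var-file schematics.tfvars
```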
### Upgrade of ODF

The following variables in the `schematics.tfvars` file should be changed in order to upgrade the ODF add-on and the OcsCluster CRD:

* odfVersion - Specify the version you wish to upgrade to
* ocsUpgrade - Must be set to `true` to upgrade the CRD

```hcl
# For ODF add-on upgrade
odfVersion = "4.18.0" -> "4.19.0"

# For Ocscluster upgrade
ocsUpgrade = "false" -> "true"
```
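On IBM Cloud Schematics, once the upgrade has completed, remember to set `ocsUpgrade` back to `"false"` (locally the helper scripts reset it automatically via `sed`), using the same change notation as above:

```hcl
# After the upgrade completes (IBM Cloud Schematics only)
ocsUpgrade = "true" -> "false"
```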
## Examples

* [ODF Deployment & Management](https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples/openshift-data-foundation/deployment)
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->

## Requirements

| Name | Version |
|------|---------|
| terraform | ~> 0.14.8 |

## Providers

| Name | Version |
|------|---------|
| ibm | latest |
| kubernetes | latest |
## Inputs

| Name | Description | Type | Required | Default |
|------|-------------|------|----------|---------|
| ibmcloud_api_key | IBM Cloud API Key | `string` | yes | - |
| cluster | Name or ID of the cluster | `string` | yes | - |
| region | Region of the cluster | `string` | yes | - |
| odfVersion | Version of the ODF add-on | `string` | yes | 4.19.0 |
| osdSize | Size of the storage devices that you want to provision for the Object Storage Daemon (OSD) pods | `string` | yes | 512Gi |
| numOfOsd | The number of OSDs | `string` | yes | 1 |
| osdStorageClassName | Storage class to be used to provision block volumes for Object Storage Daemon (OSD) pods | `string` | yes | ibmc-vpc-block-metro-10iops-tier |
| autoDiscoverDevices | Set to true to automatically discover local disks | `string` | no | true |
| billingType | The billing type for the add-on, either `advanced` or `essentials` | `string` | no | advanced |
| clusterEncryption | To enable at-rest encryption of all disks in the storage cluster | `string` | no | false |
| hpcsEncryption | Set to true to enable HPCS encryption | `string` | no | false |
| hpcsBaseUrl | The HPCS base URL | `string` | no | null |
| hpcsInstanceId | The HPCS service ID | `string` | no | null |
| hpcsSecretName | The HPCS secret name | `string` | no | null |
| hpcsServiceName | The HPCS service name | `string` | no | null |
| hpcsTokenUrl | The HPCS token URL | `string` | no | null |
| ignoreNoobaa | Set to true if you do not want the MultiCloudGateway | `bool` | no | false |
| ocsUpgrade | Set to true to upgrade the OcsCluster | `string` | no | false |
| osdDevicePaths | IDs of the disks to be used for OSD pods if using local disks or a standard classic cluster | `string` | no | null |
| workerPools | A list of the worker pool names where you want to deploy ODF. Specify either workerPools or workerNodes to deploy ODF; if neither is specified, ODF is deployed on all nodes | `string` | no | null |
| workerNodes | Names of the worker nodes on which to install ODF. Leave blank to install ODF on all worker nodes | `string` | no | null |
| encryptionInTransit | To enable in-transit encryption. Enabling in-transit encryption does not affect existing mapped or mounted volumes. After a volume is mapped or mounted, it retains the encryption settings that were used when it was initially mounted. To change the encryption settings for existing volumes, they must be remounted one by one. | `bool` | no | false |
| taintNodes | Specify true to taint the selected worker nodes so that only OpenShift Data Foundation pods can run on those nodes. Use this option only if you limit ODF to a subset of nodes in your cluster. | `bool` | no | false |
| addSingleReplicaPool | Specify true to create a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability. | `bool` | no | false |
| disableNoobaaLB | Specify true to disable the NooBaa public load balancer. | `bool` | no | false |
| enableNFS | Enabling this allows you to create exports using Network File System (NFS) that can then be accessed internally or externally from the OpenShift cluster. | `bool` | no | false |
| useCephRBDAsDefaultStorageClass | Enable to set the Ceph RADOS Block Device (RBD) storage class as the default storage class during the deployment of OpenShift Data Foundation | `bool` | no | false |
| resourceProfile | Provides an option to choose a resource profile based on the availability of resources during deployment. Choose between `lean`, `balanced`, and `performance`. | `string` | yes | balanced |

Refer to https://cloud.ibm.com/docs/openshift?topic=openshift-deploy-odf-vpc&interface=ui#odf-vpc-param-ref for the full parameter reference.
## Note

* Change only the values of the variables within quotes; leave variables at their default values if you do not intend to set them.
* `workerNodes` takes a string of comma-separated names of the worker nodes on which you wish to enable ODF (see the example after this list).
* On `terraform apply --var-file=schematics.tfvars`, the add-on is enabled and the custom resource is created.
* During an ODF update, do not change the format of the `ocsUpgrade` variable; just change its value to `"true"`, keeping the quotation marks.
* During the `Upgrade of ODF` scenario on IBM Cloud Schematics, make sure to change the value of `ocsUpgrade` back to `false` afterwards. Locally this is handled automatically using `sed`.
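For example, a hypothetical `schematics.tfvars` entry limiting ODF to three worker nodes (the node names below are placeholders; on VPC clusters a worker's name is typically its primary private IP):

```hcl
# Placeholder worker names for illustration only
workerNodes = "10.240.0.4,10.240.0.5,10.240.64.4"
workerPools = null # specify either workerNodes or workerPools; leave both null to deploy on all nodes
```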
Lines changed: 11 additions & 0 deletions
#!/bin/bash

set -e

WORKING_DIR=$(pwd)

# Share the root variables and tfvars with the add-on module, then apply it to enable the add-on
cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
Lines changed: 11 additions & 0 deletions
#!/bin/bash

set -e

WORKING_DIR=$(pwd)

# Share the root variables and tfvars with the ocscluster module, then apply it to create the OcsCluster custom resource
cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
Lines changed: 17 additions & 0 deletions
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ibm_odf_addon/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
cd ${WORKING_DIR}/ibm_odf_addon
terraform init
# If no local state exists yet, apply first so the destroy below has state to work from
if [ -e ${WORKING_DIR}/ibm_odf_addon/terraform.tfstate ]
then
  echo "ok"
else
  terraform apply --auto-approve -var-file=${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
fi
terraform destroy --auto-approve -var-file=${WORKING_DIR}/ibm_odf_addon/schematics.tfvars
Lines changed: 19 additions & 0 deletions
#!/usr/bin/env bash

set -e

WORKING_DIR=$(pwd)

cp ${WORKING_DIR}/variables.tf ${WORKING_DIR}/ocscluster/variables.tf
cp ${WORKING_DIR}/schematics.tfvars ${WORKING_DIR}/ocscluster/schematics.tfvars
cd ${WORKING_DIR}/ocscluster
terraform init
# If no local state exists, import the existing OcsCluster into state (and apply) so that the destroy below can remove it cleanly
if [ -e ${WORKING_DIR}/ocscluster/terraform.tfstate ]
then
  echo "ok"
else
  terraform import -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars kubernetes_manifest.ocscluster_ocscluster_auto "apiVersion=ocs.ibm.io/v1,kind=OcsCluster,namespace=openshift-storage,name=ocscluster-auto"
  terraform apply --auto-approve -var-file ${WORKING_DIR}/ocscluster/schematics.tfvars
fi

terraform destroy --auto-approve -var-file=${WORKING_DIR}/ocscluster/schematics.tfvars
Lines changed: 33 additions & 0 deletions
terraform {
  required_providers {
    ibm = {
      source  = "IBM-Cloud/ibm"
      version = ">= 1.56.0"
    }
  }
}

provider "ibm" {
  region           = var.region
  ibmcloud_api_key = var.ibmcloud_api_key
}

# Enable the ODF add-on only ("odfDeploy":"false"); the storage cluster itself is
# created separately through the OcsCluster custom resource in the ocscluster folder.
resource "ibm_container_addons" "addons" {

  manage_all_addons = "false"
  cluster           = var.cluster

  addons {
    name            = "openshift-data-foundation"
    version         = var.odfVersion
    parameters_json = <<PARAMETERS_JSON
{
  "odfDeploy":"false"
}
PARAMETERS_JSON
  }
}
Lines changed: 89 additions & 0 deletions
# Creates the OcsCluster custom resource via createcrd.sh; removed via deletecrd.sh on destroy.
resource "null_resource" "customResourceGroup" {

  provisioner "local-exec" {
    when    = create
    command = "sh ./createcrd.sh"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sh ./deletecrd.sh"
  }

  depends_on = [
    null_resource.addOn
  ]
}

# Enables the openshift-data-foundation add-on via createaddon.sh; disabled via deleteaddon.sh on destroy.
resource "null_resource" "addOn" {

  provisioner "local-exec" {
    when    = create
    command = "sh ./createaddon.sh"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "sh ./deleteaddon.sh"
  }
}

# Re-runs updatecrd.sh whenever any of the CRD-related inputs change.
resource "null_resource" "updateCRD" {

  triggers = {
    numOfOsd             = var.numOfOsd
    ocsUpgrade           = var.ocsUpgrade
    workerNodes          = var.workerNodes
    workerPools          = var.workerPools
    osdDevicePaths       = var.osdDevicePaths
    taintNodes           = var.taintNodes
    addSingleReplicaPool = var.addSingleReplicaPool
    resourceProfile      = var.resourceProfile
    enableNFS            = var.enableNFS
  }

  provisioner "local-exec" {
    command = "sh ./updatecrd.sh"
  }

  depends_on = [
    null_resource.upgradeODF
  ]
}

# Re-runs updateodf.sh when the add-on version changes.
resource "null_resource" "upgradeODF" {

  triggers = {
    odfVersion = var.odfVersion
  }

  provisioner "local-exec" {
    command = "sh ./updateodf.sh"
  }

  depends_on = [
    null_resource.customResourceGroup, null_resource.addOn
  ]
}
