Commit 8247efc

Author: Lucas Yoon
RHDHPAI-581: adding UI components and connecting cluster info
Signed-off-by: Lucas Yoon <[email protected]>
1 parent 6aa826e commit 8247efc


43 files changed: +1817 −8 lines

LICENSE

Lines changed: 2 additions & 1 deletion

@@ -1,3 +1,4 @@
+
 Apache License
 Version 2.0, January 2004
 http://www.apache.org/licenses/
@@ -198,4 +199,4 @@
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
-limitations under the License.
+limitations under the License.
Lines changed: 137 additions & 0 deletions

# Remote cluster configuration with ai-rhdh-installer

Use this guide if you manage RHDH via the `ai-rhdh-installer` repository and want to enable remote cluster deployments. If you are not using the installer, see the "Bring your own cluster" guide at `documentation/app/template-remote-cluster.md`.

---

## Prerequisites

- Two OpenShift clusters:
  - Host cluster: runs RHDH, Argo CD, and Tekton
  - Remote cluster: receives the application deployment
- CLI access: `oc`/`kubectl` authenticated to both clusters, plus the `argocd` CLI
- Versions: RHDH 1.8 with OCP 4.17
- If deploying RHOAI-based apps on the remote cluster, install:
  - Node Feature Discovery (with a `NodeFeatureDiscovery` CR)
  - NVIDIA GPU Operator (with a `ClusterPolicy` CR)
  - OpenShift AI Operator (with a `DataScienceCluster` CR)

---

## Step 1: Configure RHDH with ai-rhdh-installer (preferred)

### Interactive setup

First, create a non-expiring service account token with view permissions on the **remote** cluster:

```bash
oc create serviceaccount rhdh-sa -n default

# Grant read-only access cluster-wide
oc create clusterrolebinding rhdh-sa-view \
  --clusterrole=view \
  --serviceaccount=default:rhdh-sa

# Create a Secret that yields a permanent token
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: rhdh-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: rhdh-sa
type: kubernetes.io/service-account-token
EOF

# Retrieve and print the token
oc get secret rhdh-sa-token -n default -o jsonpath='{.data.token}' | base64 --decode
```
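The secret stores the token base64-encoded under `.data.token`, which is why the last command above pipes through `base64 --decode`. The decode step itself can be sanity-checked locally with a made-up value (no cluster required):

```shell
# Round-trip a dummy value the same way the retrieval command decodes the token.
# "sha256~example-token" is a placeholder, not a real credential.
encoded="$(printf '%s' 'sha256~example-token' | base64)"
decoded="$(printf '%s' "$encoded" | base64 --decode)"
echo "$decoded"   # prints: sha256~example-token
```

When run against the real secret, the decoded value is the bearer token you paste into the installer prompt.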

Run the installer and answer the prompts to enable remote cluster support:

```bash
bash configure.sh
```

When prompted:

- "Would you like to enable remote cluster setup? (y/n):" → enter `y`
- Provide the remote cluster API URL (e.g., `https://api.<remote-cluster>:443`)
- Provide a remote cluster name
- Paste a service account token that has read access (view/cluster-reader)
- Choose a TLS verification preference (for labs you may skip TLS verification; in production, provide a CA)

The installer updates the dynamic plugins (Kubernetes/Topology, optionally Tekton) with the remote cluster. Then proceed to Argo CD cluster registration and namespace preparation.

## Step 2: Register the remote cluster in Argo CD (host cluster)

```bash
# 1) Inspect kubeconfig contexts on your workstation
oc config get-contexts

# 2) Use the remote cluster context and verify access
oc config use-context <remote-cluster-context-name>
oc get nodes

# 3) Ensure you're logged into the remote cluster with oc (OpenShift)
# If not logged in, authenticate with a token (adjust flags as needed)
oc whoami || oc login https://api.<remote-cluster>:443

# 4) Log into Argo CD (host cluster)
# Use --insecure if you don't have a valid TLS cert; add --sso if SSO is enabled
argocd login <argocd-url>

# 5) Register the remote cluster with Argo CD using its kubeconfig context
argocd cluster add <remote-cluster-context-name>

# 6) Verify Argo CD sees the cluster
argocd cluster list
```
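
`argocd cluster add` stores the connection details as a cluster Secret in Argo CD's namespace on the host cluster; the `server` value in that Secret is what the Software Template's remote API URL must match. A declarative sketch of such a Secret, following Argo CD's documented cluster-secret format (the namespace `openshift-gitops` and all bracketed values are placeholders for your environment):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster
  namespace: openshift-gitops   # Argo CD's namespace; adjust to your install
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: <remote-cluster-name>
  server: https://api.<remote-cluster>:443
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-certificate>"
      }
    }
```

Applying a Secret like this (or letting `argocd cluster add` create its equivalent) is what makes the cluster selectable as a deployment target.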

#### Rotate the Argo CD bearer token for an external cluster

```bash
# Run against the external cluster's kubeconfig context
oc config use-context <remote-cluster-context-name>

# Delete the argocd-manager token secret so a new one is created
oc delete secret argocd-manager-token-XXXXXX -n kube-system

# Re-add the cluster to rotate credentials
argocd cluster add <remote-cluster-context-name>
```

Further reading on permanent, non-expiring service account tokens with view access:

- Kubernetes: [Manually create a Secret for a service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-secret-for-a-service-account)
- Backstage: [Kubernetes configuration](https://backstage.io/docs/next/features/kubernetes/configuration/)
- Argo CD: [External cluster credentials](https://argo-cd.readthedocs.io/en/stable/operator-manual/security/#external-cluster-credentials)

## Step 3: Use the Software Template (Application Engineers)

- Import the desired template into RHDH
- Select Remote deployment in the wizard
- Enter the remote API URL exactly as it appears in the Argo CD cluster secret's `server` field
- Enter the namespace (the same name will be used for host-side Tekton PipelineRuns)

---

## Validation

- The Topology/Kubernetes plugins show the remote cluster's resources
- Argo CD has two apps:
  - `app` targets the remote cluster and syncs application workloads
  - `app-tekton` targets the host cluster and manages Tekton resources
- PipelineRuns execute on the host cluster; workloads appear in the remote namespace

---

## Troubleshooting

- Remote resources not appearing: verify the `pluginConfig` cluster entry and token
- Argo CD sync errors: check the cluster secret's `server`/token values and TLS settings
- Image pull errors: ensure pull secrets exist in the remote namespace

---

## Notes

- The same namespace name on host and remote is required in this iteration
- Multi-namespace remote deployments are out of scope and may be added later

documentation/app/mkdocs.yml

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ nav:
   - Overview: index.md
   - Prerequisites: template-prerequisites.md
   - Template Usage: usage.md
+  - Remote Cluster configuration: template-remote-cluster.md
   - Deployed Application: application.md
 plugins:
   - techdocs-core
documentation/app/template-remote-cluster.md

Lines changed: 192 additions & 0 deletions

## Bring your own cluster (remote deployments)

This guide explains how to deploy application workloads to a remote OpenShift cluster while keeping Tekton Pipelines and webhooks on the host (RHDH) cluster. The GitOps application is split into two Argo CD apps:

- `app`: deploys the application manifests to the destination cluster (host or remote)
- `app-tekton`: manages Tekton resources on the host cluster so all PipelineRuns execute there

### High-level behavior

- Tekton webhooks and PipelineRuns stay on the host cluster.
- When a remote cluster is selected, only the application runtime resources deploy to the remote cluster.
- The same namespace name is used on both clusters. In this initial iteration, choosing different namespaces across clusters is not supported.

### Who should use this

#### Platform Engineers

- Configure the remote cluster in the dynamic plugins (Topology and Kubernetes) so RHDH can read cluster resources.
- Register the remote cluster in Argo CD on the host cluster via a cluster secret.
- Ensure the remote namespace is prepared with the required secrets/permissions, or provide a bootstrap mechanism.

#### Application Engineers

- Use the Software Template wizard to select Remote cluster deployment.
- Provide the remote cluster API URL (it must match the Argo CD cluster secret's `server` value).
- Provide the namespace to deploy into (the same name will be used on the host for Tekton PipelineRuns).

---

## Prerequisites

- Two OpenShift clusters:
  - Host cluster: runs RHDH, Argo CD, and Tekton
  - Remote cluster: receives the application deployment
- CLI access: `oc`/`kubectl` authenticated to both clusters, plus the `argocd` CLI
- Versions: RHDH 1.8 with OCP 4.17
- If deploying RHOAI-based apps on the remote cluster, install:
  - Node Feature Discovery (with a `NodeFeatureDiscovery` CR)
  - NVIDIA GPU Operator (with a `ClusterPolicy` CR)
  - OpenShift AI Operator (with a `DataScienceCluster` CR)

---

## Step 1: Expose the remote cluster to the RHDH UI via dynamic plugins (Topology/Kubernetes)

Before installing/configuring RHDH using the ai-rhdh-installer, add the remote cluster to the installer's dynamic plugin configuration so the Topology and Kubernetes plugins can read it.

On the remote cluster, create a service account and bind permissions.

#### Create a service account token

For background on permanent, non-expiring service account tokens with view access, see:

- Kubernetes: [Manually create a Secret for a service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-secret-for-a-service-account)
- Backstage: [Kubernetes configuration](https://backstage.io/docs/next/features/kubernetes/configuration/)
```bash
oc create serviceaccount rhdh-sa -n default

# Grant read-only access cluster-wide
oc create clusterrolebinding rhdh-sa-view \
  --clusterrole=view \
  --serviceaccount=default:rhdh-sa

# Create a Secret that yields a permanent token
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: rhdh-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: rhdh-sa
type: kubernetes.io/service-account-token
EOF

# Retrieve and print the token
oc get secret rhdh-sa-token -n default -o jsonpath='{.data.token}' | base64 --decode
```

Add a cluster entry to the installer's plugin config. Example shapes for the cluster locator configuration:

Minimal entry (used by some dynamic plugins):

```yaml
- authProvider: serviceAccount
  name: <remote-cluster-name>
  serviceAccountToken: <paste-token-here>
  skipTLSVerify: true
  url: https://api.<remote-cluster>:443
```

Full Backstage Kubernetes backend plugin config using `clusterLocatorMethods`:

```yaml
pluginConfig:
  kubernetes:
    serviceLocatorMethod:
      type: multiTenant
    clusterLocatorMethods:
      - type: config
        clusters:
          # Your remote cluster info goes here
          - url: https://api.<remote-cluster>:443
            name: <remote-cluster-name>
            authProvider: serviceAccount
            serviceAccountToken: ${K8S_SA_TOKEN}
            skipTLSVerify: true
    customResources:
      - group: route.openshift.io
        apiVersion: v1
        plural: routes
      - group: tekton.dev
        apiVersion: v1
        plural: pipelineruns
      - group: tekton.dev
        apiVersion: v1
        plural: taskruns
```

Notes:

- For more options (e.g., `catalog`, `gke`, metrics, dashboards), see the Backstage docs: [Backstage Kubernetes configuration](https://backstage.io/docs/next/features/kubernetes/configuration/).
- Avoid storing tokens in catalog annotations; prefer the config locator method with `serviceAccountToken`.

Then proceed with the installer's configure step to apply changes.

---

## Step 2: Register the remote cluster in Argo CD (host cluster)

Argo CD must know how to reach the remote cluster to deploy the application there. You can register the cluster either via the Argo CD CLI (recommended) or by creating a cluster Secret (manual).

### Register via Argo CD CLI

```bash
# 1) Inspect kubeconfig contexts on your workstation
kubectl config get-contexts

# 2) Use the remote cluster context and verify access
kubectl config use-context <remote-cluster-context-name>
kubectl get nodes

# 3) Ensure you're logged into the remote cluster with oc (OpenShift)
# If not logged in, authenticate with a token (adjust flags as needed)
oc whoami || oc login https://api.<remote-cluster>:443

# 4) Log into Argo CD (host cluster)
# Use --insecure if you don't have a valid TLS cert; add --sso if SSO is enabled
argocd login <argocd-url>

# 5) Register the remote cluster with Argo CD using its kubeconfig context
argocd cluster add <remote-cluster-context-name>

# 6) Verify Argo CD sees the cluster
argocd cluster list
```
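
For the manual route mentioned above, Argo CD recognizes any Secret in its namespace labeled `argocd.argoproj.io/secret-type: cluster` as an external cluster. A sketch following Argo CD's documented cluster-secret format (the namespace `openshift-gitops` and all bracketed values are placeholders for your environment):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster
  namespace: openshift-gitops   # Argo CD's namespace; adjust to your install
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: <remote-cluster-name>
  server: https://api.<remote-cluster>:443
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-certificate>"
      }
    }
```

The `server` value here is the same string Application Engineers must enter in the template wizard.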

### Rotate the Argo CD bearer token for an external cluster

```bash
# Run against the external cluster's kubeconfig context
kubectl config use-context <remote-cluster-context-name>

# Delete the argocd-manager token secret so a new one is created
kubectl delete secret argocd-manager-token-XXXXXX -n kube-system

# Re-add the cluster to rotate credentials
argocd cluster add <remote-cluster-context-name>
```

See: [Argo CD — External cluster credentials](https://argo-cd.readthedocs.io/en/stable/operator-manual/security/#external-cluster-credentials)

## Step 3: Generate and import the template

- Regenerate templates in your fork if needed.
- Import the updated template into RHDH.
- In the template wizard, select Remote cluster deployment and provide the remote API URL that matches the Argo CD secret's `server` value.
- Choose the target namespace (the same name will be used on the host for Tekton PipelineRuns).

---

## Validation

- The Topology/Kubernetes plugins show the remote cluster's resources
- Argo CD has two apps:
  - `app` targets the remote cluster and syncs application workloads
  - `app-tekton` targets the host cluster and manages Tekton resources
- PipelineRuns execute on the host cluster; workloads appear in the remote namespace

---

## Notes and limitations

- The same namespace name on host and remote is required for this iteration.
- Multiple remote namespaces per template are out of scope and may require future enhancements.
- This feature is evolving; comprehensive end-user documentation improvements are being tracked upstream.

scripts/util

Lines changed: 15 additions & 0 deletions

@@ -11,6 +11,8 @@ function apply-configurations() {
     TEMPLATE_TECHDOC_TYPE=$3

     DOC_SRC_ROOT="$ROOT_DIR/documentation/$TEMPLATE_TECHDOC_TYPE"
+
+    # template-prerequisites.md documentation
     SAMPLE_DOC_SRC="$DOC_SRC_ROOT/$SAMPLENAME"
     PREREQ_DOC_SRC="$DOC_SRC_ROOT/template-prerequisites.md"
     if [ -d $DOC_SRC_ROOT ]; then
@@ -23,6 +25,19 @@ function apply-configurations() {
         cp -r $DOC_SRC_ROOT/mkdocs.yml $DEST/mkdocs.yml
     fi

+    # template-remote-cluster.md documentation
+    SAMPLE_DOC_SRC="$DOC_SRC_ROOT/$SAMPLENAME"
+    REMOTE_DOC_SRC="$DOC_SRC_ROOT/template-remote-cluster.md"
+    if [ -d $DOC_SRC_ROOT ]; then
+        if [ -d $SAMPLE_DOC_SRC ]; then
+            cp -r $SAMPLE_DOC_SRC $DEST/docs
+            if [ -f $REMOTE_DOC_SRC ]; then
+                cp $REMOTE_DOC_SRC $DEST/docs/template-remote-cluster.md
+            fi
+        fi
+        cp -r $DOC_SRC_ROOT/mkdocs.yml $DEST/mkdocs.yml
+    fi
+
     cp $SKELETON_DIR/template.yaml $DEST/template.yaml

     # get default env variables
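
The added block mirrors the existing prerequisites handling but leaves its variable expansions unquoted, so paths containing spaces would word-split. A standalone sketch of the same copy logic with quoting (the function name and demo layout are hypothetical, not part of the repo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the doc-copy pattern from apply-configurations, with every
# expansion quoted. Variable names mirror the diff; the demo paths are made up.
copy_remote_doc() {
  local doc_src_root="$1" sample_name="$2" dest="$3"
  local sample_doc_src="$doc_src_root/$sample_name"
  local remote_doc_src="$doc_src_root/template-remote-cluster.md"

  if [ -d "$doc_src_root" ]; then
    if [ -d "$sample_doc_src" ]; then
      cp -r "$sample_doc_src" "$dest/docs"
      if [ -f "$remote_doc_src" ]; then
        cp "$remote_doc_src" "$dest/docs/template-remote-cluster.md"
      fi
    fi
    cp "$doc_src_root/mkdocs.yml" "$dest/mkdocs.yml"
  fi
}

# Demo against a scratch layout
root="$(mktemp -d)"
mkdir -p "$root/src/sample" "$root/dest"
echo '# remote' > "$root/src/template-remote-cluster.md"
echo 'nav: []' > "$root/src/mkdocs.yml"
copy_remote_doc "$root/src" sample "$root/dest"
ls "$root/dest/docs"   # prints: template-remote-cluster.md
```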
