Dual cluster FRIDGE deployment #137
base: main
Conversation
…e Kubernetes Service (AKS) cluster using Pulumi and Python.
How much of this is copied from the existing Pulumi project?
Wouldn't it make more sense to replace that?
It will do, once I'm happy with it :)
Although I do wonder if it's helpful to keep a single-cluster spec around for some of the local dev work - it's a lot easier to run a single cluster on a VM locally than to run two...
I think that is running just the isolated cluster part in a K3s instance. For local dev you don't need the access cluster or the lockdown. That way, at least that part is the same thing locally and when deployed.
Someone needs to tell you about this version control thing. You don't need to make a copy of all your code when you want to make changes any more 😜
This PR splits the FRIDGE deployment across two clusters:

- The `access` cluster contains an SSH server and the Harbor container registry.
- The `isolated` cluster contains the rest of the FRIDGE deployment.

This complements PR #131, which does the initial deployment of the two AKS clusters.
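As a rough illustration of how a split like this can be wired up in Pulumi Python (a hedged sketch, not the code in this PR): a single program can hold two `pulumi_kubernetes` providers, one per cluster kubeconfig, and route each component to the matching cluster. The config keys and release names below are assumptions.

```python
import pulumi
import pulumi_kubernetes as k8s

config = pulumi.Config()

# One provider per cluster, keyed on the kubeconfigs produced by the
# cluster deployment (hypothetical config keys).
access_provider = k8s.Provider(
    "access", kubeconfig=config.require_secret("accessKubeconfig")
)
isolated_provider = k8s.Provider(
    "isolated", kubeconfig=config.require_secret("isolatedKubeconfig")
)

# Harbor lives on the access cluster.
harbor = k8s.helm.v3.Release(
    "harbor",
    chart="harbor",
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(repo="https://helm.goharbor.io"),
    namespace="harbor",
    create_namespace=True,
    opts=pulumi.ResourceOptions(provider=access_provider),
)

# The rest of the FRIDGE components are declared the same way, but with
# opts=pulumi.ResourceOptions(provider=isolated_provider).
```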
The intended deployment pattern here is for the TRE Operator to first deploy the access cluster components, then set up the SSH proxy to allow them to deploy the isolated cluster components.
That will then be followed by a final infrastructure configuration step to limit network traffic more completely.
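One way that final lock-down step could be expressed, sticking with Pulumi, is as network security rules on the isolated cluster's network. This is a minimal sketch assuming `pulumi-azure-native`; the resource group, NSG name, and CIDR are illustrative placeholders, not values from this PR.

```python
import pulumi_azure_native as azure_native

# Allow inbound traffic only from the access cluster's subnet (assumed CIDR)...
allow_access = azure_native.network.SecurityRule(
    "allow-access-cluster",
    resource_group_name="fridge-rg",             # hypothetical
    network_security_group_name="isolated-nsg",  # hypothetical
    priority=100,
    direction="Inbound",
    access="Allow",
    protocol="*",
    source_address_prefix="10.0.1.0/24",         # access cluster subnet (assumed)
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="*",
)

# ...and deny everything else, overriding Azure's default AllowVnetInBound rule.
deny_rest = azure_native.network.SecurityRule(
    "deny-everything-else",
    resource_group_name="fridge-rg",
    network_security_group_name="isolated-nsg",
    priority=200,
    direction="Inbound",
    access="Deny",
    protocol="*",
    source_address_prefix="*",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="*",
)
```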