diff --git a/README.md b/README.md index bf05d38..f428d5f 100644 --- a/README.md +++ b/README.md @@ -1,293 +1,563 @@ -# Part of Smart-home security demo using AI - LLMs +# Hydra: IoT Device Emulation and Isolation Framework [![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/hydra)](https://artifacthub.io/packages/search?repo=hydra) -Hydra allows emulatoion of IoT devices that provides one or two isolated enviroments, both running linux. Each enviroment runs in a seperate VM providing independent resource anf configuration management. -Each enviroment is expected to run containers. The emulated "host" enviroment runs a debian cloud image where k3s, crismux and containerd are installed and configured. -The "isolated" enciroment runs either a debian cloud image with containerd, or a rimdworkspace image. The "isolated" enviroment is designed to be managed by a "host" enviroment. -The "isolated" enviroment runs a proxy enabling TCP access to containerd (csi-grpc-proxy). Even though the "host and isolated environment are configured to work together, they can be started and used seepratedly, any. - -Hydra is composed by two separate modules: isolated-vm and add-crismux. Isolated-vm starts a VM with desired properties. Isolated-vm can be used as stand-alone by directly running start-vm.sh script or using auxiliary scripts that configures start-vm.sh for a specific use, as docker container or run using a helm chart. -The second module install crismux in a k3s/k8s installation. It also installs a "nelly" runtime class. This runtime class will direct kubelet to use the isolated VM to run the container instead of running it on the host. - -Isolated-vm VM utilizes KVM or HVF acceleration if available. -The scripts run under MacOS, Linux, docker and k3s. 
- -# Requirements - -## MacOS - - Homebrew (strongly suggested) - - Virtualization engine (either or both) - * KVM - - brew install qemu - * krunkit - - brew install podman - - brew install krunkit - - wget - - brew install qemu - - - for testing - - cri-tools - - brew install cri-tools) - -## Linux - - KVM - - wget - - mkisofs - -## Docker - - docker installed on the host - -## K3s - - running installation of K3s (can be installed using k3sup) - - helm - -# Usage - - -The start-vm.sh script had the following environment vaiables that configures what and how the VM will run. - - -THe following variables configures the script: - -| Variable | Usage | Default value | -| -------- | ----- | ------------- | -| `DRY_RUN_ONLY` | If > 0 will print the command line for the VM and exit | 0 | -| `DEBUG` | If > 0 will print debug for the script | 0 | -| `DISABLE_9P_KUBELET_MOUNTS` | If > 0 do not enable mounting of /var/lib/kubelet and /var/lib/pods | 0 | -| `ADDITIONAL_9P_MOUNTS` | additional mounts format `|[$|]` | | -| `COPY_IMAGE_BACKUP` | if > 0 preserve a copy of the image and start form a copy of that image if it exists | 0 | -| `ALWAYS_REUSE_DISK_IMAGE` | if > 0 reuse existing disk image even if configuration has changed | 0 | -| `DEFAULT_KERNEL_VERSION` | kernel version to install | `6.12.12+bpo` | -| `KERNEL_VERSION` | full version to install if a different kernel is required | `linux-image-${DEFAULT_KERNEL_VERSION}-${ARCH}` | -| `ENABLE_VSOCK_LINUX` | Enable VSOCK for linux if > 0 | 0 | -| `DEFAULT_DIR_IMAGE` | Where to store the downloaded image | `$(pwd)/image` | -| `DEFAULT_DIR_K3S_VAR_DARWIN` | Where to point the 9p mounts if running on MacOS | `$(pwd)/k3s-var` | -| `DEFAULT_DIR_K3S_VAR_LINUX_NON_ROOT` | Where to point the 9p mounts if running on Linux as a non-root user | `$(pwd)/k3s-var` | -| `DEFAULT_DIR_K3S_VAR_LINUX_ROOT` | Where to point the 9p mounts if running on linux machine as root (or inside a container ) | | -| `DEFAULT_DIR_K3S_VAR_OTHER` | Where to 
point the 9p mounts if running on other OS machine | `$(pwd)/k3s-var` | -| `DEFAULT_IMAGE` | QCOW image to use as base | `debian-12-genericcloud-${ARCH}-20250316-2053.qcow2` | -| `DEFAULT_IMAGE_SOURCE_URL` | where to download QCOW image | `https://cloud.debian.org/images/cloud/bookworm/20250316-2053/` | -| `DEFAULT_DIR_IMAGE` | Directory to use to store image and artifacts | `$(pwd)/image` | -| `DEFAULT_KVM_DARWIN_CPU` | # CPUS allocated to the VM when running in MacOS | 2 | -| `DEFAULT_KVM_DARWIN_MEMORY` | DRAM allocated to the VM to the VM when running in MacOS | 2 | -| `DEFAULT_KVM_LINUX_CPU` | # CPUS allocated to the VM when running in Linux/container | 2 | -| `DEFAULT_KVM_LINUX_MEMORY` | DRAM allocated to the VM to the VM when running in Linux/container | 2 | -| `DEFAULT_KVM_UNKNOWN_CPU` | # CPUS allocated to the VM when running in unknown OS | 2 | -| `DEFAULT_KVM_UNKNOWN_MEMORY` | DRAM allocated to the VM to the VM when running in unknown OS | 2 | -| `DEFAULT_KVM_DISK_SIZE` | Maximum size of QCOW disk | 3 | -| `DEFAULT_KVM_DARWIN_BIOS` | bios to boot (UEFI) when running under MacOS | `/opt/homebrew/Cellar/qemu/9.2.2/share/qemu/edk2-${ARCH}-code.fd` | -| `DEFAULT_KVM_LINUX_v9_BIOS` | bios to boot (UEFI) when running under Linux/container with KVM v9x | | -| `DEFAULT_KVM_LINUX_v7_BIOS` | bios to boot (UEFI) when running under Linux/container with KVM v7x | `/usr/share/qemu-efi-aarch64/QEMU_EFI.fd` | -| `DEFAULT_KVM_LINUX_v7_BIOS`(aarch64) | bios to boot (UEFI) when running under Linux/container with KVM v7x | `/usr/share/AAVMF/AAVMF_CODE.fd` | -| `DEFAULT_KVM_LINUX_v7_BIOS`(amd64) | bios to boot (UEFI) when running under Linux/container with KVM v7x | `/usr/share/ovmf/OVMF.fd` | -| `DEFAULT_KVM_UNKNWON_BIOS` | bios to boot (UEFI) when running under unknown OS | | -| `DEFAULT_KVM_HOST_SSHD_PORT` | TCP port to be used on the host to access port 22 on VM | 5555 | -| `DEFAULT_KVM_HOST_CONTAINERD_PORT` | TCP port to be used on the host to access port 35000 
(cs-grpc-proxy) on VM | 35000 | -| `DEFAULT_CSI_GRPC_PROXY_URL` | URL to get csi-grpc-proxy binary | `https://github.com/democratic-csi/csi-grpc-proxy/releases/download/v0.5.6/csi-grpc-proxy-v0.5.6-linux- `| -| `KVM_CPU_TYPE` | CPU type | Use "host" if accelerated, otherise use "cortex-a76" or "qemu64-v1" | -| `KVM_CPU` | # cpus to allocate | `DEFAULT_KVM__CPU`| -| `KVM_MEMORY` | DRAM to allocate | `DEFAULT_KVM__MEMORY` | -| `KVM_BIOS` | BIOS to use | `DEFAULT_KVM__BIOS` | -| `KVM_MACHINE_TYPE` | KVM machine type | use "virt" or "pc" | -| `VM_USERNAME` | Usename to created at VM | hailhydra | -| `VM_SALT` | Salt to be used when creating the encrypted password | 123456 | -| `VM_PASSWORD` | Cleartext password to be used | hailhydra | -| `VM_PASSWORD_ENCRYPTED` | Encrypted password to be used, overwrites the cleartext password | | -| `VM_HOSTNAME` | Hostname | vm-host | -| `VM_SSH_AUTHORIZED_KEY` | ssh public key to add to authorized\_key for the user VM_USERNAME | | -| `VM_SSH_KEY_FILENAME` | Use this file to load the public key into authorized_key | | -| `RUN_BARE_KERNEL` | if > 0 then Use kernel and initrd instead of cloud image | 0 | -| `DEFAULT_KVM_PORTS_REDIRECT` | format is `:[;:]` | | -| `DEFAULT_RIMD_ARTIFACT_URL` | where to download the artifacts (kernel + initrd) | https://gitlab.arm.com/api/v4/projects/576/jobs/146089/artifacts | -| `RIMD_ARTIFACT_URL_USER` | User to authenticate to get artifacts from URL | | -| `RIMD_ARTIFACT_URL_PASS` | Password to authenticate to get artifacts from URL | | -| `RIMD_ARTIFACT_URL_TOKEN` | Token to authenticate to get artifacts from URL | | -| `DEFAULT_RIMD_ARTIFACT_FILENAME` | Filename to use when storing the downloaded file | artifacts.zip | -| `DEFAULT_RIMD_KERNEL_FILENAME` | Filename that contains the kernel to run | `final_artifact/Image.gz` | -| `DEFAULT_RIMD_IMAGE_FILENAME` | Filename that contains the initrd to run | `final_artifact/initramfs.linux_arm64.cpio` | -| `DEFAULT_RIMD_FILESYSTEM_FILENAME` | Filename that 
contains the read/write filesystem for the VM | `final_artifact/something.qcow2` | -| `ADDITIONAL_KERNEL_COMMANDLINE` | When running a bare kernel add this to kernel command line | | -| `DISABLE_CONTAINERD_CSI_PROXY` | Disable installation of containerd and csi proxy | 0 | -| `ENABLE_K3S_DIOD` | Enable installation of k3s and diod on VM | 0 | -| `ENABLE_VIRTIO_GPU` | Enable Vulkan acceleration on VM (not ready yet) | 0 | -| `DEFAULT_VIRTIO_GPU_VRAM` | How much ram to enable for GPU | 4 | -| `ADDITIONAL_9P_MOUNTS` | Add this additional 9P mounts to VM | | -| `EXTERNAL_9P_KUBELET_MOUNTS` | Use tcp transport instead of local transport for 9P mounts | 0 | -| `DEFAULT_KVM_HOST_DIOD_PORT` | Host port to use for access of DioD on the VM | 30564 | -| `DEFAULT_KVM_HOST_CONTAINERD_PORT` | Host port to use for access of containerd on the VM | 35000 | -| `DEFAULT_KVM_HOST_RIMD_PORT` | Host port to use for RIMD server on the VM | 35001 | -| `DEFAULT_CSI_GRPC_PROXY_URL` | Where to get csi-grpc-proxy | `https://github.com/democratic-csi/csi-grpc-proxy/releases/download/v0.5.6/csi-grpc-proxy-v0.5.6-linux-` | -| `K3S_VERSION_INSTALL` | Version of K3s to install | `v1.32.6+k3s1` | -| `ENABLE_KRUNKIT` | Use krunkit instead of KVM | 0 | -| `KRUNKIT_HTTP_PORT` | Port of HTTP interface of krunkit | 61800 | -| `GVPROXY_HTTP_PORT` | Port of HTTP interface of gvproxy | 61801 | -| `DEFAULT_DIR_TMP_SOCKET` | Directory to use for sockets (path < 100 bytes) | `/tmp/image-$$` | -| `DEFAULT_NETWORK_PREFIX` | Network prefix used for VM interface | 10.0.2 | -| `DEFAULT_DEVICE_IP` | IP used on the VM interface | `${DEFAULT_NETWORK_PREFIX}.15` | -| `DEFAULT_GATEWAY_IP` | IP use for the gateway | `${DEFAULT_NETWORK_PREFIX}.2` | -| `DEFAULT_DNS_IP` | IP used for DNS (inside VM) | `${DEFAULT_NETWORK_PREFIX}.3` | - -The directory `` will be use to store all the files created or downloaded. Files downloaded will be cached locally and only redownloaded if a new epty version of that file is required. 
-SSH will be availabe at port 5555 (if the envviroment variable is not changed). and Containerd CRI will be available at port 35000. -A csi-proxy or crismux running on the host can be used to convert that port to a socket if required. - -The image will be resized automatically according to the sizes provided. The image will not be reduced in size. - -The VM will use acceleration if available. - -The following scripts are provided so specific enviroments can be easily run. - -"run-host.sh" uses QEMU and run a host enviroment with k3s/crismux and containerd. -"run-isolated.sh" uses QEMU and run an isolated enviroment using debian. -"run-isolated-bare.sh" uses QEMU and run an isolated enviroment from rimdworkspacve -"run-isolated-krunkit-krun.sh' uses krunkit and gvproxy to run an isolated environment using debian -"run-isolated-krunkit-krun-bare.sh" used krunkit and gvproxy to run an isolated environment using rimdworspace. - -Any host can be connected with any isolated environment. More than one host or more than one isolated environemt is not supported. - -## MacOSo - -Krunkit is only supported on MacOS. On a M4 machine both SME and vulkan are supported as acceleration. - -### Crismux - -## Linux - -clone the repository -``` -git clone https://github.com/smarter-project/hydra -``` +Hydra is a comprehensive framework for emulating IoT devices with isolated execution environments. It provides one or two isolated Linux environments, each running in separate virtual machines with independent resource and configuration management. Designed as part of a smart-home security demonstration using AI and LLMs, Hydra enables sophisticated container orchestration scenarios with strong isolation guarantees. 
+
+## Overview
-### Isolated-vm
+Hydra enables the creation of isolated execution environments for containers, well suited to:
-#### TL;DR
+- **Smart Home Security Research**: Simulate IoT devices in isolated virtual machines
+- **Multi-Container Orchestration**: Run containers in separate VMs with Kubernetes integration
+- **Security Testing**: Test container isolation and security boundaries
+- **IoT Device Emulation**: Emulate complete device stacks in virtual environments
+- **Edge Computing Research**: Prototype edge computing scenarios with VM-based isolation
-The terminal will output VM console messages and the last message should be a login prompt. When running the script directly this will be printed in the current terminal that the script is running. Use `VM_USERNAME`, `VM_PASSWORD` to login. Ssh and csi-grpc-proxy interfaces are available through the network.
+### Key Features
-Two options to exit the VM after it has been started.
+- **Dual Environment Architecture**: Host and isolated environments, each in dedicated VMs
+- **Kubernetes Integration**: Full k3s support with Crismux for multi-containerd orchestration
+- **Cross-Platform Support**: Runs on macOS (Apple Silicon & Intel), Linux, Docker, and Kubernetes
+- **Multiple Virtualization Backends**: QEMU/KVM, HVF (Hypervisor.framework), and Krunkit
+- **SME and Vulkan Acceleration**: Krunkit supports SME2 on Apple M4 and Vulkan on all Apple Silicon platforms; QEMU supports Vulkan acceleration on Linux
+- **Container Runtime Isolation**: Each VM runs containerd independently +- **Network Configuration**: Flexible networking with port forwarding and CSI proxy +- **Multi-VM Orchestration**: Launch multiple VMs concurrently (see `src/multi-vm/`) -+ stop the hypervisor by typing "control-a" and "c" and at the hypervisor prompt, type "quit" -+ login to the VM using the `VM_USERNAME`, `VM_PASSWORD` and execute "`sudo shutdown -h 0" +## Architecture -Run the script start-vm.sh to create the VM using debian cloud image ``` -cd hydra/src/isolated-vm -./start-vm.sh +┌────────────────────────────────────────────────┐ +│ Physical Host │ +│ │ +│ ┌─────────────────────────────────────┐ │ +│ │ Host Environment (VM) │ │ +│ │ ┌──────────────────────────────┐ │ │ +│ │ │ k3s / Kubernetes │ │ │ +│ │ │ Crismux │ │ │ +│ │ │ Containerd (host runtime) │ │ │ +│ │ │ Kubelet │ │ │ +│ │ └──────────────────────────────┘ │ │ +│ │ │ │ │ +│ │ │ Runtime Class: "nelly" │ │ +│ └───────────┼─────────────────────────┘ │ +│ │ │ +│ ┌───────────▼─────────────────────────┐ │ +│ │ Isolated Environment (VM) │ │ +│ │ ┌──────────────────────────────┐ │ │ +│ │ │ Containerd (isolated) │ │ │ +│ │ │ csi-grpc-proxy │ │ │ +│ │ │ (TCP → Unix Socket) │ │ │ +│ │ └──────────────────────────────┘ │ │ +│ │ │ │ +│ │ Containers run here with │ │ +│ │ isolation from host environment │ │ +│ └─────────────────────────────────────┘ │ +└────────────────────────────────────────────────┘ ``` -It will start a VM using local directory image. If run as root (linux) it will also try to share the directories `/var/lib/kubelet` and `/var/log/pods`. +### Components -Run the script start-vm.sh to create the VM using the kernel/initrd instead of cloud image -``` -cd hydra/src/isolated-vm -RUN_BARE_KERNEL=1 RIMD_ARTIFACT_URL_TOKEN= ./start-vm.sh +1. 
**Isolated-VM** (`src/isolated-vm/`): Core VM creation and management
+   - QEMU/KVM-based virtualization
+   - Debian cloud images or RIMDworkspace support
+   - Cloud-init configuration
+   - Port forwarding (SSH, Containerd, RIMD)
+   - 9P filesystem mounts for kubelet integration
-## MacOS
+2. **Add-Crismux** (`src/add-crismux/`): Kubernetes multi-containerd support
+   - Crismux installation in k3s/k8s
+   - "nelly" runtime class configuration
+   - Enables kubelet to route containers to isolated VMs
-clone the repository
+3. **Multi-VM** (`src/multi-vm/`): Orchestration for multiple VMs
+   - YAML-based configuration
+   - Parallel VM execution
+   - Network configuration and IP management
+   - See `src/multi-vm/README.md` for details
+
+## Quick Start
+
+### Prerequisites
+
+#### macOS
+
+```bash
+# Install Homebrew if needed
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+
+# Install QEMU and dependencies
+brew install qemu wget cri-tools
+
+# Optional: For Krunkit support (macOS only, M4 recommended)
+brew install podman krunkit
+```
-git clone https://github.com/smarter-project/hydra
+
+#### Linux
+
+```bash
+# Install QEMU/KVM
+sudo apt-get update
+sudo apt-get install -y qemu-kvm qemu-utils wget genisoimage
+
+# Give your user access to KVM
+sudo usermod -aG kvm $USER
+# Log out and back in for group changes to take effect
+```
-### Isolated-vm
+### Basic Usage
-#### TL;DR
+1. **Clone the repository**:
-The terminal will output VM console messages and the last message should be a login prompt. When running the script directly this will be printed in the current terminal that the script is running. Use `VM_USERNAME`, `VM_PASSWORD` to login. Ssh and csi-grpc-proxy interfaces are available through the network.
+
+   ```bash
+   git clone https://github.com/smarter-project/hydra
+   cd hydra
+   ```
-Two options to exit the VM after it has been started.
+
+2. 
**Start an isolated VM**:
+
+   ```bash
+   cd src/isolated-vm
+   ./start-vm.sh
+   ```
+
+3. **Access the VM**:
+
+   ```bash
+   # SSH into the VM (default port 5555)
+   ssh hailhydra@localhost -p 5555
+   # Password: hailhydra
+
+   # Or use the containerd endpoint
+   # Configure csi-grpc-proxy first (see Testing section)
+   ```
+
+4. **Start multiple VMs** (see multi-vm documentation):
+
+   ```bash
+   cd src/multi-vm
+   sudo ./start-multi-vm.sh
+   ```
+
+## Five-Minute Quick Start
+
+Get Hydra running in under 5 minutes:
+
+### Single VM Quick Start
+
+```bash
+# 1. Clone and enter directory
+git clone https://github.com/smarter-project/hydra
 cd hydra/src/isolated-vm
+
+# 2. Start VM (downloads image on first run)
 ./start-vm.sh
+
+# 3. In another terminal, SSH into the VM
+ssh hailhydra@localhost -p 5555
+# Password: hailhydra
+```
-It will start a VM using local directory image. If run as root (linux) it will also try to share the directories `/var/lib/kubelet` and `/var/log/pods`.
+The VM will boot and be ready in about 2-3 minutes. You'll see the login prompt in the terminal running `start-vm.sh`.
-Run the script start-vm.sh to create the VM using the kernel/initrd instead of cloud image
-```
-cd hydra/src/isolated-vm
-RUN_BARE_KERNEL=1 RIMD_ARTIFACT_URL_TOKEN= ./start-vm.sh
+### Multiple VMs Quick Start
-#### Testing
+
+```bash
+# 1. Navigate to multi-vm directory
+cd src/multi-vm
-Use csi-grpc-proxy running on the host to convert from a tcp port to an unix socket
-```
-wget https://github.com/democratic-csi/csi-grpc-proxy/releases/download/v0.5.6/csi-grpc-proxy-v0.5.6-darwin-arm64
-chmod u+x csi-grpc-proxy-v0.5.6-darwin-arm64
-BIND_TO=unix:///tmp/socket-csi PROXY_TO=tcp://127.0.0.1:35000 ./csi-grpc-proxy-v0.5.6-darwin-arm64 &
-```
+
+# 2. 
Edit vm-config.yaml to configure your VMs
+# 3. Start all VMs
+sudo ./start-multi-vm.sh
-Now you can run crictl to send commands to containerd running on the isolated enviroment. Start/stop/list containers, download/remove images, etc.
-This webpage has examples of how to use crictl `https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/`
-```
-crictl --runtime-endpoint unix:///tmp/socket-csi ps
+# 4. SSH into VMs (ports configured in vm-config.yaml)
+ssh hydra@localhost -p 5555 # VM1
+ssh hydra@localhost -p 5556 # VM2
+```
-### Crismux (needed if using kubernetes)
+### Kubernetes Quick Start
-Run the script install_crismux.sh to enable crismux
-```
-cd hydra/src/add-crismux
+```bash
+# 1. Install k3s (if not installed)
+curl -sfL https://get.k3s.io | sh -
+
+# 2. Install Crismux
+cd src/add-crismux
 ./install_crismux.sh install
+
+# 3. Start isolated VM (in another terminal)
+cd ../isolated-vm
+sudo ./start-vm.sh
+
+# 4. Create a pod with nelly runtime class
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nelly-test
+spec:
+  runtimeClassName: nelly
+  containers:
+  - name: nginx
+    image: nginx:latest
+EOF
+```
-    hydra/hydra
+
+**Headless VM**:
+```bash
+KVM_NOGRAPHIC="-nographic" ./start-vm.sh &
+```
+
+**RIMDworkspace VM**:
+```bash
+RUN_BARE_KERNEL=1 \
+RIMD_ARTIFACT_URL_TOKEN="your-token" \
+./start-vm.sh
+```
---set "configuration.local_node_image_dir="
+
+### Crismux Integration
+
+Crismux enables Kubernetes to use multiple containerd instances, allowing containers to run in isolated VMs.
+
+#### Installation
+
+```bash
+cd src/add-crismux
+./install_crismux.sh install
+```
-to store images at the container and not on the node, because the container filesystem is not persistent, all files will be lost if the container is stopped.
+This installs:
+- Crismux in your k3s/k8s cluster
+- "nelly" runtime class for routing containers to isolated VMs
+- Required CRDs and controllers
+
+#### Usage
+
+Create pods with the "nelly" runtime class:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: isolated-pod
+spec:
+  runtimeClassName: nelly
+  containers:
+  - name: test-container
+    image: nginx:latest
+```
-## Docker
+The pod will run in the isolated VM environment instead of the host.
-### Isolated-vm
+### Multi-VM Orchestration
-#### TL;DR
+For running multiple VMs simultaneously, see `src/multi-vm/README.md`.
-Change `$(pwd)/image` to another directory if that is not appropriate. and `i$(ls ${HOME}/.ssh/*\..pub | head -n 1 | xrgs cat 2>/dev/null)` to the appropriate key to be used (this scirpt will select the first key available)..
+## Testing
-```
+
+### Connecting to Containerd
+
+1. **Download csi-grpc-proxy**:
+
+   ```bash
+   # macOS (ARM64)
+   wget https://github.com/democratic-csi/csi-grpc-proxy/releases/download/v0.5.6/csi-grpc-proxy-v0.5.6-darwin-arm64
+   chmod +x csi-grpc-proxy-v0.5.6-darwin-arm64
+
+   # Linux
+   wget https://github.com/democratic-csi/csi-grpc-proxy/releases/download/v0.5.6/csi-grpc-proxy-v0.5.6-linux-amd64
+   chmod +x csi-grpc-proxy-v0.5.6-linux-amd64
+   ```
+
+2. **Start the proxy**:
+
+   ```bash
+   BIND_TO=unix:///tmp/socket-csi \
+   PROXY_TO=tcp://127.0.0.1:35000 \
+   ./csi-grpc-proxy-v0.5.6-darwin-arm64 &
+   ```
+
+3. **Use crictl**:
+
+   ```bash
+   # List containers
+   crictl --runtime-endpoint unix:///tmp/socket-csi ps
+
+   # Pull an image
+   crictl --runtime-endpoint unix:///tmp/socket-csi pull nginx:latest
+
+   # Run a container (crictl run takes a container config and a pod sandbox config)
+   crictl --runtime-endpoint unix:///tmp/socket-csi run \
+     container-config.yaml pod-sandbox-config.yaml
+   ```
+
+See [Kubernetes crictl documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/) for more examples.
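After the VM boots it can take a while before containerd inside it is reachable on the forwarded port, so `crictl` calls made too early will fail. A readiness check can be scripted with bash's built-in `/dev/tcp` redirection before invoking `crictl`. This is only a convenience sketch: 127.0.0.1:35000 is the documented containerd forward, while the helper name and the default timeout are our own choices.

```shell
#!/usr/bin/env bash
# Poll a TCP port until it accepts connections, or give up after a timeout.
# Requires bash (uses the /dev/tcp pseudo-device, not available in plain sh).
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-120}" elapsed=0
  # Opening fd 3 on /dev/tcp/<host>/<port> succeeds only once something listens.
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    elapsed=$((elapsed + 1))
    if [ "${elapsed}" -ge "${timeout}" ]; then
      echo "timed out waiting for ${host}:${port}" >&2
      return 1
    fi
    sleep 1
  done
  return 0
}

# Example (once the proxy from the steps above is running):
# wait_for_port 127.0.0.1 35000 && \
#   crictl --runtime-endpoint unix:///tmp/socket-csi ps
```

The same helper works for the SSH forward (port 5555) or the RIMD port (35001) if you script against those instead.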
+
+## Deployment Options
+
+### Docker
+
+Run isolated-vm in a container:
+
+```bash
 docker run \
-  -d \
-  --rm \
-  --network host \
-  --env "VM_SSH_AUTHORIZED_KEY=$(ls ${HOME}/.ssh/*\.pub 2?dev/null | head -n 1 | xargs cat 2>/dev/null)" \
-  -v $(pwd)/image:/root/image \
-  -v /var/lib/kubelet:/var/lib/kubelet \
-  -v /var/log/pods:/var/log/pods \
-  --device /dev/kvm:/dev/kvm \
-  ghcr.io/smarter-project/hydra/isolated-vm:main
+  -d \
+  --rm \
+  --network host \
+  --env "VM_SSH_AUTHORIZED_KEY=$(cat ~/.ssh/id_rsa.pub)" \
+  -v $(pwd)/image:/root/image \
+  -v /var/lib/kubelet:/var/lib/kubelet \
+  -v /var/log/pods:/var/log/pods \
+  --device /dev/kvm:/dev/kvm \
+  ghcr.io/smarter-project/hydra/isolated-vm:main
+
+# View logs
+docker logs -f <container-id>
+
+# SSH to VM
+ssh -p 5555 hailhydra@localhost
+```
-This command will run the container without logs. use the following command to observe the logs
-```
-docker logs -f
+### Kubernetes / k3s with Helm
+
+Deploy the complete stack:
+
+#### Add Helm repository
+
+```bash
+helm repo add hydra https://smarter-project.github.io/hydra/
+```
-and this one to have a shell into the container. but preferably use ssh
+#### Install
+
+```bash
+helm install \
+  --create-namespace \
+  --namespace hydra \
+  --set "isolated-vm.configuration.sshkey=$(cat ~/.ssh/id_rsa.pub)" \
+  hydra hydra/hydra
+```
-docker exec -it /bin/bash
+
+#### Verify
+
+```bash
+kubectl get pods -n hydra
+kubectl get runtimeclass
+```
+**Note**: For persistent storage, set `configuration.local_node_image_dir` to a host path or PVC.
+
+### Standalone k3s Installation
+
+1. **Install k3s** (if not already installed):
+
+   ```bash
+   curl -sfL https://get.k3s.io | sh -
+   ```
+
+2. **Install Crismux**:
+
+   ```bash
+   cd src/add-crismux
+   ./install_crismux.sh install
+   ```
+
+3. **Start isolated VM**:
+
+   ```bash
+   cd src/isolated-vm
+   sudo ./start-vm.sh
+   ```
+
+4. 
**Create pods with nelly runtime class** (see Crismux Integration section) + +## Platform-Specific Notes + +### macOS + +- **Virtualization**: Uses HVF (Hypervisor.framework) for acceleration +- **Krunkit**: Supported on macOS, with full support on M4 machines (SME and Vulkan) +- **Architecture**: Supports both Apple Silicon (ARM64) and Intel (x86_64) +- **BIOS**: Automatically detected from Homebrew QEMU installation + +### Linux + +- **Virtualization**: Uses KVM for hardware acceleration (if available) +- **Filesystem**: 9P mounts for `/var/lib/kubelet` and `/var/log/pods` (when run as root) +- **Networking**: Supports both user-mode and bridge networking +- **BIOS**: Paths vary by distribution; defaults provided for common setups + +## Architecture Deep Dive + +### Host Environment + +The host environment runs: + +- **k3s**: Lightweight Kubernetes distribution +- **Kubelet**: Kubernetes node agent +- **Crismux**: Multi-containerd runtime manager +- **Containerd (host)**: Primary container runtime + + +Containers scheduled with the "nelly" runtime class are routed to the isolated environment. 
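Conceptually, Crismux's job is a per-runtime-class lookup: pods with the "nelly" class get the isolated VM's containerd endpoint, everything else stays on the host runtime. The sketch below only illustrates that mapping; it is not Crismux's actual code or configuration. The TCP endpoint matches the documented port forward (35000), while the host socket path is an assumption that varies by distribution (k3s ships its own containerd socket).

```shell
#!/usr/bin/env bash
# Illustrative sketch of the routing decision Crismux makes per runtime class.
# NOTE: the endpoints here are assumptions for illustration only; real Crismux
# is configured in-cluster, and host containerd socket paths differ by distro.
endpoint_for_runtime_class() {
  case "$1" in
    nelly)
      # Route to the isolated VM's containerd, exposed over the forwarded port
      echo "tcp://127.0.0.1:35000"
      ;;
    *)
      # Any other runtime class stays on the host containerd socket (assumed path)
      echo "unix:///run/containerd/containerd.sock"
      ;;
  esac
}

# Example:
# endpoint_for_runtime_class nelly
```

In the real system this decision happens inside Crismux, between kubelet's CRI calls and the two containerd instances.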
+ +### Isolated Environment + +The isolated environment provides: + +- **Containerd (isolated)**: Independent container runtime +- **csi-grpc-proxy**: Converts TCP to Unix socket for Kubernetes integration +- **Network Isolation**: Separate network namespace +- **Resource Isolation**: Dedicated CPU, memory, and disk + +### Communication Flow + ``` -ssh -p @localhost +Kubelet (Host) + │ + │ (via Crismux) + ▼ +Containerd Socket (Host) + │ + │ (via csi-grpc-proxy) + ▼ +TCP:localhost:35000 + │ + │ (port forward) + ▼ +Containerd (Isolated VM) + │ + ▼ +Container Execution ``` -#### Details +## Troubleshooting + +### VM Won't Start + +- **Check QEMU installation**: `which qemu-system-aarch64` (or `qemu-system-x86_64`) +- **Verify permissions**: May need `sudo` on Linux for KVM access +- **Check disk space**: Ensure sufficient space for images +- **Review logs**: Run with `DEBUG=1` for detailed output + +### Network Issues + +- **Port conflicts**: Check if ports 5555, 35000, 35001 are in use +- **Firewall rules**: Ensure firewall allows port forwarding +- **VM networking**: Verify network configuration in cloud-init + +### Containerd Connection Issues + +- **Proxy not running**: Ensure csi-grpc-proxy is active +- **Wrong endpoint**: Verify `PROXY_TO` points to correct port +- **VM not ready**: Wait for VM to fully boot and containerd to start + +### Image Download Failures + +- **Network connectivity**: Verify internet access +- **Disk space**: Ensure enough space for image download +- **URL access**: Check if image source URL is accessible + +## Contributing + +Contributions welcome! Areas for contribution: + +- Platform support improvements +- Documentation enhancements +- Performance optimizations +- Security hardening +- Additional virtualization backends + +## License + +Part of the Smarter Project. See repository for license details. 
+ +## Related Projects + +- **Crismux**: Multi-containerd runtime manager +- **csi-grpc-proxy**: Container runtime interface proxy +- **RIMDworkspace**: Isolated runtime environment + +## Resources + +- [Main Documentation](https://github.com/smarter-project/documentation) +- [Artifact Hub](https://artifacthub.io/packages/search?repo=hydra) +- [Multi-VM Documentation](src/multi-vm/README.md) +- [Kubernetes crictl Guide](https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/) + +## Support + +For issues, questions, or contributions: +- Open an issue on GitHub +- Check existing documentation +- Review troubleshooting section + +--- + +**Hydra**: Many heads, unified purpose. Isolated execution environments for the modern containerized world.