
Conversation

Contributor

@mattkur mattkur commented Dec 19, 2025

This is a follow-up to #2577. Here, I clean up the OpenHCL architecture page and introduce a separate page that describes the processes and components, which lets me streamline the boot process page. I've also added a sequence diagram.

All written with the help of Copilot. I've also edited it and added some content manually.

  • guide: refactor architecture / boot / processes docs
  • guide: enhance boot flow documentation and clarify configuration sources

Copilot AI review requested due to automatic review settings December 19, 2025 21:11
@mattkur mattkur requested a review from a team as a code owner December 19, 2025 21:11
@github-actions github-actions bot added the Guide label Dec 19, 2025
Contributor

Copilot AI left a comment


Pull request overview

This PR refactors and reorganizes the OpenHCL architecture documentation. The documentation has been restructured from a flat organization to a nested directory structure under Guide/src/reference/architecture/openhcl/, improving navigation and separating concerns.

Key changes:

  • Split the monolithic boot process document into focused pages covering processes/components, boot flow, sidecar architecture, and IGVM artifact details
  • Enhanced the boot flow documentation with a new Mermaid sequence diagram visualizing the entire boot process
  • Streamlined the main OpenHCL architecture page to focus on high-level concepts with links to detailed pages

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

Summary per file:

| File | Description |
| --- | --- |
| `Guide/src/reference/architecture/openhcl_boot.md` | Deleted the original boot process documentation (content migrated to the new structure) |
| `Guide/src/reference/architecture/igvm.md` | Deleted the original IGVM overview (content moved to `openhcl/igvm.md`) |
| `Guide/src/reference/architecture/openhcl/processes.md` | New file documenting OpenHCL processes and components (Boot Shim, kernels, paravisor, workers, diagnostics) |
| `Guide/src/reference/architecture/openhcl/boot.md` | New file with enhanced boot flow documentation, including a sequence diagram and step-by-step explanations |
| `Guide/src/reference/architecture/openhcl/sidecar.md` | New file dedicated to the x86_64 sidecar kernel architecture and its purpose |
| `Guide/src/reference/architecture/openhcl/igvm.md` | New file describing the IGVM artifact format and package contents |
| `Guide/src/reference/architecture/openhcl.md` | Updated main architecture page to focus on overview and scenarios, with links to detailed component pages |
| `Guide/src/SUMMARY.md` | Updated table of contents to reflect the new nested documentation structure |
Comments suppressed due to low confidence (1)

Guide/src/reference/architecture/openhcl/boot.md:116

  • The term is inconsistently capitalized throughout the document. In some places it appears as "Device Tree" (capitalized) and in others as "device tree" (lowercase). For consistency, consider using lowercase "device tree" throughout, as this is the more common convention in Linux kernel documentation unless referring to the Device Tree Specification formally.
    * **Host Device Tree:** A device tree provided by the host containing topology and resource information.
    * **Command Line:** It parses the kernel command line, which can be supplied via IGVM or the host device tree.
3. **Device Tree:** It constructs a Device Tree that describes the hardware topology (CPUs, memory) to the Linux kernel.
4. **Sidecar Setup (x86_64):** The shim determines which CPUs will run Linux (typically just the BSP) and which will run the Sidecar (APs). It sets up control structures and directs Sidecar CPUs to the Sidecar entry point (see the sketch after this list).
    * **Sidecar Entry:** "Sidecar CPUs" jump directly to the Sidecar kernel entry point instead of the Linux kernel.
    * **Dispatch Loop:** These CPUs enter a lightweight dispatch loop, waiting for commands.
5. **Kernel Handoff:** Finally, the BSP (and any Linux APs) jumps to the Linux kernel entry point, passing the Device Tree and command line arguments.
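
A minimal sketch of the step 4 CPU split, assuming CPU 0 is the BSP and a BSP-only Linux policy. This is illustrative, not the actual `openhcl_boot` logic:

```rust
// Illustrative sketch of the CPU split described in step 4; not the real
// openhcl_boot implementation. Assumes CPU 0 is the BSP.
fn partition_cpus(cpu_count: usize) -> (Vec<usize>, Vec<usize>) {
    // The BSP boots Linux; all APs are directed to the Sidecar entry point.
    let linux_cpus = vec![0];
    let sidecar_cpus: Vec<usize> = (1..cpu_count).collect();
    (linux_cpus, sidecar_cpus)
}

fn main() {
    let (linux, sidecar) = partition_cpus(8);
    println!("Linux CPUs: {linux:?}, sidecar CPUs: {sidecar:?}");
}
```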

## 3. Linux Kernel Boot

The **Linux Kernel** takes over on the BSP and initializes the operating system environment. Sidecar CPUs remain in their dispatch loop until needed (e.g., hot-plugged for Linux tasks).

1. **Kernel Init:** The kernel initializes its subsystems (memory, scheduler, etc.).
2. **Driver Init:** It loads drivers for the paravisor hardware and standard devices.
3. **Root FS:** It mounts the initial ramdisk (initrd) as the root filesystem.
4. **Expose DT:** It exposes the boot-time Device Tree to userspace (e.g., under `/proc/device-tree`) for early consumers.
5. **User Space:** It spawns the first userspace process, `underhill_init` (PID 1).

## 4. Userspace Initialization (`underhill_init`)

`underhill_init` prepares the userspace environment.

1. **Filesystems:** It mounts essential pseudo-filesystems like `/proc`, `/sys`, and `/dev`.
2. **Environment:** It sets up environment variables and system limits.
3. **Exec:** It replaces itself with the main paravisor process, `/bin/openvmm_hcl`.
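
A rough illustration of these three steps, assuming the `nix` crate and eliding environment setup; this is a sketch of the described flow, not the real `underhill_init` source:

```rust
// Hypothetical sketch of a PID 1 following the flow above; not the actual
// OpenHCL implementation. Assumes the `nix` crate.
use nix::mount::{mount, MsFlags};
use nix::unistd::execv;
use std::ffi::CString;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Mount essential pseudo-filesystems.
    for (fstype, target) in [("proc", "/proc"), ("sysfs", "/sys"), ("devtmpfs", "/dev")] {
        mount(Some(fstype), target, Some(fstype), MsFlags::empty(), None::<&str>)?;
    }

    // 2. Environment setup (variables, limits) elided.

    // 3. Replace this process with the main paravisor binary; PID 1 is preserved.
    let argv0 = CString::new("/bin/openvmm_hcl")?;
    execv(&argv0, &[argv0.as_c_str()])?;
    unreachable!("execv returns only on failure");
}
```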

## 5. Paravisor Startup (`openvmm_hcl`)

The **Paravisor** process (`openvmm_hcl`) starts and initializes the virtualization services.

1. **Config Discovery:** It reads the system topology and configuration from `/proc/device-tree` and other kernel interfaces (see the snippet after this list).
2. **Service Init:** It initializes internal services, such as the VTL0 management logic and host communication channels.
3. **Worker Spawn:** It spawns the **VM Worker** process (`underhill_vm`) to handle the high-performance VM partition loop.
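
For a taste of what step 1 looks like from userspace, the snippet below enumerates CPU nodes under `/proc/device-tree`. It is a sketch only; treating `reg` as a big-endian CPU id is a simplifying assumption, and the real discovery logic is more involved:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Enumerate CPU nodes from the kernel-exposed device tree.
    for entry in fs::read_dir("/proc/device-tree/cpus")? {
        let entry = entry?;
        let name = entry.file_name().to_string_lossy().into_owned();
        if name.starts_with("cpu@") {
            // Device tree properties are raw files; `reg` holds the CPU id
            // as a big-endian integer.
            let reg = fs::read(entry.path().join("reg"))?;
            println!("{name}: reg = {reg:02x?}");
        }
    }
    Ok(())
}
```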

## 6. VM Execution

At this point, the OpenHCL environment is fully established.
The `underhill_vm` process runs the VTL0 guest, handling exits and emulating devices, while `openvmm_hcl` manages the overall policy and communicates with the host.
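
A toy model of that division of labor: the worker spins on a run loop, handling exits as they arrive. The `VpExit` type and handlers below are invented for illustration and are not the `underhill_vm` API:

```rust
// Toy model of "run the VTL0 guest, handling exits"; placeholder types, not
// real OpenHCL code.
enum VpExit {
    Io { port: u16 },
    Mmio { addr: u64 },
    Halt,
}

fn run_vp(mut next_exit: impl FnMut() -> VpExit) {
    loop {
        match next_exit() {
            VpExit::Io { port } => println!("emulate port I/O at {port:#x}"),
            VpExit::Mmio { addr } => println!("emulate MMIO at {addr:#x}"),
            VpExit::Halt => break, // guest halted; leave the loop
        }
    }
}

fn main() {
    // Drive the loop with a canned sequence of exits (popped from the end).
    let mut exits = vec![VpExit::Halt, VpExit::Mmio { addr: 0xfee0_0000 }, VpExit::Io { port: 0x3f8 }];
    run_vp(move || exits.pop().unwrap());
}
```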

Comment on lines 18 to 54
Host->>Shim: 1. Load IGVM & Transfer Control
activate Shim
note over Shim: 2. Boot Shim Execution<br/>Hardware Init, Config Parse, Device Tree
par CPU Split
Shim->>Sidecar: APs Jump to Sidecar
activate Sidecar
note over Sidecar: Enter Dispatch Loop
Shim->>Kernel: BSP Jumps to Kernel Entry
deactivate Shim
activate Kernel
end
note over Kernel: 3. Linux Kernel Boot<br/>Init Subsystems, Load Drivers, Mount initrd
Kernel->>Init: Spawn PID 1
deactivate Kernel
activate Init
note over Init: 4. Userspace Initialization<br/>Mount /proc, /sys, /dev
Init->>HCL: Exec openvmm_hcl
deactivate Init
activate HCL
note over HCL: 5. Paravisor Startup<br/>Read Device Tree, Init Services
HCL->>Worker: Spawn Worker
activate Worker
par 6. VM Execution
note over HCL: Manage Policy & Host Comm
note over Worker: Run VTL0 VP Loop
note over Sidecar: Wait for Commands / Hotplug
end

Copilot AI Dec 19, 2025


The Mermaid sequence diagram uses autonumber on line 7, which automatically numbers sequence diagram steps. However, manual numbers are also included in the message labels (e.g., "1. Load IGVM & Transfer Control") and notes (e.g., "2. Boot Shim Execution"). This creates redundant numbering that may confuse readers. Consider removing the manual number prefixes from the diagram labels since autonumber will handle the numbering automatically.


@mattkur mattkur requested a review from chris-oo December 24, 2025 01:34

VTLs can be backed by:
- **VTL2:** OpenHCL runs here[^sk]. It has higher privileges and is isolated from VTL0.
- **VTL0:** The Guest OS (e.g., Windows, Linux) runs here. It cannot access VTL2 memory or resources.
Contributor


Maybe include VTL1 in this list instead of as a footnote, since from our perspective they're both just 'guest' VTLs.


This isolation is enforced by the underlying hypervisor (Hyper-V) and can be backed by:
- Hardware-based [TEEs], like Intel [TDX] and AMD [SEV-SNP]
- Software-based constructs, like Hyper-V [VSM]
Contributor


Should we remove mention of a 'hypervisor' in this line, since in a CVM scenario we're relying solely on the hardware for isolation enforcement?


The following diagram offers a brief, high-level overview of the OpenHCL
Architecture.
OpenHCL is a paravisor execution environment that runs within the guest partition of a virtual machine. It provides virtualization services to the guest OS from within the guest itself, rather than from the host.
Contributor


Suggested change
OpenHCL is a paravisor execution environment that runs within the guest partition of a virtual machine. It provides virtualization services to the guest OS from within the guest itself, rather than from the host.
OpenHCL is a paravisor execution environment that runs within the guest partition of a virtual machine. It provides virtualization services to the guest OS from within the guest partition itself, rather than from the host as is traditionally done.

Contributor


or something to really distinguish ourselves from "normal" vmms

interactive shell, additional perf and debugging tools, etc). Release
configurations use a lean, minimal userspace, consisting entirely of OpenHCL
components.
OpenHCL acts as a compatibility layer for Azure Boost. It translates legacy synthetic device interfaces (like VMBus networking and storage) used by the guest OS into the hardware-accelerated interfaces (proprietary [Microsoft Azure Network Adapter] (MANA) and NVMe) provided by the Azure Boost infrastructure. This allows unmodified guest images to take advantage of next-generation hardware.
Contributor


Suggested change
OpenHCL acts as a compatibility layer for Azure Boost. It translates legacy synthetic device interfaces (like VMBus networking and storage) used by the guest OS into the hardware-accelerated interfaces (proprietary [Microsoft Azure Network Adapter] (MANA) and NVMe) provided by the Azure Boost infrastructure. This allows unmodified guest images to take advantage of next-generation hardware.
OpenHCL acts as a compatibility layer for Azure Boost. It translates legacy synthetic device interfaces (like VMBus networking and storage) used by the guest OS into the hardware-accelerated interfaces (proprietary [Microsoft Azure Network Adapter] (MANA) and NVMe) provided by the Azure Boost infrastructure. This allows unmodified guests to take advantage of next-generation hardware.

**Key Responsibilities:**

- **Hardware Initialization:** Sets up CPU state, enables MMU, and configures initial page tables.
- **Configuration Parsing:** Receives boot parameters from the host that were generated at IGVM build time.
Contributor


Maybe it's worth mentioning here that it may also perform filtering of the host-provided configuration in CVM scenarios, to ensure nothing slips through that would weaken our security boundaries (like, say, exposing debugging services).
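
For illustration, the kind of filtering this comment suggests could be sketched as below; the configuration type and field names are hypothetical, not the shim's actual types:

```rust
// Hypothetical illustration of CVM config filtering; not real OpenHCL types.
struct HostConfig {
    enable_debug_services: bool,
    kernel_command_line: String,
}

fn filter_for_cvm(mut cfg: HostConfig, is_cvm: bool) -> HostConfig {
    if is_cvm {
        // Refuse host-supplied options that would weaken the CVM boundary.
        cfg.enable_debug_services = false;
    }
    cfg
}

fn main() {
    let cfg = HostConfig { enable_debug_services: true, kernel_command_line: "quiet".into() };
    let cfg = filter_for_cvm(cfg, true);
    assert!(!cfg.enable_debug_services);
}
```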


- **Fast Boot:** Allows secondary CPUs (APs) to boot quickly without initializing the full Linux kernel.
- **Dispatch Loop:** Runs a minimal loop waiting for commands from the host or the main kernel.
- **On-Demand Conversion:** Can be converted to a full Linux CPU if required.
Contributor


Suggested change
- **On-Demand Conversion:** Can be converted to a full Linux CPU if required.
- **On-Demand Conversion:** Can be converted to a full Linux CPU when required.
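
The "Dispatch Loop" bullet in the hunk above can be pictured with a bare-bones sketch. The mailbox protocol and command values below are invented for illustration and are not the sidecar's real protocol:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

const CMD_NONE: u32 = 0;
const CMD_RUN_VP: u32 = 1;
const CMD_CONVERT_TO_LINUX: u32 = 2;

// Each sidecar CPU spins on a per-CPU mailbox, waiting for work.
fn dispatch_loop(mailbox: &AtomicU32) {
    loop {
        match mailbox.swap(CMD_NONE, Ordering::AcqRel) {
            CMD_NONE => std::hint::spin_loop(), // idle; nothing to do yet
            CMD_RUN_VP => { /* run the assigned VTL0 VP until its next exit */ }
            CMD_CONVERT_TO_LINUX => break, // hand this CPU over to Linux
            _ => {}
        }
    }
}

fn main() {
    let mailbox = AtomicU32::new(CMD_CONVERT_TO_LINUX);
    dispatch_loop(&mailbox); // returns immediately in this toy example
}
```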


## Sidecar Kernel (x86_64)

On x86_64 systems, OpenHCL includes a "sidecar" kernel—a lightweight, bare-metal kernel that runs on a subset of CPUs to improve boot performance and reduce resource usage.
Contributor


Suggested change
On x86_64 systems, OpenHCL includes a "sidecar" kernel—a lightweight, bare-metal kernel that runs on a subset of CPUs to improve boot performance and reduce resource usage.
On x86_64 systems, OpenHCL includes a "sidecar" kernel—a lightweight, bare-metal kernel that runs on a subset of CPUs to improve boot performance and reduce resource usage.

- **Command Handling:** Processes diagnostic commands and queries.
- **Log Retrieval:** Provides access to system logs.

## Profiler Worker (`profiler_worker`)
Contributor


I think this is only for the Azure profiler, which will only show up in internal builds, right? Probably not worth including here.

- **Sidecar Kernel (x86_64):** The lightweight kernel for APs.
- **Initial Ramdisk (initrd):** The root filesystem containing userspace binaries (`underhill_init`, `openvmm_hcl`, etc.).
- **Memory Layout:** Directives specifying where each component should be loaded in memory.
- **Configuration:** Boot-time parameters (CPU topology, device settings, etc.) generated at IGVM build time.
Contributor

@smmalis37 smmalis37 Dec 24, 2025


IIRC it will also include secure measurement information for CVMs?
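
For readers skimming this thread, the package contents listed in the hunk above can be pictured as a struct. This is a conceptual sketch only; the actual format is defined by the IGVM specification and the `igvm` crate types, and (per the comment above) CVM packages may also carry secure measurement information:

```rust
// Conceptual view of the OpenHCL IGVM package contents; illustrative types,
// not the real `igvm` crate API.
#[allow(dead_code)]
struct LoadDirective {
    gpa: u64,      // guest physical address to load at
    data: Vec<u8>, // raw bytes of the component
}

#[allow(dead_code)]
struct OpenhclIgvmPackage {
    boot_shim: LoadDirective,              // openhcl_boot
    linux_kernel: LoadDirective,           // the OpenHCL Linux kernel
    sidecar_kernel: Option<LoadDirective>, // x86_64 only
    initrd: LoadDirective,                 // underhill_init, openvmm_hcl, ...
    config: Vec<u8>,                       // build-time boot parameters
}
```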
