Add vmm-test guest selection guidance #2938

@mattkur

Description


Summary

The vmm-test framework supports a wide range of guest images, boot methods, VMM backends, and isolation modes. There is no documented guidance for test authors about when to use which configuration. Choosing the right guest for a new test is currently tribal knowledge.

This issue covers adding practical, decision-oriented guidance to the test documentation.


The problem

A test author writing a new vmm-test has to choose from combinations like:

  • openvmm_linux_direct_x64
  • openvmm_uefi_x64(vhd(alpine_3_23_x64))
  • openvmm_uefi_x64(vhd(ubuntu_2504_server_x64))
  • openvmm_openhcl_uefi_x64[vbs](vhd(windows_datacenter_core_2025_x64_prepped))
  • hyperv_openhcl_uefi_x64[snp](vhd(ubuntu_2504_server_x64))
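These names follow a regular pattern: VMM backend, optional `openhcl_` paravisor marker, boot method, architecture, optional `[isolation]` mode, and optional guest image. A small parser can make that structure explicit. This is purely illustrative — the naming grammar below is inferred from the examples above and is not part of petri:

```python
import re
from typing import NamedTuple, Optional

# Hypothetical grammar for vmm-test configuration names, inferred from the
# examples above (not an actual petri API).
CONFIG_RE = re.compile(
    r"^(?P<vmm>openvmm|hyperv)"
    r"(?P<openhcl>_openhcl)?"
    r"_(?P<boot>linux_direct|uefi|pcat)"
    r"_(?P<arch>x64|aarch64)"
    r"(?:\[(?P<isolation>[a-z]+)\])?"   # e.g. [vbs], [snp], [tdx]
    r"(?:\((?P<image>.+)\))?$"          # e.g. (vhd(alpine_3_23_x64))
)

class TestConfig(NamedTuple):
    vmm: str
    openhcl: bool
    boot: str
    arch: str
    isolation: Optional[str]
    image: Optional[str]

def parse_config(name: str) -> TestConfig:
    """Split a vmm-test configuration name into its components."""
    m = CONFIG_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized config name: {name}")
    return TestConfig(
        vmm=m["vmm"],
        openhcl=m["openhcl"] is not None,
        boot=m["boot"],
        arch=m["arch"],
        isolation=m["isolation"],
        image=m["image"],
    )
```

For example, `parse_config("hyperv_openhcl_uefi_x64[snp](vhd(ubuntu_2504_server_x64))")` yields the Hyper-V backend, OpenHCL enabled, UEFI boot, x64, SNP isolation, and an Ubuntu VHD image.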

The tradeoffs are real and not obvious:

| Guest | Image Size | Boot Time | Feature Coverage |
|---|---|---|---|
| Linux Direct (Alpine kernel+initrd) | Minimal | Fastest | Serial I/O, basic devices |
| Alpine VHD | ~224 MB | Fast | Lightweight OS, cloud-init |
| Ubuntu VHD | ~3.7 GB | Medium | Full integration, systemd, networking |
| Windows Server VHD | ~32 GB | Slow | Windows-specific, Hyper-V ICs, VBS |
| guest_test_uefi | Built in-tree | Fast | UEFI firmware behavior only |

And the architecture coverage differs:

| Boot Method | x64 | aarch64 |
|---|---|---|
| Linux Direct | ✓ | Blocked (GH #1798) |
| PCAT (Gen 1) | ✓ | N/A |
| UEFI (Gen 2) | ✓ | ✓ |
| OpenHCL + UEFI | ✓ | ✓ (Hyper-V only) |
| OpenHCL + Linux Direct | ✓ | Not yet |

What should be documented

Guest selection decision tree

  • Testing basic device I/O, serial communication, or minimal boot behavior? → Linux Direct. Fastest iteration, no disk image needed.
  • Testing UEFI firmware behavior or boot services? → guest_test_uefi. In-tree, minimal, no external image dependency.
  • Quick smoke test that needs a real OS but speed matters? → Alpine VHD. Small image, fast boot, available on both x64 and aarch64.
  • Full integration: networking, storage, systemd services, general compatibility? → Ubuntu VHD. Full toolchain, widest feature coverage.
  • Windows-specific features, Hyper-V integration components, or VBS/CVM isolation? → Windows Server VHD. Required for Windows guest behavior.
  • OpenHCL paravisor behavior, device relay, or VTL2 servicing? → Add openhcl_ prefix variants alongside non-OpenHCL ones.
  • Confidential VM isolation (SNP, TDX)? → Use hyperv_openhcl_uefi_x64[snp] or [tdx] variants. These require specific hardware runners.
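The decision tree above can be encoded as a small function. A sketch, for illustration only — the flag names and return strings are made up for this example and are not a petri API:

```python
def recommend_guest(
    firmware_only: bool = False,
    needs_full_os: bool = False,
    needs_windows: bool = False,
    needs_cvm: bool = False,
    speed_matters: bool = True,
) -> str:
    """Encode the guest-selection decision tree above (illustrative only)."""
    if needs_cvm:
        # CVM isolation requires OpenHCL on Hyper-V with SNP/TDX hardware.
        return "hyperv_openhcl_uefi_x64[snp] or [tdx]"
    if needs_windows:
        # Windows guest behavior, Hyper-V ICs, VBS.
        return "Windows Server VHD"
    if firmware_only:
        # UEFI firmware behavior or boot services.
        return "guest_test_uefi"
    if needs_full_os:
        # Alpine for fast smoke tests, Ubuntu for widest feature coverage.
        return "Alpine VHD" if speed_matters else "Ubuntu VHD"
    # Basic device I/O, serial, or minimal boot: fastest iteration.
    return "Linux Direct"
```

The branch order mirrors the list: the most constraining requirements (CVM, Windows) are checked first, and the lightest guest wins by default.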

Practical guidelines

  • Prefer the lightest guest that exercises the code path under test.
  • If a test does not need a full OS, Linux Direct or guest_test_uefi are better choices than Ubuntu or Windows.
  • If a test needs to run on both x64 and aarch64, note that Linux Direct and PCAT are x64-only today. Use UEFI boot with Alpine or Ubuntu for cross-architecture tests.
  • Windows guests are expensive (32 GB images, slow boot). Only use them when the test specifically exercises Windows guest behavior.
  • When adding OpenHCL variants, also keep a non-OpenHCL variant so the test runs in both configurations unless it is specifically testing paravisor behavior.

Known quirks

These should be documented so test authors do not rediscover them:

  • Ubuntu VHDs trigger a TPM-related reboot on first boot
  • Windows Server 2025 always requires an initial reboot
  • Windows 11 Enterprise (aarch64) always requires an initial reboot
  • FreeBSD has a 20-second Hyper-V shutdown IC sleep
  • Linux Direct on aarch64 is blocked on PL011 serial emulator limitations (GH #1798: "petri: support pipette on linux direct tests on aarch64")
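One way the docs could present these quirks is as a lookup table that test helpers consult instead of hard-coding workarounds per test. A sketch, assuming short guest keys invented here for illustration:

```python
# Known guest quirks from the list above, keyed by an assumed short guest
# name (the keys are hypothetical, not real petri artifact names).
KNOWN_QUIRKS: dict[str, str] = {
    "ubuntu_vhd": "TPM-related reboot on first boot",
    "windows_server_2025": "always requires an initial reboot",
    "windows_11_enterprise_aarch64": "always requires an initial reboot",
    "freebsd": "20-second Hyper-V shutdown IC sleep",
}

def expected_first_boot_reboots(guest: str) -> int:
    """Extra reboots a test should tolerate on first boot (sketch)."""
    return 1 if "reboot" in KNOWN_QUIRKS.get(guest, "") else 0
```

Centralizing the quirks this way means a new quirk is added in one place rather than rediscovered test by test.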

Page location

Add to the vmm-test docs — either as a new section in Guide/src/dev_guide/tests/vmm.md or as a sub-page like Guide/src/dev_guide/tests/vmm/guest_selection.md.


Goals

  • Give vmm-test contributors clear, practical guidance about guest and boot method selection
  • Document the tradeoffs (image size, boot time, feature coverage, architecture support) in one place
  • Document known quirks so contributors do not rediscover them
  • Make it easy for a new contributor to pick the right test configuration on the first try

Non-goals

  • Documenting every guest image in exhaustive detail (the KnownTestArtifacts enum and petri artifacts crate are the source of truth)
  • Redesigning the test framework or guest image strategy
  • Adding new test guests or boot methods

Rough implementation plan

  1. Write guest selection guidance as a section or sub-page of the vmm-test docs
  2. Include the decision tree, tradeoff table, and known quirks
  3. Update Guide/src/SUMMARY.md if adding a new sub-page
