virtio-pmem: migrate from VirtioQueueWorkerContext to direct VirtioQueue #2945
jstarks wants to merge 2 commits into microsoft:main
Conversation
Replace the `VirtioQueueWorker` + `VirtioQueueWorkerContext` indirection with direct `VirtioQueue` ownership and a custom `AsyncRun` implementation.

Before: `PmemWorker` implemented `VirtioQueueWorkerContext::process_work()`, and `VirtioQueueWorker` managed queue creation, the run loop, and spawned a dedicated OS thread via `DefaultPool::spawn_on_thread`.

After: `PmemWorker` is the long-lived task (`T`) created at `Device::new()`, and `PmemQueue` (containing the `VirtioQueue`) is the late-bound state (`S`) inserted during `enable()` and removed during `poll_disable()`. This follows the `TaskControl` convention used by virtio-net: `T` holds device logic, `S` holds per-enable queue state.

- No `Option<TaskControl>` needed -- `TaskControl` lives in `Device` directly
- Remove `exit_event` (`StopTask` handles cancellation)
- Remove `async-trait` and `event-listener` deps
- Add `pal_async` (`PolledWait`) and `futures` (`StreamExt`) deps
- Extract `process_pmem_request()` as a free function
- Remove dead `Device::file` field (now owned by `PmemWorker` from init)
Pull request overview
Refactors the virtio_pmem device to use the newer TaskControl + VirtioQueue streaming model (replacing the prior VirtioQueueWorker/async-trait/event-listener approach), and updates crate dependencies accordingly.
Changes:
- Reworks the pmem queue worker to run via `task_control::AsyncRun` and consume requests from `VirtioQueue` as a stream.
- Updates `virtio_pmem` crate dependencies (adds `futures`/`pal_async`, removes `async-trait`/`event-listener`).
- Updates the workspace `Cargo.lock` (includes several unrelated dependency bumps).
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| vm/devices/virtio/virtio_pmem/src/lib.rs | Switches pmem queue processing to TaskControl + VirtioQueue stream-based handling. |
| vm/devices/virtio/virtio_pmem/Cargo.toml | Adjusts dependencies to match the new queue/worker implementation. |
| Cargo.lock | Records updated dependency resolutions, including several version/rev bumps. |
```rust
        self.worker.task().mem.clone(),
        qr.notify,
        queue_event,
    )
    .expect("failed to create virtio queue");
```
`VirtioQueue::new(...)` returns a `Result` and can fail based on guest-provided queue parameters. Using `.expect(...)` will panic on malformed/untrusted guest input and can crash the VMM. Please propagate/handle the error (log + return), and avoid calling `self.worker.task()` here since `TaskControl::task()` panics if the task is running.
```rust
let qr = resources.queues.remove(0);
let queue_event = PolledWait::new(&self.driver, qr.event).unwrap();
let queue = VirtioQueue::new(
```
Both `resources.shared_memory_region.unwrap()` and `PolledWait::new(...).unwrap()` can panic. `shared_memory_region` is `None` for the MMIO transport, and `PolledWait::new` can fail. Panics here will crash the VMM. Please handle these error/`None` cases (log + return) and avoid discarding the result of `map()` so failures are visible.
Blocked on #2947