virtio-9p: migrate from VirtioQueueWorkerContext to direct VirtioQueue #2946
jstarks wants to merge 2 commits into microsoft:main
Conversation
Same pattern as virtio-pmem: `Plan9Worker` (T) holds the device logic and is created at `Device::new()`. `Plan9Queue` (S) holds the `VirtioQueue`, inserted at `enable()` and removed at `poll_disable()`. No `Option<TaskControl>` needed.

- Remove `Arc<Plan9FileSystem>`; `Plan9Worker` is the sole owner
- Remove the `Arc<fs::File>` field from the device (dead after the move to the worker)
- Remove `exit_event` (`StopTask` handles cancellation)
- Remove the `async-trait` and `event-listener` deps
- Add `pal_async` (`PolledWait`) and `futures` (`StreamExt`) deps
- Extract `process_9p_request()` as a free function
Pull request overview
Migrates the virtio-9p device’s queue handling from the VirtioQueueWorkerContext indirection to direct VirtioQueue ownership with a custom AsyncRun loop, matching the newer pattern used by other virtio devices.
Changes:
- Replace `VirtioQueueWorker`/`VirtioQueueWorkerContext` usage with `TaskControl<Plan9Worker, Plan9Queue>` and an `AsyncRun` implementation.
- Create and drive `VirtioQueue` directly via `PolledWait` + `StreamExt::next()`.
- Update crate dependencies to remove `async-trait`/`event-listener` and add `futures`/`pal_async`.
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| vm/devices/virtio/virtio_p9/src/lib.rs | Refactors queue lifecycle to direct VirtioQueue + AsyncRun; moves request processing into a helper function. |
| vm/devices/virtio/virtio_p9/Cargo.toml | Adjusts dependencies for the new queue/event and stream-based implementation. |
| Cargo.lock | Updates locked dependency versions (including virtio_p9 deps list). |
```rust
let queue_event = PolledWait::new(&self.driver, queue_resources.event).unwrap();
let queue = VirtioQueue::new(
    resources.features,
    queue_resources.params,
    self.worker.task().mem.clone(),
    queue_resources.notify,
    queue_event,
)
.expect("failed to create virtio queue");
```
`enable()` currently uses both `.next().expect("expected single queue")` and panicking error handling (`PolledWait::new(...).unwrap()`, `VirtioQueue::new(...).expect(...)`). Even if these are "should never happen" invariants, other virtio devices (e.g. virtio-blk) tend to log and return instead of crashing the VMM. Consider handling missing/invalid queue resources and queue/event creation failures by logging the error and returning early.
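A minimal sketch of the log-and-return-early shape being suggested, with stand-in types (`QueueSetupError`, `make_event`, `enable_queue` are hypothetical names, not the real `PolledWait::new`/`VirtioQueue::new` calls):

```rust
// Hypothetical setup error; the real code would get one from PolledWait/VirtioQueue.
#[derive(Debug)]
struct QueueSetupError(&'static str);

// Stand-in for a fallible queue-event constructor.
fn make_event(fail: bool) -> Result<&'static str, QueueSetupError> {
    if fail {
        Err(QueueSetupError("event registration failed"))
    } else {
        Ok("event")
    }
}

// Instead of .unwrap()/.expect(), log the failure and bail out of enable()
// so a bad guest-supplied queue configuration cannot crash the whole VMM.
fn enable_queue(fail: bool) -> Option<&'static str> {
    let event = match make_event(fail) {
        Ok(e) => e,
        Err(err) => {
            eprintln!("virtio-9p: failed to create queue event: {err:?}");
            return None;
        }
    };
    Some(event)
}
```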
```rust
// Make a copy of the incoming message.
let mut message = vec![0; work.get_payload_length(false) as usize];
if let Err(e) = work.read(&worker.mem, &mut message) {
    tracing::error!(
```
This allocates a `Vec` sized directly from `work.get_payload_length(false)` (guest-controlled descriptor lengths). A malicious or buggy guest can request very large buffers and trigger excessive memory allocation/OOM. Consider enforcing a reasonable maximum request size (or reworking to stream/parse in place, similar to virtiofs's payload reader) and completing the descriptor with an error response when the limit is exceeded.
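One way to enforce the suggested cap before allocating, sketched with illustrative names (`MAX_9P_MSIZE` and `clamp_payload_len` are assumptions, not identifiers from the OpenVMM tree; 1 MiB is a plausible 9p msize ceiling, not the project's chosen value):

```rust
// Illustrative ceiling on guest-requested payload sizes.
const MAX_9P_MSIZE: usize = 1 << 20; // 1 MiB

// Validate a guest-controlled length before allocating; returns None when
// the request is oversized (or does not fit in usize), so the caller can
// complete the descriptor with an error response instead of allocating.
fn clamp_payload_len(requested: u64) -> Option<usize> {
    let len = usize::try_from(requested).ok()?;
    if len > MAX_9P_MSIZE {
        None
    } else {
        Some(len)
    }
}
```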
```rust
// Allocate a temporary buffer for the response.
let mut response = vec![9; work.get_payload_length(true) as usize];
if let Ok(size) = worker.fs.process_message(&message, &mut response) {
```
This allocates a response buffer sized from `work.get_payload_length(true)` (guest-controlled writable descriptor lengths), which can also be huge and lead to OOM/abort. Please cap the maximum response size you're willing to allocate (and/or write the reply in smaller chunks) before allocating.
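For the response side, the cap can also be applied as a clamp rather than a rejection, since the device decides how large a reply to build. A minimal sketch with assumed names (`MAX_RESPONSE` and `response_buffer` are hypothetical, not from the OpenVMM codebase):

```rust
// Illustrative ceiling on the response buffer, independent of how much
// writable descriptor space the guest advertises.
const MAX_RESPONSE: usize = 1 << 20; // 1 MiB

// Allocate at most MAX_RESPONSE bytes even if the guest's writable
// descriptors claim more; a reply that would exceed the buffer is then
// reported as a 9p error instead of triggering a huge allocation.
fn response_buffer(guest_len: u64) -> Vec<u8> {
    let len = usize::try_from(guest_len)
        .unwrap_or(usize::MAX)
        .min(MAX_RESPONSE);
    vec![0; len]
}
```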
Blocked on #2947
Same pattern as virtio-pmem, #2945.