Cancellation events may not be processed #103
Comments
I tested and can't reproduce this, but I have a guess at what could be happening.
My guess is that the cancellation of the first IN transfer has not actually completed by the time the next OUT is submitted, so the device's response is delivered to the cancelled-but-still-pending IN transfer and then discarded. This is consistent with your log showing both the event thread and the main thread handling the completion of the OUT transfer before we see the cancelled IN transfer come back. The extra delay gives the cancellation time to finish before submitting the OUT, which means the second IN transfer correctly receives the response.

You might be able to confirm this with the following patch, which logs the completion size and status earlier in the event handling, before it throws out the response because there is no future to wake:

diff --git a/src/platform/windows_winusb/transfer.rs b/src/platform/windows_winusb/transfer.rs
index b6134ba..14af6b9 100644
--- a/src/platform/windows_winusb/transfer.rs
+++ b/src/platform/windows_winusb/transfer.rs
@@ -326,7 +326,7 @@ impl PlatformSubmit<ControlOut<'_>> for TransferData {
 pub(super) fn handle_event(completion: *mut OVERLAPPED) {
     let completion = completion as *mut EventNotify;
-    debug!("Handling completion for transfer {completion:?}");
+    debug!("Handling completion for transfer {completion:?} size {} status {:x}", unsafe { (*completion).overlapped.InternalHigh }, unsafe { (*completion).overlapped.Internal });
     unsafe {
         let p = addr_of_mut!((*completion).ptr).read();
         notify_completion::<TransferData>(p)

I've known from the beginning that USB transfer semantics, especially around cancellation, are fundamentally incompatible with the way Rust Futures work. The Queue API is designed to address those issues, and in 0.2 I plan to promote the Queue API to be the main one, with convenience wrappers on top for simpler use cases. There definitely needs to be a simple way to do single transfers with a timeout that avoids these kinds of issues!
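In case it helps when reading the patched log line: on Windows, OVERLAPPED.Internal holds the NTSTATUS of the finished I/O and OVERLAPPED.InternalHigh holds the number of bytes transferred. Below is a minimal, hypothetical helper for interpreting those two values; it is not part of nusb or the patch above.

// Hypothetical log-reading helper, not part of nusb.
const STATUS_SUCCESS: usize = 0x0000_0000;
const STATUS_CANCELLED: usize = 0xC000_0120; // matches the value noted later in the thread

fn describe_completion(internal: usize, internal_high: usize) -> String {
    match internal {
        STATUS_SUCCESS => format!("completed, {internal_high} bytes transferred"),
        STATUS_CANCELLED => format!("cancelled, {internal_high} bytes received before cancellation"),
        other => format!("failed with NTSTATUS {other:#x}, {internal_high} bytes"),
    }
}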
Thank you for your extensive explanation. Here are the logs, taken with the suggested change to transfer.rs, for the case when it fails and the case when it is okay. Note: 0xC0000120 is the value of STATUS_CANCELLED.

Yes, now it's clear even to me that, in the failure case, 3 bytes of data are being trapped by the first IN transfer, which has already been cancelled. Next I will try to implement a drain() function with nusb::transfer::Queue.
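A rough sketch of what such a Queue-based drain() could look like (this is only a guess at the intended shape; it assumes futures-lite's block_on/or and async-io's Timer, as in the read_bulk() snippets below, plus an arbitrary per-read timeout):

use std::time::Duration;

use async_io::Timer;
use futures_lite::{future::block_on, FutureExt};
use nusb::transfer::{Queue, RequestBuffer};

// Keep reading the IN endpoint until a read times out, so any stale response
// left over from an earlier cancelled transfer is consumed before real
// traffic starts.
fn drain(queue: &mut Queue<RequestBuffer>, packet_size: usize, timeout: Duration) {
    loop {
        queue.submit(RequestBuffer::new(packet_size));
        let timed_out = block_on(
            async {
                let _stale = queue.next_complete().await;
                false
            }
            .or(async {
                Timer::after(timeout).await;
                true
            }),
        );
        if timed_out {
            // Cancel the outstanding transfer and reap its completion, so it
            // cannot trap the response to the next command.
            queue.cancel_all();
            let _ = block_on(queue.next_complete());
            return;
        }
    }
}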
I realized that I can just modify read_bulk() to achieve cancel-safety with respect to the timeout.

Before:

fn read_bulk(&self, endpoint: u8, buf: &mut [u8], timeout: Duration) -> io::Result<usize> {
    let fut = async {
        let comp = self.bulk_in(endpoint, RequestBuffer::new(buf.len())).await;
        comp.status.map_err(io::Error::other)?;
        let n = comp.data.len();
        buf[..n].copy_from_slice(&comp.data);
        Ok(n)
    };
    // If the timer wins, `fut` is dropped: the IN transfer is cancelled but
    // its completion is never observed.
    block_on(fut.or(async {
        Timer::after(timeout).await;
        Err(std::io::ErrorKind::TimedOut.into())
    }))
}

After:

fn read_bulk(&self, endpoint: u8, buf: &mut [u8], timeout: Duration) -> io::Result<usize> {
    let mut queue = self.bulk_in_queue(endpoint);
    queue.submit(RequestBuffer::new(buf.len()));
    let Some(comp) = block_on(
        async {
            let comp = queue.next_complete().await;
            Some(comp)
        }
        .or(async {
            Timer::after(timeout).await;
            None
        }),
    ) else {
        // On timeout, cancel the pending IN transfer and wait for its
        // completion, so it cannot trap the response to a later command.
        queue.cancel_all();
        let _ = block_on(queue.next_complete());
        return Err(std::io::ErrorKind::TimedOut.into());
    };
    comp.status.map_err(io::Error::other)?;
    let n = comp.data.len();
    buf[..n].copy_from_slice(&comp.data);
    Ok(n)
}
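To illustrate why reaping the cancelled transfer matters, here is a rough sketch of the kind of call sequence described in this issue. The write_bulk() helper, the command byte, and the endpoint numbers 0x01/0x81 are illustrative assumptions, not probe-rs's actual code.

// Hypothetical caller of the read_bulk() above.
fn first_command(&self) -> io::Result<Vec<u8>> {
    // Drain any stale response; a timeout here is the expected outcome, so
    // the result is deliberately ignored.
    let mut scratch = [0u8; 64];
    let _ = self.read_bulk(0x81, &mut scratch, Duration::from_millis(10));

    // With the old read_bulk(), the IN transfer cancelled above could still
    // be pending and trap these response bytes; the rewritten version reaps
    // it first, so this read receives the reply.
    self.write_bulk(0x01, &[0x00], Duration::from_millis(100))?;
    let mut response = vec![0u8; 64];
    let n = self.read_bulk(0x81, &mut response, Duration::from_millis(100))?;
    response.truncate(n);
    Ok(response)
}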
This issue is a sequel to the previous one I reported for probe-rs:
probe-rs/probe-rs#2995
When probe-rs is establishing a connection with CMSIS-DAP probes, it takes a moment longer than I would expect, depending on the setup of the USB devices. One known setup I'm testing with is like this.
The above-mentioned issue was about correct handling of the timeout, but why is there a timeout here in the first place?
I have created a simple test program to mimic the initial part of probe-rs's connection process, and I think I've come very close to what is happening.
The test code:
https://gist.github.com/elfmimi/a52416d87c1b616cd9c7145a6e272fe6
When it fails: the program gets stuck and never completes.
When it is okay: uncomment the following line, and then it starts to work fine every time.
The same trick applies to probe-rs: adding a sleep() to the end of the drain() function seems to eliminate the delay I am seeing at the start of probe-rs's operation.
https://github.com/probe-rs/probe-rs/blob/5cc700f6160b1e3e40a0a569c635eed9371ec40b/probe-rs/src/probe/cmsisdap/commands/mod.rs#L252
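In code, the workaround amounts to something like the following sketch; it is not probe-rs's actual code, and the 10 ms figure is an arbitrary placeholder rather than a value taken from the issue or the gist.

use std::{thread, time::Duration};

// Called at the end of drain(), this gives the cancelled IN transfer time to
// complete before the next command is submitted.
fn settle_after_drain() {
    thread::sleep(Duration::from_millis(10));
}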
By the way, thanks for sharing this great piece of work, nusb!