
Rp2040 hcd bulk #1434

Merged
merged 4 commits into from
Nov 5, 2022

Conversation

Skyler84
Contributor

Add support for BULK endpoint transfers. This is achieved by allocating a hw_endpoint for bulk transfers, not just interrupt ones. These endpoints can perform transfers in a non-blocking manner, whereas using EPx would block all other (non-interrupt) transfers.
This change also allows interrupt transfers in both the IN and OUT directions.

Added descriptive strings for edpt_dir and edpt_type
Added support for allocating hw_endpoints for non-interrupt endpoints.
Allow endpoints to be used in either direction by updating bit checks.
@bohdan-tymkiv

I can confirm that this patch works with multiple IN/OUT BULK endpoints simultaneously.
I am using a raw bare API (tuh_edpt_xfer) to communicate with a custom USB device.

@rppicomidi
Contributor

@Skyler84 I can confirm this patch works with one BULK IN endpoint and one BULK OUT endpoint at the same time with a MIDI Host function.

@rppicomidi
Contributor

@Skyler84 @bohdan-tymkiv Did you try this patch with a hub? I am finding it works fine when only one device is plugged in, but if two devices are plugged in and packets are being sent to both, packet corruption occurs. Also, the IN transfers complete on both devices, but the second device plugged in has an all-zero packet buffer even though the connected device should be sending data.

@Skyler84
Contributor Author

Skyler84 commented Apr 28, 2022

I have not tested this with a hub yet! Definitely something I can take a look at with my logic analyzer. I've also found that it doesn't work with transfers >64 B, since that's all the buffer space that's allocated, and I don't think the double buffering is complete in the HCD yet.
It's a fairly close solution, but I think it needs some refinement.
Edit: I think the corruption may be because the controller isn't waiting long enough for a device to return data?

@johnkjellberg

I can confirm that this works for direct connection to a CDC device!

@bohdan-tymkiv

I have not tried it with a hub. I think implementing hub support could be the subject of a different patch. This change works, and it basically makes the RP2040 host functional.

@johnkjellberg

Any news regarding this? Is the hub support critical for merging, or is there something else that needs to be done before approval?

@atoktoto mentioned this pull request Sep 7, 2022
@wrtlprnft

I found your patch because my attempts to implement a host for a device with a bulk IN endpoint and a bulk OUT endpoint had failed. With the patch it actually works, although I've stayed away from complicated setups with hubs.

However, I now have the issue that it's very slow, as every new transfer has to wait for the start-of-frame marker, because that's the only time the hardware actually polls the endpoint control “register” in DPRAM. If I understand the RP2040 datasheet correctly, the controller would otherwise be able to start a transaction in the middle of a frame if SIE_CTRL.SOF_SYNC is not set. By writing to SIE_CTRL, which is a real register, we can trigger the hardware to act immediately. I don't see a comparable way to trigger the interrupt endpoints.

So I think we should use the epx endpoint after all, and any transactions that are non-blocking from the point of view of the user should be handled in a software queue somehow.

wrtlprnft pushed a commit to wrtlprnft/tinyusb that referenced this pull request Sep 15, 2022
…oints

See also the comment at hathach#1434 (comment). This commit introduces a new structure to save the logical information about an endpoint that takes turns being committed to the single hardware epx endpoint.

Compared to using the interrupt endpoints, which is the approach of the referenced issue, this has the advantage that we can trigger a transfer to occur *now* by writing to SIE_CTRL, instead of waiting for the hardware to poll the interrupt descriptors after the next SOF packet, up to 1 ms in the future.

It has the disadvantage that we don't get to set up multiple transfers in advance; there can only be one epx transfer being attempted at a time. At the moment, this means that only one non-interrupt transfer can be outstanding at a time from the tinyusb user's perspective.

It should be possible to implement software queuing of transfers to hide
this from the user, but this is not being attempted in this commit.
@wrtlprnft

I did a proof of concept in wrtlprnft@43a245d.

I don't think it's ready to be a pull request of its own because it requires the user to only have one transfer (tuh_edpt_xfer or tuh_control_xfer) in flight at a time and there are likely many things I didn't think about. Maybe it's useful to someone until there's a better solution.

@johnkjellberg

johnkjellberg commented Sep 19, 2022

> I did a proof of concept in wrtlprnft@43a245d.
>
> I don't think it's ready to be a pull request of its own because it requires the user to only have one transfer (tuh_edpt_xfer or tuh_control_xfer) in flight at a time and there are likely many things I didn't think about. Maybe it's useful to someone until there's a better solution.

@wrtlprnft Thanks! This is also the first attempt I have seen that I got working for both CDC and MSC!
There seem to be some cleanup bugs, though: it crashes / stops working after CDC has been unmounted, and then neither CDC nor MSC works. I will investigate that a bit.

Edit: I found the issue and have put the changes needed as comments in the commit.

@rppicomidi
Contributor

@wrtlprnft Have you seen the discussion in #1261? (TL;DR: if a bulk IN transfer does not complete because the device NAKs it, epx stays busy until the device sends some data back.) I took an admittedly quick look at the PR and did not see such a mechanism. My attempt a while ago in PR #1219 does address receiving a NAK on bulk IN, but it is a bit ugly.

@wrtlprnft

wrtlprnft commented Sep 20, 2022

@rppicomidi I did not see that, but I was wondering about that myself. Thanks for the pointer!

My intended application, for which I did a quick prototype to see whether it was feasible and fast enough, is doing JTAG over an FTDI dongle. In that application I have a clear command/response structure where I know more or less exactly how many bytes of IN transfer I'll get at what point, so the issue never really arose.

As I said before, my code currently doesn't even attempt to support multiple concurrent transactions, so that's something that will have to be done before I can really consider NAKs in my branch. I'll definitely have a look at your branch because it seems like it solves many of the issues I had.

@hathach
Owner

Thank you very much for this brilliant PR, and to everyone for helping test and verify it. Indeed, the datasheet seems to hint that these "interrupt" endpoints are meant for the interrupt transfer type. Using them for bulk in an asynchronous manner is a great find, and it simplifies the usbh driver.

Basically, interrupt and bulk transfers have the same packet layout (PID, data, CRC, ACK); the difference is that a bulk endpoint will keep transferring data for as long as the bus frame (1 ms) allows. For example, we can fit 10 packets of 64 bytes into 1 ms with bulk. An interrupt endpoint will only transfer one packet per interval (which could be 1 ms, 2 ms, 4 ms, etc.). To be honest, I am not sure whether using these "interrupt" endpoints for bulk this way hurts throughput or not (we need to confirm with the rpi team). If it does, our maximum throughput would be 64 bytes / 1 ms = 64 KB/s. IMHO, that will likely be the case here. Though it is still better than nothing for now. We can merge this as it is and try to figure out a better implementation later, e.g. something similar to the #1219 approach.
