DisplayLink Screens Not Outputting #910
Hm, not really sure. Do they work on cosmic-comp? If not, then could you open an issue in Smithay?
I'll check if it works on Cosmic tomorrow, will let you know. Thanks!
This will most likely require using a software renderer.
Absolutely not, why would it? If you have a GPU, you can always render into an offscreen target for whatever purpose (including remote desktop sessions etc.) 🙃 See mutter#615.
Yeah, no, this is not how things work. You cannot blindly assume that importing buffers from different subsystems or devices/drivers just works. The DisplayLink drm device will probably require linear dumb buffers for scanning out the framebuffer. Rendering into dumb buffers with something like GLES is not guaranteed to work, so you need to be prepared to render into an offscreen buffer compatible with the render node and transfer the memory to the (dumb) buffer compatible with the DisplayLink device.

An important optimization of wayland compositors is to avoid re-drawing/copying stuff that did not change. This is typically done with some kind of damage tracking based on buffer ages. You definitely do not want to copy the whole buffer from the render node to the DisplayLink node every time some small part changes.
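The damage-tracking idea described above can be sketched in a few lines: copy only the rows covered by damage rects from the offscreen render buffer into the scanout buffer, instead of the whole frame. This is a minimal illustration with a made-up Rect type and tightly packed RGBA8888 buffers, not smithay's actual API:

```rust
// Hypothetical sketch: transfer only damaged regions between two
// tightly packed RGBA8888 buffers of the same stride.
#[derive(Clone, Copy)]
struct Rect {
    x: usize,
    y: usize,
    w: usize,
    h: usize,
}

fn copy_damage(src: &[u8], dst: &mut [u8], stride: usize, damage: &[Rect]) {
    const BPP: usize = 4; // bytes per pixel (RGBA8888)
    for r in damage {
        // Copy each damaged row individually; untouched rows are skipped.
        for row in r.y..r.y + r.h {
            let start = row * stride + r.x * BPP;
            let end = start + r.w * BPP;
            dst[start..end].copy_from_slice(&src[start..end]);
        }
    }
}

fn main() {
    let stride = 8 * 4; // 8 px wide
    let src = vec![0xFFu8; 8 * 8 * 4];
    let mut dst = vec![0u8; 8 * 8 * 4];
    copy_damage(&src, &mut dst, stride, &[Rect { x: 2, y: 2, w: 3, h: 3 }]);
    // Inside the 3x3 rect the bytes were transferred...
    assert_eq!(dst[(2 * 8 + 2) * 4], 0xFF);
    // ...while everything outside it stays untouched.
    assert_eq!(dst[0], 0);
}
```

The real path additionally has to respect the destination stride and pixel format of the dumb buffer, and combine damage from several buffer ages, but the bandwidth argument is the same: the cost scales with damaged area, not screen size.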
Transferring those damage rects from the offscreen render buffer to the target framebuffer is what would currently require something like a software renderer. I did some tests in the past using dumb buffers and pixman for drawing client dmabufs, with pretty good results (though I do not have the numbers available). So sometimes it might even be more performant to explicitly not use the GPU for composition. Pretty sure that fullscreen clients and shm clients would fall into this category.
Yes, this is an optimization we should also do, but as said, cannot rely on. There is even a linked issue with a still-open MR to disable it for some setups where you have two GPUs from the same vendor which are unable to share memory. The whole multi-gpu/split-KMS topic is a bit brittle imo.
I encountered the same problem today, any progress/plans for this issue?
Hi, I'm also having the same problem. Out of curiosity I was looking at the examples in Smithay, and running this example (in niri) shows me in the terminal all the monitors I have, even those connected via USB, which makes me think that this issue will probably be fixed along with this other one: #843
That's what atomic test commits are for.
DisplayLink is implemented as a (somewhat weird) user-space driver. It should be able to import pretty much anything, and has no issue with GPU buffers. In wlroots as of https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4824, if a secondary drm device does not have the ability to create a renderer that could be blitted to, we fall back to just trying to scan out primary render buffers directly. Worst case, it doesn't work and commits will fail, which isn't an issue: a screen that doesn't work stays off. For multi-gpu of any kind, we always restrict ourselves to linear buffers, as modifiers are not guaranteed to be portable, but that's a quite minor theoretical performance hit relative to using CPU buffers.
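The "restrict ourselves to linear buffers" policy boils down to intersecting the modifier lists of the two devices while keeping only the linear layout, since other modifiers describe vendor-specific tilings that are not guaranteed to mean the same thing across drivers. A minimal sketch (the constant matches the kernel's DRM_FORMAT_MOD_LINEAR value of 0; the function name and vendor modifier are made up):

```rust
// Linear layout modifier as defined in the kernel's drm_fourcc.h.
const DRM_FORMAT_MOD_LINEAR: u64 = 0;

// Hypothetical negotiation helper: of all modifiers supported by both
// devices, only keep LINEAR for cross-device buffer sharing.
fn shared_modifiers(a: &[u64], b: &[u64]) -> Vec<u64> {
    a.iter()
        .copied()
        .filter(|m| *m == DRM_FORMAT_MOD_LINEAR && b.contains(m))
        .collect()
}

fn main() {
    // An iGPU advertising linear plus an (invented) vendor tiling modifier,
    // and an evdi-like device that only does linear.
    let igpu = [DRM_FORMAT_MOD_LINEAR, 0x0100_0000_0000_0001];
    let evdi = [DRM_FORMAT_MOD_LINEAR];
    assert_eq!(shared_modifiers(&igpu, &evdi), vec![DRM_FORMAT_MOD_LINEAR]);
}
```

Whether the resulting configuration actually works is then decided by the atomic test commit mentioned above, not by guessing.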
Yes,
So what I had in mind was to extend the abstractions with support for drawing into dumb buffers via pixman as a fallback path in the multi-gpu abstraction. Not for rendering directly, but as a fallback transfer from the primary GPU render target to the dumb buffer. I made some progress on the fallback path, but I am not able to dedicate as much time as I would like atm.
Fair enough; the reason I didn't care too much about the complexity of a dumb-buffer fallback was that I didn't see many realistic scenarios where it would actually be needed, and just trying the direct scanout path was trivial and harmless in cases where it wouldn't work. That doesn't mean it's as easy to do here, especially if the multi-gpu abstraction is external.
Just to elaborate a bit on the background of why I personally prefer the dumb-buffer path; it seems Simon shares my view in your linked MR. I already implemented this part. So what I am currently working on is:
This way we could solve two things: DisplayLink support, and finally having a software-rendering fallback for environments without a GPU to render with. (* The current multi-gpu abstraction already supports sharing a dmabuf across different targets and only falls back to a CPU transfer if this fails. Both paths fully support damage tracking.)
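The fallback order described in the footnote (try direct dmabuf sharing first, fall back to a CPU transfer only if the import fails) could be modeled roughly like this. All types here are hypothetical stand-ins, not the actual multi-gpu abstraction:

```rust
// Hypothetical model of the transfer-path selection between a render
// device and a scanout-only device.
#[derive(Debug, PartialEq)]
enum Path {
    /// The scanout device imported the render target directly.
    DirectDmabuf,
    /// Import failed; copy through CPU memory into a dumb buffer.
    CpuTransfer,
}

struct ScanoutDev {
    can_import: bool, // stand-in for a real dmabuf import attempt
}

impl ScanoutDev {
    fn import_dmabuf(&self) -> Result<(), ()> {
        if self.can_import { Ok(()) } else { Err(()) }
    }
}

fn choose_path(dev: &ScanoutDev) -> Path {
    // Prefer zero-copy sharing; degrade gracefully to a CPU copy.
    // Either way, the subsequent blit is damage-tracked.
    match dev.import_dmabuf() {
        Ok(()) => Path::DirectDmabuf,
        Err(()) => Path::CpuTransfer,
    }
}

fn main() {
    assert_eq!(choose_path(&ScanoutDev { can_import: true }), Path::DirectDmabuf);
    assert_eq!(choose_path(&ScanoutDev { can_import: false }), Path::CpuTransfer);
}
```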
Just to be clear, I'm just sharing what we did in wlroots; our experience could be useful. I'm not saying you necessarily have to do it the same way.
Note that he follows up with:
It was trivial for us to do, enabled the "end goal" of direct scanout for DisplayLink/GUD immediately, and works in what I believe are all real-world use cases. In the unlikely event that someone has a setup where it cannot work, they are no worse off than before. (Even if nvidia cannot render linear, DisplayLink is geared for laptop use where the primary or sole renderer is an iGPU.)

Long term, we'd probably want to rework mgpu so that the compositor can pick which renderer each output should use, rather than having it be magic logic within the drm backend as it is now. Then the compositor can integrate buffer sources into its output configuration logic, to be tested the same as it tests other configuration like formats, modifiers, refresh rates, and so forth. With that, pixman could be tried as a last resort, although the priority would be to allow rendering each output with its local GPU, or even dynamically switching for power/performance.
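The "tested the same as other configuration" idea could look roughly like this: candidate buffer sources are tried in priority order, and an atomic test commit decides which one sticks. All names here are invented for illustration; this is speculative design, not wlroots or smithay code:

```rust
// Hypothetical candidate buffer sources for one output, in preference order.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Source {
    LocalGpu,       // render on the GPU the connector belongs to
    PrimaryGpuBlit, // render on the primary GPU, blit/scan out linear buffers
    Pixman,         // last-resort software rendering
}

// `test_commit` stands in for an atomic TEST_ONLY commit: it reports
// whether the given source produces a configuration the device accepts.
fn pick_source(test_commit: impl Fn(Source) -> bool) -> Option<Source> {
    [Source::LocalGpu, Source::PrimaryGpuBlit, Source::Pixman]
        .into_iter()
        .find(|s| test_commit(*s))
}

fn main() {
    // A display-only device like evdi: no local renderer, but the
    // primary-GPU path passes the test commit.
    let picked = pick_source(|s| s != Source::LocalGpu);
    assert_eq!(picked, Some(Source::PrimaryGpuBlit));
}
```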
I really appreciate your input and did not take it as criticism. You and everyone putting so much effort into the ecosystem have my full respect. I did not intend to be mean, so if that is how it sounded, I am deeply sorry. I am also here to improve the wayland ecosystem as well as I can and really enjoy the technical discussions.
Fair, and I fully agree that having a solution for the majority of users is a solid start. Our exchange made me curious whether we can do the same here.
This is something that is already possible today.
Ok, #1281 should implement what is necessary to fall back to the primary GPU for rendering for display-only devices. Unfortunately I lack DisplayLink hardware, so I cannot test this. It would be great if anyone with this issue could give it a try.
I pulled out my old test DisplayLink dock and gave it a spin, and it works.
Thanks, really appreciate your help!
Sounds like some kind of synchronization issue. Overlay planes are disabled by default in niri, so I would not expect it to be caused by missing explicit sync on overlay planes. What GPU/driver was in use as the primary GPU?
I was testing with amdgpu (Radeon 780M in a Ryzen 7950HS laptop chip) and a display running at 4k@30Hz. Entry in drmdb for the evdi device during the sway test here: https://drmdb.emersion.fr/snapshots/bfd5c0265d05 - as you can see, it's quite... limited. I did a quick test with sway, and the issue also happens there. We use a single dmabuf for both rendering and kms, so I don't think there's any need to test that in niri. A few extra observations:
(Note that my test dock is a 10-year-old Dell thing I keep around from a previous job for testing. If the issue is bandwidth, it could be that a newer dock or a lower resolution wouldn't experience this.)
Okay, thanks!
I just bought a simple DisplayLink-to-HDMI adapter and can observe the same with an Intel iGPU. I've only had a quick look at the evdi driver so far.
Seems like evdi roughly works like this:
A few notes:
Thanks for the analysis! Some observations from looking through the code. The driver specifically requests vblank support. The kernel docs state:
But the driver always returns failure from its enable-vblank hook. This also aligns with a trace log I generated recording the timestamps and sequence counters received in user-space: both are always zero...
I just patched the driver to return success from the enable-vblank function and to call drm_crtc_handle_vblank as part of the vblank handling. While that does not solve the framerate issue, which I expected, it does indeed populate the sequence and monotonic timestamp.
Okay, I just pushed a commit to the PR implementing a simple strategy to throttle the vblanks. I mainly implemented it to verify whether this at least makes the tearing go away. So far it seems to work as intended; even fullscreen direct-scanout video playback seems to work.
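In its simplest form, a vblank throttle of the kind described just refuses to deliver events faster than the refresh interval while keeping a monotonically increasing sequence number, which is what the patched driver was previously leaving at zero. A hypothetical userspace sketch (not the actual PR code):

```rust
use std::time::{Duration, Instant};

// Hypothetical throttle: at most one synthetic vblank per refresh interval.
struct VblankThrottle {
    interval: Duration,
    last: Option<Instant>,
    sequence: u64,
}

impl VblankThrottle {
    fn new(refresh_hz: u32) -> Self {
        VblankThrottle {
            interval: Duration::from_secs(1) / refresh_hz,
            last: None,
            sequence: 0,
        }
    }

    /// Returns Some(sequence) if a vblank event should be delivered now,
    /// or None if the previous one was too recent.
    fn tick(&mut self, now: Instant) -> Option<u64> {
        match self.last {
            Some(t) if now.duration_since(t) < self.interval => None,
            _ => {
                self.last = Some(now);
                self.sequence += 1;
                Some(self.sequence)
            }
        }
    }
}

fn main() {
    let mut th = VblankThrottle::new(30); // 30 Hz display, ~33 ms interval
    let t0 = Instant::now();
    assert_eq!(th.tick(t0), Some(1));
    assert_eq!(th.tick(t0 + Duration::from_millis(10)), None); // too soon
    assert_eq!(th.tick(t0 + Duration::from_millis(40)), Some(2));
}
```

A real implementation would also schedule the next event with a timer rather than polling, and feed the timestamp back so frame callbacks pace clients correctly.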
Hello and thank you for your hard work.
I have a DisplayLink dock that powers two monitors, working fine in GNOME and Hyprland, but unable to output in Niri.
When I run niri msg outputs, the displays are not shown.
This is the output I get when I run journalctl -eb /run/current-system/sw/bin/niri. I checked to see if there was an environment variable, but my searching skills failed me.
Here's what shows up when I list the currently available cards:
ls /dev/dri/by-path/
pci-0000:00:02.0-card pci-0000:00:02.0-render platform-evdi.0-card platform-evdi.1-card
Thank you very much for reading.
System Information