
DisplayLink Screens Not Outputting #910


Open

Keremimo opened this issue Dec 31, 2024 · 22 comments · May be fixed by #1281
Labels: bug (Something isn't working), question (Further information is requested)

Comments


Keremimo commented Dec 31, 2024

Hello and thank you for your hard work.

I have a DisplayLink dock that powers two monitors. It works fine in GNOME and Hyprland, but produces no output in niri.

When I run niri msg outputs, the DisplayLink displays are not listed.

This is the output I get when I do journalctl -eb /run/current-system/sw/bin/niri:

Dec 31 11:32:52 ThinkChad niri[10580]: 2024-12-31T10:32:52.824671Z  INFO niri: starting version 0.1.10-1 (unknown commit)
Dec 31 11:32:52 ThinkChad niri[10580]: 2024-12-31T10:32:52.828279Z DEBUG niri_config: loaded config from "/home/kerem/.config/niri/config.kdl"
Dec 31 11:32:53 ThinkChad niri[10580]: 2024-12-31T10:32:53.123778Z  INFO niri::backend::tty: using as the render node: "/dev/dri/renderD128"
Dec 31 11:32:53 ThinkChad niri[10580]: 2024-12-31T10:32:53.188319Z DEBUG niri::backend::tty: device added: 57857 "/dev/dri/card1"
Dec 31 11:32:53 ThinkChad niri[10580]: 2024-12-31T10:32:53.518640Z DEBUG niri::backend::tty: this is the primary node
Dec 31 11:32:53 ThinkChad niri[10580]: 2024-12-31T10:32:53.518657Z DEBUG niri::backend::tty: this is the primary render node
Dec 31 11:32:53 ThinkChad niri[10580]: 2024-12-31T10:32:53.541087Z DEBUG niri::backend::tty: device changed: 57857
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.556158Z DEBUG niri::backend::tty: new connector: eDP-1 "Thermotrex Corporation TL140BDXP01-0 Unknown"
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.556340Z DEBUG niri::backend::tty: new connector: DP-1 "PNP(AOC) 27G2WG3- 1TMP9HA011448"
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.556509Z DEBUG niri::backend::tty: connecting connector: DP-1
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.556654Z DEBUG niri::backend::tty: picking mode: Mode { name: "1920x1080", clock: 325670, size: (1920, 1080), hsync: (1944, 1976, 2056), vsync: (1083, 1088, 1100), hskew: 0, vscan: 0, vrefresh: 144, >
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.556828Z DEBUG niri::backend::tty: set max bpc to 8
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.568865Z DEBUG niri::niri: putting output DP-1 at x=0 y=-1080
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.568933Z DEBUG niri::backend::tty: connecting connector: eDP-1
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.569093Z DEBUG niri::backend::tty: picking mode: Mode { name: "2560x1440", clock: 496130, size: (2560, 1440), hsync: (2608, 2640, 2720), vsync: (1463, 1468, 1520), hskew: 0, vscan: 0, vrefresh: 120, >
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.569216Z DEBUG niri::backend::tty: set max bpc to 8
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.582735Z DEBUG niri::niri: putting output eDP-1 at x=0 y=0
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.582996Z DEBUG niri::backend::tty: device added: 57856 "/dev/dri/card0"
Dec 31 11:32:54 ThinkChad niri[10580]: MESA-LOADER: failed to retrieve device information
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.599271Z  WARN niri::backend::tty: error adding device: None of the following EGL extensions is supported by the underlying EGL implementation, at least one is required: ["EGL_EXT_device_drm"]
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.599289Z DEBUG niri::backend::tty: device added: 57858 "/dev/dri/card2"
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.606191Z  WARN niri::backend::tty: error adding device: None of the following EGL extensions is supported by the underlying EGL implementation, at least one is required: ["EGL_EXT_device_drm"]
Dec 31 11:32:54 ThinkChad niri[10580]: 2024-12-31T10:32:54.606224Z  INFO niri: listening on Wayland socket: wayland-1

I checked to see if there was an environment variable but my searching skills failed me.

Here's what shows up when I list the currently available cards:
ls /dev/dri/by-path/
pci-0000:00:02.0-card pci-0000:00:02.0-render platform-evdi.0-card platform-evdi.1-card

Thank you very much for reading.

System Information

  • niri version: niri 0.1.10-1 (unknown commit)
  • Distro: NixOS 24.11 Stable
  • GPU: Intel HD 620
  • CPU: Intel Core i5 8250
Keremimo added the bug label Dec 31, 2024
YaLTeR (Owner) commented Jan 1, 2025

Hm, not really sure. Do they work on cosmic-comp? If not, then could you open an issue in Smithay?

YaLTeR added the question label Jan 1, 2025
Keremimo (Author) commented Jan 1, 2025

I'll check if it works on Cosmic tomorrow, will let you know. Thanks!

cmeissl (Contributor) commented Jan 3, 2025

This will most likely require using a software renderer like the PixmanRenderer.

valpackett (Contributor) commented Jan 5, 2025

> This will most likely require using a software renderer like the PixmanRenderer.

Absolutely not, why would it? If you have a GPU, you can always render into an offscreen target for whatever purpose (including remote desktop sessions etc.) 🙃

With evdi that target itself is managed at the DRM level. So a DisplayLink setup is just kind of an inverted multi-GPU: instead of an extra render-only device you get an extra output-only device, and you need to use the primary GPU to render into the secondary's memory.

See mutter#615.

cmeissl (Contributor) commented Jan 6, 2025

>> This will most likely require using a software renderer like the PixmanRenderer.

> Absolutely not, why would it? If you have a GPU, you can always render into an offscreen target for whatever purpose (including remote desktop sessions etc.) 🙃

> With evdi that target itself is managed at the DRM level. So a DisplayLink setup is just kind of an inverted multi-GPU: instead of an extra render-only device you get an extra output-only device, and you need to use the primary GPU to render into the secondary's memory.

Yeah, no, this is not how things work. You cannot blindly assume that importing buffers from different subsystems or devices/drivers just works. The DisplayLink DRM device will probably require linear dumb buffers for scanning out the framebuffer. Rendering into dumb buffers with something like GLES is not guaranteed to work, so you need to be prepared to render into an offscreen buffer compatible with the render node and transfer the memory to the (dumb) buffer compatible with the DisplayLink device.

An important optimization of wayland compositors is to avoid re-drawing/copying stuff that did not change. This is typically done with some kind of damage tracking based on buffer ages. You definitely do not want to copy the whole buffer from the render node to the DisplayLink node every time some small part changes.
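To illustrate the buffer-age idea with a minimal, self-contained sketch (plain Rust with invented types; conceptual only, not smithay's actual damage-tracking API): a reused buffer whose content is `age` presented frames old only needs the union of the damage of the frames rendered since it was last used, on top of the current frame's own damage.

```rust
#[derive(Clone, Copy, Debug)]
struct Rect { x: i32, y: i32, w: i32, h: i32 }

/// Conceptual damage tracker: remembers the damage of recently presented
/// frames so that a reused buffer of a given "age" can be repaired by
/// redrawing only the regions that changed since it was last rendered to.
struct DamageTracker {
    /// Damage of previously presented frames, most recent first.
    history: Vec<Vec<Rect>>,
}

impl DamageTracker {
    fn new() -> Self { Self { history: Vec::new() } }

    /// Record the damage of the frame we just presented.
    fn commit(&mut self, frame_damage: Vec<Rect>) {
        self.history.insert(0, frame_damage);
        self.history.truncate(4); // keep roughly as many entries as the swapchain is deep
    }

    /// Extra regions (beyond the current frame's own damage) that must be
    /// redrawn into a buffer whose content is `age` presented frames old.
    /// `age == 0` means the content is undefined, so everything is redrawn.
    fn damage_for_age(&self, age: usize, full: Rect) -> Vec<Rect> {
        match age {
            0 => vec![full],
            n if n - 1 > self.history.len() => vec![full],
            n => self.history[..n - 1].iter().flatten().copied().collect(),
        }
    }
}
```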

smithay already provides an abstraction to deal with multi-gpu setups that tries to avoid copies between devices.
The multi-gpu renderer tries to find dma-buffers sharable between both nodes, falling back to a CPU copy (downloading pixels from the render GPU). It also makes sure to only transfer/copy damaged regions to the target framebuffer.

Transferring those damage rects from the offscreen render buffer to the target framebuffer is what would currently require using something like the PixmanRenderer in smithay. This might still allow using a shared dma-buf with a slightly more optimized transfer when we can map the render dmabuf directly instead of asking something like GLES to download the pixels for us. But there is no guarantee that we find a suitable format for this. To give an example, there are GPU drivers that might not allow rendering into a linear buffer, in which case we cannot access the buffer directly.

I did some tests in the past using dumb buffers and pixman for drawing client dmabufs with pretty good results (though I do not have the numbers available). So sometimes it might even be more performant to explicitly not use the GPU for composition. I'm pretty sure that fullscreen clients and shm clients would fall into this category.
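To make the damage-rect transfer concrete, here is a minimal sketch of the CPU fallback copy from a mapped offscreen render buffer into a mapped dumb buffer, restricted to the damaged regions (plain Rust over byte slices; the Rect type, strides and bytes-per-pixel are assumptions for the example, not smithay's API, which goes through pixman):

```rust
#[derive(Clone, Copy)]
struct Rect { x: usize, y: usize, w: usize, h: usize }

/// Copy only the rows covered by each damage rect from `src` (the offscreen
/// render buffer, already mapped for CPU reads) into `dst` (the mapped dumb
/// buffer used for scan-out). Both buffers are assumed to use the same pixel
/// format with the given strides in bytes.
fn copy_damage(
    src: &[u8], src_stride: usize,
    dst: &mut [u8], dst_stride: usize,
    bytes_per_pixel: usize,
    damage: &[Rect],
) {
    for rect in damage {
        for row in rect.y..rect.y + rect.h {
            let len = rect.w * bytes_per_pixel;
            let s = row * src_stride + rect.x * bytes_per_pixel;
            let d = row * dst_stride + rect.x * bytes_per_pixel;
            dst[d..d + len].copy_from_slice(&src[s..s + len]);
        }
    }
}
```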

> See mutter#615.

Yes, this is an optimization we should also do, but as said cannot rely on. There is even a linked issue with a still-open MR to disable it for some setups where you have two GPUs from the same vendor which are unable to share memory.

The whole multi-gpu/split kms topic is a bit brittle imo.

Ninlives commented

I encountered the same problem today; any progress/plans for this issue?

SergioRibera commented

Hi, I'm also having the same problem. Out of curiosity I was looking at the smithay examples, and running this example (in niri) shows me all the monitors I have in the terminal, even those that are connected via USB, which makes me think that this issue would probably be fixed along with this other one, #843.

kennylevinsen commented

> Yeah, no, this is not how things work. You cannot blindly assume that importing buffers from different subsystems or devices/drivers just works.

That's what atomic test commits are for.

> The DisplayLink DRM device will probably require linear dumb buffers

DisplayLink is implemented as a (somewhat weird) user-space driver. It should be able to import pretty much anything, and has no issue with GPU buffers.

In wlroots as of https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4824, if a secondary drm device does not have the ability to create a render target that could be blitted to, we fall back to just trying to scan out primary render buffers directly. Worst case, it doesn't work and commits will fail, which isn't an issue - a screen that doesn't work stays off.

For multi-gpu of any kind, we always restrict ourselves to linear buffers as modifiers are not guaranteed to be portable, but that's a quite minor theoretical performance hit relative to using CPU buffers.
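A rough sketch of that decision order (conceptual Rust; the type and probe names are invented for illustration and are not wlroots API): prefer a blit target local to the secondary device, otherwise attempt direct scan-out of the primary GPU's buffer and let the (test) commit decide whether the output lights up.

```rust
/// How a secondary, display-only DRM device (like evdi) could be driven,
/// in order of preference. All names here are hypothetical.
enum ScanoutPlan {
    /// Render on the primary GPU, blit into a buffer allocated on the
    /// secondary device, then scan that buffer out.
    BlitToLocalBuffer,
    /// Import the primary GPU's render buffer on the secondary device and
    /// scan it out directly.
    DirectFromPrimary,
    /// Nothing worked; the connector stays off.
    Off,
}

/// Hypothetical capability probes; in a real compositor these would be
/// allocator probes and atomic TEST_ONLY commits against the KMS device.
trait SecondaryDevice {
    fn can_allocate_blit_target(&self) -> bool;
    fn test_commit_with_primary_buffer(&self) -> bool;
}

fn plan_scanout(dev: &dyn SecondaryDevice) -> ScanoutPlan {
    if dev.can_allocate_blit_target() {
        ScanoutPlan::BlitToLocalBuffer
    } else if dev.test_commit_with_primary_buffer() {
        // Worst case the real commit later fails and the screen stays off,
        // which is no worse than not supporting the device at all.
        ScanoutPlan::DirectFromPrimary
    } else {
        ScanoutPlan::Off
    }
}
```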

cmeissl (Contributor) commented Mar 12, 2025

>> Yeah, no, this is not how things work. You cannot blindly assume that importing buffers from different subsystems or devices/drivers just works.

> That's what atomic test commits are for.

Yes, smithay uses atomic tests quite extensively. The whole composition, especially the direct scan-out path on the primary plane and overlay planes, is based on it.
The composition abstraction already allows handling allocation and rendering (actually also adding the framebuffer) separately. So in theory it should already be possible to try the optimized path; just someone will have to write that part and try it out.

>> The DisplayLink DRM device will probably require linear dumb buffers

> DisplayLink is implemented as a (somewhat weird) user-space driver. It should be able to import pretty much anything, and has no issue with GPU buffers.

> In wlroots as of https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4824, if a secondary drm device does not have the ability to create a render target that could be blitted to, we fall back to just trying to scan out primary render buffers directly. Worst case, it doesn't work and commits will fail, which isn't an issue - a screen that doesn't work stays off.

> For multi-gpu of any kind, we always restrict ourselves to linear buffers as modifiers are not guaranteed to be portable, but that's a quite minor theoretical performance hit relative to using CPU buffers.

smithay provides a multi-gpu abstraction, but took a slightly different route. It does not restrict itself to linear buffers.
The logic tries to avoid CPU copies, but if everything fails it falls back to a CPU transfer. AFAIK nvidia (the binary driver at least) does not support rendering into linear buffers. So for scenarios with dGPU rendering but iGPU scan-out, the abstraction will try to do transfers over dmabuf or ultimately fall back to a CPU copy.

So what I had in mind was to extend the abstractions with support for drawing into dumb buffers via pixman as a fallback path in the multi-gpu abstraction. Not for rendering directly, but for the fallback transfer from the primary GPU render target to the dumb buffer. smithay already provides a way to render across different backends with its multigpu::cross_render function. The rendering would still take place on the primary GPU in this case, including damage tracking for rendering, but also automatically for the CPU transfer.
The plan was/is to implement the fallback path first and then add the optimization.

I made some progress on the fallback path, but I am not able to dedicate as much time to it as I would like atm.

kennylevinsen commented

Fair enough, the reason I didn't care too much about the complexity of a dumb buffer fallback was that I didn't see many realistic scenarios where it would actually be needed, and just trying to do the direct scanout path was trivial and harmless in cases where it wouldn't work.

Doesn't mean it's as easy to do here, especially if the multi-gpu abstraction is external.

cmeissl (Contributor) commented Mar 15, 2025

> Fair enough, the reason I didn't care too much about the complexity of a dumb buffer fallback was that I didn't see many realistic scenarios where it would actually be needed, and just trying to do the direct scanout path was trivial and harmless in cases where it wouldn't work.

> Doesn't mean it's as easy to do here, especially if the multi-gpu abstraction is external.

Just to elaborate a bit on why I personally prefer the dumb buffer path; it seems Simon shares my view in your linked MR:

> 1. Implement mmap for DMA-BUFs so that the Pixman renderer can access the source buffer and blit to a DRM dumb buffer. This would be more generally useful when cross-device DMA-BUF imports fail (see !4055 and !4408) and would prevent DMA-BUFs from moving to main memory.

I already implemented this part in smithay a while ago while implementing a software renderer based on pixman (after writing a Rust wrapper for pixman). The renderer can be used in combination with a DumbAllocator, but also with a GbmAllocator, in which case it will mmap a Dmabuf created from the GbmBuffer. So it is possible to utilize direct scan-out using gbm import while having pixman composition as a fallback.
What smithay (still) lacks is an easy way to dynamically select a renderer and a multigpu backend at runtime.
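For reference, CPU access to a dmabuf as described above boils down to mmap-ing the buffer's fd and bracketing the access with DMA_BUF_IOCTL_SYNC. A minimal sketch (plain Rust with the libc crate; the ioctl number and flag values mirror my reading of linux/dma-buf.h and should be treated as assumptions, as should whether a given dmabuf supports mmap at all):

```rust
use std::os::fd::RawFd;

// Mirrors struct dma_buf_sync from <linux/dma-buf.h>.
#[repr(C)]
struct DmaBufSync { flags: u64 }

const DMA_BUF_SYNC_READ: u64 = 1 << 0;
const DMA_BUF_SYNC_START: u64 = 0 << 2;
const DMA_BUF_SYNC_END: u64 = 1 << 2;
// _IOW('b', 0, struct dma_buf_sync); value assumed, verify against the header.
const DMA_BUF_IOCTL_SYNC: libc::c_ulong = 0x4008_6200;

/// Map a DMA-BUF for CPU reads and hand the pixels to `f`, bracketing the
/// access with the required cache synchronization ioctls.
unsafe fn with_dmabuf_read(fd: RawFd, len: usize, f: impl FnOnce(&[u8])) {
    let ptr = libc::mmap(std::ptr::null_mut(), len, libc::PROT_READ, libc::MAP_SHARED, fd, 0);
    assert_ne!(ptr, libc::MAP_FAILED, "mmap on this dmabuf failed");

    let mut sync = DmaBufSync { flags: DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ };
    libc::ioctl(fd, DMA_BUF_IOCTL_SYNC, &mut sync as *mut DmaBufSync);

    f(std::slice::from_raw_parts(ptr as *const u8, len));

    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
    libc::ioctl(fd, DMA_BUF_IOCTL_SYNC, &mut sync as *mut DmaBufSync);

    libc::munmap(ptr, len);
}
```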

So what I am currently working on is:

This way we could solve two things: DisplayLink support, and finally having a software rendering fallback for environments without a GPU to render with.

(* The current multi-gpu abstraction already supports sharing a dmabuf across different targets and only falls back to a CPU transfer if this fails. Both paths fully support damage tracking.)

@kennylevinsen
Copy link

Just to be clear, I'm just sharing what we did in wlroots in case our experience could be useful, not saying you necessarily have to do it the same way.

> it seems Simon shares my view: [quote of suggested option 2, CPU rendering to GPU buffer]

Note that he follows up with:

> I think I'd actually be okay with [option 1, direct scan-out as implemented] because:

> • This only affects secondary KMS devices. A working primary device has been created already by that point. So users should already have another way to connect an output which works fine, and a secondary device failure won't prevent wlroots from starting up anyways. IOW: users aren't actually locked in a broken state.
> • As noted in the commit message, output commits can fail for any reason already (e.g. bandwidth limitations).
> • Trying to bypass the multi-GPU blit is something we'd want to do in the future even if we've initialized a multi-GPU renderer. Some multi-GPU setups are capable of directly scanning out foreign buffers (see !4055).

It was trivial for us to do, enabled the "end-goal" of direct scan-out for DisplayLink/GUD immediately, and works in what I believe is all real-world use-cases. In the unlikely event that someone should have a setup where it cannot work, they are no worse off than before.

(Even if nvidia cannot render linear, DisplayLink is geared for laptop use where the primary or sole renderer is an iGPU.)

Long term, we'd probably want to rework mgpu so that the compositor can pick which renderer each output should use, rather than having it be magic logic within the drm backend as it is now. Then the compositor can integrate buffer sources into its output configuration logic, to be tested the same as it tests other configuration like formats, modifiers, refresh rates, and so forth. With that, pixman could be tried as a last resort, although the priority would be to allow rendering each output with its local GPU or even dynamically switching for power/performance.
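Sketched out, that kind of per-output selection could look like this (conceptual Rust; the enum and probe callback are hypothetical, not wlroots or smithay API): the compositor walks a priority list of frame sources and keeps the first one that passes its tests, just like it already does for modes and formats.

```rust
/// Candidate ways to produce frames for one output, tried in order of
/// preference. All names are hypothetical.
#[derive(Debug)]
enum FrameSource {
    LocalGpu,         // render on the GPU the output is attached to
    PrimaryGpuBlit,   // render on the primary GPU, blit to the output's device
    PrimaryGpuDirect, // render on the primary GPU, scan its buffer out directly
    CpuPixman,        // software rendering as a last resort
}

/// Pick the first source that passes a test, the same way a compositor
/// already probes modes, formats and modifiers for an output.
fn configure_output<'a>(
    candidates: &'a [FrameSource],
    test: impl Fn(&FrameSource) -> bool, // e.g. allocation probe + atomic TEST_ONLY commit
) -> Option<&'a FrameSource> {
    candidates.iter().find(|&c| test(c))
}
```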

cmeissl (Contributor) commented Mar 16, 2025

> Just to be clear, I'm just sharing what we did in wlroots in case our experience could be useful, not saying you necessarily have to do it the same way.

I really appreciate your input and did not take it as criticism. You and everyone putting so much effort into the ecosystem have my full respect. I did not intend to be mean, so if that is how it sounded I am deeply sorry. I am also here to improve the wayland ecosystem as well as I can and really enjoy the technical discussions.

> it seems Simon shares my view: [quote of suggested option 2, CPU rendering to GPU buffer]

> Note that he follows up with:

> I think I'd actually be okay with [option 1, direct scan-out as implemented] because:

> • This only affects secondary KMS devices. A working primary device has been created already by that point. So users should already have another way to connect an output which works fine, and a secondary device failure won't prevent wlroots from starting up anyways. IOW: users aren't actually locked in a broken state.
> • As noted in the commit message, output commits can fail for any reason already (e.g. bandwidth limitations).
> • Trying to bypass the multi-GPU blit is something we'd want to do in the future even if we've initialized a multi-GPU renderer. Some multi-GPU setups are capable of directly scanning out foreign buffers (see !4055).

> It was trivial for us to do, enabled the "end-goal" of direct scan-out for DisplayLink/GUD immediately, and works in what I believe is all real-world use-cases. In the unlikely event that someone should have a setup where it cannot work, they are no worse off than before.
> (Even if nvidia cannot render linear, DisplayLink is geared for laptop use where the primary or sole renderer is an iGPU.)

Fair, and I fully agree that having a solution for the majority of users is a solid start. Our exchange made me curious whether we can do the same in smithay with some simple changes. I will open a PR a bit later that should work similarly to the work you did in wlroots.

> Long term, we'd probably want to rework mgpu so that the compositor can pick which renderer each output should use, rather than having it be magic logic within the drm backend as it is now. Then the compositor can integrate buffer sources into its output configuration logic, to be tested the same as it tests other configuration like formats, modifiers, refresh rates, and so forth. With that, pixman could be tried as a last resort, although the priority would be to allow rendering each output with its local GPU or even dynamically switching for power/performance.

This is something already possible today with smithay, though iirc only Cosmic makes use of it.

cmeissl linked a pull request Mar 16, 2025 that will close this issue
cmeissl (Contributor) commented Mar 16, 2025

Ok, #1281 should implement what is necessary to fall back to the primary GPU for rendering for display-only devices. Unfortunately I lack DisplayLink hardware, so I cannot test this. It would be great if anyone with this issue could give it a try.

kennylevinsen commented

I pulled out my old test DisplayLink dock and gave it a spin, and it works.

weston-simple-egl experienced a bit of weird tearing in the animation; not sure what that was about, but I didn't look into it further. No guarantee that it isn't a driver bug after all.

cmeissl (Contributor) commented Mar 17, 2025

> I pulled out my old test DisplayLink dock and gave it a spin, and it works.

Thanks, really appreciate your help!

> weston-simple-egl experienced a bit of weird tearing in the animation; not sure what that was about, but I didn't look into it further. No guarantee that it isn't a driver bug after all.

Sounds like some kind of synchronization issue.
The way I implemented it does not use the same dmabuf for rendering and for import.
But we pass the native EGL fence to DRM in case the device supports sync objects, and wait on the CPU side otherwise. So rendering on the primary plane should not tear. Though it should be easy to change that and use the same dmabuf. Do you think this is worth trying out?

Overlay planes are disabled by default in niri, so I would not expect it to be caused by missing explicit sync on overlay planes.

What gpu/driver was in use as the primary gpu?


kennylevinsen commented Mar 18, 2025

I was testing with amdgpu (Radeon 780M in a Ryzen 7950HS laptop chip), and a display running at 4k@30Hz. Entry in drmdb for the evdi device during sway test here: https://drmdb.emersion.fr/snapshots/bfd5c0265d05 - as you can see it's quite... limited.

I did a quick test with sway, and the issue also happens there. We use a single dmabuf for both rendering and kms, so I don't think there's any need to test that in niri.

One thing of interest is that weston-simple-egl is reporting 120fps or so when running on the DisplayLink display even though the output is running at 30Hz, suggesting that evdi/displaylink is not correctly throttling pageflips and might be releasing buffers back to the display server prematurely. I'm tempted to just write this off as a displaylink driver bug not worth more effort for now.

A few extra observations:

  1. The tearing is a bit odd, as it happens in rows of maybe 64 pixels or so, with a number of such rows alternating between old vs. new frame. Not what you'd usually expect from tearing between scanout buffers for example.
  2. Larger surfaces are more affected than smaller ones. A full- or half-screen weston-simple-egl experiences it quite a bit, a quarter screen window only sees it occasionally.
  3. A full-screen youtube video only seems to experience an occasional single tear line, suggesting that either the frame rate, or maybe the frame difference in the case of compression and bandwidth limits, exacerbates the issue.

(Note that my test dock is a 10-year-old Dell thing I keep around from a previous job for testing. If the issue is bandwidth, it could be that a newer dock or a lower resolution wouldn't experience this.)

cmeissl (Contributor) commented Mar 19, 2025

> I did a quick test with sway, and the issue also happens there. We use a single dmabuf for both rendering and kms, so I don't think there's any need to test that in niri.

Okay, thanks!

> One thing of interest is that weston-simple-egl is reporting 120fps or so when running on the DisplayLink display even though the output is running at 30Hz, suggesting that evdi/displaylink is not correctly throttling pageflips and might be releasing buffers back to the display server prematurely. I'm tempted to just write this off as a displaylink driver bug not worth more effort for now.

> A few extra observations:

> 1. The tearing is a bit odd, as it happens in rows of maybe 64 pixels or so, with a number of such rows alternating between old vs. new frame. Not what you'd usually expect from tearing between scanout buffers for example.

> 2. Larger surfaces are more affected than smaller ones. A full- or half-screen `weston-simple-egl` experiences it quite a bit, a quarter screen window only sees it occasionally.

> 3. A full-screen youtube video only seems to experience an occasional single tear line, suggesting that either the frame rate, or maybe the frame difference in the case of compression and bandwidth limits, exacerbates the issue.

> (Note that my test dock is a 10-year-old Dell thing I keep around from a previous job for testing. If the issue is bandwidth, it could be that a newer dock or a lower resolution wouldn't experience this.)

I just bought a simple DisplayLink-to-HDMI adapter and can observe the same with an Intel iGPU. I only had a quick look at the evdi driver and I can't see the IRQ handler I would have expected. The vblank is sent as part of the CRTC atomic flush; I'm not sure this is handled correctly. Other drivers seem to either use an IRQ, a timer, or the *_arm_* version of the vblank function.

kennylevinsen commented

Seems like evdi roughly works like this:

  1. On plane update, it stores the buffer and damage, and on atomic flush, it stores a vblank event to use later. Note that each individual store takes and releases the painter lock. The user-space driver is notified of "update ready".
  2. At some point, the user-space driver calls the EVDI_GRABPIX ioctl with a CPU buffer equal to or larger than the display mode in size.
  3. The dirty rects, vblank and framebuffer are read, and the dirty rects and vblank are cleared, all under a single painter lock.
  4. evdi does a CPU copy from the primary and cursor buffers to the target buffer according to the damage regions. It does so using its own hand-rolled copy_to_user loops instead of drm_fb_memcpy though.
  5. When the copy is complete, evdi fires the vblank event.

A few notes:

  1. Commit and read are racy, as stores are not done atomically/under a single lock, but applied directly to the evdi_painter gradually as the commit progresses. A call to EVDI_GRABPIX at any given time might see e.g. a new scanout buffer but old damage or vblank event. Could definitely lead to issues, although I'm not sure if that's the issue we see.
  2. There is no throttling on the KMS side, with commits returning immediately. If you commit faster than EVDI_GRABPIX is called, the previous vblank event will be fired immediately to make room to store a new one. I don't remember if the atomic helpers themselves do any sort of limiting for you.
  3. If the user-space driver does not itself synchronize its EVDI_GRABPIX calls to the refresh rate of the display, the result will be redundant CPU copies and vblank events as fast as it can go. With this in mind, the 120fps is probably the fault of the displaylink userspace driver itself.
  4. They're not using the drm_fb memcpy and map helpers it seems, instead rolling their own code. Also, #ifdef hell because it's an out-of-tree driver supporting many kernel versions. :/

cmeissl (Contributor) commented Mar 21, 2025

Thanks for the analysis!

Some observations from looking through the code.

The driver specifically requests vblank support by calling drm_vblank_init(dev, 1), which afaict disables the automatic fake vblank generation drm implements for drivers without vblank interrupts (originally implemented for writeback).

The kernel docs state:

> Drivers must initialize the vertical blanking handling core with a call to
> drm_vblank_init(). Minimally, a driver needs to implement
> &drm_crtc_funcs.enable_vblank and &drm_crtc_funcs.disable_vblank plus call
> drm_crtc_handle_vblank() in its vblank interrupt handler for working vblank
> support.

But the driver always returns 1 from its enable_vblank implementation in evdi_enable_vblank, which afaict indicates an error.
There is also no reference to drm_crtc_handle_vblank at all in the evdi driver.


This also aligns with a trace log I generated recording the timestamps and sequence counters received in user-space. Both are always zero...

cmeissl (Contributor) commented Mar 21, 2025

I just patched the driver to return success from the enable-vblank function and to call drm_crtc_handle_vblank as part of the vblank handling. While that does not solve the framerate issue (which I expected), it does indeed populate the sequence and monotonic timestamp.

cmeissl (Contributor) commented Mar 22, 2025

Okay, I just pushed a commit to the PR implementing a simple strategy to throttle the vblanks. I mainly implemented it to verify whether this at least makes the tearing go away. So far it seems to work as intended; even fullscreen direct scan-out video playback seems to work.
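For context, a minimal sketch of what such throttling can look like on the compositor side (plain Rust with std; a conceptual illustration, not the actual code in the PR): hold back the next frame until at least one nominal refresh interval has passed since the last presentation.

```rust
use std::time::{Duration, Instant};

/// Throttle frame submissions to the output's nominal refresh rate when the
/// driver delivers vblank events faster than the display can actually show
/// frames (as observed with evdi reporting ~120 fps on a 30 Hz output).
struct VblankThrottle {
    refresh: Duration,
    last_presented: Option<Instant>,
}

impl VblankThrottle {
    fn new(refresh_hz: f64) -> Self {
        Self { refresh: Duration::from_secs_f64(1.0 / refresh_hz), last_presented: None }
    }

    /// How long to wait before submitting the next frame; zero if a full
    /// refresh interval has already elapsed (in which case the submission
    /// time is recorded as the new presentation point).
    fn delay_until_next_frame(&mut self, now: Instant) -> Duration {
        match self.last_presented {
            Some(t) if now < t + self.refresh => (t + self.refresh) - now,
            _ => {
                self.last_presented = Some(now);
                Duration::ZERO
            }
        }
    }
}
```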
