This repository has been archived by the owner on Nov 1, 2021. It is now read-only.

Add explicit synchronization, with timelines #3282

Draft
wants to merge 10 commits into master from explicit-sync-timeline

Conversation

@emersion (Member) commented Oct 20, 2021

This PR implements explicit synchronization. Unlike previous attempts, it uses a timeline abstraction based on drm_syncobj, similar to Vulkan's timeline semaphores.

To interoperate with other APIs (KMS, EGL, linux-explicit-synchronization-v1), timeline points can be converted from/to sync_file FDs.

Test with: build/examples/explicit-sync -s weston-simple-dmabuf-egl. Use WAYLAND_DEBUG=server to check that fences are exchanged. To test the DMA-BUF sync_file extraction, use weston-simple-egl as the test client (requires a kernel patch).
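For readers unfamiliar with the primitives involved, here is a minimal sketch (not code from this PR) of how a timeline point can be converted to and from a sync_file FD with libdrm's drm_syncobj helpers. The drm_fd parameter is assumed to be an open DRM device FD; a temporary binary syncobj is used as a staging area for the conversion.

```c
#include <stdint.h>
#include <xf86drm.h>

/* Export the fence materialized at `point` on `timeline` as a sync_file FD. */
static int timeline_point_to_sync_file(int drm_fd, uint32_t timeline,
		uint64_t point, int *sync_file_fd) {
	uint32_t tmp; /* binary syncobj used as a conversion staging area */
	if (drmSyncobjCreate(drm_fd, 0, &tmp) != 0) {
		return -1;
	}
	/* Copy the fence at (timeline, point) into the binary syncobj... */
	if (drmSyncobjTransfer(drm_fd, tmp, 0, timeline, point, 0) != 0 ||
			/* ...then export it as a sync_file. */
			drmSyncobjExportSyncFile(drm_fd, tmp, sync_file_fd) != 0) {
		drmSyncobjDestroy(drm_fd, tmp);
		return -1;
	}
	drmSyncobjDestroy(drm_fd, tmp);
	return 0;
}

/* Import a sync_file FD as point `point` on `timeline`. */
static int sync_file_to_timeline_point(int drm_fd, uint32_t timeline,
		uint64_t point, int sync_file_fd) {
	uint32_t tmp;
	if (drmSyncobjCreate(drm_fd, 0, &tmp) != 0) {
		return -1;
	}
	if (drmSyncobjImportSyncFile(drm_fd, tmp, sync_file_fd) != 0 ||
			drmSyncobjTransfer(drm_fd, timeline, point, tmp, 0, 0) != 0) {
		drmSyncobjDestroy(drm_fd, tmp);
		return -1;
	}
	drmSyncobjDestroy(drm_fd, tmp);
	return 0;
}
```

KMS, EGL and linux-explicit-synchronization-v1 all speak sync_file, which is why this conversion is the interop point mentioned above.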

TODO:

  • Write an example compositor
  • Extract sync_file from DMA-BUF when client doesn't support linux-explicit-synchronization-v1
  • Add wlr_texture upload synchronization
  • Figure out if this API is okay for Vulkan
  • Detect support in backends and renderers
  • Add support for cached state to linux-explicit-synchronization-v1
  • Documentation
  • Sanity checks (e.g. in wlr_output_commit)
  • Consider ref'counting wlr_render_timeline
  • Output cursors: let's wait for output layers before tackling this

Previous work:

Future work:

  • Multi-GPU support in the DRM backend
  • Wayland backend support

@emersion emersion force-pushed the explicit-sync-timeline branch 2 times, most recently from aecdc8e to d68dfb4 on October 20, 2021 at 20:34
@nyorain (Contributor) commented Oct 20, 2021

At first glance, the API should work for Vulkan, but we need some additional guarantees/documentation. How does a renderer know whether implicit or explicit sync is used for a render buffer? Is a single call to signal_timeline enough to inform the renderer that rendering does not have to be finished by the time renderer_end returns? This is important for Vulkan. I would prefer an explicit parameter to renderer_begin or renderer_end to select the sync mode.
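For illustration only, a hypothetical sketch of the kind of explicit opt-in being described here; none of these names exist in wlroots' actual API.

```c
#include <stdbool.h>
#include <stdint.h>

struct wlr_renderer; /* opaque, as in wlroots */

/* Hypothetical API sketch only. */
enum wlr_render_sync_mode {
	WLR_RENDER_SYNC_IMPLICIT, /* rendering must be finished when end() returns */
	WLR_RENDER_SYNC_EXPLICIT, /* caller waits on a signaled timeline point instead */
};

/* A renderer_begin variant that states up front which sync mode the caller
 * intends to use, so e.g. a Vulkan renderer can skip its end-of-frame wait. */
bool wlr_renderer_begin_synced(struct wlr_renderer *renderer,
	uint32_t width, uint32_t height, enum wlr_render_sync_mode mode);
```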

@emersion (Member, Author) commented

Added some documentation.

@jbeich, what's the correct way to grab <linux/dma-buf.h> on FreeBSD? It's this file in drm-kmod: https://github.com/freebsd/drm-kmod/blob/master/linuxkpi/gplv2/include/uapi/linux/dma-buf.h
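For context on why that header matters: the DMA-BUF sync_file extraction mentioned in the description relies on an ioctl that, at the time of writing, still requires a kernel patch. A minimal sketch follows, with the struct and ioctl number defined locally when the installed <linux/dma-buf.h> lacks them (the exact layout and number here follow my understanding of the patch and should be treated as an assumption); defining them locally would also sidestep the FreeBSD header question.

```c
#include <stdint.h>
#include <sys/ioctl.h>

#if !defined(DMA_BUF_IOCTL_EXPORT_SYNC_FILE)
struct dma_buf_export_sync_file {
	uint32_t flags; /* intended access: DMA_BUF_SYNC_READ and/or DMA_BUF_SYNC_WRITE */
	int32_t fd;     /* out: sync_file FD */
};
#define DMA_BUF_SYNC_READ (1 << 0)
#define DMA_BUF_SYNC_WRITE (2 << 0)
#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE \
	_IOWR('b', 2, struct dma_buf_export_sync_file)
#endif

/* Returns a sync_file FD that signals once pending writes to the DMA-BUF
 * complete (what a compositor needs before sampling a client buffer),
 * or -1 on error. */
static int dmabuf_export_sync_file(int dmabuf_fd) {
	struct dma_buf_export_sync_file arg = {
		.flags = DMA_BUF_SYNC_READ,
		.fd = -1,
	};
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) != 0) {
		return -1;
	}
	return arg.fd;
}
```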

@emersion emersion force-pushed the explicit-sync-timeline branch 2 times, most recently from 6524855 to 5c8db60 on October 21, 2021 at 11:48
This patch adds support for the
linux-explicit-synchronization-unstable-v1 protocol.

To test, run weston-simple-dmabuf-egl.
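For reference, a minimal client-side sketch of the protocol flow this commit adds compositor support for; weston-simple-dmabuf-egl exercises essentially this path, which is why it is the suggested test client. Generated protocol headers are assumed (the header name depends on the build setup), and surf_sync is assumed to have been created once per surface via zwp_linux_explicit_synchronization_v1_get_synchronization().

```c
#include <unistd.h>
#include <wayland-client.h>
#include "linux-explicit-synchronization-unstable-v1-client-protocol.h"

static void handle_fenced_release(void *data,
		struct zwp_linux_buffer_release_v1 *release, int32_t fence) {
	/* The buffer may be reused once this fence FD signals. */
	zwp_linux_buffer_release_v1_destroy(release);
	close(fence);
}

static void handle_immediate_release(void *data,
		struct zwp_linux_buffer_release_v1 *release) {
	/* The buffer may be reused right away. */
	zwp_linux_buffer_release_v1_destroy(release);
}

static const struct zwp_linux_buffer_release_v1_listener release_listener = {
	.fenced_release = handle_fenced_release,
	.immediate_release = handle_immediate_release,
};

static void commit_with_explicit_sync(
		struct zwp_linux_surface_synchronization_v1 *surf_sync,
		struct wl_surface *surface, struct wl_buffer *buffer,
		int acquire_fence_fd) {
	wl_surface_attach(surface, buffer, 0, 0);
	/* Ask the compositor to wait for this fence before reading the buffer. */
	zwp_linux_surface_synchronization_v1_set_acquire_fence(surf_sync,
		acquire_fence_fd);
	/* Request a per-commit release object instead of wl_buffer.release. */
	struct zwp_linux_buffer_release_v1 *release =
		zwp_linux_surface_synchronization_v1_get_release(surf_sync);
	zwp_linux_buffer_release_v1_add_listener(release, &release_listener, NULL);
	wl_surface_commit(surface);
}
```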
@jbeich (Contributor) commented Oct 21, 2021

> @jbeich, what's the correct way to grab <linux/dma-buf.h> on FreeBSD? It's this file in drm-kmod: https://github.com/freebsd/drm-kmod/blob/master/linuxkpi/gplv2/include/uapi/linux/dma-buf.h

Until https://reviews.freebsd.org/D23085 or https://github.com/evadot/drm-subtree lands, probably by making a private copy of the uapi headers (à la Mesa) or of their contents.

@evadot may know more.

@jbeich (Contributor) commented Oct 21, 2021

See also FreeBSDDesktop/kms-drm#156

@evadot commented Oct 21, 2021

> > @jbeich, what's the correct way to grab <linux/dma-buf.h> on FreeBSD? It's this file in drm-kmod: https://github.com/freebsd/drm-kmod/blob/master/linuxkpi/gplv2/include/uapi/linux/dma-buf.h
>
> Until https://reviews.freebsd.org/D23085 or https://github.com/evadot/drm-subtree lands, probably by making a private copy of the uapi headers (à la Mesa) or of their contents.
>
> @evadot may know more.

Yeah, we don't have a good way to provide them for now, so having a copy would be easier for us.

@emersion (Member, Author) commented Nov 1, 2021

wlroots has migrated to gitlab.freedesktop.org. This pull request has been moved to:

https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/3282
