
Remove image on buffer destroy callback only #178

Merged
merged 1 commit into from
Feb 16, 2023

Conversation

psaavedra
Member

@psaavedra psaavedra commented Dec 1, 2022

view-backend-exportable-fdo-egl.cpp relies on the bufferDestroyListenerCallback for destroying the image. This callback is invoked during wl_resource_destroy() to release the image.

Related-to: #175 #176

@psaavedra psaavedra added the bug Something isn't working label Dec 1, 2022
@psaavedra psaavedra self-assigned this Dec 1, 2022
@psaavedra psaavedra force-pushed the psaavedra_remove_exported_flag branch from 1f24536 to 5c032f7 Compare December 1, 2022 14:40
@psaavedra
Member Author

Valgrind's traces on #175

==17==  Address 0x1fac1fd0 is 0 bytes inside a block of size 56 free'd
==17==    at 0x48438AF: operator delete(void*) (vg_replace_malloc.c:923)
==17==    by 0x1A8674F5: wl_priv_signal_final_emit (wayland-server.c:2221)
==17==    by 0x1A8674F5: destroy_resource (wayland-server.c:724)
==17==    by 0x1A867C24: wl_resource_destroy (wayland-server.c:744)
==17==    by 0x1A099FE4: ffi_call_unix64 (unix64.S:101)
==17==    by 0x1A0993F5: ffi_call_int (ffi64.c:669)
==17==    by 0x1A86D2A1: wl_closure_invoke (connection.c:1025)
==17==    by 0x1A868216: wl_client_connection_data (wayland-server.c:437)
==17==    by 0x1A86B019: wl_event_loop_dispatch (event-loop.c:1027)

shows that the first deletion happens during the destruction of the wl_client, because of a problem in wl_client_connection_data().

In

static int
wl_client_connection_data(int fd, uint32_t mask, void *data)
{
...
        if ((resource_flags & WL_MAP_ENTRY_LEGACY) ||
            resource->dispatcher == NULL) {
            wl_closure_invoke(closure, WL_CLOSURE_INVOKE_SERVER,
                      object, opcode, client);

this part of the code is reached when resource->dispatcher == NULL, so probably this is because the peer on the channel is already gone at that point and Wayland decides to call wl_closure_invoke().

@psaavedra psaavedra changed the title Image is remove on buffer destroy callback only Image is removed on buffer destroy callback only Dec 1, 2022
@psaavedra psaavedra changed the title Image is removed on buffer destroy callback only Remove image on buffer destroy callback only Dec 1, 2022
view-backend-exportable-fdo-egl.cpp relies on the
bufferDestroyListenerCallback for destroying the image. This callback
is invoked during wl_resource_destroy() to release the image.

Related-to: #175 #176
@psaavedra psaavedra force-pushed the psaavedra_remove_exported_flag branch from 5c032f7 to d4ab8d8 Compare December 1, 2022 16:33
@aperezdc
Member

I can take a look at this later today, but @carlosgcampos and/or @zdobersek may want to comment here as well.

@alexgcastro
Collaborator

We are already reviewing the patch; hopefully we will have a resolution this week. Without a way to reproduce the original problems, it took some time to check the lifecycle of the object, given the image structure's position between Wayland and the backend.

Collaborator

@alexgcastro alexgcastro left a comment


The code apparently missed the situation where the Wayland destruction and the releaseImage happen at the same time. The places in the code that check whether bufferResource is NULL do the work, but that was not enough, which explains the previous patches. The code still needs a better name and an explicit check for that situation, but this patch is in the right direction. We have to handle the image in the destruction path and avoid any operation when the bufferResource is gone. The option previously implemented was to keep the image and hope for the best.

It is a pity we cannot reproduce the original Valgrind stacks and crashes to verify the whole situation, though.

Thanks for the patch, hopefully we can refactor this code at some point.

@aperezdc
Member

Thanks a lot @alexgcastro! 🤗

@aperezdc aperezdc merged commit cb6b86a into master Feb 16, 2023
@aperezdc aperezdc deleted the psaavedra_remove_exported_flag branch February 16, 2023 14:15
@aperezdc aperezdc added the merge:wpebackend-fdo-1.14 PR should be merged to the 1.14 branch as well label Feb 16, 2023
@aperezdc
Member

Cherry-picked into the wpebackend-fdo-1.14 branch as commit 0d6a75a.

clopez added a commit to clopez/WPEBackend-fdo that referenced this pull request Jun 25, 2024
…view-backend-exportable-fdo-egl

Since commit b51f539 there has been a memory leak each time the wl_resource is destroyed.
This can be easily reproduced by repeatedly switching full-screen on/off
(pressing the F11 key) with Cog on Weston.

The memory leak is caused because, since b51f539, the wpe_fdo_egl_exported_image object
is no longer cleaned up in the bufferDestroyListenerCallback callback.

Commit cb6b86a fixed the leak but introduced crashes in some cases, so it was reverted.

This is a new attempt at fixing this leak; it adds safeguards to ensure that the
image object is not cleaned up twice or with the wrong exported status.

Related-to: Igalia#73 Igalia#175 Igalia#176 Igalia#178 #538
clopez added a commit to clopez/WPEBackend-fdo that referenced this pull request Jun 25, 2024
…view-backend-exportable-fdo-egl

Since commit b51f539 there has been a memory leak each time the wl_resource is destroyed.
This can be easily reproduced by repeatedly switching full-screen on/off
(pressing the F11 key) with Cog on Weston.

The memory leak is caused because, since b51f539, the wpe_fdo_egl_exported_image object
is no longer cleaned up in the bufferDestroyListenerCallback callback.

Commit cb6b86a fixed the leak but introduced crashes in some cases, so it was reverted.

This is a new attempt at fixing this leak; it adds safeguards to ensure that the
image object is not cleaned up twice or with the wrong exported status.

Related-to: Igalia#73 Igalia#175 Igalia#176 Igalia#178
Related-to: Igalia/cog#538
clopez added a commit that referenced this pull request Sep 11, 2024
…view-backend-exportable-fdo-egl

Since commit b51f539 there has been a memory leak each time the wl_resource is destroyed.
This can be easily reproduced by repeatedly switching full-screen on/off
(pressing the F11 key) with Cog on Weston.

The memory leak is caused because, since b51f539, the wpe_fdo_egl_exported_image object
is no longer cleaned up in the bufferDestroyListenerCallback callback.

Commit cb6b86a fixed the leak but introduced crashes in some cases, so it was reverted.

This is a new attempt at fixing this leak; it adds safeguards to ensure that the
image object is not cleaned up twice or with the wrong exported status.

Related-to: #73 #175 #176 #178
Related-to: Igalia/cog#538
aperezdc pushed a commit that referenced this pull request Sep 11, 2024
…view-backend-exportable-fdo-egl

Since commit b51f539 there has been a memory leak each time the wl_resource is destroyed.
This can be easily reproduced by repeatedly switching full-screen on/off
(pressing the F11 key) with Cog on Weston.

The memory leak is caused because, since b51f539, the wpe_fdo_egl_exported_image object
is no longer cleaned up in the bufferDestroyListenerCallback callback.

Commit cb6b86a fixed the leak but introduced crashes in some cases, so it was reverted.

This is a new attempt at fixing this leak; it adds safeguards to ensure that the
image object is not cleaned up twice or with the wrong exported status.

Related-to: #73 #175 #176 #178
Related-to: Igalia/cog#538
(cherry picked from commit 5b1c5e4)