Item editing becomes slow when there are lots of items #2374
Comments
Since std is used in this project, std::unordered_map can be used; it is basically a hash table. A simple patch to use it for GetItem and GetItemRef (axeldavy@d7125ca) drastically improves performance.
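For illustration, a minimal sketch of what such a uuid-to-item index could look like. This is not the actual patch (see axeldavy@d7125ca for that); the names `ItemLookup`, the `uint64_t` uuid type, and the use of `shared_ptr` are all assumptions:

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>

struct mvAppItem;  // DearPyGui's item base class (forward declaration)

// Hypothetical registry kept alongside the item tree.
class ItemLookup
{
public:
    // Maintain the index as items are created and destroyed.
    void OnCreate(uint64_t uuid, std::shared_ptr<mvAppItem> item)
    {
        _items[uuid] = std::move(item);
    }

    void OnDelete(uint64_t uuid) { _items.erase(uuid); }

    // Average O(1) lookup instead of a linear walk over the whole item tree.
    std::shared_ptr<mvAppItem> GetItem(uint64_t uuid) const
    {
        auto it = _items.find(uuid);
        return it == _items.end() ? nullptr : it->second;
    }

private:
    std::unordered_map<uint64_t, std::shared_ptr<mvAppItem>> _items;
};
```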
Now ideally DeleteItem should be improved as well; I guess we must find a fast way to get the parent of an item.
You're retracing my steps 🤣 and sometimes even going faster than me 👍. Like, a hashtable-based…
Please don't rush to fix all synchronization and refcount issues; wait until I publish my fixes 😂 no need for duplicate effort ;).
Using std::unordered_map seems like the easiest way to get a hashtable-based GetItem. And as you can see in my log, even with more than a million elements it's several times faster than the old code with 1000 elements. Also, apparently it's easy to get the parent of an item, so DeleteItem comes naturally too (DeleteChild on the parent). I'll try to wait for the synchronization and refcount fixes. Do you have a public branch I could pull? EDIT: I also observe a significant performance boost in the framerate.
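A hedged sketch of that parent-based deletion, under the assumption that each item can reach its parent and that container items own their children. Names are illustrative, not the actual DearPyGui API; a real version would also have to unregister the whole deleted subtree from the index:

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <vector>

struct mvAppItem
{
    uint64_t uuid = 0;
    mvAppItem* parent = nullptr;
    std::vector<std::unique_ptr<mvAppItem>> children;

    // Unlink and destroy one child; only this item's children are scanned.
    void DeleteChild(uint64_t childUuid)
    {
        for (auto it = children.begin(); it != children.end(); ++it)
            if ((*it)->uuid == childUuid) { children.erase(it); return; }
    }
};

std::unordered_map<uint64_t, mvAppItem*> g_itemsByUuid;  // uuid -> item

bool DeleteItem(uint64_t uuid)
{
    auto it = g_itemsByUuid.find(uuid);
    if (it == g_itemsByUuid.end())
        return false;

    mvAppItem* item = it->second;
    g_itemsByUuid.erase(it);              // drop from the index first

    if (item->parent)
        item->parent->DeleteChild(uuid);  // O(siblings), not O(all items)
    return true;
}
```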
Unfortunately I don't have a public branch, because my local build contains components specific to my project, which I can't share. I typically cherry-pick commits into my fork https://github.com/v-ein/DearPyGui-fixes right before opening a PR, but this time I want to get #2275 merged first and then pull it into my repo, because the new code might have similar sync issues and I'd prefer to re-check it first.
In that case I would appreciate some hints as to what is causing the current hangs. Even if I do not release completely, in order to avoid the GIL in the mvAppItem destructor, I get hangs. I have a lot of hangs, so some temporary workarounds would be helpful.
Not sure what "do not release completely" means; do you mean you're not deleting items?
Like deadlocks in #2053? Exactly when do they happen in your app? The problem with the current version of DPG is that you don't have to do anything special... just use DPG from a worker thread and you'll get a chance for a deadlock. I don't think there are any workarounds on the Python side of things; this has to be fixed in C++.
Just a thought: you could try increasing sys.setswitchinterval and see if that makes things better.
Version of Dear PyGui
Version: 1.11.1
Operating System: Arch Linux
My Issue/Question
Once a significant number of items has been created, editing a large number of items becomes increasingly slow.
To Reproduce
The code to reproduce the issue increases the number of items linearly. All these items are hidden and encapsulated inside a draw layer, so the frame time is not affected much. During item creation, the creation speed remains stable; item editing, however, becomes very slow.
I encountered this bug because I wanted to edit the thickness of some custom drawing items in a plot in response to zoom, and noticed it became quadratically slow with the number of elements drawn (I need to edit the thickness of more items, and each edit is linearly slower). The proper solution for my thickness issue is to add a configuration option to not scale the thickness with the plot zoom level (DearPyGui scales the thickness by ImPlot->Mx, and I'm basically inverting that). But regardless of my individual issue, the example above shows that it quickly becomes impossible to edit item configuration in real time.
On my computer, editing the configuration of 1000 elements already falls out of real time when there are only 2000 elements in total. As soon as there are 6000 elements, it takes more than 100 ms.
This editing slowdown also shows up when deleting items: deleting a draw layer with many elements is orders of magnitude faster than deleting all of its elements individually.
Expected behavior
The slowdown is likely due to the UUID search, which appears to walk linearly through the whole item graph until the UUID is found.
Creation time is not affected much by the slowdown because the UUID search keeps a cache of 25 elements; when editing or deleting elements, though, this cache no longer helps.
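To illustrate the described behaviour, here is a minimal model. It is not the actual DearPyGui code: SearchTreeLinearly and the exact cache layout are assumptions based on the description above.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

struct mvAppItem;
mvAppItem* SearchTreeLinearly(uint64_t uuid);  // assumed: O(n) walk of all items

struct CacheSlot { uint64_t uuid = 0; mvAppItem* item = nullptr; };
static std::array<CacheSlot, 25> g_cache;      // small recently-used cache
static std::size_t g_next = 0;

mvAppItem* GetItem(uint64_t uuid)
{
    for (const auto& slot : g_cache)           // cheap: at most 25 probes
        if (slot.item && slot.uuid == uuid)
            return slot.item;

    mvAppItem* item = SearchTreeLinearly(uuid); // expensive: O(n)
    g_cache[g_next] = { uuid, item };           // remember for next time
    g_next = (g_next + 1) % g_cache.size();
    return item;
}
```

During creation, lookups tend to hit the cache because the parent of a freshly created item was used moments before; when editing thousands of distinct items, almost every lookup misses the cache and falls through to the O(n) walk.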
The expected behaviour would be O(1) or O(log n) operations, using hashing or sorting.
Dear ImGui seems to have ImPool/ImGuiID helpers that provide O(log n) operations.
However, given that DearPyGui shouldn't in practice expect to have 100K items, I think the simplest and fastest approach would be a plain hash table: for example, a table of size 1024 where each entry is a list of (uuid, pointer) pairs, and the elements at index i are those with uuid % 1024 == i. To avoid allocating entries for the table, the best option is an intrusive linked list: each item structure has a list field pointing to the next and previous elements in its bucket, which makes removal from the list free and avoids any allocation specific to the hash table (see https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/util/list.h for an example of such a linked list). A sketch of this layout follows below.
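A hedged sketch of that proposal: a fixed 1024-bucket table whose buckets are intrusive circular doubly-linked lists embedded in the items themselves, in the style of Mesa's util/list.h. Names are illustrative; `Item` stands in for DearPyGui's mvAppItem.

```cpp
#include <cstddef>
#include <cstdint>

struct ListNode { ListNode* prev; ListNode* next; };

inline void ListInit(ListNode* n) { n->prev = n->next = n; }

inline void ListInsert(ListNode* head, ListNode* n)
{
    n->next = head->next; n->prev = head;
    head->next->prev = n; head->next = n;
}

inline void ListRemove(ListNode* n)  // O(1): no table traversal needed
{
    n->prev->next = n->next; n->next->prev = n->prev;
    n->prev = n->next = n;
}

struct Item
{
    uint64_t uuid;
    ListNode hashNode;  // intrusive: the list node lives inside the item
};

struct ItemTable
{
    ListNode buckets[1024];

    ItemTable() { for (ListNode& b : buckets) ListInit(&b); }

    void Insert(Item* item)
    {
        ListInsert(&buckets[item->uuid % 1024], &item->hashNode);
    }

    // Only items whose uuid hashes to the same bucket are scanned.
    Item* Find(uint64_t uuid)
    {
        ListNode* head = &buckets[uuid % 1024];
        for (ListNode* n = head->next; n != head; n = n->next)
        {
            // Recover the owning Item from the embedded node.
            Item* item = reinterpret_cast<Item*>(
                reinterpret_cast<char*>(n) - offsetof(Item, hashNode));
            if (item->uuid == uuid)
                return item;
        }
        return nullptr;
    }
};
```

Removal is then just `ListRemove(&item->hashNode)`, with no table lookup needed at all if the item pointer is already at hand, which is exactly what makes deletion free in this scheme.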