I have an environment that uses Arrow + Plasma to send requests between Python clients and a C++ server that responds with search results, etc.
I use a sequence-number-based approach for Object ID creation so that IDs are understood on both sides, and that all works well. Each request from the client creates a unique Object ID, then creates and seals the object. On the other end, a get against that Object ID retrieves the request payload, then releases and deletes the Object ID. A similar scheme is used for responses from the server to the client (search results, etc.), where the server creates its own unique Object ID understood by the client. The server side creates and seals the object, and the Python client side does a get and then deletes the Object ID (there appears to be no release method in Python). I have also experimented with deleting the plasma buffer.
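The sequence-number scheme described above could be sketched as follows. This is a hypothetical illustration, not the reporter's actual code: `make_object_id` is an invented helper that packs a fixed prefix plus a big-endian sequence number into the 20 bytes a Plasma ObjectID requires, so both the Python client and the C++ server can derive the same ID independently.

```python
import struct

# Plasma ObjectIDs are exactly 20 bytes.
OBJECT_ID_SIZE = 20

def make_object_id(prefix: bytes, seq: int) -> bytes:
    """Derive a deterministic 20-byte ID from a fixed prefix and an
    8-byte big-endian sequence number, zero-padded to 20 bytes.
    (Hypothetical helper; the prefix b"req:" below is an assumption.)"""
    body = prefix + struct.pack(">Q", seq)
    if len(body) > OBJECT_ID_SIZE:
        raise ValueError("prefix too long for a 20-byte ObjectID")
    return body.ljust(OBJECT_ID_SIZE, b"\x00")

# On the client (with pyarrow < 12, where plasma still existed), the raw
# bytes would then be wrapped, e.g.:
#   object_id = plasma.ObjectID(make_object_id(b"req:", seq))
```

A mirror-image routine on the C++ side only needs to agree on the prefix, the byte order, and the padding for both ends to compute identical IDs.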
The end result is that as transactions build up, server-side memory use climbs steadily, and I can see that a good number of the objects aren't deleted from the Plasma store until the server exits. I have nulled out the search-result part too, so that is not what is accumulating. I have not done a memory profile, but I wanted to get some feedback on what might be wrong here.
Is there a better way to use Object IDs, for example? And what might be causing the huge memory usage? In this test, I had ~4M transactions between clients and the server, which hit a memory usage of 10+ GB, in the ballpark of the combined size of all the payloads. Besides doing release/delete on Object IDs, is there a better way to purge and remove these objects?
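One thing worth checking in a situation like this: in Plasma, delete only takes effect once no client still holds a buffer reference to the object, so any buffer kept alive on the consuming side (e.g. stored in a results list) pins the payload in shared memory even after delete is called. A hedged sketch of a consume pattern that drops the buffer reference before deleting, assuming a client with `get_buffers`/`delete` methods shaped like the (now removed) pyarrow.plasma API:

```python
from contextlib import contextmanager

@contextmanager
def consume_object(client, object_id):
    """Hypothetical helper: fetch an object's buffer, yield it to the
    caller, and always delete the object afterwards. The buffer reference
    is dropped *before* delete so the store can actually reclaim the
    memory (assumes plasma-style get_buffers()/delete() methods)."""
    [buf] = client.get_buffers([object_id])
    try:
        yield buf
    finally:
        buf = None                  # release our reference first
        client.delete([object_id])
```

Usage would look like `with consume_object(client, oid) as buf: handle(buf)`; anything that needs to outlive the `with` block should be copied out of the buffer rather than keeping the buffer itself.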
Plasma was deprecated (#33077) in Arrow 10.0 and subsequently removed in Arrow 12.0 (#33243). Given that Plasma no longer exists as a component, closing this issue.
Any help is appreciated.
Reporter: Abe Mammen
Note: This issue was originally created as ARROW-8873. Please see the migration documentation for further details.