guilhermeleobas changed the title to "Refactor NRT_Reallocate" on Mar 22, 2023
> You can use `malloc_usable_size` for determining the size of the malloc-allocated buffer. It is a Linux-specific function, though; see https://stackoverflow.com/questions/1281686/determine-size-of-dynamically-allocated-memory-in-c for the Windows and Mac equivalents. Recall that heavydb can be built on Linux as well as on Windows.
>
> It would be preferable to keep the original `realloc` API. Could you create an issue about this and emit a (once-only) warning in the `define_NRT_Reallocate` call about the current behavior, with a reference to the issue?

Originally posted by @pearu in #531 (comment)
The Numba Runtime offers a memory reallocation function called `NRT_Reallocate`, which internally uses `realloc`. However, calling it in HeavyDB can cause problems: `realloc` may free the old buffer while moving the data to a new one, and the HeavyDB server will subsequently try to free that old buffer again, resulting in double-free corruption.