The calculation for ByteSizeForCompression states
https://github.com/influxdata/tdigest/blob/master/tdigest.go#L50
But unprocessed and processed are allocated in https://github.com/influxdata/tdigest/blob/master/tdigest.go#L30 with capacities of maxUnprocessed and maxProcessed, which are 8 and 2 times c respectively (see e.g. https://github.com/influxdata/tdigest/blob/master/tdigest.go#L304).
This leads to ByteSizeForCompression underestimating the memory consumption of these two buffers by a factor of about 10, which is not negligible.
I think the result should be

(   8 * 16   // unprocessed (8 * c centroids, 16 bytes each)
  + 2 * 16   // processed (2 * c centroids)
  + 1 * 8 )  // cumulative
= 168 * c
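To make that concrete, here is a minimal sketch of such an estimate. This is my own hypothetical helper, not part of the library; it assumes a centroid is two float64s (16 bytes) and uses the per-c contribution for cumulative from the calculation above:

```go
// byteSizeEstimate is a hypothetical helper (not the library's code). It
// assumes a centroid is two float64s (16 bytes), that unprocessed is
// allocated for 8 * c centroids and processed for 2 * c centroids.
func byteSizeEstimate(c int) int {
	const centroidSize = 2 * 8 // Mean + Weight, both float64

	unprocessed := 8 * c * centroidSize // 128 * c
	processed := 2 * c * centroidSize   //  32 * c
	cumulative := c * 8                 //   8 * c, as in the calculation above

	return unprocessed + processed + cumulative // 168 * c
}
```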
(Btw: the 40 is not correct even if the factors 8 and 2 for the capacities of the unprocessed and processed buffers are ignored.)
BUT: this 168 * c still underestimates the memory consumption, as in
https://github.com/influxdata/tdigest/blob/master/tdigest.go#L121
processed gets appended to unprocessed, and both slices might be full, causing unprocessed to grow to (maxUnprocessed + maxProcessed) = 10 * c centroids.
So the correct value should be 200 * c.
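Again only as a sketch under the same assumptions, this time accounting for the temporary growth of unprocessed right before the merge:

```go
// byteSizeEstimateWorstCase is the same hypothetical sketch, but assumes
// unprocessed can temporarily hold (maxUnprocessed + maxProcessed) = 10 * c
// centroids just before they are merged.
func byteSizeEstimateWorstCase(c int) int {
	const centroidSize = 2 * 8 // Mean + Weight, both float64

	unprocessed := 10 * c * centroidSize // 160 * c
	processed := 2 * c * centroidSize    //  32 * c
	cumulative := c * 8                  //   8 * c

	return unprocessed + processed + cumulative // 200 * c
}
```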
This is quite a number. I'm wondering if float32 and a smaller unprocessed buffer would allow reducing memory consumption while keeping enough numerical stability and accuracy.