Nan::CopyBuffer only allows uint32_t lengths when node::Buffer APIs allow size_t #573
I don't remember. It could be an oversight, but it does not really matter, since a buffer cannot be larger than 2^30 - 1 bytes, which fits well inside a uint32_t.

On May 27, 2016 12:03:59 AM GMT+03:00, Tom Jakubowski [email protected] wrote:

Hmm, so it is. I had a brainfart reading the docs for Buffer.kMaxLength (which is 2^31 - 1, according to the docs anyway). Never mind! :-)
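To make the size relationship above concrete, here is a minimal sketch (not from the thread) of guarding a Nan::CopyBuffer call; the helper name and the hard-coded limit are assumptions based on the 2^30 - 1 figure mentioned above:

```cpp
// Minimal sketch (assumes NAN 2.x). MakeBufferChecked is a hypothetical helper.
#include <nan.h>
#include <cstddef>
#include <cstdint>

v8::Local<v8::Object> MakeBufferChecked(const char* data, size_t len) {
  // Per the comment above, a Buffer cannot exceed 2^30 - 1 bytes at this time,
  // so any valid length already fits comfortably in a uint32_t.
  const size_t kAssumedMaxLength = (1u << 30) - 1;  // 2^30 - 1 (assumption)
  if (len > kAssumedMaxLength) {
    Nan::ThrowRangeError("Buffer length exceeds the supported maximum");
    return v8::Local<v8::Object>();
  }
  return Nan::CopyBuffer(data, static_cast<uint32_t>(len)).ToLocalChecked();
}
```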
@kkoopa what if kMaxLength was determined at compile-time so we can make use of that extra gigabyte of space when possible?
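For illustration only (nothing like this exists in NAN), a rough sketch of how a compile-time maximum might be selected, assuming it should track the width of size_t; the constant name and values are assumptions drawn from the figures discussed above:

```cpp
// Hypothetical sketch only: choose a maximum Buffer length at compile time
// based on whether size_t is wider than uint32_t.
#include <cstddef>
#include <cstdint>

#if SIZE_MAX > UINT32_MAX
// 64-bit address space: allow up to 2^31 - 1 bytes (the documented kMaxLength
// mentioned earlier in the thread).
static const size_t kHypotheticalMaxBufferLength = 0x7fffffffu;  // 2^31 - 1
#else
// 32-bit address space: keep the 2^30 - 1 limit.
static const size_t kHypotheticalMaxBufferLength = 0x3fffffffu;  // 2^30 - 1
#endif
```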
That defeats the point of using a compatibility layer. It would also exceed the default V8 heap limit, so you are not supposed to use that much memory even if it were possible.
Not sure I understand how it defeats the point of compatibility. I agree in the case of a library creating giant buffers of a hard-coded size > 2^30-1 ("this library will never work on 32-bit or older node"), but if the buffer size is determined at run time and may be small or huge ("if you want to use this library for big data, you will need to use recent 64-bit node"), then it seems reasonable to allow it. Increasing the heap limit is a fairly common thing to do. I think if you're in this territory of working with big objects then it's normal to expect users to tinker with memory options.
Firstly, it depends on the pointer size. You can have a 64-bit binary with a 32-bit address space and pointer size, e.g. x32. This is not common, but it illustrates that it is not as simple as merely stating 64 bits without specifying what that entails.

It defeats compatibility because it produces different observable behavior on different versions and platforms. Ideally, the same operations should behave identically. If something does not work under premise x, it should also not work under premise y, unless the benefits greatly outweigh the drawbacks of inconsistency. I do not consider an increase of 1 GB to be worth the inconsistency. It simply hides the root cause of the problem, badly.

The correct solution is to restructure the misbehaving application, dividing the work into parts so that too much memory is not needed in a single Buffer. This would enable the application to work correctly on all platforms. If an application regularly needs to use Buffers larger than 1 GB, what says that it does not need to use Buffers larger than 2 GB? Why not 4 GB? Increasing the maximum Buffer size from 1 GB to 2 GB is not going to help in the long run.

In conclusion, while it would be trivial to allow 2 GB Buffers on supported platforms, it would still not fix the actual problem. It might even be detrimental to the common good, since it could postpone a proper solution.
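As an illustration of the restructuring suggested above (this sketch is not from the thread; the helper name and chunk size are arbitrary assumptions), the data can be wrapped as several smaller Buffers instead of one oversized one:

```cpp
// Illustrative sketch only (assumes NAN 2.x); ChunkIntoBuffers and the 64 MB
// chunk size are hypothetical choices, not anything prescribed by NAN or node.
#include <nan.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Caller is expected to hold an open Nan::HandleScope for the returned Locals.
std::vector<v8::Local<v8::Object>> ChunkIntoBuffers(const char* data, size_t total) {
  const size_t kChunk = 64 * 1024 * 1024;  // 64 MB per Buffer (arbitrary)
  std::vector<v8::Local<v8::Object>> out;
  for (size_t offset = 0; offset < total; offset += kChunk) {
    size_t len = std::min(kChunk, total - offset);
    out.push_back(
        Nan::CopyBuffer(data + offset, static_cast<uint32_t>(len)).ToLocalChecked());
  }
  return out;
}
```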
Going all the way back to v0.10, node::Buffer::New() accepts a size_t length argument. I can't easily find the source for v0.8, but unless v0.8 only allowed uint32_t lengths, why does the signature of Nan::CopyBuffer limit the size this way?
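For context, a small sketch contrasting the two calls in question (signatures as I understand them from nan.h and node 4.x-era node_buffer.h; treat the exact forms as assumptions rather than quotations):

```cpp
// Sketch only: shows where a size_t length gets narrowed to uint32_t.
#include <nan.h>
#include <node_buffer.h>
#include <cstddef>
#include <cstdint>

void Demo(v8::Isolate* isolate, const char* data, size_t len) {
  // node core accepts the full size_t length...
  v8::MaybeLocal<v8::Object> core = node::Buffer::Copy(isolate, data, len);
  (void)core;

  // ...whereas Nan::CopyBuffer takes a uint32_t, so the length is narrowed here.
  Nan::MaybeLocal<v8::Object> wrapped =
      Nan::CopyBuffer(data, static_cast<uint32_t>(len));
  (void)wrapped;
}
```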