[js-api] Allow larger memories (2 → 4 GB) #1116
Those Chrome constants are somewhat misleading right now; the value that matters is the flag
FWIW, the 2GB limit serves a compatibility purpose: SpiderMonkey has a hard 2GB limit on its ArrayBuffers at the moment (*), and by having the limit we reduce the risk that some wasm content can only run on a subset of the implementations on the web. (*) Yes, it can be changed. Yes, we can do it. It's just a large piece of work and it's hard to know exactly when it can be done.
But even today, there is no guarantee that a 2GB memory request will succeed, in SpiderMonkey or elsewhere: for example, you could be running SpiderMonkey on a RasPi, or in a 32-bit browser process where the virtual memory address space is too fragmented for a large contiguous allocation. Whether an allocation request fails because of implementation limits or because of the local machine's limits doesn't make much of a difference for the user experience. FWIW, Chrome/V8 has a 2GB ArrayBuffer limit on 32-bit builds (which are still supported and widely in use), and no plans to change that. The way I see it is as follows:
It is true that a 2GB request may fail, but in an implementation on a mainstream 64-bit desktop platform it probably will not fail. Thus when one (dominant) engine lifts this limit, some content may come to depend on the larger limit, and other engines may be unable to run the content on those platforms. 32-bit platforms are not obsolete but are becoming much less important, and certainly they are not the target machines for the apps we're talking about here (you mentioned "partners"), which are presumably large games and large, ported desktop applications.
We would welcome this improvement! We have a rendering engine implemented in WebAssembly that is still not as capable as the version implemented in PNaCl. Initially this was due to lack of support for threads. When Chrome 78 finally added this, the next deficiency was lack of dynamic memory growth. Chrome 79 addressed that, but we still see some scenes where the raytracer will run out of memory at the 2GB limit. PNaCl used a 64-bit memory model and did not have this issue. We are still using the PNaCl origin trial to allow some customers to work with large scenes, and when I recently renewed this I supplied feedback to the Chrome team that the 2GB memory limit is what is holding us back from moving fully to a WebAssembly implementation.
We discussed this at the CG meeting on 2020-01-21 and reached an agreement to:
Come to think of it, we didn't discuss the specific value. Also, a convenience spec link I forgot to include earlier: https://webassembly.github.io/spec/js-api/index.html#limits. I'll prepare a PR.
I think 65536 would be nice if there is no technical reason to prevent it, as users will ask for 4GB memory, so requiring them to do 4GB-delta would be a slight annoyance. I understand there are concerns about letting the initial memory be ~4GB, but I hope we can allow that eventually. Until then, the toolchain will need to do a dance of "set initial to 2GB-delta, immediately do a grow to ~4GB".
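That "dance" can be sketched with the JS API. This is only an illustration, using tiny page counts in place of the real values so the example stays cheap (1 wasm page = 64 KiB, so "2GB-delta" is 32767 pages and 4GB is 65536 pages):

```javascript
// Sketch of the toolchain workaround: start with an initial size at or
// below the 2 GiB cap, then immediately grow toward the 4 GiB maximum.
// Tiny page counts stand in for the real 32767/65536 values.
const PAGE = 65536; // bytes per wasm page

const initialPages = 3; // stand-in for 32767 ("2GB - delta")
const maximumPages = 8; // stand-in for 65536 ("~4GB")

const memory = new WebAssembly.Memory({
  initial: initialPages,
  maximum: maximumPages,
});

// The "immediate grow" step: grow() takes a page delta, returns the
// previous size in pages, and throws a RangeError if allocation fails.
const previous = memory.grow(maximumPages - initialPages);

console.log(previous);                        // 3
console.log(memory.buffer.byteLength / PAGE); // 8
```

Note that `grow` detaches the old `buffer`; reading `memory.buffer` again yields a fresh `ArrayBuffer` of the new size.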
Why would the toolchain need to do this immediately? Given that growth will always have to be enabled when using 4GB, I don't see any problem with starting out smaller (ideally much less than 2G) and growing to 4G only when needed.
We could support the user compiling with a 4GB maximum. The alternative is to tell users that they can't just request 4GB even if they know they need it (which I think is slightly annoying for users) and instead they must start with ~2GB and let it grow to 4GB if they actually use that much (which is not great if the user knows the app actually needs 4GB; it might fail during some malloc in the middle that the app doesn't handle).
Well, the current spec says that users can't just request 4GB even if they know they need it... and the change we have agreed upon maintains that. (I agree that it would be nice to lift the limit; that said, as discussed above, being able to allocate that much isn't guaranteed anyway, so starting smaller and growing on demand with appropriate failure handling is more robust and therefore preferable.)
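The "grow on demand with failure handling" pattern might look like the following sketch. `growToAtLeast` is a hypothetical helper name, not part of any toolchain:

```javascript
// Hypothetical helper: grow a WebAssembly.Memory on demand, treating
// allocation failure (RangeError) as a recoverable condition instead
// of assuming a large allocation is guaranteed to succeed.
const PAGE = 65536; // bytes per wasm page

function growToAtLeast(memory, neededBytes) {
  const have = memory.buffer.byteLength;
  if (have >= neededBytes) return true;
  const extraPages = Math.ceil((neededBytes - have) / PAGE);
  try {
    memory.grow(extraPages); // throws RangeError past the maximum
    return true;
  } catch (e) {
    if (e instanceof RangeError) return false; // let the app degrade gracefully
    throw e;
  }
}

const mem = new WebAssembly.Memory({ initial: 1, maximum: 4 });
console.log(growToAtLeast(mem, 3 * PAGE)); // true: grows to 3 pages
console.log(growToAtLeast(mem, 8 * PAGE)); // false: exceeds the maximum
```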
(Maybe I misunderstood the agreement? I thought it was about the behavior of browsers, not toolchains.) I understand the point about preserving the 2GB limit. For existing users there won't be any disruption, I agree. However, I think it will be a problem for new users. Here's how I see the toolchain side in Emscripten: if the user says
I don't think 1 is an option, as it can break user code: if the program assumes it has 4GB (or it doesn't run) then it might not be written to properly handle malloc failures along the way, and/or it may get farther along in execution than it should. 2 is an option, but given the user explicitly asked for 4GB, we are telling them that the only way to get what they want is to do something like build with 2GB + add to their startup code an immediate growth to 4GB. Users can do that, but we could make things easier for them by doing it for them automatically; that's what option 3 is.
Per discussion in WebAssembly#1116, the limit for initial memory pages remains at 32767 (just under 2GiB), while the limit for maximum memory pages is increased to 65536 (4GiB). Spec tests are brought in line with these definitions (they actually did not reflect the spec text before). Fixes: WebAssembly#1116.
Would anyone care to review #1121 please?
Did you have a chance to think some more on this topic? As we discussed at the CG meeting, I'd like to remove the limitation on 2GB for the initial size. My reasoning is that
Friendly ping @lars-t-hansen. Also note that in the meantime I had another idea to mitigate the risk you are concerned about; see emscripten-core/emscripten#10601 (comment). I think we should change Emscripten's maximum memory to 2GB by default before we enable support for more than that. That means that users will not get over 2GB, with or without memory growth, without explicitly opting in (which, as in the last comment, is very unlikely to happen unless the user really needs it).
@kripken, my apologies, it's a crazy time. Will get to this shortly.
@kripken, though I think the solution in emscripten-core/emscripten#10601 (comment) is very reasonable, it doesn't really address my concern, which is not about random applications accidentally requesting 4GB but about important applications deliberately doing it, and thus excluding Firefox (until we fix this issue, which we must). Option 2 in your list above is really the only thing that prevents this from happening, because it is the only one that shifts the burden to the application developer so that she can be(come) aware that there are platform limitations.

I think I need to be done with this issue, though. The current solution (max initial = 2GB, max max = 4GB) was a compromise reached at the CG meeting. If you want to raise max initial to 4GB then I suppose you should go back to the CG with it; I won't try to block the change. Though I might propose that we remove the notion of a max initial, since the justification for changing the max initial is effectively that it is not useful for max initial to have a lower value than max max.
Sounds good @lars-t-hansen, thanks, I'll try to raise this in the next CG meeting. I agree we can just remove the max initial. (I admit I don't fully understand your first paragraph though. Maybe a moot point since we've agreed on the future actions. Happy to talk more offline if you want.)
The JS API currently defines a maximum size of 32767 * page_size = just under 2GB for memories:
We (Chrome) have requests from partners to allow larger memories, so we would like to increase this limit. Since Wasm is currently a 32-bit system, there is a natural limit at 4GB, so I propose a new maximum of 65535 pages.
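For reference, the arithmetic behind the page counts discussed in this issue (1 wasm page = 64 KiB):

```javascript
// Page-count arithmetic behind the limits discussed in this issue.
const PAGE = 65536; // bytes per wasm page (64 KiB)

console.log(32767 * PAGE); // 2147418112: just under 2 GiB (current JS-API max)
console.log(65535 * PAGE); // 4294901760: just under 4 GiB (proposed here)
console.log(65536 * PAGE); // 4294967296: exactly 4 GiB (the full 32-bit range)
console.log(49151 * PAGE); // 3221159936: ~3 GiB (the compromise value)
```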
If that is somehow contentious, then 49151 pages or ~3GB could be a compromise; personally, I don't see a benefit of 3GB compared to 4GB. It is obviously already the case that the spec'ed maximum is not a reliable definition of minimum available memory (on many 32-bit devices, allocating 2GB or even just 1GB will be impossible in practice), and the spec already explicitly notes this, so bumping the allowed maximum does not materially worsen any existing compatibility/predictability issues. Reasonably well-equipped modern 64-bit desktop or server systems will be able to accommodate 4GB just as easily as 2GB. As the spec does not mandate a minimum supported memory size, any implementations that have internal limits preventing them from supporting larger memories (e.g., use of signed 32-bit int types for memory accesses) are not required to make any changes.