MVP array.copy
#313
Comments
We should either have a complete story for bulk operators on arrays, analogous to what we did for memories and tables, or defer. Let's avoid a one-off instruction. As mentioned in that comment, I find it a bit hard to argue that this is MVP material, so I'd tend to defer. Bulk instructions could be an immediate follow-up proposal, though.
A full suite of bulk operations sounds good to me. I agree that conceptually they're not "MVP," but they're also straightforward (edit: and useful!) enough that we would have no problem shipping them concurrently with the MVP. The overhead of forking a separate repo for this handful of instructions does not seem worth it to me.
If somebody else volunteers to do all the work (especially tests, interpreter, and spec text), then I certainly wouldn't stand in the way. But I do worry that it would delay us quite a bit longer (right now it almost seems realistic that we can finish the MVP this year). Forking a repo is probably a minor effort in comparison. ;)
I'm ok with leaving out bulk array operations from the MVP.
I take issue with "we should have a complete story, or nothing" arguments; they are fundamentally against the idea of an MVP. I'd also like to caution us against completionism: we very intentionally subscribe to an incremental design philosophy. Among other aspects, that means we deliberately standardize things that there is demand for and postpone things that may come in handy in the future but that nobody is asking for currently. A "one-off instruction" might be mildly undesirable if it is somehow a concept of its own (though even that might not be a problem at all; sometimes it's fine, or unavoidable, to have groups of size 1). But if it is the first representative of a potential group of similar instructions that might follow later (if and when demand for them emerges), then adding feature sets incrementally is not a problem but rather how we should be operating. Long story short, I think the key question is whether …
The concepts of MVP and incrementality don't mean that features should be considered at arbitrarily small granularity. That would be a mess. It's a bit odd to argue on the basis of MVP policy here when we already established that this case is stretching the definition of MVP. ;) Wasm was viable for years without memory.copy, and this one should be even less essential. There are many features I would like to have right away, but at this point I'd rather have us focus on finalising the MVP than on extending its scope.
I want to add a couple of notes that could be useful for this discussion:
@rossberg, do you see any problems with @gkdn's suggestion that …
Yes. This assumes that all arrays will forever pay the cost of carrying complete runtime type information, which is an assumption that I'd absolutely like to avoid. As a ground principle, I think we should not hide casts inside other operations, both because of this assumption and because a cast is already a complex operation with potentially unbounded cost in the future. Please let's not turn Wasm into a dynamic language.
Below are the (informal) semantics I would propose for the four new instructions. I'll upload a PR adding these to the MVP doc and adding tests for them soon, but I wanted to post them before that in case folks have any comments.
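To make the shape of the instruction concrete, here is a minimal sketch in the text format of a copy between two arrays, based on the signature array.copy eventually ended up with (destination and source type immediates; destination array, destination offset, source array, source offset, and length as operands). The type and function names are invented for illustration, and the details may differ from the informal semantics proposed in this comment.

```wat
(module
  (type $vec (array (mut i32)))

  ;; Copy $len elements of $src, starting at $src_off,
  ;; into $dst, starting at $dst_off.
  (func $copy_range
      (param $dst (ref $vec)) (param $dst_off i32)
      (param $src (ref $vec)) (param $src_off i32)
      (param $len i32)
    (array.copy $vec $vec          ;; destination type, then source type
      (local.get $dst) (local.get $dst_off)
      (local.get $src) (local.get $src_off)
      (local.get $len))))
```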
LGTM. A couple of notational suggestions:
You can express this as …
(Also, needs to be a mutable array.)
You can assume the subtyping relation is extended to storage types (was missing from the MVP doc, I just added it). So this could be:
In the notation of the spec, you probably mean …
Great, thanks for the notes!
There's almost a duality between the allocating and non-allocating bulk array operations:
Do we want to fill one or both of these gaps?
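For readers following along, here is a small sketch of what an allocating/non-allocating pair looks like, using array.new and array.fill as they are defined in the eventual spec. Names and types are made up for illustration, and this shows the pairing being discussed, not the specific gaps raised above.

```wat
(module
  (type $vec (array (mut i32)))

  ;; Allocating form: create a fresh 16-element array with every element set to 7.
  (func $make (result (ref $vec))
    (array.new $vec (i32.const 7) (i32.const 16)))

  ;; Non-allocating counterpart: overwrite 16 elements of an existing array,
  ;; starting at offset 0, with the value 7.
  (func $fill (param $a (ref $vec))
    (array.fill $vec
      (local.get $a) (i32.const 0)
      (i32.const 7) (i32.const 16))))
```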
Good points about the duality. For …
Yes, I can write a benchmark to compare the performance of …
According to the spec for …
Yes, this is intended and is the same way the current bulk memory and bulk table instructions work.
Just FYI, I have an initial implementation of the four instructions described here in the reference interpreter, and I'll make a PR once I've finished writing tests. Two questions:
@jakobkummerow or @manoskouk, does v8 have prototype opcodes selected for these instructions? @conrad-watt, I'm also writing tests for these, so you can probably get away with landing your implementation with just a bare minimum of testing. Alternatively, if you've already sunk a bunch of time into the tests, then finishing would save me some work :)
We use …
Thanks! The tests look great.
The Kotlin folks brought up a use case for copying between GC arrays and linear memory: applications that need to get the same data both into GC arrays for use from GC languages and into linear memory for use from e.g. Skia (compiled with Emscripten). We wouldn't want to delay the MVP to get these instructions in, though.
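For context, absent a dedicated instruction for copying between arrays and linear memory, the fallback in the MVP is an element-wise loop. A rough sketch of that workaround is below; the function name and the choice of an i8 element type are illustrative assumptions, not something from the thread.

```wat
(module
  (type $bytes (array (mut i8)))
  (memory 1)

  ;; Copy the whole $src array into linear memory at address $dst,
  ;; one byte per iteration.
  (func $array_to_memory (param $src (ref $bytes)) (param $dst i32)
    (local $i i32)
    (local $len i32)
    (local.set $len (array.len (local.get $src)))
    (block $done
      (loop $next
        (br_if $done (i32.ge_u (local.get $i) (local.get $len)))
        (i32.store8
          (i32.add (local.get $dst) (local.get $i))
          (array.get_u $bytes (local.get $src) (local.get $i)))
        (local.set $i (i32.add (local.get $i) (i32.const 1)))
        (br $next)))))
```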
@manoskouk proposed array.copy in #278 (comment), but it was not included in that PR because it deserved separate discussion and a separate PR. Would anyone object to adding array.copy to the MVP?