Map structure #4
If we go with a Map-like cache (see policy Maps), how should we structure the cache? For example, we could use a tree of Maps, or we could use argument-tuples as keys in one Map.
There is also the compositeKeys proposal (#1).
Comments
Is it observable?
@michaelficarra: It is observable if we expose the cache to the developer somehow. For example, we may allow the developer to supply a map-like cache argument. (See also proposal-policy-map-set, which I will also present to plenary.) For example:

const cache = new Map;
function f (x, y) { return x + y; }
const fMemo = f.memo(cache);
fMemo(0, 0);
fMemo(1, 1);

If we went with a flat map-like with argument-tuple keys, then cache would be … This would allow the developer to …
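For concreteness, here is a hedged sketch of how such a flat, exposed cache could behave, assuming the proposal’s `f.memo(cache)` together with tuple keys (`#[…]`) from the Records & Tuples proposal, which would compare by value when used as Map keys; none of this syntax exists today:

```js
// Continuing the example above, under the assumed tuple-keyed flat cache:
// after fMemo(0, 0) and fMemo(1, 1), the cache would conceptually hold
//   #[0, 0] → 0
//   #[1, 1] → 2
// and the developer could inspect or manage it directly:
cache.has(#[0, 0]);    // → true
cache.get(#[1, 1]);    // → 2
cache.delete(#[0, 0]); // Evict one memoized result.
cache.clear();         // Drop every memoized result.
```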
I don't like using a user-provided map. If we want memoised functions to have methods like …
@michaelficarra: For me, the more important use case for allowing custom caches is to let the user specify the memoization’s cache replacement policy. For example, I may want to specify that the cache store up to 100 entries with a least recently used (LRU) policy, as Python’s functools.lru_cache does, so I may wish to use new LRUMap(100). But, yes, hiding the cache’s structure completely is also an option for this issue.
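LRUMap is not an existing built-in; purely as an illustration, a minimal map-like with a bounded size and least-recently-used eviction might look like this:

```js
// Hypothetical LRUMap sketch: a map-like that evicts the least recently used
// entry once it exceeds maxSize. Relies on Map preserving insertion order.
class LRUMap extends Map {
  constructor (maxSize) {
    super();
    this.maxSize = maxSize;
  }
  get (key) {
    if (!super.has(key)) return undefined;
    const value = super.get(key);
    // Re-insert the entry to mark it as most recently used.
    super.delete(key);
    super.set(key, value);
    return value;
  }
  set (key, value) {
    if (super.has(key)) super.delete(key);
    super.set(key, value);
    if (this.size > this.maxSize) {
      // Evict the oldest (least recently used) entry.
      const oldestKey = this.keys().next().value;
      super.delete(oldestKey);
    }
    return this;
  }
}

// Usage with the hypothetical Function.prototype.memo from this proposal:
// const fMemo = f.memo(new LRUMap(100));
```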
#5 (comment) reminded me that records/tuples cannot directly contain objects, so that goes out the window. Looks like we would have to use trees of Map-likes—or compositeKeys—after all. Or maybe there’s another way.
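For concreteness, a tree-of-map-likes cache could be traversed one level per argument position, roughly as sketched below (the helper name lookupOrCompute and the node shape are illustrative assumptions, not part of any proposal; object arguments could instead live in WeakMap levels):

```js
// Sketch of a tree-of-map-likes cache: each argument position adds one level.
function lookupOrCompute (cacheRoot, fn, args) {
  let node = cacheRoot;
  for (const arg of args) {
    if (!node.children.has(arg)) {
      node.children.set(arg, { children: new Map(), result: undefined, hasResult: false });
    }
    node = node.children.get(arg);
  }
  if (!node.hasResult) {
    node.result = fn(...args);
    node.hasResult = true;
  }
  return node.result;
}

// Usage:
// const root = { children: new Map(), result: undefined, hasResult: false };
// lookupOrCompute(root, (x, y) => x + y, [1, 2]); // → 3 (computed once, then cached)
```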
Once you choose a tree of map-likes, now you are giving the consumer per-argument-position control over the caching strategy, which I don't think anyone wants. |
Here you are trying to solve a problem that …
@michaelficarra: I agree. I would much prefer a flat one-level cache data structure with complex keys. I suppose we could still: …

For example:

const cache = new LRUMap(256);
const f = (function f (arg0) { return this.x + arg0; }).memo(cache);
const o0 = { x: 'a' }, o1 = { x: 'b' };
f.call(o0, 0); // Returns 'a0'.
f.call(o1, 1); // Returns 'b1'.

Now …

@zloirock: Although composite keys would probably also be good for this use case, composite keys have been stuck at Stage 1 with no progress for two years, so I am concerned that it will be a long time before they get added to the language (if ever). At plenary, I will present compositeKey as an alternative possibility to using tuple keys containing symbols in a WeakMap. We will discuss both approaches’ trade-offs.
This proposal is a good chance to advance compositeKey.
According to composite keys’ explainer, “at least one component must be a valid key that can be placed in a WeakMap”. Unless I am mistaken, this restriction would exclude them from being usable for argument-list keys, since both …

We still have the “replace objects with symbols from a WeakMap” solution, though.
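A sketch of that workaround: every object (or function) argument is swapped for a per-object symbol remembered in a WeakMap, so the resulting key parts are all primitives and could live inside a tuple key. The name toPrimitiveKeyParts is illustrative only, not a proposed API:

```js
// Primitives pass through unchanged; each object is replaced by a unique
// symbol that the WeakMap remembers, so the same object always maps to the
// same symbol while still allowing the object to be garbage-collected.
const objectSymbols = new WeakMap();

function toPrimitiveKeyParts (args) {
  return args.map(arg => {
    if ((typeof arg === 'object' && arg !== null) || typeof arg === 'function') {
      if (!objectSymbols.has(arg)) {
        objectSymbols.set(arg, Symbol());
      }
      return objectSymbols.get(arg);
    }
    return arg;
  });
}

// The same object argument always yields the same symbol:
const o = {};
toPrimitiveKeyParts([o, 1])[0] === toPrimitiveKeyParts([o, 2])[0]; // → true
```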
@js-choi: The first component can be, for example, a function.
A Map tree would both be difficult to manage if you ever want to touch it by hand in userland, and may prevent a built-in LRU-Map from working, unless it's also built to support this tree shape. I think an ideal default option would be to use a tuple for the cache key (update: I discuss more on how I think this is still possible despite previous concerns near the bottom).

I also feel that this should be completely customizable, as I don't think it's possible to provide an algorithm that would satisfy everyone, and requiring people to customize the algorithm on a custom Map instance itself would be tedious and would make many use cases of function memoization overly difficult.

An option I've used previously is to require the user to generate a "cache key" from the provided parameters. So, something like this:

const add = (x, y) => x + y
const memoizedAdd = memoize(add, {
  argsToKey: (...args) => JSON.stringify(args)
})

In this example, if you call …

This puts a ton of power into the hands of the user, power which is often needed. It can solve scenarios such as these: …
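A hedged sketch of how such a memoize helper with a user-supplied argsToKey option could be implemented (memoize and argsToKey are this comment’s hypothetical names, not part of the proposal):

```js
// Sketch: memoize(fn, { argsToKey }) caches results in a Map keyed by whatever
// primitive (typically a string) the user-supplied argsToKey returns.
function memoize (fn, { argsToKey = (...args) => JSON.stringify(args) } = {}) {
  const cache = new Map();
  return function (...args) {
    const key = argsToKey(...args);
    if (!cache.has(key)) {
      cache.set(key, fn.apply(this, args));
    }
    return cache.get(key);
  };
}

const memoizedAdd = memoize((x, y) => x + y, {
  argsToKey: (...args) => JSON.stringify(args),
});
memoizedAdd(2, 3); // Computes 5 and caches it under the key "[2,3]".
memoizedAdd(2, 3); // Cache hit: returns 5 without calling the function again.
```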
It is a good point that key customization is important, and that the tuple-based approach in #4 (comment) would not address even common use cases like option objects. It is also a good point that we can punt this problem to the future by making keying functions required. It’s not a big deal to write something like …
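For instance, a hand-written keying function for an options-object signature might be no more than the following (a purely illustrative example; the fetchJSON-style parameter names are assumptions, not taken from this thread):

```js
// Hypothetical keying function for something like fetchJSON(url, { method, retries }):
// only the fields that affect the result are folded into the cache key.
const keyForFetch = (url, options = {}) =>
  JSON.stringify([url, options.method ?? 'GET', options.retries ?? 0]);
```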