Provide cache.putAll() method #867
Duplicate of #823?

Out of curiosity, why was …

Because you have to have a cache at the same time as your list of requests. If you don't have the cache until your list of requests completes, then you need to …

Why wouldn't you have a cache?
Verging off-topic (with apologies to Jake), but in the context of discussions about transactions, I'm very interested in the motivating use-case. Specifically, do the atomic transactions: …
My intent is to make it easier to make things atomic. You could open the cache first and then add everything to it, but then my thinking was: if the requests fail or the browser session is terminated, you are left with an empty cache. If your versioning system works by checking whether the cache exists, that empty cache looks like a completed install. So I tried doing this:

1. Fetch all of the requests.
2. Open the cache.
3. Put each response into the cache.

This means if step 1 fails or is cancelled, there is no empty cache left behind. Right now step 3 has to be a lot of individual `put()` calls. I am wondering if it would help to have a single atomic "open cache and write this data" operation, but then I guess we're on to transactions?
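That fetch-first flow can be sketched as below. Everything here is illustrative: the `caches` and `fetch` objects are minimal in-memory stand-ins (the real Cache API only exists in window/worker contexts), and `fetchThenCache` is a hypothetical helper name, not spec'd API.

```javascript
// In-memory stand-ins for CacheStorage and fetch, modelling only what this
// sketch needs; in a real service worker you would use the real globals.
const cacheStore = new Map(); // cacheName -> Map(url -> body)
const caches = {
  open: async (name) => {
    if (!cacheStore.has(name)) cacheStore.set(name, new Map());
    const entries = cacheStore.get(name);
    return { put: async (url, body) => { entries.set(url, body); } };
  },
  has: async (name) => cacheStore.has(name),
};
const fetch = async (url) => {
  if (url.includes('bad')) throw new Error('network error: ' + url);
  return 'body of ' + url;
};

async function fetchThenCache(cacheName, urls) {
  // Step 1: fetch everything before touching CacheStorage.
  const responses = await Promise.all(urls.map((url) => fetch(url)));
  // Step 2: only now open (and thereby create) the cache.
  const cache = await caches.open(cacheName);
  // Step 3: many individual put() calls -- the non-atomic part that a
  // hypothetical cache.putAll() would collapse into one operation.
  await Promise.all(urls.map((url, i) => cache.put(url, responses[i])));
}
```

If step 1 rejects, `caches.open()` is never reached, so no empty cache is left behind; a crash during step 3, however, still leaves a partially written cache.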
I'm starting to get the sense that the transactions might be worth exposing, though.
Transaction-wise, it seems like most of the use cases can be characterized as wanting to synchronize an entire bundle of versioned resources in an all-or-nothing fashion. Partial progress isn't a problem other than if it's accidentally perceived as completed. In fact, it seems beneficial for partial progress to be stored so that forward progress is always made, especially if partial progress keeps happening for reasons related to resource exhaustion.

caches.move (or I've seen rename used by Jake in other issues) seems like it establishes a perfect idiom for this without requiring transactions and the potentially user-hostile issues that could crop up. Imagine a hypothetical game "foogame" where levels are characterized by a versioned JSON manifest consisting of paths, file lengths, and hashes of the files. The SW receives a request for "level2", and the pseudo-code goes like this: …
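The staging-then-rename idiom described above might look like the sketch below. Note the assumptions: `caches.move`/rename is only a proposal, so this sketch emulates it with copy-then-delete (a real move would be atomic); the in-memory `caches`/`fetch` stand-ins and the `syncBundle` helper are illustrative inventions, not spec'd API.

```javascript
// In-memory CacheStorage stand-in (real code would use the global `caches`).
const store = new Map(); // cacheName -> Map(url -> body)
const caches = {
  open: async (name) => {
    if (!store.has(name)) store.set(name, new Map());
    const m = store.get(name);
    return {
      put: async (url, body) => { m.set(url, body); },
      match: async (url) => m.get(url),
      keys: async () => [...m.keys()],
    };
  },
  has: async (name) => store.has(name),
  delete: async (name) => store.delete(name),
};
const fetch = async (url) => 'body of ' + url; // network stand-in

// Download a versioned bundle into a staging cache; only once every entry is
// present does the content take over the final name. Interrupted runs leave
// the staging cache behind, so partial progress is kept and can be resumed.
async function syncBundle(finalName, manifestUrls) {
  const stagingName = finalName + '-staging';
  const staging = await caches.open(stagingName);
  for (const url of manifestUrls) {
    if (await staging.match(url)) continue; // resume: entry already fetched
    await staging.put(url, await fetch(url));
  }
  // Emulate the proposed caches.move(stagingName, finalName): copy, then
  // delete the staging cache. A real move/rename would make this atomic.
  const finalCache = await caches.open(finalName);
  for (const url of await staging.keys()) {
    await finalCache.put(url, await staging.match(url));
  }
  await caches.delete(stagingName);
}
```

The key property is that readers only ever consult the final cache name, so they never observe a half-synced bundle as "completed".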
1: re: magical mutex thing or coordination amongst multiple separate fetch events. I'm still coming up to speed with Service Workers and idioms, so I'm not clear whether there's already a solution in place for this case. That is, I'm aware that this could theoretically be handled by rev'ing the service worker and using the installing/installed/activating/etc. state machine, but that assumes everything is sliced up into nice bite-sized pieces. I'm doubting most developers would be on board with this.

The thing about the proposed transaction model where transactions can stay alive arbitrarily is that it seems like a backdoor mechanism to introduce mutexes/blackboards for this coordination process at the expense of massive complexity for the Cache API. It seems far better to create an explicit API to support this instead. For example, a particularly excessive one with weak-ish references to clients, so that entries can automatically disappear, would be:

```javascript
clientKeyedMaps['levels'].set(event.clientId, 2);
const levelUsed = (level) => clientKeyedMaps['levels'].values().some(x => x === level);
```

2: Transaction-wise, I do think it makes sense to enable more control over creating a single list of CacheBatchOperation dictionaries for a single invocation of Batch Cache Operations. Exposing a wrapper so that multiple deletes and puts could be placed in there doesn't increase complexity. This avoids developers needing to reason about transactions stacking up and/or the nightmarish potential interactions of fetching/incumbent records.
This really sounds like:

```javascript
caches.open('static-v2').then(c => c.addAll(urls)).catch(err => {
  // if you don't want to leave the empty cache there:
  caches.delete('static-v2');
  throw err;
});
```
I'm not against …
@jakearchibald what happens if the browser is terminated (e.g. crashed) while …
Keeps. This is a good case for transactions. |
Here is a possible polyfill for this problem:

```javascript
function putAll(cache, entries) {
  const operations = entries.map(([request, response]) => {
    return cache.put(request, response);
  });
  return Promise.all(operations).then(() => {
    // Sentinel entry: only written once every put() has succeeded.
    return cache.put('/__data_is_safe__', new Response(''));
  });
}

function hasCache(storage, cacheName) {
  // CacheStorage.open() takes only a name and returns a promise for the cache.
  return storage.open(cacheName)
    .then((cache) => cache.match('/__data_is_safe__'))
    .then((response) => !!response);
}
```
This is covered by the transactions proposal |
Original issue:

If you want to write a bunch of separately obtained request-responses to a cache (e.g. you make requests before you've decided which cache to put them in), there is `cache.put()`, but not `cache.putAll()`. This makes it hard to make the write atomic. Why not add something like `cache.putAll(requests[], responses[])`?
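A userland approximation of the proposed signature might look like the sketch below. It is deliberately non-atomic, which is exactly the gap the issue points out; the Map-backed `cache` stand-in is illustrative so the snippet runs outside a worker context.

```javascript
// Userland stand-in for the proposed cache.putAll(requests[], responses[]).
// Unlike a native atomic putAll, a crash part-way through can leave the
// cache with only some of the entries written.
function putAll(cache, requests, responses) {
  if (requests.length !== responses.length) {
    return Promise.reject(new Error('requests/responses length mismatch'));
  }
  return Promise.all(requests.map((req, i) => cache.put(req, responses[i])));
}

// Minimal in-memory cache so this is runnable outside a service worker.
const backing = new Map();
const cache = {
  put: async (req, res) => { backing.set(req, res); },
  match: async (req) => backing.get(req),
};
```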