Handling race conditions - API for accessing pending requests? #959
Comments
I think the answer in both cases is that the HTTP cache will help avoid actually issuing double requests over the network, and that this simplifies things for everyone. Cache API-wise, it's my understanding from the discussion about the Cache transaction API (#823) that the Cache API will be changed so that an in-flight put/add only becomes visible for match() purposes once its body has been retrieved and it is atomically inserted into the Cache, and that the existing language around fetching records and incumbent records will be simplified to reflect this. In other words, an explicit decision not to expose a pending requests API.
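To make the coalescing idea above concrete, here is a minimal sketch of deduplicating concurrent requests against an in-flight table. `fakeFetch`, `coalescedFetch`, and the `Map` are all invented for illustration; the `Map` plays the role that the HTTP cache's in-flight entry handling plays in the comment above, not any real browser API.

```javascript
// Sketch: coalescing concurrent requests for the same URL so only one
// network round-trip happens. fakeFetch() stands in for a real fetch().
const inflight = new Map();
let networkHits = 0;

function fakeFetch(url) {
  networkHits += 1;
  return new Promise(resolve => {
    setTimeout(() => resolve({ url, status: 200 }), 10);
  });
}

function coalescedFetch(url) {
  // Reuse the pending promise if a request for this URL is in flight.
  if (inflight.has(url)) {
    return inflight.get(url);
  }
  const p = fakeFetch(url).finally(() => inflight.delete(url));
  inflight.set(url, p);
  return p;
}

// Two concurrent requests for the same URL share a single network hit:
const a = coalescedFetch('/app.js');
const b = coalescedFetch('/app.js');
console.log(a === b);     // true
console.log(networkHits); // 1
```

Once the promise settles, the entry is removed, so a later request for the same URL goes back to the network (or, in a browser, to the HTTP cache).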
I guess the author meant that some request, say to …. The question is: how to re-use an existing network request? I guess the mention of the Cache API just confused things.
What's the benefit of doing this manually rather than …?
I guess something simple like this could be used:

```js
self.addEventListener('fetch', event => {
  const request = self.requests.match(event.request).then(inflight => {
    return inflight || fetch(event.request);
  });
  event.respondWith(request);
});
```

This could also potentially solve the issue in ….
@jakearchibald I see what you mean. For the case in #920 though, the SW is spawned exactly for the navigation request, so knowing this, the browser may not remove that pre-flight request from ….

For the case of link=preload, yeah, this way requests could potentially be missed, but since the SW already controls the scope, it could be handled this way (cache matching added: if the request isn't in flight, try to get it from the cache; note that `caches.match()` resolves with `undefined` on a miss, so the fallback to the network has to happen inside a `.then()`):

```js
self.addEventListener('fetch', event => {
  const request = self.requests.match(event.request).then(inflight => {
    return inflight || self.caches.match(event.request).then(cached => {
      return cached || fetch(event.request);
    });
  });
  event.respondWith(request);
});
```

Also, there could be another solution for link=preload right now:

```js
let stylesRequest;

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.pathname === '/styles.css') {
    if (!stylesRequest) {
      stylesRequest = fetch(event.request).then(res => {
        stylesRequest = null;
        return res;
      });
    }
    // A Response body can only be consumed once, so hand out clones.
    event.respondWith(stylesRequest.then(res => res.clone()));
  }
});
```

But this relies on global state, which is obviously bad and shouldn't be used, especially if we (the SW team) decide on multiple SW instances. Just trying to generate some ideas.

P.S. I think in the case with …
Would this necessarily be true if the first request hasn't completed yet? At what stage does an object get put into the HTTP cache?
I didn't realise they were more widely supported than preload, and they could be a good option (although prerender is still missing in Firefox, and who knows in which order Safari might implement SW & pre* link headers). If I fetch /page1.html and it has a link header …?

If not, it'd be useful to have something between the two: prefetch-with-linked-resources. Prerender is a bit extreme sometimes; e.g. one potential use case is to prefetch all stories featured on the home page, where I'll want to fetch all the HTML files and their styles and scripts, but won't want to expend resources on a background render for each of them.

One other reason not to simply use link headers is that some of these prefetches may be based on user recommendations/interactions on the page, so information about them won't be carried in the page's link headers; e.g. a user clicks a 'make this article available offline' button and then navigates to the page fairly quickly.
Yes. Speaking at least for Firefox/Gecko: before thinking about talking to the network, the HTTP channel consults the cache at the same time it creates the entry. Requests will be coalesced as long as the fetch/request is allowed to be served from cache. (For example, a fetch cache mode of "no-store" would not be allowed to be served from cache and so could not be coalesced.)

Source-delving: if you look at Gecko's cache2 CacheStorageService::AddStorageEntry, that code looks for an existing cache entry and creates a new handle to it if an entry already exists. Otherwise, it creates the entry. (This is called by CacheStorage::AsyncOpenURI, which is used by nsHttpChannel::OpenCacheEntry, which you can follow back to its callers, etc.)
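The lookup-or-create behaviour described for AddStorageEntry can be sketched in a few lines. Everything here — `addStorageEntry`, the `Map`, the handle counter — is invented for illustration and is not Gecko's actual C++ shape; it only shows why a second concurrent opener coalesces onto the first entry.

```javascript
// Illustrative sketch of lookup-or-create with entry handles: opening an
// entry either returns a handle to the existing entry (so a concurrent
// request coalesces onto it) or creates a fresh one.
const storage = new Map();

function addStorageEntry(key) {
  let entry = storage.get(key);
  const created = !entry;
  if (!entry) {
    entry = { key, handles: 0 };
    storage.set(key, entry);
  }
  entry.handles += 1; // each open hands out a new handle to the same entry
  return { entry, created };
}

const first = addStorageEntry('https://example.com/styles.css');
const second = addStorageEntry('https://example.com/styles.css');
console.log(first.created, second.created); // true false
console.log(first.entry === second.entry);  // true
```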
Some other brainstorming on …:

```js
self.addEventListener('fetch', event => {
  const request = event.request;
  const url = new URL(request.url);

  if (url.pathname === '/') {
    const result = self.caches.match(request).then(res => {
      if (!res) {
        return loadIndex(request);
      }
      return res;
    });
    event.respondWith(result);
    return;
  }

  const result = self.requests.match(request).then(inflight => {
    return inflight || self.caches.match(request);
  }).then(res => {
    if (res) return res;
    return fetch(request).then(res => {
      if (res && res.ok) {
        putCache(request, res.clone()); // clone: a body can be read only once
      }
      return res;
    });
  });
  event.respondWith(result);
});

function loadIndex(request) {
  // Preload the index page's assets alongside it.
  [
    '/main.js',
    '/main.css',
    '/logo.png'
  ].forEach(asset => {
    const req = new Request(asset);
    const fetching = fetch(req).then(res => {
      if (res && res.ok) {
        return putCache(req, res.clone()).then(() => res);
      }
      return res;
    });
    // Puts the request (and eventually its response) into a memory store
    // until `fetching` is settled.
    self.requests.putUntil(req, fetching);
  });

  return fetch(request).then(res => {
    if (res && res.ok) {
      putCache(request, res.clone());
    }
    return res;
  });
}

function putCache(req, res) {
  return caches.open('cache').then(cache => {
    return cache.put(req, res);
  });
}
```

Here the main page's assets are requested alongside it and are picked up inside ….

For the case with pre-flight …
IMHO, handling a second request to the same resource should be transparent for the dev. An ideal network would respond with no lag and your problem would vanish; this is precisely what the HTTP cache achieves.
There are a couple of race conditions arising in the SW I'm working on, relating to prefetch/preload:
It'd be useful to have a pending requests API, with similar request matching rules to the Cache API. Could it even be built into the Cache API? e.g. …

If the promise rejects, or resolves with anything other than a response, then the response would not be put in the cache. `cache.get(request)` would resolve/reject when the promise resolves/rejects.
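One possible reading of this proposal can be sketched with an in-memory stand-in: `put()` accepts a promise for a response and registers the entry immediately, and `get()` settles when that promise does. `PendingCache`, its matching-by-URL, and the "response-like" check are all invented for illustration; this is not the real Cache API, and the proposed method names are taken only from the comment above.

```javascript
// Hypothetical stand-in for a "put a promise, match while pending" cache.
class PendingCache {
  constructor() {
    this.entries = new Map();
  }
  put(request, responsePromise) {
    const entry = Promise.resolve(responsePromise).then(res => {
      // Resolving with anything other than a response removes the entry,
      // per the proposal above.
      if (!res || typeof res.status !== 'number') {
        this.entries.delete(request.url);
        throw new TypeError('not a response');
      }
      return res;
    }, err => {
      // A rejected promise also removes the entry.
      this.entries.delete(request.url);
      throw err;
    });
    this.entries.set(request.url, entry);
    return entry;
  }
  get(request) {
    const entry = this.entries.get(request.url);
    return entry || Promise.reject(new Error('no match'));
  }
}

// get() observes a pending put() and settles with it:
const cache = new PendingCache();
let resolveResponse;
cache.put({ url: '/page' }, new Promise(r => { resolveResponse = r; }));
const pending = cache.get({ url: '/page' });
resolveResponse({ url: '/page', status: 200 });
pending.then(res => console.log(res.status)); // 200
```

The key property this illustrates is that a second consumer matching the same request while the first fetch is still in flight gets the same eventual response instead of triggering a duplicate request.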