Rethinking storage proxy map #96
The LocalStorage case being dealt with in whatwg/html#5560 isn't synchronously dealing with the authoritative map; it's dealing with a replicated copy of the map, but that's largely hand-waved away via the "multiprocess" disclaimer. Perhaps the hand-waving should be reduced, and that will help clear up the error handling[1]? I think the inescapable implementation reality is that there are always going to be at least 3 event loops involved for any storage endpoint, and it could be worth specifying this:

- the agent's event loop, where script runs;
- the event loop that owns the authoritative storage bottle map; and
- the event loop(s) where I/O actually happens.

Although there will always be policy checks that can happen synchronously in the agent event loop, the reality is that most unexpected failures will happen in the I/O event loops, and these will then want to notify the authoritative storage bottle map. Especially given that there's interest in the Storage Corruption Reporting use case (explainer issue in this repo), this async processing would make sense, as any corruption handlers would want to be involved in the middle of the process. A rough sketch of how these loops might interact is below.
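A minimal sketch, assuming the three loops above and modeling them as simple task queues. All names (`EventLoop`, `queueTask`, the loop variables) are invented for illustration; this is not spec text:

```ts
// A minimal model of the three event loops as task queues.
type Task = () => void;

class EventLoop {
  private queue: Task[] = [];
  queueTask(task: Task): void {
    this.queue.push(task);
  }
  // Drain currently queued tasks (a real loop runs indefinitely).
  run(): void {
    while (this.queue.length > 0) this.queue.shift()!();
  }
}

const agentLoop = new EventLoop();      // where script runs
const bottleMapLoop = new EventLoop();  // owns the authoritative bottle map
const ioLoop = new EventLoop();         // where reads/writes actually happen

// An unexpected I/O failure notifies the authoritative bottle map first;
// the bottle map then decides what, if anything, reaches the agent.
ioLoop.queueTask(() => {
  bottleMapLoop.queueTask(() => {
    // e.g. run corruption handlers, possibly mark the bottle broken, then:
    agentLoop.queueTask(() => {
      // surface an in-band error to script, if the endpoint has a channel
    });
  });
});

ioLoop.run();
bottleMapLoop.run();
agentLoop.run();
```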
One might create mechanisms along the following lines. For all Storage endpoints, the question whenever any error occurs on the I/O loop, or when ingesting data provided by the I/O loop, is: does this break the bottle? For the "indexedDB", "caches", and "serviceWorkerRegistrations" endpoints there are already in-band API means of relaying I/O failures (fire an UnknownError or more specific error; reject the promise; reject the promise, respectively), so there's no need to break the bottle. For "localStorage" and "sessionStorage" there's no good in-band way to signal the problem, but any transient inability to persist changes to disk can be mitigated by buffering, and when the transient inability becomes permanent, the bottle can be said to be broken. A sketch of this per-endpoint decision appears below.

[1]: From a spec perspective (ignoring optimizations), Firefox's LocalStorage NextGen overhaul can be said to synchronously queue a task to make a snapshot of the authoritative bottle map on the authoritative bottle map's event loop the first time the LocalStorage API is used in a given task on the agent event loop. The snapshot is retained until the task and its micro-task checkpoint complete, at which point any changes made are sent to the authoritative bottle map in a task where they are applied. This maintains run-to-completion consistency (but does not provide magical global consistency). There are other possible implementations, like "snapshot at first use and broadcast changes", which could also be posed in terms of the event loops/task sources.
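A minimal sketch of that decision. Only the endpoint names come from the discussion above; the types and the handler stub are invented for illustration:

```ts
type StorageEndpoint =
  | "indexedDB"
  | "caches"
  | "serviceWorkerRegistrations"
  | "localStorage"
  | "sessionStorage";

interface Bottle {
  broken: boolean;
}

// Answer "does this break the bottle?" for an error surfaced on the
// I/O loop (or while ingesting data the I/O loop provided).
function onIOFailure(
  endpoint: StorageEndpoint,
  bottle: Bottle,
  transient: boolean,
): void {
  switch (endpoint) {
    case "indexedDB":
      // In-band channel exists: fire an UnknownError (or a more
      // specific error) at the affected request; the bottle survives.
      break;
    case "caches":
    case "serviceWorkerRegistrations":
      // In-band channel exists: reject the pending promise.
      break;
    case "localStorage":
    case "sessionStorage":
      // No in-band channel. Buffer through transient failures; once
      // the inability to persist becomes permanent, break the bottle.
      if (!transient) {
        bottle.broken = true;
      }
      break;
  }
}
```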
There's also "does this fit in the bottle?", I suppose, which does happen to fail synchronously for localStorage and sessionStorage.
Yeah, I was lumping the LocalStorage/SessionStorage quota checks into agent-local policy decisions, along with structured serialization refusing to serialize things (for other storage endpoints). For LocalStorage/SessionStorage the quota check needs to happen synchronously (and structured serialization is not involved for them).

Impl-specific notes: for Firefox's LSNG, the agent can be said to hold a quota pre-authorization like that used for credit/debit cards. If a call needs more space than was pre-authorized, a task is synchronously dispatched from the agent event loop to the authoritative bottle map's event loop in order to secure the added quota. A sketch of this pattern is below.
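A minimal sketch of that credit/debit-card-style pre-authorization; all names are invented, and this is an illustration of the idea rather than Firefox's actual implementation:

```ts
class QuotaPreAuthorization {
  constructor(private preAuthorizedBytes: number) {}

  // Synchronous "does this fit in the bottle?" check on the agent
  // event loop.
  tryReserve(bytes: number): boolean {
    if (bytes > this.preAuthorizedBytes) {
      // Not enough headroom: in the real design, a task is
      // synchronously dispatched to the authoritative bottle map's
      // event loop to secure added quota; modeled as a blocking call.
      this.preAuthorizedBytes += this.requestAdditionalQuota(bytes);
    }
    if (bytes > this.preAuthorizedBytes) {
      return false; // caller would surface a QuotaExceededError
    }
    this.preAuthorizedBytes -= bytes;
    return true;
  }

  private requestAdditionalQuota(bytes: number): number {
    // Placeholder: the authoritative map grants zero extra bytes.
    return 0;
  }
}
```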
One thing I noticed while working on whatwg/html#5560 is that we don't have a nice, formalized way to deal with bottle/proxy map operations failing. And I think that, in principle, all of them can fail for a variety of reasons. One possible shape for such a formalization is sketched below.
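A sketch of one way the spec prose could be formalized: every bottle/proxy map operation returns an explicit result instead of being assumed infallible. The type names and failure reasons here are invented, not taken from any spec draft:

```ts
type BottleMapResult<T> =
  | { ok: true; value: T }
  | { ok: false; reason: "broken-bottle" | "quota-exceeded" | "io-error" };

interface StorageProxyMap {
  get(key: string): BottleMapResult<string | null>;
  set(key: string, value: string): BottleMapResult<void>;
  delete(key: string): BottleMapResult<void>;
  clear(): BottleMapResult<void>;
}
```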