
Response streaming and SW lifetime #651

Closed
annevk opened this issue Mar 12, 2015 · 9 comments

@annevk
Member

annevk commented Mar 12, 2015

With https://github.com/yutakahirano/fetch-with-streams/ by @yutakahirano, Response objects can have their bits streamed from some logic in the SW. However, if this is done, e.g., for media files, the SW would be kept alive for potentially hours, which is not exactly what SW is designed for.

Curious what the thinking is on this.
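For context, the streaming pattern under discussion looks roughly like this. This is a hypothetical sketch, not code from the proposal; `makeStreamedResponse` is an illustrative name:

```javascript
// Sketch: build a Response whose body is produced incrementally by a
// ReadableStream. In a real SW serving media, chunks might keep arriving
// for hours, which is what would keep the worker alive.
function makeStreamedResponse(chunks) {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
  return new Response(body, { headers: { 'Content-Type': 'text/plain' } });
}

// Inside a service worker this would be wired up roughly as:
//   self.addEventListener('fetch', event => {
//     event.respondWith(makeStreamedResponse(['part 1\n', 'part 2\n']));
//   });
```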

@slightlyoff
Contributor

It's a deep question, and not one we have an answer to today.

On one hand, SWs are lighter-weight (by design) than an iframe, so the price of having it open to service interactive content seems low. We're currently getting to a place where we can get real-world data about memory/cpu/latency from deployed SWs.

On the other hand, in cases where the SW might be (naively) allocating memory over the long period of time, we might be forced to kill it regardless and that behavior will look bad. E.g., the SW will need to handle a future range request when the video plays, issuing a new onfetch.

I think the bigger issues here are likely to be down to responsiveness. How do we make sure such a video-serving worker doesn't starve other requests for time? Maybe this is a case where we should find a way to use a sub-worker from the SW or a parallel SW (second, third, whatever).

Also, I think we have pretty much this exact issue when dealing with websockets. I've punted on both so far. Thoughts from others would be most helpful.
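The "handle a future range request" fallback mentioned above could look something like the following. This is a hedged sketch under the assumption that the full body is cached; `sliceForRange` is an illustrative helper, not an API from the thread:

```javascript
// Sketch: if the SW was killed mid-playback, resuming issues a new fetch
// with a Range header; a fresh SW instance could answer it by slicing the
// full cached body.
function sliceForRange(buffer, rangeHeader) {
  const match = /^bytes=(\d+)-(\d*)$/.exec(rangeHeader || '');
  if (!match) return null;
  const start = Number(match[1]);
  // HTTP range ends are inclusive; an empty end means "to the last byte".
  const end = match[2] === '' ? buffer.byteLength : Number(match[2]) + 1;
  if (start >= buffer.byteLength) return null;
  return buffer.slice(start, Math.min(end, buffer.byteLength));
}
```

A fetch handler would then wrap the slice in a `206 Partial Content` response with the appropriate `Content-Range` header.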

@wanderview
Member

What about if we don't make stream hold the SW alive, but instead add the waitUntil() function we've talked about before?

@annevk
Member Author

annevk commented Mar 26, 2015

@wanderview I think even in the face of waitUntil() the plan was not for a SW to be alive quite that long, though.

@wanderview
Member

I think I understand better what is being asked now. We want JavaScript in a worker to stream data to a consumer in the main thread.

Maybe we need stream transferability here. So you can create a pipe oriented stream, pass the reader side to respondWith(), and then transfer the writer side of the pipe to a Worker.
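The pipe idea could be sketched like this. A TransformStream yields a linked { readable, writable } pair; a SW could pass the readable side to respondWith(new Response(readable)) and transfer the writable side to a dedicated worker via postMessage(writable, [writable]) (transferable streams). This sketch just exercises the pipe on one thread, since the transfer itself needs a real worker:

```javascript
// Sketch: write chunks into the writable end of a TransformStream while a
// consumer drains the readable end, the way a SW and a sub-worker could
// share the two halves of one pipe.
async function pipeThrough(chunks) {
  const { readable, writable } = new TransformStream();
  const writer = writable.getWriter();
  const producing = (async () => {
    for (const chunk of chunks) await writer.write(chunk);
    await writer.close();
  })();
  const out = [];
  const reader = readable.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    out.push(value);
  }
  await producing;
  return out.join('');
}
```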

@jakearchibald
Contributor

Yeah, the SW should be able to shut down as soon as the promise passed to respondWith resolves.

@wanderview
Member

> Yeah, the SW should be able to shut down as soon as the promise passed to respondWith resolves.

I agree the spec should make this possible. Implementations may need to keep it alive longer, though. For example, for `respondWith(cache.match(req))` I need to keep the worker alive until the body is fully streamed across, because of how IPC works.

@jakearchibald
Contributor

Yeah, that's fine. The browser is allowed to keep the SW open as long as it wants. Memory-constrained devices are expected to kill SWs more aggressively, while high-memory devices may keep them around for minutes to avoid the startup lag.

@jakearchibald
Contributor

Ahem, now that I've read the OP properly, I agree with @slightlyoff: having the SW alive for hours seems OK here; it's lighter than an iframe. But should playback pause, the SW can shut down, and on resume, do a fetch with a range request.

Yeah, I basically have nothing new to add.

@jakearchibald
Contributor

We added fetchEvent.waitUntil too. But keeping SW alive for requests seems fine.
