Proposal: Implement OfflineMediaContext #2824
@guest271314 this reads a bit "I've made a thing now pls add it to the platform". If your library above does the job, why does it need to go in a standard? If your library can't quite do what it intends to do, what's missing? The extensible web means we'd rather solve the lower-level limitations than check a high-level library into browsers (if that's the choice).
@jakearchibald Did not ask "please". The approach does not do the job adequately, as described in the original post. What is missing: 1) the tab crashing when N number of … Yes, am asking for low-level help for the missing parts, or parts which could be improved at a low level; posted due to lack of experience at … The code is a proof of concept. We have … If the interest or need is not there, the interest is not there.
@jakearchibald Have previously used … Another way to view the issue is as a question: how to generate a discrete file, capable of being played independently, from a range request, having the content length of the requested range?
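For context, a ranged request itself is straightforward; the hard part is that the returned bytes are not an independently playable file. A minimal sketch of the request side follows; `parseContentRange` and `fetchRange` are hypothetical helper names, not part of any API discussed in this thread, and the URL is a placeholder.

```javascript
// Hypothetical helper: parse a Content-Range header from a 206 response,
// e.g. "bytes 0-1023/4096" -> { start: 0, end: 1023, total: 4096 }.
function parseContentRange(header) {
  const m = /^bytes (\d+)-(\d+)\/(\d+|\*)$/.exec(header || "");
  if (!m) return null;
  return {
    start: Number(m[1]),
    end: Number(m[2]),
    total: m[3] === "*" ? null : Number(m[3]),
  };
}

// Browser-side usage sketch (not executed here): issue the ranged request.
// The bytes returned are a slice of the container, generally NOT a playable
// standalone file -- which is the gap this issue is about.
async function fetchRange(url, start, end) {
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (res.status !== 206) throw new Error("server ignored the Range header");
  return {
    range: parseContentRange(res.headers.get("Content-Range")),
    buffer: await res.arrayBuffer(),
  };
}
```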
Does w3c/ServiceWorker#913 solve this?
@jakearchibald It may. Are media headers or media codecs necessary to be included within the response at … Can you put together a piece of code or plnkr where …?
I can't create a live demo of this since it isn't implemented, but it would work like this:
@jakearchibald Then evidently w3c/ServiceWorker#913 does not currently solve the present issue. Do we need a boundary string (per HTTP/1.1 Range Requests) within the response to play seconds 20 through 25? Perhaps a viewer with experience as to the essential parts of the necessary media resource headers will chime in.
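Worth noting: for playback alone (as opposed to producing a discrete file), no boundary string is needed; the W3C Media Fragments URI syntax (#t=start,end) already tells a media element to play just a temporal slice. A small sketch, with `buildFragmentUri` as a hypothetical helper name:

```javascript
// Hypothetical helper: append a W3C Media Fragment (#t=start,end) to a URL.
// This restricts PLAYBACK to the slice; it does NOT yield an independent,
// saveable media file, which is what this issue asks for.
function buildFragmentUri(url, startSec, endSec) {
  return `${url}#t=${startSec},${endSec}`;
}

// Browser usage sketch (not executed here):
function playSlice(video, url, startSec, endSec) {
  video.src = buildFragmentUri(url, startSec, endSec); // e.g. "media.webm#t=20,25"
  return video.play();
}
```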
I'm struggling with this because I don't feel like you've stated the actual problem. All I've got so far is you'd like to fetch a temporal range. Can you describe:
@jakearchibald 1) Anytime; 2) any format which can be immediately played at a media element without the other portion of the original resource; for example, if the original file is … The primary issue is that a range request returns the expected result for the range … The actual problem is: how to request and get any segment of a media resource as a distinct and discrete resource, capable of being played back without reliance on the other portion of the media resource.
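One reason time slices and byte ranges don't line up cleanly: a direct time-to-byte mapping only holds for a constant-bitrate stream with no container overhead. The estimator below is a sketch under exactly that assumption; for VBR WebM/MP4 the real mapping requires the container's cue/index data. The function name is illustrative.

```javascript
// Rough time-to-byte mapping, valid ONLY under a constant-bitrate assumption
// and ignoring container headers and index data. For variable-bitrate media
// the container's own index (e.g. WebM Cues, MP4 sample tables) is required.
function estimateByteRange(startSec, endSec, bitrateBitsPerSec) {
  const bytesPerSec = bitrateBitsPerSec / 8;
  return {
    start: Math.floor(startSec * bytesPerSec),
    end: Math.ceil(endSec * bytesPerSec) - 1, // inclusive end, as in Range headers
  };
}
```

For example, a 5-second slice of a hypothetical 8 kbit/s CBR stream maps to roughly 5000 bytes, but the bytes will still begin mid-container and likely mid-frame, so the slice is not independently playable.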
It feels like you're deliberately concealing details, but we could just be talking past each other.
I think it's clear that I'm looking for something specific. For instance, if the question was "When would the browser want to make a ranged request", the answer could be "To obtain metadata at the end of the file without downloading the whole file". Can you follow that example and answer with specifics? When I say "when" I'm not looking for a time like "early evening" or "just after breakfast" 😄 .
Ok, so you must mean an HTTP response that represents a fully-contained, but spliced media resource? If the resource is cut outside of a keyframe, I assume the browser will have to re-encode the media? What encoder settings should be used for this re-encode (quality etc)? Should this be standard across browsers?
If the HTTP response you've generated is a full container, are you sure you can concatenate two to produce a media resource which is one after the other? Is this true of all container and track formats the browser could use?
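The question above is the crux: naive byte-level concatenation only yields a playable stream for self-synchronizing formats such as MPEG-TS; concatenating two complete WebM or MP4 files this way generally produces a file players will stop at or reject, because each file carries its own headers and timestamps. A sketch of the naive operation, for reference:

```javascript
// Naive byte-level concatenation of chunks (Uint8Arrays). This produces a
// playable stream only for formats designed for it (e.g. MPEG-TS segments);
// for two complete WebM/MP4 containers the result is generally NOT a valid
// single media file -- remuxing or re-encoding is required instead.
function concatBuffers(chunks) {
  const total = chunks.reduce((n, c) => n + c.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.byteLength;
  }
  return out;
}
```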
No, not deliberately concealing details. Created a repo and demo to illustrate the concept attempting to convey.
That could be possible. Though since we are aware of the possibility we should be able to move beyond that stage of communication.
Not sure what you mean by "when", here
Yes. Though if that is not possible, once the full media resource is returned as a response, create a fully-contained resource reflecting the spliced media sought, whether the parameters are a time slice or a byte range.
Yes.
To begin with, whichever encoding returns the expected result. Then we can work on quality.
Ideally, yes.
Well, not entirely sure as to exacting time slices using the approach at https://github.com/guest271314/OfflineMediaContext. As … An attempt to concatenate individual …
Ok, so your proposal that the browser should return response objects that represent media files doesn't support one of the use-cases you've laid out, so I guess you're looking for something else. You still haven't given a full use-case, so I'm going to come up with one, and you can correct it if need be. The aim is to build a web-based video editor that takes multiple sources and outputs a single resource.
If the use-case is correct (and again, I'm guessing from the bits of information you've given me), the low level features seem to be:
The idea is you'd be able to read from multiple media sources in raw formats, modify the image data using canvas and the audio data using web audio, then feed them to the encoder for final output. This system avoids the CPU overhead and generational quality loss you'd suffer in your proposal, as slicing doesn't automatically incur encoding. The "representation of a media resource by url" sounds like … The "streaming media encoder" seems a little similar to …
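The closest approximation of that pipeline with APIs that ship today is captureStream() plus MediaRecorder, sketched below. All names are illustrative, and the approach has exactly the drawbacks the lower-level design would avoid: it runs in real time and re-encodes (lossily) rather than slicing.

```javascript
// Pure helper (illustrative): clamp a requested slice to the media duration.
function clampRange(startSec, endSec, duration) {
  const start = Math.max(0, Math.min(startSec, duration));
  const end = Math.max(start, Math.min(endSec, duration));
  return { start, end };
}

// Browser-only sketch (not executed here): play a slice of a <video> element
// and re-record it via captureStream()/MediaRecorder. Real-time and lossy,
// unlike the proposed offline, non-re-encoding pipeline described above.
function recordSlice(video, startSec, endSec) {
  return new Promise((resolve) => {
    const { start, end } = clampRange(startSec, endSec, video.duration);
    const recorder = new MediaRecorder(video.captureStream());
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));
    video.currentTime = start;
    video.onseeked = () => {
      recorder.start();
      video.play();
      // Stop after the slice has played out in real time.
      setTimeout(() => { video.pause(); recorder.stop(); }, (end - start) * 1000);
    };
  });
}
```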
@jakearchibald Impressive. It is challenging for two or more observers to interpret the same phenomenon in the same manner from different vantage points. Your description is very close, if not equal in practical application, to what was being attempted to describe. That is a cohesive write-up that may not have been able to be conveyed here, at this stage of own development in technical writing. Probably do not want to add to or subtract from your composition, for concern of not being as clear as you have been. Though should now also include that the implementation should be possible using either an …
@jakearchibald A state-of-the-art working example using existing browser technologies to merge discrete media fragments into a single media file, courtesy of Kaiido.
@jakearchibald fwiw, what have composed so far: https://github.com/guest271314/recordMediaFragments, with much of the credit going to https://github.com/legokichi/ts-ebml. The Firefox implementation has several issues.
Implement an OfflineMediaContext, modeled on OfflineAudioContext, to fetch a media resource and create independent media fragments of a given range of bytes or time slices as fast as possible, capable of being played individually. Ideally the necessary parts can be implemented internally, without having to use MediaRecorder.

Proof of concept: https://github.com/guest271314/OfflineMediaContext
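By analogy with OfflineAudioContext, one purely hypothetical shape such an API could take is sketched below. Nothing here is a real API or the proposal's actual surface; the class name follows the proposal, but the constructor options and method are guesses for discussion only.

```javascript
// Purely hypothetical sketch of an OfflineMediaContext surface, by analogy
// with OfflineAudioContext. Names and shape are illustrative guesses only.
class OfflineMediaContext {
  constructor({ url, start, end }) {
    // Per the proposal, start/end could be a time slice (seconds) or a byte
    // range; the engine would resolve either to an independent fragment.
    this.url = url;
    this.start = start;
    this.end = end;
  }

  // Would resolve with an independently playable media fragment, produced
  // faster than real time and without MediaRecorder.
  startRendering() {
    return Promise.reject(new Error("not implemented: proposal sketch only"));
  }
}
```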