Currently it is difficult, and in some cases impossible, for application developers to determine the time ranges that were added or removed as a result of operations on SourceBuffers. Some video formats (HLS, for example) do not provide timestamp information in their manifest files and require the application to synchronize manually during quality changes. With video-on-demand content the application can normally calculate the correct content to fill the buffer without gaps because the start and end times are known.
Live videos with multiple quality levels are more complicated, however. Different renditions may not be synchronized due to delays in encoding or in pushing content to edge servers. If the application is unlucky with its decision about what content to download, it may waste a lot of bandwidth trying to find the edge of the current buffered time range.
For example, imagine a live stream with two quality renditions that each present some window of available content but are not exactly in sync:
```
A |--a0--|--a1--|--a2--|--a3--|
B     |--b0--|--b1--|--b2--|
  0      5      10     15     20
```
Assume the application has buffered b0 and b1 and decides to switch to a1. Appending a1 into a SourceBuffer in that state would have no effect on the buffered ranges and so the application would be forced to make another blind guess for the next segment to download. If the video has a large amount of buffered content, the application could be required to make multiple requests to find a segment that allowed it to determine the time shift between the two streams.
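Today, about the only way an application can learn what an append actually did is to snapshot buffered before the append and diff it after updateend. A minimal sketch of that probing (the helper names are just illustrative):

```ts
// Snapshot a TimeRanges object into plain [start, end) pairs so it can be
// compared after the append completes.
function snapshotRanges(ranges: TimeRanges): Array<[number, number]> {
  const out: Array<[number, number]> = [];
  for (let i = 0; i < ranges.length; i++) {
    out.push([ranges.start(i), ranges.end(i)]);
  }
  return out;
}

// Append a segment and resolve with whether the buffered ranges changed.
// If nothing changed (as with a1 in the example above), the app is back to
// guessing which segment to fetch next.
function appendAndDiff(sb: SourceBuffer, segment: ArrayBuffer): Promise<boolean> {
  const before = snapshotRanges(sb.buffered);
  return new Promise((resolve) => {
    sb.addEventListener(
      'updateend',
      () => {
        const after = snapshotRanges(sb.buffered);
        resolve(JSON.stringify(after) !== JSON.stringify(before));
      },
      { once: true }
    );
    sb.appendBuffer(segment);
  });
}
```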
If update events included added and removed TimeRanges with the results from the coded frame processing algorithm, the app could synchronize across quality levels much more quickly and save viewers and publishers significant bandwidth costs.
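If the event carried that information directly, the probing above would be unnecessary. A rough sketch of how an app might consume it, where addedRanges is a purely hypothetical field name and expectedStart stands in for the segment start time the app derived from its manifest:

```ts
declare const sourceBuffer: SourceBuffer;  // assumed to already exist
declare const expectedStart: number;       // segment start time per the manifest

sourceBuffer.addEventListener('updateend', (e) => {
  // Hypothetical: the event reports what the coded frame processing
  // algorithm actually added, as [start, end) pairs.
  const added = (e as any).addedRanges as Array<[number, number]> | undefined;
  if (added && added.length > 0) {
    // A single append is enough to learn the time shift between renditions.
    const offset = added[0][0] - expectedStart;
    console.log('rendition time shift:', offset);
  }
});
```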
I think the event system could use more information in general:
- What caused us to enter the updating state?
- If it was an appendBuffer call: what was the start time of the fragment? What was the total fragment duration? What was the duration after applying the appendWindow?
- If it was a remove call: what was the requested range? What was the range that was successfully removed?
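Putting those questions together, the extra detail might look something like the strawman below; every field name here is hypothetical, not anything from the spec:

```ts
// Strawman shape for richer update information (all names hypothetical).
interface SourceBufferUpdateDetail {
  cause: 'appendBuffer' | 'remove';  // what put the SourceBuffer into the updating state

  // Populated when cause === 'appendBuffer':
  fragmentStart?: number;      // start time of the appended fragment
  fragmentDuration?: number;   // total duration of the fragment
  appendedDuration?: number;   // duration remaining after the append window is applied

  // Populated when cause === 'remove':
  requestedRange?: [number, number];  // range the application asked to remove
  removedRange?: [number, number];    // range that was actually removed
}
```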
Most major MSE players (Dash.js, Shaka, and Video.js's videojs-contrib-hls) have issues with manifests that specify imperfect fragment durations.
Almost all of them attempt to extract information from changes to the buffered property after an append as a way of telling the "fragment fetching" algorithm about the real segment durations and bounds.
That information is imperfect, though; if MSE itself provided exact fragment boundaries after an append, the player could make much better-informed decisions about which fragment to load for a particular time.
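For example, a player with exact boundaries could key its fragment selection off the real times instead of the declared ones. A rough sketch under that assumption (the Fragment shape and field names are illustrative, not from any of those players):

```ts
// Illustrative fragment record: declared times come from the manifest,
// real times are filled in once exact appended boundaries are known.
interface Fragment {
  declaredStart: number;
  declaredEnd: number;
  realStart?: number;
  realEnd?: number;
}

// Choose the fragment covering targetTime, preferring boundaries learned
// from actual appends over the (possibly imperfect) declared ones.
function fragmentForTime(fragments: Fragment[], targetTime: number): number {
  for (let i = 0; i < fragments.length; i++) {
    const start = fragments[i].realStart ?? fragments[i].declaredStart;
    const end = fragments[i].realEnd ?? fragments[i].declaredEnd;
    if (targetTime >= start && targetTime < end) {
      return i;
    }
  }
  return -1; // target time is outside the declared presentation
}
```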