Proposal: DecodeConcatVideoData (input multiple files or streams) => Output: single webm file #575
Comments
I think this belongs in MediaRecorder, if anywhere; this API does not deal with files.
@alvestrand While you are around, what is the specification-compliant procedure to get this code (close to what this issue proposes) https://github.com/guest271314/MediaFragmentRecorder/blob/webrtc-replacetrack/MediaFragmentRecorder.html working at Firefox (https://bugzilla.mozilla.org/show_bug.cgi?id=1542616#c8)?
@alvestrand That is, where two
@alvestrand Presumptively https://github.com/web-platform-tests/wpt/blob/master/webrtc-identity/RTCPeerConnection-peerIdentity.html
Closing since this issue does not relate to Media Capture and Streams. |
@aboba This issue does directly relate to Media Capture and Streams, as evidenced by the title and the body of the issue (quoted below). If, in your view, the issue does not relate to Media Capture and Streams, how do you reconcile the fact that the title and content of the proposal specifically describe capturing media streams and outputting a single media file?
Proposal: DecodeConcatVideoData (input multiple files or streams) => Output: single webm file
Web Audio API provides the ability to `decodeAudioData` where the result is a single `AudioBuffer`. `AudioBuffer`s can be concatenated into a single `AudioBuffer`, see merging / layering multiple ArrayBuffers into one AudioBuffer using Web Audio API, which can be played back.
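A minimal sketch of that existing Web Audio approach, assuming two hypothetical URLs ("a.wav", "b.wav") with the same channel count: each file is decoded with `decodeAudioData` and the channel data is copied back to back into one `AudioBuffer`.

```js
// Sketch: decode two files and concatenate the results into one AudioBuffer.
// "a.wav" and "b.wav" are hypothetical URLs; both sources are assumed to have
// the same channel count, and decodeAudioData resamples them to the
// AudioContext sample rate.
async function concatAudio(ctx, urls) {
  const buffers = await Promise.all(
    urls.map(async (url) =>
      ctx.decodeAudioData(await (await fetch(url)).arrayBuffer())
    )
  );
  const length = buffers.reduce((sum, b) => sum + b.length, 0);
  const channels = buffers[0].numberOfChannels;
  const output = ctx.createBuffer(channels, length, ctx.sampleRate);
  let offset = 0;
  for (const b of buffers) {
    for (let ch = 0; ch < channels; ch++) {
      output.copyToChannel(b.getChannelData(ch), ch, offset);
    }
    offset += b.length;
  }
  return output;
}

// Usage: play back the concatenated buffer.
const ctx = new AudioContext();
concatAudio(ctx, ["a.wav", "b.wav"]).then((buffer) => {
  const source = new AudioBufferSourceNode(ctx, { buffer });
  source.connect(ctx.destination);
  source.start();
});
```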
Concatenating multiple `MediaStream`s into a single resulting `webm` file using `MediaRecorder` is not necessarily straightforward, though it is possible using `canvas.captureStream()` and `AudioContext.createMediaStreamDestination()` and `MediaRecorder()`, see MediaStream Capture Canvas and Audio Simultaneously; How to use Blob URL, MediaSource or other methods to play concatenated Blobs of media fragments?; https://github.com/guest271314/MediaFragmentRecorder/blob/canvas-webaudio/MediaFragmentRecorder.html; and/or `MediaSource()` https://github.com/guest271314/MediaFragmentRecorder/tree/master (there is a Chromium bug using this approach, see w3c/media-source#190).
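A condensed sketch of the `canvas.captureStream()` / `createMediaStreamDestination()` workaround referenced above, assuming hypothetical clip URLs and canvas dimensions chosen only for illustration: each source is played in sequence, its frames are painted onto one canvas and its audio routed into one destination node, and the combined stream is recorded to a single webm `Blob`.

```js
// Sketch: record several sources in sequence as one webm Blob by combining
// a canvas video track with a Web Audio destination audio track.
async function recordConcatenated(urls) {
  const canvas = document.createElement("canvas");
  canvas.width = 640;   // illustrative dimensions
  canvas.height = 360;
  const ctx2d = canvas.getContext("2d");
  const audioCtx = new AudioContext();
  const dest = audioCtx.createMediaStreamDestination();

  // One combined stream: video from the canvas, audio from the destination node.
  const stream = canvas.captureStream(30);
  dest.stream.getAudioTracks().forEach((track) => stream.addTrack(track));

  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  const stopped = new Promise((resolve) => (recorder.onstop = resolve));
  recorder.start();

  for (const url of urls) {
    const video = document.createElement("video");
    video.src = url;
    await video.play(); // may require a prior user gesture (autoplay policy)
    audioCtx.createMediaElementSource(video).connect(dest);
    await new Promise((resolve) => {
      const draw = () => {
        ctx2d.drawImage(video, 0, 0, canvas.width, canvas.height);
        video.ended ? resolve() : requestAnimationFrame(draw);
      };
      draw();
    });
  }

  recorder.stop();
  await stopped;
  return new Blob(chunks, { type: "video/webm" }); // single webm Blob
}

// Usage with hypothetical URLs:
// recordConcatenated(["clip1.webm", "clip2.webm"]).then((blob) => { /* ... */ });
```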
This proposal is for an API which accepts an `Array` of either multiple static files (potentially having different encodings/file extensions) and/or `MediaStream`s and outputs a single `webm` file (as a `Blob`), potentially "transcoded" (the process of converting a media asset from one codec to another) in sequence to a "Mezzanine" file (4.1 Create a Mezzanine File; see also Scalable Video Coding (SVC) Extension for WebRTC; w3c/mediacapture-record#4) that is seekable (see https://bugs.chromium.org/p/chromium/issues/detail?id=642012).

For example:
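A hypothetical usage sketch of the proposed API; the method name, its placement on `navigator.mediaDevices`, the option names and the URLs below are illustrative only and are not defined by any specification.

```js
// Hypothetical shape of the proposed API; decodeConcatVideoData() does not
// exist in any specification. Inputs may mix static files (with different
// containers/codecs) and live MediaStreams; the result is one seekable
// webm Blob.
async function example(stream /* a live MediaStream */) {
  const fileA = await (await fetch("clipA.mp4")).blob();   // hypothetical URL
  const fileB = await (await fetch("clipB.webm")).blob();  // hypothetical URL

  const output = await navigator.mediaDevices.decodeConcatVideoData(
    [fileA, fileB, stream],
    { mimeType: 'video/webm;codecs="vp8,opus"' } // target mezzanine encoding
  );

  // output is a single seekable webm Blob, e.g. for download or playback.
  return URL.createObjectURL(output);
}
```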