Player APIs
NOT FINISHED. SEE SDK HEADERS.
Functions with callback(s) are async.
Release GL resources bound to the context.
- MUST be called when a foreign OpenGL context previously used is being destroyed and the player object is already destroyed. The context MUST be current.
- If the player object is still alive, setVideoSurfaceSize(-1, -1, ...) is preferred.
- If you forget to call both foreignGLContextDestroyed() and setVideoSurfaceSize(-1, -1, ...) in the context, resources will be released on the next draw in the same context. But the context may be destroyed before that, and then the resources will never be released.
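A minimal cleanup-order sketch for a foreign GL context, assuming foreignGLContextDestroyed() is a static member as in the SDK headers, and playerAlive is a hypothetical ownership flag:

// call with the GL context current, right before destroying it
void onGLContextAboutToBeDestroyed() {
    if (playerAlive)
        player.setVideoSurfaceSize(-1, -1);  // preferred: player still alive
    else
        Player::foreignGLContextDestroyed(); // player already destroyed
}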
Set a new media url. If the url changed, current playback will be stopped, and active tracks and external tracks set by setMedia(url, type) will be reset. Supported protocols/schemes are:
- FFmpeg protocols. For avdevice inputs, the url is "avdevice://format:filename", for example "avdevice://dshow:video=USB2.0 HD UVC WebCam"
- Android: content, android.resource, assets
- iOS: assets-library
- UWP/WinRT: winrt. It's a custom protocol; the url format is winrt:IStorageItem@ADDRESS or winrt:IStorageFile@ADDRESS, where ADDRESS is the object address, and the object must stay alive until the media is loaded.
A url query mdkopt=avformat&... will be treated as ffmpeg avformat options. For example, the default options are not suitable for opening rtsp quickly; some_url?mdkopt=avformat&fflags=+nobuffer&probesize=100&fpsprobesize=0 will set the fflags option. You can also set the options globally without changing the url: SetGlobalOption("avformat", "fflags=+nobuffer:analyzeduration=10000:probesize=1000:fpsprobesize=0:avioflags=direct"), or via per-player properties (recommended):
player.setProperty("avformat.fflags", "+nobuffer");
player.setProperty("avformat.analyzeduration", "10000");
player.setProperty("avformat.probesize", "1000");
player.setProperty("avformat.fpsprobesize", "0");
player.setProperty("avformat.avioflags", "direct");
Set an individual source as a track of type, e.g. an audio track file or an external subtitle file. MUST be called after the main media setMedia(url).
If url is empty, the type tracks from the MediaType::Video url are used.
The url can contain other track types, although they will not be used; e.g. you can load an external audio/subtitle track from a video file, and use setActiveTracks() to select a track.
Note: because of filesystem restrictions on some platforms (iOS, macOS, uwp), files outside the sandbox can not be accessed, so you have to load subtitle files manually yourself via this function.
examples:
- set subtitle file:
setMedia("name.ass", MediaType::Subtitle)
Set a callback which is invoked when current media is stopped and a new media is about to play, or when setMedia() is called.
Call before setMedia() to take effect.
Gaplessly play the next media after the current media playback ends. setState(State::Stopped) only stops the current media. Call setNextMedia(nullptr, -1) first to disable the next media.
- startPosition: start milliseconds of the next media
- flags: seek flag if startPosition > 0
Usually you can call currentMediaChanged() to set a callback which invokes setNextMedia(), then call setMedia(), as in the sketch below.
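A minimal playlist sketch of that pattern; nextUrl() is a hypothetical helper returning the following playlist entry (or an empty string at the end):

player.currentMediaChanged([&]{
    const std::string url = nextUrl();      // hypothetical playlist helper
    if (!url.empty())
        player.setNextMedia(url.c_str());   // queued for gapless start
});
player.setMedia(firstUrl);                  // start the chain
player.setState(State::Playing);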
see Render API
When the media url protocol is stream, i.e. setMedia("stream:empty_or_any_string"), the player is in stream playback mode, and the user must provide data via appendBuffer(). A stream-mode sketch follows.
setTimeout() can abort the current playback if reading data from the user times out.
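A hedged sketch of stream playback mode; appendBuffer(const uint8_t*, size_t, int options = 0) is assumed from the SDK headers, and readChunk() is a hypothetical data source:

player.setMedia("stream:mysource");      // any string after "stream:" works
player.setState(State::Playing);
uint8_t buf[4096];
while (const size_t n = readChunk(buf, sizeof(buf))) // hypothetical source
    player.appendBuffer(buf, n);         // feed the demuxer from user code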
Get render api. For offscreen rendering, only the api type may be valid in setRenderAPI(); the other members are filled internally and can be used by the user after renderVideo().
Same as property "buffer". Set the duration range of buffered data.
- minMs: default 1000. Wait until buffered duration >= minMs before popping a packet.
  - minMs < 0: minMs, maxMs and drop will be reset to the default values.
  - minMs > 0: when the packet queue becomes empty, MediaStatus::Buffering will be set until queue duration >= minMs, and the "reader.buffering" MediaEvent will be triggered.
  - minMs == 0: decode ASAP.
- maxMs: default 4000. Max buffered duration. A large value is recommended. Latency is not affected.
  - maxMs < 0: maxMs and drop will be reset to the default values.
  - maxMs == 0: same as INT64_MAX.
- drop:
  - true: drop old non-key frame packets to reduce buffered duration until < maxMs.
    - maxMs == 0 or INT64_MAX: always drop old packets and keep at most 1 key-frame packet.
    - maxMs (!= 0 or INT64_MAX) < key-frame interval: no drop effect.
    - maxMs (!= 0 or INT64_MAX) > key-frame interval: start to drop packets when buffered duration > maxMs.
  - false: wait until buffered duration < maxMs before pushing packets.
For realtime streams (rtp, rtsp, rtmp, sdp etc.), the default range is [0, INT64_MAX, true].
Usually you don't need to call this api. It can be used for low latency live videos: for example, setBufferRange(0, INT64_MAX, true) will decode as soon as media data is received, with no accumulated delay.
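A possible low-latency live setup combining setBufferRange() with the avformat properties shown earlier (values are illustrative):

player.setProperty("avformat.fflags", "+nobuffer");
player.setProperty("avformat.fpsprobesize", "0");
player.setBufferRange(0, INT64_MAX, true); // decode ASAP, drop stale packets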
Get buffered undecoded data duration and size.
- bytes: buffered bytes
- return: buffered data(packets) duration in milliseconds
Set audio volume level
- value: linear volume level, >=0. 1.0 is source volume
- channel: channel number, int value of AudioFormat::Channel, -1 for all channels. The same as Microsoft's log2(SpeakerPosition), see https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/ksmedia/ns-ksmedia-ksaudio_channel_config#remarks
- play the left channel only:
player.setVolume(0);
player.setVolume(1.0f, 0);
Mute or unmute audio.
Set frame rate, in frames per second. Useful for videos without audio and timestamps.
- value: frame rate
  - 0 (default): use frame timestamps, or a default frame rate of 25.0fps if the stream has no timestamps
  - < 0: render ASAP
  - > 0: desired frame rate
Set background color. r, g, b, a range is [0, 1], and default is 0. If out of range, background color will not be filled.
See https://github.com/wang-bin/mdk-sdk/wiki/Types#enum-videoeffect
Set output color space.
- value: target ColorSpace.
  - If invalid (ColorSpaceUnknown), the renderer will try to use the value of the decoded frame, and will send hdr10 metadata when possible (example). Currently only supported by metal, and MetalRenderAPI.layer must be a CAMetalLayer (example).
  - If the target color space is hdr (for example ColorSpaceBT2100_PQ), no hdr metadata will be sent to the display, and sdr will map to hdr. Can be used by gui toolkits which support an hdr swapchain but have no api to change swapchain colorspace and format on the fly, see the Qt example.
  - The default target color space is sdr ColorSpaceBT709.
To render multiple HDR and SDR videos at the same time (on the same device), choose ColorSpaceBT2100_PQ and make sure your gui toolkit is running in an hdr10 colorspace.
Window size, surface size or drawable size. The render callback (if one exists) will be invoked if width and height > 0.
Usually for foreign contexts, i.e. when not using updateNativeSurface().
If width or height < 0, the corresponding video renderer (for vo_opaque) will be removed, but a subsequent call with the same vo_opaque will create the renderer again. So it can be used before destroying the renderer.
OpenGL: resources must be released by setVideoSurfaceSize(-1, -1, ...) in a correct context. If the player is destroyed before the context, you MUST call Player::foreignGLContextDestroyed() when destroying the context.
The rectangular viewport where the scene will be drawn relative to surface viewport.
x, y, w, h are normalized to [0, 1]
Set video frame display aspect ratio.
- value: aspect ratio. Can be any value, or one of the predefined values. If value > 0, the frame expands inside the viewport. If value < 0, the frame expands outside the viewport and is cropped.
  - IgnoreAspectRatio: 0, ignore aspect ratio and scale to fit the renderer viewport.
  - KeepAspectRatio: default, keep the frame aspect ratio and scale as large as possible inside the renderer viewport.
  - KeepAspectRatioCrop: keep the frame aspect ratio and scale as small as possible outside the renderer viewport.
  - other value > 0: like KeepAspectRatio, but keep the given aspect ratio and scale as large as possible inside the renderer viewport.
  - other value < 0: like KeepAspectRatioCrop, but keep the given aspect ratio and scale as small as possible outside the renderer viewport.
Rotate around the video frame center.
- degree: 0, 90, 180, 270, counterclockwise
Scale frame size. x and y can be < 0, which means scale and flip.
Map a point from one coordinate system to another. A frame must be rendered first. Coordinates are normalized to [0, 1].
- dir: value of
enum MapDirection {
FrameToViewport, // left-hand
ViewportToFrame, // left-hand
};
- x, y, z: points to x/y/z coordinate of viewport or currently rendered video frame. z is not used.
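A hedged usage sketch: find which (normalized) frame position sits at the viewport center; the MapDirection scoping inside Player is assumed here:

float x = 0.5f, y = 0.5f;  // viewport center, normalized to [0, 1]
player.mapPoint(Player::MapDirection::ViewportToFrame, &x, &y);
// (x, y) is now the corresponding point in the rendered frame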
void setPointMap(const float* videoRoi, const float* viewRoi = nullptr, int count = 2, void* vo_opaque = nullptr)
Set points of region of interest. Can be called on any thread.
- videoRoi: array of 2d points {x1, y1, x2, y2} in the video frame; (x1, y1) is the top-left point, (x2, y2) is the bottom-right point of the rectangle of interest in the video. Coordinates: top-left = (0, 0), bottom-right = (1, 1). Set null to disable mapping.
- viewRoi: array of 2d points {x1, y1, x2, y2} in the video renderer. Coordinates: top-left = (0, 0), bottom-right = (1, 1). null means the whole renderer.
- count: point count. Only 2 is supported. Set 0 to disable mapping.
- video scale 2x:
const float videoRoi[] = {0.25f, 0.25f, 0.75f, 0.75f}
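Applying that region with the signature above (viewRoi defaults to the whole renderer):

const float videoRoi[] = {0.25f, 0.25f, 0.75f, 0.75f}; // center 50% region
player.setPointMap(videoRoi); // show only this region: a 2x zoom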
Try decoders by name (case sensitive) in the given order and select the first one that works for the current media. This function can be called at any time. When the state is State::Playing, new decoders will be applied immediately.
names can contain decoder options/properties. Properties are separated by : and follow the key=value pattern. For example, an MFT decoder with d3d11 acceleration is MFT:d3d=11; without d3d acceleration and with pool enabled it is MFT:d3d=0:pool=1.
Decoder properties can also be set via Player.setProperty("video.decoder", "key1=val1:key2=val2") or Player.setProperty("audio.decoder", "key1=val1:key2=val2"); the properties then apply to all video or audio decoders, and can be set multiple times.
Decoder names and properties are listed here: https://github.com/wang-bin/mdk-sdk/wiki/Decoders#video-decoders
- Recommended decoders for win32:
player->setDecoders(MediaType::Video, {"MFT:d3d=11", "hap", "D3D11", "DXVA", "CUDA", "FFmpeg", "dav1d"});
- Recommended decoders for linux desktop:
player->setDecoders(MediaType::Video, {"hap", "VAAPI", "CUDA", "VDPAU", "FFmpeg", "dav1d"});
- Recommended decoders for macOS and iOS:
player->setDecoders(MediaType::Video, {"VT", "hap", "FFmpeg", "dav1d"});
- Recommended decoders for android:
player->setDecoders(MediaType::Video, {"AMediaCodec", "FFmpeg", "dav1d"});
- Recommended decoders for raspberry pi:
player->setDecoders(MediaType::Video, {"MMAL", "FFmpeg", "dav1d"});
Deprecated since 0.11.0, use setDecoders(MediaType::Video, names)
instead
Deprecated since 0.11.0, use setDecoders(MediaType::Audio, names)
instead
Enable the given tracks of a type to decode and render. The first track of each type is active by default.
- type:
  - MediaType::Unknown: select a program (usually for mpeg ts streams). tracks must contain only 1 value, N, indicating that the Nth program's audio and video tracks are used.
  - MediaType::Audio: select audio tracks.
  - MediaType::Video: select video tracks.
- tracks: set of active track numbers, from 0 to N. Invalid track numbers will be ignored. An empty set disables all tracks of the given type.
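For example, switching to the 2nd audio track, or selecting a program of an mpeg ts stream:

player.setActiveTracks(MediaType::Audio, {1});   // audio track number 1
player.setActiveTracks(MediaType::Unknown, {0}); // program 0: its a/v tracks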
- ms (default 10s): timeout value in milliseconds. Negative means infinite.
- cb: callback to be invoked on timeout.
  - return true to abort the current operation on timeout.
  - a null callback can also abort the current operation.
void prepare(int64_t startPosition = 0, function<bool(int64_t position, bool* boost)> cb = nullptr, SeekFlag flags = SeekFlag::FromStart)
Preload a media; the state then becomes State::Paused.
To play a media from a given position, call prepare(ms) then setState(State::Playing).
- startPosition: start from this position, relative to the media start position (i.e. MediaInfo.start_time).
- flags: seek flag if startPosition != 0.
- cb: if startPosition > 0, same as the callback of seek(), called after the first frame is decoded or on load/seek/decode error. If startPosition == 0, called when mediaInfo is ready or on load error, then the onMediaStatus callback will be invoked.
  - position: seek result, < 0 is error.
  - boost: can be set by the user in the callback (*boost = true/false) to boost the first frame rendering; default is true. Example: always returning false can be used to implement a media information reader, as sketched below.
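A minimal sketch of that pattern, assuming returning false in the callback stops further decoding/rendering as the note above suggests (the url is illustrative):

player.setMedia("video.mp4");
player.prepare(0, [&](int64_t position, bool* boost) {
    if (position < 0)
        return false;                      // load error
    *boost = false;                        // no need to boost the 1st frame
    const auto& info = player.mediaInfo(); // loaded: safe to read here
    // ... inspect info.duration, tracks, etc. ...
    return false;                          // stop here: info reader only
});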
Seek to a given position.
- pos: target position. If flags has SeekFlag::Frame, pos is a frame count, otherwise it's milliseconds.
  - If pos > media time range, e.g. INT64_MAX, seek to the last frame of the media for SeekFlag::AnyFrame, and to the last key frame for SeekFlag::Fast.
  - If pos > media time range with SeekFlag::AnyFrame, playback will stop unless setProperty("keep_open", "1") was called.
- flags: if it contains SeekFlag::Frame, only SeekFlag::FromNow|SeekFlag::Frame is supported, and the video frame rate MUST be known.
- cb: on success, the callback is called when the stream seek has finished and after the 1st frame is decoded or on decode error (e.g. video tracks disabled); ret (>= 0) is the timestamp of the 1st frame (video if it exists) after the seek. ret < 0 (usually -1) if an error (io, demux, not decode) occurred, -2 if skipped because of an unfinished previous seek, -3 if the media is unloaded, -4 if out of range.
NOTE: the result position in the seek callback is usually <= the requested pos, while the timestamp of the first frame decoded after the seek is the position nearest to the requested pos.
examples:
- step forward 1 frame:
seek(1LL, SeekFlag::FromNow|SeekFlag::Frame)
- step backward 1 frame:
seek(-1LL, SeekFlag::FromNow|SeekFlag::Frame)
- seek to the end of media(last frame):
seek(INT64_MAX, SeekFlag::FromStart)
- seek to the last key frame:
seek(INT64_MAX, SeekFlag::FromStart|SeekFlag::KeyFrame)
Current playback time in milliseconds. Relative to the media's first timestamp, i.e. mediaInfo().start_time, which is usually 0.
Current MediaInfo. You can call it in the prepare() callback, which is called when the media is loaded or failed to load.
Some fields can change during playback, e.g. video frame size change (via MediaEvent), live stream duration change, realtime bitrate change.
You may get an invalid value if mediaInfo() is called immediately after set(State::Playing) or prepare(), because the media is still opening but not yet loaded, i.e. mediaStatus() has no MediaStatus::Loaded flag.
A live stream's duration is 0 in the prepare() callback or when MediaStatus::Loaded is added; the duration then grows with the current read duration.
Request a new state. It's async and may take effect later. See https://github.com/wang-bin/mdk-sdk/wiki/Types#enum-state
setState(State::Stopped) only stops the current media. Call setNextMedia(nullptr, -1) before stopping to disable the next media.
setState(State::Stopped) will release all resources and clear the video renderer viewport, while a normal playback end will keep the renderer resources and the last video frame; manually call setState(State::Stopped) to clear them.
Call SetGlobalOption("videoout.clear_on_stop", 0) to keep the renderer resources and the last frame.
NOTE: the requested state is not queued, so setting one state immediately after another may have no effect. E.g. State::Playing right after State::Stopped may have no effect if playback has not yet stopped and is still in the Playing state, so the final state is State::Stopped. The current solution is to waitFor(State::Stopped) before setState(State::Playing), as sketched below. Usually there is no waitFor(State::Playing) because we want an async load.
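A sketch of that stop-then-play pattern, assuming waitFor(State) blocks until the state is reached as in the SDK headers:

player.setState(State::Stopped);
player.waitFor(State::Stopped);   // block until fully stopped
player.setState(State::Playing);  // now takes effect reliably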
Player& onMediaStatus(std::function<bool(MediaStatus oldValue, MediaStatus newValue)> cb, CallbackToken* token = nullptr)
Add/Remove a callback or clear all callbacks for MediaStatus change.
- cb: the callback. return true.
- token: see https://github.com/wang-bin/mdk-sdk/wiki/Types#callbacktoken
Deprecated. Use onMediaStatus
instead.
Set a callback which is invoked when the vo corresponding to vo_opaque needs to update/draw content, e.g. when a new frame is received in the renderer. Also invoked in setVideoSurfaceSize(), setVideoViewport(), setAspectRatio() and rotate().
With vo_opaque, the user can know which vo/renderer is rendering, which is useful with multiple renderers.
Render the next or current (redraw) frame. Foreign render contexts only (i.e. contexts not created by createSurface()/updateNativeSurface()). A wiring sketch follows.
OpenGL: can be called in multiple foreign contexts for the same vo_opaque.
- return: timestamp of the rendered frame, or < 0 if no frame is rendered. Precision is microseconds.
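A typical foreign-context wiring, e.g. a GL widget: schedule a redraw from the render callback, then draw on the render thread with the context current; scheduleRedraw(), width and height are hypothetical gui-toolkit pieces:

player.setRenderCallback([&](void*){ scheduleRedraw(); }); // any thread
// in the widget's paint function, with the GL context current:
player.setVideoSurfaceSize(width, height); // from the widget
player.renderVideo();                      // returns rendered frame time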
Send a user-provided frame to the video renderer. You must call renderVideo() later in the render thread. The frame data can be in host memory, and can also be d3d11/9 resources, for example:
mdkVideoBufferPool* pool{}; // can be reused for textures from the same producer
player.enqueue(VideoFrame::from(&pool, DX11Resources{
    .resource = tex,
    .subResource = index,
})/*.setTimestamp(...)*/);
// ... after rendering is done with frames from this producer:
mdkVideoBufferPoolFree(&pool);
Set playback speed. The FFmpeg atempo filter is required.
- value: >= 0.5. 1.0 is the original speed.
Set A-B loop repeat count.
- count: repeat count. 0 to disable looping and stop when out of range (B).
Set A-B loop range, or playback range.
- a: loop begin position, in ms.
- b: loop end position, in ms. -1, INT64_MAX or numeric_limits<int64_t>::max() indicates that b is the end of the media.
Add/Remove a callback which will be invoked right before a new A-B loop, or remove all callbacks. See the sketch below.
- cb: callback, invoked with the elapsed loop count.
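A small sketch combining the three calls above: loop the first 5 seconds twice, observing each new loop:

player.setLoop(2);          // repeat count
player.setRange(0, 5000);   // A = 0 ms, B = 5000 ms
player.onLoop([](int count){
    // count: elapsed loop count
});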
Set a custom sync callback as the clock. A sketch follows.
- cb: called when about to render a frame. Returns the expected current playback position in seconds. The sync callback clock should handle pause, resume, seek and seek-finish events.
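A hedged sketch slaving the player to an external clock; externalClockSeconds() is a hypothetical clock source that must track pause/seek:

player.onSync([&]{
    return externalClockSeconds(); // expected playback position, in seconds
});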
using SnapshotCallback = std::function<std::string(SnapshotRequest*, double frameTime)>;
Take a snapshot from the current renderer. The result is in bgra format, or null on failure. A MediaEvent may be fired.
When snapshot() is called, a redraw is scheduled for vo_opaque's renderer, and the renderer then takes the snapshot in the rendering thread. So for a foreign context, if the renderer's surface/window/widget is invisible or minimized, snapshot may do nothing because of system or gui toolkit painting optimizations.
If there is no on-screen renderer, an offscreen OpenGL (or other RenderAPI) context is required, and setRenderCallback() must schedule a task to call renderVideo() in the offscreen context.
- request: see https://github.com/wang-bin/mdk-sdk/wiki/Types#struct-snapshotrequest
- cb: the callback called when the video frame is captured, with the resulting request and the captured frame time. Return a file path to save the snapshot as a file, or an empty string to do nothing.
BUG: to capture the first frame, snapshot() must be called twice, because no frame has been rendered when the 1st snapshot is requested.
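A hedged usage sketch; the SnapshotRequest fields are per the Types wiki page linked above, and the file name is illustrative:

SnapshotRequest req{}; // zero size: use the current renderer size
player.snapshot(&req, [](SnapshotRequest* ret, double frameTime) {
    return ret ? std::string("snapshot.jpg") // non-empty: save to this file
               : std::string();              // empty: do nothing
});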
Start recording or stop recording the current media by remuxing the packets read. If the media is not loaded, the recorder will start when playback starts.
- url: destination. null or the same value as the current recording stops recording. Can be a local file or a network stream.
- format: forced format. If null, guess from the url. If null and the format guessed from the url does not support all codecs of the current media, another suitable format will be used.
examples:
// start
player.record("record.mov");
player.record("rtmp://127.0.0.1/live/0", "flv");
player.record("rtsp://127.0.0.1/live/0", "rtsp");
// stop
player.record(nullptr);
Set a callback to be invoked before delivering a decoded and avfilter-processed (if one exists) frame to the renderers. The frame can be a VideoFrame or an AudioFrame (NOT IMPLEMENTED).
The callback can be used as a filter.
- cb: callback to be invoked. Returns the pending number of frames. The callback parameter is both the input and output frame; if the input frame is an invalid frame, output a pending frame.
WARNING: set(State::Stopped) in the callback is undefined and may result in a dead lock.
For most filters, 1 input frame generates 1 output frame; then return 0.
Example:
player.onFrame<VideoFrame>([&](auto& frame, int){
// read frame info. or edit the frame and set as output like a filter
return 0; // usually it's 0, unless you need to output multiple frames
});
Add/Remove a MediaEvent listener, or remove listeners.
- cb: the callback. Return true if the event is processed and dispatching should stop.
- token: see https://github.com/wang-bin/mdk-sdk/wiki/Types#callbacktoken
Can be used to store user data, or change player behavior if the property is defined internally.
Predefined properties are:
- "continue_at_end" or "keep_open": do not stop playback when decode and render to end of stream. Useful for timeline preview. only setState(State::Stopped) can stop playback
- "audio.decoders": decoder list for setDecoders(), with or without decoder properties. "name1,name2:key21=val21"
- "video.decoders": decoder list for setDecoders(), with or without decoder properties. "name1,name2:key21=val21"
- "audio.decoder": audio decoder properties, value is "key=value" or "key1=value1:key2=value2". override "decoder" properties. key-values can be FFmpeg options(AVOption) for ffmpeg based decoders
- "video.decoder": video decoder properties, value is "key=value" or "key1=value1:key2=value2". override "decoder" properties. key-values can be FFmpeg options(AVOption) for ffmpeg based decoders
- "decoder": video and audio decoder properties, value is "key=value" or "key1=value1:key2=value2". key-values can be FFmpeg options(AVOption) for ffmpeg based decoders
- "record.copyts", "recorder.copyts": "1" or "0"(default), use input packet timestamp, or correct packet timestamp to be continuous.
- "record.$opt_name": option for recorder's muxer or io,
opt_name
can also be an ffmpeg option, e.g. "record.avformat.$opt_name" and "record.avio.$opt_name". - "reader.decoder.$DecoderName": $DecoderName decoder properties, value is "key=value" or "key1=value1:key2=value2". override "decoder" properties. key-values can be FFmpeg options(AVOption) for ffmpeg based decoders
- "reader.starts_with_key": "0" or "1"(default). if "1", recorder and video decoder starts with key-frame, and drop non-key packets before the first decode.
- "buffer" or "buffer.range": parameters setBufferRange(). value is "minMs", "minMs+maxMs", "minMs+maxMs-", "minMs-". the last '-' indicates drop mode
- "demux.buffer.ranges": default "0". set a positive integer to enable demuxer's packet cache(if protocol is listed in property "demux.buffer.protocols"), the value is cache ranges count. Cache is useful for network streams, download data only once(if a cache range is not dropped), speedup seeking. Cache ranges are increased by seeking to a uncached position, decreased by merging ranges which are overlapped and LRU algorithm.
- "demux.buffer.protocols": default is "http,https". only these protocols will enable demuxer cache.
- "demux.max_errors": continue to demux the stream if error count is less than this value. same as global option "demuxer.max_errors"
- "avformat.$opt_name": avformat option via AVOption, e.g. {"avformat.fpsprobesize": "0"}. if global option "demuxer.io=0", it also can be AVIOContext/URLProtocol option.
video_codec_id, audio_codec_id and subtitle_codec_id
are also supported even are not AVOption, value is codec name.video_codec_id
is useful for capture devices with multiple codecs supported. - "avio.$opt_name": AVIOContext/URLProtocol option, e.g.
avio.user_agent
for UA,avio.headers
for http headers. - "avcodec.$opt_name": AVCodecContext option, will apply for all FFmpeg based video/audio/subtitle decoders. To set for a single decoder, use setDecoders() with properties.
- "video.avfilter": ffmpeg avfilter filter graph string for video track. take effect immediately when playing(not paused). ONLY WORKS WITH SOFTWARE DECODERS
- "audio.avfilter": ffmpeg avfilter filter graph string for audio track. take effect immediately when playing(not paused).
- "cc": "0" or "1"(default). enable closed caption decoding and rendering.
- "subtitle": "0" or "1"(default). enable subtitle(including cc) rendering.
setActiveTracks(MediaType::Subtitle, {...})
enables decoding only.
Style properties (for srt, subrip, plain text etc., not ass/ssa):
- "subtitle.font": font name, can be empty(default)
- "subtitle.font.size": font size, default is 22
- "subtitle.font.spacing": font spacing between chars, float value string. default is "0"
- "subtitle.bold", "subtitle.font.bold": bold, "0"(default) or "1"
- "subtitle.italic", "subtitle.font.italic": italic, "0"(default) or "1"
- "subtitle.underline", "subtitle.font.underline": underline, "0"(default) or "1"
- "subtitle.strikeout", "subtitle.font.strikeout": strikeout, "0"(default) or "1"
- "subtitle.color": font color. rgba integer value of base 10 or 16, default is "0xffffffff"
- "subtitle.color.outline": outline color. rgba integer value of base 10 or 16, default is "0x000000ff"
- "subtitle.color.background": shadow or background box color. rgba integer value of base 10 or 16, default is "0"
- "subtitle.border": border size, float value, default is "1.2"
- "subtitle.shadow": shadow size, float value, default is "0". if < 0, will show background box if "box" value >=0
- "subtitle.box": background box edge width, float value, default is "0". if < 0, will show shadow if "shadow" value > 0
- "subtitle.alignment.x": horizontal aligment, value can be "-1": left, "0": center, "1": right
- "subtitle.alignment.y": vertical aligment, value can be "-1": top, "0": center, "1": bottom
- "subtitle.margin.x": horizontal margin, int value, default "10". no effect if align to horizontal center
- "subtitle.margin.y": vertical margin, int value, default "10". no effect if align to vertical center
Deprecated since 0.11.0, use SetGlobalOption("jvm", vm) instead.
Android only. Set a JavaVM*, or get the current value if vm is null. Required on android to use AMediaCodec, MediaCodec, AudioTrack and android IO when System.loadLibrary("mdk") is not called.
Used as an id for a renderer. Currently used for externally provided render (OpenGL) contexts. Can be nullptr. Calling an interface with this parameter guarantees that a corresponding renderer is created.
To support multiple video outputs, mdk uses vo_opaque to identify a video output (maybe rendererId would be a better name). vo_opaque is unique per video output, but it can be any value, for example the widget ptr in https://github.com/wang-bin/mdk-examples/blob/master/Qt/QMDKRenderer.cpp#L64. vo_opaque can be null, which is used when there is only 1 video output. For most programs, 1 output is enough, so null is the default value.