This repository has been archived by the owner on Aug 5, 2022. It is now read-only.

yami transcode always have dup frames #3

Open
mypopydev opened this issue Jul 4, 2016 · 2 comments
Comments

@mypopydev
Contributor

mypopydev commented Jul 4, 2016

yami transcode always have dup frames.

@mypopydev mypopydev changed the title yami transcode always have dup frame. yami transcode always have dup frames Jul 4, 2016
@popkartyeah
Contributor

popkartyeah commented Jul 5, 2016

I think I have fixed this issue: the total surface count is 18, and the libyami decoder DPB uses 16 of those surfaces, so only 2 surfaces are left for the encoder.

@mypopydev
Contributor Author

No, I can reproduce this issue on the master branch (commit 7eadf3).

@mypopydev mypopydev self-assigned this Jul 12, 2016
mypopydev pushed a commit that referenced this issue Jul 28, 2016
#3)

Rev #2: Fixes doubled header writing, checked FATE running without errors
Rev #3: Fixed coding style

This commit addresses the following scenario:

We are using ffmpeg to transcode or remux mkv (or something else) to mkv. The result is streamed on-the-fly to an HTML5 client (streaming starts while ffmpeg is still running). The problem is that the client cannot detect the duration, because the duration is only written to the mkv at the end of the transcoding/remuxing process. In matroskaenc.c, the duration is only written during mkv_write_trailer, not during mkv_write_header.

The approach:

FFmpeg currently puts considerable effort into estimating the durations of source streams, but in many cases the source stream durations are still left at 0, and these durations are nowhere mapped to or used for output streams. As much as I would have liked to deduce or estimate output durations from input stream durations, I realized that this is a hard task (as Nicolas already mentioned in a previous conversation). It would involve changing the duration calculation/estimation/deduction for input streams and propagating those durations to the output streams or the output context in a correct way.
So I looked for a simple and small solution with a better chance of being accepted. In webmdashenc.c I found that a duration is written during write_header, taken from the streams' metadata, so I decided on a similar approach.

And here's what it does:

At first it checks the duration of the AVFormatContext. In typical cases this value is not set, but it is set when the user has specified a recording time or an end time via the -t or -to parameters.
Then it looks for a DURATION field in the metadata of the output context (AVFormatContext::metadata). This only exists when the user has explicitly specified a metadata DURATION value on the command line.
Then it iterates over all streams looking for a "DURATION" metadata entry (this works unless the option "-map_metadata -1" has been specified) and determines the maximum value.
The precedence is as follows: 1. the duration of the AVFormatContext; 2. an explicitly specified metadata duration value; 3. the maximum (mapped) metadata duration over all streams.

To test this:

1. With explicit recording time:
ffmpeg -i file:"src.mkv" -loglevel debug -t 01:38:36.000 -y "dest.mkv"

2. Take duration from metadata specified via command line parameters:
ffmpeg -i file:"src.mkv" -loglevel debug -map_metadata -1 -metadata Duration="01:14:33.00" -y "dest.mkv"

3. Take duration from mapped input metadata:
ffmpeg -i file:"src.mkv" -loglevel debug -y "dest.mkv"

Regression risk:

Very low IMO, because it only affects the header while ffmpeg is still running. When ffmpeg completes the process, the duration is rewritten to the header with the usual value (the same as without this commit).

Signed-off-by: SoftWorkz <[email protected]>
Signed-off-by: Michael Niedermayer <[email protected]>
mypopydev added a commit that referenced this issue Jul 29, 2016
Update with the upstream