I can't figure out how the author turned the video into a stream #275
Comments
Same here!
The author is a real master of stream operations. I'm reading through the code and it's all a fog to me; my blind guess is that audio/video sync is done through a queue. This part feels really complex. A stripped-down example would help a lot, or maybe there's a simpler way to do it. // Whether it's an audio frame or a video frame, everything goes into the queue: res_frame_queue.put((res_frame,__mirror_index(length,index),audio_frames[i2:i2+2]))
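A minimal, self-contained sketch of what that put() call might be doing (the names are assumptions modeled on the quoted snippet, not the project's actual code): each queue entry pairs one synthesized video frame with the two audio frames that cover the same 40 ms of playback, so whoever pops the queue gets an already-synchronized unit and the streams can't drift apart.

```python
import queue

# Sketch only: names are guesses modeled on the quoted snippet.
res_frame_queue = queue.Queue(maxsize=50)   # bounded queue => backpressure on the producer

def produce(video_frames, audio_frames, length):
    # 25 fps video vs. 50 fps audio: video frame i pairs with audio frames 2i and 2i+1.
    for i, frame in enumerate(video_frames):
        i2 = i * 2
        # The base-video index is cycled with a plain modulo here;
        # the real __mirror_index helper ping-pongs instead of wrapping.
        res_frame_queue.put((frame, i % length, audio_frames[i2:i2 + 2]))

def consume():
    # Each get() yields one synchronized (video frame, base index, audio pair) unit.
    frame, idx, audio_pair = res_frame_queue.get()
    return frame, idx, audio_pair
```

Because video and audio travel together in one queue item, no separate synchronization between two queues is needed.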
I don't get it either!
My two cents: process_frames consumes the output of the inference function in the same file. When there is data, it composites the synthesized face onto the original image; otherwise it falls back to the original video's frame. Then the images and audio are placed onto the audio and video tracks (webrtc.py), timestamps are computed, and everything is sent over the WebRTC channel. Synchronization is based on the frame rates: 25 fps for video frames vs. 50 fps for audio frames, which is why there is that 2× audio handling.
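A back-of-the-envelope check of that 25 fps / 50 fps pairing. The 90 kHz video clock and 16 kHz audio sample rate below are common WebRTC conventions I'm assuming for illustration, not confirmed project values; the point is only that one video frame and two audio chunks advance the timestamps by the same 40 ms.

```python
from fractions import Fraction

VIDEO_CLOCK = 90000                         # typical RTP clock rate for video (assumed)
VIDEO_FPS = 25
VIDEO_PTS_STEP = VIDEO_CLOCK // VIDEO_FPS   # 3600 ticks per frame = 40 ms

SAMPLE_RATE = 16000                         # audio sample rate (assumed)
AUDIO_CHUNK_MS = 20                         # 50 audio frames per second
AUDIO_PTS_STEP = SAMPLE_RATE * AUDIO_CHUNK_MS // 1000   # 320 samples per chunk

def timestamps(n_video_frames):
    """For each video frame i, return its pts and the pts of the two
    20 ms audio chunks that cover the same 40 ms of playback."""
    out = []
    for i in range(n_video_frames):
        v_pts = i * VIDEO_PTS_STEP
        a_pts = [(2 * i) * AUDIO_PTS_STEP, (2 * i + 1) * AUDIO_PTS_STEP]
        out.append((v_pts, a_pts))
    return out

# Sanity check: both steps cover the same wall-clock duration.
# video: 3600 / 90000 s = 40 ms; audio: 2 * 320 / 16000 s = 40 ms.
assert Fraction(VIDEO_PTS_STEP, VIDEO_CLOCK) == Fraction(2 * AUDIO_PTS_STEP, SAMPLE_RATE)
```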
For a simple example, look at the examples shipped with aiortc.
It doesn't do the step you described ("then merges the images with the video and puts them on the audio..."); the author just uses the images directly. Put plainly: push images straight to the queue, 25 per second, along with the matching audio frames, then render from the queue to the page, and then keep cycling from frame 50 onward. There's no merging involved; merging would be slow. As far as I know, the author is the only one on the whole internet pushing video and audio as a stream this way; it's very innovative.
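The "keep cycling" part can be sketched as a ping-pong index over the base video's frames, so playback runs forward and then backward instead of jumping from the last frame straight back to the first. This is my reconstruction of what the __mirror_index helper quoted above probably does; treat it as a guess, not the project's verified code.

```python
def mirror_index(length, index):
    # Guess at __mirror_index's behavior: walk 0..length-1, then back down
    # length-2..0, repeating, so the looped base video never visibly "jumps".
    turn = index // length          # which pass over the clip we are on
    res = index % length            # position within the current pass
    return res if turn % 2 == 0 else length - 1 - res

# With a 50-frame clip: index 49 -> frame 49, index 50 -> frame 49,
# index 51 -> frame 48, ... index 99 -> frame 0, then forward again.
```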
I still can't figure out how it works~~