
Ffmpeg eats all the memory and crash within a minute - recording or streaming #269

Closed
Arzar opened this issue Apr 16, 2020 · 103 comments

@Arzar

Arzar commented Apr 16, 2020

Description

When I start a recording or a streaming session on Jitsi, in less than a minute the recording/stream stops and my whole server becomes slow and unresponsive.

With top, I could pinpoint the culprit: ffmpeg. It eats away all the memory very quickly; in less than a minute my 8GB are filled.

You can find attached the Jibri log from when I tried a streaming session. Nothing stands out to me. I stopped the streaming after 15 seconds and ffmpeg was already at 40% memory.

Also, if I completely stop prosody, jicofo, jvb and jibri, log in as the jibri user and start ffmpeg by myself using the command I found in log.0.txt, I get the same issue: the CPU shoots to 150% and the memory keeps growing. I have to kill ffmpeg before it saturates the memory.

ffmpeg -y -v info -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096 -i :0.0+0,0 -f alsa -thread_queue_size 4096 -i plug:bsnoop -acodec aac -strict -2 -ar 44100 -c:v libx264 -preset veryfast -maxrate 2976k -bufsize 5952k -pix_fmt yuv420p -r 30 -crf 25 -g 60 -tune zerolatency -f flv rtmp://a.rtmp.youtube.com/live2/aaa

If I remove every parameter related to sound from this ffmpeg command line (that is, -f alsa -thread_queue_size 4096 -i plug:cloop -acodec aac), the memory saturation issue goes away and memory usage is stable. So it clearly seems to be related to the sound. How can I debug this kind of issue?
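One way to quantify a leak like this (a minimal sketch of my own, not from the thread) is to sample the ffmpeg process's resident set size once per second, so the growth rate can be measured instead of guessed. The pid would come from something like `pgrep -f x11grab`.

```shell
#!/bin/bash
# Leak-measurement sketch: print a process's RSS once per second.
# Usage: ./rss-watch.sh <pid>

rss_kb() {
  # RSS in kilobytes as reported by ps; empty once the process exits
  ps -o rss= -p "$1" | tr -d ' '
}

if [ -n "${1:-}" ]; then
  while rss="$(rss_kb "$1")" && [ -n "$rss" ]; do
    printf '%s rss=%skB\n' "$(date +%T)" "$rss"
    sleep 1
  done
fi
```

Piping the output to a file gives a simple growth curve that can be compared across ffmpeg versions or argument sets.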

Possible Solution


Steps to reproduce


Environment details

Ubuntu 16, followed the instructions on GitHub

lsmod | grep snd_aloop
snd_aloop              24576  0
snd_pcm               106496  1 snd_aloop
snd                    81920  3 snd_aloop,snd_timer,snd_pcm

jibri@JibriTestSrv:/root$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Loopback [Loopback], device 0: Loopback PCM [Loopback PCM]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: Loopback [Loopback], device 1: Loopback PCM [Loopback PCM]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7

browser.0.txt
log.0.txt
ffmpeg.0.txt
asoundrc.txt

@rfdparker

rfdparker commented Apr 21, 2020

We seem to have hit the same or a similar issue with Jitsi/Jibri on Debian 10: the memory usage of ffmpeg increases indefinitely until all real memory and swap are used and the kernel OOM killer kills the ffmpeg processes.

That said, I've not yet tried running without ffmpeg being passed the arguments -f alsa -thread_queue_size 4096 -i plug:cloop -acodec aac as suggested. How does one override the arguments Jibri uses to start ffmpeg?

@Arzar
Author

Arzar commented Apr 22, 2020

memory usage of ffmpeg increasing indefinitely until all real memory and swap is used until the kernel OOM killer kills the ffmpeg processes.

That seems to be it! Also, a third person reported the same on the forum.

How does one override what arguments Jibri uses to start ffmpeg?

In the general case, I think you need to modify the Jibri source code. In my case, just for testing purposes, I did the following quick hack: rename /usr/bin/ffmpeg to /usr/bin/ffmpeg-original and then create the following /usr/bin/ffmpeg script:

#!/bin/bash
# Quick hack, not a real solution: log the arguments Jibri passes, strip the
# ALSA/audio options, and call the real binary. The unquoted expansion on the
# last line is intentional so the remaining arguments get re-split into words.
PARAMS="$*"
echo "$PARAMS" > /tmp/param.txt
PARAMNOSOUND="$(echo "$PARAMS" | sed 's/-f alsa.*aac//' | sed 's/ffmpeg//g')"
echo "$PARAMNOSOUND" >> /tmp/param.txt
ffmpeg-original $PARAMNOSOUND

So it's not a solution by any means (Jibri can't even stop ffmpeg at the end of a session anymore), just a quick check to confirm that sound is the issue.

@bbaldino
Member

bbaldino commented Apr 22, 2020

Do you have an ffmpeg log from when this happens? We might be seeing some instances of this as well. Oops, I read right past the attachments. The failure mode in those logs is different from what I was thinking of, though (and I definitely haven't seen anything like this after such a short amount of time).

@ec-blaster

I have the exact same issue.
We are using an independent machine for Jibri.
Debian 10, 2 vCPUs and 8GB RAM.
It eats up all the memory and all the swap, and ffmpeg crashes a while after all the memory is taken.

@NBoESFWbVaf

Same here.
Virtual server with Ubuntu 18.04, 4 CPUs, 8GB RAM.
Interestingly, if I set "disableThirdPartyRequests: true," (Gravatar) in
/etc/jitsi/meet/meet.mydomain.com-config.js,
my memory usage is stable.

Can anybody confirm this?

@ec-blaster

I can confirm that when you disable third-party requests, the memory usage seems to be stable.
I did a test for about 10 minutes and it stayed below 1GB.
Thanks!

@NBoESFWbVaf

OK, but that only seems to have been the beginning. After 26-27 min, the memory went through the roof, from 1.5GB to 8GB plus swap. Same stream, no interaction, hardly any sound.

@rfdparker

Here the same. […] if i set "disableThirdPartyRequests: true," (Gravatar) […] my memory usage is stable. Can anybody confirm this?

We have tried setting disableThirdPartyRequests: true; however, it did not seem to resolve the issue, unfortunately.

@rfdparker

In the general case, I think you need to modify the jibri source code. […] just a quick check to confirm sound is the issue.

Thanks for the suggestion. We tried a similar script (in /usr/local/bin) to remove the unwanted ffmpeg parameters. That does seem to stop the memory usage growing, for both recording and streaming. However, with that in place YouTube does not report that it is receiving a stream (and of course a stream with no sound would not be too useful anyhow).

@NBoESFWbVaf

NBoESFWbVaf commented Apr 24, 2020

OK, here's another try. I've upgraded the ffmpeg version that comes with the Ubuntu 18.04 repo, from

ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)

to

ffmpeg version 4.2.2-1ubuntu1~18.04.york0 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)

with

sudo apt install software-properties-common
sudo add-apt-repository --yes ppa:jonathonf/ffmpeg-4
sudo apt update
sudo apt install ffmpeg

Memory usage is around 1.7GB most of the time. Sometimes it goes up to 3GB for no apparent reason, but quickly returns to 1.7GB.

Edit: CPU load 5-10% better.
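For anyone doing similar repo swaps, a quick way to confirm which ffmpeg line actually ended up installed (my own helper sketch, not from the thread) is a `sort -V` comparison:

```shell
# Succeeds when version $1 >= version $2, using GNU sort's version sort.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: did the PPA upgrade actually take effect?
installed="$(ffmpeg -version 2>/dev/null | awk 'NR==1 {print $3}')"
if version_ge "${installed%%-*}" 4.0; then
  echo "ffmpeg $installed is 4.x or newer"
else
  echo "ffmpeg ${installed:-not found} is older than 4.x"
fi
```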

@NBoESFWbVaf

After 30 min, out of memory and swap for no apparent reason. Incredible.

@NBoESFWbVaf

OK, I'm breaking off now. Video recording is a great feature, but not particularly important for me at the moment. A 50-min stream was no problem, but after restarting the server it no longer works properly. I've set it back up as a working video chat without recording/streaming. Bye guys and good luck.

@ec-blaster

For us, it's especially important.
Today I tried the parameter you sent yesterday and yes, it's stable for about 25-30 mins and then crashes again with full memory.
Today I'll try to compile my own ffmpeg...

@NBoESFWbVaf

I hadn't thought of compiling it myself. That way you should also be able to set your own parameters. Definitely exciting. I'll keep my fingers crossed.

@snoopytfr

Hi,

I have the same problem: after a while the memory fills up and ffmpeg crashes.

@ec-blaster

ec-blaster commented Apr 24, 2020

I finally compiled the latest version of ffmpeg and now everything is fine.
I made a test recording of a whole hour and the memory usage stayed at about 3GB, with no other incident and no warnings or errors in the logs.
I think the combination of Jibri and the ffmpeg version bundled with Debian Buster is the culprit.

@rfdparker

I finally compiled the latest version of ffmpeg and now everything is fine. […]

It's good to hear you found a way to resolve this, although we haven't been able to replicate your fix so far.

Which version of ffmpeg did you compile? Is this still on Debian 10?

We too are running Debian 10 (buster). Rather than actually compiling ffmpeg, we tried the FFmpeg static builds, placing the static binaries (ffmpeg, ffprobe and qt-faststart) into /usr/local/bin so that they take precedence over those of the ffmpeg Debian package (which is installed as a dependency of the jibri package).

First we tried release 4.2.2, which seemed to result in the same behaviour as before: the ffmpeg process would use increasingly more memory until both real memory and swap were exhausted and the kernel OOM killer killed ffmpeg.

Second, we tried the git master build dated 20200324 (ffmpeg -version reports its version as N-52056-ge5d25d1147-static). This appeared to stabilise the memory usage of the ffmpeg process and stop it growing indefinitely. However, we then seemed to hit a new issue where the memory usage of the Xorg process (belonging to jibri-xorg.service) grows indefinitely, akin to what the ffmpeg process was previously doing. Ultimately the kernel OOM killer kills Xorg.

As an aside, before your last comment I'd thought the ffmpeg version was unlikely to be the cause, given that we seem to have a variety of versions floating around, including those in the repos of Ubuntu 16.04, Ubuntu 18.04 and Debian 10 as well as some from third-party repos. In addition, the install guide at https://github.com/jitsi/jibri refers to adding a repo to get a newer ffmpeg version when using Ubuntu 14.04 (EOL since last year), but that only provides ffmpeg version 3.4.0, which is older than the versions in the Ubuntu 18.04 and Debian 10 repos.
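For readers unsure why dropping a binary into /usr/local/bin shadows the distro package: the shell takes the first match in $PATH order. A self-contained demonstration with hypothetical throwaway paths (my own sketch):

```shell
# Demonstrate PATH precedence: a binary in an earlier PATH entry wins,
# exactly like /usr/local/bin before /usr/bin for ffmpeg.
demo="$(mktemp -d)"
mkdir -p "$demo/local" "$demo/usr"
printf '#!/bin/sh\necho static-build\n' > "$demo/local/ffmpeg"
printf '#!/bin/sh\necho distro-package\n' > "$demo/usr/ffmpeg"
chmod +x "$demo/local/ffmpeg" "$demo/usr/ffmpeg"

# "local" is listed first, so its ffmpeg is the one that runs
out="$(PATH="$demo/local:$demo/usr" ffmpeg)"
echo "$out"
rm -rf "$demo"
```

On a real Jibri box, `type -a ffmpeg` lists every match in PATH order and confirms which copy wins.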

@ec-blaster

Which version of ffmpeg did you compile? Is this still on Debian 10? […]

I compiled the latest ffmpeg available as of yesterday, following the guide at:

https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

We are on Debian Buster, and I used the following settings:

  • From standard repositories:
    • nasm
    • yasm
    • libx264-dev
    • libmp3lame-dev
    • libopus-dev
  • Compiled from sources:
    • libfdk-aac-dev
    • ffmpeg

The resulting ffmpeg version is N-97465-g4e81324

I just tried recording a session for an entire hour, and it was very stable.
The memory used stayed at 3.03 GB the whole time (the machine has 8GB) and the recording was fine.
Hope it helps
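For reference, the dependency split above corresponds roughly to a configure line like the following. This is a sketch based on the standard FFmpeg compilation guide linked above, not ec-blaster's exact command; `--enable-nonfree` is needed because libfdk-aac is not GPL-compatible.

```shell
# Build-configuration sketch: distro -dev packages for x264/lame/opus,
# libfdk-aac and ffmpeg itself built from source.
sudo apt install nasm yasm libx264-dev libmp3lame-dev libopus-dev
./configure \
  --enable-gpl \
  --enable-nonfree \
  --enable-libx264 \
  --enable-libmp3lame \
  --enable-libopus \
  --enable-libfdk-aac
make -j"$(nproc)" && sudo make install
```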

@ec-blaster

I've now tested streaming to YouTube, and it crashed again. Memory full.

@pdarcos

pdarcos commented May 12, 2020

Has this been fixed?

I was about to install Jibri, but after reading this I'm a bit wary of how much memory is actually needed.

Anyone have a working setup? If so, what hardware specs?

Thanks

@pdarcos

pdarcos commented May 16, 2020

To answer my own question, in order to help others: no, it hasn't been fixed.

I recorded a 1.5-hour conference yesterday and hit a bunch of errors like the ones described here. I'll see if I can dig into the logs when I have some time and post them here.

This is very disappointing as there doesn't seem to be any solution available.

@igorayres-falepaco

I'm having the same problem...

@VengefulAncient

We have the same issue on Kubernetes (GKE), but strangely enough, not on Docker. The Kubernetes Jibri deployment managed to OOM a node with 6.5 GB RAM with a single livestream, while the Docker deployment is running at 1.15 GB. Both use exactly the same ffmpeg version, since they use the same Jibri Docker image.

@rfdparker

We may have found a solution/workaround for this, although it's a surprising one.

In a nutshell, it's using a Java 8 JRE instead of a Java 11 JRE.

This is despite the high memory usage we've seen coming from either FFmpeg or Xorg, neither of which is a Java process. The Jibri service runs in a JRE, though.

We came across this when looking for differences between the Jibri Docker image and our 'native' (i.e. not container-based) installation on Debian 10. The Jibri Docker image is based on Debian 9, where default-jre pulls in openjdk-8-jre (an OpenJDK 8 JRE), whereas on Debian 10 it brings in openjdk-11-jre (an OpenJDK 11 JRE).

Another clue was Woodworker_Life's "How-to to setup integrated Jitsi and Jibri for dummies, my comprehensive tutorial for the beginner" thread on the Jitsi Community Forum. There Woodworker_Life appears to say outright that Jibri won't work with a Java 11 JRE, although not specifically how or why.

There is no OpenJDK 8 JRE in the Debian 10 repos, so, as in Woodworker_Life's thread, we used the OpenJDK 8 JRE package for Debian from the AdoptOpenJDK project.
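A small guard along these lines (my own sketch, not from the thread) could catch the Java 11 case before Jibri starts; the version-string parsing handles both the old "1.8.0_x" and the new "11.0.x" formats:

```shell
# Extract the major Java version from a `java -version` string so a wrapper
# can warn when Jibri is about to run on Java 11 instead of Java 8.
java_major() {
  # "1.8.0_252" -> 8 ; "11.0.7" -> 11
  local v="${1#1.}"
  printf '%s\n' "${v%%[._]*}"
}

ver="$(java -version 2>&1 | awk -F'"' 'NR==1 {print $2}')"
if [ "$(java_major "${ver:-0}")" -ge 11 ] 2>/dev/null; then
  echo "Java $ver detected: Jibri reportedly needs a Java 8 JRE"
fi
```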

So far we've made several long attempts at recording and streaming (to YouTube), albeit mostly with only 2 participants. At least one ran for over an hour. In none of those cases have we had memory issues or crashes. Memory seems to hold steady, sometimes as low as 6xx MB, but always less than 2 GB so far. That said, for real-world use where we'll have more participants (albeit < 10) we won't skimp on memory allocated to the VM (we'll have 6 GB).

We'll shortly be building and working with the server for 'real world' use, so we'll update if we again encounter issues ‒ but, fingers crossed, it's looking good!

@sanvila

sanvila commented Jun 5, 2020

My case: Debian 10 with Java 8. Test: YouTube live streaming. Machines from GCE.

2 vCPUs and 6GB RAM -> eventual crash
4 vCPUs and 4GB RAM -> smooth and low memory usage (below 1GB, as reported by Grafana)

My theory is that a low number of CPUs means ffmpeg cannot process data as fast as it should.

@ec-blaster

We had a very frustrating experience this week with Jibri.

We had tested it with our latest specification (Debian 10, JRE 8, 4 vCPUs and 12GB RAM) on several recordings that went well, so we decided to go ahead and scheduled a very important meeting to stream to YouTube.

The meeting was for 27 participants, 14 of them visible (LastN=14), and was going to be very long (about 5 hours).
When the meeting was heading into its first hour, we had to stop Jibri because all the memory (12GB) and swap (8GB) had been eaten.
After several attempts at stopping and relaunching the streaming (we have 2 cloned Jibris), we had to move to 24GB and 6 vCPUs, and then all went smoothly...
But the damage to our reputation was already done...

@bbaldino
Member

bbaldino commented Jun 8, 2020

We had a very frustrating experience this week with Jibri. […]

Have you searched around on the ffmpeg side for this? It seems like such an odd bug. I don't think it can be a fundamental parameter issue, as we're not seeing this consistently, but maybe a combination of some setting we pass to ffmpeg and some network condition?

@agustinramirez

We have the same issue; our configuration is Ubuntu Server 16.04, ffmpeg 2.8.15-0ubuntu0.16 and Jibri 8.0.30. Any solution? By the way, in another environment with the same ffmpeg and Ubuntu versions this does not happen; the difference is that the Jibri version in that environment is 7.2.71-1.

@kpeiruza

kpeiruza commented Jun 11, 2020

We've found the same issue on a local Kubernetes cluster.

It's an Ubuntu 18.04-based cluster, with Jibri compiled 6 weeks ago from the testing release.

Jibri was recording directly into a NFS folder.

In our case, the kernel wasn't flushing cache memory quickly enough, so the OOM killer got triggered. We fixed that by moving the recordings folder to a local folder on the node and then moving each recording to NFS afterwards.

This way the kernel behaves properly and never fills the machine, even though RAM consumption is still huge (cache, not RSS).

What's weird is that the exact same Jibri deployment works fine on other Kubernetes clusters, so maybe it's related to something else (base kernel, CPU power...).

Still investigating.

In any case, you can try increasing the cache pressure in your kernels to avoid filling up your memory: vm.vfs_cache_pressure=150 or 200.
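For reference, that setting can be applied like this (the drop-in file name below is my own arbitrary choice):

```shell
# Apply immediately (runtime only):
sudo sysctl -w vm.vfs_cache_pressure=150

# Persist across reboots via a sysctl drop-in, then reload all settings:
echo 'vm.vfs_cache_pressure=150' | sudo tee /etc/sysctl.d/90-cache-pressure.conf
sudo sysctl --system
```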

@agustinramirez

@kpeiruza I increased the cache pressure in my Ubuntu kernels and it didn't work.
This problem only occurs when using a virtual machine locally on our server; we have tried with EC2 instances and it does not happen there. With the same versions of Jibri, ffmpeg and Ubuntu Server, Jibri crashes on our local virtual machines but works fine on EC2 instances in AWS.

@pierreozoux

Thanks a lot @sblotus, you helped me nail it down.

So I just finished my tests.

Basically, last time it worked because I fiddled with the ffmpeg scripts, thanks to @emrahcom, and I put in something like this:

...
ARGS=`echo $ARGS | \
      sed "s#2976k#1900k#g"`
...

And with the previous version, it was also sending 720p instead of 1080p.
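That substitution can be packaged into the same wrapper-script pattern used earlier in this thread. The 1900k figure comes from the snippet above; scaling bufsize to 3800k to keep the 2x maxrate ratio is my own assumption:

```shell
# Rewrite Jibri's ffmpeg arguments to a lower target bitrate so a slower
# CPU can hold speed=1x. The matching bufsize scaling is an assumption.
rewrite_args() {
  echo "$1" | sed -e 's#2976k#1900k#g' -e 's#5952k#3800k#g'
}

# In a /usr/bin/ffmpeg wrapper, the last line would then be:
#   exec /usr/bin/ffmpeg-original $(rewrite_args "$*")
echo "$(rewrite_args '-maxrate 2976k -bufsize 5952k')"
```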

But basically the "memory leak" comes from our ffmpeg not being fast enough. So the only thing you have to watch is:

tail -f  ./config/jibri/logs/ffmpeg.0.txt

If it looks like this:

2021-05-27 15:49:42.844 INFO: [58] ffmpeg.log() frame=  867 fps= 30 q=23.0 size=    1732kB time=00:00:28.86 bitrate= 491.4kbits/s speed=1.01x
2021-05-27 15:49:43.844 INFO: [58] ffmpeg.log() frame=  884 fps= 30 q=24.0 size=    1911kB time=00:00:29.44 bitrate= 531.6kbits/s speed=1.01x
2021-05-27 15:49:43.845 INFO: [58] ffmpeg.log() frame=  900 fps= 30 q=24.0 size=    2113kB time=00:00:29.97 bitrate= 577.4kbits/s speed=1.01x
2021-05-27 15:49:44.845 INFO: [58] ffmpeg.log() frame=  913 fps= 30 q=21.0 size=    2252kB time=00:00:30.40 bitrate= 606.9kbits/s speed=   1x
2021-05-27 15:49:44.845 INFO: [58] ffmpeg.log() frame=  927 fps= 30 q=24.0 size=    2380kB time=00:00:30.86 bitrate= 631.7kbits/s speed=   1x
2021-05-27 15:49:45.846 INFO: [58] ffmpeg.log() frame=  943 fps= 30 q=23.0 size=    2556kB time=00:00:31.40 bitrate= 666.9kbits/s speed=   1x
2021-05-27 15:49:45.846 INFO: [58] ffmpeg.log() frame=  957 fps= 30 q=25.0 size=    2741kB time=00:00:31.86 bitrate= 704.6kbits/s speed=   1x
2021-05-27 15:49:46.846 INFO: [58] ffmpeg.log() frame=  972 fps= 30 q=25.0 size=    2912kB time=00:00:32.36 bitrate= 737.1kbits/s speed=   1x
2021-05-27 15:49:46.847 INFO: [58] ffmpeg.log() frame=  986 fps= 30 q=21.0 size=    3066kB time=00:00:32.83 bitrate= 765.0kbits/s speed=0.998x
2021-05-27 15:49:47.847 INFO: [58] ffmpeg.log() frame=  999 fps= 30 q=25.0 size=    3209kB time=00:00:33.27 bitrate= 790.1kbits/s speed=0.996x
2021-05-27 15:49:47.848 INFO: [58] ffmpeg.log() frame= 1014 fps= 30 q=24.0 size=    3437kB time=00:00:33.76 bitrate= 833.7kbits/s speed=0.995x
2021-05-27 15:49:48.848 INFO: [58] ffmpeg.log() frame= 1028 fps= 30 q=25.0 size=    3651kB time=00:00:34.23 bitrate= 873.7kbits/s speed=0.993x
2021-05-27 15:49:48.849 INFO: [58] ffmpeg.log() frame= 1041 fps= 30 q=25.0 size=    3824kB time=00:00:34.66 bitrate= 903.5kbits/s speed=0.991x
2021-05-27 15:49:49.849 INFO: [58] ffmpeg.log() frame= 1055 fps= 30 q=25.0 size=    4001kB time=00:00:35.13 bitrate= 932.9kbits/s speed=0.989x
2021-05-27 15:49:49.849 INFO: [58] ffmpeg.log() frame= 1069 fps= 30 q=25.0 size=    4168kB time=00:00:35.60 bitrate= 959.0kbits/s speed=0.988x

It means ffmpeg is not fast enough, and the queue is accumulating in memory. Then it crashes.
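A minimal sketch for watching that log automatically (the log path follows the comment above; the function itself is my own):

```shell
# Print a warning for any ffmpeg progress line whose trailing speed= value
# is below realtime, i.e. the condition that precedes the memory blow-up.
check_speed() {
  local s
  s="$(printf '%s\n' "$1" | sed -n 's/.*speed=[[:space:]]*\([0-9.]*\)x.*/\1/p')"
  [ -n "$s" ] && awk -v s="$s" 'BEGIN { exit !(s < 1.0) }' && echo "SLOW at ${s}x"
}

# Typical use:
#   tail -f ./config/jibri/logs/ffmpeg.0.txt | while read -r line; do
#     check_speed "$line"
#   done
```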

So either you fiddle with the ffmpeg args (resolution, preset...) or you need better hardware. I also checked: with a Hetzner VM without a dedicated CPU it doesn't work, whereas with a dedicated modern CPU it does.

I think we can close this issue, as we just need a faster CPU to encode in real time, or lower quality so that ffmpeg can keep up.

Thanks again @sblotus and @emrahcom for your help :)

@tranrn

tranrn commented May 27, 2021

Not sure. My last crash log:

2021-05-27 13:32:29.683 INFO: [74] ffmpeg.log() [alsa @ 0x55f9b57a94c0] ALSA buffer xrun.
2021-05-27 13:32:29.733 INFO: [74] ffmpeg.log() frame= 1935 fps=8.2 q=26.0 size= 16896kB time=00:01:04.46 bitrate=2147.0kbits/s speed=0.272x
2021-05-27 13:32:29.734 INFO: [74] ffmpeg.log() frame= 1935 fps=8.1 q=26.0 size= 16896kB time=00:01:04.46 bitrate=2147.0kbits/s speed=0.271x
EOF

@fragglerock015

Try this ffmpeg wrapper trick to override the default thread queue size without changing the original code; it works on any version of Jibri.

ubuntu@jibri-xenial:~$ mv -v /usr/bin/ffmpeg /usr/bin/ffmpeg-original
ubuntu@jibri-xenial:~$ vim /usr/bin/ffmpeg

Then add this script as /usr/bin/ffmpeg:

#!/bin/bash
# Log the rewritten arguments and halve the thread queue size before handing
# off to the real binary; $ARGS is left unquoted so it re-splits into words.
ARGS="$*"
ARGS="$(echo "$ARGS" | sed -e 's/-thread_queue_size 4096/-thread_queue_size 2048/g')"
echo -n "$ARGS" >> /tmp/ffmpeg.log
exec /usr/bin/ffmpeg-original $ARGS

Now save /usr/bin/ffmpeg and make it executable:

ubuntu@jibri-xenial:~$ chmod +x /usr/bin/ffmpeg

Note: make sure nobody is using the Jibri instance or ffmpeg when applying this change; there's no need to restart the jibri service.

This does not work for me - it lasts 3 minutes and 38 seconds (ran it three times) and consistently fails.

2021-05-27 18:34:14.556 INFO: [59] ffmpeg.log() ffmpeg version 4.3.2-0york0~18.04 Copyright (c) 2000-2021 the FFmpeg developers
2021-05-27 18:34:14.557 INFO: [59] ffmpeg.log() built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
2021-05-27 18:34:14.671 INFO: [59] ffmpeg.log() configuration: --prefix=/usr --extra-version='0york0~18.04' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libzimg --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
2021-05-27 18:34:14.671 INFO: [59] ffmpeg.log() libavutil 56. 51.100 / 56. 51.100
2021-05-27 18:34:14.672 INFO: [59] ffmpeg.log() libavcodec 58. 91.100 / 58. 91.100
2021-05-27 18:34:14.672 INFO: [59] ffmpeg.log() libavformat 58. 45.100 / 58. 45.100
2021-05-27 18:34:14.672 INFO: [59] ffmpeg.log() libavdevice 58. 10.100 / 58. 10.100
2021-05-27 18:34:14.673 INFO: [59] ffmpeg.log() libavfilter 7. 85.100 / 7. 85.100
2021-05-27 18:34:14.673 INFO: [59] ffmpeg.log() libavresample 4. 0. 0 / 4. 0. 0
2021-05-27 18:34:14.673 INFO: [59] ffmpeg.log() libswscale 5. 7.100 / 5. 7.100
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() libswresample 3. 7.100 / 3. 7.100
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() libpostproc 55. 7.100 / 55. 7.100
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() [x11grab @ 0x55b49c46e980] Stream #0: not enough frames to estimate rate; consider increasing probesize
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() Input #0, x11grab, from ':0.0+0,0':
2021-05-27 18:34:14.675 INFO: [59] ffmpeg.log() Duration: N/A, start: 1622136854.649970, bitrate: 1990656 kb/s
2021-05-27 18:34:15.042 INFO: [59] ffmpeg.log() Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 1990656 kb/s, 30 fps, 1000k tbr, 1000k tbn, 1000k tbc
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Guessed Channel Layout for Input Stream #1.0 : stereo
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Input #1, alsa, from 'plug:bsnoop':
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Duration: N/A, start: 1622136854.291646, bitrate: 1536 kb/s
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
2021-05-27 18:34:15.044 INFO: [59] ffmpeg.log() Stream mapping:
2021-05-27 18:34:15.044 INFO: [59] ffmpeg.log() Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
2021-05-27 18:34:15.044 INFO: [59] ffmpeg.log() Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
2021-05-27 18:34:15.045 INFO: [59] ffmpeg.log() Press [q] to stop, [?] for help
2021-05-27 18:34:15.045 INFO: [59] ffmpeg.log() [libx264 @ 0x55b49c4a3ec0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
2021-05-27 18:34:15.045 INFO: [59] ffmpeg.log() [libx264 @ 0x55b49c4a3ec0] profile High, level 4.0
2021-05-27 18:34:16.046 INFO: [59] ffmpeg.log() [libx264 @ 0x55b49c4a3ec0] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=4 sliced_threads=1 slices=4 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=0 rc=crf mbtree=0 crf=25.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=2976 vbv_bufsize=5952 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
2021-05-27 18:34:16.046 INFO: [59] ffmpeg.log() Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/gcqg-drrc-mkc1-wh01-29yd':
2021-05-27 18:34:16.047 INFO: [59] ffmpeg.log() Metadata:
2021-05-27 18:34:16.047 INFO: [59] ffmpeg.log() encoder : Lavf58.45.100
2021-05-27 18:34:16.047 INFO: [59] ffmpeg.log() Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p(progressive), 1920x1080, q=-1--1, 30 fps, 1k tbn, 30 tbc
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() Metadata:
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() encoder : Lavc58.91.100 libx264
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() Side data:
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() cpb: bitrate max/min/avg: 2976000/0/0 buffer size: 5952000 vbv_delay: N/A
2021-05-27 18:34:16.049 INFO: [59] ffmpeg.log() Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 128 kb/s
2021-05-27 18:34:16.049 INFO: [59] ffmpeg.log() Metadata:
2021-05-27 18:34:16.049 INFO: [59] ffmpeg.log() encoder : Lavc58.91.100 aac
2021-05-27 18:34:16.050 INFO: [59] ffmpeg.log() frame= 11 fps=0.0 q=21.0 size= 7kB time=00:00:00.34 bitrate= 165.4kbits/s speed=0.658x
2021-05-27 18:34:16.050 INFO: [59] ffmpeg.log() frame= 23 fps= 22 q=21.0 size= 9kB time=00:00:00.74 bitrate= 103.7kbits/s speed=0.699x
2021-05-27 18:34:17.051 INFO: [59] ffmpeg.log() frame= 36 fps= 23 q=21.0 size= 12kB time=00:00:01.16 bitrate= 83.8kbits/s speed=0.732x
2021-05-27 18:34:18.052 INFO: [59] ffmpeg.log() frame= 51 fps= 24 q=21.0 size= 15kB time=00:00:01.67 bitrate= 73.0kbits/s speed=0.794x
2021-05-27 18:34:18.052 INFO: [59] ffmpeg.log() frame= 68 fps= 26 q=21.0 size= 23kB time=00:00:02.23 bitrate= 83.8kbits/s speed=0.85x
2021-05-27 18:34:19.053 INFO: [59] ffmpeg.log() frame= 82 fps= 26 q=21.0 size= 25kB time=00:00:02.70 bitrate= 76.9kbits/s speed=0.858x
2021-05-27 18:34:19.054 INFO: [59] ffmpeg.log() frame= 96 fps= 26 q=21.0 size= 28kB time=00:00:03.16 bitrate= 72.1kbits/s speed=0.859x
2021-05-27 18:34:20.057 INFO: [59] ffmpeg.log() frame= 113 fps= 27 q=21.0 size= 31kB time=00:00:03.73 bitrate= 67.8kbits/s speed=0.889x
2021-05-27 18:34:20.058 INFO: [59] ffmpeg.log() frame= 128 fps= 27 q=21.0 size= 38kB time=00:00:04.24 bitrate= 74.0kbits/s speed=0.903x
2021-05-27 18:34:21.058 INFO: [59] ffmpeg.log() frame= 143 fps= 27 q=21.0 size= 41kB time=00:00:04.73 bitrate= 71.0kbits/s speed=0.91x
2021-05-27 18:34:21.059 INFO: [59] ffmpeg.log() frame= 157 fps= 28 q=21.0 size= 44kB time=00:00:05.20 bitrate= 68.7kbits/s speed=0.911x
2021-05-27 18:34:22.059 INFO: [59] ffmpeg.log() frame= 177 fps= 28 q=21.0 size= 47kB time=00:00:05.87 bitrate= 65.8kbits/s speed=0.941x
2021-05-27 18:34:22.059 INFO: [59] ffmpeg.log() frame= 191 fps= 28 q=21.0 size= 54kB time=00:00:06.33 bitrate= 70.4kbits/s speed=0.934x
2021-05-27 18:34:23.060 INFO: [59] ffmpeg.log() frame= 207 fps= 28 q=21.0 size= 57kB time=00:00:06.87 bitrate= 68.3kbits/s speed=0.942x
2021-05-27 18:34:23.060 INFO: [59] ffmpeg.log() frame= 222 fps= 28 q=26.0 size= 164kB time=00:00:07.36 bitrate= 182.7kbits/s speed=0.939x
2021-05-27 18:34:24.060 INFO: [59] ffmpeg.log() frame= 230 fps= 27 q=21.0 size= 302kB time=00:00:07.64 bitrate= 324.1kbits/s speed=0.912x
2021-05-27 18:34:24.061 INFO: [59] ffmpeg.log() frame= 240 fps= 27 q=23.0 size= 465kB time=00:00:07.96 bitrate= 478.3kbits/s speed=0.894x
2021-05-27 18:34:25.061 INFO: [59] ffmpeg.log() frame= 250 fps= 26 q=23.0 size= 613kB time=00:00:08.31 bitrate= 604.1kbits/s speed=0.879x
2021-05-27 18:34:25.062 INFO: [59] ffmpeg.log() frame= 260 fps= 26 q=24.0 size= 749kB time=00:00:08.63 bitrate= 710.5kbits/s speed=0.867x
2021-05-27 18:34:26.066 INFO: [59] ffmpeg.log() frame= 271 fps= 26 q=23.0 size= 901kB time=00:00:09.01 bitrate= 818.8kbits/s speed=0.858x
2021-05-27 18:34:26.066 INFO: [59] ffmpeg.log() frame= 282 fps= 26 q=21.0 size= 1070kB time=00:00:09.36 bitrate= 936.0kbits/s speed=0.851x
2021-05-27 18:34:27.067 INFO: [59] ffmpeg.log() frame= 291 fps= 25 q=26.0 size= 1206kB time=00:00:09.66 bitrate=1021.7kbits/s speed=0.839x

I have installed Ubuntu 18.04 with the latest Jibri, ffmpeg, Chrome 90 etc. on an old 4th-gen Intel i3 with 4 GB RAM and it just works, 24x7. Sample log lines below: 4 CPUs at 50 to 60% constantly, less than 1 GB RAM used.

2021-05-27 18:41:35.161 INFO: [55] ffmpeg.log() frame=732090 fps= 30 q=24.0 size= 7414826kB time=06:46:42.98 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:35.161 INFO: [55] ffmpeg.log() frame=732106 fps= 30 q=21.0 size= 7414907kB time=06:46:43.50 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:36.161 INFO: [55] ffmpeg.log() frame=732121 fps= 30 q=20.0 size= 7415088kB time=06:46:44.01 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:36.161 INFO: [55] ffmpeg.log() frame=732136 fps= 30 q=24.0 size= 7415224kB time=06:46:44.52 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:37.161 INFO: [55] ffmpeg.log() frame=732151 fps= 30 q=25.0 size= 7415569kB time=06:46:45.00 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:37.161 INFO: [55] ffmpeg.log() frame=732166 fps= 30 q=24.0 size= 7415762kB time=06:46:45.54 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:38.162 INFO: [55] ffmpeg.log() frame=732182 fps= 30 q=23.0 size= 7415890kB time=06:46:46.03 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:38.162 INFO: [55] ffmpeg.log() frame=732197 fps= 30 q=23.0 size= 7416013kB time=06:46:46.54 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:39.162 INFO: [55] ffmpeg.log() frame=732212 fps= 30 q=23.0 size= 7416253kB time=06:46:47.07 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:39.162 INFO: [55] ffmpeg.log() frame=732228 fps= 30 q=22.0 size= 7416332kB time=06:46:47.56 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:40.162 INFO: [55] ffmpeg.log() frame=732243 fps= 30 q=23.0 size= 7416429kB time=06:46:48.07 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:40.162 INFO: [55] ffmpeg.log() frame=732258 fps= 30 q=19.0 size= 7416519kB time=06:46:48.56 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:41.162 INFO: [55] ffmpeg.log() frame=732274 fps= 30 q=21.0 size= 7416780kB time=06:46:49.10 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:41.162 INFO: [55] ffmpeg.log() frame=732288 fps= 30 q=25.0 size= 7417066kB time=06:46:49.60 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:42.163 INFO: [55] ffmpeg.log() frame=732304 fps= 30 q=23.0 size= 7417328kB time=06:46:50.10 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:42.163 INFO: [55] ffmpeg.log() frame=732319 fps= 30 q=23.0 size= 7417444kB time=06:46:50.60 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:43.163 INFO: [55] ffmpeg.log() frame=732335 fps= 30 q=23.0 size= 7417555kB time=06:46:51.13 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:43.163 INFO: [55] ffmpeg.log() frame=732350 fps= 30 q=24.0 size= 7417786kB time=06:46:51.67 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:44.163 INFO: [55] ffmpeg.log() frame=732366 fps= 30 q=21.0 size= 7417865kB time=06:46:52.16 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:44.163 INFO: [55] ffmpeg.log() frame=732381 fps= 30 q=23.0 size= 7417936kB time=06:46:52.67 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:45.163 INFO: [55] ffmpeg.log() frame=732396 fps= 30 q=22.0 size= 7418057kB time=06:46:53.18 bitrate=2489.2kbits/s speed= 1x

@pierreozoux

In both cases your ffmpeg speed is < 1x, so frames are queuing up in memory. It looks like a leak, but it is actually expected behaviour: you need to tune your VM and ffmpeg settings until the speed stays at 1x :)
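One way to verify this on your own deployment (a sketch against the log format shown above; point the pipeline at your actual jibri log instead of the sample lines) is to pull the trailing speed= value out of each progress line:

```shell
# Sketch: extract the encoder speed from ffmpeg progress lines as jibri logs
# them. Replace the sample printf with e.g.:
#   grep 'ffmpeg.log' /var/log/jitsi/jibri/log.0.txt | ...
printf '%s\n' \
  'frame=  291 fps= 25 q=26.0 size=    1206kB time=00:00:09.66 bitrate=1021.7kbits/s speed=0.839x' \
  'frame=732396 fps= 30 q=22.0 size= 7418057kB time=06:46:53.18 bitrate=2489.2kbits/s speed=   1x' |
grep -o 'speed= *[0-9.]*x' | tr -d ' x' | sed 's/speed=//'
```

If the printed values sit persistently below 1, the encoder is falling behind real time and the backlog grows without bound.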

@fragglerock015

@pierreozoux any particular settings for the VM? We have tried many; some work for an hour or two, but none so far provide 24x7 reliability.

@pierreozoux

@pierreozoux in Hetzner, using a VM with a dedicated CPU worked, whereas a non-dedicated one didn't.
(I suspect Hetzner throttles, and ffmpeg is probably hammering the CPU; unfortunately you can't monitor your credits there as you can on AWS, so it is hard to say.)
But then, play with the bitrate and resolution. I'm sure any VM can stream at 1x, but yeah, it depends on the size of the pipe it has to ingest :).

@VengefulAncient

On GCP/GKE, we've had much better luck with AMD Epyc machines (N2D) than standard ones (N1 - Intel up to Skylake). We haven't done extensive testing, but with 2 cores and 4 GB RAM, N2D nodes could run Jibri for over half an hour, while N1 nodes with the same or even better specs overloaded and crashed within minutes. If your CPU is not fast enough, frames start buffering in RAM - it's as simple as that, as far as I understood it.

Shared vCPUs are a no-go for any serious workload on any provider, this should be obvious. Their performance is extremely inconsistent.
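As a back-of-envelope check on the buffering explanation above (my own arithmetic, assuming raw yuv420p frames queue up while the encoder lags; the real buffering path differs, but the order of magnitude matches what people report here):

```shell
# Rough sketch: how fast memory fills when capture outpaces encoding.
# Assumes uncompressed yuv420p frames (1.5 bytes/pixel) pile up in the
# input queue while the encoder runs at 0.85x real time.
awk 'BEGIN {
  w = 1280; h = 720; fps = 30; speed = 0.85   # 720p capture, encoder at 0.85x
  frame  = w * h * 1.5                        # bytes per yuv420p frame
  growth = fps * (1 - speed) * frame          # backlog in bytes per second
  printf "backlog %.1f MiB/s; 8 GiB gone in ~%.0f min\n", growth / 1024^2, 8 * 1024^3 / growth / 60
}'
```

At 1080p, or with a slower encoder, the same arithmetic fills RAM several times faster, which lines up with the "dies within minutes" reports.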

@pierreozoux

@saghul I think we can close this discussion and continue in the forum if needed.

@fragglerock015

fragglerock015 commented Jun 1, 2021

@pierreozoux @saghul further testing reveals -

  1. Bare-metal i3 or i5 4th gen - Ubuntu 18.04.5 + updates + latest ffmpeg, Jibri & Chrome 91 - stable as a rock for 36 hours of streaming or recording (1.2-1.4 GB RAM in use). 20 users in the conference.
  2. ESXi 6.7 VM - same as the bare-metal build but 4 vCPU / 8 GB RAM - dies in under 5 minutes.
  3. Windows 10 Hyper-V - same as the bare-metal build but 4 vCPU / 8 GB RAM - lasts about 4 hours (had 3 Jibri VMs on the same machine streaming simultaneously).
  4. Windows 2019 Hyper-V "headless" - same as the bare-metal build but 4 vCPU / 8 GB RAM - lasts about 5-10 minutes.
  5. FreeNAS/TrueNAS - same as the bare-metal build but 4 vCPU / 8 GB RAM - lasts about 4 hours.
  6. AWS - same as the bare-metal build but 4 vCPU / 8 GB RAM - lasts about 40 minutes.

All tests repeated / repeatable.

So other than bare-metal i3 4th Gen this is quite an issue.

@tranrn

tranrn commented Jun 1, 2021

Same.

  1. KVM, E5-2670, CentOS 8.3. VM: Ubuntu 20.04 LTS with Jibri installed. 16 vCPU and 16 GB RAM. Died.
  2. KVM, CentOS. VM: Fedora 34, Podman and Docker containers. 8 vCPU, 16 GB RAM. Died.
  3. Hyper-V 2016, E5-2643. VM: Fedora 34 with 100% CPU reservation, Podman and Docker containers. 8 vCPU, 16 GB RAM. Died.
  4. Bare metal: old E3-1225 v3 @ 3.20GHz with 8 GB RAM. Host Fedora 34 with a Podman container works fine. Eats less than 1 GB RAM.

How can we use Jibri with a VM?

@fragglerock015

You certainly cannot follow the principle of firing up another Jibri VM when the others are busy - unless it's a Wake-on-LAN command to fire up another old i3 desktop pulled in for use... Bare metal is the only 100% stable option.

@tranrn

tranrn commented Jun 1, 2021

> You certainly cannot follow the principle of firing up another Jibri VM when the others are busy - unless it's a Wake-on-LAN command to fire up another old i3 desktop pulled in for use... Bare metal is the only 100% stable option.

No, it was a simple test with 1 conference and 1 VM.
A new Jitsi instance just for me; I'm the only participant. The VMs are on hypervisors whose 24-48 cores are all idle.

@fragglerock015

> You certainly cannot follow the principle of firing up another Jibri VM when the others are busy - unless it's a Wake-on-LAN command to fire up another old i3 desktop pulled in for use... Bare metal is the only 100% stable option.

> No, it was a simple test with 1 conference and 1 VM.
> A new Jitsi instance just for me; I'm the only participant. The VMs are on hypervisors whose 24-48 cores are all idle.

Sorry, I was not referring to you directly but metaphorically: "no one" can follow that principle as it's just too unreliable.

@pierreozoux

Do you know that the default resolution changed?
https://community.jitsi.org/t/jibri-resolution-now-defaults-to-1080p/95478

So if your box can't stream 1080p, try 720p. I just did the test with a box that can't stream 1080p - it can do 720p :)
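For anyone looking for where to set that: on recent Jibri versions the capture settings live in /etc/jitsi/jibri/jibri.conf. A minimal sketch forcing 720p (key names taken from Jibri's reference config; verify against your installed version):

```hocon
jibri {
  ffmpeg {
    // Capture at 720p instead of the newer 1080p default
    resolution = "1280x720"
    framerate = 30
  }
}
```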

@fragglerock015

> Do you know that the default resolution changed?
> https://community.jitsi.org/t/jibri-resolution-now-defaults-to-1080p/95478
>
> So if your box can't stream 1080p, try 720p. I just did the test with a box that can't stream 1080p - it can do 720p :)

Yeah, fully aware. It's not about "a box", it's about VMs in general not being reliable - full stop - on ESXi/AWS/Xen/Hyper-V etc.

My base piece of bare metal for testing is a "lowly" Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz with 4 GB RAM. It can handle Jibri all day long, 24x7 - the longest stream is 48 hours (max 1.4 GB RAM used), including streaming to an nginx RTMP server running on the same box. Full resolution, 1920x1080.

So if you have a dual-CPU Xeon (32 cores, 192 GB RAM), you'd expect at a minimum 3 or 4 Jibris to run quite happily (in theory a lot more). The "recommended" spec appears to be 4 cores / 8 GB RAM per VM on hypervisors. What @trashographer and I are saying is that even a single Jibri under pretty much zero load dies very quickly. Even if you bump to 8 cores and guarantee 50% CPU or more, it still dies fairly quickly.

If bare metal works on such lowly specifications, why should we "mess around" with lowering the resolution? I have done that and still get failures within an hour.

@DonxDaniyar

In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

@tranrn

tranrn commented Jun 2, 2021

> In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

wow, thanks
will try tomorrow

@carellano

@DonxDaniyar

> In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

Could you tell me what tag to use for Jibri in Docker?

@McL1V3

McL1V3 commented Oct 8, 2021

@DonxDaniyar

> In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

Does it still work for you? Could you share your Docker configuration in a repository, please? It would be helpful. Thanks.

@DonxDaniyar

@DonxDaniyar

> In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

> Does it still work for you? Could you share your Docker configuration in a repository, please? It would be helpful. Thanks.

I use this image
jitsi/jibri:stable-5076

@DonxDaniyar

@DonxDaniyar

> In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

> Could you tell me what tag to use for Jibri in Docker?

jitsi/jibri:stable-5076

@saghul

saghul commented Oct 25, 2021

The latest version now defaults to 720p; I'd encourage you to try that. Soon enough, the Chrome version on that old image will be too old to run.

@sergeByishimo

@DonxDaniyar

> In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image, from about 8 months ago.

> Could you tell me what tag to use for Jibri in Docker?

> jitsi/jibri:stable-5076

@McL1V3 @carellano @DonxDaniyar

Can you kindly confirm whether this worked for you? Thanks.

@saghul

saghul commented Jul 14, 2022

That release is almost a year old; I'd suggest you test with the latest image.

@McL1V3

McL1V3 commented Jul 14, 2022

I have a customized Jibri Docker image with Chrome 101, recording at 720p. The issue rarely happens to me, and since I have 5 Jibris working, after a recent update, if one crashes during a live recording that Jibri ends and another takes over, and so on. This way I have practically no video loss, and it has been working without problems for several months.

@bjkforlife

1080x720 working
