Conversation
I will be able to do the rest (not like I did much) of the review on Monday, but you don't have to wait if other reviews are finished earlier.
```sh
# The netem command will return an error when a container is stopped before the packet loss duration
# is up. This means we either need to kill it (and know when to do that), or ignore the error:
set +e

echo "Applying packet loss to $APPLY_LOSS_TO for $LOSS_DURATION seconds"
pumba netem \
```
Not necessarily connected to this PR, but it would be nice to check if `pumba` is available in PATH when starting the test (and kill the test if `pumba` fails). I've run it without it installed and it simply failed the tests with a single log line about that, buried somewhere amongst the `docker-compose` logs.
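A minimal sketch of such a pre-flight check, assuming it runs on the Elixir side before the suite starts; the module and function names below are hypothetical, not taken from this repository:

```elixir
# Hypothetical pre-flight check: fail loudly if pumba is not installed,
# instead of letting the test fail later with a single buried log line.
defmodule TestVideoroom.Integration.Preflight do
  def ensure_pumba! do
    case System.find_executable("pumba") do
      nil ->
        raise "pumba not found in PATH; install it before running the packet loss test"

      path ->
        path
    end
  end
end
```

Calling something like `Preflight.ensure_pumba!()` from a `setup_all` block would stop the run with a clear error instead of a failure buried in the compose output.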
test "packet loss on one browser" do | ||
TestVideoroom.Integration.ResultReceiver.start_link(browser_count: 3, parent: self()) | ||
|
I still think it would be nicer to have one Elixir node outside of the containers, connected to them via distributed Erlang. You could move the logic responsible for testing out of the `mediaserver` docker, get rid of the `run_packet_loss_test.sh` script and the write-to-file hack used to enable packet loss, and do everything from Elixir; the test logs also wouldn't be mixed with the docker containers' logs. It seems like it was decided to take care of this in a later PR, but idk if this is that later PR.
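A rough sketch of that setup, assuming the containers run named nodes sharing a cookie; the node names and cookie below are placeholders, not values from this repository:

```elixir
# Hypothetical test-side helper: connect the local (test-runner) node to the
# Elixir nodes running inside the docker containers via distributed Erlang.
defmodule TestVideoroom.Integration.Cluster do
  @cookie :test_videoroom
  @container_nodes [:"mediaserver@mediaserver", :"browsers@browsers"]

  def connect_all do
    # The test runner itself must already be a named node, e.g. started with
    # `elixir --sname runner --cookie test_videoroom -S mix test`.
    Node.set_cookie(@cookie)

    Enum.each(@container_nodes, fn node ->
      true = Node.connect(node)
    end)
  end
end
```

With that in place, the test process could call into the containers with `:erpc.call/4` and drive `pumba` directly via `System.cmd/3`, so the write-to-file signal would no longer be needed.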
I completely agree, but that was outside the scope of this PR, and will have to be done in the future ;)
That should be the next thing after this PR
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #292      +/-   ##
==========================================
+ Coverage   62.91%   63.05%   +0.14%
==========================================
  Files          44       44
  Lines        2130     2130
==========================================
+ Hits         1340     1343       +3
+ Misses        790      787       -3
```

See 4 files with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
integration_test/test_videoroom/test/integration/containerised_test.exs (resolved comment on outdated code)
```js
if (ctx.track.kind === "video") {
  this.peerIdToVideoTrack[ctx.endpoint.id] = ctx.track;
}
```
Nitpick, but I feel like this could be in `peers`.
integration_test/test_videoroom/test/integration/simulcast_test.exs (resolved comment on outdated code)
integration_test/test_videoroom/test/integration/containerised_test.exs (resolved comment on outdated code)
test "packet loss on one browser" do | ||
TestVideoroom.Integration.ResultReceiver.start_link(browser_count: 3, parent: self()) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
That should be the next thing after this PR
This PR makes the test implemented in #280 actually compare the stats, as opposed to passing at all times ;)