(WIP) Initial implementation of the new videoReader API #2683
Conversation
test/test_video.py
Outdated
    s = min(r)
    e = max(r)

    reader = torch.classes.torchvision.Video(full_path, "video")
For a follow-up PR: we should expose Video in torchvision, so that you can access it via torchvision.io.Video or something like that.
adding that to #2660 feature tracker
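As a sketch of what that follow-up could look like: today the reader is only reachable through the registered TorchScript class, and a thin re-export would let users reach it from torchvision.io. The method name and signature below (next() returning a (frame, pts) pair, per the std::tuple<torch::Tensor, double> return type mentioned in the commit log) are assumptions about this experimental API, not its final form.

```python
import torch
import torchvision  # importing torchvision loads the extension that registers the class

full_path = "path/to/video.mp4"  # illustrative path

# Current usage: instantiate the registered TorchScript class directly.
reader = torch.classes.torchvision.Video(full_path, "video")

# Assumed behaviour: next() yields a (frame, pts) pair, with the frame as an
# RGB tensor in C x H x W layout (see the commit log in this PR).
frame, pts = reader.next()

# Possible future spelling once the class is re-exported (tracked in #2660):
# reader = torchvision.io.Video(full_path, "video")
```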
test/test_video.py
Outdated
self.assertEqual(tv_result.size(), new_api.size()) | ||
|
||
def test_partial_video_reading_fn(self): | ||
torchvision.set_video_backend("video_reader") |
We might need to comment this out for now. Many of the test issues we had before were due to switching globally to the video_reader backend during the tests.
Sure - pushed the changes
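One way to avoid flipping the backend globally for the whole test session is to scope the switch. The helper below is a hypothetical sketch (not part of this PR), built on torchvision.get_video_backend / torchvision.set_video_backend.

```python
from contextlib import contextmanager

import torchvision


@contextmanager
def video_backend(name):
    # Hypothetical helper: temporarily switch the video backend and restore
    # the previous one afterwards, so other tests are not affected.
    previous = torchvision.get_video_backend()
    torchvision.set_video_backend(name)
    try:
        yield
    finally:
        torchvision.set_video_backend(previous)


# Example use inside a test body:
# with video_backend("video_reader"):
#     ...  # code that needs the video_reader backend
```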
@fmassa it seems to be segfaulting on Travis (see the output of the raw log), which makes sense given that Travis is installing av from conda with no ffmpeg version check.
We have disabled all IO tests in Travis; TravisCI now only compiles those blocks, so I would say you can just skip it for now.
Line 58 in 5320f74
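Since Travis installs av from conda without pinning a compatible ffmpeg, a common pattern is to skip the PyAV-dependent tests when the import is unavailable. A minimal sketch (the test class and method names are hypothetical):

```python
import unittest

try:
    import av  # PyAV; may be missing or built against an incompatible ffmpeg
except ImportError:
    av = None


@unittest.skipIf(av is None, "PyAV is not available")
class TestVideoApi(unittest.TestCase):  # hypothetical test class name
    def test_read_video_tensor(self):
        ...
```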
Codecov Report

@@            Coverage Diff             @@
##           master    #2683      +/-   ##
==========================================
+ Coverage   72.42%   73.11%   +0.68%
==========================================
  Files          96       96
  Lines        8313     8332      +19
  Branches     1293     1299       +6
==========================================
+ Hits         6021     6092      +71
+ Misses       1903     1848      -55
- Partials      389      392       +3

Continue to review the full report at Codecov.
Merging this to move forward; there are a few follow-up cleanups that can be done, but let's do them in a different PR.
* adding base files
* setup modification to actually build the thing
* video api constructor registration
* FAIL metadata
* FAIL update for QS
* revert
* debugging with Victor
* metadata registration works
* API build next
* test
* Merge change
* formatting parameters to avoid the segfault
* next now works on a video
* make size of the output tensor format dependent
* Make next work on audio stream only as well
* refactoring the _setCurrentStream param
* Fixing the last frame return and sensor
* todo docs
* Formatting
* cleanup and comments
* introducing new tests for the API
* cleanup
* Comment out unnecesary format (will add following FFMPEG fix)
* Reformat parsing function
* removing the seek bug `get_decoder_params`
* Removing unnecessary code/variables
* enforce RGB24 as a reading format (will crash before ffmpeg fix)
* permute the dimensions to return (RGB x H x W)
* Changing the return type to std::tuple<torch::Tensor, double> as opposed to tensor list
* Adjusting tests for the new return type
* remove unnecessary jitter
* clangangangang
* Metadata return changes (pytorch#1)
* remove implicit calls to set a current stream (pytorch#2)
* Adding new tests to check the accuracy of the seek
* cleanup debugging statements
* Addressing PR comments
* addressing Francisco's comments
* CLANG build formatting
* Updated testing to test against pyav for the video tensor reads
* Formatting
* remove pyav from pip deps and add it to conda build
* add pyav and ffmeped to conda builds
* Formatting?
* Setting up linter once and for all hopefully
* Testing pyav
* Fix to 8.0.0
* Try 6.2.0
* See what happens with av from pip
* Remove FFMPEG blocker
* What is going on?
* More tests
* Forgot something
* unblocker
* Check if cache is messing up with things
* Now try with different ffmpeg
* Now try with different ffmpeg
* Do not install av
* Test with ffmpeg 4.2
* clean up video tests
* cleaning up the tests a bit to better test partial reading
* arrgh linter
* Forgot the av test
* forgot av test
* checkout build files from master
* revert circleci
* addressing Franciscos comments
* addressing Franciscos comments
* Ignore ffmpeg in travis

Co-authored-by: Francisco Massa <[email protected]>
Co-authored-by: Edgar Andrés Margffoy Tuay <[email protected]>
Per the description in #2660, here is a proof-of-concept implementation for video reading and metadata access.
THIS API IS STILL EXPERIMENTAL AND WILL LIKELY BE CHANGED/MODIFIED
Some key features:
Missing features:
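For illustration, here is a hedged sketch of what partial reading with this experimental API might look like, based on the seek-accuracy and partial-reading tests touched in this PR; the seek/next names, their signatures, and the end-of-stream behaviour are assumptions and may not match the final API.

```python
import torch
import torchvision  # loads the extension that registers the Video class

reader = torch.classes.torchvision.Video("path/to/video.mp4", "video")

start, end = 2.0, 4.0  # presentation timestamps in seconds (illustrative values)

# Assumed behaviour: seek() repositions the decoder near `start`, and next()
# keeps returning (frame, pts) pairs until the stream is exhausted.
reader.seek(start)
frames = []
while True:
    frame, pts = reader.next()
    if frame.numel() == 0 or pts > end:  # assumed end-of-stream signal: an empty tensor
        break
    frames.append(frame)

clip = torch.stack(frames) if frames else torch.empty(0)
```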