RFC: Fast Audio Loading (from file) #1000
Comments
Can you clarify how the resampling at load time mentioned in the interface relates to fast loading? I'm assuming the implicit advantage is that the resampling is done "streaming-style" while the backend loads the data into memory, is that correct? A side effect is that the resampling algorithm would depend on the backend.
(I would also flag the choice of |
Just passing by on this discussion: we've already enabled FFmpeg support (including packaging) in torchvision for all operating systems. Let us know if we can help you here. cc @fmassa
@andfoy Thanks for the offer. Yes, this RFC is, at the moment, about defining the goal. When I get to the details of the implementation, I will ask for help there.
@andfoy I just realized that if torchaudio binds ffmpeg and ships a static binary, then the versions that torchvision and torchaudio ship might collide, so if torchaudio decides to add an ffmpeg binding, we should definitely work together.
@mthrok, the last point shouldn't be a problem once pytorch/vision#2818 is merged; that PR always relocates FFmpeg in torchvision in order to prevent collisions.
Hi, chiming in with some background on what we did for torchvision. In the 0.4.0 release, we introduced an API for reading video / audio which is very similar to the API you proposed:

```python
def read_video(
    filename: str, start_pts: int = 0, end_pts: Optional[float] = None, pts_unit: str = "pts"
) -> Tuple[torch.Tensor, torch.Tensor, Dict[str, Any]]:
    ...
```

One difference was that we didn't expose any resampling method for changing the fps of the videos (nor the audio); we left those as functions that the user could write outside. While this API was OK for some types of tasks (like classification), for others it was clear that it was suboptimal, so we moved to a more granular VideoReader-based API. Here is one example of how it can be used:

```python
from torchvision.io import VideoReader

# stream indicates if reading from audio or video
reader = VideoReader('path_to_video.mp4', stream='video')
# can change the stream after construction
# via reader.set_current_stream

# to read all frames in a video starting at 2 seconds
for frame in reader.seek(2):
    # frame is a dict with "data" and "pts" metadata
    print(frame["data"], frame["pts"])

# because reader is an iterator you can combine it with
# itertools
from itertools import takewhile, islice

# read 10 frames starting from 2 seconds
for frame in islice(reader.seek(2), 10):
    pass

# or to return all frames between 2 and 5 seconds
for frame in takewhile(lambda x: x["pts"] < 5, reader.seek(2)):
    pass
```

I would be glad to discuss the trade-offs of both APIs. Our decision on the more granular API was motivated by flexibility without hurting speed (which was backed by benchmarks we did), but I understand that decoding audio might have different overheads, and thus other things might need to be adjusted.
Some thoughts

Sample rate

I'm wondering whether we should return sample_rate alongside the waveform by default. Why not use torch.info or such? The four cases are (sketched in code below):

a) User wants the sample rate under the current proposal
b) User wants the sample rate using torch.info
c) User doesn't want the sample rate under the current proposal
d) User doesn't want the sample rate using torch.info
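A minimal, hypothetical sketch of the four call patterns being compared; the function names (load_returning_sr, load_waveform_only, info) and the metadata record are stand-ins for the two designs, not real torchaudio APIs.

```python
from typing import NamedTuple, Tuple

import torch

class AudioInfo(NamedTuple):
    # hypothetical metadata record; only sample_rate is shown
    sample_rate: int

# --- stubs standing in for the two designs being compared (not real APIs) ---

def load_returning_sr(path: str) -> Tuple[torch.Tensor, int]:
    """Current proposal: waveform and sample rate come back together."""
    return torch.zeros(1, 16000), 16000

def load_waveform_only(path: str) -> torch.Tensor:
    """info-based design: load returns only the waveform."""
    return torch.zeros(1, 16000)

def info(path: str) -> AudioInfo:
    """info-based design: metadata is fetched separately."""
    return AudioInfo(sample_rate=16000)

path = "example.wav"

# a) user wants the sample rate, current proposal
waveform, sample_rate = load_returning_sr(path)

# b) user wants the sample rate, info-based design
waveform = load_waveform_only(path)
sample_rate = info(path).sample_rate

# c) user does not want the sample rate, current proposal: discard on unpacking
waveform, _ = load_returning_sr(path)

# d) user does not want the sample rate, info-based design
waveform = load_waveform_only(path)
```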
I claim that we can always augment this function to return a richer type which might include information about sample rate, bit depth, or other metadata, but we shouldn't start out with that unless we have a concrete list of reasons.

Reasons for returning sample_rate by default

Reasons for not returning sample_rate by default
Using time or frame number for offsets

I think we should support both by dispatching on the offset and duration type: if the type is integral, it's interpreted as a frame count; if it's floating point, it is interpreted as time. Are there formats where there is no clear linear correspondence between time and frame number?
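A small sketch of the type-based dispatch suggested above; the convention (int = frames, float = seconds) follows the comment, while the helper name and everything else are assumptions.

```python
from typing import Union

def to_frames(value: Union[int, float], sample_rate: int) -> int:
    """Interpret an offset/duration: integral means frames, floating point means seconds."""
    if isinstance(value, bool):
        raise TypeError("bool is not a valid offset/duration")
    if isinstance(value, int):
        return value
    if isinstance(value, float):
        return int(round(value * sample_rate))
    raise TypeError(f"expected int (frames) or float (seconds), got {type(value).__name__}")

# e.g. for a 16 kHz file:
assert to_frames(160, 16000) == 160      # 160 frames
assert to_frames(0.5, 16000) == 8000     # half a second -> 8000 frames
```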
On a general note, I'd propose narrowing the scope of this RFC to just fast audio loading from paths, unless we broaden it to also include io Buffers and streams. That is in favor of having something very specific like torchaudio.io.read_audio_from_path and then adding a mechanism to torch.load that chooses the right factory function (a rough dispatcher is sketched below). To derisk an ill-designed grab bag of load functions specific to file location (io Buffer, file path, stream, etc.), we could make this a prototype feature that's only available in the nightlies at first.
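A hedged sketch of that suggestion: a very specific path-based entry point plus a dispatcher that picks the right factory function per source type. All names here are illustrative, not existing torchaudio functions.

```python
import io
import os
from typing import Union

def read_audio_from_path(path: Union[str, os.PathLike]):
    """Hypothetical, path-only fast loader (the narrow scope proposed above)."""
    raise NotImplementedError("backend decoding goes here")

def load(src):
    """Hypothetical dispatcher that chooses the right factory function per source type."""
    if isinstance(src, (str, os.PathLike)):
        return read_audio_from_path(src)
    if isinstance(src, io.IOBase):
        # file-like objects / streams could be registered later as separate factories
        raise NotImplementedError("in-memory and streaming sources are out of scope for now")
    raise TypeError(f"unsupported source type: {type(src).__name__}")
```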
Is there an example for reading an audio file into a torch::Tensor in C++? Note: Resolved in #1562
Background
Fast audio loading is critical to audio applications, and this is even more so for music data, which has properties that differ from typical speech applications.
Proposal
Add a new I/O scheme to torchaudio that utilizes libraries providing faster decoding, wide codec coverage, and portability across operating systems (Linux / macOS / Windows).
Currently torchaudio binds libsox, which is not supported on Windows. There are a variety of decoding libraries that we can take advantage of. These include:

- A fast MP3 decoding library (minimp3).
- Similar to minimp3, an MP4 decoding library by the same author.
- Fast for the WAV format.*
- Also handles FLAC and Ogg/Vorbis.
- Resampling.
- Covers a much wider range of codecs, with higher decode/encode quality, but not as fast.
- Can handle the AAC format (in addition to what is already listed above) and a lot more.
Unlike the existing torchaudio backends, which implement the same generic interfaces, the new I/O will provide one unified Python interface across all the supported platforms (Linux / macOS / Windows) and delegate the library selection to the underlying C++ implementation.
Benchmarks for some of these libraries are available at https://github.com/faroit/python_audio_loading_benchmark (thanks @faroit!).
Non-Goals
In-memory decoding
In-memory decoding support is nice to have, but currently we do not know whether it is possible to pass memory objects from Python to C++ via TorchScript. For the sake of simplicity, we exclude this feature from the scope of this proposal. For a Python-only solution, see the gist in #800.
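For reference, a Python-only sketch of in-memory decoding (the approach in #800 may differ); it assumes the optional soundfile package, which can read from file-like objects.

```python
import io
from typing import Tuple

import soundfile as sf  # optional dependency, assumed available
import torch

def decode_from_bytes(raw: bytes) -> Tuple[torch.Tensor, int]:
    # soundfile accepts any file-like object, so no temporary file is needed
    data, sample_rate = sf.read(io.BytesIO(raw), dtype="float32", always_2d=True)
    # convert the (num_frames, num_channels) array to a (channels, frames) tensor
    return torch.from_numpy(data).t(), sample_rate
```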
Streaming decoding
Streaming decoding support will be critical for real-time applications. However, it is difficult to design real-time decoding as a stand-alone module, because the design of the downstream processes, such as preprocessing, feeding the NN, and using the result, is closely tied to the upstream I/O mechanism. Therefore, streaming decoding is excluded from this proposal.
Effects (filterings)
ffmpeg supports filtering (effects) like libsox does. We could make that available too, but it is outside the scope of fast audio loading.
Interface
Python frontend to the C++ interface. No (significant) logic should happen here.
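As a rough sketch only, the frontend could look something like the following, assuming the C++ implementation is exposed as a TorchScript custom op; the op name, parameters, and defaults are all hypothetical, not part of this RFC.

```python
from typing import Tuple

import torch

def load_audio(
    filepath: str,
    frame_offset: int = 0,
    num_frames: int = -1,      # -1 means "until the end of the file"
    sample_rate: int = -1,     # when > 0, resample at load time
) -> Tuple[torch.Tensor, int]:
    # thin wrapper only: backend/library selection and decoding happen inside
    # the C++ implementation, assumed to be registered as a custom op
    return torch.ops.torchaudio.load_audio(
        filepath, frame_offset, num_frames, sample_rate
    )
```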
Example Usage (Python)
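A hypothetical usage of the load_audio sketch above (file name and parameter values are illustrative):

```python
# load a whole file
waveform, sample_rate = load_audio("example.wav")

# load one second starting 0.5 s into a 16 kHz file, resampling to 8 kHz
waveform, sample_rate = load_audio(
    "example.wav", frame_offset=8000, num_frames=16000, sample_rate=8000
)
```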
FAQ
Will the proposed API replace the current torchaudio.load?

No, this proposal does not remove torchaudio.load or ask users to migrate to the new API. Instead, torchaudio.load will make use of the proposed API (the detail of how it does so is TBD).

When we think of supporting other types of I/O, such as memory objects, file-like objects, or streaming objects, we will design those APIs separately and plug them into torchaudio.load. This way, we decouple the concerns and requirements, yet are able to extend the functionality.
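One possible (undecided, purely illustrative) way the existing entry point could delegate to the new API, matching the decoupling described above; the signature shown is a stand-in, not the actual torchaudio.load signature.

```python
# torchaudio.load would keep its public interface and simply forward path
# inputs to the new fast loader; other source types would plug in separately.
def load(filepath, frame_offset=0, num_frames=-1):
    return load_audio(filepath, frame_offset=frame_offset, num_frames=num_frames)
```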