I hope this isn't too confusing... I currently have a multitrack Host with various custom SampleProviders (implementing ISampleProvider), based on the typical approach: multiple Clips on individual tracks feed into their respective Track sample providers, which in turn feed into a "Master" sample provider.
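To make the topology concrete: my providers are custom classes, but the same clips-to-tracks-to-master tree can be sketched with NAudio's stock MixingSampleProvider (illustrative only, not my actual code; note that MixingSampleProvider requires an IEEE-float format):

// Illustrative analogue of the topology; my real providers are custom.
var fmt = WaveFormat.CreateIeeeFloatWaveFormat(44100, 2);
var track1 = new MixingSampleProvider(fmt); // each track mixes its own Clips
var track2 = new MixingSampleProvider(fmt);
var master = new MixingSampleProvider(new ISampleProvider[] { track1, track2 });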
In a previous version I simply initialized the AsioOut device with the "Master" SampleProvider, and calculated the time position in the mix from the byte count on each cycle of the Read method. (The default seemed to be 1536 bytes read per cycle, which for 44.1 kHz, 16-bit, 2-channel audio came out to about 8.707 ms per Read, more than fast enough.)
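That bookkeeping amounted to something like the following simplified sketch (names such as totalBytesRead and sourceWaveProvider are placeholders, not my exact code):

long totalBytesRead;

public int Read(byte[] buffer, int offset, int count)
{
    // Accumulate bytes as the ASIO device pulls them (e.g. 1536 bytes per cycle).
    int bytesRead = sourceWaveProvider.Read(buffer, offset, count);
    totalBytesRead += bytesRead;
    return bytesRead;
}

// 44,100 Hz * 2 ch * 2 bytes = 176,400 bytes/s, so 1536 bytes ~= 8.707 ms
public TimeSpan CurrentTime =>
    TimeSpan.FromSeconds((double)totalBytesRead / WaveFormat.AverageBytesPerSecond);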
Later, however, for reasons I've since forgotten, I changed to a different strategy: a microtimer drives calls to the Read method of the "Master" SampleProvider (at a 10 ms interval, 1764 bytes per call for 16-bit stereo at 44.1 kHz), and the result is fed into a BufferedWaveProvider (from which AsioOut is initialized), as per the following skeleton code:
byte[] sendBytes = new byte[1764]; // 10 ms of 16-bit stereo at 44.1 kHz (441 frames x 4 bytes)
IWaveProvider master16;            // MasterSampleProvider.ToWaveProvider16(), created once, not per tick
...
private void projPlayTimer_MicroTimerElapsed(...) // 10,000 microsecond interval
{
    // Pull the next 10 ms block from the master mix and queue it for the ASIO device.
    int readCount = master16.Read(sendBytes, 0, sendBytes.Length);
    bufferedWaveProvider.AddSamples(sendBytes, 0, readCount);
}
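For completeness, the setup around that handler looks roughly like this (buffer duration and driver name are incidental values, not my exact ones):

// Rough setup sketch; exact values are incidental.
bufferedWaveProvider = new BufferedWaveProvider(new WaveFormat(44100, 16, 2))
{
    BufferDuration = TimeSpan.FromMilliseconds(500), // headroom over the 10 ms feed rate
    DiscardOnBufferOverflow = true // drop rather than throw if the timer runs ahead
};
master16 = MasterSampleProvider.ToWaveProvider16(); // 16-bit wrapper, created once
asioOut = new AsioOut(driverName); // whichever ASIO driver is selected
asioOut.Init(bufferedWaveProvider); // ASIO pulls from the buffer on its own thread
asioOut.Play();
projPlayTimer.Start(); // the 10 ms MicroTimer that pushes into the buffer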
The microtimer is quite precise, and this has worked satisfactorily even with many tracks, but have I been unknowingly introducing a performance hit by doing it this way? I do clear the BufferedWaveProvider's buffer on stop or position reset, and I'm not noticing any dropouts during playback, but perhaps a problem would present itself under heavier stress testing?
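For reference, the stop/position-reset handling I mentioned is essentially:

// Stop feeding before flushing so nothing gets re-queued.
projPlayTimer.Stop();
asioOut.Stop();
bufferedWaveProvider.ClearBuffer(); // discard queued audio so the next play starts clean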