
Add unit tests for buffer processor (pa_process) code #779

Open
dechamps opened this issue Mar 4, 2023 · 2 comments
Labels: P2 Priority: High, test (Test code in /test)

@dechamps
Contributor

dechamps commented Mar 4, 2023

In #772 it was pointed out that the buffer processor code (i.e. PaUtil_InitializeBufferProcessor() and friends in pa_process.h) could benefit from a test suite. Currently this code is not directly tested anywhere, although it is indirectly covered through any tests exercising host API streaming code (in particular paloopback).

The API in pa_process.h would lend itself very well to unit tests, because it's a self-contained module with well-defined boundaries and behaviour, and no dependencies (so nothing to fake/mock). This module is also extremely critical, as it sits at the core of all host API streaming code, and its operation is quite non-trivial, so it would benefit a lot from test coverage.

In particular, tests could ensure that the stream callback is called appropriately and with the correct parameters; that the correct amount of data is read and written from/to host API buffers at the proper offsets (this would have caught #770), including wraparound; that sample format conversions are done appropriately; that interleaved/non-interleaved conversions are done appropriately; and that latency/timing reporting is correct. All of this should be exercised for a variety of modes (input-only, output-only, full-duplex) and buffer size combinations.

Best practice dictates this should be done through lots of small, focused tests. We might want to use some kind of unit testing framework for this.

@RossBencina
Collaborator

Well put, I agree.

> We might want to use some kind of unit testing framework for this.

Phil and I have decided that we probably don't want to introduce such a dependency. I believe that Phil has some macros that will be sufficient. Personally I don't mind lightweight header-only frameworks such as Catch, but I don't know whether there's anything equivalent for C.

I don't mind working on these tests, as I think it will help me get my head back in the code to help with resolving #770 / #772.

If you have any ideas for specific unit tests feel free to add a bullet point list.

@RossBencina RossBencina added the test Test code in /test label Mar 7, 2023
@RossBencina RossBencina self-assigned this Mar 7, 2023
@RossBencina RossBencina added this to the V19.8 milestone Mar 7, 2023
@RossBencina RossBencina added the P2 Priority: High label Mar 7, 2023
@dechamps
Contributor Author

dechamps commented Mar 7, 2023

> If you have any ideas for specific unit tests feel free to add a bullet point list.

Off the top of my head:

  • Verify that the stream callback is being called appropriately.
  • Verify that the data being read/written from/to host buffers and user buffers is correct.
    • It might make sense to use test data where the samples go like 1, 2, 3, 4, 5, 6... which is easy to check (both automatically and manually while looking at a memory dump)
  • Verify that the buffer processor never writes data before or after the user/host buffer portion it's supposed to write to.
    • This could be done by writing "sentinel" values before and after the range we expect the buffer processor to write to, and then systematically checking the sentinel values are still there.
  • Check that sample format conversions are done as necessary.
    • We don't need to check the correctness of the conversion itself (that's covered by patest_converters), but we do need to check that the buffer processor is calling the correct converter.
    • Of particular importance is checking correct operation when converting across different bit depths (e.g. int16 vs int24), to catch issues where offset/size calculations are using the wrong sample size.
  • Check that interleaved/non-interleaved conversions are done correctly.
  • Check that latency is being reported correctly.
  • Check that timing data (PaStreamCallbackTimeInfo) is being reported correctly.

We'd potentially want to run some (most?) tests in a matrix of parameters:

  • All possible values of PaUtilHostBufferSizeMode
  • Input-only vs. output-only vs. full-duplex
  • paFramesPerBufferUnspecified vs. various fixed buffer sizes (including edge cases e.g. 1)
  • Various host buffer sizes, including edge cases such as 1 frame, an integer divisor of user frames per buffer, a prime number less than user frames per buffer, equal to user frames per buffer, an integer multiple of user frames per buffer, and a non-integer multiple of user frames per buffer
    • This matters because the buffer adaptation stuff seems to care about these buffer sizes being multiples of each other

Where it gets tricky is striking the right tradeoff between test suite readability/maintainability, test runtime, and coverage. Trying to test every possible combination of everything across every dimension, while valuable, might not be practical.

Once we have the beginning of a test suite we could run it on CI and automatically fail CI if the tests fail. We can do this because the proposed test suite would have a clear PASS/FAIL result that is completely independent of the platform it's running on (and doesn't depend on scheduling/timing/hardware, etc.).

@philburk philburk modified the milestones: V19.8, V19.9 May 17, 2024