
Parallelize tests #17872

Draft
wants to merge 3 commits into base: main
Conversation

majocha
Contributor

@majocha majocha commented Oct 11, 2024

Enable running xUnit tests in parallel.

Using xUnit means customizing it. Two customizations were added:

  • Running collection and theory cases in parallel, based on https://www.meziantou.net/parallelize-test-cases-execution-in-xunit.htm
    By default, xUnit's unit of parallelization is the test collection: test cases within a collection run in sequence, and each class/module constitutes its own collection. We have many test cases in large modules and large theories that were bottlenecked by this.
    This customization enables parallelism in such cases. It can be reverted to the default for a particular module with the [<RunInSequence>] attribute.

  • Console streams are captured universally and redirected to xUnit's output mechanism, which means you can just do printfn in a test case and it goes to the respective output.
    This can be inspected in the IDE and, in case of failure, is printed out when testing from the command line.

The default way in xUnit is to use ITestOutputHelper. This is very unwieldy, because it requires placing test cases in a class with a constructor and then threading the injected output helper into every function that wants to output text. We have many tests in modules, not classes, and many of them use a lot of utility functions. Adjusting it all to use ITestOutputHelper is not feasible. On the other hand, just outputting with printfn is unobtrusive, natural, and works well with interactive prototyping of test cases.
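A minimal sketch of the contrast (the test names and module are hypothetical; ITestOutputHelper is xUnit's standard API):

```fsharp
open Xunit
open Xunit.Abstractions

// The default xUnit approach: the output helper must be constructor-injected,
// so tests have to live in a class and thread the helper into every function
// that wants to print anything.
type ClassicTests(output: ITestOutputHelper) =
    [<Fact>]
    member _.``classic output``() =
        output.WriteLine "visible in this test's output"

// With the console redirection in this PR, a plain module-level test can just
// print; the captured stream is routed to the respective test case.
module ModuleTests =
    [<Fact>]
    let ``printfn just works`` () =
        printfn "also visible in this test's output"
```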

Some local run times:

dotnet test .\tests\FSharp.Compiler.ComponentTests\ -c Release -f net9.0

Test summary: total: 4489, failed: 0, succeeded: 4258, skipped: 231, duration: 199.0s

dotnet test .\tests\fsharp\ -c Release -f net9.0

Test summary: total: 579, failed: 0, succeeded: 579, skipped: 0, duration: 41.9s

dotnet test .\FSharp.sln -c Release  -f net9.0

Test summary: total: 12963, failed: 0, succeeded: 12694, skipped: 269, duration: 253.3s

Some considerations to make this work and keep it working
To run tests in parallel we must deal with global resources and global state accessed by the test cases.

Out of proc:
Tests running as separate processes share the file system. We must make sure they execute in their own temporary directories and don't overwrite any hardcoded paths. This is already done, mostly in a separate PR.

Hosted:
Many tests use the hosted compiler and FsiEvaluationSession, sharing global resources and global state within the runner process:

  • Console streams - swept under the rug for now by using a simple AsyncLocal stream splitter.
  • FileSystem - the global mutable file system shim; the few tests that mutate it must be excluded from parallelization.
  • Environment.CurrentDirectory - many tests executing in the hosted session were doing a variation of File.WriteAllText("test.ok", "ok"), all in the current directory (i.e. bin), leading to conflicts. This is replaced with a thread-safe mechanism.
  • Environment variables, Path - mostly this applies to DependencyManager; excluded from parallelization for now.
  • Async default cancellation token - the few tests doing Async.CancelDefaultToken() must be excluded from parallelization.
  • Global state used in conjunction with the --times option - tests excluded from parallelization.
  • Global mutable state in the form of multiple caches implemented as ConcurrentDictionary. This needs further investigation.
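The CurrentDirectory replacement can be pictured roughly like this (a minimal sketch with hypothetical helper names, not the PR's actual code):

```fsharp
open System
open System.IO

// Each test case gets its own temporary directory instead of racing over a
// shared current directory such as "bin".
let createTestDirectory () =
    let dir = Path.Combine(Path.GetTempPath(), "fsharp-tests", Guid.NewGuid().ToString "N")
    Directory.CreateDirectory dir |> ignore
    dir

// Instead of File.WriteAllText("test.ok", "ok") in the process-wide current
// directory, a test writes into its own isolated directory:
let writeOkFile () =
    let dir = createTestDirectory ()
    File.WriteAllText(Path.Combine(dir, "test.ok"), "ok")
    dir
```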

I'll add to the above list if I recall anything else.

Problems:
Tests depending on tight timing, orchestrating things through combinations of Thread.Sleep, Async.Sleep and wait timeouts.
These are mostly excluded from parallelization; some attempts at fixing them were made.

Obscure compiler bugs revealed in this PR:

  • Internal error: value cannot be null. This mostly happens on CoreCLR, once, sometimes a few times, during the test run.

  • Error creating an evaluation session because of an NRE somewhere in TcImports.BuildNonFrameworkTcImports. This is rarer, but may be related to the above.

These were related to concurrency issues: modifying frameworkTcImportsCache without a lock, and a bug in the custom lazy implementation in il.fs. Hopefully both are fixed now.
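The shape of the cache race and its fix can be sketched like this (a minimal illustration; the actual types and locking in TcImports differ):

```fsharp
open System.Collections.Generic

// A shared cache mutated without synchronization can hand out
// partially-initialized state when tests run in parallel.
// Serializing access with a lock is the simplest fix.
type SharedImportsCache<'Key, 'Value when 'Key: equality>() =
    let gate = obj ()
    let cache = Dictionary<'Key, 'Value>()

    member _.GetOrCompute(key: 'Key, compute: unit -> 'Value) =
        lock gate (fun () ->
            match cache.TryGetValue key with
            | true, v -> v
            | _ ->
                let v = compute ()
                cache.[key] <- v
                v)
```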

Running in parallel:
xUnit runners are configured with mostly default parallelization settings.

dotnet test .\FSharp.sln -c Release -f net9.0 will run all discovered test assemblies in parallel as soon as they're built.
This can be limited with the -m switch. For example,
dotnet test -m:2 .\FSharp.Compiler.Service.sln
will limit the test run to at most 2 simultaneous processes. Still, each test host process runs its test collections in parallel.

Some test collections are excluded from parallelization with the [<Collection(nameof DoNotRunInParallel)>] attribute.
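Opting a module out looks roughly like this (a sketch; in the actual repo the DoNotRunInParallel marker type lives in the shared test utilities):

```fsharp
open Xunit

// Marker type whose name identifies the sequential collection.
type DoNotRunInParallel = class end

// All tests in this module join one named collection, so they run
// sequentially relative to each other and to that collection.
[<Collection(nameof DoNotRunInParallel)>]
module EnvironmentMutatingTests =
    [<Fact>]
    let ``mutates global state`` () = ()
```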

Running in the IDE with "Run tests in parallel" enabled will respect xunit.runner.json settings and the above exclusions.
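For reference, an xunit.runner.json enabling this kind of parallelism might look like the following (values illustrative, not necessarily the ones used in this PR):

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeAssembly": true,
  "parallelizeTestCollections": true,
  "maxParallelThreads": 4
}
```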

TODO:

@majocha majocha mentioned this pull request Oct 11, 2024

github-actions bot commented Oct 11, 2024

⚠️ Release notes required, but author opted out

Warning

Author opted out of release notes, check is disabled for this pull request.
cc @dotnet/fsharp-team-msft

@majocha majocha changed the title Parallelize tests, continuation Parallelize tests Oct 12, 2024
@majocha majocha force-pushed the parallel-tests branch 6 times, most recently from decc8cb to 278e2ec Compare October 13, 2024 09:01
@psfinaki
Member

Thanks for your endurance, Jakub 💪

@majocha
Contributor Author

majocha commented Oct 18, 2024

@psfinaki I will need some help with this Source-Build error:
https://dev.azure.com/dnceng-public/public/_build/results?buildId=847523&view=logs&j=2f0d093c-1064-5c86-fc5b-b7b1eca8e66a&t=52d0a7a6-39c9-5fa2-86e8-78f84e98a3a2&l=45

At this moment this is very stable locally, but it will probably also need testing on machines other than mine :)

What's left to do is to tune this for stability in CI. I've been trying different things and timing runs. The most glaring problem is testDesktop: in CI desktop runs, FSharpSuite and ComponentTests each take around 40 minutes. I guess slicing the test suite and running with a multi-agent parallel strategy would improve things here.
I added some simple provisions for easier slicing using traits: --filter ExecutionNode=n will now take a stable slice of the test suite (currently n is hardcoded to 1..4).

I also noticed the Linux run is constantly low on memory; this is unrelated, as it happens on main, too. For this reason I set MaxParallelThreads=4 in build.sh to cool things down a bit.

@psfinaki
Member

@majocha the error is weird, nothing comes to my mind right away. Let's rebase and rerun and see if it's still happening... Sorry, I know this is somewhat lame, it's just that SourceBuild is a Linux thing and it's not trivial to debug its issues locally.

As for cooling things down - I also noticed this today, thanks for addressing this.

What else do you think we can split from this PR into some separate ones?

@majocha
Contributor Author

majocha commented Oct 21, 2024

What else do you think we can split from this PR into some separate ones?

There are some small further fixes; also, the whole console handling does not really depend on parallel execution, so it could be split out.

A somewhat related thing I've had on my mind recently is implementing a FileSystem shim for tests that would be as in-memory as possible and isolated per test case. It wouldn't handle the tests that start separate processes, though.

@majocha
Contributor Author

majocha commented Oct 21, 2024

@majocha the error is weird, nothing comes to my mind right away. Let's rebase and rerun and see if it's still happening... Sorry, I know this is somewhat lame, it's just that SourceBuild is a Linux thing and it's not trivial to debug its issues locally.

Thanks! Rebasing did help.

@psfinaki psfinaki added the NO_RELEASE_NOTES Label for pull requests which signals, that user opted-out of providing release notes label Oct 21, 2024
@psfinaki
Member

There are some small further fixes, maybe also the whole console handling does not really depend on parallel execution.

Yeah, console handling would probably be good to isolate if possible.

Somewhat related thing I have on my mind recently is to implement a FileSystem shim for tests that will be as much in-memory as possible and isolated per testcase. It wouldn't handle the tests that start separate processes, though.

Just for my understanding, what would this add on top of the current results the PR achieves?

@majocha
Contributor Author

majocha commented Oct 21, 2024

Just for my understanding, what would this add on top of the current results the PR achieves?

This would be an experiment for another PR, but basically, I don't like all the copying to temp dirs that I added in recent PRs.
A FileSystem shim just for testing, one that virtualizes all writes and keeps track of which test case wrote what in order to isolate them correctly, might be a more performant and cleaner solution.

@psfinaki
Member

Right, yeah, I see. It's worth playing with, although given that we don't touch these tests too much, it's probably worth seriously investing in only if it starts yielding reasonable performance gains.
