
Move tests to invoke te.exe directly instead of using VSTest runner #4490

Merged
18 commits merged into master from dev/miniksa/tests2
Feb 10, 2020

Conversation

miniksa
Member

@miniksa miniksa commented Feb 6, 2020

Moves the tests from the vstest.console.exe route to invoking te.exe directly.

PROs:

  • te.exe is significantly faster for running tests because the TAEF/VSTest adapter isn't great.
  • Running through te.exe is closer to what our developers are doing on their dev boxes.
  • te.exe is how they run in the Windows gates.
  • te.exe doesn't seem to hit the sporadic 0x6 error code thrown during the tests when the console handles somehow get lost.
  • te.exe doesn't seem to repro the other intermittent, inscrutable issues that we have been having.
  • Fewer processes in the tree (te is running anyway under vstest.console.exe, just indirected a lot).
  • The log output scrolls live with all our logging messages instead of suppressing everything until there's a failure.
  • The log output is actually in the order things happen, versus vstest's.

CONs:

  • No more code coverage.
  • No more test records in the ADO build/test panel.
  • Tests really won't work inside Visual Studio at all.
  • The log files are really big now.
  • Testing is not a test task anymore; it's just another script.

Refuting each CON:

  • We weren't reading the code coverage numbers anyway.
  • We weren't looking at the ADO test panel results or build-over-build velocities.
  • Tests didn't really work inside Visual Studio anyway unless you did the right incantations under the full moon.
  • We could tone down the logging if we wanted, either at te.exe execution time (with a switch) or by declaring properties on the very verbose tests/classes/modules so they don't log unless they fail.
  • I don't think anyone cares how they get run as long as they do.

@DHowett-MSFT
Contributor

/me reviews it

@miniksa
Member Author

miniksa commented Feb 6, 2020

K. I know this is bad. I need to use a variable to determine the config/platform, and/or a PowerShell script that can replace the ** pattern search that the VS test task could do.

@miniksa
Member Author

miniksa commented Feb 6, 2020

vstest.console.exe runtime x86 unit tests: 5m8s
te.exe runtime x86 unit tests: 46s

Yeah, uh, let's just run them direct.

@DHowett-MSFT
Contributor

DHowett-MSFT commented Feb 6, 2020

# EARLIER

- task: PowerShell@2
  displayName: 'Rationalize build platform'
  inputs:
    targetType: inline
    script: |
      $Arch = "$(BuildPlatform)"
      If ($Arch -Eq "x86") { $Arch = "Win32" }
      Write-Host "##vso[task.setvariable variable=RationalizedBuildPlatform]${Arch}"

# LATER

- task: PowerShell@2
  displayName: 'Run Unit Tests'
  inputs:
    targetType: inline
    script: |
      $TestRoot = ".\bin\$(RationalizedBuildPlatform)\$(BuildConfiguration)"
      & ${TestRoot}\TE.exe ${TestRoot}\*.Unit.Test*.dll
  condition: bla bla bla

perhaps? Maybe?

@miniksa
Member Author

miniksa commented Feb 6, 2020

feature tests vstest.console.exe: 2m42s
feature tests te.exe: 1m39s

@miniksa
Member Author

miniksa commented Feb 6, 2020


Yeah that looks a bit better than what I came up with.

@oising
Collaborator

oising commented Feb 6, 2020

ERMAGAHD

@miniksa miniksa changed the title messing with tests 2, electric boogaloo. do not review. Moves tests to invoke te.exe directly instead of using VSTest runner Feb 6, 2020
@miniksa miniksa self-assigned this Feb 6, 2020
This reverts commit 35e771b.
@miniksa miniksa marked this pull request as ready for review February 6, 2020 23:03
@DHowett-MSFT
Contributor

nit: change name to be imperative (Move, Do, Fix, Teach, etc.)

@DHowett-MSFT DHowett-MSFT added the AutoMerge Marked for automatic merge by the bot when requirements are met label Feb 6, 2020
@ghost

ghost commented Feb 6, 2020

Hello @DHowett-MSFT!

Because this pull request has the AutoMerge label, I will be glad to assist with helping to merge this pull request once all check-in policies pass.

Do note that I've been instructed to only help merge pull requests of this repository that have been opened for at least 8 hours, a condition that will be fulfilled in about 4 hours 33 minutes. No worries though, I will be back when the time is right! 😉

p.s. you can customize the way I help with merging this pull request, such as holding this pull request until a specific person approves. Simply @mention me (@msftbot) and give me an instruction to get started! Learn more here.

@miniksa miniksa changed the title Moves tests to invoke te.exe directly instead of using VSTest runner Move tests to invoke te.exe directly instead of using VSTest runner Feb 6, 2020
@zadjii-msft
Member

(screenshot of the build results)
Here's the build.

I'm inclined to sign off anyway for the speed boost, but it looks like the roundtrip tests still hang in just TAEF on CI 😢

@DHowett-MSFT
Contributor

At least we can now see which one was running when it hangs! When DevOps actually lets you scroll continuously (instead of jumping from line 110000 to line 100025...)

@DHowett-MSFT
Contributor

2020-02-06T23:53:14.6908403Z StartGroup: ConptyOutputTests::WriteAFewSimpleLines
2020-02-06T23:53:14.6911125Z Write more lines of output. We should use 
2020-02-06T23:53:14.6911809Z  to move the cursor
2020-02-06T23:53:14.6912996Z Verify: IsNotNull(_pVtRenderEngine.get())
2020-02-06T23:53:14.6914899Z Verify: IsGreaterThan(expectedOutput.size(), static_cast<size_t>(0)): writing="␛[2J", expecting 4 strings
2020-02-06T23:53:14.6915910Z Expected =	"␛[2J"
2020-02-06T23:53:14.6917308Z Actual =	"␛[2J"

@miniksa
Member Author

miniksa commented Feb 7, 2020

Yes, I'll go dig into that test now; fixing it or turning it off (with a TODO) may be part of this too.

@miniksa
Member Author

miniksa commented Feb 7, 2020

@miniksa
Member Author

miniksa commented Feb 7, 2020

Yeah I'm missing TerminalAppUnitTests because they're in a bin subdirectory...
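A flat wildcard like the `*.Unit.Test*.dll` in the pipeline snippet above only matches DLLs sitting directly in the bin directory, which is exactly how a project that builds into its own subdirectory gets missed. A minimal sketch in Python (illustrative only; the file names below are hypothetical) of why the discovery step needs a recursive search:

```python
from pathlib import Path

def find_test_dlls(root: str, recursive: bool = True) -> list[str]:
    """Return test binaries matching *.Unit.Test*.dll under root.

    A flat glob (recursive=False) only sees DLLs at the top level of
    the directory; binaries placed in a bin subdirectory (like the
    TerminalAppUnitTests case above) require a recursive search.
    """
    base = Path(root)
    pattern = "*.Unit.Test*.dll"
    found = base.rglob(pattern) if recursive else base.glob(pattern)
    return sorted(str(p) for p in found)
```

In PowerShell terms, this is roughly the difference between a plain wildcard argument and `Get-ChildItem -Recurse`.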

@miniksa
Member Author

miniksa commented Feb 10, 2020

OK it blended mostly. Take out the failboat test and update the solution. If this works, we're good to go.

@DHowett-MSFT
Contributor

Get them good-good signoffs!

@ghost ghost merged commit 86706d7 into master Feb 10, 2020
@ghost ghost deleted the dev/miniksa/tests2 branch February 10, 2020 19:14
@miniksa miniksa mentioned this pull request Aug 14, 2020
8 tasks
ghost pushed a commit that referenced this pull request Aug 18, 2020
Use the Helix testing orchestration framework to run our Terminal LocalTests and Console Host UIA tests.

## References
#### Creates the following new issues:
- #7281 - re-enable local tests that were disabled to turn on Helix
- #7282 - re-enable UIA tests that were disabled to turn on Helix
- #7286 - investigate and implement appropriate compromise solution to how Skipped is handled by MUX Helix scripts

#### Consumes from:
- #7164 - The update to TAEF includes wttlog.dll. The WTT logs are what MUX's Helix scripts use to track the run state, convert to XUnit format, and notify both Helix and AzDO of what's going on.

#### Produces for:
- #671 - Making Terminal UIA tests is now possible
- #6963 - MUX's Helix scripts are already ready to capture PGO data on the Helix machines as certain tests run. Presuming we can author some reasonable scenarios, turning on the Helix environment gets us a good way toward automated PGO.

#### Related:
- #4490 - We lost the AzDO integration of our test data when I moved from the TAEF/VSTest adapter directly back to TE. Thanks to the WTTLog + Helix conversion scripts to XUnit + new upload phase, we have it back!

## PR Checklist
* [x] Closes #3838
* [x] I work here.
* [x] Literally adds tests.
* [ ] Should I update a testing doc in this repo?
* [x] Am core contributor. Hear me roar.
* [ ] Correct spell-checking the right way before merge.

## Detailed Description of the Pull Request / Additional comments
We have had two classes of tests that don't work in our usual build-machine testing environment:
1. Tests that require interactive UI automation or input injection (a.k.a. require a logged in user)
2. Tests that require the entire Windows Terminal to stand up (because our Xaml Islands dependency requires 1903 or later and the Windows Server instance for the build is based on 1809.)

The Helix testing environment solves both of these and is brought to us by our friends over in https://github.com/microsoft/microsoft-ui-xaml.

This PR takes a large portion of scripts and pipeline configuration steps from the Microsoft-UI-XAML repository and adjusts them for Terminal needs.
You can see the source of most of the files in either https://github.com/microsoft/microsoft-ui-xaml/tree/master/build/Helix or https://github.com/microsoft/microsoft-ui-xaml/tree/master/build/AzurePipelinesTemplates

Some of the modifications in the files include (but are not limited to) reasons like:
- Our test binaries are named differently than MUX's test binaries
- We don't need certain types of testing that MUX does.
- We use C++ and C# tests while MUX was using only C# tests (so the naming pattern and some of the parsing of those names is different e.g. :: separators in C++ and . separators in C#)
- Our pipeline phases work a bit differently than MUX and/or we need significantly fewer pieces to the testing matrix (like we don't test a wide variety of OS versions).

The build now runs in a few stages:
1. The usual build and run of unit tests/feature tests, packaging verification, and whatnot. This phase now also picks up and packs anything required for running tests in Helix into an artifact. (It also unifies the artifact name between the things Helix needs and the existing build outputs into the single `drop` artifact to make life a little easier.)
2. The Helix preparation build runs that picks up those artifacts, generates all the scripts required for Helix to understand the test modules/functions from our existing TAEF tests, packs it all up, and queues it on the Helix pool.
3. Helix generates a VM for our testing environment and runs all the TAEF tests that require it. The orchestrator at helix.dot.net watches over this and tracks the success/fail and progress of each module and function. The scripts from our MUX friends handle installing dependencies, making the system quiet for better reliability, detecting flaky tests and rerunning them, and coordinating all the log uploads (including for the subruns of tests that are re-run.)
4. A final build phase is run to look through the results with the Helix API and clean up the marking of tests that are flaky, link all the screenshots and console output logs into the AzDO tests panel, and other such niceties.

We are set to run Helix tests on the Feature test policy of only x64 for now. 

Additionally, because the set up of the Helix VMs takes so long, we are *NOT* running these in PR trigger right now as I believe we all very much value our 15ish minute PR turnaround (and the VM takes another 15 minutes to just get going for whatever reason.) For now, they will only run as a rolling build on master after PRs are merged. We should still know when there's an issue within about an hour of something merging, and multiple PRs merging quickly will be handled on the rolling build as a batch run (not one per PR).

In addition to setting up the entire Helix testing pipeline for the tests that require it, I've preserved our classic way of running unit and feature tests (that don't require an elaborate environment) directly on the build machines. But with one bonus feature... They now use some of the scripts from MUX to transform their log data and report it to AzDO so it shows up beautifully in the build report. (We used to have this before I removed the MStest/VStest wrapper for performance reasons, but now we can have reporting AND performance!) See https://dev.azure.com/ms/terminal/_build/results?buildId=101654&view=ms.vss-test-web.build-test-results-tab for an example. 

I explored running all of the tests on Helix but.... the Helix setup time is long and the resources are more expensive. I felt it was better to preserve the "quick signal" by continuing to run these directly on the build machine (and skipping the more expensive/slow Helix setup if they fail.) It also works well with the split between PR builds not running Helix and the rolling build running Helix. PR builds will get a good chunk of tests for a quick turn around and the rolling build will finish the more thorough job a bit more slowly.

## Validation Steps Performed
- [x] Ran the updated pipelines with Pull Request configuration ensuring that Helix tests don't run in the usual CI
- [x] Ran with simulation of the rolling build to ensure that the tests now running in Helix will pass. All failures marked for follow on in reference issues.
zadjii-msft added a commit that referenced this pull request Aug 15, 2023
This does three bits:
1. correctly marks our tests as failed in xUnit, so that AzDo will pick
up that the tests have failed.
2. Actually intentionally mark skipped tests as skipped in xUnit. We
were doing this accidentally before.
3. Add a CI step to log test failures in a way that they can show up on
GitHub


Probably regressed around #6992 and #4490.

### details

#### Part the first
We were relying on the MUX build scripts to convert our WTT test logs to
xUnit format, which AzDo then ingests. That script we used relied on
some WinUI-specific logic around retrying tests. They have some logic to
auto-retry failed tests. They then mark a test as "skipped" if it passed
less than some threshold of times. Since we were never setting that
variable, we would mark a test as "skipped" if it had _0_ passes. So,
all failures showed up on AzDo as "skipped".
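The misclassification described above can be sketched as a toy model in Python (an illustration of the described behavior, not the real `HelixTestHelpers.cs` code): with the pass-count threshold variable in play, a test with zero passes and any number of failures lands in the "skip" bucket instead of "fail".

```python
def classify_buggy(passes: int, failures: int, min_passes: int = 1) -> str:
    # Toy model of the WinUI retry logic: a test that passed fewer than
    # min_passes times is reported as 'skip' on the theory that a flaky,
    # retried test shouldn't count as a hard failure. A test with zero
    # passes and three failures therefore shows up as skipped.
    if passes < min_passes:
        return "skip"
    return "fail" if failures else "pass"

def classify_fixed(passes: int, failures: int, was_skipped: bool = False) -> str:
    # The fix described in this PR: drop the retry logic entirely and
    # report 'skip' only for tests that were actually skipped.
    if was_skipped:
        return "skip"
    return "fail" if failures else "pass"
```

This is why `Run-Tests.ps1` still failed the job (it sees the raw failures) while AzDO's test panel showed only skips.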

Why didn't we notice this? Well, the `Run-Tests.ps1` script will still
return `1` if _any_ tests failed. So the test job would fail if there
was a failure, AzDo just wouldn't know which test it was.

#### part the second
Updates `ConvertWttLogToXUnitLog` in `HelixTestHelpers.cs` to understand
that a test can be skipped, in addition to pass/fail. Removes all the
logic for dealing with retries, because we didn't need that.

#### part the third
TAEF doesn't emit error messages in a way that AzDo can immediately pick
up on which tests failed. This means that Github gives us this useless
error message:

![image](https://github.com/microsoft/terminal/assets/18356694/3be6de00-22e1-421c-93d4-176bd2be4cab)
That's the only "error" that AzDo knows about. 

This PR changes that by adding a build step to manually parse the xUnit
results, and log the names of any tests that failed. By logging them
with a prefix of `##vso[task.logissue type=error]`, then AzDo will
surface that text as an error message. GitHub can then grab that text
and surface it too.
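A minimal sketch of such a step in Python (an assumption about shape, not the PR's actual script): walk the xUnit-format results and emit one `##vso[task.logissue type=error]` logging command per failed test, which AzDO then surfaces as an error.

```python
import xml.etree.ElementTree as ET

def failed_test_log_lines(xunit_xml: str) -> list[str]:
    # Walk every <test> element in the xUnit results and build one AzDO
    # logging command per failure; printing these lines in a build step
    # makes AzDO show them as errors, which GitHub can then surface.
    root = ET.fromstring(xunit_xml)
    return [
        f"##vso[task.logissue type=error]Test failed: {t.get('name', '<unknown>')}"
        for t in root.iter("test")
        if t.get("result") == "Fail"
    ]
```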

### Addenda: Why aren't we using the VsTest module
as noted in
#4490 (comment),
the vstest module is literally 6x slower than just running TAEF
directly.