testhost.x86 is not large address aware #1985

Closed
sharwell opened this issue Apr 10, 2019 · 3 comments · Fixed by #1986

@sharwell
Member

Currently testhost.x86 is compiled with platform set to x86:

https://github.com/Microsoft/vstest/blob/53c0a341b07c1f224a31b2378e696f535a1f6080/src/testhost.x86/testhost.x86.csproj#L12

This compilation setting causes the resulting executable to lack the large address aware flag, which limits the 32-bit process to 2 GB of address space and causes problems on larger multi-core machines (I'm observing frequent OOM errors on a Threadripper 2990WX).

The correct setting for an x86 build that is large address aware is a combination of the AnyCPU platform and the Prefer32Bit setting. See microsoft/perfview@1582d04 for an example of this configuration.
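
As a rough sketch, the change in testhost.x86.csproj might look something like the following (these are the standard MSBuild/C# compiler properties; the surrounding layout of the actual project file may differ):

```xml
<!-- Sketch only: build as AnyCPU but prefer 32-bit, instead of targeting x86 directly. -->
<!-- This corresponds to csc /platform:anycpu32bitpreferred; the resulting executable   -->
<!-- still runs as a 32-bit process on Windows, but with the large address aware flag set. -->
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Prefer32Bit>true</Prefer32Bit>
</PropertyGroup>
```

Note that Prefer32Bit only has an effect on executable projects, which testhost.x86 is.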

@AbhitejJohn
Contributor

@sharwell: Just trying to understand the effect of moving to LAA: how would this affect low-spec systems?

@ivonin

ivonin commented Apr 16, 2019

From what I read, there is a performance hit from setting the /LAA flag, although it's not too big.

Can't find anything more thorough than this Stack Overflow post at the moment. @sharwell, what's your take on this?

@sharwell
Member Author

sharwell commented Apr 16, 2019

I've never heard of there being a difference. I can't benchmark the difference in a meaningful way because the current release just crashes on anything of interest to performance.
