Multiple Tests in one Test File #147
Ugh. Really sorry I never responded to this. @jpsider's doing us a favor and breathing life into this discussion via PR #184. Over there, we have a working proof of concept on what it would look like to test all vCenter advanced settings, one value to each file. It's safe to say I didn't fully comprehend the inherent scaling issues here. I'd love to see that branch compared to an example that tests all ~700 values in one test. I suspect that the performance change is going to speak for itself, and that it'll be obvious to work toward implementing this as you've laid out above. |
It is possible that the way tests are run would need to change, and playing around with the config file to get it to work might need to happen as well. I dabbled with it in the past. At a high level, I'd much rather work with one test file than 800 individual advanced option tests. |
I would assume that it could be properly formatted in the JSON, right? Under VC there would be an 'advanced setting option' that would open up to each of the advanced settings, then each one of those would run the specific test. My only question would be how you handle the 'type' if the test file does not change? Right now the type is hard-coded in the test file. (I'm not sure it's properly used, but that's a separate issue!) |
The type really isn't the issue; you can pull the current type from the existing setting. Technically, I'm not sure we need the type to begin with if you follow that logic, though maybe there are certain use cases. The bigger issue is adding in the portions to the VesterTemplate function to account for arrays/non-single-value tests. I worked on (and scrapped) this 6+ months ago, so I'm not really as fresh on it.
|
I'm not sold on using 'type' either, or the value it provides, or whether it's used correctly. But I agree that's not the overarching theme in this Issue. Maintaining a bunch of files sucks; those 600+ I created are a bit crazy. However, there is automation in place to manage the file creation and to capture the current state. The alternative is, as you stated, updating the current behavior in several places if you want to test multiple settings with the same primary command. I don't think there is a right or wrong, just a general direction we need to head in once the line in the sand is drawn. |
Yes, agreed. It's a question of whether or not we should, and then going down each path.
|
I was looking through all my Vester files; I thought I didn't have my work anymore (which I don't), but luckily this issue has a gist with the code changes I was working on: link. Basically, there is an if/else in there for whether it is a single-value or multiple-value test. That is where I'd think the first place to start the conversation or begin looking would be. |
Maybe another use case for multiple settings in one test is checking the portgroups of a VMhost |
I don't want to hold this up if anyone feels like they need my blessing. Just to be clear: I'd like to see this prototyped.
|
I haven't gotten around to testing this out yet. I hope to be able to spend some hours on it this week. |
gist (pretty much the same). I'm Write-Host-ing some test output for testing purposes. The slowness is due to the foreach loop. Side note comparison: whenever I run the math: |
I'm thinking we might be able to do one … We should be able to get … At that point, it should just be a matter of thinking about the … |
I think this may work. *Disclaimer: it is just a prototype.* I added a hashtable conversion; with that, we can compare hash tables super quick. It compared all the host's advanced properties (1110 on my host) in 1.86s. So outside of that: if you remediate, the test must loop through each failed value: |
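As a rough sketch of the idea being described here (not the actual prototype code; the setting names and values are invented for illustration):

```powershell
# Desired state (from Config.json) and actual state (from the host),
# both held as hashtables keyed by advanced setting name
$Desired = @{ 'NFS.MaxVolumes' = '256'; 'NFS.HeartbeatFrequency' = '12' }
$Actual  = @{ 'NFS.MaxVolumes' = '8';   'NFS.HeartbeatFrequency' = '12' }

# One pass over the keys instead of one pipeline call per setting;
# collect only the names whose values differ, for later remediation
$Failed = foreach ($Name in $Desired.Keys) {
    if ("$($Actual[$Name])" -ne "$($Desired[$Name])") { $Name }
}
```

Only the names collected in $Failed would then need the per-setting remediation loop, which keeps the slow part proportional to failures rather than to all ~1100 settings.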
Did anyone need any more information on this to progress this along? |
Bleh, sorry! I'm in new baby mode, and sleep has engulfed GitHub time lately. 😫 Haven't tested, but the basic overview looks good! I support the conversion to HT for performance. Thanks again for taking the time to work on this! It's a pretty cool idea. |
haha np. A few guys I work with are also in that boat. |
Currently, each test file checks one setting.
This proposal is to allow multiple settings to be able to be checked at once.
This would be beneficial for things like advanced settings or other large groups of settings that could be pulled all at once.
Personally, when you run Invoke-Vester, you are running it against hosts, clusters, etc. that should all be the same. Therefore, a new Config.json file should be specified for environments/hosts/clusters/etc. that are different. By grouping this large subset of settings all at once, it should:
For my NFS settings test, there are 23 settings, which would result in 23 individual tests.
vCenter has upwards of 840 advanced settings. I don't know if we will add more of those, but I wouldn't want to make 840 individual tests.
Expected Behavior
To entertain the idea of allowing multiple tests per test file
Current Behavior
One test per test file
VesterTemplate.Tests.ps1 is more or less expecting one test (currently you get one failed test and you are out of the Try/Catch loop; more on this later).
Possible Solution
Gist link to possible solution
Test File - $Desired
$Desired will result in the value from the Config.json being imported as a 'PSCustomObject'. This is just how the ConvertFrom-Json cmdlet works. (In the gist we convert $Desired into a 'hashtable' so we can compare it with $Actual.)
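As a minimal sketch of that conversion (assuming Windows PowerShell 5.x, where ConvertFrom-Json has no -AsHashtable switch; the setting names here are examples, not the gist's contents):

```powershell
# ConvertFrom-Json always emits a PSCustomObject, so walk its
# properties to build a hashtable we can compare against $Actual
$Json = '{ "NFS.HeartbeatFrequency": "12", "NFS.MaxVolumes": "256" }'
$Desired = $Json | ConvertFrom-Json

$DesiredHash = @{}
foreach ($Property in $Desired.PSObject.Properties) {
    $DesiredHash[$Property.Name] = $Property.Value
}

# On PowerShell Core 6+ the same result is a single call:
# $DesiredHash = $Json | ConvertFrom-Json -AsHashtable
```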
Test File - $Type and $Actual
Quick side note: from my testing, $Actual should result in an output of a hashtable (meaning only 'key/name' and 'value'). I have $Actual result in an [ordered] hashtable. Note: $Type can be 'hashtable' or 'PSCustomObject' in this scenario; it doesn't matter, but I used 'hashtable' for consistency. Which results in Config.json files like this:
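The original example appears to have been an image; a hedged sketch of what such a hashtable-style Config.json fragment might look like (the scope key and setting names/values are illustrative, not the gist's actual contents):

```json
{
  "host": {
    "nfsadvancedsettings": {
      "NFS.HeartbeatDelta": "5",
      "NFS.HeartbeatFrequency": "12",
      "NFS.MaxVolumes": "256"
    }
  }
}
```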
If we made the $Actual output a 'PSCustomObject' using just this:

```powershell
Get-AdvancedSetting -Entity $Object | Where-Object {$_.Name -Match "NFS"} | Sort Name | Select Name,Value
```

The Config.json output looks like this:
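For comparison, a sketch of the PSCustomObject-style output, i.e. what an array of Name/Value pairs looks like once serialized to JSON (setting names again illustrative):

```json
[
  { "Name": "NFS.HeartbeatFrequency", "Value": "12" },
  { "Name": "NFS.MaxVolumes", "Value": "256" }
]
```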
Note: $Type MUST BE 'PSCustomObject' in this scenario, or the results will be null.
Test File - $Fix - AKA the Magic
One line to rule them all (in my scenario, and probably most that will use advanced settings):

```powershell
Get-AdvancedSetting -Entity $Object -Name $DesiredObject.Name | Set-AdvancedSetting -Value $DesiredObject.Value -Confirm:$FALSE
```
We will get into this more below, but when the test file is dot-sourced, $Fix can be run, and it references $DesiredObject for whatever the current value is (we loop through all of the $DesiredObjects individually). So when a (multiple) test is run, it verifies $DesiredObject is equal to $ActualObject. If it isn't and you use -Remediate, it will push the current $DesiredObject.Value to the specific $DesiredObject.Name setting.
Now onto the VesterTemplate.Tests.ps1
In my gist I have a few changes that I have issues open for, but regardless: everything is basically the same until line 104. I added an if/else that says: if $Desired is a 'PSCustomObject', it's a test file with multiple tests in it; else, do what we have been doing.
Pull in the results from $Actual
We get the results from $Actual and convert them to a 'PSCustomObject' (so we can compare them with $Desired, as it too is a 'PSCustomObject').
Loop through all the objects in $Desired
First thing is I had to move the It in here. Otherwise, after the very first test that failed, Pester would no longer proceed with the rest of the tests; it just considered them all to be one large test, instead of individual ones.
Note: I added : $($DesiredObject.Name) after Title in the It clause so we could better see the individual test names.
First thing in the loop is to get the value from $ActualObjects (the 'PSCustomObject' from $Actual) that has the same name as the current $DesiredObject (the object we are checking against currently).
Now that we have our $DesiredObject and $ActualObject: if $DesiredObject is null, we still want to make sure the actual value is null; else, we compare the two objects. Since they are both 'PSCustomObjects', it runs Compare-Object with -Property Name,Value specified.
Everything else is the same: if Compare-Object fails, it goes into the Catch block.
(I think that covers everything; it's kind of late here after writing all this.)
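Putting the pieces above together, a hedged sketch of what the multi-test branch in VesterTemplate.Tests.ps1 might look like (illustrative only; variable names follow this write-up, and the exact Pester/remediation plumbing in the gist may differ):

```powershell
If ($Desired -is [PSCustomObject]) {
    # Multiple tests per file: $Actual is expected to return Name/Value pairs
    $ActualObjects = & $Actual | Select-Object Name, Value

    ForEach ($DesiredObject in $Desired) {
        # It lives inside the loop, so one failure doesn't stop the rest
        # and each setting shows up as its own named test
        It -Name "$Title`: $($DesiredObject.Name)" {
            $ActualObject = $ActualObjects |
                Where-Object { $_.Name -eq $DesiredObject.Name }
            Try {
                If ($null -eq $DesiredObject.Value) {
                    # Desired value is null; the actual value should be too
                    $ActualObject.Value | Should BeNullOrEmpty
                } Else {
                    Compare-Object -ReferenceObject $DesiredObject `
                        -DifferenceObject $ActualObject -Property Name, Value |
                        Should BeNullOrEmpty
                }
            } Catch {
                If ($Remediate) {
                    # $Fix references $DesiredObject (e.g. the
                    # Set-AdvancedSetting one-liner above)
                    & $Fix
                } Else {
                    Throw $_
                }
            }
        }
    }
} Else {
    # Existing single-test behavior, unchanged
}
```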
As a side note, I have Write-Host in there just for testing/debugging; we can for sure remove those.
Steps to Reproduce (for bugs)
Not really a bug
Context
Accomplish: Simplicity of test writing for large subsets of settings
Your Environment