Added tSQLt.DropAllClasses.ssp #13

Open
wants to merge 3 commits into main
Conversation

NReilingh

This procedure runs DropClass on all schemas that are marked with a tSQLt.TestClass extended property. This can be used to easily reset your development environment, especially for those of us who make heavy use of query shortcuts (like RunAll) in lieu of GUI tools.

I've included some tests and added to the buildorder.txt file, but I haven't tested a full project build — not really sure how I would go about doing that anyway.
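For context, the approach described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code: a cursor over `sys.extended_properties` finds every schema tagged as a test class and hands it to `tSQLt.DropClass`.

```sql
-- Sketch (assumed names/structure): drop every schema marked with the
-- tSQLt.TestClass extended property by calling tSQLt.DropClass on each.
CREATE PROCEDURE tSQLt.DropAllClasses
AS
BEGIN
  DECLARE @ClassName NVARCHAR(MAX);

  DECLARE ClassCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT SCHEMA_NAME(major_id)
      FROM sys.extended_properties
     WHERE class_desc = 'SCHEMA'
       AND name = 'tSQLt.TestClass';

  OPEN ClassCursor;
  FETCH NEXT FROM ClassCursor INTO @ClassName;
  WHILE @@FETCH_STATUS = 0
  BEGIN
    EXEC tSQLt.DropClass @ClassName = @ClassName;
    FETCH NEXT FROM ClassCursor INTO @ClassName;
  END;
  CLOSE ClassCursor;
  DEALLOCATE ClassCursor;
END;
```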

@mbt1
Collaborator

mbt1 commented Nov 27, 2016

Nick, thanks for the submission. Could I ask you to do the following changes:

  1. Use the tSQLt.TestClasses view instead of reverse engineering the extended properties.
  2. In the test(s), fake that view and spy DropClass. (That way you can test that the actual procedure is called and prevent code duplication that could lead to a brittle code base.)
  3. Write three tests in total, following the 0-1-some heuristic. That means one test where the (faked) TestClasses view returns 0 rows, one where it contains one row, and one where it contains a few rows (like 3). (You don't need to test that no non-test-class is dropped, as the tests of TestClasses already cover that.)
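The first suggestion amounts to something like the following. This is a hedged sketch, not the PR's final code; `tSQLt.TestClasses` exposes a `Name` column for each test-class schema, which replaces the hand-rolled extended-property query:

```sql
-- Sketch of the suggested rewrite: drive the loop from the
-- tSQLt.TestClasses view rather than querying sys.extended_properties.
CREATE PROCEDURE tSQLt.DropAllClasses
AS
BEGIN
  DECLARE @ClassName NVARCHAR(MAX);

  DECLARE ClassCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Name FROM tSQLt.TestClasses;

  OPEN ClassCursor;
  FETCH NEXT FROM ClassCursor INTO @ClassName;
  WHILE @@FETCH_STATUS = 0
  BEGIN
    EXEC tSQLt.DropClass @ClassName = @ClassName;
    FETCH NEXT FROM ClassCursor INTO @ClassName;
  END;
  CLOSE ClassCursor;
  DEALLOCATE ClassCursor;
END;
```

Because the tests fake the view and spy DropClass, they exercise this loop without touching real schemas.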

@NReilingh
Author

@mbt1 - Thank you for the advice, Sebastian! I've modified the ssp to use that view — silly that I didn't go looking for it before.

I ran into some trouble when writing tests; SQL Server 2012 would not let me create a stored procedure with raw INSERT INTO tSQLt.TestClasses statements, complaining: "Ad-hoc updates to system catalogs are not allowed". Obviously when the table is faked this isn't a problem, so by wrapping those statements in an EXEC() I seem to have worked around that issue. Not sure if there's a better solution though.
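The workaround described above looks roughly like this. Because tSQLt.TestClasses is a view over system catalogs, SQL Server 2012 refuses to even compile a procedure containing a direct INSERT against it; deferring the statement into dynamic SQL postpones that check until run time, by which point FakeTable has replaced the view with a writable table (the class name below is illustrative):

```sql
-- Inside a test, after the view has been faked:
EXEC tSQLt.FakeTable @TableName = 'tSQLt.TestClasses';

-- A raw INSERT here would fail at CREATE PROCEDURE time with
-- "Ad-hoc updates to system catalogs are not allowed", so the
-- statement is wrapped in EXEC() and resolved only at run time.
EXEC ('INSERT INTO tSQLt.TestClasses (Name) VALUES (''MyTestClass'');');
```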

Also, I haven't written a ton of tests using SpyProcedure yet, so I'm happy to receive feedback on those if you have any comments.

@mbt1
Collaborator

mbt1 commented Nov 28, 2016

Thanks for the quick turnaround. The tests look good. But I am seeing two anti-patterns:

  1. Using AssertEqualsTable to compare a #temptable and a normal table produces fragile tests. The result depends on the collation setting of the server, so while the test might succeed in some environments, it will fail in others.

  2. Using SELECT * to prepare for a table compare. You always want to specify the columns you are comparing on.

You can solve both issues by doing a SELECT ClassName INTO #actual FROM ... and then comparing #actual to #expected.

The other thing you could consider is loading #expected from the faked TestClasses view, so you don't repeat the same insert statement twice. But that is a matter of taste; I always try to remove as much duplication as possible without hampering readability.
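Putting both suggestions together, the body of the "some rows" test might look like this. This is a hedged sketch with illustrative class names; it relies on the tSQLt convention that SpyProcedure records each call in a `<procedure>_SpyProcedureLog` table whose columns mirror the spied procedure's parameters (here, DropClass's @ClassName):

```sql
-- Arrange: fake the view, spy the procedure under test's dependency.
EXEC tSQLt.FakeTable @TableName = 'tSQLt.TestClasses';
EXEC tSQLt.SpyProcedure @ProcedureName = 'tSQLt.DropClass';

-- Wrapped in EXEC() for the SQL Server 2012 compile issue noted earlier.
EXEC ('INSERT INTO tSQLt.TestClasses (Name)
       VALUES (''ClassA''), (''ClassB''), (''ClassC'');');

-- Act.
EXEC tSQLt.DropAllClasses;

-- Assert: project only the compared column into #Actual, and build
-- #Expected from the same faked view so the insert isn't repeated.
SELECT ClassName INTO #Actual FROM tSQLt.DropClass_SpyProcedureLog;
SELECT Name AS ClassName INTO #Expected FROM tSQLt.TestClasses;

EXEC tSQLt.AssertEqualsTable @Expected = '#Expected', @Actual = '#Actual';
```

Comparing two #temptables built the same way sidesteps the server-collation fragility described above.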

One final note, also a matter of taste: I don't like the SetUp functionality. I prefer to be able to see what is going on when looking at a test, without having to hunt down whether there is a SetUp procedure. But you also don't want to repeat that code in every test, particularly as you might have to add other fakes or spies as the procedure under test evolves. The pattern I use is to write my own setup procedure, call it something like TestClass.FakeAndSpyAllObjects, and then call it explicitly in every test. A little more clarity, bought with a little more duplication. It also makes it easier to deal with the odd test that requires a different setup; something I run into quite often.
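The explicit-setup pattern described above is small in practice. A minimal sketch, assuming a test class named DropAllClassesTests (the procedure name comes from the comment; the body is assumed):

```sql
-- An ordinary procedure, not the magic SetUp name, so each test's
-- first line shows exactly which fakes and spies are in effect.
CREATE PROCEDURE DropAllClassesTests.FakeAndSpyAllObjects
AS
BEGIN
  EXEC tSQLt.FakeTable @TableName = 'tSQLt.TestClasses';
  EXEC tSQLt.SpyProcedure @ProcedureName = 'tSQLt.DropClass';
END;

-- Each test then starts with:
-- EXEC DropAllClassesTests.FakeAndSpyAllObjects;
```

A test needing a different arrangement simply skips the call and sets up its own fakes inline.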

@NReilingh
Author

@mbt1 The feedback is much appreciated! Is this more what you had in mind?

I agree about SetUp — I just got bitten by that in one of my own projects, and I like your idea of an explicitly called setup procedure to aid readability of each individual test. I decided to just bring the two fake/spy calls into each test here since it's not an overly large bit of material.
