Undetermined runtime of tests due to large non-matching datasets #113
Comments
Interesting case, thanks for reporting it. I knew that NBi was fast at comparing sets of around 36k ... but indeed I've never tested the case where none of the rows would have a match. Regarding your option 2: there is no additional work beyond asserting once the comparison has been done for all the rows. And the display shouldn't use more than the needed rows (you can override the limit of 10), so I can't really optimize this part (at first look). Implementing the timeout was planned, but unfortunately NUnit 2.x doesn't expose this parameter in its "API". So I'd need to implement my own timeout, and it could be tricky. I'll check how I can manage this, but I will probably need time to check the root cause of this issue and find a work-around.
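For context, a home-made timeout could look roughly like the sketch below. This is only an illustration of the general idea, not NBi's actual code; the `CompareSets` call in the usage comment is a hypothetical placeholder.

```csharp
using System;
using System.Threading.Tasks;

public static class TimeoutGuard
{
    // Runs a (potentially long) comparison with an upper bound on its duration.
    // If the comparison does not finish in time, the test fails instead of hanging.
    public static T RunWithTimeout<T>(Func<T> comparison, TimeSpan timeout)
    {
        var task = Task.Run(comparison);
        if (!task.Wait(timeout))
            throw new TimeoutException(
                $"Set comparison did not complete within {timeout.TotalSeconds} seconds.");
        return task.Result;
    }
}

// Hypothetical usage inside a test (CompareSets is a placeholder, not an NBi API):
// var result = TimeoutGuard.RunWithTimeout(() => CompareSets(expected, actual),
//                                          TimeSpan.FromMinutes(5));
```

One caveat that hints at why this is tricky: the underlying task keeps running in the background after the timeout; actually stopping it would require cooperative cancellation (e.g. a CancellationToken checked inside the comparison loop).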
Hi FuegoArtificial, I've tried to reproduce but can't (at the moment). Comparing two datasets with 1,000,000 rows takes 5 seconds on my machine. But I'm not sure I've really understood your case. Do the rows in the two datasets have matching keys or not?
I'll try to get an answer with specifics from my colleague next week. Sorry for the delay!
I think I've got it, but I need to investigate further.
Well, the problem occurs when you need to compare the values of a large set of data, and it is not dependent on the count of differences. So basically all I could do is put a timeout on the test, and for that I'll wait for NUnit 3.0. Anyway, there is room for improving the performance of the method (caching keys, avoiding the usage of Contains when not needed). I'll try to work on this.
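To illustrate the kind of improvement mentioned above (a sketch of the general technique only, not NBi's actual implementation): indexing one set by its keys once, instead of doing a linear Contains scan for every row, turns a roughly O(n²) comparison into roughly O(n).

```csharp
using System.Collections.Generic;
using System.Linq;

public sealed class Row
{
    public string Key { get; set; }
    public string[] Values { get; set; }
}

public static class SetComparer
{
    // Returns the rows of 'expected' whose key appears nowhere in 'actual'.
    // The keys of 'actual' are cached in a HashSet once, so each lookup is O(1)
    // instead of a linear scan over 500,000 rows.
    public static IEnumerable<Row> MissingRows(
        IReadOnlyList<Row> expected, IReadOnlyList<Row> actual)
    {
        var actualKeys = new HashSet<string>(actual.Select(r => r.Key));
        return expected.Where(r => !actualKeys.Contains(r.Key));
    }
}
```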
That's great!
Just to be clear, it's not part of release v1.11, but I've already worked on this and seen a nice improvement. It's included in the beta for v1.12 (with all the features of v1.11). https://github.com/Seddryck/NBi/releases/v1.12-beta
Hi Seddryck,
I would like to report an issue where NBi can run for an unlimited time (6+ hours for a single test). It might be a NUnit issue, though.
If a dataset with columns like key (text), value (text), value (text), value (text), value (text) containing 500,000 rows is compared to a dataset with the same structure and 500,000 rows where no row results in a match, the test can run indefinitely.
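For reference, two such non-matching datasets can be generated with a few lines of C#. This is just a repro sketch using ADO.NET DataTables, not code from an actual test suite; the prefixes and filler values are arbitrary.

```csharp
using System.Data;

public static class ReproData
{
    // Builds a table with a text key column and four text value columns.
    public static DataTable BuildTable(string keyPrefix, int rowCount)
    {
        var table = new DataTable();
        table.Columns.Add("key", typeof(string));
        for (var i = 1; i <= 4; i++)
            table.Columns.Add("value" + i, typeof(string));

        for (var row = 0; row < rowCount; row++)
            table.Rows.Add(keyPrefix + row, "a", "b", "c", "d");

        return table;
    }
}

// Two 500,000-row datasets whose keys never match:
// var expected = ReproData.BuildTable("exp-", 500000);
// var actual   = ReproData.BuildTable("act-", 500000);
```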
I know that this kind of test is far from best practice and does not make much sense. However, this occurred during the development of a test. The test suite run should not "crash". Because no result is produced after the "crash", it can be hard to figure out which test triggered the issue, and no notification is possible.
Can you reproduce the issue or do you need more information?
Ideas:
Neither idea is probably ideal. Perhaps there is a better workaround or approach? :)
Have a great day!
Tilo