
Tests that test for completion & correctness #20

Open
pshriwise opened this issue Jan 28, 2016 · 6 comments
Comments

@pshriwise
Member

Right now, the tests in mcnp2cad seem to consist of a directory holding a set of test input files which mcnp2cad should be able to complete without error. They do not, however, check whether the resulting file is correct.

I'd like to propose that we add SAT files representative of each input file to this directory (or a subdirectory). Then, after running mcnp2cad on the test input files, we can compare the known SAT file to the mcnp2cad-generated SAT file by subtracting/intersecting one set of volumes from the other in CGM/iGeom. Success would be a result in which no entities remain; failure would be any remaining entities at all.
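To illustrate the idea only (this is not the actual CGM/iGeom API): the check amounts to "subtract the reference geometry from the converted geometry and assert the difference is empty." A minimal sketch in plain Python, with axis-aligned boxes standing in for solid volumes:

```python
# Toy illustration of the proposed correctness check: the difference
# "converted minus reference" must be empty for the test to pass.
# Axis-aligned boxes (x0, y0, z0, x1, y1, z1) stand in for CGM volumes;
# the real test would use iGeom boolean subtraction instead.

def difference_is_empty(a, b):
    """True if box a minus box b is empty, i.e. a is contained in b."""
    ax0, ay0, az0, ax1, ay1, az1 = a
    bx0, by0, bz0, bx1, by1, bz1 = b
    return (bx0 <= ax0 and ax1 <= bx1 and
            by0 <= ay0 and ay1 <= by1 and
            bz0 <= az0 and az1 <= bz1)

def volumes_match(converted, reference):
    """Both directions of the subtraction must be empty for a match."""
    return (difference_is_empty(converted, reference) and
            difference_is_empty(reference, converted))
```

The two-way subtraction matters: an empty `converted - reference` alone would also pass when the converter produces too little geometry.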

Thoughts?

@gonuke
Member

gonuke commented Jan 29, 2016

We'd be better off with some small, carefully designed geometries that we query with CGM after conversion. That way we don't have to worry about inconsistencies between ACIS versions, or even need to be bound to ACIS at all.

@pshriwise
Member Author

For the relatively simple geometries already in the test folder, wouldn't the import of a SAT file still work despite different ACIS versions?

@gonuke
Member

gonuke commented Jan 29, 2016

Sorry - I thought you just wanted to compare files...

What you suggest may work, but if the geometries are sufficiently simple, we can hard-code the things we want to compare rather than comparing against a file, thus insulating the process from file versions and/or the choice of solid modeling engine.

@pshriwise
Member Author

What I'd like to avoid (if possible) is comparing a handful of hard-coded values; I'd rather rely on the boolean operations available to us in CGM. A robust subtraction/intersection process would cover any hard-coded tests we might want to perform.

What we might do in order to insulate the testing process from file versions or underlying solid modeling engines is to write functions which create the expected geometry using iGeom/CGMA in the same program and then do this subtraction.

The way I currently envision it: to create a new test, you would simply add the new card and then a function in the test suite corresponding to that card. The test suite will then run mcnp2cad on the new test card, run the corresponding function to create the expected geometry, intelligently subtract the two, and then test for a final NULL set. @makeclean expressed some concern that the subtraction process might not be robust due to numerical tolerance issues. I'm going to investigate that a little bit today, but I expect that, at least for the simple geometries involved in the unit tests, it should be sufficient to provide the expected results.
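The card-plus-function workflow described above could be sketched roughly as follows. All names here (`expected_geometry`, `run_converter`, the card filename) are hypothetical, and sets of tagged tuples stand in for running mcnp2cad and building real CGM entity sets:

```python
# Sketch of the envisioned test harness: each test card registers a
# function that builds the expected geometry; the harness compares it
# against the converter's output and demands an empty difference.

EXPECTED_BUILDERS = {}

def expected_geometry(card_name):
    """Decorator registering the expected-geometry builder for a card."""
    def register(fn):
        EXPECTED_BUILDERS[card_name] = fn
        return fn
    return register

@expected_geometry("sphere.i")
def build_sphere():
    # Stand-in for creating an iGeom sphere of radius 5 in code.
    return {("sphere", 5.0)}

def run_converter(card_name):
    """Placeholder for invoking mcnp2cad on the card and loading the result."""
    return {("sphere", 5.0)}

def check_card(card_name):
    converted = run_converter(card_name)
    expected = EXPECTED_BUILDERS[card_name]()
    # "Subtract" both ways: an empty symmetric difference means success.
    return not (converted ^ expected)
```

Adding a test then really is just a new input card plus one registered builder function, as described above.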

@sjackso

sjackso commented Feb 2, 2016

For what it's worth, I agree with @makeclean that "subtract two geometries that should be identical and test for an empty result" would be an invitation for numerical stability problems.

@pshriwise
Member Author

If the same calls are being made to the underlying solid modeling engine, I think the boolean operations should be able to produce the desired null result, but @gonuke made a good point in our last research meeting that if this is the case, then it may be better to simply test that the correct functions are being called with the right arguments. This way we aren't relying on the robustness of the solid modeling engine itself.
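The call-verification alternative mentioned here is straightforward to sketch with Python's `unittest.mock`; `convert_sphere_card` and the engine methods are hypothetical names, not the actual mcnp2cad code path:

```python
# Sketch of the alternative: instead of trusting the engine's boolean
# ops to produce a null set, record the calls the converter makes to
# the solid modeling engine and check them directly.
from unittest.mock import Mock, call

def convert_sphere_card(engine, radius):
    """Stand-in for a converter code path that builds a sphere cell."""
    body = engine.make_sphere(radius)
    engine.imprint(body)

engine = Mock()
convert_sphere_card(engine, 5.0)

# Assert the converter drove the engine with the expected calls.
assert engine.mock_calls[:1] == [call.make_sphere(5.0)]
engine.imprint.assert_called_once()
```

This checks the converter's behavior without ever exercising the engine's numerics, which is exactly the robustness argument made above.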


3 participants