Currently failing orso validation #139
Thanks for keeping an eye on this! I was aware that some of the ORSO validations were failing - I'll be looking into these before we make a release. We just completely switched from C-extensions to numba for all the reflectivity calculations and convolutions, so it is a big change and we will be running many tests before release. In particular, I may need to pay attention to the combination of QProbe and oversampling, which doesn't really come up for real experimental datasets but is important for testing.
The tests are now passing - though there is still a warning for QProbe oversampling, so I'll leave this open for now.
Great. The tolerances for the resolution smearing test are interesting to think about. I'm not sure quite how tight/loose they should be before they don't make sense.
There is something funny going on with the resolution smearing... just by looking at it, it doesn't seem like the results make sense for the very last point in test5.txt |
1-sigma is used in the validation. I thought test5 was passing. I've seen that last point weirdness before, but it wasn't always there. |
I think I generated the test dataset with a super fine mesh. So the question could be how far an NR smear should go in the tails. |
You are correct, of course - with a very fine mesh and a manual convolution loop, I get back very nearly the "accepted" R. It never hurts to check! The refl1d oversampling method needs another look to find out why it is not generating a sufficient basis even when the number of points is the same as in your manual oversampling.
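For reference, a manual convolution loop of the kind described above might look like this. This is a minimal sketch, not the validation code itself: it assumes Gaussian resolution with 1-sigma widths `dq`, and an ideal R(Q) already computed on a very fine mesh.

```python
import numpy as np

def smear_brute_force(q, dq, q_fine, r_fine):
    """Brute-force Gaussian resolution smearing of a theory curve.

    q, dq  : measured Q points and their 1-sigma resolution widths
    q_fine : very fine Q mesh on which the ideal R(Q) was computed
    r_fine : ideal (unsmeared) reflectivity evaluated on q_fine
    """
    r_smeared = np.empty_like(q, dtype=float)
    for i, (qi, dqi) in enumerate(zip(q, dq)):
        # Gaussian kernel centred on the measured point, width dq[i]
        w = np.exp(-0.5 * ((q_fine - qi) / dqi) ** 2)
        # normalized weighted average over the fine mesh
        r_smeared[i] = np.trapz(w * r_fine, q_fine) / np.trapz(w, q_fine)
    return r_smeared
```

With a sufficiently fine mesh this converges to the "accepted" smeared R, which makes it a useful independent check on any oversampling scheme.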
Whilst great for avoiding aliasing, surely the random nature of the oversampling means effects like this will be more likely, at least at lower oversampling factors. These questions remain:

Typically, what kind of probability distribution does a monochromatic instrument have for the wavelength? Uniform/Gaussian/Trapezoidal? What kind of width?
I agree that we should address your questions. It is probably a good topic for an ORSO Analysis working group meeting - we have not had enough of those. To answer some of your questions in short: when evaluating real convolution implementations for fitting, the tolerances that matter are all relative to the uncertainty in the measured R. E.g. at high Q, your dR/R will be much higher than it is at low Q. The tolerance for real fits is not just (calculated_convolution - ideal_convolution) / calculated_convolution but should be (calculated_convolution - ideal_convolution) / dR. For non-Gaussian kernels I would think linear segments (piecewise trapezoid) or box models (pointwise definition) would be pretty straightforward. You have more experience with this than we do - if you can share your particular instrument's NGK shape, that would be a good way to seed the conversation.
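To make the tolerance point concrete, here is a sketch of an acceptance test scaled by dR rather than by R itself. The function name and `n_sigma` parameter are illustrative, not the validation code's actual API.

```python
import numpy as np

def within_tolerance(r_calc, r_ideal, dr, n_sigma=1.0):
    """Accept a convolution if its error is small compared to the
    measurement uncertainty dR at every point.

    Dividing by dR instead of by R avoids over-penalizing high-Q
    points, where dR/R is much larger than at low Q.
    """
    return bool(np.all(np.abs(r_calc - r_ideal) <= n_sigma * dr))
```

Under this criterion, the same absolute convolution error can pass at high Q (where dR is large) and fail near the critical edge (where dR is small).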
Does this mean that during a fitting process there should be a learning process where the oversampling is readjusted as the difference between calculated and ideal changes? For the NGK, see something like Fig. 6b in this paper (that paper was written whilst on sabbatical at NIST in 2013). The resolution function for our instruments is a convolution of a few Uniform distributions (wavelength) and a Trapezoid (angular). Sometimes the main wavelength contribution (dlambda/lambda = 8) is much broader than the angular part (dtheta/theta = 3.3), which can lead to some funny resolution kernels.
I've been investigating this issue - I'm beginning to lean toward using a simple linspace as our Q-basis for convolution, like you do in the validation code. It seems to be more efficient in several ways, and it's already what we offer for extra oversampling near the critical edge.
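As a sketch of that idea (illustrative only, not refl1d's actual implementation): build one uniform Q grid spanning the data plus a few sigma of padding, evaluate the theory once on it, and reuse that shared basis for every measured point.

```python
import numpy as np

def smear_on_linspace(q, dq, theory, n=2001, pad_sigma=3.5):
    """Gaussian smearing on a single shared linspace Q basis.

    q, dq  : measured Q and 1-sigma resolution widths
    theory : callable returning the ideal R(Q) for an array of Q values
    """
    # One deterministic basis covering all points, padded so the
    # kernel tails are not truncated at the ends of the data range.
    q_basis = np.linspace(q.min() - pad_sigma * dq.max(),
                          q.max() + pad_sigma * dq.max(), n)
    r_basis = theory(q_basis)  # evaluate the model once
    out = np.empty_like(q, dtype=float)
    for i, (qi, dqi) in enumerate(zip(q, dq)):
        w = np.exp(-0.5 * ((q_basis - qi) / dqi) ** 2)
        out[i] = np.trapz(w * r_basis, q_basis) / np.trapz(w, q_basis)
    return out
```

Unlike random oversampling, the density of a linspace basis is uniform and reproducible, so the adequacy of the basis can be checked once rather than varying from run to run.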
@bmaranville @pkienzle changes made to refl1d last week have caused the orso validation run to start failing for refl1d.
Tests 0 and 6 fail when using `refl1d.reflectivity.reflectivity_amplitude`, but pass with `refl1d.abeles.refl`. Those tests aren't using any resolution smearing.

Test 4 fails when using `refl1d.abeles.refl`; this one is a resolution smearing test. The test script tries a QProbe oversampling with a factor of 21, and if that fails tries to create its own oversampling.

Can you look into this issue and see whether it's truly an issue with refl1d, or with the orso validation code? Some of the divergences from the 'known good' arrays are quite large.