Currently the SANS workflow can by default compute `BeamCenter` by looking at the center of mass of the sample data. While this is nice, the scientists have previously indicated that this is not how things operate in practice; instead, the beam center would be computed once and then reused. The reason I am bringing this up is a different one, though, as explained in the following.
The current approach adds a chain of dependencies in the workflow graph which I fear will cause more trouble in the long run. In simplified terms, the dependency is:
- `BeamCenter` -> `CalibratedData` -> `SolidAngle`: the solid-angle computation needs sample positions and detector positions, which we only have after setting the beam center.
- Because the beam center can be computed from detector counts, setting the beam center (producing `CalibratedData`) happens after loading event data, even if the beam center is actually set to a fixed value.
- An implicit dependency on the loaded masks exists as well. This implies that if masks were not obtained from a file but, e.g., computed based on positions, we would have a cyclic dependency (`DetectorMasks` -> `MaskedData` -> `CalibratedData` -> `DetectorMasks`).
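To make the chain concrete, here is a minimal sketch of the current graph shape written as sciline-style providers; all type and function names are illustrative placeholders rather than the actual esssans code:

```python
from typing import NewType

# Placeholder domain types (illustrative, not the real workflow types).
RawData = NewType('RawData', object)
DetectorMasks = NewType('DetectorMasks', object)
MaskedData = NewType('MaskedData', object)
BeamCenter = NewType('BeamCenter', object)
CalibratedData = NewType('CalibratedData', object)
SolidAngle = NewType('SolidAngle', object)


def masked_data(data: RawData, masks: DetectorMasks) -> MaskedData:
    """Apply detector masks to the loaded event data."""
    ...


def calibrated_data(data: MaskedData, center: BeamCenter) -> CalibratedData:
    """Set the beam center, i.e., shift positions; this runs after loading,
    even if the beam center is just a fixed value."""
    ...


def solid_angle(data: CalibratedData) -> SolidAngle:
    """Needs sample and detector positions, so it sits downstream of the
    beam center and, implicitly, of the masks and the loaded events."""
    ...


# If DetectorMasks were themselves computed from (calibrated) positions, e.g.
#     def detector_masks(data: CalibratedData) -> DetectorMasks: ...
# this would close the cycle DetectorMasks -> MaskedData -> CalibratedData -> DetectorMasks.
```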
The full dependency graph looks as follows (after #153, which made the problem more obvious):

*(workflow graph figure)*
An example where this becomes more problematic is chunked processing of files, as well as live reduction: the beam center would be set once, but with the current workflow graph we would have to re-compute `SolidAngle` for every chunk. Even when only a single chunk is processed, the graph-level parallelism of the current approach is worse than it has to be.
Another potential example is the TOF computation: it might be that we need the calibrated positions for precise frame unwrapping, which is currently not possible since calibration happens after TOF is defined.
I therefore propose to restructure this:

- Pixel calibration should be done before loading event data.
- If pixel calibration depends on the event data, I think this would simply be run as a separate call to the workflow, i.e., computing the calibration is separate from setting/using it (see the sketch after this list).
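As a rough illustration of that call pattern, here is a minimal sketch assuming sciline's `Pipeline` API; the domain types and providers are placeholders, not the actual esssans workflow:

```python
from typing import NewType

import sciline

# Placeholder domain types (illustrative, not the real workflow types).
BeamCenter = NewType('BeamCenter', float)
CalibratedPositions = NewType('CalibratedPositions', float)
SolidAngle = NewType('SolidAngle', float)


def calibrated_positions(center: BeamCenter) -> CalibratedPositions:
    """Pixel calibration uses only geometry and the beam-center parameter,
    so it no longer depends on the loaded event data."""
    return CalibratedPositions(1.0 + center)


def solid_angle(positions: CalibratedPositions) -> SolidAngle:
    """Solid angle follows directly from the calibrated positions."""
    return SolidAngle(1.0 / positions)


pipeline = sciline.Pipeline([calibrated_positions, solid_angle])

# Step 1 (optional, separate call): compute the beam center from a dedicated
# run, or simply take a known fixed value.
center = BeamCenter(0.1)

# Step 2: set it as a plain parameter; SolidAngle can now be computed once and
# reused for every file chunk or live-reduction update.
pipeline[BeamCenter] = center
print(pipeline.compute(SolidAngle))
```

The point is that `SolidAngle` now sits upstream of the event data, so it is computed once per calibration rather than once per chunk.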
This would look as follows:

*(restructured workflow graph figure)*
Here `SolidAngle` does not contain pixel masks yet; we would probably add `MaskedSolidAngle`, which inserts these. At first I was skeptical about this, since I felt it would risk "forgetting" to mask the solid angle, which would yield a totally wrong normalization term. However, I now think it will actually become more obvious that we use/need masks in this term by having the explicit `MaskedSolidAngle`. Previously this was rather implicit and only obvious to an informed reader of the workflow graph (since `SolidAngle` used to depend on `CalibratedMaskedData`, where only the "Masked" in the name hinted at the mask dependency).
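For illustration, the extra provider could look roughly like this (again with placeholder types, not the actual implementation):

```python
from typing import NewType

SolidAngle = NewType('SolidAngle', object)          # geometry only, no masks
DetectorMasks = NewType('DetectorMasks', object)
MaskedSolidAngle = NewType('MaskedSolidAngle', object)


def masked_solid_angle(
    solid_angle: SolidAngle, masks: DetectorMasks
) -> MaskedSolidAngle:
    """Apply the pixel masks to the solid angle. Having this as an explicit
    step makes it visible that the normalization term depends on the masks."""
    ...
```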
Also note #62, which indicates that the current approach may be flawed anyway.