FFT refactoring #310
Conversation
```python
def test_deferred_3d():
    check_3d_c2c(N=(64, 40, 100))
```
Why are we losing the larger test cases?
They are removed because they don't increase test coverage. All tests in the testing mode exercise the deferred path, and there's a separate mode for testing the eager path. Tests themselves don't need to be aware of what testing mode they run in. As far as I know, the FFT task is insensitive to the input size, so passing two different input sizes doesn't seem to add much other than testing cuFFT in different ways, which I believe should be done by the cuFFT test suite.
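The "tests don't need to know the mode" point can be sketched with a mode-agnostic test body. This is illustrative only: `check_3d_c2c` here is a hypothetical stand-in that validates a 3D complex-to-complex FFT by round-tripping through the inverse with plain NumPy, not the actual cuNumeric helper.

```python
import numpy as np

def check_3d_c2c(N):
    # Hypothetical stand-in for the test helper: validate a 3D
    # complex-to-complex FFT by round-tripping through the inverse.
    # The body has no notion of deferred vs. eager execution; in
    # cuNumeric's test setup, the harness selects the mode outside
    # the test itself.
    rng = np.random.default_rng(0)
    a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    out = np.fft.ifftn(np.fft.fftn(a))
    assert np.allclose(out, a)

def test_deferred_3d():
    check_3d_c2c(N=(16, 10, 12))

test_deferred_3d()
```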
I agree that cuFFT runs its own test cases for different sizes. However, numerical errors due to incorrect input parameters tend to become more evident with larger test cases. I question if the resources you are saving by not running these cases are worth it.
In any case, I'll wait for an appropriate channel to discuss feedback.
> However, numerical errors due to incorrect input parameters tend to become more evident with larger test cases
If there's a source of numerical errors other than cuFFT itself, I guess we could add back some of the tests with big inputs, but even then, I'd rather isolate them from functionality tests that exercise various parameters. And I don't think there's much computation here that could either worsen or mask the numerical errors from cuFFT.

Also, "numerical errors due to incorrect input parameters" doesn't make sense to me. If the input parameters are "incorrect", they should be caught rather than passed on to the rest of the computation. (I suspect you meant "suboptimal input parameters"?) If for some reason you want to do additional numerical testing that the existing cuFFT test suite doesn't cover, I guess it's fine to put it here, but I feel that should rather be done against cuFFT itself...
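As a quick illustration of the size argument (plain NumPy rather than cuFFT, and a round-trip residual as a rough proxy for accumulated floating-point error, not a statement about either library's accuracy):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (64, 4096):
    x = rng.standard_normal(n)
    # Forward + inverse FFT should reproduce the input up to rounding;
    # the max residual is a cheap proxy for accumulated numerical error.
    # It stays tiny for both sizes, but a genuine parameter bug (e.g. a
    # wrong stride) would typically blow this up far past rounding noise.
    err = np.max(np.abs(np.fft.ifft(np.fft.fft(x)).real - x))
    assert err < 1e-10, (n, err)
```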
```diff
@@ -98,8 +98,8 @@ std::vector<StoreMapping> CuNumericMapper::store_mappings(
   auto& inputs  = task.inputs();
   auto& outputs = task.outputs();
   mappings.push_back(StoreMapping::default_mapping(inputs[0], options.front()));
-  mappings.back().policy.exact = true;
```
I assume this change was related to this comment:

> removing the exact mapping constraint for FFT inputs (which wasn't doing exactly what it was meant to do)

Can you expand on that? What was wrong with the way it was done before?
The dimension ordering doesn't take the store transforms into account and is always absolute with respect to the original coordinate space. Because of this, the `exact` constraint alone cannot guarantee the exact instance in C order that cuFFT expects, if the store was transposed. Currently, the only way to check this is to make an affine accessor with the affine transform and call `is_dense_row_major`. Once we start supporting relative dimension ordering, I guess we could put back the `exact` constraint and revert the change in the FFT task, but even then the `exact` constraint is unnecessary for certain types of FFTs we are doing; for example, forward and backward FFTs over axes make copies of the input internally, so the `exact` constraint only makes a copy that would be thrown away immediately: https://github.com/nv-legate/cunumeric/blob/branch-22.05/src/cunumeric/fft/fft.cu#L262
FYI, the exact constraint we would potentially want is something like this:

```cpp
// scalars[2] being true means we're doing FFT over axes, which makes a copy inside the task
if (!task.scalars()[2].value<bool>()) mappings.back().policy.exact = true;
```
This PR makes the following changes:

- … (`FFT_C2C` and others) to reduce repetitions