
Improve performance of "no-broadcasting-needed" scenario in &array + &array operation #965

Merged: 3 commits merged into rust-ndarray:master on Apr 2, 2021

Conversation

@SparrowLii (Contributor) commented Apr 1, 2021

This PR does the following:

  1. Add a method to_dimensionality, which creates an array view from an array with the same shape but a different dimensionality type.
  2. Use to_dimensionality to avoid unnecessary calls to broadcast in the &array + &array operation.

Updates #936
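
For context, here is a small hypothetical example (not part of the PR) of the operation being optimized: with co-broadcasting from #936, &a + &b works even when the shapes differ, and this PR adds a fast path for the common case where the shapes already match.

```rust
use ndarray::array;

fn main() {
    let a = array![[1., 2.], [3., 4.]];
    let same = array![[10., 20.], [30., 40.]];
    let row = array![100., 200.];

    // Same shape: no broadcasting needed (the case this PR speeds up).
    assert_eq!(&a + &same, array![[11., 22.], [33., 44.]]);

    // Different shapes: co-broadcasting broadcasts `row` against `a`.
    assert_eq!(&a + &row, array![[101., 202.], [103., 204.]]);
}
```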

src/impl_ops.rs Outdated
@@ -179,7 +179,13 @@ where
 {
     type Output = Array<A, <D as DimMax<E>>::Output>;
     fn $mth(self, rhs: &'a ArrayBase<S2, E>) -> Self::Output {
-        let (lhs, rhs) = self.broadcast_with(rhs).unwrap();
+        let (lhs, rhs) = if self.ndim() == rhs.ndim() && self.shape() == rhs.shape() {
+            let lhs = self.to_dimensionality::<<D as DimMax<E>>::Output>().unwrap();
Member:
These to_dimensionality calls seem redundant with the ones already in broadcast_with. Is there a reason this PR shouldn't just do one or the other, not both?

If it were just the calls in this method, it looks like self.view().into_dimensionality::<...>() would be enough here, i.e. using the existing method.
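
As a small standalone illustration (hypothetical, not taken from the PR) of the existing method the reviewer refers to: into_dimensionality converts a view to another dimensionality type when the number of axes matches.

```rust
use ndarray::{array, ArrayView2};

fn main() {
    // Start from a dynamic-dimensional array and reinterpret its view as 2-D.
    let a = array![[1., 2.], [3., 4.]].into_dyn();
    let v: ArrayView2<'_, f64> = a.view().into_dimensionality().unwrap();
    assert_eq!(v.shape(), &[2, 2]);
    // If the number of axes did not match, into_dimensionality would return Err.
}
```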

@SparrowLii (Contributor, Author), Apr 2, 2021:
That's right. We should use view().into_dimensionality() here. It is faster in the benchmark test (though I don't know why).

@SparrowLii (Contributor, Author), Apr 2, 2021:
Adding the same-shape detection in impl_ops lets us avoid executing co_broadcast inside broadcast_with. I think it is worth keeping.

/// Create an array view from an array with the same shape, but different dimensionality
/// type. Returns None if the number of axes does not match.
#[inline]
pub(crate) fn to_dimensionality<D2>(&self) -> Option<ArrayView<'_, A, D2>>
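
For illustration only, a minimal sketch of equivalent behavior written as a free function against the public API; this is an assumption, not necessarily how the PR implements the method internally.

```rust
use ndarray::{ArrayBase, ArrayView, Data, Dimension};

// Hypothetical sketch: build a view with dimensionality type D2 if the
// number of axes matches, otherwise return None.
fn to_dimensionality_sketch<'a, A, S, D, D2>(arr: &'a ArrayBase<S, D>) -> Option<ArrayView<'a, A, D2>>
where
    S: Data<Elem = A>,
    D: Dimension,
    D2: Dimension,
{
    arr.view().into_dimensionality::<D2>().ok()
}
```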
Member:
If this method is added, it should be just next to (after) into_dimensionality in this file.

@bluss (Member) commented Apr 2, 2021

Is there a benchmark for this case, and does it show an improvement? Benchmarks for non-dyn dimensions would be the most interesting. 🙂

@SparrowLii (Contributor, Author) commented Apr 2, 2021

I ran benchmarks for two cases: 1. no broadcasting needed; 2. only one side needs broadcasting (see bench1.rs). The results show that view().into_dimensionality() is faster than to_dimensionality(), and that checking whether broadcasting is needed before calling broadcast_with significantly increases the speed.
The following are the benchmark results:

no broadcasting needed (ns/iter)
origin: 441 430 434 436 453, average 438.8
broadcast_with using view().into_dimensionality(): 406 411 410 410 404, average 407.8
broadcast_with using to_dimensionality(): 423 404 420 410 402, average 411.8
no broadcast_with, using view().into_dimensionality(): 384 388 392 380 386, average 386
no broadcast_with, using to_dimensionality(): 389 394 399 386 390, average 391.6

one side broadcast (ns/iter)
origin: 477 473 468 491 479, average 477.6
broadcast_with using view().into_dimensionality(): 457 456 452 458 454, average 455.4
broadcast_with using to_dimensionality(): 463 463 456 458 473, average 462.6
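
The PR's bench1.rs is not reproduced above; as a rough sketch (hypothetical names and shapes), benchmarks for the two cases could look like the following, using the nightly test bench harness that ndarray's benches are written with.

```rust
#![feature(test)]
extern crate test;

use ndarray::Array2;
use test::Bencher;

// Case 1: no broadcasting needed, both operands already have the same shape.
#[bench]
fn add_2d_same_shape(bench: &mut Bencher) {
    let a = Array2::<f64>::zeros((64, 64));
    let b = Array2::<f64>::zeros((64, 64));
    bench.iter(|| &a + &b);
}

// Case 2: one side needs broadcasting, the second operand is broadcast along axis 0.
#[bench]
fn add_2d_one_side_broadcast(bench: &mut Bencher) {
    let a = Array2::<f64>::zeros((64, 64));
    let b = Array2::<f64>::zeros((1, 64));
    bench.iter(|| &a + &b);
}
```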

@bluss (Member) commented Apr 2, 2021

Is the "origin" benchmark from before this PR, or with changes in this PR? It's mostly the changes from before this PR to after it that are interesting :)

@SparrowLii (Contributor, Author) commented Apr 2, 2021

Is the "origin" benchmark from before this PR, or with changes in this PR? It's mostly the changes from before this PR to after it that are interesting :)

Yes, it is from before this PR.

src/impl_ops.rs Outdated
+            (lhs, rhs)
+        } else {
+            self.broadcast_with(rhs).unwrap()
+        };
         Zip::from(&lhs).and(&rhs).map_collect(clone_opf(A::$mth))
Member:
Suggested change
-        Zip::from(&lhs).and(&rhs).map_collect(clone_opf(A::$mth))
+        Zip::from(lhs).and(rhs).map_collect(clone_opf(A::$mth))

I think we should just consume the views here; it saves redundant view creations in Zip (it won't be a very noticeable change; maybe the compiler can remove the difference).
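
A small standalone example (hypothetical, not from the PR) of the suggested pattern of passing the views by value so Zip consumes them:

```rust
use ndarray::{array, Zip};

fn main() {
    let a = array![1., 2., 3.];
    let b = array![10., 20., 30.];
    let (lhs, rhs) = (a.view(), b.view());

    // Consume the views directly instead of re-borrowing them with `&`.
    let sum = Zip::from(lhs).and(rhs).map_collect(|&x, &y| x + y);
    assert_eq!(sum, array![11., 22., 33.]);
}
```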

@SparrowLii (Contributor, Author), Apr 2, 2021:

OK. It has been corrected.

Member:

*improved. Cool.
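
Putting the two diff hunks and the review feedback together, the changed arm reads roughly as follows; this is a reconstruction for readability, and the exact code merged in d50f4ea may differ in detail.

```rust
fn $mth(self, rhs: &'a ArrayBase<S2, E>) -> Self::Output {
    let (lhs, rhs) = if self.ndim() == rhs.ndim() && self.shape() == rhs.shape() {
        // Same shape: skip co-broadcasting and only unify the dimensionality type.
        let lhs = self.view().into_dimensionality::<<D as DimMax<E>>::Output>().unwrap();
        let rhs = rhs.view().into_dimensionality::<<D as DimMax<E>>::Output>().unwrap();
        (lhs, rhs)
    } else {
        // Shapes differ: fall back to co-broadcasting both operands.
        self.broadcast_with(rhs).unwrap()
    };
    Zip::from(lhs).and(rhs).map_collect(clone_opf(A::$mth))
}
```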

bluss merged commit d50f4ea into rust-ndarray:master on Apr 2, 2021
@bluss (Member) commented Apr 2, 2021
Thanks!

bluss added this to the 0.15.2 milestone on Apr 3, 2021