[TOPI][X86] Pool operator parallel support. #4090

Merged 1 commit into apache:master on Oct 9, 2019

Conversation

anijain2305 (Contributor) commented on Oct 9, 2019

Background: #3607 fixed a functional bug in pooling. Previously, accumulation and division happened in the same for loop. This was fine for FP32 but caused problems for integer dtypes, where each division produces a rounded (truncated) quotient. The fix was to perform the division in a separate for loop that runs after the accumulation has finished.
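As a quick illustration of the rounding issue (plain Python, not the actual TOPI code; the values are made up):

# A 3x3 average-pooling window of identical integer values.
window = [8] * 9

# Dividing inside the accumulation loop truncates every term to zero.
divide_in_loop = sum(v // 9 for v in window)   # -> 0

# Accumulating first and dividing once afterwards gives the right answer.
divide_after = sum(window) // 9                # -> 8

print(divide_in_loop, divide_after)            # prints: 0 8

After the fix, the lowered IR has a separate loop nest that only performs the division: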

// attr [tensor] storage_scope = "global"
allocate tensor[int32 * 323136]
// attr [T_cast] storage_scope = "global"
allocate T_cast[uint8 * 323136]
produce tensor {
  parallel (ax0.ax1.fused, 0, 288) {
    for (ax2, 0, 34) {
      for (ax3, 0, 33) {
        tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] = 0
        for (rv, 0, 3) {
          for (rv, 0, 3) {
            tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] = (tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] + tvm_if_then_else(((ax2 + rv) < 35), placeholder[(((((ax0.ax1.fused*1225) + (ax2*35)) + (rv*35)) + ax3) + rv)], 0))
          }
        }
      }
    }
  }
}
produce T_cast {
  for (ax1, 0, 288) {
    for (ax2, 0, 34) {
      for (ax3, 0, 33) {
        T_cast[(((ax1*1122) + (ax2*33)) + ax3)] = uint8((tensor[(((ax1*1122) + (ax2*33)) + ax3)]/(min(3, (35 - ax2))*3)))
      }
    }
  }
}

However, this hurt performance because the second for loop was not parallel. This PR makes the second for loop parallel as well.

produce tensor {
  parallel (ax0.ax1.fused, 0, 288) {
    for (ax2, 0, 34) {
      for (ax3, 0, 33) {
        tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] = 0
        for (rv, 0, 3) {
          for (rv, 0, 3) {
            tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] = (tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] + tvm_if_then_else(((ax2 + rv) < 35), placeholder[(((((ax0.ax1.fused*1225) + (ax2*35)) + (rv*35)) + ax3) + rv)], 0))
          }
        }
      }
    }
  }
}
produce T_cast {
  parallel (ax0.ax1.fused, 0, 288) {
    for (ax2, 0, 34) {
      for (ax3, 0, 33) {
        T_cast[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)] = uint8((tensor[(((ax0.ax1.fused*1122) + (ax2*33)) + ax3)]/(min(3, (35 - ax2))*3)))
      }
    }
  }
}
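For reference, the scheduling idea can be sketched with TVM's Python API roughly as follows. This is only a minimal sketch with illustrative shapes and pooling parameters, not the actual change in topi/python/topi/x86/pool.py:

import tvm
import topi

# Illustrative input; the real code path is the generic x86 pool schedule.
data = tvm.placeholder((1, 288, 35, 35), name="data", dtype="int32")
pool = topi.nn.pool(data, kernel=(3, 3), stride=(1, 1), padding=(0, 0, 0, 0),
                    pool_type="avg", layout="NCHW")

s = tvm.create_schedule(pool.op)
# pool.op is the division stage; its input tensor is the accumulation stage.
acc = pool.op.input_tensors[0]

# Accumulation stage: parallel over the fused batch/channel axes (as before).
fused_acc = s[acc].fuse(s[acc].op.axis[0], s[acc].op.axis[1])
s[acc].parallel(fused_acc)

# The change in this PR: parallelize the division stage as well.
fused_div = s[pool].fuse(s[pool].op.axis[0], s[pool].op.axis[1])
s[pool].parallel(fused_div)

print(tvm.lower(s, [data, pool], simple_mode=True))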


Performance - This yields a very large performance improvement. In my test on an EC2 c5.18xlarge machine, the runtime goes from 750 us to 60 us.


There is a further optimization possible that uses compute_at to sink the accumulation inside the division loop. However, that change is much more involved because we have to handle different layouts, so this PR only adds the parallelization for now.
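For completeness, that compute_at idea could be sketched like this (again only a rough sketch under the same illustrative setup as above; it ignores the layout handling that makes the real change involved, and is not part of this PR):

import tvm
import topi

data = tvm.placeholder((1, 288, 35, 35), name="data", dtype="int32")
pool = topi.nn.pool(data, kernel=(3, 3), stride=(1, 1), padding=(0, 0, 0, 0),
                    pool_type="avg", layout="NCHW")

s = tvm.create_schedule(pool.op)
acc = pool.op.input_tensors[0]

# Parallelize the division stage, then sink the accumulation into its
# parallel loop with compute_at so both stages run in one loop nest.
fused = s[pool].fuse(s[pool].op.axis[0], s[pool].op.axis[1])
s[pool].parallel(fused)
s[acc].compute_at(s[pool], fused)

print(tvm.lower(s, [data, pool], simple_mode=True))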


Tests - There are already functional tests that check correctness, so no additional tests are needed.

@yzhliu @vinx13 @zhiics

zhiics (Member) left a comment:

LGTM

yzhliu merged commit 3a32729 into apache:master on Oct 9, 2019
yzhliu (Member) commented on Oct 9, 2019

Thanks @anijain2305 @zhiics

anijain2305 added a commit to anijain2305/tvm that referenced this pull request Oct 17, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Oct 18, 2019
anijain2305 deleted the pad branch on November 13, 2019