
Bump dlpack.h to latest version #65047

Closed
wants to merge 2 commits into from

Conversation

@t-vi (Collaborator) commented Sep 15, 2021

Fixes #64995

@facebook-github-bot (Contributor) commented Sep 15, 2021

💊 CI failures summary and remediations

As of commit 4ef92f4 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (1/2)

Step: "(Optional) Merge target branch"

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .github/generated-ciflow-ruleset.json
Auto-merging .github/generated-ciflow-ruleset.json
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/job-specs/job-specs-custom.yml
Auto-merging .circleci/verbatim-sources/job-specs/job-specs-custom.yml
CONFLICT (add/add): Merge conflict in .circleci/generate_config_yml.py
Auto-merging .circleci/generate_config_yml.py
CONFLICT (add/add): Merge conflict in .circleci/docker/common/install_rocm.sh
Auto-merging .circleci/docker/common/install_rocm.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (2/2)

Step: "(Optional) Merge target branch"

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .github/generated-ciflow-ruleset.json
Auto-merging .github/generated-ciflow-ruleset.json
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/job-specs/job-specs-custom.yml
Auto-merging .circleci/verbatim-sources/job-specs/job-specs-custom.yml
CONFLICT (add/add): Merge conflict in .circleci/generate_config_yml.py
Auto-merging .circleci/generate_config_yml.py
CONFLICT (add/add): Merge conflict in .circleci/docker/common/install_rocm.sh
Auto-merging .circleci/docker/common/install_rocm.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1


This comment was automatically generated by Dr. CI.

@emcastillo (Collaborator) left a comment

LGTM!
I wonder if we should add the supported version somewhere in the documentation?

@rgommers (Collaborator) commented Sep 15, 2021

I wonder if we should add the supported version somewhere in the documentation?

Right now that doesn't seem so useful, because the DLPack version alone doesn't tell you whether there's a compatibility issue. It should be possible to mix one library that is on version 0.2 with another that is on 0.6 (there has not been an ABI change in ~4 years).

dlpack.h itself contains DLPACK_VERSION, which is how to look up the version - if we put it in the docs it may go out of sync or just lead to confusion for end users. Exposing a proper ABI version should be done in DLPack itself (xref dmlc/dlpack#34).
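For illustration, a minimal sketch of that lookup, assuming the bundled header is reachable as <ATen/dlpack.h> on the consumer's include path:

// Minimal sketch: report which DLPack header a translation unit was built
// against, using the DLPACK_VERSION macro that dlpack.h defines.
#include <cstdio>
#include <ATen/dlpack.h>   // assumption: PyTorch's bundled header is on the include path

int main() {
  // e.g. 60 for the v0.6 header this PR bumps to; the old header used 020.
  std::printf("built against DLPACK_VERSION %d\n", DLPACK_VERSION);
  return 0;
}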

@rgommers (Collaborator) left a comment

LGTM too, thanks @t-vi!

The one CI failure seems unrelated.

@codecov (bot) commented Sep 15, 2021

Codecov Report

Merging #65047 (9e59d12) into master (feefc94) will increase coverage by 0.01%.
The diff coverage is n/a.

❗ Current head 9e59d12 differs from pull request most recent head 4ef92f4. Consider uploading reports for the commit 4ef92f4 to get more accurate results.

@@            Coverage Diff             @@
##           master   #65047      +/-   ##
==========================================
+ Coverage   66.37%   66.39%   +0.01%     
==========================================
  Files         739      725      -14     
  Lines       94299    93457     -842     
==========================================
- Hits        62595    62047     -548     
+ Misses      31704    31410     -294     

@rgommers rgommers requested a review from mruberry October 28, 2021 12:53
@rgommers (Collaborator)

@mruberry would you be able to land this?

@mruberry (Collaborator) left a comment

Stamped! Thank you @t-vi, @rgommers, @emcastillo!

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@rgommers (Collaborator)

@mruberry was there a hiccup with landing this? I'd like to work on a follow-up PR, that's easier to do if this is in so I don't have to do "manual stacking".

@mruberry (Collaborator)

@mruberry was there a hiccup with landing this? I'd like to work on a follow-up PR, that's easier to do if this is in so I don't have to do "manual stacking".

Yes, sorry, there's a mystery internal issue we need to debug. I'll try to get someone on it soon.

@@ -28,7 +28,7 @@ class DLPackWrapper {
       : tensor(tensor), device_option(device_option) {}

   py::object data() {
-    DLContext tensor_context;
+    DLDevice tensor_context;
Collaborator

The internal error is

caffe2/python/pybind_state_dlpack.h:31:5: error: unknown type name 'DLDevice'; did you mean 'Device'?
    DLDevice tensor_context;

which I don't understand.

@t-vi, what would happen if we didn't update Caffe2 and just updated ATen/dlpack.h?

@t-vi (Collaborator, Author)

My interest is exclusively with PyTorch/ATen, so I neither know nor could say whether any particular outcome would be good or bad. I was more concerned that you might have internal interfaces between ATen and Caffe2.

Collaborator

Would you mind creating a PyTorch-exclusive version of this PR I can try?

@t-vi (Collaborator, Author)

I removed the changes in caffe2/ (and rebased to master from a few days ago).

@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/t-vi/pytorch/blob/4ef92f496b7614e676ec91e36bcc1c4b0000e9af/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@mruberry merged this pull request in d049772.

@mruberry (Collaborator)

Looks like the PyTorch-only version worked!

@t-vi (Collaborator, Author) commented Nov 11, 2021

Thank you for merging @mruberry!

@mruberry (Collaborator)

This is causing some internal build failures that weren't immediately detected, unlanding and reopening while we investigate.

@mruberry mruberry reopened this Nov 12, 2021
@t-vi (Collaborator, Author) commented Nov 12, 2021

So what is the outlook here? If the PR is doomed to rot, I would prefer closing it.

@mruberry (Collaborator)

So what is the outlook here? If the PR is doomed to rot, I would prefer closing it.

I don't think we're at "doomed to rot" yet, but I understand and appreciate your perspective. I (or someone else with internal FB access) will have to do some digging for a bit. Let's give that a chance to happen.

desertfire pushed a commit that referenced this pull request Nov 15, 2021
Summary:
Fixes #64995

Pull Request resolved: #65047

Reviewed By: ngimel

Differential Revision: D32039318

Pulled By: mruberry

fbshipit-source-id: 7dfc653e1e77799d1f26a95fa9bbae3c7ffc887c
@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry (Collaborator)

So here's what I think is happening.

Within Facebook there are multiple systems that depend on DLPack, including PyTorch, Caffe2, and TVM. When we update PyTorch's DLPack header, I think the issue we're hitting is that PyTorch files are still being compiled against the dlpack headers found in those other frameworks. Here's the DLPack header used by the internal version of TVM:

/*!
 *  Copyright (c) 2017 by Contributors
 * \file dlpack.h
 * \brief The common header of DLPack.
 */
#ifndef DLPACK_DLPACK_H_
#define DLPACK_DLPACK_H_

#ifdef __cplusplus
#define DLPACK_EXTERN_C extern "C"
#else
#define DLPACK_EXTERN_C
#endif

/*! \brief The current version of dlpack */
#define DLPACK_VERSION 020

/*! \brief DLPACK_DLL prefix for windows */
#ifdef _WIN32
#ifdef DLPACK_EXPORTS
#define DLPACK_DLL __declspec(dllexport)
#else
#define DLPACK_DLL __declspec(dllimport)
#endif
#else
#define DLPACK_DLL
#endif

#include <stdint.h>
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif
/*!
 * \brief The device type in DLContext.
 */
typedef enum {
  /*! \brief CPU device */
  kDLCPU = 1,
  /*! \brief CUDA GPU device */
  kDLGPU = 2,
  /*!
   * \brief Pinned CUDA GPU device by cudaMallocHost
   * \note kDLCPUPinned = kDLCPU | kDLGPU
   */
  kDLCPUPinned = 3,
  /*! \brief OpenCL devices. */
  kDLOpenCL = 4,
  /*! \brief Vulkan buffer for next generation graphics. */
  kDLVulkan = 7,
  /*! \brief Metal for Apple GPU. */
  kDLMetal = 8,
  /*! \brief Verilog simulator buffer */
  kDLVPI = 9,
  /*! \brief ROCm GPUs for AMD GPUs */
  kDLROCM = 10,
  /*!
   * \brief Reserved extension device type,
   * used for quickly test extension device
   * The semantics can differ depending on the implementation.
   */
  kDLExtDev = 12,
} DLDeviceType;

/*!
 * \brief A Device context for Tensor and operator.
 */
typedef struct {
  /*! \brief The device type used in the device. */
  DLDeviceType device_type;
  /*! \brief The device index */
  int device_id;
} DLContext;

/*!
 * \brief The type code options DLDataType.
 */
typedef enum {
  kDLInt = 0U,
  kDLUInt = 1U,
  kDLFloat = 2U,
  kDLBfloat = 4U,
} DLDataTypeCode;

/*!
 * \brief The data type the tensor can hold.
 *
 *  Examples
 *   - float: type_code = 2, bits = 32, lanes=1
 *   - float4(vectorized 4 float): type_code = 2, bits = 32, lanes=4
 *   - int8: type_code = 0, bits = 8, lanes=1
 */
typedef struct {
  /*!
   * \brief Type code of base types.
   * We keep it uint8_t instead of DLDataTypeCode for minimal memory
   * footprint, but the value should be one of DLDataTypeCode enum values.
   * */
  uint8_t code;
  /*!
   * \brief Number of bits, common choices are 8, 16, 32.
   */
  uint8_t bits;
  /*! \brief Number of lanes in the type, used for vector types. */
  uint16_t lanes;
} DLDataType;

/*!
 * \brief Plain C Tensor object, does not manage memory.
 */
typedef struct {
  /*!
   * \brief The opaque data pointer points to the allocated data. This will be
   * CUDA device pointer or cl_mem handle in OpenCL. This pointer is always
   * aligned to 256 bytes as in CUDA.
   *
   * For given DLTensor, the size of memory required to store the contents of
   * data is calculated as follows:
   *
   * \code{.c}
   * static inline size_t GetDataSize(const DLTensor* t) {
   *   size_t size = 1;
   *   for (tvm_index_t i = 0; i < t->ndim; ++i) {
   *     size *= t->shape[i];
   *   }
   *   size *= (t->dtype.bits * t->dtype.lanes + 7) / 8;
   *   return size;
   * }
   * \endcode
   */
  void* data;
  /*! \brief The device context of the tensor */
  DLContext ctx;
  /*! \brief Number of dimensions */
  int ndim;
  /*! \brief The data type of the pointer*/
  DLDataType dtype;
  /*! \brief The shape of the tensor */
  int64_t* shape;
  /*!
   * \brief strides of the tensor (in number of elements, not bytes)
   *  can be NULL, indicating tensor is compact and row-majored.
   */
  int64_t* strides;
  /*! \brief The offset in bytes to the beginning pointer to data */
  uint64_t byte_offset;
} DLTensor;

/*!
 * \brief C Tensor object, manage memory of DLTensor. This data structure is
 *  intended to facilitate the borrowing of DLTensor by another framework. It is
 *  not meant to transfer the tensor. When the borrowing framework doesn't need
 *  the tensor, it should call the deleter to notify the host that the resource
 *  is no longer needed.
 */
typedef struct DLManagedTensor {
  /*! \brief DLTensor which is being memory managed */
  DLTensor dl_tensor;
  /*! \brief the context of the original host framework of DLManagedTensor in
   *   which DLManagedTensor is used in the framework. It can also be NULL.
   */
  void * manager_ctx;
  /*! \brief Destructor signature void (*)(void*) - this should be called
   *   to destruct manager_ctx which holds the DLManagedTensor. It can be NULL
   *   if there is no way for the caller to provide a reasonable destructor.
   *   The destructors deletes the argument self as well.
   */
  void (*deleter)(struct DLManagedTensor * self);
} DLManagedTensor;
#ifdef __cplusplus
}  // DLPACK_EXTERN_C
#endif
#endif  // DLPACK_DLPACK_H_

So when we see errors like

Summary: 
aten/src/ATen/DLConvertor.h:17:11: error: unknown type name 'DLDevice'; did you mean 'Device'?
TORCH_API DLDevice getDLContext(const Tensor& tensor, const int64_t& device_id);
          ^~~~~~~~
aten/src/ATen/DLConvertor.h:17:11: error: unknown type name 'DLDevice'; did you mean 'Device'?
TORCH_API DLDevice getDLContext(const Tensor& tensor, const int64_t& device_id);

I think that's because DLDevice isn't in the above header! This is just a guess because, what the heck, C++-based build systems. Maybe we should switch to Rust.

@t-vi, is the v0.6 version of dlpack backwards compatible? I could try updating all our internal dlpack headers to resolve this issue. Alternatively, I could look at updating all the dlpack headers to v0.6 and modifying the files that depend on them.

@t-vi (Collaborator, Author) commented Nov 18, 2021

@t-vi, is the v0.6 version of dlpack backwards compatible? I could try updating all our internal dlpack headers to resolve this issue. Alternatively, I could look at updating all the dlpack headers to v0.6 and modifying the files that depend on them.

I have no idea. It's not API compatible, as we have seen. I think it might be ABI compatible as it's mostly renaming things.
Recently, Apache TVM changed to a version of dlpack that is (API) incompatible with PyTorch's current one.
I hit this when I tried to implement a PyTorch fallback for TVM which would link in PyTorch (apache/tvm#7401). Of course, one might ask whether one could use some legacy flag to redefine the names in question, too.
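For illustration, a minimal, self-contained sketch of why the rename is plausibly ABI-compatible; the names OldDLContext and NewDLDevice below are stand-ins for the v0.2 DLContext and the v0.6 DLDevice, mirroring the definitions discussed above:

#include <cstddef>

// Stand-in for the device-type enum; values for existing devices are
// unchanged across versions, so only the source-level names differ.
typedef enum { kDLCPU_ = 1, kDLGPU_ = 2 } DeviceTypeForSketch;

// v0.2-era layout (named DLContext in the header quoted above).
struct OldDLContext {
  DeviceTypeForSketch device_type;
  int device_id;
};

// v0.6 layout (renamed to DLDevice): same members, same order, same sizes.
struct NewDLDevice {
  DeviceTypeForSketch device_type;
  int device_id;
};

// Identical size and member offsets are what make it safe to exchange the
// struct between two sides compiled against different header versions;
// only the API-level names changed, not the binary layout.
static_assert(sizeof(OldDLContext) == sizeof(NewDLDevice), "same size");
static_assert(offsetof(OldDLContext, device_id) == offsetof(NewDLDevice, device_id),
              "same member offsets");

int main() { return 0; }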

@tqchen , my apologies for dragging you into this, but I wonder if we could benefit from your wisdom here w.r.t. upgrade paths for projects using dlpack.

@rgommers (Collaborator)

It should be ABI compatible back to at least DLPack 0.2, so upgrading should be fine; changes to field names don't break the ABI.

I hit this when I tried to implement a PyTorch fallback for TVM which would link in PyTorch (apache/tvm#7401).

Hmm, that does #include <ATen/dlpack.h> from outside of PyTorch, which doesn't seem like such a good idea. Why can't TVM rely only on its own dlpack.h and the public libtorch API like at::fromDLPack? That is the intended design, I believe, and then field renames don't matter.
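For illustration, a rough sketch of that intended pattern: the consumer uses only the public conversion functions declared in ATen/DLConvertor.h and treats DLManagedTensor* as an opaque handle, so field renames such as ctx -> device inside dlpack.h don't affect it (assumes a build linking against libtorch):

#include <ATen/ATen.h>
#include <ATen/DLConvertor.h>   // public ATen API: at::toDLPack / at::fromDLPack

at::Tensor roundtrip(const at::Tensor& t) {
  // Export: PyTorch allocates a DLManagedTensor that stays valid until its
  // deleter is called.
  DLManagedTensor* managed = at::toDLPack(t);

  // ... hand `managed` to the other framework here, without touching the
  // struct's version-dependent field names ...

  // Import: ATen takes ownership and calls managed->deleter when done.
  return at::fromDLPack(managed);
}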

@rgommers (Collaborator)

It should be ABI compatible back to at least DLPack 0.2,

To add to that: this was ensured not only in PR reviews, but at least CuPy went through this upgrade cycle and didn't have any problems. There are no ABI breakage issues on the DLPack repo. And the release note for v0.2 says it's compatible with v0.1: dmlc/dlpack#23 (comment)

@tqchen commented Nov 24, 2021

Sorry for being late to the thread. @rgommers is right that the change is ABI compatible, so the same compiled DLPack symbol should be compatible with every other version after v0.2. ABI compatibility means that different frameworks can use different versions of DLPack and still correctly exchange tensors.

@t-vi (Collaborator, Author) commented Nov 28, 2021

Given the complications inside Facebook, I don't think I am the best person to propose this change, so I'm inclined to close this PR.
It's a bit sad w.r.t. the PyTorch fallback inside TVM, but we live in a world where everyone maintains their own fork of everything anyway.

@rgommers (Collaborator)

I could try updating all our internal dlpack headers to resolve this issue. Alternatively, I could look at updating all the dlpack headers to v0.6 and modifying the files that depend on them.

Either of these should work. They're also very similar; the former solution is basically a partial version of the latter. Is it possible to update to v0.6 for all of these in sync internally, @mruberry? If the changes land sequentially, it seems like you're going to have failures somewhere at some point.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Jan 21, 2022
Summary:
Fixes #64995

Pull Request resolved: #65047

Reviewed By: VitalyFedyunin

Differential Revision: D32468916

Pulled By: mruberry

fbshipit-source-id: 3e0a17a3a264a77956ea7b795bd472c6fc79566c
pytorchmergebot pushed a commit that referenced this pull request Jan 21, 2022
Summary:
Fixes #64995

Pull Request resolved: #65047

Reviewed By: VitalyFedyunin

Differential Revision: D32468916

Pulled By: mruberry

fbshipit-source-id: 3e0a17a3a264a77956ea7b795bd472c6fc79566c
(cherry picked from commit bd480b9)
@t-vi (Collaborator, Author) commented Jan 21, 2022

Thank you, @mruberry!

@mruberry (Collaborator)

Thanks for your patience, @t-vi, and sorry this took so long. I had to update dlpack.h throughout Meta.

cyyever pushed commits to cyyever/pytorch_private that referenced this pull request on Feb 3 and Feb 9, 2022
Summary:
Fixes pytorch/pytorch#64995

Pull Request resolved: pytorch/pytorch#65047

Reviewed By: VitalyFedyunin

Differential Revision: D32468916

Pulled By: mruberry

fbshipit-source-id: 3e0a17a3a264a77956ea7b795bd472c6fc79566c
(cherry picked from commit bd480b9)
Successfully merging this pull request may close these issues.

Bump DLPack dependency
7 participants