
fix: include numpy._core imports for NumPy 2.0 #4857

Merged (11 commits) on Sep 27, 2023

Conversation

mtsokol
Contributor

@mtsokol mtsokol commented Sep 21, 2023

Description

numpy.core is officially becoming a private module and is being renamed to numpy._core in PR numpy/numpy#24634.

The change is backward and forward compatible: libraries compiled against numpy 1.x will keep working with nightly numpy 2.0 installed, and vice versa. A warning is emitted when an attribute or submodule is accessed via the numpy.core.* path.

To avoid these warnings in downstream libraries that use pybind11, such as scipy (where deprecation warnings are turned into errors in CI), I propose the following change:
pybind11/numpy.h imports numpy and retrieves its version string. If numpy >= 2.x is installed, the numpy._core.* path is used for importing internal attributes; otherwise numpy.core.* is used.

With this change downstream libraries depending on numpy nightly builds and pybind11 won't experience the deprecation warning.
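The selection logic described above can be sketched in Python (a hypothetical helper for illustration; pybind11 itself does the equivalent in C++ inside numpy.h):

```python
def numpy_core_path(numpy_version: str) -> str:
    """Pick the package path for NumPy's core submodules.

    NumPy >= 2.0 renames ``numpy.core`` to ``numpy._core``; older
    releases only provide ``numpy.core``.
    """
    # The major version is everything before the first dot.
    major = int(numpy_version.partition(".")[0])
    return "numpy._core" if major >= 2 else "numpy.core"
```

For example, `numpy_core_path("1.26.4")` yields `"numpy.core"`, while `numpy_core_path("2.0.0.dev0")` yields `"numpy._core"`.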

Suggested changelog entry:

``pybind11/numpy.h`` now imports NumPy's ``multiarray`` and ``_internal`` submodules with paths depending on the version of installed NumPy (handling change in NumPy 2.0).

@rwgk
Collaborator

rwgk commented Sep 25, 2023

I got tripped up a few times by the try-this-import-then-that pattern, while working on changes to pybind11. I think making this safe requires a little more attention, with an eventual error message clearly indicating that both imports failed.

However, I have a more fundamental concern:

  • If _core is made private, don't we undermine that effort by using the private access?

I see it's your own PR: numpy/numpy#24634

What will be the official (non-private) way to obtain the _ARRAY_API and _dtype_from_pep3118 pointers in the future?

If we continue to need access to those pointers, could you provide an official API for them?

@Skylion007
Collaborator

Skylion007 commented Sep 25, 2023

Can we also ensure we just catch the ImportError? Do we have bindings for that exception type?

Ah, we do this properly in one other place in the codebase:

if (!ex.matches(PyExc_ImportError)) {
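The pattern under discussion — try one import, fall back to the next, but swallow only ImportError — can be sketched generically in Python (`import_first_available` is a hypothetical illustrative name, not a pybind11 or NumPy API):

```python
import importlib


def import_first_available(paths):
    """Return the first module in ``paths`` that imports cleanly.

    Only ImportError is caught; any other exception (e.g. a failure in
    a module's own init code) propagates unchanged. If every candidate
    fails, the last ImportError is chained onto the one we raise, so
    the final error message clearly shows that all imports failed.
    """
    last_err = None
    for path in paths:
        try:
            return importlib.import_module(path)
        except ImportError as err:
            last_err = err
    raise ImportError(f"none of {paths!r} could be imported") from last_err
```

Used as `import_first_available(["numpy._core.multiarray", "numpy.core.multiarray"])`, this mirrors the fallback while keeping unrelated exceptions visible.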

@rwgk
Collaborator

rwgk commented Sep 25, 2023

Can we also ensure we just catch the ImportError?

That would definitely be better.

Pointing out for completeness:

void **api_ptr = (void **) PyCapsule_GetPointer(c.ptr(), nullptr);

A check for PyErr_Occurred() is missing after this, i.e. this will segfault a couple of lines down if there is an error.

@ngoldbaum

ngoldbaum commented Sep 25, 2023

Hi! I'm mentoring @mtsokol's Numpy 2.0 API migration project.

What will be the official (non-private) way to obtain the _ARRAY_API and _dtype_from_pep3118 pointers in the future?
If we continue to need access to those pointers, could you provide an official API for them?

Can you open issues on the NumPy issue tracker describing your needs for these two symbols?

For _dtype_from_pep3118, @hoodmane was recently looking at writing a grammar for pep3118 format strings (see numpy/numpy#24428) maybe it makes sense to have a small package on pypi that parses this format? Just spitballing, I don't have any context about why NumPy's PEP 3118 format string parser is in internals and don't know if there is some issue with making NumPy's implementation public.

For _ARRAY_API, this is used internally by NumPy to set up its own public C API. It looks like pybind11's usage is doing something very similar to numpy's import_array() function that downstream code is supposed to call to set up the C API. If there isn't a public way for a downstream project like pybind11 to get access to the NumPy C API symbols without going through the _ARRAY_API capsule, we probably need to add such a public mechanism that won't break if NumPy decides to change the way it sets up the public C API in the future.

We want to work with you to make sure you have public APIs for everything you need so you can move away from the things you're using in _core. One of the goals of making this change was to shake out downstream usages of numpy internals that we aren't aware of.

@rwgk
Collaborator

rwgk commented Sep 25, 2023

Hi @ngoldbaum, some background and opinions:

  • I wasn't involved in the implementation of pybind11/numpy.h. I just happen to be the most active pybind11 maintainer at the moment.

  • IIUC pybind11/numpy.h was designed to work even if the numpy headers are not installed on the system on which the pybind11 bindings needing pybind11/numpy.h are compiled. Which is extremely convenient. But implies that it is assumed that the numpy API is set in stone and will never change. — Full disclosure: I was horrified when I first saw that around 3 years ago. Then amazed that nothing obviously broke or at least showed signs of bit rot in the meantime.

But maybe with the Numpy 2.0 API it's finally happening?

What do we do now? Taking away candy usually doesn't go over well. Which brings me to a maybe radical question:

  • Could we just leave pybind11/numpy.h as is?

  • Add pybind11/numpy2.h and start over with the implementation, this time requiring the numpy headers? — We could work together to figure out what APIs we need.

Probably a silly question: how do you #include the numpy 2.0 API?

@ngoldbaum

I think leaving behind numpy.h for older numpy releases and then adding a numpy2.h that does something nicer makes a lot of sense.

Probably a silly question: how do you #include the numpy 2.0 API?

Not silly, the numpy docs are really bad about explaining this.

You need to do something like this in one compilation unit per C extension:

#define NPY_TARGET_VERSION NPY_2_0_API_VERSION
#include "numpy/arrayobject.h"

and then later in that same file do:

import_array();

(in the extension module I'm working on right now this happens inside of the PyMODINIT_FUNC module initialization function)

Other compilation units need something like:

#define NPY_TARGET_VERSION NPY_2_0_API_VERSION
#define NO_IMPORT_ARRAY
#include "numpy/arrayobject.h"

The NO_IMPORT_ARRAY bit is to ensure import_array only gets called once per extension.

You probably also want #define NPY_NO_DEPRECATED_API NPY_2_0_API_VERSION to get warnings about uses of deprecated API items, but that's not required.

@mtsokol
Contributor Author

mtsokol commented Sep 26, 2023

Could we just leave pybind11/numpy.h as is?

@rwgk For now I think yes, in terms of numpy/numpy#24634 changes - I don't plan any other changes to the core module. I will push additional checks to this PR that you mentioned in the comments (I think it can be merged to avoid warnings in CI, if you build against numpy nightly builds).

Add pybind11/numpy2.h and start over with the implementation, this time requiring the numpy headers? - We could work together to figure out what APIs we need.

Sounds good in a long term - I'm curious about changes that would be required to include/numpy.h file.

@mtsokol
Contributor Author

mtsokol commented Sep 26, 2023

Ok, I added additional import checks for each step.

@mtsokol mtsokol force-pushed the numpy-core-import-update branch from 13fb695 to 340afab on September 26, 2023 12:50
@mtsokol mtsokol force-pushed the numpy-core-import-update branch from 41100e6 to 45cc2a0 on September 26, 2023 13:45
@rwgk
Collaborator

rwgk commented Sep 26, 2023

@ngoldbaum A couple thoughts:

#define NPY_TARGET_VERSION NPY_2_0_API_VERSION
#define NO_IMPORT_ARRAY

This looks like you've chosen a "maximally incremental" path. I'm not sure if my numpy2.h idea makes sense in that case, but we can see.

Is the API above (the two defines) already set in stone?

The NO_IMPORT_ARRAY bit is to ensure import_array only gets called once per extension.

So this sounds like nothing is needed 1 time, NO_IMPORT_ARRAY is needed N times.

Did you consider something like this?

#define NPY_TARGET_VERSION NPY_2_0_API_VERSION
#define NPY_2_0_API_IMPORT_ARRAY_IN_THIS_TRANSLATION_UNIT

That way you'd have something special only 1 time, nothing N times.

@ngoldbaum

Is the API above (the two defines) already set in stone?

This is how it's always worked in NumPy, you'd just be targeting a different API version to target an older version of the NumPy API.

Did you consider something like this?

I'm not sure if there is a problem with adding a new macro, I'd open an upstream NumPy issue.

@@ -120,6 +120,25 @@ inline numpy_internals &get_numpy_internals() {
return *ptr;
}

PYBIND11_NOINLINE module_ import_numpy_core_submodule(const char *submodule_name) {
try {
return module_::import((std::string("numpy._core.") + submodule_name).c_str());
Collaborator

Could you please add a (terse) comment here explaining when this is expected to work, and below similarly?

Rough idea:

// numpy._core was established with numpy PR ...
// numpy.core was the import needed up until numpy release ...

Contributor Author

Sure! I added a comment.

+ " from numpy.");
}
}
}
Collaborator

What you have here looks acceptable, but I wonder if we could make this more robust pretty easily, along the lines of:

  • import numpy
  • Inspect numpy.__version__ or similar to decide if core or _core is needed, and try only that one import. (I'm hoping this might be more feasible than usual because you own the numpy API.)

This is less likely to be surprising if there is some low-level issue in pybind11 or Python itself (refactoring, new major dev version), or numpy is changed further in the future. The error message then could be the original import error, re-raised using raise_from (very easy):

inline void raise_from(error_already_set &err, PyObject *type, const char *message) {

Contributor Author

@mtsokol mtsokol Sep 26, 2023

So I made similar changes for JAX, and originally I also decided to go with a numpy version check in an if statement and then a single import.
But I was advised that the try ... catch approach is more robust. Here's the explanation from Jake: jax-ml/jax#16972 (review)

Version checks are fine for tests, but I'd prefer to avoid them in package import paths – e.g. imagine a user who has some custom numpy build with a version string that we don't anticipate, and then import jax might fail on trying to cast the version to a tuple of ints.

But I'm happy to switch to a version check if you prefer it!

Collaborator

We're dealing with 1 try vs 2 tries here, that's similar but not the same.

version check

I'm thinking of it more as a "capability check".

(Version numbers are a convenient way to answer "what exactly do I have"; that's a different purpose.)

I'm still hoping that we can rise to a level of organization that allows us to only have 1 import and report the original error: does or can numpy provide a simple API that allows us to only try once?

Contributor Author

@mtsokol mtsokol Sep 26, 2023

Sure! I switched to an if statement that selects package path based on numpy version - now (_)core.multiarray or (_)core._internal is imported only once.
(It might look lengthy, but this is the recommended way to get the major version as an integer: https://numpy.org/doc/stable/release/1.9.0-notes.html#numpyversion-class-added)

Collaborator

(I think I somehow lost a comment here that I thought I typed in already. Sorry if something appears twice.)

std::string numpy_core_path = major_version >= 2 ? "numpy._core" : "numpy.core";

What you have looks fine though, if you prefer, the only thing I'd do is remove the explicit std::string() inside the two if branches, i.e. I think simply numpy_core_path = "numpy._core"; will work and is what people usually do.

Collaborator

I can't really check (I'm at a conference), but what are we using the core module for? If it's private, that means NumPy is free to break us at any time. It would be very nice to start moving to whatever is supported instead, longer term.

Collaborator

@henryiii henryiii Sep 27, 2023

Is it only our tests? Ahh, no, multiarray. Is that available directly? Doesn’t seem to be. Is it supposed to stick around?

Collaborator

@EthanSteinberg EthanSteinberg Sep 27, 2023

@henryiii
We are using the core module to obtain access to numpy's private C API.

Yes, NumPy can break us at any time as we are relying on a private API. @rwgk and I discuss a long term solution in the main PR thread, which is moving us to Numpy's public Python API instead of their private C API.

Collaborator

(NumPy has a public C API as well, but that's difficult for us to use because it would complicate our build process quite a bit.)

@@ -263,7 +282,7 @@ struct npy_api {
};

static npy_api lookup() {
module_ m = module_::import("numpy.core.multiarray");
module_ m = detail::import_numpy_core_submodule("multiarray");
auto c = m.attr("_ARRAY_API");
void **api_ptr = (void **) PyCapsule_GetPointer(c.ptr(), nullptr);
Collaborator

It's an existing problem, but could you please add this fix or similar while you're at it (untested)?

if (api_ptr == nullptr) {
    raise_from(PyExc_SystemError, "FAILURE obtaining numpy _ARRAY_API pointer.");
}

Contributor Author

Sure, I added this if statement - looks good to me.

@rwgk
Collaborator

rwgk commented Sep 26, 2023

This is how it's always worked in NumPy

Oh, sorry, I just didn't know :-)

I'm not sure if there is a problem with adding a new macro

If NO_IMPORT_ARRAY is long established, I've changed my mind and wouldn't add a new macro (I believe it's more likely to make things worse than a little better, as hoped).

@rwgk rwgk requested a review from EthanSteinberg September 26, 2023 16:23
@rwgk
Collaborator

rwgk commented Sep 26, 2023

I just added @EthanSteinberg as a reviewer. I believe he knows much more about numpy than I do.

@EthanSteinberg
Collaborator

Thanks for helping fix this @ngoldbaum. I agree with @rwgk's recommendations but have two comments

  1. We absolutely cannot use numpy's C code or headers because that significantly complicates the build system (which is almost always a mess for C++ extensions). So we cannot use import_array(); or #include "numpy/arrayobject.h" or similar.

  2. What we can do is transition away from numpy's C api and instead work with its python api. There will probably be some slight overhead here, but I expect it will be minimal, especially because we can use tools like https://docs.python.org/3/c-api/buffer.html#c.PyObject_GetBuffer to avoid a lot of overhead.

This will require a significant revamp, but we can do it gradually, removing one numpy C api call at a time and replacing each of those calls with their python api alternatives.

What are your thoughts @rwgk, should we go through that effort? Is it worth the cost of maybe introducing bugs?
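As an illustration of the buffer-protocol route mentioned in point 2 (not pybind11 code): Python's built-in memoryview() is the Python-level counterpart of the C-level PyObject_GetBuffer call, acquiring a PEP 3118 buffer from any exporter without touching NumPy's C API. The example deliberately uses array.array as the exporter so it has no NumPy dependency:

```python
import array

# Any PEP 3118 buffer exporter works here; array.array stands in for
# a NumPy array so the example runs without NumPy installed.
a = array.array("d", [1.0, 2.0, 3.0])
view = memoryview(a)

assert view.format == "d"                  # PEP 3118 format string for C double
assert view.nbytes == 3 * view.itemsize    # contiguous payload size
assert view.tolist() == [1.0, 2.0, 3.0]    # zero-copy view over the same data
```

The same calls (via PyObject_GetBuffer / PyMemoryView_FromObject) would let pybind11 read array metadata and data through the stable CPython API instead of NumPy internals.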

@EthanSteinberg EthanSteinberg removed their request for review September 26, 2023 17:32
@rwgk
Collaborator

rwgk commented Sep 26, 2023

We absolutely cannot use numpy's C code or headers because that significantly complicates the build system

This makes me very sad TBH, but mainly because there is no way I could reasonably disagree.

What we can do is transition away from numpy's C api and instead work with its python api.

That sounds like a great plan!

This will require a significant revamp ... should we go through that effort?

It would be fantastic if someone stepped up to do it. (I can only offer to do timely reviews at the moment.)

But it sounds like we could also incrementally work towards that goal? Like what we're currently doing here?

@EthanSteinberg
Collaborator

EthanSteinberg commented Sep 26, 2023

I can help port over at least some of the calls, but not until at least November. I'll add a reminder to my calendar.

@mtsokol mtsokol force-pushed the numpy-core-import-update branch from d125e07 to 4c39988 on September 26, 2023 21:31
@mtsokol
Contributor Author

mtsokol commented Sep 26, 2023

I see that one of the pre-commit checks disallows the "Numpy" capitalization, but it's needed to access an attribute. Can I remove that keyword from the pre-commit config?

@mtsokol
Contributor Author

mtsokol commented Sep 26, 2023

What you have looks fine though, if you prefer, the only thing I'd do is remove the explicit std::string() inside the two if branches, i.e. I think simply numpy_core_path = "numpy._core"; will work and is what people usually do.

Right, I changed it to a one-line variant.

I also added throw error_already_set(); to the void **api_ptr = ... error path because I found it explained this way in exceptions.rst. Let me know if I should revert it.

Otherwise it's ready from my side!

@@ -287,6 +282,7 @@ struct npy_api {
void **api_ptr = (void **) PyCapsule_GetPointer(c.ptr(), nullptr);
if (api_ptr == nullptr) {
raise_from(PyExc_SystemError, "FAILURE obtaining numpy _ARRAY_API pointer.");
throw error_already_set();
Collaborator

I think you're right! This needs to be here. How did you notice?

Looks like I made a mistake in #4570.

Contributor Author

Here in documentation: https://pybind11.readthedocs.io/en/stable/advanced/exceptions.html#chaining-exceptions-raise-from

To do a similar thing in pybind11, you can use the py::raise_from function. It sets the current python error indicator, so to continue propagating the exception you should throw py::error_already_set().
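The C++ pair py::raise_from(...) followed by throw py::error_already_set() corresponds to Python's `raise ... from ...`. A small sketch of the equivalent chaining in plain Python (function names are illustrative stand-ins, not real pybind11 or NumPy APIs):

```python
def get_array_api_capsule():
    # Stand-in for the failing PyCapsule_GetPointer lookup.
    raise KeyError("_ARRAY_API")


def lookup():
    try:
        return get_array_api_capsule()
    except KeyError as err:
        # Chain the new error onto the original so both appear in the
        # traceback, mirroring py::raise_from + throw error_already_set().
        raise SystemError(
            "FAILURE obtaining numpy _ARRAY_API pointer."
        ) from err
```

Calling `lookup()` raises a SystemError whose `__cause__` is the original KeyError, just as the chained Python exception propagates after the C++ throw.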

Collaborator

Good catch (it completely slipped my mind), thanks a lot!

.pre-commit-config.yaml (outdated review thread, resolved)
@rwgk
Collaborator

rwgk commented Sep 27, 2023

Otherwise it's ready from my side!

I'm happy, too.

Waiting for @EthanSteinberg or @henryiii for a 2nd set of eyes.

@EthanSteinberg EthanSteinberg self-requested a review September 27, 2023 00:46
Collaborator

@EthanSteinberg EthanSteinberg left a comment

Looks good to me! Thanks again @mtsokol for the PR!

Collaborator

@henryiii henryiii left a comment

Left a few comments, but not blockers.

Co-authored-by: Ralf W. Grosse-Kunstleve <[email protected]>
@rwgk
Collaborator

rwgk commented Sep 27, 2023

For completeness:

I ran this PR through Google's global testing (internal test ID OCL:568738744:BASE:568878849:1695833342001:c72445f1).

I didn't expect issues and in fact found none.

@rwgk rwgk merged commit 0a756c0 into pybind:master Sep 27, 2023
82 checks passed
@github-actions github-actions bot added the needs changelog Possibly needs a changelog entry label Sep 27, 2023
@mtsokol mtsokol deleted the numpy-core-import-update branch September 27, 2023 17:23
@rwgk
Collaborator

rwgk commented Sep 27, 2023

@mtsokol Could you please update the PR description and add a Suggested changelog entry?

I already restored the usual markup template, pasting most of your original description into it, but it's not accurate anymore (it doesn't match the merged implementation). If you could tweak it, that would be great. Short and terse is best (IMO).

The changelog entry is usually just one sentence, just enough to give the idea for someone glancing through the list of changes for a release.

@mtsokol
Contributor Author

mtsokol commented Sep 27, 2023

@mtsokol Could you please update the PR description and add a Suggested changelog entry?

@rwgk Sure! I updated the PR description.

@rwgk
Collaborator

rwgk commented Oct 6, 2023

I deployed this PR Google-internally yesterday. Unfortunately I had to roll it back today. We saw a number of tests timing out, certainly related to this PR. My best guess, though totally without proof at the moment, is deadlocking.

I'll try to get to the bottom of it ASAP.

.cast<object>()
.release()
.ptr();
module_ m = detail::import_numpy_core_submodule("_internal");
Collaborator

@rwgk I would bet this change right here is causing the deadlocking. Look at how the old code did it all in one static call, so the import happened only once (even if this function got called many times).

The new code is importing _internal every time this function is called.
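The difference boils down to caching: the old code resolved the module once (in a static initializer) and reused it, while the new code re-ran the import on every call. A minimal Python sketch of the cached variant (a hypothetical helper for illustration, not pybind11's actual code):

```python
import importlib

_module_cache = {}


def cached_import(path):
    """Import ``path`` once and reuse the module object afterwards.

    Re-running the import machinery on every call is what can contend
    for the import lock in multi-threaded code; caching the resolved
    module sidesteps that entirely after the first call.
    """
    mod = _module_cache.get(path)
    if mod is None:
        mod = importlib.import_module(path)
        _module_cache[path] = mod
    return mod
```

Repeated calls return the identical module object, so only the first call touches the import system.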

Collaborator

That was my first suspicion, too, but that change by itself did not help.

Then I decided to roll back and debug after production is in a healthy state again.

Sorry I should have mentioned this before. (I'm currently battling multiple fires.)

Almost certainly we should change the code here back, but I want to get to the bottom of the timeouts first. (I only have an internal reproducer at the moment. It involves running a test 100 times, ~1/3 of the tests time out.)

gigony added a commit to gigony/cucim that referenced this pull request Oct 26, 2023
This applies the following patches to pybind11:

- pybind/pybind11#4857
- pybind/pybind11#4877

to avoid deadlock when using pybind11 without importing numpy in
multi-threaded environment.
gigony added a commit to gigony/cucim that referenced this pull request Oct 27, 2023
rapids-bot bot pushed a commit to rapidsai/cucim that referenced this pull request Oct 30, 2023
…mmand (#618)

### Update Catch2 to v3.4.0

Without upgrading Catch2, The following error occurs when building on
Ubuntu 22.04 due to glibc:
```
cucim/build-debug/_deps/deps-catch2-src/single_include/catch2/catch.hpp:10830:58: error: call to non-‘constexpr’ function
‘long int sysconf(int)’
10830 |     static constexpr std::size_t sigStackSize = 32768 >=  MINSIGSTKSZ ? 32768 : MINSIGSTKSZ;
```
### Update pybind11 to v2.11.1

Even with the latest version of pybind11, we still have an issue
with `pybind11::array_t` when cuCIM is used in multithread without
importing numpy in the main thread.

See pybind/pybind11#4877

Will need to wait for the next release of pybind11.

### Use runtime option instead of using nvidia-docker command

nvidia-docker binary is not available if user doesn't install
nvidia-docker2 package. This change uses runtime option instead
of using nvidia-docker command.

### Apply pybind11 patch to avoid deadlock (until new release is available)

This applies the following patches to pybind11:

- pybind/pybind11#4857
- pybind/pybind11#4877

to avoid deadlock when using pybind11 without importing numpy in
multi-threaded environment.

Authors:
  - Gigon Bae (https://github.com/gigony)
  - Gregory Lee (https://github.com/grlee77)

Approvers:
  - Gregory Lee (https://github.com/grlee77)
  - https://github.com/jakirkham

URL: #618
@henryiii henryiii changed the title MAINT: Include numpy._core imports fix: include numpy._core imports for NumPy 2.0 Nov 15, 2023
@henryiii henryiii removed the needs changelog Possibly needs a changelog entry label Mar 27, 2024
rwgk pushed a commit to rwgk/pybind11 that referenced this pull request Jun 12, 2024
* fix: Use lowercase builtin collection names (pybind#4833)

* Update render for buffer sequence and handle  (pybind#4831)

* fix: Add capitalize render name of `py::buffer` and `py::sequence`

* fix: Render `py::handle` same way as `py::object`

* tests: Fix tests `handle` -> `object`

* tests: Test capitaliation of `py::sequence` and `py::buffer`

* style: pre-commit fixes

* fix: Render `py::object` as `Any`

* Revert "fix: Render `py::object` as `Any`"

This reverts commit 7861dcf.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ralf W. Grosse-Kunstleve <[email protected]>

* fix: Missing typed variants of `iterator` and `iterable` (pybind#4832)

* Fix small bug introduced with PR pybind#4735 (pybind#4845)

* Bug fix: `result[0]` called if `result.empty()`

* Add unit test that fails without the fix.

* fix(cmake): correctly detect FindPython policy and better warning (pybind#4806)

* fix(cmake): support DEBUG_POSTFIX correctly (pybind#4761)

* cmake: split extension

Into suffix and debug postfix. Pybind11 is currently treating both as
suffix, which is problematic when something else defines the
DEBUG_POSTFIX because they will be concatenated.

pybind11_extension sets SUFFIX to _d.something and if DEBUG_POSTFIX is
set to _d.

    _d + _d.something = _d_d.something

The issue has been reported at:

pybind#4699

* style: pre-commit fixes

* fix(cmake): support postfix for old FindPythonInterp mode too

Signed-off-by: Henry Schreiner <[email protected]>

---------

Signed-off-by: Henry Schreiner <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Henry Schreiner <[email protected]>

* Avoid copy in iteration by using const auto & (pybind#4861)

This change is fixing a Coverity AUTO_CAUSES_COPY issues.

* Add 2 missing `throw error_already_set();` (pybind#4863)

Fixes oversights in PR pybind#4570.

* MAINT: Include `numpy._core` imports (pybind#4857)

* MAINT: Include numpy._core imports

* style: pre-commit fixes

* Apply review comments

* style: pre-commit fixes

* Add no-inline attribute

* Select submodule name based on numpy version

* style: pre-commit fixes

* Update pre-commit check

* Add error_already_set and simplify if statement

* Update .pre-commit-config.yaml

Co-authored-by: Ralf W. Grosse-Kunstleve <[email protected]>

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ralf W. Grosse-Kunstleve <[email protected]>

* MAINT: Remove np.int_ (pybind#4867)

* chore(deps): update pre-commit hooks (pybind#4868)

* chore(deps): update pre-commit hooks

updates:
- [github.com/psf/black-pre-commit-mirror: 23.7.0 → 23.9.1](psf/black-pre-commit-mirror@23.7.0...23.9.1)
- [github.com/astral-sh/ruff-pre-commit: v0.0.287 → v0.0.292](astral-sh/ruff-pre-commit@v0.0.287...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](codespell-project/codespell@v2.2.5...v2.2.6)
- [github.com/shellcheck-py/shellcheck-py: v0.9.0.5 → v0.9.0.6](shellcheck-py/shellcheck-py@v0.9.0.5...v0.9.0.6)
- [github.com/PyCQA/pylint: v3.0.0a7 → v3.0.0](pylint-dev/pylint@v3.0.0a7...v3.0.0)

* Update .pre-commit-config.yaml

* style: pre-commit fixes

* Update .pre-commit-config.yaml

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Henry Schreiner <[email protected]>

---------

Signed-off-by: Henry Schreiner <[email protected]>
Co-authored-by: Sergei Izmailov <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Henry Schreiner <[email protected]>
Co-authored-by: László Papp <[email protected]>
Co-authored-by: Oleksandr Pavlyk <[email protected]>
Co-authored-by: Mateusz Sokół <[email protected]>