
Missing libpython with Debian default Python install #30

Closed · matthew-brett opened this issue Feb 6, 2016 · 46 comments
@matthew-brett (Contributor)

I am testing manylinux numpy wheels.

In particular, I am testing this guy: http://nipy.bic.berkeley.edu/manylinux/numpy-1.10.4-cp27-none-linux_x86_64.whl

With a default install of Python, starting with either Wheezy or Jessie:

docker run -ti --rm -v $PWD:/io debian:latest /bin/bash
docker run -ti --rm -v $PWD:/io tianon/debian:wheezy /bin/bash

and running this script:

apt-get update 
apt-get install -y python curl 
curl -sLO https://bootstrap.pypa.io/get-pip.py 
python get-pip.py 
pip install -f https://nipy.bic.berkeley.edu/manylinux numpy nose 
python -c "import numpy; numpy.test()" 

I get this:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in <module>
    from . import add_newdocs
  File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in <module>
    from .type_check import *
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in <module>
    import numpy.core.numeric as _nx
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 14, in <module>
    from . import multiarray
ImportError: libpython2.7.so.1.0: cannot open shared object file: No such file or directory

Sure enough, in order for the wheel to work, I need:

apt-get install libpython2.7

Maybe we need to add a check / informative error message for the presence of libpython?
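
A minimal sketch of what such a check could look like (hypothetical -- pip does not contain this logic, and real sonames vary across builds):

    # Hypothetical sketch of the suggested check: probe for the shared
    # libpython matching the running interpreter before accepting
    # manylinux wheels, and print an actionable message if it is absent.
    import ctypes
    import sys

    def have_libpython():
        # NB: simplified; some builds use other sonames (e.g. the "m" ABI
        # suffix of some Python 3 builds, as in libpython3.5m.so.1.0).
        soname = "libpython%d.%d.so.1.0" % sys.version_info[:2]
        try:
            ctypes.CDLL(soname)  # dlopen(3) under the hood
            return True
        except OSError:
            return False

    if not have_libpython():
        sys.stderr.write(
            "libpython is missing; wheels linked against it will fail to\n"
            "import. On Debian/Ubuntu: apt-get install libpython2.7\n")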

@matthew-brett (Contributor, Author)

Same on Ubuntu 14.04 and 15.10.

@rmcgibbo (Member) commented Feb 6, 2016

Where would such a check go? This is really a Debian problem with the way they split upstream python into many subpackages, no?

@matthew-brett (Contributor, Author)

It would have to go in the pip check for the validity of manylinux for the platform.

It is unfortunate that Debian does this, but I bet that a large majority of our consumers will be on Debian or Ubuntu, there's not much prospect of Debian / Ubuntu changing this soon, and even if they do, many systems in the wild will remain susceptible to this problem.

@rmcgibbo (Member) commented Feb 6, 2016

But on that system, if you disable manylinux wheels, pip install is going to pull from an sdist, and that will fail too.

@rmcgibbo (Member) commented Feb 6, 2016

Because clearly the system doesn't have the python header files (apt-get install python-dev), otherwise they'd also have libpython.

@matthew-brett (Contributor, Author)

Right - good point.

On the other hand, on a Debian-based system, if I try an sdist pip install I first get an error about being unable to compile a C program, and, after installing build-essential, I get this:

    SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.

That makes it pretty clear what I have to do. On the other hand, if we blanket enable manylinux wheels, on systems without python-dev, the error (libpython2.7.so.1.0: cannot open shared object file) is much harder to understand and fix.

@rmcgibbo (Member) commented Feb 6, 2016

Do you know where that SystemError is being thrown from? Is that check in pypa/pip, or something numpy-specific?

@rmcgibbo (Member) commented Feb 6, 2016

@matthew-brett (Contributor, Author)

Yes, it is a numpy thing, and so a bad example:

      File "numpy/core/setup.py", line 278, in check_types
        "Cannot compile 'Python.h'. Perhaps you need to "
    SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.

Trying with another package, regex:

    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c Python2/_regex.c -o build/temp.linux-x86_64-2.7/Python2/_regex.o
    Python2/_regex.c:46:20: fatal error: Python.h: No such file or directory
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

So the error arising from gcc is not very helpful, but easily solved with a web search. I guess it would make the transition easier if we could give a more helpful message than that.

@njsmith (Member) commented Feb 7, 2016

@matthew-brett: Here's a wacky question. What happens if you do patchelf --remove-needed libpython2.7.so.1 multiarray.so?

Reason I ask: For a regular interpreter where the executable's linked against libpython2.7.so.1, it shouldn't matter whether multiarray.so has a DT_NEEDED for libpython2.7.so.1, b/c it will be loaded into the ELF namespace regardless. And for Debian, /usr/bin/python2.7 itself seems to be exporting all the standard symbols... in fact, the multiarray.so in debian's python-numpy package doesn't seem to be linked against libpython at all! And if I build a numpy using debian, the resulting multiarray.so also has no DT_NEEDED for libpython, but isn't statically linked either (objdump shows lots of *UND* symbols for PyWhatever_Foo functions).

OTOH debian doesn't seem to be consistent -- some python-foo packages depend on libpython, and some don't. And I don't know what they've done to distutils so that it doesn't link to -lpython2.7. Something pretty weird going on here :-/

(The one place where failing to link to libpython will break, AFAICT, is if you have a python interpreter that's embedded, but not via DT_NEEDED linking to an executable like gnucash or blender or whatever, but instead the python interpreter itself is dlopen'ed (or linked to another .so that gets dlopen'ed, etc.). In this case the python symbols don't make it into the global ELF namespace, so that won't be visible to extension modules. So AFAICT it really is better if python extensions link to libpython explicitly, and libpython gets included on distros out-of-the-box -- that's how fedora does it. But maybe we can't fix this anytime soon :-/.)

@matthew-brett (Contributor, Author)

Yup, doing this:

patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/multiarray.so
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/umath.so
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/linalg/lapack_lite.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/linalg/_umath_linalg.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/fft/fftpack_lite.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/random/mtrand.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/tests
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/multiarray_tests.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/umath_tests.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/operand_flag_tests.so 
patchelf --remove-needed libpython2.7.so.1.0 /usr/local/lib/python2.7/dist-packages/numpy/core/struct_ufunc_test.so 

allows the test to run without any missing libraries. Are you suggesting that we do this for manylinux wheels in general?
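
(For reference, a hypothetical batch version of the commands above -- it assumes patchelf is on PATH and PKG_DIR points at the installed package; this is not an auditwheel feature:)

    # Walk an installed package and strip the libpython DT_NEEDED entry
    # from every extension module, generalizing the per-file commands above.
    import os
    import subprocess

    PKG_DIR = "/usr/local/lib/python2.7/dist-packages/numpy"  # example path
    SONAME = "libpython2.7.so.1.0"

    for root, _dirs, files in os.walk(PKG_DIR):
        for name in files:
            if name.endswith(".so"):
                subprocess.check_call(["patchelf", "--remove-needed",
                                       SONAME, os.path.join(root, name)])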

@njsmith (Member) commented Feb 7, 2016

I'm really not sure...

@njsmith (Member) commented Feb 7, 2016

Ah, here's an example Debian-compiled extension module that does link to libpython:

~/src$ readelf -a /usr/lib/python2.7/dist-packages/PySide/QtOpenGL.so | grep NEEDED
 0x0000000000000001 (NEEDED)             Shared library: [libpyside-python2.7.so.1.2]
 0x0000000000000001 (NEEDED)             Shared library: [libpython2.7.so.1.0]
 0x0000000000000001 (NEEDED)             Shared library: [libQtCore.so.4]
 0x0000000000000001 (NEEDED)             Shared library: [libQtGui.so.4]
 0x0000000000000001 (NEEDED)             Shared library: [libQtOpenGL.so.4]
 0x0000000000000001 (NEEDED)             Shared library: [libshiboken-python2.7.so.1.2]
 0x0000000000000001 (NEEDED)             Shared library: [libstdc++.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)             Shared library: [libgcc_s.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]

Why? I don't know.

@matthew-brett (Contributor, Author)

How about floating the idea of editing out the libpython dependency on distutils-sig?

@rmcgibbo (Member) commented Feb 7, 2016

I'm very nervous about editing it out like that, especially in an automatic tool like auditwheel repair. I don't know exactly what the reason is that some packages get built with a dependency on libpython.so and others don't, or that some packages might have a dependency on libpython.so but seem to work fine when that dependency is removed with patchelf, but it seems like a rash thing to do in general.

@matthew-brett (Contributor, Author)

I guess the idea would be to have another tool or a command line flag to auditwheel to do this extra processing.

I think our options are either to add a useful warning and instruction in pip, or to do this hack - what do y'all think?

@matthew-brett (Contributor, Author)

I bet the qt libpython dependency is an accident of the complexity of building the package.

@njsmith (Member) commented Feb 7, 2016

@rmcgibbo (Member) commented Feb 7, 2016

The PEP text says that it's permissible for manylinux wheels to link against libpython. So adding a check to the pip platform detection PR that ensures the library is available would be my vote.

@njsmith (Member) commented Feb 7, 2016

If we just do that we get the following awful UX:

$ sudo apt install python2.7
$ pip install somepackage
[... downloading sdist ...]
error: gcc: command not found

# argh fine
$ sudo apt install build-essential
$ pip install somepackage
[... downloading sdist ...]
error: Python.h not found

# arhgghghgh fine
$ sudo apt install python-dev
$ pip install somepackage
[... pip downloads and installs the wheel ...]

# wait wtf?

So if we do want to stick with allowing manylinux wheels to link against libpython, then my vote would be that we add a special case to pip to detect missing libpython, and if that's the only reason it can't install a manylinux wheel then it should error out with some useful error message like "you should install libpython".

Anyway, as explained in the email the full details are actually rather complicated and subtle... I guess we can see what people say. Maybe just not linking to libpython is a good solution, and it doesn't necessarily have to require auditwheel to do the fixups.

@rmcgibbo (Member) commented Feb 7, 2016

I'll respond there.

@matthew-brett (Contributor, Author)

I guess that pip would first have to detect whether there was a manylinux wheel available to install, before raising the error, otherwise the user will lose the ability to do installs of pure python packages from sdists.

That in turn would mean that if the author of a pure Python package made a manylinux wheel, then users would start getting an error instead of the installed package, on these systems.

@njsmith (Member) commented Feb 7, 2016

> That in turn would mean that if the author of a pure Python package made a manylinux wheel, then users would start getting an error instead of the installed package, on these systems.

No, that's the key point: the way Debian/Ubuntu have their packages set up, any situation in which there is a manylinux wheel and it won't work, is also a situation where if you tried installing the sdist instead, that wouldn't work either. If you're missing libpython, and your only options are sdist or manylinux-that-depends-on-libpython, then you're doomed no matter what, so we might as well give you an error message that leads you to the simple solution (install libpython so you can use the wheel).

That said, after further investigation I think I am leaning towards saying that manylinux1 wheels should not link to libpython (at least for current python versions, and maybe we can fix it for 3.6) -- I will send a more detailed email to distutils-sig later.

@rmcgibbo (Member) commented Feb 7, 2016

If I see things correctly, there are a couple of options. None of them are great.

  1. Remove libpythonX.Y.so from the list of allowed external shared library dependencies in the PEP, change auditwheel to enforce this, and change the Docker images to compile Pythons without --enable-shared.
    • Advantages: As Antoine pointed out in https://bugs.python.org/issue21536, it's essentially the case that extensions built for an --enable-shared Python are not portable to the static Pythons, but those built for static Pythons are. So this seems to me in the manylinux spirit of going with the lowest common denominator.
    • Disadvantages: I suspect that some packages like PySide simply will not compile against a static Python. So, we might be shutting them out without upstream changes.
  2. We could also essentially say that only --enable-shared Pythons are supported platforms for manylinux1. For Debian, pip could have some special error messages that instruct users how to get libpythonX.Y.so, like sudo apt-get install python-dev.
    • Advantages: ...
    • Disadvantages: We're leaving people that compiled Python from source without --enable-shared out in the cold.
  3. Some hybrid solution which involves having auditwheel use patchelf to remove the dependencies on libpython.

@matthew-brett (Contributor, Author)

Nathaniel - to make sure we understand each other:

With current Debian / apt-get install python:

Works:

pip install six

Fails:

pip install numpy

We're proposing to raise an error from pip if libpython is missing.

Presumably we don't want to raise this error for pip install six because this used to work and there is no reason why it should not in the manylinux era.

The question then is, how to know when to raise an error. We could ask pip to try to work out whether the sdist needs compilation or not (is there an easy way?). Or we could ask pip to look for a manylinux wheel, and fail in that case, on the basis that this is likely to be a package that pip cannot install without the manylinux wheel.

But that does have the disadvantage that if the packager of six makes a manylinux wheel, then the user will get an error, despite the fact that an install from sdist will work fine.

EDIT: of course (I realized when sitting in a church pew today) you can't build a 'manylinux1' wheel for six - because it's pure Python and it won't have a platform tag. On the other hand, there are packages like tornado and pyyaml and markupsafe that will build in pure Python mode, but can also build with compiled modules, so for these it would be possible to break a previously working sdist install by adding a manylinux wheel.
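
(For concreteness, a sketch of the guard being discussed -- entirely hypothetical, and none of these names exist in pip:)

    # Raise only when the chosen candidate is a manylinux wheel and
    # libpython is missing, so that pure-Python installs keep working.
    def is_manylinux(wheel_filename):
        # manylinux wheels carry a manylinux platform tag in the filename
        return "manylinux" in wheel_filename

    def check_candidate(wheel_filename, has_libpython):
        if is_manylinux(wheel_filename) and not has_libpython:
            raise RuntimeError(
                "this wheel links against libpython, which is not "
                "installed; on Debian/Ubuntu: apt-get install libpython2.7")

As the EDIT above notes, this guard would still misfire for packages like tornado that publish a manylinux wheel but would also have installed fine from the sdist.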

@matthew-brett (Contributor, Author)

Maybe pip or Debian could be persuaded to include the logic to give a better error message if pip tries and fails to compile a package from an sdist - something like "Cannot compile package with extension code; you may need to install python-dev".

@rmcgibbo (Member) commented Feb 7, 2016

We'd still be leaving other people that use statically-compiled Pythons either on a different distro, or compiling from source, in the lurch.

@matthew-brett (Contributor, Author)

Robert - you mean, if we use the pip error, then people with statically compiled Pythons will not be able to use manylinux wheels?

But, if we remove the stated libpython dependency from the manylinux wheels, I think these people will be able to use the wheels, except in weird cases like this: https://bugs.python.org/msg18810

@rmcgibbo (Member) commented Feb 7, 2016

I'm not totally sure about this one way or the other, but I think my position is that

  • both static and shared builds of python are legit targets that are widely used and that we should support as targets.
  • if these are both valid platforms, then libpythonx.y.so is not a library that we can depend on being present on the system, so it should be excluded from the PEP policy.
  • the best way to build these wheels is probably to change the docker image to use a static python.
  • certain libraries, like the bug you linked to, might be written in such a way that they only work for shared pythons, and don't work for static pythons. And perhaps that's just the compromise we have to make, and we have to suck it up and say "manylinux wheels are required to work with both static and shared pythons"

@matthew-brett (Contributor, Author)

The fact that Debian and Ubuntu have been packaging most extensions without a libpython dependency for a long time makes me think that we are very likely to get away with it for manylinux.

@rmcgibbo (Member) commented Feb 7, 2016

Right, so are we agreeing here, that the PEP text should be changed to disallow linking to libpython?

@matthew-brett (Contributor, Author)

Yes, I think we're all agreeing now. But I thought you said over on distutils-sig that you didn't think that was acceptable?

@njsmith (Member) commented Feb 8, 2016

I checked two programs that have plugins where the plugins embed python -- libreoffice and xchat. In both cases the program ends up using RTLD_GLOBAL to load libpython. This is terrible and not what you should do, but it does mean that they will work correctly even with extensions that don't link against libpython. (I guess this is probably why they do this terrible thing.) And manylinux is a relentlessly practical design, so...

(You can check by running the program with LD_DEBUG=scopes)

@rmcgibbo (Member) commented Feb 8, 2016

Matthew, sorry for being unclear. These issues are confusing, and typing on my phone doesn't help.

After we agree on the necessary changes in the policy and the PEP text (to remove libpythonx.y.so from the list of permitted external shared libraries), the next question is what changes are going to be made to the docker images and auditwheel.

One implementation route is to change the docker image to use a static build of Python. This will mean that no compiled Python extension will link against libpythonx.y.so, since the build box won't even contain a libpythonx.y.so. I think this is the right route.

I may have misunderstood the discussion, but my impression was that a second option was also being floated, which was to leave the docker image unchanged, but add code to 'auditwheel repair' that would use patchelf to remove the DT_NEEDED dependency on libpythonx.y.so after the wheel has been compiled. This is the route that I don't think is satisfactory.

@njsmith (Member) commented Feb 8, 2016

> a second option was also being floated

I guess I was assuming we might want to do both. :-)

(auditwheel should at the least check for linking to libpython and yell at people if found, right?)

@rmcgibbo (Member) commented Feb 8, 2016

> I guess I was assuming we might want to do both. :-)
> (auditwheel should at the least check for linking to libpython and yell at people if found, right?)

Yes, I agree definitely that auditwheel should check for linking to libpython, and give a special error message if found -- this is clearly the minimum required behavior to implement the PEP-as-modified. I think we're in agreement here, so we can defer the discussion about what extra features auditwheel should or should not add for later.

(Wearing my auditwheel author/maintainer hat, the thing that worries me about using patchelf to remove the DT_NEEDED entry for libpython is that I want to minimize the amount of black magic going on in that tool. I don't think this DT_NEEDED-removal feature will be foolproof, and I worry about adding a feature that often works for simple packages, but sometimes breaks the wheels it's supposed to be repairing.)
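
(A standalone sketch of that minimum check, shelling out to readelf; auditwheel's actual implementation differs:)

    # Flag any extension module under a directory tree whose DT_NEEDED
    # entries name libpython, per the check discussed above.
    import os
    import re
    import subprocess

    def needed_libs(path):
        out = subprocess.check_output(["readelf", "-d", path])
        return re.findall(r"\(NEEDED\).*\[([^\]]+)\]",
                          out.decode("utf-8", "replace"))

    def check_tree(unpacked_wheel_dir):
        for root, _dirs, files in os.walk(unpacked_wheel_dir):
            for name in files:
                if not name.endswith(".so"):
                    continue
                path = os.path.join(root, name)
                bad = [n for n in needed_libs(path)
                       if n.startswith("libpython")]
                if bad:
                    print("%s links against %s" % (path, ", ".join(bad)))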

@rmcgibbo (Member) commented Feb 8, 2016

Perhaps, for example, we could add a feature to auditwheel that would scan the wheel for references to dlopen. For wheels that link libpython.so and don't use dlopen, auditwheel repair could use patchelf to remove the libpython.so DT_NEEDED record. But if the wheel links libpython.so and uses dlopen, it could say "hey, we just can't be sure that this wheel is repairable, you really need to compile it on a statically-built Python instead"

But anyways, we can discuss this later -- it's not the first thing that needs to be implemented.
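
(The dlopen scan could key off the dynamic symbol table -- a hypothetical heuristic, not an implemented auditwheel feature:)

    # An extension that links libpython AND imports dlopen is the case
    # flagged as risky above. `nm -D` lists the dynamic symbol table;
    # "U" marks undefined symbols the module expects to import at load time.
    import subprocess

    def references_dlopen(path):
        out = subprocess.check_output(["nm", "-D", path])
        for line in out.decode("utf-8", "replace").splitlines():
            parts = line.split()
            # undefined entries look like "U dlopen" (possibly versioned,
            # e.g. "dlopen@GLIBC_2.2.5")
            if (len(parts) >= 2 and parts[-2] == "U"
                    and parts[-1].startswith("dlopen")):
                return True
        return False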

@njsmith (Member) commented Feb 8, 2016

Okay, #31 has proposed text for mandating that no-one link to libpython; review would be great.

Re: patchelf: I'm actually pretty confident that it won't break any wheels, because by supposition all extension modules will be loaded into host processes that already have libpython included in the global scope. This means that right now the DT_NEEDED is actually doing nothing -- the global libpython is already shadowing the local libpython. The only cases I can think of that might break involve either (a) extremely obscure dlopen tricks, or (b) if the wheel actually contains some executable that isn't even linked to python at all, but that wants to dlopen libpython. I sorta doubt any examples of the latter case exist :-). It's totally fine if a python extension module uses dlopen -- the only case that's risky is where a process that otherwise has no python in it uses dlopen to get itself some python.

That said, it might be good in general to change auditwheel's behavior so that instead of just trying to fix whatever it finds, it has one mode that says "here's the fixes needed" and then another mode where the user explicitly names (maybe in a config file) which fixes they want -- e.g. they have to explicitly list which libraries they want to vendor, etc. This should also help avoid all kinds of surprises -- e.g., you don't want to accidentally start distributing libraries without thinking about the licenses, maybe because the new version of upstream added a new dependency you didn't notice and auditwheel silently swept it up...

@rmcgibbo (Member) commented Feb 8, 2016

> Re: patchelf: I'm actually pretty confident that it won't break any wheels

Here's the situation I've been considering. Imagine a wheel that contains two ELF files. One of them, extension.cpython-35m-x86_64-linux-gnu.so, is an extension module. The other, a shared library libOptional.so, is not a Python extension module (it doesn't have a PyInit_libOptional function), but it does link against libpython3.5m.so and does call functions from the Python C API. The design of the package is that extension.cpython-35m-x86_64-linux-gnu.so uses dlopen to load libOptional.so using RTLD_LOCAL.

In this case, my understanding (although I haven't tested it) is that using patchelf to remove libOptional's dependency on libpython3.5m.so will break the wheel.

@njsmith (Member) commented Feb 8, 2016

No, that case should be fine. As long as libOptional.so gets loaded into a Python process, it will be able to "see" the Python C API. When ELF says "global scope" they really mean "global scope."

Actually, this is easy to demonstrate -- ctypes is an extension module which we can easily convince to call dlopen on some random .so file :-). And if we do

$ LD_DEBUG=scopes python -c 'import ctypes; ctypes.CDLL("libgobject-2.0.so.0")'

then we see a bunch of output, including:

     18498:     object=/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 [0]
     18498:      scope 0: python /lib/x86_64-linux-gnu/libpthread.so.0 /lib/x86_64-linux-gnu/libdl.so.2 /lib/x86_64-linux-gnu/libutil.so.1 /lib/x86_64-linux-gnu/librt.so.1 /lib/x86_64-linux-gnu/libexpat.so.1 /lib/x86_64-linux-gnu/libz.so.1 /lib/x86_64-linux-gnu/libm.so.6 /lib/x86_64-linux-gnu/libc.so.6 /lib64/ld-linux-x86-64.so.2
     18498:      scope 1: /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 /lib/x86_64-linux-gnu/libglib-2.0.so.0 /usr/lib/x86_64-linux-gnu/libffi.so.6 /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libpcre.so.3 /lib/x86_64-linux-gnu/libpthread.so.0 /lib64/ld-linux-x86-64.so.2

which means that when resolving each symbol that occurs inside libgobject-2.0.so.0, then the dynamic linker will check that list of binaries in that order. Notice that python itself is the very first place checked. (This is debian, so python is the binary that exports the Python C API. On Fedora you'd instead see libpythonX.Y.so.1 listed in "scope 0". Scope 0 is the global scope, scope 1 is the local scope; see section 1.5.4 of dsohowto for details.)

@rmcgibbo (Member) commented Feb 8, 2016

Oh, okay. My mistake then. Perhaps this use of patchelf is "safer" than I had thought.

@njsmith (Member) commented Apr 3, 2016

This seems to be resolved, so closing... re-open if I missed something :-)

@SylvainCorlay

Hey @njsmith @matthew-brett @ncoghlan

We are currently working on making a manylinux wheel for a project that embeds Python. The project is xeus-python, a Jupyter kernel for the Python programming language, written in C++, which embeds the interpreter. (The work on the wheel is happening here.)

We are trying to figure out the best way to achieve that, given that libpython is missing. We are even considering building the Python interpreter that we will statically link into our executable as part of the wheel build.

Are you aware of another project that would be doing something similar?

@rdb (Contributor) commented Feb 14, 2020

@SylvainCorlay At Panda3D we also have a need to embed the interpreter. We solve this by building a fork of manylinux1 whose only difference is that it doesn't delete libpython.a from the container:
https://github.com/panda3d/manylinux

See #91, which talks about this use case.

@SylvainCorlay

> @SylvainCorlay At Panda3D we also have a need to embed the interpreter. We solve this by building a fork of manylinux1 whose only difference is that it doesn't delete libpython.a from the container:
> https://github.com/panda3d/manylinux
>
> See #91, which talks about this use case.

Wow, that is exactly what I was hoping for. It would be really great if manylinux were like this already.
I understand the decision to drop the .so from the conversation at https://mail.python.org/pipermail/distutils-sig/2016-February/028275.html, but the .a could be kept around.

@SylvainCorlay commented Feb 14, 2020

It seems that the base image for manylinux2010 is not pullable. Is it meant to be private?
https://quay.io/pypa/manylinux2010_centos-6-no-vsyscall

@njsmith @matthew-brett @ncoghlan would it be possible to make your base image public?

We are making the same change as @rdb but for the manylinux2010 docker image. Note that I would also support upstreaming this change into the manylinux docker image.
