The whl package downloaded from "Install using pip" can not run #9034

Closed
helinwang opened this issue Mar 13, 2018 · 2 comments · Fixed by #11806

Comments

@helinwang (Contributor)

After following the pip install steps (downloading and installing "paddlepaddle_gpu-0.11.0-cp27-cp27mu-linux_x86_64.whl") and then running PaddlePaddle, I got this error:

ImportError: libmkldnn.so.0: cannot open shared object file: No such file or directory
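
Roughly the steps that hit this (the wheel name is the one from the download page; the python one-liner is only an illustrative minimal check, assuming the v2 API shipped with 0.11.0):

pip install paddlepaddle_gpu-0.11.0-cp27-cp27mu-linux_x86_64.whl
python -c "import paddle.v2 as paddle"   # fails with the ImportError above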
@typhoonzero (Contributor)

It seems you need to export LD_LIBRARY_PATH to the directory where libmkldnn.so.0 is located.
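
For example (the path here is only a guess; point it at wherever libmkldnn.so.0 actually lives on your system):

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH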

@tigerneil (Contributor)

You need to install Intel MKL-DNN on Ubuntu 16.04 properly.

First, install the dependencies. Intel MKL-DNN has the following dependencies:

  1. CMake* – a cross-platform tool used to build, test, and package software.
  2. Doxygen* – a tool for generating documentation from annotated source code.

If these software tools are not already set up on your computer, you can install them by typing the following:

sudo apt install cmake

sudo apt install doxygen

Download and Build the Source Code
Clone the Intel MKL-DNN library from the GitHub repository by opening a terminal and typing the following command:

git clone https://github.com/01org/mkl-dnn.git

Once the clone has completed, you will find a directory named mkl-dnn in the directory where you ran the command. Navigate to it by typing:

cd mkl-dnn

As explained on the GitHub repository site, Intel MKL-DNN uses the optimized general matrix to matrix multiplication (GEMM) function from Intel MKL. The library supporting this function is also included in the repository and can be downloaded by running the prepare_mkl.sh script located in the scripts directory:

cd scripts && ./prepare_mkl.sh && cd ..

This script creates a directory named external and then downloads and extracts the library files to a directory named mkl-dnn/external/mklml_lnx*.
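
A quick sanity check that the download worked (the exact directory name carries a version suffix, so only the prefix is predictable):

ls external/   # expect a directory matching mklml_lnx_*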

The next command is executed from the mkl-dnn directory; it creates a subdirectory named build and then runs CMake and Make to generate the build system:

mkdir -p build && cd build && cmake .. && make
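
Optionally, you can validate the build before installing; assuming the test targets were built (the default for this CMake setup), run the following from the build directory:

make test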

Finalize the Installation
Finalize the installation of Intel MKL-DNN by executing the following command from the mkl-dnn/build directory:

sudo make install

Then copy the library into /usr/lib and create the version symlinks:

sudo cp libmkldnn.so.0.9.0 /usr/lib/libmkldnn.so.0.9.0
cd /usr/lib
sudo ln -s libmkldnn.so.0.9.0 libmkldnn.so.0
sudo ln -s libmkldnn.so.0 libmkldnn.so
sudo ldconfig

Done. Now try an example.
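
As a quick check that the loader can now find the library (the python one-liner assumes the v2 API import used by the 0.11.0 wheel):

ldconfig -p | grep libmkldnn
python -c "import paddle.v2 as paddle; print('ok')"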

Reference:

  1. https://software.intel.com/en-us/articles/intel-mkl-dnn-part-1-library-overview-and-installation
  2. https://www.2cto.com/kf/201708/670866.html
