
Docker Documentation incorrect. Benchmarks are all not working. Testing needed #12629

Open
TimoGoetze opened this issue Dec 27, 2024 · 1 comment
TimoGoetze commented Dec 27, 2024

Just try to follow this doc for Linux: https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md

Questions: everything below the `sycl-ls` step is not working at all (issues listed below). Is this the current doc, or are there other docs that are known to work?
I would love to see this fixed, or to get a hint where to find current, working examples.
My boss expects me to port our stuff to Arc GPUs and iGPUs.

  1. The env check script does not exist.

  2. Most directories have different locations.

  3. The HF downloader seems outdated and refuses to download models from the default config.

  4. pytorch/torchvision "libpng missing" warnings.

  5. `TypeError: 'NoneType' object is not iterable` at run.py:2331

As the doc was last changed only a few weeks ago, I thought it should be okay.

@ACupofAir
Contributor

Answers:

  1. `env-check.sh` has been removed from the image. You can download it from the source with `wget -c https://raw.githubusercontent.com/intel-analytics/ipex-llm/refs/heads/main/python/llm/scripts/env-check.sh` and run it with `bash env-check.sh`. The expected output should look like the following:
    [screenshot of expected env-check.sh output]
    Note that `xpu-smi` should be installed on the host machine, not in the Docker container, so you can ignore the recommendation in the last line.
  2. Following the Docker README, only `env-check.sh`'s path has been removed; the other directories are correct.
  3. It is recommended to download the HF model manually and then configure its path in config.yaml, as described in https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/DockerGuides/docker_pytorch_inference_gpu.md#run-inference-benchmark. For example, if the model path inside your Docker container is /llm/models/Llama-2-7b-chat-hf, then config.yaml's repo_id and local_model_hub should look like this:
    [screenshot of example config.yaml]
  4. Please ignore this warning; it has no impact.
  5. This error comes from an incorrect model path configuration in config.yaml. It is resolved once the path is configured correctly, as shown in the example in answer 3.
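For reference, here is a minimal sketch of how the relevant part of config.yaml might look after downloading the model manually. This assumes the `repo_id`/`local_model_hub` layout referenced in the linked doc; the exact key set in your version of the file may differ, so treat this as illustrative only:

```yaml
# Assumed layout: repo_id lists the models to benchmark;
# local_model_hub points at the directory holding the downloaded weights.
repo_id:
  - 'meta-llama/Llama-2-7b-chat-hf'
local_model_hub: '/llm/models'
```

With this layout, the benchmark would look for the model at /llm/models/Llama-2-7b-chat-hf, the in-container path mentioned above.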
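Since the `TypeError` in run.py ultimately comes from a mismatch between `repo_id` and `local_model_hub`, a quick sanity check before launching the benchmark can save a wasted run. The helper below is hypothetical (it is not part of ipex-llm) and assumes the benchmark resolves each model under `local_model_hub` using the last path component of its `repo_id`:

```python
import os

def check_model_paths(config: dict) -> list[str]:
    """Return a list of problems found in a parsed benchmark config.

    `config` is assumed to be the dict obtained by parsing config.yaml;
    the keys `repo_id` and `local_model_hub` follow the benchmark doc.
    """
    problems = []
    hub = config.get("local_model_hub")
    if not hub:
        problems.append("local_model_hub is missing or empty")
        return problems
    for repo in config.get("repo_id") or []:
        # Assumption: the model directory is named after the last
        # component of the repo_id, e.g. "Llama-2-7b-chat-hf".
        model_dir = os.path.join(hub, repo.split("/")[-1])
        if not os.path.isdir(model_dir):
            problems.append(f"model directory not found: {model_dir}")
    return problems
```

An empty return value means every configured model directory exists; any other result pinpoints the path that run.py would fail to load.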
