fix import from inference package #57
Merged
This was the change introduced by the UI change. @KepingYan can explain this.
Talked with @KepingYan; the simplest fix is to use PYTHONPATH, which requires minimal changes. Otherwise we can't run the script locally without installing it into the conda env.
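A minimal sketch of the PYTHONPATH workaround, assuming the script sits at the root of an llm-on-ray checkout with the `inference/` directory beside it (the path handling here is an assumption, not the project's actual setup):

```python
# Equivalent of running `export PYTHONPATH=/path/to/llm-on-ray` before the script,
# done in-process so the local `inference` package is found without a conda install.
import os
import sys

# Assumption: this file lives at the repo root, next to the inference/ directory.
REPO_ROOT = os.path.dirname(os.path.abspath(__file__))

# Prepending makes the checkout's inference/ shadow any installed copy of the package.
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)

from inference.inference_config import InferenceConfig, DEVICE_CPU  # noqa: E402
```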
zhangjian94cn pushed a commit to zhangjian94cn/llm-on-ray that referenced this issue on Feb 4, 2024:
…ntel#57) * slim dockerfile * remove credentials * rename * add postfix * update * push 1 version of dp * add new code * remove the old code * revert * remove unused libs * add new package * add parquet support * change name * use output_dir instead of output_prefix * merge * remove unused file * fix typo * add more automation
I am not sure why finetune and inference are installed as packages in the conda env. Depending on whether inference exists in the current directory and where the script is run from, either the installed package or the local package will be picked up at import time, which is confusing:
from inference.inference_config import InferenceConfig, DEVICE_CPU
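For reference, a quick way to see which copy of `inference` Python actually resolves (a diagnostic sketch, not part of the repo):

```python
import importlib.util

# Prints whether `inference` resolves to the conda-installed package or the
# local ./inference directory; the result depends on sys.path order.
spec = importlib.util.find_spec("inference")
print(spec.origin if spec is not None else "inference not found on sys.path")
```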