Environmental variable error #215
I think you would need to install CUDA in the Docker image. Using the approach above, you could extend the assemble part of s2i with your own assemble logic to add CUDA and other dependencies.
Hi,
Do you have a MyModel.py file with a MyModel class, as discussed here, when you did the wrapping?
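For reference, the wrapper expects roughly this shape; a minimal sketch, assuming the file and class are both named MyModel (the `predict` body here is a placeholder that echoes its input, not a real model):

```python
# MyModel.py -- minimal sketch of the class the Python wrapper expects.
class MyModel:
    def __init__(self):
        # Load model artifacts (e.g. a TensorFlow graph) here.
        pass

    def predict(self, X, features_names):
        # Return predictions for the feature matrix X.
        # Placeholder: echo the input back unchanged.
        return X
```

The wrapper imports the class named by MODEL_NAME in `.s2i/environment` and calls `predict` on each request.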
@cliveseldon Yes, I have.
OK. I suggest you run the docker image with bash to investigate, e.g.,
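The elided command presumably looks something like this (the image name is a placeholder; depending on the image's entrypoint, `docker run -it <image> bash` may also work):

```shell
# Placeholder image name: substitute the tag you passed to `s2i build`.
docker run --rm -it --entrypoint bash my-seldon-model:latest
# Inside the container, inspect the service directory:
#   ls -lart /microservice
```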
I ran the image as you suggested.
OK - can you show what is in that folder to check MyModel.py exists there?
root@e764be690a3a:/microservice# ls -lart cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb
You do need a runtime inference file that is going to do the prediction. One is shown in the tutorial: https://github.com/SeldonIO/seldon-core/blob/master/docs/wrappers/python.md#python-file
My build path contains these files, but while building the image, Tensorflow_model and MyModel.py are not picked up.
My assemble file contains:
Does your custom assemble script invoke the default assemble script as discussed, e.g.:
No, the custom assemble script is not invoking the default script.
I am unable to find the default assemble file at the path /usr/libexec/s2i/assemble.
Not sure what to suggest. Have you followed the instructions here on finding the location of the scripts?
I think for the python builder images the default assemble script will be at /s2i/bin.
I think your custom assemble script should take the form
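The elided script probably looks something like the sketch below. The default-script path and the CUDA steps are assumptions (both /usr/libexec/s2i/assemble and /s2i/bin are mentioned in this thread as candidate locations, and the .deb filename is taken from the `ls` output above):

```shell
#!/bin/bash
# .s2i/bin/assemble -- sketch of a custom assemble script.
set -e

# 1. Run the builder image's default assemble script first
#    (location varies by builder image; adjust the path as needed).
/usr/libexec/s2i/assemble

# 2. Then layer GPU dependencies on top, e.g. installing the CUDA
#    package that provides libcublas.so.9.0 (steps are illustrative;
#    the local repo may need additional key/setup steps).
dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb
apt-get update && apt-get install -y cuda
```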
Can you suggest where to place the custom assemble script and how to invoke it while building the image?
You just need to put the above code in .s2i/bin/assemble and then add your custom GPU installation code. I think you almost got this far already?
Hi,
The gRPC and REST endpoints will have been exposed via the API gateway or Ambassador. I suggest you run one of the notebook examples, which include example Python code to make predictions over these APIs.
Can you provide a link to those notebook examples?
Hi, I port-forwarded and ran:
XXXX@XX:~$ curl -d 'json={"data":{"tensor":{"shape":[1,37],"values":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37]}}}' http://localhost:8080/predict
If you look in https://github.com/SeldonIO/seldon-core/blob/master/notebooks/seldon_utils.py
So the endpoint under Ambassador is
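A hedged sketch of what that request typically looks like (host and deployment name are placeholders; the path follows Seldon's usual Ambassador routing convention, so verify it against your deployment):

```shell
curl -H "Content-Type: application/json" \
     -d '{"data":{"tensor":{"shape":[1,2],"values":[1.0,2.0]}}}' \
     http://<ambassador-host>/seldon/<deployment-name>/api/v0.1/predictions
```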
Hi, I get the following error:
{"timestamp":1537279713335,"status":415,"error":"Unsupported Media Type","exception":"org.springframework.web.HttpMediaTypeNotSupportedException","message":"Content type 'application/x-www-form-urlencoded' not supported","path":"/api/v0.1/predictions"}
Try adding the following to curl:
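The elided flag is almost certainly an explicit content-type header, given the 415 "Unsupported Media Type" error above (an assumption, since the original snippet was lost):

```shell
-H "Content-Type: application/json"
```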
I have tried that and I get this error:
You are sending "json=X" but you should be sending just the JSON.
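To illustrate the difference, here is a sketch of the two request bodies (the payload values are placeholders):

```python
import json

# The payload is the JSON document itself -- not a form field named "json".
payload = {
    "data": {
        "tensor": {
            "shape": [1, 3],
            "values": [1.0, 2.0, 3.0],
        }
    }
}

# Wrong: form-encoded body, i.e. what `curl -d 'json={...}'` sends.
wrong_body = "json=" + json.dumps(payload)

# Right: the raw JSON string, sent with Content-Type: application/json.
right_body = json.dumps(payload)
```

The server can parse `right_body` back into the original document; `wrong_body` is not valid JSON at all.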
@cliveseldon
I suggest we connect on our Slack channel.
Yes, we can. My Slack URL is sathiez.slack.com.
I tried as you said, but I still get the following error:
Your JSON is invalid. Try your previous one.
Hi,
Can you provide the full stack trace and error?
Hi,
I built a Seldon image using "s2i build 'src-folder' seldonio/seldon-core-s2i-python3:0.1 'imageName'".
I get the following error:
"ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory"
.s2i/environment:
MODEL_NAME=MyModel
API_TYPE=REST
SERVICE_TYPE=MODEL
PERSISTENCE=0
LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64
requirements.txt:
tensorflow-gpu==1.10.1
Please help me resolve this error. :)
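The ImportError above means the CUDA runtime library is not on the loader path inside the container. A quick way to check from Python inside the container (the library name is taken from the error message; the helper below is a sketch, not part of Seldon):

```python
import ctypes

def cuda_lib_available(lib="libcublas.so.9.0"):
    """Return True if the given shared library can be dlopen'd."""
    try:
        ctypes.CDLL(lib)
        return True
    except OSError:
        return False

# In a container without CUDA installed this returns False, which is
# the same condition that makes `import tensorflow` fail.
```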