add mxnet #17580
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipes/mxnet: Documentation on acceptable licenses can be found here.
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
Hi! This is the friendly automated conda-forge-linting service. I was trying to look for recipes to lint for you, but couldn't find any.
Force-pushed from 20e3417 to 341ef1a
@conda-forge/staged-recipes ready for review. The CI may not finish the cuda builds in time due to the complexity of the build --- I could try to upload the logs at some point. Here's a file from this recipe: https://anaconda.org/ngam/mxnet/v2.0.0.beta0.rc1/download/osx-arm64/mxnet-v2.0.0.beta0.rc1-py39h32763b8_0_cpu.tar.bz2 but I have since tightened the pins on opencv and libjpeg-turbo for further safety. I would potentially also like to add a modification on numpy (selecting the corresponding backend by default, though I'm not sure if that's doable or desirable).
recipes/mxnet/meta.yaml
Outdated
- onednn
- cudnn # [cuda_compiler_version != "None"]
- nccl # [cuda_compiler_version != "None"]
- numpy >=1.17
Numpy is quite forward compatible, but it isn't really backward compatible. Doing this kind of pinning means the build picks up
2022-01-22T08:20:42.7430135Z + numpy 1.21.5 py37hf2998dd_0 conda-forge/linux-64 6 MB
which makes the package compatible only with numpy 1.21.5.
Please follow
https://conda-forge.org/docs/maintainer/knowledge_base.html#building-against-numpy
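For reference, the pattern the linked docs describe looks roughly like this (a hedged sketch of a generic recipe, not the actual mxnet meta.yaml):

```yaml
# Sketch of the conda-forge "building against numpy" pattern.
# Selector comments and section contents are illustrative.
requirements:
  host:
    # A plain "numpy" entry lets conda-build inject the pinned
    # build-time version from the conda-forge global pinnings.
    - python
    - numpy
  run:
    # numpy's run_exports adds a compatible lower bound at runtime
    # automatically; older recipes spelled this out explicitly as
    # {{ pin_compatible('numpy') }}. Either way, no manual
    # ">=1.17"-style pin is needed here.
    - python
```

The key point is that the recipe itself should not hard-code a numpy version range; the global config owns that decision.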
So, should I pin numpy ==1.17.*, or should I just drop the pinning here?
I actually don't know if numpy needs to be in this section... I will need to check that too!
Just drop the pin. It will get the version from the config.
OK, will remove. I now think numpy should only be in run... Testing this takes forever, which is why I am hesitant to keep going back and forth, but I will try tweaking things and tidying up locally first.
Thanks a lot! 🥇
recipes/mxnet/meta.yaml
Outdated
- requests >=2.20.0,<3
- python-graphviz >=0.8.1,<0.9.0
- python_version
- {{ pin_compatible('libjpeg-turbo') }}
This is going to cause interoperability problems. You should simply build with jpeg. Do you need turbo, or is it just for performance?
I believe they explicitly require turbo, but I can look deeper into it. It is actually built against libjpeg-turbo, and I am not really sure if it is a runtime dep. I added all this pinning because, when I was testing locally, I would run into problems like "can't find library libopencv.xxx.405", so I thought oops, I'd better pin them all to start and relax them later. What's your recommendation here? We will potentially have all sorts of problematic pinning, because these libraries are used in the build itself, not merely as flexible runtime deps, and that is quite challenging for me to understand as a beginner!
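For libraries linked at build time, the usual conda-forge approach is to list them in host and rely on their run_exports to generate the runtime pins, instead of hand-writing pin_compatible entries. A hedged sketch (package names are illustrative, and whether each package actually defines run_exports would need checking):

```yaml
# Sketch: rely on run_exports from host deps instead of manual
# pin_compatible() entries in run. Names here are illustrative.
requirements:
  host:
    - libopencv      # its run_exports adds a compatible run pin
    - libjpeg-turbo  # likewise, if the package defines run_exports
  run:
    # Nothing needed here for the linked libraries; conda-build
    # copies the run_exports pins in automatically, with bounds
    # chosen by the library's own maintainers.
    - python
```

This keeps the "can't find library libopencv.xxx" class of breakage fixed at the source, since the pin width is maintained in one place rather than in every downstream recipe.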
recipes/mxnet/meta.yaml
Outdated
- {{ pin_compatible('llvm-openmp') }} # [osx]
- openblas
- {{ pin_compatible('mkl') }} # [(target_platform != "osx-arm64") and (cuda_compiler_version == "None")]
- {{ pin_compatible('mkl-include') }} # [(target_platform != "osx-arm64") and (cuda_compiler_version == "None")]
Do you need development headers?
There is an explicit line in build.sh for the MKL include, but I haven't really checked whether the build passes without it, and the whole onednn/mkl build is very confusing to me :(
I think mkl-include can be dropped, but I want to check that first.
@hmaarrfk, this is going to be a challenging and interesting feedstock to maintain --- while I am only running some of the "supported" distributions here (config files), the whole thing upstream is really flexible, and we can potentially do a lot of interesting mixes for our purposes and help them out with distributing and optimizing their build. For example, currently, I believe at least some of onednn gets built --- even though the build looks for it at the start to ensure compatibility, which is very confusing!
Anyway, would you like me to add you as a maintainer here? This way you'd be on the tensorflow, pytorch, and mxnet feedstocks --- the whole axis lol. And you will teach me all the tricks :) Copying @h-vetinari to gauge interest in joining the maintainership too.
Edit: What I mean is: Please join me in maintaining this 🙇
recipes/mxnet/meta.yaml
Outdated
extra:
  recipe-maintainers:
    - barry-jin
This person should chime in before we can merge. I suggest you add them later, in the feedstock itself, instead of now at the beginning.
@barry-jin indicated willingness (they are also a maintainer upstream). See above for the "like" and here: #4447 (comment). But happy to remove for now and tidy things up later.
Btw, I couldn't figure out a clean way to get the library separated. The python package itself can be noarch and can be built separately. This is one main thing I wanna do once we get this into a feedstock. Another thing is being involved with niche (?) dependencies so that we can avoid "vendoring" them (to be clear, we are not vendoring much, but still I would like to figure out a way to get dlpack, dmlc, nnvm (libtvm) and mshadow from elsewhere --- though, it is not clear to me exactly how; I tried and I failed, so I am going to delay that until we get this up and running in a feedstock). I will also have one last swipe at the messy deps once this run concludes.
deps need fixing
While this is actually more or less ready, I'd rather spend a little more time trying to split it into a library and python pkg instead of this. Converting back to draft, will ping again soon.
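One way such a split is often attempted is a multi-output recipe. A rough sketch (output and script names here are hypothetical, and nothing below has been tested against the mxnet build system):

```yaml
# Rough multi-output sketch for a libmxnet / mxnet split.
# Output names, scripts, and sections are hypothetical.
outputs:
  - name: libmxnet
    script: build_lib.sh          # builds and installs the shared library
    requirements:
      build:
        - {{ compiler('cxx') }}

  - name: mxnet
    script: install_python.sh     # installs only the python bindings
    build:
      noarch: python              # possible once all binary bits live in libmxnet
    requirements:
      host:
        - python
        - pip
      run:
        - python
        - {{ pin_subpackage('libmxnet', exact=True) }}
```

The exact pin on the libmxnet subpackage keeps the python layer locked to the library build it was packaged with, while letting the library rebuild independently.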
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR. Here's what I've got... For recipes/libmxnet:
@ngam, I missed this PR at the time, and I don't see much context for its closure (except that I remember you don't like having open PRs...?). What would you think about giving this one another shot?
I don't think we should... At the time when I was trying to push this, it appeared as though mxnet was going to move to a community model (apache), and there was a push to get a big release (2.0.0) out. However, things stalled to essentially a halt. It was clear to me at the time that something was up, and unsurprisingly --- but sadly --- the project had been retired when I checked again about a year later... This is kinda similar to other "disappointing" news wrt tvm, etc. :/ So, I think in conda-forge we should focus on the most active frameworks in this space (tf, jax, pytorch --- and as you know, they're already very challenging).
Fixes #4447
Closes #14832
Checklist
- Source (e.g. url) rather than a repo (e.g. git_url) is used in your recipe (see here for more details).