This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Commit 1cf2fe5: [v1.9.x] Update git repo reference (#20496)

* update git repo url

* remove incubator

* fix typo

Co-authored-by: Wei Chu <[email protected]>
waytrue17 and Wei Chu authored Aug 10, 2021
1 parent 13df0d7 commit 1cf2fe5
Showing 42 changed files with 74 additions and 74 deletions.
2 changes: 1 addition & 1 deletion 3rdparty/mshadow/README.md
@@ -50,5 +50,5 @@ Version

Projects Using MShadow
----------------------
-* [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/dmlc/mxnet)
+* [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/apache/mxnet)
* [CXXNet: A lightweight C++ based deep learning framework](https://github.com/dmlc/cxxnet)
8 changes: 4 additions & 4 deletions NEWS.md
@@ -3572,7 +3572,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
- ImageRecordIter now stores data in pinned memory to improve GPU memcopy speed.
### Bugfixes
- Cython interface is fixed. `make cython` and `python setup.py install --with-cython` should install the cython interface and reduce overhead in applications that use imperative/bucketing.
-- Fixed various bugs in Faster-RCNN example: https://github.com/dmlc/mxnet/pull/6486
+- Fixed various bugs in Faster-RCNN example: https://github.com/apache/mxnet/pull/6486
- Fixed various bugs in SSD example.
- Fixed `out` argument not working for `zeros`, `ones`, `full`, etc.
- `expand_dims` now supports backward shape inference.
@@ -3648,9 +3648,9 @@ This is the last release before the NNVM refactor.
- Support CuDNN v5 by @antinucleon
- More applications
- Speech recognition by @yzhang87
-  - [Neural art](https://github.com/dmlc/mxnet/tree/master/example/neural-style) by @antinucleon
-  - [Detection](https://github.com/dmlc/mxnet/tree/master/example/rcnn), RCNN bt @precedenceguo
-  - [Segmentation](https://github.com/dmlc/mxnet/tree/master/example/fcn-xs), FCN by @tornadomeet
+  - [Neural art](https://github.com/apache/mxnet/tree/v0.7.0/example/neural-style) by @antinucleon
+  - [Detection](https://github.com/apache/mxnet/tree/v0.7.0/example/rcnn), RCNN by @precedenceguo
+  - [Segmentation](https://github.com/apache/mxnet/tree/v0.7.0/example/fcn-xs), FCN by @tornadomeet
- [Face identification](https://github.com/tornadomeet/mxnet-face) by @tornadomeet
- More on the example

2 changes: 1 addition & 1 deletion R-package/R/zzz.R
@@ -63,7 +63,7 @@ NULL
if (!interactive() || stats::runif(1) > 0.1) return()

tips <- c(
"Need help? Feel free to open an issue on https://github.com/dmlc/mxnet/issues",
"Need help? Feel free to open an issue on https://github.com/apache/mxnet/issues",
"For more documents, please visit https://mxnet.io",
"Use suppressPackageStartupMessages() to eliminate package startup messages."
)
2 changes: 1 addition & 1 deletion R-package/vignettes/CallbackFunction.Rmd
@@ -75,7 +75,7 @@ head(logger$eval)
## How to write your own callback functions


-You can find the source code for two callback functions from [here](https://github.com/dmlc/mxnet/blob/master/R-package/R/callback.R) and they can be used as your template:
+You can find the source code for two callback functions [here](https://github.com/apache/mxnet/blob/v1.x/R-package/R/callback.R); they can be used as your template:

Basically, all callback functions follow the structure below:

2 changes: 1 addition & 1 deletion R-package/vignettes/CustomIterator.Rmd
@@ -19,7 +19,7 @@ You'll get two files, `mnist_train.csv` that contains 60,000 examples of hand written digits

## Custom CSV Iterator

-Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/dmlc/mxnet/blob/master/src/io/iter_csv.cc).
+Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/apache/mxnet/blob/master/src/io/iter_csv.cc).

For that we are going to use the R function `mx.io.CSVIter` as a base class. This class has the parameters `data.csv`, `data.shape`, and `batch.size`, and two main functions: `iter.next()`, which advances the iterator to the next batch of data, and `value()`, which returns the training data and the label.

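For readers coming from the Python API, the same wrapping pattern looks roughly like the sketch below. This is a minimal illustration only, assuming the `mx.io.DataIter` and `mx.io.CSVIter` interfaces of MXNet 1.x; the file name and the preprocessing hook are placeholders.

```python
import mxnet as mx

# A thin wrapper over the built-in CSVIter: delegate iteration to the
# base iterator and hook custom per-batch processing into next().
class WrappedCSVIter(mx.io.DataIter):
    def __init__(self, data_csv, data_shape, batch_size):
        super(WrappedCSVIter, self).__init__(batch_size)
        self.base = mx.io.CSVIter(data_csv=data_csv,
                                  data_shape=data_shape,
                                  batch_size=batch_size)
        self.provide_data = self.base.provide_data
        self.provide_label = self.base.provide_label

    def reset(self):
        self.base.reset()

    def next(self):
        batch = self.base.next()  # raises StopIteration when exhausted
        # ...custom transformations on batch.data / batch.label go here...
        return batch

# Hypothetical usage: 784-dimensional rows, batches of 100.
# it = WrappedCSVIter('mnist_train.csv', data_shape=(784,), batch_size=100)
```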
2 changes: 1 addition & 1 deletion R-package/vignettes/mnistCompetition.Rmd
@@ -1,7 +1,7 @@
# Handwritten Digits Classification Competition

[MNIST](http://yann.lecun.com/exdb/mnist/) is a handwritten digits image data set created by Yann LeCun. Every digit is represented by a 28x28 image. It has become a standard data set to test classifiers on simple image input. A neural network is no doubt a strong model for image classification tasks. There's a [long-term hosted competition](https://www.kaggle.com/c/digit-recognizer) on Kaggle using this data set.
-We will present the basic usage of [mxnet](https://github.com/dmlc/mxnet/tree/master/R-package) to compete in this challenge.
+We will present the basic usage of [mxnet](https://github.com/apache/mxnet/tree/v1.x/R-package) to compete in this challenge.

## Data Loading

4 changes: 2 additions & 2 deletions README.md
@@ -81,10 +81,10 @@ What's New
* [0.12.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
* [0.11.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.11.0) - MXNet 0.11.0 Release.
* [Apache Incubator](http://incubator.apache.org/projects/mxnet.html) - We are now an Apache Incubator project.
-* [0.10.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
+* [0.10.0 Release](https://github.com/apache/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
* [0.9.3 Release](./docs/architecture/release_note_0_9.md) - First 0.9 official release.
* [0.9.1 Release (NNVM refactor)](./docs/architecture/release_note_0_9.md) - NNVM branch is merged into master now. An official release will be made soon.
-* [0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
+* [0.8.0 Release](https://github.com/apache/mxnet/releases/tag/v0.8.0)

### Ecosystem News

2 changes: 1 addition & 1 deletion docker/Dockerfiles/Dockerfile.in.lib.cpu
@@ -24,6 +24,6 @@ FROM ubuntu:14.04
COPY install/cpp.sh install/
RUN install/cpp.sh

-RUN git clone --recursive https://github.com/dmlc/mxnet && cd mxnet && \
+RUN git clone --recursive https://github.com/apache/mxnet && cd mxnet && \
make -j$(nproc) && \
rm -r build
2 changes: 1 addition & 1 deletion docker/Dockerfiles/Dockerfile.in.lib.gpu
@@ -25,5 +25,5 @@ COPY install/cpp.sh install/
RUN install/cpp.sh

ENV BUILD_OPTS "USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1"
-RUN git clone --recursive https://github.com/dmlc/mxnet && cd mxnet && \
+RUN git clone --recursive https://github.com/apache/mxnet && cd mxnet && \
make -j$(nproc) $BUILD_OPTS
4 changes: 2 additions & 2 deletions docs/static_site/src/pages/api/faq/caffe.md
@@ -32,7 +32,7 @@ Key topics covered include the following:
## Converting Caffe trained models to MXNet

The converting tool is available at
-[tools/caffe_converter](https://github.com/dmlc/mxnet/tree/master/tools/caffe_converter). On
+[tools/caffe_converter](https://github.com/apache/mxnet/tree/v1.x/tools/caffe_converter). In
the rest of this section, we assume we are in the `tools/caffe_converter`
directory.

@@ -205,4 +205,4 @@ train = mx.io.CaffeDataIter(
### Put it all together

The complete example is available at
-[example/caffe](https://github.com/dmlc/mxnet/blob/master/example/caffe/)
+[example/caffe](https://github.com/apache/mxnet/blob/v1.x/example/caffe/)
4 changes: 2 additions & 2 deletions docs/static_site/src/pages/api/faq/cloud.md
@@ -113,7 +113,7 @@ The following commands build _MXNet_ with CUDA/CUDNN, Amazon S3, and distributed
training.

```bash
-git clone --recursive https://github.com/dmlc/mxnet
+git clone --recursive https://github.com/apache/mxnet
cd mxnet; cp make/config.mk .
echo "USE_CUDA=1" >>config.mk
echo "USE_CUDA_PATH=/usr/local/cuda" >>config.mk
```

@@ -192,7 +192,7 @@ cat hosts | xargs -I{} ssh -o StrictHostKeyChecking=no {} 'uname -a; pgrep python'
```

***Note:*** The preceding example is very simple to train and therefore isn't a good
-benchmark for distributed training. Consider using other [examples](https://github.com/dmlc/mxnet/tree/master/example/image-classification).
+benchmark for distributed training. Consider using other [examples](https://github.com/apache/mxnet/tree/v1.x/example/image-classification).

### More Options
#### Use Multiple Data Shards
6 changes: 3 additions & 3 deletions docs/static_site/src/pages/api/faq/multi_devices.md
@@ -71,7 +71,7 @@ import mxnet as mx
module = mx.module.Module(context=[mx.gpu(0), mx.gpu(2)], ...)
```
while if the program accepts a `--gpus` flag (as seen in
-[example/image-classification](https://github.com/dmlc/mxnet/tree/master/example/image-classification)),
+[example/image-classification](https://github.com/apache/mxnet/tree/v1.x/example/image-classification)),
then we can try
```bash
python train_mnist.py --gpus 0,2 ...
```

@@ -130,7 +130,7 @@ When using a large number of GPUs, e.g. >=4, we suggest using `device` for better
Launching a distributed job is a bit different from running on a single
machine. MXNet provides
-[tools/launch.py](https://github.com/dmlc/mxnet/blob/master/tools/launch.py) to
+[tools/launch.py](https://github.com/apache/mxnet/blob/v1.x/tools/launch.py) to
start a job by using `ssh`, `mpi`, `sge`, or `yarn`.

An easy way to set up a cluster of EC2 instances for distributed deep learning
@@ -139,7 +139,7 @@ If you do not have a cluster, you can check the repository before you continue.

Assume we are in the directory `mxnet/example/image-classification`
and want to train LeNet to classify MNIST images, as demonstrated here:
-[train_mnist.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/train_mnist.py).
+[train_mnist.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/train_mnist.py).

On a single machine, we can run:

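(The single-machine command itself is collapsed in this view.) As a rough sketch of the two patterns this page describes, multi-GPU data parallelism and a distributed kvstore, here is what the Python side can look like under the MXNet 1.x `Module` API; the network is a placeholder, and the kvstore lines only work inside a job started by `launch.py`:

```python
import mxnet as mx

# Placeholder network; any Symbol works here.
data = mx.sym.Variable('data')
fc = mx.sym.FullyConnected(data, num_hidden=10)
net = mx.sym.SoftmaxOutput(fc, name='softmax')

# Data parallelism: each batch is split evenly across the listed
# devices, so keep the batch size a multiple of the device count.
mod = mx.mod.Module(net, context=[mx.gpu(0), mx.gpu(2)])

# For a distributed job, create a distributed kvstore and hand it to
# fit(). This only works under the environment set up by launch.py.
# kv = mx.kvstore.create('dist_sync')
# mod.fit(train_iter, kvstore=kv, num_epoch=10)   # train_iter not shown
```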
6 changes: 3 additions & 3 deletions docs/static_site/src/pages/api/faq/new_op.md
@@ -144,12 +144,12 @@ To use the custom operator, create a mx.sym.Custom symbol with op_type as the registered name
mlp = mx.symbol.Custom(data=fc3, name='softmax', op_type='softmax')
```

-Please see the full code for this example [here](https://github.com/dmlc/mxnet/blob/master/example/numpy-ops/custom_softmax.py).
+Please see the full code for this example [here](https://github.com/apache/mxnet/blob/v1.x/example/numpy-ops/custom_softmax.py).

## C++
With MXNet v0.9 (the NNVM refactor) or later, creating new operators has become easier.
Operators are now registered with NNVM.
-The following code is an example on how to register an operator (checkout [src/operator/tensor](https://github.com/dmlc/mxnet/tree/master/src/operator/tensor) for more examples):
+The following code is an example of how to register an operator (check out [src/operator/tensor](https://github.com/apache/mxnet/tree/v1.x/src/operator/tensor) for more examples):

```c++
NNVM_REGISTER_OP(abs)
```

@@ -189,7 +189,7 @@ In this section, we will go through the basic attributes MXNet expects for all operators
You can find their definitions in the following two files:

- [nnvm/op_attr_types.h](https://github.com/dmlc/nnvm/blob/master/include/nnvm/op_attr_types.h)
-- [mxnet/op_attr_types.h](https://github.com/dmlc/mxnet/blob/master/include/mxnet/op_attr_types.h)
+- [mxnet/op_attr_types.h](https://github.com/apache/mxnet/blob/v1.x/include/mxnet/op_attr_types.h)

#### Descriptions (Optional)

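For context, the `custom_softmax.py` example linked above follows the `mx.operator.CustomOp` pattern. A condensed sketch of that pattern, assuming the MXNet 1.x Python API (shapes and numerics simplified):

```python
import mxnet as mx
import numpy as np

class Softmax(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        x = in_data[0].asnumpy()
        y = np.exp(x - x.max(axis=1, keepdims=True))
        y /= y.sum(axis=1, keepdims=True)
        self.assign(out_data[0], req[0], mx.nd.array(y))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        label = in_data[1].asnumpy().ravel().astype(np.int64)
        y = out_data[0].asnumpy()
        y[np.arange(label.shape[0]), label] -= 1.0  # grad of softmax + CE
        self.assign(in_grad[0], req[0], mx.nd.array(y))

@mx.operator.register("softmax")
class SoftmaxProp(mx.operator.CustomOpProp):
    def __init__(self):
        # The loss gradient is computed in backward(), so no top gradient
        # is needed from the layers above.
        super(SoftmaxProp, self).__init__(need_top_grad=False)

    def list_arguments(self):
        return ['data', 'label']

    def infer_shape(self, in_shape):
        data_shape = in_shape[0]
        label_shape = (in_shape[0][0],)
        return [data_shape, label_shape], [data_shape], []

    def create_operator(self, ctx, shapes, dtypes):
        return Softmax()

# Wired up exactly as in the snippet above:
# mlp = mx.symbol.Custom(data=fc3, name='softmax', op_type='softmax')
```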
10 changes: 5 additions & 5 deletions docs/static_site/src/pages/api/faq/perf.md
@@ -66,7 +66,7 @@ So whether you specify `cpu(0)` or `cpu()`, _MXNet_ will use all CPU cores on the machine
### Scoring results
The following table shows the performance of MXNet-1.2.0.rc1,
namely the number of images that can be predicted per second.
-We used [example/image-classification/benchmark_score.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/benchmark_score.py)
+We used [example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/benchmark_score.py)
to measure the performance on different AWS EC2 machines.

AWS EC2 C5.18xlarge:
@@ -150,7 +150,7 @@ and V100 (EC2 p3.2xlarge).
### Scoring results

Based on
-[example/image-classification/benchmark_score.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/benchmark_score.py)
+[example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/benchmark_score.py)
and MXNet-1.2.0.rc1, with cuDNN 7.0.5

- K80 (single GPU)
@@ -213,7 +213,7 @@ Below is the performance result on V100 using float 16.
### Training results

Based on
-[example/image-classification/train_imagenet.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/train_imagenet.py)
+[example/image-classification/train_imagenet.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/train_imagenet.py)
and MXNet-1.2.0.rc1, with cuDNN 7.0.5. The benchmark script is available
[here](https://github.com/mli/mxnet-benchmark/blob/master/run_vary_batch.sh),
where the batch size for Alexnet is increased by 16x.
@@ -260,7 +260,7 @@ It's critical to use the proper type of `kvstore` to get the best performance.
Refer to [multi_device.md](https://mxnet.io/api/faq/distributed_training.html) for more
details.

-Besides, we can use [tools/bandwidth](https://github.com/dmlc/mxnet/tree/master/tools/bandwidth)
+Besides, we can use [tools/bandwidth](https://github.com/apache/mxnet/tree/v1.x/tools/bandwidth)
to find the communication cost per batch.
Ideally, the communication cost should be less than the time to compute a batch.
To reduce the communication cost, we can consider:
@@ -293,7 +293,7 @@ by summarizing at the operator level, instead of a function, kernel, or instruction level

The profiler can be turned on with an [environment variable]({{'/api/faq/env_var#control-the-profiler' | relative_url}})
for an entire program run, or programmatically for just part of a run. Note that by default the profiler hides the details of each individual operator, and you can reveal the details by setting environment variables `MXNET_EXEC_BULK_EXEC_INFERENCE`, `MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN` and `MXNET_EXEC_BULK_EXEC_TRAIN` to 0.
-See [example/profiler](https://github.com/dmlc/mxnet/tree/master/example/profiler)
+See [example/profiler](https://github.com/apache/mxnet/tree/v1.x/example/profiler)
for complete examples of how to use the profiler in code, or [this tutorial](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/profiler.html) on how to profile MXNet performance.

Briefly, the Python code looks like:
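(The original snippet is collapsed in this view; what follows is a minimal sketch, assuming the `mx.profiler` module of MXNet 1.x.)

```python
import mxnet as mx

# Record all operators and aggregate per-operator statistics.
mx.profiler.set_config(profile_all=True, aggregate_stats=True,
                       filename='profile_output.json')

mx.profiler.set_state('run')    # start profiling
x = mx.nd.random.uniform(shape=(1024, 1024))
y = mx.nd.dot(x, x)
mx.nd.waitall()                 # flush async work so it is captured
mx.profiler.set_state('stop')   # stop profiling

print(mx.profiler.dumps())      # aggregated statistics as text
```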
4 changes: 2 additions & 2 deletions docs/static_site/src/pages/api/faq/recordio.md
@@ -34,8 +34,8 @@ RecordIO implements a file format for a sequence of records. We recommend storing

We provide two tools for creating a RecordIO dataset.

-* [im2rec.cc](https://github.com/dmlc/mxnet/blob/master/tools/im2rec.cc) - implements the tool using the C++ API.
-* [im2rec.py](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) - implements the tool using the Python API.
+* [im2rec.cc](https://github.com/apache/mxnet/blob/v1.x/tools/im2rec.cc) - implements the tool using the C++ API.
+* [im2rec.py](https://github.com/apache/mxnet/blob/v1.x/tools/im2rec.py) - implements the tool using the Python API.

Both provide the same output: a RecordIO dataset.

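Records can also be written and read directly from Python, which is handy for inspecting a dataset; a small sketch, assuming the `mx.recordio` module (the file name is arbitrary):

```python
import mxnet as mx

# Write a few raw records.
writer = mx.recordio.MXRecordIO('data.rec', 'w')
for i in range(3):
    writer.write(('record-%d' % i).encode('utf-8'))
writer.close()

# Read them back; read() returns None at end of file.
reader = mx.recordio.MXRecordIO('data.rec', 'r')
while True:
    item = reader.read()
    if item is None:
        break
    print(item)
reader.close()
```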
2 changes: 1 addition & 1 deletion docs/static_site/src/pages/api/faq/s3_integration.md
@@ -70,7 +70,7 @@ aws s3 sync ./training-data s3://bucket-name/training-data
Once the data is in S3, it is very straightforward to use it from MXNet. Any data iterator that can read/write data from a local drive can also read/write data from S3.
-Let's modify an existing example code in MXNet repository to read data from S3 instead of local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/dmlc/mxnet/blob/master/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll do the following change to read the data from S3 instead.
+Let's modify an existing example in the MXNet repository to read data from S3 instead of local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/apache/mxnet/blob/v1.x/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll make the following change to read the data from S3 instead.
```
~/mxnet$ sed -i -- 's/data\//s3:\/\/bucket-name\/training-data\//g' ./tests/python/train/test_conv.py
```
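After that change, the iterator reads directly from S3. More generally, any built-in iterator accepts `s3://` URIs in place of local paths; a sketch, assuming a build with `USE_S3=1` and AWS credentials in the environment (bucket and key names are placeholders):

```python
import mxnet as mx

# s3:// URIs work wherever a local path would.
train_iter = mx.io.MNISTIter(
    image='s3://bucket-name/training-data/train-images-idx3-ubyte',
    label='s3://bucket-name/training-data/train-labels-idx1-ubyte',
    batch_size=100,
    shuffle=True)
```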
6 changes: 3 additions & 3 deletions docs/static_site/src/pages/api/faq/smart_device.md
@@ -39,7 +39,7 @@ All that's necessary to create the library is to compile that single file.
This simplifies the problem of porting to various platforms.

Thanks to [Jack Deng](https://github.com/jdeng),
-MXNet provides an [amalgamation](https://github.com/dmlc/mxnet/tree/master/amalgamation) script
+MXNet provides an [amalgamation](https://github.com/apache/mxnet/tree/v1.x/amalgamation) script
that compiles all code needed for prediction based on trained DL models into a single `.cc` file,
containing approximately 30K lines of code. This code only depends on the BLAS library.
Moreover, we've also created an even more minimal version,
@@ -53,8 +53,8 @@ Porting to another language with a C foreign function interface requires little
See the following examples on GitHub:

- Go: [https://github.com/jdeng/gomxnet](https://github.com/jdeng/gomxnet)
-- Java: [https://github.com/dmlc/mxnet/tree/master/amalgamation/jni](https://github.com/dmlc/mxnet/tree/master/amalgamation/jni)
-- Python: [https://github.com/dmlc/mxnet/tree/master/amalgamation/python](https://github.com/dmlc/mxnet/tree/master/amalgamation/python)
+- Java: [https://github.com/apache/mxnet/tree/v1.x/amalgamation/jni](https://github.com/apache/mxnet/tree/v1.x/amalgamation/jni)
+- Python: [https://github.com/apache/mxnet/tree/v1.x/amalgamation/python](https://github.com/apache/mxnet/tree/v1.x/amalgamation/python)


If you plan to amalgamate your system, there are a few guidelines you ought to observe when building the project:
2 changes: 1 addition & 1 deletion docs/static_site/src/pages/api/faq/visualize_graph.md
@@ -84,5 +84,5 @@ You should see a computation graph something like the following image:

# References
-* [Example MXNet Matrix Factorization](https://github.com/dmlc/mxnet/blob/master/example/recommenders/demo1-MF.ipynb)
+* [Example MXNet Matrix Factorization](https://github.com/apache/mxnet/blob/v1.x/example/recommenders/demo1-MF.ipynb)
* [Visualizing CNN Architecture of MXNet Tutorials](http://josephpcohen.com/w/visualizing-cnn-architectures-side-by-side-with-mxnet/)
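For the Python side, the same kind of graph can be produced with `mx.viz.plot_network`; a minimal sketch, assuming the `graphviz` Python package is installed (the network is illustrative):

```python
import mxnet as mx

# A tiny network to visualize.
data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data, num_hidden=64, name='fc1')
net = mx.sym.Activation(net, act_type='relu', name='relu1')
net = mx.sym.FullyConnected(net, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(net, name='softmax')

# plot_network returns a graphviz Digraph; render() writes network.pdf.
graph = mx.viz.plot_network(net, shape={'data': (1, 784)})
graph.render('network')
```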
@@ -175,7 +175,7 @@ You also can save the training and evaluation errors for later use by passing a
How to Write Your Own Callback Functions
----------

-You can find the source code for the two callback functions on [GitHub](https://github.com/dmlc/mxnet/blob/master/R-package/R/callback.R) and use it as a template:
+You can find the source code for the two callback functions on [GitHub](https://github.com/apache/mxnet/blob/v1.x/R-package/R/callback.R) and use it as a template:

Basically, all callback functions follow this structure:

@@ -43,7 +43,7 @@ You'll get two files, `mnist_train.csv` that contains 60,000 examples of hand written digits

Custom CSV Iterator
----------
-Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/dmlc/mxnet/blob/master/src/io/iter_csv.cc).
+Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/apache/mxnet/blob/v1.x/src/io/iter_csv.cc).

For that we are going to use the R function `mx.io.CSVIter` as a base class. This class has the parameters `data.csv`, `data.shape`, and `batch.size`, and two main functions: `iter.next()`, which advances the iterator to the next batch of data, and `value()`, which returns the training data and the label.

@@ -116,7 +116,7 @@ To get an idea of what is happening, view the computation graph from R:
graph.viz(model$symbol)
```

-[<img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/knitr/graph.computation.png">](https://github.com/dmlc/mxnet)
+[<img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/knitr/graph.computation.png">](https://github.com/apache/mxnet)

```r
preds = predict(model, test.x)
```
@@ -26,7 +26,7 @@ Handwritten Digits Classification Competition
=============================================

[MNIST](http://yann.lecun.com/exdb/mnist/) is a handwritten digits image data set created by Yann LeCun. Every digit is represented by a 28 x 28 pixel image. It's become a standard data set for testing classifiers on simple image input. A neural network is a strong model for image classification tasks. There's a [long-term hosted competition](https://www.kaggle.com/c/digit-recognizer) on Kaggle using this data set.
-This tutorial shows how to use [MXNet](https://github.com/dmlc/mxnet/tree/master/R-package) to compete in this challenge.
+This tutorial shows how to use [MXNet](https://github.com/apache/mxnet/tree/v1.x/R-package) to compete in this challenge.

## Loading the Data
