This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Link fixes4 #16764

Merged: 5 commits, Nov 13, 2019
@@ -267,8 +267,7 @@ finetune_net.export("flower-recognition", epoch=epochs)
MXNet provides various useful tools and interfaces for deploying your model for inference. For example, you can use [MXNet Model Server](https://github.com/awslabs/mxnet-model-server) to start a service and host your trained model easily.
Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python](/api/python.html), [Java](/api/java.html), [Scala](/api/scala.html), and [C++](/api/cpp) APIs.

-Here we will briefly introduce how to run inference using the Module API in Python. There is a more detailed explanation available in the [Predict Image Tutorial](https://mxnet.apache.org/tutorials/python/predict_image.html).
-In general, prediction consists of the following steps:
+Here we will briefly introduce how to run inference using the Module API in Python. In general, prediction consists of the following steps:
Contributor:
This is the broken link discussed in #16724

Contributor:

Do you want to add it back, or should the image-classification tutorial no longer exist? What's the solution for that issue?

Contributor:

@ChaiBapchya #16724 is still open, so I wondered if something is pending. I personally don't know the expected solution; if everything is good we can close it.

But the following link was mentioned in the issue; I'm not sure whether it needs to be considered.

The true link should be: https://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/predict_imagenet.ipynb

Contributor:

I see there was a link which previously existed and has now been removed. Have a word with Aaron/Talia; they have a better idea.

1. Load the model architecture (symbol file) and trained parameter values (params file)
2. Load the synset file for label names
3. Load the image and apply the same transformation we did on the validation dataset during training
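
The remaining steps are collapsed in this diff. For reference, a minimal sketch of the whole flow with the Module API might look like the following, assuming the `flower-recognition` checkpoint prefix and the `epochs` value from the export call above; the `synset.txt` label file, the `test.jpg` sample image, and the 224x224 input size are illustrative assumptions, and the preprocessing should match whatever the tutorial used for validation.

```python
import mxnet as mx
import numpy as np

# 1. Load the symbol and params saved by finetune_net.export("flower-recognition", ...).
sym, arg_params, aux_params = mx.model.load_checkpoint('flower-recognition', epochs)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])  # 'data' is the default Gluon export input name
mod.set_params(arg_params, aux_params, allow_missing=True)

# 2. Load the synset file for label names (file name is an assumption).
with open('synset.txt', 'r') as f:
    labels = [line.strip() for line in f]

# 3. Load an image and apply the same transformations used for validation
#    (resize and layout change shown here; normalization omitted for brevity).
img = mx.image.imread('test.jpg')
img = mx.image.imresize(img, 224, 224)
img = mx.nd.transpose(img, (2, 0, 1)).astype('float32')  # HWC -> CHW
img = img.expand_dims(axis=0)                            # add batch dimension

# 4. Run a forward pass and report the highest-scoring class.
mod.forward(mx.io.DataBatch([img]), is_train=False)
scores = mod.get_outputs()[0].asnumpy().squeeze()
top = int(np.argmax(scores))
print('probability=%f, class=%s' % (scores[top], labels[top]))
```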
@@ -311,7 +310,7 @@ probability=9.798435, class=lotus

## What's next

-You can continue to the [next tutorial](https://mxnet.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html) on how to load the model we just trained and run inference using the MXNet C++ API.
+You can continue to the [next tutorial](/api/cpp/docs/tutorials/cpp_inference) on how to load the model we just trained and run inference using the MXNet C++ API.

You can also find more ways to run inference and deploy your models here:
1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
@@ -55,7 +55,7 @@ batch_size = 10

## Working with data

-To work with data, Apache MXNet provides [Dataset](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.Dataset) and [DataLoader](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.DataLoader) classes. The former provides indexed access to the data; the latter shuffles and batchifies the data. To learn more about working with data in Gluon, please refer to the [Gluon Datasets and Dataloaders](https://mxnet.apache.org/tutorials/gluon/datasets.html) tutorial.
+To work with data, Apache MXNet provides [Dataset](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.Dataset) and [DataLoader](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.DataLoader) classes. The former provides indexed access to the data; the latter shuffles and batchifies the data. To learn more about working with data in Gluon, please refer to [Gluon Datasets and Dataloaders](/api/python/docs/api/gluon/data/index.html).

Below we define training and validation datasets, which we are going to use in the tutorial.
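
The dataset definitions themselves are collapsed in this diff. As a rough sketch of the Dataset/DataLoader pattern the paragraph above describes (using synthetic data and illustrative shapes, not the tutorial's actual datasets):

```python
import mxnet as mx
from mxnet import gluon

# ArrayDataset wraps arrays and provides indexed access to (data, label) pairs.
X = mx.nd.random.uniform(shape=(100, 10))
y = mx.nd.random.uniform(shape=(100, 1))
dataset = gluon.data.ArrayDataset(X, y)

# DataLoader shuffles and batchifies the dataset; batch_size matches the
# `batch_size = 10` defined earlier in the tutorial.
train_data = gluon.data.DataLoader(dataset, batch_size=10, shuffle=True)

for data, label in train_data:
    print(data.shape, label.shape)  # (10, 10) (10, 1)
    break
```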

@@ -106,7 +106,7 @@ mx_train_data = gluon.data.DataLoader(

Both frameworks allow you to download the MNIST data set from their sources and specify that only the training part of the data set is required.

-The main difference between the code snippets is that MXNet uses the [transform_first](https://mxnet.apache.org/api/python/docs/api/gluon/_autogen/mxnet.gluon.data.Dataset.html) method to indicate that the data transformation is done on the first element of the data batch, the MNIST picture, rather than the second element, the label.
+The main difference between the code snippets is that MXNet uses the [transform_first](/api/python/docs/api/gluon/data/index.html#mxnet.gluon.data.Dataset.transform_first) method to indicate that the data transformation is done on the first element of the data batch, the MNIST picture, rather than the second element, the label.
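
As a hedged illustration of that difference, a minimal MXNet-side sketch might look like this; the batch size and transform are illustrative, not the tutorial's exact values.

```python
from mxnet import gluon
from mxnet.gluon.data.vision import transforms

# ToTensor converts the image from HWC uint8 to CHW float32 scaled to [0, 1].
transform = transforms.ToTensor()

# transform_first applies the transform to the image only; the label passes through.
mnist_train = gluon.data.vision.MNIST(train=True).transform_first(transform)
mx_train_data = gluon.data.DataLoader(mnist_train, batch_size=128, shuffle=True)
```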

### 2. Creating the model

@@ -143,7 +143,7 @@ We used the Sequential container to stack layers one after the other in order to

* After the model structure is defined, Apache MXNet requires you to explicitly call the model initialization function.

-With a Sequential block, layers are executed one after the other. To have a different execution model, with PyTorch you can inherit from `nn.Module` and then customize how the `.forward()` function is executed. Similarly, in Apache MXNet you can inherit from [nn.Block](https://mxnet.apache.org/api/python/docs/api/gluon/mxnet.gluon.nn.Block.html) to achieve the same effect.
+With a Sequential block, layers are executed one after the other. To have a different execution model, with PyTorch you can inherit from `nn.Module` and then customize how the `.forward()` function is executed. Similarly, in Apache MXNet you can inherit from [nn.Block](/api/python/docs/api/gluon/nn/index.html#mxnet.gluon.nn.Block) to achieve the same effect.
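
For illustration, a minimal custom block might look like the sketch below; the layer sizes are arbitrary, not taken from the tutorial.

```python
import mxnet as mx
from mxnet.gluon import nn

class MLP(nn.Block):
    """A custom block: declare child layers in __init__,
    define the execution path in forward()."""
    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.hidden = nn.Dense(64, activation='relu')
        self.output = nn.Dense(10)

    def forward(self, x):
        return self.output(self.hidden(x))

net = MLP()
net.initialize()                 # explicit initialization, as noted above
out = net(mx.nd.ones((2, 20)))   # shapes are inferred on first call; out is (2, 10)
```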

### 3. Loss function and optimization algorithm

12 changes: 6 additions & 6 deletions docs/python_docs/python/tutorials/index.rst
@@ -56,13 +56,13 @@ Packages & Modules

.. card::
   :title: Symbol API
-  :link: packages/symbol/index.html
+  :link: /api/python/docs/api/symbol/index.html

-  How to use MXNet's Symbol API.
+  MXNet Symbol API has been deprecated. API documentation is still available for reference.

.. card::
   :title: Autograd API
-  :link: packages/autograd/autograd.html
+  :link: /api/python/docs/tutorials/packages/autograd/index.html

   How to use Automatic Differentiation with the Autograd API.

@@ -86,13 +86,13 @@ Performance

.. card::
   :title: Compression: int8
-  :link: performance/int8.html
+  :link: performance/compression/int8.html

   How to use int8 in your model to boost training speed.

.. card::
   :title: MKL-DNN
-  :link: performance/backend/mkl-dnn.html
+  :link: performance/backend/mkldnn/mkldnn_quantization

   How to get the most from your CPU by using Intel's MKL-DNN.

@@ -172,4 +172,4 @@ Next steps
packages/index
performance/index
deploy/index
-extend/index
+extend/index
@@ -270,7 +270,7 @@ visualize_activation(mx.gluon.nn.Swish())
## Next Steps

Activations are just one component of neural network architectures. Here are a few MXNet resources to learn more about activation functions and how they combine with other components of neural nets.
-* Learn how to create a Neural Network with these activation layers and other neural network layers in the [gluon crash course](http://beta.mxnet.io/guide/getting-started/crash-course/2-nn.html).
-* Check out the guide to MXNet [gluon layers and blocks](http://beta.mxnet.io/guide/packages/gluon/nn.html) to learn about the other neural network layers implemented in MXNet and how to create custom neural networks with these layers.
-* Also check out the [guide to normalization layers](http://beta.mxnet.io/guide/packages/gluon/normalization/normalization.html) to learn about neural network layers that normalize their inputs.
-* Finally take a look at the [Custom Layer guide](http://beta.mxnet.io/guide/packages/gluon/custom_layer_beginners.html) to learn how to implement your own custom activation layer.
+* Learn how to create a Neural Network with these activation layers and other neural network layers in the [gluon crash course](/api/python/docs/tutorials/getting-started/crash-course/index.html).
+* Check out the guide to MXNet [gluon layers and blocks](/api/python/docs/tutorials/packages/gluon/blocks/nn.html) to learn about the other neural network layers implemented in MXNet and how to create custom neural networks with these layers.
+* Also check out the [guide to normalization layers](/api/python/docs/tutorials/packages/gluon/training/normalization/index.html) to learn about neural network layers that normalize their inputs.
+* Finally, take a look at the [Custom Layer guide](/api/python/docs/tutorials/extend/custom_layer.html) to learn how to implement your own custom activation layer.
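
As a small illustration of the ideas in these guides (not part of the tutorial itself), activation blocks such as `nn.Swish`, referenced in the hunk header above, can be used standalone or composed with other layers; the layer sizes here are arbitrary.

```python
import mxnet as mx
from mxnet.gluon import nn

# Parameter-free activation blocks can be applied directly to an NDArray...
act = nn.Swish()
x = mx.nd.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(act(x))  # swish(x) = x * sigmoid(x)

# ...or stacked with other layers inside a network.
net = nn.Sequential()
net.add(nn.Dense(64), nn.Swish(), nn.Dense(10))
```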
@@ -167,7 +167,7 @@ Training

.. card::
   :title: Autograd API
-  :link: ../autograd/autograd.html
+  :link: /api/python/docs/tutorials/packages/autograd/index.html

   How to use Automatic Differentiation with the Autograd API.
