torch.nn
========

.. automodule:: torch.nn
.. currentmodule:: torch.nn

Parameters
----------

.. autoclass:: Parameter
    :members:

Containers
----------

Module
^^^^^^

.. autoclass:: Module
    :members:

Sequential
^^^^^^^^^^

.. autoclass:: Sequential
    :members:

ModuleList
^^^^^^^^^^

.. autoclass:: ModuleList
    :members:

ModuleDict
^^^^^^^^^^

.. autoclass:: ModuleDict
    :members:

ParameterList
^^^^^^^^^^^^^

.. autoclass:: ParameterList
    :members:

ParameterDict
^^^^^^^^^^^^^

.. autoclass:: ParameterDict
    :members:
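As a quick illustration of how these containers compose (a minimal sketch; the layer sizes here are arbitrary):

```python
import torch
import torch.nn as nn

# Sequential chains submodules in the order they are passed; every layer
# here is itself a Module, the base class for all of torch.nn.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

out = model(torch.randn(3, 4))  # a batch of 3 samples, 4 features each
print(out.shape)  # torch.Size([3, 2])
```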

Convolution layers
------------------

Conv1d
^^^^^^

.. autoclass:: Conv1d
    :members:

Conv2d
^^^^^^

.. autoclass:: Conv2d
    :members:

Conv3d
^^^^^^

.. autoclass:: Conv3d
    :members:

ConvTranspose1d
^^^^^^^^^^^^^^^

.. autoclass:: ConvTranspose1d
    :members:

ConvTranspose2d
^^^^^^^^^^^^^^^

.. autoclass:: ConvTranspose2d
    :members:

ConvTranspose3d
^^^^^^^^^^^^^^^

.. autoclass:: ConvTranspose3d
    :members:

Unfold
^^^^^^

.. autoclass:: Unfold
    :members:

Fold
^^^^

.. autoclass:: Fold
    :members:
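A small sketch of the shape conventions shared by these layers (sizes chosen arbitrarily):

```python
import torch
import torch.nn as nn

# With kernel_size=3 and padding=1 the spatial size is preserved;
# only the channel dimension changes (3 -> 16 here).
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
y = conv(torch.randn(1, 3, 32, 32))  # input is (N, C, H, W)
print(y.shape)  # torch.Size([1, 16, 32, 32])
```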


Pooling layers
--------------

MaxPool1d
^^^^^^^^^

.. autoclass:: MaxPool1d
    :members:

MaxPool2d
^^^^^^^^^

.. autoclass:: MaxPool2d
    :members:

MaxPool3d
^^^^^^^^^

.. autoclass:: MaxPool3d
    :members:

MaxUnpool1d
^^^^^^^^^^^

.. autoclass:: MaxUnpool1d
    :members:

MaxUnpool2d
^^^^^^^^^^^

.. autoclass:: MaxUnpool2d
    :members:

MaxUnpool3d
^^^^^^^^^^^

.. autoclass:: MaxUnpool3d
    :members:

AvgPool1d
^^^^^^^^^

.. autoclass:: AvgPool1d
    :members:

AvgPool2d
^^^^^^^^^

.. autoclass:: AvgPool2d
    :members:

AvgPool3d
^^^^^^^^^

.. autoclass:: AvgPool3d
    :members:

FractionalMaxPool2d
^^^^^^^^^^^^^^^^^^^

.. autoclass:: FractionalMaxPool2d
    :members:

LPPool1d
^^^^^^^^

.. autoclass:: LPPool1d
    :members:

LPPool2d
^^^^^^^^

.. autoclass:: LPPool2d
    :members:

AdaptiveMaxPool1d
^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveMaxPool1d
    :members:

AdaptiveMaxPool2d
^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveMaxPool2d
    :members:

AdaptiveMaxPool3d
^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveMaxPool3d
    :members:

AdaptiveAvgPool1d
^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveAvgPool1d
    :members:

AdaptiveAvgPool2d
^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveAvgPool2d
    :members:

AdaptiveAvgPool3d
^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveAvgPool3d
    :members:
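The distinction between fixed and adaptive pooling can be sketched as follows; adaptive variants take the desired output size instead of a kernel size (sizes here are arbitrary):

```python
import torch
import torch.nn as nn

# Adaptive pooling takes the desired *output* size rather than a kernel
# size, so it works for any input resolution.
pool = nn.AdaptiveAvgPool2d(output_size=(1, 1))
y = pool(torch.randn(2, 8, 7, 5))
print(y.shape)  # torch.Size([2, 8, 1, 1])
```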


Padding layers
--------------

ReflectionPad1d
^^^^^^^^^^^^^^^

.. autoclass:: ReflectionPad1d
    :members:

ReflectionPad2d
^^^^^^^^^^^^^^^

.. autoclass:: ReflectionPad2d
    :members:

ReplicationPad1d
^^^^^^^^^^^^^^^^

.. autoclass:: ReplicationPad1d
    :members:

ReplicationPad2d
^^^^^^^^^^^^^^^^

.. autoclass:: ReplicationPad2d
    :members:

ReplicationPad3d
^^^^^^^^^^^^^^^^

.. autoclass:: ReplicationPad3d
    :members:

ZeroPad2d
^^^^^^^^^

.. autoclass:: ZeroPad2d
    :members:

ConstantPad1d
^^^^^^^^^^^^^

.. autoclass:: ConstantPad1d
    :members:

ConstantPad2d
^^^^^^^^^^^^^

.. autoclass:: ConstantPad2d
    :members:

ConstantPad3d
^^^^^^^^^^^^^

.. autoclass:: ConstantPad3d
    :members:
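A minimal sketch of how these padding layers change spatial dimensions (using `ZeroPad2d` as the representative; the others differ only in how the border values are filled):

```python
import torch
import torch.nn as nn

# ZeroPad2d(1) adds one row/column of zeros on every side,
# so a 3x3 map becomes 5x5.
pad = nn.ZeroPad2d(1)
y = pad(torch.ones(1, 1, 3, 3))
print(y.shape)  # torch.Size([1, 1, 5, 5])
```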


Non-linear activations (weighted sum, nonlinearity)
---------------------------------------------------

ELU
^^^

.. autoclass:: ELU
    :members:

Hardshrink
^^^^^^^^^^

.. autoclass:: Hardshrink
    :members:

Hardtanh
^^^^^^^^

.. autoclass:: Hardtanh
    :members:

LeakyReLU
^^^^^^^^^

.. autoclass:: LeakyReLU
    :members:

LogSigmoid
^^^^^^^^^^

.. autoclass:: LogSigmoid
    :members:

MultiheadAttention
^^^^^^^^^^^^^^^^^^

.. autoclass:: MultiheadAttention
    :members:

PReLU
^^^^^

.. autoclass:: PReLU
    :members:

ReLU
^^^^

.. autoclass:: ReLU
    :members:

ReLU6
^^^^^

.. autoclass:: ReLU6
    :members:

RReLU
^^^^^

.. autoclass:: RReLU
    :members:

SELU
^^^^

.. autoclass:: SELU
    :members:

CELU
^^^^

.. autoclass:: CELU
    :members:

Sigmoid
^^^^^^^

.. autoclass:: Sigmoid
    :members:

Softplus
^^^^^^^^

.. autoclass:: Softplus
    :members:

Softshrink
^^^^^^^^^^

.. autoclass:: Softshrink
    :members:

Softsign
^^^^^^^^

.. autoclass:: Softsign
    :members:

Tanh
^^^^

.. autoclass:: Tanh
    :members:

Tanhshrink
^^^^^^^^^^

.. autoclass:: Tanhshrink
    :members:

Threshold
^^^^^^^^^

.. autoclass:: Threshold
    :members:
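These activations are all element-wise modules applied the same way; a small sketch contrasting two of them:

```python
import torch
import torch.nn as nn

x = torch.tensor([-2.0, 0.0, 3.0])

# ReLU zeroes negative inputs; LeakyReLU keeps a small slope instead.
print(nn.ReLU()(x))                         # tensor([0., 0., 3.])
print(nn.LeakyReLU(negative_slope=0.1)(x))  # tensor([-0.2000, 0.0000, 3.0000])
```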

Non-linear activations (other)
------------------------------

Softmin
^^^^^^^

.. autoclass:: Softmin
    :members:

Softmax
^^^^^^^

.. autoclass:: Softmax
    :members:

Softmax2d
^^^^^^^^^

.. autoclass:: Softmax2d
    :members:

LogSoftmax
^^^^^^^^^^

.. autoclass:: LogSoftmax
    :members:

AdaptiveLogSoftmaxWithLoss
^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: AdaptiveLogSoftmaxWithLoss
    :members:
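Unlike the element-wise activations, these normalize across a dimension; a minimal sketch showing why the `dim` argument matters:

```python
import torch
import torch.nn as nn

# Softmax normalizes along the given dim; with dim=1 each row of the
# result is a probability distribution summing to 1.
sm = nn.Softmax(dim=1)
p = sm(torch.randn(2, 5))
print(p.sum(dim=1))  # tensor([1.0000, 1.0000])
```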

Normalization layers
--------------------

BatchNorm1d
^^^^^^^^^^^

.. autoclass:: BatchNorm1d
    :members:

BatchNorm2d
^^^^^^^^^^^

.. autoclass:: BatchNorm2d
    :members:

BatchNorm3d
^^^^^^^^^^^

.. autoclass:: BatchNorm3d
    :members:

GroupNorm
^^^^^^^^^

.. autoclass:: GroupNorm
    :members:

SyncBatchNorm
^^^^^^^^^^^^^

.. autoclass:: SyncBatchNorm
    :members:

InstanceNorm1d
^^^^^^^^^^^^^^

.. autoclass:: InstanceNorm1d
    :members:

InstanceNorm2d
^^^^^^^^^^^^^^

.. autoclass:: InstanceNorm2d
    :members:

InstanceNorm3d
^^^^^^^^^^^^^^

.. autoclass:: InstanceNorm3d
    :members:

LayerNorm
^^^^^^^^^

.. autoclass:: LayerNorm
    :members:

LocalResponseNorm
^^^^^^^^^^^^^^^^^

.. autoclass:: LocalResponseNorm
    :members:
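A minimal sketch of the common effect of these layers, using `LayerNorm` (which, unlike the batch variants, normalizes each sample independently):

```python
import torch
import torch.nn as nn

# LayerNorm normalizes over the trailing dimension(s) given by
# normalized_shape, so each row ends up with (approximately) zero mean.
ln = nn.LayerNorm(4)
y = ln(torch.randn(2, 4))
print(y.mean(dim=-1))  # approximately zero for every row
```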

Recurrent layers
----------------

RNN
^^^

.. autoclass:: RNN
    :members:

LSTM
^^^^

.. autoclass:: LSTM
    :members:

GRU
^^^

.. autoclass:: GRU
    :members:

RNNCell
^^^^^^^

.. autoclass:: RNNCell
    :members:

LSTMCell
^^^^^^^^

.. autoclass:: LSTMCell
    :members:

GRUCell
^^^^^^^

.. autoclass:: GRUCell
    :members:
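The input and output shapes of these layers share a convention worth sketching (sizes arbitrary; `batch_first` defaults to `False`):

```python
import torch
import torch.nn as nn

# Input is (seq_len, batch, input_size) by default.
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
out, (h, c) = lstm(torch.randn(5, 3, 10))

print(out.shape)  # torch.Size([5, 3, 20]) -- hidden state at every timestep
print(h.shape)    # torch.Size([2, 3, 20]) -- final hidden state per layer
```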

Transformer layers
------------------

Transformer
^^^^^^^^^^^

.. autoclass:: Transformer
    :members:

TransformerEncoder
^^^^^^^^^^^^^^^^^^

.. autoclass:: TransformerEncoder
    :members:

TransformerDecoder
^^^^^^^^^^^^^^^^^^

.. autoclass:: TransformerDecoder
    :members:

TransformerEncoderLayer
^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: TransformerEncoderLayer
    :members:

TransformerDecoderLayer
^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: TransformerDecoderLayer
    :members:

Linear layers
-------------

Identity
^^^^^^^^

.. autoclass:: Identity
    :members:

Linear
^^^^^^

.. autoclass:: Linear
    :members:

Bilinear
^^^^^^^^

.. autoclass:: Bilinear
    :members:

Dropout layers
--------------

Dropout
^^^^^^^

.. autoclass:: Dropout
    :members:

Dropout2d
^^^^^^^^^

.. autoclass:: Dropout2d
    :members:

Dropout3d
^^^^^^^^^

.. autoclass:: Dropout3d
    :members:

AlphaDropout
^^^^^^^^^^^^

.. autoclass:: AlphaDropout
    :members:
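One property shared by all the dropout variants is worth a quick sketch: they are only active in training mode and become the identity under `eval()`:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

# In train mode, elements are zeroed with probability p and the rest
# are scaled by 1/(1-p); in eval mode the module is the identity.
drop.eval()
print(torch.equal(drop(x), x))  # True
```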


Sparse layers
-------------

Embedding
^^^^^^^^^

.. autoclass:: Embedding
    :members:

EmbeddingBag
^^^^^^^^^^^^

.. autoclass:: EmbeddingBag
    :members:
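A minimal sketch of `Embedding` as a learnable lookup table (sizes arbitrary):

```python
import torch
import torch.nn as nn

# Maps each of 10 integer indices to a learnable 3-dim vector.
emb = nn.Embedding(num_embeddings=10, embedding_dim=3)
vecs = emb(torch.tensor([1, 5, 5]))  # repeated index -> identical rows
print(vecs.shape)  # torch.Size([3, 3])
```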

Distance functions
------------------

CosineSimilarity
^^^^^^^^^^^^^^^^

.. autoclass:: CosineSimilarity
    :members:

PairwiseDistance
^^^^^^^^^^^^^^^^

.. autoclass:: PairwiseDistance
    :members:


Loss functions
--------------

L1Loss
^^^^^^

.. autoclass:: L1Loss
    :members:

MSELoss
^^^^^^^

.. autoclass:: MSELoss
    :members:

CrossEntropyLoss
^^^^^^^^^^^^^^^^

.. autoclass:: CrossEntropyLoss
    :members:

CTCLoss
^^^^^^^

.. autoclass:: CTCLoss
    :members:

NLLLoss
^^^^^^^

.. autoclass:: NLLLoss
    :members:

PoissonNLLLoss
^^^^^^^^^^^^^^

.. autoclass:: PoissonNLLLoss
    :members:

KLDivLoss
^^^^^^^^^

.. autoclass:: KLDivLoss
    :members:

BCELoss
^^^^^^^

.. autoclass:: BCELoss
    :members:

BCEWithLogitsLoss
^^^^^^^^^^^^^^^^^

.. autoclass:: BCEWithLogitsLoss
    :members:

MarginRankingLoss
^^^^^^^^^^^^^^^^^

.. autoclass:: MarginRankingLoss
    :members:

HingeEmbeddingLoss
^^^^^^^^^^^^^^^^^^

.. autoclass:: HingeEmbeddingLoss
    :members:

MultiLabelMarginLoss
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: MultiLabelMarginLoss
    :members:

SmoothL1Loss
^^^^^^^^^^^^

.. autoclass:: SmoothL1Loss
    :members:

SoftMarginLoss
^^^^^^^^^^^^^^

.. autoclass:: SoftMarginLoss
    :members:

MultiLabelSoftMarginLoss
^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: MultiLabelSoftMarginLoss
    :members:

CosineEmbeddingLoss
^^^^^^^^^^^^^^^^^^^

.. autoclass:: CosineEmbeddingLoss
    :members:

MultiMarginLoss
^^^^^^^^^^^^^^^

.. autoclass:: MultiMarginLoss
    :members:

TripletMarginLoss
^^^^^^^^^^^^^^^^^

.. autoclass:: TripletMarginLoss
    :members:
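A minimal sketch of the common calling convention for these losses, using `CrossEntropyLoss` (which combines `LogSoftmax` and `NLLLoss`, so it expects raw scores rather than probabilities):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(3, 5)        # 3 samples, 5 classes; raw unnormalized scores
target = torch.tensor([1, 0, 4])  # one class index per sample

loss = criterion(logits, target)  # a non-negative scalar
print(loss.item())
```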


Vision layers
-------------

PixelShuffle
^^^^^^^^^^^^

.. autoclass:: PixelShuffle
    :members:

Upsample
^^^^^^^^

.. autoclass:: Upsample
    :members:

UpsamplingNearest2d
^^^^^^^^^^^^^^^^^^^

.. autoclass:: UpsamplingNearest2d
    :members:

UpsamplingBilinear2d
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: UpsamplingBilinear2d
    :members:
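The shape arithmetic of `PixelShuffle`, used for sub-pixel upscaling, can be sketched briefly (sizes arbitrary):

```python
import torch
import torch.nn as nn

# PixelShuffle(r) rearranges (N, C*r*r, H, W) into (N, C, H*r, W*r):
# channels are traded for spatial resolution.
ps = nn.PixelShuffle(2)
y = ps(torch.randn(1, 8, 4, 4))
print(y.shape)  # torch.Size([1, 2, 8, 8])
```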


DataParallel layers (multi-GPU, distributed)
--------------------------------------------

DataParallel
^^^^^^^^^^^^

.. autoclass:: DataParallel
    :members:

DistributedDataParallel
^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: torch.nn.parallel.DistributedDataParallel
    :members:


Utilities
---------

clip_grad_norm_
^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.clip_grad_norm_

clip_grad_value_
^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.clip_grad_value_

parameters_to_vector
^^^^^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.parameters_to_vector

vector_to_parameters
^^^^^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.vector_to_parameters

weight_norm
^^^^^^^^^^^

.. autofunction:: torch.nn.utils.weight_norm

remove_weight_norm
^^^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.remove_weight_norm

spectral_norm
^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.spectral_norm

remove_spectral_norm
^^^^^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.remove_spectral_norm
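A minimal sketch of gradient clipping with `clip_grad_norm_` (the parameter values here are arbitrary):

```python
import torch
import torch.nn as nn

p = torch.full((3,), 2.0, requires_grad=True)
loss = (p ** 2).sum()
loss.backward()  # grad = 2*p = [4, 4, 4], norm ~= 6.93

# Rescales gradients in place so their total norm is at most max_norm.
nn.utils.clip_grad_norm_([p], max_norm=1.0)
print(p.grad.norm())  # ~1.0
```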


.. currentmodule:: torch.nn.utils.rnn

PackedSequence
^^^^^^^^^^^^^^

.. autoclass:: torch.nn.utils.rnn.PackedSequence
    :members:

pack_padded_sequence
^^^^^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.rnn.pack_padded_sequence

pad_packed_sequence
^^^^^^^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.rnn.pad_packed_sequence

pad_sequence
^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.rnn.pad_sequence

pack_sequence
^^^^^^^^^^^^^

.. autofunction:: torch.nn.utils.rnn.pack_sequence
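A quick sketch of handling variable-length sequences with these utilities, using `pad_sequence`:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Pads a list of variable-length sequences to a common length; the
# default batch_first=False gives shape (max_len, batch).
a = torch.tensor([1, 2, 3])
b = torch.tensor([4])
padded = pad_sequence([a, b])

print(padded.shape)  # torch.Size([3, 2])
print(padded[:, 1])  # tensor([4, 0, 0]) -- zero-padded at the end
```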

.. currentmodule:: torch.nn

Flatten
^^^^^^^

.. autoclass:: Flatten
    :members:


Quantized Functions
-------------------

Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. PyTorch supports both per-tensor and per-channel asymmetric linear quantization. To learn more about how to use quantized functions in PyTorch, please refer to the :ref:`quantization-doc` documentation.
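As a small illustration of per-tensor asymmetric linear quantization (a minimal sketch using `torch.quantize_per_tensor`; see the quantization documentation for the full API), real values are mapped to 8-bit integers via `q = round(x / scale) + zero_point`:

```python
import torch

x = torch.tensor([-1.0, 0.0, 1.0, 2.0])

# Per-tensor quantization: one scale and zero_point for the whole tensor.
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=10, dtype=torch.quint8)

print(q.int_repr())   # the stored 8-bit integers
print(q.dequantize()) # maps back to float, recovering x up to quantization error
```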