Fix some typos in api docs (apache#3309)
apivovarov authored and wweic committed Jun 27, 2019
1 parent 6ec131c commit 9a885be
Showing 11 changed files with 26 additions and 26 deletions.
2 changes: 1 addition & 1 deletion include/tvm/relay/error.h
@@ -83,7 +83,7 @@ struct Error : public dmlc::Error {
*
* The final mode represents the old mode, if we report an error that has no span or
* expression, we will default to throwing an exception with a textual representation
- * of the error and no indication of where it occured in the original program.
+ * of the error and no indication of where it occurred in the original program.
*
* The latter mode is not ideal, and the goal of the new error reporting machinery is
* to avoid ever reporting errors in this style.
2 changes: 1 addition & 1 deletion include/tvm/runtime/c_runtime_api.h
@@ -187,7 +187,7 @@ TVM_DLL void TVMAPISetLastError(const char* msg);
/*!
* \brief return str message of the last error
* all function in this file will return 0 when success
- * and -1 when an error occured,
+ * and -1 when an error occurred,
* TVMGetLastError can be called to retrieve the error
*
* this function is threadsafe and can be called by different thread
2 changes: 1 addition & 1 deletion nnvm/include/nnvm/c_api.h
@@ -60,7 +60,7 @@ NNVM_DLL void NNAPISetLastError(const char* msg);
/*!
* \brief return str message of the last error
* all function in this file will return 0 when success
- * and -1 when an error occured,
+ * and -1 when an error occurred,
* NNGetLastError can be called to retrieve the error
*
* this function is threadsafe and can be called by different thread
6 changes: 3 additions & 3 deletions nnvm/python/nnvm/frontend/common.py
@@ -58,7 +58,7 @@ def __call__(self, inputs, attrs, *args):


class AttrConverter(object):
"""Common attribute conveter. An AttrConverter instance is a callable:
"""Common attribute converter. An AttrConverter instance is a callable:
```
attr_converter = AttrConverter(op_name, transforms={'a':'b', 'c':('d', 1)})
new_op_name, new_attr = attr_converter(attrs)
@@ -72,12 +72,12 @@ class AttrConverter(object):
`op_name = func(attr)`
transforms : dict of `new_name, or (new_name, default_value, transform function)`
If only a new_name is provided, it's like renaming the attribute name.
- If default_value if provded, then the attribute is considered as optional.
+ If default_value if provided, then the attribute is considered as optional.
If transform function is provided, the original attribute value is handled
by transform function.
excludes : list
A list of excluded attributes that should `NOT` appear.
- Raise NotImplementedError if occured.
+ Raise NotImplementedError if occurred.
disables : list
A list of attributes that is disabled in nnvm. Log warnings.
ignores : list
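The rename/default/transform rules described in the AttrConverter docstring above can be sketched in plain Python. This is a hypothetical minimal re-implementation for illustration only; `convert_attrs` is an invented name, not the actual nnvm class:

```python
def convert_attrs(attrs, transforms, excludes=(), ignores=()):
    """Apply AttrConverter-style rules: rename, default value, transform fn."""
    new_attrs = {}
    for k, v in attrs.items():
        if k in excludes:
            # excluded attributes must NOT appear, per the docstring
            raise NotImplementedError("Attribute %s is not supported" % k)
        if k in ignores:
            continue
        rule = transforms.get(k, k)
        if isinstance(rule, str):
            # plain rename: 'a' -> 'b'
            new_attrs[rule] = v
        else:
            # (new_name, default_value[, transform_fn])
            fn = rule[2] if len(rule) > 2 else (lambda x: x)
            new_attrs[rule[0]] = fn(v)
    # optional attributes: fill in defaults for rules not present in attrs
    for k, rule in transforms.items():
        if k not in attrs and not isinstance(rule, str):
            new_attrs[rule[0]] = rule[1]
    return new_attrs

# mirrors the docstring example: transforms={'a': 'b', 'c': ('d', 1)}
print(convert_attrs({'a': 10}, {'a': 'b', 'c': ('d', 1)}))  # → {'b': 10, 'd': 1}
```

Here `'a'` is renamed to `'b'`, while the absent optional attribute `'c'` falls back to its default `1` under its new name `'d'`.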
2 changes: 1 addition & 1 deletion nnvm/python/nnvm/frontend/tensorflow.py
@@ -1197,7 +1197,7 @@ def from_tensorflow(self, graph, layout="NHWC", shape=None, outputs=None):
-> All Const nodes are params.
-> Last node is assumed as graph output.
-> _output_shapes : Graph should be frozen with add_shapes=True.
- Or user can pass input shape dictionaly optionally.
+ Or user can pass input shape dictionary optionally.
-> DecodeJpeg, ResizeBilinear: These are dummy operators.
Hence user should handle preprocessing outside.
-> CheckNumerics: No implementation as of now for this.
6 changes: 3 additions & 3 deletions python/tvm/relay/frontend/common.py
@@ -286,7 +286,7 @@ def clear_padding(self):


class AttrCvt(object):
"""Common attribute conveter. An AttrConverter instance is a callable:
"""Common attribute converter. An AttrConverter instance is a callable:
```
attr_converter = AttrConverter(op_name, transforms={'a':'b', 'c':('d', 1)})
new_op_name, new_attr = attr_converter(attrs)
@@ -300,12 +300,12 @@ class AttrCvt(object):
`op_name = func(attr)`
transforms : dict of `new_name, or (new_name, default_value, transform function)`
If only a new_name is provided, it's like renaming the attribute name.
- If default_value if provded, then the attribute is considered as optional.
+ If default_value if provided, then the attribute is considered as optional.
If transform function is provided, the original attribute value is handled
by transform function.
excludes : list
A list of excluded attributes that should `NOT` appear.
- Raise NotImplementedError if occured.
+ Raise NotImplementedError if occurred.
disables : list
A list of attributes that is disabled in relay. Log warnings.
ignores : list
12 changes: 6 additions & 6 deletions python/tvm/relay/frontend/tensorflow.py
@@ -77,12 +77,12 @@ class AttrCvt(object):
`op_name = func(attr)`
transforms : dict of `new_name, or (new_name, default_value, transform function)`
If only a new_name is provided, it's like renaming the attribute name.
- If default_value if provded, then the attribute is considered as optional.
+ If default_value if provided, then the attribute is considered as optional.
If transform function is provided, the original attribute value is handled
by transform function.
excludes : list
A list of excluded attributes that should `NOT` appear.
- Raise NotImplementedError if occured.
+ Raise NotImplementedError if occurred.
disables : list
A list of attributes that is disabled in relay. Log warnings.
ignores : list
@@ -1567,7 +1567,7 @@ def _in_while_loop(control_flow_node_map, op_name):
Parameters
----------
control_flow_node_map : Dict[str, Set[str]]
- A dictionay contains the unqiue control flow execution frame name to
+ A dictionay contains the unique control flow execution frame name to
a set of primitive operators mapping.
op_name : str
@@ -1619,7 +1619,7 @@ def f2():
return tf.add(4, 23)
r = tf.cond(tf.less(i, j), f1, f2)
- This condition statement should be coverted into Relay in the following
+ This condition statement should be converted into Relay in the following
form:
.. code-block:: python
@@ -1727,7 +1727,7 @@ def __init__(self):
self._loop = None

def _while_loop(self):
"""An internal API to create a Relay recurisve call for a matched TF
"""An internal API to create a Relay recursive call for a matched TF
`while_loop` construct.
"""
wl = tvm.relay.var('while_loop')
@@ -1796,7 +1796,7 @@ def from_tensorflow(self, graph, layout="NHWC", shape=None, outputs=None):
-> All Const nodes are params.
-> Last node is assumed as graph output.
-> _output_shapes : Graph should be frozen with add_shapes=True.
- Or user can pass input shape dictionaly optionally.
+ Or user can pass input shape dictionary optionally.
-> DecodeJpeg, ResizeBilinear: These are dummy operators.
Hence user should handle preprocessing outside.
-> CheckNumerics: No implementation as of now for this.
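The `_while_loop` docstring above describes building a Relay recursive call for a matched TF `while_loop`. The underlying idea — a loop expressed as a self-recursive function over its loop variables — can be sketched in plain Python. This is an analogy, not the actual Relay construction; `while_loop` here is a hypothetical helper:

```python
def while_loop(cond, body, loop_vars):
    """A TF-style while_loop expressed as a recursive function:
    loop(vals) = loop(body(vals)) if cond(vals) else vals."""
    def loop(*vals):
        return loop(*body(*vals)) if cond(*vals) else vals
    return loop(*loop_vars)

# sum 0..4: carry (i, acc) until i == 5
result = while_loop(lambda i, acc: i < 5,
                    lambda i, acc: (i + 1, acc + i),
                    (0, 0))
print(result)  # → (5, 10)
```

Relay has no mutable loop state, so the converter threads the loop variables through exactly this kind of recursive call instead.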
14 changes: 7 additions & 7 deletions python/tvm/relay/op/nn/nn.py
@@ -67,7 +67,7 @@ def conv2d(data,
The weight expressions.
strides : tuple of int, optional
- The strides of convoltution.
+ The strides of convolution.
padding : tuple of int, optional
The padding of convolution on both sides of inputs before convolution.
@@ -129,7 +129,7 @@ def conv2d_transpose(data,
The weight expressions.
strides : Tuple[int], optional
- The strides of convoltution.
+ The strides of convolution.
padding : Tuple[int], optional
The padding of convolution on both sides of inputs.
@@ -842,7 +842,7 @@ def contrib_conv2d_winograd_without_weight_transform(data,
The Tile size of winograd. E.g. 2 for F(2x2, 3x3) and 4 for F(4x4, 3x3)
strides : tuple of int, optional
- The strides of convoltution.
+ The strides of convolution.
padding : tuple of int, optional
The padding of convolution on both sides of inputs before convolution.
@@ -908,7 +908,7 @@ def contrib_conv2d_winograd_nnpack_without_weight_transform(data,
The weight expressions.
strides : tuple of int, optional
- The strides of convoltution.
+ The strides of convolution.
padding : tuple of int, optional
The padding of convolution on both sides of inputs before convolution.
@@ -975,7 +975,7 @@ def contrib_conv2d_nchwc(data,
The kernel expressions.
strides : tuple of int, optional
- The strides of convoltution.
+ The strides of convolution.
padding : tuple of int, optional
The padding of convolution on both sides of inputs before convolution.
@@ -1040,7 +1040,7 @@ def contrib_depthwise_conv2d_nchwc(data,
The kernel expressions.
strides : tuple of int, optional
- The strides of convoltution.
+ The strides of convolution.
padding : tuple of int, optional
The padding of convolution on both sides of inputs before convolution.
@@ -1156,7 +1156,7 @@ def deformable_conv2d(data,
The weight expressions.
strides : tuple of int, optional
- The strides of convoltution.
+ The strides of convolution.
padding : tuple of int, optional
The padding of convolution on both sides of inputs before convolution.
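Each conv2d variant above documents `strides` and `padding`; along one spatial axis these determine the output extent by the standard formula `out = (in + 2*pad - dilation*(kernel - 1) - 1) // stride + 1`. A quick arithmetic sketch (a hypothetical helper for illustration, not a TVM API):

```python
def conv2d_out_size(in_size, kernel, stride=1, pad=0, dilation=1):
    """Output spatial extent of a 2D convolution along one axis."""
    # dilation spreads the kernel taps apart, enlarging its footprint
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_size + 2 * pad - effective_kernel) // stride + 1

# a 3x3 kernel with stride 1 and padding 1 preserves a 224x224 input
print(conv2d_out_size(224, kernel=3, stride=1, pad=1))  # → 224
# stride 2 halves each spatial dimension
print(conv2d_out_size(224, kernel=3, stride=2, pad=1))  # → 112
```

The same formula applies independently to the height and width axes when strides or padding differ per axis.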
2 changes: 1 addition & 1 deletion src/common/socket.h
@@ -373,7 +373,7 @@ class TCPSocket : public Socket {
}
/*!
* \brief decide whether the socket is at OOB mark
- * \return 1 if at mark, 0 if not, -1 if an error occured
+ * \return 1 if at mark, 0 if not, -1 if an error occurred
*/
int AtMark() const {
#ifdef _WIN32
2 changes: 1 addition & 1 deletion src/pass/arg_binder.h
@@ -50,7 +50,7 @@ namespace ir {
* - assert bufferB.shape[1] == n + 3
*
* In general, this is a constraint solving problem. We have simplified assumption
- * over the binding declaration, such that we require the variable occured in
+ * over the binding declaration, such that we require the variable occurred in
* constraint must be declared in argument list. So it is illegal to have signature
* f(tA(shape=(n+3))) without any argument variable corresponds to n, even though
* it is already enough to derive n from the input argument.
2 changes: 1 addition & 1 deletion topi/python/topi/cuda/reduction.py
@@ -37,7 +37,7 @@ def _schedule_reduce(op, sch, is_idx_reduce=False):
num_thread = 32
target = tvm.target.current_target()
if target and target.target_name == "opencl":
- # without it, CL_INVALID_WORK_GROUP_SIZE occured when running test_topi_reduce.py
+ # without it, CL_INVALID_WORK_GROUP_SIZE occurred when running test_topi_reduce.py
# don't know why
num_thread = 16
block_x = tvm.thread_axis("blockIdx.x")
