[Relay][TOPI] Gluoncv SSD support on the GPU #2784

Merged
merged 89 commits
Apr 29, 2019
7de699c
merge with master
Laurawly Mar 15, 2019
542ba01
ssd gluoncv gpu op updated
Laurawly Mar 11, 2019
5ad2f20
tutorials and testes modified
Laurawly Mar 12, 2019
15baf93
fix lint
Laurawly Mar 12, 2019
b6198c6
address comment
Laurawly Mar 12, 2019
e4cc4f0
multibox bug fixed
Laurawly Mar 12, 2019
1e8f425
space line added
Laurawly Mar 12, 2019
87e5787
use less threads per block
Laurawly Mar 13, 2019
6d6b3f3
less threads per block for get valid count
Laurawly Mar 14, 2019
1f2e950
Revert "less threads per block for get valid count"
Laurawly Mar 15, 2019
44f1dab
typo fixed
Laurawly Mar 15, 2019
10c8681
elem length made to a variable
Laurawly Mar 15, 2019
8f2ec0d
fix lint error
Laurawly Mar 15, 2019
e333258
fix lint error
Laurawly Mar 16, 2019
ba85d6b
lint fixed
Laurawly Mar 16, 2019
3a8eec2
bug fixed
Laurawly Mar 16, 2019
5f9a87b
lint fixed
Laurawly Mar 16, 2019
6f5a326
error fixed
Laurawly Mar 18, 2019
e3545e7
test ci
Laurawly Mar 18, 2019
e8bcff6
seperate argsort to be an independent op
Laurawly Mar 19, 2019
a49b111
fix lint
Laurawly Mar 19, 2019
7f9f315
fix lint
Laurawly Mar 19, 2019
bcc5d34
remove unsupported models
Laurawly Mar 20, 2019
e586598
ssd gluoncv gpu op updated
Laurawly Mar 11, 2019
8bfec58
tutorials and testes modified
Laurawly Mar 12, 2019
5829b1b
fix lint
Laurawly Mar 12, 2019
93dad9b
use less threads per block
Laurawly Mar 13, 2019
a1a644d
less threads per block for get valid count
Laurawly Mar 14, 2019
1d016ba
Revert "less threads per block for get valid count"
Laurawly Mar 15, 2019
63e2133
bug fixed
Laurawly Mar 16, 2019
9b7a614
error fixed
Laurawly Mar 18, 2019
9ad8d50
test ci
Laurawly Mar 18, 2019
e449b29
seperate argsort to be an independent op
Laurawly Mar 19, 2019
106b081
typo fixed
Laurawly Mar 20, 2019
4dd5a48
argsort added to realy
Laurawly Mar 20, 2019
dc9c35c
solve conflicts with master
Laurawly Mar 20, 2019
e44f8d5
fix lint
Laurawly Mar 22, 2019
c4029b6
fix lint
Laurawly Mar 22, 2019
04d7926
test push
Laurawly Mar 22, 2019
337257b
Revert "test push"
Laurawly Mar 22, 2019
7fe834f
fix lint error
Laurawly Mar 22, 2019
9335be6
fix more lint
Laurawly Mar 22, 2019
f5d70e6
cpu test_sort udpated
Laurawly Mar 22, 2019
0ed53d8
debug ci
Laurawly Mar 22, 2019
501db25
nms fixed
Laurawly Mar 25, 2019
f3c7ef5
expose argsort to relay frontend
Laurawly Mar 26, 2019
8df96a2
test ci
Laurawly Mar 26, 2019
6f1bdd6
fix lint
Laurawly Mar 27, 2019
7862140
sort register error fixed
Laurawly Mar 27, 2019
d40a4df
fix nnvm
Laurawly Mar 27, 2019
c6a666b
adaptive pooling added to relay
Laurawly Mar 27, 2019
6c400e0
nms type fixed
Laurawly Mar 27, 2019
6369cd9
Revert "adaptive pooling added to relay"
Laurawly Mar 27, 2019
8e95f21
fix lint
Laurawly Mar 27, 2019
b6cf048
expose argsort op
Laurawly Mar 29, 2019
c90f947
fix lint
Laurawly Apr 1, 2019
6b6a68c
fix lint
Laurawly Apr 1, 2019
c607559
fix lint
Laurawly Apr 1, 2019
d8aa019
sort test updated
Laurawly Apr 2, 2019
d85f194
sort bug fixed
Laurawly Apr 2, 2019
d0aa6a4
nnvm error fixed
Laurawly Apr 2, 2019
3fadd9a
fix argsort default data type returned to be float insteaf of int
Laurawly Apr 4, 2019
b38b5c1
fix lint
Laurawly Apr 4, 2019
ca8b1e7
fix lint
Laurawly Apr 4, 2019
f994030
test fixed
Laurawly Apr 4, 2019
996097d
fix valid count
Laurawly Apr 8, 2019
4aa5f65
fix titanx bug
Laurawly Apr 9, 2019
2e0c05a
tutorial add both targets
Laurawly Apr 9, 2019
f8aec92
titanx error fixed
Laurawly Apr 10, 2019
f09e2c6
try to fix CI old gpu error
Laurawly Apr 10, 2019
b265130
try to solve CI GPU error
Laurawly Apr 10, 2019
a562c78
get_valid_count added
Laurawly Apr 10, 2019
f16b0ed
reverse get_valid_count
Laurawly Apr 18, 2019
f239789
get valid count optimized
Laurawly Apr 22, 2019
44f62d0
address comments
Laurawly Apr 22, 2019
a89a211
fix ci error
Laurawly Apr 23, 2019
6274ca1
remove unessesary block sync
Laurawly Apr 23, 2019
519cdfc
add back one sync
Laurawly Apr 23, 2019
37630b8
address comments
Laurawly Apr 23, 2019
d1aec1a
address more comments
Laurawly Apr 25, 2019
10e466e
more comments
Laurawly Apr 25, 2019
8b5708f
move sort to be indepent algorithm
Laurawly Apr 26, 2019
ee4a8fd
typo fixed
Laurawly Apr 26, 2019
7c8317d
more typos
Laurawly Apr 26, 2019
bbb83cc
comments addressed
Laurawly Apr 26, 2019
97051e7
doc updated
Laurawly Apr 26, 2019
f335399
fix pylint
Laurawly Apr 27, 2019
5c52634
address final comments
Laurawly Apr 28, 2019
143a515
apache license added
Laurawly Apr 28, 2019
13 changes: 13 additions & 0 deletions docs/langref/relay_op.rst
@@ -164,6 +164,14 @@ This level enables additional math and transform operators.
tvm.relay.vision.yolo_reorg


**Level 6: Algorithm Operators**

.. autosummary::
:nosignatures:

tvm.relay.argsort


**Level 10: Temporary Operators**

This level supports backpropagation of broadcast operators. It is temporary.
@@ -292,6 +300,11 @@ Level 5 Definitions
.. autofunction:: tvm.relay.vision.yolo_reorg


Level 6 Definitions
-------------------
.. autofunction:: tvm.relay.argsort


Level 10 Definitions
--------------------
.. autofunction:: tvm.relay.broadcast_to_like
53 changes: 53 additions & 0 deletions include/tvm/relay/attrs/algorithm.h
@@ -0,0 +1,53 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

/*!
* \file tvm/relay/attrs/algorithm.h
* \brief Auxiliary attributes for algorithm operators.
*/
#ifndef TVM_RELAY_ATTRS_ALGORITHM_H_
#define TVM_RELAY_ATTRS_ALGORITHM_H_

#include <tvm/attrs.h>
#include <string>

namespace tvm {
namespace relay {

/*! \brief Attributes used in argsort operators */
struct ArgsortAttrs : public tvm::AttrsNode<ArgsortAttrs> {
int axis;
bool is_ascend;
DataType dtype;

TVM_DECLARE_ATTRS(ArgsortAttrs, "relay.attrs.ArgsortAttrs") {
TVM_ATTR_FIELD(axis).set_default(-1)
.describe("Axis along which to sort the input tensor."
"If not given, the flattened array is used.");
TVM_ATTR_FIELD(is_ascend).set_default(true)
.describe("Whether to sort in ascending or descending order."
"By default, sort in ascending order");
TVM_ATTR_FIELD(dtype).set_default(NullValue<DataType>())
.describe("DType of the output indices.");
}
};

} // namespace relay
} // namespace tvm
#endif // TVM_RELAY_ATTRS_ALGORITHM_H_
6 changes: 6 additions & 0 deletions include/tvm/relay/attrs/vision.h
@@ -92,6 +92,8 @@ struct NonMaximumSuppressionAttrs : public tvm::AttrsNode<NonMaximumSuppressionAttrs> {
double iou_threshold;
bool force_suppress;
int top_k;
int coord_start;
int score_index;
int id_index;
bool return_indices;
bool invalid_to_bottom;
@@ -106,6 +108,10 @@ struct NonMaximumSuppressionAttrs : public tvm::AttrsNode<NonMaximumSuppressionAttrs> {
.describe("Suppress all detections regardless of class_id.");
TVM_ATTR_FIELD(top_k).set_default(-1)
.describe("Keep maximum top k detections before nms, -1 for no limit.");
TVM_ATTR_FIELD(coord_start).set_default(2)
.describe("Start index of the consecutive 4 coordinates.");
TVM_ATTR_FIELD(score_index).set_default(1)
.describe("Index of the scores/confidence of boxes.");
TVM_ATTR_FIELD(id_index).set_default(0)
.describe("Axis index of id.");
TVM_ATTR_FIELD(return_indices).set_default(true)
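The two new attributes pin down where each field lives inside a detection row. With the defaults added above (`id_index=0`, `score_index=1`, `coord_start=2`), a row reads `[class_id, score, x1, y1, x2, y2]`. A plain-Python sketch of that layout and of the IoU test NMS applies to the coordinate slice — illustrative only, not TVM code; the helper names are made up:

```python
# Sketch of the detection-row layout implied by the new NMS attributes.
# Defaults mirror the diff: id_index=0, score_index=1, coord_start=2,
# so a row reads [class_id, score, x1, y1, x2, y2]. Helper names are
# invented for illustration; this is not TVM code.

def unpack_box(row, id_index=0, score_index=1, coord_start=2):
    """Split one detection row into (class_id, score, [x1, y1, x2, y2])."""
    coords = row[coord_start:coord_start + 4]
    return row[id_index], row[score_index], coords

def iou(a, b):
    """Intersection-over-union of two corner-format boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    def area(r):
        return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

cid, score, box = unpack_box([0.0, 0.9, 10.0, 10.0, 20.0, 20.0])
print(cid, score, box)  # 0.0 0.9 [10.0, 10.0, 20.0, 20.0]
```

Making the offsets attributes (rather than hard-coding 0/1/2) is what lets frontends like MXNet's `box_nms` pass through layouts the old code rejected.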
6 changes: 6 additions & 0 deletions nnvm/include/nnvm/top/nn.h
@@ -488,6 +488,8 @@ struct NonMaximumSuppressionParam : public dmlc::Parameter<NonMaximumSuppressionParam> {
bool force_suppress;
int top_k;
int id_index;
int coord_start;
int score_index;
int max_output_size;
bool invalid_to_bottom;
DMLC_DECLARE_PARAMETER(NonMaximumSuppressionParam) {
@@ -500,6 +502,10 @@ struct NonMaximumSuppressionParam : public dmlc::Parameter<NonMaximumSuppressionParam> {
.describe("Suppress all detections regardless of class_id.");
DMLC_DECLARE_FIELD(top_k).set_default(-1)
.describe("Keep maximum top k detections before nms, -1 for no limit.");
DMLC_DECLARE_FIELD(coord_start).set_default(2)
.describe("Start index of the consecutive 4 coordinates.");
DMLC_DECLARE_FIELD(score_index).set_default(1)
.describe("Index of the scores/confidence of boxes.");
DMLC_DECLARE_FIELD(id_index).set_default(0)
.describe("Axis index of id.");
DMLC_DECLARE_FIELD(return_indices).set_default(true)
10 changes: 7 additions & 3 deletions nnvm/python/nnvm/top/vision.py
@@ -94,8 +94,12 @@ def compute_nms(attrs, inputs, _):
id_index = attrs.get_int('id_index')
invalid_to_bottom = attrs.get_bool('invalid_to_bottom')

return topi.vision.non_max_suppression(inputs[0], inputs[1], max_output_size,
iou_threshold, force_suppress, top_k,
id_index, return_indices, invalid_to_bottom)
return topi.vision.non_max_suppression(inputs[0], inputs[1],
max_output_size=max_output_size,
iou_threshold=iou_threshold,
force_suppress=force_suppress,
top_k=top_k, id_index=id_index,
return_indices=return_indices,
invalid_to_bottom=invalid_to_bottom)

reg.register_pattern("non_max_suppression", OpPattern.OPAQUE)
51 changes: 24 additions & 27 deletions nnvm/tests/python/compiler/test_top_level4.py
@@ -543,14 +543,13 @@ def verify_multibox_prior(dshape, sizes=(1,), ratios=(1,), steps=(-1, -1),
if clip:
np_out = np.clip(np_out, 0, 1)

target = "llvm"
ctx = tvm.cpu()
graph, lib, _ = nnvm.compiler.build(out, target, {"data": dshape})
m = graph_runtime.create(graph, lib, ctx)
m.set_input("data", np.random.uniform(size=dshape).astype(dtype))
m.run()
out = m.get_output(0, tvm.nd.empty(np_out.shape, dtype))
tvm.testing.assert_allclose(out.asnumpy(), np_out, atol=1e-5, rtol=1e-5)
for target, ctx in ctx_list():
graph, lib, _ = nnvm.compiler.build(out, target, {"data": dshape})
m = graph_runtime.create(graph, lib, ctx)
m.set_input("data", np.random.uniform(size=dshape).astype(dtype))
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(np_out.shape, dtype))
tvm.testing.assert_allclose(tvm_out.asnumpy(), np_out, atol=1e-5, rtol=1e-5)

def test_multibox_prior():
verify_multibox_prior((1, 3, 50, 50))
@@ -577,17 +576,16 @@ def test_multibox_transform_loc():
[0, 0.44999999, 1, 1, 1, 1],
[0, 0.30000001, 0, 0, 0.22903419, 0.20435292]]])

target = "llvm"
dtype = "float32"
ctx = tvm.cpu()
graph, lib, _ = nnvm.compiler.build(out, target, {"cls_prob": (batch_size, num_anchors, num_classes),
"loc_preds": (batch_size, num_anchors * 4),
"anchors": (1, num_anchors, 4)})
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**{"cls_prob": np_cls_prob.astype(dtype), "loc_preds": np_loc_preds.astype(dtype), "anchors": np_anchors.astype(dtype)})
m.run()
out = m.get_output(0, tvm.nd.empty(expected_np_out.shape, dtype))
tvm.testing.assert_allclose(out.asnumpy(), expected_np_out, atol=1e-5, rtol=1e-5)
for target, ctx in ctx_list():
graph, lib, _ = nnvm.compiler.build(out, target, {"cls_prob": (batch_size, num_anchors, num_classes),
"loc_preds": (batch_size, num_anchors * 4),
"anchors": (1, num_anchors, 4)})
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**{"cls_prob": np_cls_prob.astype(dtype), "loc_preds": np_loc_preds.astype(dtype), "anchors": np_anchors.astype(dtype)})
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(expected_np_out.shape, dtype))
tvm.testing.assert_allclose(tvm_out.asnumpy(), expected_np_out, atol=1e-5, rtol=1e-5)

def test_non_max_suppression():
dshape = (1, 5, 6)
@@ -607,15 +605,14 @@ def test_non_max_suppression():
[-1, -1, -1, -1, -1, -1], [-1, -1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1]]])

target = "llvm"
ctx = tvm.cpu()
graph, lib, _ = nnvm.compiler.build(out, target, {"data": dshape, "valid_count": (dshape[0],)},
dtype={"data": "float32", "valid_count": "int32"})
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**{"data": np_data, "valid_count": np_valid_count})
m.run()
out = m.get_output(0, tvm.nd.empty(np_result.shape, "float32"))
tvm.testing.assert_allclose(out.asnumpy(), np_result, atol=1e-5, rtol=1e-5)
for target, ctx in ctx_list():
graph, lib, _ = nnvm.compiler.build(out, target, {"data": dshape, "valid_count": (dshape[0],)},
dtype={"data": "float32", "valid_count": "int32"})
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**{"data": np_data, "valid_count": np_valid_count})
m.run()
tvm_out = m.get_output(0, tvm.nd.empty(np_result.shape, "float32"))
tvm.testing.assert_allclose(tvm_out.asnumpy(), np_result, atol=1e-5, rtol=1e-5)

def np_slice_like(np_data, np_shape_like, axis=[]):
begin_idx = [0 for _ in np_data.shape]
1 change: 1 addition & 0 deletions python/tvm/relay/__init__.py
@@ -36,6 +36,7 @@
from .op.reduce import *
from .op.tensor import *
from .op.transform import *
from .op.algorithm import *
from . import nn
from . import annotation
from . import vision
29 changes: 20 additions & 9 deletions python/tvm/relay/frontend/mxnet.py
@@ -186,6 +186,13 @@ def _pool2d(new_op, is_avg):
'Operator {} Pooling is not supported for frontend MXNet.'.format(pool_type.capitalize()))


def _mx_adaptive_avg_pooling(inputs, attrs):
output_size = attrs.get_int_tuple("output_size", [])
if output_size != (1,):
raise RuntimeError("AdaptiveAvgPooling with output_size other than 1 is not supported yet.")
return _op.nn.global_avg_pool2d(inputs[0])
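The converter above only accepts `output_size == (1,)` because adaptive average pooling to a 1×1 output is exactly the mean over the spatial dimensions — i.e. global average pooling, which is why it lowers to `global_avg_pool2d`. A plain-Python sketch of that equivalence (illustrative only, not TVM code):

```python
# Why AdaptiveAvgPooling with a 1x1 output can be lowered to
# global_avg_pool2d: the 1x1 adaptive output is just the mean over the
# spatial dims. Illustrative only, not TVM code.

def adaptive_avg_pool_1x1(x):
    """x is nested lists laid out [N][C][H][W]; returns [N][C][1][1]."""
    out = []
    for sample in x:
        chans = []
        for ch in sample:
            h, w = len(ch), len(ch[0])
            mean = sum(sum(row) for row in ch) / (h * w)
            chans.append([[mean]])  # spatial dims collapse to 1x1
        out.append(chans)
    return out

x = [[[[1.0, 2.0], [3.0, 4.0]]]]  # N=1, C=1, H=W=2
print(adaptive_avg_pool_1x1(x))   # [[[[2.5]]]]
```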


def _mx_dropout(inputs, attrs):
rate = attrs.get_float("p", 0.5)
return _op.nn.dropout(inputs[0], rate=rate)
@@ -529,15 +536,6 @@ def _mx_box_nms(inputs, attrs):
id_index = attrs.get_int('id_index', -1)
in_format = attrs.get_str('in_format', 'corner')
out_format = attrs.get_str('out_format', 'corner')
if coord_start != 2:
raise tvm.error.OpAttributeInvalid(
'Value of attribute "coord_start" must equal 2 for operator box_nms.')
if score_index != 1:
raise tvm.error.OpAttributeInvalid(
'Value of attribute "score_index" must equal 1 for operator box_nms.')
if id_index != -1 and int(id_index) != 0:
raise tvm.error.OpAttributeInvalid(
'Value of attribute "id_index" must equal either -1 or 0 for operator box_nms.')
if in_format != 'corner':
raise tvm.error.OpAttributeInvalid(
'Value of attribute "in_format" must equal "corner" for operator box_nms.')
@@ -551,6 +549,8 @@
iou_threshold=iou_thresh,
force_suppress=force_suppress,
top_k=top_k,
coord_start=coord_start,
score_index=score_index,
id_index=id_index,
return_indices=False,
invalid_to_bottom=True)
@@ -648,6 +648,15 @@ def _mx_deformable_convolution(inputs, attrs):
return res


def _mx_argsort(inputs, attrs):
assert len(inputs) == 1
new_attrs = {}
new_attrs["axis"] = attrs.get_int("axis", -1)
new_attrs["is_ascend"] = attrs.get_bool("is_ascend", True)
new_attrs["dtype"] = attrs.get_str("dtype", "float32")
return _op.argsort(inputs[0], **new_attrs)


# Note: due to attribute conversion constraint
# ops in the identity set must be attribute free
_identity_list = [
@@ -783,6 +792,7 @@ def _mx_deformable_convolution(inputs, attrs):
"BlockGrad" : _mx_BlockGrad,
"shape_array" : _mx_shape_array,
"Embedding" : _mx_embedding,
"argsort" : _mx_argsort,
"SoftmaxOutput" : _mx_softmax_output,
"SoftmaxActivation" : _mx_softmax_activation,
"smooth_l1" : _mx_smooth_l1,
@@ -796,6 +806,7 @@ def _mx_deformable_convolution(inputs, attrs):
"_contrib_MultiProposal" : _mx_proposal,
"_contrib_box_nms" : _mx_box_nms,
"_contrib_DeformableConvolution" : _mx_deformable_convolution,
"_contrib_AdaptiveAvgPooling2D" : _mx_adaptive_avg_pooling,
# List of missing operators that are present in NNVMv1
# TODO(tvm-tvm): support all operators.
#
2 changes: 2 additions & 0 deletions python/tvm/relay/op/__init__.py
@@ -24,6 +24,7 @@
from .reduce import *
from .tensor import *
from .transform import *
from .algorithm import *
from . import nn
from . import annotation
from . import image
@@ -36,6 +37,7 @@
from . import _tensor_grad
from . import _transform
from . import _reduce
from . import _algorithm
from ..expr import Expr
from ..base import register_relay_node

45 changes: 45 additions & 0 deletions python/tvm/relay/op/_algorithm.py
@@ -0,0 +1,45 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"Definition of classic algorithms"
# pylint: disable=invalid-name,unused-argument
from __future__ import absolute_import

import topi
from topi.util import get_const_int
from ..op import OpPattern, register_compute, register_schedule, register_pattern


@register_schedule("argsort")
def schedule_argsort(_, outs, target):
"""Schedule definition of argsort"""
with target:
return topi.generic.schedule_argsort(outs)


@register_compute("argsort")
def compute_argsort(attrs, inputs, _, target):
"""Compute definition of argsort"""
axis = get_const_int(attrs.axis)
is_ascend = bool(get_const_int(attrs.is_ascend))
dtype = str(attrs.dtype)
return [
topi.argsort(inputs[0], None, axis=axis, is_ascend=is_ascend, \
dtype=dtype, flag=False)
]


register_pattern("argsort", OpPattern.OPAQUE)
47 changes: 47 additions & 0 deletions python/tvm/relay/op/algorithm.py
@@ -0,0 +1,47 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Classic algorithm operation"""
from __future__ import absolute_import as _abs
from . import _make

def argsort(data, axis=-1, is_ascend=1, dtype="float32"):
"""Performs sorting along the given axis and returns an array of indicies
having same shape as an input array that index data in sorted order.

Parameters
----------
data : relay.Expr
The input data tensor.

axis : int, optional
Axis along which to sort the input tensor.

is_ascend : boolean, optional
Whether to sort in ascending or descending order.

dtype : string, optional
DType of the output indices.

Returns
-------
out : relay.Expr
Tensor with same shape as data.
"""
return _make.argsort(data, axis, is_ascend, dtype)
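For reference, the semantics of the new op can be sketched in plain Python (a 1-D sketch only, not TVM code). Note the default `dtype="float32"`: the returned indices are floating point unless an integer dtype is requested, and the sketch mirrors that:

```python
# Reference semantics for relay.argsort, sketched in plain Python for
# the 1-D case. Mirrors the PR's default dtype="float32" by casting the
# returned indices to float. Illustrative only, not TVM code.

def argsort_1d(data, is_ascend=True):
    """Indices that would sort `data`; same length as the input."""
    idx = sorted(range(len(data)), key=lambda i: data[i],
                 reverse=not is_ascend)
    return [float(i) for i in idx]  # mimic the float32 default dtype

print(argsort_1d([0.3, 0.1, 0.2]))                   # [1.0, 2.0, 0.0]
print(argsort_1d([0.3, 0.1, 0.2], is_ascend=False))  # [0.0, 2.0, 1.0]
```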