fix deprecated
Signed-off-by: Yan Xu <[email protected]>
Connor-XY committed Jun 7, 2024
1 parent bc32b83 commit 2228f0d
Showing 280 changed files with 2,005 additions and 1,900 deletions.
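The substance of the change is an API migration: LLVM/MLIR has deprecated the member-function casts on types, attributes, and values (`x.cast<T>()`, `x.isa<T>()`, `x.dyn_cast<T>()`, `x.dyn_cast_or_null<T>()`) in favor of the free functions `mlir::cast<T>(x)`, `mlir::isa<T>(x)`, `mlir::dyn_cast<T>(x)`, and `mlir::dyn_cast_or_null<T>(x)`, and `StringRef::equals` in favor of a plain `==` comparison. The hypothetical helper below (not code from this commit) sketches the before/after shape of the migration.

```cpp
#include "mlir/IR/BuiltinTypes.h" // ShapedType, FloatType
#include "mlir/IR/Value.h"
#include "llvm/ADT/StringRef.h"

// Hypothetical helper (not part of this commit) showing the migration applied
// across the changed files.
bool isLegalF32Operand(mlir::Value val, llvm::StringRef paddingType) {
  // Deprecated member-function style removed by this commit:
  //   auto ty = val.getType().dyn_cast<mlir::ShapedType>();
  //   if (!ty || !ty.getElementType().isa<mlir::FloatType>())
  //     return false;
  //   if (!paddingType.equals("VALID_PADDING"))
  //     return false;
  //   return ty.getElementType().cast<mlir::FloatType>().getWidth() == 32;
  // Free-function style introduced by this commit:
  auto ty = mlir::dyn_cast<mlir::ShapedType>(val.getType());
  if (!ty || !mlir::isa<mlir::FloatType>(ty.getElementType()))
    return false;
  if (!(paddingType == "VALID_PADDING"))
    return false;
  return mlir::cast<mlir::FloatType>(ty.getElementType()).getWidth() == 32;
}
```

On MLIR versions where these members are marked deprecated, the old spellings still compile but emit warnings, which is presumably what the commit title refers to.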
10 changes: 5 additions & 5 deletions docs/ConstPropagationPass.md
@@ -27,16 +27,16 @@ func @foo() -> tensor<1xf32> {
}
```

## Remark

ONNXConstantOp uses MLIR DenseElementsAttr to store constant values. It is
important to note that, once a DenseElementsAttr is created, it is alive and
consumes memory until the end of compilation. In [Example](#example), all
three DenseElementsAttrs in the three ONNXConstantOps exist until the end of
compilation. In particular, the two intermediate DenseElementsAttrs in the two
ONNXConstantOps produced by folding the two ONNXAddOps also persist. For a
real-world model, the number of intermediate DenseElementsAttrs will increase
quickly, which leads to a large memory footprint during compilation.

To avoid creating too many DenseElementsAttrs for intermediate ONNXConstantOps
during `--constprop-onnx`, we designed a mechanism that dynamically allocates and
@@ -93,7 +93,7 @@ def AddConstProp : Pat<
// source pattern: From add(lhs, rhs).
(ONNXAddOp:$addOp (ONNXConstantOp:$lhs $_, $_, $_, $_, $_, $_, $_, $_),
(ONNXConstantOp:$rhs $_, $_, $_, $_, $_, $_, $_, $_)),
// result pattern: To c = lhs + rhs
(CreateAddOfTwoConst $addOp, $lhs, $rhs),
// Additional constraints: if both lhs and rhs are dense constants.
[(IsFromDenseONNXConstantOp:$lhs), (IsFromDenseONNXConstantOp:$rhs)]>;
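For readers less familiar with DRR, the pattern above behaves roughly like the hand-written `RewritePattern` sketched below. This is illustrative only: the real C++ is generated by TableGen, and the predicate and builder names here simply mirror the `IsFromDenseONNXConstantOp` constraint and the `CreateAddOfTwoConst` native call from the pattern.

```cpp
// Illustrative only: a hand-written approximation of what the DRR pattern
// above expands to.
struct AddConstPropPattern : public mlir::OpRewritePattern<ONNXAddOp> {
  using OpRewritePattern<ONNXAddOp>::OpRewritePattern;

  mlir::LogicalResult matchAndRewrite(
      ONNXAddOp addOp, mlir::PatternRewriter &rewriter) const override {
    mlir::Value lhs = addOp.getA();
    mlir::Value rhs = addOp.getB();
    // Additional constraints: both inputs must be produced by dense
    // ONNXConstantOps (IsFromDenseONNXConstantOp in the TableGen pattern).
    if (!isFromDenseONNXConstantOp(lhs) || !isFromDenseONNXConstantOp(rhs))
      return mlir::failure();
    // Result pattern: materialize c = lhs + rhs as a new constant
    // (CreateAddOfTwoConst in the TableGen pattern).
    mlir::Value c = createAddOfTwoConst(rewriter, addOp, lhs, rhs);
    rewriter.replaceOp(addOp, c);
    return mlir::success();
  }
};
```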
@@ -127,7 +127,7 @@ template <typename ElementwiseBinaryOp>
Value ConstPropElementwiseBinary(PatternRewriter &rewriter,
Value replacingValue, Value lhsValue, Value rhsValue) {
ConstPropCounters::count("ElementwiseBinary", {lhsValue, rhsValue});
Type replacingType = replacingValue.getType().cast<ShapedType>();
Type replacingType = mlir::cast<ShapedType>(replacingValue.getType());

// Get lhs and rhs ElementsAttr from the values' defining constant ops.
ElementsAttr lhs = getConstValueElements(lhsValue);
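The excerpt stops here; the `rhs` attribute is obtained the same way and the two are then combined element by element. A rough, self-contained sketch of that combination step is shown below. It assumes f32 elements and a plain `DenseElementsAttr` result, whereas the real pass builds the dynamically allocated, buffer-backed attributes described earlier precisely to avoid the memory-footprint problem.

```cpp
#include "mlir/IR/BuiltinAttributes.h" // DenseElementsAttr, ElementsAttr
#include "mlir/IR/BuiltinTypes.h"      // ShapedType
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/SmallVector.h"

// Simplified sketch (not onnx-mlir's implementation): add two f32 ElementsAttr
// operands element by element and materialize a DenseElementsAttr for the
// statically shaped result type.
static mlir::DenseElementsAttr foldAddF32(mlir::ShapedType resultType,
    mlir::ElementsAttr lhs, mlir::ElementsAttr rhs) {
  llvm::SmallVector<llvm::APFloat> sums;
  sums.reserve(resultType.getNumElements());
  auto lhsRange = lhs.getValues<llvm::APFloat>();
  auto rhsRange = rhs.getValues<llvm::APFloat>();
  auto rhsIt = rhsRange.begin();
  for (llvm::APFloat v : lhsRange) {
    v.add(*rhsIt++, llvm::APFloat::rmNearestTiesToEven);
    sums.push_back(v);
  }
  return mlir::DenseElementsAttr::get(resultType, sums);
}
```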
2 changes: 1 addition & 1 deletion docs/ImportONNXDefs.md
@@ -83,7 +83,7 @@ You will need to add the implementation code in the `src/Dialect/ONNX/ONNXOps.cp
Tips:
* Use the `operandAdaptor` object to get the inputs (you must use `operandAdaptor` to get the current values of the inputs) and the `op` object to get the attributes (you can use `op` because attributes are typically immutable).
* Use `hasShapeAndRank(X)` to test whether the `X` input is currently shaped and ranked. If not, return success, as we will get a chance later to test the operation with this info. Note that some inputs may be scalar too, in which case they may or may not be encoded as a shaped type.
* You can then use MLIR call `X.getType().cast<ShapedType>()` to get a shaped type, from which you can get the rank and the dimensions. At this time, we only check dimension validity for values known at runtime. Unknown dimensions are encoded as a negative number. Please only use the cast when you are sure that it will not assert, i.e. the type is indeed a `ShapedType`.
* You can then use MLIR call `mlir::cast<ShapedType>(X.getType())` to get a shaped type, from which you can get the rank and the dimensions. At this time, we only check dimension validity for values known at runtime. Unknown dimensions are encoded as a negative number. Please only use the cast when you are sure that it will not assert, i.e. the type is indeed a `ShapedType`.
* When you find an error, report it with a friendly error message using `op->emitError(msg)`.
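Putting these tips together, a shape-related verifier might look like the following minimal sketch. The op name and its `getX` accessor are hypothetical; `hasShapeAndRank` is the helper mentioned above.

```cpp
// Hypothetical op; sketches the recipe from the tips above.
mlir::LogicalResult MyCustomOp::verify() {
  MyCustomOp::Adaptor operandAdaptor(*this);
  mlir::Value X = operandAdaptor.getX(); // hypothetical input accessor
  // Shape/rank may not be known yet; succeed now, we get another chance later.
  if (!hasShapeAndRank(X))
    return mlir::success();
  auto xType = mlir::cast<mlir::ShapedType>(X.getType());
  // Only dimensions known at this point are checked; dynamic dims are skipped.
  for (int64_t dim : xType.getShape())
    if (dim == 0)
      return emitError("expected input X to have no zero dimensions");
  return mlir::success();
}
```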

## Customize importer
96 changes: 50 additions & 46 deletions src/Accelerators/NNPA/Conversion/ONNXToZHigh/ONNXLegalityCheck.cpp
@@ -51,14 +51,14 @@ bool isCompatibleWithNNPALevel(std::string inputNNPALevel) {
/// zAIU supports only F16, F32 and BFLOAT. Since MLIR does not support BFLOAT,
/// we check F16 and F32 here only. zAIU only supports rank in range of (0, 4].
bool isValidElementTypeAndRank(Value val, bool donotCheckRank) {
if (val.getType().isa<NoneType>())
if (mlir::isa<NoneType>(val.getType()))
return true;
if (auto valueType = val.getType().dyn_cast_or_null<ShapedType>()) {
if (auto valueType = mlir::dyn_cast_or_null<ShapedType>(val.getType())) {
Type elementType = (valueType) ? valueType.getElementType() : val.getType();
// Element type must be F16 or F32.
if (elementType.isa<FloatType>() &&
(elementType.cast<FloatType>().getWidth() == 16 ||
elementType.cast<FloatType>().getWidth() == 32)) {
if (mlir::isa<FloatType>(elementType) &&
(mlir::cast<FloatType>(elementType).getWidth() == 16 ||
mlir::cast<FloatType>(elementType).getWidth() == 32)) {
if (donotCheckRank)
return true;
// Rank must be in range of (0, 4].
@@ -78,8 +78,8 @@ bool checkLegalityPoolOpsCommon(POOLOP op, Value Y) {
shapeHelper.computeShapeAndAssertOnFailure();
Value X = op.getX();
int64_t ceilMode = op.getCeilMode();
ShapedType inputType = X.getType().cast<ShapedType>();
ShapedType outputType = Y.getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(X.getType());
ShapedType outputType = mlir::cast<ShapedType>(Y.getType());
ArrayRef<int64_t> shapeInput = inputType.getShape();
ArrayRef<int64_t> shapeOutput = outputType.getShape();

@@ -248,14 +248,14 @@ bool meetPoolParamRestrictions(int64_t inputShape, int64_t kernelShape,
if (outputShape != 1)
return false;
// padding_type must be VALID_PADDING.
if (!paddingType.equals("VALID_PADDING"))
if (!(paddingType == "VALID_PADDING"))
return false;
} else {
// strides are greater than zero
// kernel_width and kernel_height must be less than or equal to 64.
if (kernelShape > 64)
return false;
if (paddingType.equals("SAME_PADDING")) {
if (paddingType == "SAME_PADDING") {
if (outputShape != ceil((float)inputShape / strides))
return false;
} else { // VALID_PADDING
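To make the SAME_PADDING rule above concrete: with assumed values inputShape = 7 and strides = 2, the only output size zDNN accepts is ceil(7 / 2.0) = 4; any other outputShape fails this check.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Stand-alone illustration of the SAME_PADDING restriction with assumed values.
int main() {
  int64_t inputShape = 7, strides = 2, outputShape = 4;
  int64_t required =
      static_cast<int64_t>(std::ceil((float)inputShape / strides));
  assert(required == 4);                    // ceil(7 / 2.0f) == 4
  return (outputShape == required) ? 0 : 1; // 0 means legal for zDNN
}
```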
@@ -412,7 +412,7 @@ bool isSuitableForZDNN<ONNXSoftmaxOp>(
return false;
if (!isValidElementTypeAndRank(op.getInput()))
return false;
ShapedType inputType = op.getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(op.getType());
if (!inputType.hasRank())
return false;
int64_t rank = inputType.getRank();
@@ -429,7 +429,7 @@ bool isSuitableForZDNN<ONNXReluOp>(
return false;
if (!isValidElementTypeAndRank(op.getX()))
return false;
ShapedType xType = op.getX().getType().cast<ShapedType>();
ShapedType xType = mlir::cast<ShapedType>(op.getX().getType());
return xType.hasRank() && (xType.getRank() <= 4);
}

@@ -442,7 +442,7 @@ bool isSuitableForZDNN<ONNXTanhOp>(
return false;
if (!isValidElementTypeAndRank(op.getInput()))
return false;
ShapedType inputType = op.getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(op.getType());
return inputType.hasRank() && (inputType.getRank() <= 4);
}

@@ -455,7 +455,7 @@ bool isSuitableForZDNN<ONNXSigmoidOp>(
return false;
if (!isValidElementTypeAndRank(op.getX()))
return false;
ShapedType xType = op.getX().getType().cast<ShapedType>();
ShapedType xType = mlir::cast<ShapedType>(op.getX().getType());
return xType.hasRank() && (xType.getRank() <= 4);
}

@@ -468,7 +468,7 @@ bool isSuitableForZDNN<ONNXLogOp>(
return false;
if (!isValidElementTypeAndRank(op.getInput()))
return false;
ShapedType inputType = op.getInput().getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(op.getInput().getType());
return inputType.hasRank() && (inputType.getRank() <= 4);
}

@@ -481,7 +481,7 @@ bool isSuitableForZDNN<ONNXExpOp>(
return false;
if (!isValidElementTypeAndRank(op.getInput()))
return false;
ShapedType inputType = op.getInput().getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(op.getInput().getType());
return inputType.hasRank() && (inputType.getRank() <= 4);
}

@@ -500,8 +500,8 @@ bool isSuitableForZDNN<ONNXMatMulOp>(
return false;
if (!isValidElementTypeAndRank(op.getOperand(1)))
return false;
ShapedType aType = op.getOperand(0).getType().cast<ShapedType>();
ShapedType bType = op.getOperand(1).getType().cast<ShapedType>();
ShapedType aType = mlir::cast<ShapedType>(op.getOperand(0).getType());
ShapedType bType = mlir::cast<ShapedType>(op.getOperand(1).getType());

// Illegal if A or B is unranked.
if (!aType.hasRank() || !bType.hasRank())
@@ -558,16 +558,16 @@ bool isSuitableForZDNN<ONNXGemmOp>(
if (!isValidElementTypeAndRank(C))
return false;

ShapedType aType = A.getType().cast<ShapedType>();
ShapedType bType = B.getType().cast<ShapedType>();
ShapedType aType = mlir::cast<ShapedType>(A.getType());
ShapedType bType = mlir::cast<ShapedType>(B.getType());
ShapedType cType;
ArrayRef<int64_t> aShape = aType.getShape();
ArrayRef<int64_t> bShape = bType.getShape();
ArrayRef<int64_t> cShape;

bool hasC = !isNoneValue(C);
if (hasC) {
cType = C.getType().cast<ShapedType>();
cType = mlir::cast<ShapedType>(C.getType());
cShape = cType.getShape();
}

@@ -612,7 +612,7 @@ bool isSuitableForZDNN<ONNXReduceMeanV13Op>(

std::optional<mlir::ArrayAttr> axes = op.getAxes();
int64_t keepdims = op.getKeepdims();
ShapedType dataType = op.getData().getType().cast<ShapedType>();
ShapedType dataType = mlir::cast<ShapedType>(op.getData().getType());
auto shapeData = dataType.getShape();

// Check keepdims.
@@ -623,8 +623,8 @@ bool isSuitableForZDNN<ONNXReduceMeanV13Op>(
mlir::ArrayAttr axesVal = axes.value();
SmallVector<Attribute> axesAttrs(axesVal.begin(), axesVal.end());
if ((axesAttrs.size() != 2) ||
(axesAttrs[0].dyn_cast<IntegerAttr>().getInt() != 2) ||
(axesAttrs[1].dyn_cast<IntegerAttr>().getInt() != 3)) {
(mlir::dyn_cast<IntegerAttr>(axesAttrs[0]).getInt() != 2) ||
(mlir::dyn_cast<IntegerAttr>(axesAttrs[1]).getInt() != 3)) {
return false;
}

@@ -676,15 +676,15 @@ bool isSuitableForZDNN<ONNXLSTMOp>(
if (!isValidElementTypeAndRank(B))
return false;

int64_t hidden_size = R.getType().cast<ShapedType>().getShape()[2];
int64_t hidden_size = mlir::cast<ShapedType>(R.getType()).getShape()[2];
std::optional<ArrayAttr> activations = op.getActivations();
// Check if direction and hidden_size in W have static dimensions.
ArrayRef<int64_t> wShape = W.getType().cast<ShapedType>().getShape();
ArrayRef<int64_t> wShape = mlir::cast<ShapedType>(W.getType()).getShape();
if ((wShape[0] != 1 && wShape[0] != 2) || wShape[1] == ShapedType::kDynamic)
return false;
// Check if R has static dimensions, and the direction dim is 1 or 2.
ArrayRef<int64_t> rShape = R.getType().cast<ShapedType>().getShape();
if (!R.getType().cast<ShapedType>().hasStaticShape() ||
ArrayRef<int64_t> rShape = mlir::cast<ShapedType>(R.getType()).getShape();
if (!mlir::cast<ShapedType>(R.getType()).hasStaticShape() ||
(rShape[0] != 1 && rShape[0] != 2))
return false;
// Check hidden_size.
@@ -694,11 +694,11 @@ bool isSuitableForZDNN<ONNXLSTMOp>(
if (!isNoneValue(op.getSequenceLens()))
return false;
// check if B, initial_h and initial_c have static dimensions if given.
if (!isNoneValue(B) && !B.getType().cast<ShapedType>().hasStaticShape())
if (!isNoneValue(B) && !mlir::cast<ShapedType>(B.getType()).hasStaticShape())
return false;
// check if B's direction dim is 1 or 2.
if (!isNoneValue(B)) {
ArrayRef<int64_t> bShape = B.getType().cast<ShapedType>().getShape();
ArrayRef<int64_t> bShape = mlir::cast<ShapedType>(B.getType()).getShape();
if (bShape[0] != 1 && bShape[0] != 2)
return false;
}
@@ -708,12 +708,14 @@ bool isSuitableForZDNN<ONNXLSTMOp>(
return false;
// zDNN supports the default activations (["Sigmoid", "Tanh", "Tanh"]) only.
if ((activations && (activations.value().size() > 0) &&
(activations.value()[0].cast<StringAttr>().getValue() !=
(mlir::cast<StringAttr>(activations.value()[0]).getValue() !=
"Sigmoid")) ||
(activations && (activations.value().size() > 1) &&
(activations.value()[1].cast<StringAttr>().getValue() != "Tanh")) ||
(mlir::cast<StringAttr>(activations.value()[1]).getValue() !=
"Tanh")) ||
(activations && (activations.value().size() > 2) &&
(activations.value()[2].cast<StringAttr>().getValue() != "Tanh")))
(mlir::cast<StringAttr>(activations.value()[2]).getValue() !=
"Tanh")))
return false;
// zDNN does not support clip(Cell clip threshold).
if (op.getClip())
@@ -755,24 +757,24 @@ bool isSuitableForZDNN<ONNXGRUOp>(
if (!isValidElementTypeAndRank(B))
return false;

int64_t hidden_size = R.getType().cast<ShapedType>().getShape()[2];
int64_t hidden_size = mlir::cast<ShapedType>(R.getType()).getShape()[2];
std::optional<ArrayAttr> activations = op.getActivations();
// Check if direction and hidden_size in W have static dimensions.
ArrayRef<int64_t> wShape = W.getType().cast<ShapedType>().getShape();
ArrayRef<int64_t> wShape = mlir::cast<ShapedType>(W.getType()).getShape();
if ((wShape[0] != 1 && wShape[0] != 2) || wShape[1] == ShapedType::kDynamic)
return false;
// Check if R has static dimensions.
if (!R.getType().cast<ShapedType>().hasStaticShape())
if (!mlir::cast<ShapedType>(R.getType()).hasStaticShape())
return false;
// Check hidden_size.
if (hidden_size > MAXIMUM_NUM_HIDDEN_SIZE_GRU)
return false;
// check if B and initial_h have static dimensions if given.
if (!isNoneValue(B) && !B.getType().cast<ShapedType>().hasStaticShape())
if (!isNoneValue(B) && !mlir::cast<ShapedType>(B.getType()).hasStaticShape())
return false;
// check if B's direction dim is 1 or 2.
if (!isNoneValue(B)) {
ArrayRef<int64_t> bShape = B.getType().cast<ShapedType>().getShape();
ArrayRef<int64_t> bShape = mlir::cast<ShapedType>(B.getType()).getShape();
if (bShape[0] != 1 && bShape[0] != 2)
return false;
}
@@ -781,12 +783,14 @@ bool isSuitableForZDNN<ONNXGRUOp>(
return false;
// zDNN supports the default activations (["Sigmoid", "Tanh", "Tanh"]) only.
if ((activations && (activations.value().size() > 0) &&
(activations.value()[0].cast<StringAttr>().getValue() !=
(mlir::cast<StringAttr>(activations.value()[0]).getValue() !=
"Sigmoid")) ||
(activations && (activations.value().size() > 1) &&
(activations.value()[1].cast<StringAttr>().getValue() != "Tanh")) ||
(mlir::cast<StringAttr>(activations.value()[1]).getValue() !=
"Tanh")) ||
(activations && (activations.value().size() > 2) &&
(activations.value()[2].cast<StringAttr>().getValue() != "Tanh")))
(mlir::cast<StringAttr>(activations.value()[2]).getValue() !=
"Tanh")))
return false;
// zDNN does not support clip(Cell clip threshold).
if (op.getClip())
@@ -859,7 +863,7 @@ static bool checkConv2DParamRestrictions(int64_t inputDim, int64_t kernelDim,
int64_t stride, int64_t outputDim, StringRef paddingType) {
if (stride == 0) {
// paddingType must be VALID_PADDING.
if (!paddingType.equals("VALID_PADDING"))
if (!(paddingType == "VALID_PADDING"))
return false;
// inputDim must be = kernel dim.
if (inputDim != kernelDim)
@@ -875,7 +879,7 @@ static bool checkConv2DParamRestrictions(int64_t inputDim, int64_t kernelDim,
// kernel dim must be less than or equal to 64.
if (kernelDim > 64)
return false;
if (paddingType.equals("SAME_PADDING")) {
if (paddingType == "SAME_PADDING") {
// height_out restriction.
if (outputDim != ceil((float)inputDim / stride))
return false;
@@ -913,8 +917,8 @@ bool isSuitableForZDNN<ONNXConvOp>(
ONNXConvOpShapeHelper shapeHelper(op.getOperation(), {});
shapeHelper.computeShapeAndAssertOnFailure();

ShapedType inputType = op.getX().getType().cast<ShapedType>();
ShapedType outputType = op.getY().getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(op.getX().getType());
ShapedType outputType = mlir::cast<ShapedType>(op.getY().getType());
ArrayRef<int64_t> shapeInput = inputType.getShape();
ArrayRef<int64_t> shapeOutput = outputType.getShape();

@@ -978,8 +982,8 @@ bool isSuitableForZDNN<ONNXConvOp>(
template <>
bool isSuitableForZDNN<ONNXBatchNormalizationInferenceModeOp>(
ONNXBatchNormalizationInferenceModeOp op, const DimAnalysis *dimAnalysis) {
ShapedType inputType = op.getX().getType().cast<ShapedType>();
ShapedType outputType = op.getO_Y().getType().cast<ShapedType>();
ShapedType inputType = mlir::cast<ShapedType>(op.getX().getType());
ShapedType outputType = mlir::cast<ShapedType>(op.getO_Y().getType());
ArrayRef<int64_t> shapeInput = inputType.getShape();
ArrayRef<int64_t> shapeOutput = outputType.getShape();

21 changes: 11 additions & 10 deletions src/Accelerators/NNPA/Conversion/ONNXToZHigh/ONNXToZHigh.cpp
@@ -46,7 +46,7 @@ ArrayAttr getLSTMGRUBiasSplitShape(
Value getLSTMGRUZDNNWeightFromONNXWeight(
Location loc, PatternRewriter &rewriter, Value weight, int isLSTM) {
int64_t splitNum = isLSTM ? 4 : 3;
RankedTensorType weightType = weight.getType().cast<RankedTensorType>();
RankedTensorType weightType = mlir::cast<RankedTensorType>(weight.getType());
Type elementType = weightType.getElementType();
ArrayRef<int64_t> weightShape = weightType.getShape();
int64_t direction = weightShape[0];
@@ -124,7 +124,7 @@ Value getLSTMGRUGetYh(Location loc, PatternRewriter &rewriter, Value val,
if (isNoneValue(resYh) || isNoneValue(val))
return noneValue;

ArrayRef<int64_t> shapeX = X.getType().cast<ShapedType>().getShape();
ArrayRef<int64_t> shapeX = mlir::cast<ShapedType>(X.getType()).getShape();
MultiDialectBuilder<OnnxBuilder> create(rewriter, loc);
// Generate Y_h for onnx.LSTM from hn_output for all timestep
Value minusOne = create.onnx.constantInt64({-1});
Expand All @@ -136,12 +136,12 @@ Value getLSTMGRUGetYh(Location loc, PatternRewriter &rewriter, Value val,
Value intMax = create.onnx.constantInt64({INT_MAX});
StringRef directionStr = direction.getValue();
ArrayRef<int64_t> resYhShape =
resYh.getType().cast<RankedTensorType>().getShape();
mlir::cast<RankedTensorType>(resYh.getType()).getShape();
int64_t T = isNoneValue(resY) ? 1 : shapeX[0];
int64_t D = resYhShape[0];
int64_t B = resYhShape[1];
int64_t H = resYhShape[2];
Type elementType = resYh.getType().cast<ShapedType>().getElementType();
Type elementType = mlir::cast<ShapedType>(resYh.getType()).getElementType();
Value axis = zero;
Value step = one;
Value ret;
@@ -205,19 +205,20 @@ Value getLSTMGRUGetYc(

SmallVector<Value, 4> emitONNXSplitOp(Location loc, PatternRewriter &rewriter,
Value input, IntegerAttr axis, ArrayAttr split) {
Type elementType = input.getType().cast<ShapedType>().getElementType();
Type elementType = mlir::cast<ShapedType>(input.getType()).getElementType();
SmallVector<mlir::Type> outputTypes;
int64_t splitNum = split.size();
ArrayRef<int64_t> inputShape =
input.getType().cast<RankedTensorType>().getShape();
int64_t splitAxis = axis.cast<IntegerAttr>().getSInt();
mlir::cast<RankedTensorType>(input.getType()).getShape();
int64_t splitAxis = mlir::cast<IntegerAttr>(axis).getSInt();
assert(splitAxis >= 0 && "Negative axis");
for (int i = 0; i < splitNum; i++) {
SmallVector<int64_t> outputShape;
for (size_t dim = 0; dim < inputShape.size(); dim++) {
outputShape.emplace_back((dim == (unsigned int)splitAxis)
? split[dim].cast<IntegerAttr>().getInt()
: inputShape[dim]);
outputShape.emplace_back(
(dim == (unsigned int)splitAxis)
? mlir::cast<IntegerAttr>(split[dim]).getInt()
: inputShape[dim]);
}
outputTypes.emplace_back(RankedTensorType::get(outputShape, elementType));
}
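As a hypothetical worked example for the loop above: with inputShape = [1, 2, 12], splitAxis = 2, and split = [4, 4, 4], dimensions 0 and 1 are carried over and dimension 2 is replaced, so the three result types are all tensor<1x2x4xf32>. (With equal split sizes, as in this example, reading `split[dim]` at the split axis yields the same value as the per-output entry.)

```cpp
// Fragment with assumed values: the result types expected for the example above.
mlir::MLIRContext context;
mlir::Builder builder(&context);
llvm::SmallVector<mlir::Type> expectedTypes(
    3, mlir::RankedTensorType::get({1, 2, 4}, builder.getF32Type()));
```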

0 comments on commit 2228f0d
