Various markdown syntax fixes #793

Merged · 3 commits · Dec 20, 2022
11 changes: 8 additions & 3 deletions docs/bytecode.md
@@ -25,7 +25,8 @@ following code comments:
**Types:** See `chlo_encoding::TypeCode` in `ChloBytecode.cpp`
[[link](https://github.com/openxla/stablehlo/search?q=filename%3AChloBytecode+TypeCode)]

-### Not Included:
+### Not Included

The following attributes / types are subclasses of builtin machinery and call
into the bytecode implementations in the Builtin Dialect.

@@ -64,6 +65,7 @@ into the bytecode implementations in the Builtin Dialect.
- `HLO_UInt`

**Special Cases:**

- `StableHLO_ConvolutionAttributes`
+ Despite its name, is not an attribute and is not encoded.
Rather, it is a dag which gets expanded into several attributes
@@ -77,14 +79,16 @@ into the bytecode implementations in the Builtin Dialect.
## Other Notes

### Testing Bytecode with Round Trips

Testing that the round-trip of an MLIR file produces the same results is a good
way to test that the bytecode is implemented properly.

```
-$ stablehlo-opt -emit-bytecode stablehlo/tests/print_stablehlo.mlir | stablehlo-opt
+stablehlo-opt -emit-bytecode stablehlo/tests/print_stablehlo.mlir | stablehlo-opt
```

-### Find out what attributes or types are not encoded:
+### Find out what attributes or types are not encoded

Since attributes and types that don't get encoded are instead stored as strings,
the `strings` command can be used to see what attributes were missed:
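(The exact shell invocation is collapsed in the diff here.) As a sketch of what `strings` does, a minimal Python stand-in could look like the following; the sample bytes and the helper name `extract_strings` are illustrative only:

```python
import string

# Printable ASCII, minus all whitespace except the space character.
_PRINTABLE = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")


def extract_strings(data: bytes, min_len: int = 4) -> list[bytes]:
    """Return maximal runs of printable ASCII bytes of length >= min_len,
    roughly what the Unix `strings` tool prints."""
    runs, current = [], bytearray()
    for b in data:
        if b in _PRINTABLE:
            current.append(b)
            continue
        if len(current) >= min_len:
            runs.append(bytes(current))
        current.clear()
    if len(current) >= min_len:
        runs.append(bytes(current))
    return runs


# An attribute that failed to encode survives as its textual form:
blob = b"\x00\x01MLIR\x02#stablehlo.dot<...>\x00"
assert b"#stablehlo.dot<...>" in extract_strings(blob)
```

Grepping the resulting strings for dialect prefixes (e.g. `stablehlo.` or `chlo.`) points directly at the attributes and types that fell back to textual encoding.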

@@ -130,4 +134,5 @@ instructions is addressed. If so, bytecode for the attr/type should be generated
on next call to `stablehlo-opt -emit-bytecode`. This can be verified using the proper bytecode trace.

### Encoding `enum class` values

Enum class values can be encoded as their underlying numeric types using `varint`. Currently all enums in StableHLO use `uint32_t` as the underlying value.
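As an illustration of packing a `uint32_t` enum value into a variable-length integer, here is a ULEB128 sketch in Python (MLIR's actual bytecode varint layout differs in detail, and the enum constant below is made up):

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a ULEB128 varint:
    7 payload bits per byte, high bit set on all but the last byte."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def decode_varint(data: bytes) -> int:
    """Decode a ULEB128 varint back into an integer."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * shift)
        if not byte & 0x80:
            break
    return result


# A hypothetical enum constant stored as its underlying uint32_t value:
COMPARE_DIRECTION_GE = 3
assert decode_varint(encode_varint(COMPARE_DIRECTION_GE)) == 3
assert encode_varint(300) == b"\xac\x02"  # values >= 128 span multiple bytes
```

Small enum values (the common case) therefore cost a single byte on the wire.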
7 changes: 4 additions & 3 deletions docs/reference.md
@@ -1,6 +1,5 @@
# Interpreter Design


## Data Model

[StableHLO programs](spec.md#programs) are computations over tensors
@@ -17,10 +16,11 @@ discriminated union holding one of `APInt`, `APFloat` or `pair<APFloat,APFloat>`
for storage. The last one is used for storing elements with complex types.

`Tensor` class has the following APIs to interact with its individual elements:
-- `Element Tensor::get(llvm::ArrayRef<int64_t> index)`: To extract an
+
+- `Element Tensor::get(llvm::ArrayRef<int64_t> index)`: To extract an
individual tensor element at multi-dimensional index `index` as `Element`
object.
-- `void Tensor::set(llvm::ArrayRef<int64_t> index, Element element);`:
+- `void Tensor::set(llvm::ArrayRef<int64_t> index, Element element);`:
To update an `Element` object `element` into a tensor at multi-dimensional
index `index`.
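As a rough analogy for that API (the real `Tensor` is a C++ class; this toy Python version just maps a multi-dimensional index onto flat row-major storage):

```python
import math


class Tensor:
    """Toy dense tensor: multi-dimensional get/set over flat row-major
    storage, mirroring the shape of the Tensor API described above."""

    def __init__(self, shape, fill=0.0):
        self.shape = tuple(shape)
        self.data = [fill] * math.prod(self.shape)

    def _flatten(self, index):
        """Map a multi-dimensional index to a flat row-major offset."""
        assert len(index) == len(self.shape)
        flat = 0
        for idx, dim in zip(index, self.shape):
            assert 0 <= idx < dim, f"index {list(index)} out of bounds"
            flat = flat * dim + idx
        return flat

    def get(self, index):
        return self.data[self._flatten(index)]

    def set(self, index, element):
        self.data[self._flatten(index)] = element


t = Tensor([2, 3])
t.set([1, 2], 42.0)
assert t.get([1, 2]) == 42.0
```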

@@ -31,6 +31,7 @@ The entry function to the interpreter is
```C++
SmallVector<Tensor> eval(func::FuncOp func, ArrayRef<Tensor> args);
```

which does the following:

1. Tracks the SSA arguments of `func` and their associated runtime `Tensor`
42 changes: 21 additions & 21 deletions docs/status.md
@@ -16,25 +16,25 @@ The progress of a StableHLO op, as mentioned in the corresponding row, on a
particular aspect, as mentioned in the corresponding column, is tracked using
one of the following tracking labels.

-- Generic labels
-- **yes**: there is a comprehensive implementation.
-- **no**: there is no implementation, but working on that is part of
+- Generic labels
+  - **yes**: there is a comprehensive implementation.
+  - **no**: there is no implementation, but working on that is part of
[the roadmap](https://github.com/openxla/stablehlo#roadmap).
Note that Verifier can never be labeled as "no" because the ODS already
implements some verification.
-- Customized labels for Verifier and Type Inference
-- **yes**: there is an implementation, and it's in sync with
-[StableHLO semantics](https://github.com/openxla/stablehlo/blob/main/docs/spec.md).
-- **yes\***: there is an implementation, and it's in sync with
-[XLA semantics](https://www.tensorflow.org/xla/operation_semantics).
-Since XLA semantics is oftentimes underdocumented, we are using
-[hlo_verifier.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_verifier.cc)
-and [shape_inference.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/shape_inference.cc)
-as the reference.
-- **revisit**: there is an implementation, but it doesn't fall under "yes"
+- Customized labels for Verifier and Type Inference
+  - **yes**: there is an implementation, and it's in sync with
+    [StableHLO semantics](https://github.com/openxla/stablehlo/blob/main/docs/spec.md).
+  - **yes\***: there is an implementation, and it's in sync with
+    [XLA semantics](https://www.tensorflow.org/xla/operation_semantics).
+    Since XLA semantics is oftentimes underdocumented, we are using
+    [hlo_verifier.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_verifier.cc)
+    and [shape_inference.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/shape_inference.cc)
+    as the reference.
+  - **revisit**: there is an implementation, but it doesn't fall under "yes"
or "yes\*" - either because we haven't audited it yet, or because we have
and found issues.
-- **infeasible**: there is no implementation, because it's infeasible.
+  - **infeasible**: there is no implementation, because it's infeasible.
For example, because the result type of an op cannot be inferred from
its operands and attributes.

@@ -54,7 +54,7 @@ one of the following tracking labels.
| batch_norm_inference | yes | revisit | yes | no | no |
| batch_norm_training | yes | revisit | yes | no | no |
| bitcast_convert | yes | yes | infeasible | yes | no |
-| broadcast | no | yes* | yes* | yes | no |
+| broadcast | no | yes\* | yes\* | yes | no |
| broadcast_in_dim | yes | yes | infeasible | yes | no |
| case | yes | revisit | yes | no | no |
| cbrt | yes | revisit | yes | yes | no |
@@ -71,8 +71,8 @@ one of the following tracking labels.
| convolution | revisit | yes | revisit | revisit | no |
| cosine | yes | yes | yes | yes | yes |
| count_leading_zeros | yes | yes | yes | yes | no |
-| create_token | no | yes* | yes* | yes | no |
-| cross-replica-sum | no | revisit | yes* | no | no |
+| create_token | no | yes\* | yes\* | yes | no |
+| cross-replica-sum | no | revisit | yes\* | no | no |
| cstr_reshapable | no | revisit | no | yes | no |
| custom_call | yes | yes | infeasible | yes | no |
| divide | yes | yes | yes | yes | no |
@@ -92,7 +92,7 @@ one of the following tracking labels.
| fft | yes | revisit | yes | yes | no |
| floor | yes | yes | yes | yes | yes |
| gather | yes | yes | yes | no | no |
-| get_dimension_size | no | yes* | yes* | yes | no |
+| get_dimension_size | no | yes\* | yes\* | yes | no |
| get_tuple_element | yes | yes | yes | yes | no |
| if | yes | revisit | yes | no | no |
| imag | yes | yes | yes | yes | no |
@@ -136,7 +136,7 @@ one of the following tracking labels.
| select | yes | yes | yes | yes | no |
| select_and_scatter | yes | revisit | yes | no | no |
| send | yes | revisit | yes | no | no |
-| set_dimension_size | no | yes* | yes* | yes | no |
+| set_dimension_size | no | yes\* | yes\* | yes | no |
| shift_left | yes | revisit | yes | yes | no |
| shift_right_arithmetic | yes | revisit | yes | yes | no |
| shift_right_logical | yes | revisit | yes | yes | no |
@@ -153,7 +153,7 @@ one of the following tracking labels.
| triangular_solve | yes | revisit | yes | no | no |
| tuple | yes | yes | yes | yes | no |
| unary_einsum | no | revisit | no | yes | no |
-| uniform_dequantize | no | yes* | yes* | yes | no |
-| uniform_quantize | no | yes* | infeasible | yes | no |
+| uniform_dequantize | no | yes\* | yes\* | yes | no |
+| uniform_quantize | no | yes\* | infeasible | yes | no |
| while | yes | revisit | yes | revisit | no |
| xor | yes | yes | yes | yes | yes |
8 changes: 5 additions & 3 deletions docs/type_inference.md
@@ -8,7 +8,7 @@ To implement high-quality verifiers and shape functions for StableHLO ops, these

These proposals apply both to revisiting existing implementations and to implementing new ops, until comprehensive coverage is achieved.

-## (P1) Use the StableHLO spec as the source of truth.
+## (P1) Use the StableHLO spec as the source of truth

The [spec](https://github.com/openxla/stablehlo/blob/main/docs/spec.md) is the source of truth for all verifiers and shape functions of the StableHLO ops. The existing verifiers and shape functions of every op need to be revisited to be fully aligned with the specification. Note that the specification document keeps evolving; in cases where the spec for an op is not yet available, the XLA implementation should be used as the source of truth instead, including [xla/service/shape\_inference.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/shape_inference.cc) and [xla/service/hlo\_verifier.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_verifier.cc). The XLA implementation doesn't cover unbounded dynamism, so for unbounded dynamism we'll apply common sense until the dynamism RFC is available.

@@ -22,7 +22,8 @@ Do we need adding tests for the constraints from the ODS? Please see “Establis

## (P3) Maintain verification code in verifiers and shape functions

-Both
+Both:

- **verifiers**: implemented by `Op::verify()`, and
- **shape functions**: implemented by `InferTypeOpInterface` like `Op::inferReturnTypes()` or `Op::inferReturnTypeComponents`
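As a deliberately simplified sketch of that division of labor (Python for illustration only; in StableHLO these are C++ methods on the ops, and `verify_add` / `infer_add_return_type` are invented names):

```python
def verify_add(lhs_shape, rhs_shape):
    """Verifier role (cf. Op::verify()): reject invalid ops."""
    if lhs_shape != rhs_shape:
        raise ValueError(f"operand shapes differ: {lhs_shape} vs {rhs_shape}")


def infer_add_return_type(lhs_shape, rhs_shape):
    """Shape-function role (cf. Op::inferReturnTypes()): compute the
    result type, reusing the verifier's checks so the two stay in sync."""
    verify_add(lhs_shape, rhs_shape)
    return lhs_shape


assert infer_add_return_type((2, 3), (2, 3)) == (2, 3)
```

Keeping the shared constraints in one place is exactly what makes maintaining both the verifier and the shape function tractable.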

@@ -59,8 +60,9 @@ But stay careful about the missing pieces: for example, if the op contains the t
### What to do

When implementing or revisiting the verifier and/or shape function of an op:

1. Put all positive cases and negative cases in [ops\_stablehlo.mlir](https://github.com/openxla/stablehlo/blob/main/stablehlo/tests/ops_stablehlo.mlir).
2. Add a single positive test in [infer\_stablehlo.mlir](https://github.com/openxla/stablehlo/blob/main/stablehlo/tests/infer_stablehlo.mlir) to test the interface.
-3. (Optional) If an op is complicated and could contain a lot of tests, consider adding a separate test file named `verify_<op_name>.mlir` or` verify_<your_topic>.mlir` within the same folder.
+3. (Optional) If an op is complicated and could contain a lot of tests, consider adding a separate test file named `verify_<op_name>.mlir` or `verify_<your_topic>.mlir` within the same folder.

Note: For now, the tests for new **bounded dynamism / sparsity** are also put in [infer\_stablehlo.mlir](https://github.com/openxla/stablehlo/blob/main/stablehlo/tests/infer_stablehlo.mlir).