review iteration: various typo fixes
Browse files Browse the repository at this point in the history
  • Loading branch information
sdasgup3 committed Aug 29, 2023
1 parent ebdc71b commit 39e8231
Showing 1 changed file with 10 additions and 10 deletions.
20 changes: 10 additions & 10 deletions rfcs/20230622-quantized-reduction.md
```diff
@@ -42,7 +42,7 @@ The RFC introduces the following proposal, emerged out of discussion in the
 , along with their tradeoffs.
 
 The proposal allows the reducer block to express the computation in a different
-element type (preferably higher accumulation type) than the one used in reduce
+element type (preferably wider accumulation type) than the one used in reduce
 op's ops arguments and return type. For illustrative purposes, in the following
 example, the operand element type `tensor<!quant.uniform<ui8:f32,
 input_scale:input_zp>>` is different from the element type for
@@ -71,7 +71,7 @@ example, the operand element type `tensor<!quant.uniform<ui8:f32,
 
 ### Semantics
 
-Depending on (1) the input operand type is different from the reduction block
+If (1) the input operand type is different from the reduction block
 argument type or (2) the op result type is different from the reduction block
 return type, there will be implicit type conversion defined by either
 `stablehlo.convert`, `stablehlo.uniform_quantize`, or
@@ -86,17 +86,17 @@ return type, there will be implicit type conversion defined by either
 | (E) `stablehlo.convert` | integer | floating-point |
 | (F) `stablehlo.convert` | floating-point | floating-point |
 | (G) `stablehlo.convert` | integer | integer |
-| (G) `stablehlo.convert` | complex | complex |
+| (H) `stablehlo.convert` | complex | complex |
 
 At this point there is no use for cases other than (A), (F), and (G). My
 proposal here would be to address (A), (F), and (G) only. Note that the (F)
-partially addresses [Decide on mixed
-precision](https://github.com/openxla/stablehlo/issues/369) for reduce op in
-that it allows the the input or init value to differ from the corresponding
-block arguments w.r.t the precision of floating-point types. However, the
-mixed precision implementation in HLO seems more detailed in the sense that
-even allows `inputs` and `init_values` to differ in floating-point
-precision. My proposal would be to treat the above ticket separately.
+partially addresses
+[Decide on mixed precision](https://github.com/openxla/stablehlo/issues/369)
+for reduce op in that it allows the the input or init value to differ from the
+corresponding block arguments w.r.t the precision of floating-point types.
+However, the mixed precision implementation in HLO seems more detailed in the
+sense that even allows `inputs` and `init_values` to differ in floating-point
+precision. My proposal would be to treat the above ticket separately.
 
 ## Appendix
```
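The RFC text in this commit describes a reduce op whose reducer block computes in a wider accumulation type than the operand element type, with implicit `stablehlo.convert` at the block boundaries (case G: integer to integer). A minimal Python sketch of that behavior, purely illustrative and not StableHLO itself — the function name, the uint8 range, and the saturating final conversion are assumptions made for this example:

```python
def reduce_with_wider_accumulation(inputs, init_value, block):
    """Reduce uint8-range values while accumulating in a wider integer type.

    Hypothetical model of the RFC's implicit conversions around reduce:
    operands and init value are widened on entry to the reducer block,
    and the block's result is converted back to the op result type.
    """
    # Implicit widening at the block boundary (the stablehlo.convert role):
    acc = int(init_value)
    for x in inputs:
        # Each operand is widened before the reducer block sees it, so the
        # running sum can exceed the narrow operand range without wrapping.
        acc = block(acc, int(x))
    # Implicit conversion of the block result back to the op result type;
    # saturation to the uint8 range is an assumption for this sketch.
    return max(0, min(255, acc))

total = reduce_with_wider_accumulation([200, 100, 30], 0, lambda a, b: a + b)
```

Here the intermediate sum 330 exceeds the uint8 range, which a uint8-only reducer would have wrapped mid-reduction; accumulating in the wider type and converting once at the end is exactly the freedom the proposal gives the reducer block.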
