From 99f3d582c6b52b0fa82da6787664e4ea67145670 Mon Sep 17 00:00:00 2001
From: Sandeep Dasgupta
Date: Tue, 25 Jul 2023 18:26:21 +0000
Subject: [PATCH] fix a few typos

---
 rfcs/20230622-quantized-reduction.md | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/rfcs/20230622-quantized-reduction.md b/rfcs/20230622-quantized-reduction.md
index 24249697778..1251a59f4d6 100644
--- a/rfcs/20230622-quantized-reduction.md
+++ b/rfcs/20230622-quantized-reduction.md
@@ -220,9 +220,8 @@ To avoid identification of identity functions which could be tricky in general.
 ## Option 2: re-scale input to accumulation type
 
 This option is the simplest from the POV for specification of quantized `reduce`
-op. This is adding `stablehlo.uniform_quantize` and `stablehlo.dequantize` ops
-respectively before and after reduce op which operates on the "accumulator"
-type.
+op. This is adding `stablehlo.uniform_quantize` ops before and after reduce op
+which operates on the "accumulator" type.
 
 ```mlir
 %widen = "stablehlo.uniform_quantize"(%input)
@@ -234,7 +233,7 @@
 } : (tensor<... x !quant.uniform>)
 -> tensor<... x !quant.uniform>
 
-%narrowed = "stablehlo.uniform_dequantize"(%reduce)
+%narrowed = "stablehlo.uniform_quantize"(%reduce)
 : (tensor<... x !quant.uniform>)
 -> tensor<... x !quant.uniform>
 ```
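The corrected prose describes Option 2 as a pair of `stablehlo.uniform_quantize` re-scalings around a `reduce` that runs on the accumulator type. The NumPy sketch below is an illustration only: the scales, zero points, and tensor values are invented, and `uniform_quantize` here is a stand-in for the StableHLO op, but it follows the same widen → reduce → requantize structure.

```python
# Toy sketch (assumptions, not the RFC's lowering) of the
# widen -> reduce -> requantize pattern in the patched text.
import numpy as np

def uniform_quantize(x_q, in_scale, in_zp, out_scale, out_zp, out_dtype):
    """Re-scale a quantized array from one quantized type to another."""
    real = (x_q.astype(np.int64) - in_zp) * in_scale   # value represented by x_q
    q = np.round(real / out_scale) + out_zp            # express it in the new type
    info = np.iinfo(out_dtype)
    return np.clip(q, info.min, info.max).astype(out_dtype)

input_scale, output_scale = 0.02, 0.5   # assumed i8 input / i8 output scales
x_i8 = np.array([[12, -7, 45], [3, 88, -20]], dtype=np.int8)

# %widen: keep the input scale (zero point 0 here) but store in i32 so the
# running sum cannot overflow the narrow type.
widen = uniform_quantize(x_i8, input_scale, 0, input_scale, 0, np.int32)

# %reduce: the add-reduction happens entirely in the accumulator type.
reduce = widen.sum(axis=1)

# %narrowed: a second uniform_quantize (not a dequantize) back to the
# narrow output type.
narrowed = uniform_quantize(reduce, input_scale, 0, output_scale, 0, np.int8)
print(narrowed)
```

The final step re-quantizes to the narrow output type rather than producing floats, which is why the second hunk replaces `stablehlo.uniform_dequantize` with `stablehlo.uniform_quantize`.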