dequantize
- aimet_torch.quantization.affine.dequantize(tensor, scale, offset, block_size=None)
Applies dequantization to the input.
Precisely,
\[out = (input + offset) * scale\]

If block size \(B = \begin{pmatrix} B_0 & B_1 & \cdots & B_{D-1} \end{pmatrix}\) is specified, this equation is further generalized as

\[\begin{aligned}
out_{j_0 \cdots j_{D-1}} & = (input_{j_0 \cdots j_{D-1}} + offset_{i_0 \cdots i_{D-1}}) * scale_{i_0 \cdots i_{D-1}} \\
\text{where} \quad \forall_{0 \leq d < D} \quad i_d & = \left\lfloor \frac{j_d}{B_d} \right\rfloor
\end{aligned}\]

- Parameters:
tensor (Tensor) – Tensor to dequantize
scale (Tensor) – Scale for dequantization
offset (Tensor) – Offset for dequantization
block_size (Tuple[int, ...], optional) – Block size
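
A minimal usage sketch based on the signature and formula above. The tensor values, shapes, and block size are illustrative assumptions, not values taken from the AIMET documentation.

```python
import torch
from aimet_torch.quantization.affine import dequantize

# Per-tensor dequantization: every element shares a single scale and offset.
# Following the formula above: out = (input + offset) * scale
q = torch.tensor([[-12., 0., 7., 120.]])   # assumed integer-valued quantized input
scale = torch.tensor(0.05)
offset = torch.tensor(-8.)
out = dequantize(q, scale, offset)         # e.g. (-12 + (-8)) * 0.05 = -1.0

# Block-wise dequantization: with block_size=(1, 2), each run of 2 elements
# along the last dimension shares one (scale, offset) pair, so for a (1, 4)
# input the scale and offset tensors have shape (1, 2). Element j maps to
# parameter index i = floor(j / block_size) per the generalized equation.
scale_b = torch.tensor([[0.05, 0.10]])
offset_b = torch.tensor([[-8., 4.]])
out_b = dequantize(q, scale_b, offset_b, block_size=(1, 2))
# out_b = [(-12 - 8) * 0.05, (0 - 8) * 0.05, (7 + 4) * 0.10, (120 + 4) * 0.10]
```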