Layer output generation

Context

Layer output generation is an API that captures and saves intermediate layer-outputs of your pre-trained model. The model can be the original (FP32) model or a QuantizationSimModel.

The layer outputs are named according to the exported model (PyTorch or ONNX) by the QuantSim export API QuantizationSimModel.export().

This enables layer output comparison between quantization simulated (QuantSim) models and quantized models on target runtimes like Qualcomm® AI Engine Direct to debug accuracy mismatch issues at the layer level (per operation).

Workflow

The layer output generation framework follows the same workflow for all model frameworks:

  1. Imports

  2. Load a model from AIMET

  3. Obtain inputs

  4. Generate layer outputs

Choose your framework below for code examples.

Step 1: Importing the API

Import the API.
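For the PyTorch variant, the imports might look like the following (a minimal sketch; it assumes the aimet_torch package is installed):

```python
# Imports for the PyTorch variant (requires the aimet_torch package).
import torch

from aimet_torch.layer_output_utils import LayerOutputUtil, NamingScheme
from aimet_torch.quantsim import QuantizationSimModel
```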

Step 2: Loading a model

Export the original or QuantSim model from AIMET.
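A minimal sketch of this step for PyTorch, assuming aimet_torch and torchvision are available (resnet18 and the input shape are illustrative choices, not part of the original text):

```python
import torch
from torchvision.models import resnet18

from aimet_torch.quantsim import QuantizationSimModel

# Original FP32 model; any pre-trained torch.nn.Module works here.
fp32_model = resnet18().eval()
dummy_input = torch.rand(1, 3, 224, 224)

# Quantization-simulated counterpart of the same model.
sim = QuantizationSimModel(fp32_model, dummy_input=dummy_input)

# Compute encodings with a representative forward pass before capturing outputs.
sim.compute_encodings(lambda model, _: model(dummy_input),
                      forward_pass_callback_args=None)
```

Either `fp32_model` or `sim.model` can then be handed to the layer output utility.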

Step 3: Obtaining inputs

Obtain inputs from which to generate intermediate layer outputs.
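Layer outputs are generated one input instance at a time, so a single batch from your dataset is enough. A sketch using a random PyTorch tensor in place of real data (the shapes are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A random tensor stands in for a real calibration/validation dataset.
dataset = TensorDataset(torch.rand(8, 3, 224, 224))
loader = DataLoader(dataset, batch_size=1)

# One input instance to feed to generate_layer_outputs().
input_instance, = next(iter(loader))
print(tuple(input_instance.shape))  # (1, 3, 224, 224)
```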

Step 4: Generating layer outputs

Generate the specified layer outputs.
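Putting the steps together for PyTorch, a hedged sketch might look like this (the model, input, and output directory are illustrative; the default PYTORCH naming scheme is used, so no dummy input or ONNX export args are needed):

```python
import torch
from torchvision.models import resnet18

from aimet_torch.layer_output_utils import LayerOutputUtil

# Illustrative model and input; substitute your FP32 or QuantSim model
# (e.g. sim.model) and a real input instance.
model = resnet18().eval()
input_instance = torch.rand(1, 3, 224, 224)

# Default naming follows the exported PyTorch model (NamingScheme.PYTORCH).
util = LayerOutputUtil(model=model, dir_path="./layer_outputs")

# Saves the input instance and every layer's output under ./layer_outputs.
util.generate_layer_outputs(input_instance)
```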

API

class aimet_torch.layer_output_utils.LayerOutputUtil(model, dir_path, naming_scheme=NamingScheme.PYTORCH, dummy_input=None, onnx_export_args=None)[source]

Implementation to capture and save outputs of intermediate layers of a model (fp32/quantsim).

Constructor for LayerOutputUtil.

Parameters:
  • model (Module) – Model whose layer-outputs are needed.

  • dir_path (str) – Directory wherein layer-outputs will be saved.

  • naming_scheme (NamingScheme) – Naming scheme to be followed to name layer-outputs. Schemes correspond to the exported model format (PyTorch, ONNX, or TorchScript). Refer to the NamingScheme enum definition.

  • dummy_input (Union[Tensor, Tuple, List, None]) – Dummy input to model. Required if naming_scheme is ‘NamingScheme.ONNX’ or ‘NamingScheme.TORCHSCRIPT’.

  • onnx_export_args (Union[OnnxExportApiArgs, Dict, None]) – Should be the same as those passed to the QuantSim export API, so that layer-output names are consistent between the exported ONNX model and the generated layer-outputs. Required if naming_scheme is ‘NamingScheme.ONNX’.

The following API can be used to generate layer outputs:

LayerOutputUtil.generate_layer_outputs(input_instance)[source]

This method captures the output of every layer of a model and saves the single input instance and corresponding layer-outputs to disk.

Parameters:

input_instance (Union[Tensor, List[Tensor], Tuple[Tensor]]) – Single input instance for which we want to obtain layer-outputs.

Returns:

None

Naming Scheme Enum

class aimet_torch.layer_output_utils.NamingScheme(value)[source]

Enumeration of layer-output naming schemes.

ONNX = 2

Names outputs according to the exported ONNX model. Layer output names are generally numeric.

PYTORCH = 1

Names outputs according to the exported PyTorch model. Layer names are used.

TORCHSCRIPT = 3

Names outputs according to the exported TorchScript model. Layer output names are generally numeric.
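To match names against an exported ONNX model, a sketch of the ONNX naming scheme might look like this (the OnnxExportApiArgs import path and opset value are assumptions; keep onnx_export_args identical to what you pass to QuantizationSimModel.export()):

```python
import torch
from torchvision.models import resnet18

from aimet_torch.layer_output_utils import LayerOutputUtil, NamingScheme
from aimet_torch.onnx_utils import OnnxExportApiArgs  # import path assumed

model = resnet18().eval()
dummy_input = torch.rand(1, 3, 224, 224)

# dummy_input and onnx_export_args are required for NamingScheme.ONNX;
# the export args must mirror those passed to QuantizationSimModel.export().
util = LayerOutputUtil(
    model=model,
    dir_path="./layer_outputs_onnx",
    naming_scheme=NamingScheme.ONNX,
    dummy_input=dummy_input,
    onnx_export_args=OnnxExportApiArgs(opset_version=11),
)
```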

class aimet_onnx.layer_output_utils.LayerOutputUtil(model, dir_path, device=0)[source]

Implementation to capture and save outputs of intermediate layers of a model (fp32/quantsim).

Constructor. Initializes the utility classes that capture and save layer-outputs.

Parameters:
  • model (ModelProto) – ONNX model.

  • dir_path (str) – Directory wherein layer-outputs will be saved.

  • device (int) – CUDA device-id to be used.

The following API can be used to generate layer outputs:

LayerOutputUtil.generate_layer_outputs(input_instance)[source]

This method captures the output of every layer of a model and saves the input instance and corresponding layer-outputs to disk.

Parameters:

input_instance (Union[ndarray, List[ndarray], Tuple[ndarray]]) – Single input instance for which we want to obtain layer-outputs.

Returns:

None
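For the ONNX variant, usage might look like the following sketch (the model path, input shape, and device id are illustrative; aimet_onnx and onnx must be installed):

```python
import numpy as np
import onnx

from aimet_onnx.layer_output_utils import LayerOutputUtil

# Load the exported (or quantsim-exported) ONNX model.
model = onnx.load("model.onnx")

# device=0 selects the first CUDA device, per the constructor above.
util = LayerOutputUtil(model=model, dir_path="./layer_outputs", device=0)

# Inputs are numpy arrays for the ONNX variant.
input_instance = np.random.rand(1, 3, 224, 224).astype(np.float32)
util.generate_layer_outputs(input_instance)
```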