aimet_torch.layer_output_utils

class aimet_torch.layer_output_utils.LayerOutputUtil(model, dir_path, naming_scheme=NamingScheme.PYTORCH, dummy_input=None, onnx_export_args=None)[source]

Captures and saves the outputs of a model's intermediate layers (FP32 or QuantSim model).

Constructor for LayerOutputUtil.

Parameters:
  • model (Module) – Model whose layer-outputs are needed.

  • dir_path (str) – Directory wherein layer-outputs will be saved.

  • naming_scheme (NamingScheme) – Naming scheme used to name the layer outputs. A scheme exists for each export format (PyTorch, ONNX, or TorchScript); refer to the NamingScheme enum definition.

  • dummy_input (Union[Tensor, Tuple, List, None]) – Dummy input to model. Required if naming_scheme is ‘NamingScheme.ONNX’ or ‘NamingScheme.TORCHSCRIPT’.

  • onnx_export_args (Union[OnnxExportApiArgs, Dict, None]) – Should be the same as the arguments passed to the QuantSim export API, so that layer-output names are consistent with those in the exported ONNX model. Required if naming_scheme is ‘NamingScheme.ONNX’.
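Internally, capturing intermediate layer outputs amounts to registering forward hooks on each leaf module of the model. The sketch below shows that mechanism in plain PyTorch; the model and helper function are illustrative only and are not part of the AIMET API:

```python
import torch

# Illustrative model; not part of the AIMET API.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

def capture_layer_outputs(model, input_instance):
    """Return a dict mapping leaf-module names to their forward outputs."""
    outputs, handles = {}, []
    for name, module in model.named_modules():
        if name and not list(module.children()):  # leaf modules only
            handles.append(module.register_forward_hook(
                lambda mod, inp, out, name=name: outputs.__setitem__(name, out.detach())))
    with torch.no_grad():
        model(input_instance)
    for handle in handles:
        handle.remove()
    return outputs

model = TinyModel().eval()
captured = capture_layer_outputs(model, torch.randn(1, 4))
print(sorted(captured))  # ['fc1', 'fc2', 'relu']
```

The hooks are removed after the forward pass so the model is left unmodified.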

The following API can be used to generate layer outputs:

LayerOutputUtil.generate_layer_outputs(input_instance)[source]

This method captures the output of every layer of the model and saves the given input instance, along with the corresponding layer outputs, to disk.

Parameters:

input_instance (Union[Tensor, List[Tensor], Tuple[Tensor]]) – Single input instance for which we want to obtain layer-outputs.

Returns:

None
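The behavior of generate_layer_outputs can be sketched in plain PyTorch as: run one input through the model, capture every leaf module's output via forward hooks, and write the input plus each layer output to the target directory. The file names below are assumptions for illustration; the actual names depend on the chosen NamingScheme:

```python
import pathlib
import tempfile

import torch

# Illustrative stand-in for generate_layer_outputs(); not the AIMET implementation.
def save_layer_outputs(model, input_instance, dir_path):
    out_dir = pathlib.Path(dir_path)
    out_dir.mkdir(parents=True, exist_ok=True)
    outputs, handles = {}, []
    for name, module in model.named_modules():
        if name and not list(module.children()):  # leaf modules only
            handles.append(module.register_forward_hook(
                lambda mod, inp, out, name=name: outputs.__setitem__(name, out.detach())))
    with torch.no_grad():
        model(input_instance)
    for handle in handles:
        handle.remove()
    # Save the single input instance and one file per layer output.
    torch.save(input_instance, out_dir / "input.pt")
    for name, out in outputs.items():
        torch.save(out, out_dir / f"{name}.pt")
    return sorted(p.name for p in out_dir.iterdir())

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU()).eval()
saved = save_layer_outputs(model, torch.randn(1, 4), tempfile.mkdtemp())
print(saved)  # ['0.pt', '1.pt', 'input.pt']
```

To obtain layer outputs for many inputs, the method would be called once per input instance, typically writing each instance to its own subdirectory of dir_path.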

Naming Scheme Enum

class aimet_torch.layer_output_utils.NamingScheme(value)[source]

Enumeration of layer-output naming schemes.

ONNX = 2

Names outputs according to the exported ONNX model. Layer output names are generally numeric.

PYTORCH = 1

Names outputs according to the exported PyTorch model. Layer names are used.

TORCHSCRIPT = 3

Names outputs according to the exported TorchScript model. Layer output names are generally numeric.
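To illustrate what the PYTORCH scheme's names look like, the snippet below lists the leaf-module names of a hypothetical model via named_modules(); these human-readable names contrast with the generally numeric names of the ONNX and TORCHSCRIPT schemes:

```python
import torch

# Hypothetical model used only to show what PyTorch layer names look like.
class SmallNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.act = torch.nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

# Leaf-module names, in registration order; the root module has an empty name.
names = [n for n, m in SmallNet().named_modules() if n and not list(m.children())]
print(names)  # ['conv', 'act']
```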