aimet_torch.quant_analyzer

Top level APIs

class aimet_common.utils.CallbackFunc(func, func_callback_args=None)[source]

Class encapsulating a callback function and its arguments

Parameters:
  • func (Callable) – Callable function

  • func_callback_args – Arguments passed to the callable function
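For example, user-defined calibration and evaluation functions can be wrapped as follows. This is a minimal sketch: calibrate, evaluate, calib_loader, and eval_loader are illustrative placeholders for your own routines and data loaders, and it assumes the consumer invokes the wrapped callable as func(model, args).

    import torch
    from aimet_common.utils import CallbackFunc

    def calibrate(model, args):
        # Hypothetical calibration routine: forward passes on ~1000 representative samples.
        data_loader, num_batches = args
        model.eval()
        with torch.no_grad():
            for batch_index, (images, _) in enumerate(data_loader):
                if batch_index >= num_batches:
                    break
                model(images)

    def evaluate(model, args):
        # Hypothetical evaluation routine returning a scalar score (top-1 accuracy).
        data_loader = args
        correct, total = 0, 0
        model.eval()
        with torch.no_grad():
            for images, labels in data_loader:
                predictions = model(images).argmax(dim=1)
                correct += (predictions == labels).sum().item()
                total += labels.numel()
        return correct / total

    forward_pass_callback = CallbackFunc(calibrate, func_callback_args=(calib_loader, 32))
    eval_callback = CallbackFunc(evaluate, func_callback_args=eval_loader)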

class aimet_torch.quant_analyzer.QuantAnalyzer(model, dummy_input, forward_pass_callback, eval_callback, modules_to_ignore=None)[source]

The QuantAnalyzer tool provides:

  1. model sensitivity to weight and activation quantization

  2. per layer sensitivity analysis

  3. per layer encoding (min - max range)

  4. per layer statistics histogram (PDF) analysis, and

  5. per layer MSE analysis

Parameters:
  • model (Module) – FP32 model to analyze for quantization.

  • dummy_input (Union[Tensor, Tuple]) – Dummy input to model.

  • forward_pass_callback (CallbackFunc) – A callback function for model calibration that simply runs forward passes on the model to compute encodings (delta/offset). This callback function should use representative data, ideally a subset (~1000 images/samples) of the entire train/validation dataset.

  • eval_callback (CallbackFunc) – A callback function for model evaluation that determines model performance. This callback function is expected to return a scalar value representing the model performance evaluated against the entire test/evaluation dataset.

  • modules_to_ignore (Optional[List[Module]]) – Excludes certain modules from being analyzed.
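A minimal construction sketch, assuming the callbacks defined in the CallbackFunc example above; the torchvision model and input shape are illustrative choices, not requirements.

    import torch
    from torchvision.models import resnet18
    from aimet_torch.quant_analyzer import QuantAnalyzer

    model = resnet18(pretrained=True).eval()
    dummy_input = torch.randn(1, 3, 224, 224)

    quant_analyzer = QuantAnalyzer(model=model,
                                   dummy_input=dummy_input,
                                   forward_pass_callback=forward_pass_callback,
                                   eval_callback=eval_callback)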

QuantAnalyzer.analyze(quant_scheme=QuantScheme.post_training_tf_enhanced, default_param_bw=8, default_output_bw=8, config_file=None, results_dir='./tmp/')
Analyze the model for quantization and point out sensitive parts/hotspots of the model by:
  1. checking model sensitivity to quantization,

  2. performing per layer sensitivity analysis by enabling and disabling quant wrappers,

  3. exporting per layer encoding (min-max) ranges,

  4. exporting per layer statistics histograms (PDF) when the quant scheme is TF-Enhanced, and

  5. performing per layer MSE analysis.

Parameters:
  • quant_scheme (QuantScheme) – Quantization scheme. Supported values are QuantScheme.post_training_tf or QuantScheme.post_training_tf_enhanced.

  • default_param_bw (int) – Default bitwidth (4-31) to use for quantizing layer parameters.

  • default_output_bw (int) – Default bitwidth (4-31) to use for quantizing layer inputs and outputs.

  • config_file (Optional[str]) – Path to configuration file for model quantizers.

  • results_dir (str) – Directory to save the results.
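A sketch of a full analysis run; the quant scheme, bit-widths, and results directory shown are just example choices.

    from aimet_common.defs import QuantScheme

    quant_analyzer.analyze(quant_scheme=QuantScheme.post_training_tf_enhanced,
                           default_param_bw=8,
                           default_output_bw=8,
                           config_file=None,
                           results_dir='./quant_analyzer_results/')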

Alternatively, you can run specific utilities

You can avoid running all the utilities that QuantAnalyzer offers and run only those of interest. To do so, you need a QuantizationSimModel object; you then call the desired QuantAnalyzer utility and pass that object to it, as sketched below.
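A sketch of preparing such a QuantizationSimModel, reusing the model, dummy input, and calibration callback from the examples above; the constructor arguments are illustrative, and the sketch assumes CallbackFunc exposes the wrapped function and its arguments as .func and .args.

    from aimet_common.defs import QuantScheme
    from aimet_torch.quantsim import QuantizationSimModel

    sim = QuantizationSimModel(model=model,
                               dummy_input=dummy_input,
                               quant_scheme=QuantScheme.post_training_tf_enhanced,
                               default_param_bw=8,
                               default_output_bw=8)

    # Compute encodings with the calibration callback before running any utility below.
    sim.compute_encodings(forward_pass_callback.func, forward_pass_callback.args)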

QuantAnalyzer.check_model_sensitivity_to_quantization(sim)

Perform the sensitivity analysis to weight and activation quantization individually.

Parameters:

sim (_QuantizationSimModelInterface) – Quantsim model.

Return type:

Tuple[float, float, float]

Returns:

FP32 eval score, weight-quantized eval score, act-quantized eval score.
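A usage sketch unpacking the returned scores; the print formatting is illustrative.

    fp32_score, weight_quantized_score, act_quantized_score = \
        quant_analyzer.check_model_sensitivity_to_quantization(sim)
    print(f"FP32: {fp32_score:.4f}  "
          f"weight-quantized: {weight_quantized_score:.4f}  "
          f"act-quantized: {act_quantized_score:.4f}")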

QuantAnalyzer.perform_per_layer_analysis_by_enabling_quant_wrappers(sim, results_dir)

NOTE: Option 1

  1. All quant wrappers’ parameters and activations quantizers are disabled.

  2. For every quant wrapper, in order of occurrence:
    • The quant wrapper’s parameters and activations quantizers are enabled as per the JSON config file and set to the specified bit-width.

    • The eval score is measured and recorded on a subset of the dataset.

    • The enabled quantizers are disabled again, reverting to the state of step 1.

  3. Returns a dictionary mapping each quant wrapper name to its eval score.

Parameters:
  • sim (_QuantizationSimModelInterface) – Quantsim model.

  • results_dir (str) – Directory to save the results.

Return type:

Dict

Returns:

Layer-wise eval score dictionary: dict[layer_name] = eval_score
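A usage sketch; because only one quant wrapper is enabled at a time, the layers with the lowest eval scores are the most sensitive to quantization.

    layer_wise_eval_score_dict = quant_analyzer.perform_per_layer_analysis_by_enabling_quant_wrappers(
        sim, results_dir='./quant_analyzer_results/')

    # List layers from most to least sensitive (lowest eval score first).
    for layer_name, eval_score in sorted(layer_wise_eval_score_dict.items(), key=lambda kv: kv[1]):
        print(layer_name, eval_score)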

QuantAnalyzer.perform_per_layer_analysis_by_disabling_quant_wrappers(sim, results_dir)

NOTE: Option 2

  1. All quant wrappers’ parameters and activations quantizers are enabled as per the JSON config file and set to the specified bit-width.

  2. For every quant wrapper, in order of occurrence:
    • The quant wrapper’s parameters and activations quantizers are disabled.

    • The eval score is measured and recorded on a subset of the dataset.

    • The disabled quantizers are enabled again, reverting to the state of step 1.

  3. Returns a dictionary mapping each quant wrapper name to its eval score.

Parameters:
  • sim (_QuantizationSimModelInterface) – Quantsim model.

  • results_dir (str) – Directory to save the results.

Return type:

Dict

Returns:

Layer-wise eval score dictionary: dict[layer_name] = eval_score
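A usage sketch; here every quant wrapper stays enabled except one at a time, so a large eval-score recovery when a particular layer is disabled points to that layer being sensitive.

    layer_wise_eval_score_dict = quant_analyzer.perform_per_layer_analysis_by_disabling_quant_wrappers(
        sim, results_dir='./quant_analyzer_results/')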

QuantAnalyzer.export_per_layer_encoding_min_max_range(sim, results_dir)

Export the encoding min and max ranges for all weights and activations. After this call, results_dir should contain HTML files in the following format:

results_dir/
    activations.html
    weights.html

If per-channel quantization (PCQ) is enabled, then:

results_dir/
    activations.html
    {wrapped_module_name}_{param_name}.html

Parameters:
  • sim (_QuantizationSimModelInterface) – Quantsim model.

  • results_dir (str) – Directory to save the results.

Return type:

Tuple[Dict, Dict]

Returns:

layer wise min-max range for weights and activations.
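A usage sketch; the tuple ordering (weights first, then activations) is assumed from the Returns description above.

    weights_min_max_dict, activations_min_max_dict = \
        quant_analyzer.export_per_layer_encoding_min_max_range(
            sim, results_dir='./quant_analyzer_results/')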

QuantAnalyzer.export_per_layer_stats_histogram(sim, results_dir)

NOTE: Invoke this API only when the quantization scheme is TF-Enhanced.

Export histograms that represent the PDF of the statistics collected by each quantizer of every quant wrapper. After invoking this API, results_dir should contain HTML files in the following format for every quantizer of every quant wrapper:

results_dir/
    activations_pdf/
        name_{input/output}_{index}.html
    weights_pdf/
        name/
            param_name_{channel_index}.html

Parameters:
  • sim (_QuantizationSimModelInterface) – Quantsim model.

  • results_dir (str) – Directory to save the results.
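A usage sketch; per the NOTE above, this is only meaningful when the quantization scheme is TF-Enhanced.

    quant_analyzer.export_per_layer_stats_histogram(
        sim, results_dir='./quant_analyzer_results/')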

QuantAnalyzer.export_per_layer_mse_loss(sim, results_dir)

NOTE: The same model input data must be passed through both the FP32 model and the quantsim model in order to tap the output activations of each layer.

Export MSE loss between FP32 and quantized output activations for each layer.

Parameters:
  • sim (_QuantizationSimModelInterface) – Quantsim model.

  • results_dir (str) – Directory to save the results.

Return type:

Dict

Returns:

Layer-wise MSE loss: dict[layer_name] = MSE loss
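A usage sketch; as noted above, the analyzer needs the same representative input data to drive both the FP32 model and the quantsim model, and the mechanism for supplying that data to the analyzer may differ across AIMET versions.

    mse_loss_dict = quant_analyzer.export_per_layer_mse_loss(
        sim, results_dir='./quant_analyzer_results/')
    for layer_name, mse_loss in mse_loss_dict.items():
        print(layer_name, mse_loss)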