AIMET
Version: 2.0.0

aimet_onnx API

The aimet_onnx package provides the following quantization functionality for ONNX models:

  • aimet_onnx.quantsim
  • aimet_onnx.adaround
  • aimet_onnx.seq_mse
  • aimet_onnx.quantsim.set_grouped_blockwise_quantization_for_weights
  • aimet_onnx.batch_norm_fold
  • aimet_onnx.cross_layer_equalization
  • aimet_onnx.mixed_precision
  • aimet_onnx.quant_analyzer
  • aimet_onnx.autoquant
  • aimet_onnx.layer_output_utils

Copyright © 2020, Qualcomm Innovation Center, Inc.