AIMET
Version: 2.5.0
Other versions
  • Quick Start
  • Installation
  • User Guide
    • AIMET features
    • Quantization workflow
    • Debugging guidelines
    • On-target inference
  • Quantization Simulation Guide
    • Calibration
    • QAT
    • Blockwise quantization
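The Quantization Simulation Guide entries above (calibration, QAT, blockwise quantization) all build on the same fake-quantization primitive: project a float value onto an integer grid, clamp it to the representable range, and map it back to float. A minimal plain-Python sketch of that operation, independent of the AIMET API (the function name and parameters here are illustrative, not AIMET's):

```python
def quantize_dequantize(x, scale, zero_point=0, bitwidth=8, symmetric=True):
    """Simulate affine quantization: round to the integer grid, clamp,
    then dequantize back to float. The returned value carries the
    rounding/clamping error a real fixed-point runtime would introduce."""
    if symmetric:
        qmin, qmax = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bitwidth - 1
    q = round(x / scale) + zero_point        # project onto the integer grid
    q = max(qmin, min(qmax, q))              # clamp to the representable range
    return (q - zero_point) * scale          # map back to float
```

Calibration chooses `scale` (and `zero_point`) from observed activation statistics; QAT then fine-tunes the model with this operation in the forward pass.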
  • Feature Guide
    • Adaptive rounding
    • Sequential MSE
    • Batch norm folding
    • Cross-layer equalization
    • AdaScale
    • Mixed precision
      • Manual mixed precision
      • Automatic mixed precision
    • Automatic quantization
    • Batch norm re-estimation
    • Analysis tools
      • Interactive visualization
      • Quantization analyzer
      • Layer output generation
    • Compression
      • Compression guidebook
      • Greedy compression ratio selection
      • Visualization
      • Weight SVD
      • Spatial SVD
      • Channel pruning
        • Winnowing
    • Quantized LoRA
      • QW-LoRA
      • QWA-LoRA
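Among the Feature Guide entries above, batch norm folding has a closed form worth seeing: the BatchNorm's per-channel scale and shift are absorbed into the preceding linear/conv layer's weight and bias, so the fused layer computes the same function with one fewer op. A minimal NumPy sketch under that standard formulation (the function name is illustrative, not AIMET's API):

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding linear layer.

    W: (out, in) weight, b: (out,) bias;
    gamma/beta/mean/var: (out,) BatchNorm parameters.
    Returns (W_f, b_f) such that BN(W @ x + b) == W_f @ x + b_f.
    """
    scale = gamma / np.sqrt(var + eps)   # per-output-channel BN scale
    W_f = W * scale[:, None]             # scale each output row of W
    b_f = (b - mean) * scale + beta      # push bias through the BN affine
    return W_f, b_f
```

Folding matters for quantization because it removes the BN's separate scaling, which would otherwise widen the weight range seen by the quantizer.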
  • Example Notebooks
  • API Reference
    • aimet_torch
      • Migrate to aimet_torch 2
      • aimet_torch.quantsim
      • aimet_torch.adaround
      • aimet_torch.nn
        • QuantizationMixin
        • QuantizedAdaptiveAvgPool1d
        • QuantizedAdaptiveAvgPool2d
        • QuantizedAdaptiveAvgPool3d
        • QuantizedAdaptiveMaxPool1d
        • QuantizedAdaptiveMaxPool2d
        • QuantizedAdaptiveMaxPool3d
        • QuantizedAlphaDropout
        • QuantizedAvgPool1d
        • QuantizedAvgPool2d
        • QuantizedAvgPool3d
        • QuantizedBCELoss
        • QuantizedBCEWithLogitsLoss
        • QuantizedBatchNorm1d
        • QuantizedBatchNorm2d
        • QuantizedBatchNorm3d
        • QuantizedBilinear
        • QuantizedCELU
        • QuantizedCTCLoss
        • QuantizedChannelShuffle
        • QuantizedCircularPad1d
        • QuantizedCircularPad2d
        • QuantizedCircularPad3d
        • QuantizedConstantPad1d
        • QuantizedConstantPad2d
        • QuantizedConstantPad3d
        • QuantizedConv1d
        • QuantizedConv2d
        • QuantizedConv3d
        • QuantizedConvTranspose1d
        • QuantizedConvTranspose2d
        • QuantizedConvTranspose3d
        • QuantizedCosineEmbeddingLoss
        • QuantizedCosineSimilarity
        • QuantizedCrossEntropyLoss
        • QuantizedDropout
        • QuantizedDropout1d
        • QuantizedDropout2d
        • QuantizedDropout3d
        • QuantizedELU
        • QuantizedEmbedding
        • QuantizedEmbeddingBag
        • QuantizedFeatureAlphaDropout
        • QuantizedFlatten
        • QuantizedFold
        • QuantizedFractionalMaxPool2d
        • QuantizedFractionalMaxPool3d
        • QuantizedGELU
        • QuantizedGLU
        • QuantizedGRU
        • QuantizedGRUCell
        • QuantizedGaussianNLLLoss
        • QuantizedGroupNorm
        • QuantizedHardshrink
        • QuantizedHardsigmoid
        • QuantizedHardswish
        • QuantizedHardtanh
        • QuantizedHingeEmbeddingLoss
        • QuantizedHuberLoss
        • QuantizedInstanceNorm1d
        • QuantizedInstanceNorm2d
        • QuantizedInstanceNorm3d
        • QuantizedKLDivLoss
        • QuantizedL1Loss
        • QuantizedLPPool1d
        • QuantizedLPPool2d
        • QuantizedLSTM
        • QuantizedLSTMCell
        • QuantizedLayerNorm
        • QuantizedLeakyReLU
        • QuantizedLinear
        • QuantizedLocalResponseNorm
        • QuantizedLogSigmoid
        • QuantizedLogSoftmax
        • QuantizedMSELoss
        • QuantizedMarginRankingLoss
        • QuantizedMaxPool1d
        • QuantizedMaxPool2d
        • QuantizedMaxPool3d
        • QuantizedMaxUnpool1d
        • QuantizedMaxUnpool2d
        • QuantizedMaxUnpool3d
        • QuantizedMish
        • QuantizedMultiLabelMarginLoss
        • QuantizedMultiLabelSoftMarginLoss
        • QuantizedMultiMarginLoss
        • QuantizedNLLLoss
        • QuantizedNLLLoss2d
        • QuantizedPReLU
        • QuantizedPairwiseDistance
        • QuantizedPixelShuffle
        • QuantizedPixelUnshuffle
        • QuantizedPoissonNLLLoss
        • QuantizedRNN
        • QuantizedRNNCell
        • QuantizedRReLU
        • QuantizedReLU
        • QuantizedReLU6
        • QuantizedReflectionPad1d
        • QuantizedReflectionPad2d
        • QuantizedReflectionPad3d
        • QuantizedReplicationPad1d
        • QuantizedReplicationPad2d
        • QuantizedReplicationPad3d
        • QuantizedSELU
        • QuantizedSiLU
        • QuantizedSigmoid
        • QuantizedSmoothL1Loss
        • QuantizedSoftMarginLoss
        • QuantizedSoftmax
        • QuantizedSoftmax2d
        • QuantizedSoftmin
        • QuantizedSoftplus
        • QuantizedSoftshrink
        • QuantizedSoftsign
        • QuantizedTanh
        • QuantizedTanhshrink
        • QuantizedThreshold
        • QuantizedTripletMarginLoss
        • QuantizedTripletMarginWithDistanceLoss
        • QuantizedUnflatten
        • QuantizedUnfold
        • QuantizedUpsample
        • QuantizedUpsamplingBilinear2d
        • QuantizedUpsamplingNearest2d
        • QuantizedZeroPad1d
        • QuantizedZeroPad2d
        • QuantizedZeroPad3d
      • aimet_torch.quantization
        • QuantizedTensorBase
        • QuantizedTensor
        • DequantizedTensor
        • Quantize
        • QuantizeDequantize
        • FloatQuantizeDequantize
        • quantize
        • quantize_dequantize
        • dequantize
      • aimet_torch.seq_mse
      • aimet_torch.adascale
      • aimet_torch.quantsim.config_utils
      • aimet_torch.batch_norm_fold
      • aimet_torch.cross_layer_equalization
      • aimet_torch.model_preparer
      • aimet_torch.model_validator
      • aimet_torch.mixed_precision
      • aimet_torch.quant_analyzer
      • aimet_torch.autoquant
      • aimet_torch.bn_reestimation
      • aimet_torch.visualization_tools
      • aimet_torch.layer_output_utils
      • aimet_torch.peft
      • aimet_torch.compress
      • aimet_torch.v1.quantsim
      • aimet_torch.v1.adaround
      • aimet_torch.v1.seq_mse
      • aimet_torch.v1.quant_analyzer
      • aimet_torch.v1.autoquant
      • aimet_torch.v1.amp
    • aimet_tensorflow
      • aimet_tensorflow.quantsim
      • aimet_tensorflow.adaround
      • aimet_tensorflow.batch_norm_fold
      • aimet_tensorflow.cross_layer_equalization
      • aimet_tensorflow.mixed_precision
      • aimet_tensorflow.quant_analyzer
      • aimet_tensorflow.auto_quant_v2
      • aimet_tensorflow.layer_output_utils
      • aimet_tensorflow.model_preparer
      • aimet_tensorflow.compress
    • aimet_onnx
      • aimet_onnx.quantsim
      • aimet_onnx.adaround
      • aimet_onnx.seq_mse
      • aimet_onnx.quantsim.set_grouped_blockwise_quantization_for_weights
      • aimet_onnx.batch_norm_fold
      • aimet_onnx.cross_layer_equalization
      • aimet_onnx.mixed_precision
      • aimet_onnx.quant_analyzer
      • aimet_onnx.layer_output_utils
  • Release Notes
Copyright © 2020, Qualcomm Innovation Center, Inc.