AIMET
Version: 2.9.0

aimet_tensorflow API

AIMET quantization for TensorFlow models is organized into the following modules:

  • aimet_tensorflow.quantsim

  • aimet_tensorflow.adaround

  • aimet_tensorflow.batch_norm_fold

  • aimet_tensorflow.cross_layer_equalization

  • aimet_tensorflow.mixed_precision

  • aimet_tensorflow.quant_analyzer

  • aimet_tensorflow.auto_quant_v2

  • aimet_tensorflow.layer_output_utils

  • aimet_tensorflow.model_preparer

  • aimet_tensorflow.compress
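The central operation that aimet_tensorflow.quantsim models is uniform quantize-dequantize ("fake quantization"): tensors are rounded onto an integer grid and mapped back to floating point, so the model experiences quantization noise while remaining trainable and evaluable in float. The sketch below is illustrative only, assuming a basic asymmetric 8-bit affine scheme; the function name `fake_quantize` and its parameters are hypothetical and do not reflect AIMET's actual API (see the API Reference for the real quantsim interface).

```python
import numpy as np

def fake_quantize(x: np.ndarray, bitwidth: int = 8) -> np.ndarray:
    """Illustrative asymmetric quantize-dequantize, not AIMET's implementation."""
    qmin, qmax = 0, 2 ** bitwidth - 1
    # Extend the observed range to include zero so that 0.0 is exactly
    # representable on the integer grid (a common requirement for padding/ReLU).
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    offset = round(-x_min / scale)
    # Round onto the integer grid, clamp, then map back to float:
    # the difference between x and the result is the simulated quantization noise.
    q = np.clip(np.round(x / scale) + offset, qmin, qmax)
    return ((q - offset) * scale).astype(x.dtype)

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
xq = fake_quantize(x)  # close to x, but snapped to the 8-bit grid
```

In AIMET, the analogous quantize-dequantize nodes are inserted automatically by the quantsim module, and their encodings (scale/offset) are computed from calibration data rather than per-call tensor statistics as in this sketch.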

Copyright © 2020, Qualcomm Innovation Center, Inc.