Building from source

This page describes how to install AIMET from source, either in a conda environment or within a Docker container.

You can also use a Python virtual environment (venv) instead of conda, provided your system already has the required Python version and the dependencies that aren't available via pip, such as CUDA and cuDNN.
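For example, a minimal venv setup might look like the following sketch (the path and environment name are arbitrary examples, not part of AIMET):

```shell
# Sketch only: create and activate a venv at an example path
python3 -m venv /tmp/aimet-venv
. /tmp/aimet-venv/bin/activate

# The interpreter should now resolve inside the venv
python3 -c 'import sys; print(sys.prefix)'
```

From here, the pip-based steps below (compiling and installing requirements, building the wheel) work the same as in a conda environment.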

Conda environment

Create a new conda environment with Python 3.10

An example of conda environment setup is shown below:

# Setup conda environment using Miniconda/Miniforge
source <CONDA_INSTALL_DIR>/bin/activate
conda create --name <CONDA_ENV_NAME> python=3.10 -y
conda activate <CONDA_ENV_NAME>

# Install general dependencies from conda-forge
conda install -c conda-forge pip-tools eigen pandoc

NVIDIA CUDA support

Skip the following step if you don't want to compile with CUDA support.

# Set desired CUDA version
VER_CUDA=12.1.0

# Install CUDA Toolkit and cuDNN from NVIDIA's CUDA channel
conda install -c "nvidia/label/cuda-${VER_CUDA}" cuda-toolkit cudnn
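After the install, you can optionally confirm that the CUDA compiler is visible in the environment (a sketch; the conda package typically places nvcc under $CONDA_PREFIX/bin):

```shell
# Optional sanity check: is the CUDA compiler on PATH?
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
else
    echo "nvcc not found on PATH -- CUDA toolkit not active"
fi
```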

Set environment variables to build the desired AIMET wheel

General Toggles

  • GPU build: -DENABLE_CUDA=ON

  • CPU-only build: -DENABLE_CUDA=OFF

  • Build C++ tests: -DENABLE_TESTS=ON

  • Skip building C++ tests: -DENABLE_TESTS=OFF

Variant-specific Toggles

Variant       CMake flags
aimet-onnx    -DENABLE_ONNX=ON -DENABLE_TORCH=OFF -DENABLE_TENSORFLOW=OFF
aimet-torch   -DENABLE_TORCH=ON -DENABLE_ONNX=OFF -DENABLE_TENSORFLOW=OFF
aimet-tf      -DENABLE_TENSORFLOW=ON -DENABLE_ONNX=OFF -DENABLE_TORCH=OFF
Docs          -DENABLE_TENSORFLOW=ON -DENABLE_ONNX=ON -DENABLE_TORCH=ON -DENABLE_CUDA=OFF

# Example: Build for aimet-onnx with GPU
export 'CMAKE_ARGS=-DENABLE_CUDA=ON -DENABLE_ONNX=ON -DENABLE_TORCH=OFF -DENABLE_TENSORFLOW=OFF -DENABLE_TESTS=OFF'
export 'SKBUILD_BUILD_TARGETS=all'
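Before compiling, it can help to double-check the exported flags. One way is to word-split CMAKE_ARGS so each flag prints on its own line (a sketch, using the same values as above):

```shell
# Re-export and review the build flags, one per line
export CMAKE_ARGS='-DENABLE_CUDA=ON -DENABLE_ONNX=ON -DENABLE_TORCH=OFF -DENABLE_TENSORFLOW=OFF -DENABLE_TESTS=OFF'

# Unquoted on purpose: the shell word-splits the value into one flag per line
printf '%s\n' $CMAKE_ARGS
```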

Compile and install pip package dependencies

# cd to AIMET root directory
cd aimet/

# Compile requirements from pyproject.toml with constraints
python3 -m piptools compile pyproject.toml -v --extra=dev,test --output-file=/tmp/requirements.txt

# Install the compiled dependencies
python3 -m pip install -r /tmp/requirements.txt

Build AIMET wheel and run unit tests

# Build AIMET wheel
python3 -m build --wheel --no-isolation .

# Install the built wheel
pip install dist/aimet*.whl
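As a quick smoke test, the following sketch reports which AIMET variant modules are importable in the current environment. The module names aimet_onnx, aimet_torch, and aimet_tensorflow are assumed from the variant names; adjust them if your build differs:

```shell
# Report which AIMET variant packages are importable (none need be installed
# for this to run; missing ones are simply reported as "not installed")
python3 - <<'EOF'
import importlib.util

for mod in ("aimet_onnx", "aimet_torch", "aimet_tensorflow"):
    found = importlib.util.find_spec(mod) is not None
    print(mod, "installed" if found else "not installed")
EOF
```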

# Run unit tests (ONNX)
cd TrainingExtensions/onnx/test/python
pytest

Build AIMET documentation

# cd to AIMET root directory
cd aimet/

# Example: Build for Documentation Only
export 'CMAKE_ARGS=-DENABLE_TENSORFLOW=ON -DENABLE_ONNX=ON -DENABLE_TORCH=ON -DENABLE_CUDA=OFF -DENABLE_TESTS=OFF'
export 'SKBUILD_BUILD_TARGETS=all;doc'

# Pin torch, onnxruntime, tensorflow-cpu versions
echo "onnxruntime==1.22.0" >> /tmp/constraints.txt
echo "torch==2.1.2" >> /tmp/constraints.txt
echo "tensorflow-cpu==2.12.*" >> /tmp/constraints.txt

# Compile requirements from pyproject.toml with constraints
python3 -m piptools compile pyproject.toml -v --constraint=/tmp/constraints.txt --extra=dev,test,docs --output-file=/tmp/requirements.txt

# Install the compiled dependencies
python3 -m pip install -r /tmp/requirements.txt

# Force-install tensorflow-cpu 2.10.1 and a matching keras, skipping dependency resolution
python3 -m pip install tensorflow-cpu==2.10.1 keras==2.10.0 tensorflow-model-optimization --no-deps

# Required to work around tensorflow-protobuf version mismatch
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python

# Build AIMET docs (aimet/build/Docs/index.html)
python3 -m build --wheel --no-isolation .

Docker environment

Build and run a Docker container locally

Docker build argument examples for AIMET variants:

Variant       Build args
aimet-onnx    VER_PYTHON=3.10 VER_ONNXRUNTIME=1.22.0 VER_CUDA=12.1.0
aimet-torch   VER_PYTHON=3.10 VER_TORCH=2.1.2 VER_CUDA=12.1.1
aimet-tf      VER_PYTHON=3.10 VER_TENSORFLOW=2.10.1 VER_CUDA=11.8.0

# cd to AIMET root directory
cd aimet

# Example: Build docker image for aimet-onnx with GPU
docker buildx build --build-arg VER_PYTHON=3.10 --build-arg VER_ONNXRUNTIME=1.22.0 --build-arg VER_CUDA=12.1.0 -t onnx-gpu:1.0 -f Jenkins/fast-release/Dockerfile.ci .

# Run the container
docker run -it -v /local/mnt/workspace:/local/mnt/workspace/ --gpus all --user root onnx-gpu:1.0

# Set up the conda environment inside the container
source /etc/profile.d/conda.sh
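Once inside the container, you can optionally verify GPU passthrough before building (a sketch; this only reports GPUs when the container was started with --gpus all, as above):

```shell
# Optional sanity check: list GPUs visible inside the container
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L
else
    echo "nvidia-smi not found -- no GPU passthrough"
fi
```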

Set environment variables to build the desired AIMET wheel

General Toggles

  • GPU build: -DENABLE_CUDA=ON

  • CPU-only build: -DENABLE_CUDA=OFF

  • Build C++ tests: -DENABLE_TESTS=ON

  • Skip building C++ tests: -DENABLE_TESTS=OFF

Variant-specific Toggles

Variant       CMake flags
aimet-onnx    -DENABLE_ONNX=ON -DENABLE_TORCH=OFF -DENABLE_TENSORFLOW=OFF
aimet-torch   -DENABLE_TORCH=ON -DENABLE_ONNX=OFF -DENABLE_TENSORFLOW=OFF
aimet-tf      -DENABLE_TENSORFLOW=ON -DENABLE_ONNX=OFF -DENABLE_TORCH=OFF

# Example: Build for aimet-onnx with GPU
export 'CMAKE_ARGS=-DENABLE_CUDA=ON -DENABLE_ONNX=ON -DENABLE_TORCH=OFF -DENABLE_TENSORFLOW=OFF -DENABLE_TESTS=OFF'
export 'SKBUILD_BUILD_TARGETS=all'

Build AIMET wheel and run unit tests

# Build AIMET wheel
python3 -m build --wheel --no-isolation .

# Install the built wheel
pip install dist/aimet*.whl

# Run unit tests (ONNX)
cd TrainingExtensions/onnx/test/python/
pytest