Installation¶
This guide walks you through installing the Cloud AI SDK on your system. Choose your installation path based on your environment:
| Environment | Description |
|---|---|
| Cloud Instances | Pre-configured instances on AWS or Cirrascale (fastest setup) |
| Local Server | On-premises servers with Cloud AI hardware |
| Docker | Containerized deployment |
| Virtual Machines | KVM, ESXi, or Hyper-V environments |
Cloud Instances¶
Qualcomm Cloud AI accelerators are available from two cloud providers:
Amazon Web Services (AWS) — see Getting Started on AWS for instance setup and configuration.
Cirrascale Cloud Services — configurations with 1 to 8 Cloud AI accelerators per instance.
Note
SDKs are pre-installed on cloud instances. Skip ahead to Inference Workflow.
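On a freshly provisioned cloud instance you can sanity-check the pre-installed SDK from a shell. A minimal sketch, assuming the Platform SDK's default install location (the `/opt/qti-aic` path and the `qaic-util` tool name are assumptions; adjust for your image):

```shell
QAIC_UTIL=/opt/qti-aic/tools/qaic-util   # assumed install path; adjust if your image differs

if [ -x "$QAIC_UTIL" ]; then
  "$QAIC_UTIL" -q                        # query detected Cloud AI devices
  SDK_STATUS="present"
else
  echo "qaic-util not found at $QAIC_UTIL; the SDK may live elsewhere on this image"
  SDK_STATUS="missing"
fi
```

If the tool is present and lists your devices, you can move straight on to the Inference Workflow.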
Local Server Installation¶
Follow these steps to install on an on-premises server:
Verify hardware requirements - Hardware Requirements
Check OS compatibility - Supported Operating Systems
Configure BIOS settings - Enable MSI
Download SDKs - Download from QPM
Install Platform SDK - Platform SDK Installation
Install Apps SDK - Apps SDK Installation
Verify installation - Verification Checklist
Tip
To run Platform SDK tools without sudo, add yourself to the qaic group:
sudo usermod -aG qaic $USER
newgrp qaic # Apply changes without logging out
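A quick way to confirm the group change took effect in your current session (the `qaic` group name comes from the tip above; the rest is standard shell):

```shell
# List the current user's groups and look for qaic
if id -nG | grep -qw qaic; then
  echo "in qaic group: Platform SDK tools should work without sudo"
  QAIC_GROUP="yes"
else
  echo "not in qaic group yet: run the usermod command above, then re-login or use newgrp"
  QAIC_GROUP="no"
fi
```

Note that `usermod -aG` only affects sessions started after the change; `newgrp qaic` applies it to the current shell.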
Virtual Machine Installation¶
Cloud AI supports PCIe passthrough to virtual machines: each VM gets exclusive access to its assigned Cloud AI device (no sharing between VMs or with the host).
Supported Hypervisors (x86-64 only)
KVM
Hyper-V
ESXi
Xen
Installation Steps
Configure PCIe passthrough on your hypervisor
Enable MSI in BIOS and hypervisor settings
Launch VM with a supported OS
Install Cloud AI SDKs inside the VM
Verify card health
Note
Cloud AI SDKs are installed only on the guest VM, not on the hypervisor host.
See Hypervisors for detailed PCIe passthrough and hypervisor configuration instructions.
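As one concrete illustration, on a KVM host the "configure PCIe passthrough" step usually amounts to unbinding the card from its host driver and handing it to `vfio-pci`. The sketch below uses standard Linux sysfs interfaces; the PCI address is a placeholder you must replace with your card's address, and the Hypervisors page remains the authoritative procedure:

```shell
BDF="0000:17:00.0"               # placeholder PCI address -- replace with yours (see: lspci -nn)
DEV="/sys/bus/pci/devices/$BDF"

if [ -e "$DEV" ]; then
  # Detach the device from its current host driver, if it has one
  [ -e "$DEV/driver" ] && echo "$BDF" | sudo tee "$DEV/driver/unbind" >/dev/null
  # Force vfio-pci for this device, then ask the PCI core to re-probe it
  echo vfio-pci | sudo tee "$DEV/driver_override" >/dev/null
  echo "$BDF"   | sudo tee /sys/bus/pci/drivers_probe >/dev/null
  BIND_STATUS="bound"
else
  echo "no PCI device at $BDF; replace the placeholder address"
  BIND_STATUS="no-device"
fi
```

Once the device is bound to `vfio-pci`, attach it to the VM through your hypervisor's device-assignment mechanism (for example, a `<hostdev>` entry in libvirt).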
Docker Installation¶
For containerized deployments:
Install Platform SDK on the host system
Download a prebuilt Cloud AI Docker image, or build a custom image with the Apps SDK
Launch container with device access
See Docker for detailed instructions, available images, and workflow examples.
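The container-launch step above might look like the following from the host. This is a sketch, not the documented workflow: the image name and device node below are placeholder assumptions, and the real values come from the Docker page.

```shell
IMAGE="cloud-ai-sdk:latest"      # placeholder image name -- see the Docker page for real tags
DEVICE="/dev/accel/accel0"       # placeholder device node -- actual name depends on the driver

if command -v docker >/dev/null 2>&1 && [ -e "$DEVICE" ]; then
  # Pass only the accelerator node into the container, not all of /dev
  docker run -it --rm --device="$DEVICE" "$IMAGE"
  LAUNCH_STATUS="attempted"
else
  echo "docker or $DEVICE not available here; adjust the placeholders for your host"
  LAUNCH_STATUS="skipped"
fi
```

Passing a single device node with `--device` keeps the container unprivileged while still giving it direct access to the accelerator.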
Next Steps¶
After installation, proceed to:
Inference Workflow - Compile and run models
System Management - Monitor and manage devices