-
  - 1.1. Prerequisites
  - 1.2. Install from Binary
  - 1.3. Install from Source
  - 1.4. Install from AI Kit
-
  - 2.1. Prerequisites
  - 2.2. Install from Binary
  - 2.3. Install from Source
You can install Intel® Neural Compressor in one of three ways: install the library alone from binary or from source, or get it together with the Intel-optimized frameworks by installing the Intel® oneAPI AI Analytics Toolkit.
The following prerequisites and requirements must be satisfied for a successful installation:
- Python version: 3.7, 3.8, 3.9, or 3.10
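The version requirement above can be checked before installing; the following is a minimal sketch (not part of the official guide) that compares the local interpreter against the supported list:

```shell
# Check the local interpreter against the Python versions this guide supports
v=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
case "$v" in
  3.7|3.8|3.9|3.10) echo "Python $v is supported" ;;
  *)                echo "Python $v is not in the supported list" ;;
esac
```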
Notes:
- Choose either the basic or the full installation mode for your environment; DO NOT install both. To switch to the other mode, uninstall the currently installed package first.
- If you encounter build issues, check the frequently asked questions first.
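Before re-installing, you can check which mode (if either) is currently present by querying `pip show` for each package name; a small sketch, assuming the package names used in this guide:

```shell
# Report which of the two mutually exclusive packages is installed, if any
python3 -m pip show neural-compressor      >/dev/null 2>&1 && echo "basic mode installed" || true
python3 -m pip show neural-compressor-full >/dev/null 2>&1 && echo "full mode installed"  || true
# Before switching modes, uninstall the current package first, e.g.:
#   pip uninstall -y neural-compressor
```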
```shell
# install stable basic version from pypi
pip install neural-compressor
# or install stable full version from pypi (including GUI)
pip install neural-compressor-full
```

```shell
# install nightly version: clone the repository and install its dependencies first
git clone https://github.com/intel/neural-compressor.git
cd neural-compressor
pip install -r requirements.txt
# install nightly basic version from pypi
pip install -i https://test.pypi.org/simple/ neural-compressor
# or install nightly full version from pypi (including GUI)
pip install -i https://test.pypi.org/simple/ neural-compressor-full
```
```shell
# install stable basic version from conda
conda install neural-compressor -c conda-forge -c intel
# or install stable full version from conda (including GUI)
conda install sqlalchemy=1.4.27 alembic=1.7.7 -c conda-forge
conda install neural-compressor-full -c conda-forge -c intel
```
```shell
git clone https://github.com/intel/neural-compressor.git
cd neural-compressor
pip install -r requirements.txt
# build with basic functionality
python setup.py install
# build with full functionality (including GUI)
python setup.py --full install
```
The Intel® Neural Compressor library is released as part of the Intel® oneAPI AI Analytics Toolkit (AI Kit). The AI Kit provides a consolidated package of Intel's latest deep learning and machine learning optimizations in one place for ease of development. Along with Neural Compressor, the AI Kit includes Intel-optimized versions of deep learning frameworks (such as TensorFlow and PyTorch) and high-performance Python libraries that streamline end-to-end data science and AI workflows on Intel architectures.
The AI Kit is distributed through many common channels, including from Intel's website, YUM, APT, Anaconda, and more. Select and download the AI Kit distribution package that's best suited for you and follow the Get Started Guide for post-installation instructions.
| Download | Guide |
| --- | --- |
| Download AI Kit | AI Kit Get Started Guide |
The following prerequisites and requirements must be satisfied for a successful installation:
- Python version: 3.7, 3.8, 3.9, or 3.10
```shell
# install stable basic version from pypi
pip install neural-compressor
# or install stable full version from pypi (including GUI)
pip install neural-compressor-full
```

```shell
# install stable basic version from conda
conda install pycocotools -c esri
conda install neural-compressor -c conda-forge -c intel
# or install stable full version from conda (including GUI)
conda install pycocotools -c esri
conda install sqlalchemy=1.4.27 alembic=1.7.7 -c conda-forge
conda install neural-compressor-full -c conda-forge -c intel
```
```shell
git clone https://github.com/intel/neural-compressor.git
cd neural-compressor
pip install -r requirements.txt
# build with basic functionality
python setup.py install
# build with full functionality (including GUI)
python setup.py --full install
```
Intel® Neural Compressor supports CPUs based on Intel 64 architecture or compatible processors:
- Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, Ice Lake, and Sapphire Rapids)
- Intel Xeon CPU Max Series (formerly Sapphire Rapids HBM)
- Intel Data Center GPU Flex Series (formerly Arctic Sound-M)
- Intel Data Center GPU Max Series (formerly Ponte Vecchio)
Intel® Neural Compressor quantized ONNX models support multiple hardware vendors through ONNX Runtime:
- Intel CPU, AMD/ARM CPU, and NVIDIA GPU. Refer to the validated model list.
- OS version: CentOS 8.4, Ubuntu 20.04
- Python version: 3.7, 3.8, 3.9, 3.10
| Framework | TensorFlow | Intel TensorFlow | Intel® Extension for TensorFlow* | PyTorch | Intel® Extension for PyTorch* | ONNX Runtime | MXNet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Version | 2.11.0, 2.10.1, 2.9.3 | 2.11.0, 2.10.0, 2.9.1 | 1.0.0 | 1.13.1+cpu, 1.12.1+cpu, 1.11.0+cpu | 1.13.0, 1.12.1, 1.11.0 | 1.13.1, 1.12.1, 1.11.0 | 1.9.1, 1.8.0, 1.7.0 |
Note: Set the environment variable `TF_ENABLE_ONEDNN_OPTS=1` to enable oneDNN optimizations if you are using a TensorFlow release prior to v2.9. oneDNN is enabled by default since TensorFlow v2.9.
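For example, with an older TensorFlow the variable can be exported before starting Python; the variable name comes from the note above, and the check below simply confirms that child processes see it:

```shell
# Opt in to oneDNN optimizations for TensorFlow < 2.9 (on by default from v2.9)
export TF_ENABLE_ONEDNN_OPTS=1
# Any Python process started from this shell now inherits the variable:
python3 -c 'import os; print(os.environ.get("TF_ENABLE_ONEDNN_OPTS"))'
```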