OpenVINO

How it Works


Reference: building and installing OpenVINO from source for Linux

Building the Code and Installing Dependencies

Download the source

$ git clone https://github.com/openvinotoolkit/openvino
$ cd openvino
$ git submodule update --init --recursive


Install build dependencies

$ ./install_build_dependencies.sh


Install the Intel® Graphics Compute Runtime for OpenCL™ Driver

The correct way to install the driver:

$ cd openvino
$ sudo ./scripts/install_dependencies/install_NEO_OCL_driver.sh

Download the following files and verify them:

$ sha256sum -c ww41.sum 
intel-gmmlib_19.3.2_amd64.deb: OK
intel-igc-core_1.0.2597_amd64.deb: OK
intel-igc-opencl_1.0.2597_amd64.deb: OK
intel-ocloc_19.41.14441_amd64.deb: OK
intel-opencl_19.41.14441_amd64.deb: OK

$ sudo dpkg -i *.deb
Selecting previously unselected package intel-gmmlib.
(Reading database ... 472191 files and directories currently installed.)
Preparing to unpack intel-gmmlib_19.3.2_amd64.deb ...
Unpacking intel-gmmlib (19.3.2) ...
Selecting previously unselected package intel-igc-core.
Preparing to unpack intel-igc-core_1.0.2597_amd64.deb ...
Unpacking intel-igc-core (1.0.2597) ...
Selecting previously unselected package intel-igc-opencl.
Preparing to unpack intel-igc-opencl_1.0.2597_amd64.deb ...
Unpacking intel-igc-opencl (1.0.2597) ...
Selecting previously unselected package intel-ocloc.
Preparing to unpack intel-ocloc_19.41.14441_amd64.deb ...
Unpacking intel-ocloc (19.41.14441) ...
Selecting previously unselected package intel-opencl.
Preparing to unpack intel-opencl_19.41.14441_amd64.deb ...
Unpacking intel-opencl (19.41.14441) ...
Setting up intel-gmmlib (19.3.2) ...
Setting up intel-igc-core (1.0.2597) ...
Setting up intel-igc-opencl (1.0.2597) ...
Setting up intel-ocloc (19.41.14441) ...
Setting up intel-opencl (19.41.14441) ...


Build

$ cd openvino
$ mkdir build && cd build
$ cmake -DCMAKE_BUILD_TYPE=Release ..
$ make --jobs=$(nproc --all)

Install

$ cd openvino
$ sudo cmake --install ./build --prefix /usr/share/openvino

Verify

$ cd ~/openvino/samples/cpp
$ source /usr/share/openvino/setupvars.sh
$ ./build_samples.sh


Supported Devices

Hardware   Description
CPU        Intel CPUs (x86-64)
GPU        Intel integrated / discrete GPUs, via the OpenCL driver installed above
VPU        Vision Processing Units
FPGA       Intel FPGA accelerators
MYRIAD     Intel Movidius Myriad VPUs (e.g. Neural Compute Stick 2)
HDDL       High Density Deep Learning: Intel Vision Accelerator Design with Movidius VPUs
GNA        Gaussian & Neural Accelerator
HETERO     Heterogeneous execution, one model split across several devices
MULTI      Multi-device execution, the same model run on several devices in parallel
Movidius   Movidius neural compute sticks
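
These hardware names double as the device strings passed to the OpenVINO Runtime when compiling a model. A minimal Python sketch, assuming the openvino Python package is installed (it is installed later in this article via openvino-dev) and using model.xml as a placeholder for an already-converted IR file:

from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # placeholder IR path

# The device string selects a plugin from the table above.
compiled_cpu    = core.compile_model(model, "CPU")
compiled_gpu    = core.compile_model(model, "GPU")             # requires the OpenCL driver installed above
compiled_multi  = core.compile_model(model, "MULTI:GPU,CPU")   # same model on several devices in parallel
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")  # one graph split across devices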


Below is the device information from my virtual-machine environment:

~/openvino/bin/intel64/Release$ ./hello_query_device 
[ INFO ] OpenVINO Runtime version ......... 2022.2.0
[ INFO ] Build ........... custom_master_80739700ff4804f039d36a8c1e6394746f01925b
[ INFO ] 
[ INFO ] Available devices: 
[ INFO ] CPU
[ INFO ]        SUPPORTED_PROPERTIES: 
[ INFO ]                Immutable: AVAILABLE_DEVICES : ""
[ INFO ]                Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ]                Immutable: RANGE_FOR_STREAMS : 1 4
[ INFO ]                Immutable: FULL_DEVICE_NAME : Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz
[ INFO ]                Immutable: OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
[ INFO ]                Immutable: CACHE_DIR : ""
[ INFO ]                Mutable: NUM_STREAMS : 1
[ INFO ]                Mutable: AFFINITY : CORE
[ INFO ]                Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ]                Mutable: PERF_COUNT : NO
[ INFO ]                Mutable: INFERENCE_PRECISION_HINT : f32
[ INFO ]                Mutable: PERFORMANCE_HINT : ""
[ INFO ]                Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ] 
[ INFO ] GNA
[ INFO ]        SUPPORTED_PROPERTIES: 
[ INFO ]                Immutable: AVAILABLE_DEVICES : GNA_SW
[ INFO ]                Immutable: OPTIMAL_NUMBER_OF_INFER_REQUESTS : 1
[ INFO ]                Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ]                Immutable: OPTIMIZATION_CAPABILITIES : INT16 INT8 EXPORT_IMPORT
[ INFO ]                Immutable: FULL_DEVICE_NAME : GNA_SW
[ INFO ]                Immutable: GNA_LIBRARY_FULL_VERSION : 3.0.0.1455
[ INFO ]                Mutable: GNA_SCALE_FACTOR_PER_INPUT : ""
[ INFO ]                Mutable: GNA_FIRMWARE_MODEL_IMAGE : ""
[ INFO ]                Mutable: GNA_DEVICE_MODE : GNA_SW_EXACT
[ INFO ]                Mutable: GNA_HW_EXECUTION_TARGET : UNDEFINED
[ INFO ]                Mutable: GNA_HW_COMPILE_TARGET : UNDEFINED
[ INFO ]                Mutable: GNA_PWL_DESIGN_ALGORITHM : UNDEFINED
[ INFO ]                Mutable: GNA_PWL_MAX_ERROR_PERCENT : 1.000000
[ INFO ]                Mutable: PERFORMANCE_HINT : ""
[ INFO ]                Mutable: INFERENCE_PRECISION_HINT : undefined
[ INFO ]                Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 1
[ INFO ]                Mutable: LOG_LEVEL : LOG_NONE
[ INFO ]

And an Intel mini PC I have on hand:

$ sudo ./hello_query_device 
[ INFO ] OpenVINO Runtime version ......... 2022.2.0
[ INFO ] Build ........... custom_master_1a288c2e99da256d1caa1e85e755c5a2b9b68304
[ INFO ] 
[ INFO ] Available devices: 
[ INFO ] CPU
[ INFO ]        SUPPORTED_PROPERTIES: 
[ INFO ]                Immutable: AVAILABLE_DEVICES : ""
[ INFO ]                Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ]                Immutable: RANGE_FOR_STREAMS : 1 4
[ INFO ]                Immutable: FULL_DEVICE_NAME : Intel(R) Core(TM) i5-7267U CPU @ 3.10GHz
[ INFO ]                Immutable: OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
[ INFO ]                Immutable: CACHE_DIR : ""
[ INFO ]                Mutable: NUM_STREAMS : 1
[ INFO ]                Mutable: AFFINITY : CORE
[ INFO ]                Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ]                Mutable: PERF_COUNT : NO
[ INFO ]                Mutable: INFERENCE_PRECISION_HINT : f32
[ INFO ]                Mutable: PERFORMANCE_HINT : ""
[ INFO ]                Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ] 
[ INFO ] GNA
[ INFO ]        SUPPORTED_PROPERTIES: 
[ INFO ]                Immutable: AVAILABLE_DEVICES : GNA_SW
[ INFO ]                Immutable: OPTIMAL_NUMBER_OF_INFER_REQUESTS : 1
[ INFO ]                Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ]                Immutable: OPTIMIZATION_CAPABILITIES : INT16 INT8 EXPORT_IMPORT
[ INFO ]                Immutable: FULL_DEVICE_NAME : GNA_SW
[ INFO ]                Immutable: GNA_LIBRARY_FULL_VERSION : 3.0.0.1455
[ INFO ]                Mutable: GNA_SCALE_FACTOR_PER_INPUT : ""
[ INFO ]                Mutable: GNA_FIRMWARE_MODEL_IMAGE : ""
[ INFO ]                Mutable: GNA_DEVICE_MODE : GNA_SW_EXACT
[ INFO ]                Mutable: GNA_HW_EXECUTION_TARGET : UNDEFINED
[ INFO ]                Mutable: GNA_HW_COMPILE_TARGET : UNDEFINED
[ INFO ]                Mutable: GNA_PWL_DESIGN_ALGORITHM : UNDEFINED
[ INFO ]                Mutable: GNA_PWL_MAX_ERROR_PERCENT : 1.000000
[ INFO ]                Mutable: PERFORMANCE_HINT : ""
[ INFO ]                Mutable: INFERENCE_PRECISION_HINT : undefined
[ INFO ]                Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 1
[ INFO ]                Mutable: LOG_LEVEL : LOG_NONE
[ INFO ] 
[ INFO ] GPU
[ INFO ]        SUPPORTED_PROPERTIES: 
[ INFO ]                Immutable: AVAILABLE_DEVICES : 0
[ INFO ]                Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 2 1
[ INFO ]                Immutable: RANGE_FOR_STREAMS : 1 2
[ INFO ]                Immutable: OPTIMAL_BATCH_SIZE : 1
[ INFO ]                Immutable: MAX_BATCH_SIZE : 1
[ INFO ]                Immutable: FULL_DEVICE_NAME : Intel(R) Iris(R) Plus Graphics 650 [0x5927] (iGPU)
[ INFO ]                Immutable: DEVICE_TYPE : integrated
[ INFO ]                Immutable: DEVICE_GOPS : f16 1612.8 f32 806.4 i8 806.4 u8 806.4
[ INFO ]                Immutable: OPTIMIZATION_CAPABILITIES : FP32 BIN FP16
[ INFO ]                Immutable: GPU_DEVICE_TOTAL_MEM_SIZE : 6619910144
[ INFO ]                Immutable: GPU_UARCH_VERSION : 9.0.0
[ INFO ]                Immutable: GPU_EXECUTION_UNITS_COUNT : 48
[ INFO ]                Immutable: GPU_MEMORY_STATISTICS : ""
[ INFO ]                Mutable: PERF_COUNT : NO
[ INFO ]                Mutable: MODEL_PRIORITY : MEDIUM
[ INFO ]                Mutable: GPU_HOST_TASK_PRIORITY : MEDIUM
[ INFO ]                Mutable: GPU_QUEUE_PRIORITY : MEDIUM
[ INFO ]                Mutable: GPU_QUEUE_THROTTLE : MEDIUM
[ INFO ]                Mutable: GPU_ENABLE_LOOP_UNROLLING : YES
[ INFO ]                Mutable: CACHE_DIR : ""
[ INFO ]                Mutable: PERFORMANCE_HINT : ""
[ INFO ]                Mutable: COMPILATION_NUM_THREADS : 4
[ INFO ]                Mutable: NUM_STREAMS : 1
[ INFO ]                Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ]                Mutable: DEVICE_ID : 0
[ INFO ]
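
The same query can be done from Python. A minimal sketch of what hello_query_device prints above, assuming the openvino Python package is installed (see the openvino-dev step below):

from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is one of the immutable properties listed above
    print(device, ":", core.get_property(device, "FULL_DEVICE_NAME"))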


Model Conversion and Optimization (mo)

Install dependencies

$ sudo apt install python3-pybind11

Pay attention to the protobuf version:

$ python3 -m pip show protobuf
Name: protobuf
Version: 3.19.4
Summary: Protocol Buffers
Home-page: https://developers.google.com/protocol-buffers/
Author: None
Author-email: None
License: 3-Clause BSD License
Location: /home/wii/.local/lib/python3.6/site-packages
Requires: 
$ protoc --version
-bash: /usr/bin/protoc: No such file or directory

$ sudo apt-get install autoconf automake libtool

$ git clone https://github.com/google/protobuf.git
$ cd protobuf
$ git checkout v3.19.4
$ git submodule update --init --recursive
$ ./autogen.sh
$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig
$ protoc --version
libprotoc 3.19.4


Install mo

$ sudo apt install virtualenv

$ virtualenv -p /usr/bin/python3.6 .env3
$ source .env3/bin/activate
$ pip install openvino-dev[onnx]

$ cd ~/openvino/tools/mo
$ pip install -e .
$ mo --input_model model.onnx --output_dir ./ --input_shape [1,3,640,640] --data_type FP32
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/wii/models/model.onnx
        - Path for generated IR:        /home/wii/models/./
        - IR output name:       model
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,3,640,640]
        - Source layout:        Not specified
        - Target layout:        Not specified
        - Layout:       Not specified
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - User transformations:         Not specified
        - Reverse input channels:       False
        - Enable IR generation for fixed input shape:   False
        - Use the transformations config file:  None
Advanced parameters:
        - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR:   False
        - Force the usage of new Frontend of Model Optimizer for model conversion into IR:      False
OpenVINO runtime found in:      /home/wii/openvino/tools/mo/.env3/lib/python3.6/site-packages/openvino
OpenVINO runtime version:       2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version:        2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/wii/models/model.xml
[ SUCCESS ] BIN file: /home/wii/models/model.bin
[ SUCCESS ] Total execution time: 1.14 seconds. 
[ SUCCESS ] Memory consumed: 136 MB. 
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai

$ ll model.*
-rw-rw-r-- 1 wii wii 28215960 Apr  2 07:01 model.bin        OpenVINO model weights
-rw-rw-r-- 1 wii wii    24787 Apr  2 07:01 model.mapping    mapping between computation nodes before and after conversion
-rw-r--r-- 1 wii wii 28236825 Apr  2 04:56 model.onnx       * the original ONNX model
-rw-rw-r-- 1 wii wii   236739 Apr  2 07:01 model.xml        OpenVINO network topology
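
The generated IR can be loaded and run with the OpenVINO Runtime directly. A minimal inference sketch for the model above; the [1,3,640,640] shape comes from the mo command, and the random input only illustrates the call sequence:

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")           # model.bin is picked up automatically
compiled = core.compile_model(model, "CPU")
request = compiled.create_infer_request()

dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
results = request.infer([dummy])               # dict keyed by output ports
for output in compiled.outputs:
    print(output.any_name, results[output].shape)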


$ mo --input_model mask_rcnn.onnx --output_dir ./ --input_shape [1,3,640,640]
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/aibox/pytorch_mask_rcnn/mask_rcnn.onnx
        - Path for generated IR:        /home/aibox/pytorch_mask_rcnn/./
        - IR output name:       mask_rcnn
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,3,640,640]
        - Source layout:        Not specified
        - Target layout:        Not specified
        - Layout:       Not specified
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - User transformations:         Not specified
        - Reverse input channels:       False
        - Enable IR generation for fixed input shape:   False
        - Use the transformations config file:  None
Advanced parameters:
        - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR:   False
        - Force the usage of new Frontend of Model Optimizer for model conversion into IR:      False
OpenVINO runtime found in:      /home/aibox/openvino_env/lib/python3.8/site-packages/openvino
OpenVINO runtime version:       2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version:        2022.1.0-7019-cdb9bec7210-releases/2022/1
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Check 'get_input_size() >= 1' failed at core/src/op/concat.cpp:33:
While validating node 'v0::Concat 3200 () -> (f32{0})' with friendly_name '3200':
At least one argument required.

[ ERROR ]  Traceback (most recent call last):
  File "/home/aibox/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 533, in main
    ret_code = driver(argv)
  File "/home/aibox/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 493, in driver
    ret_res = moc_emit_ir(ngraph_function, argv)
  File "/home/aibox/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/moc_frontend/serialize.py", line 25, in moc_emit_ir
    apply_moc_transformations(ngraph_function)
  File "/home/aibox/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/back/offline_transformations.py", line 39, in apply_moc_transformations
    apply_moc_transformations(func, False)
RuntimeError: Check 'get_input_size() >= 1' failed at core/src/op/concat.cpp:33:
While validating node 'v0::Concat 3200 () -> (f32{0})' with friendly_name '3200':
At least one argument required.


[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

When mask_rcnn was exported to ONNX, the input size had been fixed; after re-exporting it with a dynamic input size, the model converted to the OpenVINO format without the error above. A sketch of such an export is shown below.
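
For reference, a hedged sketch of a dynamic-size ONNX export using torch.onnx.export with dynamic_axes. The model here is torchvision's Mask R-CNN as a stand-in, and all names (input/output names, axis labels) are illustrative, not taken from the original export script:

import torch
import torchvision

# Stand-in model: torchvision's Mask R-CNN, not necessarily the one used in this article.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False).eval()

img = torch.rand(3, 640, 640)
torch.onnx.export(
    model, ([img],), "mask_rcnn.onnx",
    opset_version=11,
    input_names=["images"],
    output_names=["boxes", "labels", "scores", "masks"],
    # Mark the spatial dimensions as dynamic so the exported graph does not bake in a fixed size.
    dynamic_axes={"images": {1: "height", 2: "width"}},
)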


Intel GPU Utilization

$ sudo apt-get install intel-gpu-tools
$ sudo intel_gpu_top
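
intel_gpu_top shows per-engine utilization in real time. To see the load rise, run GPU inference in a loop in another terminal; a minimal sketch, with model.xml again a placeholder for a converted IR:

import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "GPU")
request = compiled.create_infer_request()

dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
while True:
    request.infer([dummy])    # watch the Render/3D engine climb in intel_gpu_top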