
Notes on TVM/VTA

I’ve been learning about TVM this week: I visited the TVM website and read a lot of the tutorials and code, and I found it very interesting. Since I happen to have a Raspberry Pi 3B, I tried the tutorial code on my Ubuntu computer, and it worked perfectly.

What is TVM

Apache(incubating) TVM is an open deep learning compiler stack for CPUs, GPUs, and specialized accelerators. It aims to close the gap between the productivity-focused deep learning frameworks, and the performance- or efficiency-oriented hardware backends. TVM provides the following main features:

  • Compilation of deep learning models in Keras, MXNet, PyTorch, TensorFlow, CoreML, and DarkNet into minimum deployable modules on diverse hardware backends.
  • Infrastructure to automatically generate and optimize tensor operators on more backends with better performance.

Install TVM on Ubuntu

I have Ubuntu 18.04.2 LTS on my PC.

  • Get the source from GitHub
git clone --recursive https://github.com/apache/incubator-tvm tvm
  • Build the shared library. First install the minimal prerequisites:
sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev
  • Use cmake to build the library
    • Copy cmake/config.cmake into a fresh build directory
      mkdir build
      cp cmake/config.cmake build
    • Edit build/config.cmake to customize the compilation options
      Because I have no NVIDIA GPU in my PC, there is no need to set USE_CUDA to ON.
      set(USE_GRAPH_RUNTIME ON)
      set(USE_GRAPH_RUNTIME_DEBUG ON)
      set(USE_LLVM ON)
      set(USE_VTA_FSIM ON) # enable the VTA simulator
  • Build tvm and the related libraries.
    cd build
    cmake ..
    make -j4
  • Set the environment variables for Python by adding the following to ~/.bashrc
export TVM_HOME=~/tvm
export PYTHONPATH=$TVM_HOME/python:$TVM_HOME/topi/python:$TVM_HOME/vta/python:${PYTHONPATH}
export VTA_HW_PATH=$TVM_HOME/3rdparty/vta-hw
  • To apply the environment variables, execute source ~/.bashrc.
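  • A quick sanity check that Python now picks up the freshly built TVM from ~/tvm:
import tvm                 # should import the package from ~/tvm/python/tvm
print(tvm.__file__)        # confirms which tvm package is being used
print(tvm.__version__)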

Configuring my Raspberry Pi

My Raspberry Pi model is Raspberry Pi 3 Model B Rev 1.2.

Set up my Raspberry Pi 3B

  1. Prepare the Raspberry Pi 3 and an 8 GB TF (microSD) card, and download Raspberry Pi OS; I chose Raspberry Pi OS (32-bit) with desktop.
  2. Format the TF card and write the Raspberry Pi OS image to it using Win32DiskImager.
  3. Headless (no display) start-up method
    1. Create a new empty file named ssh in the boot partition of the memory card to enable SSH.
    2. Create a new wpa_supplicant.conf text file in the boot partition with the Wi-Fi configuration:
    country=DE
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="my wifi name"
        psk="my wifi password"
        key_mgmt=WPA-PSK
        priority=1
    }
  4. Find the IP address of your Raspberry Pi in your router, then connect to it via SSH.
    ssh pi@<ip address> # default password: raspberry
  5. Use VNC to connect to the Raspberry Pi. Run sudo raspi-config to enable and configure it.

Install TVM on the Raspberry Pi

  • We only need to build the tvm runtime on the remote device.
cd ~
git clone --recursive https://github.com/apache/incubator-tvm tvm
cd tvm
mkdir build
cp cmake/config.cmake build
cd build
cmake ..
make runtime -j4
  • Set the environment variable in the ~/.bashrc file.
export PYTHONPATH=$PYTHONPATH:~/tvm/python
  • To apply the environment variable, execute source ~/.bashrc.
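  • The same kind of sanity check works on the Pi; the runtime-only build should still be importable:
import tvm                 # on the Pi this loads the runtime-only library
print(tvm.__file__)        # should point into ~/tvm/python/tvm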

Environment Building

Ubuntu

  1. LLVM
    Note that the downloaded archive must match the host architecture; my PC is x86_64, so:
    cd ~
    wget https://github.com/llvm/llvm-project/releases/download/llvmorg-10.0.0/clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
    tar -xvf clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
    rm clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
    mv clang+llvm-10.0.0-x86_64-linux-gnu-ubuntu-18.04 clang_10.0.0
    sudo mv clang_10.0.0 /usr/local
    Set the environment variables in ~/.bashrc:
    export PATH=/usr/local/clang_10.0.0/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/clang_10.0.0/lib:$LD_LIBRARY_PATH
  2. Cross compiler
    We need /usr/bin/arm-linux-gnueabihf-g++ so that the x86 host can produce ARM binaries for the Raspberry Pi.
    sudo apt-get install g++-arm-linux-gnueabihf
    /usr/bin/arm-linux-gnueabihf-g++ -v
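    In practice the cross compiler is used when exporting a compiled module as an ARM shared library. A minimal sketch, assuming lib is the module returned by relay.build for an ARM target (the cc keyword is forwarded to TVM's shared-library compile step; details may vary slightly between TVM versions):
    lib.export_library("net.tar")                                          # .tar packs object files; no cross compiler needed
    lib.export_library("net.so", cc="/usr/bin/arm-linux-gnueabihf-g++")    # .so is linked on the host with the ARM cross compiler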

Python

  1. PyTorch
    sudo pip3 install torch torchvision
  2. ONNX
    The workflow here is PyTorch -> ONNX -> TVM (see the sketch after this list).
    pip3 install onnx
  3. python-opencv
    pip3 install opencv-python
    pip3 install opencv-contrib-python
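
The PyTorch -> ONNX -> TVM path looks roughly like the following. This is a minimal sketch; the torchvision resnet18 model and the input name "input0" are just examples, not part of the original tutorial.
import torch
import torchvision
import onnx
from tvm import relay

# 1. export a PyTorch model to ONNX
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx", input_names=["input0"])

# 2. import the ONNX model into TVM Relay
onnx_model = onnx.load("resnet18.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input0": (1, 3, 224, 224)})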

Other tools

  1. Netron
    1. online: https://lutzroeder.github.io/netron/
    2. download: https://github.com/lutzroeder/netron/find/main
  2. CUDA (I don't use it because I have no NVIDIA GPU)
  3. opencv
    1. Download the zip source from https://github.com/opencv/opencv/releases
    2. Install the build dependencies
    sudo apt-get install build-essential
    sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
    sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev lib
    3. Build and install with cmake
    cd ~/opencv-3.4.3
    mkdir build
    cd build
    cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
    make -j7
    sudo make install
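    A quick check that the OpenCV Python bindings are usable (whichever build Python finds, the pip wheel or this source install):
    import cv2
    print(cv2.__version__)   # e.g. 3.4.3 for the source build above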

Deploy a model on the Raspberry Pi

I ran the tutorial code (deploy_model_on_rasp.py) on Ubuntu.

  1. Connect to the Raspberry Pi via ssh
    ssh pi@192.168.2.105
  2. Start an RPC server on the Raspberry Pi
    python3 -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
    INFO:RPCServer:bind to 0.0.0.0:9090 # RPC server started successfully
  3. Run the code on Ubuntu (the sketch after this list shows roughly what the script does)
    python3 deploy_model_on_rasp.py
  4. Result
    1. Raspberry Pi
    INFO:RPCServer:connection from ('192.168.2.108', 57062)
    INFO:RPCServer:load_module /tmp/tmppv_8msd5/net.tar
    INFO:RPCServer:Finish serving ('192.168.2.108', 57062)
    2. Ubuntu
    Model: MXNet resnet18_v1
    Run the model on the Raspberry Pi via RPC
    Time for model loading is 281.33s
    Time for build graph is 1.09s
    Time for model running is 1.33s
    x TVM prediction top-1: tiger cat
    y TVM prediction top-1: airliner
    Run the model on the local PC
    Time for model loading is 0.90s
    Time for build graph is 0.03s
    Time for model running is 0.10s
    x TVM prediction top-1: tiger cat
    y TVM prediction top-1: airliner
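
For context, the core of the tutorial script boils down to roughly the following. This is a minimal sketch based on the MXNet resnet18_v1 flow; exact API names (for example graph_runtime and the rlib["default"] factory call) depend on which TVM revision you checked out.
import numpy as np
import tvm
from tvm import relay, rpc
from tvm.contrib import graph_runtime
from mxnet.gluon.model_zoo.vision import get_model

# compile resnet18_v1 from the MXNet model zoo for the Raspberry Pi 3B CPU
block = get_model("resnet18_v1", pretrained=True)
mod, params = relay.frontend.from_mxnet(block, {"data": (1, 3, 224, 224)})
target = tvm.target.arm_cpu("rasp3b")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target, params=params)

# export the compiled library, push it to the Pi over RPC, and load it there
lib.export_library("net.tar")
remote = rpc.connect("192.168.2.105", 9090)
remote.upload("net.tar")
rlib = remote.load_module("net.tar")

# run the model remotely and read back the prediction
ctx = remote.cpu(0)
module = graph_runtime.GraphModule(rlib["default"](ctx))
module.set_input("data", tvm.nd.array(np.random.rand(1, 3, 224, 224).astype("float32")))
module.run()
print("TVM prediction top-1 index:", np.argmax(module.get_output(0).asnumpy()))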

Running the model on the Raspberry Pi is significantly slower than running it locally on Ubuntu, mainly because loading the model over RPC is slow. But it works.
Deploying a model on the Raspberry Pi goes through LLVM; deploying a model on an FPGA goes through VTA.
