
[Tensorflow] How to install Tensorflow for aarch64 CPU (build from source)

TensorFlow is a powerful deep learning framework that is now widely used for developing and deploying all kinds of deep learning applications.
Despite its large user base, projects that need TensorFlow on an unusual combination of version / operating system / programming language / platform often mean searching through a sea of Google results and then testing each candidate answer one by one.
Building from source can solve most of these problems, but it takes a lot of time and is easy to get wrong, so many people struggle with the installation.

aarch64 | python/C/C++ | ubuntu (build from source)



1. Install Bazel (build from source)
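As a minimal sketch of this step (assuming Bazel 0.19.2, the version reported later by ./configure, and an already-installed JDK 8 plus zip/build-essential), bootstrapping Bazel from its dist archive looks roughly like:

```shell
# Sketch: bootstrap Bazel 0.19.2 from the source dist archive on aarch64.
# Assumes openjdk-8-jdk, zip, unzip and build-essential are installed.
wget https://github.com/bazelbuild/bazel/releases/download/0.19.2/bazel-0.19.2-dist.zip
unzip -d bazel-0.19.2 bazel-0.19.2-dist.zip
cd bazel-0.19.2
bash ./compile.sh                     # builds output/bazel
sudo cp output/bazel /usr/local/bin/
bazel version                         # should report 0.19.2
```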

2. Prepare the TensorFlow Source

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout v1.13.1
Then create a file named tf1.13-aarch64.patch in the current folder with the content below and save it:
diff --git a/tensorflow/lite/kernels/internal/BUILD b/tensorflow/lite/kernels/internal/BUILD
index 4be3226..b52b5b3 100644
--- a/tensorflow/lite/kernels/internal/BUILD
+++ b/tensorflow/lite/kernels/internal/BUILD
@@ -30,7 +30,6 @@ NEON_FLAGS_IF_APPLICABLE = select({
     ],
     ":armv7a": [
         "-O3",
-        "-mfpu=neon",
     ],
     "//conditions:default": [
         "-O3",
diff --git a/third_party/aws/BUILD.bazel b/third_party/aws/BUILD.bazel
index 5426f79..82d8a0d 100644
--- a/third_party/aws/BUILD.bazel
+++ b/third_party/aws/BUILD.bazel
@@ -24,7 +24,9 @@ cc_library(
         "@org_tensorflow//tensorflow:raspberry_pi_armeabi": glob([
             "aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
         ]),
-        "//conditions:default": [],
+        "//conditions:default": glob([
+            "aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
+    ]),
     }) + glob([
         "aws-cpp-sdk-core/include/**/*.h",
         "aws-cpp-sdk-core/source/*.cpp",
diff --git a/third_party/gpus/crosstool/BUILD.tpl b/third_party/gpus/crosstool/BUILD.tpl
index db76306..9539009 100644
--- a/third_party/gpus/crosstool/BUILD.tpl
+++ b/third_party/gpus/crosstool/BUILD.tpl
@@ -23,6 +23,7 @@ cc_toolchain_suite(
         "darwin|compiler": ":cc-compiler-darwin",
         "x64_windows|msvc-cl": ":cc-compiler-windows",
         "x64_windows": ":cc-compiler-windows",
+    "aarch64": ":cc-compiler-local",
         "arm": ":cc-compiler-local",
         "k8": ":cc-compiler-local",
         "piii": ":cc-compiler-local",
diff --git a/third_party/nccl/build_defs.bzl.tpl b/third_party/nccl/build_defs.bzl.tpl
index 42de79c..f37a129 100644
--- a/third_party/nccl/build_defs.bzl.tpl
+++ b/third_party/nccl/build_defs.bzl.tpl
@@ -87,7 +87,7 @@ def rdc_copts():
     # The global functions can not have a lower register count than the
     # device functions. This is enforced by setting a fixed register count.
     # https://github.com/NVIDIA/nccl/blob/f93fe9bfd94884cec2ba711897222e0df5569a53/makefiles/common.mk#L48
-    maxrregcount = "-maxrregcount=96"
+    maxrregcount = "-maxrregcount=80"
     return cuda_default_copts() + select({
         "@local_config_cuda//cuda:using_nvcc": [
Then apply the patch:
git apply tf1.13-aarch64.patch

3. Set Configuration

Run the command:
./configure
The console will then ask a series of questions about your configuration (enable only the options you actually need):
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.19.2- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python
Please input the desired Python library path to use. Default is [/opt/ros/kinetic/lib/python2.7/dist-packages]
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: Y
XLA JIT support will be enabled for TensorFlow. Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow. Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow. Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow. 
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:   
Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-10.0
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.2.2
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]:   
Do you wish to build TensorFlow with TensorRT support? [y/N]: Y
TensorRT support will be enabled for TensorFlow. Please specify the location where TensorRT is installed. [Default is /usr/lib/aarch64-linux-gnu]:  
Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]:   
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 7.2,7.5
Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]: N
No MPI support will be enabled for TensorFlow. 
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:   
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs.
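For repeatable builds it can be handy to answer ./configure non-interactively through environment variables instead of typing at the prompts. A sketch, assuming the variable names read by configure.py in this TensorFlow version and reusing the paths and versions chosen in the interactive session above:

```shell
# Sketch: pre-answer ./configure via environment variables; configure.py
# reads these instead of prompting. Values mirror the session above.
export PYTHON_BIN_PATH=/usr/bin/python
export TF_ENABLE_XLA=1
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
export TF_NEED_CUDA=1
export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.0
export TF_CUDA_VERSION=10.0
export TF_CUDNN_VERSION=7.2.2
export TF_NEED_TENSORRT=1
export TF_CUDA_COMPUTE_CAPABILITIES=7.2,7.5
export TF_NEED_MPI=0
yes "" | ./configure   # remaining questions fall back to their defaults
```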

4. Build TensorFlow for the Chosen API (C/C++/Python)

C API

bazel build --config=opt --config=cuda //tensorflow:libtensorflow.so

C++ API

bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so

Python API

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package 

Additional content: build a .whl file

bazel-bin/tensorflow/tools/pip_package/build_pip_package tensorflow_pkg
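Once build_pip_package finishes, the wheel lands in the tensorflow_pkg directory and can be installed with pip (the exact wheel filename depends on your Python version and platform tag, so a glob is used here rather than a hardcoded name):

```shell
# Install the freshly built wheel; the glob avoids hardcoding the
# cp27/cp35 and aarch64 platform tags in the filename.
pip install tensorflow_pkg/tensorflow-1.13.1-*.whl
```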

5. Copy to lib

After building for about 3~4 hours, you will find libtensorflow.so under tensorflow/bazel-bin/tensorflow.
You could include that folder directly, but I recommend copying the files into the system directories:
sudo mkdir /usr/local/include/tf-c
sudo cp -r bazel-genfiles/ /usr/local/include/tf-c/
sudo cp -r tensorflow /usr/local/include/tf-c/
sudo cp -r third_party /usr/local/include/tf-c/
sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib/
sudo ldconfig

6. Verify

C

Create a file hello_tf.c as follows:
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main() {
  printf("Hello from TensorFlow C library version %s\n", TF_Version());
  return 0;
}
Compile it and check (the -I flag points at the header folder copied in step 5):
gcc hello_tf.c -I/usr/local/include/tf-c -ltensorflow -o hello_tf

./hello_tf

Python

# python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

