
[Tensorflow] How to install Tensorflow for aarch64 CPU (build from source)

TensorFlow is a powerful deep learning framework that is now widely used in the development and deployment of all kinds of deep learning applications.
Despite its large user base, a project that must install TensorFlow on an unusual combination of version / operating system / programming language / platform often means searching the sea of Google for answers, then testing each one after you find it.
Building from source can solve most of these problems, but because it takes a long time and is fiddly, many people run into difficulties during installation.

aarch64 | Python / C / C++ | Ubuntu (build from source)



1. Install Bazel (build from source)
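This post does not detail the Bazel build itself, so here is a minimal sketch, assuming Bazel 0.19.2 (the version shown in the ./configure output in step 3). The download URL and paths are assumptions; check the Bazel releases page for the dist archive matching your target version:

```shell
# Sketch: bootstrap-build Bazel from its dist archive (version/URL assumed)
wget https://github.com/bazelbuild/bazel/releases/download/0.19.2/bazel-0.19.2-dist.zip
mkdir bazel-dist && unzip bazel-0.19.2-dist.zip -d bazel-dist
cd bazel-dist
bash ./compile.sh                      # produces output/bazel
sudo cp output/bazel /usr/local/bin/bazel
```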

2. Prepare the TensorFlow Source

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout v1.13.1
Then create a file tf1.13-aarch64.patch in the current folder with the content below and save it:
diff --git a/tensorflow/lite/kernels/internal/BUILD b/tensorflow/lite/kernels/internal/BUILD
index 4be3226..b52b5b3 100644
--- a/tensorflow/lite/kernels/internal/BUILD
+++ b/tensorflow/lite/kernels/internal/BUILD
@@ -30,7 +30,6 @@ NEON_FLAGS_IF_APPLICABLE = select({
     ],
     ":armv7a": [
         "-O3",
-        "-mfpu=neon",
     ],
     "//conditions:default": [
         "-O3",
diff --git a/third_party/aws/BUILD.bazel b/third_party/aws/BUILD.bazel
index 5426f79..82d8a0d 100644
--- a/third_party/aws/BUILD.bazel
+++ b/third_party/aws/BUILD.bazel
@@ -24,7 +24,9 @@ cc_library(
         "@org_tensorflow//tensorflow:raspberry_pi_armeabi": glob([
             "aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
         ]),
-        "//conditions:default": [],
+        "//conditions:default": glob([
+            "aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
+    ]),
     }) + glob([
         "aws-cpp-sdk-core/include/**/*.h",
         "aws-cpp-sdk-core/source/*.cpp",
diff --git a/third_party/gpus/crosstool/BUILD.tpl b/third_party/gpus/crosstool/BUILD.tpl
index db76306..9539009 100644
--- a/third_party/gpus/crosstool/BUILD.tpl
+++ b/third_party/gpus/crosstool/BUILD.tpl
@@ -23,6 +23,7 @@ cc_toolchain_suite(
     "darwin|compiler": ":cc-compiler-darwin",
     "x64_windows|msvc-cl": ":cc-compiler-windows",
     "x64_windows": ":cc-compiler-windows",
+    "aarch64": ":cc-compiler-local",
     "arm": ":cc-compiler-local",
     "k8": ":cc-compiler-local",
     "piii": ":cc-compiler-local",
diff --git a/third_party/nccl/build_defs.bzl.tpl b/third_party/nccl/build_defs.bzl.tpl
index 42de79c..f37a129 100644
--- a/third_party/nccl/build_defs.bzl.tpl
+++ b/third_party/nccl/build_defs.bzl.tpl
@@ -87,7 +87,7 @@ def rdc_copts():
     # The global functions can not have a lower register count than the
     # device functions. This is enforced by setting a fixed register count.
     # https://github.com/NVIDIA/nccl/blob/f93fe9bfd94884cec2ba711897222e0df5569a53/makefiles/common.mk#L48
-    maxrregcount = "-maxrregcount=96"
+    maxrregcount = "-maxrregcount=80"
     return cuda_default_copts() + select({
         "@local_config_cuda//cuda:using_nvcc": [
then apply it with git:
git apply tf1.13-aarch64.patch

3. Set Configuration

Run:
./configure
The console will then ask a series of questions about your setup. (Answer according to the options you need):
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.19.2- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python

Please input the desired Python library path to use.
Default is [/opt/ros/kinetic/lib/python2.7/dist-packages]

Do you wish to build TensorFlow with XLA JIT support? [Y/n]: Y
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:

Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-10.0

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.2.2

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: Y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/aarch64-linux-gnu]:

Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]:

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 7.2,7.5

Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]: N
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs.

4. Build Tensorflow for Chosen API (C/C++/Python)

C API

bazel build --config=opt --config=cuda //tensorflow:libtensorflow.so

C++ API

bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so

Python API

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package 

Additional Content: Build a .whl File

bazel-bin/tensorflow/tools/pip_package/build_pip_package tensorflow_pkg
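The wheel is written into the tensorflow_pkg directory and can then be installed with pip. The exact filename depends on your Python version and platform tag, so the wildcard below is an assumption:

```shell
# Install the wheel produced by build_pip_package (filename pattern assumed)
pip install tensorflow_pkg/tensorflow-1.13.1-*.whl
```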

5. Copy to lib

After building for about 3 to 4 hours, you can find libtensorflow.so under tensorflow/bazel-bin/tensorflow.
You can include these folders in place, but I recommend copying them into the system directories:
sudo mkdir /usr/local/include/tf-c
sudo cp -r bazel-genfiles/ /usr/local/include/tf-c/
sudo cp -r tensorflow /usr/local/include/tf-c/
sudo cp -r third_party /usr/local/include/tf-c/
sudo cp -r bazel-bin/tensorflow/libtensorflow.so /usr/local/lib/
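After copying the shared library, it is worth refreshing the dynamic linker cache so that -ltensorflow resolves at link and load time (this assumes /usr/local/lib is in your linker search path, as it is on stock Ubuntu):

```shell
# Rebuild the ld.so cache so the new libtensorflow.so is found
sudo ldconfig
```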

6. Verify

C

Create a file hello_tf.c as follows:
#include <stdio.h>
#include <tensorflow/c/c_api.h>

int main() {
  printf("Hello from TensorFlow C library version %s\n", TF_Version());
  return 0;
}
compile and check (adding the include path from step 5):
gcc hello_tf.c -I/usr/local/include/tf-c -ltensorflow -o hello_tf

./hello_tf

Python

# python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

