TensorFlow GPU Installation
Note: as of version 1.2, the GPU build is no longer supported on Mac OS X (the CPU build is still supported).
Installing with pip
The simplest method is to install with pip:
# Python 2.7
pip install --upgrade tensorflow-gpu
# Python 3.x
pip3 install --upgrade tensorflow-gpu
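Once pip finishes, you can sanity-check that the package is importable before running any real workload. A minimal sketch (note that tensorflow-gpu is imported under the module name tensorflow):

```python
import importlib.util

def is_importable(package_name):
    """Return True if the named package can be imported in this interpreter."""
    return importlib.util.find_spec(package_name) is not None

# The tensorflow-gpu pip package is imported as "tensorflow".
print(is_importable("tensorflow"))
```

This only confirms the package is on the path; actual GPU visibility is verified at the end of this page.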
Docker
First install nvidia-docker:
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
Then start the GPU version of TensorFlow using the gcr.io/tensorflow/tensorflow:latest-gpu image:
nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
CUDA and cuDNN
Install CUDA:
# Check for CUDA and try to install.
if ! dpkg-query -W cuda; then
# The 16.04 installer works with 16.10.
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
dpkg -i ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
apt-get update
apt-get install libcupti-dev cuda -y
fi
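After the apt install, a quick way to confirm the toolkit landed is to check that nvcc (the CUDA compiler driver) is on the PATH. A minimal sketch:

```python
import shutil

def cuda_toolchain_present():
    """Report whether nvcc can be found on the current PATH."""
    return shutil.which("nvcc") is not None

print(cuda_toolchain_present())
```

Note that on a fresh install nvcc lives under /usr/local/cuda/bin, which may need to be added to PATH before this check reports True.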
Install cuDNN:
First register at https://developer.nvidia.com/cudnn and download cuDNN v5.1, then install it with the following commands:
wget https://www.dropbox.com/s/xdak8t60lzk11zb/cudnn-8.0-linux-x64-v5.1.tgz?dl=1 -O cudnn-8.0-linux-x64-v5.1.tgz
tar zxvf cudnn-8.0-linux-x64-v5.1.tgz
sudo ln -s /usr/local/cuda-8.0 /usr/local/cuda
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
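To double-check which cuDNN version the copied header actually provides, you can read the version macros out of cudnn.h. A minimal sketch against a sample header fragment (on a real system you would read /usr/local/cuda/include/cudnn.h; the patchlevel shown is only an illustrative value):

```python
import re

# A fragment like the version macros near the top of cudnn.h.
SAMPLE_HEADER = """
#define CUDNN_MAJOR      5
#define CUDNN_MINOR      1
#define CUDNN_PATCHLEVEL 10
"""

def cudnn_version(header_text):
    """Extract (major, minor, patchlevel) from cudnn.h contents."""
    fields = {}
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        match = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        fields[name] = int(match.group(1))
    return (fields["CUDNN_MAJOR"], fields["CUDNN_MINOR"],
            fields["CUDNN_PATCHLEVEL"])

print(cudnn_version(SAMPLE_HEADER))
```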
Once installation is complete, run nvidia-smi to check the status of the GPU devices:
$ nvidia-smi
Fri Jun 16 19:33:35 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66 Driver Version: 375.66 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 0000:00:04.0 Off | 0 |
| N/A 74C P0 80W / 149W | 0MiB / 11439MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
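If you want to monitor utilization from a script rather than eyeballing the table, one option is to run nvidia-smi via subprocess and scrape the GPU-Util column. A minimal sketch against a fragment of the sample output above:

```python
import re

# A fragment of the nvidia-smi table shown above.
SMI_OUTPUT = """
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|   0  Tesla K80           Off  | 0000:00:04.0     Off |                    0 |
| N/A   74C    P0    80W / 149W |      0MiB / 11439MiB |    100%      Default |
"""

def gpu_utilization(smi_text):
    """Extract the GPU-Util percentages from nvidia-smi's table output."""
    return [int(p) for p in re.findall(r"(\d+)%", smi_text)]

print(gpu_utilization(SMI_OUTPUT))
```

For machine-readable output, nvidia-smi also offers query flags that avoid scraping the human-oriented table, which is more robust for long-running monitoring.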
Verifying the installation
$ python
>>> from tensorflow.python.client import device_lib
>>> print(device_lib.list_local_devices())
...
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 9675741273569321173
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 11332668621
locality {
bus_id: 1
}
incarnation: 7807115828340118187
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0"
]
>>>
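The memory_limit values in the device list above are in bytes, which is awkward to read at a glance. A small helper can turn them into readable summaries; a sketch using the values from the session above (the tuple format here is just an illustration, not device_lib's actual return type):

```python
def summarize_devices(devices):
    """Format (name, device_type, memory_limit_bytes) tuples as readable lines."""
    lines = []
    for name, device_type, memory_limit in devices:
        lines.append("%s (%s): %.2f GiB" % (name, device_type,
                                            memory_limit / 2.0 ** 30))
    return lines

# Values taken from the device list printed above.
devices = [("/cpu:0", "CPU", 268435456), ("/gpu:0", "GPU", 11332668621)]
for line in summarize_devices(devices):
    print(line)
```

Seeing a "/gpu:0" entry here confirms TensorFlow can see the GPU; if only "/cpu:0" appears, re-check the CUDA and cuDNN steps above.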