
Putting tensorflow on my laptop

How is deep learning awful this time?

I want to do machine learning without the cloud, which, as we learned previously, is awful.

But also I’m a vagabond with nowhere safe and high-bandwidth to store a giant GPU machine (campus IT don’t return my calls about it; I think they think I’m taking the piss.)

So, let’s buy a Razer Blade 2016, a nice portable, surprisingly cheap laptop with all the latest features and performance comparable to the kind of single-GPU desktop machine I could afford.

I don’t want to do anything fancy here, just process a few gigabytes of MP3 data. My data is stored on the AARNET ownCloud server. It’s all quite simple, but the algorithm is just too slow without a GPU, and I don’t have a GPU machine I can leave running. I’ve developed it in keras v1.2.2, which depends on tensorflow 1.0.

So installing CUDA etc. on the laptop is straightforward. Making it run is not.
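For the record, the straightforward part is roughly the following sketch, assuming NVIDIA’s CUDA apt repository for Ubuntu 16.04 is already set up (exact package names vary, and cuDNN is a separate download from the NVIDIA developer site):

sudo apt update
# the -toolkit- metapackage installs CUDA 8.0 without dragging in a display driver
sudo apt install cuda-toolkit-8-0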

Daniel Teichmann’s Bumblebee instructions.

NVIDIA’s howto points out that you will need to care about Bumblebee to survive Ubuntu.

Confusing, because most sources seem to think you want to have NVIDIA graphics; but what if you merely want NVIDIA computing via CUDA, and don’t care about graphics?

In stark contrast to NVIDIA, the Bumblebee folks claim:

There is sometimes confusion about CUDA. You don’t need Bumblebee to run CUDA. Follow the How-to to get CUDA working under Ubuntu.

Lies! Without Bumblebee the NVIDIA card gets switched off.

Their second point is interesting, though:

There is however a new feature (--no-xorg option for optirun) in Bumblebee 3.2, which makes it possible to run CUDA / OpenCL applications that does not need the graphics rendering capabilities.
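A minimal sketch of that approach, assuming Bumblebee ≥ 3.2 and a job that only needs CUDA, not rendering:

# CUDA wants the uvm module; optirun --no-xorg may not load it for you
sudo modprobe nvidia nvidia-uvm
# run on the discrete GPU without spinning up a secondary X server
optirun --no-xorg python jobs/spectrogram_normed.py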

Bumblebee is its own small world. There is a walkthrough for a Razer Blade. It has a debugging page, which you will need.

To file

# load the NVIDIA kernel modules; CUDA needs nvidia-uvm
sudo modprobe nvidia-uvm nvidia
# run the job on the discrete GPU via primus, with an interactive interpreter
primusrun python -i jobs/spectrogram_normed.py
# check that the card is up and what is using it
primusrun nvidia-smi
# same job with the GPU hidden from CUDA, for a CPU-only comparison
CUDA_VISIBLE_DEVICES= primusrun python jobs/spectrogram_normed.py

Tensorflow ACPI interaction.

Turn Off Discrete nVidia Optimus Graphics Card in Ubuntu

Tensorflow wants the whole GPU: by default a session grabs nearly all of the GPU’s memory for itself.

Webupd8 bumblebee howto

Monitoring NVIDIA power state without waking the NVIDIA card (see the bbswitch check below).

Boot without graphics in Ubuntu.

https://dip4fish.blogspot.com/2016/03/using-tensorflow-07-from-python-on.html
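On the power-monitoring point: bbswitch, which Bumblebee uses to switch the card, exposes the power state directly, so you can check it without waking the card the way nvidia-smi would. Assuming the bbswitch module is loaded:

# prints e.g. "0000:01:00.0 OFF" when the card is powered down
cat /proc/acpi/bbswitch
# force it off if something left it on
echo OFF | sudo tee /proc/acpi/bbswitch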

Tensorflow compilation

Using python library path: /home/dan/.virtualenvs/keras2/lib/python3.5/site-packages
Do you wish to build TensorFlow with MKL support? [y/N]
No MKL support will be enabled for TensorFlow
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y
XLA JIT support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
nvcc will be used as CUDA compiler
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-8.0
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the cuDNN version you want to use. [Leave empty to use system default]: 5.1.10
Please specify the location where cuDNN 5.1.10 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-8.0]: /usr/include/x86_64-linux-gnu
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 6.1
Do you wish to build TensorFlow with MPI support? [y/N]
MPI support will not be enabled for TensorFlow
........
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.

Configuration finished
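From here the remaining steps are the stock TensorFlow source-build commands, roughly as follows (the wheel filename will differ by version and Python):

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl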

Keyboard lights

Install the dorky keyboard drivers and the dorky GUI.

sudo add-apt-repository ppa:terrz/razerutils
sudo add-apt-repository ppa:lah7/polychromatic
sudo apt update
sudo apt install polychromatic
sudo gpasswd -a $USER plugdev  # group name is a guess; check the razerutils docs