Introduction
Ubuntu 19.04 has entered beta as I write this and will be released in a few weeks. I decided to install it and try it out. My initial impression is very positive. Subjectively, it looks like it was optimized for performance. It is the first Linux distribution release to use the new 5.0 kernel. Everything is up to date. There's a lot to like.
Although this is an xx.04 release, it is not an LTS (Long Term Support) release. It is a short-term release that will be supported for 9 months. The next LTS release will be 20.04, two years after the current LTS, Ubuntu 18.04. For a stable "production" installation, I still recommend using Ubuntu 18.04.
I consider Ubuntu 19.04 an experimental release and that's exactly what I'm doing with it: experimenting. I wanted to see if I could get some currently unsupported packages running. So far I have installed CUDA 10.1, docker 18.09.4, and NVIDIA-docker 2.0.3, and I am running the TensorFlow 2 alpha with GPU support. They are all working fine. In this post I will go over how to get CUDA 10.1 running on Ubuntu 19.04. Fortunately, it was easy to get working.
dbk
Teaser info output from this Ubuntu 19.04 install
$ lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu Disco Dingo (development branch)
Release:        19.04
Codename:       disco
$ uname -a
Linux u19 5.0.0-7-generic #8-Ubuntu SMP Mon Mar 4 16:27:25 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ gcc --version
gcc (Ubuntu 8.3.0-3ubuntu1) 8.3.0
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
$ docker run --runtime=nvidia -u $(id -u):$(id -g) --rm -it tensorflow/tensorflow:2.0.0a0-gpu-py3 bash

[TensorFlow ASCII-art banner]

You are running this container as user with ID 1000 and group 1000,
which should map to the user and group IDs on the Docker host. Great!

tf-docker /> python -c "import tensorflow as tf; print(tf.__version__)"
2.0.0-alpha0
**Disclaimer:** The following is my personal hack for getting CUDA installed and running on Ubuntu 19.04. It is not supported by anyone, especially not me!
Steps to install CUDA 10.1 on Ubuntu 19.04
Step 1) Install Ubuntu 19.04!
The first thing I tried for installing Ubuntu 19.04 was the "Desktop" ISO installer. It failed! It crashed during the install and I couldn't get it to work (I didn't try very hard since I have an easier method). To be fair, this was the "nightly" ISO build from March 26, 2019, a few days before the beta release. By the time you read this the "beta" will be available (or the full release if you're reading this after mid-April), and hopefully it installs from the "Desktop/Live" ISO without a hitch.
I used my "standard" workaround to install Ubuntu. I use the server installer and Ubuntu's wonderful `tasksel` tool to install a desktop. I installed my favorite MATE desktop. You can read how to do this in the following post,
Best way to install Ubuntu 18.04 with NVIDIA drivers and any desktop version. That method almost always works, and those instructions for 18.04 still apply to 19.04. But, if you follow the guide linked above, see the next step for the display driver.
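For reference, the tasksel trick from that guide boils down to something like the following (a condensed sketch; `ubuntu-mate-desktop` is the task I would use for MATE — run `tasksel --list-tasks` to see what's available on your install),

sudo apt-get update
sudo apt-get install tasksel
sudo tasksel install ubuntu-mate-desktop
sudo reboot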
Step 2) Install the NVIDIA Driver
You will need the NVIDIA 410 or newer display driver installed to work with CUDA 10.1. Otherwise, you will get the dreaded "Status: CUDA driver version is insufficient for CUDA runtime version". I recommend using the latest driver. The easiest way to install the driver is from the "graphics-drivers" PPA.
sudo add-apt-repository ppa:graphics-drivers/ppa
Install dependencies for the system to build the kernel modules,
sudo apt-get install dkms build-essential
Then install the driver (418 was the most recent at the time of this writing; if you type nvidia-driver and hit Tab while entering the install command below, you should see a list of all the driver versions available in the PPA).
sudo apt-get update
sudo apt-get install nvidia-driver-418
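If tab-completion isn't handy, a package search like the following should also list the driver packages available from the PPA (just a quick sketch),

apt-cache search --names-only 'nvidia-driver-'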
After installing the driver, go ahead and reboot.
sudo shutdown -r now
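After the reboot it's worth a quick sanity check that the new driver loaded and reports version 410 or newer,

nvidia-smi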
Step 3) Install the CUDA "Dependencies"
There are some dependencies that get installed when you run the full CUDA deb, but since we won't be using the deb, you'll want to install these separately. It's simple since we can get what we need with just four package installs,
sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev
These packages will pull in the needed GL, GLU, Xi, and Xmu libraries, along with several other libraries that get installed as their dependencies.
Step 4) Get the CUDA "run file" installer (use the Ubuntu 18.10 installer)
Go to the CUDA Zone and click the "Download Now" button. Then click through the selection buttons until you get the following,
[Screenshot: CUDA 10.1 download selection — Linux, x86_64, Ubuntu, 18.10, runfile (local)]
Download this.
Step 5) Run the "run file" to install the CUDA toolkit and samples
This is where we get the CUDA developer toolkit and samples onto the system. We will not install the bundled display driver, since the latest driver was installed in step 2). You can use `sh` to run the shell script (the ".run" file),
sudo sh cuda_10.1.105_418.39_linux.run
This is a new installer and it starts much slower than previous scripts (if you've done this before).
You will be asked to accept the EULA, of course, and after that you will be presented with a "selection" dialog. Uncheck the "Driver" box, then highlight "Install" and press "Enter".
" ─────────────│ CUDA Installer ││ - [ ] Driver ││ Toolkit [ 39 ] CUDA1 418. [X] CUDA Samples 10.1 ││ [X] CUDA Demo Suite 10.1 ││ [ X] CUDA 10.1 ││ Install ││ Options ││ ││ ││ ││ │││ │ ││ ││ │ ││ ││ ││ ││ ││ ││ ││ ││ ││ ││ ││ ││ │││ │ │ ││ ││ ││ Up/Right : Expand | 'Enter': Select | 'A': Advanced Options ─┘
This will do "the right thing". It will,
- install the CUDA toolkit to /usr/local/cuda-10.1
- create a symlink to /usr/local/cuda
- install the samples in /usr/local/cuda/samples and in your home directory in NVIDIA_CUDA-10.1_Samples
- add the appropriate library path config,

cat /etc/ld.so.conf.d/cuda-10-1.conf
/usr/local/cuda-10.1/targets/x86_64-linux/lib
What it doesn't do is set your PATH for the toolkit. That's the next step.
Step 6) Set your environment variables
There are two good ways to set environment variables so you can use CUDA.
- System-wide environment setup
- Per-user environment setup
In the past, you typically did system-wide environment setup. You could do this even for a single-user workstation, but you might prefer to create a small script that configures things just for the terminal you're working in when you need it.
System-wide alternative
To set up the CUDA environment for all users (and applications) on your system, create the file (use sudo and a text editor of your choice)
/etc/profile.d/cuda.sh
with the following content,
export PATH=$PATH:/usr/local/cuda/bin
export CUDADIR=/usr/local/cuda
The environment scripts in /etc/profile.d/ are sourced automatically when you log in or start a terminal. It's automatic.
The next time you log in, your shells will start with CUDA on their PATH, ready to use. If you want to load that environment into your current shell without logging out and back in, just do,
source /etc/profile.d/cuda.sh
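You can confirm that the toolkit is now on your PATH with,

nvcc --version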
Note on the LIBRARY PATH:
Installing the cuda-toolkit added a .conf file to /etc/ld.so.conf.d/, but what it added is not ideal and doesn't always seem to work correctly. If you are doing a system-wide environment setup, I suggest the following:
Move the installed conf file out of the way,
sudo mv /etc/ld.so.conf.d/cuda-10-1.conf /etc/ld.so.conf.d/cuda-10-1.conf-orig
Then create (using sudo and your editor of choice) the file
/etc/ld.so.conf.d/cuda.conf
that contains,
/usr/local/cuda/lib64
then run
sudo ldconfig
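A quick way to confirm the loader now sees the CUDA runtime libraries is,

ldconfig -p | grep libcudart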
This cuda.conf file in /etc/ld.so.conf.d/ points at the /usr/local/cuda symlink, so it will still be correct even if you change which CUDA version that link points to. (This is my "normal" way of setting up a system-wide environment for CUDA.)
Per-user, per-terminal alternative
If you want to activate your CUDA environment only when and where you need it, this is one way to do it. You may prefer this method to a system environment as it will keep your PATH cleaner and allow you to easily manage multiple CUDA versions. If you decide to use the ideas in this post to install another version of CUDA, say 9.2, alongside your 10.1, this will make the switch easier.
For a localized, per-user CUDA environment, create the following simple script. You don't need sudo for this, and you can keep the script anywhere in your home directory. You just "source" it whenever you want a CUDA development environment.
I will create the file with the name `cuda10.1-env`. Add the following lines to this file,
export PATH=$PATH:/usr/local/cuda-10.1/bin
export CUDADIR=/usr/local/cuda-10.1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.1/lib64
Note: I explicitly used the fully qualified path for version 10.1, i.e. `/usr/local/cuda-10.1` instead of the symlink `/usr/local/cuda`. You can use the symlink path if you like. I only did this in case you want to install another version of CUDA and create another environment script that points to the different version.
Now when you want your CUDA development environment, just do `source cuda10.1-env`. This will set those environment variables to your current shell. (You can copy this file to your working directory or provide the full path when using the `source` command.)
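For example (assuming you kept the script in your home directory),

source ~/cuda10.1-env
which nvcc      # should report /usr/local/cuda-10.1/bin/nvcc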
Step 7) Test CUDA by building the "samples"
Let's make sure everything works correctly. You can use the copy of the samples that the installer placed in your home directory under `NVIDIA_CUDA-10.1_Samples` or copy the samples from `/usr/local/cuda/samples`.
cd ~/NVIDIA_CUDA-10.1_Samples
source ~/cuda10.1-env
make -j4
Running this make command will compile and link all of the source examples as specified in the Makefile. (The -j4 just means run 4 "jobs"; make can build objects in parallel, so using more processes speeds up the build.)
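If you don't want to wait for the full build, you can build and run a single sample instead; for example (path assuming the default samples layout),

cd ~/NVIDIA_CUDA-10.1_Samples/1_Utilities/deviceQuery
make
./deviceQuery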
Once everything has finished building, you can `cd` to `bin/x86_64/linux/release/` and see all of the sample executables. Everything appears to have compiled without errors, even though this is an unsupported version of Ubuntu. I ran several of the programs and they worked as expected, including those using OpenGL graphics.
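For example, nbody makes a nice quick check of both compute and OpenGL (run from the release binaries directory),

cd ~/NVIDIA_CUDA-10.1_Samples/bin/x86_64/linux/release
./nbody -benchmark    # compute-only benchmark, no window
./nbody               # OpenGL visualization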
Just because the samples were built correctly doesn't mean there weren't problems with the installation, but it's a very good indication that you can confidently continue your development work!
Extras not discussed... docker, nvidia-docker, TensorFlow
I only covered getting CUDA 10.1 set up on Ubuntu 19.04. I also installed the latest docker and nvidia-docker. That was done using "bionic" (Ubuntu 18.04) based repository settings. Those deb packages installed correctly on 19.04. My basic procedure for installing and configuring Docker is laid out in a series of 5 posts from early 2018 (still relevant): How to Configure NVIDIA Docker and NGC Registry on Your Workstation - Part 5 Docker Performance and Resource Tuning. That post has links to the first 4.
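To give a rough idea of what "bionic based repository settings" means here, the Docker CE setup ends up looking something like this (a sketch only; follow the linked posts or Docker's own install docs for the full procedure),

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt-get update
sudo apt-get install docker-ce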
After getting docker and nvidia-docker configured, I ran Google's TensorFlow 2.0 alpha Docker image from DockerHub. I could have tried building the TensorFlow 2 alpha against this CUDA 10.1 install, but I'm not that brave. It's best to stick with docker or an Ubuntu 18.04 setup for that.
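Inside that container, a quick way to confirm TensorFlow can actually see the GPU (using the API as it stood in the 2.0 alpha) is,

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"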
[I tried installing TensorFlow from the pip package, but ended up with a segmentation fault in a system library. I don't recommend trying this.]
Recommendation
As I said at the beginning of the post, this is an experimental setup. Ubuntu 19.04 looks like a good Linux platform and has all of the latest packages to tempt you (while it's current). My serious recommendation: do this if you want to play with a state-of-the-art development environment; otherwise stick with Ubuntu 18.04. Your stable "production" platform should be Ubuntu 18.04. 18.04 will be supported for several more years, which means it will remain an attractive standard Linux platform for software builds. It should stay stable and well supported.
I'll be doing more posts on setting up Machine Learning/AI/Data Science/HPC/etc. environments. That includes setups for Windows 10 and Ubuntu 18.04. I probably won't do much more with Ubuntu 19.04 unless I get talked into it, LOL. It does look like a nice step forward to me. Congratulations to Canonical and the Ubuntu team!
Happy computing! –dbk
Tags: CUDA, docker, TensorFlow 2, Ubuntu 19.04