RAVADA VDI 0.2.9: Issues and vulnerabilities

Preamble

The RAVADA VDI project is awesome. I already talked about it in a previous post (read it here), and I even wrote a 4-page tutorial for Linux User and Developer #181. It is still great, of course, but it has some important security issues and flaws I would like to describe here. Most of these problems can easily be addressed by adding some sanity checks to the code. Let’s go then!

Broken Access Control

Every single issue I have found belongs to the A4 – Broken Access Control category described in the OWASP Top 10. Going through the Perl code in /usr/sbin/rvd_front, you can find several HTTP GET requests that handle different aspects of the RAVADA VDI back-end’s functionality, such as enumerating users, virtual machines, and so on. Most of these requests have sanity checks to ensure the user is either authenticated, or authenticated and holding admin privileges; e.g.:

any '/new_machine' => sub {
     my $c = shift; 
     return access_denied($c)    if !$USER->is_admin;  
     return new_machine($c);
};

The previous code snippet prevents any non-admin user from creating new domains. However, some other requests do not have these checks in place. It is easy to get a list of valid HTTP requests exposed by the web interface by running:

cat /usr/sbin/rvd_front | grep -E "^get|^any"

Then, you can go directly to every request handler and see whether it performs any sanity checks. If it does not, you can try to access that resource directly from your web browser and see what happens. For example:

get '/list_machines.json' => sub {
get '/list_users.json' => sub {

After logging in as a non-privileged, non-admin user, you can open the following URLs:

http://RAVADA_SERVER/list_machines.json

http://RAVADA_SERVER/list_users.json

The first URL will give you a list of all the created virtual machines, no matter whether you have enough privileges to see them or not. The second one will dump the entire user database (including password hashes for local accounts). By the way, passwords are hashed using MySQL's SHA-1 function:

Getting a full list of available VMs in JSON format.

Dumping the whole user database in JSON format.
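As a quick check, any authenticated non-admin user can pull both endpoints straight from the command line. Here is a minimal sketch with curl; the cookie name is an assumption (the front-end is Mojolicious-based), so copy the real session cookie from your browser after logging in:

# Dump the machine list and the user database as a plain, non-admin user.
# "mojolicious" is the assumed session cookie name; replace the placeholder
# value with the cookie issued to your own browser session.
curl -s -b "mojolicious=<SESSION_COOKIE>" http://RAVADA_SERVER/list_machines.json
curl -s -b "mojolicious=<SESSION_COOKIE>" http://RAVADA_SERVER/list_users.json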

Playing with someone else’s VMs.

Apart from being able to dump the entire user database, thanks to the list of available VMs you can modify someone else’s domains by simply accessing the following URL (replace ID with the id:NUMBER obtained from navigating to /list_machines.json):

https://RAVADA_SERVER/machine/settings/ID.html

The VM list also gives you the 4-character pseudo-random string used as the Spice session password. So, for every listed VM that is up (is_active:1), you can obtain its Spice password (the spice_password field) in case the session is password-protected, and try to connect to it. Besides, you also get the remote IP address currently connected to the Spice session (the remote_ip field). A simple Bash script can parse the entire list_machines.json file and then iterate through every Spice password, trying to establish a connection to the remote session:

#!/bin/bash
# Extract every non-null spice_password value from the dumped JSON
# (JSON.sh is the small shell JSON parser used to flatten the file).
pwds=`./JSON.sh < ../list_machines.json -l -p -n | grep spice_password | grep -v null | awk '{print $2}' | tr -d "\"" | xargs`

for p in $pwds
do
# Build a minimal .vv connection file for remote-viewer.
/bin/cat <<EOM >sp.vv
[virt-viewer]
type=spice
port=PORT
host=HOST
...
EOM
	echo "PWD=$p"
	# Append the candidate Spice password and try to connect.
	echo "password=$p" >> sp.vv
	remote-viewer sp.vv
done
exit 0

There are more things to consider but, suffice it to say, the developers have already been told and they have fixed all these issues in release 0.2.10. Get it from here.

Send a newer PXE kernel version to a new computer node on Rocks Cluster 6.2 without altering the frontend

The issue

You have a Rocks Cluster deployed at your university. The latest official Rocks version is 6.2, and it ships with kernel 2.6.32 out of the box. This is the same kernel that is installed on every single computer node, and the very same one that is sent via PXE to the computer nodes in order to boot up the installation process. After a long while performing computer simulations, you have bought new computer nodes to add to the cluster. It comes as no surprise when, once you have started the installation process via PXE, a kernel panic arises: unsupported CPU family. Sometimes it’s not a kernel panic; it’s just that there is no support for your newer hardware and Anaconda stops the kickstart process altogether, waiting for you to provide the installer with some additional drivers. Crap.

It’s not the end of the world

So you go through the Rocks documentation and find how to compile and install a custom GNU/Linux Kernel for your computer nodes. Nice:

http://central6.rocksclusters.org/roll-documentation/base/6.1.1/customization-kernel.html

But this does not update the kernel and initramfs sent via PXE, so the node is not going to be able to start the installation process anyway. Reading on, you find that in order to update the PXE kernel and initramfs you have to install this new kernel on the frontend, and from there rocks-boot will take care of updating the /tftpboot/pxelinux/vmlinuz-6.1.1-x86_64 and /tftpboot/pxelinux/initrd.img-6.1.1-x86_64 files. But hey, wait a minute. What if you do not want to install a new kernel on the frontend? Fret not: here is a safe solution I came up with.

First step: Set up the New Installation Kernel.

Clone the Rocks kernel repository from GitHub first:

git clone https://github.com/rocksclusters/kernel.git

Then, download the kernel you need, extract it, and configure it. You can copy the config file from the running kernel and then use "make olddefconfig" to speed things up. If you need a particular driver or kernel option, make sure it is set before going further (you can run make olddefconfig first, followed by make menuconfig). For illustrative purposes, we will be using kernel 3.18.68 here:

cd kernel/src/kernel.org
wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.18.68.tar.gz
tar xvfz linux-3.18.68.tar.gz
cd linux-3.18.68
cp /boot/config-2.6.32-504.16.2.el6.x86_64 .config
make olddefconfig

Once this is done, copy this new kernel config file to the parent directory, delete the entire linux-3.18.68 source directory and change the VERSION directive in the version.mk file:

cp .config ../config-3.18.68
cd ..
rm -rf linux-3.18.68
vi version.mk

The new version.mk file should look like this:

NAME = kernel
RELEASE = 1

VERSION = 3.18.68
PAE = 0
XEN = 0

Now the tricky part. In the past, CentOS 5 and earlier versions shipped with kudzu and mkinitrd, but this is long gone now: kudzu no longer exists and mkinitrd has been replaced with dracut. Therefore, you must change this in the kernel.spec.in file; edit it and remove these lines:

Requires: mkinitrd
Requires: kudzu

Add this line to the file:

Requires: dracut
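If you prefer to script these edits, a sed one-liner like the following should do the trick (assuming the Requires lines appear exactly as shown above):

# Replace the mkinitrd requirement with dracut and drop the kudzu one.
sed -i -e 's/^Requires: mkinitrd/Requires: dracut/' -e '/^Requires: kudzu/d' kernel.spec.in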

The %post section is supposed to ensure that initramfs gets generated during the post-installation of the new kernel image. However, on the computer node there is no such executable as /sbin/new-kernel-pkg, so the following command won’t be executed (this is not even mentioned in the official Rocks documentation!):

%post

[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --package %{name} \
--mkinitrd --dracut --depmod --install %{kernelversion}

If initramfs does not get generated, the computer node will panic and won’t boot up. Remove these two lines in kernel.spec.in and write this one instead:

/sbin/dracut -f /boot/initramfs-%{kernelversion}.img %{kernelversion}
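The resulting %post section in kernel.spec.in should then look roughly like this:

%post
/sbin/dracut -f /boot/initramfs-%{kernelversion}.img %{kernelversion}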

Now you can build the rpm packages:

make rpm

Once the RPMs are built, copy the resulting packages to the Rocks install directory and regenerate the distro:

cp ../../RPMS/<arch>/kernel*rpm /export/rocks/install/contrib/6.2/<arch>/RPMS/
cd /export/rocks/install
rocks create distro

The first step is now complete. From now on, this kernel will be installed by Anaconda on the computer nodes (on the new ones, and on the old ones whenever they are re-installed).

Second step: Set up the New PXE Kernel

Extract the contents of the previously built kernel-<VERSION> package into a new directory. You need the vmlinuz image and the new modules before building the new initramfs for the PXE installation:

mkdir kernel.binary
cd kernel.binary
rpm2cpio RPMS/<arch>/kernel-<version>.rpm | cpio --extract --make-directories --verbose
cp boot/vmlinuz-<VERSION> /tftpboot/pxelinux/vmlinuz-<VERSION>
rsync -avtHDl lib/modules/* /lib/modules

Now you have to construct the new initramfs for this new kernel using dracut. Don’t forget to add as many network drivers as needed (for example, for old and new nodes):

depmod -ae <VERSION>
dracut --add-drivers "r8169.ko e1000.ko e1000e.ko" -f /tmp/initramfs <VERSION>
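Before going any further, it may be worth checking that the drivers really made it into the image. lsinitrd ships with dracut, so something along these lines should work:

# List the contents of the freshly built initramfs and look for the drivers.
lsinitrd /tmp/initramfs | grep -E 'r8169|e1000'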

Now the trickiest part of all: you need to modify the Rocks-provided initramfs in order to add the new modules. First, create a new directory and extract the contents of the original initramfs into it:

mkdir initramo
cd initramo
cp /tftpboot/pxelinux/initrd.img-6.2-x86_64 initrd.img-6.2-x86_64.lzma
unlzma initrd.img-6.2-x86_64.lzma
cat initrd.img-6.2-x86_64|cpio -i

Remove the old modules because you do not need them; don’t forget to delete the initrd file itself too:

rm -rf lib/modules/2.6.32-504.16.2.el6.x86_64
rm -rf initrd.img-6.2-x86_64

Now go back up one level and extract, into a separate directory, the contents of the new initramfs previously generated with dracut:

cd ..
mkdir initramn
cd initramn
gzip -dc /tmp/initramfs|cpio -i

Copy all the new modules to the original initramfs directory:

rsync -avtHDl lib/modules/<VERSION> ../initramo/lib/modules/

Get back to the original initramfs directory and generate the final initramfs file; save it to /tftpboot/pxelinux:

cd ../initramo
find . | cpio --create --format='newc' > /tmp/newinitrd
gzip /tmp/newinitrd
cp /tmp/newinitrd.gz /tftpboot/pxelinux/initramfs-<VERSION>.img
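If you want to double-check the result, you can list the contents of the new image and make sure the new modules directory is in there:

# The image is a gzip-compressed cpio archive, so zcat + cpio -t lists it.
zcat /tftpboot/pxelinux/initramfs-<VERSION>.img | cpio -t | grep 'lib/modules/<VERSION>' | head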

Set the following permissions on both the kernel and the initramfs image:

chmod 755 /tftpboot/pxelinux/vmlinuz-<VERSION>
chmod 755 /tftpboot/pxelinux/initramfs-<VERSION>.img

Done; the second step is now complete.

Third step: send the new kernel via PXE to the nodes

Finally, in order to use this new kernel and initramfs via PXE, use the rocks command. First, add a new bootaction, making sure to set the “kernel” and “ramdisk” parameters to the files just generated:

rocks add bootaction action="install newkernel" kernel="vmlinuz-<VERSION>" ramdisk="initramfs-<VERSION>.img" args="ks ramdisk_size=150000 lang= devfs=nomount pxe kssendmac selinux=0 noipv6 ksdevice=bootif"

Next, set this new action as the install action for your particular subset of newer computer nodes:

rocks set host <NODE> installaction action="install newkernel"

Now, make sure that during the next PXE boot, the desired node or nodes will be re-installed:

rocks set host boot <NODE> action=install
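Before actually rebooting the node, you can sanity-check the new settings; something along these lines should list the new bootaction and the per-host boot configuration:

rocks list bootaction
rocks list host boot <NODE>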

Conclusions

Now you can build a new kernel, deploy it on your computer nodes, and update the PXE kernel and initramfs without installing the new kernel on the frontend or dealing with the rocks-boot package. This way you can even send different kernels via PXE to different computer nodes, should you need to. Besides, even if the default PXE kernel does allow the new nodes to be installed correctly, you still won’t be able to boot them up with a newer kernel because of the /sbin/new-kernel-pkg issue described above. For old computer nodes that are already installed, this should work out of the box: a new kernel is not going to break them. Anyway, you can try it on an old node first to check that it still works fine after being reinstalled with the new kernel. If the default PXE kernel does allow the old node to be installed, it is perfectly safe to install it using that default PXE kernel and then boot it up with the new kernel package.

Installing and using RAVADA VDI on Debian Jessie

Introducing RAVADA VDI

This is an incredible effort to bring the power of VDI to you using only open-source libraries and tools. It is mainly developed in Perl 5, and it relies on KVM for virtualization and Spice for sound, I/O and graphics. Of course, there are still a lot of things to polish, such as improving its security: it does not implement any sort of encryption save for TLS on the RAVADA VDI web front-end (by means of using Apache, for example). The way the user authenticates against the Spice remote server is poor: only a 4-character pseudo-random password, easily prone to brute-forcing. But this is a start, and a hell of a start indeed! I’ve been testing the whole framework myself and I have to admit that I am impressed. Thus, I was a bit disappointed to discover that it did not work out of the box on Debian systems. The installation is really easy, but only if you are deploying this solution on Ubuntu-based distros. A pity. So I decided to have a look into it and make it work on Debian Jessie.

First Issue: MySQL version < 5.6

You can follow the instructions and install the framework on your Debian Jessie box up until the “Ravada Web User” section (http://ravada.readthedocs.io/en/latest/docs/INSTALL.html). Debian Jessie ships with MySQL 5.5, and according to the RAVADA VDI documentation, MySQL 5.6 is required. In fact, though, it does not use anything from MySQL 5.6 that is not present in MySQL 5.5, so you can still use this framework without the pain of updating your MySQL version. MySQL 5.5 does not allow a DEFAULT value of CURRENT_TIMESTAMP for DATETIME fields; therefore, if you try to add a new user using the rvd_back Perl script, you will get an error:

rvd_back --add-user lud.test
INFO: creating table messages
DBD::mysql::db do failed: Invalid default value for 'date_send' at /usr/share/perl5/Ravada.pm line 276.

You need to change the type of the “date_send” field in the “messages” table from DATETIME to TIMESTAMP, since TIMESTAMP fields in MySQL 5.5 do support a DEFAULT value of CURRENT_TIMESTAMP:

CREATE TABLE `messages` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`id_user` int(11) NOT NULL,
`id_request` int(11),
`subject` varchar(120) DEFAULT NULL,
`message` text,
`date_send` timestamp default now(),
`date_shown` datetime,
`date_read` datetime,
PRIMARY KEY (`id`),
KEY `id_user` (`id_user`)
);

Use your favourite ASCII editor and make this small alteration in the file /usr/share/doc/ravada/sql/mysql/messages.sql.
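Once the file is fixed, re-running the user creation should go through. A quick way to double-check the new column definition (the ravada database name is an assumption; adjust it to your setup):

# Re-run the user creation and inspect the resulting column types.
rvd_back --add-user lud.test
mysql -u root -p ravada -e "DESCRIBE messages"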

Second Issue: no kvm-spice binary

Reading the RAVADA VDI documentation, we notice:

Debian jessie has been tried but kvm spice wasn’t available there, so it won’t work.

which is not true at all. Spice is already available on Debian Jessie. All you need to do in order to check that is to run the strings command on the QEMU binary that the kvm wrapper script executes:

cat /usr/bin/kvm
#! /bin/sh
exec qemu-system-x86_64 -enable-kvm "$@"

strings /usr/bin/qemu-system-x86_64 |grep spice
spicevmc
spiceport
qemu_spice_create_update
qemu_spice_wakeup
qemu_spice_del_memslot
qemu_spice_add_memslot
qxl_spice_update_area_rest
qxl_spice_update_area
qxl_spice_reset_memslots
qxl_spice_reset_image_cache
qxl_spice_reset_cursor
qxl_spice_oom
qxl_spice_loadvm_commands
qxl_spice_monitors_config
qxl_spice_destroy_surfaces
spice_vmc_event
spice_vmc_register_interface
spice_vmc_read
spice_vmc_write
qemu_spice_destroy_primary_surface
qemu_spice_create_primary_surface
qxl_spice_flush_surfaces_async
qxl_spice_destroy_surface_wait
qxl_spice_destroy_surface_wait_complete
qxl_spice_destroy_surfaces_complete
spice_vmc_unregister_interface

So Debian Jessie does have Spice support in the qemu-kvm package. The problem here is that on Ubuntu systems there is a /usr/bin/kvm-spice binary, whereas on Debian Jessie there isn’t. To fix this, you can create a symlink and be done with it:

# ln -s /usr/bin/kvm /usr/bin/kvm-spice

Third Issue: “persistent update of device ‘graphics’ is not supported”

This is a reported and well-known issue of libvirt0. The workaround on Debian Jessie systems is to add the backports repository and install libvirt0 from it:

apt-get -t jessie-backports install libvirt0
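In case jessie-backports is not yet enabled on your box, adding it is a one-liner (the mirror URL is just an example; use your preferred one):

# Enable the jessie-backports repository and refresh the package lists.
echo "deb http://ftp.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update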

After that, make sure you have the right version running on your system:

apt-cache madison libvirt0
libvirt0 | 3.0.0-4~bpo8+1 | http://ftp.es.debian.org/debian/ jessie-backports/main amd64 Packages
libvirt0 | 3.0.0-4~bpo8+1 | http://http.debian.net/debian/ jessie-backports/main amd64 Packages
libvirt0 | 3.0.0-4~bpo8+1 | http://http.debian.net/debian/ jessie-backports/main amd64 Packages
libvirt0 | 1.2.9-9+deb8u4 | http://ftp2.fr.debian.org/debian/ jessie/main amd64 Packages
libvirt0 | 1.2.9-9+deb8u3 | http://security.debian.org/ jessie/updates/main amd64 Packages
libvirt | 1.2.9-9+deb8u4 | http://ftp2.fr.debian.org/debian/ jessie/main Sources
libvirt | 1.2.9-9+deb8u3 | http://security.debian.org/ jessie/updates/main Sources

You should have version 3.0.0-4 instead of 1.2.9-9.

Fourth Issue: “Unsupported machine type”

Finally, whenever starting a new VDI you will get this error:

ERROR starting domain status:’done’ ( libvirt error code: 1, message: internal error: process exited while connecting to monitor: redir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on (process:29263): GLib-WARNING **: /build/glib2.0-ETetDu/glib2.0-2.48.0/./glib/gmem.c:483: custom memory allocation vtable not supported qemu-system-x86_64: -machine pc-i440fx-xenial,accel=kvm,usb=off,dump-guest-core=off: Unsupported machine type Use -machine help to list supported machines! )

Crystal clear: the -machine argument passed to every single VM defined by the shipped XML files is Ubuntu-based. So you have to edit these files and use the right -machine type for your Debian Jessie distro. For example, edit the “Debian Jessie AMD64” VM XML definition file and replace pc-i440fx-xenial with pc-i440fx-2.1:

vi /var/lib/ravada/xml/jessie-amd64.xml

<type arch='x86_64' machine='pc-i440fx-2.1'>hvm</type>

Of course, you can do as the error message suggests and get a list of valid machine types by issuing:

kvm -machine help
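If you want to fix all the shipped machine definitions in one go (assuming they all use the xenial machine type and that pc-i440fx-2.1 shows up in the list above), a quick sed should do:

# Replace the Ubuntu machine type with a Jessie-supported one in every XML file.
sed -i 's/pc-i440fx-xenial/pc-i440fx-2.1/' /var/lib/ravada/xml/*.xml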

Conclusions

It is quick and easy to make RAVADA VDI work on Debian Jessie. And it is better to use Debian than Ubuntu most of the time. So now, following these instructions, you can also benefit from this incredible work and start using an open-source VDI framework right away on your amazing Debian GNU/Linux distro!