
As promised in my previous blog post, How to setup Openstack Havana with Docker driver, here I would like to share some of my experience working with the Havana/Docker setup. I'll explain how to run containers with secure (ssh) access in Openstack/Docker and how to access the user data passed to the containers, overcoming the technical difficulties in the versions used in our setup.

We will create an Ubuntu image in the local Docker repository using a Dockerfile and then transfer that image into the glance repository. The image we create fixes the following issues that we found in the selected version of Docker:

– It does not allow passing user data at container startup.

– Users cannot pass a public key at instance boot and use it for access.

– Users cannot change the /etc/hosts file.

So we will fix these issues, which are critical when using Openstack/Docker as the IaaS for Stratos. We will create a 64 bit Ubuntu image fixing the above issues, which can be used as a base image for creating cartridge images for Stratos.

You can download all the scripts and other material used in this blog from [2]: Dockerfile, metadata_svc_bugfix.sh, file_edit_patch.sh, run_scripts.sh and ubuntu64-docker-ssh.tar.gz.

Let’s start with the snapshot of our setup saved earlier. Create a virtualbox VM from this snapshot.

Then you need to rejoin the Openstack session using
cd devstack
./rejoin-stack.sh
. openrc

Or instead of running rejoin-stack.sh you can run stack.sh. But in that case you will lose your previous data, including images stored in the glance repository and previously run instances.

Now open another terminal to the virtual machine.

Upload the 64 bit Ubuntu image you downloaded above into the Docker repository. We will use it as the base image for the images we create:

docker import - ubuntu64base < ./ubuntu64-docker-ssh.tar.gz

Create a new folder, say stratosbase, and move into it:
mkdir stratosbase
cd stratosbase

Create the file below and name it Dockerfile:

# stratosbase
# VERSION 0.0.1
FROM ubuntu64base
MAINTAINER Damitha Kumarage "damitha23@gmail.com"
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update

RUN apt-get install -y openssh-server
RUN echo 'root:g' |chpasswd

RUN apt-get install -q -y zip
RUN apt-get install -q -y unzip
RUN apt-get install -q -y curl

ADD metadata_svc_bugfix.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/metadata_svc_bugfix.sh
ADD file_edit_patch.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/file_edit_patch.sh
ADD run_scripts.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/run_scripts.sh
ENTRYPOINT /usr/local/bin/run_scripts.sh | /usr/sbin/sshd -D

What this Dockerfile does is self-descriptive. Note that I run the sshd daemon as an ENTRYPOINT instead of a CMD. The reason is that the Docker driver will override “/usr/sbin/sshd -D” with “sh” if I use CMD, and consequently the sshd daemon will not run. We have also set the ssh password for root to ‘g’.
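The pipe in the ENTRYPOINT is just a way to launch both programs from one command: both sides of a pipe are started concurrently. A stand-in sketch of the pattern, with echo and cat playing the roles of run_scripts.sh and sshd:

```shell
# Both sides of a pipe are started together; cat stays in the foreground
# the way "sshd -D" would, while echo plays the role of run_scripts.sh.
sh -c 'echo setup-done | cat'
# prints: setup-done
```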

There is a problem in Docker containers as of the versions used here: they do not allow instances to download user data. The reason is described in the Openstack bug report [1]. We will fix this problem with a patch script called metadata_svc_bugfix.sh. In this patch we also retrieve the ssh public key passed to the instance at boot. There is also a limitation in Docker containers that does not allow editing the /etc/hosts file. We will circumvent this issue with another patch script called file_edit_patch.sh.
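The NIC detection in metadata_svc_bugfix.sh just parses the output of ip a; the pipeline can be sanity-checked against a canned sample line (the interface index and name below are made up):

```shell
# A canned line in the format "ip a" prints (index and name are made up):
sample='5: pvnet5: <BROADCAST,MULTICAST,UP> mtu 1500'
printf '%s\n' "$sample" | grep pvnet | head -n 1 | cut -d: -f2
# prints " pvnet5" (with a leading space, which the script tolerates)
```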

We introduce another script called run_scripts.sh, which is executed at startup of the Docker container and simply invokes the above two patch scripts.

Following are the scripts mentioned.

metadata_svc_bugfix.sh:

#!/bin/bash
# Wait until the nova network interface (pvnet*) appears in the container
NOVA_NIC=$(ip a | grep pvnet | head -n 1 | cut -d: -f2)
while [ "$NOVA_NIC" == "" ] ; do
    echo "Find nova NIC..."
    sleep 1
    NOVA_NIC=$(ip a | grep pvnet | head -n 1 | cut -d: -f2)
done
echo $NOVA_NIC
echo "Device $NOVA_NIC found. Wait until ready."
sleep 3
# Setup a network route to ensure we use the nova network.
# (The gateway address was elided in the original; route via the device.)
echo "[INFO] Create default route for $NOVA_NIC."
ip r r default dev $NOVA_NIC
# Shut down eth0 so that the nova interface is the one used.
ip l set down dev eth0

sleep 5
# Get the public key from the metadata server
if [ ! -d /root/.ssh ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
fi
# Fetch the public key using HTTP. The URL was stripped from the original
# post; the standard EC2-style metadata endpoint is assumed here.
if [ ! -f /root/.ssh/authorized_keys ]; then
    wget -O /tmp/metadata-key http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key -o /var/log/metadata_svc_bugfix.log
    if [ $? -eq 0 ]; then
        cat /tmp/metadata-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        #restorecon /root/.ssh/authorized_keys
        rm -f /tmp/metadata-key
        echo "Successfully retrieved public key from instance metadata" >> /var/log/metadata_svc_bugfix.log
    fi
fi

file_edit_patch.sh:

#!/bin/bash
# Make /etc/hosts and /etc/resolv.conf effectively editable by patching a
# copy of libnss_files to read them from /tmp instead. The replacement
# paths have the same byte length as the originals, which keeps the
# binary valid.
mkdir -p /root/lib
cp -f /lib/x86_64-linux-gnu/libnss_files.so.2 /root/lib
perl -pi -e 's:/etc/hosts:/tmp/hosts:g' /root/lib/libnss_files.so.2
perl -pi -e 's:/etc/resolv.conf:/tmp/resolv.conf:g' /root/lib/libnss_files.so.2
cp -f /etc/hosts /tmp/hosts
cp -f /etc/resolv.conf /tmp/resolv.conf

run_scripts.sh (it simply runs the two patch scripts above):

#!/bin/bash
/usr/local/bin/metadata_svc_bugfix.sh
/usr/local/bin/file_edit_patch.sh
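The in-place perl substitution used by file_edit_patch.sh only works safely on the binary because the replacement paths have the same byte length as the originals; a quick sketch of the same substitution on a plain file:

```shell
# Work on a throwaway copy, as file_edit_patch.sh does with libnss_files
tmp=$(mktemp)
printf 'lookup /etc/hosts here\n' > "$tmp"
# Same-length substitution: /etc/hosts and /tmp/hosts are both 10 bytes
perl -pi -e 's:/etc/hosts:/tmp/hosts:g' "$tmp"
cat "$tmp"
# prints: lookup /tmp/hosts here
rm -f "$tmp"
```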



Copy the above three scripts (metadata_svc_bugfix.sh, file_edit_patch.sh, run_scripts.sh) into the stratosbase folder. Now create the image in the local Docker repository:
docker build -t stratosbase .
Note the dot at the end of the command, and that we tag the image as stratosbase. To see the image created in the local Docker repo, execute
docker images
You will see an image named stratosbase created there.
Now you will tag this image and push it to the glance repository.
docker tag stratosbase
docker push
where the registry address is the IP of your Virtualbox VM. Your image is exported to the glance repository in Docker format. In fact you can push this image to any Docker repository you choose, and it would be a good idea for the Apache Stratos community to keep a public Docker repository where they can share cartridge images. Then anyone interested in the shared cartridges can pull them from the public repository and use them with Stratos.

Now to see the image in the glance repository:
glance image-list
Now nova compute can spawn Docker containers from this image.
Log into the Horizon UI and create an instance using this image.
Note: you will log into the Horizon web UI using the admin or demo user. The password for it is set in the devstack/localrc file we created earlier in my previous blog.
Make sure that, using Horizon's Access & Security under the Project tab, you add rules for tcp port 22 and icmp to the security group from which you create containers (by default this is the default group).

Now you should be able to access the spawned container using
ssh root@private_ip_of_container

or, if you passed your public key when creating the instance,
ssh -i <your_key> root@<private_ip_of_container>
Now, when creating the container, try passing a user data script using the Post-Creation tab in the launch screen. In the Customization Script box type

X1=1, X2=2

Now when the container is spawned you should be able to log in and retrieve the passed user data, and the public key, from the metadata service.
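Once the metadata fix is in place, the user data is plain text that can be parsed with standard tools; a sketch assuming the sample value above was already fetched into a file (the path /tmp/user-data is made up for this sketch):

```shell
# Simulate the user data already fetched from the metadata service
printf 'X1=1, X2=2\n' > /tmp/user-data
# Split on commas and pull out X1's value
tr ',' '\n' < /tmp/user-data | sed -n 's/^ *X1=//p'
# prints: 1
```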


My aim in building this Openstack/Docker setup is to use it as a testing and developer environment for the Apache Stratos PaaS. My next blog post, Apache Stratos on Openstack/Docker-Part One, will deal with how to set up Apache Stratos on the same Virtualbox VM where we set up Openstack/Docker. We will use the stratosbase image we created as the base for creating Stratos cartridge images in the Openstack/Docker IaaS environment.

[1] https://bugs.launchpad.net/nova/+bug/1259267



I will share my experience on the subject in detail. The guide will definitely work if you follow it with the versions of the software specified.

Software: 64 bit server edition of Ubuntu 13.04.
Openstack Havana/stable branch
Docker version 0.7.6
Oracle virtualbox version 4.1.12_ubuntu

This worked for me with Ubuntu running on virtualbox. The virtualbox version need not be exact, but if you need this to definitely work, follow the exact Ubuntu and Docker versions, because I have not tested other versions. I believe these instructions will work with prior Ubuntu and Docker versions as well, with slight changes.

I use the devstack setup to install Havana and Docker. It is difficult to maintain our own scripts given the fast-growing development of Openstack with new technologies, hence the idea of following the devstack scripts. I changed my previous choice of the lxc driver to Docker since I feel that lxc/libvirt driver support in the Openstack community is somewhat lagging, while the Docker community shows promising growth. Besides, Docker is based on lxc with better isolation and features. The most attractive idea in Docker for me is the concept of portable containers.

In this setup all nova services run in a single virtual machine. The setup is mainly used to test my Apache Stratos PaaS environment, where Openstack/Docker is used as the IaaS layer.

Scripts which appear in this article can be downloaded from [1].
It is a good habit to take virtualbox snapshots at every important step of the process; this way, if something goes wrong, you can restart from the previously saved state. I strongly recommend following the instructions exactly as indicated in the article. Once you have achieved the article's goal, you can do your own experiments beginning from the various snapshots saved. Later you can delete snapshots to save disk space.

If you are too eager to get the setup running, follow the Quick Steps below. The Quick Steps guide assumes you are familiar with the virtualbox environment and the Openstack devstack setup. If you run into problems or need detailed steps, I recommend you follow the whole blog entry as a tutorial. For the quick steps, also download the scripts and other material from [1].

Quick Instructions


Download interfaces, hypervisor-docker, install_docker0.sh, install_docker1.sh, localrc and driver.py files from [1].

Setup Virtualbox:

Install Ubuntu 13.04 64 bit in virtualbox with an at least 40G dynamically growing hard disk. Add a hostonly interface eth1 and another hostonly interface eth2, each with its gateway. Log in and create a user/password called wso2/g. Replace the /etc/network/interfaces file with the downloaded interfaces file. Reboot the vm, open a terminal and ssh into the instance:

ssh wso2@

sudo apt-get update

sudo apt-get install linux-image-3.8.0-26-generic


Setup Docker:

sudo apt-get install git
git clone https://github.com/openstack-dev/devstack.git
cd devstack
git checkout stable/havana

Replace devstack/lib/nova_plugins/hypervisor-docker with the downloaded hypervisor-docker file.
Copy install_docker0.sh and install_docker1.sh into /devstack/tools/docker folder.
cd devstack
sudo usermod -a -G docker wso2
sudo chown wso2:docker /var/run/docker.sock
sudo service docker restart
cd files
curl -OR http://get.docker.io/images/openstack/docker-ut.tar.gz
docker import - docker-busybox < ./docker-ut.tar.gz
If a permission denied error occurs, execute the following command again.
sudo chown wso2:docker /var/run/docker.sock
curl -OR http://get.docker.io/images/openstack/docker-registry.tar.gz
docker import - docker-registry < ./docker-registry.tar.gz
Set ipv4 forwarding in /etc/sysctl.conf
net.ipv4.ip_forward = 1
sudo apt-get install lxc wget bsdtar curl
sudo apt-get install linux-image-extra-3.8.0-26-generic
sudo modprobe aufs

Add the following three lines to /etc/rc.local
chown wso2:docker /var/run/docker.sock
modprobe aufs
sudo killall dnsmasq

Setup Openstack:

Copy the localrc file to the devstack folder
cd devstack
./stack.sh
After stack.sh finishes successfully, execute docker images to check that the docker registry is still there. If there are no images, do the following again:
cd devstack/files
docker import - docker-registry < ./docker-registry.tar.gz
Replace /opt/stack/nova/nova/virt/docker/driver.py with the downloaded driver.py
Reboot vm.
cd devstack
./rejoin-stack.sh
Log into Horizon, add icmp and ssh rules to security group and create an instance of busybox image.

Detailed Instructions

First install Ubuntu 13.04 in a Virtualbox VM. Add a host-only network adaptor to it and fill in the ipv4 Address and ipv4 Network Mask fields. Add another host-only network adaptor and fill in its ipv4 Address and ipv4 Network Mask fields as well. Make sure to give at least a 40G dynamically growing hard disk. Now boot up the VM and follow the steps below. Connect using the terminal UI provided by virtualbox and create a user/password called wso2/g.

Change /etc/network/interfaces as follows:

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static

auto eth2
iface eth2 inet manual
up ifconfig eth2 up

Now reboot and you can connect to the VM from a terminal using username wso2 and password g.

ssh wso2@
Now from within this terminal execute

sudo apt-get update

Now, in order for Openstack/Docker to work correctly, we need a Linux kernel upgrade for Ubuntu:

sudo apt-get install linux-image-3.8.0-26-generic

Now restart the VM node.

sudo apt-get install git

git clone https://github.com/openstack-dev/devstack.git

cd devstack

git checkout stable/havana

Now we need to apply the following patches to the devstack scripts.

The first one is in file “devstack/tools/docker/install_docker.sh”, line 41:
install_package --force-yes lxc-docker=${DOCKER_PACKAGE_VERSION} socat
should be:
install_package --force-yes lxc-docker-${DOCKER_PACKAGE_VERSION} socat

The second one is in file “devstack/lib/nova_plugins/hypervisor-docker”, line 75:
if ! is_package_installed lxc-docker; then
should be:
if ! is_package_installed lxc-docker-${DOCKER_PACKAGE_VERSION}; then
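If you prefer not to edit by hand, both fixes can be applied with sed from the devstack folder. This is a sketch: the substitutions match on content rather than line numbers, since those may drift between checkouts.

```shell
# Fix 1: apt wants the versioned package name, not an '=' pin
sed -i 's/lxc-docker=${DOCKER_PACKAGE_VERSION}/lxc-docker-${DOCKER_PACKAGE_VERSION}/' \
    tools/docker/install_docker.sh
# Fix 2: check for the versioned package name as well
sed -i 's/is_package_installed lxc-docker;/is_package_installed lxc-docker-${DOCKER_PACKAGE_VERSION};/' \
    lib/nova_plugins/hypervisor-docker
```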

Also add the following line in devstack/lib/nova_plugins/hypervisor-docker under the entry called # Defaults (this pins the Docker version used above):

DOCKER_PACKAGE_VERSION=0.7.6
Now we are supposed to execute ./tools/docker/install_docker.sh, but don't do it yet. In my case I got a permission error for /var/run/docker.sock and a curl download failure for the docker registry image when I executed it. I solved those two problems with the following steps.

Break the installer script into two scripts, called install_docker0.sh and install_docker1.sh.

My install_docker0.sh file can be downloaded from [1]

My install_docker1.sh can be downloaded from [1].

Now run the first script
./tools/docker/install_docker0.sh
Then add wso2 user to docker group. Here username wso2 is the name you have given for your ubuntu account user.
sudo usermod -a -G docker wso2
Then change the ownership of /var/run/docker.sock

sudo chown wso2:docker /var/run/docker.sock
Important: each time you restart the virtualbox VM, make sure that the above ownership of /var/run/docker.sock is set correctly. If it has changed, execute the above command again before doing anything else.

Now run the second script
./tools/docker/install_docker1.sh
sudo service docker restart

cd files
curl -OR http://get.docker.io/images/openstack/docker-ut.tar.gz
docker import - docker-busybox < ./docker-ut.tar.gz
If a permission denied error occurs, execute the following command again.
sudo chown wso2:docker /var/run/docker.sock
curl -OR http://get.docker.io/images/openstack/docker-registry.tar.gz (Takes about 20 minutes on a 120k per second connection)
If the file transfer fails, continue it with
curl -C - -o docker-registry.tar.gz 'http://get.docker.io/images/openstack/docker-registry.tar.gz'
Now import
docker import - docker-registry < ./docker-registry.tar.gz

So by now your Docker installation should be a success. Next we need to run the stack.sh script to set up Openstack, but before that let's do the following.
Set ipv4 forwarding in /etc/sysctl.conf
net.ipv4.ip_forward = 1
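The edit to /etc/sysctl.conf takes effect on the next boot (or after running sysctl -p as root); you can check the current value at any time:

```shell
# Check the current value: 1 means IPv4 forwarding is on, 0 means off
cat /proc/sys/net/ipv4/ip_forward
```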
To set up the aufs file system, which is necessary for the docker driver:
sudo apt-get install lxc wget bsdtar curl
sudo apt-get install linux-image-extra-3.8.0-26-generic

sudo modprobe aufs

Add the following three lines to /etc/rc.local
chown wso2:docker /var/run/docker.sock
modprobe aufs
sudo killall dnsmasq

Now create a file called localrc in the devstack folder and add the content of the localrc file you downloaded from [1].


Now execute stack.sh

./stack.sh

(Takes about 1.5 hours on a 120k per second connection.)

After stack.sh finishes successfully, execute docker images to check that our docker registry is still there (I once lost it by this point). If there are no images
cd devstack/files
docker import - docker-registry < ./docker-registry.tar.gz

Now you need to patch /opt/stack/nova/nova/virt/docker/driver.py at line 317 so that it reads
destroy_disks=True, context=None):
(this is the driver.py you can download from [1]).
Now restart the node.

cd devstack
./rejoin-stack.sh

If you have followed the steps correctly, you should have a working Openstack installation with the Docker driver. Log into the horizon UI using the admin or demo user (the password for those users is ‘g‘, as we set in our devstack/localrc file above) and create instances from the docker-busybox image that was uploaded in the default installation.
Don't forget to add icmp and ssh rules for the security group you use (by default this is the default group). Take a snapshot of this working state before you do any further playing with your setup.

If you restart the node, run
cd devstack
./rejoin-stack.sh
to bring up the nova services. Or, if you need a clean Openstack environment after restarting the node, run stack.sh instead of rejoin-stack.sh. This time it won't take as long as the first time, only a few seconds.

Nova service logs are in /opt/stack/logs/screen folder.

If you run rejoin-stack.sh you can see each nova service log in the rejoin screen. To see a service log, press ctrl+A and then ", then select the service log you need by moving with the up/down arrows and pressing Enter. You can scroll the rejoin screen by pressing ctrl+A and then Esc, and then use the up/down or pgup/pgdown keys to scroll.

Note: for some reason eth1 (the flat interface) does not show its IP when rejoin-stack.sh is run. That does not prevent connecting to the virtualbox vm, but sometimes problems occur, and that's why you add the second interface, eth2.

My next blog, Docker Driver for Openstack Havana, will be about playing around with this setup: creating new customized images and secure (ssh) access to the containers. I'll also deal with a bug fix for accessing metadata services.



After reading this excellent article [1] on installing Openstack Essex in virtualbox as an all-in-one setup, I thought of sharing my own experience of successfully setting it up, with some additional knowledge.

In the article, the virtual machine type used for openstack virtual machines is qemu, which uses software virtualization. That's why the setup could be demonstrated on Virtualbox, which is a virtual machine itself. But the virtual machine type I used is LXC (Linux Containers). LXC containers communicate directly with the kernel instead of going through a hypervisor. Therefore it is possible to run an LXC container inside a virtualbox instance. Also note that, as the name implies, LXC only spawns Linux containers and does not support any other OS.

My setup also differs from the one described in the article in that I use two virtualbox instances: one for the controller plus a compute node, and the other for a second compute node. So my setup consists of one controller and two compute nodes.

My host machine setup and virtualbox version are similar to the ones described in the article.

Configuring the first VM is exactly as described in the article. For the second VM I added two more Host Only interfaces.

open File → Preferences → Network tab
Add a host-only network for vboxnet0 – this will be the Public interface

set the IP and mask, with dhcp disabled

Add a host-only network for vboxnet1 – this will be the Private (VLAN) interface

set the IP and mask, with dhcp disabled

open File → Preferences → Network tab
Add a host-only network for vboxnet2 – this will be the Public interface

set the IP and mask, with dhcp disabled

Add a host-only network for vboxnet3 – this will be the Private (VLAN) interface

set the IP and mask, with dhcp disabled

The second VM installation is very similar to the first VM installation as described in the article.

However, I will write down the changes in the network interfaces file (these are only IP changes).

Become root (from now till the end):
%sudo -i
Edit /etc/network/interfaces, make it look like this:

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

#Public Interface
auto eth1
iface eth1 inet static


#Private VLAN interface
auto eth2
iface eth2 inet manual

up ifconfig eth2 up

then run:

%ifup eth1 #after this, ifconfig shows the inet addr, Bcast and Mask for eth1
%ifup eth2 #after this, ifconfig doesn't report an ipv4 addr for eth2

or reboot

Verify reachability from your host PC


At this point I assume that you have set up both VMs, updated and upgraded them, installed openssh-server and git in them, and checked out openstack Essex into the /root folder.

Installing the controller:

%sudo -i
%cd OpenStackInstaller

In the OSinstall.sh script, change all occurrences of nova-compute under NOVA_PACKAGES to nova-compute-lxc (there are 3 occurrences).
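A hypothetical sed one-liner for the same edit (the exact layout of NOVA_PACKAGES is assumed here, so verify the three occurrences with grep afterwards); it skips names that are already suffixed:

```shell
# Append -lxc to whole-word nova-compute occurrences only
sed -i 's/nova-compute\([^-[:alnum:]]\|$\)/nova-compute-lxc\1/g' OSinstall.sh
grep -c 'nova-compute-lxc' OSinstall.sh
```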

Then execute

%./OSinstall.sh -T all -F -f -s 512 -P eth1 -p eth2 -t demo -v lxc

Note that I use the -T all option since I install both the controller and a compute node on this server.

Then you need to upload an ubuntu image to glance. Download the image precise-server-cloudimg-amd64.tar.gz from http://cloud-images.ubuntu.com/precise/current/

Since this image name is different from what is expected in upload_ubuntu.sh (which expects ubuntu-12.04-server-cloudimg-amd64.tar.gz), I think the easiest thing is to edit upload_ubuntu.sh and replace the line

wget -O ${TMPAREA}/${TARBALL} http://uec-images.ubuntu.com/releases/${CODENAME}/release/${TARBALL}

with

cp -f <folder where you downloaded image>/precise-server-cloudimg-amd64.tar.gz ${TMPAREA}/${TARBALL}

then call

%./upload_ubuntu.sh -a admin -p openstack -t demo -C

to upload the image

Then create a keypair as described in the article and set the security group details

%euca-add-keypair demo > demo.pem
%chmod 0600 demo.pem

%euca-authorize default -P tcp -p 22 -s
%euca-authorize default -P tcp -p 80 -s
%euca-authorize default -P tcp -p 8080 -s
%euca-authorize default -P icmp -t -1:-1

Since you have a compute node on this server, you can now start creating instances after sourcing the demorc file by

%. demorc

or

%source ./demorc

You can verify that a compute node has been added by logging into the nova database and checking that there is a compute node entry in the compute_nodes table:

%mysql -uroot -popenstack

mysql>use nova

mysql>select id, created_at from compute_nodes; (Note that after adding the separate compute node server below, you will see two entries in this table.)

Installing the compute node

Make sure that you give this server a hostname instead of the default hostname.

%vi /etc/hostname and change the host entry to, say, mycompute-node

Then reboot the machine

Now log into the controller node and add a host entry for mycompute-node in the /etc/hosts file.

Now log into the compute node and

%sudo -i
%cd OpenStackInstaller
%./OSinstall.sh -T compute -C -F -f -s 512 -P eth1 -p eth2 -t demo -v lxc

Now log into the mysql database on the controller again and verify that there are two entries in the compute_nodes table, which means your new compute node is now active in the setup.

Now, from your controller node's /root/OpenStackInstaller folder, you can start playing with creating/deleting your new instances.

First list your uploaded images by

%euca-describe-images
Then, using the image id shown there, create a new instance

%euca-run-instances -k demo -n 1 -g default -t m1.tiny ami-00000001

You can list your created instances by

%nova list or

%euca-describe-instances
You can monitor /var/log/nova/nova-compute.log on both servers (controller node and compute node) to see the status of instance creation.

Now you can start creating more instances and verify that instances are created on both compute nodes, until you see a short, undescriptive message that basically says your quota has been exceeded.

Openstack has this error [2] when deleting lxc instances (qemu or kvm do not have this problem). If you are interested I can share a dirty hack until that problem is solved in the code base.

[1] http://www.tikalk.com/alm/blog/expreimenting-openstack-essex-ubuntu-1204-lts-under-virtualbox

[2] https://bugs.launchpad.net/nova/+bug/971621
