
In this article I explain how to set up Apache Stratos using Openstack/Docker as the underlying IaaS. This is the third article in a step-by-step series (see [1] and [2]). In the previous articles I discussed how to set up the IaaS environment, that is, Openstack Havana with the Docker driver. In this article I discuss how to build a Tomcat cartridge for Stratos on Openstack/Docker, and then how to install Apache Stratos to use this Openstack/Docker setup as the IaaS.

This series takes a bottom-up approach to the Apache Stratos architecture[3] rather than a top-down one. In the block diagram describing the architecture you can see that Apache Stratos can run its cartridges on many IaaSes (concurrently or separately). Communication between Stratos and the IaaS happens through the jclouds APIs, and jclouds supports more than fifty IaaSes, so theoretically Apache Stratos can be made to support all of them. We started from the Openstack/Docker IaaS and first learnt how to make that IaaS ready for Stratos to work with. Before going further, here is a brief description of the Stratos architecture, focused mainly on how cartridges fit into the bigger picture.

Apache Stratos is a fast-growing open source PaaS where users can deploy their applications in various runtime environments. A user in Stratos is a tenant, a tenant admin or a Stratos admin. Let's first understand this with the following scenario. Company X has selected Apache Stratos as its private PaaS environment. The Stratos admin's role is to set up Stratos and any IaaS environment used by Stratos. Setting up Stratos means installing its various middleware servers such as the Stratos Manager, Stratos Controller and Stratos Autoscaler. Then he needs to set up an IaaS, for example Openstack/Docker. Then he needs to set up runtime environments in Stratos called cartridges. Suppose company X has a set of customers who need to use a web application deployed on Tomcat, and there is an administrator managing this application. Then the customers are the tenants and the administrator is the tenant admin. The tenant admin subscribes to the Tomcat cartridge which was already created by the Stratos admin. When he subscribes he provides a git repository URL where he has uploaded his Tomcat web application. This user interface and tenant-managing functionality is provided by the Apache Stratos Manager server. When the tenants use this application, the scaling decisions are taken by the Autoscaler server[5]. Communication between the underlying IaaS and the Stratos PaaS is managed by the Stratos Controller server.

What is the Tomcat cartridge mentioned above? It is simply a cluster of Stratos-aware virtual machines running the Tomcat web server, usually fronted by a load balancer. This cluster runs on the underlying IaaS; in our case it is a cluster of Docker containers running in Openstack.

So how does the Stratos admin create the Tomcat cartridge? It is just a matter of creating OS images with the necessary Tomcat web server software installed, for each underlying IaaS on which the cartridge is supposed to run, plus some configuration settings on the Stratos Controller. In our case this is equivalent to creating an Openstack/Docker image, uploading it to the Openstack glance repository and setting the Tomcat cartridge configuration in the Stratos Controller with the image id information. That is the scope of this article.

So what we do here is first create an Ubuntu Docker image with the necessary software and configuration to run the Tomcat web server, based on the Stratos base image we introduced in the previous article. Then we upload this image to the Openstack glance repository and test it in the Openstack environment. That completes the IaaS part of our Tomcat cartridge. After that, I will explain how to complete the next step of introducing our Tomcat cartridge to Apache Stratos.

First we need to build Apache Stratos from source. For that I suggest you use a separate Virtualbox VM node. The reason is that if you are a developer, your development/testing environment (creating which is the aim of this article series) has to be regularly updated with the latest code, so having a separate build machine is always good. Of course you can later integrate an automated build environment like Jenkins into your setup as well, but for now let's do it this way. Create such a VM node and log into it.

git clone https://github.com/apache/incubator-stratos.git

cd incubator-stratos
Now we will build Stratos using Maven version 3. Before building, set the following Maven options; otherwise out-of-memory errors could occur.

export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m"

mvn clean install

This will build the Stratos server, load balancer, cartridge agent and other artifacts which will be needed as we proceed. Note that the Stratos server here has the Stratos Controller, Stratos Manager and Stratos Autoscaler embedded into a single product. When you run the server you decide whether to run the manager, autoscaler and controller together as a single server or as individual servers. You can find the Stratos server binary in the incubator-stratos/products/stratos/modules/distribution/target/ folder.
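If you want to confirm that the build produced the server distribution, you can simply list that folder; the exact archive name shown below is an assumption and depends on the version you built.

ls incubator-stratos/products/stratos/modules/distribution/target/
# you should see something like apache-stratos-4.0.0-incubating.zip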

Second, we should install the Puppet master on the Virtualbox VM node where Openstack/Docker runs. I will explain why we need Puppet later. I suggest you follow the Configure Puppet Master section of the Apache Stratos documentation. In that documentation the PUPPETMASTER-DOMAIN is given as test.org; in my setup I gave it as stratos.org, but the choice is yours. Also see my changes to the /etc/puppet/manifests/nodes.pp file:

$package_repo         = 'http://192.168.57.30:8080'
#following directory is used to store binary packages
$local_package_dir    = '/mnt/packs'
# Stratos message broker IP and port
$mb_ip                = '192.168.57.30'
$mb_port              = '61616'
# Stratos CEP IP and port
$cep_ip               = '192.168.57.30'
$cep_port             = '7611'
# Stratos Cartridge Agent’s trust store password
$truststore_password  = 'wso2carbon'
$java_distribution  = 'jdk-7u7-linux-x64.tar.gz'
$java_name  = 'jdk1.7.0_07'

Since in our single-node setup all servers run on the same node, both the mb ip and the cep ip have the same value as our Virtualbox IP. If you use a different version of Java, update java_distribution and java_name accordingly.

For the Stratos installation we also need a database. We already have a MySQL database in our Virtualbox VM, created while installing Openstack.

Create a MySQL database user. We will later need that user as the MySQL database user when configuring Stratos.

grant all privileges on *.* TO 'wso2'@'localhost' identified by 'g' with grant option;
grant all privileges on *.* TO 'wso2'@'%' identified by 'g' with grant option;
flush privileges;
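These grants are run from the MySQL prompt. A minimal sketch of getting there, assuming the MySQL root password 'g' that we set in devstack/localrc:

mysql -uroot -pg
# then paste the three grant statements above at the mysql> prompt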

In the last article we created the Stratos base cartridge image using a base Dockerfile. Now we will create the Tomcat cartridge Dockerfile using our previous image as the base. Create a folder named tomcat and create this Dockerfile in it.
# Tomcat
# VERSION 0.0.1
FROM stratosbase
MAINTAINER Damitha Kumarage "damitha23@gmail.com"
MAINTAINER Lakmal Warusawithana "lakmal@apache.org"

RUN apt-get install -q -y puppet
RUN apt-get install -q -y ruby

RUN mkdir /root/bin
ADD init.sh /root/bin/
ADD puppet.conf /etc/puppet/
RUN chmod +x /root/bin/init.sh
ADD stratos_sendinfo.rb /root/bin/

EXPOSE 22
ENTRYPOINT /usr/local/bin/run_scripts.sh | /usr/sbin/sshd -D

Let's see what additions we make to the Dockerfile here. FROM stratosbase means we use our previous image as the base. First we install Puppet. The reason for installing Puppet is that Puppet handles everything related to installing Java, Tomcat and the Apache Stratos cartridge agent. Each cartridge must contain a Stratos agent to coordinate the cartridge instances with the Stratos server. When the cartridge instance loads, the Puppet client communicates with the Puppet master to install the software. Of course we could get rid of Puppet by doing everything related to installing Java, Tomcat and the Stratos agent ourselves within the Dockerfile, which would reduce the load time of the cartridges. However, the real strength of Puppet is hard to give up in a production-ready cartridge, where we need to periodically update the cartridges with patches and maintenance code. Download init.sh, puppet.conf and stratos_sendinfo.rb from [4] and copy them into the tomcat folder. Copy the run_scripts.sh file from your previous stratosbase folder into the tomcat folder as well. You need to add the following line as the last line of this run_scripts.sh file.

/root/bin/init.sh > /var/log/stratos_init.log

When the above line gets executed while the cartridge is loading, it will install and configure the Stratos agent and the necessary software inside the cartridge.
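For clarity, after this addition the run_scripts.sh used for the Tomcat cartridge should look roughly like this (the first two calls come from the stratosbase image built in the previous article):

#!/bin/bash
/usr/local/bin/metadata_svc_bugfix.sh
/usr/local/bin/file_edit_patch.sh
/root/bin/init.sh > /var/log/stratos_init.log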

Now you can build, tag and push the above image into the glance repository:

docker build -t tomcat .
docker tag tomcat 192.168.57.30:5042/tomcat
docker push 192.168.57.30:5042/tomcat
You will need the glance image id of this image to register the cartridge in Stratos below. You can view it from the Openstack dashboard UI, or:
cd /home/wso2/devstack
. openrc
glance image-list
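As a hedged example of what to look for in the output (the UUID shown here is just the one used later in this article and will be different on your system):

glance image-list | grep tomcat
# | 3f6c5c20-93fa-423e-bfc9-021b51566d5b | 192.168.57.30:5042/tomcat | ... |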
The next step is to install the Stratos server binary we built above. For that I recommend following the guide provided in link [6], which is easy enough to follow. However, pay special attention to the following lines in the stratos-installer/conf/setup.conf file to make sure they are in accordance with our setup. I have included a brief explanation of each change as an inline comment.

export host_user="wso2"             # the Virtualbox VM OS user we created in the previous articles
export stratos_domain="stratos.org" # the domain we chose for our Stratos setup
export host_ip="192.168.57.30"      # IP of the Virtualbox instance
export offset=2                     # with this offset the HTTPS port of the Stratos server will be 9445
export puppet_ip="192.168.57.30"
export puppet_hostname="puppet.stratos.org"
export mb_ip="192.168.57.30"
export mb_port=61616


export ec2_provider_enabled=false       # keep all IaaSes except Openstack disabled
export openstack_provider_enabled=true  # enable the Openstack IaaS
export openstack_identity="demo:demo"   # the default Openstack demo account
export openstack_credential="g"         # password we provided for the Openstack demo account (see devstack/localrc)
export openstack_jclouds_endpoint="http://192.168.57.30:5000/v2.0"
export openstack_keypair_name="demo"
export openstack_security_groups="default"


export vcloud_provider_enabled=false    # keep all IaaSes except Openstack disabled
export userstore_db_hostname="192.168.57.30"
export userstore_db_user="root"
export userstore_db_pass="g"

In the setup.conf file we configure our Openstack IaaS access details as shown above. This is where we give Stratos the knowledge of our IaaS.

Next we install the load balancer. There are several ways to configure a load balancer for Stratos: you can have a specific load balancer for a cartridge cluster, or a generic load balancer that handles several cartridge clusters. In the former case we can configure the load balancer as a cartridge in Stratos. In our setup, though, we will configure the load balancer as a generic, standalone load balancer server.
Copy the load balancer zip file from incubator-stratos/products/load-balancer/modules/distribution/target/ to the Virtualbox VM's /opt folder. Unarchive it and edit the following lines in repository/conf/load-balancer.conf:
mb-ip: 192.168.57.30;
mb-port: 61616;
cep-ip: 192.168.57.30;
cep-port: 7611;

Also change repository/conf/templates/jndi.properties.template to the following:
connectionfactoryName=TopicConnectionFactory
java.naming.provider.url=tcp://192.168.57.30:61616
java.naming.factory.initial=
org.apache.activemq.jndi.ActiveMQInitialContextFactory

That's all for the load balancer configuration. Open a new terminal to the Virtualbox VM and start the load balancer:
cd /opt/apache-stratos-load-balancer-4.0.0-incubating/bin
sudo ./stratos.sh
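You can watch the load balancer come up in its carbon log; a minimal sketch, assuming the standard WSO2 Carbon log location inside the unzipped distribution:

tail -f /opt/apache-stratos-load-balancer-4.0.0-incubating/repository/logs/wso2carbon.log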

We have installed Apache Stratos in the same Virtualbox VM where we created our Tomcat cartridge image. Now we need to make Stratos aware of our Tomcat cartridge image. Download autoscale-policy.json, deployment-policy.json, partition.json and tomcat.json from [4].
Access the Stratos UI as the Stratos admin user named admin at
https://192.168.57.30:9445/console
The default password for the Stratos admin user is admin.
Follow the wizard to deploy the partition, autoscaling policy, deployment policy and the Tomcat cartridge. Just copy and paste the relevant content from the JSON files you downloaded. Remember to skip the load balancer configuration step, since we are not installing the load balancer as a cartridge but as a standalone server, as explained above.
Also make sure you change the line
"imageId": "RegionOne/3f6c5c20-93fa-423e-bfc9-021b51566d5b",
in the tomcat.json file before copying the content. Here you give the glance image id of the Tomcat cartridge image you created above. You can watch the Stratos server logs while doing the deployment with:
tail -f /opt/stratos/apache-stratos-default/repository/logs/wso2carbon.log
Now the Stratos administrator can create a tenant admin for the Tomcat cartridge following the instructions in the Stratos admin UI. Then log in as that tenant administrator; you will see the Tomcat cartridge created above. Subscribe to it, providing a valid git repository URL. Make sure to copy a Tomcat sample.war file into that repository so that you can test your Tomcat cartridge once it is active. Wait for some time and click your subscribed cartridge link to see whether there are active instances created for your cartridge. If so, access your Tomcat sample application through the URLs provided there. You can also access the Tomcat web page directly via the instance IP, which you can view from the Openstack dashboard.
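A minimal sketch of preparing such a repository (the repository URL and paths are hypothetical; use your own):

git clone https://github.com/<your-account>/tomcat-apps.git
cd tomcat-apps
cp /path/to/sample.war .
git add sample.war
git commit -m "Add sample webapp for the Tomcat cartridge"
git push origin master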
For more details on deploying and subscribing to cartridges using the admin console, see [7].

That's all for your Stratos setup using the Openstack/Docker IaaS on a single Virtualbox VM. You will be able to use this setup as a development and testing environment for Apache Stratos. Chris Snow has wonderfully automated the whole process I described in these three blogs (installing and configuring Virtualbox, installing Openstack/Docker, installing Stratos, and creating and testing cartridges) using Vagrant boxes for both Linux and Windows[8], and he has an excellent blog post on it[9]. So now it is just one command and you are off testing Stratos with the Openstack/Docker IaaS.

[1] http://damithakumarage.wordpress.com/2014/01/31/how-to-setup-openstack-havana-with-docker-driver/
[2] http://damithakumarage.wordpress.com/2014/02/01/docker-driver-for-openstack-havana/
[3] https://cwiki.apache.org/confluence/display/STRATOS/4.0.0+Architecture
[4] https://www.dropbox.com/sh/dmmey60kvdihc31/3sbBkX7ns3
[5] http://lahiruwrites.blogspot.com/2014/01/apache-stratos-autoscaler-supports.html
[6] https://cwiki.apache.org/confluence/display/STRATOS/4.0.0+Automated+Product+Configuration
[7] https://cwiki.apache.org/confluence/display/STRATOS/4.0.0+Stratos+Manager+Guide
[8] https://github.com/snowch/stratos-vagrant-box
[9] http://christopersnow.blogspot.co.uk/2014/04/apache-stratos-paas-simple-setup.html

As promised in my previous blog post, How to setup Openstack Havana with Docker driver, here I would like to share some of my experience working with the Havana/Docker setup. I will basically explain how to run containers in Openstack/Docker with secure (ssh) access and how to access the user data passed to the containers, overcoming the technical difficulties in the versions used in our setup.

We will create an Ubuntu image in the local Docker repository using a Dockerfile and then transfer that image into the glance repository. The image we create fixes the following issues that we find in the selected version of Docker:

- It does not allow passing user data at container start-up.

- Users cannot pass a public key at instance boot-up and use it to access the instance.

- Users cannot change the /etc/hosts file.

So we will fix these issues, which are critical when using Openstack/Docker as the IaaS for Stratos. We will create a 64-bit Ubuntu image fixing the above issues, which can be used as a base image for creating cartridge images for Stratos.

You can download all the scripts and other material used in this blog from [2]. Download Dockerfile, metadata_svc_bugfix.sh, file_edit_patch.sh, run_scripts.sh and ubuntu64-docker-ssh.tar.gz from [2].

Let's start with the snapshot of our setup saved earlier. Create a Virtualbox VM from this snapshot.

Then you need to rejoin the Openstack session using
cd devstack
. openrc
./rejoin-stack.sh

Or instead of running rejoin-stack.sh you can run stack.sh. But in that case you will lose your previous data, including images stored in the glance repository and previously run instances.

Now open another terminal to the virtual machine.

Upload the 64-bit Ubuntu image you downloaded above into the docker repository. We will use this image as the base image for the images we create in the Docker repository.

docker import - ubuntu64base < ./ubuntu64-docker-ssh.tar.gz

Create a new folder and name it, say, stratosbase
cd stratosbase

Create the file below and name it Dockerfile


# stratosbase
# VERSION 0.0.1
FROM ubuntu64base
MAINTAINER Damitha Kumarage "damitha23@gmail.com"
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update


RUN apt-get install -y openssh-server
RUN echo 'root:g' |chpasswd


RUN apt-get install -q -y zip
RUN apt-get install -q -y unzip
RUN apt-get install -q -y curl


ADD metadata_svc_bugfix.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/metadata_svc_bugfix.sh
ADD file_edit_patch.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/file_edit_patch.sh
ADD run_scripts.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/run_scripts.sh
ENV LD_LIBRARY_PATH /root/lib
EXPOSE 22
ENTRYPOINT /usr/local/bin/run_scripts.sh | /usr/sbin/sshd -D

What this Dockerfile does is largely self-descriptive. Note that I run the sshd daemon as an ENTRYPOINT instead of CMD. The reason is that the Docker driver will override "/usr/sbin/sshd -D" with "sh" if I use CMD, and consequently the sshd daemon would not run. We have also set the ssh password for root to 'g'.

There is a problem in Docker containers as of the current versions: they do not allow downloading user data. The reason is described in the Openstack bug report[1]. We will fix this problem by using a patch script called metadata_svc_bugfix.sh. In this patch we also retrieve the ssh public key of the user, passed to the instance when booting up. There is also a limitation in Docker containers where they do not allow editing the /etc/hosts file. We will circumvent this issue by adding another patch file called file_edit_patch.sh, which copies libnss_files into /root/lib and patches it to look up /tmp/hosts and /tmp/resolv.conf instead; the ENV LD_LIBRARY_PATH /root/lib line in the Dockerfile makes the container pick up this patched library.

We introduce another script called run_scripts.sh, which is executed at startup of the Docker container and simply invokes the above two patch scripts.

Following are the scripts mentioned.

metadata_svc_bugfix.sh

#!/bin/bash
NOVA_NIC=$(ip a | grep pvnet | head -n 1 | cut -d: -f2)
while [ "$NOVA_NIC" == "" ] ; do
    echo "Find nova NIC..."
    sleep 1
    NOVA_NIC=$(ip a | grep pvnet | head -n 1 | cut -d: -f2)
done
echo $NOVA_NIC
echo "Device $NOVA_NIC found. Wait until ready."
sleep 3
# Set up a network route to ensure we use the nova network.
echo "[INFO] Create default route for $NOVA_NIC. Gateway 10.11.12.1"
ip r r default via 10.11.12.1 dev $NOVA_NIC
# Shutdown eth0 since icps will fetch the enabled interface for streaming.
ip l set down dev eth0

sleep 5
# Get public keys from the metadata server
if [ ! -d /root/.ssh ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
fi
# Fetch the public key over HTTP
ATTEMPTS=30
FAILED=0
if [ ! -f /root/.ssh/authorized_keys ]; then
    wget http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key -O /tmp/metadata-key -o /var/log/metadata_svc_bugfix.log
    if [ $? -eq 0 ]; then
        cat /tmp/metadata-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        #restorecon /root/.ssh/authorized_keys
        rm -f /tmp/metadata-key
        echo "Successfully retrieved public key from instance metadata" >> /var/log/metadata_svc_bugfix.log
    fi
fi

And
file_edit_patch.sh

#!/bin/bash
mkdir -p /root/lib
cp -f /lib/x86_64-linux-gnu/libnss_files.so.2 /root/lib
perl -pi -e 's:/etc/hosts:/tmp/hosts:g' /root/lib/libnss_files.so.2
perl -pi -e 's:/etc/resolv.conf:/tmp/resolv.conf:g' /root/lib/libnss_files.so.2
cp -f /etc/hosts /tmp/hosts
cp -f /etc/resolv.conf /tmp/resolv.conf

 

Finally
run_scripts.sh
#!/bin/bash
/usr/local/bin/metadata_svc_bugfix.sh
/usr/local/bin/file_edit_patch.sh

Copy the above three scripts (metadata_svc_bugfix.sh, file_edit_patch.sh, run_scripts.sh) into the stratosbase folder. Now create the image in the local Docker repository:
docker build -t stratosbase .
Note the dot at the end of the command, and note that we tag the image as stratosbase. Now, to see the image created in the local Docker repo, execute
docker images
You will see an image named stratosbase created there.
Now tag this image and push it to the glance repository.
docker tag stratosbase 192.168.57.30:5042/stratosbase
docker push 192.168.57.30:5042/stratosbase
where 192.168.57.30 is the IP of your Virtualbox VM. Your image is exported to the glance repository in Docker format. In fact you can push this image to any Docker repository you choose, and it would be a good idea for the Apache Stratos community to keep a public Docker repository where cartridge images can be shared. Then anyone interested in the shared cartridges could pull them from the public repository and use them with Stratos.
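For example, pushing the same image to a public registry such as the Docker Hub would look roughly like this (the account name is hypothetical):

docker tag stratosbase <your-dockerhub-account>/stratosbase
docker push <your-dockerhub-account>/stratosbase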

Now, to see the image in the glance repository:
glance image-list
Now nova-compute can spawn Docker containers from this image.
Log into the Horizon UI and create an instance using this image.
Note: you log into the Horizon web UI using the admin or demo user. The password for them is set in the devstack/localrc file we created earlier in my previous blog.
Make sure that, under Access & Security on the Project tab in Horizon, you add rules for TCP port 22 and ICMP to the security group from which you create containers (by default this is the default group).
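If you prefer the command line, the same rules can be added with the nova client after sourcing openrc; a minimal sketch:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0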

Now you should be able to access the spawned container using
ssh root@private_ip_of_container

or, if you passed your public key when creating your instance,
ssh -i <your_key.pem> root@<private_ip_of_container>
Now, when creating the container, try passing some user data using the Post-Creation tab on the launch screen.
In the Customization Script box type

X1=1, X2=2

Now, when the container is spawned, you should be able to log in and retrieve the passed data with
wget http://169.254.169.254/2009-04-04/user-data

And retrieve the public key by

wget http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key

My aim in building this Openstack/Docker setup is to use it as a testing and development environment for the Apache Stratos PaaS. My next blog post, Apache Stratos on Openstack/Docker - Part One, deals with how to set up Apache Stratos on the same Virtualbox VM where we set up Openstack/Docker. We will use the stratosbase image we created as the base for creating Stratos cartridge images in the Openstack/Docker IaaS environment.

[1] https://bugs.launchpad.net/nova/+bug/1259267
[2] https://www.dropbox.com/sh/dmmey60kvdihc31/3sbBkX7ns3

I will share my experience on the subject in detail. The guide will definitely work if you follow it with the versions of the software specified.

Software: Ubuntu 13.04 64-bit server
Openstack Havana/stable branch
Docker version 0.7.6
Oracle virtualbox version 4.1.12_ubuntu

This worked for me with Ubuntu running on Virtualbox. The Virtualbox version need not be exact, but if you need this to definitely work, follow the exact Ubuntu and Docker versions, because I have not tested other versions. I believe these instructions will also work with earlier Ubuntu and Docker versions with slight changes.

I use the devstack setup to install Havana and Docker. It is difficult to maintain our own scripts given the fast-paced development of Openstack and new technologies, hence the idea of following the devstack scripts. My previous choice of the lxc driver has changed to Docker, since I feel that lxc/libvirt driver support in the Openstack community is somewhat lagging while the Docker community shows promising growth. Besides, Docker is based on lxc with better isolation and features. The most attractive idea of Docker for me is the concept of portable containers.

In this setup all nova services run in a single virtual machine. This setup is mainly used to test my Apache Stratos PaaS environment, where Openstack/Docker is used as the IaaS layer.

The scripts which appear in this article can be downloaded from [1].
It is a good habit to take Virtualbox snapshots at every important step of the process. This way, if something goes wrong, you can restart from a previously saved state. I strongly recommend following the instructions exactly as indicated in the article. Once you have achieved the article's goal, you can do your own experiments starting from the various saved snapshots. Later you can delete snapshots to save disk space.

If you are too eager to get the setup running, follow the Quick Steps below. The Quick Steps guide assumes you are familiar with the Virtualbox environment and the Openstack devstack setup. If you run into problems or need detailed steps, I recommend you follow the whole blog entry as a tutorial. Also, for the quick steps, download the scripts and other material from [1].

Quick Instructions

Download:

Download interfaces, hypervisor-docker, install_docker0.sh, install_docker1.sh, localrc and driver.py files from [1].

Setup Virtualbox:

Install the Ubuntu 13.04 64-bit version in Virtualbox with an at least 40G dynamically growing hard disk. Add hostonly interface eth1 with gateway 192.168.92.1. Add hostonly interface eth2 with gateway 192.168.57.1. Log in and create a user/password called wso2/g. Replace the /etc/network/interfaces file with the downloaded interfaces file. Reboot the VM, open a terminal and ssh into the instance.

ssh wso2@192.168.57.30

sudo apt-get update

sudo apt-get install linux-image-3.8.0-26-generic

Reboot

Setup Docker:

sudo apt-get install git
git clone https://github.com/openstack-dev/devstack.git
cd devstack
git checkout stable/havana

Replace devstack/lib/nova_plugins/hypervisor-docker with the downloaded hypervisor-docker file.
Copy install_docker0.sh and install_docker1.sh into the devstack/tools/docker folder.
cd devstack
./tools/docker/install_docker0.sh
sudo usermod -a -G docker wso2
sudo chown wso2:docker /var/run/docker.sock
./tools/docker/install_docker1.sh
sudo service docker restart
cd files
curl -OR http://get.docker.io/images/openstack/docker-ut.tar.gz
docker import - docker-busybox < ./docker-ut.tar.gz
If a permission denied error occurs, execute the following command again.
sudo chown wso2:docker /var/run/docker.sock
curl -OR http://get.docker.io/images/openstack/docker-registry.tar.gz
docker import - docker-registry < ./docker-registry.tar.gz
Set ipv4 forwarding in /etc/sysctl.conf
net.ipv4.ip_forward = 1
sudo apt-get install lxc wget bsdtar curl
sudo apt-get install linux-image-extra-3.8.0-26-generic
sudo modprobe aufs

Add following three lines to /etc/rc.local
chown wso2:docker /var/run/docker.sock
modprobe aufs
sudo killall dnsmasq

Setup Openstack:

Copy localrc file to devstack folder
cd devstack
./stack.sh
After stack.sh finishes successfully, execute docker images to check that the docker registry is still there. If there are no images, do the following again:
cd devstack/files
docker import - docker-registry < ./docker-registry.tar.gz
Replace /opt/stack/nova/nova/virt/docker/driver.py with the downloaded driver.py
Reboot vm.
cd devstack
./stack.sh
Test:
Log into Horizon, add icmp and ssh rules to the security group and create an instance of the busybox image.

Detailed  Instructions

First install Ubuntu 13.04 in a Virtualbox VM. Add a host-only network adaptor to it: in the IPv4 Address field put 192.168.92.1 and in the IPv4 Network Mask field put 255.255.255.0. Add another host-only network adaptor: in the IPv4 Address field put 192.168.57.1 and in the IPv4 Network Mask field put 255.255.255.0. Make sure to give the VM an at least 40G dynamically growing hard disk. Now boot up the VM and follow the steps below. Connect using the terminal UI provided by Virtualbox and create a user/password called wso2/g.

Change /etc/network/interfaces as follows:

auto eth0
iface eth0 inet dhcp


auto eth1
iface eth1 inet static
address 192.168.92.30
network 192.168.92.0
netmask 255.255.255.0
broadcast 192.168.92.255


auto eth2
iface eth2 inet manual
up ifconfig eth2 192.168.57.30 up

Now reboot, and you can connect to the VM from a terminal using username wso2 and password g.

ssh wso2@192.168.57.30
Now from within this terminal execute

sudo apt-get update

Now, in order for Openstack/Docker to work correctly, we need a Linux kernel upgrade for Ubuntu:

sudo apt-get install linux-image-3.8.0-26-generic

Now restart the VM node.

sudo apt-get install git

git clone https://github.com/openstack-dev/devstack.git

cd devstack

git checkout stable/havana

Now we need to apply the following patches to the devstack scripts.

Apply patch

The first one is in file “devstack/tools/docker/install_docker.sh”, line 41:
install_package --force-yes lxc-docker=${DOCKER_PACKAGE_VERSION} socat
should be:
install_package --force-yes lxc-docker-${DOCKER_PACKAGE_VERSION} socat

The second one is in file “devstack/lib/nova_plugins/hypervisor-docker”, line 75:
if ! is_package_installed lxc-docker; then
should be:
if ! is_package_installed lxc-docker-${DOCKER_PACKAGE_VERSION}; then

Also add the following line in devstack/lib/nova_plugins/hypervisor-docker under the entry called # Defaults

DOCKER_PACKAGE_VERSION=0.7.6

Now we are supposed to execute ./tools/docker/install_docker.sh. But don't do it yet. In my case I got a permission error for /var/run/docker.sock and a curl download failure for the docker registry image when I executed it. I solved those two problems with the following steps.

Break the installer script into two scripts, called install_docker0.sh and install_docker1.sh.

My install_docker0.sh file can be downloaded from [1].

My install_docker1.sh can be downloaded from [1].

Now run the first script
./tools/docker/install_docker0.sh

Then add the wso2 user to the docker group. Here the username wso2 is the name you gave your Ubuntu account user.
sudo usermod -a -G docker wso2
Then change the permissions of /var/run/docker.sock:

sudo chown wso2:docker /var/run/docker.sock
Important: each time you restart the Virtualbox VM, make sure that the above permission for /var/run/docker.sock is set correctly. If it has changed, execute the above command and fix the permission before doing anything else.

Now run the second script
./tools/docker/install_docker1.sh
and
sudo service docker restart

cd files
curl -OR http://get.docker.io/images/openstack/docker-ut.tar.gz
docker import - docker-busybox < ./docker-ut.tar.gz
If a permission denied error occurs, execute the following command again.
sudo chown wso2:docker /var/run/docker.sock
Now
curl -OR http://get.docker.io/images/openstack/docker-registry.tar.gz (takes about 20 minutes on a 120 kB/s connection)
If the file transfer fails, continue with
curl -C - -o docker-registry.tar.gz 'http://get.docker.io/images/openstack/docker-registry.tar.gz'
Now import
docker import - docker-registry < ./docker-registry.tar.gz
 

So by now your Docker installation should be a success. Now we need to run the stack.sh script to set up Openstack. But before that, let's do the following.
Set ipv4 forwarding in /etc/sysctl.conf
net.ipv4.ip_forward = 1
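To apply the sysctl change without rebooting, you can reload the file (a standard sysctl invocation, added here for convenience):

sudo sysctl -p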
To set up the aufs file system, which is necessary for the docker driver:
sudo apt-get install lxc wget bsdtar curl
sudo apt-get install linux-image-extra-3.8.0-26-generic

sudo modprobe aufs

Add following three lines to /etc/rc.local
chown wso2:docker /var/run/docker.sock
modprobe aufs
sudo killall dnsmasq

Now create a file called localrc in the devstack folder and add the following content:

FLOATING_RANGE=192.168.92.0/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth1
ADMIN_PASSWORD=g
MYSQL_PASSWORD=g
RABBIT_PASSWORD=g
SERVICE_PASSWORD=g
SERVICE_TOKEN=g
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
VIRT_DRIVER=docker
SCREEN_LOGDIR=$DEST/logs/screen

Now execute stack.sh

./stack.sh
(takes about 1.5 hours on a 120 kB/s connection)

After stack.sh finishes successfully, execute docker images to check that our docker registry is still there (I remember once losing it by this point). If there are no images:
cd devstack/files
docker import - docker-registry < ./docker-registry.tar.gz

Now you need to patch /opt/stack/nova/nova/virt/docker/driver.py line 317
replace
destroy_disks=True):
with
destroy_disks=True, context=None):
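If you prefer to apply the change non-interactively, here is a hedged sed one-liner (check line 317 by hand first, since your checkout may differ slightly):

sudo sed -i 's/destroy_disks=True):/destroy_disks=True, context=None):/' /opt/stack/nova/nova/virt/docker/driver.py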
Now restart the node.

Again
cd devstack
./stack.sh

If you have followed the steps correctly you should have a working Openstack installation with the Docker driver. Log into the Horizon UI (http://192.168.57.30) using the admin or demo user; the password for those users is 'g' (as we set in our devstack/localrc file above). Create instances from the docker-busybox image that is uploaded by the default installation.
Don't forget to add icmp and ssh rules for the security group you use (by default this is the default group). Take a snapshot of this working state before you do any further playing with your setup.

If you restart the node:
cd devstack
./rejoin-stack.sh
to run the nova services. Or, if you need a clean Openstack environment after restarting the node, run stack.sh instead of rejoin-stack.sh. This time it won't take as long as the first time, only a few seconds.

Nova service logs are in /opt/stack/logs/screen folder.

If you run rejoin-stack.sh you can see each nova service log in the rejoin screen. To see a particular service log, press Ctrl+A and then ", then select the service log you need by moving with the up/down arrows and pressing Enter. You can scroll up and down the rejoin screen with Ctrl+A then Esc, and then use the up/down or PgUp/PgDn keys to scroll.

Note: for some reason eth1 (the flat interface) does not show its IP when rejoin-stack.sh is run. That does not prevent connecting to the Virtualbox VM, but sometimes problems do occur, and that's why you add the second interface, eth2.

My next blog, Docker Driver for Openstack Havana, will be about playing around with this setup: creating new customized images and securely accessing (ssh) the containers. I'll also deal with a bug fix for accessing the metadata service.

[1] https://www.dropbox.com/sh/dmmey60kvdihc31/3sbBkX7ns3

In this guide I explain how to install Openstack on a single physical node. I install the nova controller and a compute node on this node. The aim of this article is to get you started with the Openstack IaaS with minimum effort in a short period of time.

What you need

The steps below can be followed using one physical node. The node should possess two network interfaces; one of them can be a virtual one. I have tested this on Ubuntu 12.04 LTS 64-bit server. The memory and storage requirements of the node depend on how many virtual machines you run on Openstack once it is ready. For example, if you plan to run 10 virtual machines with 256MB memory and 5GB HD each, then you need at least 3GB of memory and a 60GB hard disk on the node. You also need an internet connection to download the necessary Openstack software.
Note that the installation described in this document is in no way production ready. You would need to do a lot of enhancements and feature additions to make it such.

Installation Steps

Step 1: Install Ubuntu server

Install Ubuntu server as you would do any normal installation. Please refer to the Ubuntu documentation for this. During the installation steps do the following.
- Create a user account on the host machine (say nova).
- Install openssh.
- Assign a hostname (say openstack). Assign a domain name (say demo.com).
- Assign a static IP (say 192.168.16.20).
- Give a gateway to access the internet (say 192.168.16.1). I assume here you have a wired connection to the internet. If instead you have a wireless connection you can let it connect to the internet using DHCP.
You can also do the above steps once the Ubuntu installation is finished, as shown below.

- Create a user account (say nova)

$ sudo /usr/sbin/adduser nova

- Install openssh

$ sudo apt-get install openssh-server(to ssh into instance)

- Assign a static IP by editing the /etc/network/interfaces file

auto eth0
iface eth0 inet static
address 192.168.16.20
netmask 255.255.252.0
gateway 192.168.16.1
auto eth1
iface eth1 inet manual
up ifconfig eth1 up

Then

$ sudo ifup eth0
$ sudo ifup eth1

- Assign the hostname and domain name by putting an entry in the /etc/hosts file, as in

192.168.16.20    openstack.demo.com    openstack

Step 2:

Log in using the nova account you created.
$ sudo apt-get update

Step 3:

Check out the installation scripts
$ sudo apt-get -y install git
$ git clone https://github.com/damitha23/openstack.git
$ cd openstack
$ unzip OpenStackInstaller.zip

Note that the contents of the OpenStackInstaller folder are scripts I took from https://github.com/uksysadmin/OpenStackInstaller.git, maintained by Kevin Jackson <kevin@linuxservices.co.uk> (https://twitter.com/#!/itarchitectkev, irc.freenode.org: uksysadmin).

Step 4: Install Openstack

$ cd /home/nova/OpenStackInstaller

Modify oscontrollerinstall.sh as per your requirements and execute it. It will take a couple of minutes to install Openstack.
Also modify OSinstall.sh to add the following configuration, which will go into nova.conf:

--rpc_response_timeout=<new timeout in seconds>

Give a sufficiently large response timeout to avoid timeout errors.
An example oscontrollerinstall.sh:

./OSinstall.sh -T all -C openstack.demo.com -F 192.168.16.128/25 -f 192.168.17.128/25 -s 126 -P eth0 -p eth1 -t demo -v kvm

Important: the virtualization type I used here is kvm.
Note that I use the -T all option since I install both the controller and a compute node on this server.
With the -C parameter we give the hostname of the node. You should have an entry in the /etc/hosts file for this, as follows.

192.168.16.20    openstack.demo.com    openstack

If your node IP changes regularly, it is a good idea to have the following kind of entry in the /etc/rc.local file, so that it automatically adds the host entry when the node boots up.

ip=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
echo $ip openstack.demo.com openstack >> /etc/hosts

Note that here the IP is taken from the eth0 interface. You may need adjustments.

With the -F parameter we give the floating IP range for the project.
With the -f parameter we give the fixed IP range for the project.
With the -s parameter we give the number of nodes in the private network.

I use eth1 as the private interface and eth0 as the public interface. For the public IPs (floating IPs) we should give a valid range from the network where the host machine took its IP. So a valid floating IP subnet would be 192.168.16.128/25; a /25 leaves 7 host bits, i.e. 2^7 = 128 addresses (126 usable hosts), which matches the -s 126 option above. You can calculate such a range with the subnet calculators in links [1] or [2].
A valid fixed IP subnet would be 192.168.17.128/25. Note that if the floating IPs are exhausted, then there will be errors and instances will not be created. To avoid this situation, make sure that you allocate at least as many floating IPs as fixed IPs. Now you can access the Openstack UI at http://openstack.demo.com using

Username:admin
Password:openstack

You may need to add a host entry on the node where your browser resides when using the above URL, as in

192.168.16.20    openstack.demo.com    openstack

Now you can manage your Openstack environment from the UI.

If one of your interfaces is a virtual interface (this could be the case when you are installing on a laptop), your install command could look like the following:

./OSinstall.sh -T all -C openstack.demo.com -F 192.168.16.128/25 -f 192.168.17.128/25 -s 126 -P eth0 -p eth0:0 -t demo -v kvm

Make sure eth0:0 is defined as follows:

auto eth0:0
iface eth0:0 inet manual

And make sure it is up by using
$ ifup eth0:0

Step 5: Upload an image

From this step on you can execute the commands as a normal user. I upload an Ubuntu image to glance. For a kvm virtual machine, download a base Ubuntu image, precise-server-cloudimg-amd64-disk1.img, from http://cloud-images.ubuntu.com/precise/current/, create a folder called /home/nova/upload and copy the image into it.

Modify /home/nova/OpenStackInstaller/uploadimage.sh and execute it to upload the image.

An example uploadimage.sh would be

./imageupload.sh -a admin -p openstack -t demo -C openstack.demo.com -x amd64 -y ubuntu -w 12.04 -z /root/upload/precise-server-cloudimg-amd64-disk1.img -n cloudimg-ubuntu-12.04

Here openstack.demo.com is the hostname of the openstack controller.
Execute

$ cd OpenStackInstaller
$ source ./demorc
$ nova image-list

command to see whether your newly uploaded image appears in the image list.

Step 6: Test the controller

$ cd OpenStackInstaller
$ source ./demorc

Now add a keypair. It is highly recommended that you use your own keypair when creating instances. For example, suppose you create an instance as a normal user using a keypair owned by the root user: you may succeed in creating the instance, but you will get a permission denied error when trying to ssh into it.

$ nova keypair-add wso2 > wso2.pem

Set permission for the private key

$ chmod 0600 wso2.pem

You can see the created key listed

$ nova keypair-list

Allow needed ports for the default security group.

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
$ nova secgroup-add-rule default tcp 443 443 0.0.0.0/0
$ nova secgroup-add-rule default tcp 3306 3306 0.0.0.0/0
$ nova secgroup-add-rule default tcp 8080 8080 0.0.0.0/0

Now list the images and select an image id to create an instance from it

$ nova image-list
$ nova boot --key_name=nova-key --flavor=1 --image=<image id> <instance name>

Instead of the above command, you can use the following command if you need to pass some user data into the instance you want to create.

$ nova boot --key_name=nova-key --flavor=1 --image=<image id> --user_data=/root/client/payload.zip <instance name>

Now see whether your instance is up and running, and look for the running instance's IP.

$ nova list
$ ssh -i wso2.pem ubuntu@ipaddress

If you can access the virtual machine instance then you have successfully created a controller with a compute node in it. Log into the nova MySQL database running on the controller machine and observe that there is a compute node entry in the compute_nodes table.

$ mysql -uroot -popenstack

Note that mysql password is defined in the OpenStackInstaller/OSinstall.sh file.

mysql>use nova
mysql>select id, created_at from compute_nodes;

You should see one compute node entry in the table. Now from your Openstack node you can start playing with creating and deleting new instances. You can monitor /var/log/nova/nova-compute.log to see the status of instance creation. You can create more and more instances until you see a short, undescriptive message that basically says your quota has been exceeded.

Some useful settings in the Openstack environment

In the following sections, some useful settings for the Openstack Nova environment are explained.

Adding a new VM resource type

You can add new resource types by

$ nova-manage flavor create --name=m1.wso2 --memory=128 --cpu=1 --root_gb=2 --ephemeral_gb=0 --flavor=6 --swap=0 --rxtx_factor=1

User data injection

From the Openstack Nova Essex release that ships with Ubuntu 12.04 LTS, instances created from cloud images are ready to get information such as user data, public IP, keys etc. from the metadata service. User data can be passed to the instance at startup like

$ nova boot --key_name=nova-key --flavor=1 --image=<image id> --user_data=/root/client/payload.zip <instance name>

At instance startup, nova copies the zip file into the instance as /var/lib/cloud/instance/user-data.txt.
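From inside a running instance, the same user data can also be fetched from the metadata service; a minimal sketch using the same metadata endpoint family shown in the next subsection:

$ wget http://169.254.169.254/latest/user-data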

Accessing Metadata information from within instances

We can get the public ip from the metadata server

$ wget http://169.254.169.254/latest/meta-data/public-ipv4

Now the public-ipv4 file contains the public IP.

Adding floating ip to instances

We can add floating IPs to the instances automatically when spawned, or later. For automatically assigning an IP when an instance spawns, add the following line into /etc/nova.conf and restart the nova services

--auto_assign_floating_ip

To add a floating ip first allocate one using the following command

$ nova floating-ip-create
$ nova add-floating-ip <instance id> <floating ip>
$ nova remove-floating-ip <instance id> <floating ip>

$ nova floating-ip-delete <floating ip>

To list the floating ips

$ nova floating-ip-list

Monitoring Openstack

To see how much memory an lxc container is using

$ cat /sys/fs/cgroup/memory/libvirt/lxc/instance-0000002d/memory.stat

and look at rss entries
or
$ cat /sys/fs/cgroup/memory/libvirt/lxc/instance-0000002d/memory.usage_in_bytes

In /sys/fs/cgroup/memory/libvirt/lxc/instance-0000002d/ folder you can see several other memory related files

Some of the other folders that contain files regarding resources are

./blkio/libvirt/lxc/instance-0000002d
./freezer/libvirt/lxc/instance-0000002d
./devices/libvirt/lxc/instance-0000002d
./memory/libvirt/lxc/instance-0000002d
./cpuacct/libvirt/lxc/instance-0000002d
./cpu/libvirt/lxc/instance-0000002d
./cpuset/libvirt/lxc/instance-0000002d

Troubleshooting

Cannot ping to the instance created

Make sure you have enabled icmp using the nova command-line tool:

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Cannot ssh to the instance

Make sure you have enabled tcp port

Using the nova command-line tool:

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

If you still cannot ping or SSH your instances after issuing the nova secgroup-add-rule commands, look at the number of dnsmasq processes that are running. If you have a running instance, check to see that TWO dnsmasq processes are running. If not, perform the following

as root:

$ sudo killall dnsmasq
$ sudo service nova-network restart

When installing nova Essex on a new box, a dpkg error occurs and then the mysql configuration takes a long time and fails

This happens when you forget to do an apt-get update before starting to install nova Essex. It could not be corrected without doing a fresh installation again.

Your applications deployed in instances cannot be accessed

Make sure you have enabled your application port.

Using the nova command-line tool:

$ nova secgroup-add-rule default tcp 8080 8080 0.0.0.0/0

Note that you need to replace 8080 above with the port on which your application is running.

Cannot shutdown the instance

Sometimes, even after the terminate command is executed on an instance, it is not terminated but goes to the SHUTOFF state. At such moments try restarting the nova services.

Error returned when creating the very first instance

Make sure that your public and private interfaces are up

Eg: sudo ifconfig eth1 up

Timeout: Timeout while waiting on RPC response

Sometimes when creating instances you get the response timeout error. The default request
timeout for nova is 60 seconds. To increase this, add the following entry to /etc/nova.conf and restart the nova services

--rpc_response_timeout=<new timeout in seconds>

Successfully added compute node but cannot create instances in that node

When instances are created in that node the instance state is in ERROR. In the compute node log we have

libvirtError: Unable to read from monitor: Connection reset by peer

To avoid this, make sure that you have commented out the following three entries in the compute node's /etc/nova.conf

#--novncproxy_base_url=http://192.168.16.20:6080/vnc_auto.html
#--vncserver_proxyclient_address=192.168.16.20
#--vncserver_listen=192.168.16.20

If not, comment them out and restart the nova services on the compute node.

Instances are not created

- Check whether both interfaces of the controller are up and all compute node interfaces are up. If not, bring them up and then restart the nova services.

Disaster Recovery

Nova instances can be rebooted using

$ nova reboot <instance id>

I noticed that when the node is restarted while some VMs are running, I could not ping those VMs after the restart. Rebooting a VM as above solved that. But then I could ping the instance yet the connection was refused when sshing to it. Then I cd'ed to OpenStackInstaller and executed

$ sudo restartservices.sh

You may need to run this command twice if you see any warning or error the first time. Then that problem is solved too.

References

[1] http://www.subnet-calculator.com/subnet.php?net_class=C
[2] http://jodies.de/ipcalc?host=192.168.25.10&mask1=22&mask2=

After reading this excellent article[1] on installing Openstack Essex in Virtualbox as an all-in-one setup, I thought of sharing my own experience of successfully setting it up, with some additional knowledge.

In the article, the virtual machine type used for the Openstack virtual machines is qemu, which uses software virtualization. That's why the setup could be demonstrated on Virtualbox, which is itself a virtual machine. But the virtual machine type I used is LXC (Linux Containers). LXC containers communicate directly with the kernel instead of going through a hypervisor, and therefore it is possible to run an LXC container inside a Virtualbox instance. Also note that, as the name implies, LXC only spawns Linux containers and does not support any other OS.

My setup is also different from the one described in the article in that I use two Virtualbox instances: one for the controller plus a compute node, and the other for a second compute node. So my setup consists of one controller and two compute nodes.

My host machine setup and Virtualbox version are similar to the ones described in the article.

Configuring the first VM is exactly as described in the article. For the second VM I added two more Host Only interfaces.

open File → Preferences → Network tab
Add a host-only network for vboxnet0 – this will be the Public interface

set IP to 172.16.0.254, mask 255.255.0.0, dhcp disabled

Add a host-only network for vboxnet1 – this will be the Private (VLAN) interface

set IP to 11.1.0.1, mask 255.255.0.0, dhcp disabled

open File → Preferences → Network tab
Add a host-only network for vboxnet2 – this will be the Public interface

set IP to 172.17.0.254, mask 255.255.0.0, dhcp disabled

Add a host-only network for vboxnet3 – this will be the Private (VLAN) interface

set IP to 12.1.0.1, mask 255.255.0.0, dhcp disabled

The second VM installation is very similar to the first VM installation as described in the article.

However, I will write down the changes in the network interfaces file (by the way, these are only IP changes).

Become root (from now till the end):
%sudo -i
Edit /etc/network/interfaces, make it look like this:


auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

#Public Interface
auto eth1
iface eth1 inet static

address 172.17.0.1
netmask 255.255.0.0
network 172.17.0.0
broadcast 172.17.255.255

#Private VLAN interface
auto eth2
iface eth2 inet manual

up ifconfig eth2 up

then run:

%ifup eth1 #after this, ifconfig shows inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
%ifup eth2 #after this, ifconfig doesn't report an ipv4 addr for eth2

or reboot

Verify reachability from your host PC

%ping 172.17.0.1

At this point I assume that you have set up both VMs, updated and upgraded them, installed openssh-server and git on them, and checked out Openstack Essex into the /root folder.

Installing the controller:

%sudo -i
%cd OpenStackInstaller

In the OSinstall.sh script, change all occurrences of nova-compute under NOVA_PACKAGES to nova-compute-lxc (there are 3 occurrences).

Then execute

%./OSinstall.sh -T all -F 172.16.1.0/24 -f 11.1.0.0/16 -s 512 -P eth1 -p eth2 -t demo -v lxc

Note that I use the -T all option since I install both the controller and a compute node on this server.

Then you need to upload an ubuntu image to glance. Download the image precise-server-cloudimg-amd64.tar.gz from http://cloud-images.ubuntu.com/precise/current/

Since this image name is different from what is expected in upload_ubuntu.sh (which is ubuntu-12.04-server-cloudimg-amd64.tar.gz), I think the easiest thing is to change the upload_ubuntu.sh file and replace the line

wget -O ${TMPAREA}/${TARBALL} http://uec-images.ubuntu.com/releases/${CODENAME}/release/${TARBALL}

with

cp -f <folder where you downloaded image>/precise-server-cloudimg-amd64.tar.gz ${TMPAREA}/${TARBALL}

then call

%./upload_ubuntu.sh -a admin -p openstack -t demo -C 172.16.0.1

to upload the image

Then create a keypair as described in the article and set the security group details

%euca-add-keypair demo > demo.pem
%chmod 0600 demo.pem

%euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
%euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
%euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
%euca-authorize default -P icmp -t -1:-1

Since you have a compute node on this server, you can now start creating instances after sourcing the demorc file with

%. demorc

or

%source ./demorc

You can verify that a compute node has been added by logging into the nova database and verifying that there is a compute node entry in the compute_nodes table.

%mysql -uroot -popenstack

mysql>use nova

mysql>select id, created_at from compute_nodes;
(Note that after adding our second compute node server later, you will see two entries in this table.)

Installing the compute node

Make sure that you give a host name for this server instead of the default hostname

%vi /etc/hostname and change the host entry say mycompute-node

Then reboot the machine

Now log into the controller node and add a host entry in the /etc/hosts file

172.17.0.1      mycompute-node

Now log into the compute node and

%sudo -i
%cd OpenStackInstaller
%./OSinstall.sh -T compute -C 172.16.0.1 -F 172.16.1.0/24 -f 11.1.0.0/16 -s 512 -P eth1 -p eth2 -t demo -v lxc

Now log into the mysql database in the controller again and verify that there are two entries in the compute_nodes table which means your new compute node is now active in the setup.

Now from your controller node’s /root/OpenStackInstaller folder you can start playing with creating/deleting your new instances

First list your uploaded image:

%euca-describe-images

Then, using the image id there, create a new instance

%euca-run-instances -k demo -n 1 -g default -t m1.tiny ami-00000001

You can list your created instances by

%nova list or

%euca-describe-instances

You can monitor /var/log/nova/nova-compute.log on both servers (controller node and compute node) to see the status of instance creation.

Now you can start creating more instances and verify that instances are created on both compute nodes, until you see a short, undescriptive message that basically says your quota has been exceeded.

Openstack has this error[2] when deleting lxc instances (qemu or kvm does not have this problem). If you are interested I can share a dirty hack until that problem is solved in the code base.

[1] http://www.tikalk.com/alm/blog/expreimenting-openstack-essex-ubuntu-1204-lts-under-virtualbox

[2] https://bugs.launchpad.net/nova/+bug/971621

I have added NTLM authentication support for Axis2/C recently.

I have added this support by writing a dynamically loadable library called axis2_ntlm which wraps an existing NTLM library. Currently it wraps an NTLM library called Heimdal [1].

However, one can write one's own wrapper for the external NTLM library of one's choice.

When using Heimdal, if I send the same messages to a server requiring NTLM authentication over different connections, I noticed that some authentication requests fail, with the server responding that the provided credentials are not valid, even when the provided credentials are perfectly valid. If I repeat the same request again it is authorized. This intermittent failure comes from Heimdal, because when linked with a wrapper for a different external library like libntlm[2] it works fine. It seems that Heimdal no longer actively supports its NTLM library, so I encourage people to use libntlm instead. I have attached the code for the libntlm wrapper for Axis2/C NTLM below[3] as a text file. You can also download this libntlm wrapper for Axis2/C at [4]. One can use this code to compile a wrapper for Axis2/C by studying how it is done for Heimdal. Note the additions to configure.ac and samples/configure.ac when it is done for Heimdal.

[1] http://www.h5l.org/

[2] http://josefsson.org/libntlm/

[3] http://damithakumarage.files.wordpress.com/2011/06/libntlm_wrapper-c.doc

[4] https://github.com/damitha23/libntlm-axis2c.git

Try Eclim

I have started using eclim as my Java, C and C++ development environment.

eclim is a project aimed at vim developers, incorporating the power of Eclipse into the vim editor.

eclim provides a host of features for Java development that can be used from within your vim environment.
Some of them are:

- Context-sensitive code completion
- Code correction suggestions, with the option to apply a suggestion
- Java source and javadoc searching capabilities

The above features, together with most of the essential features one would expect from a Java development environment, make eclim a strong development environment for vim users.

The only catch is that you need to start a daemon called eclimd in order to use eclipse features from within vim. But that’s OK with me.
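A minimal sketch of what this looks like in practice (paths and command details depend on your eclim/Eclipse installation, so treat this as an assumption and check the getting started guide mentioned below):

$ $ECLIPSE_HOME/eclimd          # start the headless Eclipse/eclim daemon
$ vim Hello.java                # then use vim as usual; from within vim, e.g.
:ProjectCreate /path/to/my_project -n java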

There is another way of using eclim: as a vi editor from within the Eclipse IDE. However, I don't like to play with monsters like Eclipse.

My way of doing things is to use the most familiar tool to get a job done. In that respect I like the Unix pipe philosophy, which emphasizes: use the best tool for the task at hand, and if you need to do more, plumb it together with other tools. One tool does just the thing it is intended to do, nothing more, nothing less.

See here for a set of eclim features. This getting started guide is the only document I followed to get my eclim environment running.
