
Archive for the ‘WSO2’ Category

In this guide I explain how to install OpenStack on a single physical node, with the Nova controller and a compute node on the same machine. The aim of this article is to get you started with OpenStack IaaS with minimum effort in a short period of time.

What you need

The steps below can be followed on one physical node. The node should possess two network interfaces; one of them can be a virtual interface. I have tested this on Ubuntu 12.04 LTS 64-bit server. The memory and storage requirements of the node depend on how many virtual machines you run on OpenStack once it is ready. For example, if you plan to run 10 virtual machines with 256 MB of memory and 5 GB of disk each, then you need at least 3 GB of memory and 60 GB of disk on the node. You also need an internet connection to download the necessary OpenStack software.
Note that the installation described in this document is in no way production ready. You would need a lot of enhancements and feature additions to make it so.
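As a sanity check, the sizing rule of thumb above can be computed with shell arithmetic. The VM numbers are the example from the text; the headroom figures for the host itself are my own assumptions:

```shell
# Rough capacity estimate for the node, based on the planned VM count.
VM_COUNT=10
VM_MEM_MB=256
VM_DISK_GB=5
HOST_MEM_MB=512                                # assumed headroom for the host OS and services
TOTAL_MEM_MB=$((VM_COUNT * VM_MEM_MB + HOST_MEM_MB))
TOTAL_DISK_GB=$((VM_COUNT * VM_DISK_GB + 10))  # +10 GB assumed for the base system
echo "Need at least ${TOTAL_MEM_MB} MB RAM and ${TOTAL_DISK_GB} GB disk"
```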

Installation Steps

Step 1: Install Ubuntu Server

Install Ubuntu Server as you would any normal installation; refer to the Ubuntu documentation for details. During the installation do the following.
– Create a user account on the host machine (say nova).
– Install OpenSSH.
– Assign a hostname (say openstack) and a domain name (say demo.com).
– Assign a static IP (say 192.168.16.20).
– Give a gateway to access the internet (say 192.168.16.1). I assume here you have a wired connection to the internet. If instead you have a wireless connection, you can let it connect using DHCP.
You can also do the above steps after the Ubuntu installation is finished, as follows.

– Create a user account (say nova)

$ sudo /usr/sbin/adduser nova

– Install OpenSSH (needed to ssh into instances)

$ sudo apt-get install openssh-server

– Assign a static IP by editing the /etc/network/interfaces file

auto eth0
iface eth0 inet static
address 192.168.16.20
netmask 255.255.252.0
gateway 192.168.16.1
auto eth1
iface eth1 inet manual
up ifconfig eth1 up

Then

$ sudo ifup eth0
$ sudo ifup eth1

– Assign the hostname and domain name by adding an entry to the /etc/hosts file, as in

192.168.16.20    openstack.demo.com    openstack

Step 2: Update the Package Index

Log in using the nova account you created.
$ sudo apt-get update

Step 3: Check Out the Installation Scripts
$ sudo apt-get -y install git
$ git clone https://github.com/damitha23/openstack.git
$ cd openstack
$ unzip OpenStackInstaller.zip

Note that the OpenStackInstaller folder contains scripts I took from https://github.com/uksysadmin/OpenStackInstaller.git, maintained by Kevin Jackson (kevin@linuxservices.co.uk, https://twitter.com/#!/itarchitectkev, uksysadmin on irc.freenode.org).

Step 4: Installing OpenStack

$ cd /home/nova/OpenStackInstaller

Modify oscontrollerinstall.sh as per your requirements and execute it. It will take a couple of minutes to install OpenStack.
Also modify OSinstall.sh to add the following configuration, which goes into nova.conf:

--rpc_response_timeout=<new timeout in seconds>

Give a sufficiently large response timeout to avoid timeout errors.
An example oscontrollerinstall.sh:

./OSinstall.sh -T all -C openstack.demo.com -F 192.168.16.128/25 -f 192.168.17.128/25 -s 126 -P eth0 -p eth1 -t demo -v kvm

Important: the virtualization type used here is kvm.
Note that I use the -T all option since I install both the controller and a compute node on this server.
With the -C parameter we give the hostname of the node. You should have an entry for it in the /etc/hosts file, as follows.

192.168.16.20    openstack.demo.com    openstack

If your node's IP changes regularly, it is a good idea to have the following kind of entry in the /etc/rc.local file, so that the host entry is added automatically when the node boots up:

ip=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
echo $ip openstack.demo.com openstack >> /etc/hosts

Note that here the IP is taken from the eth0 interface. You may need to adjust this.
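On newer systems the ifconfig output format differs (and ifconfig itself may be absent), so the extraction may need the ip tool instead. This is a sketch, demonstrated against a sample output line so the parsing is visible; the interface name is the same eth0 assumption as above:

```shell
# In /etc/rc.local you would use:
#   ip=$(ip -4 addr show eth0 | awk '/inet /{split($2, a, "/"); print a[1]}')
# Here the awk parsing is demonstrated on a sample `ip -4 addr` output line.
sample='    inet 192.168.16.20/22 brd 192.168.19.255 scope global eth0'
ip_addr=$(printf '%s\n' "$sample" | awk '/inet /{split($2, a, "/"); print a[1]}')
echo "$ip_addr"
```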

With the -F parameter we give the floating IP range for the project.
With the -f parameter we give the fixed IP range for the project.
With the -s parameter we give the number of nodes in the private network.
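The -s value used in the example follows from the prefix length: the usable host count of a subnet can be computed directly instead of with an online calculator.

```shell
# Usable hosts in a /25: 2^(32-25) addresses minus network and broadcast.
PREFIX=25
HOSTS=$(( (1 << (32 - PREFIX)) - 2 ))
echo "/$PREFIX gives $HOSTS usable addresses"
```

This matches the -s 126 passed to OSinstall.sh for the /25 ranges.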

I use eth1 as the private interface and eth0 as the public interface. For the public IPs (floating IPs) we should give a valid range from the network where the host machine took its IP, so a valid floating IP subnet would be 192.168.16.128/25. You can calculate such a range with the subnet calculators in links [1] or [2].
A valid fixed IP subnet would be 192.168.17.128/25. Note that if the floating IPs are exhausted, there will be errors and instances will not be created. To avoid this, make sure that you allocate at least as many floating IPs as fixed IPs. Now you can access the OpenStack UI at http://openstack.demo.com using

Username:admin
Password:openstack

You may need to add a host entry on the machine where your browser resides before using the above URL, as in

192.168.16.20    openstack.demo.com    openstack

Now you can manage your OpenStack environment from the web UI.

If one of your interfaces is a virtual interface (this could be the case when you are installing on a laptop), your install command could look like the following:

./OSinstall.sh -T all -C openstack.demo.com -F 192.168.16.128/25 -f 192.168.17.128/25 -s 126 -P eth0 -p eth0:0 -t demo -v kvm

Make sure eth0:0 is defined as follows:

auto eth0:0
iface eth0:0 inet manual

And make sure it is up by running
$ sudo ifup eth0:0

Step 5: Upload an Image

From this step on you can execute the commands as a normal user. I upload an Ubuntu image to Glance. For a KVM virtual machine, download the base Ubuntu image precise-server-cloudimg-amd64-disk1.img from http://cloud-images.ubuntu.com/precise/current/, create a folder called /home/nova/upload, and copy the image into it.

Modify /home/nova/OpenStackInstaller/imageupload.sh and execute it to upload the image.

An example imageupload.sh invocation would be

./imageupload.sh -a admin -p openstack -t demo -C openstack.demo.com -x amd64 -y ubuntu -w 12.04 -z /root/upload/precise-server-cloudimg-amd64-disk1.img -n cloudimg-ubuntu-12.04

Here openstack.demo.com is the hostname of the OpenStack controller.
Execute

$ cd OpenStackInstaller
$ source ./demorc
$ nova image-list

to see whether your newly uploaded image appears in the image list.

Step 6: Testing the Controller

$ cd OpenStackInstaller
$ source ./demorc

Now add a keypair. It is highly recommended that you use your own keypair when creating instances. For example, suppose you create an instance as a normal user using a keypair owned by the root user. You may succeed in creating the instance, but you will get a permission denied error when trying to ssh into it.

$ nova keypair-add wso2 > wso2.pem

Set permission for the private key

$ chmod 0600 wso2.pem

You can see the created key listed

$ nova keypair-list

Allow needed ports for the default security group.

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
$ nova secgroup-add-rule default tcp 443 443 0.0.0.0/0
$ nova secgroup-add-rule default tcp 3306 3306 0.0.0.0/0
$ nova secgroup-add-rule default tcp 8080 8080 0.0.0.0/0
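The repeated TCP rules above can also be generated with a loop (the port list is just the set used in this guide):

```shell
# Emit one secgroup rule per TCP port; pipe the output to sh to apply,
# or drop the echo to run the commands directly.
gen_rules() {
    for port in 22 80 443 3306 8080; do
        echo nova secgroup-add-rule default tcp "$port" "$port" 0.0.0.0/0
    done
}
gen_rules
```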

Now list the images and select an image id to create an instance from it

$ nova image-list
$ nova boot --key_name=nova-key --flavor=1 --image=<image id> <instance name>

Instead of the above command you can use the following command if you need to pass some user data into the instance you want to create.

$ nova boot --key_name=nova-key --flavor=1 --image=<image id> --user_data=/root/client/payload.zip <instance name>

Now see whether your instance is up and running. Look for the running instances ip.

$ nova list
$ ssh -i wso2.pem ubuntu@ipaddress

If you can access the virtual machine instance, you have successfully created a controller with a compute node in it. Log into the nova MySQL database running on the controller machine and observe that there is a compute node entry in the compute_nodes table.

$ mysql -uroot -popenstack

Note that mysql password is defined in the OpenStackInstaller/OSinstall.sh file.

mysql>use nova
mysql>select id, created_at from compute_nodes;

You should see one compute node entry in the table. Now, from your OpenStack node, you can start creating and deleting instances. You can monitor /var/log/nova/nova-compute.log to see the status of instance creation. You can create more and more instances until you see a short, undescriptive message that basically says your quota is exceeded.

Some useful settings in the OpenStack environment

In the following sections, some useful settings in the OpenStack Nova environment are explained.

Adding a new VM resource type

You can add new resource types (flavors) with:

$ nova-manage flavor create --name=m1.wso2 --memory=128 --cpu=1 --root_gb=2 --ephemeral_gb=0 --flavor=6 --swap=0 --rxtx_factor=1

User data injection

Since the OpenStack Nova Essex release that ships with Ubuntu 12.04 LTS, instances created from cloud images are ready to get information such as user data, the public IP, and keys from the metadata service. User data can be passed to the instance at startup like this:

$ nova boot --key_name=nova-key --flavor=1 --image=<image id> --user_data=/root/client/payload.zip <instance name>

At instance startup, Nova copies the zip file into the instance as /var/lib/cloud/instance/user-data.txt.

Accessing Metadata information from within instances

We can get the public ip from the metadata server

$ wget http://169.254.169.254/latest/meta-data/public-ipv4

Now the public-ipv4 file contains the public IP.
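Other values exposed by the metadata service follow the same URL pattern. A small helper makes this explicit; the key names are standard EC2-style metadata keys, and the fetch itself only works from inside an instance, so it is commented out here:

```shell
# Build the metadata URL for a given key.
METADATA_BASE="http://169.254.169.254/latest/meta-data"
meta_url() {
    echo "${METADATA_BASE}/$1"
}
meta_url public-ipv4
meta_url local-ipv4
meta_url instance-id
# wget -q -O - "$(meta_url public-ipv4)"   # run this from inside a VM
```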

Adding floating ip to instances

We can add floating IPs to instances automatically when they are spawned, or later. To automatically assign an IP when an instance spawns, add the following line to /etc/nova.conf and restart the Nova services:

--auto_assign_floating_ip

To add a floating IP manually, first allocate one using the following command, then associate it with an instance (and disassociate and release it when no longer needed):

$ nova floating-ip-create
$ nova add-floating-ip <instance id> <floating ip>
$ nova remove-floating-ip <instance id> <floating ip>

$ nova floating-ip-delete <floating ip>

To list the floating ips

$ nova floating-ip-list

Monitoring Openstack

To see how much memory an LXC container is using:

$ cat /sys/fs/cgroup/memory/libvirt/lxc/instance-0000002d/memory.stat

and look at the rss entries, or
$ cat /sys/fs/cgroup/memory/libvirt/lxc/instance-0000002d/memory.usage_in_bytes

In the /sys/fs/cgroup/memory/libvirt/lxc/instance-0000002d/ folder you can see several other memory-related files.

Some of the other folders that contain resource-related files are:

./blkio/libvirt/lxc/instance-0000002d
./freezer/libvirt/lxc/instance-0000002d
./devices/libvirt/lxc/instance-0000002d
./memory/libvirt/lxc/instance-0000002d
./cpuacct/libvirt/lxc/instance-0000002d
./cpu/libvirt/lxc/instance-0000002d
./cpuset/libvirt/lxc/instance-0000002d
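These checks can be wrapped in a small helper that reports the memory usage of every container under the cgroup tree. The base path is an assumption matching the layout above; adjust it if yours differs:

```shell
# Print the memory usage (bytes) of each libvirt LXC instance cgroup.
CGROUP_BASE=${CGROUP_BASE:-/sys/fs/cgroup/memory/libvirt/lxc}
lxc_mem_usage() {
    for dir in "$CGROUP_BASE"/instance-*; do
        [ -f "$dir/memory.usage_in_bytes" ] || continue
        printf '%s %s\n' "$(basename "$dir")" "$(cat "$dir/memory.usage_in_bytes")"
    done
}
lxc_mem_usage
```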

Troubleshooting

Cannot ping to the instance created

Make sure you have enabled icmp using the nova command-line tool:

$ nova secgroup-add-rule default icmp -1 -1 -s 0.0.0.0/0

Cannot ssh to the instance

Make sure you have enabled tcp port

Using the nova command-line tool:

$ nova secgroup-add-rule default tcp 22 22 -s 0.0.0.0/0

If you still cannot ping or SSH to your instances after issuing the nova secgroup-add-rule commands, look at the number of dnsmasq processes that are running. If you have a running instance, check that TWO dnsmasq processes are running. If not, perform the following as root:

$ sudo killall dnsmasq
$ sudo service nova-network restart

When installing Nova Essex on a new box, a dpkg error occurs and then the MySQL configuration takes a long time and fails

This happens when you forget to run apt-get update before starting to install Nova Essex. I could not recover from this without doing a fresh installation.

Your applications deployed in instances cannot be accessed

Make sure you have enabled your application port.

Using the nova command-line tool:

$ nova secgroup-add-rule default tcp 8080 8080 -s 0.0.0.0/0

Note that you need to replace 8080 above with the port your application is running on.

Cannot shutdown the instance

Sometimes, even after the terminate command is executed on an instance, it is not terminated but goes to the SHUTOFF state. In such cases, try restarting the Nova services.

Error returned when creating the very first instance

Make sure that your public and private interfaces are up.

E.g.: $ sudo ifconfig eth1 up

Timeout: Timeout while waiting on RPC response

Sometimes when creating instances you get a response timeout error. The default request timeout for Nova is 60 seconds. To increase it, add the following entry to /etc/nova.conf and restart the Nova services:

--rpc_response_timeout=<new timeout in seconds>

Successfully added a compute node but cannot create instances on it

When instances are created on that node, the instance state goes to ERROR, and the compute node log shows

libvirtError: Unable to read from monitor: Connection reset by peer

To avoid this, make sure that you have commented out the following three entries in the compute node's /etc/nova.conf:

#--novncproxy_base_url=http://192.168.16.20:6080/vnc_auto.html
#--vncserver_proxyclient_address=192.168.16.20
#--vncserver_listen=192.168.16.20

If they are not commented out, comment them out and restart the Nova services on the compute node.

Instances are not created

– Check whether both interfaces of the controller and all compute node interfaces are up. If not, bring them up and then restart the Nova services.

Disaster Recovery

Nova instances can be rebooted using

$ nova reboot <instance id>

I noticed that when the node is restarted while some VMs are running, I could not ping those VMs afterwards. Rebooting the VM as above solved that; I could then ping the instance, but the connection was refused when I tried to ssh into it. Then I cd'd into OpenStackInstaller and executed

$ sudo restartservices.sh

You may need to run this command twice if you see any warnings or errors the first time. That solved the problem.

References

[1] http://www.subnet-calculator.com/subnet.php?net_class=C
[2] http://jodies.de/ipcalc?host=192.168.25.10&mask1=22&mask2=


After reading this excellent article [1] on installing OpenStack Essex in VirtualBox as an all-in-one setup, I thought of sharing my own experience of successfully setting it up, with some additional knowledge.

In that article, the virtual machine type used for the OpenStack virtual machines is qemu, which uses software virtualization. That is why the setup could be demonstrated on VirtualBox, which is a virtual machine itself. The virtual machine type I used instead is LXC (Linux Containers). LXC containers communicate directly with the kernel instead of going through a hypervisor, which is why it is possible to run an LXC container inside a VirtualBox instance. Also note that, as the name implies, LXC only spawns Linux containers; it does not support any other OS.

My setup also differs from the one described in the article in that I use two VirtualBox instances: one for the controller plus a compute node, and the other for a second compute node. So my setup consists of one controller and two compute nodes.

My host machine setup and VirtualBox version are similar to the ones described in the article.

Configuring the first VM is exactly as described in the article. For the second VM I added two more host-only interfaces.

Open File → Preferences → Network tab.
Add a host-only network for vboxnet0 – this will be the Public interface.

Set IP to 172.16.0.254, mask 255.255.0.0, DHCP disabled.

Add a host-only network for vboxnet1 – this will be the Private (VLAN) interface.

Set IP to 11.1.0.1, mask 255.255.0.0, DHCP disabled.

Open File → Preferences → Network tab.
Add a host-only network for vboxnet2 – this will be the Public interface.

Set IP to 172.17.0.254, mask 255.255.0.0, DHCP disabled.

Add a host-only network for vboxnet3 – this will be the Private (VLAN) interface.

Set IP to 12.1.0.1, mask 255.255.0.0, DHCP disabled.

The second VM installation is very similar to the first VM installation as described in the article.

However, I will write down the changes to the network interfaces file (these are only IP changes).

Become root (from now till the end):
%sudo -i
Edit /etc/network/interfaces, make it look like this:


auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

#Public Interface
auto eth1
iface eth1 inet static

address 172.17.0.1
netmask 255.255.0.0
network 172.17.0.0
broadcast 172.17.255.255

#Private VLAN interface
auto eth2
iface eth2 inet manual

up ifconfig eth2 up

then run:

%ifup eth1 #after this, ifconfig shows inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
%ifup eth2 #after this, ifconfig doesn't report an ipv4 addr for eth2

or reboot

Verify reachability from your host PC

%ping 172.17.0.1

At this point I assume that you have set up both VMs, updated and upgraded them, installed openssh-server and git in them, and checked out the OpenStack Essex scripts into the /root folder.

Installing the controller:

%sudo -i
%cd OpenStackInstaller

In the OSinstall.sh script, change all occurrences of nova-compute under NOVA_PACKAGES to nova-compute-lxc (there are 3 occurrences).

Then execute

%./OSinstall.sh -T all -F 172.16.1.0/24 -f 11.1.0.0/16 -s 512 -P eth1 -p eth2 -t demo -v lxc

Note that I use the -T all option since I install both the controller and a compute node on this server.

Then you need to upload an Ubuntu image to Glance. Download the image precise-server-cloudimg-amd64.tar.gz from http://cloud-images.ubuntu.com/precise/current/

Since this image name differs from the one expected in upload_ubuntu.sh (which is ubuntu-12.04-server-cloudimg-amd64.tar.gz), I think the easiest thing is to edit upload_ubuntu.sh and replace the line

wget -O ${TMPAREA}/${TARBALL} http://uec-images.ubuntu.com/releases/${CODENAME}/release/${TARBALL}

with

cp -f <folder where you downloaded the image>/precise-server-cloudimg-amd64.tar.gz ${TMPAREA}/${TARBALL}

then call

%./upload_ubuntu.sh -a admin -p openstack -t demo -C 172.16.0.1

to upload the image
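That manual edit can also be scripted with sed. This is a sketch; IMAGE_DIR and the script location are assumptions, so point them at your own paths:

```shell
# Patch upload_ubuntu.sh so it copies a local tarball instead of downloading one.
IMAGE_DIR=${IMAGE_DIR:-/root/upload}
SCRIPT=${SCRIPT:-upload_ubuntu.sh}
patch_upload_script() {
    # Replace the wget line with a cp from the local download folder.
    sed -i "s|wget -O \${TMPAREA}/\${TARBALL} .*|cp -f $IMAGE_DIR/precise-server-cloudimg-amd64.tar.gz \${TMPAREA}/\${TARBALL}|" "$SCRIPT"
}
if [ -f "$SCRIPT" ]; then
    patch_upload_script
fi
```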

Then create a keypair as described in the article and set up the security group rules:

%euca-add-keypair demo > demo.pem
%chmod 0600 demo.pem

%euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
%euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
%euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
%euca-authorize default -P icmp -t -1:-1

Since you have a compute node on this server, you can now start creating instances after sourcing the demorc file with

%. demorc

or

%source ./demorc

You can verify that a compute node was added by logging into the nova database and checking that there is a compute node entry in the compute_nodes table.

%mysql -uroot -popenstack

mysql>use nova

mysql>select id, created_at from compute_nodes;

Note that after adding the second compute node server you will see two entries in this table.

Installing the compute node

Make sure that you give this server a hostname instead of the default:

%vi /etc/hostname and change the host entry, say to mycompute-node

Then reboot the machine

Now log into the controller node and add a host entry to the /etc/hosts file:

172.17.0.1      mycompute-node
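This host entry can be added idempotently, so repeating the step does no harm. The hostname and IP are the examples from above; writing to /etc/hosts requires root:

```shell
# Append the compute node entry to /etc/hosts only if it is not already there.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
add_host_entry() {
    entry="$1"
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
}
# Example (run as root):
# add_host_entry "172.17.0.1      mycompute-node"
```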

Now log into the compute node and run:

%sudo -i
%cd OpenStackInstaller
%./OSinstall.sh -T compute -C 172.16.0.1 -F 172.16.1.0/24 -f 11.1.0.0/16 -s 512 -P eth1 -p eth2 -t demo -v lxc

Now log into the MySQL database on the controller again and verify that there are two entries in the compute_nodes table, which means your new compute node is now active in the setup.

Now, from your controller node's /root/OpenStackInstaller folder, you can start creating and deleting instances.

First, list your uploaded images:

%euca-describe-images

Then, using the image ID from the listing, create a new instance:

%euca-run-instances -k demo -n 1 -g default -t m1.tiny ami-00000001

You can list your created instances with

%nova list

or

%euca-describe-instances

You can monitor /var/log/nova/nova-compute.log on both servers (controller node and compute node) to see the status of instance creation.

Now you can keep creating instances, and verify that they are created on both compute nodes, until you see a short, undescriptive message that basically says your quota is exceeded.

OpenStack has this bug [2] when deleting LXC instances (qemu and kvm do not have this problem). If you are interested, I can share a dirty hack until the problem is fixed in the code base.

[1] http://www.tikalk.com/alm/blog/expreimenting-openstack-essex-ubuntu-1204-lts-under-virtualbox

[2] https://bugs.launchpad.net/nova/+bug/971621


WSO2 ESB Performance

I uploaded an article [1] with the latest performance numbers for the WSO2 Enterprise Service Bus. It has been quite a long time since WSO2 last published its ESB performance numbers. During that time it released performance numbers on customer request for various scenarios.

WSO2 ESB has undergone various feature additions during that time. It is now based on WSO2 Carbon (with the ESB 2.x family of products). WSO2 Carbon [2] provides the development framework and the runtime environment for WSO2 products, including the ESB. I am not going to cover all the new features here; the WSO2 Oxygen Tank library is full of articles and tutorials written on WSO2 ESB. So I decided it was time to publish some performance numbers again.

In previous articles the performance was compared with various open source and commercial competitors. That was a time when WSO2 was searching for a niche in the middleware market. Now that WSO2 ESB has established its ground firmly in the middleware ESB market, it has changed its approach to publishing performance numbers: no comparisons with other vendors. We just publish our numbers and provide a performance test framework freely for our customers (or any interested user) to download and verify the results, and also to perform their own comparisons with the products of their choice. This performance testing framework will be available soon on the WSO2 Oxygen Tank.

This article takes a different approach from previous WSO2 ESB performance test articles in various aspects. I would like to discuss some of them here.

First, the way performance is presented is different. To quote from the article:

“For each scenario each message size load is generated with concurrency varying from 20 to 300(increasing by 40 at each stage). Then the maximum transactions per second(TPS) achieved during this concurrency range is recorded as the Transactions per second(TPS) for the corresponding scenarios message size.”

What does that mean? From the experience of previous performance tests I could see that WSO2 ESB performs best in the concurrency range from 20 to 300. So I decided that the best performance number for a particular message size within that range would be the best choice to present to the reader. My original plan was to break the results into several ranges, like 20-300, 300-900 and 900-1800, but that would have made the results overly complex. So I omitted the higher concurrency results from the article to avoid unnecessary clutter.

Besides, I wanted to capture results for a wider message size range, and I thought the concurrency range mentioned in the article was best suited for that purpose.

Also, my approach this time is to present the numbers in the best possible way, so that one can grasp the results from easy-to-read graphs. When you compare the graphs with those from round 3, this is easy to see.

Note that I have done the performance test without keep-alive enabled. HTTP keep-alive is great for improving performance, but it is not the best way to show that your product performs well. My understanding is that, to emulate how the server reacts to a real-world scenario of concurrent users, the best approach is to assume that each TCP connection represents a user. So my decision was to ignore how many requests each user makes on a particular connection.

In the article I have provided enough information to reproduce the performance test scenarios. I have not done any special tweaks to gain an extra bit of performance, other than the things mentioned in the article. Providing the exact scripts, message files, etc. so that somebody can immediately reproduce the results is a bonus. Yet WSO2 has planned something more than a bonus: it will soon upload to the Oxygen Tank a great open source performance test framework, which I sincerely hope will serve as a good platform for benchmark tests for many open source projects. It is fun to reproduce the performance numbers provided in the article using that tool.

[1] http://wso2.org/library/articles/2010/08/wso2-esb-performancenew

[2] http://wso2.com/products/carbon/


I have a proxy service deployed on my ESB server. This service verifies the signature of incoming messages and decrypts them before sending them to the target service. I send the messages to the ESB using WSO2 wsclient, which is bundled with WSO2 WSF/C. To sign the messages I use Alice's private key. To encrypt the messages I use the public key received from the ESB. (You can find Alice's sample keys bundled with the WSF/C samples; more on the ESB keys later in this article.)

To deploy the service I followed this procedure. I first created a simple pass-through service using the Add/Proxy Service menu, giving as the target server my WSAS instance running on a separate server. After that I selected the created proxy service and added security using the Sign and Encrypt option. I gave wso2carbon.jks as both the private and trusted key store, and I added Alice's public key to the wso2carbon.jks key store using the WSO2 ESB admin console.

Now that my services were ready, I wanted to use WSO2 wsclient (a command-line web services client tool) to access the service through the ESB. To learn more about how to use wsclient and how to secure your messages with it, please refer to [1] and [2]. To encrypt and sign messages, wsclient uses the server certificate in PEM format, given with the --recipient-certificate option. Usually I use wsclient to access web services deployed on an Apache2 server, so I knew how to generate server certificates in PEM format from PKCS key stores, but not how to generate PEM certificates from JKS key stores, and I could not find a direct way to do it. The following is how I did it using the Java keytool and openssl x509 commands.

keytool -export -file wso2carbon.cer -keystore /wso2carbon.jks -alias wso2carbon

In this step we create a wso2carbon.cer file from the wso2carbon.jks server keystore. Here you will be asked for the password of the keystore entry alias.

After that I executed the following command to create the recipient certificate in PEM format.

openssl x509 -out wso2carbon.pem -outform pem -in wso2carbon.cer -inform der

Now I could use the created PEM certificate to execute the following command to access the service:

./wsclient --log-level error --no-wsa --soap --no-mtom --sign-body --key /alice_key.pem --certificate /alice_cert.cert --recipient-certificate /wso2carbon.pem --encrypt-payload --policy-file ./policy.xml http://localhost:8280/services/SignEncProxy < ./data/POService.xml

 

[1] https://damithakumarage.wordpress.com/2008/10/04/access-secure-enabled-web-services-from-command-line/

[2] https://damithakumarage.wordpress.com/2010/05/25/using-wso2-wsclient-generate-your-custom-soap-messages-for-you/

 


There are many ways to write a web service and deploy it in the WSO2 WSAS application server environment. I have already explained how to deploy a POJO service using the Eclipse platform.
Here I'll explain in detail the top-down (contract-first) approach using the WSAS admin console; I don't use the Eclipse platform here. The WSO2 Oxygen Tank article "Deploying Web Services using Apache Axis2 Eclipse Plugins" explains using the Eclipse plugins to deploy web services with the contract-first approach.

I started by generating code for the PO WSDL.

I used the WSO2 WSAS admin UI to generate my service code. Select the WSDL2Java tool under the Tools menu. In the -uri option, select the WSDL from your filesystem and upload it. I selected the options -ss, -sd and -u. When you click Generate, it will generate the code and download it to your local file system as a zip file.

I unzipped this file and added my server code in src/org/wso2/carbon/core/services/po/POServiceSkeleton.java as

public org.wso2.carbon.core.services.po.BuyStocks buyStocks(
        org.wso2.carbon.core.services.po.BuyStocks buyStocks) {
    // Echo the request back as the response.
    return buyStocks;
}

Note that at the root of the unzipped folder there is a pom.xml file, so you can execute mvn to build the source. If you already have a Maven repository with the required jars, it is advisable to run mvn with the -o option so that Maven does not re-download jars that already exist in your repository.

When the build is completed you will have target/build/lib/POService.aar ready to be deployed in WSO2 WSAS.

I then uploaded this into WSAS as an Axis2 service. To do that, in the WSAS admin UI under the Services menu, select the Add Axis2 Service sub-menu. Then just browse to your aar file and click Upload. Your POService will be listed in the services list.



I wanted to try creating a web service from my POJO-style bean without going into too much Axis2 detail. I found this useful tutorial by Saminda.
However, although it was easy to get an Axis2 service deployed into the WSO2 WSAS server, I had to struggle a bit on the Ubuntu Karmic platform. Below I explain what happened.

The tutorial does not mention that you need Eclipse IDE for EE developers; you either need to use that or upgrade your Eclipse IDE for Java developers with the EE tooling. First I downloaded the EE version and tried it, but on my machine it had some stability issues and Eclipse crashed several times. So I installed the Eclipse Java developer version. Note that I had to install all the JST-related plugins, in addition to the EE plugin, to get this to work.

Once I had my Eclipse ready for J2EE, the rest of the tutorial went smoothly, and in less than a minute I could deploy my service in WSO2 WSAS and try it.


I’m very pleased to acknowledge the WSF/C++ release from WSO2. This fills a long-felt need in the WSO2 web services stacks for C/C++. As I pointed out in my article comparing WSF/C and gSOAP, as well as the article comparing WSF/C and RogueWave’s HydraExpress, the main minus point for the WSF/C stack was its lack of C++ API-level support. Even at the time I was comparing those stacks, WSF/C++ 1.0.0 was available, but without code generation support and server-side support. WSF/C++ is now a fully featured release with APIs for writing services as well as complete code generation support. So now I can speak of WSF/C++ as a complete web services stack for C/C++ web services development, with the added advantage of providing C++ support over the already feature-rich WSF/C stack.
While using the C++ APIs for writing web services, developers can always exploit the underlying WSF/C core APIs to their advantage.

