Monday, December 16, 2013

Downloading and running WSO2 Enterprise Service Bus (ESB)

WSO2 ESB is a fast and lightweight Enterprise Service Bus. An ESB is a middleware application that handles communication among components that need to exchange messages to get their work done. Instead of each component communicating with the others directly, every component talks to the ESB. The ESB then transforms messages into the required formats and routes them to their destinations.

WSO2 ESB is built on top of Apache Synapse and offers a rich feature set. In this guide, let's start off by downloading and running the WSO2 ESB.

Download the latest WSO2 ESB from http://wso2.com/products/enterprise-service-bus/

Extract the WSO2 ESB into a directory. I'll refer to this directory as ESB_HOME. Navigate into the bin directory and run the WSO2 ESB as follows.
$ ./wso2server.sh                         in Linux  or
> wso2server.bat                          in Windows
You will see some startup output in the console and, finally, the Management Console URL.
Open up your favorite browser and point it to the link.

https://localhost:9443/carbon

Note: You can only run one WSO2 product with its default settings. If you are running more than one product at a time, make sure you increment the <Offset> in ESB_HOME/repository/conf/carbon.xml

You may be prompted with a security exception in the browser, since the server ships with a self-signed certificate. Add the security exception. You will then be redirected to the ESB Management Console after logging in with the default user name and password, both of which are 'admin'.


Sunday, December 8, 2013

Writing Python extension modules in C

In this short guide, we shall look at how to develop an extension module for Python in C. This requirement can arise in a couple of ways.

You may need a Python module that has to be super-fast. Don't be misled by this statement: Python itself can be used to write fast algorithms and modules. But let's be a little realistic. Python isn't a compiled language like C; it's an interpreted language. Therefore, a hardcore algorithm written in C will easily surpass the performance of the same algorithm written in Python. In that sense, you might want to write the C program for the heavy lifting, and still use Python's ease to develop the rest of the application fast.

Another common reason to write an extension module is that you may already have a library written in C that fulfills your program's requirements. You can then use that code, with little modification to adapt it to the Python API, so that Python code can call the library's functions. After all, a Python extension module is nothing more than a C library. In Unix/Linux, dynamic libraries are Shared Object (.so) files, whereas in Windows they are usually referred to as Dynamic Link Libraries (.dll).

In this guide we will build a sample module in C and invoke it from Python. For this exercise, you need to have the Python interpreter along with its header files. In Linux, you can install the Python development package using,
$ sudo apt-get install python-dev
In Windows, the header files come with the binary installer package itself. Developing an extension module for Python involves three steps. Namely,
  1. A set of C functions that need to be invoked from Python 
  2. A table mapping Python method names to C functions 
  3. An initialization function 
Let me demonstrate with a sample module called mathC, which is going to have an add method, a complex-add method and a method to find the logarithm. Again, don't be misled: you don't need to write an extension module to find the logarithm of a value in Python. This is merely for demonstration purposes. The C program would be as follows.
#include <Python.h>
#include <math.h>

static PyObject *mathC_add(PyObject *self, PyObject *args) {
    double d1, d2 ;
    if(!PyArg_ParseTuple(args, "dd", &d1, &d2)) {
        return  NULL ;
    }

    return Py_BuildValue("d", d1 + d2) ;
}

static PyObject *mathC_addComplex(PyObject *self, PyObject *args, PyObject *kw) {
    char *kwlist[] = { "real1", "complex1", "real2", "complex2", NULL } ;
    double r1, r2=0, c1=0, c2=0 ;
    if(!PyArg_ParseTupleAndKeywords(args, kw, "d|ddd", kwlist, &r1, &c1, &r2, &c2)) {
        return  NULL ;
    }

    return Py_BuildValue("(dd)", (r1 + r2), (c1 + c2)) ;
}

static PyObject *mathC_log(PyObject *self, PyObject *args) {
    double d1 ;
    if(!PyArg_ParseTuple(args, "d", &d1)) {
        return  NULL ;
    }

    return Py_BuildValue("d", log(d1)) ;
}

static PyMethodDef mathC_methods[] = {
    { "add", (PyCFunction) mathC_add, METH_VARARGS, NULL },
    { "addComplex", (PyCFunction) mathC_addComplex, METH_VARARGS | METH_KEYWORDS, NULL },
    { "log", (PyCFunction) mathC_log, METH_VARARGS, NULL },
    { NULL, NULL, 0, NULL }
};

PyMODINIT_FUNC initmathC() {
    Py_InitModule3("mathC", mathC_methods, "My mathC extension module");
}
Read the code carefully. First of all, we need to include Python.h. That exposes the functions we need to communicate back and forth between Python and C. Next come three functions corresponding to the three methods we want to invoke from Python. Every function we want to expose must have one of the following three prototypes in C.
PyObject *funct(PyObject *self, PyObject *args);
PyObject *funcWithKeywords(PyObject *self, PyObject *args, PyObject *kw);
PyObject *functWithNoArgs(PyObject *self);
Function names can be anything we like, but it's conventional to use {moduleName}_{nameDescribingPurpose}. That's why the three functions are named mathC_add, mathC_addComplex and mathC_log. Normally, a function takes the first form above, as mathC_add does. Such a function accepts any number of arguments from Python, packed into args, which is the equivalent of a Python tuple.

Let's get into the implementation of mathC_add. I have declared two double variables for the two operands to be added. Next, we need to parse the arguments out of the tuple args. For that, the Python API provides a function named PyArg_ParseTuple. It accepts the PyObject * parameter args as its first argument. The second argument is a C-style format string; note that we don't use the % sign here, only the conversion character: 'd' for double, 'i' for integer, 's' for strings (char *) and so on. The function returns 0 when parsing fails, and in that case we return NULL from the C function so that the Python interpreter can raise an exception describing the error. Once the tuple is parsed successfully, we can do our operation, in our case adding the two variables. The value returned from C must be converted back into a Python object. For this, the Python API provides Py_BuildValue, which takes a format specifier for the return value and the actual value. Easy huh!!!

OK.. Now let's look into the second function, mathC_addComplex. This function is supposed to add two complex numbers. It takes four arguments, in order: the first complex number's real part, then its imaginary part, then the second complex number's real part and its imaginary part. Note that this function has the second prototype above. This is to support Python's keyword arguments. The acceptable keywords need to be specified in a NULL-terminated array like kwlist. Here, the Python interpreter can pass arguments with the keywords real1, complex1, real2 and complex2. I've declared four variables to capture them, initializing r2, c1 and c2 to zero and leaving r1 uninitialized. This is to force the Python programmer to pass at least one value, the real part of the first complex number. That enforcement actually happens in the format string of PyArg_ParseTupleAndKeywords, which is specified as "d|ddd": all specifiers to the right of | are optional, and all specifiers to the left of | are required. Note that PyArg_ParseTupleAndKeywords differs from PyArg_ParseTuple: it also needs the keyword list, and the PyObject *kw holding the keyword arguments is passed as its second argument. Finally, we return a tuple; see the format specifier in the return statement. "(dd)" says the tuple contains two values, both doubles. The last function, mathC_log, is self-explanatory; the only difference is that it uses the C math library's log function. Voila, all three functions are now ready to serve as extensions to Python. We are done with the first step of building an extension module.
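The "d|ddd" required/optional split behaves just like default parameter values in Python. Here is a pure-Python sketch of what mathC_addComplex does (the function and its defaults are mine, written only to mirror the C code):

```python
def add_complex(real1, complex1=0.0, real2=0.0, complex2=0.0):
    # real1 is required (left of the '|'); the rest default to 0.0,
    # just like the uninitialized r1 versus the zero-initialized c1, r2, c2
    return (real1 + real2, complex1 + complex2)

print(add_complex(2.0))                      # (2.0, 0.0) - only the required argument
print(add_complex(real1=1.0, complex2=3.0))  # (1.0, 3.0) - keywords, any order
```

Calling it with no arguments fails, exactly as the C version raises a TypeError when the required 'd' cannot be parsed.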

The second step is to specify the mapping table. This is an array, which can have any name we like, of the type PyMethodDef provided by the Python API. PyMethodDef is a structure of the following form.
 struct PyMethodDef {
  char *ml_name;
  PyCFunction ml_meth;
  int ml_flags;
  char *ml_doc;
 };
ml_name is the method name that we are going to use from Python. ml_meth is the corresponding C function. ml_flags identifies which of the three prototypes the C function uses: acceptable values are METH_VARARGS for the first prototype, METH_VARARGS | METH_KEYWORDS for the second and METH_NOARGS for the third. The final char * member is the method's docstring, which Python uses and which can be NULL. So to register our second function, we need an entry like the following.
{ "addComplex", (PyCFunction) mathC_addComplex, METH_VARARGS | METH_KEYWORDS, NULL }
The mathC_methods array above collects these entries, terminated by an entry of NULL values. Now the second step is done.

The last and final step involves exporting our initialization function, so that when this module is imported, the Python interpreter knows what methods are available. This initialization function needs to be named init{moduleName}; since our module is mathC, it is named initmathC. Note the PyMODINIT_FUNC macro. That's a platform-independent way to export the function so that it can be called from outside once we build this code into a dynamic library, .so or .dll. To initialize, we use the function Py_InitModule3 as follows. (Py_InitModule3 is the Python 2 API; this guide targets Python 2.)
Py_InitModule3("mathC", mathC_methods, "My mathC extension module");
This function accepts three arguments: the module name, the method mapping array and a docstring for the module. Phew!! all steps are done.

OK.. we are not done yet though. Next we need to build the library and test it by importing it. We can build the library in two ways. The first is the classic way of compiling by hand. In Linux/Unix we can build with the following gcc command. Of course, you need to have gcc, the C compiler, installed for that. If you don't have the required tools installed, see one of my previous posts, "Installing Developer Tools in Linux".
$ gcc -shared -I/usr/include/python2.7 mathC.c -o mathC.so -fPIC
In Windows, with the Visual C++ compiler installed, you can build the library into a DLL with the following command.
cl /LD /IC:\Python27\include mathC.c C:\Python27\libs\python27.lib
Note that this assumes Python 2.7 is installed (the Py_InitModule3 API used above is Python 2 only); change the paths to wherever the Python header files and libraries reside on your system.

Next, to import this module into Python code, your library either needs to be in the directory where your Python code is, or it must be in one of the sys.path directories. You can manually copy the library file to one of those directories.
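You can inspect the search path, and extend it at run time, from Python itself. A quick sketch (the appended directory is a hypothetical example, not a real path on your system):

```python
import sys

# Directories the interpreter searches, in order, when importing a module
for p in sys.path[:3]:
    print(p)

# A built mathC.so / mathC.dll becomes importable once its directory is on the path
sys.path.append("/home/user/mylibs")  # hypothetical directory
```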

But the second method saves you all this manual work. It uses Python's setup script with the help of the distutils package. Write the following setup script in a file called setup.py.
from distutils.core import setup, Extension
setup(name='mathC', version='1.0', ext_modules=[Extension('mathC', ['mathC.c'])])
It's all self-explanatory. You can now build and install the module using the following command.
$ python setup.py install
In Unix, you may need root access for this. In Windows, that's generally not a problem.
Now all is done. Let's check our extension module in Python. If you have not placed the library in one of the sys.path or site-packages directories, you may have to navigate to the directory where the library resides. If you installed it with setup.py as above, you are fine anywhere.
[shazni@wso2-ThinkPad-T530 cmath]$ python
Python 2.7.4 (default, Sep 26 2013, 03:20:26) 
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mathC
>>> mathC.add(5.4, 2.3)
7.7
>>> mathC.log(15.4)
2.7343675094195836
>>> mathC.addComplex(1.1, 1.2, 1.3, 1.4)
(2.4000000000000004, 2.5999999999999996)
>>> mathC.addComplex(1.1)
(1.1, 0.0)
>>> mathC.addComplex(real1=1.1, real2=1.2, complex1=1.3, complex2=1.4)
(2.3, 2.7)
Great!!! Everything works as expected. I hope you enjoyed learning how to develop an extension module. This is a great technique, which can make beautiful Python even more beautiful.

Saturday, November 23, 2013

Regular Expressions using Python

In this short guide, we shall introduce ourselves to a powerful technique called 'Regular Expressions' (a.k.a. regexp) in the context of a fascinating language, Python. The reason I chose to introduce regexp with Python is that it has a very concise and easy syntax to follow, so we don't swing away from our main focus on regexp. Further, we can experiment with regexp right in the Python interactive shell.

Many languages, including C++ (since C++11), Java, Perl, Ruby etc., support regexp in some form. There can be minor differences in their syntax, but what we describe here should apply to most languages to some extent.

Well, what is regexp and what is it used for? Simply put, it is a pattern matcher for text: it's used to find matching patterns in strings. This is useful in many contexts. Say you want to find all the files in a directory having your name in them; regexp can help you do that. Going further, if you want only the files having your name at the beginning, regexp lets you express that in a very short pattern string too. What if you want to scan through a log file for occurrences of certain string patterns? These are just a few useful applications of regexp.
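As a small illustration of the file-name use case (the file names below are made up for the example):

```python
import re

files = ["report_john.txt", "notes.md", "john_todo.txt", "summary_JOHN.txt"]

# Files containing 'john' anywhere in the name (matching is case-sensitive)
print([f for f in files if re.search(r"john", f)])
# ['report_john.txt', 'john_todo.txt']

# Files with 'john' at the beginning only
print([f for f in files if re.search(r"^john", f)])
# ['john_todo.txt']
```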

Following are some of the basic rules in regexp.
  • A string matches itself. e.g. abc would match in the string xyzabcqrs
  • . (period) is a general wildcard for a single character. e.g: AB.x would match AByx, ABdx and even AB.x
  • What if you want to match AB.x and not others like AByx or ABfx? The savior is the escape character '\'. e.g: AB\.x would only match AB.x  Note: In Python and in many other languages, \ is also used for escape sequences, such as \n for newline and \t for tab. To avoid clashing with those, we specify our regexps as raw strings, in which \ has no escape-sequence meaning. We'll see this shortly in the following examples, as a preceding character 'r'.
  • ^ says to look at the beginning of the text. But within square brackets, if it comes first, it negates the set of matching characters (see below)
  • $ says to look at the end of the string
  • Since . (period) represents a general wildcard for one character, .* means any number of such wildcards, including nothing (simply, zero or more)
  • .+ means one or more characters
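The rules above can be checked directly in the interpreter. A few quick assertions (the sample strings are invented for the example):

```python
import re

# A raw string r"AB\.x" matches only a literal period
assert re.search(r"AB\.x", "AB.x")
assert not re.search(r"AB\.x", "AByx")

# ^ and $ anchor the match to the beginning and end of the text
assert re.search(r"^abc", "abcdef")
assert not re.search(r"^abc", "xabc")
assert re.search(r"abc$", "xyzabc")

# .* matches zero or more characters, while .+ requires at least one
assert re.search(r"s.*s", "ss")
assert not re.search(r"s.+s", "ss")
```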
We are not going to write any Python scripts in files in this tutorial. Rather, we will use the Python interactive shell in Linux. In Windows, you may use IDLE (a simple Python IDE) provided with the Python installer package.

Following shows some of the above rules in action. Descriptions are stated inline.
[shazni@wso2-ThinkPad-T530 ~]$ python
Python 2.7.4 (default, Sep 26 2013, 03:20:26) 
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> # importing the regexp module called re in python. Note this is a comment
>>> import re

>>> # Defining a tuple of strings
>>> str=('sss', '#$%ssss^&*', 'as.su', 'q_%sssss', 'a\nut', '12sssTA', '^Atssy$')

>>> # Following finds all the strings having 'sss' character sequence
>>> a=filter((lambda str: re.search(r"sss", str)), str)
>>> print(a)
('sss', '#$%ssss^&*', 'q_%sssss', '12sssTA')

>>> # Note following uses 'match' instead of 'search'. This will only find string beginning with the 'sss'
>>> b=filter((lambda str: re.match(r"sss", str)), str)
>>> print(b)
('sss',)

>>> # We may also use the following to achieve the same. Here the character ^ specifies the pattern to match the specified string only in the beginning
>>> z=filter((lambda str: re.search(r"^sss", str)), str)
>>> print(z)
('sss',)

>>> # Following matches 'sss' only at the end. The $ special character does the trick 
>>> y=filter((lambda str: re.search(r"sss$", str)), str)
>>> print(y)
('sss', 'q_%sssss')

>>> # Period (.) is a general wildcard for a single character. e.g: AB.x would match AByx, ABdx and even AB.x. Therefore, the following returns all the strings having an 's', then any one character, and then another 's'
>>> c=filter((lambda str: re.search(r"s.s", str)), str)
>>> print(c)
('sss', '#$%ssss^&*', 'as.su', 'q_%sssss', '12sssTA')

>>> # Matches any string having s.s
>>> d=filter((lambda str: re.search(r"s\.s", str)), str)
>>> print(d)
('as.su',)

>>> # Matches any string having zero or more characters in between two 's' characters
>>> e=filter((lambda str: re.search(r"s.*s", str)), str)
>>> print(e)
('sss', '#$%ssss^&*', 'as.su', 'q_%sssss', '12sssTA', '^Atssy$')

>>> # Matches any string having one or more characters in between two 's' characters
>>> f=filter((lambda str: re.search(r"s.+s", str)), str)
>>> print(f)
('sss', '#$%ssss^&*', 'as.su', 'q_%sssss', '12sssTA')

>>> # Following matches anything having t
>>> g=filter((lambda str: re.search(r"t+", str)), str)
>>> print(g)
('a\nut', '^Atssy$')

>>> # Following matches an 's' character followed by one of the characters inside the square brackets
>>> t=filter((lambda str: re.search(r"s[.y]", str)), str)
>>> print(t)
('as.su', '^Atssy$')

>>> # Following matches an 's' character, followed by any character which is not '.' or 'y', and then followed by 'T'. What we need to know here is the ^ character inside the square brackets: it matches anything not in [] 
>>> u=filter((lambda str: re.search(r"s[^.y]T", str)), str)
>>> print(u)
('12sssTA',)

>>> # Following matches all the strings that don't have an 's'. The ^ at the beginning anchors to the start of the string and $ anchors to the end. In the middle, [^s] matches anything that is not 's', and * says any number of such characters. Altogether, this matches strings that contain no 's' at all  
>>> v=filter((lambda str: re.search(r"^[^s]*$", str)), str)
>>> print(v)
('a\nut',)

>>> # OK, now how do we match a ^ character at the beginning? The escape character comes to the rescue
>>> w=filter((lambda str: re.search(r"^\^", str)), str)
>>> print(w)
('^Atssy$',)

>>> # Following is a bit tricky. It says to match all the strings having a '^', excluding the ones having '^' at the beginning. 
>>> j=filter((lambda str: re.search(r"^[^\^].*\^", str)), str)
>>> print(j)
('#$%ssss^&*',)
OK. That's it for now. This guide is by no means complete; it's just a quick start. Now you may explore regexp capabilities in various languages, particularly in Python.

Sunday, November 17, 2013

Clustering WSO2 Governance Registry


Note: Before reading this guide, I recommend reading my previous blog post on "Setting up WSO2 Governance Registry Partitions", because the last few steps in this guide are similar to those in that post. You can still follow along without reading it, though.

In this guide we shall set up WSO2 Governance Registry as a cluster. Cluster??? Why is clustering a product important in the first place? Oh!!! That's a critical question, and it could well turn up in a Distributed Systems exam paper. Whatever the case, here are some simple reasons why clustering an application is useful, though these answers may not be suitable for an exam paper. :)
  • Clustering can be used to load-balance between nodes. This reduces the load each node needs to handle by dividing the work among them
  • Clustering enables the application to withstand sudden failures of one or more nodes; as long as at least one node is up and running, the service stays available, decreasing overall downtime and providing high availability  
  • Clustering enables scalability and can increase the overall performance of the application
OK.. enough of theory!!! Let's get down to clustering WSO2 Governance Registry as described below.

Here we are going to demonstrate the clustered setup with three WSO2 Governance Registry (GREG) nodes, a WSO2 Elastic Load Balancer (ELB) and a database for storage, as shown below.


We are going to set up the cluster on the local machine for testing, though the steps would be the same if each component (the ELB, the GREG nodes and the DB) resided on a different server, except for the appropriate IP address settings.

First, we shall configure the WSO2 ELB, which is going to sit in front of the three WSO2 GREG nodes. Load balancers, in general, help distribute requests among nodes. Download the WSO2 ELB from http://wso2.com/products/elastic-load-balancer/ and unzip it into a directory. I'll refer to the load balancer home directory as ELB_HOME.

Now open up ELB_HOME/repository/conf/loadbalancer.conf and replace the governance domain section as shown below.
    governance {
        hosts                   governance.cloud-test.wso2.com;
    #url_suffix         governance.wso2.com;
        domains   {
            wso2.governance.domain {
                tenant_range *;
                group_mgt_port 4321;
                mgt {
                    hosts governance.cluster.wso2.com;
                }
            }
        }
    }
Next we need to edit ELB_HOME/repository/conf/axis2/axis2.xml and uncomment localMemberHost to expose the ELB to the members of the cluster (the GREG nodes that we are going to configure) as shown below.
<parameter name="localMemberHost">127.0.0.1</parameter>
Also change the localMemberPort to 5000 as shown below.
<parameter name="localMemberPort">5000</parameter>
Next you need to update your system's hosts file with an entry like the following, a simple mapping between an IP address and a hostname.
127.0.0.1 governance.cluster.wso2.com
In Linux, the file is /etc/hosts and in Windows the file is %systemroot%\system32\drivers\etc\hosts (In Windows to view this file you may have to enable the system to show hidden files)

That's all for the WSO2 ELB. We can now start it by running the following command in ELB_HOME/bin.
$ ./wso2server.sh
Let's now download the latest WSO2 GREG from http://wso2.com/products/governance-registry/ and unzip it into a directory. Then copy the GREG home directory to two more places for the other two nodes. You can place all three nodes in the same directory and rename them by appending '_node1', '_node2' and '_node3'. Before we delve into configuring the WSO2 GREG nodes, we need to create the databases that will hold the shared data.

Let's now create the databases for the governance registry. You can use the in-built H2 database, but a database like MySQL is recommended for production systems, so we will use MySQL in this setup. If you don't have MySQL installed in Linux, see my previous post 'Installing MySQL in Linux'; Windows users can download and install the MySQL installer package, which is very straightforward. To create the MySQL databases for the governance registry and populate them with the schema, enter the following to go into the MySQL prompt.
$ mysql -u root -p
 
mysql> create database wso2governance;
Query OK, 1 row affected (0.04 sec)
mysql> create database wso2apim;
Query OK, 1 row affected (0.00 sec)
Next, exit from the MySQL prompt. Now we shall populate the databases we created with the schema. For that, enter the following commands, replacing GREG_HOME with the path to any of the WSO2 GREG directories.
$ mysql -u root -p wso2governance < GREG_HOME/dbscripts/mysql.sql
$ mysql -u root -p wso2apim < GREG_HOME/dbscripts/apimgt/mysql.sql
OK.. the databases are ready too. We can now configure our WSO2 GREG nodes. Repeat the following steps on all the GREG nodes; the differences between nodes are noted where necessary.

As a first step in clustering, we have to enable clustering in each GREG node. This is done by setting the enable attribute to 'true' in the following line of axis2.xml, found in the GREG_HOME/repository/conf/axis2 directory. This needs to be done in all three nodes.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
* Change the GREG_HOME/repository/conf/axis2/axis2.xml as shown below
<parameter name="membershipScheme">wka</parameter>
<parameter name="domain">wso2.governance.domain</parameter>
<parameter name="localMemberHost">governance.cluster.wso2.com</parameter>
<parameter name="localMemberPort">4250</parameter>
This joins the node to the cluster domain "wso2.governance.domain"; nodes with the same domain belong to the same clustering group. "wka" stands for Well-Known Address based membership. The "localMemberPort" needs to be different on the other nodes, say 4251 and 4252; this is the TCP port on which other nodes contact this node.

* In the <members> section, there needs to be a <member> tag with the details of the ELB, or multiple <member> tags if we have a cluster of ELBs. The following shows the ELB with port 4321, which is the group management port we configured in loadbalancer.conf on the ELB. Since we have only one ELB, there is only one <member> tag.
<members>
 <member>
     <hostName>127.0.0.1</hostName>
     <port>4321</port>
 </member>
</members>
* Next we need to set the HTTP and HTTPS proxy ports in GREG_HOME/repository/conf/tomcat/catalina-server.xml. Add a new attribute named proxyPort with the value 8280 for HTTP and 8243 for HTTPS, i.e. proxyPort="8280" and proxyPort="8243" in the two <Connector> tags, respectively.

* Next, since we are setting up our nodes and the ELB on the local machine, we need to avoid port conflicts. For this, set the <Offset> in GREG_HOME/repository/conf/carbon.xml to 1 in the first node, 2 in the second node and 3 in the third node. You can use any offsets you wish, but they have to differ if the nodes run on the same machine. If the nodes are on different servers, this need not be done.

* In carbon.xml we also need to update the "HostName" and "MgtHostName" elements as follows
<HostName>governance.cluster.wso2.com</HostName>
<MgtHostName>governance.cluster.wso2.com</MgtHostName>
* The WSO2 Carbon platform supports a deployment model in which nodes in a cluster act as one of two types: 'management' nodes, used for changing configurations and deploying artifacts, and 'worker' nodes, which process requests. But in WSO2 GREG there is no concept of request processing, so all nodes are considered 'management' nodes. In a clustered setup, we have to configure this explicitly in axis2.xml by changing the value attribute of the "subDomain" <property> to "mgt". After the change, the property tags look like the following.
<!--
Properties specific to this member
-->
<parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
</parameter>
* Next open up the GREG_HOME/repository/conf/user-mgt.xml file and update the "dataSource" property as follows, so that the user manager tables are created in "wso2governance":
<Property name="dataSource">jdbc/WSO2CarbonDB_mount</Property>
* Next we need to configure the shared 'config' and 'governance' registries, as covered in my previous blog post "Setting up WSO2 Governance Registry Partitions". We shall configure this in each GREG_HOME/repository/conf/registry.xml and GREG_HOME/repository/conf/datasources/master-datasources.xml. First, in registry.xml, we need to add the following additional configs.

Add a new <dbConfig>
<dbConfig name="wso2registry_mount">
    <dataSource>jdbc/WSO2CarbonDB_mount</dataSource>
</dbConfig>
Add a new <remoteInstance>
<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>wso2registry_mount</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>root@jdbc:mysql://localhost:3306/wso2governance</cacheId>
</remoteInstance>
Add the <mount> path for 'config' and 'governance'
<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
Next, in master-datasources.xml, add a new <datasource> as below. Do not change the existing <datasource> whose <name> is WSO2_CARBON_DB.
<datasource>
    <name>WSO2_CARBON_DB_MOUNT</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_mount</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2governance</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
As the final step, modify the <datasource> for WSO2AM_DB as below.
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
   <url>jdbc:mysql://localhost:3306/wso2apim?autoReconnect=true&amp;relaxAutoCommit=true</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
Now we can run each WSO2 GREG node by running the following in GREG_HOME/bin.
$ ./wso2server.sh
When all three nodes are up and running, the ELB console should print something similar to the following, indicating that the nodes have successfully joined the cluster.
[2013-11-12 16:52:29,260]  INFO - HazelcastGroupManagementAgent Member joined [271050b5-69d0-468d-9e9e-29557aa550ef]: governance.cluster.wso2.com/127.0.0.1:4252
[2013-11-12 16:52:32,314]  INFO - MemberUtils Added member: Host:127.0.0.1, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:52:32,315]  INFO - HazelcastGroupManagementAgent Application member Host:127.0.0.1, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:53:01,042]  INFO - TimeoutHandler This engine will expire all callbacks after : 86400 seconds, irrespective of the timeout action, after the specified or optional timeout
[2013-11-12 16:55:34,655]  INFO - HazelcastGroupManagementAgent Member joined [912fe7db-7334-467c-a6a7-e6929e652edb]: governance.cluster.wso2.com/127.0.0.1:4251
[2013-11-12 16:55:38,707]  INFO - MemberUtils Added member: Host:127.0.0.1, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:55:38,708]  INFO - HazelcastGroupManagementAgent Application member Host:127.0.0.1, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:57:05,136]  INFO - HazelcastGroupManagementAgent Member joined [cc177d58-1ec1-473e-899c-9f154303e606]: governance.cluster.wso2.com/127.0.0.1:4250
[2013-11-12 16:57:09,232]  INFO - MemberUtils Added member: Host:127.0.0.1, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:57:09,232]  INFO - HazelcastGroupManagementAgent Application member Host:127.0.0.1, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
Great!!! Now we may access the Management Console by browsing to https://governance.cluster.wso2.com:8243/carbon, and WSO2 GREG is now clustered. Every request that comes to the ELB will be distributed among the GREG nodes, providing scalability and high availability.

Friday, November 15, 2013

Installing and configuring phpMyAdmin in Linux

phpMyAdmin is a web-based graphical administration tool for the MySQL RDBMS, written in PHP. Using phpMyAdmin can ease your database administration tasks, so installing it on your machine is useful, and this short guide will walk you through setting it up on Linux. Installing it in Windows is just a matter of running the WAMP server installer package, which has everything, including the MySQL server, packaged into one bundle.

Even in Linux you may choose to use a bundle called LAMP, which is easy to install and has everything packaged together. But perhaps you want to install MySQL separately on a remote server, or you already have MySQL installed in your distribution; in such cases you would need to install phpMyAdmin separately as described below.

phpMyAdmin needs a MySQL database already installed. If you haven't got it installed, see my previous post on 'Installing MySQL in Linux'.

Assuming you have installed MySQL, run whichever of the following is applicable to your distribution to install phpMyAdmin.
$ sudo apt-get install phpmyadmin
$ sudo yum install phpMyAdmin
During the installation, you will be prompted for the web server that should serve phpMyAdmin. Select 'apache2'. You may also be asked to provide administration credentials; choose them as you wish and note them down, as they are required at login. You can also use existing MySQL user credentials to log in to phpMyAdmin.

The installation should start the Apache web server and copy phpmyadmin.conf into the /etc/apache2/conf.d directory. In certain distributions like Fedora, the apache2 directory is named httpd; replace the directory name appropriately. You can check by listing the contents of /etc/apache2/conf.d. If you don't find the file there, you can create it manually as follows and reload the Apache web server.
$ sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
$ sudo /etc/init.d/apache2 reload
or in RHEL/Fedora
$ sudo service httpd restart
Note: As per documentations, after Ubuntu 13.10 (Saucy Salamander), apache will load configuration files from /etc/apache2/conf-enabled/ directory instead of from /etc/apache2/conf.d/. Therefore if you are using Ubuntu 13.10 or later, do the above step for /etc/apache2/conf-enabled/ and reload the web server.

OK, that's it. Now you may start administering your MySQL database with phpMyAdmin. Open your favorite browser and navigate to http://<hostname>/phpmyadmin. If your Apache web server is on the local machine, replace <hostname> with localhost. Provide the username and password that you configured during installation, or use existing MySQL user credentials. You should then see the phpMyAdmin page for administration. Good luck using MySQL.


Monday, November 11, 2013

Enabling core dumps in Linux


It is useful to configure Linux systems to write core dumps when an application process crashes. It's no surprise that a badly written C/C++ application can silently crash and vanish. In such a case, to debug what has happened, we would like to view the stack trace of the application up to the point of the crash, which may give us a clue about what went wrong.

Certain Linux distributions, though not all, do not write core dumps to the file system by default. This is usually done to prevent applications from writing huge core dumps of hundreds of GB, which would eat up valuable disk space. Although enabling core dumps in production systems is not recommended, it is beneficial for developers to have them enabled on development machines.

The question is: how do we turn it on? It's quite simple.

Before that we shall check the current core dump limits. Enter the following command.
$ ulimit -c
0
You might possibly get the value zero. This means the system doesn't write any core files when your application crashes. Let's test this with a sample program which is set to crash intentionally. Type the following into a file named Crash.cpp.
#include <iostream>

using namespace std;

class Crash
{
public:
 Crash() {
     cout << "Constructing " << endl ;
     p = new int ;
 }
 ~Crash() {
     cout << "Destructing" << endl ;
     delete p ;
 }
private:
 int *p ;
};

int main()
{
 Crash *pCrash = new Crash ;
 delete pCrash ;
 delete pCrash ;
 return 0;
}

According to the program, the dynamically allocated object pointed to by pCrash is deleted twice, which makes the program crash at the second delete. Compile the program by entering the following.
$ g++ -o crash Crash.cpp
Now let's execute the program as below. You would see that the program crashed by looking at the terminal output of the stack trace. But the core file would be missing at the execution directory or in the configured directory (see below), since ulimit is 0.
 [shazni@wso2-ThinkPad-T530 Crash]$ ./crash
Constructing
Destructing
Destructing
*** Error in `./crash': free(): invalid pointer: 0x0000000000ddb020 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x80a46)[0x7fa5bd0bba46]
./crash[0x400b2f]
./crash[0x400a3a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fa5bd05cea5]
./crash[0x400929]
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:04 14552625                           /home/shazni/ProjectFiles/Test/NormalTest/Crash/crash
00600000-00601000 r--p 00000000 08:04 14552625                           /home/shazni/ProjectFiles/Test/NormalTest/Crash/crash
00601000-00602000 rw-p 00001000 08:04 14552625                           /home/shazni/ProjectFiles/Test/NormalTest/Crash/crash
00ddb000-00dfc000 rw-p 00000000 00:00 0                                  [heap]
7fa5bcd36000-7fa5bce39000 r-xp 00000000 08:03 4341456                    /lib/x86_64-linux-gnu/libm-2.17.so
7fa5bce39000-7fa5bd039000 ---p 00103000 08:03 4341456                    /lib/x86_64-linux-gnu/libm-2.17.so
7fa5bd039000-7fa5bd03a000 r--p 00103000 08:03 4341456                    /lib/x86_64-linux-gnu/libm-2.17.so
7fa5bd03a000-7fa5bd03b000 rw-p 00104000 08:03 4341456                    /lib/x86_64-linux-gnu/libm-2.17.so
7fa5bd03b000-7fa5bd1fa000 r-xp 00000000 08:03 4341459                    /lib/x86_64-linux-gnu/libc-2.17.so
7fa5bd1fa000-7fa5bd3f9000 ---p 001bf000 08:03 4341459                    /lib/x86_64-linux-gnu/libc-2.17.so
7fa5bd3f9000-7fa5bd3fd000 r--p 001be000 08:03 4341459                    /lib/x86_64-linux-gnu/libc-2.17.so
7fa5bd3fd000-7fa5bd3ff000 rw-p 001c2000 08:03 4341459                    /lib/x86_64-linux-gnu/libc-2.17.so
7fa5bd3ff000-7fa5bd404000 rw-p 00000000 00:00 0
7fa5bd404000-7fa5bd418000 r-xp 00000000 08:03 4329143                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fa5bd418000-7fa5bd618000 ---p 00014000 08:03 4329143                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fa5bd618000-7fa5bd619000 r--p 00014000 08:03 4329143                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fa5bd619000-7fa5bd61a000 rw-p 00015000 08:03 4329143                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7fa5bd61a000-7fa5bd6ff000 r-xp 00000000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17
7fa5bd6ff000-7fa5bd8fe000 ---p 000e5000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17
7fa5bd8fe000-7fa5bd906000 r--p 000e4000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17
7fa5bd906000-7fa5bd908000 rw-p 000ec000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17
7fa5bd908000-7fa5bd91d000 rw-p 00000000 00:00 0
7fa5bd91d000-7fa5bd940000 r-xp 00000000 08:03 4330760                    /lib/x86_64-linux-gnu/ld-2.17.so
7fa5bdb18000-7fa5bdb1d000 rw-p 00000000 00:00 0
7fa5bdb3b000-7fa5bdb3f000 rw-p 00000000 00:00 0
7fa5bdb3f000-7fa5bdb40000 r--p 00022000 08:03 4330760                    /lib/x86_64-linux-gnu/ld-2.17.so
7fa5bdb40000-7fa5bdb42000 rw-p 00023000 08:03 4330760                    /lib/x86_64-linux-gnu/ld-2.17.so
7fffcccd9000-7fffcccfa000 rw-p 00000000 00:00 0                          [stack]
7fffccd66000-7fffccd68000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
Aborted
If you see a nonzero value instead, that is the maximum core file size that is permitted. If the dump is less than that size, you get the full dump; otherwise the dump is truncated, which makes it useless. Core files may not be truncated to the exact ulimit value, since the ulimit we set is a soft limit.

To set a limit to the ulimit, issue the following command
$ ulimit -c 45  # sets the limit to 45 blocks (block size depends on the shell; commonly 512 or 1024 bytes)
To set the limit to an unlimited amount, issue the following command
$ ulimit -c unlimited
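Since the value set this way is only a soft limit, it can help to inspect both the soft and hard limits; the hard limit is the ceiling up to which an unprivileged user may raise the soft one:

```shell
ulimit -S -c   # soft core-file limit: the value actually enforced
ulimit -H -c   # hard core-file limit: the ceiling for the soft limit
```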
Now, execute the above program. You would see the same output, but there will be an additional file named 'core' in the current directory (if you still don't find it, read ahead), which is the kernel dump of the process. OK, are we done yet? You have probably guessed 'No', because you can see a lot more written below. Good guess!
The problem here is that the setting you made is only temporary for the current terminal session. If you open another terminal and check the ulimit, it will still be the initial value (probably zero) that you saw earlier. We need our settings to persist across sessions and reboots. That brings the profile files to mind: either .bashrc in your home directory, if you want the setting to affect only the current user, or /etc/profile if you want it to affect all users. I'll use my .bashrc for this.
$ vi ~/.bashrc
Add the following line to the end of your profile.
ulimit -c unlimited >/dev/null 2>&1
This sets the core file size limit to unlimited and discards anything that would be printed to the screen. You may need to source the .bashrc file with the command below, or close the terminal and start over. Now, in all terminals you open under your user account, ulimit should show unlimited.
$ source ~/.bashrc
OK, now even if you reboot the system and run the above program, you should get a core file created (if you still don't find it, read ahead). Well, we can be a little more fancy as well: we can have a pattern for the core file name. Currently the core dump is named 'core', unless someone has changed the pattern in the file /proc/sys/kernel/core_pattern. This is where you specify where the core dump is written and its file name pattern. If you didn't see the core dump earlier in the directory where the program executable exists, look at the path in this file. In my Ubuntu, it is 'core' as shown below.
$ cat /proc/sys/kernel/core_pattern
core
We can have certain information embedded in the core file name itself. You may edit this file directly, or you can edit /etc/sysctl.conf. I'll edit the configuration file, since I can then add more system parameters if needed.
$ sudo vi /etc/sysctl.conf
And add the following to the end of the file
#core file settings
kernel.core_uses_pid=1
kernel.core_pattern=core.%e.%s.%p.%t
fs.suid_dumpable=2
The first line will enable addition of the process id to the core file name.
The more interesting line is the second one: the core file pattern. Each letter following a % sign has a special meaning. Some of the possible values and their meanings are as follows.

%e - Application name
%s - Signal number that caused the crash
%p - Process ID of the application when it crashed
%t - Time at which the dump is written (in seconds from the Unix epoch)
%u - Real UID (User ID) of the dumped process
%g - Real GID (Group ID) of the dumped process
%h - Hostname
%% - A literal % sign

Unix/Linux systems provide a facility to run a process under a different user id or group id than the user/group starting it, by setting the setuid and setgid bits. Such processes may deal with sensitive data that should not be accessible to the invoking user/group. Since core dumps may expose such internal sensitive data, Unix/Linux systems by default do not write core dumps when such processes crash. If we still want core dumps to be written in that case, we need to set the third line.

To make the settings effective, enter the following command.
$ sudo sysctl -p
Now if you run the above application with the above pattern, you would get a core file named something like core.crash.6.18056.1384147041
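Each field of such a file name can be read off against the pattern, and the %t field is just a Unix timestamp, so it can be decoded with the date command. For example, for the file name above:

```shell
# core.crash.6.18056.1384147041 decodes as:
#   %e=crash (program), %s=6 (SIGABRT), %p=18056 (PID), %t=1384147041 (epoch seconds)
date -u -d @1384147041 +%Y-%m-%d   # → 2013-11-11
```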

OK, going a little further now. We can permit core files to be written by all the applications in your system, including daemons. In Red Hat based distributions like RHEL or Fedora, this can be achieved by adding the following line to /etc/sysconfig/init.
DAEMON_COREFILE_LIMIT='unlimited'
You may need to restart the system for this setting to take effect.

If instead you want to allow only certain daemons to write dumps, in Red Hat based distributions, add the following line to /etc/init.d/functions unless it's already there.
# make sure it doesn't core dump anywhere unless it's requested
corelimit="ulimit -S -c ${DAEMON_COREFILE_LIMIT:-0}"
Now add the following line to the init script of your daemon in /etc/init.d/{your service}
DAEMON_COREFILE_LIMIT='unlimited'
Since this is a Red Hat way of setting the core file limit, in Ubuntu you need to add the following lines instead to the service's init script in /etc/init.d
ulimit -c unlimited >/dev/null 2>&1
echo /tmp/core.%e.%s.%p.%t > /proc/sys/kernel/core_pattern
Now start your service by invoking the following command
$ sudo service {your service script} start
Then find your service's process id using
$ ps aux | grep {your service}
And let's kill it
$ sudo kill -s SIGSEGV {Your services' PID}
Now if you go ahead and look in /tmp, you should find a core dump for your service.

Great!!! Here we are. Now we have set up our Linux box to write core dumps, and we can debug any application crashes.
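The whole workflow above can be exercised end to end with a short script. The snippet below is a sketch (the /tmp paths are illustrative, and it assumes gcc on a glibc-based system): it recreates the same class of bug in C, a double free, and shows the nonzero exit status of the aborted process, which is what produces the core file when ulimit permits it:

```shell
# write a minimal double-free program, compile it, run it, and report its exit status
cat > /tmp/double_free.c <<'EOF'
#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof *p);
    free(p);
    free(p);   /* second free of the same pointer: glibc detects this and aborts */
    return 0;
}
EOF
gcc -g -o /tmp/double_free /tmp/double_free.c
/tmp/double_free 2>/dev/null
echo "exit status: $?"   # nonzero; 134 would mean killed by SIGABRT (128 + 6)
```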

Sunday, October 27, 2013

Setting up WSO2 Governance Registry Partitions

In this brief guide, I'll walk you through setting up a WSO2 Governance Registry instance on a remote machine, and another local setup (the node) that shares registries with the remote instance. I'm going to set up the 'remote' instance locally as well, so that it's easy for you to try this on a single machine (the procedure is mostly the same for an actual remote setup).

As you may know, WSO2 Governance registry is separated into three sub registries;
  •     Config registry - Stores the configurations and related information of a product
  •     Governance registry - Stores governance artifacts and other related information
  •     Local registry - Stores information that is local to a node in a cluster
The local registry, as its name implies, has to be local to an instance. The other two registries can be shared.
To start off experimenting with this setup, go ahead and download the latest stable WSO2 Governance Registry from http://wso2.com/products/governance-registry/. After downloading, extract the zip file to a location of your choice, and copy the entire folder to another location to act as the shared instance.

To avoid confusion, let me rename the directory of the shared instance to wso2greg-4.6.0_remote. The version number of the extracted zip file may differ at the time of your download; change the procedure accordingly to reflect the version you have. Let the local instance directory be wso2greg-4.6.0. I've placed the two directories in my home directory, so they are $HOME/wso2greg-4.6.0 and $HOME/wso2greg-4.6.0_remote. I'll refer to these two directories as GREG_HOME in their respective contexts.

WSO2 Governance Registry uses the inbuilt H2 database to store data related to the above three registries. In a production environment, however, it's recommended to use another RDBMS such as MySQL as the registry DB. Therefore, in this tutorial, we shall use MySQL as the DB for the remote shared partition.

To install the MySQL database, refer to my earlier blog post "Installing MySQL in Linux".

Assuming you have MySQL installed and running, you can now create a database named registrydb (the name can be anything you like) in MySQL as follows,

mysql> create database registrydb;
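Optionally, rather than connecting as root as this guide does, you could create a dedicated MySQL user for the registry database at the same prompt. The regadmin name and password below are illustrative examples, not values from this guide; this uses the MySQL 5.x GRANT syntax:

```sql
-- create a dedicated user and give it full rights on registrydb (names are examples)
GRANT ALL PRIVILEGES ON registrydb.* TO 'regadmin'@'localhost' IDENTIFIED BY 'regadmin';
FLUSH PRIVILEGES;
```

If you create such a user, use its credentials in the <username> and <password> elements of the datasource configuration instead of root/root.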

Let's first setup our remote shared instance (i.e. the directory named wso2greg-4.6.0_remote)

In fact, there is not much to do in this setup except modifying the configuration file named master-datasources.xml, which you can find in GREG_HOME/repository/conf/datasources/, to use MySQL instead of the built-in H2 database. Change the <datasource> of WSO2_CARBON_DB as shown below.

        <datasource>
            <name>WSO2_CARBON_DB</name>
            <description>The datasource used for registry and user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2CarbonDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/registrydb</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>

Note the <url>. You may use "localhost" instead of the loopback IP address 127.0.0.1. Further, change the port if your MySQL runs on a different one (3306 is the default MySQL port). The registrydb part of the URL indicates the database we created earlier.

We need to copy the MySQL JDBC driver (a .jar file), which you can download from http://www.mysql.com/products/connector/, into GREG_HOME/repository/components/lib/. This step needs to be done on the other node as well.

We can now start this instance of the WSO2 Governance registry by issuing the following command from GREG_HOME/bin

$ ./wso2server.sh -Dsetup

Here the -Dsetup option is to populate the created 'registrydb' in MySQL. We can now access the management console by navigating to the https://localhost:9443/carbon/ URL from your browser.

Now we can configure the other node (our local instance that's going to share the 'Config' and 'Governance' registries of the remote partition we have already started). What needs to happen is to inform this node about the remote 'Config' and 'Governance' registries that we intend to use. In WSO2 Governance Registry this is done by properly configuring the registry.xml and master-datasources.xml configuration files.

Let's configure the registry.xml file first. We need to tell this node that we are going to use remote 'Config' and 'Governance' registries. This is done by adding the following to the registry.xml file.

    <mount path="/_system/config" overwrite="true">
        <instanceId>mountInstance</instanceId>
        <targetPath>/_system/config</targetPath>
    </mount>

    <mount path="/_system/governance" overwrite="true">
        <instanceId>mountInstance</instanceId>
        <targetPath>/_system/governance</targetPath>
    </mount>

Further, we need a way to specify the mount instance location. That's done by adding the following to registry.xml.

    <remoteInstance url="https://x.x.x.x:9443/registry">
        <id>mountInstance</id>
        <dbConfig>mountregistry</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
        <cacheId>root@jdbc:mysql:@127.0.0.1:3306:registrydb</cacheId>
    </remoteInstance> 

Here x.x.x.x is the IP address of the remote partition. Since the remote instance we already started is on localhost, x.x.x.x = 127.0.0.1 (the loopback IP address). If you are setting this up as a true remote setup, change the IP address accordingly. Note that the <id> tag matches the <instanceId> tag in the <mount> we configured earlier.

Now the registry partition location has been specified. Next we have to say where the database of the remote instance resides; that database could be at any reachable location in the network. It is configured in the master-datasources.xml file. Before that, we have to make one more addition to the registry.xml file, listed below.

    <dbConfig name="mountregistry">
        <dataSource>jdbc/WSO2MountRegistryDB</dataSource>
    </dbConfig>

"jdbc/WSO2MountRegistryDB" is a reference to the relevant <datasource> that we will configure next in the master-datasources.xml file. Now let's move to master-datasources.xml. There we keep the default H2 database for the local registry. Additionally, we have to specify the DB instance where the 'Config' and 'Governance' registries reside. Add the following <datasource> in master-datasources.xml to achieve that.

        <datasource>
            <name>WSO2_MOUNT_REGISTRY_DB</name>
            <description>The datasource used for registry and user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2MountRegistryDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/registrydb</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>120</maxActive>
                    <maxWait>900000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1 FROM DUAL</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>

I guess we are all set, except for one thing. Since we are going to run both the remote and the local setup on the same machine, if you run the currently configured node you would see an exception thrown in the console. The reason is that we can't run two WSO2 products at the same time with default settings, since both would attempt to use the same default port 9443. To avoid the conflict, we should set an offset for the next server in carbon.xml in GREG_HOME/repository/conf. Open carbon.xml and change the value of <Offset> to 1. This makes this instance start with port 9444. If you are running this setup on two different machines you may not need this change, though doing so would do no harm either.
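For reference, the relevant element in carbon.xml looks roughly like this (surrounding content omitted; only the Offset value changes):

```xml
<Ports>
    <!-- every default Carbon port is shifted by this value, e.g. 9443 + 1 = 9444 -->
    <Offset>1</Offset>
</Ports>
```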

Phew!!! All done. Now we start this instance the same way we started the earlier remote instance. In the console you should see the following three lines among others, indicating that the 'Config' and 'Governance' registries have been mounted via 'mountregistry'.

 [2013-10-21 16:00:46,379]  INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} -  Configured Registry in 116ms
 [2013-10-21 16:00:46,408]  INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} -  Connected to mount at mountregistry in 3ms
 [2013-10-21 16:00:46,606]  INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} -  Connected to mount at mountregistry in 0ms

You may access the Management Console by browsing to https://localhost:9444/carbon/. In the Management Console, click 'Browse' and expand _system in the tree view. You should see a blue arrow in front of the 'config' and 'governance' nodes, indicating that they are mounted from a remote partition.


Saturday, October 19, 2013

Downloading and running WSO2 Governance Registry


WSO2 Governance Registry is a fully open source, Service Oriented Architecture (SOA) integrated registry for storing and managing artifact metadata.

In this short guide, I'll walk you through downloading WSO2 Governance Registry, starting it on your machine, and accessing it with its built-in web-based management console (a web user interface) for managing the metadata.

The process is just simple. Download the latest release binary zip file from http://wso2.com/products/governance-registry/

Once you download the zipped archive, extract it to a directory of your liking. Let's assume the extracted directory is $(HOME_DIRECTORY)/wso2greg-4.5.3/, and let me call this directory $(GREG_HOME). You must have Java installed and JAVA_HOME set to successfully start the product. See my previous post "Installing Developer Tools in Linux" for how to install and configure Java in Linux.

Navigate to $(GREG_HOME)/bin and execute the following to run the WSO2 Governance Registry.
 $ ./wso2server.sh                                (In Linux)
 > wso2server.bat                                 (In Windows)
This will start the WSO2 Governance Registry with all the default configurations, and you should see output in the console. At the end of the output you should see a line like the following.
Mgt Console URL  : https://192.168.2.1:9443/carbon/
Copy and paste the above URL into your favorite browser. You should see the WSO2 Governance Registry management console login screen. You can use the default admin credentials, with both the username and the password being 'admin', to access the registry. Once in, you are ready to store and manage different artifacts in WSO2 Governance Registry.



Good Luck in using the product.

Thursday, October 17, 2013

Installing MySQL in Linux

MySQL is a very powerful open source relational database management system (RDBMS). In this short guide we'll get into installing MySQL on Linux.

You can check whether MySQL is already installed by issuing the following command.
$ mysql -V
You should see output similar to the following if you have MySQL installed.
mysql  Ver 14.14 Distrib 5.5.32, for debian-linux-gnu (x86_64) using readline 6.2
If not, to install MySQL, run the following command from a terminal prompt on apt-get based Linux distributions
$ sudo apt-get install mysql-server
In yum package manager based Linux distributions, issue the following command
$ sudo yum install mysql-server
You will be prompted to enter a password for the MySQL root user during installation. It's recommended to set a password for the root account. Now the MySQL server is installed on your Linux machine.

Alternatively, you may download the relevant MySQL package (e.g. rpm package) for your Linux from http://dev.mysql.com/downloads/mysql/#downloads and install according to your distribution. You may need to sign up for an Oracle Web account to download from this site. This article won't go into details of installing with distribution specific packages.

MySQL can also be installed from source; the source distribution is available from the same URL. Installing from source needs certain tools and libraries pre-installed (such as cmake and ncurses-devel), and other workarounds may be required to build from source.

To start the MySQL database, issue the following command.
$ sudo /etc/init.d/mysql start
or in Fedora/RHEL
$ sudo systemctl start mysqld.service
To check whether MySQL is already running, you can issue the following command.
$ mysqladmin -u root -p status
You will be prompted for the password. When entered, you should see output similar to the following if MySQL is running
Uptime: 28581  Threads: 1  Questions: 111  Slow queries: 0  Opens: 343  Flush tables: 1  Open tables: 84  Queries per second avg: 0.003
If not running, you would see an output as shown below.
   mysqladmin: connect to server at 'localhost' failed
   error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
   Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
You can edit the settings of the MySQL instance in the /etc/mysql/my.cnf file to configure basic settings like the log file and the port number.
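As a reference, the kind of entries you would typically adjust in my.cnf look like this (the paths and values are illustrative defaults, not recommendations):

```ini
[mysqld]
# port the server listens on (3306 is the default)
port      = 3306
# location of the server's Unix socket
socket    = /var/run/mysqld/mysqld.sock
# where the error log is written
log_error = /var/log/mysql/error.log
```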

That's it. You now should have MySQL installed on your system and ready to use it to develop database driven applications.

Saturday, October 5, 2013

Installing Developer Tools in Linux

The first step in developing applications on Linux based computers is usually to get your developer tools installed properly. This short guide aims to provide the installation steps required to get started with some popular programming languages so you can develop fascinating software.

This post provides steps for installing the required tools, along with a classic "Hello World" example to test your installation. Before attempting to install any of the tools, check whether your system already contains them; chances are high that you already have them installed.

  • C

To compile a C program on Linux based systems, the most popular tool is the gcc compiler, which is part of the GNU Compiler Collection (GCC). To check whether gcc is already installed, enter the following command
$ gcc --version
If you see an output like the following, gcc is already installed and you are all set to write C programs.
gcc (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Instead if you see the following or something similar, you will have to install gcc.
bash: gcc: command not found...
Though gcc can be installed separately, it's recommended to install a package called 'build-essential', which contains a bunch of other useful and related packages. To install it, issue the following commands in Debian based Linux.
$ sudo apt-get update
This will update the package list. Now issue the following command in Debian based Linux like Ubuntu.
$ sudo apt-get install build-essential
In Linux systems such as Red Hat Linux or Fedora that have a yum based package manager, you can use the following command,
$ sudo yum install gcc
This will install the gcc compiler and other development libraries such as glibc. You can also use the following command to install a whole bunch of development packages, which also include gcc and g++ (for C++).
$ sudo yum groupinstall "Development tools"
Now you can write a sample C program as below to check that everything is working fine. Create a file named HelloWorld.c and type in the following program text.
    #include <stdio.h>

    int main()
    {
        printf("Hello World!\n");
        return 0;
    }
Compile the above program with the following command.
$ gcc -o HelloWorld HelloWorld.c
This will create an executable file named HelloWorld in your current directory. To run the executable, type:
$ ./HelloWorld
You should see the following output.
Hello World!
Congratulations!! You are a C guru now.

  • C++

The most popular C++ compiler on Linux is undoubtedly g++, which is also part of GCC. If you have installed build-essential on Debian or the "Development Tools" group on Red Hat Linux, you already have g++. Otherwise, go ahead and install build-essential or "Development Tools". To check whether g++ is already installed, enter the following command.
$ g++ --version
If you see an output like the following, g++ is already installed and you are all set to write C++ programs.
g++ (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If instead you see the following or something similar, you will have to install g++ by installing the build-essential package or the "Development Tools" group.
bash: g++: command not found...
If you didn't install "Development Tools", and want to install g++ separately using yum, use the following command,
$ sudo yum install gcc-c++ 

This should install g++ and other development files and libraries.

Let's now write a sample C++ program which prints Hello World! on the standard output. Type the following program in a file named HelloWorld.cpp
    
#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World!" << endl;
    return 0;
}
Compile the above program with the following command
$ g++ -o HelloWorld HelloWorld.cpp
This will create an executable file named HelloWorld in your current directory. To run the executable, type:
$ ./HelloWorld
You should see the following output.
Hello World!
Quite similar to what we did for C, right? Voila!!! Now you are a C++ expert too.

  • Java

Next we shall look into setting up Java on Linux. Chances are that you already have Java installed, most probably the OpenJDK version. If you are willing to work with OpenJDK, everything is fine and you are ready to develop Java applications. But you may have a valid reason to use Oracle Java for development. In that case we can remove OpenJDK and install Oracle Java, or keep OpenJDK alongside Oracle Java. Though there are a few ways to install and configure Java on Linux, the easiest way is to download the JDK archive from http://www.oracle.com/technetwork/java/javase/downloads/index.html. Make sure to download the version you want, applicable to your system architecture. Once you have downloaded the archive, extract it to a location of your choice. Let's say the extracted location is /usr/local/java/jdk1.6.0_43. All you need to do is add JAVA_HOME (your installation directory, e.g. /usr/local/java/jdk1.6.0_43) as a new environment variable and add the bin directory of JAVA_HOME to the PATH environment variable. This can easily be done by adding the following lines to the .bashrc file in your home directory.
JAVA_HOME=/usr/local/java/jdk1.6.0_43
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin     
export JAVA_HOME     
export PATH

If you add these lines to .bashrc, the changes will be effective only for the current user. If the changes need to apply to all users, the same set of lines can be added to the /etc/profile file. Once you have added the lines, source the respective file to make the changes take effect in the same shell. To do that, issue the following command.
$ source ~/.bashrc
Now java is configured on your Linux box. Issue the following commands to verify your installation and environment variables. Expected outputs should be something similar to the following lines.
$ echo $JAVA_HOME      
/usr/local/java/jdk1.6.0_43     

$ echo $PATH
...:/usr/local/java/jdk1.6.0_43/bin

$ java -version
If you see a similar output as follows, you are all set to develop Java applications.
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
Let's write a simple java program, compile and execute it. Create a file named HelloWorld.java and type in the following text.
public class HelloWorld
{
    public static void main(String[] args)
    {
        System.out.println("Hello World!");
    }
}
Make sure the class name matches your file name. To compile this program, type in the following.
$ javac HelloWorld.java
This must generate a class file in your current directory named HelloWorld.class. It contains the bytecode of your program. To execute your class file type
$ java HelloWorld
Make sure you do not type in the .class extension when executing. Following output should be shown in the standard output.
Hello World!
Congratulations!! You have written a simple Java program. Are we done with everything? Ideally yes, but wait a second. We often come across new Java versions, and then the question arises: can we have two versions of Java installed on Linux side by side? The answer, as you may have guessed, is a big "YES". So how do we go about that? The process is simple. Download the archive of the Java version you are interested in and extract it, say to the same location (e.g. /usr/local/java/jdk1.7.0_07). Next we have to inform Linux that you have two versions of Java. Let's collect a few more details first.
$ java -version
This should still print the older java we configured earlier, like the following.
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
To find the path of the java that we currently use, issue the following command
$ which java
If you have only the java we set up earlier and followed the instructions above, you would get a similar output as follows.
/usr/local/java/jdk1.6.0_43/bin/java
Ok, let's make Linux aware of the two JDK versions we extracted to /usr/local/java. To do that, we issue the following command.
$ sudo update-alternatives --install /usr/bin/java java /usr/local/java/jdk1.7.0_07/bin/java 2
Now Linux knows about the new Java, registered with priority 2. Issue the following command.
$ java -version
This would print the new java version.
java version "1.7.0_07"
Java(TM) SE Runtime Environment (build 1.7.0_07-b10)     
Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
    
Oops! Where did my old Java go? Don't panic. What has happened here is that a symbolic link was created from /etc/alternatives/java to /usr/local/java/jdk1.7.0_07/bin/java, and another from /usr/bin/java to /etc/alternatives/java. When you issue the java command, /usr/bin/java takes precedence because /usr/bin appears earlier in PATH than the directory we appended. You can check this by issuing the following commands.
$ ls -l /usr/bin/java     
lrwxrwxrwx 1 root root 22 Oct  5 11:14 /usr/bin/java -> /etc/alternatives/java     

$ ls -l /etc/alternatives/java     
lrwxrwxrwx 1 root root 36 Oct  5 11:14 /etc/alternatives/java -> /usr/local/java/jdk1.7.0_07/bin/java
We shall now do the same for the old Java we had. To do that, issue the following command.
$ sudo update-alternatives --install /usr/bin/java java /usr/local/java/jdk1.6.0_43/bin/java 1

If you issue the following command, you should still see that JDK 7 is used.
$ java -version     

java version "1.7.0_07"     
Java(TM) SE Runtime Environment (build 1.7.0_07-b10)     
Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

Ok, we shall change to the earlier version by issuing the following command and selecting 1.
$ sudo update-alternatives --config java

You will be prompted as shown below.
Selection    Path                                  Priority   Status     
------------------------------------------------------------     
* 0          /usr/local/java/jdk1.7.0_07/bin/java   2         auto mode       
  1          /usr/local/java/jdk1.6.0_43/bin/java   1         manual mode       
  2          /usr/local/java/jdk1.7.0_07/bin/java   2         manual mode     

Press enter to keep the current choice[*], or type selection number: 1
      
By selecting 1, you can switch back to the earlier version. To verify, again issue the following.
$ java -version     

java version "1.6.0_43"     
Java(TM) SE Runtime Environment (build 1.6.0_43-b01)     
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
  
Further, you have to do the same thing for the javac and javaws commands, for both versions, like the following.
$ sudo update-alternatives --install /usr/bin/javac javac /usr/local/java/jdk1.7.0_07/bin/javac 2     
$ sudo update-alternatives --install /usr/bin/javaws javaws /usr/local/java/jdk1.7.0_07/bin/javaws 2     
$ sudo update-alternatives --install /usr/bin/javac javac /usr/local/java/jdk1.6.0_43/bin/javac 1     
$ sudo update-alternatives --install /usr/bin/javaws javaws /usr/local/java/jdk1.6.0_43/bin/javaws 1
    
Now you can select your preferred javac version as below.
$ sudo update-alternatives --config javac

and select the appropriate number. That's it. We can now have as many Java versions as we need on our system and easily switch back and forth to experiment with them.
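As a query-only sketch, update-alternatives also offers a --list action on Debian-style systems that shows every path registered for an alternative without changing anything. The fallback message below is just for systems where no java alternative is registered yet:

```shell
# List every path registered for the "java" alternative; on systems where
# nothing is registered yet, print a short notice instead of failing
update-alternatives --list java 2>/dev/null || echo "no java alternative registered"
```

This is a convenient way to double-check which JDK paths the system knows about before running --config.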

  • Python

Python is most probably already installed on your Linux system. You can check the Python version by entering the following command.
$ python -V     

Python 2.7.4
If Python is available, you should see a similar output as above. If not, install Python with the following command, whichever is applicable.
$ sudo apt-get install python        or
$ sudo yum install python
Ok, now let's start writing a sample program. Create a file called HelloWorld.py in your current directory. Enter the following text in it.
#!/usr/bin/python     
print("Hello World!")
Save the file and execute the code using the Python interpreter as follows.
$ python HelloWorld.py     

Hello World!
Great.. Now you know how to program in Python too.
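Thanks to the shebang line at the top of the script, it can also be run directly once it is marked executable. The sketch below recreates the script with /usr/bin/env in the shebang, a slightly more portable variant, since on newer systems the interpreter is often installed as python3:

```shell
# Recreate the script; /usr/bin/env finds the interpreter on the PATH
cat > HelloWorld.py <<'EOF'
#!/usr/bin/env python3
print("Hello World!")
EOF

# Mark it executable and run it directly, without naming the interpreter
chmod +x HelloWorld.py
./HelloWorld.py
```

The same chmod trick works for the Perl and Ruby scripts later in this post, since they carry shebang lines too.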

  • Perl

Perl is also most probably already installed on your Linux system. You can check the Perl version by entering the following command.
$ perl -v
You should see a somewhat similar output as follows.
This is perl 5, version 14, subversion 2 (v5.14.2) built for x86_64-linux-gnu-thread-multi
(with 80 registered patches, see perl -V for more detail)

Copyright 1987-2011, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
    
If not, install Perl using the following command, whichever is applicable.
$ sudo apt-get install perl        or     
$ sudo yum install perl
Done.. Now let's start writing a sample program. Create a file called HelloWorld.pl in your current directory. Enter the following text in it.
#!/usr/bin/perl     
print "Hello World!\n";
Save the file and execute the code using the Perl interpreter as follows.
$ perl HelloWorld.pl   
  
Hello World!
That's great... Now you are a perl expert as well.

  • Ruby

To check whether Ruby is installed on your Linux system, enter the following command.
$ ruby -v
If you have Ruby installed, you should see a somewhat similar output as below.
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]
If you can't see an output like this, you will have to install ruby by issuing the following command, whichever is applicable
$ sudo apt-get install ruby            or      
$ sudo yum install ruby
Ok. Let's write the same HelloWorld program in ruby. Create a file called HelloWorld.rb and enter the following.
#!/usr/bin/ruby      
puts "Hello World!"
Now you can execute the program by entering following.
$ ruby HelloWorld.rb

Hello World!

  • C#

We have finally come to our last section. C# is a programming language developed by Microsoft, so you would typically be programming on Windows using the .NET Framework. But if you still want to develop C# or any .NET application on Linux, you need not worry. You can install 'mono', an open source implementation of the Microsoft .NET Framework. To install mono on your Linux system, type in the following command, whichever is applicable.
$ sudo apt-get install mono-complete    or     
$ sudo yum install mono-core.x86_64
When this is done, you are ready to develop .Net applications in Linux. To see the mono version installed, type in the following.
$ mono -V
You should see something similar to the following.
Mono JIT compiler version 2.10.8.1 (Debian 2.10.8.1-5ubuntu1)
Copyright (C) 2002-2011 Novell, Inc, Xamarin, Inc and Contributors. www.mono-project.com
 TLS:           __thread
 SIGSEGV:       altstack
 Notifications: epoll
 Architecture:  amd64
 Disabled:      none
 Misc:          softdebug 
 LLVM:          supported, not enabled.
 GC:            Included Boehm (with typed GC and Parallel Mark)
Let's write a small C# program. Create a file called HelloWorld.cs and type in the following.
using System;

public class HelloWorld
{
    static void Main()
    {
        Console.WriteLine("Hello World!");
    }
}
We can compile the program by issuing the following command.
$ gmcs HelloWorld.cs
This should create a file called HelloWorld.exe in your current directory. (Note: in Microsoft Visual C# on Windows, we use the 'csc' command in the command prompt to compile a C# program.) To execute it, we can type in the following.
$ mono HelloWorld.exe     

Hello World!
Voila!!! Now we can develop any .NET application on Linux as well. Here we are. This post explained how to set up development tools (or rather, how to check whether they are installed) on your Linux box and walked you through creating sample programs to get started. I hope the article has been easy to follow and fun to read. Flourish in the development world!! Good luck.