Saturday, November 23, 2013

Regular Expressions using Python

In this short guide, we shall introduce ourselves to a powerful technique called 'Regular Expressions' (a.k.a. regexp) in the context of a fascinating language, Python. The reason I chose to introduce regexp with Python is that it has a very concise and easy syntax to follow, so we do not swing away from our main focus on regexp. Further, we can simply experiment with regexp using the Python interactive shell.

We have to know that many languages, including C++ (since C++11), Java, Perl, Ruby etc., support regexp in some form. There can be minor differences in their syntax, but what we describe here should mostly be applicable to all of them to some extent.

Well, what is regexp and what is it used for? Simply, it is a text parser for patterns, and is therefore used to find matching patterns in strings. This is useful in many contexts. Say you want to search all the files in a directory having your name; regexp can help you do that. Further, what if you want only the files having your name at the beginning? regexp lets you specify such patterns in a very short string. Or what if you want to scan through a log file for occurrences of certain string patterns? These are just a few useful applications of regexp.

Following are some of the basic rules in regexp.
  • A string matches itself. e.g. abc would match in a string xyzabcqrs
  • . (period) is a general wildcard for a single character. e.g: AB.x would match in AByx, ABdx and even AB.x
  • What if you want to match AB.x only, and not others like AByx and ABfx? The savior is the escape character '\'. e.g: AB\.x would only match AB.x  Note: In Python and in many other languages \ is used for escape sequences, such as \n for a newline, \t for a tab character and so on. To avoid clashing with that, we may specify our regexp as raw strings, in which \ has no special meaning for escape sequences. We'll see that shortly in the following examples, with a preceding character 'r'.
  • ^ is the character that says to look at the beginning of the text. But within square brackets, if it comes at the beginning, it means to negate the matching characters (see below)
  • $ is the character that says to look at the end of the string
  • Since . (period) represents a general wildcard, .* says any number of such wildcards, including nothing (simply, zero or more)
  • .+ says one or more characters
We are not going to write any Python scripts in files in this tutorial. Rather, we will use the Python interactive shell in Linux. In Windows, you may use IDLE (a simple Python IDE) provided with the Python installer package.

Following shows some of the above rules in action. Descriptions are stated inline.
[shazni@wso2-ThinkPad-T530 ~]$ python
Python 2.7.4 (default, Sep 26 2013, 03:20:26) 
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> # importing the regexp module called re in python. Note this is a comment
>>> import re

>>> # Defining a tuple of strings
>>> str=('sss', '#$%ssss^&*', '', 'q_%sssss', 'a\nut', '12sssTA', '^Atssy$')

>>> # Following finds all the strings having 'sss' character sequence
>>> a=filter((lambda str:"sss", str)), str)
>>> print(a)
('sss', '#$%ssss^&*', 'q_%sssss', '12sssTA')

>>> # Note following uses 'match' instead of 'search'. This will only find strings beginning with 'sss'
>>> b=filter((lambda str: re.match(r"sss", str)), str)
>>> print(b)
('sss',)

>>> # We may also use the following to achieve the same. Here the character ^ makes the pattern match only at the beginning of the string
>>> z=filter((lambda str:"^sss", str)), str)
>>> print(z)
('sss',)

>>> # Following matches 'sss' only at the end. The $ special character does the trick 
>>> y=filter((lambda str:"sss$", str)), str)
>>> print(y)
('sss', 'q_%sssss')

>>> # Period (.) is a general wildcard for a single character. e.g: AB.x would match in AByx, ABdx and even AB.x. Therefore, following would return all the strings having an 's', then one character of any kind, and then another 's'
>>> c=filter((lambda str:"s.s", str)), str)
>>> print(c)
('sss', '#$%ssss^&*', 'q_%sssss', '12sssTA')

>>> # Matches any string having the literal character sequence 's.s'. None of our strings do, so the result is empty
>>> d=filter((lambda str:"s\.s", str)), str)
>>> print(d)
()

>>> # Matches any string having zero or more characters in between two 's' characters
>>> e=filter((lambda str:"s.*s", str)), str)
>>> print(e)
('sss', '#$%ssss^&*', 'q_%sssss', '12sssTA', '^Atssy$')

>>> # Matches any string having one or more characters in between two 's' characters
>>> f=filter((lambda str:"s.+s", str)), str)
>>> print(f)
('sss', '#$%ssss^&*', 'q_%sssss', '12sssTA')

>>> # Following matches any string having at least one 't'
>>> g=filter((lambda str:"t+", str)), str)
>>> print(g)
('a\nut', '^Atssy$')

>>> # Following matches an 's' character followed by any one of the characters you find in the square brackets
>>> t=filter((lambda str:"s[.y]", str)), str)
>>> print(t)
('^Atssy$',)

>>> # Following matches an 's' character, followed by any character which is not '.' or 'y', and then followed by 'T'. Here what we need to know is the ^ character inside the square brackets. It says anything not in []
>>> u=filter((lambda str:"s[^.y]T", str)), str)
>>> print(u)
('12sssTA',)

>>> # Following matches all the strings that don't have an 's'. Note ^ at the beginning anchors to the start of the string, and $ anchors to the end. In the middle, [^s] matches anything that is not 's', and * says any number of such characters. Altogether, this means: all the strings that, from beginning to end, do not contain an 's'
>>> v=filter((lambda str:"^[^s]*$", str)), str)
>>> print(v)
('', 'a\nut')

>>> # OK, now how do we match the ^ character at the beginning? The escape character comes to the rescue
>>> w=filter((lambda str:"^\^", str)), str)
>>> print(w)
('^Atssy$',)

>>> # Following is a bit tricky. It says to match all the strings having a ^, but excluding the ones having ^ at the beginning.
>>> j=filter((lambda str:"^[^\^].*\^", str)), str)
>>> print(j)
('#$%ssss^&*',)
OK. That's it for now. This guide is by no means complete, but just a quick start. Now you may explore regexp capabilities in various languages, particularly in Python.

Sunday, November 17, 2013

Clustering WSO2 Governance Registry

Note: Before reading this guide, I recommend reading my previous blog post on "Setting up WSO2 Governance Registry Partitions", because the last few steps in this guide are similar to those of the previous post. You can still follow along without reading it, though.

In this guide we shall set up WSO2 Governance Registry as a cluster. Cluster??? What's important about clustering a product in the first place? Oh!!! That's a critical question. It could well appear in a semester paper on Distributed Systems. Whatever it is, I'll provide some simple answers as to why clustering an application can be useful, though these answers may not be suitable for a semester paper. :)
  • Clustering can be used to load balance between working nodes. This reduces the overhead each node needs to handle by dividing the work among the nodes
  • Clustering enables the application to withstand sudden failures of one or more nodes; as long as at least one node is up and running, the overall downtime of the service decreases, providing high availability
  • Clustering undoubtedly enables scalability and can increase the performance of the overall application
OK.. enough of theory!!! Let's get down to clustering WSO2 Governance Registry as described below.

Here we are going to demonstrate the clustered setup with three nodes of WSO2 Governance Registry (GREG), a WSO2 Elastic Load Balancer (ELB) and a database for storage, as shown below.

We are going to set up the cluster in the local machine for testing, though, the steps would be the same if each component; ELB, GREG nodes and the DB, resides in different servers, except for appropriate IP address settings.

First, we shall configure the WSO2 ELB, which is going to sit in front of the three WSO2 GREG nodes. Load balancers, in general, help distribute the requests among the nodes. Download the WSO2 ELB and unzip it to a directory. I'll refer to the load balancer home directory as ELB_HOME.

Now open up the ELB_HOME/repository/conf/loadbalancer.conf and replace the governance domain as shown below.
    governance {
        hosts         ;
        domains   {
            wso2.governance.domain {
                tenant_range *;
                group_mgt_port 4321;
                mgt {
                    hosts ;
                }
            }
        }
    }
Next we need to edit the ELB_HOME/repository/conf/axis2/axis2.xml and uncomment the localMemberHost to expose the ELB to the members of the cluster (the GREG nodes that we are going to configure) as shown below.
<parameter name="localMemberHost"></parameter>
Also change the localMemberPort to 5000 as shown below.
<parameter name="localMemberPort">5000</parameter>
Next you need to update your system's hosts file with an entry like the following, which is a simple mapping between an IP address and a hostname.
In Linux, the file is /etc/hosts and in Windows the file is %systemroot%\system32\drivers\etc\hosts (In Windows to view this file you may have to enable the system to show hidden files)
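For example, if the ELB runs on the local machine and you map it to a hostname (both the IP and the hostname here are illustrative; use whatever hostname you configured for the ELB):

``` elb.wso2.com
```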

That's all on WSO2 ELB. We can now start the WSO2 ELB in terminal by running the following command in ELB_HOME/bin/
$ ./
Let's now download the latest WSO2 GREG and unzip it into a directory. Then copy the GREG home directory to two more places for the other two nodes. You can place all three nodes in the same directory and rename them by appending '_node1', '_node2' and '_node3'. Before we delve into configuring the WSO2 GREG nodes, we need to create a database to hold the shared node data.

Let's now create the databases for the governance registry. You could use the in-built H2 database, but it's recommended to use a database like MySQL in production systems, so we will use MySQL in this setup. If you don't have MySQL installed in Linux, look at my previous post 'Installing MySQL in Linux'. Windows users can download and install it using the MySQL installer package, which is very straightforward. To create the MySQL databases for the governance registry and populate them with the schema, enter the following to go into the MySQL prompt.
$ mysql -u root -p
mysql> create database wso2governance;
Query OK, 1 row affected (0.04 sec)
mysql> create database wso2apim;
Query OK, 1 row affected (0.00 sec)
Next we ought to exit from the MySQL prompt. Now we shall populate the databases we created with the schema. For that, enter the following commands. Don't forget to replace GREG_HOME with any WSO2 GREG directory.
$ mysql -u root -p wso2governance < GREG_HOME/dbscripts/mysql.sql
$ mysql -u root -p wso2apim < GREG_HOME/dbscripts/apimgt/mysql.sql
OK.. now the database is also ready. We can now configure our WSO2 GREG nodes. Repeat the following steps on all the GREG nodes. The differences you have to make between nodes are noted where necessary.

As a first step in clustering the nodes, we have to enable clustering in each GREG node, of course. This can be done by changing the enable attribute to 'true' in the following line of axis2.xml, which can be found in the GREG_HOME/repository/conf/axis2 directory. This needs to be done in all three nodes.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
* Change the GREG_HOME/repository/conf/axis2/axis2.xml as shown below
<parameter name="membershipScheme">wka</parameter>
<parameter name="domain">wso2.governance.domain</parameter>
<parameter name="localMemberHost"></parameter>
<parameter name="localMemberPort">4250</parameter>
This is for joining the cluster domain "wso2.governance.domain". Nodes with the same domain belong to the same cluster group. "wka" stands for the Well-Known Address based membership scheme, in which nodes join the cluster by contacting a well-known member (here, the ELB) rather than discovering each other via multicast. The "localMemberPort" needs to be different on the other nodes, say 4251 and 4252. This is the TCP port on which other nodes contact this node.

* In the <members> section, there need to be <member> tags with the details of the ELB, or of multiple ELBs if we are running a cluster of ELBs. The member entry should use port 4321, which is the group management port we configured in loadbalancer.conf on the ELB. Since we have only one ELB, there is only one <member> tag.
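Assuming the single ELB on the local machine with group management port 4321, the entry could look roughly like this (the hostName value is an assumption for a local setup):

```xml
    <member>
```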
* Next we need to edit the HTTP and HTTPS proxy ports in GREG_HOME/repository/conf/tomcat/catalina-server.xml. Add a new attribute named proxyPort, set to 8280 for HTTP and 8243 for HTTPS. That is, add proxyPort="8280" and proxyPort="8243" to the two <Connector> tags, respectively.
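After the change, the two <Connector> tags would carry the new attribute roughly as follows (ports 9763 and 9443 are the Carbon defaults before any offset; all other attributes stay as they are):

```xml
<!-- HTTP connector: existing attributes unchanged, proxyPort added -->
<Connector port="9763" proxyPort="8280"/>
<!-- HTTPS connector: existing attributes unchanged, proxyPort added -->
<Connector port="9443" proxyPort="8243"/>
```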

* Next, since we are going to set up our nodes and the ELB on the local machine, we need to avoid port conflicts. For this, set the <Offset> to 1 in the first node, 2 in the second node and 3 in the third node, in GREG_HOME/repository/conf/carbon.xml. You can put any offset of your wish, but it has to be different for nodes running on the same machine. If the nodes are on different servers, this need not be done.
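For instance, in the first node's carbon.xml the offset element (found under <Ports>) would be:

```xml
<Ports>
    <!-- Port offset: 1 for node 1, 2 for node 2, 3 for node 3 -->
    <Offset>1</Offset>
</Ports>
```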

* In carbon.xml we also need to update the "HostName" and "MgtHostName" elements.
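For a local test setup, these could simply be set to the loopback address (the values here are illustrative; use the hostname your nodes are actually reachable on):

```xml
<HostName></HostName>
<MgtHostName></MgtHostName>
```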
* The WSO2 Carbon platform supports a deployment model in which components act as two types of nodes in a clustered setup: 'management' nodes and 'worker' nodes. 'management' nodes are used for changing configurations and deploying artifacts, while 'worker' nodes process requests. But in WSO2 GREG there is no concept of request processing, so all nodes are considered 'management' nodes. In a clustered setup, we have to configure this explicitly in axis2.xml by changing the value attribute of the "subDomain" <property> tag to "mgt". After the change, the property tags would look like the below.
<!-- Properties specific to this member -->
<parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
</parameter>
* Next open up the GREG_HOME/repository/conf/user-mgt.xml file and update the "dataSource" property as follows, to enable the user manager tables to be created in "wso2governance":
<Property name="dataSource">jdbc/WSO2CarbonDB_mount</Property>
* Next we need to configure the shared 'config' and 'governance' registries; as I pointed out earlier, see my previous blog post "Setting up WSO2 Governance Registry Partitions". OK.. we shall configure this in each GREG_HOME/repository/conf/registry.xml and GREG_HOME/repository/conf/datasources/master-datasources.xml. First, in registry.xml we need to add the following additional configs.

Add a new <dbConfig>
<dbConfig name="wso2registry_mount">
Add a new <remoteInstance>
<remoteInstance url="https://localhost:9443/registry">
Add the <mount> path for 'config' and 'governance'
<mount path="/_system/config" overwrite="true">
<mount path="/_system/governance" overwrite="true">
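Putting those three pieces together, a mount configuration could look roughly like this. Treat it as a sketch, not the exact file: the dataSource JNDI name matches the one used in user-mgt.xml above, while the remoteInstance id, cacheId and instanceId values are assumptions you should adapt to your setup.

```xml
<dbConfig name="wso2registry_mount">
    <dataSource>jdbc/WSO2CarbonDB_mount</dataSource>

<remoteInstance url="https://localhost:9443/registry">

<mount path="/_system/config" overwrite="true">

<mount path="/_system/governance" overwrite="true">
```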
Next, in master-datasources.xml add a new <datasource> as below. Do not change the existing <datasource> with <name> WSO2_CARBON_DB.
    <description>The datasource used for registry and user manager</description>
    <definition type="RDBMS">
            <validationQuery>SELECT 1</validationQuery>
As the final step modify the <datasource> for WSO2AM_DB as below.
    <description>The datasource used for API Manager database</description>
    <definition type="RDBMS">
            <validationQuery>SELECT 1</validationQuery>
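For reference, such a <datasource> entry for the wso2governance database we created could look roughly like this. The JNDI name matches the one used in user-mgt.xml above; the JDBC URL, username and password are assumptions for a local MySQL setup and must be adapted to your environment.

```xml
<datasource>
    <description>The datasource used for registry and user manager</description>
    <definition type="RDBMS">
            <!-- The credentials below are placeholders; use your own -->
            <validationQuery>SELECT 1</validationQuery>
```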
Now we may run each WSO2 GREG node by running the following in GREG_HOME/bin
$ ./
When all three nodes are up and running, the ELB console should have printed something similar to the below, indicating that the nodes have successfully joined the cluster.
[2013-11-12 16:52:29,260]  INFO - HazelcastGroupManagementAgent Member joined [271050b5-69d0-468d-9e9e-29557aa550ef]:
[2013-11-12 16:52:32,314]  INFO - MemberUtils Added member: Host:, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:52:32,315]  INFO - HazelcastGroupManagementAgent Application member Host:, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:53:01,042]  INFO - TimeoutHandler This engine will expire all callbacks after : 86400 seconds, irrespective of the timeout action, after the specified or optional timeout
[2013-11-12 16:55:34,655]  INFO - HazelcastGroupManagementAgent Member joined [912fe7db-7334-467c-a6a7-e6929e652edb]:
[2013-11-12 16:55:38,707]  INFO - MemberUtils Added member: Host:, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:55:38,708]  INFO - HazelcastGroupManagementAgent Application member Host:, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:57:05,136]  INFO - HazelcastGroupManagementAgent Member joined [cc177d58-1ec1-473e-899c-9f154303e606]:
[2013-11-12 16:57:09,232]  INFO - MemberUtils Added member: Host:, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:57:09,232]  INFO - HazelcastGroupManagementAgent Application member Host:, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
Great!!! Now we may access the Management Console through the ELB, and the WSO2 GREG is clustered. Every request that comes to the ELB will be distributed among the GREG nodes, providing scalability and high availability.

Friday, November 15, 2013

Installing and configuring phpMyAdmin in Linux

phpMyAdmin is a web based graphical administration tool for the MySQL RDBMS, written in PHP. Using phpMyAdmin can ease your database administration tasks, so installing it on your machine is useful, and this short guide will walk you through setting it up on Linux. Installing it in Windows is just a matter of running the WAMP server installer package, which has everything, including the MySQL server, packaged into one bundle.

Even in Linux you may choose to install a LAMP stack, which is easy to set up and has everything bundled together. But for certain reasons, maybe because you want to install MySQL separately on a remote server, or because you have MySQL already installed in your distribution, you may need to install phpMyAdmin separately as described below.

phpMyAdmin needs an already installed MySQL database. If you haven't got it installed, see my previous post on 'Installing MySQL in Linux'.

Assuming you have installed MySQL, run whichever of the following is applicable to your distribution (note the package name is lowercase on Debian/Ubuntu):
$ sudo apt-get install phpmyadmin
$ sudo yum install phpMyAdmin
During the installation, you will be prompted for the web server that should serve phpMyAdmin; select 'Apache2'. You may also be asked to provide administration credentials, which you can choose as you wish; note them down, as they are required when you log in. You can also use existing MySQL user credentials to log in to phpMyAdmin.

The installation might start the Apache web server and copy phpmyadmin.conf into the /etc/apache2/conf.d directory. In certain distributions like Fedora, the apache2 directory is named httpd; replace the directory appropriately. You can check by listing the directory contents of /etc/apache2/conf.d. If you don't find it there, you can create the link manually as follows and reload the Apache web server.
$ sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
$ sudo /etc/init.d/apache2 reload
or in RHEL/Fedora
$ sudo service httpd restart
Note: As per documentations, after Ubuntu 13.10 (Saucy Salamander), apache will load configuration files from /etc/apache2/conf-enabled/ directory instead of from /etc/apache2/conf.d/. Therefore if you are using Ubuntu 13.10 or later, do the above step for /etc/apache2/conf-enabled/ and reload the web server.

Ok, that's it. Now you may start administering your MySQL database with phpMyAdmin. Open your favorite browser and navigate to http://<hostname>/phpmyadmin. If your Apache web server is on the local machine, replace <hostname> with localhost. Provide the username and password that you configured during the installation, or use existing MySQL user credentials. You would then see the phpMyAdmin page for administration. Good luck in using MySQL.

Monday, November 11, 2013

Enabling core dumps in Linux

It is useful to set up Linux systems to write core dumps when an application process crashes. A badly written C/C++ application can otherwise silently crash and vanish. In such a case, to debug what has happened, we would like to view the stack trace of the application up to the point of the crash, which may give us a clue as to what went wrong.

Certain Linux distributions, if not all, do not write core dumps to the file system by default, as the feature is turned off. This is usually done to prevent applications from writing huge core dumps of hundreds of GB, which would eat up valuable disk space. Although enabling core dumps in production systems is not recommended, it is beneficial for developers to have them enabled on development machines.

The question is how do we turn it on. OK.. it's quite simple.

Before that we shall check the current core dump limits. Enter the following command.
$ ulimit -c
You might possibly get the value as zero. This means the system doesn't write any core files when your application crashes. Let's test this with a sample program which is set to crash intentionally. Type the following into a file named Crash.cpp.
#include <iostream>

using namespace std;

class Crash
{
public:
    Crash() {
        cout << "Constructing" << endl;
        p = new int;
    }
    ~Crash() {
        cout << "Destructing" << endl;
        delete p;
    }
private:
    int *p;
};

int main()
{
    Crash *pCrash = new Crash;
    delete pCrash;
    delete pCrash;   // double delete - crashes here
    return 0;
}

In this program, the dynamically allocated object pointed to by pCrash is deleted twice, which makes the program crash at the second delete (a double free). Compile the program by entering the following.
$ g++ -o crash Crash.cpp
Now let's execute the program as below. You can see that the program crashed by looking at the terminal output of the backtrace. But the core file would be missing from the execution directory, or from the configured directory (see below), since ulimit is 0.
 [shazni@wso2-ThinkPad-T530 Crash]$ ./crash
*** Error in `./crash': free(): invalid pointer: 0x0000000000ddb020 ***
======= Backtrace: =========
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:04 14552625                           /home/shazni/ProjectFiles/Test/NormalTest/Crash/crash
00600000-00601000 r--p 00000000 08:04 14552625                           /home/shazni/ProjectFiles/Test/NormalTest/Crash/crash
00601000-00602000 rw-p 00001000 08:04 14552625                           /home/shazni/ProjectFiles/Test/NormalTest/Crash/crash
00ddb000-00dfc000 rw-p 00000000 00:00 0                                  [heap]
7fa5bcd36000-7fa5bce39000 r-xp 00000000 08:03 4341456                    /lib/x86_64-linux-gnu/
7fa5bce39000-7fa5bd039000 ---p 00103000 08:03 4341456                    /lib/x86_64-linux-gnu/
7fa5bd039000-7fa5bd03a000 r--p 00103000 08:03 4341456                    /lib/x86_64-linux-gnu/
7fa5bd03a000-7fa5bd03b000 rw-p 00104000 08:03 4341456                    /lib/x86_64-linux-gnu/
7fa5bd03b000-7fa5bd1fa000 r-xp 00000000 08:03 4341459                    /lib/x86_64-linux-gnu/
7fa5bd1fa000-7fa5bd3f9000 ---p 001bf000 08:03 4341459                    /lib/x86_64-linux-gnu/
7fa5bd3f9000-7fa5bd3fd000 r--p 001be000 08:03 4341459                    /lib/x86_64-linux-gnu/
7fa5bd3fd000-7fa5bd3ff000 rw-p 001c2000 08:03 4341459                    /lib/x86_64-linux-gnu/
7fa5bd3ff000-7fa5bd404000 rw-p 00000000 00:00 0
7fa5bd404000-7fa5bd418000 r-xp 00000000 08:03 4329143                    /lib/x86_64-linux-gnu/
7fa5bd418000-7fa5bd618000 ---p 00014000 08:03 4329143                    /lib/x86_64-linux-gnu/
7fa5bd618000-7fa5bd619000 r--p 00014000 08:03 4329143                    /lib/x86_64-linux-gnu/
7fa5bd619000-7fa5bd61a000 rw-p 00015000 08:03 4329143                    /lib/x86_64-linux-gnu/
7fa5bd61a000-7fa5bd6ff000 r-xp 00000000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/
7fa5bd6ff000-7fa5bd8fe000 ---p 000e5000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/
7fa5bd8fe000-7fa5bd906000 r--p 000e4000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/
7fa5bd906000-7fa5bd908000 rw-p 000ec000 08:03 2891772                    /usr/lib/x86_64-linux-gnu/
7fa5bd908000-7fa5bd91d000 rw-p 00000000 00:00 0
7fa5bd91d000-7fa5bd940000 r-xp 00000000 08:03 4330760                    /lib/x86_64-linux-gnu/
7fa5bdb18000-7fa5bdb1d000 rw-p 00000000 00:00 0
7fa5bdb3b000-7fa5bdb3f000 rw-p 00000000 00:00 0
7fa5bdb3f000-7fa5bdb40000 r--p 00022000 08:03 4330760                    /lib/x86_64-linux-gnu/
7fa5bdb40000-7fa5bdb42000 rw-p 00023000 08:03 4330760                    /lib/x86_64-linux-gnu/
7fffcccd9000-7fffcccfa000 rw-p 00000000 00:00 0                          [stack]
7fffccd66000-7fffccd68000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
If you see a value instead, that is the maximum size that's permitted for a dump. If the dump is less than that size you would get the full dump; otherwise the full dump may not be written, which would make it useless. The core files may not be truncated to the exact ulimit value; it may differ, since the ulimit we set is a soft limit.

To set a limit to the ulimit, issue the following command
$ ulimit -c 45  # sets the limit to 45 blocks of 512 bytes each
To set the limit to an unlimited amount, issue the following command
$ ulimit -c unlimited
Now, execute the above program. You would see the same output, but there will be an additional file named 'core' in the current directory (if you still don't find it, read ahead), which is the kernel dump of the process. OK, are we done yet? You would have probably guessed 'No', because you see a lot of stuff written below in the post. Good guess!!!
The problem here is that the setting you made is only temporary, for the current terminal session. If you open up another terminal and check the ulimit, it would still be the initial ulimit (probably zero) that you saw earlier. We need our settings to persist, including between reboots. OK... profile files come to mind: either .bashrc in the home directory, if you want the setting to affect only the current user, or /etc/profile if you need the setting to affect all users. I'll use my .bashrc for this.
$ vi ~/.bashrc
Add the following line to the end of your profile.
ulimit -c unlimited >/dev/null 2>&1
This sets the ulimit to unlimited and discards whatever output the command produces. You may need to source the .bashrc file with the command below, and may need to close the terminal and start over. Now, in all terminals you open as your user account, ulimit should show unlimited.
$ source ~/.bashrc
Ok, now even if you reboot the system and run the above program, you should get a core file created (if you still don't find it, read ahead). Well, we can be a little more fancy as well: we can have a pattern for our core file name. Currently the core dump is named 'core', unless someone has changed the pattern in the file /proc/sys/kernel/core_pattern. This is where you specify where the core dump is written and its file name pattern. If you didn't see the core dump earlier in the directory where the program executable exists, look at the path in this file. In my Ubuntu, it is 'core' as shown below.
$ cat /proc/sys/kernel/core_pattern
We can have certain information embedded in the core file name itself. You may edit this file directly, or you can edit /etc/sysctl.conf. I'll edit the configuration file, since I can add more system parameters there if needed.
$ sudo vi /etc/sysctl.conf
And add the following to the end of the file
#core file settings
kernel.core_uses_pid = 1
kernel.core_pattern = core.%e.%s.%p.%t
fs.suid_dumpable = 2
The first line enables the addition of the process id to the core file name.
The more interesting line is the second one: the core file pattern. Each letter following a % sign has a special meaning. Some of the possible values and their meanings are as follows.
Unix/Linux systems provide a facility to run a process under a different user id or group id than the user/group starting the process, by setting the setuid and setgid bits. Such processes may be dealing with sensitive data which should not be accessible to the invoking user/group. Since core dumps may expose some internal sensitive data, Unix/Linux systems by default disable core dumps for such processes. If we still want core dumps to be written when such processes crash, we need to set the third line.

%e - Application name
%s - Signal number that caused the crash
%p - Process ID of the application when it crashed
%t - time at which dump is written (This is in seconds from Unix epoch)
%u - real UID (User ID) of the dumped process
%g - real GID (Group ID) of the dumped process
%h - hostname
%% - will have a % sign itself

To make the settings effective, enter the following command.
$ sudo sysctl -p
Now if you run the above application with the above pattern, you would get a core file named something like core.crash.6.18056.1384147041

OK. Going a little further now. We can permit core files to be written by all the applications in your system, including daemons. In Red-hat based distributions like RHEL or Fedora, this can be achieved by adding the following line to /etc/sysconfig/init
DAEMON_COREFILE_LIMIT='unlimited'
You may need to restart the system for this setting to take effect.

Instead, if you want to allow only certain daemons to write dumps, in Red-hat based distributions add the following lines to /etc/init.d/functions, unless they're already there
# make sure it doesn't core dump anywhere unless it's requested
corelimit="ulimit -S -c ${DAEMON_COREFILE_LIMIT:-0}"
Now add the following line to the init script of your daemon in /etc/init.d/{your service}
DAEMON_COREFILE_LIMIT='unlimited'
Since DAEMON_COREFILE_LIMIT is a RedHat way of setting the core file limit, in Ubuntu you need to add the following lines instead to the service's init script in /etc/init.d
ulimit -c unlimited >/dev/null 2>&1
echo /tmp/core.%e.%s.%p.%t > /proc/sys/kernel/core_pattern
Now start your service by invoking the following command
$ sudo service {your service script} start
Now find you services process id using
$ ps aux | grep {your service}
And let's kill it
$ sudo kill -s SIGSEGV {Your services' PID}
Now if you go ahead and look in /tmp, you should find a core dump for your service.

Great!!! Here we are. Now we have set up our Linux box to write core dumps, and we can debug any application crashes.