Note: Before reading this guide, I recommend reading my previous blog post on "Setting up WSO2 Governance Registry Partitions", because the last few steps in this guide are similar to those in the previous post. You can still follow along without reading it, though.
In this guide we shall set up WSO2 Governance Registry as a cluster. Cluster??? Why would we cluster any sort of product in the first place? Oh!!! That's a critical question, the kind that could turn up in a semester paper of a computer science subject. Which subject? Distributed Systems... I'm not sure. Whatever it is, here are some simple reasons why clustering an application can be useful, though these answers may not be suitable for a semester paper. :)
- Clustering can be used to load balance requests among the working nodes. This reduces the overhead each node has to handle by dividing the work among them.
- Clustering enables the application to withstand sudden failures of one or more nodes; as long as at least one node is up and running, the service stays available, decreasing overall downtime and providing high availability.
- Clustering enables scalability and can increase the performance of the overall application.
Here we are going to demonstrate the clustered setup with three nodes of WSO2 Governance Registry (GREG), a WSO2 Elastic Load Balancer (ELB) and a database for storage, as shown below.
We are going to set up the cluster on the local machine for testing, though the steps would be the same if each component (the ELB, the GREG nodes and the DB) resided on a different server, apart from the appropriate IP address settings.
First, we shall configure the WSO2 ELB, which is going to sit in front of the three WSO2 GREG nodes. Load balancers, in general, help distribute requests among the nodes. Download the WSO2 ELB from http://wso2.com/products/elastic-load-balancer/ and unzip it to a directory. I'll refer to the load balancer's home directory as ELB_HOME.
Now open up the ELB_HOME/repository/conf/loadbalancer.conf and replace the governance domain as shown below.
governance {
    hosts           governance.cloud-test.wso2.com;
    #url_suffix    governance.wso2.com;
    domains {
        wso2.governance.domain {
            tenant_range    *;
            group_mgt_port  4321;
            mgt {
                hosts governance.cluster.wso2.com;
            }
        }
    }
}

Next we need to edit ELB_HOME/repository/conf/axis2/axis2.xml and uncomment the localMemberHost parameter to expose the ELB to the members of the cluster (the GREG nodes that we are going to configure), as shown below.
<parameter name="localMemberHost">127.0.0.1</parameter>Also change the localMemberPort to 5000 as shown below.
<parameter name="localMemberPort">5000</parameter>Next you need to update your system's hosts file with an entry like the following, which is a simple mapping between an IP address and a hostname.
127.0.0.1 governance.cluster.wso2.com

In Linux, the file is /etc/hosts, and in Windows it is %systemroot%\system32\drivers\etc\hosts (in Windows you may have to enable showing hidden files to see it).
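To quickly verify that the mapping is picked up, you can ping the new hostname; this is just an optional sanity check (on Windows, use ping -n 1 instead of -c 1).

$ ping -c 1 governance.cluster.wso2.com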
That's all on the WSO2 ELB. We can now start the WSO2 ELB in a terminal by running the following command in ELB_HOME/bin/
$ ./wso2server.sh

Let's now download the latest WSO2 GREG from http://wso2.com/products/governance-registry/ and unzip it into a directory. Then copy the GREG home directory to two more places for the other two nodes. You can place all three nodes in the same directory and rename them by appending '_node1', '_node2' and '_node3'. Before we delve into configuring the WSO2 GREG nodes, we need to create the databases that will hold the shared data.
Let's now create the databases for the governance registry. You could use the in-built H2 database, but it's recommended to use a database like MySQL in production systems, so we will use MySQL in this setup. If you don't have MySQL installed on Linux, see my previous post 'Installing MySQL in Linux'. Windows users can download and install it using the MySQL installer package, which is very straightforward. To create the MySQL databases for the governance registry, enter the MySQL prompt and create them as follows.
$ mysql -u root -p
mysql> create database wso2governance;
Query OK, 1 row affected (0.04 sec)
mysql> create database wso2apim;
Query OK, 1 row affected (0.00 sec)

Next, exit from the MySQL prompt. Now we shall populate the databases we just created with the schema. For that, enter the following commands, replacing GREG_HOME with the path to any of the WSO2 GREG directories.
$ mysql -u root -p wso2governance < GREG_HOME/dbscripts/mysql.sql
$ mysql -u root -p wso2apim < GREG_HOME/dbscripts/apimgt/mysql.sql

OK, now the databases are ready as well. We can now configure our WSO2 GREG nodes. Repeat the following steps on all three GREG nodes; the differences you have to make per node are noted where necessary.
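Before moving on to the node configuration, you can optionally confirm that the scripts ran by listing the tables in each database; this is just a quick sanity check.

$ mysql -u root -p -e "USE wso2governance; SHOW TABLES;"
$ mysql -u root -p -e "USE wso2apim; SHOW TABLES;"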
As a first step in clustering the nodes, we have to enable clustering in each GREG node, of course. This is done by setting the enable attribute to 'true' in the <clustering> element of axis2.xml, which can be found in the GREG_HOME/repository/conf/axis2 directory, as in the following line. This needs to be done in all three nodes.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

* Change the GREG_HOME/repository/conf/axis2/axis2.xml as shown below
<parameter name="membershipScheme">wka</parameter> <parameter name="domain">wso2.governance.domain</parameter> <parameter name="localMemberHost">governance.cluster.wso2.com</parameter> <parameter name="localMemberPort">4250</parameter>This is for joining to the multicast domain "wso2.governance.domain". Nodes with same domain belong to the clustering group. "wka" is for Well-Known Address based multicasting. "localMemberPort" port on other nodes needs to be different. Say 4251 and 4252. This is the TCP port on which other nodes contact this node.
* In the <members> section, there need to be <member> tags with the details of the ELB, or of multiple ELBs if we had a cluster of them. The following shows the ELB with port 4321, which is the group management port we configured in loadbalancer.conf on the ELB. Since we have only one ELB, there is only one <member> tag.
<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4321</port>
    </member>
</members>

* Next we need to edit the HTTP and HTTPS proxy ports in GREG_HOME/repository/conf/tomcat/catalina-server.xml. Add a new attribute named proxyPort with the value 8280 for HTTP and 8243 for HTTPS; that is, add proxyPort="8280" and proxyPort="8243" to the two <Connector> tags, respectively, as in the sketch below.
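For reference, the two <Connector> tags would end up looking roughly like this. This is an abridged sketch assuming the default connector ports 9763 and 9443; keep all the other attributes already present in your catalina-server.xml and only add the proxyPort attribute.

<!-- HTTP connector: keep the existing attributes and add proxyPort -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="8280"/>
<!-- HTTPS connector: keep the existing attributes and add proxyPort -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="8243"/>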
* Next, since we are going to set up our nodes and the ELB on the local machine, we need to avoid port conflicts. For this, set the <Offset> in GREG_HOME/repository/conf/carbon.xml to 1 in the first node, 2 in the second node and 3 in the third node. You can use any offset you wish, but it has to be different for each node running on the same machine. If the nodes are on different servers, this need not be done. For example, the first node's carbon.xml would contain the excerpt shown below.
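This is a minimal excerpt, assuming the default carbon.xml layout where the <Offset> element sits inside the <Ports> section; the rest of the <Ports> section stays as it is.

<Ports>
    <!-- Ports offset: this value is added to all the ports the server uses -->
    <Offset>1</Offset>
</Ports>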
* In carbon.xml we also need to update the "HostName" and "MgtHostName" elements as follows
<HostName>governance.cluster.wso2.com</HostName>
<MgtHostName>governance.cluster.wso2.com</MgtHostName>

* The WSO2 Carbon platform supports a deployment model in which the nodes of a cluster can act as two types: 'management' nodes and 'worker' nodes. 'Management' nodes are used for changing configurations and deploying artifacts, while 'worker' nodes process requests. In WSO2 GREG there is no such request-processing concept, so all nodes are considered 'management' nodes. In a clustered setup, we have to configure this explicitly in axis2.xml by changing the value attribute of the "subDomain" <property> tag to "mgt". After the change, the property tags would look like the following.
<!-- Properties specific to this member -->
<parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
</parameter>

* Next open up the GREG_HOME/repository/conf/user-mgt.xml file and update the "dataSource" property as follows, so that the user manager tables are created in "wso2governance":
<Property name="dataSource">jdbc/WSO2CarbonDB_mount</Property>* Next we to configure the shared 'config' and 'governance' registries, as I earlier pointed out to you to read my previous blog post "Setting up WSO2 Governence Registry Partitions". OK.. we shall configure this in each GREG_HOME/repository/conf/registry.xml and GREG_HOME/repository/conf/datasources/master-datasources.xml. First in registry.xml we need to add the following additional configs.
Add a new <dbConfig>
<dbConfig name="wso2registry_mount"> <dataSource>jdbc/WSO2CarbonDB_mount</dataSource> </dbConfig>Add a new <remoteInstance>
<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>wso2registry_mount</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>root@jdbc:mysql://localhost:3306/wso2governance</cacheId>
</remoteInstance>

Add the <mount> paths for 'config' and 'governance'
<mount path="/_system/config" overwrite="true"> <instanceId>instanceid</instanceId> <targetPath>/_system/config</targetPath> </mount> <mount path="/_system/governance" overwrite="true"> <instanceId>instanceid</instanceId> <targetPath>/_system/governance</targetPath> </mount>Next in master-datasources.xml add a new <datasource> as below. Do not change the existing <datasource> of <name> WSO2_CARBON_DB.
<datasource>
    <name>WSO2_CARBON_DB_MOUNT</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_mount</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2governance</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

As the final step, modify the <datasource> for WSO2AM_DB as below.
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2apim?autoReconnect=true&amp;relaxAutoCommit=true</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Now we may start each WSO2 GREG node by running the following in GREG_HOME/bin
$ ./wso2server.sh

When all three nodes are up and running, the ELB console should have printed something similar to the following, indicating that the nodes have successfully joined the cluster.
[2013-11-12 16:52:29,260] INFO - HazelcastGroupManagementAgent Member joined [271050b5-69d0-468d-9e9e-29557aa550ef]: governance.cluster.wso2.com/127.0.0.1:4252
[2013-11-12 16:52:32,314] INFO - MemberUtils Added member: Host:127.0.0.1, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:52:32,315] INFO - HazelcastGroupManagementAgent Application member Host:127.0.0.1, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:53:01,042] INFO - TimeoutHandler This engine will expire all callbacks after : 86400 seconds, irrespective of the timeout action, after the specified or optional timeout
[2013-11-12 16:55:34,655] INFO - HazelcastGroupManagementAgent Member joined [912fe7db-7334-467c-a6a7-e6929e652edb]: governance.cluster.wso2.com/127.0.0.1:4251
[2013-11-12 16:55:38,707] INFO - MemberUtils Added member: Host:127.0.0.1, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:55:38,708] INFO - HazelcastGroupManagementAgent Application member Host:127.0.0.1, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:57:05,136] INFO - HazelcastGroupManagementAgent Member joined [cc177d58-1ec1-473e-899c-9f154303e606]: governance.cluster.wso2.com/127.0.0.1:4250
[2013-11-12 16:57:09,232] INFO - MemberUtils Added member: Host:127.0.0.1, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:57:09,232] INFO - HazelcastGroupManagementAgent Application member Host:127.0.0.1, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster

Great!!! Now we may access the Management Console by browsing to https://governance.cluster.wso2.com:8243/carbon, and the WSO2 GREG is now clustered. Every request that comes to the ELB will be distributed among the GREG nodes, providing scalability and high availability.
Comment: What version have you used?
Reply: This is WSO2 Governance Registry 4.6.0.