Clustered Deployment

While the Clustering Guide under the Configuration and Administration section provides comprehensive documentation on configuring, starting and controlling an UltraESB cluster, this document gives a more practical overview of a clustered UltraESB deployment. The objective of this guide is to walk the user through the deployment of a 3 node UltraESB cluster, including the respective ZooKeeper quorum configuration, following the best practices for clustered deployments. However, it is recommended that the user go through the Clustering Guide prior to doing a real production deployment.

Introduction

This guide walks through a 3 node UltraESB cluster deployment. The 3 UltraESB nodes in this deployment share configuration state using the Apache ZooKeeper framework, and utilize distributed caching for state replication. UltraESB separates the concern of cooperative control from that of replicating runtime state. You may refer to Scalability and High-Availability under Architecture and Design to find out more about this separation and the clustering implementation architecture.

As UltraESB clustering is implemented using Apache ZooKeeper, the first step is to set up ZooKeeper. Before digging into the details, let's first look at the deployment diagram that we are planning to implement. You will understand why this setup was selected, and the reasoning behind it, as we move into the details later on.

[Figure: clustered deployment diagram]

As per the above diagram, our deployment will be on 3 server nodes, each running both ZooKeeper and the UltraESB. The servers are assumed to have the host names "esb1", "esb2" and "esb3" respectively.

Setting up ZooKeeper

ZooKeeper, along with the scripts and configuration required to run it, is already shipped with the UltraESB. So we will use those to set up ZooKeeper for our clustered deployment instead of downloading and setting up ZooKeeper from scratch. (However, if your enterprise already has ZooKeeper nodes set up for wider use, you may use them, or set up such an environment if required.) You may also want to go through the ZooKeeper documentation to understand what you are doing and why you have to do it, although this guide will be sufficient to set up ZooKeeper as required for our purposes. The ZooKeeper documentation states that you have to run a replicated ZooKeeper quorum if you are running ZooKeeper in production. Here is the quote,

Quote from ZooKeeper Documentation
Running ZooKeeper in standalone mode is convenient for evaluation, some development, and testing. But in production, you should run ZooKeeper in replicated mode. A replicated group of servers in the same application is called a quorum, and in replicated mode, all servers in the quorum have copies of the same configuration file.

Hence we will be using a three node ZooKeeper quorum. You might wonder whether you can go with two ZooKeeper nodes instead. The answer is yes, but there is no real value in doing so, because a two node ZooKeeper quorum is equivalent in reliability to a single node ZooKeeper setup. ZooKeeper requires a majority of servers to be available at any given time; hence in a two node deployment, if one node goes down, ZooKeeper no longer has the majority required to operate. So in effect, any even number (n) of nodes is equal in reliability to n-1 nodes.

A ZooKeeper quorum should be expanded in odd numbers
The conclusion of the above discussion is that a ZooKeeper quorum should only be expanded in odd numbers, as in 3, 5, 7, etc., where necessary.
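
To make the reliability argument concrete, the following table works out the majority required and the number of node failures tolerated for a few ensemble sizes (simple arithmetic, shown for illustration):

Ensemble size   Majority required   Failures tolerated
2               2                   0
3               2                   1
4               3                   1
5               3                   2
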
Configuring ZooKeeper

Next you will need to prepare the ZooKeeper configuration for the quorum, which is a standard properties file with name-value pairs. A default configuration for a standalone ZooKeeper is shipped with the UltraESB and can be found in the conf/zoo.cfg file of the UltraESB installation directory, so let's edit it and change it as follows to configure it for the quorum we will set up.

ZooKeeper node configuration

tickTime=2000
dataDir=/var/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=esb1:2888:3888
server.2=esb2:2888:3888
server.3=esb3:2888:3888

Let's now review each of the above lines.

  1. tickTime - the basic time unit in milliseconds used by ZooKeeper. It is used to do heartbeats and the minimum session timeout will be twice the tickTime.

  2. dataDir - the location to store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.

  3. clientPort - the port to listen for client connections.

  4. initLimit - timeouts ZooKeeper uses to limit the length of time (in tickTime units) the ZooKeeper servers in quorum have to connect to a leader.

  5. syncLimit - limits how far (in tickTime units) out of date a server can be from a leader (a worked example follows this list).

  6. server.x - this line is repeated, incrementing x by 1, to define the servers in the quorum. These entries list the servers that make up the ZooKeeper service.
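
As a worked example with the values above: tickTime=2000 means one tick is 2 seconds, so the minimum client session timeout is 2 x tickTime = 4 seconds, initLimit=5 gives the servers 5 x 2 = 10 seconds to connect and sync to a leader, and syncLimit=2 allows a server to be at most 2 x 2 = 4 seconds out of date with the leader.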

Note
Note the two port numbers after each server name, "2888" and "3888". Peers use the former port to connect to other peers. Such a connection is necessary so that peers can communicate, for example, to agree upon the order of updates. More specifically, a ZooKeeper server uses this port to connect followers to the leader. When a new leader arises, a follower opens a TCP connection to the leader using this port. Because the default leader election also uses TCP, we currently require another port for leader election. This is the second port in the server entry.

Now we need to inform each server of its identity within the quorum. Remember that in the above configuration we defined server.1, server.2 and server.3, so it is time to give the servers their identifiers. This requires creating a file named myid in the ZooKeeper data directory (dataDir) that we configured in the above configuration file. Simply create the file (according to the previous configuration, the file path is /var/zookeeper/myid) containing only the identifier, which is just 1, 2 and 3 respectively on each server.

ZooKeeper myid Configurations

Server 1: Host name = esb1, file /var/zookeeper/myid will contain "1"
Server 2: Host name = esb2, file /var/zookeeper/myid will contain "2"
Server 3: Host name = esb3, file /var/zookeeper/myid will contain "3"
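
For example, on esb1 the data directory and the myid file could be created as follows (a minimal sketch, assuming /var/zookeeper does not exist yet and that you have the required permissions); repeat on esb2 and esb3 with the values 2 and 3 respectively.

$ mkdir -p /var/zookeeper
$ echo "1" > /var/zookeeper/myid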

That completes the ZooKeeper setup, but before starting the ZooKeeper quorum you may want to take a peek at the ZooKeeper administration guide as a production setup is not child’s play.

Note
If for any reason this setup needs to be done on a single machine, you need to make the client port and the server/leader election ports unique for each instance, while the host name of all the servers stays the same, which is the host name of that machine, for example "localhost".
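
For illustration only, a single machine setup could use one configuration file and one data directory per ZooKeeper instance, with unique ports, along the following lines; the file names, directories and port numbers here are assumptions, so adapt them as needed.

# zoo1.cfg - repeat as zoo2.cfg and zoo3.cfg with their own dataDir and clientPort
tickTime=2000
dataDir=/var/zookeeper/1
clientPort=2181
initLimit=5
syncLimit=2
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
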
Starting ZooKeeper

Starting the ZooKeeper quorum is now a matter of running the ZooKeeper startup script shipped with the UltraESB on each server instance; the script resides in the bin directory of the installation. So the command for running ZooKeeper on each node would be;

Starting ZooKeeper

$ sh bin/zkServer.sh start

ZooKeeper should now be up and running if all went well. Before proceeding with the UltraESB instance setup, you can check the status of the ZooKeeper quorum with the ZooKeeper client (bin/zkCli.sh or bin/zkCli.bat) shipped with the UltraESB.
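
For example, the following commands should report the status of the local ZooKeeper server (one node should report itself as the leader and the others as followers) and open a client connection to the quorum respectively; the exact output varies with the ZooKeeper version.

$ sh bin/zkServer.sh status
$ sh bin/zkCli.sh -server esb1:2181,esb2:2181,esb3:2181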

Setting up the UltraESB

Setting up UltraESB to communicate with the above ZooKeeper quorum is simple and requires changes only to a few properties in the cluster configuration found in the main UltraESB configuration file conf/ultra-root.xml.

Configuring UltraESB

Open up the ultra-root.xml file with your favourite editor and find the cluster-manager bean configuration. You will have to enable clustering by changing the constructor-arg value of the bean configuration from "false" to "true" as follows, and configure the rest of the parameters as appropriate.

UltraESB Cluster Manager Configuration

 1<bean id="cluster-manager" class="org.adroitlogic.ultraesb.clustering.ClusterManager">
 2    <constructor-arg value="false" type="boolean"/>
 3    <property name="zkConnectString" value="esb1:2181,esb2:2181,esb3:2181"/>
 4    <property name="zkSessionTimeout" value="30"/>
 5    <property name="domain" value="default"/>
 6    <property name="space" value="space1"/>
 7    <property name="nodeGroup" value="primary"/>
 8    <property name="nodeName" value="esb1"/>
 9    <property name="startupTimeout" value="30"/>
10    <property name="upTimeReportInterval" value="60"/>
11    <property name="zooKeeperServerConnectionHandler" ref="zk-conn-handler"/>
12</bean>

Note that the zkConnectString and nodeName properties have been changed from the defaults. The zkConnectString will be the same for all 3 nodes, as we are deploying all 3 nodes against the same ZooKeeper quorum, while the nodeName property should be changed to esb2 and esb3 as appropriate on the other 2 nodes. All these parameters are self-descriptive with comments; however, you may also go through the Clustering Configuration to understand each of these parameters in detail.
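
For example, on the second node only the nodeName property would differ from the configuration shown above (a sketch; esb3 uses "esb3" correspondingly):

<!-- on esb2 -->
<property name="nodeName" value="esb2"/>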

Save and close the editor to complete the configuration.

Starting UltraESB

Having saved this configuration, you can start the ESB cluster by issuing the startup command on all three nodes as follows.

$ sh bin/ultraesb.sh

Now, you are done with setting up a three node UltraESB cluster.

Verifying the UltraESB Cluster

The easiest way to verify the deployment is to start the IMonitor (AdroitLogic Integration Monitor).

AdroitLogic Integration Monitor - IMonitor
AdroitLogic Integration Monitor executes as an independent web application, and allows easy management of a single UltraESB instance or a cluster of instances. Whether it is a single instance or a cluster of ESB nodes, IMonitor delivers business level statistics and monitoring. Apart from the operational statistics, IMonitor provides friendly troubleshooting and diagnostics capabilities, saving hours of developer time and improving organisational efficiency. Note that IMonitor replaces the UConsole shipped with previous UltraESB releases, and is covered separately in the AdroitLogic - Integration Monitor User Guide.

More details on AdroitLogic Integration Monitor (IMonitor) Cluster Management can be found here.

Advanced Configurations related to Clustering

Enabling Remote JMX Access

Once again we refer to the same conf/ultra-root.xml configuration file to enable remote JMX access. Edit this file, then uncomment and change the following section of the configuration to enable it.

Configuration of Remote JMX Access

 1<bean id="serverConnector" class="org.springframework.jmx.support.ConnectorServerFactoryBean" depends-on="registry">
 2    <property name="objectName" value="connector:name=iiop"/>
 3    <!-- Remember to edit bin/ultraesb.sh or conf/wrapper.conf to specify the -Djava.rmi.server.hostname=<your-ip-address> property for JMX -->
 4    <property name="serviceUrl" value="service:jmx:rmi://localhost:9994/jndi/rmi://localhost:1099/ultra"/>
 5    <property name="threaded" value="true"/>
 6    <property name="daemon" value="true"/>
 7    <property name="environment">
 8        <map>
 9            <entry key="jmx.remote.x.access.file" value="conf/management/jmxremote.access"/>
10            <!--For plain text password file based access control-->
11            <entry key="jmx.remote.x.password.file" value="conf/management/jmxremote.password"/>
12            <!--For JAAS (e.g. LDAP / ActiveDirectory) based authentication-->
13            <!--<entry key="jmx.remote.x.login.config" value="LdapConfig"/>-->
14        </map>
15    </property>
16</bean>
17<bean id="registry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
18    <property name="port" value="1099"/>
19</bean>

As you can see above, there are two authentication options: plain text file based access control, and JAAS (e.g. LDAP) based authentication. The LDAP configuration file ldap.conf can be found in the conf directory. You may want to change the serviceUrl property of the serverConnector bean and the ports as appropriate, and you will also need to configure ultraesb.sh or wrapper.conf (depending on the method you use to start the UltraESB instance) to include the property "-Djava.rmi.server.hostname" with the value of the IP address or host name of the server running the UltraESB instance.
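
As a sketch of that last step, the system property could be passed along the following lines; the exact variable name in bin/ultraesb.sh and the next free property index in conf/wrapper.conf depend on your installation, so adapt these lines to the existing entries in those files.

# bin/ultraesb.sh - append to the JVM options passed to the java command
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=esb1"

# conf/wrapper.conf - add an additional JVM parameter using the next free index
wrapper.java.additional.20=-Djava.rmi.server.hostname=esb1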

Once this is enabled, you will be able to see the remote JMX service URL being printed in the server startup log as follows.

INFO ServerManager Instance available for management via JMX at : service:jmx:rmi://esb1:9994/jndi/rmi://esb1:1099/ultra
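
You can then connect to this URL from a remote machine with any JMX client, for example jconsole, supplying a user name and password from conf/management/jmxremote.access and jmxremote.password if file based access control is in use:

$ jconsole service:jmx:rmi://esb1:9994/jndi/rmi://esb1:1099/ultra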

For more information on enabling Remote JMX please refer to Remote JMX Access guide.

Enabling Distributed Caching

Another important optional configuration for clustering is to enable distributed caching, which allows stateful services to replicate and share data across the instances of the cluster.

As with any important UltraESB configuration, this too is configured via conf/ultra-root.xml. Edit it and uncomment the following section to enable distributed caching; you will then be able to access the distributed cache from any of your sequences.

Enabling Distributed Caching

1<bean id="cache-manager" class="org.adroitlogic.ultraesb.cache.ehcache.EhCacheManager">
2  <property name="ehCacheConfig" value="conf/ehcache.xml"/>
3</bean>

Once caching is enabled in a clustered environment, the cache manager automatically detects and publishes a distributed cache which is accessible via the Mediation API from each of your sequences. Please refer to the Mediation API for more information on accessing the distributed cache and its variants.
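
For reference, a cache definition in conf/ehcache.xml generally follows the Ehcache 2.x configuration format sketched below; the file shipped with the UltraESB already contains suitable defaults, and the cache name and limits used here are illustrative assumptions only.

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd">
    <!-- illustrative cache definition; name and limits are examples only -->
    <cache name="sample-cache"
           maxElementsInMemory="1000"
           eternal="false"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600"
           overflowToDisk="false"/>
</ehcache>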
