Clustering Guide

This document walks you through the clustering aspects of the UltraESB, building up to a complete clustered deployment of the UltraESB. It concludes by demonstrating the control commands and their operation to centrally control the cluster. The control commands are important because you do not need to go into each and every node to perform a particular control operation; instead, the operation is carried out cooperatively within the UltraESB cluster and can be invoked centrally via any node in the cluster.

The objective of this document is to present the configuration and administration aspects of clustering in the UltraESB. A detailed architectural overview of the UltraESB clustering implementation is discussed in Scalability and High-Availability under the Architecture Guide.

The clustering guide is composed of the following sub sections, and it is recommended that you go through them in order.

At the end of this guide you will be able to do an advanced cluster deployment. For more descriptive information on doing a production deployment, please refer to Clustered Deployment under the Deployment Guide.

Cluster Configuration

This document walks you through the clustering configuration, describing each and every configuration option related to UltraESB clustering in detail. This guide assumes that you have an UltraESB v2.3.0 instance extracted and installed (please refer to the Installation Guide for doing a basic installation of the UltraESB).

Enable Clustering

You need to edit the conf/ultra-root.xml configuration file to enable clustering. Find the "cluster-manager" Spring bean configuration in the ultra-root.xml file, found in the conf directory of the UltraESB home.

Clustering Configuration

<bean id="cluster-manager" class="org.adroitlogic.ultraesb.clustering.ClusterManager">
  ...
</bean>

The first step is to enable clustering (which is disabled by default) by changing the constructor argument of the cluster-manager bean from "false" to "true".

Properties

Now go through the properties of the above bean one by one and modify them as per your requirements; each of these properties is explained below.

ZooKeeper Connection String

The first property that you will find in the above configuration is "zkConnectString", which defines the connection string for the ZooKeeper quorum. For this documentation we will be using only one ZooKeeper instance, but in a production deployment it is highly recommended to have 3 or more ZooKeeper instances running as a quorum, in which case you need to change the value of this property accordingly. The UltraESB ships with the libraries and the scripts to run ZooKeeper as a separate process, which we will use as a reference in this documentation; however, nothing prevents you from grabbing the ZooKeeper distribution from the Apache ZooKeeper downloads page and using it as the ZooKeeper installation.

In any case, before doing a production deployment it is recommended that you go through the Apache ZooKeeper documentation to understand its deployment patterns and the best practices for deploying a ZooKeeper quorum. In the case of a ZooKeeper quorum deployment guaranteeing high availability, the connection string will point to the list of ZooKeeper nodes.
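
For example, with a three-node quorum the property could be set as in the following sketch (the host names are placeholders for your own ZooKeeper servers):

<property name="zkConnectString" value="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"/>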

ZooKeeper Session Timeout

The next property specifies the ZooKeeper session timeout, which is the maximum time ZooKeeper will keep a session (for the UltraESB instance in discussion) alive after receiving the last heartbeat. This is set to 30 seconds by default, and we will be keeping it as it is. Depending on the requirements of your deployment, you may adjust this value by editing the property named "zkSessionTimeout".
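
For example, to raise the timeout to 60 seconds you would change the property as follows (following the description above, the value is given in seconds, matching the default of 30 shown in the complete configuration below):

<property name="zkSessionTimeout" value="60"/>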

Cluster Domain

Then you will get to the "domain" property, which defines the cluster domain in which this cluster runs. The cluster domain allows you to run 2 or more different UltraESB clusters on the same ZooKeeper instance/quorum. It defines a root space for the cluster and makes sure that a node in a different cluster domain does not interfere with the nodes in this domain. The value of this property defaults to "default". All the nodes in a particular cluster should define the domain with the same value.
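
For example, to run two independent clusters against the same ZooKeeper quorum, the nodes of each cluster could define the domain differently (the domain names below are purely illustrative):

<!-- all nodes of the first cluster -->
<property name="domain" value="staging"/>

<!-- all nodes of the second cluster -->
<property name="domain" value="production"/>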

Space

A space is a categorization of nodes: there can be many spaces within a cluster domain, and many nodes within a space. In general, the configuration of the set of nodes in a space is treated as identical.

Node Group

The name of the node group to be used for the physical/logical categorization of nodes. This could be used, for example, to distinguish the nodes in different data centers.

Node Name

Next is the property "nodeName", which needs to be unique for each node in a given domain. If there are 2 nodes in the same cluster domain with the same node name, the node which starts later will fail to start, throwing a duplicate node name error. While this property defaults to "node1", you may change it to a value like "myFirstNode", or anything else that uniquely identifies the node in the cluster.

Node name is optional in certain conditions
This is optional if you pass a -serverName property to the server start-up script, or specify it in wrapper.conf (when running the server as a daemon), with a value other than "localhost". This gives you the ability for all nodes to share the same /conf directory via a network file share.
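
As a rough sketch, and assuming the property is passed straight to the start-up script as described above (the exact flag placement is an assumption and may differ in your installation), a second node sharing the same configuration could then be started as:

$ sh bin/ultraesb.sh -serverName node2
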
Startup Timeout

The property named "startupTimeout" controls the startup behavior. The ZooKeeper session initialization is asynchronous, and hence the cluster manager start is asynchronous too. However, the cluster manager start call looks at this property to decide whether it should wait until the ZooKeeper initialization is complete before returning, simulating a synchronous start of the cluster manager from the server startup point of view. This guarantees that the cluster manager has started before the rest of the startup continues. It is possible to keep the asynchronous behavior by setting the value of this property to "0", though that is not recommended for a production deployment in general. The default value of this parameter is 30 seconds, meaning that the server startup will wait up to 30 seconds for the cluster manager to establish the ZooKeeper connection and initialize. If it fails to initialize within the specified time, the complete server startup process will fail.

Note
Allowing the cluster manager initialization to be asynchronous by setting this value to "0" might be useful in certain cases where you explicitly expect the ZooKeeper quorum to become available later, at which point the clustering will be initialized, but you still want the startup process to complete without waiting for the ZooKeeper session initialization.
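
For example, to keep the cluster manager start-up fully asynchronous you could set:

<property name="startupTimeout" value="0"/>
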
Up Time Report Interval

The uptime reporter timer interval is configured by the property named "upTimeReportInterval". This guarantees that the session end time is reported to an accuracy of this time interval; in other words, the UltraESB reports its uptime to the ZooKeeper quorum at the specified interval.

ZooKeeper Server Connection Handler

This is a reference to the ZooKeeper server connection handler bean, which handles ZooKeeper server disconnect/reconnect events. Following is the ZooKeeper server connection handler bean.

<bean id="zk-conn-handler" class="org.adroitlogic.ultraesb.clustering.ZooKeeperServerConnectionHandlerImpl">
    <property name="server" value="false"/>
</bean>

Here, setting the property "server" to "true" means that node based failover is used, while "false" means that node group based failover is used.
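
For example, to switch to node based failover you could change the handler configuration as follows:

<bean id="zk-conn-handler" class="org.adroitlogic.ultraesb.clustering.ZooKeeperServerConnectionHandlerImpl">
    <!-- "true" selects node based failover; "false" (shown above) selects node group based failover -->
    <property name="server" value="true"/>
</bean>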

Failover Processor

The failover processor (configured with the property name "failoverProcessor") points to a bean that defines the cluster node failover processing. That bean defines a failover node matrix and a few other properties of the failover processing of the nodes in the cluster, as discussed under the failover configuration.

ZooKeeper Session File

This is an optional property (named "zkSessionFile") that indicates the file where the ZooKeeper session information for this UltraESB instance should be stored. This property is useful if you wish to run multiple UltraESB instances using a shared configuration directory (more specifically, a shared ZooKeeper data directory), in which case session files of older instances would get overwritten upon starting a new instance, if separate session files are not specified for each instance.
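
For example, each instance could point to its own session file (the path below is only an illustrative placeholder):

<property name="zkSessionFile" value="/opt/ultraesb-shared/tmp/node1-zk.session"/>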

Complete Configuration

The complete resulting configuration should look as follows:

Complete Clustering Configuration

<bean id="cluster-manager" class="org.adroitlogic.ultraesb.clustering.ClusterManager">
    <constructor-arg value="true" type="boolean"/>
    <property name="zkConnectString" value="127.0.0.1:2181"/>
    <property name="zkSessionTimeout" value="30"/>
    <property name="domain" value="default"/>
    <property name="space" value="space1"/>
    <property name="nodeGroup" value="primary"/>
    <property name="nodeName" value="node1"/>
    <property name="startupTimeout" value="30"/>
    <property name="upTimeReportInterval" value="60"/>
    <property name="zooKeeperServerConnectionHandler" ref="zk-conn-handler"/>
</bean>

Save the ultra-root.xml file before closing it.

You need to repeat this same process for the other copies of the UltraESB installation, changing just the nodeName property so that it is unique for each and every node.

If you are running two UltraESB instances with the HTTP and HTTPS transports as a cluster on a single machine, the ports of the second instance should be updated to prevent a conflict - otherwise you will get an "Address already in use" bind exception. For this exercise, look for the "http-8280" bean configuration and change the port property from 8280 to something like 8281, and do the same for the "https-8443" transport, as sketched below. You should not change the bean names of the transports - as that will cause the proxy services to fail at startup, since they will not be able to locate the new names - unless the services are also altered.
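
A minimal sketch of the change for the second instance is shown below; only the port value changes, while the bean id, class and any other properties of your existing listener configuration stay as they are:

<bean id="http-8280" class="..."> <!-- keep the existing listener class and other settings unchanged -->
    <!-- only the port changes on the second instance, to avoid the bind conflict -->
    <property name="port" value="8281"/>
    ...
</bean>
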
Failover Configuration

The following bean defines a failover processor, to be referenced from the clustering configuration if you need failover of the nodes in the cluster, to make sure that any specific tasks are taken care of in the case of a node failure.

Failover processor configuration

<bean id="failover-processor" class="org.adroitlogic.ultraesb.clustering.FailoverProcessor">
    <property name="failoverNodeMatrix">
        <map>
            <entry key="node1" value="node2,node3"/>
            <entry key="node2" value="node1,node3"/>
            <entry key="node3" value="node1,node2"/>
        </map>
    </property>

    <property name="failoverGroupMatrix">
        <map>
            <entry key="primary">
                <list>
                    <bean class="org.adroitlogic.ultraesb.clustering.FailoverGroupConfig">
                        <property name="failoverGroup" value="secondary"/>
                        <property name="triggerNodeCount" value="2"/>
                        <property name="triggerNodePercentage" value="25"/>
                        <property name="triggerOnMajority" value="true"/>
                    </bean>
                </list>
            </entry>
        </map>
    </property>

    <property name="secondsBeforeFailover" value="60"/>
    <property name="failoverMissingNodesAtStartup" value="true"/>
    <property name="secondsBeforeFailoverAtStartup" value="300"/>
    <property name="scriptOnFailOver" value=""/>
    <property name="scriptOnFailBack" value=""/>
</bean>

Please note that this bean is commented out by default, as is the property that refers to it from the clustering configuration in the ultra-root.xml file. Please uncomment them and change the properties discussed below as per your needs.

Properties

The failover configuration, as you have seen in the previous section, has the following properties to be configured:

Failover Node Matrix

The property failoverNodeMatrix is the matrix defining the node failover configuration; all nodes in the cluster should have an identical failover node matrix. The entry keys specify the names of the nodes, and each value is the corresponding failover node set for that node, as a comma separated list of server names.

Failover Node Group Matrix

The property failoverGroupMatrix is the matrix supporting the failover configuration with node groups; similar to failoverNodeMatrix, all nodes in the cluster should have an identical failover node group matrix.

Script on FailOver

This property can be set to run a specific shell script when the node starts acting as the primary group.

Script on FailBack

This property can be set to run a specific shell script when the node stops acting as the primary group.
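
For example, the two properties could point to custom notification scripts (the paths below are purely illustrative placeholders):

<property name="scriptOnFailOver" value="/opt/ultraesb/scripts/notify-failover.sh"/>
<property name="scriptOnFailBack" value="/opt/ultraesb/scripts/notify-failback.sh"/>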

Seconds Before Failover

This is an optional property; if configured, it is the delay that the server waits, to see whether the failed node re-joins the cluster, before starting to act as that node. This defaults to zero, meaning that the server will immediately act as the failed node.

Failover Missing Nodes at Startup

This defines whether nodes that are missing at startup, and that have configured this node as their failover node, should be treated as failed by this node, so that it starts to act as them at startup.

Seconds Before Failover at Startup

This depends on the previous property: if "failoverMissingNodesAtStartup" is true, this is the delay that gives the missing nodes some time to appear in the cluster before failover is triggered at startup.

Starting a Cluster

This document describes starting a configured cluster. Please follow the document on Clustering Configuration to properly configure a cluster with respect to your requirements. Starting an UltraESB cluster consists of 2 steps, as the UltraESB instances depend on the availability of a ZooKeeper quorum/instance.

Starting ZooKeeper

Prior to starting the configured UltraESB instances, the ZooKeeper instance needs to be started. The UltraESB ships with a built-in ZooKeeper server as well as a simple client. The default configuration of the ZooKeeper server is predefined to work with the default ultra-root configuration. You may have a look at the ZooKeeper server configuration, which can be found in the UltraESB conf directory as "zoo.cfg"; if you change the clientPort, the zkConnectString property used earlier in the clustering configuration will need to be changed accordingly. Please refer to the Apache ZooKeeper documentation (to be specific, the ZooKeeper Administration Guide) on configuring ZooKeeper.

Note
Change the dataDir if you are on Windows, or if you want the ZooKeeper persistent data to be stored somewhere other than /var/zookeeper.
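
A minimal zoo.cfg for this single-instance setup could look like the following sketch; the clientPort matches the zkConnectString used earlier, and the other values are common ZooKeeper settings shown here only as an illustration:

# basic time unit used by ZooKeeper, in milliseconds
tickTime=2000
# directory where ZooKeeper stores its persistent data (see the note above)
dataDir=/var/zookeeper
# port on which clients (the UltraESB nodes) connect
clientPort=2181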

Use the following command to start the ZooKeeper server from the UltraESB home directory; you will see the following on the command line, along with some other log messages after that.

Starting the ZooKeeper server
$ sh bin/zkServer.sh start
JMX enabled by default
Using config: /opt/ultraesb-2.6.1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Starting UltraESB

Now that you have ZooKeeper up and running, let's move to the UltraESB. Once the UltraESB is configured as per the Clustering Configuration, it is just a matter of starting the ESB instances. Use the typical UltraESB startup mechanisms to start the UltraESB and observe the logs.

Starting the UltraESB
$ sh bin/ultraesb.sh
Starting AdroitLogic UltraESB ...
Using JAVA_HOME  : /opt/jdk
Using ULTRA_HOME: /opt/ultraesb-2.6.1
Reading configuration from : /opt/ultraesb-2.6.1/conf/ | Current directory is : /opt/ultraesb-2.6.1/.
2016-08-22T16:22:52,320 [-] [main] [system] [000000I]  INFO ServerManager ServerManager initializing..
2016-08-22T16:22:52,407 [-] [main] [system] [000000I]  INFO UltraServer UltraESB JAR file version/s : [2.6.1]
2016-08-22T16:22:52,455 [-] [main] [system] [000000I]  INFO UltraServer BouncyCastle set as the preferred security provider..
2016-08-22T16:22:52,546 [-] [main] [system] [000000I]  INFO UltraServer JCE unlimited strength jurisdiction policy files are installed
2016-08-22T16:22:52,548 [-] [main] [system] [000000I]  INFO CustomStreamHandlerFactory CustomStreamHandlerFactory initializing..
2016-08-22T16:22:52,626 [-] [main] [] []  INFO LicenseClient License validation successful
2016-08-22T16:22:52,626 [-] [main] [system] [000000I]  INFO LicenseManager Licensed to :adroitlogic
2016-08-22T16:22:52,920 [-] [main] [system] [000000I]  INFO PooledMessageFileCache Directory : /opt/ultraesb-2.6.1/tmp/rajind_localhost locked for file cache
2016-08-22T16:22:53,198 [-] [main] [system] [000000I]  INFO ConfigurationImpl Starting AdroitLogic (http://www.adroitlogic.org) - UltraESB/2.6.1 - node1
2016-08-22T16:22:54,079 [-] [main] [system] [000000I]  INFO ConfigurationImpl Pre-initialization of the engine..
2016-08-22T16:22:54,347 [-] [main] [system] [000000I]  INFO MediationImpl Initializing the Mediation feature
2016-08-22T16:22:54,532 [-] [main] [system] [000000I]  INFO TransformationUtils Initializing Transformation feature
2016-08-22T16:22:54,536 [-] [main] [system] [000000I]  INFO JSONUtils Initializing the JSON feature
2016-08-22T16:22:54,537 [-] [main] [system] [000000I]  INFO XMLFeatures Initializing the XML feature
2016-08-22T16:22:54,555 [-] [main] [system] [000000I]  INFO FastInfosetUtils Initializing the Fast-Infoset feature
2016-08-22T16:22:54,557 [-] [main] [system] [000000I]  INFO CryptoSupport Initializing the Security and Crypto features
2016-08-22T16:22:54,578 [-] [main] [system] [000000I]  INFO ServerManager Starting server ...
2016-08-22T16:22:54,578 [-] [main] [system] [000000I]  INFO ServerManager Linux - amd64 [4.4.0-34-generic] / Processors : 8
2016-08-22T16:22:54,579 [-] [main] [system] [000000I]  INFO ServerManager Total physical memory : 16,298,260K (6,645,124K free)  Heap max : 1,005,568K
2016-08-22T16:22:54,579 [-] [main] [system] [000000I]  INFO ServerManager Java HotSpot(TM) 64-Bit Server VM, Oracle Corporation [24.80-b11]
2016-08-22T16:22:54,579 [-] [main] [system] [000000I]  INFO ServerManager Instance available for management via JMX at : service:jmx:rmi://localhost:9994/jndi/rmi://localhost:1099/ultra
2016-08-22T16:22:54,588 [-] [main] [system] [000000I]  INFO ServerManager Starting cluster manager..
2016-08-22T16:22:54,588 [-] [main] [system] [000000I]  INFO ClusterManager Asynchronous starting signal received by the cluster manager
2016-08-22T16:22:54,631 [-] [main] [system] [000000I]  INFO ClusterManager Waiting maximum 29 seconds for the cluster manager to start
2016-08-22T16:22:54,704 [-] [main-EventThread] [system] [000000I]  INFO ClusterManager Zookeeper connection established State:CONNECTED Timeout:30000 sessionid:0x156b1df290e0000 local:/127.0.0.1:45076 remoteserver:127.0.0.1/127.0.0.1:2181 lastZxid:0 xid:1 sent:1 recv:1 queuedpkts:0 pendingresp:0 queuedevents:0
2016-08-22T16:22:54,705 [-] [main-EventThread] [system] [000000I]  INFO ZooKeeperServerConnectionHandlerImpl Zookeeper reconnected. Starting self stopped entities if any...
2016-08-22T16:22:54,705 [-] [main-EventThread] [system] [000000I]  INFO ServerManager No self stopped entities found
2016-08-22T16:22:55,225 [-] [main-EventThread] [system] [000000I]  INFO ClusterManager Clustering command processors initialized, the command version set to : 0
2016-08-22T16:22:55,233 [-] [main-EventThread] [system] [000000I]  INFO ClusterManager Cluster manager has been successfully started
2016-08-22T16:22:55,250 [-] [main] [system] [000000I]  INFO fileCache Initialized cache of : 0 files at : /opt/ultraesb-2.6.1/tmp/rajind_localhost
2016-08-22T16:22:55,260 [-] [main] [system] [000000I]  INFO SimpleQueueWorkManager Started Work Manager : default
2016-08-22T16:22:55,260 [-] [main] [system] [000000I]  INFO ServerManager Starting Proxy Services, Endpoints and Sequences
2016-08-22T16:22:55,262 [-] [main] [system] [000000I]  INFO DeploymentUnit Starting deployment unit : system
2016-08-22T16:22:55,262 [-] [main] [system] [000000I]  INFO DeploymentUnit Starting transport senders..
2016-08-22T16:22:55,263 [-] [main] [system] [000000I]  INFO HttpsNIOSender Https transport sender : https-sender using default identity keystore
2016-08-22T16:22:55,263 [-] [main] [system] [000000I]  INFO HttpsNIOSender Https transport sender : https-sender using the default trust keystore
2016-08-22T16:22:55,352 [-] [HttpsNIOSender-https-sender] [system] [000000I]  INFO HttpsNIOSender Starting NIO Sender : https-sender ...
2016-08-22T16:22:55,353 [-] [main] [system] [000000I]  INFO DeploymentUnit Initializing transport listeners..
2016-08-22T16:22:55,353 [-] [HttpNIOSender-http-sender] [system] [000000I]  INFO HttpNIOSender Starting NIO Sender : http-sender ...
2016-08-22T16:22:55,363 [-] [main] [system] [000000I]  INFO Address Started Address : address of endpoint : echo-service
2016-08-22T16:22:55,364 [-] [main] [system] [000000I]  INFO echo-service Endpoint : echo-service started
2016-08-22T16:22:55,367 [-] [main] [system] [000000I]  INFO Address Started Address : address of endpoint : mediation.response
2016-08-22T16:22:55,367 [-] [main] [system] [000000I]  INFO Endpoint Endpoint : mediation.response started
2016-08-22T16:22:55,780 [-] [main] [system] [000000I]  INFO error-handler Sequence : error-handler started
2016-08-22T16:22:55,841 [-] [main] [system] [000000I]  INFO health-check-inSequence Sequence : health-check-inSequence started
2016-08-22T16:22:55,842 [-] [main] [system] [000000I]  INFO health-check Proxy service : health-check started
2016-08-22T16:22:55,877 [-] [main] [system] [000000I]  INFO ServerManager UltraESB root deployment unit started successfully
2016-08-22T16:22:55,879 [-] [main] [system] [000000I]  INFO DeploymentUnitBuilder Using the hot-swap class loading of libraries/classes for deployment unit default
2016-08-22T16:22:55,921 [-] [main] [system] [000000I]  INFO DeploymentUnitBuilder Successfully created the deployment unit : default
2016-08-22T16:22:55,921 [-] [main] [system] [000000I]  INFO DeploymentUnit Starting deployment unit : Default
2016-08-22T16:22:55,921 [-] [main] [system] [000000I]  INFO DeploymentUnit Starting transport senders..
2016-08-22T16:22:55,923 [-] [main] [system] [000000I]  INFO DeploymentUnit Initializing transport listeners..
2016-08-22T16:22:55,923 [-] [HttpNIOSender-http-sender-du] [system] [000000I]  INFO HttpNIOSender Starting NIO Sender : http-sender-du ...
2016-08-22T16:22:55,989 [-] [main] [system] [000000I]  INFO echo-back-inSequence Sequence : echo-back-inSequence started
2016-08-22T16:22:55,989 [-] [main] [system] [000000I]  INFO echo-back Proxy service : echo-back started
2016-08-22T16:22:56,035 [-] [main] [system] [000000I]  INFO echo-proxy-outSequence Sequence : echo-proxy-outSequence started
2016-08-22T16:22:56,037 [-] [main] [system] [000000I]  INFO Address Started Address : address of endpoint : echo-proxy-inDestination
2016-08-22T16:22:56,037 [-] [main] [system] [000000I]  INFO echo-proxy-inDestination Endpoint : echo-proxy-inDestination started
2016-08-22T16:22:56,039 [-] [main] [system] [000000I]  INFO Address Started Address : address of endpoint : echo-proxy-outDestination
2016-08-22T16:22:56,039 [-] [main] [system] [000000I]  INFO echo-proxy-outDestination Endpoint : echo-proxy-outDestination started
2016-08-22T16:22:56,039 [-] [main] [system] [000000I]  INFO echo-proxy Proxy service : echo-proxy started
2016-08-22T16:22:56,076 [-] [main] [system] [000000I]  INFO ConfigurationImpl Successfully added the deployment unit : default
2016-08-22T16:22:56,076 [-] [main] [system] [000000I]  INFO DeploymentUnit Starting transport listeners of : system
2016-08-22T16:22:56,084 [-] [HttpNIOListener-http-8280] [system] [000000I]  INFO HttpNIOListener Starting NIO Listener : http-8280 on port : 8280 ...
2016-08-22T16:22:56,097 [-] [main] [system] [000000I]  INFO HttpsNIOListener Identity keystore loaded from : conf/keys/identity.jks
2016-08-22T16:22:56,146 [-] [main] [system] [000000I]  INFO HttpsNIOListener Trust keystore loaded from : conf/keys/trust.jks
2016-08-22T16:22:56,146 [-] [main] [system] [000000I]  INFO DeploymentUnit Starting transport listeners of : Default
2016-08-22T16:22:56,147 [-] [main] [system] [000000I]  INFO ConfigurationImpl UltraESB/2.6.1 - Server node1 started..
2016-08-22T16:22:56,147 [-] [HttpNIOListener-http-8282] [system] [000000I]  INFO HttpNIOListener Starting NIO Listener : http-8282 on port : 8282 ...
2016-08-22T16:22:56,149 [-] [HttpsNIOListener-https-8443] [system] [000000I]  INFO HttpsNIOListener Starting NIO Listener : https-8443 on port : 8443 ...
Note
Some log lines have been removed in the above start-up log for the clarity of presentation

Now look at the above log and carefully examine the log statements from the ClusterManager, which show the startup of the cluster manager. Once you start the second instance too, you have a cluster of ESB instances up and running. In the same way, you could start any number of UltraESB instances with just the nodeName changed for each node.

Verify Clustering (Optional)

You can optionally use the ZooKeeper client to examine the structure of the ZooKeeper node space created by the UltraESB cluster. To do that, run the ZooKeeper client with the following command.

Run ZooKeeper client and examine (Optional)
$ sh bin/zkCli.sh -server 127.0.0.1:2181
Connecting to 127.0.0.1:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 127.0.0.1:2181(CONNECTED) 0]

The ZooKeeper client console lets you examine the node structure using the "ls" command. For example, to list the active nodes in the default cluster domain you could use the following command:

[zk: 127.0.0.1:2181(CONNECTED) 4] ls /ultraesb/default/nodes/active
[myFirstNode, mySecondNode]
[zk: 127.0.0.1:2181(CONNECTED) 5]

Likewise, you can observe any interaction of the UltraESB with ZooKeeper using the client console. The next step in working with a cluster is the control operations, where you can command the complete cluster in one go.

Controlling a Cluster

This document describes controlling a running cluster. Please follow the documents on Clustering Configuration and Starting a Cluster to set up a running cluster with respect to your requirements. There are several ways to control a cluster by connecting to any of the nodes in the given cluster.

  1. Using the JMX with a tool like JConsole

  2. Using the UltraESB terminal management client, UTerm

  3. Using AdroitLogic Integration Monitor (IMonitor), which runs as an independent Web Application

AdroitLogic Integration Monitor - IMonitor
AdroitLogic Integration Monitor runs as an independent Web Application and allows easy management of a single UltraESB instance or a cluster of instances. Be it a single instance or a cluster of ESB nodes, IMonitor delivers business-level statistics and monitoring. Apart from the operational statistics, IMonitor also provides user-friendly troubleshooting and diagnostics capabilities, helping to improve organisational efficiency and save hours of developer time. Note that IMonitor replaces the UConsole that shipped with previous UltraESB releases, and is covered separately in the AdroitLogic - Integration Monitor User Guide.

While the usage of UTerm to control the cluster is presented in the UTerm documentation, in this section we will be looking at the JConsole option for controlling the running cluster.

Introduction

The UltraESB cluster controls are built into the core of the run-time and exposed via JMX, as well as through the Web based management console. If clustering is enabled, you will be able to manage the complete cluster by connecting to any node of that cluster via JMX, or from the Web based management console.

Let's use the JMX management capability to try out cooperative control with the UltraESB control commands.

Start JConsole and Connect

First you need to start a JMX client, like jconsole, which ships with the JDK. You should be able to bring up the standard jconsole by executing the following command on the console.

Start JConsole

$ jconsole

You will be prompted to select the Java process to connect to; select any of the UltraESB processes from the list of available Java processes and connect to it, as shown below.

connect jconsole

Once you are connected, you can see different tabs at the top of the connected jconsole window. On the "MBeans" tab, in the left navigation menu, expand the "org.adroitlogic.ultraesb" section and, under that, the "ClusterManagement" sub section, where you can find the Cluster, Endpoints, ProxyServices, Sequences and ServerManagement MXBeans of the cluster.

Before going to the cluster controls, first have a look at the normal Endpoints MXBean and inspect its attributes to observe that the "echo-service" endpoint state is "Started", as shown below.

view endpoint
Invoking a Cluster Control Operation

Then connect to any other UltraESB instance of the same cluster domain using jconsole with a new connection, and go to the Endpoints MBean within the ClusterManagement section to execute a cluster control command on the said endpoint. To observe the effect of the operation, let's invoke the "stop" operation on this endpoint from the jconsole window which is connected to the second UltraESB instance.

cluster stop ep

You will see an information message saying that the operation has completed successfully. Then switch back to the jconsole window which was connected to the first UltraESB instance and see that the endpoint state has changed from "Started" to "Stopped". Now observe that the second instance, from which you fired the command, also shows its endpoint state as "Stopped", meaning that the stop operation was effective across the complete cluster with just one execution from a single node in the cluster.

Command History and States

Further to that, you can find all the control commands, as well as their respective execution states among the nodes in the cluster, under the Cluster MXBean. If you browse the Cluster MXBean attributes and double-click on the CurrentCommands value, you will be able to see a composite data structure with the executed commands and their respective metadata and states, as follows.