This section discusses the key architectural and design aspects that make the UltraESB the best ESB. The aspects discussed here are mostly non-functional, but they directly address the performance, scalability and reliability of the ESB.
The UltraESB codebase was started in late 2009, with an objective to build the best available Enterprise Service Bus (ESB). It concentrated on three key areas as follows:
Simplicity of use for development and management
Achieve best performance
Ability and ease of extension
As the UltraESB was a completely new ESB implementation, without any historical code or customer liabilities piggybacking on it, sweeping simplifications could be made in its architecture and design that were not possible for most previously initiated ESB projects.
File Caching and Storage of Payloads in Files
Memory map the files or use a RAM disk based file system
Achieve the speed of RAM with the ease of Files
Make use of RAM capacities without a Garbage Collection (GC) overhead
Use the 'sendfile' system call to transfer payloads to/from the network interface
Use Direct Memory Access (DMA), bypassing the CPU, for network-to-file transfers
Re-use existing developer skills without introducing a new language or DSL
Use familiar Javadocs, with a simple mediation API
IDE based auto-completion, unit testing/execution and step-through debugging
Based on the Spring framework and a small number of stable libraries
Configuration is 100% Spring Based
Support dynamic updating of subsets
Full distribution is under 40 MB in size, with a minimal distribution under 8 MB
Support and implement end-to-end unit testing with a high code coverage level
Start out writing end-to-end unit tests, and make it simple
Use JMX to report metrics and manage runtime instances
Standardize on JMX based management and reporting
Do not try to re-invent things, simply re-use products such as Zabbix
As firm believers in architecture, performance and quality, we’ve put into use some of the Computer Systems Design Principles of Prof. Jerry Saltzer of the Massachusetts Institute of Technology (MIT), with special focus on the use of the 'End-to-End Argument' and the establishment of 'Conceptual Integrity' to ensure optimal design.
Although not related to architecture or design, we also made the following business level decisions:
Develop and release only ONE version of the UltraESB
There would not be a cut-down 'free' version and an enterprise-grade 'paid' version
Simply offer the SAME version - under two FREE licenses
AGPL license with source code
AdroitLogic Zero-Dollar license with binary
Clustering, High Availability, Node fail-over, and ALL such enterprise grade features will be freely available
Product documentation and samples will remain freely accessible to end users
Community support and Proof-of-Concept support will be available free of charge
24x7 Production Support, Training and Consultancy will be available through AdroitLogic for a fee
Storage of payloads in files is a technique used in UltraESB to improve its performance and reliability. This section discusses the design and architecture around this concept.
The UltraESB stores payloads in files, which makes it easy to deal with potentially large payloads seamlessly; by making use of memory-mapped files or RAM disks, the system operates at the speed of RAM with the ease of files.
Efficient use of RAM and Heap memory and no GC overhead
Storing the complete payload in a file is a much better implementation than holding part of the payload in Java heap memory and the rest in an overflow file on disk, as heavy use of heap memory on a high-throughput ESB creates a large GC overhead on the virtual machine. Instead, allocating a large amount of RAM for memory-mapping, or to a RAM disk, makes better use of the large amounts of RAM available on typical production systems. This allows the UltraESB to run with a much smaller and more efficient heap, while utilizing large amounts of RAM to handle heavy loads.
Optimizations possible due to storage on files and repeatability of the payload
Since the application-visible payload is held in a file, accessing and using the payload is simple and straightforward, and the bytes can be re-read repeatedly, which enables further optimizations. For example, if an incoming XML message requires evaluation of an XPath over the payload, this can still be achieved - even without a full XML parse - by using libraries such as VTD-XML, which index into offsets of the payload without creating a huge cloned object structure of the same information. Where the original payload is not modified during routing, the original payload file can be used again; for example, forwarded to a backend service using outgoing zero-copy. If a backend service fails and the UltraESB has to fail over to another address, the original payload file can again be used to re-send the original request. Keeping the payload in heap memory, either as raw bytes or as objects (e.g. DOM or StAX), would require multiple rounds of copying and serializing the data between formats and locations to achieve the same.
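As a runnable illustration of this repeatability, the sketch below evaluates an XPath expression directly against a payload file, twice, using only the JDK's javax.xml.xpath API. The class and element names are invented for the example, and VTD-XML would achieve the same result far more efficiently by indexing byte offsets instead of building a DOM:

```java
import java.io.File;
import java.nio.file.Files;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Illustrative sketch (not the UltraESB API): the payload lives in a file,
// so it can be read as many times as needed - e.g. once for an XPath-based
// routing decision, and again later to re-send the request on fail-over.
public class PayloadFileXPath {

    public static String routeFor(File payloadFile) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Evaluate directly against the stored payload file; the file is
        // not consumed or mutated, so it remains reusable afterwards
        return xpath.evaluate("/order/customer/region",
                new InputSource(payloadFile.toURI().toString()));
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("payload", ".xml");
        Files.write(f.toPath(),
                "<order><customer><region>EU</region></customer></order>".getBytes("UTF-8"));
        // The same payload file can be evaluated repeatedly
        String first = routeFor(f);
        String second = routeFor(f);
        System.out.println(first + " " + second); // prints: EU EU
        f.delete();
    }
}
```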
Zero-Copy Proxying for incoming and outgoing messages
Storing the payload in a file allows the JDK to use the Zero-Copy support of modern JDKs, backed by the 'sendfile' system call, to transfer raw bytes between the network interface and the payload file. This allows the bytes to bypass the CPU entirely, by using Direct Memory Access (DMA). Zero-Copy support is explained further in the next section.
The initial versions of the UltraESB introduced the default PooledMessageFileCache implementation, which memory mapped a section of each file created. By tuning this memory mapped threshold, it would be possible to make most of the payloads "fit" into the memory mapped section of the files, thereby avoiding any real disk access.
Detailed load testing with the memory mapped file cache showed that where RAM is adequately available, creating a complete RAM disk based file system would offer even better performance and simplicity. The RAM disk file cache requires an Operating System level RAM disk to be created first and assigned. In this case, if the RAM disk has adequate capacity, all messages will be guaranteed to be retained fully on RAM. The implementation allows the overflow to a secondary disk, if the RAM disk capacity becomes exhausted.
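The idea behind a memory-mapped threshold can be sketched with the JDK's FileChannel.map. Everything below (the class name, the threshold constant, the single-payload-per-file layout) is illustrative only, not the actual PooledMessageFileCache implementation:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch: the first MAPPED_THRESHOLD bytes of each payload file
// are memory-mapped, so payloads that "fit" under the threshold are served
// from RAM; larger payloads overflow to ordinary file I/O.
public class MappedPayloadFile {
    static final int MAPPED_THRESHOLD = 64 * 1024; // illustrative tuning knob

    private final RandomAccessFile raf;
    private final MappedByteBuffer mapped;

    MappedPayloadFile(File f) throws Exception {
        raf = new RandomAccessFile(f, "rw");
        // Mapping in READ_WRITE mode extends the file to the mapped size
        mapped = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, MAPPED_THRESHOLD);
    }

    void write(byte[] payload) throws Exception {
        if (payload.length <= MAPPED_THRESHOLD) {
            mapped.position(0);
            mapped.put(payload);          // stays in the mapped region: RAM speed
        } else {
            raf.seek(0);
            raf.write(payload);           // large payloads overflow to disk I/O
        }
    }

    byte[] read(int length) throws Exception {
        byte[] out = new byte[length];
        if (length <= MAPPED_THRESHOLD) {
            mapped.position(0);
            mapped.get(out);
        } else {
            raf.seek(0);
            raf.readFully(out);
        }
        return out;
    }

    void close() throws Exception {
        raf.close();
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("cache", ".dat");
        MappedPayloadFile cache = new MappedPayloadFile(f);
        cache.write("hello".getBytes("UTF-8"));
        System.out.println(new String(cache.read(5), "UTF-8")); // prints: hello
        cache.close();
    }
}
```

A RAM disk based cache removes even this threshold decision: every file lives on RAM, with overflow to a secondary disk handled at the cache level rather than per file.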
Many ESB systems use custom Domain Specific Languages (DSLs) and/or graphical models to specify mediation logic. At first sight, a new user may see these as a good feature, and would expect the "tools" around the DSL and/or the graphical models to be powerful and flexible for real enterprise development.
However, in reality DSLs and graphical models are not as powerful as programming languages, and they cannot be easily integrated with custom programming language code without writing classes that implement specific interfaces, deploying them as specific bundles, and so on. The inability to debug DSLs and graphical models is another serious drawback, especially during development and initial testing.
In contrast, the UltraESB was the first ESB to introduce mediation logic specified as Java fragments, Java classes, JSR 223 scripting language files or fragments, or Spring beans. The mediation API of the UltraESB is built around two main interfaces - Message and Mediation - and the complete user-level API and Javadocs are hosted at http://api.adroitlogic.org. As any third party library or any custom Java code can be invoked during mediation, the power and flexibility of a familiar programming language is at the user's command. Users can configure the UltraESB using any mainstream IDE such as IntelliJ IDEA, Eclipse or NetBeans, without being forced into a customized, and sometimes buggy or old, version of an Eclipse or NetBeans distribution. The mediation logic can be easily debugged and unit tested, including writing end-to-end unit tests, and executing these or running the complete UltraESB within your IDE.
As the Java/JSR 223 scripting language code and fragments are compiled once at load time, the user does not need to "compile, build, bundle and deploy" mediation logic or customizations unlike with other ESBs. Simply edit the text files containing the configuration, save and reload or restart.
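To give a flavor of what plain-Java mediation looks like, the stand-alone sketch below mimics the shape of a mediation fragment. The Message and SequenceFragment types here are simplified stand-ins written for this example; they are not the actual UltraESB Message/Mediation API (see http://api.adroitlogic.org for the real one):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the ideas behind the mediation API - the real
// Message interface is documented at http://api.adroitlogic.org
class Message {
    final Map<String, String> headers = new HashMap<>();
    String payload;
}

public class SequenceFragment {
    // A mediation "sequence" is just Java: any library can be called,
    // exceptions can be caught, and breakpoints can be set in an IDE
    public static void mediate(Message msg) {
        try {
            if (msg.payload != null && msg.payload.contains("priority")) {
                msg.headers.put("X-Route", "fast-lane");   // illustrative header
            } else {
                msg.headers.put("X-Route", "default");
            }
        } catch (RuntimeException e) {
            msg.headers.put("X-Route", "error");           // failures handled inline
        }
    }
}
```

Because the logic is ordinary Java, it can be unit tested by calling mediate() directly from a JUnit test, or stepped through with an ordinary IDE debugger.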
Each custom DSL of a vendor is a "vendor specific language" which a user must learn anew to configure the ESB. Although some auto-completion may be offered and a list of constructs may be documented, one may not fully understand how each construct will behave in reality. Usually such components or mediators are specified in XML configuration files, but these are not as flexible as a programming language the user is already familiar with.
For example, consider a component performing an XSLT transformation. First a user has to read some documentation about the component, understand its XML configuration structure and its required and optional attributes and elements, and finally specify it as configuration. Although IDE auto-completion support may be available, a user will not understand the conceptual behavior of the component by simply writing the configuration with auto-completion. Consider the failure cases - checked exceptions and run-time errors: will the component replace the current message with the transformation result, or leave the transformed output separately? Some vendor modules may handle errors internally and change the message in an unexpected way, while others may call into another mediation sequence or flow for error handling. Some extreme errors may throw the thread of execution into a totally unexpected state. For a user, this means a considerable amount of time spent reading non-standard documentation formats, examples and reference documentation. Furthermore, it can be impossible or very cumbersome to perform quite simple logic with "fixed" components or mediators. Consider the case where the XSLT file name must be computed by concatenating a transport header property with some postfix; the "fixed" mediator may expect a hard-coded XSLT file path and not support the concatenation.
In contrast, consider a user downloading a new third party Java library, reading its standard Javadoc-based API documentation and then using it within an IDE with auto-completion. We rarely need to look at reference documentation for an API method, as the return type, arguments and exceptions are clearly defined in the Javadocs in a universally understood and clear format. You are able to catch checked or unchecked exceptions, wrap an API invocation in a try-catch-finally, and thus write crisp and stable code.
Consider testing, unit testing and debugging a DSL. These are usually impossible, as DSLs operate at a higher level than the debuggable language level. With a third party Java API, however, any developer is familiar with setting breakpoints, evaluating run-time arguments and stepping through the code to debug the logic.
The work manager in the UltraESB is an in-memory implementation used for thread management. It avoids the disadvantages of bounded queues and zero-length queues in Java thread pools.
In Java thread pools with bounded queues, when a new task arrives and the number of running threads is less than the corePoolSize, a new thread is created even if other worker threads are idle. Tasks arriving after the corePoolSize has been reached are queued. New threads are created only after the queue is full, and the pool keeps creating threads up to the maxPoolSize as tasks arrive. Once the maxPoolSize is reached and the queue is full, a TaskRejectedException is thrown.
In thread pools with a zero-length queue, nothing is queued, and threads are created up to the maxPoolSize.
The main disadvantages of these defaults are:
If corePoolSize or more threads are running, the Executor always prefers queueing a request over adding a new thread, even though the maxPoolSize has not been exceeded. The executor starts creating new threads only after the queue is full.
If the queue size and the maxPoolSize both have finite bounds, new tasks are rejected once both the queue and the pool are saturated.
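The queue-first behavior can be observed directly with the JDK's ThreadPoolExecutor. In this sketch (illustrative values: core 1, max 4, queue capacity 10), three blocking tasks are submitted, yet the pool stays at a single thread, because the second and third tasks are queued instead of triggering growth towards maxPoolSize:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Demonstrates the queue-first behavior of the standard ThreadPoolExecutor:
// once corePoolSize threads exist, new tasks are queued while the queue has
// room, and the pool does not grow towards maxPoolSize.
public class QueueFirstDemo {

    public static int poolSizeAfterThreeTasks() throws Exception {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        for (int i = 0; i < 3; i++) {
            pool.execute(blocker);        // task 1 starts a thread; 2 and 3 are queued
        }
        Thread.sleep(200);                // let the pool settle
        int size = pool.getPoolSize();    // stays at 1: the queue still has room
        release.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("pool size with 3 pending tasks: " + poolSizeAfterThreeTasks());
    }
}
```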
The UltraESB work manager avoids these disadvantages of the default Java thread pools by using the ScaleFirstExecutorService.
UltraESB by default ships with SimpleQueueWorkManager which is an in-memory implementation that does not support message priorities or persistence. It uses ScaleFirstExecutorService which is a slight variant of Java 7 ThreadPoolExecutor.
The ScaleFirstExecutorService is a thread pool executor implementation that scales up before queueing, as opposed to the usual policy of the ThreadPoolExecutor. The rest of its functionality is equivalent to that of the Java 7 ThreadPoolExecutor. After reaching the specified maximum pool size, it starts queueing. If a queue length is provided, it queues up to that length, and when the queue length is exceeded a RejectedExecutionException is thrown. If no queue length is specified, an unbounded blocking queue is used, and in this case the thread pool will not reject any task.
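One well-known way to obtain scale-first semantics from the standard ThreadPoolExecutor is to give it a work queue that refuses offers while the pool can still grow, plus a rejection handler that falls back to the queue. This is only an illustrative reconstruction of the idea, not the actual ScaleFirstExecutorService source:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative scale-first pool: the queue rejects offers until the pool has
// grown to its maximum size, inverting the usual queue-first policy.
public class ScaleFirstSketch {

    public static ThreadPoolExecutor newScaleFirstPool(int core, int max, int queueLen) {
        AtomicReference<ThreadPoolExecutor> ref = new AtomicReference<>();
        // Refuse to queue while the pool can still add threads; this makes
        // ThreadPoolExecutor scale up to maxPoolSize before queueing
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(queueLen) {
            @Override public boolean offer(Runnable r) {
                ThreadPoolExecutor pool = ref.get();
                return pool.getPoolSize() >= pool.getMaximumPoolSize() && super.offer(r);
            }
        };
        // When execute() is "rejected", the pool is already at maxPoolSize,
        // so offer() above now delegates to the real queue; reject only if
        // the queue is full as well (a transient race is possible here - this
        // is a sketch, not production code)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(core, max, 60, TimeUnit.SECONDS, queue,
                (r, executor) -> {
                    if (!executor.getQueue().offer(r)) {
                        throw new RejectedExecutionException("pool and queue saturated");
                    }
                });
        ref.set(pool);
        return pool;
    }
}
```

With this pool, three blocking tasks submitted to a (core 1, max 3) configuration produce three threads immediately, where the default policy shown earlier would keep a single thread and queue the rest.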
Zero-copy proxying is a unique feature of the AdroitLogic UltraESB which allows extremely efficient proxying of service calls through the UltraESB with the least amount of overhead. Coupled with the support for Non-Blocking IO and memory-mapped files / RAM disks, a single UltraESB node can manage hundreds or thousands of concurrent clients using very few threads and limited resources.
Note: Fully optimal Zero-Copy support will operate best on a Linux OS running on real hardware and a properly setup network interface. However, the zero-copy support can be left enabled even on simple hardware, virtualized systems, Amazon EC2 or cloud environments, etc. and would not cause any harm even if not optimally tuned.
The UltraESB allows extremely efficient proxying of messages with the least possible overhead - including Zero-Copy proxying with Direct Memory Access (DMA) on supported hardware. This feature is best used on a Linux operating system with a kernel version of 2.6 or above, using the 'sendfile' system call of the Operating System. The sendfile system call allows a message received into a network buffer to be efficiently transferred out again without copying the data into heap memory. In addition to avoiding the use of user-space heap memory, the 'sendfile' system call also reduces the number of context switches, since the message passes through kernel memory alone. The UltraESB uses a cache of memory-mapped / RAM disk based files (see the previous section) that reside in kernel memory, as opposed to traditional buffers or programming language objects that reside in user-space memory. Thus the transfer of a message through the UltraESB takes place with a minimum number of context switches and better utilization of the CPU.
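In the JDK, the sendfile path is exposed through FileChannel.transferTo. The hedged sketch below copies a payload file to another channel; on Linux, when the target is a socket channel, the same call lets the kernel move the bytes without surfacing them in the Java heap (a file-to-file transfer is used here only to keep the example self-contained):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;

// FileChannel.transferTo is the JDK's doorway to sendfile(2): the bytes are
// moved by the kernel, not copied through a Java heap buffer. In the ESB the
// target would be the non-blocking socket of the backend service.
public class ZeroCopySend {

    public static long send(File payload, FileChannel target) throws Exception {
        try (FileChannel src = new FileInputStream(payload).getChannel()) {
            long position = 0, size = src.size();
            while (position < size) {
                // transferTo may move fewer bytes than requested; loop until done
                position += src.transferTo(position, size - position, target);
            }
            return position;
        }
    }

    public static void main(String[] args) throws Exception {
        File in = File.createTempFile("payload", ".bin");
        File out = File.createTempFile("copy", ".bin");
        Files.write(in.toPath(), new byte[]{1, 2, 3, 4, 5});
        try (FileChannel dst = new FileOutputStream(out).getChannel()) {
            send(in, dst);
        }
        System.out.println(Files.size(out.toPath())); // prints: 5
    }
}
```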
For true 'Zero-Copy' forwarding, the network card used must support gather operations, and the UltraESB should be deployed on bare-metal hardware without a layer of virtualization. This allows the network card to efficiently assemble TCP packets by combining data from different memory areas, without first having to build the complete message into a buffer. It is highly recommended that network card offloading capabilities be used only with good hardware and drivers, after comprehensive testing, as certain combinations of Operating System versions, drivers and hardware may cause problems.
Checking the output of the command "ethtool -k eth0" on a Linux system will show the offloading capabilities and configuration of the chosen network device; in this example, it will print the settings for 'eth0'. Please refer to standard Linux documentation on how to enable the offloading capabilities of your adapter - at a minimum, scatter-gather support should be enabled. TCP offloading will only benefit a wired network adapter correctly configured for its maximum performance (i.e. ensure that gigabit support, where available, is enabled and functioning correctly).
The following articles explain the advantages of Zero-Copy proxying extremely well.
Zero Copy I : User-Mode Perspective - Dragan Stancevic [http://www.linuxjournal.com/article/6345]
Efficient data transfer through zero copy : Zero copy, zero overhead - Sathish K. Palaniappan [http://www.ibm.com/developerworks/library/j-zerocopy/]
The traditional Java Servlet model processes each request on a separate thread. Since an ESB typically forwards each request it receives to another service, keeping a thread blocked until the response arrives would be an extreme waste of resources, and would lead the ESB to resource exhaustion. In addition, as soon as the total number of threads in use rises to over a hundred on a typical system, the thread context switching overhead causes a degradation of performance, in addition to limiting the number of open connections (i.e. sockets) to the number of possible threads.
The UltraESB uses the Non-Blocking IO (NIO) support of the Java VM, and supports SSL connections over NIO as well, using the Apache HttpComponents/HttpCore NIO library. This allows the UltraESB to keep thousands of sockets open and service requests with a few threads - most typically fewer than one or two hundred. As a 1:1 socket-to-thread pairing is not maintained, the thread of execution is immediately released back into the thread pool whenever processing requires a wait on an external service. Once the response is received, a thread from the pool picks up processing of the response.
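The NIO model can be reduced to a minimal sketch with the JDK's raw Selector API: a single thread accepting and echoing on any number of sockets. This is only an illustration of the model; the UltraESB itself builds on Apache HttpComponents/HttpCore NIO rather than hand-rolled selectors:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One selector thread services every connection, so sockets are not paired
// 1:1 with threads - the essence of the NIO model described above.
public class SelectorEcho implements Runnable {
    private final Selector selector;
    final int port;

    public SelectorEcho() throws Exception {
        selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));     // any free port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        port = server.socket().getLocalPort();
    }

    public void run() {
        try {
            while (selector.select() >= 0) {
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel c = ((ServerSocketChannel) key.channel()).accept();
                        c.configureBlocking(false);
                        c.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel c = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        if (c.read(buf) < 0) { c.close(); continue; }
                        buf.flip();
                        c.write(buf);              // echo back on the same thread
                    }
                }
            }
        } catch (Exception ignored) { }
    }

    public static void main(String[] args) throws Exception {
        SelectorEcho echo = new SelectorEcho();
        Thread t = new Thread(echo);
        t.setDaemon(true);
        t.start();
        System.out.println("echo listening on port " + echo.port);
    }
}
```

A production reactor adds OP_WRITE handling for partial writes, per-connection state, and a pool of worker threads for the actual mediation; the point here is only that one thread can multiplex many sockets.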