Supported Since: 17.01
An organization wishes to spread processing across two servers. One is a high-end server with very fast processing, and the other is a low-end server with relatively slow processing. However, the high-end server can only support a small number of concurrent requests, while the low-end server can support a large number. Ideally, the organization wants to return results to the end user with the lowest possible latency.
Since the organization wants the processed results to be returned in the least possible time, they should use the fast-processing server for as many requests as possible, while also making sure that it is not overloaded. They decided to use an ESB to fulfill these requirements.
The external application forwards processing requests to the HTTP web service exposed by the UltraESB, where they pass through a Concurrency Throttle. The throttle, which monitors the number of concurrent messages, allows only a configured number of messages through to the fast-processing endpoint; the rest are directed to the slow-processing endpoint.
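The throttle's allow/deny decision can be pictured as a counting semaphore sized to the fast server's capacity. The following standalone Java sketch (hypothetical class and method names; the real Concurrency Throttle is configured in Ultra Studio rather than hand-coded) illustrates the idea:

```java
import java.util.concurrent.Semaphore;

public class ConcurrencyThrottleSketch {

    static final String FAST = "http://localhost:9000/service/fastProcessor";
    static final String SLOW = "http://localhost:9000/service/slowProcessor";

    // Permits model the number of in-flight requests the fast server can handle.
    static final Semaphore permits = new Semaphore(10);

    // Allowed path when a permit is free; Denied path otherwise.
    // In the real flow the permit is returned once the fast server responds.
    static String route() {
        return permits.tryAcquire() ? FAST : SLOW;
    }

    public static void main(String[] args) {
        int fast = 0, slow = 0;
        // 15 requests arrive before any of them completes, so no permit is released.
        for (int i = 0; i < 15; i++) {
            if (route().equals(FAST)) fast++; else slow++;
        }
        System.out.println(fast + " routed fast, " + slow + " routed slow");
        // prints "10 routed fast, 5 routed slow"
    }
}
```

Note that this models only the burst case where requests pile up faster than they complete; under lighter load, permits are released as responses return and every request takes the fast path.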
To demonstrate this scenario, we will assume that the fast-processing endpoint supports only ten concurrent requests, and we will use mock backends for the fast-processing and slow-processing servers with the endpoints http://localhost:9000/service/fastProcessor and http://localhost:9000/service/slowProcessor respectively.
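If you want to try the flow without real backends, the two mock servers can be stubbed with the JDK's built-in HTTP server. This is just one way to do it (the 50 ms and 500 ms delays are arbitrary stand-ins for fast and slow processing; any HTTP-capable stub will work equally well):

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class MockBackends {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
        // Fast backend: short simulated processing time.
        server.createContext("/service/fastProcessor", ex -> respond(ex, "fast", 50));
        // Slow backend: longer simulated processing time.
        server.createContext("/service/slowProcessor", ex -> respond(ex, "slow", 500));
        server.start();
    }

    static void respond(HttpExchange ex, String label, long delayMs) {
        try {
            Thread.sleep(delayMs); // simulate processing latency
            byte[] body = ("processed by " + label).getBytes();
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        } catch (Exception e) {
            // ignore failures in this throwaway stub
        }
    }
}
```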
In order to implement the above use case, you must first select the following dependencies when creating a new Ultra project.
NIO HTTP Connector from the connector list
Throttle Processor and Message Logger from the processor list
|If you have already created a project, you can add the above dependencies via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry, and from the Connectors and Processors lists, select the above dependencies.|
To implement the above use case, first create an integration flow named “limiting_concurrent_requests”, and then add the required processing components by going through the following steps in order.
Add a NIO HTTP Ingress Connector from the Connectors → Ingress Connectors list to accept processing requests from the external application. The NIO HTTP Ingress Connector should be configured as shown below to expose a single web service on port 8280 under the "/service/throttling-proxy" service path.
Add a Concurrency Throttle processing element from the Processors → Generic list to throttle the number of concurrent messages in processing. It should be configured as shown below with a concurrency value of ten (set to ten here purely for demonstration purposes). Connect the Processor out port of the previously added NIO HTTP Ingress Connector to the Input of this processing element.
Add a Logger processing element from the Processors → Generic list and configure it as shown below. Note that the Logger element is not essential; it is used only to see which path the incoming processing requests take. Connect the Allowed out port of the previously added Concurrency Throttle to the Input of this Logger element.
Add a NIO HTTP Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to forward requests to the fast-processing server. Connect the Next out port of the Logger element to the Input of this egress connector. Further, connect the Response Processor out port of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.
Now, to complete the path starting from the Denied out port of the Concurrency Throttle, add another Logger processing element from the Processors → Generic list and configure it as shown below. As mentioned previously, the Logger element is not essential and is used only to see which path the incoming processing requests take. Connect the Denied out port of the previously added Concurrency Throttle to the Input of this Logger element.
Add another NIO HTTP Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to forward requests to the slow-processing server. Connect the Next out port of the previously added Logger element to the Input of this egress connector. Also connect the Response Processor out port of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.
The completed integration flow should look as shown below.
The configuration for each element is given below. The numbering corresponds to the numbers shown in the above diagram.
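Once the flow is deployed and the mock backends are running, the proxy can be exercised from the command line. The snippet below (assuming the port and service path configured above, and that curl is available) fires twenty requests in parallel, so that some of them exceed the ten-permit limit and are routed to the slow-processing endpoint; the Logger output shows which path each request took:

```shell
# Fire 20 concurrent requests at the throttling proxy; with only 10 permits,
# the overflow should be served by the slow-processing backend.
seq 20 | xargs -P 20 -I{} \
  curl -s -X POST -d 'payload-{}' http://localhost:8280/service/throttling-proxy
```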