Version: 17.01
Supported Since: 17.01
An organization wishes to spread processing across two servers. One is a high-end server with very fast processing, and the other is a low-end server with relatively slow processing. However, the high-end server can only support a small number of concurrent requests, whereas the low-end server can support a large number of concurrent requests. Ideally, the organization wants to provide results to the end user with the lowest possible latency.
Since the organization wants the processed results to be returned in the least possible time, they should try to use the fast processing server for all requests, while also making sure that the fast processing server is not overloaded. They decided to use an ESB to fulfill this requirement.
The external application forwards processing requests to the HTTP web service exposed by the ESB, where they go through a Concurrency Throttle. The throttle, which monitors the number of concurrent messages, only allows a configured number of messages through to the fast processing endpoint; the rest are directed to the slow processing endpoint.
To demonstrate this scenario, we will assume that the fast processing endpoint supports only ten concurrent requests, and we will mock both the fast processing server endpoint and the slow processing server endpoint with http://localhost:9000/service/SimpleStockQuoteService, which accepts POST requests.
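The steps further below use the Jetty server shipped with Ultra Studio to mock this back-end endpoint. Purely as an illustration, if you wanted to run a stand-alone mock outside Ultra Studio, a minimal sketch using the JDK's built-in com.sun.net.httpserver package could look like the following; the port, service path and response body here are assumptions chosen to match the mocked endpoint above, not part of the sample project.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MockStockQuoteService {
    public static void main(String[] args) throws Exception {
        // Listen on port 9000, the port assumed for both mocked back-end servers.
        HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
        server.createContext("/service/SimpleStockQuoteService", exchange -> {
            // Respond to every POST with a placeholder XML body.
            byte[] body = "<response>OK</response>".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Mock back-end listening on http://localhost:9000/service/SimpleStockQuoteService");
    }
}
```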
In order to implement the above use case, you must first select the following dependencies when creating a new Ultra project.
NIO HTTP Connector from the connector list
Throttle Processor and Message Logger from the processor list
If you have already created a project, you can add the above dependencies via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry, and from the Connectors list and Processors list, select the above dependencies.
To implement the above use case, first let's create an integration flow named "limiting_concurrent_requests" and then add the required processing components by going through the following steps in order.
Add a NIO HTTP Ingress Connector from the Connectors → Ingress Connectors list to accept processing requests from the external application. The NIO HTTP Ingress Connector should be configured as shown below to expose a single web service on port 8280 under the "/service/throttling-service" service path.
Parameter Name | Category | Value
Http port | Basic | 8280
Service path | Basic | /service/throttling-service
Add a Concurrency Throttle processing element from the Processors → Generic list to throttle the number of concurrent messages in processing. It should be configured as shown in the table below, with the Concurrency value set to ten (note that for demonstration purposes we have set it to ten); a conceptual sketch of its behaviour follows the table. Connect the Processor outport of the previously added NIO HTTP Ingress Connector to the Input of this processing element.
Parameter Name | Category | Value
Concurrency | Basic | 10
Consider All Branches | Basic | false (default value)
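Conceptually, the Concurrency Throttle behaves like a counting semaphore sized to the configured Concurrency value: while up to ten messages are in flight, new messages are emitted on the Allowed outport, and once the limit is reached further messages are emitted on the Denied outport until a slot frees up. The following Java sketch illustrates only that semantic; it is not the UltraESB implementation, and callFastServer/callSlowServer are hypothetical stand-ins for the two egress paths built in the steps below.

```java
import java.util.concurrent.Semaphore;

public class ConcurrencyThrottleSketch {
    // Matches the Concurrency value of 10 configured above.
    private final Semaphore permits = new Semaphore(10);

    public String route(String request) {
        if (permits.tryAcquire()) {          // Allowed outport: a slot is free
            try {
                return callFastServer(request);
            } finally {
                permits.release();           // Slot freed once the fast path completes
            }
        }
        return callSlowServer(request);      // Denied outport: limit reached
    }

    // Hypothetical stand-ins for the two egress connectors.
    private String callFastServer(String request) { return "fast:" + request; }
    private String callSlowServer(String request) { return "slow:" + request; }
}
```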
Add a Logger processing element from the Processors → Generic list and configure it as shown below. Note that the Logger element is not essential; it is used only to show which path the incoming processing requests take. Connect the Allowed outport of the previously added Concurrency Throttle to the Input of this Logger element.
Parameter Name | Category | Value
Log Template | Basic | Forwarding the request to fast processing server.
Log Level | Basic | INFO
Add a NIO HTTP Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to forward the request to the fast processing server. Connect the Next outport of the Logger element to the Input of this egress connector. Also connect the Response Processor outport of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.
Parameter Name | Category | Value
Destination Address Type | Basic | URL
Destination Host | Basic | localhost
Destination Port | Basic | 9000
Destination Path | Basic | /service/SimpleStockQuoteService
Now, to complete the path starting from the Denied outport of the Concurrency Throttle, add another Logger processing element from the Processors → Generic list and configure it as shown below. As mentioned previously, the Logger element is not essential; it is used only to show which path the incoming processing requests take. Connect the Denied outport of the previously added Concurrency Throttle to the Input of this Logger element.
Parameter Name | Category | Value
Log Template | Basic | Concurrency limit reached. Forwarding the request to slow processing server.
Log Level | Basic | INFO
Add another NIO HTTP Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to forward the request to the slow processing server. Connect the Next outport of the previously added Logger element to the Input of this egress connector. Also connect the Response Processor outport of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.
Parameter Name | Category | Value
Destination Host | Basic | localhost
Destination Path | Basic | /service/SimpleStockQuoteService
Destination Port | Basic | 9000
The completed integration flow should look like the one shown below.
Now you can run the Ultra Project and check the functionality of the integration flow.
Create an UltraESB Server run configuration and start it.
Start an Ultra Studio Jetty Server instance on port 9000, which will expose an HTTP endpoint on the service path /service/SimpleStockQuoteService.
Open the HTTP Client tool shipped with the Ultra Studio Toolbox. Set the URL to http://localhost:8280/service/throttling-service, select preset payload one, which is a SOAP payload, set the concurrency value to 20 and the iterations to one, and run it.
If you observe the logs, you can see the Concurrency Throttle in operation.
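If you prefer to drive the test from code rather than the HTTP Client tool, a rough equivalent of the 20-concurrent-request run could look like the sketch below; the SOAP body is a placeholder and not the preset payload shipped with the tool.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ThrottlingProxyLoadTest {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8280/service/throttling-service"))
                .header("Content-Type", "text/xml")
                // Placeholder body; the HTTP Client tool's preset SOAP payload can be used instead.
                .POST(HttpRequest.BodyPublishers.ofString(
                        "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"><soapenv:Body/></soapenv:Envelope>"))
                .build();

        // Fire 20 requests concurrently, mirroring the concurrency setting used in the HTTP Client tool.
        List<CompletableFuture<HttpResponse<String>>> futures = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            futures.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
        }
        futures.forEach(f -> System.out.println("Status: " + f.join().statusCode()));
    }
}
```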
When running the sample in the UltraESB-X distribution, you can optionally override the following properties according to your preference.
Refer to the Managing Project Properties documentation on how to override properties.
Hostname of the egress connector in the allowed path
Port of the egress connector in the allowed path
Hostname of the egress connector in the denied path
Port of the egress connector in the denied path