Version: 17.07
Supported Since: 17.01
An organization wishes to spread processing across two servers. One is a high-end server with very fast processing, and the other is a low-end server with relatively slow processing. However, the high-end server can only support a small number of concurrent requests, whereas the low-end server can support a large number of concurrent requests. Ideally, the organization wants to return results to the end user with the lowest possible latency.
Since the organization wants the processed results to be returned in the least possible time, they should try to use the fast processing server for all requests, while also making sure that the fast processing server is not overloaded. They decided to use an ESB to fulfill these requirements.
The external application will forward processing requests to the HTTP web service exposed by the UltraESB, and those requests will go through a Concurrency Throttle. The throttle, which monitors the number of concurrent messages, will only allow a configured number of messages through to the fast processing endpoint; the rest will be directed to the slow processing endpoint.
To demonstrate this scenario, we will assume that the fast processing endpoint supports only ten concurrent requests, and we will use mock backends for the fast and slow processing servers, with the endpoints http://localhost:9000/service/fastProcessor and http://localhost:9000/service/slowProcessor respectively.
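The throttle's routing decision can be sketched as a non-blocking semaphore check: each in-flight request to the fast endpoint holds a permit, and when no permit is free the request falls through to the slow endpoint. This is only an illustrative model of the concept, not UltraESB's actual implementation; the class and method names are invented for the example, and the endpoint URLs are the mock backends named above.

```python
import threading

FAST_ENDPOINT = "http://localhost:9000/service/fastProcessor"
SLOW_ENDPOINT = "http://localhost:9000/service/slowProcessor"

class ConcurrencyThrottle:
    """Routes a request to the fast endpoint while fewer than
    `concurrency` requests are in flight; otherwise to the slow one."""

    def __init__(self, concurrency=10):
        self._permits = threading.Semaphore(concurrency)

    def acquire_route(self):
        # Non-blocking acquire: succeeds only if a permit is free.
        if self._permits.acquire(blocking=False):
            return FAST_ENDPOINT
        return SLOW_ENDPOINT

    def release(self, route):
        # Only fast-path requests hold a permit to give back.
        if route == FAST_ENDPOINT:
            self._permits.release()
```

With the concurrency set to ten, the eleventh simultaneous call to `acquire_route` returns the slow endpoint until one of the earlier requests completes and releases its permit.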
UltraStudio Configuration
UltraESB-X Configuration
In order to implement the above use case, you must first select the following dependencies when creating a new Ultra project.
NIO HTTP Connector from the connector list
Throttle Processor and Message Logger from the processor list
If you have already created a project, you can add the above dependencies via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry, and from the Connectors list and Processors list, select the above dependencies.
To implement the above use case, first create an integration flow named “limiting_concurrent_requests” and then add the required processing components by going through the following steps in order.
Add a NIO HTTP Ingress Connector from the Connectors → Ingress Connectors list to accept processing requests from the external application. The NIO HTTP Ingress Connector should be configured as shown below to expose a single web service on port 8280 under the "/service/throttling-service" service path.
Add a Concurrency Throttle processing element from the Processors → Generic list to throttle the number of concurrent messages in processing. It should be configured as shown below with a concurrency value of ten (note that for demonstration purposes we have set it to ten). Connect the Processor out port of the previously added NIO HTTP Ingress Connector to the Input of this processing element.
Add a Logger processing element from the Processors → Generic list and configure it as shown below. Note that the logger element is not essential; it is used only to see which path the incoming processing requests take. Connect the Allowed out port of the previously added Concurrency Throttle to the Input of this Logger element.
Add a NIO HTTP Egress Connector from the Connectors → Egress Connectors list, and configure it as shown below to forward the request to the fast processing server. Connect the Next out port of the Logger element to the Input of this egress connector. Further, connect the Response Processor out port of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.
Now, to complete the path starting from the Denied out port of the Concurrency Throttle, add another Logger processing element from the Processors → Generic list and configure it as shown below. As mentioned previously, the logger element is not essential and is used only to see which path the incoming processing requests take. Connect the Denied out port of the previously added Concurrency Throttle to the Input of this Logger element.
Add another NIO HTTP Egress Connector from the Connectors → Egress Connectors list, and configure it as shown below to forward the request to the slow processing server. Connect the Next out port of the previously added Logger element to the Input of this egress connector. Also connect the Response Processor out port of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.
The completed integration flow should look like below.
The configuration for each element is given below. The numbering corresponds to the numbers shown in the above diagram.
1. NIO HTTP Ingress Connector
2. Concurrency Throttle
3. Logger Processor
4. NIO HTTP Egress Connector
5. Logger
6. NIO HTTP Egress Connector
1. NIO HTTP Ingress Connector
   - Port: 8280
   - Service Path: /service/throttling-service

2. Concurrency Throttle
   - Concurrency: 10
   - false (default value)

3. Logger
   - Log Message: Forwarding request to fast processing server.
   - Log Level: INFO

4. NIO HTTP Egress Connector
   - URL
   - Host: localhost
   - Port: 9000
   - Service Path: /service/fastProcessor

5. Logger
   - Log Message: Concurrency limit reached. Forwarding request to slow processing server.
   - Log Level: INFO

6. NIO HTTP Egress Connector
   - Host: localhost
   - Port: 9000
   - Service Path: /service/slowProcessor
Now you can run the Ultra Project and check the functionality of the integration flow. Create an UltraESB Server run configuration and start it. Note that this will start the mock backend servers as well.
When running the sample in the UltraESB-X distribution, you need to override the following properties in order for the sample to work. The properties file is located at $ULTRA_HOME/conf/projects/limiting-concurrent-requests/default.properties
Refer to the Managing Project Properties documentation on how to override properties.
- limiting-concurrent-requests-flow.throttle.concurrency : Maximum concurrency level that should be allowed (default value is 10)
- limiting-concurrent-requests-using-throttle.allowed-path-http-sender.host : Hostname of the egress connector in the allowed path (default value is localhost)
- limiting-concurrent-requests-using-throttle.allowed-path-http-sender.port : Port of the egress connector in the allowed path (default value is 9000)
- limiting-concurrent-requests-using-throttle.denied-path-http-sender.host : Hostname of the egress connector in the denied path (default value is localhost)
- limiting-concurrent-requests-using-throttle.denied-path-http-sender.port : Port of the egress connector in the denied path (default value is 9000)
- mock-processing-backends.mock-processing-servers.logger-fast-processor.logLevel : Log level of the mock fast processing backend (default value is INFO)
- mock-processing-backends.mock-processing-servers.logger-slow-server.logLevel : Log level of the mock slow processing backend (default value is INFO)
After that, navigate to the $ULTRA_HOME/bin directory. You can then run the UltraESB-X distribution with the following command to start the engine with this sample project deployed.
./ultraesbx.sh -sample limiting-concurrent-requests
Open the HTTP Client tool shipped with the Ultra Studio Toolbox. Set the URL to http://localhost:8280/service/throttling-service, and select the preset SOAP payload. Set the concurrency value to 20 and the iterations to one, then run it.
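As an alternative to the HTTP Client tool, the same load can be generated with a short script. The sketch below is illustrative: the URL matches the ingress connector configured earlier, but the payload is an arbitrary placeholder body rather than the tool's preset SOAP payload, and the function names are invented for the example.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8280/service/throttling-service"
PAYLOAD = b"<request>test</request>"  # placeholder body, not the tool's preset SOAP payload

def send_request(url):
    """POST the payload once and return the HTTP status code."""
    req = urllib.request.Request(url, data=PAYLOAD, method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

def run_load(url=URL, concurrency=20):
    """Fire `concurrency` simultaneous requests, mirroring the
    HTTP Client tool settings (concurrency 20, one iteration)."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(send_request, [url] * concurrency))

if __name__ == "__main__":
    print(run_load())
```

Because all twenty requests are submitted to a pool of twenty workers at once, they arrive at the ESB essentially simultaneously, which is what exercises the throttle.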
If you observe the logs, you can see exactly 10 log entries (matching the configured concurrency value of the Concurrency Throttle processor) with the message "Forwarding request to fast processing server", which demonstrates the Concurrency Throttle in operation.
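The expected 10/10 split can also be reproduced offline with a small simulation of the throttle semantics: twenty requests arrive at the same instant against a permit limit of ten, and each request holds its permit until every request has been routed. This is an illustrative model only; the function names and the barrier-based timing are invented for the example and are not part of UltraESB.

```python
import threading

def simulate(total_requests=20, concurrency_limit=10):
    """Model the throttle: each request tries a non-blocking permit grab
    and holds the permit until every request has been routed."""
    permits = threading.Semaphore(concurrency_limit)
    start = threading.Barrier(total_requests)       # all requests arrive together
    routed = threading.Barrier(total_requests + 1)  # +1 for the main thread
    routes, lock = [], threading.Lock()

    def request():
        start.wait()
        route = "fast" if permits.acquire(blocking=False) else "slow"
        with lock:
            routes.append(route)
        routed.wait()  # keep the permit until every request is routed
        if route == "fast":
            permits.release()

    threads = [threading.Thread(target=request) for _ in range(total_requests)]
    for t in threads:
        t.start()
    routed.wait()      # unblocks once every request has recorded its route
    for t in threads:
        t.join()
    return routes.count("fast"), routes.count("slow")
```

Calling `simulate()` with the defaults returns `(10, 10)`: exactly ten requests win a fast-path permit, and the remaining ten are denied and take the slow path, matching the log counts described above.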