Limiting Concurrent Requests

Version: 17.07

Supported Since: 17.01

Use Case Description

An organization wishes to spread processing across two servers. One is a high-end server with very fast processing, and the other is a low-end server with relatively slow processing. However, the high-end server can only handle a small number of concurrent requests, while the low-end server can handle a large number of concurrent requests. Ideally, the organization wants to return results to the end user with the lowest possible latency.

limiting concurrent requests overview

Proposed Solution

Since the organization wants the processed results to be returned in the least possible time, they should try to use the fast processing server for all requests, while also making sure that the fast processing server is not overloaded. They decided to use an ESB to fulfill these requirements.

The external application forwards processing requests to the HTTP web service exposed by the UltraESB, where they pass through a Concurrency Throttle. The throttle monitors the number of concurrent messages and allows only a configured number of messages through to the fast processing endpoint; the rest are directed to the slow processing endpoint.

To demonstrate this scenario, we will assume that the fast processing endpoint supports only ten concurrent requests, and we will use mock backends for the fast and slow processing servers with the endpoints http://localhost:9000/service/fastProcessor and http://localhost:9000/service/slowProcessor respectively.
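Conceptually, the Concurrency Throttle behaves like a counting semaphore: a request is sent along the fast path only if fewer than the configured number of requests are currently in flight, and is diverted to the slow path otherwise. The following minimal Java sketch (illustrative only, not UltraESB code; the class and method names are made up) shows that decision logic for the two mock endpoints above.

    import java.util.concurrent.Semaphore;

    // Illustrative sketch of the throttling decision; not part of the UltraESB runtime.
    public class ConcurrencyThrottleSketch {

        private static final int FAST_PATH_LIMIT = 10;            // configured Concurrency value
        private final Semaphore permits = new Semaphore(FAST_PATH_LIMIT);

        public String route(String request) {
            if (permits.tryAcquire()) {                            // a fast-path slot is free
                try {
                    return call("http://localhost:9000/service/fastProcessor", request);
                } finally {
                    permits.release();                             // release the slot once the response is back
                }
            }
            // limit reached: divert to the slow processing server instead of queuing or rejecting
            return call("http://localhost:9000/service/slowProcessor", request);
        }

        private String call(String url, String request) {
            return "response from " + url;                         // placeholder for an actual HTTP call
        }
    }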

UltraStudio Configuration

UltraESB-X Configuration

Implementation of the Solution

Prerequisite

In order to implement the above use case, you must first select the following dependencies when creating a new Ultra project.

  • NIO HTTP Connector from the connector list

  • Throttle Processor and Message Logger from the processor list

If you have already created a project, you can add the above dependencies via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry, and from the Connectors list and Processors list, select the above dependencies.
Implementation

To implement the above use case, first let’s create our integration flow named “limiting_concurrent_requests” and then add the required processing components by going through the following steps in order.

  1. Add a NIO HTTP Ingress Connector from the Connectors → Ingress Connectors list to accept processing requests from the external application. The NIO HTTP Ingress Connector should be configured as shown below to expose a single web service on port 8280 under the "/service/throttling-service" service path.

  2. Add a Concurrency Throttle processing element from the Processors → Generic list to throttle the number of concurrent messages in processing. It should be configured as shown below with the Concurrency value set to ten (this value is chosen purely for demonstration purposes). Connect the Processor out port of the previously added NIO HTTP Ingress Connector to the Input of this processing element.

  3. Add a Logger processing element from the Processors → Generic list and configure it as shown below. Note that the Logger element is not essential; it is used only to show which path the incoming processing requests take. Connect the Allowed out port of the previously added Concurrency Throttle to the Input of this Logger element.

  4. Add a NIO HTTP Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to forward the request to the fast processing server. Connect the Next out port of the Logger element to the Input of this egress connector. Further, connect the Response Processor out port of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.

  5. Now, to complete the path starting from the Denied out port of the Concurrency Throttle, add another Logger processing element from the Processors → Generic list and configure it as shown below. As mentioned previously, the Logger element is not essential; it is used only to show which path the incoming processing requests take. Connect the Denied out port of the previously added Concurrency Throttle to the Input of this Logger element.

  6. Add another NIO HTTP Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to forward the request to the slow processing server. Connect the Next out port of the previously added Logger element to the Input of this egress connector. Also connect the Response Processor out port of this egress connector back to the Input of the NIO HTTP Ingress Connector to send back the received response.

The completed integration flow should look like below.

limiting concurrent requests flow

The configuration for each element is given below. The numbering corresponds to the numbers shown in the above diagram.

Design View

Text View


1. NIO HTTP Ingress Connector

limiting concurrent requests component 1

2. Concurrency Throttle

limiting concurrent requests component 2

3. Logger Processor

limiting concurrent requests component 3

4. NIO HTTP Egress Connector

limiting concurrent requests component 4

5. Logger

limiting concurrent requests component 5

6. NIO HTTP Egress Connector

limiting concurrent requests component 6

1. NIO HTTP Ingress Connector

Http port: 8280
Service path: /service/throttling-service

2. Concurrency Throttle

Concurrency: 10
Consider All Branches: false (default value)

3. Logger

Log Template: Forwarding request to fast processing server.
Log Level: INFO

4. NIO HTTP Egress Connector

Destination Address type: URL
Destination Host: localhost
Destination Port: 9000
Destination Path: /service/fastProcessor

5. Logger

Log Template: Concurrency limit reached. Forwarding request to slow processing server.
Log Level: INFO

6. NIO HTTP Egress Connector

Destination Host: localhost
Destination Port: 9000
Destination Path: /service/slowProcessor


Now you can run the Ultra Project and check the functionality of the integration flow. Create an UltraESB Server run configuration and start it. Note that this will start the mock backend servers as well.
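If you want to experiment outside the sample project, where the mock backends are not started for you, a stand-in pair of backends listening on the same port and paths can be run with the JDK's built-in HTTP server. This is only a rough substitute for the sample's mock services; the response body and delays below are made up.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;

    // Stand-in backends on port 9000; the sample project normally starts its own mocks.
    public class MockBackends {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
            server.createContext("/service/fastProcessor", ex -> respond(ex, 50));    // fast server: short delay
            server.createContext("/service/slowProcessor", ex -> respond(ex, 2000));  // slow server: long delay
            server.setExecutor(Executors.newCachedThreadPool());                      // serve requests concurrently
            server.start();
        }

        private static void respond(HttpExchange ex, long delayMillis) throws IOException {
            try {
                Thread.sleep(delayMillis);                                            // simulate processing time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "<response>done</response>".getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        }
    }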

Property Configuration

When running the sample in the UltraESB-X distribution, you need to override the following properties in order for the sample to work. The properties file is located at $ULTRA_HOME/conf/projects/limiting-concurrent-requests/default.properties

Refer to Managing Project Properties documentation on how to override properties.

limiting-concurrent-requests-flow.throttle.concurrency

Maximum concurrency level that should be allowed (Default value is 10)

limiting-concurrent-requests-using-throttle.allowed-path-http-sender.host

Hostname of the egress connector in allowed path (Default value is localhost)

limiting-concurrent-requests-using-throttle.allowed-path-http-sender.port

Port of the egress connector in allowed path (Default value is 9000)

limiting-concurrent-requests-using-throttle.denied-path-http-sender.host

Hostname of the egress connector in denied path (Default value is localhost)

limiting-concurrent-requests-using-throttle.denied-path-http-sender.port

Port of the egress connector in denied path (Default value is 9000)

mock-processing-backends.mock-processing-servers.logger-fast-processor.logLevel

Log level of mock fast processing backend (Default value is INFO)

mock-processing-backends.mock-processing-servers.logger-slow-server.logLevel

Log level of mock slow processing backend (Default value is INFO)
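For example, an override of the throttle limit and the allowed-path backend in default.properties could look like the following (the property names are the ones listed above; the host name and values are only examples):

    limiting-concurrent-requests-flow.throttle.concurrency=20
    limiting-concurrent-requests-using-throttle.allowed-path-http-sender.host=fast-backend.example.com
    limiting-concurrent-requests-using-throttle.allowed-path-http-sender.port=9000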

After that, navigate to the $ULTRA_HOME/bin directory. You can then run the UltraESB-X distribution with the following command to start the engine with this sample project deployed.

./ultraesbx.sh -sample limiting-concurrent-requests

Testing the Integration Project

  1. Open the HTTP Client tool shipped with the Ultra Studio Toolbox. Set the URL to http://localhost:8280/service/throttling-service and select preset payload one, which is a SOAP payload. Set the concurrency value to 20 and the number of iterations to one, then run it.

  2. If you observe the logs, you can see exactly 10 (the configured Concurrency value of the Concurrency Throttle processor) log entries with the message Forwarding request to fast processing server, which demonstrates the Concurrency Throttle in operation.
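If you prefer to drive the same test from code instead of the HTTP Client tool, a rough equivalent is to fire 20 concurrent requests at the proxy endpoint and then inspect the logs. The sketch below uses only the JDK's java.net.http client; the request body is a placeholder, not the preset SOAP payload shipped with the toolbox.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class ThrottleLoadTest {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8280/service/throttling-service"))
                    .header("Content-Type", "text/xml")
                    // placeholder body; replace with the SOAP payload used in your test
                    .POST(HttpRequest.BodyPublishers.ofString("<request>sample</request>"))
                    .build();

            List<CompletableFuture<HttpResponse<String>>> inFlight = new ArrayList<>();
            for (int i = 0; i < 20; i++) {                         // 20 concurrent requests, as in step 1 above
                inFlight.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
            }
            inFlight.forEach(f -> System.out.println("Status: " + f.join().statusCode()));
        }
    }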
