Response Caching

Version: 17.07

Supported Since: 17.07

Use case description

TravelSmart is one of the world’s largest online travel service providers, handling millions of interactions daily. Users can search the TravelSmart database with custom queries. When a user submits a request at the HTTP web service interface, the backend services query the database and return the requested results. Since TravelSmart has a very large database and thousands of online users, it needs an efficient way to respond to user requests.

Proposed Solution

Since TravelSmart needs to handle millions of requests daily, they decided to introduce an ESB as the bridge between their web service interface and the backend database service.

response caching travel smart arch

Each request coming from the web interface will go through UltraESB-X, which is now responsible for handling queries efficiently. UltraESB-X will cache each successful response under a hash key generated from the incoming payload. When the same request arrives a second time, instead of querying the TravelSmart database, the ESB will send back the previously cached response.
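Conceptually, the caching behaviour can be sketched in plain Java as below. This is an illustration of the idea only, not the internals of the UltraESB-X Cache Processor, and the class and method names are hypothetical: the incoming payload is digested, the digest is used as the cache key, and cached entries expire after a configured period.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.time.Duration;
    import java.time.Instant;
    import java.util.HexFormat;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Conceptual sketch only - NOT the UltraESB-X cache implementation.
    public class ResponseCacheSketch {

        private record Entry(String response, Instant expiresAt) {}

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();
        // Illustrative TTL; the Cache Processor's "Expiry Time" setting plays this role.
        private final Duration ttl = Duration.ofMinutes(2);

        // SHA-256 digest of the incoming payload, used as the cache key.
        private String keyOf(String payload) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(payload.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        }

        // Returns the cached response on a hit; otherwise calls the backend and caches the result.
        public String handle(String payload, Function<String, String> backend) throws Exception {
            String key = keyOf(payload);
            Entry hit = cache.get(key);
            if (hit != null && Instant.now().isBefore(hit.expiresAt())) {
                return hit.response();                        // cache hit: the backend is not queried
            }
            String response = backend.apply(payload);         // cache miss: query the backend
            cache.put(key, new Entry(response, Instant.now().plus(ttl)));
            return response;
        }
    }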

For this scenario, assume that the TravelSmart database is exposed over the URL http://localhost:9000/service/db, which accepts HTTP POST requests with JSON payloads and responds with a 200 (OK) status code if the request is successful, or 400 (Bad Request) if the request is invalid.
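The sample does not prescribe a particular query schema; a purely illustrative search payload (the field names here are hypothetical) could look like the following.

    {"origin": "CMB", "destination": "SIN", "departureDate": "2017-08-01"}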

UltraStudio Configuration

Implementation of the Solution

Prerequisites

In order to implement the above use case, you must first select the following dependencies when creating a new Ultra project.

  • HTTP NIO Connector from the connector list

  • Cache Processor from the processor list

If you have already created a project, you can add the above dependencies via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry, and select the above dependencies from the Connectors list and the Processors list.

Implementation

To implement the above use case, first create an integration flow named “travel-smart-database-caching” and then add the required processing components by going through the following steps in order.

  1. Add an HTTP NIO Ingress Connector from the Connectors → Ingress Connectors list to accept the JSON query requests from the web interface. The HTTP NIO Ingress Connector should be configured as shown below to expose a single web service on port 8280 under the /service/db-service service path.

  2. Add a Cache Processor processing element from the Processors → Cache list to act as the response cache, and configure it as shown below. Connect the Processor out port of the previously added HTTP NIO Ingress Connector to the Input of this processing element. Then connect the Cache hit out port of the Cache Processor to the Input of the HTTP NIO Ingress Connector to send back the cached response on a cache hit.

  3. Finally, add an HTTP NIO Egress Connector from the Connectors → Egress Connectors list and configure it as shown below to send cache misses to TravelSmart’s DB service endpoint. Then connect the Cache miss out port of the Cache Processor to the Input of this egress connector. Also connect the Response Processor out port of this egress connector back to the Input of the HTTP NIO Ingress Connector to send back the received response.

The completed integration flow should look like the following.

response caching travel smart flow

The configuration for each element is given below. The numbering corresponds to the numbers shown in the above diagram.

Design View

1. HTTP NIO Ingress Connector

response caching component 1

2. Cache Processor

response caching component 2

3. HTTP NIO Egress Connector

response caching component 3

Text View

1. HTTP NIO Ingress Connector

HTTP Port: 8280
Service Path: /service/db-service

2. Cache Processor

Heap Size: 100
Expiry Time: 2
Digest Algorithm: SHA256

3. HTTP NIO Egress Connector

Destination Address Type: URL
Destination Host: localhost
Destination Port: 9000
Service Path: /service/db

Now you can run the Ultra Project and check the functionality of the integration flow. Create an UltraESB Server run configuration and start it. Note that this will start the mock backend servers as well.

UltraESB-X Configuration

Property Configuration

When running the sample in the UltraESB-X distribution, you need to override the following properties in order for the sample to work. The properties file is located at $ULTRA_HOME/conf/projects/response-caching/default.properties

Refer to Managing Project Properties documentation on how to override properties.

response-caching-flow.http_sender.host

The host of the backend HTTP service (Default value is localhost)

response-caching-flow.http_sender.servicePath

The service path of the backend HTTP service (Default value is /service/db)

response-caching-flow.http_sender.port

The port of the backend HTTP service (Default value is 9000)
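As an illustration, the overrides in default.properties could look like the following if the backend DB service were reachable on a different host and port. The host and port values shown here are placeholders, not part of the sample.

    response-caching-flow.http_sender.host=db.travelsmart.example
    response-caching-flow.http_sender.port=8080
    response-caching-flow.http_sender.servicePath=/service/db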

After that, navigate to the $ULTRA_HOME/bin directory. Then you can run the UltraESB-X distribution with the following command to start the engine with this sample project deployed.

./ultraesbx.sh -sample response-caching

Testing the Integration Project

  1. Start an HTTP server instance on port 9000, which exposes an endpoint on the service path /service/db. This endpoint should accept POST requests with JSON payloads and return the appropriate values, responding with a 200 status code when the request is handled successfully.

If you are unable to create such a server, the XPR bundle for this sample includes an integration flow which mocks this behaviour (see /src/main/conf/travel-smart-db-system/mock-travel-smart-db-service.xcml). You can add that integration flow either to this same project or to a separate project and deploy it. Note that a new INFO log will be printed whenever a request reaches the backend. A minimal standalone mock server is also sketched after this list.

  2. Send an HTTP request containing a JSON payload of the above format to http://localhost:8280/service/db-service. (You can use the HTTP Client shipped with the Ultra Studio Toolbox for this purpose, or script the check as sketched below.) The first request will go to the backend and its response will be cached. When the same request is made a second time, instead of going to the backend, the flow will return the previously cached value.
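If you would rather not deploy the bundled mock flow for step 1, a minimal mock backend can be written in plain Java using the JDK's built-in HTTP server. This is only a rough stand-in under the assumptions stated in step 1 (POST on /service/db, 200 on success); the response body is made up for illustration.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class MockTravelSmartDb {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
            server.createContext("/service/db", exchange -> {
                // Log every request that actually reaches the backend, so cached (unlogged) calls stand out.
                System.out.println("Backend received " + exchange.getRequestMethod() + " " + exchange.getRequestURI());
                if ("POST".equals(exchange.getRequestMethod())) {
                    byte[] body = "{\"results\": []}".getBytes(StandardCharsets.UTF_8);
                    exchange.getResponseHeaders().add("Content-Type", "application/json");
                    exchange.sendResponseHeaders(200, body.length);   // 200 (OK) for an accepted query
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(body);
                    }
                } else {
                    exchange.sendResponseHeaders(400, -1);            // 400 (Bad Request) otherwise
                }
            });
            server.start();
            System.out.println("Mock TravelSmart DB service listening on http://localhost:9000/service/db");
        }
    }

For step 2, the check can also be scripted instead of using the bundled HTTP Client. The following sketch (plain Java 11+, reusing the same illustrative payload as earlier) sends an identical request twice; only the first call should produce a request log at the backend, since the second is answered from the cache.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CacheTestClient {
        public static void main(String[] args) throws Exception {
            // Illustrative payload; use whatever JSON your backend (or the mock) expects.
            String payload = "{\"origin\": \"CMB\", \"destination\": \"SIN\", \"departureDate\": \"2017-08-01\"}";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8280/service/db-service"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            // First call: cache miss, the request reaches the backend and the response is cached.
            HttpResponse<String> first = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("First call:  " + first.statusCode() + " " + first.body());

            // Second call with the identical payload: served from the cache.
            HttpResponse<String> second = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Second call: " + second.statusCode() + " " + second.body());
        }
    }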
