DB Processor

Version: 17.07

Supported Since: 17.01

What is DB Processor?

The database processing element can be used for any kind of database operation, including transactions. The SQL query is pre-configured, and its parameters can be passed through a configuration file. Data to insert into the database can be extracted from four sources:

  • Message payload

  • Message header

  • Scope variables

  • Values hard coded to the mapping file

The DB processing element can be used with any relational SQL database system, such as Oracle, SQL Server, and MySQL.

In order to use this processing element, you must first select the Database Operations dependency from the processor list when you are creating an empty Ultra project. If you have already created a project, you can add this dependency via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry and, from the Processors list, select the Database Operations dependency.

Out Ports of the database processing element

Next

Next processing element to connect.

On Exception

The message will be sent to this out port if an error occurs while performing the database operation.

Out Ports of the database transaction scope element

In

First processing element of the transaction scope to connect.

On Exception

The message will be sent to this out port if an error occurs while starting the transaction.

Parameters of the database processing element

Data Source

Basic

Resource bean for the database connection to be used by the connector.

Mapping file

Basic

File path to the JSON configuration mapping file.

SQL query

Basic

SQL query with parameters. Example: insert into table_name (column_name_1, column_name_2, column_name_3, column_name_4) values (@{parameter_1}, @{parameter_2}, @{parameter_3}, @{parameter_4})

Parameters of the database transaction scope element

Transaction Manager

Basic

DataSourceTransactionManager resource bean to be used for the transaction.

Sample Use Cases

Prerequisite

In order to use the DB processing element, you must first select the Database Operations dependency from the processors list when you are creating an empty Ultra project. If you have already created a project, you can add this dependency via the Component Registry. From the Tools menu, select Ultra Studio → Component Registry and, from the processors list, select the Database Operations dependency.

Then you have to configure the resources needed for the database connection in the project.xpml. You can add the data source resource using the "Custom Template" button of the project.xpml designer. If you are going to handle a database operation with transactions, choose "Datasource with Transactions" instead.

Image of the datasource bean
Setting up the processing element

Let’s create an integration flow named “SampleFlow” and add a DB Processor to it. Then click the gear icon that appears on the processing element icon to open its settings, and set the data source for the database to read the data from. This data source was added to the project.xpml in the prerequisite section above.

Here the SQL query can be an INSERT, UPDATE, DELETE, or SELECT query. It can be a simple SQL query or an advanced one with nested queries. Parameters are passed to the query using the @{parameterName} format.
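For instance, an advanced query with a nested sub-query could be sketched as follows. The table and column names here are illustrative only and are not part of the sample project:

```sql
-- Illustrative sketch: the inserted category_id is resolved by a nested SELECT,
-- while @{customer_id}, @{category_name} and @{amount} are filled in from the mapping file.
INSERT INTO orders (customer_id, category_id, amount)
VALUES (
  @{customer_id},
  (SELECT id FROM categories WHERE name = @{category_name}),
  @{amount}
)
```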

Now we have configured a data source for the database and a SQL query with parameters. Next, we have to set up the configuration that extracts the values used to replace the parameters. This is done using a mapping file, in which we declare where each parameter value should be extracted from.

{
  "messageType": "XML",
  "parameters": [
    {
      "name": "parameter_1",
      "type": "PATH",
      "path": "/item/element_name"
    },
    {
      "name": "parameter_2",
      "type": "HEADER",
      "headerName": "headerName",
      "className": "java.lang.String"
    },
    {
      "name": "parameter_3",
      "type": "VARIABLE",
      "variableName": "variableName",
      "className": "java.lang.String",
      "propagateToParent": true
    },
    {
      "name": "parameter_4",
      "type": "VALUE",
      "value": "value_1"
    },
    {
      "name": "parameter_5",
      "type": "BLOB"
    },
    {
      "name": "parameter_6",
      "type": "CLOB"
    }
  ]
}

The "BLOB" and "CLOB" parameter types can be used to save the full payload to the database as a BLOB or CLOB.

A simple database insert with DB Processor

Let’s configure a database processing element to insert a row into a database table with four columns. For clarity, let’s extract data from all four possible sources.

Suppose there is a database table named table_1 with four columns: column_1, column_2, column_3, and column_4. The sample integration flow will then be as follows.

Image of the flow

Sample configuration of the database processing element for the above flow will be as follows.

Image of the configuration
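For the table described above, the configured SQL query could be sketched as follows; the actual query depends on your schema, with each @{parameter_N} resolved via the mapping file:

```sql
INSERT INTO table_1 (column_1, column_2, column_3, column_4)
VALUES (@{parameter_1}, @{parameter_2}, @{parameter_3}, @{parameter_4})
```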
Database read with DB Processor

Let’s configure a database processing element to read a row from a database table.

Suppose there is a table named table_1 with two columns: column_1 and column_2. The sample integration flow will then be as follows.

Image of the flow

Sample configuration of the database processing element for the above flow will be as follows.

Image of the configuration
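A matching SQL query for this read could be sketched as follows, assuming here that the row is selected by its first column:

```sql
SELECT column_1, column_2
FROM table_1
WHERE column_1 = @{parameter_1}
```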

The database processing element returns a message with a payload in DBResultsSetFormat. A custom processing element has been used here to print the returned message to the console.

An advanced database insert with transactions

Let’s configure an integration flow to perform a transactional database operation. Here we insert data into a table, obtain the id of the last inserted row, and then insert data into another table using the id obtained from the previous database operation.

Suppose there is a database table named people with columns id, name, and age. The second table is named students, with columns id, school, and grade. Here the id of the students table is a foreign key referencing the people table.

Image of the flow

There are several processing elements and connectors in this flow, but the actual operation is not very complex. Let’s walk through it. The database transaction scope start element [1] starts the transaction scope; all database operations within the scope are committed if and only if every operation in the scope succeeds. The clone message processing element [2] takes a clone of the message, sending one copy as the response to the NIO HTTP ingress connector and the other copy to the database processor. The database processing element [3] performs the first insert, into the people table. The payload attacher [4] saves the payload as an attachment to the message, since it is needed later for the second database insert. The next database processing element [5] reads the last inserted row from the people table and returns the result in DBResultsSetFormat. A custom processing element [6] then extracts the id from the DBResultsSetFormat and sets it as a header of the message. The last database processing element [7] performs the second insert, into the students table, using the data read from the payload and the id read from the message header. Finally, the database transaction scope end element closes the transaction scope and commits the changes to the database once all operations are complete. The transaction will be rolled back on any kind of failure within the transaction scope of the flow.
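The three database operations inside the transaction scope could be sketched with queries like the following. This is only a sketch: the "last inserted row" read uses MySQL-style syntax, and the parameter names are illustrative:

```sql
-- [3] First insert, into the people table
INSERT INTO people (name, age) VALUES (@{name}, @{age});

-- [5] Read the last inserted row from the people table (MySQL-style)
SELECT id FROM people ORDER BY id DESC LIMIT 1;

-- [7] Second insert, into the students table, using the id read above
INSERT INTO students (id, school, grade) VALUES (@{id}, @{school}, @{grade});
```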
