Thursday, 3 August 2017

When you should consider using WSO2 ESB !!!

Over time, business operations and processes grow at a rapid rate, which requires organizations to focus more on integrating different applications and on reusing services as much as possible for maintainability.

The WSO2 ESB (Enterprise Service Bus) seamlessly integrates applications, services, and processes across platforms. Put simply, an ESB is a collection of enterprise architecture design patterns catered through one single product.

Let's see when you should consider using WSO2 ESB for your business:

1. You have a few applications/services working independently and now you need to integrate them

2. When you want to deal with multiple message types and media types

3. When you want to connect to and consume services using multiple communication protocols (e.g., JMS, WebSockets, FIX)

4. When you want to implement Enterprise Integration scenarios such as routing messages to a suitable back-end or aggregating the responses coming from the back-ends

5. When you want to expose your applications as a service or API to other applications

6. When you want to augment the security of your applications

Likewise, there are many more scenarios where WSO2 ESB can cater to your integration requirements.

To get more information about WSO2 ESB please refer -

Wednesday, 19 July 2017

Working with WSO2 Carbon Admin Services

WSO2 products are managed internally using defined SOAP web services named admin services. This blog describes how to call the admin services and perform operations without using the Management Console.

Note - I will be using WSO2 Enterprise Integrator to demonstrate this.

Let's look at how to access the admin services in WSO2 products. By default, the admin services are hidden from the user. To enable the admin services:

1. Go to <EI_HOME>/conf/carbon.xml and enable the admin services as follows.
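The relevant property in carbon.xml hides the admin service WSDLs by default; setting it to false exposes them:

```xml
<!-- Set to false to expose the admin service WSDLs -->
<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>
```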


2. Now start the EI server with the OSGi console enabled by passing the -DosgiConsole system property to the server startup script in <EI_HOME>/bin.

When the server has started, press 'Enter' and you will be directed to the OSGi console.

3. To discover the available admin services, type 'listAdminServices' in the OSGi console. This will list the available admin services along with the URLs used to access them.

Access Admin Service via SOAP UI

4. You can access any of the admin services via the service URL listed in the above step.

I will demonstrate how to access the functionality supported by the ApplicationAdmin service. This service supports operations such as listing the available applications, getting application details, deleting an application, etc.

5. Start SOAP UI and create a SOAP project using the following WSDL.
6. If you want to list all the available applications in the EI server, open the SOAP request associated with listAllApplications and provide the HTTP basic authentication credentials of the EI server (specify the username and password of the EI server).

Similarly you can access any available admin service via SOAP UI with HTTP basic authentication headers.
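As an illustration, a listAllApplications request body would look roughly like the following. Note that the namespace shown here is an assumption for illustration; check the WSDL generated by your server for the exact namespace and operation parameters:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:xsd="http://org.wso2.carbon.application.mgt.xsd">
   <soapenv:Body>
      <!-- Operation with no parameters: lists all deployed applications -->
      <xsd:listAllApplications/>
   </soapenv:Body>
</soapenv:Envelope>
```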

Reference -

Tuesday, 4 July 2017

Implementing Aggregator Pattern using Ballerina

Introduction to Aggregator Pattern

Aggregator is one of the basic patterns defined in SOA patterns and EIP patterns, and it can be used to build more complex scenarios.

According to the EIP patterns, “The Aggregator is a special Filter that receives a stream of messages and identifies messages that are correlated. Once a complete set of messages has been received (more on how to decide when a set is 'complete' below), the Aggregator collects information from each correlated message and publishes a single, aggregated message to the output channel for further processing” [1]
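The definition above can be sketched as a small correlating buffer, where "complete" simply means a fixed number of messages have arrived for the same correlation ID. This is a framework-free illustration; the class and method names are made up:

```python
from collections import defaultdict

class Aggregator:
    """Minimal EIP-style aggregator: buffers correlated messages and
    emits one combined set once the set is complete."""

    def __init__(self, expected_count):
        self.expected_count = expected_count  # completeness condition
        self.buffers = defaultdict(list)      # correlation id -> messages so far

    def on_message(self, correlation_id, payload):
        """Collect one message; return the aggregated set when complete, else None."""
        self.buffers[correlation_id].append(payload)
        if len(self.buffers[correlation_id]) == self.expected_count:
            return self.buffers.pop(correlation_id)  # publish and clear the buffer
        return None
```

Here completeness is a simple count; real aggregators may instead use timeouts or content-based conditions.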

Use Case

Let’s assume a salesperson wants to get a customer's Personal Information, Contact Information and Purchasing Behavior for a given customer ID through the Customer Relationship Management (CRM) system for an upcoming direct marketing campaign. In a real-world scenario, the CRM system needs to call multiple backend services to get the required information and aggregate the responses coming from the backend systems to provide the information requested by the salesperson.

The system will send a request message with the customer ID to retrieve the required information from the following systems.
  • Send a request to "Customer Info Service" to get customer's personal information
  • Send a request to "Contact Info Service" to get customer's contact Information
  • Send a request to "Purchasing Behavior Service" to get the purchasing details of the customer

Implementation Description

The following backend services will provide the requested information based on the customer ID provided.
  •     ContactInfo.bal
  •     CustomerInfo.bal
  •     PurchasingInfo.bal
An intermediate service (AggregatorService) will collect the responses coming from the backend services and combine them into the response returned to the salesperson.
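Independent of Ballerina, the aggregation step itself is just a merge of three payloads into one. A sketch in Python, with made-up payload values following the field names of the use case:

```python
def aggregate(personal, contact, purchasing):
    """Combine the three backend responses into the single response
    returned to the salesperson."""
    return {
        "CustomerDetailsResponse": {
            "PersonalDetails": personal["PersonalDetails"],
            "ContactDetails": contact["ContactDetails"],
            "PurchasingDetails": purchasing["PurchasingDetails"],
        }
    }

# Example payloads as the backends might return them (illustrative only)
personal = {"PersonalDetails": {"Name": "Anne Stepson", "Age": "50"}}
contact = {"ContactDetails": {"Email": "anne@example.com"}}
purchasing = {"PurchasingDetails": {"LastPurchase": "2017-06-01"}}
combined = aggregate(personal, contact, purchasing)
```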

Let's Code with Ballerina

First, go to the Ballerina website and download the latest Ballerina distribution.

Note - I have used Ballerina version 0.89 to demonstrate this use case.

Start Ballerina Composer

Ballerina Composer is a visual editor tool that provides the capability to write or draw your integration scenario.

To start the composer, go to <Ballerina_Home>/bin and execute the following command, depending on your environment.

Linux Environment - ./composer
Windows Environment  - composer.bat

Implementing the backend services

To implement the above use case, let's create the required backend services: Customer Info Service, Contact Info Service and Purchasing Behavior Service.

Customer Information Service
I have created a service named “CustomerInfoService” via the composer and provided the base path “/customerInfo” so that outside clients can access the service directly. To demonstrate the scenario, I have created a map to maintain the customer information. JSONPath is used to extract the customer ID from the incoming request, and the customer information is looked up in the ‘customerInfoMap’ based on that ID. If there are no details for the requested ‘CustomerID’, the service returns an error payload.

Let’s see how this can be represented using the composer.

Following is the code representation of the above design.

package aggregator;

import ballerina.net.http;
import ballerina.lang.messages;
import ballerina.lang.jsons;
import ballerina.lang.system;

@http:config {basePath:"/customerInfo"}
service<http> CustomerInfoService {

    resource CustomerInfoResource(message m) {
        json incomingPayload = messages:getJsonPayload(m);
        map customerInfoMap = {};
        json cus_1 = {"PersonalDetails": {"Name": "Peter Thomsons", "Age": "32", "Gender": "Male"}};
        json cus_2 = {"PersonalDetails": {"Name": "Anne Stepson", "Age": "50", "Gender": "Female"}};
        json cus_3 = {"PersonalDetails": {"Name": "Edward Dewally", "Age": "23", "Gender": "Male"}};
        customerInfoMap["100"] = cus_1;
        customerInfoMap["101"] = cus_2;
        customerInfoMap["102"] = cus_3;
        string customerID = jsons:getString(incomingPayload, "$");
        system:println("Customer ID = " + customerID);
        message response = {};
        json payload;
        payload, _ = (json) customerInfoMap[customerID];

        if (payload != null) {
            messages:setJsonPayload(response, payload);
        } else {
            json errorpayload = {"Response": {"Error": "No Details available for the given Customer ID"}};
            messages:setJsonPayload(response, errorpayload);
        }
        reply response;
    }
}


This service will return the customer Information based on the requested customer ID.

Note - I have created the Contact Info Service and Purchasing Behaviour Service similar to the above service. The only difference is the payload used in each service.

Implementing the Intermediate service

So far we have created the ‘Customer Information Service’, ‘Contact Information Service’ and ‘Purchasing Information Service’ using Ballerina. Let’s see how to create an intermediate service that aggregates the responses coming from each of the backend systems and provides an aggregated response to the salesperson.

I have created a service named “AggregatorService” to aggregate the backend responses. To implement the scenario I have used the Fork Join function in Ballerina, which provides the capability to define individual workers that each carry out an assigned task, and to wait until all the workers have completed their tasks. When the backend responses have been collected, they are aggregated into a JSON payload, as diagrammed in the composer below.

Following is the code representation of the above design.
package aggregator;

import ballerina.net.http;
import ballerina.lang.messages;
import ballerina.lang.jsons;

@http:config {basePath:"/AggregatorService"}
service<http> AggregatorService {

    resource CRMResource(message m) {
        http:ClientConnector customerInfoEP = create http:ClientConnector("http://localhost:9090/customerInfo");
        http:ClientConnector contactInfoEP = create http:ClientConnector("http://localhost:9090/contactInfo");
        http:ClientConnector purchasingInfoEP = create http:ClientConnector("http://localhost:9090/purchasingInfo");
        json incomingPayload = messages:getJsonPayload(m);
        string customerID = jsons:getString(incomingPayload, "$");
        message aggregateResponse = {};

        if (customerID == "100" || customerID == "101" || customerID == "102") {
            fork {
                worker forkWorker1 {
                    message response1 = {};
                    message m1 = messages:clone(m);
                    response1 = http:ClientConnector.post(customerInfoEP, "/", m1);
                    response1 -> fork;
                }
                worker forkWorker2 {
                    message response2 = {};
                    message m2 = messages:clone(m);
                    response2 = http:ClientConnector.post(contactInfoEP, "/", m2);
                    response2 -> fork;
                }
                worker forkWorker3 {
                    message response3 = {};
                    response3 = http:ClientConnector.post(purchasingInfoEP, "/", m);
                    response3 -> fork;
                }
            } join (all) (map results) {
                any[] t1;
                any[] t2;
                any[] t3;
                t1, _ = (any[]) results["forkWorker1"];
                t2, _ = (any[]) results["forkWorker2"];
                t3, _ = (any[]) results["forkWorker3"];
                message res1;
                message res2;
                message res3;
                res1, _ = (message) t1[0];
                res2, _ = (message) t2[0];
                res3, _ = (message) t3[0];
                json jsonres1 = messages:getJsonPayload(res1);
                json jsonres2 = messages:getJsonPayload(res2);
                json jsonres3 = messages:getJsonPayload(res3);

                json payload = {};
                payload.CustomerDetailsResponse = {};
                payload.CustomerDetailsResponse.PersonalDetails = jsonres1.PersonalDetails;
                payload.CustomerDetailsResponse.ContactDetails = jsonres2.ContactDetails;
                payload.CustomerDetailsResponse.PurchasingDetails = jsonres3.PurchasingDetails;
                messages:setJsonPayload(aggregateResponse, payload);
            }
        } else {
            json errorpayload = {"Response": {"Error": "No Details available for the given Customer ID"}};
            messages:setJsonPayload(aggregateResponse, errorpayload);
        }
        reply aggregateResponse;
    }
}

Executing the Service

Deploying the Service
Now we have all the backend services and the aggregator service created using Ballerina. Let’s see how to deploy and invoke the services.

I have packaged all the backend services and the intermediate service under the “aggregator” package by defining “package aggregator;” at the top of each service. For demonstration purposes I have created a Ballerina archive named “aggregator.bsz” including all the services in the “aggregator” package.

Use the following command to create a Ballerina archive:

<Ballerina_HOME>/bin/ballerina build service <package> -o <FileName.bsz>

Ex: <Ballerina_HOME>/bin/ballerina build service aggregator -o aggregator.bsz

Run the following command to deploy and run the service.

./ballerina run service <BallerinaArchiveName>

Ex: ./ballerina run service aggregator.bsz

Note: The Ballerina archive for the above use case can be found at [2]

Invoking the Service

Now the salesperson can get all the expected information (personal details, contact details and purchasing behavior information) required for the direct marketing campaign by providing the CustomerID to the CRM system.

Here, I have used the “Postman” REST client to represent the CRM system, requesting the information for CustomerID = “101”.



Saturday, 24 June 2017

How to use nested UDTs with WSO2 DSS

WSO2 Data Services Server (DSS) is a platform for integrating data stores, creating composite data views, and hosting data from different sources as REST-style web resources.

This blog guides you through the process of extracting data using a data service when nested User Defined Types (UDTs) are used in a function.

When a nested UDT (a UDT that uses standard data types and other UDTs) exists in an Oracle package, the package should be written so that it returns a single ref cursor, because DSS does not support nested UDTs out of the box.

Let's take the following Oracle package, which includes a nested UDT called 'dType4'. In this example I have used the Oracle DUAL table to represent the results of the multiple types included in 'dType4'.

Sample Oracle Package

create or replace TYPE dType1 IS Object (City VARCHAR2(100 CHAR) ,Country VARCHAR2(2000 CHAR));
create or replace TYPE dType2 IS TABLE OF VARCHAR2(1000);
create or replace TYPE dType3 IS TABLE OF dType1;
create or replace TYPE dType4 is Object(
Region VARCHAR2(50),
CountryDetails dType3,
Currency dType2);

create or replace PACKAGE myPackage IS
FUNCTION getData RETURN sys_refcursor;
end myPackage;
create or replace PACKAGE Body myPackage as
  FUNCTION getData RETURN sys_refcursor IS
    tt  dType4;
    t3  dType3;
    t1  dType1;
    t11 dType1;
    t2  dType2;
    cur sys_refcursor;
  BEGIN
    t1  := dType1('Colombo', 'Sri Lanka');
    t11 := dType1('Delhi', 'India');
    t2  := dType2('Sri Lankan Rupee', 'Indian Rupee');
    t3  := dType3(t1, t11);
    tt  := dType4('Asia continent', t3, t2);
    open cur for
      SELECT tt.Region, tt.CountryDetails, tt.Currency from dual;
    return cur;
  END getData;
end myPackage;

Let's see how we can access this Oracle package using the WSO2 Data Services Server.

Creating the Data Service

1. Download WSO2 Data Services Server
2. Start the server and go to "Create DataService" option
3. Create a data service using given sample data source.

In this data service I have created an input mapping to get the results of the Oracle cursor using the 'ORACLE_REF_CURSOR' SQL type. The output mapping is used to present the results returned by the Oracle package.

<data name="NestedUDT" transports="http https local">
   <config enableOData="false" id="oracleds">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@XXXX</property>
      <property name="username">XXX</property>
      <property name="password">XXX</property>
   </config>
   <query id="qDetails" useConfig="oracleds">
      <sql>{call ?:=mypackage.getData()}</sql>
      <result element="MYDetailResponse" rowName="Details" useColumnNumbers="true">
         <element column="1" name="Region" xsdType="string"/>
         <element arrayName="myarray" column="2" name="CountryDetails" xsdType="string"/>
         <element column="3" name="Currency" xsdType="string"/>
      </result>
      <param name="cur" ordinal="1" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
   </query>
   <resource method="GET" path="data">
      <call-query href="qDetails"/>
   </resource>
</data>
The response of the data service invocation is as follows:

<MYDetailResponse xmlns="">
   <Details>
      <Region>Asia continent</Region>
      <CountryDetails>{Colombo,Sri Lanka}</CountryDetails>
      <Currency>Sri Lankan RupeeIndian Rupee</Currency>
   </Details>
</MYDetailResponse>

Saturday, 28 January 2017

Use the ZAP tool to intercept HTTP traffic

ZAP Tool

Zed Attack Proxy (ZAP) is one of the most popular security tools used to find security vulnerabilities in applications.

This blog discusses how we can use the ZAP tool to intercept and modify HTTP and HTTPS traffic.

Intercepting the traffic using the ZAP tool

Before we start, let's download and install the ZAP tool.

1) Start the ZAP tool using its startup script

2) Configure local proxy settings
 To configure the local proxy settings in the ZAP tool, go to Tools -> Options -> Local Proxy and provide the port to listen on.

3) Configure the browser
 Now open your preferred browser and set up its proxy to point to the port configured above.

For example, if you are using the Firefox browser, the proxy can be configured by navigating to "Edit -> Preferences -> Advanced -> Settings -> Manual Proxy Configuration" and providing the same port configured in the ZAP proxy.

4) Recording the scenario

Open the website that you want to intercept in the browser and verify that the site appears in the sites list. Now record the scenario that you want to intercept by executing the steps in your browser.

5) Intercepting the requests

Now you have the request/response flow recorded in the ZAP tool. To view the request and response information, select a request from the left-side panel and view the details in the right-side "Request" and "Response" tabs.

The next step is to add a break point to the request so that you can stop it and modify its content.

Adding a Break Point

Right-click on the request that you want to intercept, and then select "Break" to add a break point.

After adding the break point, record the same scenario that you recorded above. You will notice that when the browser reaches the intercepted request, ZAP opens a new tab called 'Break'.

Use the "Break" tab to modify the request headers and body. Then click the "Submit and step to next request or response" icon to submit the request.

ZAP will then forward the request to the server with the changes applied.

Sunday, 31 July 2016

Docker makes your life easy !!!

We often come across situations where we need to set up a cluster of WSO2 products; within a product QA cycle it is a very common task. But as you all know, setting up and troubleshooting a cluster consumes a considerable amount of time.

Now, with the use of Docker we can set up a cluster within a few seconds, and it makes your life easy :)

So let me give you some basic knowledge of what "Docker" is.

What is Docker

In the simplest terms, Docker is a platform for packaging and running software in containers.

Install Docker:

What is Docker Compose

Docker Compose is used to define several applications and bring them up in multiple containers with one single command.

Install Docker Compose:

For some of the WSO2 products, Docker Compose images already exist in a private repository.

The main purpose of this blog is to highlight some of the useful Docker commands you will use while working with Docker Compose images.

To explain some of the usages I will be using the ESB 4.9.0 Docker Compose image.
You can clone the Git repository where the Docker Compose image for ESB 4.9.0 is available. Follow the instructions in the README to set up the ESB Docker cluster.

Start Docker container

docker-compose up

Build the changes and bring up the Docker Compose image

docker-compose up --build

Stop docker containers

docker-compose down 

Start Docker in daemon mode

 docker-compose up -d

List docker images

 docker images 

List running docker containers

 docker ps 

Login to an active container

docker exec -i -t <container_id> /bin/bash 

Delete/Kill existing containers

 docker rm -f $(docker ps -aq) 

View container logs 

 docker logs <container_id> 

Insert a delay between docker containers

Sample Scenario: When running an ESB cluster, we first want to ensure that the DB is up and running. Therefore we can introduce a delay before starting the ESB nodes. To configure this, add the property below to the docker-compose.yml file:
      - SLEEP=50

Add additional host names

Sample Scenario: Let's assume you want to use a backend service hosted in an Application Server on another instance. The host name of the Application Server is "". Docker cannot resolve the host name unless you define it in the docker-compose.yml file as below:
      - ""

Enable additional ports

Sample Scenario: Each of the ports used by the Docker Compose setup should be exposed through the docker-compose.yml file. If you are using an inbound HTTP endpoint with port 7373, this port should be exposed as below:
      - "443:443"
      - "80:80"
      - "7373:7373"
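For context, the three fragments above belong under a service definition in docker-compose.yml. A minimal, hypothetical entry might look like the following (the service name, image name, and host mapping are made up for illustration; use the values from your own setup):

```yaml
services:
  esb:
    image: wso2/esb:4.9.0                    # hypothetical image name
    environment:
      - SLEEP=50                             # delay before starting the node
    extra_hosts:
      - "app-server.local:192.168.1.10"      # hypothetical host-name mapping
    ports:
      - "443:443"
      - "80:80"
      - "7373:7373"                          # inbound HTTP endpoint port
```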

Friday, 17 June 2016

Configuring an email notification to define user password

This blog discusses how to configure an auto-generated email that lets the user define a password when an account is created via the management console. WSO2 Identity Server has an inbuilt feature called 'Ask Password' to fulfill this requirement. Let's look at how to implement this in other WSO2 products.

'Ask Password' is a feature that comes with WSO2 Identity Server. Its purpose is to allow users to decide their own password rather than having one defined by the server administrator, and to allow the user to change the defined password.

So let me move on to the purpose of writing this blog.

While I was working with WSO2 API Manager, I got a requirement where the APIM administrator wants to create users via the APIM management console, but also wants to allow each user to define a password by themselves. This requirement can be fulfilled using the 'Ask Password' feature available in WSO2 Identity Server.


The APIM administrator creates a user by providing a username and an email address through the management console. Then an email is sent to the defined email address with a redirection URL for defining a password for the user account.

I will use the APIM 1.10.0 product to explain this.

Steps to configure 'Ask Password' feature in APIM 1.10.0

1. Download APIM server

2. Log in to APIM server as the administrator

When you go to 'Add User' option you can see a window like below.

Now lets look at how to configure auto-email to set user password.

3. Install 'Account Recovery and Credential Management' feature in APIM

Due to some limitations in Identity Server feature activation, you have to install the 'Account Recovery and Credential Management' feature in APIM 1.10.0. (Steps to install a feature in a WSO2 product can be found at [1].)

4. As the next step, apply the configuration changes mentioned here to the APIM server.

These configurations are required to enable 'Ask Password' feature.

5. Restart the server after above changes.

When you navigate to the 'Add User' option, you can see that the 'Ask Password' option appears in the UI as below.

6. Now create a user from APIM management console by defining the user email address.

You can verify that the auto-generated email was received at the defined user email address and that the user can define a password through the redirection screen provided in the email. Then check whether the user can successfully log in to the APIM server.

Now the APIM administrator can add users via the management console and allow them to define a password they prefer.