

WebLogic Managed Server startup without giving userid/password every time

When you start the WebLogic managed servers, they ask for a userid/password on startup. This is unlike the Admin server, which doesn't ask every time you start it, because its credentials are saved.

It would be quite helpful if the same happened for the managed servers as well. Saving credentials for the managed servers is very simple.

Navigate to

1. <MIDDLEWARE_HOME>/user_projects/domains/<domain_name>/servers/<server_name>

2. create a folder "security", if it doesn't exist

3. create a file named boot.properties inside it

4. add the following lines

username=<your_username>
password=<your_password>

and save it.

That is it.

The next time you start the managed server, it automatically reads this file and gets the credentials.

You can also find the following lines in the startup logs, which show that the server picks the credentials from the just-created file:

<Dec 9, 2012 8:02:59 PM IST> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
<Dec 9, 2012 8:02:59 PM IST> <Notice> <Security> <BEA-090083> <Storing boot identity in the file: D:\Oracle\Middleware\user_projects\domains\soa_domain\servers\soa_server1\security\>
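The steps above can be sketched as a small shell script. The domain path, server name, and credentials below are example values; substitute your own:

```shell
# Sketch: create the boot identity file for a managed server.
# DOMAIN_HOME, SERVER_NAME and the credentials are example values.
DOMAIN_HOME=./user_projects/domains/soa_domain
SERVER_NAME=soa_server1

mkdir -p "$DOMAIN_HOME/servers/$SERVER_NAME/security"

# WebLogic reads username/password from boot.properties at startup.
cat > "$DOMAIN_HOME/servers/$SERVER_NAME/security/boot.properties" <<'EOF'
username=weblogic
password=welcome1
EOF

cat "$DOMAIN_HOME/servers/$SERVER_NAME/security/boot.properties"
```

On the first successful start, WebLogic encrypts the values in this file, so the plaintext credentials do not stay on disk.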

Configuring Email Notification in Oracle SOA Suite 11g

Oracle SOA Suite 11g provides the User Messaging Service (UMS), which enables users to send notifications via various channels like Email, SMS, IM, and Voice Mail.

Each of these channels needs to be configured before it can be used.

This post explains how to use the Email channel as the notification service in Oracle SOA Suite 11g.

Once configured, it can be used in the Human Tasks as well as with the Oracle BPEL Extension "Email" activity.

Before anything else, I'll just introduce you to the protocols used in email communication.

Sending a mail
SMTP : protocol to send mail to a mail server

Retrieving a mail
POP3 : Downloads the mail from the mail server onto a local machine. This is helpful if you have only one machine on which you'll always check your emails.
IMAP : Doesn't actually download the email to the local machine (you can always do it if required). It helps you sync your mail across machines, and gives you the capability of maintaining your emails hierarchically, i.e. you can maintain a folder structure for specific mails. If you create a folder on one machine and drag some messages into it, that is automatically synced when you check your mail from another machine, because the changes are made not on your local machine but on the server. Also, when your INBOX is huge, IMAP retrieves only the headers to your mail client; only when you click on a message does it actually download it to show you the details. This makes performance better than POP3.

For using the email notification service, you may not have your own email server. You can always use a free mail service like gmail.

This example shows you how to use gmail as the mail server for email notifications.

Configuring the email notification service involves 4 steps (the first step can be skipped if your mail server doesn't need SSL):

Step 1 : Import certificates from gmail and add it to your server trust store
Step 2 : Configure email driver properties
Step 3 : Enable notification mode
Step 4 : Testing the configuration

The procedure below lists a step-by-step approach to setting up the gmail server as the default mail server for your UMS.

Step 1 : Import certificates from gmail and add it to your server trust store
Any email server uses two protocols to send/receive messages:
SMTP for sending mails
Either POP3 or IMAP for receiving mails. Gmail uses IMAP for retrieving mails.
So, you need to get both the SMTP and IMAP certificates of the gmail server in order to send/receive mails to/from your inbox.

You can download the certificates using the open-source OpenSSL toolkit. First, you need to download and install it.

Downloading the SMTP certificate
Open a command prompt and cd to openssl_install_folder/bin
Give the below command to view the SMTP certificate
openssl s_client -connect smtp.gmail.com:465
Copy the certificate block (the part between the BEGIN CERTIFICATE and END CERTIFICATE lines) from the output, paste it into a file, and name it smtp_gmail.cert

Similarly, issue the following command to view the IMAP certificate
openssl s_client -connect imap.gmail.com:993

Copy the certificate block from this output as well, paste it into a file, and name it imap_gmail.cert

Now that you have both the SMTP and IMAP certificates with you, you need to import them into your server trust store.
For this, open a command prompt and navigate to %JAVA_HOME%/bin
Issue the following command
keytool -import -alias gmail-smtp -keystore gmail-keystore.jks -file <location of smtp_gmail.cert>
It asks for a password; give one and please remember it, as we'll use it later.
Issue a similar command for importing the IMAP certificate
keytool -import -alias gmail-imap -keystore gmail-keystore.jks -file <location of imap_gmail.cert>

If the above commands don't work, issue the ones below. Note : do this only if the above commands don't work
keytool -import -alias gmail-smtp -keystore trusted-certificates.jks -file <location of smtp_gmail.cert>
keytool -import -alias gmail-imap -keystore trusted-certificates.jks -file <location of imap_gmail.cert>
Once you are done importing the trust certificates into the keystore using keytool, you need to tell the managed server (soa_server1) that there is a user-defined trust store it has to use.
This is done by editing the %MIDDLEWARE_HOME%\user_projects\domains\soa_domain\bin\setDomainEnv.cmd file.
Search for the -Djavax.net.ssl.trustStore JVM argument and set its value to the path of the gmail-keystore.jks file that was generated by the keytool command (if the argument is not available, create one).
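As a sketch, the edited entry in setDomainEnv.cmd could look somewhat like the following. The variable name and keystore path are assumptions; adjust them to your environment:

```
rem Example only: keystore path and variable are assumptions for this sketch
set EXTRA_JAVA_PROPERTIES=%EXTRA_JAVA_PROPERTIES% -Djavax.net.ssl.trustStore=D:\certs\gmail-keystore.jks -Djavax.net.ssl.trustStorePassword=<keystore_password>
```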

Once you are done with this edit, one step is pending: telling the managed server that a custom keystore is set up and has to be used.
This is done by opening the Admin Console (<adminHost>:<adminPort>/console --> Environments --> Servers --> click on soa_server1)

Click on Keystores, and change the Keystores to "Custom Identity and Java Standard Trust"

That's it. You're done with configuring the certificates!!!

Step 2 : Configure email driver properties
This step configures email driver properties like email server details, incoming/outgoing email, passwords, etc.

For this, open EM and navigate to the Email Driver Properties screen (under the User Messaging Service, usermessagingdriver-email).

Configure the properties below (field names may vary slightly by version):
  • Outgoing mail server : smtp.gmail.com (as mentioned, SMTP is used to send mails)
  • Outgoing mail server port : 465 (the SSL port gmail uses for SMTP)
  • Default From address : the default sender (used if one is not provided in the outgoing message)
  • Outgoing username/password : your gmail id and its password (the option to select is "Use Cleartext Password")
Similarly, for the incoming side:
  • Incoming mail server : imap.gmail.com (as mentioned, IMAP is used to receive mails)
  • Incoming mail server port : 993 (the SSL port gmail uses for IMAP)
  • Incoming username/password : your gmail id and its password

You're done!!!

Step 3 : Enable notification mode
This step lets the server know which mode to use for notifications. Since we've configured email notification above, we'll enable the EMAIL notification mode.
Traverse to the Workflow Notification configuration page in EM.

Set up the required values.

Restart the Admin Server (for step 1) and the Managed Server (for steps 2 & 3).
This completes the required configuration.

Step 4 : Testing the configuration
The last thing you need to do is test it.

Navigate to Human Workflow in EM.

Notification Management --> Send Test Notification --> give the details in the popup and check the mail.
Hope this helps you in working with the email notification service.

Thanks for going through my post, feel free to provide a feedback!

Error: Global element declaration/definition of name are duplicated at the following locations

Sometimes, your composite suddenly stops working after you've started referring to a new xsd file - via a service/reference/component.

Your composite was getting deployed properly, and suddenly, after a few changes, you get the following error:

Error: Global element declaration/definition of name '{}part1' are duplicated at the following locations:


Error(80): query "/ns2:part1/ns2:firstName" is invalid, because Global element declaration/definition of 
name '{}part1' are duplicated at the following locations:

Sometimes, you get this error when your composite works with more than one schema (xsd) file.

The reason is that if both the part name and the namespace are exactly the same, then their definitions also have to be exactly the same.

A simple scenario: you are using one xsd in your composite, and there is another xsd being referred to by one of the components/services/references in your composite. Let's say both xsds have the same part name. In such cases, both parts have to be exactly the same, even the annotations in those parts.

For example, suppose this is the xsd being used in the composite, and one of the references/services in the composite uses or refers to a second xsd.

If you observe these two schemas and their corresponding sources, both have the same namespace as well as the same part name (part1 in this case), but the second xsd has a different annotation.
The thumb rule is that when the namespaces are the same and the part names are the same, the definitions for those parts across the xsds should be exactly the same.

The resolution is that if you change the annotation of the second xsd from "A sample element1" to "A sample element", the issue gets resolved.
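As an illustration (these two schemas are hypothetical reconstructions of the scenario above), the following pair of xsds would trigger the duplication error, because part1 is declared twice in the same (here empty) namespace with differing annotations:

```xml
<!-- First xsd, used directly by the composite -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="part1">
    <xsd:annotation>
      <xsd:documentation>A sample element</xsd:documentation>
    </xsd:annotation>
  </xsd:element>
</xsd:schema>

<!-- Second xsd, referred to by a service/reference of the composite -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="part1">
    <xsd:annotation>
      <xsd:documentation>A sample element1</xsd:documentation>
    </xsd:annotation>
  </xsd:element>
</xsd:schema>
```

Making the two declarations identical (here, changing "A sample element1" back to "A sample element") resolves the error.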

 Hope this helps you. Cheers.

Service Registries : WSIL & UDDI

Often there is confusion about what WSIL and UDDI are and what the difference between them is, as on a high level both seem to be the same.

Both the Web Services Inspection Language (WSIL) and Universal Description, Discovery, and Integration (UDDI) specifications address the same task of Web service discovery. And both contain only references to WSDLs; they do not contain the WSDL documents themselves.

If both serve the same purpose, why two different specifications?

The UDDI specification addresses Web service discovery through the use of a centralized model.
UDDI compliant registries serve as central hubs where service providers and service requestors come together and satisfy their needs.
There are, however, a few flaws in the UDDI business model that prevent it from taking off (at least today) the way it was intended:
  1. Not all records in UDDI registries actually exist. There are instances where a service is registered in UDDI but the actual implementation does not exist. In fact, independent research carried out by SalCentral found that two-thirds of the service entries in UDDI registries do not actually exist. Also, there are situations where entries are duplicated under different names. There is no proper moderation of these registries.
  2. No Quality of Service (QoS) is guaranteed for the registered services. How can you trust a service registered in UDDI by some service provider, and use it by giving it your data? Though some implementations provide digital signatures for this, it is not in the standard UDDI spec.
  3. Also, UDDI requires the deployment and maintenance of some infrastructure, thus increasing the cost of operation.

This is where WSIL just fits in.
WSIL decentralizes the centralized model of service publication: instead of a UDDI registry, each service provider itself advertises its Web Services offerings. The service descriptions can be stored at any location, and requests to retrieve the information are generally made directly to the entities that are offering the services.

The primary difference between the two is that the Web Service "find" no longer goes directly to a centralized UDDI registry. Instead, the "find" is sent by the requestor directly to the service provider. If the service provider maintains the service advertising, it will always have the latest, non-duplicated, trusted (based on the service provider) service offerings. Thus the centralized, un-moderated registry becomes a decentralized, moderated registry distributed over the web.

WSIL defines an XML based language by which services can be advertised to interested consumers.

Which is better?
It completely depends on the situation.
UDDI is more advanced than WSIL in terms of the capabilities it provides. If your use case doesn't require the advanced functionality offered by UDDI (some implementations, like OSR, provide approval mechanisms for publishing, digital signatures, etc.), WSIL is the option.
If the use case is that the data needs to be centrally managed, UDDI is the option.
Note that these technologies are complementary to each other.

inspection.wsil document
All services are advertised through WSIL using a fixed document called inspection.wsil.
Once an inspection document has been created, a consumer needs to be able to find it. Consumers only need to ping a URL such as http://<host>:<port>/inspection.wsil to discover the file's existence for retrieval.
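A minimal inspection.wsil document looks somewhat like the following sketch; the service name and WSDL location are hypothetical examples:

```xml
<?xml version="1.0"?>
<inspection xmlns="http://schemas.xmlsoap.org/ws/2001/10/inspection/">
  <service>
    <abstract>Employee lookup service</abstract>
    <!-- Points at the WSDL; the document itself is not embedded here -->
    <description referencedNamespace="http://schemas.xmlsoap.org/wsdl/"
                 location="http://myhost:8001/soa-infra/services/default/EmployeeService?WSDL"/>
  </service>
</inspection>
```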

Creating a WSIL Connection in Jdeveloper
JDeveloper has an inbuilt WSIL wizard, which lets you create a WSIL connection very easily.
Go to Resource Palette --> New Connection --> WSIL

Give the inspection.wsil location (normally in the root of the managed server) and the managed server's credentials

That’s it!!!
Now, you may refer to all the web services deployed on this host through this connection.

Thanks for going through my post, feel free to provide a feedback!

Some technical stuff on it

My other post on SOA gives you an overview of what SOA is all about in a very generic, real-life scenario.
Now, let's talk some technical stuff here.

There's no standard definition for SOA; it is just a methodology, and everyone has their own definition for it. The best one, I feel, is

SOA is an architectural style of building business applications using loosely coupled services, that act like black boxes, that can be orchestrated in a particular fashion to attain a specific business functionality.

SOA aims at a greater alignment between your business and IT infrastructure.

For a service to be a candidate in an SOA application, it has to adhere to certain principles, otherwise called the SOA principles.

Below are the principles each individual candidate in an SOA application has to satisfy, as defined by Thomas Erl:

  1. Standardized service contract – Services adhere to a communications agreement, as defined collectively by one or more service-description documents. Services express their purpose and capabilities via a service contract. The Standardized Service Contract design principle is perhaps the most fundamental part of service-orientation.

  2. Service loose coupling – Services maintain a relationship that minimizes dependencies and only requires that they maintain an awareness of each other.

  3. Service abstraction – Beyond descriptions in the service contract, services hide logic from the outside world. This principle emphasizes the need to hide as much of the underlying details of a service as possible.

  4. Service reusability – Logic is divided into services with the intention of promoting reuse. The principle of Service Reusability emphasizes the positioning of services as enterprise resources with agnostic functional contexts.

  5. Service autonomy – Services have control over the logic they encapsulate, and a significant degree of control over their environment and resources.

  6. Service statelessness – Services minimize resource consumption by deferring the management of state information when necessary.

  7. Service discoverability – Services are supplemented with communicative metadata by which they can be effectively discovered and interpreted. For services to be positioned as IT assets with repeatable ROI, they need to be easily identified and understood when opportunities for reuse present themselves.

  8. Service composability – Services are effective composition participants, regardless of the size and complexity of the composition.

Any technology component that adheres to all these principles would be a candidate for use in SOA, and the best software component we can think of that satisfies all these features is a web service. Moreover, it is highly evolved, widely supported, and, more than anything, easy to use. It has become the de facto standard for realizing SOA.

The main goal of SOA is to connect disparate systems. In order for these disparate systems to work together, they should talk to each other seamlessly. SOA gives a way of doing this through the ESB (Enterprise Service Bus), which acts like a reliable post office that guarantees delivery of messages between systems in a loosely coupled manner.

By organizing the enterprise IT around services instead of around applications, SOA helps companies to achieve faster time-to-service and respond more flexibly to fast-paced changes in business requirements.

It aims at building systems that are extendible, flexible and fit with legacy systems.

Hope this gave you a better understanding of SOA and its principles. My next post introduces the Service Component Architecture, which realizes all the SOA principles.

Thanks for going through my post, feel free to provide a feedback!

Service Component Architecture (SCA) - It's all about assembly

We've been repeatedly saying that SOA is neither a technology nor a specification, but just an architectural approach to designing enterprise applications.
If that’s true, then how is it realized?
Is it with the help of services that follow those so-called SOA principles, integrated in a custom fashion?
You may do that, but it becomes really difficult as the application and the number of underlying technologies grow.
For this, you require a standard way of assembling such things, one that simplifies and standardizes the overall lifecycle of the enterprise software.
Service Component Architecture is exactly that.

Similar to SOA, there is no standard definition to SCA. Some of the definitions that I would like to share are below

"Service Component Architecture is a specification that defines how various enterprise pieces are created and assembled together as modular components to increase IT sustainability and flexibility"

"SCA is a unifying framework for standardizing and simplifying the development, deployment and management of atomic service components"

"SCA provides a model for composing applications that follow SOA principles"

In simple terms, it's like a platform on which you develop your SOA applications in a more standard, easy, and flexible manner.

SCA was first started as a collaboration of major IT vendors under the OpenSOA group, and later handed over to OASIS.

Anatomy of SCA

The SCA Assembly Model consists of a series of artifacts, which are defined by elements contained in XML files.

The basic components of SCA are:
  1. Composite : A single unit of deployment, irrespective of the complexity of the application.
  2. Component : The unit that provides the business logic of the application. A composite contains multiple such components.
  3. Services : Act as entry points to each of these components. Not all services can be accessed directly from outside the composite; only those specifically exposed to the outside world can. Those that can be accessed by the outside world are called Promoted Services / Exposed Services.
  4. Reference : As the name suggests, a reference to some implementation, either external to the composite or within it.
  5. Wires : It is with wires that the components or services are bound/connected to each other.
  6. Properties : SCA leverages properties to provide customizations to the components, which actually extend the standard functionality.

Apart from these, SCA also defines the non-functional requirements such as policy enforcement, QOS, etc.
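As a sketch of how these artifacts fit together, an SCA composite file could look somewhat like the following; the composite, component, and class names are hypothetical:

```xml
<composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
           name="OrderComposite">
  <!-- Promoted service: the composite's entry point -->
  <service name="OrderService" promote="OrderComponent/OrderService"/>

  <!-- Component holding the business logic -->
  <component name="OrderComponent">
    <implementation.java class="com.example.OrderServiceImpl"/>
    <!-- Property customizing the component -->
    <property name="currency">USD</property>
    <!-- The component's reference, satisfied outside the component -->
    <reference name="paymentService"/>
  </component>

  <!-- Promoted reference wired to the component's reference -->
  <reference name="PaymentGateway" promote="OrderComponent/paymentService"/>
</composite>
```

Here the promote attributes play the role of wires between the composite-level service/reference and the component.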

Hope this post gave you a clear understanding of SCA.
In my next post, I'll explain how Oracle has leveraged these concepts in its implementation of SCA, the Oracle SOA Suite 11g.

Thanks for going through my post, feel free to provide a feedback!

Mediator in Oracle SOA Suite 11g

Mediator, as the name implies, mediates the message flow.
SOA is all about message flows between disparate systems, and mediators play an important role in this.
It forms the communication layer between the services, thus providing a kind of service virtualization and decoupling between the interacting services.

It basically intercepts the message flow between the services and offers the various capabilities listed below:

  1. Message Routing : The ability to channel a request to a particular service provider based on the message. You may define rules to select the target service among the available services based on the content of the message (static routing) or based on an external rules engine (dynamic routing)

  2. Message Validation : Messages can be checked against a Schematron file to check their validity. This helps in invoking the target services only with the right message (as each and every invocation in an enterprise is costly)

  3. Message Filtering : You may filter messages based on their content

  4. Message Transformation : The ability to convert the structure and format of the incoming service request to the structure and format expected by the service provider.
Transformation is one of the key capabilities of the Mediator.

Mediators are also a way to change the interaction patterns, i.e. you can convert a message call from sync to async, a 2-way to a one-way, etc.

Mediators are a way to enable Event Delivery Architecture in your composite. They have the capability to publish an event as well as subscribe for an event.

However, mediators form the communication layer only within a composite. If the interactions go across composites, the Enterprise Service Bus comes to the rescue. We'll discuss ESB later.

Logical Grouping - Partitions in SOA Suite 11g

In order to logically group your composite application deployments, SOA Suite provides a feature called partitions.
This way, it will be a lot easier to manage deployments, grouping them based on certain logical criteria.
You can perform bulk life-cycle management tasks in a single go, e.g. startAll/stopAll, undeployAll, etc.

You always deploy a composite to a partition. SOA Suite provides a default partition named default.

There won't be any warning if you try to deploy the same composite to more than one partition using JDev. However, you will get a warning when doing it through EM.
Having the same composite in more than one partition may sometimes cause unexpected results. For example, say you have an inbound file adapter that checks for new records in a file. On new record arrival, you never know which instance of the composite (from the first or the second partition) is actually started.

Creating a Partition
Right-click on soa-infra --> Manage Partitions

Create a new Partition in the next screen

Selecting a particular partition for your composite is done during deployment.
There are 3 ways of selecting the partition (all of which are part of deployment):
  • Using EM
  • Using Jdev
  • Using Scripts

Using EM
Click on soa-infra, then click on the SOA Infrastructure dropdown to go into deployments

Using Jdev

In Jdev deploy wizard, you have the option to select the partition to which you want to deploy the composite

- Ravi Kiran


This blog is completely dedicated to SOA.
It gives you a clear idea of what SOA is all about, how it is realized using Oracle SOA Suite 11g, and common issues you find with the SOA Suite.
All blogs published in my site are completely my own observations and learning, and in no way copied from any other resource.

PS : Blog under construction. Will update whenever I find some time.

Dehydration in BPEL - Oracle SOA Suite 11g

Dehydration - Offers Reliability, fail-over protection

Over the life cycle of a BPEL instance, the instance with its current state of execution may be saved in a database. When a BPEL instance is saved to a database, the instance is known as being dehydrated. The database where the BPEL instance is saved is called a dehydration store.

Once a BPEL instance is dehydrated, Oracle BPEL Server can offload it from memory. When a certain event occurs, such as the arrival of a message or the expiration of a timer, Oracle BPEL Server locates and loads the persistent BPEL instance from the dehydration store back into memory and resumes the execution of the process instance. Dehydrating BPEL instances offers reliability: if Oracle BPEL Server crashes in the middle of executing a process, the instance can be recovered automatically, programmatically, or manually from the dehydrated state. When Oracle BPEL Server resumes the execution of the process instance, it resumes from the last dehydration point, which is the last state of the instance that Oracle BPEL Server saved to the dehydration store.

When and how the dehydration occurs differs based on the process type:

Transient process — Oracle BPEL Server dehydrates the process instance only once at the end of the process. When a host crashes in the middle of running the process instance, the instances are not visible from Oracle BPEL Control.
Durable process/idempotent — Oracle BPEL Server dehydrates the process instance in-flight at all midprocess breakpoint and non-idempotent activities, plus the end of the process. When the server crashes, this process instance appears in Oracle BPEL Control up to the last dehydration point (breakpoint activity) once the server restarts. If the server crashes before the process instance reaches the first midprocess breakpoint activity, the instance is not visible in Oracle BPEL Control after the server restarts.

There are three cases in which dehydration occurs:

1. When the BPEL instance encounters a mid-process breakpoint activity (not including the initial receive)
Activities like wait, receive, onMessage, onAlarm, call to an async WSDL
That is where an existing BPEL instance must wait for an event, which can be either a timer expiration or message arrival. When the event occurs (the alarm expires or the message arrives), the instance is loaded from the dehydration store and execution is resumed. This type of dehydration occurs only in durable processes, which have mid-process breakpoint activities. A transient process does not have any midprocess breakpoint activities.

2. When the BPEL instance encounters a non-idempotent activity

When Oracle BPEL Server recovers after a crash, it retries the activities in the process instance. However, it should only retry the idempotent activities. Therefore, when Oracle BPEL Server encounters a nonidempotent activity, it dehydrates it. This enables Oracle BPEL Server to memorize that this activity was performed once and is not performed again when Oracle BPEL Server recovers from a crash.
Idempotent activities are those where the result is the same irrespective of the number of times you execute them.
Repeated invocations have the same effect as one invocation.
Ex : Read-only services

3. When the BPEL instance finishes

At the end of the BPEL process, Oracle BPEL Server saves the process instance to the dehydration store, unless you explicitly configure it not to do so. This happens to both durable and transient processes. For transient processes, the end of the process is the only point where the process instance is saved. This is because a transient process does not have any mid-process breakpoint activities and nonidempotent activities where the in-flight dehydration can occur.

Dehydration triggered by:

(a) Breakpoint activities: <receive>, <onMessage> (including <pick>), <onAlarm>, <wait>, and <reply>
(b) When using checkPoint() within a <bpelx:exec> activity


A BPEL invoke activity is by default an idempotent activity, meaning that the BPEL process does not dehydrate instances immediately after invoke activities. Therefore, if idempotent is set to true and Oracle BPEL Server fails right after an invoke activity executes, Oracle BPEL Server performs the invoke again after restarting. This is because no record exists that the invoke activity has executed. This property is applicable to both durable and transient processes.

If idempotent is set to false, the invoke activity is dehydrated immediately after execution and recorded in the dehydration store. If Oracle BPEL Server then fails and is restarted, the invoke activity is not repeated, because Oracle BPEL Process Manager sees that the invoke already executed.

When idempotent is set to false, it provides better failover protection, but at the cost of some performance, since the BPEL process accesses the dehydration store much more frequently. This setting can be configured for each partner link in the bpel.xml file.
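As a sketch of the per-partner-link setting mentioned above, a bpel.xml entry could look somewhat like this; the partner link name is a hypothetical example:

```xml
<!-- Force dehydration immediately after invokes on this partner link -->
<partnerLinkBinding name="CreditRatingService">
  <property name="idempotent">false</property>
</partnerLinkBinding>
```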

Some examples of where this property can be set to true are read-only services (for example, CreditRatingService) or local EJB/WSIF invocations that share the instance's transaction.

BPEL Dehydration Stores
As already explained, when a BPEL process instance is saved in the database, it uses the schema that is configured using the Repository Creation Utility (RCU) during SOA environment setup.
Here are some of the tables that a BPEL engine uses to store its current instance state.

cube_instance - stores instance metadata, eg. instance creation date, current state, title, process identifier
cube_scope - stores the scope data for an instance
work_item - stores activities created by an instance
document - stores large XML variables, etc.
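As an illustration, a query against the dehydration store could look somewhat like this, run as the SOA infrastructure schema user (e.g. <prefix>_SOAINFRA); the column names are from the 11g schema and may vary by version:

```sql
-- List recent BPEL instances with their current state
SELECT cikey, title, state, creation_date
FROM   cube_instance
ORDER  BY creation_date DESC;
```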

Working with a Database Adapter - Part 1

This blog helps you work with the Database Adapter provided in Oracle SOA Suite.
Please go through my previous blog on configuring a Database Adapter.

Why Database Adapter?
Service Oriented Architecture is all about services; it's about your entire business application modeled as services.
In a business application, it is obvious that you need a database to interact with. But a database by itself cannot be exposed as a service. For example, in order to interact with it, you write a Java program and connect to and interact with the database using JDBC. If this is the case, how can you use it in a 100% SOA-based application?
The only way to use it is to expose it as a service. And how is a database exposed as a service?
The answer is to introduce a layer over it that uses the database it covers, yet exposes it as a service, i.e. something like a wrapper.
Oracle SOA Suite 11g provides exactly such a solution, called the Database Adapter. With this, you can use your existing database as a service in your SOA application.
It also provides various other functionalities like polling a database, checking for changes in a specific table, etc.

The example below gives you an idea of how to use the Database Adapter with the SOA Suite.
It is assumed that you have a database up and running.
In this example, we take user input, which will be the EmployeeId, and return the employee's FullName and Salary.
This example uses the concepts of the mediator, transformation of data, and web services.

Working with a DBAdapter

Create a new SOA Project, name it DBAdapterExample

This creates a new project folder, with an empty composite.xml
You first need to define the input and output xml formats used in the example.
In this example, the input is EmployeeId (int) and the output is FullName, Salary
Create a new XML schema that has 2 nodes with the input and output format types
Right-click the xsd folder in the project --> New XML Schema, name it DBAdapterFormat.xsd
Define the schema as below
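The schema could look somewhat like this sketch, assuming a request element carrying the EmployeeId and a response element carrying FullName and Salary; the element names, types, and namespace here are assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://www.example.com/dbadapter"
            xmlns:tns="http://www.example.com/dbadapter"
            elementFormDefault="qualified">
  <!-- Input node: the employee id to look up -->
  <xsd:element name="EmployeeRequest">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="EmployeeId" type="xsd:int"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <!-- Output node: the employee's full name and salary -->
  <xsd:element name="EmployeeResponse">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="FullName" type="xsd:string"/>
        <xsd:element name="Salary" type="xsd:double"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```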

Creating a Database Adapter
  • Drag and drop a Database Adapter from the Component Palette into the "External References" swim lane in the composite.xml, and name it EmpDBAdapterService
  • Create a database connection, and give the JNDI name of the outbound connection pool of the DBAdapter that you've configured (see my previous post on configuring a Database Adapter)
  • Select the operation that you want - Select, in this case
  • Import the database table you want to interact with - Employees, in this case
  • You may remove the relationships that you see in Step 6, or leave them as they are
  • Select all the fields that you will use in this application (no problem even if you select all of them and do not use some later)
  • In Step 8, you have to create a bind variable (parameter) and use it in the where clause of the SQL query. In this example, we send the EmployeeId, so create a bind variable with some name (bndEmpId) and use it in the where clause
  • Finish the wizard to the last step, and you will see a new Database Adapter in the composite.xml

Now that the DBAdapter is created (in other words, the underlying database is exposed as a service), you can talk to the service to get its data.
But this service is not exposed directly to clients (only services in the Exposed Services swim lane can be accessed by external clients).
So, you now have to create an exposed web service whose input and output parameters are the same as the Database Adapter's. Please remember that the exposed service is just a client to access the Database Adapter, and has no relevance to the database or its created adapter.
contd. in Part 2