The flow execution process inside the IBM App Connect Enterprise (ACE) integration node consists of at least three steps, or nodes. Do not confuse flow nodes with integration nodes: an integration node represents a group of IBM ACE applications inside the integration server, while a flow node is the unit of work inside the integration process. Nodes come in various types: some receive data, some transform it, and others connect to a specific destination endpoint. In addition, there are a few kinds of nodes intended specifically for error handling.
When dealing with applications composed of complex integration flows, tools for testing, problem determination, error handling, and debugging are a necessity, just as in any other kind of application development. IBM App Connect Enterprise offers the features and tools needed to establish full control over message flow execution during the development, testing, and production phases of the application lifecycle.
Problem Determination Tools
Since the IBM App Connect Enterprise Toolkit is based on the well-known Eclipse IDE, one can expect the standard debugging concepts and tools, with a few additions and improvements related to integration flow construction elements and test runs.
The list of all problem determination tools and resources could be much longer, but the following tools and elements are the ones most commonly used during the integration application lifecycle.
- Flow exerciser (App Connect Enterprise Toolkit): IDE-based development tool for viewing the message path, and the structure and content of the logical message tree, at any point in a message flow.
- Trace node: integration flow construction element; shows any part of a message at any point in the flow.
- User trace: integration-server-generated trace file; shows where the message was routed and why. The most comprehensive tool when used together with the Trace node.
- Local error log: the primary source of information; automatically records all errors, with no increase in processor usage.
- Test Client and Message flow debugger (App Connect Enterprise Toolkit): IDE-based development tools; enable adding breakpoints, stepping through the flow, and examining and changing messages as well as ESQL and Java variables.
Some of these tools are active by default and simple to use, while others require experience to configure and use efficiently. The best results are achieved by combining all of them throughout the development, testing, and production cycles.
During flow execution, when an exception is detected within a message flow node, the message and the exception information are propagated to the node's failure terminal (red symbol).
If the node does not have a failure terminal, or the terminal is not connected, App Connect Enterprise throws an exception and returns control to the closest upstream node that can process it. By default, the message is returned to the input node.
Different flow nodes can have different exit terminals, including error- and failure-handling ones such as Timeout or Catch.
Suppose that we want to receive a SOAP XML message over HTTP input, map its values to a different output XML message, and send the result to the HTTP Reply node.
Our simple flow consists of an HTTP Input node, a Map node, and an HTTP Reply node, connected in the order of the expected flow execution.
The Map node uses a custom XPath expression to calculate the Amount value inside the Statement output XML message.
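The article does not show the mapping itself, but based on the error analyzed later, the custom expression could look something like the following sketch. The $Quantity variable is an assumption for illustration; only $Price appears in the trace discussed below.

```xpath
(: hypothetical Map node expression computing the Amount element :)
xs:decimal($Price) * xs:decimal($Quantity)
```

Expressions like this cast the mapped input values to xs:decimal before multiplying them, which is exactly the step that fails when an input value is not numeric.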
If the input values of the XPath expression do not match the data type the expression expects, the Map node checks whether anything is attached to its Failure terminal. If not, no further actions are started.
You will notice that, as mentioned before, the error was sent back to the HTTP Input node and flow execution stopped.
In real-world integration flow usage, there is no point in leaving our flow like this, because nobody would see what happened when the Map node produced an error.
Hence, it is a good idea to connect another node to the failure terminal of the HTTP Input node. A Compute node gives us the opportunity to try to fix the error, or to create a meaningful message to send back to the receiving party.
Furthermore, the Compute node contains ESQL code, where one can add a breakpoint for debugging purposes.
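As a minimal sketch (not the article's actual code), an ESQL error handler of this kind could look as follows; the module name and the XML element names are assumptions:

```esql
-- Hypothetical Compute node attached to the HTTP Input node's failure terminal
CREATE COMPUTE MODULE ErrorHandler_Compute
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- Build a meaningful error reply for the calling party
		SET OutputRoot.XMLNSC.Error.Text = 'Message transformation failed';
		-- Copy the first exception's description from the exception tree
		SET OutputRoot.XMLNSC.Error.Detail =
			COALESCE(InputExceptionList.*[1].Text, 'no details available');
		RETURN TRUE;
	END;
END MODULE;
```

Propagating a reply like this to the HTTP Reply node means the caller receives a readable error instead of a bare HTTP failure.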
The integration server is not ready for debugging by default; it needs to be configured manually in the server.conf.yaml file. Under the JVM section, jvmDebugPort should be uncommented and set to the desired debug port number.
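The relevant excerpt of server.conf.yaml looks roughly like this; the port number below is an arbitrary free port chosen for illustration:

```yaml
# server.conf.yaml excerpt: uncomment jvmDebugPort and set a free port
ResourceManagers:
  JVM:
    jvmDebugPort: 9998   # port the Toolkit debugger will attach to
```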
After saving the configuration file, the server should be restarted. When we then open the integration server properties in the App Connect Enterprise Toolkit, the debug port should be visible under JVM.
Now, integration flow debugging can be started. At each breakpoint it’s possible to browse through variables and processed message content.
To find the error that caused the failure, we must navigate down through the Exception List nodes and look for the likely reason for the problem.
Besides Exception List analysis, there is a possibly better tool for determining the cause of the problem: the traditional log/trace approach.
Under the ActivityLogManager section of server.conf.yaml, the activityLogEnabled parameter can be set to true to enable server activity logging.
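The corresponding server.conf.yaml excerpt is short:

```yaml
# server.conf.yaml excerpt: enable server activity logging
ActivityLogManager:
  activityLogEnabled: true
```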
Also, to enable user trace, the integration node should be updated from the command line by executing the mqsichangetrace command.
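An illustrative invocation follows; the integration node name TESTNODE and server name default are placeholders for your own environment:

```shell
# Enable debug-level user trace for one integration server
# (-u selects user trace, -e names the server, -l sets the level,
#  -r resets the trace log)
mqsichangetrace TESTNODE -u -e default -l debug -r
```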
When this is done, integration server properties should be checked to make sure that the user trace option is set to true.
When running the integration flow debugging again, we can find the NODENAME.servername.userTrace.txt file containing detailed records of the flow execution activities. Compared to the Exception List option, here we can search directly for the error and follow the operations that preceded its occurrence.
As we can see, here we get a comprehensive set of information, starting with the error timestamp, the exact integration flow component designation ("Transformation_Map.Map"), the direction of error propagation, and the exact expression where the problem emerged. The obvious solution is to check the content of the $Price variable, which in our case is assigned the string "ABC" and, of course, is not convertible to xs:decimal.
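One way to make such an expression defensive is to test the value before casting, for example with the XPath 2.0 castable operator; the variable names and the fallback value 0 are assumptions for illustration:

```xpath
(: cast only when the value is a valid decimal; otherwise fall back to 0 :)
if ($Price castable as xs:decimal)
then xs:decimal($Price) * xs:decimal($Quantity)
else 0
```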
The integration process by definition involves many different communication channels and protocols, data standards and types, custom APIs, and data models. In such a heterogeneous environment, problems lurk constantly, whether caused by developers, inconsistent data, external API components, or the infrastructure environment. Handling problems in production integration systems, while customers expect real-time responses, data queues become overwhelmed by undelivered packages, and infrastructure resource requirements grow exponentially, can be a serious challenge. Mastering and regularly practicing problem determination tools and techniques is the only way to survive day to day in the data integration world.
If you need help with any of this, just reach out to us. As an IBM Business Partner, we will be glad to help.