Article presenting how OpenStack notifications can be consumed and configured in Zato with no programming needed.

Whereas the previous instalment focused on sending messages out to OpenStack-based systems, the current one describes how to receive information placed in OpenStack containers.

As it happens, it's only a matter of creating a connection, filling out a form and pressing OK, as below:

Screenshots

Here is what just happened after clicking OK:

  • A background asynchronous task has been started to monitor a set of OpenStack containers every 5 seconds
  • Any objects whose names match - or, depending on the configuration, don't match - a given pattern are taken into account for further processing
  • A service of one's choice will be invoked each time an object is taken off a container
  • The service will receive the contents of the object if its name matches - or, again, doesn't match - the configured pattern

And that's it, no programming is needed. Naturally, a piece of code is needed to accept the data each object holds, but this is nothing OpenStack-specific. For instance, a basic service that logs everything it receives may look like this:

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

class ProcessNewAccounts(Service):
    def handle(self):
        # Log whatever input the service was invoked with
        self.log_input()

New on GitHub, the Zato HTTP audit log allows you to store requests and responses, along with their headers, in a database that can be queried at any time.

Recognizing that certain parts of messages should never be stored anywhere because of their sensitive nature, one can also specify JSON Pointers or XPath expressions indicating which elements in a message should be masked out - for instance, user passwords and social security numbers don't necessarily belong in the audit log.
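
The masking itself boils down to evaluating the configured expressions against a message before it is stored. As a rough illustration of the idea - using plain lxml rather than Zato's actual implementation, with a made-up payload, element name and mask value - this is what masking an XML element out amounts to:

# lxml
from lxml import etree

# A made-up payload containing a field that should never reach the audit log
payload = b'<request><user>jsmith</user><password>secret</password></request>'

# Parse the document and mask out every element the expression matches
doc = etree.fromstring(payload)
for elem in doc.xpath('//password'):
    elem.text = '******'

# The password has now been replaced with asterisks
print(etree.tostring(doc))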

The database can be asked to return only those items that match given criteria - for instance, a correlation ID or the value of a header.

As with every feature, the audit log comes with both REST and SOAP APIs to facilitate the creation of interesting supplementary tools on top of what is available by default.

Screenshots

Summary: How to navigate XML documents with reusable XPath expressions, including ones making use of namespaces.

Along with JSON Pointers, presented recently, all the features in the post will be released in Zato 2.0, which is currently in development on GitHub.

XPath expressions and XML namespaces can be defined once and reused in many services using the web admin GUI, API or enmasse. For instance, here are the definitions needed to map identifiers between hypothetical ERP and CRM systems.

Screenshots
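
Under the hood these definitions are regular XPath expressions paired with a namespace map so - purely for illustration, using plain lxml rather than Zato's own API, with made-up namespace URIs and element names - the kind of lookup they enable can be reproduced as follows:

# lxml
from lxml import etree

# A hypothetical prefix-to-URI mapping, akin to namespaces defined in web-admin
ns_map = {'erp': 'http://example.com/erp'}

# A made-up request document using the ERP namespace
request = b"""<erp:order xmlns:erp="http://example.com/erp">
  <erp:CustomerId>123</erp:CustomerId>
</erp:order>"""

# Evaluate a namespaced XPath expression against the document
doc = etree.fromstring(request)
cust_id = doc.xpath('//erp:CustomerId/text()', namespaces=ns_map)[0]

print(cust_id) # 123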

And here is what a sample service may look like - notice that the response is an empty document initially and all the required elements are created on the fly, including placing them in the namespaces defined in the GUI.

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# lxml
from lxml import etree

# Zato
from zato.server.service import Service

class IdMapper(Service):
    name = 'id.mapper2'

    def handle(self):

        # Obtain an XPath wrapper for the service's request
        req_xp = self.msg.xpath()

        # Get a couple of request elements
        cust_id = req_xp.get('ERP CustomerId')
        account_id = req_xp.get('ERP AccountId')

        # Prepare response and its associated XPath wrapper.
        resp = etree.Element('resp')
        resp_xp = self.msg.xpath(resp)

        # Map request elements into response
        resp_xp.set('CRM CustomerId', cust_id)
        resp_xp.set('CRM AccountId', account_id)

        # Set a constant value, note that in response XML namespaces
        # will be added as needed.
        resp_xp.set('CRM Provider', 'ABC')

        # Return response
        self.response.payload = etree.tostring(resp, pretty_print=True)
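
And this, roughly, is what 'namespaces will be added as needed' means - shown again with plain lxml outside of Zato and with a hypothetical CRM namespace URI:

# lxml
from lxml import etree

# A hypothetical CRM namespace, mirroring a definition from the GUI
crm_ns = 'http://example.com/crm'

# Create an empty response document and a namespaced element inside it
resp = etree.Element('resp')
cust_id = etree.SubElement(resp, '{%s}CustomerId' % crm_ns, nsmap={'crm': crm_ns})
cust_id.text = '123'

# The crm prefix and its URI appear in the serialized output automatically
print(etree.tostring(resp, pretty_print=True))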

Screenshots

As always - that very code can be hot-deployed onto a cluster and reconfigured on the fly. Should any updates be needed to the XPath expressions or namespaces used, no code changes will have to be performed - such a service is purely configuration-driven and can be exposed over multiple channels, such as HTTP, AMQP, ZeroMQ or JMS WebSphere MQ.

JSON Pointer syntax, as defined in RFC 6901, offers a means to navigate through JSON documents /using/paths/in/documents - here's how to make use of such pointers in the upcoming version 2.0 of Zato.
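
To get a feel for the path syntax itself, below is a deliberately simplified sketch of how such a pointer resolves against a document - it ignores RFC 6901's escaping rules and is not how Zato handles pointers internally; the document and the path are made up:

# A made-up document and a JSON Pointer-like path into it
doc = {'customer': {'id': '123', 'address': {'city': 'Geneva'}}}

def resolve(doc, path):
    # Walk the document one path segment at a time, e.g. /customer/id
    value = doc
    for segment in path.lstrip('/').split('/'):
        value = value[segment]
    return value

print(resolve(doc, '/customer/address/city')) # Geneva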

Systems being integrated will usually offer standard ways to refer to certain objects transferred in JSON documents. For instance, a customer ID will almost always be in the same location across documents, no matter the actual API call.

We can take advantage of it and define a set of reusable JSON Pointers that will be referred to in the body of multiple services. They will be the R in IRA of the Interesting, Reusable and Atomic APIs one should strive to design.

Here come the definitions and the net result..

Screenshots

.. the service ..

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# stdlib
from json import dumps

# Zato
from zato.server.service import Service

class IdMapper(Service):
    name = 'id.mapper'

    def handle(self):

        # Obtain a JSON pointer to the service's request
        req_jsp = self.msg.json_pointer()

        # Get a couple of request elements
        cust_id = req_jsp.get('ERP CustomerId')
        account_id = req_jsp.get('ERP AccountId')

        # Prepare a response and its associated JSON pointer.
        # Note that JSON pointers can be used with regular Python dicts.
        resp = {}
        resp_jsp = self.msg.json_pointer(resp)

        # Map request elements into response
        resp_jsp.set('CRM CustomerId', cust_id)
        resp_jsp.set('CRM AccountId', account_id)

        # Return response
        self.response.payload = dumps(resp)

.. and the discussion of what has been presented:

  • A JSON Pointer by default points to the request a service receives
  • Calling .get on a pointer fetches a value from the underlying document
  • JSON Pointers can work with regular Python dicts - they're not strictly limited to JSON documents
  • Calling .set assigns a value under the path previously configured
  • When setting values, all paths that don't exist are created on the fly, similar to mkdir -p in shell - see the sketch below
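
To illustrate that last point, below is a toy version of setting a value with any missing paths created along the way - a simplification limited to plain dicts, not Zato's actual implementation; the path and value are made up:

def set_path(doc, path, value):
    # Create intermediate dictionaries as needed, much like mkdir -p
    # creates intermediate directories, and assign the value at the end.
    segments = path.lstrip('/').split('/')
    for segment in segments[:-1]:
        doc = doc.setdefault(segment, {})
    doc[segments[-1]] = value

resp = {}
set_path(resp, '/crm/customer/id', '123')

print(resp) # {'crm': {'customer': {'id': '123'}}}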

JSON Pointers can be freely reused in more than one service - and should the ERP or CRM above decide one day to change their data model, only the JSON Pointer definitions will have to be updated, with no code changes to the services necessary.

Note that the very same API is also available for XPath expressions, presented in a companion post.

Article presenting a scalable integration pattern utilizing OpenStack Swift connections, a feature to be released in Zato 2.0, to connect independent applications in an asynchronous manner.

Let's say your application receives invoices that need to be distributed to multiple target applications using a cloud storage.

It doesn't matter how many input channels there are, how many receivers there will be or what the transport protocol and data format are - the approach needs to be generic and re-usable.

Here's how one would tackle it in Python with Zato ESB and app server.

First, using the web-admin, a connection to OpenStack needs to be defined. This particular one uses Rackspace but any OpenStack provider will do.

Screenshots

Note that filling out a form suffices for a pool of clients to be created and made available on all nodes of a Zato cluster, as confirmed in server logs.

Screenshots

Next you need to hot-deploy a service, such as the one below.

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import List, Service

class InvoiceDispatcher(Service):
    name = 'invoice.dispatcher'

    class SimpleIO:
        input_required = (List('target'),)

    def handle(self):

        # Grab a connection wrapper
        conn = self.cloud.openstack.swift['Invoice Dispatcher'].conn

        # Fetch a client for that connection
        with conn.client() as client:

            # Iterate over targets and store the invoice in the cloud
            for target in self.request.input.target:
                client.put_object(target, 'Invoice', self.request.raw_request)

Note the use of SimpleIO - this is a declarative syntax for expressing requests and responses independently of how the data is transported, be it query string parameters, JSON, XML or SOAP.
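
For instance, a minimal, hypothetical service declaring both its input and output through SimpleIO could look like the one below - the service and element names are made up and only serve to show that the declaration stays the same regardless of whether a given channel carries query string parameters, JSON, XML or SOAP:

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

class GetInvoiceStatus(Service):
    name = 'invoice.get-status'

    class SimpleIO:
        input_required = ('invoice_id',)
        output_required = ('status',)

    def handle(self):

        # invoice_id arrives through whichever channel the service is exposed over
        invoice_id = self.request.input.invoice_id

        # The response element will be serialized to JSON, XML or SOAP as appropriate
        self.response.payload.status = 'Accepted ({})'.format(invoice_id)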

The InvoiceDispatcher service above expects a list of elements called 'target' to be provided on input. In this article the list will be produced using the query string, but it could just as well come from any other source - the service wouldn't care, as long as the elements were available on input.

Once the service has been deployed, a JSON channel is needed - again, filling out a form is enough.

Screenshots

And that's it. That's the whole of it. You can now invoke the service over HTTP with JSON/XML/SOAP, AMQP, ZeroMQ or perhaps WebSphere MQ, including JMS.

The trick is that no code changes would ever have been required - the same service can handle multiple channels or target applications without ever touching the code. Everything is a matter of configuration and the service itself is re-usable in many contexts (the R in IRA of Interesting, Reusable and Atomic).

Likewise, if one day you decide to change the OpenStack provider no code changes will be needed - filling out a form with new credentials would be enough. The changes will propagate throughout the whole cluster with no restarts, unless your deployment methodology needs them.

For clarity's sake, curl will now be used to invoke the newly deployed service, but the same could have been achieved if, say, AMQP with XML had been used instead of curl with JSON ..

$ curl "localhost:11223/invoice.dispatcher?target=CRM&target=Billing" -d '{"id":"123"}'

.. and the result in OpenStack-based Rackspace Cloud Files is ..

Screenshots

Note that the GUI is not the only way to configure Zato - everything the GUI does is also available through the command line, enmasse and the admin API, so while web-admin is a powerful ally, it is just one of several ways to manage Zato clusters.

Regardless of the configuration method, no restarts are required unless you need them - everything can always be reconfigured on the fly.

The next instalment will cover the other side of the story - means to receive requests over cloud connections sent by other applications connecting to Zato-based solutions.