Article presenting a scalable integration pattern utilizing OpenStack Swift connections, a feature to be released in Zato 2.0, to connect independent applications in an asynchronous manner.

Let's say your application receives invoices that need to be distributed to multiple target applications using cloud storage.

It doesn't matter how many input channels there are, how many receivers there will be, or what the transport protocols and data formats are - the approach needs to be generic and re-usable.

Here's how one would tackle it in Python with Zato ESB and app server.

First, using the web-admin, a connection to OpenStack needs to be defined. This particular one uses Rackspace but any OpenStack provider will do.



Note that filling out a form suffices for a pool of clients to be created and made available on all nodes of a Zato cluster, as confirmed in server logs.


Next you need to hot-deploy a service, such as the one below.

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import List, Service

class InvoiceDispatcher(Service):
    name = 'invoice.dispatcher'

    class SimpleIO:
        input_required = (List('target'),)

    def handle(self):

        # Grab a connection wrapper
        conn = self.cloud.openstack.swift['Invoice Dispatcher'].conn

        # Fetch a client for that connection
        with conn.client() as client:

            # Iterate over targets and store the invoice in the cloud
            for target in
                client.put_object(target, 'Invoice', self.request.raw_request)

Note the use of SimpleIO - this is a declarative syntax for expressing requests and responses independent of how the data is transported, in query string parameters, JSON, XML or SOAP.

This service expects a list of elements called 'target' to be provided on input. In the article, the list will be produced using the query string but it could have been any other source as well and the service wouldn't care as long as they were available on input.
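To illustrate the idea, here is a plain-Python sketch of what producing that list from a query string amounts to, using the standard library's parse_qs. SimpleIO does this mapping for you behind the scenes - this snippet only shows the underlying principle of repeated parameters becoming a list.

```python
from urllib.parse import parse_qs, urlsplit

# A URL with two 'target' parameters, as in the curl invocation
# used later in the article
url = 'http://localhost:11223/invoice.dispatcher?target=CRM&target=Billing'

# parse_qs groups repeated parameters into lists - conceptually,
# this is the list a SimpleIO List('target') element receives
params = parse_qs(urlsplit(url).query)

print(params['target'])  # ['CRM', 'Billing']
```

Had the same data arrived as JSON or XML instead, SimpleIO would still hand the service the same list - that is the point of keeping I/O declarative.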

Once the service has been deployed, a JSON channel is needed - again, filling out a form is enough.


And that's it. That's the whole of it. You can now invoke the service over HTTP with JSON/XML/SOAP, AMQP, ZeroMQ or perhaps WebSphere MQ, including JMS.

The trick is that no code changes are ever required - the same service can handle multiple channels or target applications without touching the code. Everything is a matter of configuration and the service itself is re-usable in many contexts (the R in IRA - Interesting, Reusable, Atomic).
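That reusability boils down to the handler depending only on a list of targets and a raw payload. A minimal sketch outside of Zato makes this visible - here a hypothetical in-memory stub stands in for the Swift client, exposing the same put_object signature the service uses:

```python
class StubSwiftClient:
    """A hypothetical in-memory stand-in for a Swift client,
    mimicking the put_object call the service relies on."""
    def __init__(self):
        self.containers = {}

    def put_object(self, container, name, contents):
        self.containers.setdefault(container, {})[name] = contents

def dispatch(client, targets, payload):
    # The same fan-out loop the service's handle method runs -
    # one copy of the payload stored per target container
    for target in targets:
        client.put_object(target, 'Invoice', payload)

client = StubSwiftClient()
dispatch(client, ['CRM', 'Billing'], '{"id":"123"}')
print(sorted(client.containers))  # ['Billing', 'CRM']
```

Because dispatch knows nothing about where the targets or the payload came from, swapping HTTP for AMQP or JSON for XML changes nothing in the logic itself.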

Likewise, if one day you decide to change the OpenStack provider, no code changes will be needed - filling out a form with new credentials will be enough. The changes will propagate throughout the whole cluster with no restarts, unless your deployment methodology requires them.

For clarity's sake, curl will now be used to invoke the newly deployed service, but the same would have been achieved if, say, AMQP with XML had been used instead of curl with JSON ..

$ curl "localhost:11223/invoice.dispatcher?target=CRM&target=Billing" -d '{"id":"123"}'

.. and the result in OpenStack-based Rackspace Cloud Files is ..



Note that the GUI is not the only way to configure Zato - everything it does is also available through the command line, enmasse, and the admin API, so while the web-admin is a powerful ally, it is not the only means of managing Zato clusters.

Regardless of the config method, no restarts are required unless you need them - everything can always be reconfigured on the fly.

The next instalment will cover the other side of the story - means to receive requests over cloud connections sent by other applications connecting to Zato-based solutions.

Check out the command line snippet and screenshot below - as of recent git master versions on GitHub, it's possible to use MySQL as Zato's SQL Operational Database (ODB). This is in addition to the previously supported databases - PostgreSQL and Oracle DB.

The command shown was taken straight from the tutorial - the only difference is that MySQL has been used instead of PostgreSQL.

$ zato quickstart create ~/env/qs-1 mysql localhost 3306 zato1 zato1 localhost 6379

ODB database password (will not be echoed): 
Enter the odb_password again (will not be echoed): 

Key/value database password (will not be echoed): 
Enter the kvdb_password again (will not be echoed): 

[1/8] Certificate authority created
[2/8] ODB schema created
[3/8] ODB initial data created
[4/8] server1 created
[5/8] server2 created
[6/8] Load-balancer created
Superuser created successfully.
[7/8] Web admin created
[8/8] Management scripts created

Quickstart cluster quickstart-887030 created
Web admin user:[admin], password:[ilat-edan-atey-uram]
Start the cluster by issuing the /home/dsuch/env/qs-1/ command
Visit for more information and support options


A newly added Zato feature lets one update HTTP timeouts on the fly, without any redeployments.

It's always been possible to cap a given HTTP connection with a timeout. However, changing the value required either redeploying a service or keeping it in an external store, such as Redis, queried each time a request was made. While very lightweight, these operations were still something to remember about.

Thus recent GitHub versions allow one to update the value on the fly, with no redeployments or restarts. It works with JSON, SOAP (both Suds and string-based) and any other outgoing HTTP connection.
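Conceptually, the feature amounts to the connection wrapper reading its timeout from shared configuration each time a request is made, so an updated value takes effect immediately. A rough sketch of that lookup-per-request pattern, with hypothetical names not taken from Zato's actual internals:

```python
import threading

class ConnConfig:
    """Hypothetical shared connection config - in Zato the updated
    value arrives from web-admin and propagates across the cluster."""
    def __init__(self, timeout):
        self._lock = threading.Lock()
        self._timeout = timeout

    @property
    def timeout(self):
        with self._lock:
            return self._timeout

    @timeout.setter
    def timeout(self, value):
        with self._lock:
            self._timeout = value

config = ConnConfig(timeout=10)

def invoke(url, config):
    # The timeout is read at call time, not at deployment time,
    # so changing config.timeout needs no restart or redeployment
    return 'GET {} (timeout={}s)'.format(url, config.timeout)

print(invoke('http://example.com', config))  # timeout=10s
config.timeout = 30
print(invoke('http://example.com', config))  # timeout=30s
```

The design choice worth noting is that the per-request lookup is local and cheap, unlike querying an external store such as Redis on each request.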



The feature will be released in Zato 2.0 but it's already available in git master.

The steps below describe what is needed to install Zato ESB and app server under OS X.

osx$ mkdir ~/zato && cd ~/zato
osx$ curl -O
osx$ vagrant up
[snip output]
  • ssh into the newly provisioned virtual machine - Zato is installed in /opt/zato, default username/password zato/zato
osx$ vagrant ssh

And that's it - Zato is installed in the VM so you can head over to the main documentation site.






Kudos to Ernesto Revilla Derksen for his assistance in preparing the installation guide.

Recent works on Zato's command line interface resulted in several improvements that are available on GitHub, slated for delivery in the next release.

  • zdaemon has been replaced with sarge - this resulted in a significant speedup and servers now start much faster.

  • Improved configuration checks provide immediate feedback when a component is found to have been misconfigured - TCP ports are checked to be free, PostgreSQL/Oracle and Redis are pinged, and existing pidfiles prevent already-running components from being started twice.

  • All the components' start commands grew a new --fg flag indicating that a given component - servers, the Django-based web admin or load-balancers - should start in the foreground instead of going into background daemon mode. Components started this way can be stopped with Ctrl-C.
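Assuming a server created under ~/env/qs-1, as in the quickstart above, starting it in the foreground would look along these lines (the path is illustrative):

```shell
# Start a server in the foreground - logs go to the terminal
# and Ctrl-C stops the component
$ zato start ~/env/qs-1/server1 --fg
```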