While it is not very common to connect one's Zato environments to dozens of different AMQP or JMS WebSphere MQ queue managers at a time, one can frequently have dozens or hundreds of services or REST channels declared in a given cluster.

Thus for Zato 3.0 (which is a work in progress) a nice little option was added to paginate web-admin's search results as in the screenshot below:


In fact, all listings will sport this feature, just in case one actually does need to connect to dozens of distinct brokers simultaneously :-)

One of the many great things about Zato is how easy it is to plug new data sources and input methods into it, triggering one's SOA/API services.

For instance, Zato 2.0 does not have a web-admin GUI for file notifications, but it is still possible to listen for new or updated files in directories of choice and to invoke services each time a new event arrives, e.g. when a file is dropped into a directory. This effectively creates a new channel type in addition to what Zato comes with out of the box.

This is exactly what the script below does - it uses watchmedo, part of the watchdog package, to listen for events in a given directory. Each time anything of interest happens in that directory (here: /tmp/data), a Zato service defined below is invoked with the path to that item of interest provided on input.


watchmedo shell-command \
    --patterns="*" \
    --command='curl localhost:11223/file.notifications?path=${watch_src_path}; echo' \
    /tmp/data
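watchmedo does the heavy lifting here, but the core idea - detect new files and notify the service over HTTP - can be sketched in plain Python. The helpers below are purely illustrative, not part of Zato or watchdog; the URL mirrors the curl command above:

```python
# -*- coding: utf-8 -*-

# stdlib
import os

def new_files(before, after):
    """ Given two snapshots of a directory's contents, return the paths
    that appeared between them - this is what a file-watcher boils down to.
    """
    return sorted(set(after) - set(before))

def notification_url(path, base='http://localhost:11223/file.notifications'):
    """ Build the URL the service is invoked with, mirroring the curl
    command used by watchmedo above.
    """
    return '{}?path={}'.format(base, path)

# Example: two consecutive snapshots of /tmp/data
before = ['a.txt']
after = ['a.txt', 'b.txt']

for name in new_files(before, after):
    print(notification_url(os.path.join('/tmp/data', name)))
```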

It's entirely up to that service to decide how to handle the data in each file - one can deliver it to any of the outgoing connections in Zato using, for instance, AMQP, SMTP, FTP, ElasticSearch or Solr, or do anything else that is required in a given integration scenario.


Just to exemplify the idea, in this blog post the data is simply stored in server.log:

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

class FileNotifications(Service):
    name = 'file.notifications'

    class SimpleIO(object):
        input_required = ('path',)

    def handle(self):

        # Read the data in, making sure the file is closed afterwards
        with open(self.request.input.path) as f:
            data = f.read().strip()

        # Since this is just an example, only log the contents of what was read in
        self.logger.info('Data from %s is `%s`', self.request.input.path, data)

Such a service needs to be mounted on an HTTP channel, as below.
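For reference, the channel's essential settings, as inferred from the curl command and the service above, are along these lines:

```
Name:        file.notifications
URL path:    /file.notifications
Service:     file.notifications
Data format: (none)
Security:    No security definition
```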


If you have not done so yet, create a directory /tmp/data and start the watchmedo script above before continuing with the next steps.

Now we can create a new file and observe in server.log how the service reacts to it:
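For instance, assuming the directory from the previous steps (the file name and contents are arbitrary):

```shell
$ mkdir -p /tmp/data
$ echo 'Hello Zato' > /tmp/data/hello.txt
```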


Sure enough, server.log confirms that everything works as expected:


As of today, the mailing list used by Zato has been retired in favour of the new forum that will let us provide a great community experience.

All of the contents of the mailing list have already been migrated to the forum, so all the terrific questions, answers and ideas posted to the list are still accessible there.

Speaking of the migration - a Python tool was written to automate the process and its source code is published on GitHub. It can be taken and forked at will, with the hope that it serves as a source of inspiration on how to use the Discourse API from Python code.

Since its inception, Zato has always offered hot-deployment, which means that it is possible to update code of one's SOA/API services without restarting servers - simply upload a new version of a given Python module and it is automatically distributed to all servers forming a cluster.

The next major version will extend it to user configuration as well - so you will be able to edit any .ini config file on any server and all changes will be propagated to other servers each time that file is saved, just like with services today.

In Zato 2.0 a little piece of code such as the one below can be used to achieve the same effect.

Essentially - after updating a config file, for instance user.conf, you need to invoke the 'util.config.reload' service, which can be done either directly from web-admin, from the command line, or over HTTP, AMQP, ZeroMQ or any other channel.

This will publish an internal message to all servers prompting them to re-read all user-provided configuration files.

In this manner, there is even less need to restart servers - after all, why restart the whole server if only a file or two changed?

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.common.broker_message import SERVICE
from zato.common.util import get_config, new_cid
from zato.server.service import Service

class _Reload(Service):
    name = 'util.config._reload'

    def handle(self):

        # Server-wide configuration, including the list of user config files
        server_conf = get_config(self.server.repo_location, 'server.conf')

        # Re-read each user-provided config file and store it on the server
        # under that file's name. Note: this assumes that server.conf's
        # [user_config] stanza maps names to file names resolvable against
        # the server's config repository.
        for name, path in server_conf.user_config.items():
            self.server.user_config[name] = get_config(self.server.repo_location, path)

        self.logger.info('User config reloaded')

class Reload(Service):
    name = 'util.config.reload'

    def handle(self):
        msg = {}
        msg['action'] = SERVICE.PUBLISH.value
        msg['service'] = _Reload.name
        msg['payload'] = ''
        msg['cid'] = new_cid()

        # Publish the message to all servers in the cluster so that each one
        # re-reads its user-provided configuration files.
        self.broker_client.publish(msg)
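Under the hood, re-reading a user-provided config file amounts to parsing an INI file again. The snippet below is a stdlib-only illustration of that step, using Python 3's configparser, with a made-up file name and keys - it is not Zato's actual implementation:

```python
# -*- coding: utf-8 -*-

# stdlib
import configparser

# Sample user.conf-style INI data - section and keys are made up
USER_CONF = """
[backend]
address = http://example.com
timeout = 30
"""

def reload_config(data):
    """ Parse the INI data anew - a reload boils down to re-reading the file. """
    config = configparser.ConfigParser()
    config.read_string(data)
    return config

config = reload_config(USER_CONF)
print(config['backend']['address'])
print(config.getint('backend', 'timeout'))
```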


While there have always been plenty of options to install Zato with - under RHEL/CentOS, Ubuntu, Debian or Docker - a newly added chapter of the documentation now explains how to install the Python-based integration platform from source code as well.

This is essentially two commands, one to obtain the code from GitHub and another to run the installer.

$ git clone https://github.com/zatosource/zato && cd zato/code
$ ./install.sh

Alright, and a third one to confirm the installation :-)

$ ./bin/zato --version
Zato 3.0.0pre1.rev-21dbcdfa

So here you have it - installing Zato from source. It will surely come in handy if you would like to try out Zato on a system for which there are no pre-built packages. In such cases, please drop an email to info@zato.io telling us where else, in addition to the already supported systems, you would like Zato to run. Thanks!