Zato 3.0, an API-oriented enterprise integration platform and backend application server, ships with Single Sign-On and User Management APIs whose many exciting features are detailed in this blog post.


  • No need for maintaining one's own user database

  • Everything is API-based - user creation, updates, logging in, logging out, checking access, creating sessions, validating sessions, search, there is an API call for everything

  • Strong encryption and safe data storage assist in achieving compliance with regulations such as HIPAA or EU GDPR

  • APIs exist for both REST and Python calls which means that everything is also available to user-based services communicating through additional protocols, such as AMQP, WebSockets, ZeroMQ, IBM MQ or any other that Zato supports

  • Comes with a built-in workflow for user signup, including user approval and welcome messages - just fill in the email templates

  • Personally Identifiable Information (PII) can be optionally encrypted and decrypted without any programming needed

  • Both users and their sessions can be given arbitrary key/value tags, also optionally encrypted and decrypted on the fly

  • Users can be required to log in from selected applications only

  • Users can be required to access APIs from selected IP addresses only

  • Passwords are always hashed (PBKDF2) and, by default, encrypted as well (Fernet)

  • PBKDF2 parameters can be easily fine-tuned in each environment separately

  • Configurable warnings of an approaching password expiry

  • Password strength enforcement, including length checks and blacklisting of the most commonly used ones

  • Audit log keeps track of who accesses personal information and for what purpose

  • Clearly defined roles - regular users and admins (super-users)

  • Convenient command line tools for scripted management of user accounts, including typical tasks such as resetting a user's password or locking and unlocking an account

  • Extensive documentation covering the functionality, including dozens of REST and Python examples
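Two of the points above, PBKDF2 tuning and password strength enforcement, can be pictured with a few lines of plain Python. This is only a sketch - the parameter names, cost factors and the blacklist below are made up for the example and are not Zato's actual settings:

```python
import hashlib
import os

# An illustrative blacklist - a real list of common passwords is much longer
COMMON_PASSWORDS = {'123456', 'password', 'qwerty', 'letmein'}

def is_password_acceptable(password, min_length=8):
    """ Reject passwords that are too short or too commonly used.
    """
    return len(password) >= min_length and password.lower() not in COMMON_PASSWORDS

def hash_password(password, salt=None, rounds=100000):
    """ Derive a PBKDF2 hash - 'rounds' is the cost factor tuned per environment.
    """
    salt = salt or os.urandom(16)
    return salt, hashlib.pbkdf2_hmac('sha512', password.encode('utf8'), salt, rounds)

assert not is_password_acceptable('qwerty')            # Blacklisted
assert not is_password_acceptable('short')             # Too short
assert is_password_acceptable('correct-horse-battery')

# An environment needing stronger hashes can simply raise the rounds count
salt, digest = hash_password('my-secret', rounds=200000)
```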

The functionality is a major addition to Zato in version 3.0 and can be expected to expand with each new release, including support for additional authentication methods and interoperability with existing authentication protocols, yet in its initial form it can already handle a lot of use-cases and processes.

In particular, if you are creating applications that would not otherwise need a full server nor a database, e.g. single-page apps or mobile ones, be sure to check the new APIs out!

With the addition of WebSocket channels in Zato 3.0, a question arises of how to combine both HTTP and WebSocket channels in Zato environments. This article shows how a Docker container running in AWS can be configured to expose a single port that handles both kinds of requests, including health-status checks from Amazon's ALB (Application Load Balancer).


The configuration we would like to arrive at is in the diagram below:


  • There is an ALB (Application Load Balancer) which sends HTTP pings to Zato, expecting an HTTP 200 OK response if everything is fine
  • There are WebSocket-based clients (WSX) that send their own business requests, independent of regular HTTP ones
  • A Docker container holds a Zato environment, publishing port 11224 to the outside world, to ALB and WSX
  • Inside the container, Zato's load-balancer, based on HAProxy, distributes requests to individual TCP sockets of a Zato server
  • If the request begins with a well-known prefix (here, /ws), it is forwarded to a WebSocket channel on port 48902
  • All other requests, including the ones from ALB, will go to the default HTTP port at 17010
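The routing decision described above can be sketched as a small Python function. This only mirrors the logic for illustration - the actual routing is performed by HAProxy inside the container:

```python
def pick_backend(path, headers):
    """ Mirror the HAProxy ACLs - WebSocket traffic goes to the channel's port,
    everything else to the server's default HTTP port.
    """
    is_websocket = path.startswith('/ws') or headers.get('Upgrade', '').lower() == 'websocket'
    return 48902 if is_websocket else 17010

assert pick_backend('/ws/client1', {}) == 48902
assert pick_backend('/other', {'Upgrade': 'WebSocket'}) == 48902
assert pick_backend('/zato/ping', {}) == 17010  # E.g. the ALB health check
```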

WebSocket channel

The first step is to ensure that a WebSocket channel's path begins with the desired prefix, such as /ws, as in the screenshot.



Now, HAProxy needs to be reconfigured in its source code view, from web-admin:


  • The changes required need to go to the stanza called 'front_http_plain' - right after its current content, add the lines with logic that will separate incoming requests into WebSocket ones and any others.

  • Next, add a new HAProxy backend pointing to the WebSocket channel created earlier.

  • Now, stop and start the load-balancer again

# ##############################################################################

frontend front_http_plain


    acl is_websocket path_beg /ws
    acl is_websocket hdr(Upgrade) -i WebSocket
    use_backend websockets if is_websocket

# ##############################################################################

backend websockets
    mode http
    option http-server-close
    option forceclose

    # 127.0.0.1:48902 is the WebSocket channel's TCP socket inside the container
    server http_plain--server1 127.0.0.1:48902 check inter 2s rise 2 fall 2

# ##############################################################################

That's it

There are no more steps as far as Zato is concerned - if a Docker container is now started with port 11224 mapped to 11223 inside the container, all the external applications will be able to access Zato-based services using both HTTP and WebSockets through a single port:

$ docker run -p 11224:11223 zato

WebSphere MQ is a messaging middleware by IBM - a message queue server - and this post shows how to integrate with MQ from Python and Zato.

The article will go through a short process that will let you:

  • Send messages to queues in 1 line of Python code
  • Receive messages from queues without coding
  • Seamlessly integrate with Java JMS applications - frequently found in WebSphere MQ environments
  • Push MQ messages from Django or Flask


Preliminary steps

  • Obtain connection details and credentials to the queue manager that you will be connecting to:

    • host, e.g.
    • port, e.g. 1414
    • channel name, e.g. DEV.SVRCONN.1
    • queue manager name (optional)
    • username (optional)
    • password (optional)
  • Install Zato

  • On the same system that Zato is on, install a WebSphere MQ Client - this is an umbrella term for a set of development headers and libraries that let applications connect to remote queue managers

  • Install PyMQI - an additional dependency implementing the low-level proprietary MQ protocol. Note that you need to use the pip command that Zato ships with:

# Assuming Zato is in /opt/zato/current
zato$ cd /opt/zato/current/bin
zato$ ./pip install pymqi
  • That is it - everything is installed and the rest is a matter of configuration

Understanding definitions, outgoing connections and channels

Everything in Zato revolves around re-usability and hot-reconfiguration - each individual piece of configuration can be changed on the fly, while servers are running, without restarts.

Note that the concepts below are presented in the context of WebSphere MQ but they apply to other connection types in Zato too.

  • Definitions - encapsulate common details that apply to other parts of configuration, e.g. a connection definition may contain remote host and port
  • Outgoing connections - objects through which data is sent to remote resources, such as MQ queues
  • Channels - objects through which data can be received, for instance, from MQ queues

It is usually most convenient to configure environments during development using the web-admin GUI but afterwards this can be automated with enmasse, the API or the command-line interface.

Once configuration is defined, it can be used from Zato services which in turn represent APIs that Zato clients invoke. Then, external applications, such as a Django or Flask one, will connect using HTTP to a Zato service which will, on their behalf, send messages to MQ queues.

Let's use web-admin to define all the Zato objects required for MQ integrations. (Hint: web-admin by default runs on http://localhost:8183)


  • Go to Connections -> Definitions -> WebSphere MQ
  • Fill out the form and click OK
  • Observe the 'Use JMS' checkbox - more about it later on


  • Note that a password is by default set to an unusable one (a random UUID4) so once a definition is created, click Change password to set it to the required one


  • Click Ping to confirm that connections to the remote queue manager can be established


Outgoing connection

  • Go to Connections -> Outgoing -> WebSphere MQ
  • Fill out the form - the connection's name is just a descriptive label
  • Note that you do not specify a queue name here - this is because a single connection can be used with as many queues as needed


  • You can now send a test MQ message directly from web-admin after clicking Send a message



API services

  • Having carried out the steps above, you can now send messages to queue managers from web-admin, which is a great way to confirm MQ-level connectivity. The crucial point of using Zato, however, is to offer API services to client applications, so let's create two services now - one for sending messages to MQ and one that will receive them.
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

class MQSender(Service):
    """ Sends all incoming messages as they are straight to a remote MQ queue.
    def handle(self):

        # This single line suffices
        self.out.wmq.send(self.request.raw_request, 'customer.updates', 'CUSTOMER.1')
  • In practice, a service such as the one above could perform transformation on incoming messages or read its destination queue names from configuration files but it serves to illustrate the point that literally 1 line of code is needed to send MQ messages

  • Let's create a channel service now - one that will act as a callback invoked for each message consumed off a queue:

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

class MQReceiver(Service):
    """ Invoked for each message taken from a remote MQ queue
    def handle(self):

But wait - if this is the service that is a callback one then how does it know which queue to get messages from?

That is the key point of Zato architecture - services do not need to know it and unless you really need it, they won't ever access this information.

Such details are configured externally (for instance, in web-admin) and a service is just a black box that receives some input, operates on it and produces output.

In fact, the very same service could be mounted not only on WebSphere MQ channels but also on REST or AMQP ones.

Without further ado, let's create a channel in that case, but since this is an article about MQ, only this connection type will be shown even if the same principle applies to other channel types.


  • Go to Connections -> Channels -> WebSphere MQ
  • Fill out the form and click OK
  • Data format may be JSON, XML or blank if no automatic de-serialization is required


After clicking OK, a lightweight background task will start to listen for messages pertaining to a given queue and, upon receiving any, the service configured for the channel will be invoked.

You can start as many channels as there are queues to consume messages from, that is, each channel = one input queue and each channel may declare a different service.

JMS Java integration

In many MQ environments the majority of applications will be based on Java JMS and Zato implements the underlying wire-level MQ JMS protocol to let services integrate with such systems without any effort from a Python programmer's perspective.

When creating connection definitions, merely check Use JMS and everything will be taken care of under the hood - all the necessary wire headers will be added or removed when it needs to be done.


No restarts required

It's worth emphasizing again that at no point are server restarts required to reconfigure connection details.

No matter how many definitions, outgoing connections and channels there are, and no matter what kind they are (MQ or not), changing any of them will only update that very one across the whole cluster of Zato servers, without interrupting other API services running concurrently.

Configuration wrap-up

  • MQ connection definitions are re-used across outgoing connections and channels
  • Outgoing connections are used by services to send messages to queues
  • Data from queues is read through channels that invoke user-defined services
  • Everything is reconfigurable on the fly

Let's now check how to add a REST channel for the MQSender service thus letting Django and Flask push MQ messages.

Django and Flask integration

  • Any Zato-based API service can be mounted on a channel
  • For Django and Flask, it is most convenient to mount one's services on REST channels and invoke them using the zato-client from PyPI
  • zato-client is a set of convenience clients that lets any Python application, including ones based on Django or Flask, invoke Zato services in just a few steps
  • There is a dedicated chapter in documentation about Django and Flask, including a sample integration scenario
  • It's recommended to go through the chapter step-by-step - since all Zato configuration objects share the same principles, the whole of its information applies to any sort of technology that Django or Flask may need to integrate with, including WebSphere MQ
  • After completing that chapter, to push messages to MQ, you will only need to:
    • Create a security definition for a new REST channel for Django or Flask
    • Create the REST channel itself
    • Assign a service to it (e.g. MQSender)
    • Use a Python client from zato-client to invoke that channel from Django or Flask
    • And that is it - no MQ programming is needed to send messages to MQ queues from any Python application :-)
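As a sketch of those last steps from the Django or Flask side, a plain HTTP call is all that is needed. The URL, credentials and payload below are hypothetical and would match whatever REST channel and security definition you created:

```python
import json

def build_mq_push(customer_id, data):
    """ Build the JSON payload for the hypothetical REST channel fronting MQSender.
    """
    return {'customer_id': customer_id, 'data': data}

def push_to_mq(payload, url='http://localhost:11223/customer/updates'):
    # Any HTTP library will do - the channel's security definition
    # decides which credentials to send
    import requests
    return requests.post(url, data=json.dumps(payload), auth=('api-user', 'api-password'))

# Example (requires a running Zato server):
# push_to_mq(build_mq_push('123', {'status': 'updated'}))
```

Zato receives such a request, invokes MQSender and the message lands on an MQ queue - no MQ-specific code in the Django or Flask application at all.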


  • Zato lets Python programmers integrate with WebSphere MQ with little to no effort
  • Built-in support for JMS lets one integrate with existing Java applications in a transparent manner
  • Built-in Python clients offer trivial access to Zato-based API services from other Python applications, including Django or Flask

Where to next? Start off with the tutorial, then consult the documentation, which has a lot of information for all types of API and integration projects, and have a look at support options in case you need absolutely any sort of assistance!

With Zato 3.0, we are excited to announce some popular development environments now have integrated support for hot-deploying Zato services directly from the comfort of your favourite text editor.

Hot-deployment obviates the need to visit the administration GUI while modifying an in-development service, or to have filesystem access to the machines hosting your cluster - previously the primary options for rapid deployment before the arrival of these IDE extensions.

Initially we are releasing extensions for PyCharm and Visual Studio Code, focusing on a great story around hot-deployment; however, this is only the start. We have some very exciting integrations planned to simplify your Zato development workflow coming soon!


As our plugin is published in the JetBrains Plugin Repository, installation on PyCharm is achieved simply by visiting Preferences -> Plugins, then clicking Browse repositories:


Next, search for Zato and click the Install button:


Once installation completes, visit Preferences -> Languages & Frameworks -> Zato to populate connection information for your development cluster:


Finally, with a service module open in the editor, visit Tools -> Upload to default Zato server to trigger upload:


Alternatively, automatic deployment can be triggered on every file save, by including the magic ide-deploy=True marker comment somewhere in the file:


Success is indicated in the status bar at the bottom of the window:


Visual Studio Code

With Visual Studio Code, simply visit the Extensions Marketplace, either in your browser or within the application by pressing the Ctrl+Shift+X hotkey:


After installation completes, a quick visit to the settings panel (Ctrl+,) is needed to configure your cluster connection:


While editing any Python script, you can deploy to the configured Zato cluster with a simple icon click or by pressing the Ctrl+Shift+L hotkey:


Just as in PyCharm, automatic deployment can also be triggered on file save, by including the magic ide-deploy=True marker comment somewhere in the file:


Success is indicated through a status panel that appears following activation:


For more information, please refer to the setup instructions provided in the extension's README, which is presented as part of the Marketplace user interface embedded within Visual Studio Code.

This sounds great, but I'm on an older release

Don't worry, we have you covered! The Zato 2.0 documentation chapter on IDE integration includes a procedure for modifying older clusters to support the new endpoint. The process only takes a few moments, and the result works just great.


This release marks the foundational work for many exciting future integrations. We are already playing with ideas for, or actively planning, future support for:

  • Integrated service name completion for self.invoke()
  • In-IDE API specification documentation browser
  • In-IDE service invoker
  • Zato API code completion
  • Server log viewer

Stay tuned for more information on when these features may land!

One of many exciting features that the upcoming Zato 3.0 release will bring is API caching - this post provides an overview of functionality that is already available if you install Zato directly from source code.

What is new

  • In-RAM caches accessible to Python-based services and external clients (including REST and other formats or protocols)
  • A convenient set of practical cache commands based on real-world needs
  • GUI and cache contents search
  • HTTP channel caching
  • Integration with memcached

Built-in caches

In-RAM caches are perfect for rarely changing yet frequently read business data. For instance:

  • A customer's products, stored in a remote SQL database, will change infrequently but may be read very often
  • A user's permissions may be read from multiple databases and the process may take several seconds but once read, they will not change much and fast access is of primary importance.

Each Zato cluster has a default cache and there can be multiple independent caches for different purposes, each with its own configuration.

In v3.0, there is no persistent storage for caches and synchronization between servers in a Zato cluster is performed in the background. Both aspects will be extended in future releases.

Real-world commands

Built-in caches offer all the standard CRUD commands operating on cache entries but they come in several versions, each of which is meant to facilitate operations in real-world applications.

All of the commands are available to Python services and external API clients through dedicated REST endpoints.

More commands will be added in the future, based on feedback from users employing caches in projects.

Command              Description
get                  Returns an entry matching input key
get-by-prefix        Returns all entries whose keys start with a given prefix
get-by-suffix        Returns all entries whose keys end with a given suffix
get-by-regex         Returns all entries whose keys match a regular expression
get-contains         Returns entries with keys that contain a single input substring
get-contains-all     Returns entries with keys that contain all of input substrings
get-contains-any     Returns entries with keys that contain at least one of input substrings
get-not-contains     Returns entries with keys that do not contain the input substring
set                  Sets input value for a given key
set-by-prefix        Sets input value for all keys starting with a given prefix
set-by-suffix        Sets input value for all keys ending with a given suffix
set-by-regex         Sets input value for all keys matching a regular expression
set-contains         Sets input value for all keys that contain a single input substring
set-contains-all     Sets input value for keys that contain all of input substrings
set-contains-any     Sets input value for keys that contain at least one of input substrings
set-not-contains     Sets input value for keys that do not contain the input substring
delete               Deletes a given key
delete-by-prefix     Deletes all keys starting with a given prefix
delete-by-suffix     Deletes all keys ending with a given suffix
delete-by-regex      Deletes all keys matching a regular expression
delete-contains      Deletes all keys that contain a single input substring
delete-contains-all  Deletes keys that contain all of input substrings
delete-contains-any  Deletes keys that contain at least one of input substrings
delete-not-contains  Deletes keys that do not contain the input substring
expire               Sets expiration for a given key
expire-by-prefix     Sets expiration for all keys starting with a given prefix
expire-by-suffix     Sets expiration for all keys ending with a given suffix
expire-by-regex      Sets expiration for all keys matching a regular expression
expire-contains      Sets expiration for all keys that contain a single input substring
expire-contains-all  Sets expiration for keys that contain all of input substrings
expire-contains-any  Sets expiration for keys that contain at least one of input substrings
expire-not-contains  Sets expiration for keys that do not contain the input substring
keys                 Returns all keys in cache
values               Returns all values in cache
items                Returns all items (entries) in cache
clear                Clears out the cache completely
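To make the semantics of the pattern-based variants concrete, here is how get-by-prefix and get-contains-all behave, modelled over a plain dict. This illustrates behaviour only - it is not how Zato implements the commands:

```python
def get_by_prefix(cache, prefix):
    """ Returns all entries whose keys start with a given prefix.
    """
    return {key: value for key, value in cache.items() if key.startswith(prefix)}

def get_contains_all(cache, substrings):
    """ Returns entries with keys that contain all of the input substrings.
    """
    return {key: value for key, value in cache.items() if all(s in key for s in substrings)}

cache = {'cust-123-data': 'a', 'cust-456-data': 'b', 'user-123': 'c'}

assert get_by_prefix(cache, 'cust-') == {'cust-123-data': 'a', 'cust-456-data': 'b'}
assert get_contains_all(cache, ['cust', '123']) == {'cust-123-data': 'a'}
```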

Additionally, Python code can access dict-like methods .iterkeys, .itervalues and .iteritems that return iterators over keys, values or items without building result lists upfront.

Cache configuration

Each cache can have its own policy that is best suited to a given purpose:

  • Maximum cache size, upon reaching which the least commonly used entries will be evicted
  • Max item size - values that are too big will be rejected
  • Whether a given item's expiration time should be extended on each get or set operation
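The policy items above can be sketched with a small OrderedDict-based cache: a size bound with eviction and a maximum item size. This is purely an illustration - the eviction order here is least-recently-used, one common policy, and none of it is Zato's actual code:

```python
from collections import OrderedDict

class BoundedCache:
    """ A minimal cache with a size bound and a maximum item size.
    """
    def __init__(self, max_size=3, max_item_size=100):
        self.max_size = max_size
        self.max_item_size = max_item_size
        self.data = OrderedDict()

    def set(self, key, value):
        if len(str(value)) > self.max_item_size:
            raise ValueError('Value too big')
        self.data[key] = value
        self.data.move_to_end(key)          # Most recently used goes last
        if len(self.data) > self.max_size:
            self.data.popitem(last=False)   # Evict the least recently used entry

    def get(self, key):
        self.data.move_to_end(key)          # A read also counts as recent use
        return self.data[key]

cache = BoundedCache(max_size=2)
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')        # 'a' is now more recently used than 'b'
cache.set('c', 3)     # Evicts 'b'
assert sorted(cache.data) == ['a', 'c']
```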




As with other Zato features, a GUI in web-admin lets one easily manage cache definitions, add new or update existing entries from the browser, and look up items by keys, values or both.



Auto-caching in HTTP channels

Caching of responses, either REST or SOAP, is such a common activity that it is available with one click for all HTTP channels.

When creating a new channel or updating an existing one, pick the cache definition that should hold responses for that channel and servers will automatically cache them on the first request.

On subsequent invocations, all responses from that channel will be served from cache until underlying keys expire or are deleted.
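The idea can be pictured with a few lines of Python - an illustration of the caching behaviour only, not of Zato's internals:

```python
import time

class ResponseCache:
    """ Caches channel responses until their TTL expires (illustrative only).
    """
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.entries = {}

    def get_response(self, request, handler):
        entry = self.entries.get(request)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                      # Served from cache
        response = handler(request)              # First request: invoke the service
        self.entries[request] = (response, time.time())
        return response

calls = []

def slow_service(request):
    calls.append(request)
    return 'response-for-' + request

cache = ResponseCache()
assert cache.get_response('/customer/1', slow_service) == 'response-for-/customer/1'
assert cache.get_response('/customer/1', slow_service) == 'response-for-/customer/1'
assert len(calls) == 1  # The second request was served from cache
```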


Usage from Python

The self.cache object is an entry point to all the caches from a service's perspective. For instance, to add new entries:

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# stdlib
from time import sleep

# Zato
from zato.server.service import Service

class CacheDemo(Service):
    """ A demo service that uses built-in caches.
    def get_data(self):
        """ Returns data from external systems.
        # Assume this takes a long time to return, e.g. several remote calls

        return 'MyData'

    def handle(self):

        # Get data from external resources
        data = self.get_data()

        # Store it in default cache
        self.cache.default.set('data', data)

        # Or, store it in an explicitly named cache
        cache = self.cache.get_cache('builtin', 'Customer products')
        cache.set('cust-123-data', data)

Both entries can be now found in web-admin:




All cache commands are available through REST API calls. Here, set and get are used for illustrative purposes.

Note how input can be sent in both body and query string and that the last get command obtains metadata in addition to actual business value.

$ curl -XPOST localhost:11223/zato/cache -d '{"key":"my-key", "value":"my-value"}'
$ curl -XGET localhost:11223/zato/cache?key=my-key
{"value": "my-value"}
$ curl -XGET localhost:11223/zato/cache -d '{"key":"my-key"}'
{"value": "my-value"}
$ curl -XGET localhost:11223/zato/cache -d '{"key":"my-key", "details":true}'
{"hits": 4, "key": "my-key", "last_read": "2017-12-18T11:57:52.202461",
"value": "my-value", "expiry": null, "prev_write": "2017-12-18T11:56:57.714813",
"prev_read": "2017-12-18T11:57:47.225827", "expires_at": null,
"last_write": "2017-12-18T11:57:43.200757", "position": 0}


API caching in Zato 3.0 is a new feature, based on practical needs, that will without doubt come in handy in many integration scenarios requiring efficient and convenient access to data cached in RAM.

Watch this space for future posts describing in detail commands, configuration options, response hooks or runtime usage statistics API.