zato-apitest, a spin-off from the main Zato project, lets one test APIs conveniently using assertions written in plain English, such as the ones below:

Feature: My API Test

Scenario: Check connection and hello

    Given address "http://my.address"
    Given URL path "/my/path"
    Given format "JSON"
    Given JSON Pointer "/customer" in request is "My name"

    When the URL is invoked

    Then header "Connection" starts with "keep-"
    And JSON Pointer "/hello" is one of "a,b,c"

Supported APIs include HTTP endpoints, and 100+ types of assertions can check headers, payload, JSON, XML, JSON Pointers and XPath, as well as obtain test data from the environment, SQL, CSV or Cassandra.

This post will guide you through the installation on Ubuntu 14.04, up to the point of executing the tool's built-in demo.

The steps are:

  • Install prerequisites
  • Install zato-apitest
  • Run demo

Install prerequisites

Run the commands below:

$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt-get install python-pip postgresql-server-dev-all
$ sudo apt-get install python-dev libxml2-dev libxslt1-dev
$ sudo pip install --upgrade pip

Install zato-apitest

$ sudo pip install zato-apitest

Run demo

This will set up a sample project and run a set of assertions against sample live APIs:

$ apitest demo

API testing demo

And that's it for now - stay tuned for upcoming instalments that will go through configuring scenarios connecting to a variety of APIs.

One of the nice things added in Zato 2.0 is the improved ability to store the code of one's API services directly in a server's hot-deploy directory - each time a file is saved there, it is uploaded to the server and automatically propagated to all the other nodes in the cluster the server belongs to.

Now, this in itself has been possible since the very 1.0 version, but the newest release adds a means to configure servers not to clean up the hot-deploy directory after the code has been picked up - meaning anything saved there stays until it is deleted manually.

Two cool things can be achieved thanks to it:

  • Working in deploy-on-save mode
  • Deploying code from a repository checkout

Initial steps

To make it all possible, open each server's server.conf file, find hot_deploy.delete_after_pick_up, change it from True to False and restart all servers. This is the only time they will be restarted, promise.
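If an environment has many servers, the change can be scripted. Below is a minimal sketch assuming each server.conf parses as a plain INI file; the function name and paths are illustrative, and it is prudent to back up each file and verify the result by hand:

```python
from configparser import ConfigParser

def disable_pickup_cleanup(conf_path):
    """Flip hot_deploy.delete_after_pick_up to False in a server.conf file."""
    config = ConfigParser()
    config.read(conf_path)
    config.set('hot_deploy', 'delete_after_pick_up', 'False')
    with open(conf_path, 'w') as f:
        config.write(f)

# Hypothetical server paths - adjust to your environment:
# for path in ('/home/user/zato/server1/config/repo/server.conf',):
#     disable_pickup_cleanup(path)
```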

Working in deploy-on-save mode

  • Let's say your server is in /home/user/zato/server1
  • Save your files to /home/user/zato/server1/pickup-dir
  • Each time a file is saved, note in server.log how it is picked up and deployed
  • This lets you make use of the service in the actual environment a moment after it's saved
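If your editor cannot save directly to the pickup directory, the copy step can be automated with a tiny helper - a sketch with a hypothetical function name and paths, to be adjusted to your environment:

```python
import shutil
from pathlib import Path

def deploy_on_save(module_path, pickup_dir):
    """Copy a freshly saved service module into the server's pickup
    directory, from which it will be hot-deployed automatically."""
    target = Path(pickup_dir) / Path(module_path).name
    shutil.copy(str(module_path), str(target))
    return target

# Hypothetical paths - adjust to your environment:
# deploy_on_save('/home/user/dev/my_service.py', '/home/user/zato/server1/pickup-dir')
```

Hook such a helper into an editor's on-save event and the file lands in pickup-dir the moment it is written.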



Deploying code from a repository checkout

  • Essentially, this is deploy-on-save described above working on a grander scale
  • Instead of saving individual files, everything that is needed for a given solution is stored in the hot-deploy's pickup directory in one go
  • Can be easily plugged into Jenkins or other automation tools
  • You can try it right now using this sample repository prepared for the article
  • Go to a server's pickup dir
  • Delete anything it already contains
  • Issue the command below, substituting the URL of the repository to deploy:
$ git clone <repository-URL> .
  • Witness that the two services just checked out are being nicely picked up by all servers in a cluster
  • This concludes the deployment - an environment has been just updated with newest versions of services and they are already operational, as can be confirmed in web-admin
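The clean-up-and-deploy steps above can themselves be automated - a sketch in which copying from an existing checkout stands in for git clone, and all names are assumptions:

```python
import shutil
from pathlib import Path

def deploy_checkout(checkout_dir, pickup_dir):
    """Empty the pickup directory, then copy a repository checkout into it,
    so that every server in the cluster picks the services up in one go."""
    pickup = Path(pickup_dir)
    # Delete anything the pickup dir already contains
    for item in pickup.iterdir():
        if item.is_dir():
            shutil.rmtree(str(item))
        else:
            item.unlink()
    # Copy everything over from the checkout, skipping VCS metadata
    for item in Path(checkout_dir).iterdir():
        if item.name == '.git':
            continue
        if item.is_dir():
            shutil.copytree(str(item), str(pickup / item.name))
        else:
            shutil.copy(str(item), str(pickup / item.name))
```

A script like this is also what a CI job in Jenkins or similar tools would run after checking the repository out.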



This is the first part in a series of articles describing the statistics of the Zato ESB and application server - what they provide, how to use them and how they can be efficiently managed.

The series will include:

  • Trends (this part)
  • Summaries
  • Settings and maintenance
  • Statistics API

Trends answer the one important question asked by developers and operations alike - what is currently going on and what sort of tendencies can be found in my environments?

From that follows another question - is what is happening right now typical, or should I expect difficulties ahead soon?

Note that trends are not an early-warning system and as such they don't substitute for monitoring solutions - they are instead meant to complement them with intricate, Zato-specific knowledge.

Trends are concerned with relatively short spans of time - the last hour, the last 20 minutes or the last 10 minutes. Essentially, this is a tool to give you insight into how your services behave right after you receive a signal that something unwanted might be happening right now.



When displaying trends, the screen will usually be divided into 4 parts:

  • Left side - Slowest services + Most commonly used ones
  • Right side - as above but in a different time period, the one we want to compare the left side to

Hovering over any row will highlight matching rows in all the tables - for instance, the most commonly used service doesn't necessarily have to be the slowest one, and the one that was slowest at the same time of day yesterday doesn't have to be so slow today.

Understanding the difference in usage patterns is literally a hover away.



By default the top 10 services in each category are shown, but the number can be changed to any value required.

3 quick links let one answer the most common questions:

  • How does it all compare to last hour?
  • To yesterday the same hour?
  • Last week the same day and hour?

It's also possible to pick arbitrary start/stop dates to compare to, but do note that trends are always generated on the fly from Redis and the process is CPU-intensive, so obtaining information for more than a couple of hours can result in visible CPU spikes.

Slowest services


The 6 columns displayed for each of the slowest services are:

  • M - Mean response time of that service (in ms)
  • AM - Average mean response time of all services, including that one (in ms)
  • U% - The service's usage share - of all invocations of all services, how many fell on that one
  • T% - The service's time share - of the whole time spent in all services, what percentage that service accounts for
  • TU - How many times all the services, including this one, have been invoked
  • Trend - A sparkline chart displaying how the service's mean response time fluctuated in the selected period

Hence the row in the screenshot describes a service whose mean response time was 12.72 ms, which is almost 10 times faster than the average mean response time of all services taken together (122 ms).

Out of the total of 40 invocations of all services (TU), this one accounted for 2.5% (U%), but in terms of time spent it took 18.9% (T%) of the whole time the 40 invocations required.

From the sparkline, one can also learn that the service was not used at all except for a sudden spike a few minutes ago.
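All of the columns derive from two raw figures per service - its invocation count and the total time spent in it. A sketch with hypothetical per-service data, chosen so that the first service reproduces the percentages described above:

```python
# Hypothetical raw data - (invocation count, total time spent in service, ms)
stats = {
    'service.a': (1, 12.72),
    'service.b': (14, 6.33),
    'service.c': (25, 48.25),
}

total_usage = sum(count for count, _ in stats.values())    # TU
total_time = sum(time_ms for _, time_ms in stats.values())

results = {}
for name, (count, time_ms) in stats.items():
    results[name] = {
        'M': time_ms / count,                # mean response time, ms
        'U%': 100.0 * count / total_usage,   # usage share
        'T%': 100.0 * time_ms / total_time,  # time share
        'TU': total_usage,
    }
```

With these numbers, service.a comes out at M = 12.72 ms, U% = 2.5 and T% ≈ 18.9, matching the row described above.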

Each service slower than expected is marked with a red dot to the left of its name (shown here).

Everything can also be exported to CSV.

Most commonly used


The 6 columns displayed for each of the most commonly used services are:

  • U - How many times the service has been invoked
  • Rq/s - How many requests a second that service processed - note that 0.1 req/s is the smallest value shown
  • U% - The service's usage share - of all invocations of all services, how many fell on that one (repeated)
  • T% - The service's time share - of the whole time spent in all services, what percentage that service accounts for (repeated)
  • TU - How many times all the services, including this one, have been invoked (repeated)
  • Trend - A sparkline chart displaying the service's average request rate per second

Thus in the screenshot one can find a service that was invoked 14 times (U) out of all the 40 service invocations (TU), which is exactly 35% of them (U%).

However, despite constituting 35% of all invocations, it accounted for only 9.4% (T%) of the time spent - meaning it was pretty fast.

The trend tells us that the service is mostly idle except for a brief period a couple of minutes ago.
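The Rq/s column is simply the invocation count divided by the length of the observed period. A sketch of one possible reading, in which 0.1 req/s is the smallest non-zero value displayed (the function name and window length are illustrative):

```python
def requests_per_second(invocations, period_seconds):
    """Average request rate, rounded to one decimal place, with a floor
    of 0.1 req/s for any non-zero traffic."""
    if not invocations:
        return 0.0
    rate = float(invocations) / period_seconds
    return max(round(rate, 1), 0.1)

# 14 invocations over a 10-minute (600 s) trend window
rate = requests_per_second(14, 600)
```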

Each service whose usage share exceeds what is expected is marked with a red dot to the left of its name.

And again, statistics can be exported to CSV.

Final words

Trends are a tool used for finding out or confirming whether what an environment does currently conforms to standard usage patterns or not.

Next instalment will cover summaries that let one quickly compare statistics across days, months or years.


But expanding the name this way does not convey much inherent meaning, which is why this text tries to provide as much additional information as possible, rather than just the enterprise-angle phrasing.

The no-nonsense intro to ESB and SOA has been just translated into Chinese - check it out for information on what ESB and SOA really are and how to design awesome microservices following the principles of IRA:

  • Interesting
  • Reusable
  • Atomic

Also available in Català, Português and ру́сский.




Head over to the new chapter in the Zato documentation to find out how to integrate Django and Flask applications with Zato services.

The service it uses is a hypothetical yet fully functional one which looks up and caches user data by their IDs.

The integration effort is presented from the point of view of both libraries using self-contained ready-to-use projects living on GitHub.

Simply put - it contains everything you need to integrate Django or Flask with Zato :-)

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# stdlib
from random import choice

# Zato
from zato.server.service import Service

# A set of first and last names to generate customer names from
first = ('Richard', 'Victoria', 'Sebastian', 'James', 'Hannah')
last = ('Ogden', 'Piper', 'Edmunds', 'Murray', 'Young')

# Redis key we store our cache under
CACHE_KEY = 'example:customer:by-id'

class GetCustomer(Service):
    name = 'customer.get2'

    def get_customer(self, cust_id):

        # Assume in an actual service this would look up the data in a real DB
        return '{} {}'.format(choice(first), choice(last))

    def handle(self):

        # Customer ID received on input
        cust_id = self.request.payload['cust_id']

        # Do we have their name in cache?
        name = self.kvdb.conn.hget(CACHE_KEY, cust_id)

        # If not, we need to ask the backend and cache the response
        if not name:
            name = self.get_customer(cust_id)
            self.kvdb.conn.hset(CACHE_KEY, cust_id, name)

        # Produce the response for the calling application.
        self.response.payload = {'name': name}
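The handle method above follows the cache-aside pattern - check the cache first, fall back to the backend on a miss, then populate the cache. A minimal standalone sketch of the same pattern, with a plain dict standing in for the Redis hash:

```python
from random import choice

first = ('Richard', 'Victoria', 'Sebastian', 'James', 'Hannah')
last = ('Ogden', 'Piper', 'Edmunds', 'Murray', 'Young')

# A plain dict standing in for the Redis hash the service keeps its cache in
cache = {}

def get_customer_from_backend(cust_id):
    # Stand-in for a real database lookup
    return '{} {}'.format(choice(first), choice(last))

def get_customer(cust_id):
    # Do we have the name in cache?
    name = cache.get(cust_id)
    if not name:
        # If not, ask the backend and cache the response
        name = get_customer_from_backend(cust_id)
        cache[cust_id] = name
    return name
```

Repeated calls with the same cust_id never reach the backend again until the cache entry is deleted - which is what makes the service cheap to invoke from Django or Flask views.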

Django screenshots below: