Introducing BST

Zato Business State Transitions (BST) is a newly added extension to the core Zato integration platform designed for ESB, SOA, REST, APIs and Cloud Integrations in Python.


BST is a perfect fit for workflow-oriented integrations in which multiple applications, Python-based or not, cooperate around a shared definition of a process, such as those found in order management.


Definitions of business states and their transitions are written in a natural language, such as English. For instance:


Objects: Order, Priority order
New: Submitted
Submitted: Ready
Ready: Sent
Sent: Confirmed, Rejected
Rejected: Updated
Updated: Ready
Force stop: Canceled, Timed out
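Such a plain-English definition maps naturally onto a simple data structure. Below is a minimal sketch in plain Python of how the states above could be represented and checked - the names and layout are illustrative only, not BST's actual internals.

```python
# Illustrative only - a hand-rolled representation of the definition
# above, not BST's actual implementation.

# Allowed transitions: current state -> list of valid next states
TRANSITIONS = {
    'New': ['Submitted'],
    'Submitted': ['Ready'],
    'Ready': ['Sent'],
    'Sent': ['Confirmed', 'Rejected'],
    'Rejected': ['Updated'],
    'Updated': ['Ready'],
}

# 'Force stop' states may be entered from any other state
FORCE_STOP = ['Canceled', 'Timed out']

def can_transition(state_old, state_new):
    """Return True if moving from state_old to state_new is permitted."""
    if state_new in FORCE_STOP:
        return True
    return state_new in TRANSITIONS.get(state_old, [])

print(can_transition('Sent', 'Rejected'))  # True
print(can_transition('New', 'Ready'))      # False
```

The point of BST is that such rules live in the shared definition rather than in each application's code - the sketch above only shows the shape of the idea.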

Python API

From a Python programmer's perspective, everything boils down to a single with block in a Zato service which:

  • enforces that a given transition is correct for a provided business object
  • executes the block of code
  • on success, transitions the object to a new state
# Zato
from zato.server.service import Service

# zato-labs
from zato_bst import transition_to

class MyService(Service):

  def handle(self):

    with transition_to(self, 'Order', 123, 'ready'):

      # Here goes the actual user code
      pass


External applications, whether written in Python, Java, .NET or any other technology, can always participate in transitions through the BST REST API.

Below, curl is used to simulate a sample transition for an object of type Customer, whose ID is 2, to a state called "Consent given" in a hypothetical process of opening a customer account.

$ cat cust.json
{"state_new": "Consent given"}
$ curl http://localhost:17010/bst/transition -d @cust.json
{
  "can_transition": true,
  "state_old": null,
  "state_new": "Consent given",
  "reason": ""
}
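The same call can be issued from Python using the standard library alone. This is a hedged sketch - the endpoint URL comes from the curl example, while anything beyond the state_new field is left out because the complete wire format is not shown here:

```python
import json
import urllib.request

def build_transition_request(base_url, state_new):
    """Build a POST request for the BST transition endpoint.

    Only state_new is included - any further fields of the payload
    are not shown in the article, so they are omitted here.
    """
    payload = json.dumps({'state_new': state_new}).encode('utf-8')
    return urllib.request.Request(
        base_url + '/bst/transition',
        data=payload,
        headers={'Content-Type': 'application/json'},
    )

# Usage - assumes a Zato server listening on localhost:17010;
# send the request with urllib.request.urlopen(req) when one is running.
req = build_transition_request('http://localhost:17010', 'Consent given')
print(req.full_url)  # http://localhost:17010/bst/transition
```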

The full API additionally allows one to:

  • confirm a transition is valid before executing it
  • perform a mass transition of multiple business objects
  • get the history of transitions for a business object
  • return a list of transitions defined in a Zato cluster

Exports and diagramming

A REST API is also available to export existing BST data to either JSON or diagrams, including both definitions and run-time information about the state of a BST instance.

Full control over the output is offered, including means to specify custom colors, diagram size or the timezone the data should be presented in.



BST offers a new and interesting means of extending one's SOA or REST environments with a fresh perspective on integrations that are primarily oriented towards workflows built on top of individual APIs and endpoints.

Click here to learn more about BST and the core Zato platform upon which it's based.

The acronym ESB, and a related one - SOA - can cause confusion. ESB stands for "Enterprise Service Bus" - a data bus for providing services across an enterprise. SOA stands for "Service Oriented Architecture" - an architecture oriented around services.

That alone does not really make these terms understandable. In what follows, we therefore try to offer some plain-language information on the subject, while avoiding too many marketing phrases.

Thanks to a great contribution by Chris Zwerschke, the no-nonsense intro to ESB/SOA has just been translated into German.

This is a must read for anyone considering building modern APIs or microservices using contemporary tools and employing current practices.

The document is also available in:


zato-apitest, a spin-off from the main Zato project, lets one test APIs in a convenient way using assertions written in plain English, such as the example below:

Feature: My API Test

Scenario: Check connection and hello

    Given address "http://my.address"
    Given URL path "/my/path"
    Given format "JSON"
    Given JSON Pointer "/customer" in request is "My name"

    When the URL is invoked

    Then header "Connection" starts with "keep-"
    And JSON Pointer "/hello" is one of "a,b,c"

The APIs covered include HTTP endpoints, and 100+ types of assertions can check headers, payload, JSON, XML, JSON Pointers and XPath, as well as obtain test data from the environment, SQL, CSV or Cassandra.
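To illustrate what an assertion such as 'JSON Pointer "/hello" is one of "a,b,c"' actually checks, here is a minimal hand-rolled JSON Pointer resolver - a sketch of the idea only, not zato-apitest's own implementation:

```python
import json

def resolve_pointer(doc, pointer):
    """Resolve an RFC 6901 JSON Pointer, e.g. '/hello', against a document."""
    target = doc
    if pointer:
        for part in pointer.lstrip('/').split('/'):
            # Unescape per RFC 6901: ~1 -> '/', ~0 -> '~'
            part = part.replace('~1', '/').replace('~0', '~')
            target = target[int(part)] if isinstance(target, list) else target[part]
    return target

response = json.loads('{"hello": "b", "customer": "My name"}')

# The assertion from the scenario above, spelled out by hand
assert resolve_pointer(response, '/hello') in 'a,b,c'.split(',')
print(resolve_pointer(response, '/hello'))  # b
```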

This post will guide you through the installation on Ubuntu 14.04, up to the point of executing the tool's built-in demo.

The steps are:

  • Install prerequisites
  • Install zato-apitest
  • Run demo

Install prerequisites

Run the commands below:

$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt-get install python-pip postgresql-server-dev-all
$ sudo apt-get install python-dev libxml2-dev libxslt1-dev
$ sudo pip install --upgrade pip

Install zato-apitest

$ sudo pip install zato-apitest

Run demo

This will set up a sample project and run a set of assertions against sample live APIs:

$ apitest demo

API testing demo

And that's it for now - stay tuned for upcoming instalments that will go through configuring scenarios connecting to a variety of APIs.

One of the nice things added in Zato 2.0 is the improved ability to store the code of one's API services directly in a server's hot-deploy directory - each time a file is saved there, it is uploaded to the server and automatically propagated to all the other nodes in the cluster the server belongs to.

Now, this in itself has been possible since version 1.0, but the newest release added a way to configure servers not to clean up the hot-deploy directory after the code is picked up - meaning anything saved there stays until it is deleted manually.

Two cool things can be achieved thanks to it:

  • Working in deploy-on-save mode
  • Deploying code from a repository checkout

Initial steps

To make it all possible, navigate to all of the servers' server.conf files, find hot_deploy.delete_after_pick_up, change it from True to False and restart all servers. This is the only time they will be restarted, promise.
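In INI terms - assuming the key sits in a [hot_deploy] stanza, as its dotted name suggests - the change looks like this:

```ini
[hot_deploy]
delete_after_pick_up=False
```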

Working in deploy-on-save mode

  • Let's say your server is in /home/user/zato/server1
  • Save your files in /home/user/zato/server1/pickup-dir now
  • Each time a file is saved, note in server.log how it is picked up and deployed
  • This lets you make use of the service in the actual environment a moment after it's saved



Deploying code from a repository checkout

  • Essentially, this is deploy-on-save described above working on a grander scale
  • Instead of saving individual files, everything that is needed for a given solution is stored in the hot-deploy's pickup directory in one go
  • Can be easily plugged into Jenkins or other automation tools
  • You can try it right now using this sample repository prepared for the article
  • Go to a server's pickup dir
  • Delete anything it already contains
  • Issue the command below:
$ git clone .
  • Witness that the two services just checked out are being nicely picked up by all servers in a cluster
  • This concludes the deployment - the environment has just been updated with the newest versions of the services and they are already operational, as can be confirmed in web-admin



This is the first part in a series of articles describing the Zato ESB and application server's statistics - what they provide, how to use them and how they can be efficiently managed.

The series will include:

  • Trends (this part)
  • Summaries
  • Settings and maintenance
  • Statistics API

Trends answer one important question asked by developers and operations alike - what is currently going on and what sort of tendencies can be found in my environments?

From that follows another - is what is happening right now considered typical, or should I expect difficulties ahead soon?

Note that trends are not an early-warning system and as such they do not substitute for monitoring solutions - they are instead meant to be an aid offering an intricate insight into what Zato is doing.

Trends are concerned with relatively short spans of time - the last hour, last 20 minutes or last 10 minutes. Essentially, this is a tool to give you insight into how your services behave right after you receive a signal that there might be something unwanted happening right now.



When displaying trends the screen will be usually divided into 4 parts:

  • Left side - Slowest services + Most commonly used ones
  • Right side - as above, but for a different time period, the one the left side is compared against

Hovering over any row will highlight the matching rows in all the tables - for instance, the most commonly used service doesn't necessarily have to be the slowest one, and the one that was slowest yesterday at the same time of day doesn't have to be as slow today.

Understanding the difference in usage patterns is literally a hover away.



By default, the top 10 services in each category are shown, but the number can be set to any value required.

3 quick links let one answer the most common questions:

  • How does it all compare to last hour?
  • To yesterday the same hour?
  • Last week the same day and hour?

It's also possible to pick arbitrary start/stop dates to compare against, but do note that trends are always generated on the fly from Redis and the process is CPU-intensive, so obtaining information for more than a couple of hours can result in visible CPU spikes.

Slowest services


The 6 columns displayed for each of the slowest services are:

  • M - Mean response time of that service (in ms)
  • AM - Average mean response time of all services, including that one (in ms)
  • U% - The service's usage share - of all invocations of all services, how many fell on that one
  • T% - The service's time share - of the whole time spent in all services, what percentage that service accounts for
  • TU - How many times all the services, including this one, have been invoked
  • Trend - A sparkline chart displaying how the service's mean response time fluctuated in the selected period

Hence, the row in the screenshot describes a service whose mean response time was 12.72 ms, which is almost 10 times faster than the average mean response time when all services are taken into account (122 ms).

Out of the total of 40 invocations of all services (TU), this one accounted for 2.5% of them, but in terms of time spent it constituted 18.9% of the whole time the 40 invocations took.

From the sparkline, one can also learn that the service was not used at all except for a sudden spike a few minutes ago.
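The percentages in that row can be reproduced with elementary arithmetic. The figures below are the ones quoted above; own_calls is not shown directly in the article and is inferred here from U% being 2.5% of 40 invocations:

```python
# Figures quoted in the text above
mean_ms = 12.72        # M - this service's mean response time
avg_mean_ms = 122      # AM - average mean across all services
own_calls = 1          # inferred: 2.5% of 40 invocations
total_calls = 40       # TU - all invocations together

print(round(avg_mean_ms / mean_ms, 1))  # 9.6 - "almost 10 times faster"
print(own_calls / total_calls * 100)    # 2.5 - the U% column
```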

Each service slower than expected is marked with a red dot to the left of its name (shown here).

Everything can also be exported to CSV.

Most commonly used


The 6 columns displayed for each of the most commonly used services are:

  • U - How many times the service has been invoked
  • Rq/s - How many requests a second that service processed - note that 0.1 req/s is the smallest value shown
  • U% - The service's usage share - of all invocations of all services, how many fell on that one (repeated)
  • T% - The service's time share - of the whole time spent in all services, what percentage that service accounts for (repeated)
  • TU - How many times all the services, including this one, have been invoked (repeated)
  • Trend - A sparkline chart displaying the service's average request rate per second

Thus, in the screenshot one can find a service that was invoked 14 times (U) out of all the 40 service invocations (TU), which is exactly 35% of them all.

However, despite constituting 35% of all invocations, it took only 9.4% (T%) when CPU time is considered - meaning it was pretty fast.

The trend tells us that the service is mostly idle except for a brief period a couple of minutes ago.

Each service whose usage share exceeds what is expected is marked with a red dot to the left of its name.

And again, statistics can be exported to CSV.

Final words

Trends are a tool used for finding out or confirming whether what an environment does currently conforms to standard usage patterns or not.

Next instalment will cover summaries that let one quickly compare statistics across days, months or years.