Tutorial - part 1/2


This tutorial will guide you through the process of creating a real-world API service that, with Python, will integrate three applications using REST and AMQP. The result will be a solution ready to use in production.

If you are new to Zato, here are the key facts about the platform.

Zato is a convenient and secure, Python-based, open-source, service-oriented platform for automation, integrations and interoperability. It is used to connect distributed systems or data sources and to build API-focused, middleware and backend applications.

The platform is designed and built specifically with Python users in mind - often working in, and for, industries such as telecommunications, defense, health care and others that require automation, integrations and interoperability of multiple systems and processes.

Sample real-world, mission-critical Zato environments include:

  • Systems for telecommunication operators integrating CRM, ERP, Charging Systems, Billing and other OSS/BSS applications internal or external to the operators, including network automation of packet brokers and other network visibility and cybersecurity tools from Keysight

  • Enterprise service buses for government, helping in the digital transformation of legacy systems and processes towards modern capabilities

  • AI, ML and data science systems that analyze and improve acquisition and supply chain activities in government processes

  • Applied observability automation that enables meaningful decision making through the orchestration and coordination of the collection, distribution and presentation of data spread across a pool of independent systems

  • Platforms for health care and public administration systems, helping to achieve data interoperability through the integration of independent data sources, databases and health information exchanges (HIE)

  • Global IoT platforms for hybrid integrations of medical devices and software both in the cloud and on premises

  • Cybersecurity automation, including IT/OT hardware and software assets

  • Robotic process automation (RPA) of message flows and events produced by early warning systems

Zato offers connectors to all the popular technologies and vendors, such as REST, Cloud, task scheduling, Microsoft 365, Salesforce, Atlassian, SAP, Odoo, SQL, HL7, FHIR, AMQP, IBM MQ, LDAP, Redis, MongoDB, WebSockets, SOAP, Caching and many more.

Running in the cloud, on premises, or under Docker, Kubernetes and other container technologies, Zato services are optimized for high performance and security - it is easily possible to run hundreds and thousands of services on typical server instances as offered by Amazon, Google Cloud, Azure or other cloud providers.

Zato servers offer high availability and no-downtime deployment. Servers form clusters that are used to scale systems both horizontally and vertically.

The product is commercial open-source software with training, professional services and enterprise 24x7x365 support available.

A platform and language for interesting, reusable and atomic services

Zato promotes the design of, and helps you build, solutions composed of services that are interesting, reusable and atomic (IRA).

What does it really mean in practice that something is interesting, reusable and atomic? In particular, how do we define what is interesting?

Each interesting service should make its users want to keep using it more and more. People should immediately see the value of using the service in their processes. An interesting service strikes everyone as immediately useful in wider contexts, preferably with few or no conditions, prerequisites and obligations.

An interesting service is aesthetically pleasing, both in terms of its technical usage as well as its relevance to, and potential applicability in, fields broader than originally envisioned. If people check the service and say "I know, we will definitely use it" or "Why don't we use it" you know that the service is interesting. If they say "Oh no, not this one again" or "No, thanks, but no" then it is the opposite.

Note that the focus here is on the value that the service brings for the user. You constantly need to keep in mind that people generally want to use services only if these let them fulfill their plans or execute some bigger ideas. Perhaps they already have such plans in mind and are only looking for the technical means of achieving them, or perhaps it is your services that will make a person realize that something is possible at all - but the point is the same: your service should serve a grander purpose.

This mindset, of wanting to build things that are useful and interesting, is not specific to Python or, indeed, to software and technology. Even if you design and implement services for your own purposes, you need to act as if you were a consultant who can always see a bigger vision, a bigger architecture, and who can envision results that are still ahead in the future, while at the same time not forgetting that it is always a series of small, interesting actions, ones that everyone can relate to, that lead to success.

A curious observation, particularly relevant when you consider all the various aspects of the digital transformation that companies and organizations go through, is that many people to whom the services are addressed, or who sponsor their development, are surprised when they see what automation and integrations are capable of.

Put differently, many people can only begin to visualize bigger designs once they see in practice smaller, practical results that further their missions, careers and otherwise help them at work. This is why, again, the focus on being interesting is essential.

At the same time, it can occasionally be advantageous to you that people will not see automation or integrations coming. That lets you take the lead and build the center of such a fundamental shift around yourself. This is a great position to be in, a blue ocean of possibilities, because it means little to no competition inside the organization that you are a part of.

If you are your own audience, that is, if you build services for your own purposes, the same principles apply and it is easy to observe that thinking in services lets you build a toolbox of reusable, complementary capabilities, a portfolio, that you can take with you as you progress in your career. For instance, your services, and your work, can concentrate on a particular vendor and, with a set of services that automate their products, you will always be able to put that into use, shortening your own development time, no matter who employs you and in what way.

Regardless of who the clients that you build the solutions for are, observe that automation and integrations with services are evolutionary and incremental in their nature, at least initially. Yes, the resulting value can often be revolutionary but you do not intend to incur any massive changes until there are clear, interesting results available. Trying to integrate and change existing systems at the same time is doable, but not trivial, and it is best left to later stages, once your automation gets the necessary, initial buy-in from the organization.

Services should be ready to be used in different, independent processes. The processes can be strictly business ones, such as processing of orders or payments, or they can be of a deep, technical nature, e.g. automating cybersecurity hardware. What matters in either case is that reusability breeds both flexibility and stability.

There is inherent flexibility in being able to compose bigger processes out of smaller blocks with clearly defined boundaries, which can easily translate to increased competitive advantage when services are placed into more and more areas. A direct result of this is a reduction in R&D time as, over time, you are able to choose from a collection of loosely-coupled components, the services, that hide implementation details of a particular system or technology that they automate or integrate with.

Through their continued use in different processes, services can also reduce overall implementation risks that are always part of any kind of software development - you will know that you can keep reusing stable functionality that has been already well tested and that is used elsewhere.

Because services are reusable, there is no need for gigantic, pure waterfall-style implementations of automation and integrations in an organization. Each individual project can contribute a smaller set of services that, as a whole, constitute the whole integrated environment. Conversely, each new project can start to reuse services delivered by the previous ones, hence allowing you to quickly, incrementally, prove the value of the investment in service-oriented thinking.

To make them reusable, services are designed in a way that hides their implementation details. Users only need to know how to invoke the service; the specific systems or processes it automates or integrates are not necessarily important for them to know as long as a specific business goal is achieved. Thanks to that, both services and what they integrate can be replaced without disrupting other parts - and, in reality, this is exactly what happens - systems with various kinds of data will be changed or modernized but the service will stay the same and the user will not notice anything.

Each service fulfills a single, atomic business need. Each service is deployed independently and, as a whole, they constitute an implementation of business processes taking place in your company or organization. Note that the definition of what the business need is, again, specific to your own needs. In purely market-oriented integrations, this may mean, for instance, the opening of a bank account. In IT or OT automation, on the other hand, it may mean the reconfiguration of a specific device.

That services are atomic also means that they are discrete and that their functionality is finely grained. You will recognize whether a design goes in this direction if you consider the names of the services for a moment. An atomic service will invariably use a short name, almost always consisting of a single verb and noun. For instance, "Create Customer Account", "Stop Firewall" or "Conduct Feasibility Study" - it is easy to see that we cannot break them down into smaller parts; they are atomic.

At the same time, you will keep creating composite services that invoke other services; this is natural and expected, but you will not consider services such as "Create Customer Account and Set Up a SIM Card" atomic because, in that form, they will not be very reusable - and a major part of why being atomic is important is that it promotes reusability. For instance, the benefit of having a service that creates customer accounts independently of one that sets up their SIM cards is that one can easily foresee situations in which an account is created but a SIM card is purchased at a later time and, conversely, one customer account should be able to have multiple SIM cards. Think of it as being similar to LEGO bricks, where just a few basic shapes can form millions of interesting combinations.

The point about service naming conventions is well worth remembering because this lets you maintain a vocabulary that is common to both technical and business people. A technical person will understand that such naming is akin to the CRUD convention from the web programming world while a business person will find it easy to map the meaning to a specific business function within a broader business process.

With Zato, you use Python to focus exclusively on the business logic while the platform takes care of scalability, availability, communications protocols, messaging, security and routing. This lets you concentrate on what is the very core of systems integrations - making sure that services are interesting, reusable and atomic.

Python is the perfect choice for this job because it hits the sweet spot under several key headings:

  • It is a very high-level language, with a syntax close to the grammar of spoken languages, which makes it easy to translate business requirements into an implementation.

  • It is a solid, mainstream and full-featured, real programming language rather than a domain-specific one, which means that it offers a great degree of flexibility and choice in expressing one's needs.

  • It is difficult to find universities without Python courses. Most people entering the workforce already know Python - it is the new career language. In fact, it is becoming more and more difficult to find new talent who would not prefer to use Python.

  • Yet, one does not need to be a developer or a full-time programmer to use Python. In fact, most people who use Python are not programmers at all. They are specialists in other fields who also need to use a programming language to automate or integrate their work in a meaningful way.

  • Many Python users come from backgrounds in network and cybersecurity engineering - fields that naturally require a lot of automation using a real language that is convenient and easy to get started with.

  • Many Python users are scientists with a background in AI, ML and data science, applying their domain-specific knowledge in processes that, by their very nature, require them to collect and integrate data from independent sources, which again leads to automation and integrations.

  • Many Python users have a strong web programming background which means that it takes little effort to take a step further, towards automation and integrations. In turn, this means that it is easy to find good people for API projects.

  • Many Python users know multiple programming languages - this is very useful in the context of integration projects where one is typically faced with dozens of technologies, vendors or integration methods and techniques.

  • Lower maintenance costs - thanks to the language's unique design, Python programmers tend to produce code that is easy to read and understand. From the perspective of multi-year maintenance, reading and analyzing code, rather than writing it, is what most people do most of the time, so it makes sense to use a language that makes the most common tasks easy to carry out.

In short, Python can be construed as executable pseudo-code with many of its users already having experience with modern automation and integrations so Python, both from a technical and strategic perspective, is a natural choice for both simple and complex, sophisticated automation, integration and interoperability solutions. This is why Zato is designed specifically with Python people in mind.

More about Python services with Zato

Zato is a multi-protocol platform and services are often not tied to any specific protocol. It means that it is possible to design services that can be invoked through REST but they can also listen for data from AMQP, IBM MQ queues or SQL databases. They can also accept HL7 MLLP, SOAP, WebSocket, SFTP, FTP, e-mail, JSON-RPC and ZeroMQ-based messages.

Naturally, REST is ubiquitous and it is usually how most APIs are exposed, but there are other ways too and in various scenarios other means of communication are employed.

Zato ships with connectors and adapters for REST, AWS S3, AMQP, MongoDB, Redis, HL7, Odoo, SAP, IBM MQ, SQL, SOAP, FTP, SFTP, LDAP, Cassandra, Dropbox, Twilio, IMAP, SMTP, ElasticSearch, Solr, Swift, Slack, Telegram, WebSockets and ZeroMQ. Because it is written in Python, you have access to many third-party libraries which provide connectivity to other types of systems.

Because API platforms often need dashboards, it is also possible to use Django templates with Zato to output user interfaces.

Built-in security options include API keys, Basic Auth, JWT, NTLM, OAuth, RBAC, SSL/TLS, Vault, WS-Security and XPath. It is always possible to secure services using other, non-built in, means.

In terms of its implementation, an individual Zato service is a Python class implementing a specific method called self.handle. The service receives input, processes it according to its business requirements, which may involve communicating with other systems, applications or services, and then some output is produced. Note that both input and output are optional, e.g. a background service transferring files between applications will usually have neither whereas a typical CRUD service will have both.

Because a service is merely a Python class, each one consumes very few resources and it is possible to deploy hundreds or thousands of services on a single Zato server. And because Zato can use multiple CPUs and multiple Linux instances, it scales without limits both horizontally and vertically.

Services accept their input through channels - a channel tells Zato that it should make a particular service available to the outside world using such and such protocol, data format and security definition. For instance, a service can be mounted on independent REST channels, sometimes using API keys and sometimes using Basic Auth. Additionally, each channel type has its own specific pieces of configuration, such as caching, timeouts or other options.

Services can invoke other Zato services too - this is just a regular Python method call, within the same Python process. It means that it is very efficient to invoke them - it is simply like invoking another Python method.
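
Because a service-to-service call is a plain in-process method call, its cost can be illustrated with ordinary Python classes. The sketch below is an analogue only - the class names are made up and, in actual Zato code, you would call self.invoke with the target service's name inside handle():

```python
# A plain-Python analogue of in-process service invocation.
# In real Zato code you would call self.invoke('target.service.name', request)
# inside handle(); the classes below are illustrative stand-ins only.

class GetCustomer:
    def handle(self, request):
        # Pretend this looks the customer up in a CRM
        return {'customer_id': request['customer_id'], 'name': 'John Doe'}

class GetUserDetails:
    def handle(self, request):
        # Invoking another service is just another method call
        # in the same Python process - no network hop is involved.
        customer = GetCustomer().handle({'customer_id': request['customer_id']})
        return {'user_name': customer['name']}
```

This is why composing services is cheap - the composite service pays only the cost of a regular Python call, not of an HTTP round trip.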

Services are hot-deployed to Zato servers without server restarts and a service may be made available to its consumers immediately after deployment.

There are plugins for Visual Studio Code and PyCharm that automatically deploy your service each time you save it in your IDE. Other code editors or IDEs can be used too.

During development, usually, the built-in web-admin dashboard is used to create and manage channels or other Zato objects. As soon as a solution is ready for DevOps automation, a configuration of a solution can be deployed automatically from the command line or directly from a git clone which makes it easy to use Zato with tools such as Terraform, Nomad or Ansible.

What will the tutorial achieve, exactly?

After completing the tutorial, we will have:

  • A complete integration environment.
  • An API service offered via REST and JSON.
  • The service will invoke two REST endpoints to collect data.
  • The service will send notifications to an AMQP broker.

Message flow

We will be implementing an API integration process typical to banks and other financial institutions.

  • A client application wishes to learn details about a customer given the person's ID.
  • Customer data is stored in a CRM.
  • Payments history is stored in a different application.
  • For certain customer types, there is a business requirement that a fraud detection system be notified of any operations regarding such customers and we send notifications to the system accordingly.

The Client App is a building block that we will not be developing in the tutorial - this is where Django, React, Vue, Flutter, ASP.NET and other frameworks can be used in actual projects.

On the other hand, remember that other backend systems can invoke the service too - this is crucial - the same service can be made available to many applications, each with its own access channel, even if in the tutorial we assume there is only one API client.

Installing Zato

The installation procedure for Mac, Linux, Docker and Vagrant is covered in the other version of the tutorial.

Under Windows, follow the steps below. Administrator rights are not required.

  • Download the Zato installer by clicking here
  • Unpack the .zip archive and run the "install.bat" file that you will find inside
  • Click "Install" to begin the installation process which will install Zato and create a new environment for you. Click "Exit" once it finishes.

  • Afterwards, click "Start Zato" either through a newly created desktop icon or through a new entry in the Start menu.

  • Clicking "Start Zato" will start your environment with server logs visible in a new window - to stop the environment, simply close the window.

What did the installer install?


Installing Zato creates a quickstart cluster as well. Quickstart clusters are a convenient way of creating new, self-contained, fully functional environments comprised of one server, dashboard and a scheduler.

You can create quickstart clusters from the command line and via Docker Quickstart as well, and they can be used for any purpose, from development through testing to production.

The two main endpoints of a Windows quickstart cluster are:

  • localhost:8183 - Dashboard - A web-based Dashboard to manage Zato servers. The username is "admin" and how to reveal the auto-generated password is described below.
  • localhost:17010 - Server - A Zato API server instance. This is the main component that runs your API integrations and backend Python services.

Managing your credentials

As a rule, there are no default credentials anywhere in Zato, which means that each environment has different, auto-generated passwords.

To retrieve the one generated for you, click Programs → Zato 3.2 → Show configuration in the Windows Start menu. This will open a JSON file with various details of the environment, including two keys that are of interest now:

  • dashboard_password - Used to log in to the Dashboard at localhost:8183. Username: "admin".
  • ide_password - Used when configuring a VS Code or PyCharm plugin for Zato. Username: "ide_publisher".

Note that you can also change the passwords at any time directly in the JSON file and they will be used when you restart the environment by closing its window and clicking "Start Zato" again.
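
Changing a password amounts to editing one of the two JSON keys in place. A sketch of what that looks like programmatically - the file contents below are made up for illustration, real configuration files contain auto-generated passwords and additional keys:

```python
import json

# Illustrative contents of the environment's configuration file -
# open the real file through Start menu -> Zato 3.2 -> Show configuration.
config_text = '{"dashboard_password": "old-one", "ide_password": "old-two"}'

config = json.loads(config_text)
config['dashboard_password'] = 'my-new-dashboard-password'

# Writing the result back to the actual file and restarting the
# environment makes the new password take effect.
new_text = json.dumps(config, indent=2)
```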

You can now go to the Dashboard at http://localhost:8183 and log in with user 'admin' and password from the configuration file.

Where is Zato installed on disk?

Zato itself and the newly created environment are installed to your Windows user's ~\AppData\Local directory.

The exact name of this directory may differ depending on the locale set for your Windows system. If the language is English and the username is "dsuch" then the default, full path will be C:\Users\dsuch\AppData\Local\Zato3.2, as in the screenshot below.

Note that, because Zato does not require Administrator permissions, it never installs anything to "C:\Program Files" or similar locations.

Integrating with your IDE

You can author Zato services with any code editor and you can also install a plugin for your IDE to auto-deploy your services each time a Python file is saved on disk.

Configure Visual Studio Code for work with Zato.
Configure PyCharm for work with Zato.

Depending on the plugin's version, it may by default try to connect to localhost:11223 instead of localhost:17010 because 11223 is the default plugin port under Mac, Linux and Docker. If that is the case, make sure to configure the plugin to use localhost:17010, which is what Zato for Windows uses.

Regardless of what your IDE and plugin are, the username for the plugin to authenticate with a Zato server is always "ide_publisher".

Introducing hot-deployment

Hot-deployment is a key concept in Zato. The term means the process of transferring your service to a cluster. It is considered hot because, afterwards, it does not require server restarts, i.e. you hot-deploy a service and it is immediately available on all the servers.

If there is more than one server in the cluster, it suffices to hot-deploy the service to only one of them and it will synchronize with other nodes in the cluster.

There are a few ways to hot-deploy services. The first two will be used in the tutorial but we will describe each to let you understand what the options are and when to use them.

  • From your IDE

    Commonly used during development - once you install a plugin for the IDE, each time you press Ctrl-S to save a service on disk, it will be auto-deployed to your cluster and made available for immediate use.

  • Command line

    This is used for deployment automation or if you have an IDE or editor without a Zato plugin.

    Each Zato server monitors a specific directory, called a hot-deploy directory, and each time Python files with your services are saved there, that server will pick them up and hot-deploy throughout the cluster.

    In the quickstart cluster from this tutorial, the directory is ~\AppData\Local\Zato3.2\env\qs-1\server1\pickup\incoming\services.

    During development, you can save your files with Zato services directly in this directory and then, when you press Ctrl-S, the file will be deployed to the cluster. You can also clone your git repository directly into this directory.

    Another way to use it during development is to make the directory point to a git clone residing elsewhere on disk and, again, each time you save a file its contents are sent to all the servers.

    This method is used for automation too - simply use built-in copy or robocopy Windows tools to copy files into the directory and all the services from these files will be deployed.
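
If you prefer Python to copy or robocopy, the same automation step can be sketched as below. The path matches this tutorial's quickstart cluster and the hot_deploy helper is a name made up for this example - adjust both to your own environment:

```python
import pathlib
import shutil

# Hot-deploy directory of the tutorial's quickstart cluster (Windows layout)
pickup_dir = pathlib.Path.home() / 'AppData' / 'Local' / 'Zato3.2' / \
    'env' / 'qs-1' / 'server1' / 'pickup' / 'incoming' / 'services'

def hot_deploy(source_file: str) -> None:
    # Copying a Python file into the pickup directory is all it takes -
    # the server notices the new file and deploys it across the cluster.
    shutil.copy(source_file, pickup_dir)

# hot_deploy('api.py')  # uncomment to deploy the tutorial's service
```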

  • Dashboard

    When you log in to the Dashboard and navigate to Services, you will note a button called "Upload services". This will let you deploy local files to a remote server. This is useful when there is no direct connection to the server, e.g. no way to ssh into it.

  • Local config file

    This method is usually used for automated deployments only - it lets you point a starting server to files from the file system that it should deploy.

    The difference between it and a hot-deploy directory is that the latter requires the server to be already running, whereas this one tells a server what it should do while it is still starting up.

    This option is most often employed when building one's own Docker images or using Terraform, Packer and similar tools.

  • Remote file transfer

    This automation method uses file transfer to let servers listen for changes in directories of remote servers.

    For instance, you can have a central git clone of a repository shared by multiple environments and Zato servers will connect to it via SFTP, download any new or changed files and deploy them locally.

In terms of the end result, there is no difference between these methods - they all achieve exactly the same thing.

This is actually a good example of the way Zato itself is designed around reusable services - all these deployment methods, all these channels, ultimately lead to the same services that deploy your code and it is only the manner in which they are accessed that differs.

Hot-deploying your first service

We can now create the first service and hot-deploy it. Create a new file called api.py with the contents below. This is a basis of the service that we will fill in with details later on.

# -*- coding: utf-8 -*-
# zato: ide-deploy=True

from zato.server.service import Service

class GetUserDetails(Service):
    """ Returns details of a user by the person's ID.
    """
    name = 'api.user.get-details'

    def handle(self):

        # For now, return static data only
        self.response.payload = {
            'user_name': 'John Doe',
            'user_type': 'SRT'
        }
If you configured a plugin for PyCharm or Visual Studio Code, note the highlighted line - this is a special marker which lets the plugin know that saving this file should result in the IDE deploying it to your cluster.

Without a plugin, you need to save the file in the server's ~\AppData\Local\Zato3.2\env\qs-1\server1\pickup\incoming\services directory.

If you want to deploy it to a Zato server from your browser, log in to the Dashboard, go to Services and click Upload package. Dashboard's address is localhost:8183, username is "admin" and the password is in your Start menu, under Programs → Zato 3.2 → Show configuration.

No matter how you deploy the service, there will be activity in the server's log:

Having deployed the code, we can confirm in the Dashboard that the service is there.

Once you logged in to the Dashboard, navigate to Services. Enter "get-details" in the search box, then click Show services. Click the service's name and this will display basic information about the deployed service. You can click Source code to confirm that this is the same service.

We have a service so now we can create a REST channel for it.

Creating your first channel

We want to invoke our API service using REST but we also want to make sure that access to it is secured so we will first create a security definition for our API client.

In Dashboard, go to Security → Basic Auth → Click "Create a new definition" and enter:

  • Name: API Credentials
  • Username: api
  • Domain: API

Clicking OK will create the definition with its user's password automatically set to a random uuid4 so we need to reset it by clicking "Change password" and providing a new one - it is up to you to decide what it should be.

Now, we can create a REST channel by going to Connections → Channels → REST, as below:


Click Create a new REST channel link:


Fill out the form as below - the fields to provide values for are:

  • Name
  • URL path
  • Data format
  • Service
  • Security definition
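
For reference, the values used in this tutorial can be summarized as below. This is an illustrative summary only, not a Zato API - the actual configuration is done in the Dashboard form, and the channel's name is an assumption of this example:

```python
# The values entered in the Dashboard form for this tutorial's channel.
# This dict is illustrative only - it is not a Zato API.
channel = {
    'name': 'api.user.get-details',      # assumed name, choose your own
    'url_path': '/api/v1/user',          # the path used later with curl
    'data_format': 'JSON',
    'service': 'api.user.get-details',   # the service deployed earlier
    'security': 'API Credentials',       # the Basic Auth definition above
}
```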


Clicking OK will create the channel and we will be able to invoke the API service now.

Invoking your first service

We are going to use curl to invoke the service, accessing it through the server's port 17010, as below. Note that you need to enter the API client's password too.

$ curl http://api:<password-here>@localhost:17010/api/v1/user ; echo
{"user_name":"John Doe","user_type":"SRT"}
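
The same call can be made from any HTTP client. The sketch below only constructs the request with Python's standard library, without sending it - replace the placeholder password with the one you chose for the "api" user:

```python
import base64
import urllib.request

password = 'change-me'  # the password you set for the "api" user

# Build the same Basic Auth header that curl derives from api:<password>
credentials = base64.b64encode(f'api:{password}'.encode()).decode()
request = urllib.request.Request(
    'http://localhost:17010/api/v1/user',
    headers={'Authorization': f'Basic {credentials}'},
)

# urllib.request.urlopen(request) would invoke the service and
# return the same JSON document that curl printed above.
```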

Everything works as expected - you have just created and invoked your first API service! Now, try to see what happens if you provide an invalid password or URL path - your requests will not be allowed.

This concludes the first part of the tutorial and the next one will see the service integrate with external systems to transform and enrich their replies before producing the final response to the API client.

But first, there is one observation to be made - the quickstart cluster that you created is a real, fully functional environment. If you were to create it from scratch, by adding each component individually, Dashboard, server and scheduler, the outcome would be precisely the same.

In other words, quickstart clusters are a convenient method for the creation of new environments and they can be very well used not only for development but for testing and production too.

Now, we are ready to go to the second part of the tutorial.
