Zato WebSocket channels let you accept long-running API connections and, as such, they have a few settings to fine tune their usage of timeouts. Let's discover what they are and how to use them.

WebSocket channels

Zato Dashboard WebSocket menu

Zato Dashboard WebSocket channel creation form

The four timeout settings are listed below. All of the WebSocket clients using a particular channel will use the same timeouts configuration - this means that a different channel is needed if particular clients require different settings.

  • New token wait time
  • Token TTL
  • Ping interval
  • Threshold

Tokens

  • New token wait time - when a new WebSocket connection is established to Zato, it has that many seconds to open a session and to send its credentials. If that is not done, Zato immediately closes the connection.

  • Token TTL - once a session is established and a session token is returned to the client, the token's time-to-live (TTL) will be that many seconds. If there is no message from the client within TTL seconds, Zato considers the token expired and it can no longer be used, although it is not guaranteed that the connection will be closed immediately after the token expires.

    In this context, a message that can extend TTL means one of:

    • A request sent by the client
    • A response to a request previously sent by Zato
    • A response to a ping message sent by Zato

Ping messages

  • Ping interval - Zato sends WebSocket ping messages once in that many seconds. Each time a response to a ping request is received from the client, the session token's TTL is extended by the same number of seconds.

    For instance, suppose a new session token was issued to a client at 15:00:00 with a TTL of 3600 seconds (valid to 16:00:00) and the ping interval is 30 seconds.

    First, at 15:00:30 Zato will send a ping message.

    If the client responds successfully, the token's validity is extended - it will now expire TTL seconds after the time the response arrived. For example, if the response arrives at 15:00:30.789 (after 789 milliseconds), the token will be valid up to 16:00:30.789, because this is the result of adding TTL seconds to the time the response was received by the server.

  • Threshold - the number of consecutively missed ping messages after which Zato closes the connection. For instance, if the threshold is 5 and the ping interval is 10, Zato will ping the client once every 10 seconds; if 5 pings in a row go unanswered (a total of 50 seconds in this case), the connection will be closed immediately.

    Note that only pings missed consecutively count towards the threshold. For instance, if a client misses 2 out of 5 pings but then replies to the 3rd one, its counter of missed messages is reset and starts from 0 once more, as though it had never missed a single ping.
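
The rules above can be sketched in plain Python. This is only an illustration of the behaviour described in this article, not Zato's actual implementation - the class name and its methods are invented for the example:

```python
from datetime import datetime, timedelta

class SessionToken:
    """ Tracks a session token's expiration time and the number of
    ping messages missed in a row.
    """
    def __init__(self, issued_at, ttl=3600, threshold=5):
        self.ttl = timedelta(seconds=ttl)
        self.threshold = threshold
        self.expires_at = issued_at + self.ttl
        self.missed = 0

    def on_pong(self, received_at):
        # A ping response arrived - the token is now valid for TTL
        # seconds from this moment and the miss counter starts over.
        self.expires_at = received_at + self.ttl
        self.missed = 0

    def on_ping_timeout(self):
        # One more ping went unanswered - once the threshold is
        # reached, the connection should be closed.
        self.missed += 1
        return self.missed >= self.threshold

# The example from the text - a token issued at 15:00:00 with a TTL
# of 3600 seconds and a ping response arriving at 15:00:30.789 ..
token = SessionToken(datetime(2021, 1, 1, 15, 0, 0))
token.on_pong(datetime(2021, 1, 1, 15, 0, 30, 789000))

# .. which means the token is now valid up to 16:00:30.789
print(token.expires_at.time())  # 16:00:30.789000

# Two missed pings followed by a reply reset the counter to zero
token.on_ping_timeout()
token.on_ping_timeout()
token.on_pong(datetime(2021, 1, 1, 15, 1, 30))
print(token.missed)  # 0
```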

A note about firewalls

A great advantage of using WebSocket connections is that they are bidirectional and let one easily send messages to and from clients using the same TCP connection over a longer time.

However, particularly in relation to ping messages, it needs to be remembered that stateful firewalls in data centers may have their own requirements as to how often peers should communicate. This is especially true if the communication is over the Internet rather than within the same data center.

On one hand, this means that the ping interval should be set to a value small enough to ensure that firewalls will not break connections in the belief that Zato has nothing more to send. Yet it should not be too small lest, with a huge number of connections, the overhead of pings becomes too burdensome. For instance, pinging each client once a second is almost certainly too much; usually 20-40 seconds is a much better choice.

On the other hand, firewalls may also require the side that initiated the TCP connection (i.e. the WebSocket client) to periodically send some data to keep the connection active; otherwise, they will drop the connection. This means that clients should possibly be configured to send ping messages of their own, at a frequency that depends on what the applicable firewalls expect - with only Zato pinging the client, firewalls may not recognize that the connection is still active.

Python code

Finally, it is worth keeping in mind that all the timeouts, TTLs and pings are managed by the platform automatically - no programming is needed for them to work.

For instance, the service below, once assigned to a WebSocket channel, will focus on the business functionality rather than on low-level management of timeouts - in other words, there is no additional code required.


# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):
    def handle(self):
        self.logger.info('My request is %s', self.request.input)

Next steps

  • Start the tutorial to learn more technical details about Zato, including its architecture, installation and usage. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single and consistent API to its callers.

  • Visit the support page if you would like to discuss anything about Zato with its creators

  • To learn more about Zato and API integrations in Spanish, click here

One of the additions in the upcoming Zato 3.2 release is an extension to its publish/subscribe mechanism that lets services publish messages directly to other services. Let's check how to use it and how it compares to other means of invoking one's API services.

How does it work?

In your Zato service, you can publish a message to any other service as below. Simply point self.pubsub.publish to the target service by name and it will receive your message.

Here, we have two services - my.source publishes a message to my.target:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Int, Service

# ############################################################################

# For code completion
if 0:
    from zato.common.pubsub import PubSubMessage
    PubSubMessage = PubSubMessage # For flake8

# ############################################################################

class MySource(Service):
    name = 'my.source'

    def handle(self):
        self.pubsub.publish('my.target', data={'abc':'Hello World'})

# ############################################################################

class MyTarget(Service):
    name = 'my.target'

    def handle(self):

        # Our message read from a topic ..
        msg = self.request.raw_request # type: PubSubMessage

        # .. let's log its contents now.
        self.logger.info('My message is %s', msg.data)

# ############################################################################

In server.log:

INFO - My message is {'abc': 'Hello World'}

This looks straightforward, and it is in usage, but there is a lot going on under the hood:

  • When you publish a message to a service, a topic and subscriptions are created automatically for the target service

  • The message that you publish is stored in a persistent database

  • The message is then delivered to your service asynchronously

  • Now comes the first crucial point: if your service raises an exception for any reason, the message is understood not to have been delivered safely and Zato will retry its delivery

  • Another key point is that, because the message is kept in persistent storage, it will still be available for Zato to deliver even if you stop all of your servers - when starting, they will enqueue any undelivered messages to repeat their delivery
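
The two guarantees above - retries on exceptions and persistence across restarts - can be pictured with a toy, file-backed queue. This merely illustrates the idea; Zato's actual storage and redelivery logic are internal to the platform:

```python
import json
import os
import tempfile

# A toy persistent queue - messages are written to disk first,
# so they survive a process restart.
path = os.path.join(tempfile.mkdtemp(), 'queue.json')

def load_queue():
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)

def save_queue(queue):
    with open(path, 'w') as f:
        json.dump(queue, f)

def publish(msg):
    queue = load_queue()
    queue.append(msg)
    save_queue(queue)

def deliver(handler):
    remaining = []
    for msg in load_queue():
        try:
            handler(msg)  # If this raises, the message stays enqueued ..
        except Exception:
            remaining.append(msg)  # .. and will be retried later
    save_queue(remaining)

publish({'abc': 'Hello World'})

# The first delivery attempt fails - the message is kept for a retry
attempts = []

def flaky_handler(msg):
    attempts.append(msg)
    if len(attempts) == 1:
        raise RuntimeError('Simulated failure')

deliver(flaky_handler)
deliver(flaky_handler)  # The second attempt succeeds

print(len(attempts))  # 2 - the message was delivered on the retry
```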

In other words, Zato services can participate in publish/subscribe just like other endpoints, e.g. REST, SOAP, AMQP or WebSockets.

For instance, we can use Zato Dashboard to check the topic - what its current depth is, what the last message was, or to browse messages enqueued for the subscriber service.

Zato Dashboard Topic List

Let's compare pub/sub with other methods of communication between services.

How does it differ from self.invoke?

You use self.invoke to invoke another service synchronously, in the same operating system process. If the target service raises an error, you get a live Python exception object in the source service.

How does it differ from self.invoke_async?

Using self.invoke_async lets you send a message to another service asynchronously, which is similar to what self.pubsub.publish can do. However, self.invoke_async operates in RAM only.

On one hand, this means that it is much more efficient than publish/subscribe.

Yet, on the other hand, this means that if a server sending a message to a service using self.invoke_async is shut down, the message is lost irrevocably because it only ever exists in RAM.
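
The difference can be seen in a toy simulation of a server restart - in-RAM state simply disappears with its process, while data persisted to disk does not. This is purely illustrative and says nothing about Zato's internals:

```python
import json
import os
import tempfile

# self.invoke_async-style - the message exists in RAM only
in_ram = ['Hello']

# self.pubsub.publish-style - the message is persisted first
on_disk = os.path.join(tempfile.mkdtemp(), 'messages.json')
with open(on_disk, 'w') as f:
    json.dump(['Hello'], f)

# A simulated restart - all in-process state is gone ..
in_ram = []

# .. but the persisted message can still be recovered
with open(on_disk) as f:
    recovered = json.load(f)

print(in_ram, recovered)  # [] ['Hello']
```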

How does it differ from self.patterns?

Several integration patterns can be accessed through self.patterns - fan-out/fan-in, parallel execution, invoke/retry and async invocation with a callback.

All of them can be used to form and support complex integration scenarios and what all of them have in common is that they work on messages that exist in RAM only, whereas publish/subscribe uses persistent storage.

When to choose which?

As usual, there is no single choice to cover all needs, but a few guidelines apply:

  • Use self.invoke if you need immediate feedback (an exception) if the service that you invoke fails for any reason. For instance, get-like services (GetClient, GetInvoice etc.) typically fall into this category.

  • Use self.invoke_async if you need asynchronous invocations and it is acceptable that a message can be dropped in case of a server restart. This will work in cases when it is always possible for the initial application or service to retry the transmission.

  • Use self.patterns if you need to build more advanced scenarios but keep in mind that all the messages exist in RAM only.

  • Use self.pubsub.publish if you need the highest degree of durability and retransmissions are not possible; all messages are always saved in a database before they are delivered which means that they are not lost if servers restart.

Finally, keep in mind that you can always mix them - for instance, a service can invoke others through a parallel invoke, those can publish messages to further services and these, in turn, can use self.invoke - interactions of this kind are entirely possible too.

In this article, we will cover the details of how Zato SQL connection pools can be configured to take advantage of features and options specific to a particular driver or to the SQLAlchemy library.

SQL connection pools

First, let's review the basic Zato Dashboard form that creates a new SQL connection pool.

Zato Dashboard SQL connections menu

Zato Dashboard SQL connection pool form

Above, we find options that are common to all the supported databases:

  • Name of the connection as it is referenced in your Python code
  • Whether the connection is active or not
  • How big the pool should be
  • What kind of a database it is (here, Oracle DB)
  • Where the database is located - host and port
  • What is the name of the database
  • The username to connect with (password is changed using a separate form)

More options

The basic form covers the most commonly required options, but there is more to it, and that comes in two flavours:

  • Options specific to the driver library for a given database
  • Options specific to SQLAlchemy, which is the underlying toolkit used for SQL connections in Zato

As to how to declare them:

  • Options from the first group are specified using a query string appended to the database's name, e.g. mydb?option=value&another_option=another_value.

  • Options from the second group go to the Extra field in the form.

We will cover both of them using a few practical examples.

Specifying encoding in MySQL connections

When connecting to MySQL, there may arise a need to be explicit about the character set to use when issuing queries. This is a driver-level setting so it is configured by adding a query string to the database's name, such as mydb?charset=utf8.

Zato Dashboard Setting MySQL encoding
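
Assuming only the naming convention shown above - a database name optionally followed by a query string - the value can be split into its parts with a few lines of Python; this shows the format of the data, not how Zato itself parses it:

```python
from urllib.parse import parse_qsl

def split_db_name(value):
    """ Splits a value such as 'mydb?charset=utf8' into the database
    name proper and a dict of driver-level options.
    """
    name, _, query = value.partition('?')
    return name, dict(parse_qsl(query))

print(split_db_name('mydb?charset=utf8'))   # ('mydb', {'charset': 'utf8'})
print(split_db_name('?service_name=MYDB'))  # ('', {'service_name': 'MYDB'})
```

The second call corresponds to the Oracle DB case covered in the next section, where the query string is used without a database name.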

Using a service name when connecting to Oracle DB

Oracle DB connections will at times require a service name alone to connect to, i.e. without a database name. This is also a driver-level option but this time around the query string is the sole element that is needed, for instance: ?service_name=MYDB.

Zato Dashboard Setting Oracle DB service name

Echoing all SQL queries to server logs

It is often convenient to be able to quickly check what queries exactly are being sent to the database - this time, it is an SQLAlchemy-level setting which means that it forms part of the Extra field.

Each entry in Extra is a key=value pair, e.g. echo=True. If more than one is needed, each such entry is in its own line.

Zato Dashboard Setting MySQL encoding
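
The format of the Extra field can be illustrated with a short sketch that turns newline-separated key=value lines into a dict of keyword arguments - note that this only demonstrates the shape of the data, not Zato's own parsing code:

```python
def parse_extra(extra):
    """ Turns newline-separated key=value entries into a dict. """
    out = {}
    for line in extra.splitlines():
        line = line.strip()
        if line:
            key, _, value = line.partition('=')
            out[key.strip()] = value.strip()
    return out

extra = """
echo=True
max_overflow=10
"""

print(parse_extra(extra))  # {'echo': 'True', 'max_overflow': '10'}
```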

Knowing which options can be set

At this point, you may wonder how to learn which options can be used by which driver and what can go into the Extra field.

The driver-specific options will be in that particular library's documentation - each one will have a class called Connection whose __init__ method will contain all such arguments. This is what the query string is built from.

As for the Extra field - it accepts all the keyword arguments that SQLAlchemy's sqlalchemy.create_engine function accepts, e.g. in addition to echo, these may be max_overflow, isolation_level and others.

And that sums up this quick how-to - you can now configure more advanced SQL options that are specific either to each driver or to SQLAlchemy as such.

Today, we are looking at how environment variables can be used to make the configuration of your Zato-based API services reusable across environments - this will help you centralise all of your configuration artefacts without needing changes when code is promoted across environments of different levels.

Making code and configuration reusable

Let's say we have three environments - development, test and production - and each of them uses a different address for an external endpoint. The endpoint's functionality is the same but the provider of this API also has several environments matching yours.

What we want to achieve is twofold:

  • Ensuring that the code does not need changes when you deploy it in each of your environments
  • Making configuration artefacts, such as files with details of where the external APIs are, independent of a particular environment that executes your code

In this article, we will see how it can be done.

Python code

In the snippet below, we have an outgoing REST connection called CRM which the Python code uses to access an external system.

Zato knows that CRM maps to a specific set of parameters, like host, port, URL path or security credentials to invoke the remote endpoint with.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetUserDetails(Service):
    """ Returns details of a user by the person's name.
    """
    name = 'api.user.get-details'

    def handle(self):

        # Name of the connection to use
        conn_name = 'CRM'

        # Get a connection object
        conn = self.out.rest[conn_name].conn

        # Create a request
        request = {
            "user_id": 123
        }

        # Invoke the remote endpoint
        response = conn.get(self.cid, request)

        # Skip the rest of the implementation
        pass

The above fulfils the first requirement - our code only ever uses the name CRM and it never has to consider what actually lies behind it, what CRM in reality points to.

All it cares about is that there is some CRM that gives it the required user data - if the connection details to the CRM change, the code will continue to work uninterrupted.

This is already good but we still have the other requirement - that CRM point to different external endpoints depending on which of your environments your code currently runs in.

This is where environment variables come into play. They let you move the details of a configuration to another layer - instead of keeping them along with your code, they can be specified for each execution environment separately. In this way, neither code nor configuration need any modifications - only environment variables change.
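
The convention can be sketched as below - a value prefixed with a dollar sign is looked up in the environment while anything else is used verbatim. This illustrates the idea only; the resolve function is invented for the example and CRM_ADDRESS is the sample variable used later in this article:

```python
import os

def resolve(value):
    """ Returns the value itself unless it points to an environment
    variable, in which case that variable's value is returned.
    """
    if value.startswith('$'):
        return os.environ[value[1:]]
    return value

# In practice, this would be set in ~/.bashrc or a container's configuration
os.environ['CRM_ADDRESS'] = 'https://example.net'

print(resolve('$CRM_ADDRESS'))  # https://example.net
print(resolve('/api'))          # /api
```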

Zato Dashboard, enmasse CLI and Docker

When Dashboard is used for configuration, enter environment variables prefixed with a dollar sign "$" instead of an actual value. Zato will understand that the real value is to be read from the server's environment.

For instance, if the CRM is available under the address of https://example.net, create a variable called CRM_ADDRESS with the value of "https://example.net", store it in your ~/.bashrc or a similar file, source this file or log in to a new Linux shell, and then log in to Dashboard to fill out the corresponding form as here.

Zato Dashboard

Zato Dashboard

Once you click OK, restart your Zato servers for the changes to take effect. From now on, each time the CRM connection is used by any service, its address will be read from the $CRM_ADDRESS environment variable.

In this way, you can assign different values to as many environment variables as needed and Zato will read them when the server starts or when you edit a given connection's definition in Dashboard or via enmasse.

If you use enmasse for DevOps automation, the same can be done in your YAML configuration files, for instance:

  - connection: outgoing
    name: CRM
    address: "$CRM_ADDRESS"
    url_path: /api

Note that the same approach works if you start Zato servers in Docker containers.

For instance, in one environment you can start it as such:

sudo docker run   \
  --env CRM_ADDRESS=https://example.net \
  --publish 17010:17010 \
  --detach \
  mycompany/myrepo

In another environment, the address may be different:

sudo docker run   \
  --env CRM_ADDRESS=https://example.com \
  --publish 17010:17010 \
  --detach \
  mycompany/myrepo

Finally, keep in mind that environment variables can be used in place of any string or integer value, no matter the connection type - parameters such as pool sizes, usernames, timeouts or options of a similar nature can always be specified using variables specific to each environment, without a need to maintain config files for each environment independently.

Using Zato, it is easy to make IBM MQ queues available to Python applications - this article will lead you step-by-step through the process of setting up the Python integration platform to listen for MQ messages and to send them to MQ queue managers.

Zato and IBM MQ integrations

Prerequisites

  • First, install Zato - pick the system of your choice and follow the installation instructions
  • In the same operating system where Zato runs, install an IBM MQ Client matching the version of the remote queue managers that you will be connecting to. An MQ Client is a redistributable package that lets applications, such as Zato, connect to IBM MQ - if your queue manager is at version 9, you also need an MQ Client v9.

Further steps will assume that Zato and MQ Client are installed.

Configuring Zato

  • Install PyMQI - this is a low-level package that Zato uses for connections to IBM MQ. You can install it using pip - for instance, assuming that the 'zato' command is in /opt/zato/current/bin/zato, pip can be used as below. Note that we are using the same pip version that Zato ships with, not the system one.

    $ cd /opt/zato/current/bin
    $ ./pip install pymqi
    
  • Now, we need to enable IBM MQ connections in your Zato server - this needs to be done in a file called server.conf, e.g. assuming that your server is in the /opt/zato/env/dev/server1 directory, the file will be in /opt/zato/env/dev/server1/config/repo/server.conf

  • Open the file and locate the [component_enabled] stanza

  • Make sure that there is an entry reading "ibm_mq=True" in the stanza (by default it is False)

  • Save and close the file

  • If the server was running while you were editing the file, use 'zato stop' to stop it

  • Start the server with 'zato start'

  • If you have more than one Zato server, all the steps need to be repeated for each one
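
To double-check the setting programmatically, the stanza can be read with Python's built-in configparser - the fragment below assumes only that server.conf is an INI-style file with a [component_enabled] stanza, as described above:

```python
import configparser

# A fragment of server.conf with the relevant stanza
conf = """
[component_enabled]
ibm_mq=True
"""

parser = configparser.ConfigParser()
parser.read_string(conf)

print(parser.getboolean('component_enabled', 'ibm_mq'))  # True
```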

Understanding definitions, channels and outgoing connections

Most Zato connection types, including IBM MQ ones, are divided into two broad classes:

  • Channels - for messages sent to Zato from external applications and data sources
  • Outgoing connections - for messages sent from Zato to external applications and data sources

Moreover, certain connection types - including IBM MQ - make use of connection definitions which are reusable pieces of configuration that can be applied to other parts of the configuration.

For instance, IBM MQ credentials are used by both channels and outgoing connections so they can be defined once, in a definition, and reused in many other places.

Note that, if you are familiar with IBM MQ, you may already know what an MQ channel is - the term is the same but the concept does not map 1:1, because in Zato a channel always relates to incoming messages, never to outgoing ones.

Let's configure Zato using its web-admin dashboard. We shall assume that your queue manager's configuration is the following:

  • Queue manager: QM1
  • Host: localhost
  • Port: 1414
  • Channel: DEV.APP.SVRCONN
  • Username: app
  • Queue 1: DEV.QUEUE.1 (for messages to Zato)
  • Queue 2: DEV.QUEUE.2 (for messages from Zato)

But first, we will need some Python code.

Python API services

Let's deploy this module with two sample Zato services that will handle messages from and to IBM MQ.

You will note two aspects:

  • A Zato channel service is invoked each time a new message arrives in the queue the channel listens for - there is no MQ programming involved. Note that you can mount the same service on multiple Zato channels - it means that the service is reusable and a single one can wait for messages from multiple queues simultaneously.

  • The other service is a producer - it uses an outgoing connection to put messages on MQ queues. Again, you just invoke a method and Zato sends your message, there is no low-level MQ programming here. Just like with channels, it can be used for communication with multiple queues at a time.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class APIMQChannel(Service):
    """ Receives messages from IBM MQ queues.
    """
    name = 'api.mq.channel'

    def handle(self):

        # Our handle method is invoked for each message taken off a queue
        # and we can access the message as below.

        # Here is the business data received
        data = self.request.ibm_mq.data

        # Here is how you can access lower-level details,
        # such as MQMD or CorrelId.
        mqmd = self.request.ibm_mq.mqmd
        correl_id = self.request.ibm_mq.correlation_id

        # Let's log the message and some of its details
        self.logger.info('Data: %s', data)
        self.logger.info('Sent by: %s', mqmd.PutApplName)
        self.logger.info('CorrelId: %s', correl_id)

class APIMQProducer(Service):
    """ Sends messages to IBM MQ queues.
    """
    name = 'api.mq.producer'

    def handle(self):

        # Message to send as received on input,
        # without any transformations or deserialisation,
        # hence it is considered 'raw'.
        msg = self.request.raw_request

        # Outgoing connection to use
        conn = 'My MQ Connection'

        # Queue to send the message to
        queue = 'DEV.QUEUE.2'

        # Send the message
        self.outgoing.ibm_mq.send(msg, conn, queue)

        # And that's it, the message is already sent!

Connection definition

In web-admin, go to Connections -> Definitions -> IBM MQ and fill out the form as below. Afterwards, make sure to change your user's password by clicking Change password for the connection definition you have just created.

Zato web-admin connection definitions menu

Zato web-admin connection definitions creation form

Channel

Let's create a new Zato channel to receive messages sent from IBM MQ. In web-admin, create it via Connections -> Channels -> IBM MQ. In the Service field, use the channel service deployed earlier.

Zato web-admin channel creation form

Outgoing connections

Zato web-admin outgoing connection creation form

Testing the channel

We have configured everything as far as Zato goes and we can try it out now - let's start with channels. We can use IBM's MQ Explorer to put a message on a queue:

MQ Explorer message received

As expected, here is an entry from the Zato server log confirming that it received the message:

INFO - Data: This is a test message
INFO - Sent by: b'MQ Explorer 9.1.5           '
INFO - CorrelId: b'\x00\x00\x00\x00\x00\x00\x00\x00\x00...'

Testing the outgoing connection

The next step is to send a message from Zato to IBM MQ. We already have our producer deployed and we need a way to invoke it.

This means that the producer service itself needs a channel - for instance, if you want to make it available to REST clients, head over to these articles for more information about using REST channels in Zato.

For the purposes of this guide, though, it will suffice if we invoke our service from web-admin. To that end, navigate to Services -> Find "api.mq.producer" -> Click its name -> Invoker, and a form will show.

Enter any test data and click Submit - data format and transport can be left empty.

Zato web-admin message invoker

The message will go to your service and Zato will deliver it to the queue manager that "My MQ Connection" uses. We confirm it using MQ Explorer again:

MQ Explorer message received

Sending messages from the dashboard

At this point, everything is already configured but we can still go one better. It is often useful to be able to send test messages to queue managers directly from servers, without any service, which is exactly what can be done from an outgoing connection's definition page, as in these screenshots:

Zato web-admin send IBM MQ message listing

Zato web-admin send IBM MQ message form

Note that the message is sent from a Zato server, not from the dashboard - the latter is just a GUI that delivers your message to the server. This means that it is a genuine test of connectivity from your servers to remote queue managers.

Log files

Finally, it is worth keeping in mind that there are two server log files with details pertaining to communication with IBM MQ:

  • server.log - general messages, including entries related to IBM MQ
  • ibm-mq.log - low-level details about communication with queue managers

With that, we conclude this blog post - everything is set up and tested, and you are ready to integrate with IBM MQ in your Python projects now!
