This post provides steps needed to enable and make use of TOTP two-factor authentication in Zato web-admin - the method is compatible with the most popular apps such as Google Authenticator, Authy or FreeOTP.

The functionality is slated for release in Zato 3.1 and, currently, it is available via Zato source installation.

Modifying web-admin.conf

We start by letting web-admin know that it should require TOTP security codes from users. This is a global flag that applies to all users.

  • Stop web-admin
  • Open the web-admin.conf file
  • Set "is_totp_enabled" to true (note that the value is lower-case, without quotes), as in the snippet after this list
  • Save the file
  • Start web-admin again
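
For reference, web-admin.conf is a JSON file and, once modified, the relevant entry will read as below - this is only a sketch, with all other entries in the file omitted:

"is_totp_enabled": true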

Generating the initial TOTP key

Now that two-factor authentication with TOTP is enabled, we need an initial TOTP key for each user - otherwise they will not be able to log in at all.

The command line can be used to reset such keys for any user - in this case, resetting a key amounts to setting the initial one, as in the sample below, which resets the key for a web-admin user called admin:

$ zato reset-totp-key /path/to/web-admin admin
IDNSLY27ESRPWUI5
$

Such a key can now be transferred to an app of choice to generate time-based security codes that web-admin will require.
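
Under the hood, this is the TOTP standard from RFC 6238, which means the same key can also be used to compute codes programmatically. Below is a minimal sketch using the pyotp library - the library and the issuer name are illustrative choices here, not requirements:

import pyotp

# Build a TOTP object around the secret key generated earlier
totp = pyotp.TOTP('IDNSLY27ESRPWUI5')

# The current, time-based security code - the same one an authenticator app shows
print(totp.now())

# A provisioning URI that authenticator apps can import, e.g. via a QR code
print(totp.provisioning_uri('admin', issuer_name='Zato web-admin'))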

Logging in

The app that the key was saved to will now present security codes needed for logging in.

Each such code is valid for at most 30 seconds - in this way, a potential attacker has to contend with two factors: one is your password and the other is access to your app and the associated TOTP secret key. Replay attacks are not particularly feasible against such codes because they change frequently.

As always, even without TOTP, you should also make sure that login attempts to web-admin are rate-limited, by configuring your frontend load-balancer or proxy accordingly. With TOTP this is doubly important - the codes are relatively short, so without rate-limiting an attacker could try all of them very quickly.
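
For instance, if nginx happens to be the frontend proxy, rate-limiting could be sketched out as below - the zone name, the rate of five attempts a minute, the login path and the web-admin address are all assumptions to adjust to one's own environment:

# Allow at most 5 requests a minute to the login page from a single IP
limit_req_zone $binary_remote_addr zone=webadmin_login:10m rate=5r/m;

server {
    listen 443 ssl;

    # Rate-limit the login page only
    location /accounts/login/ {
        limit_req zone=webadmin_login burst=5;
        proxy_pass http://127.0.0.1:8183;
    }

    # Everything else is proxied as-is
    location / {
        proxy_pass http://127.0.0.1:8183;
    }
}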

Changing your TOTP key in web-admin

Having logged in, it is possible to generate secret keys directly in web-admin too. This option also outputs a QR code, so the key can be transferred to a mobile device simply by scanning it.

Summary

That is all - everything is set up and two-factor authentication will be enforced each time anyone logs in. TOTP is easy to configure and use so it is recommended to enable it as an additional security layer.

As of today, everyone building Zato from source will notice a significant decrease in time needed for servers to start and stop - it is now possible for an entire server to boot up to a fully functional state in less than two seconds and to shut down in half a second.

The changes in startup time were possible thanks to modifications to how services are deployed - deploying 700 services a second using just one CPU is now feasible, as in the snippet below.

Note that this means services that are actually ready to accept and process incoming messages - in this particular case, there were almost 600 internal services and 100+ user-defined ones.

It is easiest to observe it when a server is started in the foreground, using the --fg switch; after the initial messages comes the part where the server deploys all of its services - in this case, it took roughly 0.9 s.

$ zato start ./server1 --fg
2019-02-18 11:36:07,394 - Starting gunicorn 19.9.0
2019-02-18 11:36:07,396 - Listening at: http://0.0.0.0:17010 (4640)
2019-02-18 11:36:07,396 - Using worker: gevent
2019-02-18 11:36:07,398 - Booting worker with pid: 4662

2019-02-18 11:36:08,462 - Deploying cached internal services (server1)
2019-02-18 11:36:09,229 - Deployed 558 cached internal services (server1)
2019-02-18 11:36:09,230 - Deploying user-defined services (server1)
2019-02-18 11:36:09,365 - Deployed 122 user-defined services  (server1)

Similarly, only a fraction of a second is needed to stop a server. Below, the server was running in the foreground, so Ctrl-C sufficed to stop it - and it took 0.3 s for it to shut down.

^C
2019-02-18 11:46:22,060 - gunicorn.main:271 - Handling signal: int
2019-02-18 11:46:22,060 - Closing IPC (/pubsub/pid)
2019-02-18 11:46:22,061 - Closing IPC (/connector/config)
2019-02-18 11:46:22,075 - Stopping server process (server1:4662) (4662)
2019-02-18 11:46:22,163 - Process exiting (pid: 4662)
2019-02-18 11:46:22,361 - Shutting down: Main

All of it will be released in June 2019 as part of Zato 3.1 but, already today, one can simply install Zato from source and take advantage of the improvements.

Now that Python 3 support is available as a preview for developers, this post summarizes the effort that went into making sure that Zato works smoothly using both Python 2.7 and 3.x.

In fact, the work required was remarkably straightforward and trouble-free, and this article discusses the thought process behind it, along with some of the techniques applied and tools used.

Background

Zato is an enterprise API integration platform and backend application server. We support a couple dozen protocols and data formats, several sorts of IPC and other means to exchange messages across applications.

In other words, on the lowest level, passing bytes around, transforming, extracting, changing, collecting, manipulating, converting, encoding, decoding and comparing them, including support for all kinds of natural languages from around the world, is what Zato is about at its core when it is considered from the perspective of the programming language it is implemented in.

The codebase is around 130,000 lines of code, out of which Python and Cython account for 60,000 lines. This is not everything, though, because we also have 170+ external dependencies that need to work with both Python 2.7 and 3.x.

The work took two people a total of 80 hours, spread over a longer calendar period, except for the final sprint, which required more attention for several days in a row.

Preparations

Since the very beginning, it was clear that Python 3 would have to be supported one day, so the number one thing that each and every Python module has always had is this preamble:

from __future__ import absolute_import, division, print_function, unicode_literals

This is what every Python file contains and it easily saved 90% of any potential work required to support Python 3 because, among other less demanding things, it enforced a separation, though still not as strict as in Python 3, between bytes and Unicode objects. The separation is a good thing and the more one works with Python 3, the clearer it becomes.
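
For instance, under Python 2, the unicode_literals import alone suffices for plain string literals to become Unicode objects, exactly as in Python 3:

>>> from __future__ import unicode_literals
>>> type('abc')
<type 'unicode'>
>>>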

In Python 2, it was sometimes possible to mix the two. Imagine that there is a Python-derived language where JSON dicts and Python dicts can sometimes be used interchangeably.

For instance, this is a JSON object: {"key1": "value1"} and it so happens that it is also a valid Python dict so in this hypothetical language, this would work:

json = '{"key1": "value1"}'
python = {'key2': 'value2'}

result = json + python

Now the result is this:

{'key1': 'value1', 'key2': 'value2'}

Or wait, perhaps it should be this?

'{"key1": "value1", "key2": "value2"}'

This is the central thing - they are distinct types and they should not be mixed merely because they may be related or seem similar.

Conceptually, just like upon receiving a JSON request from the network a Python application will decode it into a canonical representation, such as a dict, list or another Python object, the same should happen to other bytes, including ones that happen to represent text or similar information. In this case, the canonical format is called Unicode, and that is the whole point of employing it in one's application.
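
A minimal illustration of that boundary in Python - bytes arrive from the network, are decoded once into text and only canonical objects are used from then on:

import json

# Bytes as received from the network ..
data = b'{"key1": "value1"}'

# .. decoded once, at the boundary, into canonical text (Unicode) ..
text = data.decode('utf8')

# .. and parsed into a canonical Python object, here a dict.
parsed = json.loads(text)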

All of this was clear from the outset and the from __future__ statements helped in its execution, even if theoretically one could still have mixed bytes and Unicode - it was simply a matter of using the correct canonical format in a given context, i.e. a case of making sure the architecture was clean.

This particular __future__ statement was first announced in 2008, so there was plenty of time to prepare for it.

As part of the preparations, it is good to read a book about Unicode. Not just a 'Unicode for overburdened developers' kind of article but an actual book that will let one truly appreciate the standard's breadth and scope. While reading it, do not resist the temptation to learn at least the basics of two or more natural languages that you never knew about before. It will only help you develop into a better person and this is not a joke.

While programming with bytes and Unicode, it is convenient to forget whether something is a 'str', 'bytes' or 'unicode' object - it is easier to simply think about bytes and text. There are bytes that can mean anything and there is text whose native, canonical form is Unicode. This is not always 100% accurate because Unicode can represent marvellous gems such as Byzantine musical notation and more, but if a given application's scope is mostly constrained to text then this will work - there are bytes and there is text.

This is all fine with our own code but there are still the external libraries that Zato uses and some of them will want bytes, not text, or the other way around, in seemingly similar situations. There can even be cases such as a library expecting protocol header keys to be text and protocol header values to be bytes, for rather unclear reasons. Simply accept it as a fact of life and move on with your work - there is no need to pause even for a moment to think about it.

Side projects

It was good to try out Python 3 first in a few new, smaller side-projects - GUI or command-line tools that are not part of the core yet are important in the overall picture. The most important part was that creating a Python 3 application from scratch was in no way different than in Python 2; this served as a gentle introduction to Python 3-specific constructs and the knowledge was easily transferred later on to the main porting job.

Dependencies

Out of a total of 170+ dependencies, around 10 were not Python 3-compatible. None of them had been updated in eight, twelve or more years. At this point, it is safe to assume that if a dependency was last updated in 2009 and has no Python 3 support, then it never will.

What to do next depended on the particular case - each of them was some kind of a convenience library; sometimes they had to be dropped and sometimes forked. The most complex changes required in a fork were on the level of updating 'print' to 'print()' or doing away with complex installation setups that predated contemporary pip-based configuration options.

Other than that, there were no issues with dependencies, all of them were ready for Python 3.

Idioms and imports

Most of the reference information needed to make use of both Python 2 and 3 was available via the python-future project, which itself is of great assistance. Installing this library, along with its dependencies, sufficed for 99% of cases. There were some lesser requirements that were incorporated into a Zato-specific submodule directly, e.g. sys.maxint is at times useful as a loop terminator but ints in Python 3 have no limits, so an equivalent had to be added to our own code, as in the sketch below.
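
Such an equivalent can be expressed along the lines below - the exact bound is an assumption made for illustration, anything large enough for the loops at hand will do:

try:
    # Python 2
    from sys import maxint
except ImportError:
    # Python 3 - ints are unbounded, so an explicit, sufficiently large limit is chosen
    maxint = 2 ** 63 - 1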

Note that the page above does not show all the idioms and some changes were not always immediately obvious, like modifications to __slots__ or the way metaclasses can be declared, but there were no truly impossible cases - just different things to use, either built into Python 3 or available via the future or six libraries.

A nice thing is that one is not required to change all the imports in one go - they can be changed in smaller increments, e.g. 'basestring' is still available in the form of 'from past.builtins import basestring'.
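
For instance, a Python 2-style type check can stay unchanged under both versions - a small sketch below, with the function itself being purely illustrative:

from past.builtins import basestring

def ensure_string_like(value):
    # Accepts str and unicode under Python 2, str and bytes under Python 3
    if not isinstance(value, basestring):
        raise TypeError('Expected a string-like object, got {!r}'.format(value))
    return value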

Testing

A really important aspect of the migration was the ability to test sub-components of an application in isolation. This includes not only unit tests, which may be too low-level, but also things such as starting only selected parts of Zato without having to boot up whole servers, which in turn meant that each change could be tested within one second rather than ten. To a degree, this was an unexpected but really useful test of how modular our design was.

Intellectually, this was certainly the most challenging part because it required maintaining and traversing several trains of thought at once, sometimes for several days on end. This, in turn, means that it really is not a job for late afternoons only and it cannot be an afterthought - things can simply get complex very quickly.

String formatting

There is one thing that was not expected - the way str.format works with bytes and text.

For instance, this will fail in Python 3:

>>> 'aaa' + b'bbb'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't convert 'bytes' object to str implicitly
>>> 

Just for reference, in Python 2 it does not fail:

>>> 'aaa' + b'bbb'
'aaabbb'
>>> 

Still under Python 2, let's use string formatting:

>>> template = '{}.{}'
>>> template.format('aaa', b'bbb')
'aaa.bbb'
>>>

In Python 3, this is the result:

>>> template = '{}.{}'
>>> template.format('aaa', b'bbb')
"aaa.b'bbb'"
>>> 

In the context of a Python 3 migration, it would probably have been more in line with the other changes to the language if this had been special-cased to reject such constructs altogether.

Otherwise, it initially led to rather inexplicable error messages because the code that produces such string constants may be completely unaware of where they are used further on. But after it had been witnessed once or twice, the root cause was apparent and could be easily dealt with.
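
Once spotted, the remedy is simple - decode bytes to text (or encode text to bytes) before formatting, as below in Python 3:

>>> template = '{}.{}'
>>> template.format('aaa', b'bbb'.decode('utf8'))
'aaa.bbb'
>>>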

Things that are missed

One small, yet convenient, feature of Python 2 was the availability of some of the common codecs directly in string objects, e.g.:

>>> u'abc'.encode('hex')
'616263'
>>> u'abc'.encode('base64')
'YWJj\n'
>>> u'ελληνική'.encode('idna')
'xn--jxangifdar'
>>>

This will not work as-is in Python 3:

>>> u'abc'.encode('hex')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs
>>>

Naturally, the functionality as such is still available in Python 3, just not via the same means.
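
For instance, hex and base64 are now bytes-to-bytes codecs, reachable through the codecs and base64 modules, while idna remains a text codec:

>>> import codecs
>>> codecs.encode(b'abc', 'hex')
b'616263'
>>> import base64
>>> base64.b64encode(b'abc')
b'YWJj'
>>> 'ελληνική'.encode('idna')
b'xn--jxangifdar'
>>>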

Python 2.7

On the server side, Python 2.7 will be around for many years. After all, this is a great language that has let millions of people complete amazing projects and most enterprise applications do not get rewritten solely because one of their technical components (here, Python) changes in a way that is partly incompatible with previous versions.

Both RHEL and Ubuntu ship with Python 2.7 and both of them have long-term support well into the 2020s so the language as such will not go away. Yet, piece by piece, all the applications will be changed, modified, modularized or rewritten and gradually Python 2.7's usage will diminish.

In Zato, Python 2.7 will be supported for as long as it is feasible and one of the current migration's explicit goals was to make sure that existing user environments based on Python 2.7 continue to work out of the box - it makes no difference which Python version one chooses, as both are supported and can be used.

Summary

An extraordinary aspect of the migration is that it was so unextraordinary. There were no really hard-won battles, no true gotchas and no unlooked-for hurdles. This can likely be attributed to the following facts:

  • Python developers offered information on what to expect during such a job
  • Unicode was not treated as an afterthought
  • Zato reuses common libraries that have all been ported to Python 3 already
  • The Internet offers guides, hints and other pieces of information about what to do
  • It was easy to test Zato components in isolation
  • Time was explicitly put aside for the most difficult parts, without having to share it with other tasks

The next version of Zato, to be released in June 2019, will come with pre-built packages using Python 2.7 and 3, but for now installation from source is needed - visit this forum thread for more details.

This is part one of a mini-series about working with IBM MQ as a Zato and Python user. This installment will cover installation and configuration whereas the next one will delve into programming tasks.

Zato is a Python-based multi-protocol API integration platform, message broker and backend application server. IBM MQ is one of the protocols and products that it supports out of the box, making it possible to integrate with IBM MQ systems, including JMS ones, with little or no programming at all.

Step-by-step

This is what the article covers. The end result will be a working Zato-IBM MQ installation, capable of both receiving messages from and sending messages to IBM MQ queue managers.

  • Installing a Zato environment
  • Enabling IBM MQ connections in Zato
  • Installing IBM MQ Client
  • Installing PyMQI
  • Configuring Zato connection definitions
  • Zato outgoing connections
  • Zato channels

Installing a Zato environment

  • The first step is to install a Zato package. To work with IBM MQ, you can choose any of the systems supported by Zato. Ubuntu is used here but everything works the same regardless of whether it is Ubuntu, RHEL or any other supported OS.

  • Next, you need to create a Zato environment. The easiest way is to use a quickstart cluster which sets up a working Zato cluster in one command.

  • Alternatively, you can follow the tutorial, which will guide you through the same process in more detail

Enabling IBM MQ connections in Zato

  • Once you have a Zato environment ready, you need to stop all of its servers and enable IBM MQ connections in the server configuration files

  • Stop all the servers

  • Open the server.conf file of each server

  • Find the [component_enabled] stanza

  • Set ibm_mq=True in the stanza, as in the snippet after this list

  • Do not start the servers again just yet
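
For reference, the relevant part of each server.conf will read as below - the stanza contains other entries too, which are omitted here:

[component_enabled]
ibm_mq=True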

Installing IBM MQ Client

  • To make it possible to connect to queue managers via TCP, it is required to install a software package called IBM MQ Client. This package contains runtime libraries that let applications, such as Zato, use TCP connections with IBM MQ.

  • Download the client and follow its installation procedure as described by IBM

Installing PyMQI

  • With the client package in place, it is now possible to install PyMQI, a low-level IBM MQ client library for Python - it could not have been installed in previous steps because it requires the IBM MQ Client as its prerequisite

  • To install PyMQI, navigate to Zato installation directory and use pip to download PyMQI:

  cd /opt/zato/current
  ./bin/pip install pymqi
  • This concludes the OS-level preparations and installation steps

  • Now all the servers can be brought back using the zato start command; optionally, the PyMQI installation can also be verified directly from Python first, as in the snippet below
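
Below is a minimal connectivity check to run with Zato's own Python - the queue manager name, channel, address and credentials are placeholders to replace with real connection details:

# Run with /opt/zato/current/bin/python
import pymqi

# Placeholder connection details
queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
conn_info = '10.151.0.10(1414)'
user = 'app'
password = 'password'

# Connect, print out the queue manager's name and disconnect
qmgr = pymqi.connect(queue_manager, channel, conn_info, user, password)
print(qmgr.inquire(pymqi.CMQC.MQCA_Q_MGR_NAME))
qmgr.disconnect()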

Configuring Zato connection definitions

  • Log in to web-admin and navigate to Connections -> Definitions -> IBM MQ

  • A connection definition is a reusable piece of configuration - a common set of information that can be used in more than one place; in this context, it allows one to create both incoming MQ connections (Zato channels) and outgoing ones. Note that Zato channels share their name with MQ channels but they are an unrelated concept, only distantly similar.

  • Create a definition by filling out the form as below. You need to change the connection's password because by default it is a randomly generated one that cannot be used. Afterwards, you can click Ping to confirm that connections to your remote queue manager work correctly.

Zato outgoing connections

  • In web-admin, create a new IBM MQ outgoing connection via Connections -> Outgoing -> IBM MQ

  • An outgoing connection lets one push and send messages to other systems, in this case it will let you send messages to MQ queues

  • A single connection is tied to a particular connection definition, which means that it is related to the queue manager the definition points to, but it can be used with any number of MQ queues as needed

  • Once a connection is defined, it is possible to start using it from Python code but in this post, let's send a test message directly from web-admin

Zato channels

  • In web-admin, create a new IBM MQ channel through Connections -> Channels -> IBM MQ

  • A Zato channel acts as a message listener, accepting messages from a particular queue and invoking a user-defined API service that acts accordingly, e.g. by transforming the message and delivering it to intended recipients

  • No programming is needed to accept messages from a queue - the very fact of creating a channel lets Zato automatically consume messages from the channel's queue in the background

  • Any user-defined service in the channel can be used, but below, just for illustration purposes, a built-in service called helpers.ibm-mq-logger is employed. This is a convenience service that simply saves to Zato server logs all the information about each message taken off a queue, including the message's data and metadata, such as headers and MQMD.

  • To send a message to Zato, MQ's own command line utility is used below - the full command, which can be executed from the system that MQ runs on, is /opt/mqm/samp/bin/amqsput DEV.QUEUE.1 QM1

  • If you do not have access to the MQ command line, you can simply create an outgoing connection in Zato and use the Send message form in web-admin to send a message that will be received by a channel. It is not shown here but it would work just as well.

Summary

This is it. Your Zato installation is now configured to send and accept IBM MQ messages and, in the next part, Python services will be used to actually process them in a useful manner, e.g. by enriching their contents and sending them out to other applications.

For more information - visit the main documentation site or go straight to the tutorial and stay tuned for the next article.

Zato publish/subscribe message queues and topics offer several ways to gain insight into the inner workings of all the components taking part in message delivery and this article presents an overview of the mechanisms available.

Two types of logging

Logging is broken out into two categories:

  • Files or other destinations - there are several files where messages and events may be stored
  • GUI in web-admin - dedicated parts of web-admin let one check runtime configuration and observe events taking place in internal components

Files or other destinations

By default, several files are employed to keep information about pub/sub messages, each file stores data with its own characteristics. As with other logging files, the details such as logging level are kept in the logging.conf file for each server in a cluster.

Note that logging to files is just what pub/sub uses in its default settings - everything in logging.conf is based on Python's standard logging facilities, so it is possible to reconfigure it in any way desired to send logging entries to other destinations instead of, or in addition to, what the configuration specifies by default. Likewise, it is possible to disable the files altogether if they are not needed.
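
Since these are ordinary Python loggers, reconfiguration can also be done programmatically - a minimal sketch below, assuming for illustration a logger named zato_pubsub and adding syslog as an extra destination:

import logging
import logging.handlers

# The logger name below is an assumption made for illustration purposes
logger = logging.getLogger('zato_pubsub')

# Send entries to the local syslog in addition to existing destinations
handler = logging.handlers.SysLogHandler(address='/dev/log')
logger.addHandler(handler)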

General log

zato_pubsub is the main file with internal events pertaining to the handling of pub/sub messages. Any time a message is published or received by one of the endpoints, a new log entry is added here.

Moreover, the file contains plenty of extra information - whether there are any matching subscribers for a message, what their subscription keys are, whether they are currently connected, what happened to the message if there were none, whether the message could not be delivered and what the reason was, how many delivery attempts there have been so far, plus all the relevant timestamps, headers and other metadata.

In short, it is the full life-cycle of publications and subscriptions for each message processed. This is the place where all the details of what is going on under the hood can be found.

Audit log

Unlike the file above, the whole purpose of zato_pubsub_audit is to save data and metadata of all the messages received and published to topics - only that.

There is no additional information on what happened to each message, when and where it was delivered - it is purely an audit log of all the messages that Zato published.

It is at times convenient to use it precisely because it has basic information only, without any details.

Prevention of data loss

The whole purpose of zato_pubsub_overflow is to retain messages that would be otherwise dropped if topics reached their maximum depth yet subscribers were slow to consume them.

Consider the topic below - it has a maximum depth of 10,000 in-RAM messages. If there is a steady inflow of messages to this topic but subscribers do not consume them in a continuous manner, meaning that the messages are not placed in their queues in a timely fashion, this limit, the maximum depth, will eventually be reached.

The question then is, what should be done with such topics overflowing with messages? By default, Zato will save them to disk just in case they should be kept around for manual inspection or resubmission.

If this is not needed, zato_pubsub_overflow can be configured to log on level WARN or ERROR. This will disable logging to disk and any such superfluous messages will be ignored.

Web-admin GUI

There are several web-admin screens that let one understand what kind of internal events and structures participate in publish/subscribe:

  • Delivery tasks - their job is to deliver messages from subscriber queues to endpoints
  • Sync tasks - they are responsible for internal synchronization of state, data and metadata between topics and queues

Of particular interest is the event log - it shows step-by-step what happens to each message published in terms of decision making and code branches.

Note that the log has a maximum size of 1,000 events, with new events replacing older ones - thus, on a busy server, it will suffice for a few seconds at most and it is primarily of importance in environments with low traffic to topics, such as development and test ones.

Note that runtime pub/sub information in web-admin is always presented for each server process separately - for instance, if a server is restarted, its process ID (PID) will likely change and all the data structures will be repopulated. The same holds for the event log, which is not preserved across server restarts.

Summary

Publish/subscribe messages and their corresponding internal mechanisms are covered by a comprehensive logging infrastructure that lets one understand both the overall picture as well as low-level details of the functionality.

From the audit log, through events, to the tracing of individual server processes - each and every angle is addressed in order to make sure that topics and queues work in a smooth and reliable way.