Managing server objects en masse

Overview

In addition to the GUI, CLI and API, you can use the enmasse command-line tool to manage ODB-based server objects as a group. This lets you store each piece of configuration as a set of JSON files in a repository, where they can be versioned or backed up, and use enmasse to perform roll-outs from the command line without resorting to the GUI or other methods.

Enmasse does not deal with services; it is concerned with server objects only, that is, everything but services: channels, scheduler jobs, security definitions and other types. How to deploy services is documented in a separate chapter.

Enmasse never deletes existing objects; deletions need to be performed using the GUI or API.

Tasks you can use enmasse for

  • Exporting server objects from a cluster into JSON
  • Preparing cluster configuration in JSON and merging it with the contents of an already existing cluster
  • Importing objects from an export file into clusters, optionally replacing objects that already exist

Usage scenarios

  • You create a development cluster and quickly prototype a solution using the GUI. For test and production, however, you prefer to store the config in a git repository. Hence you use enmasse to export all server objects from the dev cluster into JSON. The JSON is then imported into test and, later on, into production.
  • You have an already existing cluster and want to add or replace server objects. You prepare a JSON config file and import it to a cluster.
  • You have a team of programmers each responsible for a part of a solution and you want to limit the number of conflicts in a central config repository. You split the local JSON config file into multiple smaller ones each of which can be included by enmasse to form a whole that will be imported as a single unit.

Installing

  • Make sure INSTALL_DIR/bin is on $PATH, where INSTALL_DIR is the directory you installed Zato binaries to. This is where the ‘py’ command needed by enmasse resides (note that it’s ‘py’, not ‘python’).

  • Download enmasse and display its help to confirm the download was successful

    $ curl -O https://zato.io/download/enmasse
    $ py enmasse --help
    usage: enmasse [-h] [--store-log] [--verbose] [--store-config]
                   [--export-local] [--export-odb] [--import]
                   [--ignore-missing-defs] [--replace-odb-objects] [--input INPUT]
                   [--cols_width COLS_WIDTH] [--path PATH]
    
    Manages server objects en masse.
    
    optional arguments:
      -h, --help            show this help message and exit
      --store-log           Whether to store an execution log
      --verbose             Show verbose output
      --store-config        Whether to store config options in a file for a later
                            use
      --export-local        Export local JSON definitions into one file (can be
                            used with --export-odb)
      --export-odb          Export ODB definitions into one file (can be used with
                            --export-local)
      --import              Import definitions from a local JSON (excludes
                            --export-*)
      --ignore-missing-defs
                            Ignore missing definitions when exporting to JSON
      --replace-odb-objects
                            Force replacing objects already existing in ODB during
                            import
      --input INPUT         Path to an input JSON document
      --cols_width COLS_WIDTH
                            A list of columns width to use for the table output,
                            default: 15,100
      --path PATH           Path to a running Zato server
    $

Note

Enmasse will become 'zato enmasse', part of the CLI, in one of the future Zato releases.

Workflow

  • Either export objects from an already existing cluster to a JSON file or create the file from scratch; the format is given in later sections.
  • Customize the JSON config file, optionally splitting it into smaller units that can be included later on.
  • Combine all the includes, optionally merging ODB objects from a cluster of choice, into an export file. This is a file that can be imported to clusters. Note that its format is not documented and may change without prior notice.
  • Import objects from an export file, optionally replacing objects that already exist in a cluster.

Notes

  • Both --export-* arguments always need an input JSON file given in --input. If you're just starting out, put an empty dictionary {} into the file; the file itself must exist and must not be zero-length.

  • --path is always required and serves a dual role.

    If you're exporting data, it must be the path to a running server you're exporting data from. Just point it to any existing server directory, even if you're not exporting data from ODB. The requirement to point --path to a server even when ODB is not used will be lifted in future versions.

    If you're importing data, --path must be the path to a running server you're importing data into.

  • For exports, --input is an input JSON file out of which an export file should be created.

    For imports, --input is an export file out of which definitions should be imported into a cluster.

  • For imports, the order enmasse works in is guaranteed to be:

    • Existing definitions are updated
    • Existing objects other than definitions are updated
    • New definitions are created
    • New objects other than definitions are created

    This way, objects that depend on definitions can be sure the definitions they need already exist in the ODB.

  • If channels or outgoing connections are active and they require connectors, the connectors will be started right away. This may mean a flurry of activity on a cluster's connector server. It may be worthwhile to import such channels and outgoing connections in an inactive state and enable them manually later on.

  • When creating an export file, enmasse tries to report as many warnings and errors in one go as possible. When importing a file, it does the opposite and stops as soon as it encounters anything it can't deal with, giving you a chance to look into the erroneous situation as early as possible.
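Putting the notes above together, a minimal first run can be sketched as follows. This is an illustrative sketch only: the server path /opt/server1/ is an example, and the enmasse invocation itself is shown as a comment since it requires a running server.

```shell
# Create the smallest valid input file - it must exist and
# contain at least an empty JSON dictionary.
echo '{}' > input1.json

# A local export would then be run against any existing server
# directory (shown as a comment only - adjust the path first):
#   py enmasse --path /opt/server1/ --input input1.json --export-local

# Confirm the input file's contents
cat input1.json
```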

What arguments to use depending on use-case

Use case                                                                             Arguments to use
Combining several JSON includes into one                                             --export-local
Exporting ODB objects to a JSON file                                                 --export-odb
Merging JSON and ODB objects into one                                                --export-local --export-odb
Ignoring that a resulting JSON file has some definitions missing                     --ignore-missing-defs
Importing objects from a JSON export file, bailing out if any object already exists  --import
Importing objects from a JSON export file, replacing objects that already exist      --import --replace-odb-objects

Usage samples

  • Creating an export file out of a JSON file, pulling in all its includes

    $ py enmasse --path /opt/server1/ --input input1.json --export-local
    Includes merged in successfully
    Data exported to /opt/zato/zato-export-2013-06-12T20_10_19_874355.json
    $
  • Creating an export file out of a JSON file, pulling in all its includes along with any objects from ODB that are not Zato-internal ones

    $ py enmasse --path /opt/server1/ --input input1.json --export-local --export-odb
    Includes merged in successfully
    ODB objects read
    ODB objects merged in
    Data exported to /opt/zato/zato-export-2013-06-12T20_11_57_306991.json
    $
  • Importing an export JSON file on a cluster without setting a flag that already existing objects should be replaced

    $ py enmasse --path /server1/ --input export.json --import
    2 warnings and 0 errors found:
    +-----------------+--------------------------------------------------------------+
    |       Key       |                            Value                             |
    +=================+==============================================================+
    | warn0001/W01    | {u'name': u'channel-amqp1', u'service': u'zzz.my-service',   |
    | already exists  | u'is_active': False, u'queue': u'MYQUEUE',                   |
    | in ODB          | u'consumer_tag_prefix': u'myconsumer', u'def': u'myamqp1'}   |
    |                 | already exists in ODB {u'def_name': u'myamqp1', u'name': u   |
    |                 | 'channel-amqp1', u'service_name': u'zzz.my-service',         |
    |                 | u'is_active': False, u'data_format': None, u'queue':         |
    |                 | u'MYQUEUE', u'def_id': 3L, u'consumer_tag_prefix':           |
    |                 | u'myconsumer', u'id': 1L} (channel_amqp)                     |
    +-----------------+--------------------------------------------------------------+
    | warn0002/W01    | {u'name': u'channel-amqp2', u'service': u'zzz.my-service',   |
    | already exists  | u'is_active': False, u'queue': u'MYQUEUE',                   |
    | in ODB          | u'consumer_tag_prefix': u'myconsumer', u'def': u'myamqp1'}   |
    |                 | already exists in ODB {u'def_name': u'myamqp1', u'name': u   |
    |                 | 'channel-amqp2', u'service_name': u'zzz.my-service',         |
    |                 | u'is_active': False, u'data_format': None, u'queue':         |
    |                 | u'MYQUEUE', u'def_id': 3L, u'consumer_tag_prefix':           |
    |                 | u'myconsumer', u'id': 2L} (channel_amqp)                     |
    +-----------------+--------------------------------------------------------------+
    $
  • Importing an export JSON file on a cluster with a flag to replace already existing objects set

    $ py enmasse --path /server1/ --input export.json --import --replace-odb-objects
    Updated object 'myjmswmq2' (def_jms_wmq)
    Updated object 'myjmswmq3' (def_jms_wmq)
    Updated object 'channel-amqp1' (channel_amqp)
    Updated object 'channel-amqp2' (channel_amqp)
    Updated object 'ftp1' (outconn_ftp)
    Password missing but not required 'ftp1' (outconn_ftp)
    Updated object 'ftp2' (outconn_ftp)
    Password missing but not required 'ftp2' (outconn_ftp)
    Created object 'channel-myhttp3' (http_soap)
    Created object 'myhttp1' (http_soap)
    Created object 'myhttp2' (http_soap)
    $

Dissecting a JSON config file

Note

  • Unlike when using the GUI or API, the security definition names you use in JSON must be unique in a cluster. That is, not only can there not be, say, two HTTP Basic Auth definitions known as 'mysecurity-def', it is also an error to have an HTTP Basic Auth definition and a technical account or WS-Security one under the same name.
  • If a given HTTP object, such as a channel or an outgoing connection, should not use any security definition ('sec_def') at all, use the special constant string 'zato-no-security' to signal it. It cannot be None/null; it must be this exact string.
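For instance, a hypothetical plain HTTP channel with no security definition could be declared as follows (all names are examples only):

```json
"channel_plain_http": [
    {"name": "channel-nosec", "sec_def": "zato-no-security",
     "host":"example.com", "connection":"channel", "is_active":true,
     "is_internal":false, "transport":"plain-http", "url_path":"/nosec"}
]
```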

Configuration is kept in a JSON file that contains a dictionary of well-known keys and a list of values for each key.

Keys are the types of server objects the config contains. Values are objects themselves written either as dictionaries or as links to files containing the configuration of each object individually - one file is one object. If not absolute, paths to files are relative to the main config file.
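For example, a list under a given key may freely mix inline dictionaries with includes. In the hypothetical fragment below, one AMQP definition is kept inline while another is pulled in from a file resolved relative to the main config file's directory:

```json
"def_amqp": [
    {"name": "myamqp1", "host":"localhost", "port":5672, "username":"user1",
     "password":"password1", "frame_max":1980, "heartbeat":30, "vhost":"/zato1"},

    "file://./myinclude2.json"
]
```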

All the supported keys are discussed below. The values are almost always equivalent to the API calls used to manage the corresponding objects; only the differences are documented.

You never use any IDs in JSON config files; only human-readable names and types are used.

Passwords are always given in clear-text.

When using definitions, they don't necessarily have to exist anywhere at the time the input JSON is created; however, a warning will be displayed if an object depends on a definition that doesn't exist at that time. You can skip this warning using the --ignore-missing-defs switch when invoking enmasse. This may make sense if you know that such definitions will be available by the time the export file is imported into a cluster.

Key                  Notes
def_sec              Security definitions. 'type' must be one of 'basic_auth', 'wss' or 'tech_acc' and determines whether the remaining attributes are those of an HTTP Basic Auth, technical account or WS-Security definition.
def_amqp             AMQP connection definitions. 'password' is required.
def_jms_wmq          JMS WebSphere MQ connection definitions.
channel_amqp         AMQP channels. 'def' is the AMQP connection definition to use.
channel_jms_wmq      JMS WebSphere MQ channels. 'def' is the JMS WebSphere MQ connection definition to use.
channel_plain_http   Plain HTTP channels. 'sec_def' is the security definition to use. 'connection' and 'transport' can be skipped.
channel_soap         SOAP channels. 'sec_def' is the security definition to use. 'connection' and 'transport' can be skipped.
channel_zmq          ZeroMQ channels.
outconn_amqp         Outgoing AMQP connections. 'def' is the AMQP connection definition to use.
outconn_ftp          Outgoing FTP connections.
outconn_jms_wmq      Outgoing JMS WebSphere MQ connections. 'def' is the JMS WebSphere MQ connection definition to use.
outconn_plain_http   Outgoing plain HTTP connections. 'sec_def' is the security definition to use. 'connection' and 'transport' can be skipped.
outconn_soap         Outgoing SOAP connections. 'sec_def' is the security definition to use. 'connection' and 'transport' can be skipped.
outconn_sql          Outgoing SQL connections.
outconn_zmq          Outgoing ZeroMQ connections.
scheduler            Scheduler jobs.

Sample JSON config files

input1.json

{
   "def_sec": [
       {"type": "basic_auth", "name":"mybasicauth", "realm":"myrealm",
         "password":"mypassword1", "is_active":true, "username":"mybasicauthuser"},

       {"type": "wss", "name":"mywss", "password":"mypassword2",
        "is_active":true, "nonce_freshness_time":8000, "password_type":"clear_text",
        "reject_empty_nonce_creat":true, "reject_expiry_limit":2800,
        "reject_stale_tokens":true, "username":"myuser"},

       {"type": "tech_acc", "name":"mytechacc", "password":"mypassword3",
        "is_active":true}
   ],

   "def_amqp": [
       {"name": "myamqp1", "host":"localhost", "port":5672, "username":"user1",
        "password":"password1", "frame_max":1980, "heartbeat":30,
        "vhost":"/zato1"},

       {"name": "myamqp2", "host":"localhost", "port":25672, "username":"user2",
        "password":"password2", "frame_max":290290, "heartbeat":230,
        "vhost":"/zato2"},

       "file://./myinclude2.json"
   ],

   "def_jms_wmq": [
       {"name": "myjmswmq", "host":"localhost3", "port":31415,
        "cache_open_receive_queues":true, "cache_open_send_queues":true,
        "channel": "SVRCONN.1", "max_chars_printed":500, "needs_mcd":true,
        "queue_manager":"QM01", "ssl":false, "use_shared_connections":true},

       {"name": "myjmswmq2", "host":"localhost3", "port":31415,
        "cache_open_receive_queues":true, "cache_open_send_queues":true,
        "channel": "SVRCONN.1", "max_chars_printed":500, "needs_mcd":true,
        "queue_manager":"QM01", "ssl":false, "use_shared_connections":true},

       "file://./myinclude3.json"
   ],

   "channel_amqp": [
       {"name": "channel-amqp1", "def": "myamqp1",
        "consumer_tag_prefix":"myconsumer", "is_active":true, "queue":"MYQUEUE",
        "service":"myservice"},

       {"name": "channel-amqp2", "def": "myamqp1",
        "consumer_tag_prefix":"myconsumer", "is_active":true, "queue":"MYQUEUE",
        "service":"myservice"}
   ],

   "channel_jms_wmq": [
       {"name": "channel-jms-wmq1", "def": "myjmswmq2", "is_active":true,
        "queue":"Q1", "service":"myservice1"},

       {"name": "channel-jms-wmq2", "def": "myjmswmq3", "is_active":true,
       "queue":"Q2", "service":"myservice2"}
   ],

   "channel_plain_http": [
       {"name": "channel-myhttp2", "sec_def": "mywss",
        "host":"test-channel-12.example.com", "connection":"channel",
        "is_active":true, "is_internal":false, "transport":"plain-http",
        "url_path":"/path2"},

       {"name": "channel-myhttp3", "sec_def": "mytechacc",
        "host":"test-channel-13.example.com", "connection":"channel",
        "is_active":true,
        "is_internal":false, "transport":"plain-http",
        "url_path":"/path3"}
   ],

   "channel_soap": [
       {"name": "channel-soap1", "sec_def": "zato-no-security",
        "host":"test-channel-11.example.com", "is_active":true,
        "is_internal":false, "url_path":"/path1", "connection":"channel",
        "transport":"soap"},

       {"name": "channel-soap2", "sec_def": "mytechacc",
        "host":"test-channel-11.example.com", "is_active":true,
        "is_internal":false, "url_path":"/path1", "connection":"channel",
        "transport":"soap"}
   ],

   "channel_zmq": [
       {"name": "mychannel-zmq1", "address": "tcp://channel-zmq1.example.com",
        "is_active":true, "service":"service-zmq1", "socket_type":"PULL"},

       {"name": "mychannel-zmq2", "address": "tcp://channel-zmq2.example.com",
        "is_active":true, "service":"service-zmq2", "socket_type":"SUB"},

       "file://./myinclude4.json"
   ],

   "outconn_amqp": [
       {"name": "outconn-amqp1", "def": "myamqp1", "delivery_mode":1,
        "is_active":true, "priority":5},

       {"name": "outconn-amqp2", "def": "myamqp2", "delivery_mode":1,
       "is_active":true, "priority":5}
   ],

   "outconn_ftp": [
       {"name": "ftp1", "username": "myftp1", "dircache":false,
        "host":"ftp1.example.com", "is_active":true, "port":10121},

       {"name": "ftp2", "username": "myftp2", "dircache":true,
        "host":"ftp2.example.com", "is_active":true, "port":20121}
   ],

   "outconn_jms_wmq": [
       {"name": "jmswmq1", "def": "myjmswmq", "delivery_mode":1,
        "is_active":true, "priority":5},

       {"name": "jmswmq2", "def": "myjmswmq", "delivery_mode":1,
        "is_active":true, "priority":5},

       "file://./myinclude5.json"
   ],

   "outconn_plain_http": [
       {"name": "myhttp1", "sec_def": "mybasicauth",
        "host":"test11.example.com", "connection":"outgoing", "is_active":true,
        "is_internal":false, "transport":"plain-http", "url_path":"/out/1"},

       {"name": "myhttp2", "sec_def": "mywss",
        "host":"test22.example.com", "connection":"outgoing", "is_active":true,
        "is_internal":false, "transport":"plain-http", "url_path":"/out/2"}
   ],

   "outconn_soap": [
       {"name": "mysoap1", "sec_def": "mybasicauth",
        "host":"test1.example.com", "connection":"outgoing", "is_active":true,
        "is_internal":false, "transport":"soap", "url_path":"/my/soap/service/1"},

       {"name": "mysoap2", "sec_def": "mytechacc", "host":"test2.example.com",
        "connection":"outgoing", "is_active":true,
        "is_internal":false, "transport":"soap", "url_path":"/my/soap/service/2"}
   ],

   "outconn_sql": [
       {"name": "mysql1", "username": "mysql2", "password":"mypassword1",
        "engine":"postgresql", "db_name": "mydb", "host": "localhost",
        "is_active": true,  "pool_size":2, "port": 1999},

       {"name": "mysql2", "username": "mysql2", "password":"mypassword2",
        "engine":"postgresql", "db_name": "mydb", "host": "localhost",
        "is_active": true, "pool_size":2, "port": 1999},

       "file://./myinclude6.json"
   ],

   "outconn_zmq": [
       {"name": "myzmq1", "address": "tcp://zmq1.example.com",
        "is_active":true, "socket_type":"PUSH"},

       {"name": "myzmq2", "address": "tcp://zmq2.example.com",
        "is_active":true, "socket_type":"PUSH"}
   ],

   "scheduler": [
       {"name": "myjob1", "is_active":true, "job_type":"interval-based",
        "service":"myservice", "start_date":"2013-10-21T09:37:45"},

       {"name": "myjob2", "is_active":true, "job_type":"one-time",
        "service":"myservice", "start_date":"2013-10-21T09:37:46"}
   ]
}

myinclude1.json

{"name": "CRM Connection", "host":"localhost3", "port":35673,
 "username":"user3", "password":"password3"}

myinclude2.json

{"name": "myamqp3", "host":"localhost", "port":35672, "username":"user3",
 "password":"password3", "frame_max":390290, "heartbeat":330, "vhost":"/zato3"}

myinclude3.json

{"name": "myjmswmq3", "host":"localhost3", "port":31415,
 "cache_open_receive_queues":true, "cache_open_send_queues":true,
 "channel": "SVRCONN.1", "max_chars_printed":500,
 "needs_mcd":true, "queue_manager":"QM01", "ssl":false,
 "use_shared_connections":true}

myinclude4.json

{"name": "mychannel-zmq3", "address": "tcp://channel-zmq3.example.com",
 "is_active":true, "service":"service-zmq1", "socket_type":"PULL"}

myinclude5.json

{"name": "jmswmq3", "def": "myjmswmq", "delivery_mode":1, "is_active":true,
 "priority":5}

myinclude6.json

{"name": "mysql3", "username": "mysql2", "password":"mypassword",
 "engine":"postgresql", "db_name": "mydb", "host": "localhost",
 "is_active": true, "pool_size":2, "port": 1999}