A distributed cluster is one whose individual components - servers, the scheduler, dashboard and the load-balancer - are created separately, each through its own invocation of the zato create command.
This is in contrast with quickstart clusters where a single command creates an entire cluster in one step.
Use distributed clusters if you would like to have their components installed in separate instances, VMs or containers.
It is typical for such a cluster to have each of its servers run in its own system or container, with the scheduler and dashboard sharing the same system. Alternatively, the scheduler and dashboard may also be started in their own systems, but the salient feature of distributed clusters is that each server has its own system.
Distributed clusters offer maximum flexibility in terms of the provisioning and containerization techniques that can be employed to deploy Zato clusters. For instance, each server may be in a different Kubernetes pod, each on a different node. As another example, each server instance may be built in advance with Packer, to be deployed with Terraform as load increases. Or, a cloud instance can be provisioned with Terraform and Ansible can then deploy Zato servers onto it. In short, any DevOps tool can be used.
Each Zato component runs in its own host system. Here, "host" is a generic term that may equally represent a container, a VM or a bare metal, dedicated server.
Because the scheduler and dashboard require the fewest resources, it is possible to run them on the same host, although there may be situations when it is best to run them on separate ones. For instance, if the dashboard is accessed over the public network, without a VPN or a reverse proxy in front of it, it may be better to expose only the dashboard, and not the scheduler, to the Internet.
Note that the load-balancer can be the one that Zato ships with or it can be an external one, e.g. ALB in AWS or a dedicated appliance.
A distributed cluster is created through a series of zato create command invocations, in the order specified below. Again, the process of issuing the commands can be automated with any preferred provisioning or deployment tool.
As a prerequisite, prepare an empty MySQL or PostgreSQL database. Each cluster needs its own database, which can also be a cloud equivalent, e.g. AWS RDS. The same database connection parameters will be used in all the subsequent steps.
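For instance, with PostgreSQL, an empty database and its owner role can be prepared as in the sketch below. The user and database names (zato1) are placeholders only and the authentication setup is an assumption - adjust both to your environment.

```shell
# Placeholder names - replace zato1 with your own user and database names.
# Assumes a local PostgreSQL instance administered via the postgres system user.
sudo -u postgres createuser --pwprompt zato1
sudo -u postgres createdb --owner=zato1 zato1
```

The connection parameters chosen here are the ones to reuse in every zato create step that follows.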
Invoke zato create odb using the empty database. This will populate it with preliminary data, such as tables and indexes.
Invoke zato create cluster. This will insert metadata pertaining to this cluster. No directories or files are created in this step; it merely populates tables with additional details.
Invoke zato create web-admin. This will create an instance of the web-based dashboard on the host where you are executing this command.
(Optionally) Invoke zato create load-balancer. This will create a load-balancer on the host where you are executing this command. The step is optional and can be omitted if you are using an external load-balancer.
Invoke zato create server. This will create a server on the host where you are executing this command. Repeat this command on each host that this cluster's servers will run on. Note that this command must point to the address of the scheduler that will be created in the next step - it can be an IP or a DNS name.
Invoke zato create scheduler. This will create an instance of the scheduler on the host where you are executing this command. Note that this command must point to the address of one of the servers in the cluster - it can be an IP or a DNS name.
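Put together, the overall order of the procedure can be sketched as below. This shows the sequence only - the arguments each command expects are elided, because they differ between Zato versions; run each command with --help to confirm them for your installation.

```shell
# Order of the procedure only - replace "..." with the arguments
# that `zato create <command> --help` documents for your version.
zato create odb ...           # populate the empty database
zato create cluster ...       # insert the cluster's metadata
zato create web-admin ...     # on the dashboard's host
zato create load-balancer ... # optional, on the load-balancer's host
zato create server ...        # repeat on each server's host
zato create scheduler ...     # on the scheduler's host
```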
(Optionally) If you are not using an external load-balancer, you need to add all the servers to the built-in load-balancer's configuration file. The load-balancer is based on HAProxy and its config file is called "zato.config". Add all the servers created with zato create server above, as in the example below where there are three servers, each with its own name and IP address. Use zato stop and zato start afterwards to restart the load-balancer.
# ZATO begin backend bck_http_plain

server http_plain--server1 10.151.17.21:17010 check inter 2s rise 2 fall 2 # ZATO backend bck_http_plain:server--server1
server http_plain--server2 10.151.17.22:17010 check inter 2s rise 2 fall 2 # ZATO backend bck_http_plain:server--server2
server http_plain--server3 10.151.17.23:17010 check inter 2s rise 2 fall 2 # ZATO backend bck_http_plain:server--server3

# ZATO end backend bck_http_plain
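Restarting the load-balancer after editing zato.config may look like below; the path is an assumption and should be replaced with the directory the load-balancer was created in.

```shell
# /opt/zato/env/load-balancer is a placeholder path - use the directory
# that was passed to (or created by) `zato create load-balancer`.
zato stop /opt/zato/env/load-balancer
zato start /opt/zato/env/load-balancer
```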
The provisioning procedure ends here.
Note that one of the benefits of using quickstart clusters is that all these steps run automatically with a single command. This is why using quickstart is recommended - it makes it possible to provision new environments in a few seconds or minutes.