sudo docker run --pull=always -it --rm -p 22022:22 -p 8183:8183 -p 11223:11223 -p 17010:17010 \
--name zato-3.2-quickstart \
ghcr.io/zatosource/zato-3.2-quickstart
sudo docker exec zato-3.2-quickstart /bin/bash \
-c "cat /opt/zato/env/details/all-zato-env-details.json"
{
"zato_dashboard_admin_password": "...",
"zato_dashboard_admin_username": "admin",
"zato_db_host": "localhost",
"zato_db_main": "zato_db_main1",
"zato_db_password_main": "...",
"zato_db_port": "5432",
"zato_db_type": "postgresql",
"zato_db_username_main": "zato_user_main1",
"zato_env_path": "/opt/zato/env/qs-1",
"zato_ide_publisher_password": "...",
"zato_ide_publisher_username": "ide_publisher",
"zato_ssh_password": "...",
"zato_ssh_username": "zato"
}
The keys holding passwords are shown as "..." in the output above and, right after installation, the two most important ones are the credentials for the Dashboard and for SSH.
A Dashboard instance is running at http://localhost:8183 and you can log into it with the username "admin" and the password from the "zato_dashboard_admin_password" key.
You can also connect to the container via SSH using "ssh zato@localhost -p 22022". The password is under the "zato_ssh_password" key above.
Note that, by default, each container has different passwords. Navigate to the environment variables section below for information on how to provide these secrets yourself if you would like them to persist across restarts.
When configuring Visual Studio Code or PyCharm to work with Zato, you will be asked to provide credentials.
The username and password for your IDE to connect with are available under the "zato_ide_publisher_username" and "zato_ide_publisher_password" keys in the file "/opt/zato/env/details/all-zato-env-details.json" inside the container.
When you create a new container, the password is auto-generated and different each time. Read more about environment variables in a later section below to make the password persist between restarts.
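For convenience, the two keys can be read by reusing the docker exec command from earlier in this chapter and filtering its output; the grep pattern below is just one way to do it:
sudo docker exec zato-3.2-quickstart /bin/bash \
-c "cat /opt/zato/env/details/all-zato-env-details.json" | grep zato_ide_publisher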
When a Zato container starts, it will optionally install additional Python libraries using a file from the following location: /opt/hot-deploy/python-reqs/requirements.txt. This is a standard requirements file for pip.
By default, the file is empty and no additional libraries are installed, but you can map a requirements.txt file from the host to the container and it will be picked up when the container starts.
Use a bind mount to achieve this, as below:
sudo docker run \
...
--mount type=bind,source=/my/requirements.txt,target=/opt/hot-deploy/python-reqs/requirements.txt
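For illustration only, the mounted file lists pip-installable packages in the usual format; the package names below are placeholders:
requests
lxml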
Directory /opt/hot-deploy/services inside the container can be mapped to a local one on the host. Any service placed and saved in the local directory is automatically deployed to the server in the container.
For instance, to map the local directory /my/services, start the container with a bind mount as below:
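sudo docker run \
...
--mount type=bind,source=/my/services,target=/opt/hot-deploy/services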
Now, each time any file is saved in the host's /my/services directory, it will be deployed to the server inside the container.
A directory mounted in this way is also scanned for services by a starting container, which means it can be used both for initial deployment (when the container starts) and for ongoing deployments (while the container is already running).
A starting container can import enmasse definitions of the objects that the server running in the container should use.
In this manner, channels, connections and other server configuration elements can be recreated each time a new container is spun up.
For a container to read in an enmasse file, make the file available through a bind mount pointing to the container's /opt/hot-deploy/enmasse/enmasse.yml file, as in the example below.
The example below shows what a real-world boot-up script for a project could look like, mapping multiple sources of data and configuration to a starting container.
Note that, thanks to the use of bind mounts, it is possible to map arbitrary files and directories to the container, not only Zato-related ones. This means that a single script similar to the one below is all that is needed to create new, reproducible environments for a project.
#!/bin/bash
# Where the project keeps its sources on the host
export src=/opt/project/src

# Root of the hot-deployment locations inside the container
export target=/opt/hot-deploy

# Remove any previous container of the same name, then start a fresh one
sudo docker rm --force zato &&
sudo docker run \
--rm \
--name zato \
--pull=always \
-p 22022:22 \
-p 8183:8183 \
-p 17010:17010 \
--mount type=bind,source=$src/requirements.txt,target=$target/python-reqs/requirements.txt \
--mount type=bind,source=$src/project.ini,target=$target/user-conf/project.ini \
--mount type=bind,source=$src/enmasse.yaml,target=$target/enmasse/enmasse.yaml \
--mount type=bind,source=$src/account.json,target=/opt/zato/account.json \
--mount type=bind,source=$src/src/model/model.py,target=/opt/zato/current/extlib/model.py \
--mount type=bind,source=$src/src/services,target=$target/services \
--mount type=bind,source=$HOME/.ssh,target=/opt/zato/.ssh \
ghcr.io/zatosource/zato-3.2-quickstart:latest
This sample script is just one way to automate Docker deployments. Another is to use Docker Compose, as sketched after this paragraph - it is up to you to decide which approach is the most productive given the circumstances of a particular project.
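For orientation only, a minimal, hypothetical docker-compose.yml roughly equivalent to the script above could look like the sketch below; the service name, paths and ports simply mirror the script and need to be adjusted to your project:
services:
  zato:
    image: ghcr.io/zatosource/zato-3.2-quickstart:latest
    container_name: zato
    pull_policy: always
    ports:
      - "22022:22"
      - "8183:8183"
      - "17010:17010"
    volumes:
      - /opt/project/src/requirements.txt:/opt/hot-deploy/python-reqs/requirements.txt
      - /opt/project/src/enmasse.yaml:/opt/hot-deploy/enmasse/enmasse.yaml
      - /opt/project/src/src/services:/opt/hot-deploy/services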
The Docker image for building Quickstart containers is updated once every 24 hours. Each time you create a new container using the command provided earlier in this chapter, the latest version of the image will be downloaded thanks to the --pull=always flag.
Because containers are started with the --rm flag, stopping a container makes the entire environment inside it unavailable and a new container will have to be created.
At times, it is desirable to SSH into the container and restart the server without destroying the whole container. Use a regular SSH connection to log in as the "zato" user inside the container and move around freely, issuing zato stop and zato start commands as needed. This will not stop the container.
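For example, assuming the default port mapping from earlier in this chapter, and assuming the server component lives in a "server1" subdirectory of the environment path reported in the JSON details above (the quickstart default), the commands could look like this:
ssh zato@localhost -p 22022

# Inside the container - the exact component path may differ in your environment
zato stop /opt/zato/env/qs-1/server1
zato start /opt/zato/env/qs-1/server1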
The Zato image uses various environment variables to fine-tune the resulting installation.
In particular, it is through environment variables that passwords can be provided to ensure that each new container uses the same secrets, e.g. so that your IDE can connect to the server inside without the need to change the IDE connection's password each time a new container is created.
Note that the names of the variables are case-sensitive and they are not all-uppercase. Use the exact form as provided below; an example of passing the variables to docker run follows the table.
Variable | Notes
---|---
Zato_SSH_Password | What password the SSH "zato" user should have.
Zato_Dashboard_Password | What password the Zato Dashboard's "admin" user should have.
Zato_IDE_Password | What password the Zato IDE "ide_publisher" user should have.
Zato_ODB_Password | What password the Zato SQL database connection's user should have.
Zato_Host_Dashboard_Port | To what port on the host to map the Dashboard's TCP port 8183 inside the container.
Zato_Host_Server_Port | To what port on the host to map the server's TCP port 17010 inside the container.
Zato_Host_LB_Port | To what port on the host to map the load balancer's TCP port 11223 inside the container.
Zato_Host_Database_Port | To what port on the host to map the database's TCP port 5432 inside the container.
Zato_Host_SSH_Port | To what port on the host to map the SSH server's TCP port 22 inside the container.
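For instance, to pin the SSH, Dashboard and IDE passwords so that they stay the same across containers, the quickstart command from the beginning of this chapter can be extended with -e flags; the values below are placeholders to be replaced with your own secrets:
sudo docker run --pull=always -it --rm \
-p 22022:22 -p 8183:8183 -p 11223:11223 -p 17010:17010 \
-e Zato_SSH_Password=my-ssh-password \
-e Zato_Dashboard_Password=my-dashboard-password \
-e Zato_IDE_Password=my-ide-password \
--name zato-3.2-quickstart \
ghcr.io/zatosource/zato-3.2-quickstart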