Setup using Docker with Ansible
Docker/Ansible setup overview
Typically the Control Node would be a local PC, and the Managed Node(s) edge servers.
Ansible is an automation engine for cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other needs, as described here. It is therefore most useful when you deploy the same code to several remote servers. If you are using this code to deploy a site locally on your computer, just skip the sections below about provisioning access. For more information about testing Ansible recipes locally, check this article.
If your goal is to test locally on your computer you can:
- Skip step 1 about provisioning access.
- Simply edit the domains variable while leaving the rest as is.
1. Provision access
First, we will provision access so that Ansible can connect to the Managed Node(s) via SSH.
An SSH key can be created using the ssh-keygen command as below. Creating a key pair (public key and private key) only takes a minute and involves answering a few questions. The key files are usually stored in the ~/.ssh directory.
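For example, a key pair can be generated like this (the key type and the file name mykey, which matches the ssh-copy-id example below, are just one common choice):

$ ssh-keygen -t ed25519 -f ~/.ssh/mykey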
Once the SSH key has been created, the ssh-copy-id command can be used to install it as an authorized key for your user on the host server:
$ ssh-copy-id -i ~/.ssh/mykey user@host
Once the key has been authorized for SSH, it grants access to the Managed Node(s) without a password. For more info and troubleshooting, see here.
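You can verify that the key was installed correctly by opening a session; it should not prompt for a password:

$ ssh -i ~/.ssh/mykey user@host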
2. Clone the repository
Now it's time to clone or copy the ansible-sc_pack-public repository to the Control Node. Typically we would clone it to our local PC. To clone the repo using git, you need to have it installed:
$ sudo apt-get install git    # Debian/Ubuntu
$ brew install git            # macOS
Before cloning the repo, make sure you have git-lfs installed to handle the download of large files. You can initialize git-lfs by running the command below:
$ git lfs install
If you don't have git-lfs installed, use this guide for instructions.
To clone the repo, now you can simply run:
$ git clone https://gitlab.zunzun.se/public-items/ansible-sc_pack-public.git
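Then move into the cloned directory (git names it after the repository); if any LFS-tracked files did not download during the clone, you can fetch them explicitly:

$ cd ansible-sc_pack-public
$ git lfs pull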
3. Configure the variables
To configure your setup, you need to edit the variables in the file group_vars/edges.yml. The variables are used by the Ansible recipe to install and configure haproxy, the prometheus node exporter, and the haproxy exporter. The recipe also configures the target server so that these services start automatically.
Below are descriptions of the variables that can be configured in the file group_vars/edges.yml.
The task variables below can take the values True or False. We recommend having all of them set to True.
| Variable | Description |
| --- | --- |
| | Task that updates the server and installs certain packages, for example git, aptitude, python3, pip, and locale. |
| | Task that creates the user and group shimmercat, and uploads the installer for sc_pack. |
| | Task that installs and configures the haproxy load balancer. This is daemonized. |
| | Task that installs the prometheus node exporter. Only necessary if you want to create alerts or view and visualize your metrics. |
| | Task that installs the haproxy exporter. Only necessary if you want to create alerts or view and visualize your metrics. |
| | Task that installs the accelerator client, responsible for registering the domain and the deployment in the Accelerator Platform database, as well as updating the sc_pack configuration, i.e. the sc_pack.conf.yaml file. |
| | Task that creates the new Deployment Sites instances in the Accelerator Platform database in the cloud (calling the accelerator_client). The task installs and configures sc_pack and updates devlove.yml, sc_pack.conf.yaml, haproxy.cfg, and the views-dir. This is daemonized. |
| | A list of all domains that will be served by ShimmerCat for all deployments. |
| | The authentication token received in the first step of the getting started tutorial. |
| | A comma-separated string, e.g. … |
| | Directory on the server where the executables to be installed will be uploaded. |
| | Version of sc_pack that will be installed by default. It doesn't need to be the latest version; it is updated automatically during the installation process. |
| | A password for haproxy that the haproxy exporter will also use. |
| | You can choose between four configurations of haproxy-devlove, or create your own. Having more than one deployment per edge server can come in handy for experimentation, rolling upgrades, and configuring high-availability setups. For testing, one instance is sufficient, but in production two are recommended. |
| | The directory for ShimmerCat, celery, and all the services that sc_pack includes. All the log files and directories needed to run the services, including configuration files for ShimmerCat and supervisord, will be in this directory. |
| | True or False. |
| | True or False. |
| | True or False. |
| | Your Google reCaptcha site key. Only needed if enable_bots_blocking is True. If needed, you can get it from www.google.com/recaptcha/admin/. |
| | Your Google reCaptcha site secret. Only needed if enable_bots_blocking is True. If needed, you can get it from www.google.com/recaptcha/admin/. |
| | The encryption key used to encrypt and decrypt data sent from the edge servers to the Accelerator Platform and vice versa. Among other things, it is used to synchronize the certificates and their private keys. Make sure to set the same key for all the deployments you would like to synchronize. You choose the key yourself. |
Below is an example setup with two servers. For a more detailed overview check here.
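As a rough, minimal sketch (not the detailed example referred to above), a group_vars/edges.yml could contain entries like the following. Only domains, haproxyconfig_option, and enable_bots_blocking are variable names confirmed by the descriptions above; the domain values and option-3 are placeholder assumptions:

domains:                         # all domains served by ShimmerCat; placeholder values
  - www.example.com
  - shop.example.com
haproxyconfig_option: option-3   # one of the four provided haproxy-devlove configurations
enable_bots_blocking: False      # set to True only if you also provide the reCaptcha keys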
4. Specify server info
Edit the inventory file production and the variable ansible_host by replacing vps1 with the IP of your server. ansible_user relates to the first step about provisioning access, and it must match the user on the host server so that it is possible to connect via SSH.
If you are using several servers, the file would look something like:
[edges]
vps1 ansible_host=vps1 ansible_user=root1
vps2 ansible_host=vps2 ansible_user=root2
vps3 ansible_host=vps3 ansible_user=root3
etc...
For local testing, first find the docker network IP, which allows communication between the container and the host. You can get the
docker0 network ip by running:
$ sudo ip addr show docker0
This will return something like the output below, and we are interested in the first IP (i.e. 172.17.0.1 in the example):
docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default ... inet 172.17.0.1/16 brd 172.17.25 ...
Now edit the inventory file production to something like:
[edges]
vps1 ansible_host=172.17.0.1 ansible_user=aaa ansible_password=bbb ansible_sudo_password=bbbb
5. Run the recipes from the Docker container
To build the Docker Image, you need to have docker installed on your Control Node; see info here.
We have put at your disposal a Dockerfile and a shell script ansible_helper.sh to make the process smooth. First, build the docker image:
$ docker build -t shimmercat-python-ansible-alpine .
Now you can use the shell script to install Python on the destination node:
$ ./ansible_helper.sh -i production 0-pre_tasks.yml
Finally, install everything by running the main recipe from the Docker image:
$ ./ansible_helper.sh -i production 1-all-in-one.yml
If you don't want to use the shell script, you can check the Docker README.
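For reference, a rough sketch of what such a manual invocation might look like is shown below. The mount paths, the /ansible working directory, and the assumption that the image accepts a plain ansible-playbook call are not confirmed by this guide, so defer to the Docker README for the exact command:

$ docker run --rm -it -v $(pwd):/ansible -w /ansible -v ~/.ssh:/root/.ssh \
      shimmercat-python-ansible-alpine ansible-playbook -i production 1-all-in-one.yml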
6. Serve your website
Configure your local /etc/hosts or equivalent, and add a new line to it with the format:
<ansible_host> <your-domain>
The IP address <ansible_host> should be the same as in the inventory file production from step 4, and the domain <your-domain> should be one of the domains you set in the domains variable.
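For example, using the docker0 IP from the local-testing inventory above and a hypothetical domain www.example.com, the line would be:

172.17.0.1 www.example.com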
Now open your browser and check <your-domain>/index.html.
We suggest /index.html because, by default, for option-3 we configure an example static website when the Ansible recipes run, which has index-3.html and several more pages. You should be able to access all of them. If you use haproxyconfig_option = option-2, then check /index-2.html.
If the website does not appear in your browser, log in to your remote server and restart the services. Normally, the sc_pack service is named after the deployment_name value followed by .service. For example, if deployment_name is deployment_A, the service will be called deployment_A.service.
To restart it, run in the terminal:
$ systemctl restart deployment_A.service
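If restarting does not help, the service status and its recent log lines usually point to the problem:

$ systemctl status deployment_A.service
$ journalctl -u deployment_A.service -n 100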