2. Setup using Docker with Ansible
Preliminary requirements
- Make sure you have created an authentication token.
- Make sure to have docker installed on your Control Node, see info here.
- Ubuntu 16.04 or 18.04 is recommended for the Managed Node(s).
Docker/Ansible setup overview
Typically, the Control Node is a local PC and the Managed Node(s) are edge servers.
About Ansible
Ansible is an automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other needs, as described here. It is most useful when you deploy the same code to several remote servers. If you only want to deploy a site locally on your computer, skip the sections below about provisioning access. For more information about testing Ansible recipes locally, check this article.
Local testing
If your goal is to test locally on your computer you can:
- Skip step 1 about provisioning access.
- Simply edit the variables api_access_token, deployment_tags, and domains, while leaving the rest as is (see the sketch below).
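For a local test, the edit to group_vars/edges.yml can be as small as the sketch below; the token, tag, and domain values are placeholders you must replace with your own, and the exact structure should be checked against the file shipped in the repository:
# group_vars/edges.yml (excerpt) -- placeholder values, replace with your own
api_access_token: "YOUR-AUTHENTICATION-TOKEN"
deployment_tags: "shimmercat,test"
domains:
  - www.example.com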
2.1. 1. Provision access
First, we will provision access so that Ansible can connect to the Managed Node(s) via SSH.
An SSH key can be created using the ssh-keygen command as below. Creating a key pair (public key and private key) only takes a minute and includes answering some questions. The key files are usually stored in the ~/.ssh directory.
$ ssh-keygen
Once the SSH key has been created, the ssh-copy-id command can be used to install it as an authorized key, using your user on the host server:
$ ssh-copy-id -i ~/.ssh/mykey user@host
Once the key has been authorized for SSH, it grants access to the Managed Node(s) without a password. For more info and troubleshooting, see here.
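To confirm that key-based access works before running any recipe, you can open a session from the Control Node; user, host, and ~/.ssh/mykey are the same placeholders as in the example above:
$ ssh -i ~/.ssh/mykey user@host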
2.2. 2. Clone the repository
Now it’s time to clone or copy the repository to the Control Node. Typically we would clone it to our local PC. To clone the repo using git, you need to have it installed:
$ sudo apt-get install git   # Debian/Ubuntu
$ brew install git           # macOS (Homebrew)
Before cloning the repo, make sure you have git-lfs installed to handle the download of large files. You can try to initialize git-lfs by running the command below:
$ git lfs install
If you don’t have git-lfs installed, use this guide for instructions.
Now you can clone the repo by simply running:
$ git clone https://gitlab.zunzun.se/public-items/ansible-sc_pack-public.git
2.3. 3. Configuration
To configure your setup, you need to edit the variables in the file group_vars/edges.yml. These variables are used by the Ansible recipe to install and configure sc_pack, haproxy, the haproxy exporter, and the prometheus node exporter. The recipe also configures the target server so that these services start automatically.
Below are descriptions of the variables that can be configured in group_vars/edges.yml:
The task variables below can take the values True or False to define what to install. We recommend having all of them set to True in the first run. In subsequent runs you can set them to False, except for run_create_deploys.

| Task Variables | Description |
|---|---|
| run_install_packages | Task that updates the server and installs certain packages, for example git, aptitude, python3, pip, and locale. |
| run_install_requirements | Task that creates the user and group shimmercat, and uploads the installer for sc_pack. |
| run_install_haproxy | Task that installs and configures the haproxy load balancer. This is daemonized. |
| run_install_prometheus_node_exporter | Task that installs the prometheus node exporter. Only necessary if you want to create alerts or visualize your metrics. |
| run_install_haproxy_exporter | Task that installs the haproxy exporter. Only necessary if you want to create alerts or visualize your metrics. |
| run_install_pushgateway | Task that installs the pushgateway. Only necessary if you want to create alerts or visualize your metrics. |
| run_update_pushgateway | Indicates whether or not you want to update the pushgateway config. |
| run_install_grok_exporter | Task that installs the grok exporter. Only necessary if you want to create alerts or visualize your metrics. |
| run_update_grok_exporter_config | Indicates whether or not you want to update the grok exporter config. |
| run_install_accelerator_client | Task that installs the accelerator client, which is responsible for registering the domain and the deployment in the Accelerator Platform database, as well as updating the sc_pack configuration, i.e. the sc_pack.conf.yaml file. |
| run_create_deploys | Task that creates the new Deployment Sites instances in the Accelerator Platform database in the cloud (calling the accelerator_client). The task installs and configures sc_pack and updates devlove.yml, sc_pack.conf.yaml, haproxy.cfg, and the views-dir. This is daemonized. |
| create_deployment | Indicates whether or not you want to create a new deploy and link it to the existing domain. |
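As an orientation only, a first run could enable every task; the flag names are the ones listed above, but check the defaults already present in group_vars/edges.yml:
# group_vars/edges.yml (excerpt) -- task toggles for a first run
run_install_packages: True
run_install_requirements: True
run_install_haproxy: True
run_install_prometheus_node_exporter: True
run_install_haproxy_exporter: True
run_install_pushgateway: True
run_update_pushgateway: True
run_install_grok_exporter: True
run_update_grok_exporter_config: True
run_install_accelerator_client: True
run_create_deploys: True
create_deployment: True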
| General Variables | Description |
|---|---|
| domains | A list of all domains that will be served by ShimmerCat for all deployments. |
| origin_cdn_host | In case you are creating a domain to be used as an image CDN, this would be the origin of the images. |
| api_access_token | The authentication token received in the first step of the getting started tutorial. |
| customer_id | The customer id received in the first step of the getting started tutorial. |
| backup_config_secret | The backup config secret received in the first step of the getting started tutorial. |
| deployment_tags | A comma-separated string, e.g. "shimmercat,test", to identify deployments. |
| installers_dir | Directory on the server where the executables to be installed will be uploaded. |
| haproxy_auth_pass | A password for haproxy that the haproxy exporter will also use. |
| prometheus_node_exporter_port | 9112. Port for the prometheus node exporter. |
| haproxy_exporter_port | 9101. Port for the haproxy exporter. |
| haproxyconfig_option | You can choose between four configurations of haproxy-devlove, or create your own. option-2 is for real sites, with an external origin that is online. option-1, option-3 and option-4 are for test sites where it is not necessary to be online; the pages will be served from the same VPS. You can see examples of the outputs that are produced in the folders roles/create_deploys/templates/config/option-<id>/example-results. See more details in the haproxyconfig README. |
| usher3_toilmore_api_prefix | Our API location, "https://accelerator.shimmercat.com", or for demo "https://canary.shimmercat.com". |
Note that with instance_1 and instance_2 we are indicating that two instances of sc_pack will be installed in the respective install_dir, on each of the servers defined in the production inventory. Having more than one deployment per edge server can come in handy for experimentation, rolling upgrades, and high-availability setups. For testing, one instance is sufficient, but in production two are recommended.

| Deployment Variables | Description |
|---|---|
| install_dir | The directory for ShimmerCat, celery, and all the services that sc_pack includes. All the log files and directories needed to run the services, including configuration files for ShimmerCat and supervisord, will be in this directory. |
| http_port | 8010, 8011, 8012, etc. HTTP port where ShimmerCat listens. |
| https_port | 4010, 4011, 4012, etc. HTTPS port where ShimmerCat listens. |
| humanity_validator_port | 8040, 8041, 8042, etc. The port where the service that sends the Google reCAPTCHA challenge will listen. Only needed if enable_bots_blocking is True. |
| enable_sc_logs_agent | True or False. |
| usher3_disable_monitoring | True or False. |
| usher3_monitoring_listen_on | 4080, 4081, 4082, etc. The port where the usher3 monitoring service listens. Only needed if usher3_disable_monitoring is False. |
| enable_bots_blocking | True or False. |
| enable_images_optimization | True or False. |
| improve_images_quality | True or False. |
| images_optimization_with_aws | True or False. |
| enable_usher3 | True or False. |
| usher3_max_overhead | A number, by default 1.0. For example, 2.0 indicates that images optimized by usher3 can reach 2 times the size of the original (in kB). |
| usher3_toilmore_subservice | light or lux. |
| usher3_toilmore_api_version | 2020.1 if light, 2020.4 if lux. |
| google_recaptcha_site_key | Your Google reCAPTCHA site key. Only needed if enable_bots_blocking is True. If needed, you can get it from www.google.com/recaptcha/admin/. |
| google_recaptcha_site_secret | Your Google reCAPTCHA site secret. Only needed if enable_bots_blocking is True. If needed, you can get it from www.google.com/recaptcha/admin/. |
| transit_encryption_key | This will be the encryption key used to encrypt and decrypt data sent from the edge servers to the Accelerator Platform and vice versa. Among other things, it is used to synchronize the certificates and their private keys. Make sure to set the same key for all the deployments you would like to synchronize. You choose the key yourself. |
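For orientation, a deployment instance block in group_vars/edges.yml could look roughly like the sketch below. The variable names are the ones documented above, but the instance_1 grouping, the paths, and the values are illustrative assumptions; follow the structure of the file shipped with the repository:
# group_vars/edges.yml (excerpt) -- illustrative values only
instance_1:
  install_dir: /home/shimmercat/deployment_A
  http_port: 8010
  https_port: 4010
  enable_sc_logs_agent: True
  enable_bots_blocking: False
  enable_images_optimization: True
  improve_images_quality: False
  images_optimization_with_aws: False
  enable_usher3: True
  usher3_disable_monitoring: True
  usher3_max_overhead: 1.0
  usher3_toilmore_subservice: light
  usher3_toilmore_api_version: "2020.1"
  transit_encryption_key: "choose-a-shared-secret"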
Below is an example setup with two servers. For a more detailed overview check here.
Example overview
[Image: setup_overview]
2.4. 4. Specify server info
Edit the inventory file production and the variable ansible_host, replacing vps1 with the IP of your server.
The variable ansible_user relates to the first step about provisioning access; it must match the user on the host server so that it is possible to connect via SSH.
If you are using several servers, the file would look something like:
[edges]
vps1 ansible_host=vps1 ansible_user=root1
vps2 ansible_host=vps2 ansible_user=root2
vps3 ansible_host=vps3 ansible_user=root3
etc...
2.4.1. Local testing
For local testing, first find the docker network IP, which allows communication between the container and the host. You can get the docker0 network IP by running:
$ sudo ip addr show docker0
or
$ ifconfig
This will return something like the output below, where we are interested in the first IP (in this example, 172.17.0.1):
docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
noqueue state DOWN group default
...
inet 172.17.0.1/16 brd 172.17.25
...
Now edit the inventory file production according to:
[edges]
vps1 ansible_host=172.17.0.1 ansible_user=aaa ansible_password=bbb ansible_sudo_password=bbbb
2.5. 5. Run the recipes from the Docker container
To build the Docker image, you need to have docker installed on your Control Node, see info here.
We have put at your disposal a Dockerfile and a shell script ansible_helper.sh to make the process smooth. First, build the docker image:
$ docker build -t shimmercat-python-ansible-alpine .
Now you can use the shell script to install python on the destination node:
$ ./ansible_helper.sh -i production 0-pre_tasks.yml
Finally, install everything and run the recipes from the Docker image by executing:
$ ./ansible_helper.sh -i production 1-all-in-one.yml
If you don’t want to use the shell script, you can check the Docker README.
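If you prefer not to use the shell script, what it wraps is, roughly, a docker run of ansible-playbook inside the image built above. The mounts and working directory below are assumptions for illustration; the Docker README has the authoritative command:
# a sketch only -- paths and mounts are assumptions, see the Docker README
$ docker run --rm -it \
    -v "$(pwd)":/work -v ~/.ssh:/root/.ssh:ro -w /work \
    shimmercat-python-ansible-alpine \
    ansible-playbook -i production 1-all-in-one.yml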
2.6. 6. Serve your website
Configure your local /etc/hosts or equivalent, and add a new line to it with the format:
<ansible_host> <your-domain>
The IP address <ansible_host> should be the same as in the inventory file production in step 4, and the domain <your-domain> should be one of the domains you set in group_vars/edges.yml.
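For example, with the local-testing inventory above and an assumed domain www.example.com, the added line could read:
172.17.0.1 www.example.com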
Now open your browser and check https://<your_domain>/index.html.
We suggest /index.html because, by default, for haproxyconfig_option set to option-1 or option-3, the Ansible recipes configure an example static website when they run, which has index.html, index-2.html, index-3.html, and several more pages. You should be able to access all of them. If you use haproxyconfig_option = option-2, then check https://<your_domain>.
If the website does not appear in your browser, log in to your remote server and restart the services. Normally, the sc_pack service is named after the value of the variable deployment_name, with the prefix sc- and the suffix .service. For example: if deployment_name is deployment_A, the service will be called sc-deployment_A.service.
To restart it, run in the terminal:
$ systemctl restart sc-deployment_A.service
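To verify that the service came back up, you can check its status (deployment_A is the example name used above):
$ systemctl status sc-deployment_A.service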