
Setup with Ansible

Preliminary requirements
  • Make sure you have created an authentication token.
  • Make sure you have Ansible installed on your Control Node (see info here).
  • Ubuntu 16.04 or 18.04 is recommended for the Control Node and the Managed Node(s).

Ansible setup overview

Typically the Control Node would be a local PC, and the Managed Node(s) edge servers.

About Ansible

Ansible is an automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other needs, as described here. It is therefore most useful when you deploy the same code to remote servers. If you only want to deploy a site locally on your computer, skip the sections below about provisioning access. For more information about testing Ansible recipes locally, check this article.

Normally you run ShimmerCat on a remote server which serves the website content, but you can also try this tutorial on your local PC. In that case, what is referred to as the "server" will be your local PC.

Local testing

If your goal is to test locally on your computer you can:

  • Skip step 1 about provisioning access.
  • Simply edit the variables api_access_token, deployment_tags, and domains while leaving the rest as is.



1. Provision access

First, we will provision access so that Ansible can connect to the Managed Node(s) via SSH.

An SSH key can be created with the ssh-keygen command as shown below. Creating a key pair (public key and private key) only takes a minute and involves answering a few questions. The key files are usually stored in the ~/.ssh directory.

$ ssh-keygen

Once the SSH key has been created, the ssh-copy-id command can be used to install it as an authorized key for your user on the host server:

$ ssh-copy-id -i ~/.ssh/mykey user@host

Once the key has been authorized for SSH, it grants access to the Managed Node(s) without a password. For more info and troubleshooting, see here.
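
Before moving on, it can be worth confirming that key-based login works. A minimal check, reusing the placeholder key path and user@host from the commands above:

$ # Should log in and print the message without prompting for a password
$ ssh -i ~/.ssh/mykey user@host 'echo connection ok'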


2. Clone the repository

Now it's time to clone or copy the repository to the Control Node. Typically we would clone it to our local PC.

Before cloning the repo, make sure you have git-lfs installed to handle the download of large files. You can initialize git-lfs by running the command below:

$ git lfs install

If you don't have git-lfs installed, use this guide for instructions. You also need git itself installed to clone the repo:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install git

To clone the repo, simply run the below command:

$ git clone https://gitlab.zunzun.se/public-items/ansible-sc_pack-public.git
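
If git-lfs was set up after an earlier clone, the large files may exist only as pointer files. An optional way to fetch them explicitly (the directory name below is assumed from the clone URL):

$ cd ansible-sc_pack-public
$ # Download any git-lfs tracked files that were not fetched during the clone
$ git lfs pull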

3. Configuration

To configure your setup, edit the variables in the file group_vars/edges.yml. These variables are used by the Ansible recipe to install and configure sc_pack, haproxy, the haproxy exporter, and the prometheus node exporter. The recipe also configures the target server so that these services start automatically.

Below are descriptions of the variables that can be configured in group_vars/edges.yml; a sketch of what such a file could look like follows the variable listings.

Task Variables

The variables below take the values True or False and define what to install. We recommend setting all of them to True on the first run. In subsequent runs you can set them to False, except for run_create_deploys.

run_install_packages: Task that updates the server and installs certain packages, for example git, aptitude, python3, pip, and locale.
run_install_requirements: Task that creates the shimmercat user and group, and uploads the installer for sc_pack.
run_install_haproxy: Task that installs and configures the haproxy load balancer. This service is daemonized.
run_install_prometheus_node_exporter: Task that installs the prometheus node exporter. Only necessary if you want to create alerts or view and visualize your metrics.
run_install_haproxy_exporter: Task that installs the haproxy exporter. Only necessary if you want to create alerts or view and visualize your metrics.
run_install_accelerator_client: Task that installs the accelerator client, which is responsible for registering the domain and the deployment in the Accelerator Platform database, as well as updating the sc_pack configuration, i.e. the sc_pack.conf.yaml file.
run_create_deploys: Task that creates the new Deployment Sites instances in the Accelerator Platform database in the cloud (by calling the accelerator_client). The task installs and configures sc_pack and updates devlove.yml, sc_pack.conf.yaml, haproxy.cfg, and the views-dir. This service is daemonized.
General Variables

domains: A list of all domains that will be served by ShimmerCat across all deployments.
api_access_token: The authentication token received in the first step of the getting started tutorial.
deployment_tags: A comma-separated string, e.g. "shimmercat,test", used to identify deployments.
installers_dir: Directory on the server where the executables to be installed will be uploaded.
sc_pack_version: Version of sc_pack that will be installed by default. It does not need to be the latest version; it is updated automatically during the installation process.
haproxy_auth_pass: A password for haproxy that the haproxy exporter will also use.
prometheus_node_exporter_port: 9112. Port for the prometheus node exporter.
haproxy_exporter_port: 9101. Port for the haproxy exporter.
haproxyconfig_option: You can choose between four configurations of haproxy-devlove, or create your own. option-2 is for real sites with an external origin that is online. option-1, option-3, and option-4 are for test sites that do not need to be online; their pages are served from the same VPS. You can see examples of the outputs that are produced in the folders roles/create_deploys/templates/config/option-<id>/example-results. See more details in the haproxyconfig README.
Deployment Variables

Note that with instance_1 and instance_2 we indicate that two instances of sc_pack will be installed, each in its respective install_dir, on each of the servers defined in the production inventory. Having more than one deployment per edge server can come in handy for experimentation, rolling upgrades, and high-availability setups. For testing, one instance is sufficient, but in production two are recommended.

install_dir: The directory for ShimmerCat, celery, and all the services that sc_pack includes. All the log files and directories needed to run the services, including configuration files for ShimmerCat and supervisord, will be in this directory.
http_port: 8010, 8011, 8012, etc. HTTP port where ShimmerCat listens.
https_port: 4010, 4011, 4012, etc. HTTPS port where ShimmerCat listens.
humanity_validator_port: 8040, 8041, 8042, etc. The port where the service that sends the Google reCAPTCHA challenge will listen. Only needed if enable_bots_blocking is True.
enable_bots_blocking: True or False.
enable_images_optimization: True or False.
improve_images_quality: True or False.
google_recaptcha_site_key: Your Google reCAPTCHA site key. Only needed if enable_bots_blocking is True. You can get it from www.google.com/recaptcha/admin/.
google_recaptcha_site_secret: Your Google reCAPTCHA site secret. Only needed if enable_bots_blocking is True. You can get it from www.google.com/recaptcha/admin/.
transit_encryption_key: The encryption key used to encrypt and decrypt data sent from the edge servers to the Accelerator Platform and vice versa. Among other things, it is used to synchronize the certificates and their private keys. Make sure to set the same key for all the deployments you would like to synchronize. You choose the key yourself.
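
For orientation, below is a minimal sketch of what group_vars/edges.yml could contain, built only from the variable names described above. The exact layout, in particular the nesting of the per-deployment variables under instance_1 and instance_2, as well as every concrete value (domain, paths, ports, keys), is assumed for illustration; use the file shipped in the cloned repository as the authoritative template and only change the values.

# Task switches (flat layout assumed; compare with the file in the repository)
run_install_packages: True
run_install_requirements: True
run_install_haproxy: True
run_install_prometheus_node_exporter: True
run_install_haproxy_exporter: True
run_install_accelerator_client: True
run_create_deploys: True

# General variables
domains:
  - www.example.com                  # hypothetical domain
api_access_token: "YOUR-TOKEN-HERE"  # from the getting started tutorial
deployment_tags: "shimmercat,test"
installers_dir: /opt/installers      # hypothetical path
sc_pack_version: "X.Y.Z"             # placeholder; updated automatically during installation
haproxy_auth_pass: "choose-a-password"
prometheus_node_exporter_port: 9112
haproxy_exporter_port: 9101
haproxyconfig_option: option-1

# Per-deployment variables (instance_2 would follow the same pattern)
instance_1:
  install_dir: /home/shimmercat/deployment_A   # hypothetical directory
  http_port: 8010
  https_port: 4010
  humanity_validator_port: 8040
  enable_bots_blocking: False
  enable_images_optimization: True
  improve_images_quality: True
  transit_encryption_key: "same-key-on-all-synced-deployments"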

Below is an example setup with two servers. For a more detailed overview check here.

Example overview


4. Specify server info

In the inventory file production, edit the variable ansible_host by replacing vps1 with the IP address of your server.

The variable ansible_user is related to the first step about provisioning access, and it must match the user for the host server so that it is possible to connect via ssh.

If you are using several servers, the file would look something like:

[edges]
vps1 ansible_host=vps1 ansible_user=root1
vps2 ansible_host=vps2 ansible_user=root2
vps3 ansible_host=vps3 ansible_user=root3
etc...

Local testing

For local testing, edit the inventory file production according to:

[edges]
vps1 ansible_host=127.0.0.1 ansible_connection=local ansible_user=aaa ansible_sudo_password=bb
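
With the inventory in place, it can help to verify that Ansible can actually reach the host(s) before running any playbook. A minimal connectivity check, assuming Ansible is already installed on the Control Node (see the next step):

$ # Ping every host in the [edges] group of the production inventory
$ ansible -i production edges -m ping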

5. Run the recipes

To run the Ansible playbooks, you need to have Ansible installed on the Control Node, see info here. Also make sure you have python3, python3-dev, and virtualenv installed on the server. You can install them by executing:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install python3 python3-dev virtualenv

Then you can run the playbook by executing:

$ ansible-playbook -i production 1-all-in-one.yml
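
If you prefer a more cautious first run, ansible-playbook accepts a few standard flags that can help. It is not guaranteed that every task in these recipes fully supports check mode, so treat the dry run as a best-effort preview:

$ # Dry run: report what would change without applying anything
$ ansible-playbook -i production 1-all-in-one.yml --check
$ # Run against a single host from the inventory, e.g. vps1
$ ansible-playbook -i production 1-all-in-one.yml --limit vps1
$ # Increase verbosity when troubleshooting failures
$ ansible-playbook -i production 1-all-in-one.yml -v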

For more information about the recipes check the summary of the Ansible recipes.


6. Serve your website

Configure your local /etc/hosts or equivalent, and add a new line to it with the format:

<ansible_host> <your-domain>

The IP address <ansible_host> should be the same as in the inventory file production from step 4, and the domain <your-domain> should be one of the domains you set in group_vars/edges.yml.
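
For example, with a hypothetical server IP and domain, the added line could look like this:

# Example /etc/hosts entry; both values are placeholders
192.0.2.10    www.example.com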

Now open your browser and check https://<your_domain>/index.html.

We suggest /index.html because, by default, when haproxyconfig_option is option-1 or option-3, the Ansible recipes configure an example static website that has index.html, index-2.html, index-3.html, and several more pages. You should be able to access all of them. If you use haproxyconfig_option = option-2, check https://<your_domain> instead.

If the website does not appear in your browser, log in to your remote server and restart the services. Normally, the sc_pack service is named after the value of the variable deployment_name followed by .service. For example, if deployment_name is deployment_A, the service will be called deployment_A.service.

To restart it, run in the terminal:

$ systemctl restart deployment_A.service
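
If restarting the service does not help, checking its status and recent logs is usually the next step. The unit name below is the same deployment_A example used above; whether the logs end up in the journal depends on how the services are configured, so treat this as a best-guess starting point:

$ # Check that the unit is active and see its latest state
$ systemctl status deployment_A.service
$ # Follow the unit's journal output while reproducing the problem
$ journalctl -u deployment_A.service -f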