
Set up sc_pack with Ansible

Preliminaries

We provide an Ansible recipe that automates installing one or more deployments on an edge server. Having more than one deployment per edge can come in handy for experimentation, rolling upgrades, and configuring high-availability setups.

Notes on Ansible

Ansible is an automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other needs, as described here. That is why it makes the most sense when you deploy the same code to remote servers. If you are using this recipe to deploy a site locally on your computer, just skip the sections below about provisioning access. If you need more information about testing Ansible recipes locally, check this article.

The Ansible recipe installs and configures sc_pack, haproxy, haproxy node exporter and prometheus node exporter. It also configures the target server so that these services start automatically[^1].

You can find our Ansible recipes in this repository.


Example overview

Local testing
  • If you are deploying locally on your computer instead of on a remote server, skip the first step about provisioning access.
  • Note that you can use our static website example for testing.

1. Provision access

First, you will provision access so that no password is required for each login; this facilitates automated, passwordless logins and single sign-on using the SSH protocol. Take a look at How Ansible works for more info on why this is useful.

To provision access, generate an SSH key and copy it to the server, using the commands ssh-keygen and ssh-copy-id (see more info at https://www.ssh.com/ssh/copy-id).
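
For example, on your local machine (the key type and the user@host below are placeholders; adjust them to your setup):

$ ssh-keygen -t ed25519
$ ssh-copy-id your-user@your-remote-ip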


2. Install python, virtualenv and git

Second, you need to have python3, python3-dev, virtualenv and git installed on the server. You can install them by executing:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install python3 python3-dev virtualenv git

3. Cloning the repository - install git-lfs

If you are cloning the repository, make sure you have git-lfs installed on the machine where you clone it, as we have included some binaries directly here. Use this guide for instructions.
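
On Debian or Ubuntu, for instance, something like the following is usually enough (depending on your distribution you may first need to add the git-lfs package repository, as described in the linked guide; the repository URL is a placeholder):

$ sudo apt-get install git-lfs
$ git lfs install
$ git clone <repository-url>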


4. Edit variables

Copy setupvariables.yml.skeleton to setupvariables.yml (if the skeleton does not exist, create setupvariables.yml yourself), and adjust the latter file to configure your deployment(s).
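
For example, from the root of the cloned repository (assuming the skeleton file lives there):

$ cp setupvariables.yml.skeleton setupvariables.yml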

In the configuration file setupvariables.yml you need to specify, for each deployment, the installation directory, domains, ports, and the credentials that allow it to use ShimmerCat Accelerator's cloud service for automatic optimisations.

An example file can be seen below.

domains: [www.domain1.com, domain2.com]
deployment_tags: <a comma-separated string, e.g. "shimmercat,test", that identifies your deployments>
api_access_token: <your_authentication_token, received in first step of the getting started tutorial>
deployments:
  instance_1:
    deployment_name: deployment_A # A name for your deployment; it must be unique on the server
    install_dir: /srv/deployment_A # Where sc_pack, ShimmerCat, celery, and all the services sc_pack includes will live on the server. It will hold all the log files and the directories needed to run the services, like the configuration files for ShimmerCat and supervisor.
    http_port: 8030 # The HTTP port where ShimmerCat listens
    https_port: 4040 # The HTTPS port where ShimmerCat listens
    humanity_validator_port: 8060 # The port where the service that sends the Google reCAPTCHA challenge will listen. You only need it if enable_bots_blocking is True.
    enable_bots_blocking: False
    enable_images_optimization: False
    google_recaptcha_site_key: <your_google_recaptcha_site_key> # You only need it if enable_bots_blocking is True. You can get it from https://www.google.com/recaptcha/admin/ if needed.
    google_recaptcha_site_secret: <your_google_recaptcha_site_secret> # You only need it if enable_bots_blocking is True. You can get it from https://www.google.com/recaptcha/admin/ if needed.
    transit_encryption_key: <your_transit_encryption_key> # The encryption key used to encrypt and decrypt data sent from the edges to the Accelerator Platform and vice versa.
  instance_2:
    deployment_name: deployment_B
    install_dir: /srv/deployment_B
    http_port: 8033
    https_port: 4045
    humanity_validator_port: 8061
    enable_bots_blocking: False
    enable_images_optimization: False
    google_recaptcha_site_key: <your_google_recaptcha_site_key>
    google_recaptcha_site_secret: <your_google_recaptcha_site_secret>
    transit_encryption_key: <your_transit_encryption_key>
sc_pack_version: sc_pack-0.1.905-py3-none-any.whl # the version of the sc_pack that will be installed by default.
installers_dir: /srv/installers # Where the files to be installed will be uploaded.

Some notes:

  • domains: a list with all the domains that ShimmerCat will serve, across all the deployments.
  • sc_pack_version: the version we ship in files/installers. It does not need to be the latest version; it is updated during the installation process.
  • installers_dir: where the files to be installed will be uploaded.
  • deployments.<instance>.deployment_name: an identifier; it must be unique.
  • deployments.<instance>.install_dir: where sc_pack will be installed; it must be unique.

5. Update the Ansible inventory

Update the Ansible inventory by copying the file hosts.example to hosts at the root of the project, and set your server's IP (your-remote-ip) in the variable ansible_host. The variable ansible_user relates to the first step; it must be the same user you provisioned SSH access for.
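
As a rough sketch, assuming an INI-style inventory and a host group name of edge_servers (check hosts.example for the actual group names the playbooks expect), the resulting hosts file could look like:

[edge_servers]
edge1 ansible_host=your-remote-ip ansible_user=your-user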


6. Run the playbook

We have three recipes. The first, 0-requirements.yml, must be executed only once; among other tasks, it creates the shimmercat user, uploads the sc_pack and haproxy files, and creates the necessary folders. The second, 1-install.yml, can be executed as many times as necessary. Remember that for each domain you can have multiple instances of sc_pack; you just have to use different ports, credentials and installation folders (install_dir) for each instance.

To avoid conflicts, a check is made on the remote machine prior to installation; if something fails, the process stops and an error message is displayed.

$ ansible-playbook -i hosts 0-requirements.yml
$ ansible-playbook -i hosts 1-install.yml

If the variable ansible_user is not root, then you must call the recipes passing the password of ansible_user on the remote server:

$ ansible-playbook -i hosts 0-requirements.yml  --extra-vars "ansible_sudo_pass=xxxxxxxxxxxx"
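
Alternatively, if you prefer not to put the password on the command line, Ansible can prompt for the privilege-escalation password instead:

$ ansible-playbook -i hosts 0-requirements.yml --ask-become-pass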

7. Configure your local /etc/hosts or equivalent

Configure your local /etc/hosts or equivalent, adding an entry of the form <ansible_host> <your-domain>.
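
For example, if your server's IP is 203.0.113.10 (a placeholder) and one of your configured domains is www.domain1.com, add a line like:

203.0.113.10    www.domain1.com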


8. Open the browser

Now open your browser and check https://<your_domain>. If the website does not appear in your browser, log in to your remote server and restart the services.

Normally, the sc_pack service is named after the deployment_name variable followed by .service. For example, if deployment_name is deployment_A, the service is called deployment_A.service.

To restart it, run in the terminal:

$ systemctl restart deployment_A.service
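
To verify that the service actually came back up, you can also inspect its status and recent logs (depending on your user, sudo may be required):

$ systemctl status deployment_A.service
$ journalctl -u deployment_A.service --since "10 minutes ago"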


[^1]: Meaning that they will be running under systemd; see more info.