11. Using sc_pack

11.1. Siloed deployments under a single directory

A deployment is a set of programs and configuration data under a single directory. This way, it is easier to keep programs and their configuration compatible with each other.

Here is what that hierarchy looks like:

      -- sc_logs_agent
            -- program
                     -- bin
            -- data
      -- shimmercat
           -- program
                -- bin
                -- lib
           -- data
      -- redis
           -- data
                <redis dumps>
      -- daemon
            -- data
      -- supervisor
      -- celery
            -- data
      -- <views-dir>
      -- <www>
      -- <venv>

The parts between angle brackets are variable. Of special note are www and views-dir, which we will discuss in more detail later; there is also a little more to be said later about the <venv> folder.

Closely related to “deployment” is the notion of a “deployment group”: a set of mirror deployments. For example, if a site uses several caching servers, the deployments on each of them should be part of the same deployment group.
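To make the fixed part of the hierarchy concrete, here is a small shell sketch that builds a mock deployment root with those subdirectories and then verifies they all exist. The set of directories is taken from the listing above; the variable parts (views-dir, www, venv) are deployment-specific and omitted:

```shell
# Build a mock deployment root and check the fixed part of the hierarchy.
set -eu
install_dir="$(mktemp -d)"

# Fixed subdirectories from the listing above.
dirs="sc_logs_agent/program/bin sc_logs_agent/data \
shimmercat/program/bin shimmercat/program/lib shimmercat/data \
redis/data daemon/data supervisor celery/data"

for d in $dirs; do
    mkdir -p "$install_dir/$d"
done

# Verify that every expected directory exists.
missing=0
for d in $dirs; do
    [ -d "$install_dir/$d" ] || missing=$((missing + 1))
done
echo "missing: $missing"
```

On a real deployment you would point `install_dir` at the actual installation directory instead of a temporary one.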

11.2. Starting the accelerator at boot time

You can execute

$ sc_pack supervisord -f configuration/file/path/my_sc_pack_conf.yaml

in a boot script stanza. This command shows a report of the sc_pack components that are running and any problems that may be occurring, usually denied permissions or port conflicts. It also creates the folder /path/to/install_dir/shimmercat-scratch-folder, where the certificate created for your website will be saved.

Notice that the command above doesn’t daemonize, and thanks to that it is relatively straightforward to include it as part of, for example, a systemd boot sequence. You can get a .service file for systemd with the following command:

$ sc_pack extract sc_pack.service.simple -f configuration/file/path/sc_pack.conf.yaml \
    --options user=shimmercat

Just replace shimmercat with the user you want the service to run as. Or, if you want to restrict memory usage, use sc_pack.service.quotas in the line above instead of sc_pack.service.simple.
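For orientation, here is a rough sketch of the kind of unit file the extract command produces (the unit name, paths, and options below are illustrative assumptions; always use the file the command actually prints). The ExecStart line mirrors the non-daemonizing supervisord command shown above:

```ini
[Unit]
Description=sc_pack deployment (illustrative sketch)
After=network.target

[Service]
# Assumed paths: adjust to your virtualenv and configuration file.
User=shimmercat
ExecStart=/path/to/venv/bin/sc_pack supervisord -f /path/to/sc_pack.conf.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```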

Here is the whole sequence to get things configured with systemd:

$ MY_SITE_ALIAS=www_mysite_com
$ sc_pack extract sc_pack.service.simple -f configuration/file/path/sc_pack.conf.yaml \
    --options user=sc_pack

Copy the output of the command above, and paste it into the file opened by:

$ sudo vim /etc/systemd/system/${MY_SITE_ALIAS}.service

and save the file. We used vim above to edit the file, but of course you can use any other editor.

$ systemctl daemon-reload && \
 systemctl enable ${MY_SITE_ALIAS}.service && \
 systemctl start ${MY_SITE_ALIAS}.service

11.3. Miscellaneous utilities

11.3.1. Using sc_pack to start and stop the processes

sc_pack helps you by acting as a thin shim that issues the corresponding commands to supervisorctl:

$ sc_pack ctl -f configuration/file/path/sc_pack.conf.yaml \
    start|restart|stop <component>

You can combine these to apply any of the actions to an individual component, or to all of them at the same time.

For example, you can restart ShimmerCat by typing:

$ sc_pack ctl -f configuration/file/path/sc_pack.conf.yaml restart shimmercat

11.3.2. Automatic updates of sc_pack

sc_pack can also update itself:

  • Update the sc_pack version through the sc_pack command:

    $ sc_pack update -f configuration/file/path/sc_pack.conf.yaml

  • You can add a line to the crontab with the options the script requires. The following command prints an example line, ready to include in your crontab, that runs the update every day at 1:00 am; you can adjust it to change how often the script runs:

    $ sc_pack crontab -f configuration/file/path/sc_pack.conf.yaml

Copy the line and add it to the crontab, either by running crontab -e or by editing the /etc/crontab file. There are other options, such as using systemd timer units.
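For orientation, a crontab entry of the kind that sc_pack crontab prints might look roughly like this (the venv path below is an illustrative assumption; always copy the exact line the command outputs). The first five fields schedule it for 1:00 am every day:

```
0 1 * * * /path/to/venv/bin/sc_pack update -f configuration/file/path/sc_pack.conf.yaml
```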

11.4. Edit mode

sc_pack has an edit mode: you can edit content on any sc_pack instance, and later push those changes to the accelerator to have them synchronized. You can do that by typing:

$ sc_pack begin_edit

and once you finish editing:

$ sc_pack push

For instance, suppose you need to edit the devlove.yaml to try something on the deployment test, and once you see that those changes work, push them to the accelerator platform. You just need to ssh to where test is deployed, activate the virtual environment, and type:

$ sc_pack begin_edit

then change some content in the devlove.yaml, and once you know the changes work, do:

$ sc_pack push

While you are in edit mode, we ensure that nothing is written automatically to that sc_pack instance, to avoid collisions. Keep in mind that a push will sync those changes for this domain to all the deployment sites where the domain is deployed.

11.5. Enable a specific viewset

You can now list the viewsets and enable one of them. This is useful because you can go back to a specific viewset version, much as you would check out a commit in git. For instance, suppose that you edited a view by hand, did a $ sc_pack push, and that change broke the site on all the deployments. You could then do:

$ sc_pack list_viewsets <domain_name>

see which was the previous viewset for this domain, and activate it again by doing:

$ sc_pack viewset <viewset_id> <domain_name>

11.6. Clone

You can clone a domain for a specific deployment site. The views, the devlove file, and the certificates for this domain will be pulled from our accelerator platform and written to the sc_pack instance from which you are executing this command. The clone won’t affect any other deployment site associated with that domain.

$ sc_pack clone <domain_name>

11.7. Version

You can show the sc_pack version that is installed by running:

$ sc_pack version

11.8. Benchmark

You can easily find out how much RAM the server has, along with some CPU stats, with the bench command:

$ sc_pack bench

It will print output like the following:

{'RAM_memory': '15.68 GB', 'CPU_singlethreaded_score': 3559, 'CPU_count': 4, 'multithreaded_theoretical_CPU_score': '3559 * 4', 'CPU_percent_usage': 1.0}
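Note that multithreaded_theoretical_CPU_score is just the single-threaded score multiplied by the CPU count; with the sample numbers above:

```shell
# Theoretical multithreaded score = single-threaded score * CPU count,
# using the sample values from the bench output above.
cpu_single=3559
cpu_count=4
multi=$((cpu_single * cpu_count))
echo "multithreaded_theoretical_CPU_score: $multi"
```

which gives 14236 for the machine in the example.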

11.9. Known issues

  1. psutil installation fails when you install sc_pack with pip:

     error: command 'i686-linux-gnu-gcc' failed with exit status 1

     Fix it by installing the build dependencies:

     sudo apt-get install python3-dev gcc libdpkg-perl

  2. sc_logs_agent fails when running the service with:

     cannot open shared object file: No such file or directory

     Fix it by installing:

     sudo apt-get install libnuma-dev
  3. sc_pack certs run_staging/run_prod issue:

    If you have problems when you try to generate the certificates, such as a domain that could not be validated, then you are probably facing a timeout issue we have seen while using that command.

    Possible solutions:

    • Go to ShimmerCat’s scratch-dir directory; there you will find a file called tweaks.yaml. Increase the value of the parameter lazyClientsWait, restart ShimmerCat, and try again.

    • If you use haproxy for load balancing, increase:

      • the frontend timeout client,

      • the backend timeout server and timeout connect.

      Then restart haproxy and try again.

    • If none of these work, please contact us through our ticket system.