2. Edge servers

Files are served to visitors by what we call an “edge server”. The goal is to clearly separate the workings of ShimmerCat Accelerator from those of the backend application, so that the culprits of performance issues are easier to find.

On that server we install a “pack” of programs, sc_pack, which helps manage the edge servers running ShimmerCat Accelerator. It includes and configures the following programs:

  • ShimmerCat QS: the program actually serving files.

  • sc_logs_agent: a small program that sends logs from ShimmerCat QS to the mothership (our cloud service).

  • redis: used by ShimmerCat QS, and usually bundled with it.

  • supervisord: a Python program that helps manage stacks of programs like this one.

  • A Celery worker, which takes instructions from the cloud service.

  • An update daemon, which, upon instruction, can download and install a new version of sc_pack.
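As an illustration, the supervisord side of such a stack could be described with a configuration along these lines. The paths and section contents here are hypothetical; the real configuration files are generated by sc_pack:

```ini
[supervisord]
; Keep supervisord's own state inside the deployment directory (hypothetical path).
logfile=/srv/sc_pack/daemon/data/supervisor/supervisord.log

[program:shimmercat]
command=/srv/sc_pack/shimmercat/program/bin/shimmercat
autorestart=true

[program:redis]
command=/srv/sc_pack/redis/program/bin/redis-server
autorestart=true

[program:sc_logs_agent]
command=/srv/sc_pack/sc_logs_agent/program/bin/sc_logs_agent
autorestart=true
```

Each `[program:...]` section declares one child process for supervisord to start, watch, and restart if it crashes, which is what makes it the “captain” of the stack.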

You can understand more easily what’s going on by comparing the programs in this list to a well-coordinated sports team whose captain is supervisord, while sc_pack itself is its coach or manager.

In Unix, each process has a parent, and here is how it looks for sc_pack:

<system process manager>
   -- sc_pack supervisord
      -- redis-server
      -- sc_logs_agent
      -- shimmercat
      -- celery worker
   -- sc_pack update

Everything lives under a single directory.

There is no single deployment layout that is optimal for all users, so sc_pack introduces one of its own: siloed deployments.

In this modality, all the programs and their data directories (which include their logs) are installed under a single hierarchy of directories. This goes against traditional Unix conventions for filesystem paths, but it makes things easier for developers and sysadmins: they know where all the files are.

Here is how that hierarchy looks:

	-- sc_logs_agent
	   -- program
	      -- bin
	   -- data
	-- shimmercat
	   -- program
	      -- bin
	      -- lib
	   -- data
	-- redis
	   -- data
	-- daemon
	   -- data
	      -- supervisor
	-- celery
	   -- data
	-- <views-dir>
	-- <www>
	-- <venv>

The parts between angle brackets are variable. Of special note are <www> and <views-dir>, which we will discuss in more detail later, as we will the <venv> folder.
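As a sketch, the siloed hierarchy above could be created with a few lines of Python. The root path is a hypothetical example, and the exact placement of the data directories is an assumption for illustration; sc_pack decides the real layout:

```python
import os

# Hypothetical deployment root; sc_pack chooses the actual location.
ROOT = "/srv/sc_pack_deployment"

# Directories from the hierarchy above, relative to the root.
LAYOUT = [
    "sc_logs_agent/program/bin",
    "sc_logs_agent/data",
    "shimmercat/program/bin",
    "shimmercat/program/lib",
    "shimmercat/data",
    "redis/data",
    "daemon/data/supervisor",
    "celery/data",
]

def create_layout(root: str) -> None:
    """Create the siloed directory hierarchy under a single root."""
    for rel in LAYOUT:
        os.makedirs(os.path.join(root, rel), exist_ok=True)
```

Because everything hangs from one root, backing up, inspecting, or removing a deployment is a matter of operating on that single directory.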

2.1. Load balancing

We usually deploy HAProxy in front of the edge servers. HAProxy has several advantages, including very robust SNI support, health checks, and fallback options. We can also provide configuration and binaries for HAProxy if needed.
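As an illustration, a minimal HAProxy configuration that balances two edges with health checks and a fallback could look like this. The addresses, ports, and certificate path are hypothetical:

```
frontend https_in
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/
    default_backend edges

backend edges
    mode http
    balance roundrobin
    option httpchk GET /
    server edge1 10.0.0.11:443 ssl verify none check
    server edge2 10.0.0.12:443 ssl verify none check backup
```

The `check` keyword enables the health checks, and `backup` marks a server that only receives traffic when the primary edges fail their checks.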

Below is an overview of how the setup looks:


2.2. Redundancy and specifications

Often one edge server is enough for a site, although for redundancy and resiliency we recommend a setup with two or more edges.

The VPS requirements for the number and capacity of edge servers depend on the load. The requirements can be estimated from the number of visitor page-views per month. This number can be found with, for example, Google Analytics, Alexa, Ahrefs, or SEMrush.

As a recommendation with a lot of legroom:

  • Monthly visits: 1 000 000

  • Recommended RAM capacity: 4 GB

  • Recommended Cores in VPS: 4
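As a rough sketch, the recommendation above can be scaled to other traffic levels. The linear scaling here is an assumption for illustration only; real sizing depends on traffic patterns and should be validated against actual load:

```python
import math

# Reference point from the recommendation above.
REF_VISITS = 1_000_000   # monthly visits
REF_RAM_GB = 4
REF_CORES = 4

def estimate_edge_specs(monthly_visits: int) -> tuple[int, int]:
    """Roughly estimate (RAM in GB, cores) for an edge server,
    scaling linearly from the reference recommendation and
    never going below the reference baseline."""
    factor = max(1.0, monthly_visits / REF_VISITS)
    ram_gb = math.ceil(REF_RAM_GB * factor)
    cores = math.ceil(REF_CORES * factor)
    return ram_gb, cores
```

For example, `estimate_edge_specs(2_000_000)` would suggest doubling the reference specs, while anything at or below a million visits keeps the baseline.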