6. Onboarding technical checks¶
6.1. Preparations¶
Make sure the domain has not already been created in a demo.
If we have created a demo, remember that the stored data can be pulled from the cloud database after activating the Python virtualenv:
sc_pack pull --from_domains <www.your-domain.com>
6.2. 1. Server checks¶
Support for Ubuntu 16.04 and up.
It is highly recommended that the CPU supports AVX2; if it doesn’t, ShimmerCat will still work, but AVX2 is used to accelerate TLS handshakes.
Server capacity depends on the web traffic volume. As an example, if the site has around 5 million visitors/month, a setup with two edge servers, each with 4 CPUs, 8 GB RAM and 120 GB of disk, will leave plenty of headroom.
If the servers are running behind a firewall, make sure that the ports used by ShimmerCat are open.
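As a quick, Linux-only sketch, you can check whether the CPU advertises AVX2 by looking at the CPU flags reported by the kernel:

```shell
# Print whether the CPU advertises the avx2 flag (Linux).
if grep -qm1 avx2 /proc/cpuinfo; then
  echo "AVX2: yes"
else
  echo "AVX2: no"
fi
```

A machine without AVX2 still runs ShimmerCat; TLS handshakes are simply not accelerated.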
6.3. 2. Installation¶
See our getting-started tutorial, including the Ansible guide that uses this repo.
6.4. 3. Confirm installation¶
Check that the hosts file has been updated to point to the IP address where ShimmerCat is running.
Check that HAProxy is running:
systemctl status haproxy
At this stage it should be possible to browse the site. If you get “NET::ERR_CERT_AUTHORITY_INVALID”, check here.
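The HAProxy check above can also be scripted so that it fails loudly, e.g. in a provisioning script (a sketch for systemd-based hosts):

```shell
# Report whether the haproxy unit is active (systemd hosts).
if systemctl is-active --quiet haproxy 2>/dev/null; then
  echo "haproxy is running"
else
  echo "haproxy is NOT running"
fi
```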
6.5. 4. SSL-certificate setup¶
The default certificates can be placed in <deployment_dir>/shimmercat-scratch-folder.
The private key should be in PKCS8 format, and the file should be called privkey.unencrypted-pkcs8.pem.
Concatenate the leaf certificate with any intermediate certificates in the order they are needed, excluding the CA root certificate, and save the result as cert.pem.
Check that the private key matches the certificate, see here.
If SNI will be used, copy both cert.pem and privkey.unencrypted-pkcs8.pem to <install_dir>/shimmercat-scratch-folder/sni-certs/<domain>.
If Certbot is used, check the doc for Importing certificates handled with certbot.
Restart ShimmerCat:
sc_pack ctl restart shimmercat
Check the output of:
sc_pack ctl status
Run and verify the certificate with the Qualys SSL test before considering it ready, see https://www.ssllabs.com/ssltest/.
We suggest installing sc_pack with a non-root user. We normally use a user `shimmercat` (although it is up to you). If you do so, it is very important that you execute the sc_pack commands as this user, either using `sudo -s -u shimmercat` or being logged in as this user: `sudo su shimmercat`.
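The concatenation and key-match steps above can be sketched with openssl. The file names leaf.pem and intermediate.pem are placeholders for your actual certificate files:

```shell
# Build cert.pem: leaf certificate first, then intermediates, no CA root.
cat leaf.pem intermediate.pem > cert.pem

# The public key inside the certificate must equal the one derived from the
# private key; if the two digests differ, the pair does not match.
cert_pub=$(openssl x509 -noout -pubkey -in cert.pem | openssl sha256)
key_pub=$(openssl pkey -pubout -in privkey.unencrypted-pkcs8.pem | openssl sha256)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "key matches certificate"
else
  echo "MISMATCH: do not deploy this pair" >&2
fi
```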
6.6. 5. Redirect checks¶
Check redirects from HTTP to HTTPS.
If it applies, check redirects from the naked domain to the domain prefixed with www, e.g. ecommerce.se -> www.ecommerce.se. Note that this is specific to each domain, because some site owners choose to serve the site on the naked domain.
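These redirects can be spot-checked with curl; ecommerce.se below is a placeholder for the real domain:

```shell
# Show the status code and redirect target for each URL variant.
for url in http://www.ecommerce.se/ http://ecommerce.se/ https://ecommerce.se/; do
  printf '%s -> ' "$url"
  curl -sI -o /dev/null -w '%{http_code} %{redirect_url}\n' "$url"
done
```

For a correctly configured site, the HTTP and naked-domain variants should answer 301 (or 302/308) with redirect_url pointing at https://www.ecommerce.se/.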
6.7. 6. Basic website checks¶
Take a look at our Onboarding website checks, steps 1-3.
6.8. 7. Additional website checks¶
Configure the error pages if needed.
Check that there are no bad cookies.
If there is a certificate that contains multiple domains and all those domains resolve to the same IP, browsers will try to do requests to all those domains through a single HTTPS connection. If this is the case, make sure to configure all the deployments to serve all the same domains.
Note that some of our recipes use a load balancer (e.g., HAProxy) to connect to the origin application. If that is the case, please double-check that the application backends are reachable after any DNS changes needed. For example, if a site to be served is called www.example.com, ensure you don’t use that name in the HAProxy configuration as the address of a server to pull dynamic content from, since eventually the DNS for www.example.com will point to the edges and not to the origin.
6.9. 8. Configuration¶
Check that server push is working.
Enable protection against bots, see bot blocking tutorial.
Configure the bot blocking (info about integrations might be needed). Note that the general bot policy should be set to "unknown_bots_can_access": true by default.
Check that the bot-blocking captcha is working.
Enable the image optimization and image prioritization features:
sc_pack enable_images_optimization
Insert the metrics snippet to enable data collection.
Enable and verify monitoring with the exporters and configure the grafana-prometheus dashboard. This includes the creation of the deployment metadata in the ShimmerCat database so that the Grafana dashboard can be automatically generated from this data when the task in charge of that runs.
Collect and add the client’s email address in the system so that they will receive email updates and weekly reports. Verify that the client has received the initial email with the authentication token.
Reload the supervisor:
systemctl restart <name_of_service.service>
The name of the service can be found with ls /etc/systemd/system. Then check the output of:
sc_pack ctl status
E-commerce websites can have a variety of configurations and it is possible that there are some corner cases that might need to be fixed. Be sure to have some time to monitor the site, and be ready to just change DNS to the origin if you need to edit something.
6.10. 9. DNS records¶
Before going live, the DNS records must be updated to point to the edge servers with ShimmerCat, see more info here. Make sure to provide the client a URL for the CNAME record.
If there are multiple domains to be configured, check whether or not they resolve to different IPs. If they resolve to the same IP, browsers will try to send requests to all of those domains through a single HTTPS connection, so make sure to configure all the deployments to serve all the same domains.
Check that aliases exist in /etc/hosts for all of the domains on the servers where HAProxy is installed and uses them, so that it can resolve those domains to the correct IP addresses.
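For example, an /etc/hosts entry on the HAProxy server could look like this (10.0.0.5 is a hypothetical backend IP, www.example.com a placeholder domain):

```
# /etc/hosts fragment -- 10.0.0.5 is a placeholder for the real backend IP
10.0.0.5   www.example.com
```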
6.11. 10. Go live¶
Execute
sc_pack push_all --sync_with_all_deployments
to be sure that the latest changes are in place. Update the DNS to point to the servers with ShimmerCat.
Browse the site and double check that everything works as expected.
Create a test order to verify that everything works.
6.12. Other¶
Provide the client with the authentication token for using the API.
Ask the client for the email addresses that should receive the weekly data reports.