HTTPS and pooling
ShimmerCat QS also supports HTTPS and connection pooling when talking to consultants.
To enable HTTPS, just add an encryption-level: tls entry to the consultant definition:
```yaml
---
shimmercat-devlove:
    domains:
        elec www.proba.com:
            root-dir: /srv/www.mounted.files/
            views-dir: files
            consultant:
                connect-to: "4047"
                encryption-level: tls
                application-protocol: http
```
Notice that at the moment, if you enable TLS, you also need to specify an application protocol explicitly; http (as in the example above) is a supported value.
HTTPS is just HTTP/1.1 + TLS.
HTTPS connection pooling works by having several channels open simultaneously, and using them on demand for the needs of ShimmerCat:
- Fetching dynamic HTML
- Fetching static assets
Whenever a connection has been idle for a set period of time, it is closed to save resources. Here is a complete example of how it's configured, the whole shebang:
```yaml
---
shimmercat-devlove:
    domains:
        elec www.supershoes.com:
            root-dir: /srv/www.supershoes.com
            changelist-settings:
                tNew: 20
                tOld: 72000
            consultant:
                connect-to: "8098"
                application-protocol: http
                encryption-level: tls
                pooling:
                    max-size: 4
                    max-idle: 5.0
```
The above snippet configures the default consultant ("default") to use a pool of HTTPS connections with a maximum of 4 connections, each of which can sit idle for up to five seconds before being closed. Notice that this configuration will still try to fetch static files from the local filesystem (which in turn can be a network mount).
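To make the max-size and max-idle semantics concrete, here is a minimal Python sketch of such a pool. This is purely an illustration of the idea; it is not ShimmerCat's actual implementation, and all the names in it are invented for the example:

```python
import time


class PooledConnection:
    """Stand-in for one HTTPS channel to the consultant."""

    def __init__(self):
        self.closed = False
        self.last_used = time.monotonic()

    def close(self):
        self.closed = True


class ConnectionPool:
    """Keeps at most max_size idle channels; each is evicted
    after max_idle seconds without use."""

    def __init__(self, max_size=4, max_idle=5.0):
        self.max_size = max_size
        self.max_idle = max_idle
        self._idle = []

    def acquire(self):
        # Drop channels that have been idle too long, then reuse
        # an existing one if available, else open a fresh channel.
        self._evict_stale()
        if self._idle:
            return self._idle.pop()
        return PooledConnection()

    def release(self, conn):
        # Return a channel to the pool, unless the pool is full.
        conn.last_used = time.monotonic()
        if len(self._idle) < self.max_size:
            self._idle.append(conn)
        else:
            conn.close()

    def _evict_stale(self):
        now = time.monotonic()
        fresh = []
        for c in self._idle:
            if now - c.last_used > self.max_idle:
                c.close()
            else:
                fresh.append(c)
        self._idle = fresh
```

With max-size of 4 and max-idle of 5.0, up to four idle channels are kept around, and any channel unused for five seconds is closed the next time the pool is touched.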
Note that the connection pool size applies to each ShimmerCat worker process. By default, ShimmerCat creates a pool of 3 workers, but that number can be changed. In any case, a max-size of 4 is rather small for a high-traffic website.
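Since the pool is per worker, the total number of simultaneous connections a ShimmerCat instance may open to a consultant is roughly workers times max-size. A quick back-of-the-envelope check with the defaults above:

```python
workers = 3    # default number of ShimmerCat worker processes
max_size = 4   # pooling: max-size from the snippet above

# Upper bound on simultaneous consultant connections across all workers.
total = workers * max_size
print(total)  # → 12
```

Twelve connections in total is easily saturated under real traffic, which is why the next snippet raises max-size.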
Changing both settings, we can get a better configuration this way:
```yaml
---
shimmercat-devlove:
    domains:
        elec www.supershoes.com:
            root-dir:
                use-consultant: default
            changelist-settings:
                tNew: 20
                tOld: 72000
            consultant:
                connect-to: "8098"
                application-protocol: http
                encryption-level: tls
                pooling:
                    max-size: 256
                    max-idle: 5.0
```
The max-idle time should be chosen so that it doesn't run into problems with the remote side closing connections too eagerly. For example, if the consultant on the other end is an Nginx instance with its default configuration, Nginx will time out idle connections after two seconds. In that case, our max-idle should be lower than two seconds.
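For instance, against a backend that drops idle connections after two seconds, a fragment like the following leaves a safety margin (the 1.5 value here is a hypothetical choice for illustration; tune it to your backend's actual timeout):

```yaml
pooling:
    max-size: 256
    max-idle: 1.5   # comfortably below the remote side's two-second timeout
```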
At the moment, our pooling implementation doesn't handle these cases very gracefully, and the "graceful handling" logic looks convoluted and brittle... it's far better to set a low max-idle value.
Of course, we can always test what works and what doesn't.