Simulating network performance
Why simulate bandwidth for web pages
Websites always look fast on the local development machine, but they are seldom that fast when deployed over the Internet. Simply put, a website that loads in less than a second over a local interface can easily take ten seconds to load in a user's browser. This is because the Internet has less bandwidth and more latency than a local network interface.
ShimmerCat allows you to simulate different network conditions
To account for those differences, ShimmerCat comes with a super-convenient way to simulate bandwidth and latency: simply use the --sim-bandwidth and --sim-latency options.
For example, to emulate a mobile broadband network, invoke ShimmerCat with these options:
$ shimmercat devlove --sim-bandwidth 7Mbit/s --sim-latency 210ms
Then navigate to your web page as usual. ShimmerCat will shape bandwidth for both the direct HTTPS port and for the SOCKS5 proxy.
Bandwidth and latency units
Bandwidth is the amount of data that the network can pass through in a unit of time. ShimmerCat understands the following bandwidth units:
- kbit/s and kb/s: kilobits per second
- Mbit/s and Mb/s: megabits per second
- Kibit/s: 1024 bits per second
- Mibit/s: 2^20 bits per second
- kB/s: 1000 bytes per second
- MB/s: 1000000 bytes per second
- KiB/s: 1024 bytes per second
- MiB/s: 2^20 bytes per second
If no suffix is specified, bytes per second are assumed; the same happens if the suffix cannot be interpreted. In any case, be sure not to leave any space between the number and the unit suffix.
Latency is the minimum amount of time that it takes to send a message and get an answer, assuming any processing in the middle happens instantaneously.
For latency, two units are allowed:
- s for seconds
- ms for milliseconds
You can always see ShimmerCat's interpretation of the units you specified in the log. For example, for the command line above you should see a line in the log with the following:
... Speed throttle: bandwidth 875000.0 bytes/second; latency 0.21 seconds
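As an illustration of how the unit suffixes above translate into the log values (this is a sketch written for this document, not ShimmerCat's actual parsing code), the conversion can be expressed like this:

```python
# Illustrative sketch (not ShimmerCat's code) of mapping the unit
# suffixes above to bytes per second and seconds.

BANDWIDTH_UNITS = {
    "kbit/s": 1000 / 8, "kb/s": 1000 / 8,            # kilobits/s -> bytes/s
    "Mbit/s": 1_000_000 / 8, "Mb/s": 1_000_000 / 8,  # megabits/s -> bytes/s
    "Kibit/s": 1024 / 8, "Mibit/s": 2**20 / 8,
    "kB/s": 1000.0, "MB/s": 1_000_000.0,
    "KiB/s": 1024.0, "MiB/s": float(2**20),
}

def parse_bandwidth(text: str) -> float:
    """Return bytes per second; bare or unknown suffixes mean bytes/s."""
    for suffix, factor in BANDWIDTH_UNITS.items():
        if text.endswith(suffix):
            return float(text[: -len(suffix)]) * factor
    return float(text)

def parse_latency(text: str) -> float:
    """Return seconds; accepts the 'ms' and 's' suffixes."""
    if text.endswith("ms"):
        return float(text[:-2]) / 1000.0
    if text.endswith("s"):
        return float(text[:-1])
    return float(text)

print(parse_bandwidth("7Mbit/s"))  # 875000.0 bytes/second
print(parse_latency("210ms"))      # 0.21 seconds
```

Running it with the values from the example command line reproduces the numbers shown in the log: 7 Mbit/s is 7,000,000 bits per second, which divided by 8 gives 875,000 bytes per second, and 210 ms is 0.21 seconds.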
Typical bandwidth and latency for different network conditions
Bandwidth and latency vary a lot from scenario to scenario, and even with the time of day. The following numbers should therefore be taken as only the roughest of estimates, to be used in the absence of better data.
Residential wired connections
They are usually labeled with their bandwidth, e.g. a 10 Mbit/s residential connection. Latency mostly depends on how far your server is from your user: a user on the same continent can be expected to have up to 40 ms of latency.
If your user is in Europe and your server is in the US, or the other way around, a typical latency value would be 110 ms.
Keep in mind, however, that a congested Internet connection can reduce bandwidth and, due to the way TCP works, can even introduce extra latency.
WiFi, 3G and 4G
Wireless networks are the trickiest, and we don't yet have numbers that you can readily use. In practice, latency over wireless networks is hard to predict because it deteriorates rapidly with channel congestion. To understand this problem better, please read this blog post from a Google engineer.
Advantages and Limitations
We simply delay packets to approximate latency, and limit the number of packets the server sends to approximate bandwidth. The advantage of this approach is that it is extremely simple to use: just a couple of command-line switches, with no need for any special support from the operating system, nor special user privileges. It also works great for seeing the effects of HTTP/2 Push, and for headless performance testing.
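The shaping approach described above can be sketched as a toy model (this is an illustration written for this document, not ShimmerCat's implementation): each message is paced out at the configured bandwidth and then delayed by the configured latency.

```python
class Throttle:
    """Toy model of the shaping described above: pace bytes out at the
    configured bandwidth, then delay delivery by the configured latency."""

    def __init__(self, bandwidth_bps: float, latency_s: float):
        self.bandwidth = bandwidth_bps   # bytes per second
        self.latency = latency_s         # one-way delay, in seconds
        self.next_free = 0.0             # time at which the link becomes idle

    def send(self, nbytes: int, now: float) -> float:
        """Return the time at which nbytes would finish arriving."""
        start = max(now, self.next_free)             # wait for the link
        self.next_free = start + nbytes / self.bandwidth
        return self.next_free + self.latency         # plus propagation delay

# Using the values from the example log line above:
t = Throttle(875000.0, 0.21)
arrival = t.send(875000, now=0.0)   # an 875 kB response
print(arrival)                      # ~1.21: 1 s of transfer + 0.21 s latency
```

The same model shows why shaping both knobs matters: a small response is dominated by the latency term, while a large one is dominated by the bandwidth term.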
The disadvantage is precision: a real-life network is a lot less predictable. Also, because of the way TCP works, the bandwidth during the initial phase of a network connection is a lot lower than afterwards. This is called TCP slow start, and we are currently not able to simulate it. If you are interested in this feature, please drop a note in the comments!
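For intuition on why slow start matters, recall that a TCP sender roughly doubles its congestion window every round trip until it fills the link. The back-of-the-envelope sketch below (an idealization written for this document; losses, receive windows, and other details are ignored) uses the bandwidth and latency from the earlier example:

```python
# Idealized TCP slow start: the congestion window (cwnd) roughly doubles
# each round trip, so early RTTs carry far less data than the link allows.
# Losses, receive windows, and other real-world details are ignored.

MSS = 1460             # bytes per segment (a typical Ethernet MSS)
rtt = 0.21             # seconds, as in the simulated latency above
link = 875_000         # bytes/second, as in the simulated bandwidth above

cwnd = 10 * MSS        # a common initial window of 10 segments
sent = 0
for rtt_round in range(1, 6):
    burst = min(cwnd, int(link * rtt))   # capped by the bandwidth-delay product
    sent += burst
    print(f"RTT {rtt_round}: window {cwnd} bytes, cumulative {sent} bytes")
    cwnd *= 2

# After 5 round trips (about a second here), the connection has moved far
# less data than the raw 875 kB/s figure would suggest.
```

This is exactly the effect a constant-rate throttle cannot reproduce: the simulated connection runs at full speed from the first byte.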
Google Chrome has a built-in network simulator which is also very easy to use. However, as of this writing it doesn't seem to work very well with HTTP/2 Push: Chrome's simulator shows pushed streams as if they were the result of normal requests, which hides any latency gains that HTTP/2 Push could provide.
Another alternative is to use a kernel-supported packet-shaping solution, like NetEm, or, even better, a hardware simulation tool like the ones provided by Apposite Technologies. These are, however, more complex to set up.