In preparation for a talk I'll give at Codemotion Spain 2015, I've put up a small HTTP/2.0 mosaic demo (source code on github) that showcases the latency and page load improvements offered by the new version of the HTTP protocol.
Animated GIF showing the HTTP/2.0 mosaic demo. Reload the page to see the animation.
It's inspired by a similar concept found at Golang's h2 website. Since I've been using nginx HTTP/2.0 support for the past few months, I wanted to implement the tiles concept using a familiar setup and measure the performance improvements first-hand.
This post will use the mosaic demo as a tool to explain some of the key concepts found in HTTP/2.0.
HTTP/2.0 connection multiplexing
If we record the timelines followed by browsers in the two different scenarios (HTTP/2.0 and HTTP/1.1), we can make a few observations.
The H2 (shorthand for HTTP/2.0) version uses a single, multiplexed TCP connection (see the "Connection Id" column) to fetch all the resources, whereas the H1 version uses multiple TCP connections to parallelize network operations (again, look at the "Connection Id" column).
HTTP/2.0 timeline, in which tiles are requested roughly all at once by the browser
HTTP/1.1 timeline, in which tiles are requested sequentially by various parallel connections
Using multiple, parallel TCP connections is one of those "tricks" browsers employ to maximize the usage of the available bandwidth. They do so because HTTP/1.1 doesn't offer any effective way for the browser to request a resource before the previous one has been delivered.
HTTP pipelining was meant to address this in HTTP/1.1, but it's a solution that never really took off due to implementation inconsistencies, misbehaving middleboxes and the fact that it introduces head-of-line blocking issues by itself.
These multiple connections also explain why in the H1 version images appear in a seemingly random order. That's because each connection is assigned a subset of the page images to download. You can see from the graphs that in this case Chrome is opening up to 6 parallel connections.
In the H2 timeline, we observe that images are requested by the browser all at once and delivered by the server over the same TCP connection. This lets the TCP congestion window ramp up to optimal values, fully exploiting the available bandwidth and avoiding the extra round-trips that multiple TCP connections would require. As you can probably verify yourself, the net result is usually a shorter page load time.
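For reference, enabling HTTP/2.0 in nginx 1.9.5+ boils down to one extra keyword on the listen directive. The sketch below is illustrative only: the hostname, paths and certificate files are placeholders for your own setup.

```nginx
server {
    # "http2" enables the new protocol; "ssl" is effectively required,
    # since browsers only negotiate HTTP/2 over TLS (via ALPN)
    listen 443 ssl http2;
    server_name demo.example.com;

    ssl_certificate     /etc/nginx/certs/demo.crt;
    ssl_certificate_key /etc/nginx/certs/demo.key;

    root /var/www/mosaic;
}
```

Clients that don't speak HTTP/2.0 simply negotiate HTTP/1.1 over the same port, which is what makes the deployment backward-compatible.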
What about HTTP/2.0 server push?
Further page-load improvements could be achieved by leveraging HTTP/2.0 push.
In this demo, my browser spends roughly 5 ms parsing the HTML before it knows which resources have to be downloaded and requests them from the server.
HTTP/2.0 push allows the server to start pushing images (and other resources like CSS, JavaScript, etc.) to the client even before the HTML is parsed. In this particular case, this translates to a possible saving of those ~5 ms.
Push support is not currently available in nginx (version 1.9.6), so I experimented with nghttpd instead. While it does indeed work, there are still some rough edges, and the numbers are not conclusive. Browser-side support is also still a bit rough (no diagnostics for server pushes yet).
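As a sketch, nghttpd can be told which resources to push alongside a given path via its --push option. The port, key/certificate files and tile paths below are placeholders, not the demo's actual layout:

```shell
# Serve the current directory over HTTP/2 on port 8443 and,
# whenever /index.html is requested, push two tiles along with it
nghttpd --push='/index.html=/tiles/0.png,/tiles/1.png' 8443 server.key server.crt
```

The pushed resources land in the browser cache before the parser even discovers the corresponding img tags.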
HTTP/2.0 push is a promising technology, but we're still at a point where best practices have to surface and implementations have to mature.
How do we calculate the time needed to load the demo?
Navigation Timing events. Image taken from the official W3C specification
The demo takes a very simplistic approach to measuring page load time. It does so by leveraging the Navigation Timing API, counting the time spent by the browser between the connectStart event and the final page load event.
connectStart corresponds to the establishment of the first TCP connection to the HTTP server. For this reason, the page load time shown in the demo doesn't account for things like DNS lookup time or any other event that would disturb a direct comparison of HTTP/1.1 and HTTP/2.0.
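A minimal sketch of the metric, assuming loadEventEnd is taken as the "final page load" marker (the timestamps in the example are made up for illustration):

```javascript
// Page load time as measured by the demo: the delta between the start of
// the first TCP connection and the end of the load event, in milliseconds.
// `timing` is expected to have the shape of window.performance.timing.
function pageLoadTime(timing) {
  return timing.loadEventEnd - timing.connectStart;
}

// In a browser this would be called as pageLoadTime(window.performance.timing).
// Example with hypothetical timestamps (ms since the Unix epoch):
const t = { connectStart: 1447000000000, loadEventEnd: 1447000000310 };
console.log(pageLoadTime(t)); // 310
```

Because both timestamps come from the same monotonic source, the subtraction is immune to wall-clock adjustments during the page load.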
Performance under simulated network conditions
Beware: this is a toy benchmark, a demo. It is not meant to represent real-world conditions. Right now it only tests two specific implementations (Chrome's and nginx's), along with specific content and network scenarios. Still, it yields some interesting numbers which match the expectations I had.
I ran the demo under a few different network scenarios, and recorded the median page load time as well as the 99th percentile over 1000 runs. RTT and bandwidth are those measured between the client running the benchmark and the VPS hosting the demo.
| Scenario | Bandwidth | RTT | H1 median | H2 median | H1 p99 | H2 p99 |
|---|---|---|---|---|---|---|
| 3G phone | ~780 Kbit/s | ~100 ms | 5.21 s | 1.30 s (-75%) | 5.96 s | 2.38 s (-60%) |
| Wifi | ~8 Mbit/s | ~43 ms | 1.88 s | 0.892 s (-53%) | 2.14 s | 1.06 s (-50%) |
| Fiber (FTTH) | ~30 Mbit/s | ~40 ms | 1.72 s | 0.308 s (-82%) | 1.93 s | 0.66 s (-65%) |
In this demo, HTTP/2.0 maximizes the bandwidth usage in all these scenarios:
- It's particularly beneficial on 3G, with a median absolute saving of 3.91 seconds and a relative 75% saving.
- It's also effective with fast fiber connections. Here the saving in seconds is much lower, but the relative saving is huge: 82%.
There are a few important things to notice here:
The demo delivers around 230 small objects. This creates the "ideal scenario" to appreciate HTTP/2.0 latency-related improvements.
If there were fewer objects in the page, the difference between the two protocols would be much less appreciable. In some situations, depending on the mix of objects, HTTP/1.1 can even be faster. There's an interesting presentation from the nginx team with a few benchmarks that demonstrate this scenario.
If these objects were larger (think: videos), the story would also be completely different. HTTP/2.0 really shines when the browser generates a considerable number of small requests.
For these reasons, these improvements cannot be easily projected onto other scenarios.
The demo is not optimized to be delivered over HTTP/1.1.
It could use domain sharding to increase the number of parallel connections started by the browser and achieve faster page load times. That would, however, mean using one of these HTTP/1.1 "workarounds" that the industry is trying to overcome, since they usually come with some downsides.
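For illustration, sharding would mean spreading the tiles across several hostnames that all resolve to the same server, so the browser applies its roughly-6-connections-per-host limit to each hostname separately. The shard hostnames below are hypothetical:

```html
<!-- Tiles spread across three hypothetical shard hostnames;
     the browser opens a separate connection pool for each host -->
<img src="https://shard1.example.com/tiles/0.png">
<img src="https://shard2.example.com/tiles/1.png">
<img src="https://shard3.example.com/tiles/2.png">
```

Under HTTP/2.0 this trick becomes counterproductive: it fragments what could be a single warm, multiplexed connection into several cold ones.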
We're comparing SSL (HTTP/2.0) with plaintext (HTTP/1.1) traffic.
Most browser vendors decided to implement HTTP/2.0 exclusively over SSL. This means that most real-world HTTP/2.0 connections will incur the SSL setup overhead. Under optimistic circumstances (i.e. availability of TLS False Start or SSL session resumption), this is a one-round-trip penalty.
Given this de-facto requirement of SSL for HTTP/2.0, I felt that comparing plaintext HTTP/1.1 with encrypted HTTP/2.0 was still meaningful.
Performance is all good, but can I deploy HTTP/2.0 services right now?
HTTP/2.0 is available in the majority of modern browsers. Given that it can be easily deployed in a 100% backward-compatible manner, I'd say that client-side support is a non-issue.
HTTP/2.0-compatible browser market share as of November 2015. Data from caniuse.com. "Android Browser" refers to the stock AOSP browser, as opposed to "Chrome for Android", which ships with modern versions of Android
What's holding back most people from deploying HTTP/2.0-compatible infrastructure is the low availability of server-side options. Most vendors are currently in the process of delivering their first HTTP/2.0-compatible solutions.
Nginx Inc. released its first, stable HTTP/2.0-compatible version just a few weeks ago (end of September 2015).
The Apache Software Foundation released an experimental mod_http2 a few weeks ago (mid-October 2015).
Microsoft IIS: Technical preview available in Windows Server 2016.
HAProxy: no HTTP/2.0 support for now (it does support ALPN though, so it can be used in TCP mode).
At work, we base a big chunk of our infrastructure on HAProxy, and unfortunately TCP mode is not an option in our specific case. This is currently a major operational obstacle for us.
Want to know more?
Well, I hope to see you in Madrid at this year's Codemotion, then! I will give a talk on HTTP/2.0 on November 28th. That'll be my small contribution to the adoption of a standard I've lived off for the past 15 years :-)