What is HTTP/2, how is it different from HTTP/1.1 and why should you bother to use it?
In the beginning, the world wide web consisted mostly of text. Then came the “Under Construction” GIFs, Calvin and Hobbes, and the MIDI files playing in the background. This was also the time of visitor counters, BLINK and MARQUEE. As storage got cheaper and faster internet connections became more common, videos of cute cats flooded the internet. Today, everyone is sharing everything instantly, and word processing, spreadsheets and other office applications have moved from the desktop to the cloud.
The internet has changed quite a lot since its humble beginning, but the Hypertext Transfer Protocol (HTTP), the application protocol used as a foundation for communicating on the internet has not changed much since the release of HTTP/1.1 in 1999.
HTTP/1.1 worked well back in the day, but it’s not capable of efficiently serving today’s instant, massive and high-speed internet. In 2009, Google realized that HTTP/1.1 was about to reach its expiration date, and started an internal project called SPDY, with the goal of reducing web page load time. Responsibility for the SPDY specification has since been transferred to the IETF, which in turn used it as the basis for the HTTP/2 specification, RFC 7540.
Can HTTP/2 save the internet from coming to a grinding halt?
What’s Wrong With HTTP/1.1?
First of all, let me point out that there isn’t really anything wrong with HTTP/1.1 per se: It still works well for the use cases it was designed to handle. The problem is that, with the way the internet has changed over the years, HTTP/1.1 is now being used in ways that were never in the original design scope. The main issues we’re seeing with HTTP/1.1 on today’s internet are head-of-line blocking and the latency challenges that come with it.
Head-of-line blocking occurs when using HTTP/1.1 because only one request can be outstanding on a connection at a time. This means that if the client sends two requests to the server, the response to the second request has to wait for the response to the first request to complete. The second request/response is head-of-line blocked by the first. HTTP/1.1 clients use a number of heuristics (often guessing) to determine what requests to send first, preferably sending requests that will probably result in a smaller response first.
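To make this concrete, here is a toy timing model of the situation (no real networking; the resource names and millisecond figures are made up for illustration):

```python
# Toy model: two requests on one HTTP/1.1 connection must be answered in
# order, so a quick response queues behind a slow one. The millisecond
# figures and resource names below are hypothetical.

responses = {"report.html": 300, "style.css": 50}  # time to produce each, in ms

def finish_times(order):
    clock, finished = 0, {}
    for name in order:  # responses complete strictly one after another
        clock += responses[name]
        finished[name] = clock
    return finished

print(finish_times(["report.html", "style.css"]))
# style.css needs only 50 ms on its own, but it is head-of-line blocked
# behind report.html and is not done until 350 ms.
```

Reordering the requests helps the small response, but something is always stuck at the back of the queue.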
Another way HTTP/1.1 clients try to reduce head-of-line blocking is by opening multiple TCP connections to a server. Most browsers open between four and eight connections per origin, i.e. per combination of scheme, host and port. Using so many connections unfairly monopolizes network resources.
How is HTTP/2 different from HTTP/1.1?
There are a lot of exciting changes to HTTP/2 compared to HTTP/1.1. The most notable change is perhaps the transition from a text-based to a binary protocol.
While HTTP/1.1 was text-based, a feature that unfortunately made the protocol parsing somewhat ambiguous, HTTP/2 is a binary protocol. Binary protocols are more efficient to parse and more compact on the wire. Debugging a binary protocol, however, can be tricky. Fortunately, tool support is slowly getting better. There is, for instance, a Wireshark plugin in the works, and it’s not a crazy assumption that tool support for HTTP/2 will be on par with HTTP/1.1 once HTTP/2 usage on the internet reaches critical mass. All browsers that support HTTP/2 also support debugging HTTP/2 in their respective developer tools.
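The binary framing is actually quite simple at its core: every HTTP/2 frame starts with a fixed 9-octet header, defined in section 4.1 of the specification, containing a 24-bit payload length, an 8-bit type, an 8-bit flags field and a 31-bit stream identifier. A minimal sketch of decoding it:

```python
# Sketch of decoding the fixed 9-octet HTTP/2 frame header (RFC 7540 §4.1):
# 24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream identifier
# (the high bit of the last four octets is reserved and must be ignored).

def parse_frame_header(data: bytes):
    length = int.from_bytes(data[0:3], "big")                   # 24-bit length
    frame_type, flags = data[3], data[4]                        # type, flags
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF   # clear reserved bit
    return length, frame_type, flags, stream_id

# Example: a HEADERS frame (type 0x1) with the END_HEADERS flag (0x4)
# carrying a 16-byte payload on stream 1.
header = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (16, 1, 4, 1)
```

Compare this with HTTP/1.1, where a parser has to scan for line endings and handle oddities like header folding; here every field sits at a fixed offset.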
Other changes, like multiplexing, aim to solve the head-of-line blocking and latency issues we have in HTTP/1.1. HTTP/2 is multiplexed, meaning that it allows multiple request and response messages to be in flight at the same time. That way, there is no longer any need for response two to wait for response one to complete as described in the head-of-line blocking example above. The use of multiplexing, in turn, eliminates the need for the client to open several TCP connections to a server, thus preventing resource hogging.
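The idea behind multiplexing can be sketched in a few lines: responses are chopped into frames tagged with a stream id, interleaved on one connection, and sorted back into streams by the receiver. The stream ids, chunk size and payloads below are made up for illustration:

```python
from collections import defaultdict

# Toy sketch of multiplexing: chop two responses into small frames tagged
# with a stream id, interleave them on one "connection", and reassemble
# per stream on the receiving side. All values are hypothetical.

def frames(stream_id, payload, chunk=4):
    return [(stream_id, payload[i:i + chunk]) for i in range(0, len(payload), chunk)]

html = frames(1, b"<html>...</html>")  # stream 1: the page
css = frames(3, b"body{}")             # stream 3: the stylesheet

# Interleave round-robin, then append whatever is left of the longer stream.
wire = [f for pair in zip(html, css) for f in pair]
wire += html[len(css):] + css[len(html):]

# The receiver sorts frames back into streams -- neither blocked the other.
streams = defaultdict(bytes)
for sid, chunk in wire:
    streams[sid] += chunk
print(streams[1], streams[3])  # b'<html>...</html>' b'body{}'
```

Because frames from different streams can be freely interleaved, a slow response no longer holds up a fast one.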
Two other techniques for reducing latency are server side push and header compression.
HTTP/2 supports server side push
Server side push should not be confused with WebSockets, which was introduced with HTML5 in 2011 as RFC 6455. The WebSockets protocol provides full-duplex communication channels over a single TCP connection. This can, for instance, be used to replace AJAX calls where a client is polling the server at regular intervals, often to update a particular part of a website. A chat or a stock ticker are classic examples. With WebSockets, the server can instead push any changes to the client - there is no need for the client to actively ask the server if there have been any changes.
HTTP/2’s server side push works in a similar manner, but is not intended to be a continuous and open communication channel between the server and the client. Instead, the server will try to guess what kind of resources a client is likely to request in the very near future, and then push those resources to the client before they are actually requested. One practical example is an HTML page with an embedded image. In HTTP/1.1, the client would first request the HTML, then parse it, before finally requesting the image referenced in the HTML. With HTTP/2, the server can instead actively push the image to the client - because the client is very likely to also request the image.
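A toy model of that idea, where the mapping from page to sub-resources is hypothetical application logic, not something the protocol defines:

```python
# Toy model of server push: the server knows which sub-resources a page
# references, so it can send them before the client asks. The paths and
# the push_map below are hypothetical application logic.

push_map = {"/index.html": ["/hero.jpg", "/style.css"]}

def handle_request(path):
    pushed = push_map.get(path, [])
    # In real HTTP/2, the server announces each pushed resource with a
    # PUSH_PROMISE frame before delivering it, so the client can decline
    # pushes it doesn't want (e.g. resources it already has cached).
    return {"response": path, "push": pushed}

print(handle_request("/index.html"))
```

Note the cancellation point: a well-behaved server announces the push first, so clients with warm caches aren't forced to download bytes they already have.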
The headers are compressed
Header compression might not sound like a big thing, but it can be, particularly on mobile devices. Have a look at this example from the HTTP/2 FAQ:
“If you assume that a page has about 80 assets (which is conservative in today’s Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips to get the headers out “on the wire.” That’s not counting response time - that’s just to get them out of the client.”
Cookies are often the main problem, because they tend to be quite big. With header compression, it’s possible in some cases to reduce the header size so much that all the headers fit in a single packet, potentially reducing the time it takes to transfer headers considerably.
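The FAQ’s arithmetic is easy to check with a simplified model that ignores congestion-window growth, and a quick experiment shows just how redundant HTTP/1.1 headers are. Note that HPACK, HTTP/2’s header compression, is table-based rather than zlib-based (SPDY’s zlib approach was abandoned after the CRIME attack); zlib is used below purely to illustrate the redundancy, and the example headers are made up:

```python
import math
import zlib

# Back-of-the-envelope version of the FAQ's claim, ignoring congestion-
# window growth: 80 requests x 1400 bytes of headers on a fresh TCP
# connection whose initial window holds roughly 10 packets of ~1400 bytes.
total_header_bytes = 80 * 1400       # 112,000 bytes of request headers
bytes_per_round_trip = 10 * 1400     # ~14,000 bytes per round trip
print(math.ceil(total_header_bytes / bytes_per_round_trip))  # 8 round trips

# HPACK is table-based, not zlib-based, but compressing two near-identical
# requests (hypothetical headers) still shows how much repetition there is.
headers = (
    b"GET /style.css HTTP/1.1\r\nHost: example.com\r\n"
    b"Cookie: session=abc123; prefs=dark\r\nUser-Agent: ExampleBrowser/1.0\r\n\r\n"
)
two_requests = headers + headers.replace(b"/style.css", b"/app.js")
print(len(two_requests), len(zlib.compress(two_requests)))
```

The second request differs from the first by a single path, yet HTTP/1.1 resends every byte of it; any scheme that remembers previously sent headers wins big here.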
How can I start using HTTP/2?
If you’re on the receiving end of the tubes that supposedly make up the internet, starting to use HTTP/2 is amazingly easy: Just make sure you use a modern browser. All the major browsers, including Firefox, Chrome, Opera, Internet Explorer, Microsoft Edge and Safari, now support HTTP/2. Please note, however, that some of them only do so over TLS. If you’re unsure if your current browser supports HTTP/2, you can check with Akamai’s HTTP/2 browser support detector.
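Under the hood, a browser offers HTTP/2 during the TLS handshake using ALPN; “h2” is the ALPN token for HTTP/2 over TLS. A minimal sketch using Python’s standard library (no connection is actually made here):

```python
import ssl

# Sketch: offering HTTP/2 during the TLS handshake via ALPN. "h2" is the
# ALPN token for HTTP/2 over TLS; a server that doesn't speak it can fall
# back to HTTP/1.1. No network connection is made in this snippet.

def make_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # preference order
    return ctx

ctx = make_client_context()
print(type(ctx).__name__)  # SSLContext
# After wrapping a real socket and completing the handshake,
# conn.selected_alpn_protocol() would return "h2" if HTTP/2 was negotiated.
```

This is also why “HTTP/2 only over TLS” is less of a limitation than it sounds: the TLS handshake doubles as the protocol negotiation.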
If you are at the other end of the tubes, a.k.a. the server side, supporting HTTP/2 is a bit more complicated. None of the most popular servers support HTTP/2 out of the box yet; as of right now, neither IIS, Apache, nor nginx do. IIS will support it in Windows 10, so will future versions of Apache 2.4 (without the current need to install patches and third party modules), and nginx plans to support HTTP/2 by the end of 2015.
Many of the biggest names on the internet, including Google, Facebook and Twitter, are already serving their content using HTTP/2. There’s really no good reason not to do the same yourself as soon as you can.
This post is just a basic introduction to HTTP/2, and a lot of details have been left out. If you want to learn more about the protocol, I highly recommend the free e-book “http2 explained” by Mozilla’s Daniel Stenberg. It gives a very good technical overview of HTTP/2 without diving down into the intricate details of the specification itself.
Do you have any thoughts you want to share? A question, maybe? Or is something in this post just plainly wrong? Then please send an e-mail to vegard at vegard dot net with your input. You can also use any of the other points of contact listed on the About page.