Let's start off with a little bit of a history lesson. Rewind back to 1991: HTTP/0.9. It was designed around the notion of text-based document exchange. 0.9 supported a single verb, GET, and a single response type, HTML; it was essentially pure request and response. It was built for sharing physics documents among the folks over at CERN.
Fast forward to 1996, when HTTP/1.0 was officially RFC'd so that other individuals and organizations could implement systems that relied on HTTP. It formalized the request-and-response dance further by adding request and response headers. The RFC also called for multiple response types (images, for example), so HTTP started to support more resource types delivered over the wire.
In 1999, HTTP/1.1 was finalized and RFC'd. In my opinion the most important aspect of HTTP/1.1 is that it introduced several more verbs, or actions, that describe your request: PUT, DELETE, OPTIONS, and TRACE, to name a few (POST was already there in 1.0, and PATCH arrived later, in RFC 5789). Another major 'feature' of HTTP/1.1 is that persistent connections became the default (in 1.0 you had to opt in with the Keep-Alive header). This allowed clients to reuse connections to the same server, reducing the latency cost of performing a connection handshake for every request.
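To make the persistence idea concrete, here is a minimal sketch of what two requests on one reused connection look like at the text level (the host name and paths are placeholders I made up). In HTTP/1.1 persistence is the default, so no Keep-Alive header is needed; a "Connection: close" on the final request tells the server to tear the connection down.

```python
# Sketch: serializing two bare-bones HTTP/1.1 requests that could be
# written back-to-back on the same TCP socket, thanks to persistent
# connections. example.com and the paths are placeholders.

def build_request(method: str, path: str, host: str, close: bool = False) -> bytes:
    """Serialize a minimal HTTP/1.1 request."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    if close:
        lines.append("Connection: close")  # opt out of persistence on the last request
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# Both requests reuse one connection; only the last one asks to close it:
first = build_request("GET", "/index.html", "example.com")
last = build_request("GET", "/style.css", "example.com", close=True)
```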
HTTP/1.1 has had its days numbered for a few years now. It was not designed for today's web pages, which are rather large (2+ MB per page). With HTTP/1.1, when you load a web page there is no way to tell the HTTP server how you want your requests prioritized. We are at the mercy of whatever browser the user is running, which is not great: as the developer of a web application, you know which requests and responses matter most, and you could prioritize them on page load. HTTP/2 addresses that exact issue with resource/request prioritization. Another issue with HTTP/1.1 is that persistent connections, while a brilliant concept, have a gigantic flaw called head-of-line blocking: on a given connection, neither party can send further data while a request or response is still in flight between them. HTTP/2 addresses this by encapsulating data in streams and multiplexing those streams on the wire. This multiplexing seriously amortizes the cost of establishing connections to websites. For someone like me, who lives very far from popular websites, this means a better experience on the web, since I will not incur that latency cost as often as I do today. Reducing latency is crucial to a better browsing experience.
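The multiplexing idea can be sketched in a few lines. This is a toy illustration, not the real HTTP/2 framing: each chunk of data is tagged with a stream id, so chunks belonging to different requests can be interleaved on one connection and reassembled at the other end, and no stream has to wait for another to finish.

```python
# Toy model of stream multiplexing (not the actual HTTP/2 frame format):
# split each stream into tagged chunks, interleave them round-robin,
# then reassemble each stream by its id on the receiving side.
from collections import defaultdict

def multiplex(streams: dict, chunk: int = 4) -> list:
    """Interleave chunks of several streams as (stream_id, payload) frames."""
    frames = []
    offsets = {sid: 0 for sid in streams}
    pending = set(streams)
    while pending:
        for sid in sorted(pending):
            off = offsets[sid]
            frames.append((sid, streams[sid][off:off + chunk]))
            offsets[sid] = off + chunk
            if offsets[sid] >= len(streams[sid]):
                pending = pending - {sid}  # this stream is fully sent
    return frames

def demultiplex(frames: list) -> dict:
    """Reassemble each stream from its tagged frames."""
    out = defaultdict(bytes)
    for sid, payload in frames:
        out[sid] += payload
    return dict(out)

# An HTML page and a stylesheet share one connection without blocking each other:
streams = {1: b"<html>page</html>", 3: b"body{color:red}"}
frames = multiplex(streams)
```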
What is HTTP/2? Besides what I have already mentioned, HTTP/2 is a giant improvement over HTTP/1.1. The spec was largely inspired by SPDY, and in fact HTTP/2 picks up where SPDY left off, since SPDY is no longer being actively developed. SPDY's goals were to improve page load times by reducing the cost of high latency, and to improve web security altogether. The folks at Google who designed SPDY sit on the IETF working group overseeing HTTP/2. In short, HTTP/2 has the following goals:
- Minimize the cost of latency on reused connections
- Improve security of connections
- Keep HTTP/1.1 semantics
- Add server push
- Compress HTTP headers
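The header compression goal deserves a quick illustration. HTTP/2 uses HPACK for this; what follows is only a simplified sketch of the core idea, not the real wire format: headers seen once are stored in a table shared (implicitly, by staying in sync) between encoder and decoder, so later requests can send a small index instead of repeating the full name/value pair.

```python
# Simplified sketch of the idea behind HPACK header compression
# (not the real HPACK encoding). Literal headers are appended to a
# table on both sides in the same order, so a later occurrence can
# be sent as just an integer index into that table.

class Encoder:
    def __init__(self):
        self.table = []  # previously seen (name, value) pairs

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(self.table.index(pair))  # tiny index, not the full pair
            else:
                self.table.append(pair)
                out.append(pair)  # full literal; both sides remember it
        return out

class Decoder:
    def __init__(self):
        self.table = []  # mirrors the encoder's table

    def decode(self, encoded):
        headers = []
        for item in encoded:
            if isinstance(item, int):
                headers.append(self.table[item])  # look the pair back up
            else:
                self.table.append(item)
                headers.append(item)
        return headers

# The second request's headers compress down to a handful of integers:
request = [(":method", "GET"), (":path", "/"), ("host", "example.com")]
enc, dec = Encoder(), Decoder()
first_wire = enc.encode(request)
second_wire = enc.encode(request)
```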
While the HTTP/2 spec does not mandate the use of TLS, most of the big browsers have come out and said that they will not implement HTTP/2 over plain HTTP. There is already a lot of momentum out there; if you work in the Microsoft universe, you'll be glad to know that IIS 10 supports HTTP/2. Check out these links for more information on HTTP/2
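In practice, a client and server agree on HTTP/2 during the TLS handshake itself, using the ALPN extension. A quick sketch of the client-side setup with Python's standard ssl module (no connection is actually made here):

```python
# Offering HTTP/2 during the TLS handshake via ALPN: the client lists
# the protocols it speaks, preferring "h2" and falling back to HTTP/1.1
# if the server doesn't support it. This only configures the context;
# the negotiation happens when a connection is wrapped with it.
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back to 1.1
```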