
High performance browser networking review

I recently listened to a podcast episode about the newly standardized HTTP/2 protocol and I was really impressed by the clarity of Ilya Grigorik's exposition.

Ilya happens to be one of the people working on HTTP/2, which will be a huge step forward in terms of latency reduction and user experience on the web.

During the podcast Ilya mentioned his book, High performance browser networking, which is available online for free.

High performance browser networking book cover

This is the table of contents (taken from the O'Reilly website):

  • Deliver optimal TCP, UDP, and TLS performance
  • Optimize network performance over 3G/4G mobile networks
  • Develop fast and energy-efficient mobile applications
  • Address bottlenecks in HTTP 1.x and other browser protocols
  • Plan for and deliver the best HTTP 2.0 performance
  • Enable efficient real-time streaming in the browser
  • Create efficient peer-to-peer videoconferencing and low-latency applications with real-time WebRTC transports

Needless to say, the book is very interesting. It gives a solid foundation for improving the delivery of data over HTTP in a number of situations.

I particularly liked the first part about networking. Apart from a recap of the classical networking building blocks, the author introduces contemporary topics like bufferbloat and TCP Fast Open, preparing the reader for a fight against latency that basically lasts the whole book.

A small digression: latency is one pesky thing that we can't easily improve. ISPs use bandwidth as the primary metric when advertising new products. But honestly, having a 300 Mbit connection at home (like I have here in Spain) versus a 100 Mbit connection doesn't really make a difference in many cases. It does make a difference when you stream large files or videos, but HTTP browsing and similar activities won't be affected much (if at all).
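A quick back-of-envelope sketch illustrates why: for a typical small web resource, connection setup (which costs round trips, not bandwidth) dominates the total fetch time. The numbers below are assumptions chosen for illustration, a 50 ms RTT, a 100 KB resource, and roughly 3 RTTs of TCP + TLS handshake overhead, not measurements.

```python
# Rough comparison (all figures are illustrative assumptions): time to fetch
# a 100 KB resource on a fresh HTTPS connection at two bandwidths, with a
# 50 ms RTT and ~3 RTTs of setup (TCP handshake + TLS 1.2 handshake), no DNS.

RTT_MS = 50
RESOURCE_KB = 100

def fetch_time_ms(bandwidth_mbit: float) -> float:
    setup = 3 * RTT_MS                                           # handshakes
    transfer = RESOURCE_KB * 8 / (bandwidth_mbit * 1000) * 1000  # transfer, ms
    return setup + transfer

for mbit in (100, 300):
    print(f"{mbit} Mbit: {fetch_time_ms(mbit):.1f} ms")
# 100 Mbit: 158.0 ms
# 300 Mbit: 152.7 ms
```

Tripling the bandwidth shaves only about 5 ms off a ~150 ms fetch, because the handshake round trips are fixed by latency, not throughput.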

The minimum theoretical round-trip time between New York and London over a fiber-optic cable is around 60 msec, and that is pretty much bound by the speed of light, which can't be tweaked so easily :-) Multiply that latency by the number of connections you typically open while downloading all the resources of a web page, and you get a pretty good idea of how bad the situation can get (although the problem is usually mitigated by Keep-Alive, at least when those resources live on the same server).
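That ~60 msec figure is easy to sanity-check. A minimal sketch, assuming an approximate great-circle distance of 5,570 km between New York and London and a typical fiber refractive index of about 1.5 (light travels roughly 1.5× slower in glass than in vacuum), both assumptions rather than measured values:

```python
# Back-of-envelope check of the minimum New York-London RTT over fiber.
# The distance and refractive index are assumptions, not measured values.

SPEED_OF_LIGHT_KM_S = 299_792        # speed of light in vacuum
FIBER_REFRACTIVE_INDEX = 1.5         # typical for optical fiber (assumed)
NY_LONDON_KM = 5_570                 # approximate great-circle distance

speed_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX  # ~200,000 km/s
rtt_ms = 2 * NY_LONDON_KM / speed_in_fiber * 1000

print(f"minimum theoretical RTT: {rtt_ms:.1f} ms")
# minimum theoretical RTT: 55.7 ms
```

Real-world RTTs come out higher, since cables don't follow great circles and routers add their own delays, but the physical floor is already in the tens of milliseconds.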

The book eventually converges on the presentation of the HTTP/2 protocol, whose primary target is, in fact, the reduction of latency by means of multiplexed connections and binary framing.

Thanks Ilya for this great resource!

© Giuseppe Ciotta. Built using Pelican. Theme on github.