Thursday, July 2, 2009

P2P, bandwidth, and FTTH urgency.


Figure 1 (from MPI-SWS): BitTorrent throttling by geographically spread ISPs. Red areas indicate ISPs throttling BitTorrent traffic.

This figure is from the Max Planck Institute for Software Systems' Glasnost project. It shows geographical regions where ISPs interfere with BitTorrent traffic. Comcast (and several other ISPs) claim that the P2P applications of a few users slow down the Internet for all network users: all the bandwidth is used up by a minuscule subset of the subscribers, leaving everyone else with a slow network. There's no reason to disbelieve this argument; in the statistically multiplexed Internet, a limited shared resource being over-used results in poor quality for all users.
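To make the shared-resource argument concrete, here's a toy back-of-the-envelope sketch. Every number in it is invented (link capacity, user counts, per-user demand); the point is just the pro-rata squeeze on a statistically multiplexed link when a small subset of users demands far more than the average:

```python
# Toy model of statistical multiplexing. All numbers are invented.
N = 1000
LINK_MBPS = 2000.0   # shared capacity, sized for a 2 Mbps/user average
heavy = 50           # the "minuscule subset": 5% running P2P flat out

# Each user's demanded rate in Mbps: 50 heavy users at 50 Mbps, the rest at 1 Mbps.
demands = [50.0] * heavy + [1.0] * (N - heavy)

total = sum(demands)                 # 50*50 + 950*1 = 3450 Mbps demanded
share = min(1.0, LINK_MBPS / total)  # pro-rata squeeze once demand exceeds capacity

# A light user asked for 1 Mbps but actually gets:
light_user_mbps = 1.0 * share
print(round(light_user_mbps, 2))  # → 0.58
```

With these made-up numbers, 5% of the subscribers generate roughly 72% of the demand, and the light users get squeezed to about 58% of their modest 1 Mbps, which is exactly the "slow network for everyone else" complaint.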

There are two ways of dealing with the issue. Either bandwidth-hogging users are cut off (as with BitTorrent throttling), or the network capacity is increased to accommodate the "over-use". The latter technique bailed us out the last time Internet traffic exploded. Broadband was rolled out just as media-rich Internet applications were catching on (or was it the other way around?). Everyone was happy: customers got better service for similar subscription costs, Web 2.0 companies got the pipes to deliver their content, and ISPs created the whole broadband business, with the option of up-selling services like digital IPTV.

If it worked so well then, can't we do the same trick again? Why not roll out broader-broad-band: FTTH (fiber to the home), for example? The simple answer is that we cannot, at least not quickly, given the cost. When broadband came, the physical access network was already built! There were cable TV wires running to homes, and there were phone lines. In terms of a tree analogy, the leaves of the access network were already connected up. All that remained was to put in the trunk links and the branches, and there are a lot fewer branches than there are leaves. FTTH, on the other hand, will be prohibitively expensive in many countries: the leaves need to be rewired. The rollout timeline is therefore going to be slower than broadband's.

Back to P2P. Why single out P2P? Don't video CDNs like YouTube, Netflix, Hulu, etc. also consume large amounts of bandwidth? In my opinion, the extra load that P2P imposes on the access network, due to the uploading aspect, creates a lot more congestion there at present. A P2P system will upload (in theory) as much as is downloaded in the system, and all of this happens on the access network. That's a 2X increase in bytes traversing the most expensive component of the network (the edge), which means many more expensive boxes to cover the leaves of the ISP's tree.
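A quick sketch of that 2X claim. The file size and subscriber count are made up, and it idealizes a swarm where every downloaded byte is uploaded by another subscriber on the same kind of access link:

```python
# Bytes crossing the access network ("edge") when a file is delivered
# to N subscribers. Numbers are illustrative, not measurements.
FILE_GB = 1.0
N = 1000  # subscribers who each fetch the file once

# CDN case: each subscriber downloads the file from a server in the core;
# only the downlink leg touches subscriber access links.
cdn_edge_gb = N * FILE_GB

# P2P case: in the ideal swarm, every downloaded byte was uploaded by some
# peer, and both the download and the matching upload cross access links.
p2p_edge_gb = N * FILE_GB * 2

print(cdn_edge_gb, p2p_edge_gb)  # → 1000.0 2000.0
```

Same content delivered, twice the bytes on the edge; that doubling is the whole case against P2P from the ISP's cost perspective.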

However, most broadband connections are asymmetric (downlinks have higher data rates than uplinks). So P2P traffic is capped by a glass ceiling: the lower uplink data rate. Conventional CDNs, on the other hand, push data down the wire, so there is no reason for them to limit video quality and resolution until they hit the downlink rate. As high-quality video content catches on, there will be disgruntled users wondering about the difference between the data rate their subscription plan claims (XX Mbps) and what actually comes down the wire (XX/ZZ Mbps, ZZ being the over-subscription factor).
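The XX/ZZ arithmetic, with made-up numbers standing in for the advertised rate and the over-subscription factor:

```python
# Worst-case per-subscriber rate under over-subscription.
# Both numbers are hypothetical stand-ins for the post's XX and ZZ.
advertised_mbps = 20.0   # XX: what the subscription plan claims
oversubscription = 8.0   # ZZ: subscribers sharing capacity provisioned for one

# If all ZZ subscribers pull high-quality video at once, each sees only XX/ZZ.
worst_case_mbps = advertised_mbps / oversubscription
print(worst_case_mbps)  # → 2.5
```

Over-subscription is invisible as long as usage is bursty; sustained video streams are exactly the workload that makes everyone demand their full XX at the same time.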

ISPs need to hurry up and fiber-wire those leaves! And governments need to help with the capex. Another stimulus, perhaps?
