Moving at the Speed of Creativity by Wesley Fryer

Understanding Internet architecture, a need for smarter networks, TCP and UDP differences

Before starting this post, which promises to be rather lengthy, it is likely worth noting the disclaimer which appears in the upper left corner of my blog website but does not appear within my RSS feed for posts: “Wesley Fryer is the author of Moving at the Speed of Creativity. DISCLAIMER: The opinions expressed herein are my own and not necessarily those of my employer.” With that noted, I’ll proceed with a post on ideas I’ve been reflecting on and wanting to share in a thoughtful and careful way for some time.

These are my notes and reflections on the first four of five recorded lectures by Dr. Ramesh Johari of Stanford University in an iTunes U course titled, “The Future of the Internet.” (Nod to Dr. Tim Tyson for this recommendation.) I will be the first to admit I do not understand all the complexities of both technologies and politics which intersect in this discussion, but I am actively trying to learn more. For reasons you may be able to guess, blog posts from me related to this topic have been rare in the past, although I continue to save social bookmarks related to these issues from time to time.

This FREE series of podcast lectures from Dr. Johari offers the best explanation I have heard (or read) to date about the architecture of the Internet, how it evolved historically, where we are today, and what needs to be considered as we discuss where we need to go in the future. Ramesh's perspectives and opinions on these issues, which are often highly charged in the mainstream press, are balanced and well-supported in this series of lectures. I listened to the first four of these lectures (which are about 1.5 hours each) driving up to my parents' house in Kansas and back this weekend, and on my commute to and from work today. The description on the course website states:

The Internet today has evolved a long way from its humble beginnings as a federally funded research project. As a society, we find ourselves increasingly dependent on the Internet for our daily routine; and yet, the future of the Internet remains a matter of vigorous political, economic, and academic debate. This debate centers around ownership: who will own the infrastructure, and who will own the content that the network delivers? Unfortunately, most of this debate does not involve a substantive discussion of the “architecture” of the network, or the role that architectural design will play in shaping the ownership of the future global network. This course provides a non-technical introduction to the architecture of the Internet, present and future. Students will be taken on a tour through the inner workings of the network, with a view toward how these details inform the current debate about “network neutrality” and the ownership of the future Internet.

The phrase “non-technical introduction” perhaps needs some explaining. This course DOES include technical terms and details, but is more designed for people wanting to UNDERSTAND the elements and implications of both network design and politics surrounding networks than people who do or want to design and support networks professionally. In Ramesh’s words, many of his examples are “stylized” with some details left out, but his focus is to impart a workable understanding of these issues rather than share all the technical details. I appreciate this approach and think it’s an effective way to provide an understandable and digestible version of complex issues that many people likely don’t appreciate fully, yet need to because of the importance of the issues at stake.

Before sharing some notes I jotted down listening to these lectures, as well as my subsequent reflections, I'll briefly review several of the main books and information sources I have read or been exposed to previously which have informed my own existing schema (background knowledge) on these topics. These include:

  1. Dr. Nicholas Negroponte’s seminal work “Being Digital”
  2. Dr. Lawrence Lessig’s book “The Future of Ideas: The Fate of the Commons in a Connected World.”
  3. Steve Gibson and Leo Laporte’s excellent “Security Now” podcast series (Episodes 25 and 26 on “How the Internet Works,” Episode #47 on Internet Weaponry, and Episode #8 on “Denial of Service (DoS) Attacks” are definitely shows I’ve found helpful)
  4. The 2000 version of George Gilder’s book “TELECOSM: How Infinite Bandwidth will Revolutionize Our World”
  5. The WikiPedia article for the OSI model (Open Systems Interconnection Basic Reference Model) has also informed my thinking a bit, along with various conversations with technical folks about networking and Internet access options
  6. My five years as the director of distance learning for a college of education at a major university, with its associated learning curve related to videoconferencing, H.323 connections, firewall traversal issues related to videoconferencing, quality of service, etc.
  7. The January 2007 Wall Street Journal article “The Coming Exaflood” by Bret Swanson

I mention my own background knowledge because although I’ve been working with, on and around the Internet and networking technologies intensively for the past ten years as an educator, my knowledge about many technical issues still remains somewhat limited. I’m not an engineer by trade, I’m a teacher.

As an example of my limited background knowledge, until listening to and reflecting on this lecture series from Dr. Johari, I had not understood the basic differences between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) in networking and on the Internet. I knew these were both used, and had a general idea that UDP was related to video streaming, but I didn't understand the basic difference and the tradeoffs which are involved when one or both are used. TCP is the predominant protocol used on the Internet, and was one of the original protocols developed to share data. TCP guarantees delivery of packets over the network; it does this as a protocol, but the network itself (the Internet) does NOT make this guarantee. TCP does not guarantee how quickly packets will arrive or the order in which they travel across the network, but it does make a best effort to ensure that all the packets which make up a given file are eventually delivered. UDP, on the other hand, basically floods the network channel with a large quantity of packets in the hope that most of them will get through and delay (communication latency) will be minimized. File transfers over the Internet, including the distribution of audio and video via software programs like iTunes or BitTorrent, work well over TCP. Synchronous applications, like Internet telephony, videoconferencing, or live web-streaming, often use UDP to flood a network channel with packets and minimize latency. The desire of some applications to minimize latency, rather than to guarantee packet delivery, is in many ways at odds with how TCP works. When running on the same network, UDP packets will always (at least under past and what I understand to be currently accepted protocols) dominate over TCP packets, because UDP senders do not back off when the network gets congested the way TCP senders do. For this reason, I think some network administrators choose not to support UDP on their local networks. This has given rise to quality of service (QoS) frameworks for networking, as well as discussion of "fair share" limits on UDP traffic to ensure TCP requests on a network are not overwhelmed. Dr. Johari discusses all of these issues in this lecture series.
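To make the contrast concrete for myself, here is a minimal sketch (my own, not something from Dr. Johari's lectures) of how an application opens each kind of connection in Python; the hostname and ports are placeholders for illustration only:

```python
import socket

# TCP: connection-oriented; the protocol retransmits lost packets, so the
# application can assume every byte it sends eventually arrives, in order.
def send_reliable(host: str, port: int, data: bytes) -> None:
    with socket.create_connection((host, port)) as conn:  # three-way handshake
        conn.sendall(data)  # blocks until the stack has accepted every byte

# UDP: connectionless "fire and forget"; each datagram may be lost, duplicated,
# or reordered, but there is no handshake or retransmission delay, which is
# why latency-sensitive audio/video applications tend to prefer it.
def send_best_effort(host: str, port: int, data: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(data, (host, port))  # no delivery guarantee at all

if __name__ == "__main__":
    # example.com and these ports are placeholders, not real endpoints I use
    send_reliable("example.com", 80, b"hello over TCP")
    send_best_effort("example.com", 9999, b"hello over UDP")
```

The key difference shows up in what each call promises: the TCP send will keep retransmitting until everything arrives, while the UDP send simply hands the datagram to the network and hopes for the best.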

In defining new terms and concepts, Dr. Johari was the first person to introduce me to the term "autonomous system." According to WikiPedia, in the context of the Internet:

…an autonomous system (AS) is a collection of IP networks and routers under the control of one entity (or sometimes more) that presents a common routing policy to the Internet.

I had been familiar with the concepts and terms LAN (local area network) and WAN (wide area network), but not "AS." I had heard the term "Akamai" in the context of a vendor seeding videos on a network around the world so that thousands of people could simultaneously view and download them, but hadn't heard of the broader concept of a "content delivery network." In addition to Akamai, Dr. Johari mentioned Limelight Networks as a smaller competitor in the content delivery network field.

At a very basic level, I have understood packet switching, but perhaps not fully appreciated its differences from the traditional circuit switching of the past and the still-continuing POTS (plain old telephone system), as well as higher speed circuit-switched networks. I've heard the term ATM (Asynchronous Transfer Mode) used in discussions about T1 and other access lines, but not fully appreciated the HUGE differences in utilization potential of network capacity that is available in a packet-switched versus a circuit-switched environment.
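To get a feel for that difference in utilization, here is a back-of-the-envelope sketch I put together (the numbers are made up for illustration, not taken from the lectures): a circuit-switched link has to reserve each user's full peak rate whether or not they are sending anything, while a packet-switched link only has to carry the average load of its bursty users.

```python
# Back-of-the-envelope comparison with illustrative (made-up) numbers:
# how many bursty users can one 1.5 Mbps T1-class link support?

link_capacity_kbps = 1544   # nominal T1 capacity
per_user_peak_kbps = 128    # bandwidth a user needs while actively sending
duty_cycle = 0.10           # fraction of time a typical user is actually sending

# Circuit switching: each user holds a full 128 kbps circuit whether or not
# they are sending anything, so capacity divides by the peak rate.
circuit_switched_users = link_capacity_kbps // per_user_peak_kbps

# Packet switching: the link only has to carry the *average* load, so many
# more bursty users can statistically share the same pipe (at the cost of
# occasional congestion when too many of them burst at once).
packet_switched_users = link_capacity_kbps // (per_user_peak_kbps * duty_cycle)

print(f"Circuit switched: ~{circuit_switched_users} users")
print(f"Packet switched:  ~{int(packet_switched_users)} users")
```

With these made-up numbers the same link supports roughly ten times as many bursty users under packet switching, which is the utilization gain I had not fully appreciated.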

One of the main themes Dr. Johari emphasizes throughout his lectures is the remarkable chain of events which combined over the past twenty-five years to permit the Internet explosion that has taken place in the last decade. I was first introduced to the concept of end-to-end design for networking and the Internet by Dr. Lessig's book "The Future of Ideas: The Fate of the Commons in a Connected World." Where the themes in that book tend to favor a more extreme view of Internet architecture and the politics which should be supported regarding the Internet moving forward, Dr. Johari takes a more balanced and moderate approach, leaving open the possibility and need for different networking architectures based on the changed environment we see today in 2007 with Internet content and the move to higher bandwidth requirements for Internet-delivered video, as well as the need for a financially sustainable Internet architecture in which providers can reasonably expect a return on investment (ROI) for the additional infrastructure which will be required for the long-term viability of the Internet.

I read George Gilder's 2000 edition of "TELECOSM: How Infinite Bandwidth will Revolutionize Our World" and gained from it some additional insight into the background of the dot com bubble, which was largely responsible for the vast levels of investment in fiber optic cable that have powered the Internet explosion of the past decade. As Johari points out, that model of bankruptcy which made fiber both so abundant and affordable (for post-bankruptcy owners) is not a sustainable model for an infrastructure build-out moving forward. I had not been aware of the role of the NSF in the United States in deregulating the provision of long-haul telecommunications services in 1995. His metaphor of this basically being like turning over the provision of water utility services to a group of vendors, without even a common exchange provided for negotiation and contracting, was quite eye opening.

In many ways, the marvel of the modern Internet and the fact that it works as well as it does IS amazing. I fervently hope we are not living in a finite golden age of Internet access and publication, when abundant peering connections between tier 1 carriers make it possible for literally thousands of people around the world to access content like my blog for essentially zero additional cost beyond the fees they pay for local Internet access. After hearing most of Dr. Johari's lectures, I'm less certain that the architecture of the Internet and the experiences we have (most likely) come to take for granted in 2007 for our "online experiences" will remain exactly the same moving forward.

As we consider the variety of voices and “players” involved in discussions about the future of the Internet’s architecture, I think it is helpful to consider perspectives of the four players Dr. Johari highlights in his lecture:

Ultimately, we need to have an Internet architecture which supports the competing as well as sometimes complementary goals of these different entities.

Here is a good exercise to do with your students: “Define the Internet.”

Next, try taking on this question: “Define the purposes of the Internet.”

Dr. Johari takes on both these questions in his lectures, and illustrates the fact that while many people use the Internet every day for communication, learning and entertainment, many misconceptions about the Internet exist, especially relating to the lack of guarantees about packet delivery. He also highlights the fact that the purposes for which the Internet is used have morphed dramatically and continue to do so, and the bandwidth capacity which exists today for Internet traffic may be outstripped by demand, which is rising and likely to continue growing geometrically in the years to come. For more on this, the previously mentioned WSJ article "The Coming Exaflood" by Bret Swanson is eye opening.

Dr. Johari asks some really good questions which get at the intersection of politics and technologies at the heart of these discussions about the future of the Internet. One question was, "How do you incentivize investment in the network?" Clearly a basic business assumption and principle is the idea that companies will not make an investment if they cannot see a viable future return on that investment. So how can this be incentivized in the context of the Internet and the dramatically larger long-haul network capacity which will likely be required in the years ahead to meet increasing demand for high-bandwidth applications like video and synchronous conferencing? Where Dr. Lessig and others (at some point in the past; I'm not entirely sure what Lessig's current views on this are) might have advocated for simply "more bandwidth" to meet increasing demand rather than tampering with the Internet's fundamental end-to-end design, Dr. Johari suggests that it may be necessary to enable organizations and individuals to differentiate between packets traversing the network in order to guarantee different speeds of delivery. This suggestion could open a Pandora's box, of course, of both intended and unintended consequences. The necessity for this capacity for packet differentiation, however, may not be glaringly apparent to the "average consumer." Graphs of current and forecast bandwidth consumption suggest that this "exaflood" of data may indeed be on the way, and that our current Internet infrastructure may be ill-equipped to handle it. Hence the question of how we can incentivize network investment is pivotal. Quality of Service (QoS) and "smart networks" support this idea of packet differentiation, but as Dr. Johari illustrates, an even more basic level of packet differentiation is needed.
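For a sense of what packet differentiation can look like at the application level, here is a minimal sketch (my own illustration, not something from the lectures) of a program marking its outgoing UDP packets with a DSCP value so that QoS-aware routers could, in principle, give them priority. It assumes a typical Unix-like host; the address and port are placeholders, and whether any router along the path actually honors the marking is entirely up to the networks in between.

```python
import socket

# DSCP "Expedited Forwarding" (46) is the marking conventionally used for
# low-latency traffic such as voice; routers may honor it or ignore it.
DSCP_EF = 46

def open_marked_udp_socket() -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP value occupies the top six bits of the IP TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    return sock

if __name__ == "__main__":
    sock = open_marked_udp_socket()
    # 192.0.2.10:5060 is a placeholder destination for illustration only
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5060))
```

This is exactly the kind of "smartness" the classic end-to-end Internet does not promise: the marking is only a request, and a dumb network is free to ignore it.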

His example of spam email and an online purchase with PayPal is basic as well as helpful. Both of these TCP requests may consume an equivalent amount of Internet bandwidth. Yet the priority with which a consumer (as well, presumably, as the vendor selling the product in question in the second case) would want these packets handled is VERY different. No one except the spammer and their customers would want the spam email packet to have a high priority, but most consumers purchasing something on the web would DEFINITELY want to make sure their online transaction went through and their purchase was completed. This example is used by Dr. Johari to illustrate the idea that a simple measurement of bandwidth quantity is not sufficient to measure value. The diverse contexts of data exchanges on the Internet make the economics quite challenging to both understand and prescribe.

Dr. Johari offers the following as a guideline for the economics of packet transmissions online: “The relative value of transmitting content over a network defines how money should flow over the Internet.” The questions of what direction the money should flow and how much money should flow in a given situation are really up in the air at this point, in many circumstances.

In the past, according to Dr. Johari, the United States had more power and authority over discussions about Internet content and the Internet market than it has today. In the past, both "eyeballs" and content online were largely in the United States. Today that is no longer the case. The Internet is a GLOBAL network of networks, and the United States does NOT have hegemony over Internet policy. This is important to recognize, as is the fact that in some countries access to the Internet IS centrally managed. That is not the case in the U.S. and in many other countries. I actually would like to see a Google Earth KML/KMZ overlay of countries where Internet access is centrally managed. I'm not sure how long that list is, but I suspect it's longer than I would have guessed before hearing this lecture series.

End-to-end design of the Internet has led to the current situation where the network is, by design, "content agnostic." It strikes me that this design element is one of the characteristics of the web which makes it so inherently disruptive to authoritarian cultures, both in schools and in some countries. Overall, I think these discussions about the architecture of the Internet, the related economics and politics of that technology, and the future of the Internet are less a clash of "good versus evil," as they are sometimes portrayed in the mainstream media, than the natural growing pains of a dynamic environment which continues to morph in ways that were unpredictable even a few short years ago.

As I heard Dr. Johari describe the futility and ridiculousness of someone trying to centrally manage and predict the path of individual packets under our current end-to-end designed Internet, I was struck by a similarity to the accountability movement in U.S. schools. In the case of schools, technocratic politicians have ostensibly wanted to "improve schools" by centrally controlling the curriculum via the mandated assessments required for most students to advance to the next grade and eventually graduate. Just as it is futile to predict the exact path of a TCP packet in the modern/current Internet with 100% accuracy, so too is it futile for a leader to guarantee "educational quality" through these top-down mandates for punitive testing. My own question in response to these ideas was: do we need a framework analogous to "end-to-end design" for schools which enables creativity and innovation, yet defies managed central control? My instinct says yes.

As I mentioned previously, before listening to and reflecting on this lecture series I had not understood the differences between TCP and UDP well. As I now understand it, TCP as a protocol is concerned primarily with issues of reliability and fairness in packet delivery. As Dr. Johari repeats several times in his lecture, the "Internet" itself (internet protocol) does not guarantee the delivery of anything. Packet switching simply says that routers forward packets according to a routing table; it does not specify any guarantee of delivery. The discussion about neighbors on the same cable modem node sharing bandwidth reminded me of the "old days" of telephony when people shared a party line. I'd like to have access to a bandwidth utilization chart for neighborhoods in our area, to see where the highest levels of bandwidth utilization are concentrated. That data is the proprietary property of local ISPs, I suppose, so it is a pipe dream (pun intended), but I still think it would be interesting. When my local Internet access at home is slow, I'd like to know if there are P2P-sharing teenagers in my neighborhood responsible for the slowdown, or if the blame lies elsewhere…
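To make that "forward according to a routing table" idea concrete, here is a toy sketch I put together (the prefixes and next hops are invented for illustration): a router simply looks up the longest matching prefix for a packet's destination and hands the packet to the corresponding next hop, with no promise about what happens to it afterward.

```python
import ipaddress

# Toy routing table: (destination prefix, next hop). Entirely made-up values.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "router-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-b"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(destination: str) -> str:
    """Forward the way a router does: pick the longest matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The most specific (longest) prefix wins; nothing here guarantees delivery,
    # it only decides which way to send the packet next.
    best = max(matches, key=lambda item: item[0].prefixlen)
    return best[1]

print(next_hop("10.1.2.3"))     # -> router-b (the more specific /16 beats the /8)
print(next_hop("203.0.113.9"))  # -> default-gateway
```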

TCP is "charged" with reliable delivery of packets over a network. The roundtrip time for a TCP packet is the time it takes to travel from you to a receiver and back. Traceroute programs on computers are used to determine the path individual packets take to traverse a network, and to measure roundtrip time. Incidentally, my favorite way to measure local bandwidth remains the free Internet Frog speed test. Whenever I'm in a hotel, school district, or other location where I want to measure downstream and upstream bandwidth from my network access point, I google for that website and use it. I've published a series of screenshots taken after using that tool to Flickr, tagged them "bandwidth," and generally add them to a Flickr set I created just for those images.
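Roundtrip time is also easy to estimate at the application level. Here is a minimal sketch of my own (with a placeholder hostname) that times how long a TCP handshake takes, which serves as a rough stand-in for one roundtrip; real traceroute tools work differently, sending probes with increasing TTL values to discover each router along the path.

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 80) -> float:
    """Rough roundtrip estimate: time to complete a TCP handshake with host."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # handshake completed; the connection closes on exit
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # example.com is a placeholder host used only for illustration
    print(f"~{tcp_round_trip_ms('example.com'):.1f} ms round trip")
```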

TCP works by "moving up slowly," increasing its sending rate to go as fast as the connection will permit, but backing off quickly when packets are lost. This supports the idea of fairness and of guaranteeing packet delivery, based on available bandwidth. Dr. Johari shared a link in his lecture notes for tweaking TCP speed, which in the earlier days of "high speed" residential Internet access could lead to as much as a fourfold increase in access speed for computers. A simple Google keyword search for "tweak TCP" yields a variety of additional links related to this. I've never tried this, but it sounds like something worth exploring if someone has a lot of free time on their hands. Unfortunately, at the current time, I don't fall into that category. [grin]
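That "move up slowly, back off quickly" behavior is what networking folks call additive increase / multiplicative decrease (AIMD). Here is a toy simulation of it (my own sketch, nothing like the real kernel implementation): the sending window grows by one segment per roundtrip and is cut in half whenever a loss is detected.

```python
import random

def simulate_aimd(rounds: int = 30, loss_probability: float = 0.1) -> list[float]:
    """Toy AIMD simulation: grow the congestion window by one segment per
    roundtrip, and halve it whenever a packet loss is detected."""
    window = 1.0
    history = []
    for _ in range(rounds):
        history.append(window)
        if random.random() < loss_probability:
            window = max(1.0, window / 2)  # back off quickly on loss
        else:
            window += 1.0                  # move up slowly otherwise
    return history

if __name__ == "__main__":
    random.seed(2007)
    print([round(w, 1) for w in simulate_aimd()])
```

Run it a few times and you get the familiar sawtooth pattern: the window climbs steadily, then drops sharply at each loss. That is also exactly why a sender that never backs off (like raw UDP) can crowd TCP traffic off a shared link.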

Dr. Johari contends (and I think he is correct) that "unused capacity is bad but not as bad as losing packets" in the context of TCP connections. If you lose packets, you don't download your entire purchased iTunes song; the file is corrupt and not usable. In the context of UDP connections, however, dropped packets are less of a concern, since communication latency is the priority. Again, the tradeoff between these two uses of bandwidth is fundamental, and I think it is not understood by many people fervently advocating a position in debates over "the future of the Internet." Ignoring this fundamental, competing difference between online protocols is like ignoring the fact that people inhale and require oxygen to live while plants take in and require carbon dioxide. Both are part of the biological circle of life, and it would be ridiculous to ignore the need for one and just insist on the provision of the other. Both are needed. Yet if someone is yelling, "Can't we all just agree to breathe here?" they may be ignoring an important fact: not all organisms breathe the same thing. Not all Internet connections rely on the same protocol either, and at a fundamental level TCP and UDP connections compete with each other. Which connections should get priority? That answer depends on many factors. Ultimately, we need an architecture for the Internet which will support both, just as we'll continue to need both oxygen and carbon dioxide on our planet to survive.

Dr. Johari points out that a single webpage can open many different, independent TCP connections on the web, maybe even one hundred parallel connections for a single webpage. My use of the free NoScript plug-in for the Firefox web browser on Windows XP has in some ways highlighted, in a visual way, the different TCP connections opened by a single webpage. I found his statistic of 200 milliseconds as the human tolerance for latency in a voice conversation to be amazing. TCP is a poor protocol for voice communication over the Internet, since it fundamentally favors reliability over minimizing latency. That is why UDP is a preferred protocol for VOIP applications on the web.

How will the future architecture of the Internet deal with these "issues in the middle" when it comes to prioritizing packets and ensuring that network capacity cannot be overwhelmed by people "blasting away" with UDP connections or similar network packet requests? The current Internet architecture IS "dumb" and does not permit this type of packet prioritization. This is really the key element in many of the debates over network architecture: how can and should packet differentiation be enabled? As Dr. Johari states, as a consumer you absolutely do NOT want TCP and UDP connections left to fight over all the available bandwidth on shared access lines. If they do, and UDP network traffic skyrockets, the prospects for a usable TCP-based web are dim. No one wants that.

Dr. Johari discussed the “innovator’s dilemma” in this context, and I find it interesting that phrase is linked to “disruptive technology” in WikiPedia. The changes being contemplated and debated here in the context of Internet architecture are certainly disruptive, and the unintended consequences are likely impossible to forecast with complete certainty.

The discussion of the contracts which hold the long-haul Internet together, peering as well as transit fees, and overlay networks included in these lectures is also interesting and helpful, but I think I’ll wrap up this admittedly long post. I wanted to write these ideas down to document them for my own future reference, to further process them in my own mind for my own understanding, and to share them in the hope they may help others and spark some constructive discussion around these issues.

I’ll close with a final question Dr. Johari posed several times in his lectures which really struck a chord with me. “What is the opportunity cost of lost innovation?” This is a question that is relevant not only to discussions about network architectures, but also to education and education policy. What has been the opportunity cost of lost innovation we’ve suffered in the past decade as high-stakes, punitive testing has become the norm rather than an anomaly in our classrooms? While that may be impossible to quantify, it is not impossible to understand or imagine. Lost innovation SHOULD be a concern both in debates over Internet politics and economics as well as educational policy. I’m blown away by the level of innovation and creativity which has been enabled in the past ten years by the emergence of the “modern” Internet. I dearly hope that level of innovation and creativity can be sustained into the future. I also hope we can encourage it within the context of formal Schooling too. That is certainly a topic for another post, but is also an issue near and dear to my heart.

If you’ve stuck with me for this lengthy post, congratulations and thanks are in order! As Edward R. Murrow used to say, “Good night and good luck.” Those crafting the policies which will determine the future of the Internet will certainly need reasoned thinking, but a large helping of good luck probably wouldn’t hurt either.

Comments

2 responses to “Understanding Internet architecture, a need for smarter networks, TCP and UDP differences”

  1. Tim Tyson

    You got me thinking about the parallels between internet policy and educational policy. Wow! Food for thought.

    I posted some initial thoughts and reactions: http://drtimtyson.com/blog/archives/2007/09/i_share_these_same_concerns.html