History


This simple core protocol has allowed immense innovation at the transport and application layers. Different application modules can utilize different transport protocols that match their needs, but all of them are built over IP, which has become the glue holding the Internet together. The success of the Internet is partially due to the simplicity of IP. Again, this validates the end-2-end argument.

By pushing applications to the user level with end-2-end applications, more experimentation is likely. There are several reasons for this. First, application-layer development is faster and less expensive than kernel work because kernel code tends to be complex and debugging is often difficult. Next, the pool of talent with the skills to do application-layer coding is greater. Finally, the group of programmers able to develop new services is much broader at the application level because it includes users, and as von Hippel [4] shows, users are sometimes best suited to solve their own problems.

Because end-2-end applications do not require network infrastructure change or permission to experiment, users can and do innovate new services. Consider the creation of the Web. Tim Berners-Lee [5] was not a network researcher searching for innovative ways to utilize the Internet; he was an administrator trying to better serve his users. He developed the World Wide Web to allow the scientists in his organization to share information across diverse computers and networks. It just so happened that his solution, the Web, met many other user needs far better than anything else at the time. This illustrates one powerful attribute of the end-2-end argument: you never know who will think of the next great idea, and with end-2-end services, it can be anybody.

David Reed asked how a data update protocol should behave over an imperfect network. His answer is one early and important step in the history of the end-2-end argument. In Chapter 2 of his thesis [6] about a two-phase-commit data update protocol, he elegantly argues that networks can lose data, can deliver data in a different order than it is sent, and can even duplicate data, and that a robust update protocol must work with all of these errors. He argues that, even if the network provides perfect data transfer, the two-phase-commit application must still perform these tasks, pointing out that faulty application software or bad hardware can cause all of the preceding symptoms. He concludes that, because the ends must perform these tasks, the network need not worry excessively about them. This is the essence of how the Internet’s network protocol IP works: it is unreliable, and data can be out of order or duplicated. Reed’s thesis explains why simple networks and complex applications are a good combination.
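
Reed’s point can be made concrete in a few lines of code. The following is a minimal sketch (in Python, and not taken from Reed’s thesis) of the receiving end of such a protocol: assuming each message carries an application-level sequence number, the endpoint itself copes with duplication, reordering, and gaps left by lost packets, no matter what the network promises.

    def deliver_in_order(packets):
        """Yield payloads in sequence order, dropping duplicates and
        buffering out-of-order arrivals until the gap fills."""
        expected = 0      # next sequence number the application needs
        buffered = {}     # out-of-order packets held back at the endpoint
        for seq, payload in packets:
            if seq < expected or seq in buffered:
                continue  # duplicate: the network (or a retransmit) repeated it
            buffered[seq] = payload
            while expected in buffered:   # release any contiguous run
                yield buffered.pop(expected)
                expected += 1

    # The network delivered packet 1 twice, packet 2 before packet 0, and
    # lost packet 3 entirely (a real sender would eventually retransmit it).
    arrivals = [(1, "b"), (2, "c"), (1, "b"), (0, "a"), (4, "e")]
    print(list(deliver_in_order(arrivals)))   # ['a', 'b', 'c']; "e" waits for 3

Whether or not the network also performs these checks, the endpoint must perform them anyway, which is why Reed concludes the network need not.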

Reed’s seminal argument is extended to explain why networks should provide only a simple, basic infrastructure in a paper [1] by Saltzer, Clark, and Reed. They argue that it is impossible to anticipate the advanced services new applications will need, so providing just basic services is best. They point out that trying to meet the needs of unknown applications will only constrain these applications later. The paper argues that the simplicity of basic network services creates flexibility in application development. Complex networks may allow easy development of certain types of applications (that is, those envisioned by the network designers) but hinder innovation of new applications that were beyond the grasp of these same network designers. The principle of simple networks was at odds with the belief of the telephone companies at the time, but fortunately it became a major influence in the design of the Internet.

David Isenberg is an interesting fellow. While working for a telephone company (AT&T) that was spending billions of dollars designing and implementing the next generation of intelligent networks, he was writing his classic article about the dawn of the stupid network [7]. He discusses problems of intelligent networks (such as the telephone network) and explains the advantages of networks with a stupid infrastructure, such as the Internet. He explains that some of the technical advantages of simple networks are these:

■■ They are inexpensive and easy to install because they are simple.

■■ There is abundant network infrastructure because it is inexpensive to build and maintain.

■■ They underspecify the network data, meaning the network neither knows nor cares what the data contains.

■■ They provide a universal way of dealing with underlying network details (as IP does).

Isenberg discusses how user control boosts innovation. One of the major contributions of this work is that it is the first to mention the value of stupid networks in the context of the user’s ability to perform experimentation at will and share the results with friends and colleagues. His work was well received by many, but not by his management, who believed that the stupid Internet could never satisfy the demanding needs of the business community. It turned out that management at AT&T was wrong, and David Isenberg was right. He is now a successful independent consultant, writer, and industry pundit.

Another important contribution to the end-2-end argument was made by the author in his thesis [8] and in work with Scott Bradner, Marco Iansiti, and H. T. Kung at Harvard University. It contributes several ideas to the end-2-end principle by linking market uncertainty to the value of end-2-end architecture. It expands on Isenberg’s ideas about the value of user innovation by explaining that allowing users to innovate creates value because there are many more users than service providers. Furthermore, because it often costs less for users to experiment, they may perform more experiments, thus increasing the expected value of the best of these experiments. The major contribution of this work is that it links the level of market uncertainty to the value of user experimentation and innovation. User innovation is of little value if market uncertainty is low because the service provider will create services that meet user needs as well as anybody, and it is likely that every proposed service will meet the needs of a certain market.

Furthermore, the big centralized managed service providers will use resources more efficiently. It is when market uncertainty is high that user innovation has the greatest value because the larger number of experiments increases this value. This illustrates the link between market uncertainty and the value of end-2-end architecture.

The creators of the Internet believed in the end-2-end argument, and the basic Internet network- and transport-layer protocols IP, TCP, and UDP are examples of its application. The network-layer protocol IP guarantees little; data can be lost, reordered, and repeated. IP, however, is very flexible and allows different types of transport-layer protocols, such as UDP, TCP, and now SCTP. These different transport protocols built on the simple IP layer give applications the flexibility they need by allowing them to choose a transport protocol suitable to their needs. The network should not decide the type of transport for the applications. Some applications, such as DNS, found that unreliable data service worked well; other applications, such as HTTP or FTP, need reliable data service. Different applications will demand different services, and the network should not constrain these choices. The end-2-end argument helped the designers of the Internet promote the development of applications by users because of the flexibility it gave developers.
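
The transport choice described above is visible in ordinary socket code. Below is a minimal sketch, assuming only Python’s standard socket library; the addresses and ports are placeholders for illustration. IP carries both exchanges identically; only the endpoint code differs.

    import socket

    # UDP (the choice DNS made): one datagram out, no connection, no delivery
    # guarantee. If the reply never comes, the application retries on its own.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.settimeout(2.0)
    udp.sendto(b"query", ("198.51.100.1", 53))  # placeholder resolver address

    # TCP (the choice HTTP and FTP made): the two end hosts, not the routers,
    # build a reliable, ordered byte stream over the same unreliable IP.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(200).decode(errors="replace"))
    tcp.close()

Nothing in the network had to change to support either choice, which is exactly the flexibility the end-2-end argument predicts.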