End-to-end principle

The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes such as gateways and routers, which exist to establish the network, may implement these features to improve efficiency but cannot guarantee end-to-end correctness.

The essence of what would later be called the end-to-end principle was contained in the work of Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s.[1] The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark.[2][a] The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation, and noteworthy formulations of the principle can also be found before the seminal 1981 Saltzer, Reed, and Clark paper.[3]

A basic premise of the principle is that the payoffs from adding certain features required by the end application to the communication subsystem quickly diminish. The end hosts have to implement these functions for correctness.[b] Implementing a specific function incurs some resource penalties regardless of whether the function is used or not, and implementing a specific function in the network adds these penalties to all clients, whether they need the function or not.

Concept

According to the end-to-end principle, the network is only responsible for providing the terminals with best-effort connections. Features such as reliability and security must be provided by mechanisms and protocols located at the terminals.

The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high-reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgments and retransmissions (referred to as PAR or ARQ).[c] Put differently, it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes,[d] especially when the latter are beyond the control of, and not accountable to, the former.[e] Positive end-to-end acknowledgments with infinite retries can obtain arbitrarily high reliability from any network with a higher than zero probability of successfully transmitting data from one end to another.[f]
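
A minimal sketch of such a positive acknowledgment and retransmission loop, run entirely at the end hosts, is shown below in Python. The lossy channel, the loss probability, and the helper names are illustrative assumptions rather than part of any cited protocol; the point is only that retransmitting until an acknowledgment returns yields arbitrarily high reliability over any channel that delivers with non-zero probability.

    import random

    LOSS_PROBABILITY = 0.3  # assumed per-attempt loss rate of the channel

    def lossy_channel(packet):
        # Deliver the packet or lose it, mimicking a best-effort network.
        return packet if random.random() > LOSS_PROBABILITY else None

    def send_reliably(data, max_retries=None):
        # Stop-and-wait PAR/ARQ: retransmit until the receiver's ACK comes back.
        # With unbounded retries, delivery succeeds with probability 1 as long
        # as each attempt gets through with non-zero probability.
        attempt = 0
        while max_retries is None or attempt < max_retries:
            attempt += 1
            delivered = lossy_channel(data)   # the data packet may be lost
            if delivered is None:
                continue                      # timeout: retransmit
            ack = lossy_channel("ACK")        # the acknowledgment may be lost too
            if ack == "ACK":
                return attempt                # end-to-end confirmation received
        raise RuntimeError("gave up after %d attempts" % attempt)

    print("delivered after", send_reliably(b"payload"), "attempt(s)")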

The end-to-end principle does not extend to functions beyond end-to-end error control and correction, and security. For example, no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. In a 2001 paper, Blumenthal and Clark note: "[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the endpoints; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place."[7]: 80 

The end-to-end principle is closely related, and sometimes seen as a direct precursor, to the principle of net neutrality.[8]

History

In the 1960s, Paul Baran and Donald Davies, in their pre-ARPANET elaborations of networking, made comments about reliability. Baran's 1964 paper states: "Reliability and raw error rates are secondary. The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist."[9]: 5  Going further, Davies captured the essence of the end-to-end principle; in his 1967 paper, he stated that users of the network will provide themselves with error control: "It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated."[10]: 2.3 

The ARPANET was the first large-scale general-purpose packet switching network – implementing several of the concepts previously articulated by Baran and Davies.[11][12]

Davies built a local-area network with a single packet switch and worked on the simulation of wide-area datagram networks.[13][14][15] Building on these ideas, and seeking to improve on the implementation in the ARPANET,[15] Louis Pouzin's CYCLADES network was the first to implement datagrams in a wide-area network and make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.[1] Concepts implemented in this network feature in TCP/IP architecture.[16]

Applications

ARPANET

The ARPANET demonstrated several important aspects of the end-to-end principle.

Packet switching pushes some logical functions toward the communication endpoints
If the basic premise of a distributed network is packet switching, then functions such as reordering and duplicate detection inevitably have to be implemented at the logical endpoints of such a network. Consequently, the ARPANET featured two distinct levels of functionality:
  1. a lower level concerned with transporting data packets between neighboring network nodes (called Interface Message Processors or IMPs), and
  2. a higher level concerned with various end-to-end aspects of the data transmission.[g]
Dave Clark, one of the authors of the end-to-end principle paper, concludes: "The discovery of packets is not a consequence of the end-to-end argument. It is the success of packets that make the end-to-end argument relevant."[19]: slide 31 
No arbitrarily reliable data transfer without end-to-end acknowledgment and retransmission mechanisms
The ARPANET was designed to provide reliable data transport between any two endpoints of the network – much like a simple I/O channel between a computer and a nearby peripheral device.[h] To remedy any potential failures of packet transmission, normal ARPANET messages were handed from one node to the next node with a positive acknowledgment and retransmission scheme; after a successful handover they were then discarded.[i] No source-to-destination retransmission in case of packet loss was catered for. However, in spite of significant efforts, perfect reliability as envisaged in the initial ARPANET specification turned out to be impossible to provide – a reality that became increasingly obvious once the ARPANET grew well beyond its initial four-node topology.[j] The ARPANET thus provided a strong case for the inherent limits of network-based hop-by-hop reliability mechanisms in pursuit of true end-to-end reliability.[k]
Trade-off between reliability, latency, and throughput
The pursuit of perfect reliability may hurt other relevant parameters of a data transmission – most importantly latency and throughput. This is particularly important for applications that value predictable throughput and low latency over reliability – the classic example being interactive real-time voice applications. This use case was catered for in the ARPANET by providing a raw message service that dispensed with various reliability measures so as to provide a faster, lower-latency data transmission service to the end hosts.[l]

TCP/IP

Internet Protocol (IP) is a connectionless datagram service with no delivery guarantees. On the Internet, IP is used for nearly all communications. End-to-end acknowledgment and retransmission is the responsibility of the connection-oriented Transmission Control Protocol (TCP) which sits on top of IP. The functional split between IP and TCP exemplifies the proper application of the end-to-end principle to transport protocol design.
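
The division of labour can be made concrete with a short Python sketch; the loopback address, the echo behaviour, and the port selection below are arbitrary choices made for the example. IP only moves best-effort datagrams, while the TCP implementations in the two communicating end hosts supply connection setup, ordering, acknowledgment, and retransmission.

    import socket
    import threading

    # A listening TCP socket on the loopback interface; port 0 lets the
    # operating system pick a free port for this example.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo_once():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo whatever arrives
        srv.close()

    threading.Thread(target=echo_once, daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))  # TCP connection between the end hosts
        cli.sendall(b"end-to-end")        # TCP, not IP, acknowledges and retransmits
        print(cli.recv(1024))             # b'end-to-end'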

File transfer

An example of the end-to-end principle is that of an arbitrarily reliable file transfer between two endpoints in a distributed network of a varying, nontrivial size:[3] The only way two endpoints can obtain a completely reliable transfer is by transmitting and acknowledging a checksum for the entire data stream; in such a setting, lesser checksum and acknowledgment (ACK/NACK) protocols are justified only for the purpose of optimizing performance – they are useful to the vast majority of clients, but are not enough to fulfill the reliability requirement of this particular application. A thorough checksum is hence best done at the endpoints, and the network maintains a relatively low level of complexity and reasonable performance for all clients.[3]
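
A minimal Python sketch of that end-to-end check is given below; read_source, transfer, and write_destination are hypothetical placeholders for an application's own I/O and transport routines, and SHA-256 stands in for whatever whole-stream checksum the application chooses. Per-hop safeguards inside the network may still help performance, but only this endpoint comparison establishes that the file arrived intact.

    import hashlib

    def checksum(data):
        return hashlib.sha256(data).hexdigest()

    def reliable_file_transfer(read_source, transfer, write_destination,
                               max_attempts=3):
        original = read_source()
        expected = checksum(original)          # computed at the sending end
        for _ in range(max_attempts):
            received = transfer(original)      # the network may lose or corrupt data
            if received is not None and checksum(received) == expected:
                write_destination(received)    # verified at the receiving end
                return True
        return False                           # the application decides what to do next

    # Toy usage over a perfect in-memory "network":
    print(reliable_file_transfer(lambda: b"file contents",
                                 lambda data: data,
                                 lambda data: None))  # True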

Limitations

The most important limitation of the end-to-end principle is that its basic premise, placing functions in the application endpoints rather than in the intermediary nodes, is not trivial to implement.

An example of the limitations of the end-to-end principle exists in mobile devices, for instance with mobile IPv6.[27] Pushing service-specific complexity to the endpoints can cause issues with mobile devices if the device has unreliable access to network channels.[28]

Further problems can be seen with a decrease in network transparency from the addition of network address translation (NAT), which IPv4 relies on to combat address exhaustion.[29] With the introduction of IPv6, users once again have unique identifiers, allowing for true end-to-end connectivity. Unique identifiers may be based on a physical address, or can be generated randomly by the host.[30]

The end-to-end principle advocates pushing coordination-related functionality ever higher, ultimately into the application layer. The premise is that application-level information enables flexible coordination between the application endpoints and yields better performance because the coordination would be exactly what is needed. This leads to the idea of modeling each application via its own application-specific protocol that supports the desired coordination between its endpoints while assuming only a simple lower-layer communication service. Broadly, this idea is known as application semantics.

Multiagent systems research offers approaches based on application semantics that make it possible to implement distributed applications conveniently without requiring message ordering or delivery guarantees from the underlying communication services. A basic idea in these approaches is to model the coordination between application endpoints via an information protocol[31] and then implement the endpoints (agents) based on the protocol. Information protocols can be enacted over lossy, unordered communication services. Middleware based on information protocols and the associated programming model abstracts away message receptions from the underlying network and enables endpoint programmers to focus on the business logic for sending messages.
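
A generic illustration of this style (not the BSPL notation itself, and with all names invented for the example) is sketched below in Python: the endpoint records information from whichever messages happen to arrive, so duplicate or reordered deliveries over a lossy transport leave its decisions unchanged.

    class OrderAgnosticEndpoint:
        # An agent endpoint whose state is keyed on message content rather than
        # arrival order, so it needs no ordering or delivery guarantees.
        def __init__(self):
            self.bindings = {}  # parameter name -> value

        def receive(self, message):
            for key, value in message.items():
                # First binding wins; replayed duplicates are simply ignored.
                self.bindings.setdefault(key, value)

        def can_act(self, required):
            # The business logic fires once the needed information has arrived,
            # regardless of which message carried it or in what order.
            return required <= self.bindings.keys()

    agent = OrderAgnosticEndpoint()
    agent.receive({"order_id": 7, "item": "widget"})
    agent.receive({"order_id": 7, "item": "widget"})  # duplicate delivery
    agent.receive({"price": 10})                      # out-of-order information
    print(agent.can_act({"order_id", "item", "price"}))  # True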

Notes

  1. ^ The 1981 paper[2] was published in ACM's TOCS in an updated version in 1984.[3][4]
  2. ^ The full quote from the Saltzer, Reed, Clark paper states:[3] "In a system that includes communications, one usually draws a modular boundary around the communication subsystem and defines a firm interface between it and the rest of the system. When doing so, it becomes apparent that there is a list of functions each of which might be implemented in any of several ways: by the communication subsystem, by its client, as a joint venture, or perhaps redundantly, each doing its own version. In reasoning about this choice, the requirements of the application provide the basis for the following class of arguments: The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the endpoints of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible, and moreover, produces a performance penalty for all clients of the communication system. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.) We call this line of reasoning against low-level function implementation the end-to-end argument." (p. 278).
  3. ^ In fact, even in local area networks there is a non-zero probability of communication failure – "attention to reliability at higher levels is required regardless of the control strategy of the network".[5]
  4. ^ Put in economics terms, the marginal cost of additional reliability in the network exceeds the marginal cost of obtaining the same additional reliability by measures in the end hosts. The economically efficient level of reliability improvement inside the network depends on the specific circumstances; however, it is certainly nowhere near zero:[3] "Clearly, some effort at the lower levels to improve network reliability can have a significant effect on application performance. (p. 281)."
  5. ^ The possibility of enforceable contractual remedies notwithstanding, it is impossible for any network in which intermediary resources are shared in a non-deterministic fashion to guarantee perfect reliability. At most, it may quote statistical performance averages.
  6. ^ More precisely:[6] "THM 1: A correctly functioning PAR protocol with infinite retry count never fails to deliver, loses, or duplicates messages. COR 1A: A correctly functioning PAR protocol with finite retry count never loses or duplicates messages, and the probability of failing to deliver a message can be made arbitrarily small by the sender." (p. 3).
  7. ^ In accordance with the ARPANET RFQ[17] (pp. 47 f.) the ARPANET conceptually separated certain functions. As BBN points out in a 1977 paper:[18] "[T]he ARPA Network implementation uses the technique of breaking messages into packets to minimize the delay seen for long transmissions over many hops. The ARPA Network implementation also allows several messages to be in transit simultaneously between a given pair of Hosts. However, the several messages and the packets within the messages may arrive at the destination IMP out of order, and in the event of a broken IMP or line, there may be duplicates. The task of the ARPA Network source-to-destination transmission procedure is to reorder packets and messages at their destination, to cull duplicates, and after all the packets of a message have arrived, pass the message on to the destination Host and return an end-to-end acknowledgment. (p. 284)."
  8. ^ This requirement was spelled out in the ARPANET RFQ, "From the point of view of the ARPA contractors as users of the network, the communication subnet is a self-contained facility whose software and hardware is maintained by the network contractor. In designing Interconnection Software we should only need to use the I/O conventions for moving data into and out of the subnet and not otherwise be involved in the details of subnet operation. Specifically, error checking, fault detection, message switching, fault recovery, line switching, carrier failures and carrier quality assessment, as required to guarantee reliable network performance, are the sole responsibility of the network contractor."[17]: 25 
  9. ^ Notes Walden in a 1972 paper, "Each IMP holds on to a packet until it gets a positive acknowledgment from the next IMP down the line that the packet has been properly received. If it gets the acknowledgment, all is well; the IMP knows that the next IMP now has responsibility for the packet and the transmitting IMP can discard its copy of the packet."[20]: 11 
  10. ^ By 1973, BBN acknowledged that the initial aim of perfect reliability inside the ARPANET was not achievable, "Initially, it was thought that the only components in the network design that were prone to errors were the communications circuits, and the modem interfaces in the IMPs are equipped with a CRC checksum to detect 'almost all' such errors. The rest of the system, including Host interfaces, IMP processors, memories, and interfaces, were all considered to be error-free. We have had to re-evaluate this position in the light of our experience."[21]: 1  In fact, as Metcalfe summarizes by 1973, "there have been enough bits in error in the ARPANET to fill this quota [one undetected transmission bit error per year] for centuries."[22]: 7–28  See also BBN Report 2816[23]: 10 ff  for additional elaboration about the experiences gained in the first years of operating the ARPANET.
  11. ^ Incidentally, the ARPANET also provides a good case for the trade-offs between the cost of end-to-end reliability mechanisms versus the benefits to be obtained thus. Note that true end-to-end reliability mechanisms would have been prohibitively costly at the time, given that the specification held that there could be up to 8 host-level messages in flight at the same time between two endpoints, each having a maximum of more than 8000 bits. The amount of memory that would have been required to keep copies of all those data for possible retransmission in case no acknowledgment came from the destination IMP was too expensive to be worthwhile. As for host-based end-to-end reliability mechanisms – those would have added considerable complexity to the common host-level protocol (Host-Host Protocol). While the desirability of host-host reliability mechanisms was articulated in RFC 1, after some discussion they were dispensed with (although higher-level protocols or applications were, of course, free to implement such mechanisms themselves). For an account of the debate at the time, see Bärwolff 2010,[24] pp. 56-58 and the notes therein, especially notes 151 and 163.
  12. ^ Early experiments with packet voice date back to 1971, and by 1972 more formal ARPA research on the subject commenced. As documented in RFC 660 (p. 2),[25] in 1974 BBN introduced the raw message service (Raw Message Interface, RMI) to the ARPANET, primarily in order to allow hosts to experiment with packet voice applications, but also acknowledging the use of such facility in view of possibly internetwork communication (cf. a BBN Report 2913[26] at pp. 55 f.). See also Bärwolff 2010,[24] pp. 80-84 and the copious notes therein.

References

  1. ^ a b Bennett, Richard (September 2009). "Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate" (PDF). Information Technology and Innovation Foundation. pp. 7, 11. Retrieved 11 September 2017.
  2. ^ a b Saltzer, J. H., D. P. Reed, and D. D. Clark (1981) "End-to-End Arguments in System Design". In: Proceedings of the Second International Conference on Distributed Computing Systems. Paris, France. April 8–10, 1981. IEEE Computer Society, pp. 509-512.
  3. ^ a b c d e f J. H. Saltzer; D. P. Reed; D. D. Clark (1 November 1984). "End-to-end arguments in system design" (PDF). ACM Transactions on Computer Systems. 2 (4): 277–288. doi:10.1145/357401.357402. ISSN 0734-2071. S2CID 215746877. Wikidata Q56503280. Retrieved 2022-04-05.
  4. ^ Saltzer, J. H. (1980). End-to-End Arguments in System Design. Request for Comments No. 185, MIT Laboratory for Computer Science, Computer Systems Research Division. (Online copy).
  5. ^ Clark, D. D., K. T. Pogran, and D. P. Reed (1978). “An Introduction to Local Area Networks”. In: Proceedings of the IEEE 66.11, pp. 1497–1517.
  6. ^ Sunshine, C. A. (1975). Issues in Communication Protocol Design – Formal Correctness. Draft. INWG Protocol Note 5. IFIP WG 6.1 (INWG). (Copy from CBI).
  7. ^ Blumenthal, M. S. and D. D. Clark (2001). "Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World". In: ACM Transactions on Internet Technology 1.1, pp. 70–109. (Online pre-publication version).
  8. ^ Alexis C. Madrigal & Adrienne LaFrance (25 Apr 2014). "Net Neutrality: A Guide to (and History of) a Contested Idea". The Atlantic. Retrieved 5 Jun 2014. This idea of net neutrality...[Lawrence Lessig] used to call the principle e2e, for end to end
  9. ^ Baran, P. (1964). "On Distributed Communications Networks". In: IEEE Transactions on Communications 12.1, pp. 1–9.
  10. ^ Davies, D. W., K. A. Bartlett, R. A. Scantlebury, and P. T. Wilkinson (1967). "A Digital Communication Network for Computers Giving Rapid Response at Remote Terminals". In: SOSP '67: Proceedings of the First ACM Symposium on Operating System Principles. Gatlinburg, TN. October 1–4, 1967. New York, NY: ACM, pp. 2.1–2.17.
  11. ^ "The real story of how the Internet became so vulnerable". Washington Post. Archived from the original on 2015-05-30. Retrieved 2020-02-18. Historians credit seminal insights to Welsh scientist Donald W. Davies and American engineer Paul Baran
  12. ^ A History of the ARPANET: The First Decade (PDF) (Report). Bolt, Beranek & Newman Inc. 1 April 1981. pp. 13, 53 of 183. Archived from the original on 1 December 2012. Aside from the technical problems of interconnecting computers with communications circuits, the notion of computer networks had been considered in a number of places from a theoretical point of view. Of particular note was work done by Paul Baran and others at the Rand Corporation in a study "On Distributed Communications" in the early 1960's. Also of note was work done by Donald Davies and others at the National Physical Laboratory in England in the mid-1960's. ... Another early major network development which affected development of the ARPANET was undertaken at the National Physical Laboratory in Middlesex, England, under the leadership of D. W. Davies.
  13. ^ C. Hempstead; W. Worthington (2005). Encyclopedia of 20th-Century Technology. Routledge. ISBN 9781135455514. Simulation work on packet networks was also undertaken by the NPL group.
  14. ^ Clarke, Peter (1982). Packet and circuit-switched data networks (PDF) (PhD thesis). Department of Electrical Engineering, Imperial College of Science and Technology, University of London. "As well as the packet switched network actually built at NPL for communication between their local computing facilities, some simulation experiments have been performed on larger networks. A summary of this work is reported in [69]. The work was carried out to investigate networks of a size capable of providing data communications facilities to most of the U.K. ... Experiments were then carried out using a method of flow control devised by Davies [70] called 'isarithmic' flow control. ... The simulation work carried out at NPL has, in many respects, been more realistic than most of the ARPA network theoretical studies."
  15. ^ a b Pelkey, James. "8.3 CYCLADES Network and Louis Pouzin 1971-1972". Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968-1988. Archived from the original on 2021-06-17. Retrieved 2021-11-21. Pouzin returned to his task of designing a simpler packet switching network than Arpanet. ... [Davies] had done some simulation of [wide-area] datagram networks, although he had not built any, and it looked technically viable.
  16. ^ "The internet's fifth man". Economist. 13 December 2013. Retrieved 11 September 2017. In the early 1970s Mr Pouzin created an innovative data network that linked locations in France, Italy and Britain. Its simplicity and efficiency pointed the way to a network that could connect not just dozens of machines, but millions of them. It captured the imagination of Dr Cerf and Dr Kahn, who included aspects of its design in the protocols that now power the internet.
  17. ^ a b Scheblik, T. J., D. B. Dawkins, and Advanced Research Projects Agency (1968). RFQ for ARPA Computer Network. Request for Quotations. Advanced Research Projects Agency (ARPA), Department of Defense (DoD). (Online copy Archived 2011-08-15 at the Wayback Machine).
  18. ^ McQuillan, J. M. and D. C. Walden (1977). "The ARPA Network Design Decisions". In: Computer Networks 1.5, pp. 243–289. (Online copy). Based on a Crowther et al. (1975) paper, which is based on BBN Report 2918, which in turn is an extract from BBN Report 2913, both from 1974.
  19. ^ Clark, D. D. (2007). Application Design and the End-to-End Arguments. MIT Communications Futures Program Bi-Annual Meeting. Philadelphia, PA. May 30–31, 2007. Presentation slides. (Online copy).
  20. ^ Walden, D. C. (1972). "The Interface Message Processor, Its Algorithms, and Their Implementation". In: AFCET Journées d’Études: Réseaux de Calculateurs (AFCET Workshop on Computer Networks). Paris, France. May 25–26, 1972. Association Française pour la Cybernétique Économique et Technique (AFCET). (Online copy).
  21. ^ McQuillan, J. M. (1973). Software Checksumming in the IMP and Network Reliability. RFC 528. Historic. NWG.
  22. ^ Metcalfe, R. M. (1973). "Packet Communication". PhD thesis. Cambridge, MA: Harvard University. Online copy (revised edition, published as MIT Laboratory for Computer Science Technical Report 114). Mostly written at MIT Project MAC and Xerox PARC.
  23. ^ Bolt, Beranek and Newman Inc. (1974). Interface Message Processors for the Arpa Computer Network. BBN Report 2816. Quarterly Technical Report No.5, 1 January 1974 to 31 March 1974. Bolt, Beranek and Newman Inc. (BBN). (Private copy, courtesy of BBN).
  24. ^ a b Bärwolff, M. (2010). "End-to-End Arguments in the Internet: Principles, Practices, and Theory". Self-published online and via Createspace/Amazon (PDF, errata, etc.)
  25. ^ Walden, D. C. (1974) Some Changes to the IMP and the IMP/Host Interface. RFC 660. Historic. NWG.
  26. ^ BBN (1974). Interface Message Processors for the Arpa Computer Network. BBN Report 2913. Quarterly Technical Report No. 7, 1 July 1974 to 30 September 1974. Bolt, Beranek and Newman Inc. (BBN).
  27. ^ J. Kempf; R. Austein (March 2004). The Rise of the Middle and the Future of End-to-End: Reflections on the Evolution of the Internet Architecture. Network Working Group, IETF. doi:10.17487/RFC3724. RFC 3724.
  28. ^ "CNF Protocol Architecture". Focus Projects. Winlab, Rutgers University. Archived from the original on June 23, 2016. Retrieved May 23, 2016.
  29. ^ Ward, Mark (2012-09-14). "Europe hits old internet address limits". BBC News. Retrieved 2017-02-28.
  30. ^ Steve Deering & Bob Hinden, Co-Chairs of the IETF's IP Next Generation Working Group (November 6, 1999). "Statement on IPv6 Address Privacy". Retrieved 2017-02-28.
  31. ^ "Information-Driven Interaction-Oriented Programming: BSPL, the Blindingly Simple Protocol Language" (PDF). Retrieved 24 April 2013.