[A2k] We need a new formulation of end-to-end analysis

Philippe Aigrain (perso Wanadoo) philippe.aigrain at wanadoo.fr
Thu Jan 24 07:38:11 PST 2013


End-to-end analysis is the major theorization of the Internet, proposed by
Jerome Saltzer, David Reed and David Clark starting in 1981. In their
seminal paper and later ones, they formulated what became known as the
end-to-end principle, often interpreted as “application-specific
functions ought to reside in the end hosts of a network rather than in
intermediary nodes – provided they can be implemented ‘completely and
correctly’ in the end hosts”. This principle is much quoted by proponents
of strong network neutrality requirements, including myself. In reality,
Saltzer, Reed and Clark derive this “networks had better be dumb, or at
least not too smart” approach from an underlying analysis of what happens
when bits travel from one end (a computer connected to a network) to
another end of the network.
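
To make the original principle concrete: the canonical illustration in
their paper is “careful file transfer”, where only a check performed at
the end hosts can establish that the data arrived completely and
correctly, whatever reliability the intermediary nodes may add. Here is a
minimal sketch of that idea in Python (the function names are mine, not
the paper's):

import hashlib

# Sending end: attach an end-to-end checksum to the payload.
def send(payload: bytes) -> tuple:
    return payload, hashlib.sha256(payload).hexdigest()

# Receiving end: accept the data only if the checksum matches.
# Whatever the intermediary nodes did, only this end-host check
# establishes that the transfer was complete and correct.
def receive(payload: bytes, checksum: str) -> bytes:
    if hashlib.sha256(payload).hexdigest() != checksum:
        raise ValueError("end-to-end check failed; retransmit")
    return payload

data, digest = send(b"careful file transfer")
assert receive(data, digest) == b"careful file transfer"

Any reliability added inside the network is then only an optimization,
never the guarantee itself.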

However, both network neutrality and the end-to-end principle capture
only part of what we try to make them say. What we have in mind is that
the analysis of what happens in a network should be conducted by
considering what happens between the human using one device and another
human using another device, or between one such human and a remote
device, such as a distant storage device, server or peer computer. We
need an end-to-end analysis that is understood as *human-to-human* or
*human-to-remote-computer*. What would this change? One must first
acknowledge that with this extended approach, one can't hope to extend
the probabilistic model that makes the original formulation of Saltzer,
Reed and Clark so compelling. The new formulation can't replace the old
one; it can only extend it with qualitative reasoning...

more at
http://paigrain.debatpublic.net/?p=6418&lang=en



