High-Speed Networking Technology An Introductory Survey
June 1995
GG24-3816-02
International Technical Support Organization
High-Speed Networking Technology:
An Introductory Survey
June 1995
Take Note!
Before using this information and the product it supports, be sure to read the general information under
“Special Notices” on page xix.
Order publications through your IBM representative or the IBM branch office serving your locality. Publications
are not stocked at the address given below.
An ITSC Technical Bulletin Evaluation Form for readers′ feedback appears facing Chapter 1. If the form has been
removed, comments may be addressed to:
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1992, 1993, 1995. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is
subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Comments on the Third Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Comments on the Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Yet Another Revolution? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 User Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 The New Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 The Traditional Packet Data Network . . . . . . . . . . . . . . . . . . . . . . 3
1.4.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4.2 Internal Network Operation . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Is a Network Still Needed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Alternative Networking Technologies . . . . . . . . . . . . . . . . . . . . . . 7
1.7 Traditional End-User Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.8 Traditional End-User Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 9
6.10 A Theoretical View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.11 Summary of Packet Network Characteristics . . . . . . . . . . . . . . . . 145
6.12 High-Speed Packet and Cell Switching Architectures . . . . . . . . . . . 146
6.12.1 Traditional Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.12.2 Bus/Backplane Switches . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.12.3 Multistage Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
11.1.1 Concept of Frame Relay . . . . . . . . . . . . . . . . . . . . . . . . . 228
11.1.2 Basic Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
11.1.3 Frame Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
11.1.4 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
11.1.5 Characteristics of a Frame Relay Network . . . . . . . . . . . . . . 232
11.1.6 Comparison with X.25 . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
11.1.7 SNA Connections Using Frame Relay . . . . . . . . . . . . . . . . . 234
11.1.8 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
11.1.9 Frame Relay as an Internal Network Protocol . . . . . . . . . . . . 237
11.2 Packetized Automatic Routing Integrated System (Paris) . . . . . . . . 238
11.2.1 Node Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
11.2.2 Automatic Network Routing (ANR) . . . . . . . . . . . . . . . . . . . 240
11.2.3 Copy and Broadcast Functions . . . . . . . . . . . . . . . . . . . . . 242
11.2.4 Connection Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
11.2.5 Flow and Rate Control . . . . . . . . . . . . . . . . . . . . . . . . . . 244
11.2.6 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
11.2.7 Performance Characteristics . . . . . . . . . . . . . . . . . . . . . . . 246
15.2 Cyclic Reservation Multiple Access (CRMA) . . . . . . . . . . . . . . . . 351
15.2.1 Bus Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
15.2.2 Slotted Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
15.2.3 The Global Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
15.2.4 The RESERVE Command . . . . . . . . . . . . . . . . . . . . . . . . . 352
15.2.5 The Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
15.2.6 Operation of the Node . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
15.2.7 Limiting the Access Delay . . . . . . . . . . . . . . . . . . . . . . . . 354
15.2.8 Dual Bus Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 355
15.2.9 Priorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
15.2.10 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
15.3 CRMA-II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
15.3.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
15.3.2 Principles of CRMA-II . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
15.3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
General References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Digital Signaling Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Optical Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Radio LANs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
ISDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Sonet/SDH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Asynchronous Transfer Mode (ATM) . . . . . . . . . . . . . . . . . . . . . . 426
Token Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
FDDI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
DQDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
MetaRing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
CRMA and CRMA-II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Fast Packet Switching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Circuit Switched Systems (IBM ESCON) . . . . . . . . . . . . . . . . . . . . 427
Protocol Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Surveys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Figures
1. NRZ Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2. NRZI Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3. ISDN_BR Pseudoternary Coding Example . . . . . . . . . . . . . . . . . . 18
4. Operating Principle of a Continuous (Analog) PLL . . . . . . . . . . . . . 20
5. Clock Recovery in Primary Rate ISDN . . . . . . . . . . . . . . . . . . . . 21
6. Function of a Repeater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7. Manchester Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
8. Differential Manchester Coding Example . . . . . . . . . . . . . . . . . . . 24
9. Code Violation in AMI Coding . . . . . . . . . . . . . . . . . . . . . . . . . 26
10. Zeros Substitution in HDB3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
11. B8ZS Substitutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12. 2-Binary 1-Quaternary Code . . . . . . . . . . . . . . . . . . . . . . . . . . 35
13. MLT-3 Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
14. Frequency Spectra of Scrambled and Unscrambled Signals . . . . . . . 37
15. IBM Type 1 Shielded Twisted Pair . . . . . . . . . . . . . . . . . . . . . . . 39
16. A Square Pulse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
17. Composition of a Square Pulse as the Sum of Sinusoidal Waves . . . . 41
18. The Same Pulse as before including 50 (25 Non-Zero) Terms . . . . . . 42
19. The Effect of Phase Distortion on the Thirteen-Term Summation . . . . . 43
20. Distortion of a Digital Pulse by a Transmission Channel . . . . . . . . . . 44
21. Resistance Variation with Frequency on a Typical TTP Connection . . . 48
22. Concept of an Echo Canceller . . . . . . . . . . . . . . . . . . . . . . . . . 49
23. Hybrid Balanced Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
24. Overview of the ADSL System . . . . . . . . . . . . . . . . . . . . . . . . . 57
25. Frequency Spectrum Use of ADSL (Initial Version) . . . . . . . . . . . . . 57
26. Frequency Division Multiplexing in DMT . . . . . . . . . . . . . . . . . . . 59
27. DMT Transmitter (Schematic) . . . . . . . . . . . . . . . . . . . . . . . . . . 61
28. DMT Receiver Schematic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
29. Optical Transmission - Schematic . . . . . . . . . . . . . . . . . . . . . . . 65
30. Typical Fiber Infrared Absorption Spectrum . . . . . . . . . . . . . . . . . 72
31. Typical Fiber Infrared Absorption Spectrum . . . . . . . . . . . . . . . . . 73
32. Fiber Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
33. An Optical Transmission System Using Coherent Detection . . . . . . . 82
34. Erbium Doped Optical Fiber Amplifier . . . . . . . . . . . . . . . . . . . . . 86
35. Typical Fiber Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
36. Transmitting Video over a Fixed Rate Channel . . . . . . . . . . . . . . 107
37. Leaky Bucket Rate Control . . . . . . . . . . . . . . . . . . . . . . . . . . 118
38. A Cascade of Leaky Buckets . . . . . . . . . . . . . . . . . . . . . . . . . 119
39. Transporting Voice over a Packet Network . . . . . . . . . . . . . . . . . 122
40. Irregular Delivery of Voice Packets . . . . . . . . . . . . . . . . . . . . . 123
41. Assembly of Packets for Priority Discard Scheme . . . . . . . . . . . . 126
42. Effect of Packetization on Transit Time through a 3-Node Network . . 130
43. A Connection across a Connectionless Network . . . . . . . . . . . . . 134
44. Topology of a Multisegment LAN . . . . . . . . . . . . . . . . . . . . . . . 137
45. Routing Information of a Token-Ring Frame . . . . . . . . . . . . . . . . 138
46. Data Networking by Logical ID Swapping . . . . . . . . . . . . . . . . . . 139
47. Protocol Span of Packet Networking Techniques . . . . . . . . . . . . . 144
48. Two Approaches to Packet Switch Design . . . . . . . . . . . . . . . . . 146
49. Stage-by-Stage Serial Switch (24 Ports) . . . . . . . . . . . . . . . . . . 148
50. A Hierarchy of Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
51. Nested Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
162. Wavelength Selective Network - Concept . . . . . . . . . . . . . . . . . . 387
163. MuxMaster Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
164. MuxMaster Field Demonstration . . . . . . . . . . . . . . . . . . . . . . . 389
165. MuxMaster System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
166. RingMaster Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
167. Ring Wiring Concentrator - Logical View . . . . . . . . . . . . . . . . . . 393
168. The Concept of Frequency Division Multiplexing . . . . . . . . . . . . . 399
169. Time Division Multiplexing Principles . . . . . . . . . . . . . . . . . . . . 400
170. Sub-Multiplexing Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
171. Three Primary Modulation Techniques . . . . . . . . . . . . . . . . . . . 407
172. Transmission and Reception of a Modulated Signal . . . . . . . . . . . 407
173. Sidebands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
174. Schematic of a Single Server Queue . . . . . . . . . . . . . . . . . . . . 413
175. Behavior of a Hypothetical Single Server Queue . . . . . . . . . . . . . 414
This publication is intended to help both customers and IBM systems engineers
to understand the principles of high-speed communications. The information in
this publication is not intended as the specification of any programming interface
provided by any IBM product. See the publications section of the applicable IBM
Programming Announcement for the specific product for more information about
which publications are considered to be product documentation.
Information in this book was developed in conjunction with use of the equipment
specified, and is limited in application to those specific hardware and software
products and levels.
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to the IBM Director of
Licensing, IBM Corporation, 500 Columbus Avenue, Thornwood, NY 10594, USA.
The information contained in this document has not been submitted to any
formal IBM test and is distributed AS IS. The use of this information or the
implementation of any of these techniques is a customer responsibility and
depends on the customer′s ability to evaluate and integrate them into the
customer′s operational environment. While each item may have been reviewed
by IBM for accuracy in a specific situation, there is no guarantee that the same
or similar results will be obtained elsewhere. Customers attempting to adapt
these techniques to their own environments do so at their own risk.
The following document contains examples of data and reports used in daily
business operations. To illustrate them as completely as possible, the examples
contain the names of individuals, companies, brands, and products. All of these
names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
The following terms are trademarks of the IBM Corporation: AnyNet, APPN,
AS/400, ESCON, IBM, Micro Channel, NetView, Nways, PAL, Personal System/2,
PS/2, Quiet, RISC System/6000, RS/6000, Series/1, System/36, System/370, XT,
400, 9076 SP1.
Preface
Audience
This publication is primarily intended for people who have an interest in the
fields of data communications or voice networking. Such people will often be:
Technical planners in user organizations who wish to broaden their
understanding of high-speed communications and the direction product
development in the industry is taking.
IBM systems engineers evaluating the potential of different systems approaches
may find the information helpful in understanding the emerging high-speed
technologies.
Chapter 14, “Radio LAN Technology”
This chapter overviews the technology of indoor radio and its application to
wireless LAN systems.
Chapter 15, “The Frontiers of LAN Research”
There is an enormous amount of research going on around the world aimed at
inventing a LAN protocol that will be optimal in the speed range above about
one gigabit per second. Two IBM research project LAN prototypes (MetaRing
and CRMA) and a further proposal, which integrates the desirable features of
both (CRMA-II), are discussed.
Chapter 16, “Lightwave Networks”
Until very recently the use of fiber optics in communications has been limited to
replacing electronic links with optical ones. Network designs have remained the
same (albeit with some accommodation for the higher speeds involved).
Recently researchers have begun to report trials of a new kind of network that
uses passive optical components rather than electronic ones. An outline of this
technology is presented in this chapter.
Chapter 17, “LAN Hub Development”
An alternative to the very high-speed LAN is a local cell or packet switch. Many
people believe that this technology will make very high-speed LANs redundant.
This chapter overviews the issue of high-speed LANs versus LAN hubs.
Appendix A, “Review of Basic Principles”
This appendix is a review of the basic principles involved in multiplexing (or
sharing) a link or switching device. It is included as background for readers who
may be unfamiliar with this area.
Appendix B, “Transmitting Information by Modulating a Carrier”
This material provides some elementary background to the discussion on indoor
radio for readers who may not have studied the subject before.
Appendix C, “Queueing Theory”
Queueing theory is the basis of network design and also the basis for
understanding much of the discussion in this book. This appendix presents the
important results of queueing theory and discusses its impact on high-speed
protocols.
Appendix D, “Getting the Language into Synch”
One problem in developing a book such as this is the inconsistency in language
between different groups in the EDP and communications industries. This
appendix deals with words related to the word “synchronous” in order that the
reader may understand the usage in the body of the text.
The changes in this edition from the Second Edition are as follows:
As technology develops it is expected that this book will be updated and new
editions produced. Comments from readers are an enormous help. Please, if
you have any comment at all, send in the reader′s comment form.
The year and a half between editions saw an unprecedented acceleration in the
development of ATM. In 1991, ATM looked to be something that would be
available in about the year 2000. By 1993, ATM looked like it might become
available as early as 1995 (some early products were available already). This
prediction held true.
Acknowledgments
Peter Lenhard
IBM International Technical Support Organization
Raleigh Center
Since the inception of data communications in the late 1950s, techniques have
steadily improved, link speeds have progressively become faster, and equipment
cost has gradually declined. In general terms, the industry has improved in
price/performance by about 20% per annum, compounded.
But now the cost of long communication lines is set to fall not by a few
percent but by a factor of perhaps 100 or even 1000 within a very few years!
Propagation Delay
The speed of message propagation on a communication line, at about 80%
of the speed of light, has not changed.
Storage Effect
At very high speeds long communication lines store a large amount of
data. For example, a link from New York to Los Angeles involves a delay
of roughly 20 milliseconds. At the 100 Mbps speed of FDDI, this means
that two million bits are in transit at any instant in each direction, so
the link has approximately half a million bytes stored in transit.
This has a critical effect on the efficiency of most current link protocols.
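The "storage effect" is just the bandwidth-delay product: link speed multiplied by one-way propagation delay. A minimal Python sketch (illustrative only, using the figures quoted above) confirms the arithmetic:

```python
# The "storage effect" is the bandwidth-delay product: link speed
# multiplied by one-way propagation delay. Figures are those used
# in the text (100 Mbps FDDI, ~20 ms New York to Los Angeles).

def bits_in_flight(rate_bps: int, one_way_delay_ms: int) -> int:
    """Bits in transit in ONE direction at any instant."""
    return rate_bps * one_way_delay_ms // 1000

per_direction = bits_in_flight(100_000_000, 20)   # 2,000,000 bits
both_dirs_bytes = 2 * per_direction // 8          # 500,000 bytes
print(per_direction, both_dirs_bytes)
```

Any link protocol whose window is smaller than this product leaves the line partly idle, which is why the effect matters for efficiency.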
Computer Technology
Computers continue to become faster and lower in cost at an
approximate rate of 30% per annum compounded. However, this is not
as simple as it sounds - some things (for example the cost of main
storage) have reduced faster than others (for example the costs of power
supplies, screens, and keyboards).
Data links are now considerably faster than most computers that attach
to them. In the past, the communications line was the limiting factor and
most computer devices were easily able to load the link at perhaps 95%
of its capacity. Today, very few computer devices are capable of
sustaining a continuous transfer rate of anything like the speed of a fast
fiber link.
The “state of the art” in public network fiber facilities today is a
transmission rate of 2.4 Gbps, but rates of many times this are
functioning in laboratories.
1.4.1 Objectives
It is important to consider first the aims underlying the architecture of the
traditional packet network.
1. The most obvious objective is to save cost on expensive, low-speed
communication lines by statistically multiplexing many connections onto the
same line. This is really to say that money is saved on the lines by spending
money on networking equipment (nodes).
For example, in SNA there are extensive flow and congestion controls which
when combined with the use of priority mechanisms enable the operation of
links at utilizations above 90%. These controls have a significant cost in
hardware and software which is incurred in order to save the very high cost
of links.
As the cost of long-line bandwidth decreases, there is less and less to be
gained from optimizing this resource.
2. Provide a multiplexed interface to end-user equipment so that an end user
can have simultaneous connections with many different destinations over a
single physical interface to the network.
3. Provide multiple paths through the network to enable recovery should a
single link or node become unavailable.
within the network nodes, because there are very few network nodes
compared to the number of attaching devices, and the total system cost
to the end user was thereby minimized.
With recent advances in microprocessors and reductions in the cost of
slow-speed memories (DRAMs), the cost of operating a stabilizing
protocol within attaching equipment (or at the endpoints of the network)
has reduced considerably.
Packet Size
Blocks of user data offered for transmission vary widely in size. If these
blocks are broken up into many short “packets”, then the transit delay
for the whole block across the network will be considerably shorter. This
is because when a block is broken up into many short packets, each
packet can be processed by the network separately and the first few
packets of a block may be received at the destination before the last
packet is transmitted by the source.
Limiting all data traffic to a small maximum length also has the effect of
smoothing out queueing delays in intermediate nodes and thus providing
a much more even transit delay characteristic than is possible if blocks
are allowed to be any length. There are other benefits to short packets;
for example, it is easier to manage a pool of fixed length buffers in an
intermediate node if it is known that each packet will fit in just one buffer.
Furthermore, if packets are short and delivered at a constant, relatively
even rate, then the amount of storage needed in the node buffer pool is
minimized.
Also, on the relatively slow, high error rate analog links of the past, a
short packet size often resulted in the best data throughput, because
when a block was found to be in error then there was less data that had
to be retransmitted.
However, there is a big problem with short packet sizes. It is a
characteristic of the architecture of traditional packet switching nodes
that switching a packet takes a certain amount of time (or number of
instructions) regardless of the length of the packet! That is, a 1000-byte
block requires (almost) the same node resource to switch as does a
100-byte block. So if you break a 1000-byte packet up into ten 100-byte
packets, you multiply the load on an intermediate switching node by
10! This effect wasn′t too critical when nodes were very fast and links
were very slow. Today, when links are very fast and nodes are
(relatively) slow this characteristic is the most significant limitation on
network throughput.
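The trade-off described above can be sketched numerically. The model below is a deliberate simplification (store-and-forward hops only, no propagation or queueing delay; the 64 kbps link speed and 4-link path are illustrative assumptions, not from the text). It reproduces the pipelining gain of small packets while making the tenfold growth in switching load visible:

```python
import math

# Store-and-forward transit delay across a path of equal-speed links.
# Simplified model: the first packet is serialized onto every link in
# turn; the remaining packets pipeline behind it.

def transit_delay(block_bytes, packet_bytes, links, rate_bps):
    """Seconds for a whole block to cross `links` store-and-forward hops."""
    n_packets = math.ceil(block_bytes / packet_bytes)
    t_packet = packet_bytes * 8 / rate_bps        # serialization time
    return links * t_packet + (n_packets - 1) * t_packet

# 1000-byte block over 4 links at 64 kbps:
whole = transit_delay(1000, 1000, 4, 64_000)   # 0.5 s, sent as one block
split = transit_delay(1000, 100, 4, 64_000)    # ~0.16 s, sent as 10 packets
# Transit delay falls by about 3x, but every intermediate node now
# switches 10 packets instead of 1: the tenfold load the text describes.
```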
It should be pointed out that SNA networks do not break data up into
short blocks for internal transport. At the boundary of the network there
is a function (segmentation) which breaks long data blocks up into
“segments” suitable to fit in the short buffers available in some early
devices. In addition, there is a function (chaining) which enables the
user to break up very large blocks (say above 4000 bytes) into shorter
ones if needed, but in general, data blocks are sent within an SNA
network as single, whole, blocks.
Congestion Control
Every professional involved in data communication knows (or should
know) the mathematics of the single server queue. Whenever you have
a resource that is used for a variable length of time by many requesters
(more or less) at random, the service any particular requester receives
follows a highly predictable but often surprising pattern.
(This applies to people queueing for a supermarket checkout just as
much as to messages queueing for transmission on a link.)
As the utilization of the server gets higher the length of the queue
increases. If requests arrive at random, then at 70% utilization the
average queue length will be about 3. As utilization of the resource
approaches 100% then the length of the queue tends towards infinity!
Nodes, links and buffer pools within communication networks are
servers, and messages within the network are requesters.
The short result of the above is that, unless there is strict control of data
flow and congestion within the network, it will not operate reliably. (Since
a network with an average utilization of 10% may still have peaks where
utilization of some resources exceeds 90%, this applies to all traditional
networks.) In SNA there are extensive flow and congestion control
mechanisms. In an environment of very high bandwidth cost, this is
justified because these control mechanisms enable the use of much
higher resource utilizations.
When bandwidth cost becomes very low, then some people argue that
there will be no need for congestion control at all (for example if no
resource is ever utilized at above 30%, then it is hard to see the need for
expensive control mechanisms). It is the view of this author that
congestion and flow control mechanisms will still be needed in the very
fast network environment but that these protocols will be very different
from those in operation in today′s networks.
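The queue behavior sketched above is the classic single-server result: in the standard M/M/1 model (random arrivals, random service times; a textbook result, not a model of any specific network), the mean number of requests in the system is rho / (1 - rho), where rho is the server's utilization:

```python
# Mean number of requests at an M/M/1 single-server queue.
# rho is the server's utilization (fraction of time busy).

def mean_in_system(rho: float) -> float:
    """Mean requests in the system for an M/M/1 queue at utilization rho."""
    if not 0.0 <= rho < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return rho / (1.0 - rho)

for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
    print(f"utilization {rho:4.2f}: mean in system {mean_in_system(rho):7.2f}")
# At 70% utilization the mean is about 2.3 (roughly the "about 3"
# quoted in the text); as utilization approaches 100%, the queue
# length grows without bound.
```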
The network is still very necessary. The question is really whether “packet”
networks (or cell-based networks) are justified when “Time Division Multiplexing
(TDM)” networks may prove more cost-effective.
There is a question about who should own the network. A typical user may well
buy a virtual network of point-to-point links from the PTT (telephone company),
but the PTT will provide these links by deriving TDM channels from its own
wideband trunks. From the user's point of view there is no real network,
but from the PTT's viewpoint this is a very important case of networking.
2. Design the network such that all of the link control and switching
functions can be performed in hardware logic. Software-based
switching systems cannot match the speed of the new links.
There is an international standard for “frame relay” called CCITT I.122
(see 11.1, “Frame Relay” on page 227). A rather different system being
prototyped by IBM research is called “Paris” (see 11.2, “Packetized
Automatic Routing Integrated System (Paris)” on page 238).
Cell Relay
The big problem with supporting voice and video traffic within a data
network is that of providing a constant, regular, delivery rate. Packet
networks tend to take data delivered to them at an even rate and deliver
it to the other end of the network in bursts. It helps a lot in the network if
all data is sent in very short packets or “cells”. In addition the transit
delay for a block of data through a wide area network is significantly
shorter if it is broken up into many smaller blocks.
There is a standardized system for cell switching called “ATM”
(Asynchronous Transfer Mode) which provides for the transport of very
short (53-byte) cells through a Sonet (SDH) based network.
The IEEE 802.6 standard for metropolitan area subnetworks uses a
cell-based transfer mechanism.
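Two numbers fall directly out of ATM's 53-byte cell format (5-byte header plus 48-byte payload, per the ATM standard): the fixed header overhead, and the packetization delay needed to fill one cell from a 64 kbps voice stream. A quick sketch:

```python
# ATM cell arithmetic: header overhead and voice packetization delay.

CELL_BYTES = 53
HEADER_BYTES = 5
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES      # 48 bytes of payload

overhead = HEADER_BYTES / CELL_BYTES           # about 9.4% of every cell
fill_time_s = PAYLOAD_BYTES * 8 / 64_000       # about 6 ms to fill a cell
                                               # from a 64 kbps voice stream

print(f"overhead {overhead:.1%}, fill time {fill_time_s * 1000:.1f} ms")
```

The 6 ms fill time is one reason cells were made so short: a longer cell would add proportionally more delay to every voice sample it carries.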
LAN Bridges and Routers
Local area networks (LANs) are the most popular way of interconnecting
devices within an office or workgroup. Many organizations have large
numbers of LANs in geographically dispersed locations. The challenge
is to interconnect these LANs with each other and with large corporate
database servers.
LAN bridges and routers are a very popular technology for achieving this
interconnection. The problem is that it is very difficult to control
congestion in this environment. Also, neither of the popular LAN
architectures (Ethernet and token-ring) can provide sufficient regularity in
packet delivery to make them usable for voice or video traffic. FDDI-II
(Fiber Distributed Data Interface - 2) is a LAN architecture designed to
integrate voice with data traffic. See 13.6, “Fiber Distributed Data
Interface (FDDI)” on page 281.
A remote LAN bridge that has several links to other bridges (the multilink
bridge) is logically the same thing as a frame switch with a LAN gateway
attached. Most proposed frame switch architectures handle LAN
bridging as a part of the switch.
Metropolitan Area Networks
A MAN is just like a very large LAN (really a set of linked LANs) that
covers a city or a whole country. (There are many MAN trials going on
in various parts of the world but the first fully commercial service has
been introduced in Australia where a country-wide MAN called “Fastpac”
has “universal availability” over an area of three million square miles!)
MANs are different from LANs. In a LAN network, data from one user
passes along the cable and is accessible by other users on the LAN. In
a MAN environment this is unacceptable. The MAN bus or ring must
pass through PTT premises only. End users are attached on
point-to-point links to nodes on the MAN bus. Data not belonging to a
particular user is not permitted to pass through that user′s premises.
MAN networks as seen by the end user are thus not very different from
high-speed packet networks. However, LANs and MANs are
“connectionless” - the network as such does not know about the
existence of logical connections between end users and does not
administer them. Thus each packet or frame sent to the network carries
the full network address of its destination. Most packet networks (fast or
slow) recognize and use connections such as sessions or virtual circuits.
It should be noted that there is no necessary relationship (except
convenience) between the internal protocol of the MAN and the protocol
used on the access link to the end user. These will often be quite
different.
What really happened was that we decided to spend money on the device to save
money on communications. Today an optimal solution may well be very different
from traditional devices.
its screen keyboard - and with the same (effectively instantaneous) response
characteristics.
Chapter 2. A Review of Digital Transmission Technology
Over the last 20 years, the continued development of digital transmission over
copper wire and of fiber optics has provided a significant increase in
available bandwidth for communication. In the case of digital transmission,
existing wires can be used to carry vastly increased amounts of information for
little increase (often a decrease) in cost.
Among all that has been written about these technologies, the important facts to
be remembered are:
1. Any (standard) telephone twisted pair of copper wires can carry data at a
speed of 2 million bits per second (2 Mbps) in one direction. 4 Newer
techniques (see Chapter 3, “High Speed on the Subscriber Loop (HDSL and
ADSL)” on page 55) are able to extend this to an amazing 6 Mbps over
distances of up to three miles or 2 Mbps FDX over the same distance
(without using repeaters).
2. A standard telephone channel is 64 Kbps (64 thousand bits per second).
3. Two pairs of copper wires that currently carry only one call each now have
the ability to carry 30 calls. (There are methods of voice compression that
will double this at the least and potentially multiply it by 16, that is, 512
simultaneous voice calls on a single copper wire pair.)
4. A single optical fiber as currently being installed by the telephone companies
is run at 2.4 Gbps or 3.6 Gbps. At 2.4 Gbps, around 32,000 uncompressed
telephone calls can be simultaneously handled by a single fiber WITHOUT
the help of any of the available compression techniques. It is important to
note that a single fiber can only be used in one direction at a time, so that
two fibers are needed. In 1995 optical multiplexors are becoming available
which will handle 20 1-Gbps channels on a single fiber. Researchers tell us
that it would be possible, in 1995, to construct an optical multiplexor for 1000
2-Gbps channels using off-the-shelf components. However, this is not
attractive, since it costs less to put multiple fibers into a single cable. The
common optical cable being used between exchanges in the United States
has 24 fibers in the cable.
Many, if not most, EDP communications specialists were unprepared for these
new technologies. This comes about because of the development of data
communications using telephone channels. EDP specialists became accustomed
to the assumed characteristics of the telecommunications environment. It was
thought by many that these characteristics were “laws of nature” and would
remain true forever. It was quite a surprise to discover that far from being laws
of nature, the characteristics of telecommunications channels could benefit from
technological advance just as EDP systems could.
3 The use of a light beam to transmit information (speech) was first demonstrated by Alexander Graham Bell in the year 1880.
It took 100 years and the advent of glass fiber transmission for the idea to become practical.
4 The actual speed that can be achieved here is variable depending on such things as the length of wire and the environment in
which it is installed. Over very short distances (up to 45 meters), TTP (telephone twisted pair) can be used at 4 Mbps (it is
used in this way by the IBM Local Area Network). All numeric examples used in this book are intended only to illustrate
concepts and therefore must NOT be construed to be exact.
Then two things happened. The first was that data communication requirements
increased (users were finding applications for many more terminals) and
terminal designs became more sophisticated. This resulted in a requirement for
faster transmission. The second was that PTTs found themselves unable to
provide end-to-end copper wire as they had in the past, and telephone channels
had to be used. Most PTTs used interexchange “carrier systems” based on
“frequency division multiplexing” between exchanges. Data users were given
these channels and had to find ways of using them for data.
Modem technology developed very quickly indeed and today modems have
become sophisticated, complex, and expensive devices. Sending 9600 bps
through a telephone channel relies on many different techniques of “modulation”
(variation of the signal to encode information) and the achievement borders on
the theoretical limits of a telephone channel. In general, it is necessary to (very
carefully) “condition” 5 lines for this kind of modem and the line cannot actually
be “switched” through an exchange. The line is simply a sequence of wires and
frequency multiplexed channels from one end to the other. With a telephone
channel, 9,600 bps has been the limit. (Newer techniques can increase this to an
amazing 16,800 bps.) Wider channels and higher speeds were obtained by
combining several voice channels into one within the exchange such that the
new channel has a “wider” carrying capacity. This is quite expensive.
Many people assumed that because of the need to limit the “bandwidth” within
the telephone system, the ordinary copper twisted pairs themselves must have
played a part in this limitation. Expected problems included radio frequency
emissions and “print through” of signals from one wire onto adjacent wires in
the same cable by inductive coupling.
In fact, these, while always a consideration, were never the limiting factor. New
technology has enabled the sending of data in just the way that it was in the
early 1960s: a current in one direction for a one bit and in the opposite direction
for a zero bit. (More accurately, changes in amplitude and polarity of voltage
are used.) The techniques involved in digital baseband transmission are more
complex but the principle is the same, and there is no need for the complex and
expensive modem nor for “conditioning” the wires in the circuit.
5 Conditioning involves careful selection of appropriate wire pairs and inserting tuned inductors (coils) at intervals along the
circuit to mitigate the effect of capacitance between the conductors.
The methods of encoding the information in the baseband technique are often
grouped under the heading “Pulse Amplitude Modulation” (PAM). 7 When analog
information (for example voice or video) is converted to digital form for
transmission the most common technique is called “Pulse Code Modulation”
(PCM). PCM-coded voice is almost always transmitted on a wire (baseband)
using PAM.
From the perspective of the transmission system we may insist that the device
present a stream of bits for transmission. The problem, then, is to transmit that
bit stream unchanged from A to B.
This method of coding is used for short distance digital links such as between a
data terminal device and a modem (the RS-232 or V.24 interface uses NRZ
coding).
6 A brief introduction to the concepts involved is given in Appendix B, “Transmitting Information by Modulating a Carrier” on
page 407.
7 Strictly, the word “modulation” means imposing changes onto a “carrier” signal in order to transmit information. In baseband
transmission there is no carrier signal to be modulated; the digital bit stream is placed directly on the wire (or fiber).
Nevertheless, most authors use the term modulation to describe the coding of a baseband digital signal.
8 In fact, there is a simpler way of representing the information. A one bit might be the presence of a voltage and a zero bit the
absence of a voltage. (Early Morse code systems did use the absence of a voltage to delimit the start and end of each “dot”
or “dash”.) This technique is not used in modern digital signaling techniques because it is “unbalanced”. The need for direct
current (DC) balancing in digital codes is discussed later.
On the surface it looks simple. All the receiver has to do is look at its input
stream at the middle of every bit time and the state of the voltage on the line will
determine whether the bit is a zero or a one.
The receiver must be able to distinguish between bits even when the line state
hasn′t changed for many bit times.
With simple NRZ coding this is impossible, and something must be done to
the bit string to ensure that long strings of zeros or ones can′t occur.
This algorithm will obviously ensure that strings of zero bits do not cause a
problem. But what of strings of one bits? Strings of one bits are normally
prevented by insisting that the bit stream fed to the transmitter may not contain
long strings of one bits. This can be achieved in many ways:
• By using a “higher layer” protocol (such as SDLC/HDLC) that breaks up
strings of one bits for its own purposes. The HDLC family of protocols, for
example, inserts a zero bit unconditionally after every string of five
consecutive one bits (except in a delimiter or abort sequence).
• By using a code translation that represents (say) 4 data bits as 5 real bits.
Code combinations which would result in insufficient numbers of transitions
are not used. This is the system used in FDDI (see 13.6.5.3, “Data Encoding”
on page 290) for example. Code conversion from 4-bit “nibbles” to 5-bit
“symbols” is performed before NRZI conversion, so that the code table only
needs to guarantee enough one bits (each of which NRZI turns into a
transition).
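The HDLC stuffing rule in the first bullet can be sketched in a few lines. This is an illustrative Python fragment, not reference HDLC code; the function names are invented for the example.

```python
def stuff(bits):
    """HDLC-style zero-bit insertion: a 0 is inserted after every run of
    five consecutive 1 bits, so six 1s in a row never occur in data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

def unstuff(bits):
    """Inverse: drop the 0 that follows each run of five 1 bits."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                 # this bit is the inserted 0; discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out

# Seven consecutive 1 bits are broken up by an inserted 0.
assert stuff([1] * 7) == [1, 1, 1, 1, 1, 0, 1, 1]
```

Note that stuffing adds a data-dependent number of bits to the frame, which is the drawback footnote 15 mentions for TDM systems.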
It seems obvious, but it must not be forgotten, that when devices are connected
together using electric wire there is an electrical connection between them. If
that connection consists of direct connections of components to the wire there
are several potential problems.
• Both the signaling wires and the grounds (earths) must be electrically
connected to one another. If the devices are powered from different supplies
(or the powering is through a different route), 9 then a “ground loop” can be
set up.
Most people have observed a ground loop. In home stereo equipment using
a record turntable, if the turntable is earthed and if it is plugged into a
different power point from the amplifier to which it is connected, then you
often get a loud “hum” at the line frequency (50 or 60 cycles). The hum is
caused by a power flow through the earth path and the power supplies.
Ground loops can be a source of significant interference and can cause
overload on attaching components.
• If the connection is over a distance of a few kilometers or more (such as in
the “subscriber loop” connection of an end user to a telephone exchange) it
is not uncommon for there to be a difference of up to 3 volts in the earth
potential at the two locations. This can have all kinds of undesirable effects.
• It is not good safety practice to connect the internal circuitry of any device to
a transmission line. In the event of a power supply malfunction, it may be
possible to get the mains voltage on the line.
This does not help other equipment connected to the line and could leave a
lasting impression on any technician who happened to be working on the
line at the time.
So, it is normal practice to isolate the line from the equipment by using either a
capacitor or a transformer. There are other reasons for using reactive coupling:
• A transformer coupling matches the impedance of the transmission line to
the device and prevents reflection of echoes back down the line.
• The received pulses are reshaped and smoothed by the transformer in
preparation for detection.
• The transformer filters out many forms of line noise.
• Neither a capacitor nor a transformer will allow direct current (DC) to pass.
This means that the line is isolated from the user equipment at DC, and it
becomes possible to put an intentional direct current onto the line. For
example, in basic rate ISDN, a direct current is used on the same line as the
signal to provide power for simple receivers. Also, in token-ring systems, a
“phantom voltage” is generated in the attaching adapter and is used to
signal the wiring concentrator that this device should be connected to the
ring.
Transformer coupling is generally used in high-speed digital baseband
transmission systems. There are other advantages to transformer coupling:
• If the code is designed carefully, the transmission leads can be reversed
(swapped) at the connection to the device without affecting its ability to
correctly interpret the signal.
• Interference caused by “crosstalk” (the acquisition of a signal through
inductive and capacitive coupling from other wires in the same cable) usually
affects each signal wire equally. The fact that crosstalk signals tend to be
equal on both wires means that when they are put through the transformer
coupling they tend to cancel one another out.
The net is that crosstalk interference is greatly reduced.
9 More accurately, if there is any resistance in the connection between the grounds - there usually is.
Thus, any transmission code that can cause the line to spend more time in one
state than the other has a direct current (DC) component and is said to be
unbalanced. The presence of a DC component causes the transmission line and
the coupling device to distort the signal. This phenomenon is technically called
“baseline wander” and the effect is to increase the interference caused by one
pulse with a subsequent pulse on the line. (See “Intersymbol Interference” on
page 45.)
A DC balanced code causes the line to spend an equal amount of time in each
state. Thus on average there is no DC component and the above problems are
avoided. In addition DC balancing simplifies transmit level control, receiver
gain, and receiver equalization.
Both the NRZ and NRZI codes described above are unbalanced in this way and
so are unsuitable for high-speed digital transmission on electrical media (they
are fine on optical media). 10 So we need a different kind of code.
10 The suggestion of DC balancing on a fiber seems absurd. However, this is not always so. Optical transmissions are sent and
received electronically. It is often advantageous to have an AC coupled front-end stage to a high gain receiver. So some
systems do balance the number of ones and zeros sent to give a DC balance to the electrical coupling stage in the receiver.
Pulses representing zero bits must strictly alternate in state. Successive zero
pulses (regardless of how many one bits are in between) must be of opposite
states. Hence the coding is DC balanced. If they are the same state, then this is
regarded as a code violation.
The properties of this code are used in an elegant and innovative way by the
Basic Rate ISDN “S” interface. See the discussion in 8.1.3.3, “The ISDN
Basic Rate Passive Bus (“S” Interface)” on page 164.
The drawback of using this code compared to NRZI is that (other things being
equal) it requires 3 dB more transmitter power than NRZI. It also requires the
receiver to recognize three states rather than just a transition - this means a
slightly more complex receiver is required. Notice also that it is the one bit that
is represented as null voltage and the zero that is a voltage rather than the
opposite.
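A minimal sketch of this coding rule, assuming ±1 pulse levels (the actual line amplitudes are an implementation detail):

```python
def encode_zero_pulses(bits):
    """One bits are sent as null voltage; zero bits are sent as pulses
    whose polarity strictly alternates, giving DC balance."""
    out, last = [], -1           # assume the first zero pulse is positive
    for b in bits:
        if b == 1:
            out.append(0)
        else:
            last = -last         # successive zero pulses alternate
            out.append(last)
    return out

line = encode_zero_pulses([0, 1, 0, 0, 1, 0])
assert line == [1, 0, -1, 1, 0, -1]
assert sum(line) == 0            # balanced: no net DC component
```

Two successive zero pulses of the same polarity would be the code violation mentioned above, which the ISDN “S” interface exploits deliberately.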
11 In the current IBM token-ring adapter this is actually two and a half bits of delay (five baud times). However, in principle it is
possible to cut this down to one and a half bits.
The concept is very simple. The VCO is a Voltage Controlled Oscillator and is
the key to the operation.
• The VCO is designed to produce a clock frequency close to the frequency
being received.
• Output of the VCO is fed to a comparison device (here called a phase
detector) which matches the input signal to the VCO output.
• The phase detector produces a voltage output which represents the phase
difference between the input signal and the VCO output.
(In principle, this device is a lot like the tuner on an AM radio.)
• The voltage output is then used to control (change) the frequency of the VCO.
Properly designed, the output signal will be very close indeed to the timing and
phase of the input signal. There are two (almost conflicting) uses for the PLL
output:
1. Recovering the bit stream (that is, providing the necessary timing to
determine where one bit starts and another one ends).
2. Recovering the (average) timing (that is, providing a stable timing source at
exactly the same rate as the timing of the input bit stream).
Many bit streams have a nearly exact overall timing but have slight variations
between the timings of individual bits.
The net of the above is that quite often we need two PLLs: one to recover the bit
stream and the other to recover a precise clock (illustrated in Figure 5 on
page 21). This is the case in most primary rate ISDN chip sets.
Figure 5. Clock Recovery in Primary Rate ISDN. Two PLLs are used because of the
different requirements of bit stream recovery and clock recovery.
PLL design is extremely complex and regarded by many digital design engineers
as something akin to black magic. It seems ironic that the heart of a modern
digital communications system should be an analog device.
PLL quality is extremely important to the correct operation of many (if not most)
modern digital systems. The basic rate ISDN “passive bus” and the token-ring
LAN are prime examples.
2.1.6.2 Jitter
Jitter is the generic term given to the difference between the (notional) “correct”
timing of a received bit and the timing as detected by the PLL. It is impossible
for this timing to be exact because of the nature of the operation being
performed. Some bits will be detected slightly early and others slightly late.
This means that the detected timing will vary more or less randomly by a small
amount either side of the correct timing - hence the name “jitter”. It doesn′t
matter if all bits are detected early (or late) provided it is by the same amount -
delay is not jitter. Jitter is a random variation in the timing either side of what is
correct.
Jitter is minimized if both the received signal and the PLL are of high quality.
But although you can minimize jitter, you can never quite get rid of it altogether.
Jitter can have many sources, such as distortion in the transmission channel or
just the method of operation of a digital PLL. Sometimes these small differences
do not make any kind of difference. In other cases, such as in the IBM
Token-Ring, jitter accumulates from one station to another and ultimately can
result in the loss or corruption of data. It is jitter accumulation that restricts the
maximum number of devices on a token-ring to 260.
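The distinction between delay and jitter can be made concrete: subtract the average offset (the delay) from the edge-timing errors and measure what remains. An illustrative sketch, not a standards-defined jitter measurement:

```python
def delay_and_jitter(ideal, measured):
    """Split edge-timing error into a constant delay and peak-to-peak
    jitter (the random variation left once the delay is removed)."""
    errors = [m - i for i, m in zip(ideal, measured)]
    delay = sum(errors) / len(errors)        # the constant part
    residual = [e - delay for e in errors]   # what remains is jitter
    return delay, max(residual) - min(residual)

# Every edge late by exactly 0.5 bit times: pure delay, zero jitter.
d, j = delay_and_jitter([0, 1, 2, 3], [0.5, 1.5, 2.5, 3.5])
assert abs(d - 0.5) < 1e-9 and abs(j) < 1e-9
```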
The signal can be boosted by simply amplifying it. This makes the signal
stronger, but (as is shown in the first example of Figure 6) it amplifies a
distorted signal. As the signal progresses further down the cable, it becomes
more distorted and all the distortions add up and are included in the received
signal at the other end of the cable. Analog transmission systems must not
allow the signal to become too weak anywhere along their path. This is because
we need to keep a good ratio of the signal strength to noise on the circuit.
Some noise sources such as crosstalk and impulse noise have the same real
level regardless of the signal strength. If we let the signal strength drop too
much the effect of noise increases.
In the digital world things are different. A signal is received and (provided it can
be understood at all) it is reconstructed in the repeater. A new signal is passed
on (as is shown in the second example of Figure 6) which is completely free
from any distortion that was present when the signal was received at the
repeater.
The result of this is that repeaters can be placed at intervals such that the signal
is still understandable but can be considerably weaker than would be needed
were the signal to be amplified. This means that repeaters can be spaced
further apart than can amplifiers and also that the signal received at the far end
of the cable is an exact copy of what was transmitted with no errors or distortion
at all.
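The contrast between amplification and regeneration can be illustrated with a toy model (the threshold and signal values below are arbitrary):

```python
def amplify(signal, gain=2.0):
    """Analog repeater: boosts the signal and its accumulated noise."""
    return [gain * s for s in signal]

def regenerate(signal, threshold=0.5):
    """Digital repeater: decide each bit, retransmit a clean pulse."""
    return [1.0 if s > threshold else 0.0 for s in signal]

clean = [1.0, 0.0, 1.0, 1.0, 0.0]
noisy = [0.8, 0.2, 0.7, 0.9, 0.3]     # attenuated, noisy copy of clean
assert regenerate(noisy) == clean      # distortion removed completely
assert amplify(noisy) != clean         # distortion amplified instead
```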
If we want to transmit two bits which are the same in succession (such as 11 or
00), then we have to make another transition on the link at the beginning of the
bit time to prepare ourselves to make the transition in the right direction. That
is, if we signal a one by going from a line state of zero to a state of one, then
before we can signal the next one bit we need to get the line back to the state of
zero.
This is shown in Figure 7.
Figure 7. Manchester Encoding. A zero bit is signaled by a transition from a line state of
one to a line state of zero in the middle of the bit time. A one bit is signaled by a
transition from the zero state to the one state. There may or may not be a transition at
the beginning of the bit time. The absence of a transition at the middle of the bit time
constitutes a “code violation”.
Because it is the change in signal level that is used to communicate the value of
a bit rather than the amplitude of the signal, the signal level itself is relatively
unimportant. In principle, Manchester Code could work over a significant DC
signal. For example if the signal varied between +2 and +4 volts, then this
would be just as good as a signal that varied between -1 and +1 volts. The
receiver only has to detect the signal′s direction of change rather than the signal
level itself.
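Following the rules of Figure 7 (using 0 and 1 for the two line states), Manchester encoding is easy to express; this sketch emits two array entries per bit, one per half-bit time:

```python
def manchester(bits):
    """A one bit becomes low-then-high half-bits; a zero bit becomes
    high-then-low. Every bit therefore has a mid-bit transition."""
    out = []
    for b in bits:
        out += [0, 1] if b == 1 else [1, 0]
    return out

signal = manchester([1, 0, 0, 1])
assert signal == [0, 1, 1, 0, 1, 0, 0, 1]
# DC balanced by construction: equal time in each line state.
assert signal.count(0) == signal.count(1)
```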
As noted above, Manchester Code is used in Ethernet LANs. In that context the
primary purpose is to allow easy detection of collisions. When two Manchester
Coded signals collide on a LAN, then the value of the resulting signal is the OR
of the input signals (not the arithmetic sum as one might expect). This is not
clean and neat because the two colliding signals will not be synchronized with
one another in any way. The resulting signal will almost always not be DC
balanced (the probability of a DC balanced result is very low indeed). Thus
when two signals collide you get a momentary loss of DC balance on the LAN
cable. It is this transient DC component that signals the sending workstation that
there has been a collision. This is a particularly elegant, simple, and very
effective way of detecting collisions. However it means that Ethernet is very
sensitive to low frequency interference on the LAN cable. A low frequency
signal (such as might be induced in a UTP cable running near a power cable)
will be seen by Ethernet equipment as a succession of collisions.
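The collision mechanism can be demonstrated numerically: two individually DC-balanced Manchester streams are ORed together and the result loses its DC balance. The bit patterns below are arbitrary examples.

```python
def manchester(bits):
    """Manchester encode: 1 -> [0, 1], 0 -> [1, 0] per half-bit time."""
    out = []
    for b in bits:
        out += [0, 1] if b == 1 else [1, 0]
    return out

a = manchester([1, 0, 1, 0])               # balanced: 4 ones, 4 zeros
b = manchester([0, 1, 1, 0])               # balanced: 4 ones, 4 zeros
assert a.count(1) == 4 and b.count(1) == 4

collided = [x | y for x, y in zip(a, b)]   # the bus wire ORs the signals
assert collided.count(1) == 6              # 6 ones, 2 zeros: unbalanced
```

The transient DC component produced by this imbalance is what the transmitting station detects as a collision.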
One practical problem of Manchester Code is that it is polarity sensitive. That is,
if the (two) wires carrying the signal are swapped for any reason, then a one bit
will be received as a zero bit and vice versa.
The most significant factor in the choice of this code for the IBM Token-Ring LAN
was the desire to minimize the “ring latency” (the time it takes for something to
travel around the ring). The mechanism used to achieve this is to minimize
necessary buffering (delay) in each attaching ring station. (Current IBM
token-ring adapters contain 1½ bits of delay.) To do this you need to eliminate
the “elastic buffer” which would be required if each station had an independent
clock. If we want to eliminate this elastic buffer and its concomitant delay, then
each station′s transmitter must run at exactly the same rate as the rate at which
data is being received. The problem here is that tiny variations introduced into
the timing by each station “add up” as the signal passes around the ring through
multiple stations. This means that each station must be able to derive a highly
accurate timing source from its received data stream. To enable the receiver′s
PLL 13 to derive the most accurate clock possible you need a code with many
transitions in it. Hence the choice of Differential Manchester Code.
The next significant desire was to minimize the cost of each attaching adapter,
although function should not be sacrificed purely for cost saving. (It was felt that
as chip technology improved over the years the initial high cost of TRN chip sets
would reduce significantly - and this has indeed happened.)
The desired speed range of around 10 Mbps (in 1981) dictated the use of
shielded cable. The shielded twisted pair that was decided upon for
transmission can easily handle signals of several hundred Mbps, so there was
no advantage to be gained from limiting the baud rate (signaling frequency).
12 Differential Manchester coding is a member of a class of codes called “biphase modulated codes”.
13 See 2.1.6.1, “Phase Locked Loops (PLLs)” on page 20.
The baud rate is twice the bit rate. On a 4 Mbps token-ring the “baud rate” (that
is, the number of state changes on the line) is 8 megabaud. A 16 Mbps
token-ring runs at 32 megabaud. Some people consider this a
waste of bandwidth, since there are codes in use that allow up to 6 bits for each
state change. 14 FDDI, for example, uses a 4B/5B code that allows 100 Mbps to be
sent on the line as 125 megabaud.
14 In an analog carrier system where available bandwidth is severely restricted, then perhaps the word “waste” would be
justified. In a baseband LAN environment where the signal is the only one on the wire, this is difficult to call waste, since
were it not used for this purpose, the additional capability of the wire would be unused.
There are many suggested codes. BkZS, HDBk and kBnT are the names of
families of codes (k=3, 4, 5...; n=2, 3...) which are of great academic interest,
but two members of these families are very important commercially: HDB3 and
B8ZS, which are used by primary rate ISDN.
15 Bit stuffing has another much worse effect: it adds a variable number of bits to the frame, destroying the strict frame
synchronization required for TDM systems.
2.1.10 High Density Bipolar Three Zeros (HDB3) Coding
This code is used for 2 Mbps “E1” transmission outside the US and is defined by
the CCITT recommendation G.703. It is the code used in Primary Rate ISDN at
the 2 Mbps access speed.
The basic concept is to replace strings of four zero bits 16 with a string of three
zeros and a code violation. 17
0000 is replaced by 000V (V = a code violation)
But the code violation destroys the very thing that we are using the AMI code for
- its DC balanced nature. Using the above rule, a long string of zeros would be
replaced with:
000V000V000V...
Each code violation pulse would have to have the same polarity; otherwise, it
would be a one bit - not a code violation. Even short strings of zeros within
arbitrary data would cause an unbalance as the polarity of the preceding one bit
would not be predictable or controllable.
The rule is simple: if the polarity of the last violation is the same as that of the
most recent one bit, then send the string B00V (that is, invert the polarity of the
violation); if the polarities are different, send 000V. This rule is summarized as
follows:
                              Polarity of last code violation
                                 +           -
   Polarity of preceding   +    B-00V-      000V+
   one bit                 -    000V-       B+00V+
The result can be seen in Figure 10 on page 28. Reading the bit stream from
left to right, the first pattern is B-00V- (indicating that the previous violation must
have been positive). The next violation is 000V+ and is of opposite polarity.
16 The size of the bit string replaced, in this case 4 bits, is the “k” in the name of the code plus one. Thus in HDB3, k=3 and the
number of bits replaced is k+1=4.
17 The + or - suffixes attached to the B and V notations in the figure denote the polarity of the pulse.
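The substitution rule above can be sketched as follows. The handling of the very first substitution (before any violation exists) is an assumption in this sketch; G.703-conformant encoders use an equivalent rule based on the parity of the pulse count since the last substitution.

```python
def hdb3(bits):
    """Sketch of HDB3: AMI coding with each run of four zeros replaced
    by 000V or B00V so that successive violations alternate polarity."""
    out, i = [], 0
    last_pulse = -1       # polarity of the most recent pulse on the line
    last_v = 1            # assumed polarity of the 'previous' violation
    while i < len(bits):
        if bits[i:i + 4] == [0, 0, 0, 0]:
            if last_v == last_pulse:
                b = -last_pulse          # B00V: balancing pulse, then a
                out += [b, 0, 0, b]      # violation of the same polarity
                last_pulse = last_v = b
            else:
                out += [0, 0, 0, last_pulse]  # 000V: violation repeats
                last_v = last_pulse           # the last pulse's polarity
            i += 4
        else:
            if bits[i] == 1:
                last_pulse = -last_pulse      # normal AMI alternation
                out.append(last_pulse)
            else:
                out.append(0)
            i += 1
    return out

line = hdb3([1, 0, 0, 0, 0, 0, 0, 0, 0, 1])
assert sum(line) == 0                 # DC balanced
assert line[4] == -line[8]            # successive violations alternate
```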
Because the substitution is of eight zero bits, there is room in the string for more
than one code violation and a couple of valid bipolar pulses thrown in. The
substituted string is <000VB0VB>. As is illustrated in Figure 11 on page 29,
there are two strings. Selection of one to be used depends on the polarity of the
preceding one bit (you have to create a violation).
Note that because both strings are themselves DC balanced, multiple repetitions
of the same string are possible.
Figure 11. B8ZS Substitutions. Either of two alternative strings is used depending on
the polarity of the preceding one bit.
One such environment is the “U” interface of basic rate ISDN. As discussed in
8.1.3, “ISDN Basic Rate” on page 161, this interface is not internationally
standardized. Different countries adopt different techniques. 4B3T code is used
in Germany.
The problem here is full-duplex transmission over a two-wire line. (A very good
trick by any test!) The challenge is to get high quality transmission between the
PTT exchange and the end user at very low cost.
Four binary bits (as a group) are transmitted as three ternary states (also as a
group); thus, there is a 25% reduction in the baud rate of the line compared with
AMI codes. This helps noise immunity, but at the cost of complicating the
receiver. In the ISDN_BR environment this means that 160 Kbps is transmitted
as 120 kbaud.
Notice the two modes. Ternary strings that are DC balanced in themselves
(such as the +-0 string) represent the same binary block in either mode.
Ternary blocks that are not DC balanced have two representations (one with
positive DC bias and its inverse with negative bias). The amount of bias is noted
in the table.
What happens is that the sender keeps track of the RDS (Running Digital Sum) of
the block being transmitted. When the sum is positive, the next code word sent
is from the Mode 2 column; if the sum is negative, the next code word sent is
from the Mode 1 column. The receiver doesn′t care about digital sums; it just
takes the next group of three states and translates it into the appropriate four
bits. The combination of 000 is not used because a string of consecutive groups
would not contain any transitions (the strings of --- and +++ are possible
because they alternate with each other if the combination 01110111 occurs in the
data).
Table 2. Examples of 8B6T Encodings
00 +-00+-
... ...
0A 1+0+-0
0B +0-+-0
... ...
46 +0+-00
47 0++-00
... ...
90 +-+--+
91 ++--+-
... ...
SOSA +-+-+-
SOSB -+-+-+
... ...
EOP1 ++++++
... ...
EOP5 --0000
... ...
bad-code ---+++
zero-code 000000
Each 4 data bits are encoded as a 5-bit group for transmission or reception. This
means that the 100 Mbps data rate is actually 125 Mbps when observed on the
physical link. 19 Table 3 shows the coding used. This provides:
• Simplification of timing recovery by providing a guaranteed rate of transitions
in the data. Only code combinations with at least two transitions per group
are valid.
• Transparency and framing. Additional unique code combinations (5-bit
codes that do not correspond to any data group) are available. These are
used to provide transparency by signaling the beginning and end of a block
and to provide an easy way of determining byte alignment.
• Circuitry simplification. One of the principal reasons for using a block code
is to save cost in circuitry. If bits are processed at the line rate, then our
circuits must be able to operate at that rate. High-speed logic is significantly
more expensive than slower-speed logic. A block code allows us to
minimize the amount of serial processing required - after a block (byte or
half-byte) is received in serial it is then processed as a single parallel group.
For example in FDDI, bits are sent/received at 125 Mbaud, formed into 5-bit
groups, translated into 4-bit groups and then processed as “nibbles” (half
bytes) at 25 MHz. In the early days of FDDI, this meant that you could build
the serial part in expensive bipolar chip technology but do most of the real
processing in the much lower cost CMOS technology. (Now, you can do both
in CMOS but the principle remains.)
19 It is incorrect to say that the link is operating at 125 Mbps because the rate of actual bit transport is 100 Mbps. The correct
term for the line rate is “baud”. A baud is a change in the line state. You could say (with equal truth) that the line is
operating at 125 Mbaud or at 25 Mbaud (you could consider each 5-bit group to be a “symbol”).
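Table 3 is not reproduced in this excerpt, but the data-symbol half of the FDDI 4B/5B table is standardized; the sketch below uses it to show the 4-bits-in, 5-baud-out translation (the table should be treated as quoted from the FDDI standard rather than from this book):

```python
# Data symbols of the standard FDDI 4B/5B code (nibble values 0-15).
FDDI_4B5B = ["11110", "01001", "10100", "10101", "01010", "01011",
             "01110", "01111", "10010", "10011", "10110", "10111",
             "11010", "11011", "11100", "11101"]

def encode_4b5b(nibbles):
    """Each 4-bit nibble costs 5 bits on the line: 100 Mbps of data
    becomes 125 Mbaud of signal."""
    return "".join(FDDI_4B5B[n] for n in nibbles)

assert encode_4b5b([0x0, 0xF]) == "1111011101"
assert len(set(FDDI_4B5B)) == 16          # all code groups distinct
# Every group carries at least two 1 bits, so NRZI conversion is
# guaranteed at least two transitions per symbol.
assert all(g.count("1") >= 2 for g in FDDI_4B5B)
```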
Table 4. Partial State Machine for 5B/6B Encoding Used with 100VG-AnyLAN.
Five-bit groups are sent as 6-bit combinations. There are two output tables and two
states to allow for DC balancing.
Input      State 1                      State 2
Quintet    Output Sextet   Next State   Output Sextet   Next State
00000 001100 2 110011 1
00011 001101 1 001101 2
00111 001011 1 001011 2
01100 101000 2 010111 1
10101 011000 2 100111 1
11101 010011 1 010011 2
11110 010010 2 101101 1
Table 4 shows a few of the code combinations for the 5B/6B code as it is used in
100VG-AnyLAN. This is the transmit state machine.
• There are two states. Operation commences in State 1.
• In State 1 the output table contains sextets of either weight 2 (sextet contains
2 1′s and 4 0′s) or weight 3 (3 1′s and 3 0′s).
• In State 2 the output table contains sextets of weight 3 or weight 4 (4 1′s and
2 0′s).
• The coding scheme sends codes from either table to ensure that DC balance
is maintained.
Operation proceeds as follows:
1. At the beginning of a block of data the system is in State 1.
2. When an input quintet (for example 00000) is received the appropriate
output sextet (in this case 001100) is transmitted and then the system goes to
State 2 (selected from the Next State column).
3. If we now receive the quintet 11101 we are in State 2 so we transmit 010011
and stay in State 2.
4. If the next quintet is 01100 then we will transmit 010111 and go to State 1.
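The procedure above can be sketched as a small table-driven state machine. The following Python sketch is illustrative only and uses just the partial table of Table 4 (the full 5B/6B code covers all 32 input quintets):

```python
# Partial 5B/6B transmit state machine (entries from Table 4 only).
# Each input quintet maps, per state, to (output sextet, next state).
TABLE = {
    "00000": {1: ("001100", 2), 2: ("110011", 1)},
    "00011": {1: ("001101", 1), 2: ("001101", 2)},
    "00111": {1: ("001011", 1), 2: ("001011", 2)},
    "01100": {1: ("101000", 2), 2: ("010111", 1)},
    "10101": {1: ("011000", 2), 2: ("100111", 1)},
    "11101": {1: ("010011", 1), 2: ("010011", 2)},
    "11110": {1: ("010010", 2), 2: ("101101", 1)},
}

def encode(quintets):
    """Encode a sequence of 5-bit groups, starting in State 1."""
    state, out = 1, []
    for q in quintets:
        sextet, state = TABLE[q][state]
        out.append(sextet)
    return out

# The worked example from the text: 00000 then 11101 then 01100.
print(encode(["00000", "11101", "01100"]))
```

Running the worked example produces the sextets 001100, 010011 and 010111, matching the three steps above.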
As shown in Figure 12 on page 35, the 2B1Q code uses 4 line states to
represent 2 bits. In the ISDN_BR environment this means that the 160 Kbps “U”
interface signal is sent at 80 kbaud. In ISDN_BR, PAM (Pulse Amplitude
Modulation) is used to carry the 2B1Q signal. This is just ordinary baseband
digital signaling but the receiver must now be able to distinguish the amplitude
of the pulses (not just the polarity). There are four voltage states with each state
representing 2 bits.
Figure 12. 2-Binary 1-Quaternary Code
Another point to note is that this code is not DC balanced! Further it does not
inherently contain sufficient transitions to recover a good clock. For example the
sequences:
00000000 or 0101010101 or 1010101010 or 11111111
all result in an unchanging line state leaving nothing for the receiver to
synchronize on.
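The 2B1Q mapping and its clock-recovery problem can be sketched in a few lines. The particular dibit-to-level assignment below is an assumption for illustration (the standard fixes a specific mapping of sign and magnitude bits):

```python
# 2B1Q sketch: each pair of bits (a "dibit") becomes one of four line
# levels. The dibit-to-level assignment here is illustrative only.
LEVELS = {"10": +3, "11": +1, "01": -1, "00": -3}

def encode_2b1q(bits):
    """Map a bit string (even length) to a list of quaternary symbols."""
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

# One of the problem sequences above: 0101... is the dibit 01 repeated,
# so the line sits at a single level and the receiver gets no transitions.
print(encode_2b1q("0101010101"))
```

With this mapping every one of the quoted sequences reduces to a single repeated dibit, which is exactly why the line state never changes.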
MLT-3 is used in some LAN systems (FDDI over UTP-5 for example) to reduce
the required bandwidth but it signals at 1 baud per bit. The principle is
illustrated in Figure 13.
As with NRZI encoding, a zero bit is signaled by no change in the line state
from the previous bit and a one bit is signaled by a transition. However,
there are three signal levels here instead of two. The rules of MLT-3 are simple:
1. A one bit is signaled by a change in line state from the previous bit.
2. A zero bit is signaled by the absence of a change in line state.
3. When the line is in the zero state the next state change (to signal the next
one bit) must be to the opposite state from the immediately preceding state.
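These three rules can be captured in a few lines of code. The sketch below assumes the line starts in the zero state and represents the three levels as -1, 0 and +1:

```python
# MLT-3 sketch: the line walks around the cycle 0, +1, 0, -1 on each one
# bit and holds its state on each zero bit (initial zero state assumed).
def mlt3(bits, cycle=(0, +1, 0, -1)):
    idx, out = 0, []
    for b in bits:
        if b == "1":
            idx = (idx + 1) % 4   # a one bit always changes the line state
        out.append(cycle[idx])
    return out

# Four consecutive one bits complete one full cycle, so the fundamental
# frequency is only a quarter of the bit rate - hence the bandwidth saving.
print(mlt3("11111111"))
```

Note that the cycle automatically satisfies rule 3: after leaving +1 for zero, the next transition goes to -1, and vice versa.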
The reason this code is used is that the signal “uses less bandwidth” than does
ordinary NRZI code. This means that the signal energy is concentrated at lower
frequencies than NRZI while retaining a relatively simple transceiver structure.
(See 2.2.2, “The Characteristics of a Baseband Signal” on page 39.) One
consequence of this is that FDDI can run over UTP-5 using MLT-3 where it cannot
run over UTP-5 using NRZI code (the immediate problem is excessive
electromagnetic emission when using NRZI).
Nevertheless, block codes do not perform all of the functions of line coding.
When block coding is used, it is usual to also use a line coding scheme
appropriate to the particular physical medium in use. Thus in FDDI, the data is
first block coded and then coded again in a manner appropriate to the physical
medium. When transmission is on fiber (or in some cases STP) NRZI coding is
used as well. When transmission is on UTP-5, then MLT-3 coding is used in
addition to the block code.
20 American
21 English
For example, scrambling is used in the proposed 25.6 Mbps ATM link access
protocol (ATM25). This protocol is similar at the signal level to standard
token-ring and so a comparison of the two spectra is really a comparison of an
unscrambled versus a scrambled signal.
Figure 14 shows the frequency spectra of the unscrambled token-ring signal and
the scrambled ATM25 signal. In this figure both signals have been filtered for
transmission on UTP-3 (thus high-frequency components of the signal have been
removed).
In the unscrambled (token-ring) case, there are some strong peaks in the
frequency spectrum (18 dB above the curve trend) but in the scrambled (ATM)
case these peaks are gone. This means that the transmitted power is more
evenly distributed over the transmitted frequencies and a better-quality signal
results.
Figure 14. Frequency Spectra of Scrambled and Unscrambled Signals. The dotted line
in the graph shows token-ring 16 Mbps (unscrambled) and the unbroken line shows
ATM25 (scrambled) protocol. The ATM25 signal is a better signal because of the absence
of peaks in the spectrum.
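The principle can be illustrated with a simple additive scrambler. The 7-bit shift register and tap positions below are assumptions chosen for the sketch; they are not the polynomial that ATM25 actually specifies:

```python
# Additive scrambler sketch: XOR the data with a pseudo-random bit stream
# from a linear feedback shift register (illustrative taps: x^7 + x^4 + 1).
def lfsr_stream(n, state=0b1111111):
    out = []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 3)) & 1
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return out

def scramble(bits):
    return [b ^ s for b, s in zip(bits, lfsr_stream(len(bits)))]

# A long run of zeros would put strong peaks in the spectrum; after
# scrambling it looks pseudo-random. XOR with the same stream descrambles.
data = [0] * 16
tx = scramble(data)
print(tx)
```

Because scrambling is a simple XOR with a repeatable stream, applying the same operation at the receiver restores the original data.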
Figure 15. IBM Type 1 Shielded Twisted Pair. Each of two twisted pairs is shielded from
the other by an earthed polyester aluminum tape shield. The whole is further shielded in
copper braid.
22 Coaxial cables used for long-line telephone transmission in the past did not have a continuous plastic insulator but rather
small plastic insulators spaced at regular intervals. This type of cable has considerably better characteristics than the ones with
continuous plastic, but is very hard to bend around corners and to join.
23 The same shape as a sine wave. For example, the cosine function describes a wave exactly the same shape as the sine
function but shifted by π /2 in phase.
cos (2π × t) − (1/3) cos (2π × 3t) + (1/5) cos (2π × 5t)
− (1/7) cos (2π × 7t) + ... + (1/13) cos (2π × 13t)     [1]

The sequence of coefficients is 1, 0, −1/3, 0, 1/5, 0, −1/7, ...
Note that the third and seventh terms have a negative sign. This does not
mean a negative amplitude (whatever that may be) but rather a phase shift
of π (radians). In the figure, the waveforms with a negative sign start below
the x-axis, whereas waveforms with a positive sign start above (reflecting the
phase shift).
4. The lowest frequency sine wave has the same period of repetition as the
square pulse under study.
5. The sine waves in the series have successively smaller amplitudes as the
frequency is increased. This is not true of all signals but is a characteristic
of a square pulse. 24
6. As more terms are added to the series, the shape of the pulse becomes
more and more square, but some rather large “ears” have developed.
These ears are an artificial result of the way we do the mathematics (called
the Gibbs effect) - they are not present in the real signal.
7. In this example, as higher frequencies are added to the sum they make less
and less difference to the pulse shape and mainly contribute to making the
corners squarer.
Figure 17. Composition of a Square Pulse as the Sum of Sinusoidal Waves. The
left-hand column shows each new wave as it is added to the summation. The right-hand
column shows the progressive total as each new wave is added. Note that only the first,
third, fifth, seventh and thirteenth harmonics are shown. For reasons of space the
seventh, ninth and eleventh terms have been omitted from the figure, although they are
present in the final summation waveform in the lower right-hand corner.
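The summation in equation [1] can be reproduced numerically. The short sketch below sums the odd harmonics with alternating signs and 1/n amplitudes; at the center of the pulse the partial sums approach π/4, the amplitude of the square wave that this particular series builds:

```python
# Partial sums of equation [1]: odd harmonics, alternating sign,
# amplitude 1/n (pure Python, no plotting).
import math

def partial_sum(t, n_terms):
    total, sign = 0.0, 1.0
    for k in range(n_terms):
        n = 2 * k + 1                 # odd harmonics 1, 3, 5, 7, ...
        total += sign * math.cos(2 * math.pi * n * t) / n
        sign = -sign
    return total

# With more terms the waveform becomes squarer; at t = 0 the value tends
# toward pi/4 (about 0.785), apart from the Gibbs overshoot at the edges.
print(partial_sum(0.0, 7), math.pi / 4)
```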
24 sin (x) = cos (x − π/2)
Consider sending a simple NRZ signal where (say) a one bit is represented by
+1 volt and a zero bit by -1 volt. If we now send a bit stream of 01010101... we will get
the square wave shown earlier (Figure 16 on page 40). We can make the
following observations:
1. Each bit is represented by one state change of the line. That is, 1 bit equals
1 baud.
2. The repetition frequency is half of the bit rate. That is, the lowest frequency
sine wave present in the signal represents two bit times. So, 1 hertz = 2
baud (= 2 bits).
3. So, if the bit rate was 10 Mbps then the lowest frequency present would be
5 MHz .
4. Again, if the bit rate is 10 Mbps (5 MHz) then there are frequency
components in the signal of 5, 15, 25, 35... MHz.
5. So, if we were to send this signal over a communications channel that was
limited to 10 MHz, then we would put the square wave in at one end and get
a perfect sine wave out the other (all of the higher frequency components
having been removed). This would be somewhat inconvenient if our receiver
was expecting a square pulse! It is obvious that the closer that a received
pulse edge approaches a square shape then the easier it will be to receive.
In addition the received timing (important in some situations) will become
more and more accurate as the pulse approaches a square shape.
6. There are many simplifications here. The signal actually varies (if it did not
it could not carry information). Using the above example, if we sent the
signal 00110011... we would get a frequency spectrum of precisely half of the
one we got above. When the signal is carrying information (varying) there
will be many frequencies present and these will be changing. These are
usually expressed as a spectrum of frequencies.
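Observations 2 and 3 above can be checked with a small discrete Fourier transform. The sketch below builds the 0101... pattern at 10 Mbps (four samples per bit, an arbitrary choice for the sketch) and locates the dominant frequency:

```python
# DFT sketch: the alternating NRZ pattern has its fundamental at half the
# bit rate (pure-Python DFT; magnitudes only).
import cmath

def dft_mag(samples):
    N = len(samples)
    return [abs(sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

bit_rate = 10e6                        # 10 Mbps
samples = []
for b in "01" * 8:                     # sixteen bits of 0101...
    samples += [1.0 if b == "1" else -1.0] * 4   # 4 samples per bit
fs = 4 * bit_rate                      # sampling rate

mags = dft_mag(samples)
peak_bin = max(range(1, len(samples) // 2), key=lambda k: mags[k])
print(peak_bin * fs / len(samples) / 1e6, "MHz")   # -> 5.0 MHz
```

The peak falls at 5 MHz, half the 10 Mbps bit rate; the remaining energy sits at the odd multiples (15, 25, 35... MHz), just as observation 4 states.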
There are two critical points that emerge from the above discussion:
1. Square-wave signals have significant frequency components of many times
the frequency of the square wave itself.
2. The various frequency components of a signal bear a critical relationship to
one another in terms of amplitude and phase. If this relationship is
disturbed, then the signal will be distorted.
This can be illustrated by considering the square pulse of Figure 17 on
page 41. In this figure, the right-hand lower diagram shows the summation
of the first thirteen terms. If we introduce some phase distortion by delaying
the third, fifth and seventh terms by π /2 we get the pulse shown in Figure 19
on page 43.
Figure 19. The Effect of Phase Distortion on the Thirteen-Term Summation. The third,
fifth and seventh terms have been delayed by π /2.
A normal transmission channel will change any digital pulse sent along it (see
Figure 20).
In the telephone subscriber loop environment, impulse noise takes the
form of “spikes” of between 10 µsec and 50 µsec in duration.
In the LAN environment, impulse noise can arise if an (unshielded)
twisted pair passes close (in the same duct) to normal electrical power
wiring. Whenever a high current device (such as a motor) is switched on
there is a current surge on the wire. Through stray coupling, a power
surge can cause impulse noise on nearby communications cabling.
Reflected Signals
If a signal traveling on a wire encounters a change in the impedance of
the circuit, part of the signal is reflected back towards the sender. Of
course, as it travels back to the sender, it could be reflected back in the
original direction of travel if it encounters another change in impedance.
Reflected signals (sometimes called echoes) can cause a problem in a
unidirectional environment because they can be reflected back from the
transmitter end, but this is not always very serious.
The real problem comes when a two-wire line is used for full-duplex
communication (such as at the ISDN_BR “U” interface). Here reflections
can cause serious interference with a signal going in the other direction.
Impedance changes over the route of a circuit can have a number of
causes, such as a change in the gauge or type of wire used in the circuit
or even a badly made connection. However, the worst impedance
mismatches are commonly caused by the presence of a “bridged tap”.
A bridged tap is simply where another wire (pair of wires) is connected
to the link but its other end is not connected to anything. Bridged taps
are quite common in some telephone environments.
Intersymbol Interference
Intersymbol interference takes place when a particular line state being
transmitted is influenced by a previous line state. This is usually a result
of the characteristics of the circuit causing the signal to be “smeared” so
that the end of one symbol (bit or group of bits) overlaps with the start of
the next.
Crosstalk
Crosstalk is when a signal from one pair of wires appears on another
pair of wires in the same cable - through reactive (capacitive or
inductive) coupling effects. In telephone subscriber loop situations,
crosstalk increases very rapidly when the signal frequency gets above
100 kHz. One effect of crosstalk is to place an unwanted signal on a wire
pair, thus interfering with the intended signal. Another effect of crosstalk
is that it causes loss of signal strength (attenuation) in the pair causing
the interference.
There are two kinds of crosstalk which can have different amounts of
significance in different situations. These are called “Near End
Crosstalk” (NEXT) and “Far End Crosstalk” (FEXT).
The most common type of NEXT occurs when the signal transmitted from
a device interferes with the signal in the other direction being received
by that device. This can be a problem in the LAN environment using
unshielded cable, since the transmit pair and the receive pair are bound
closely together in the cable.
There are wide differences between countries (and even within individual
countries) in the characteristics of this wiring.
Maximum Length
One of the most important criteria is the length of the wire. The
maximum length varies in different countries but is usually from four to
eight kilometers.
Wire Thickness
All gauges of wire have been used, from 16 gauge (1.291 mm) to 26
gauge (0.405 mm). The thinner wires (the higher gauge numbers) are typically
used for short distances, so it is rare to find 26 gauge wire longer than 2½
kilometers (a little more than 8000 feet).
Material
Most installed wire is copper but some aluminum wire is in use.
Insulation
The majority of this cable uses tightly wound paper insulation, but recent
installations in most countries tend to use plastic.
Twists in Cable
Telephone wire is twisted essentially to keep the two wires of a pair
together when they are bundled in a cable with up to 1000 other pairs.
Twists do, however, help by adding a small inductive component to the
cable characteristic.
The number of twists per meter is different for different pairs in the same
cable. (This is deliberately done to minimize crosstalk interference.)
Also, the uniformity of twists is not generally well controlled. These have
the effect of causing small irregularities in impedance, which can cause
reflections and signal loss due to radiation, etc.
Different Gauges on the Same Connection
It is quite common to have different gauges of wire used on the same
connection. Any change in the characteristics of the wire causes an
impedance mismatch and can be a source of reflections.
Bridged Taps
The worst feature of all in typical subscriber loops is the bridged tap. 25
This is just a piece of wire (twisted pair) connected to the line at one
end, but only with the other end left unconnected. In other words, the
circuit between the end user and the exchange has another
(unconnected) wire joined to it somewhere along its path. This happens
routinely when field technicians attach a user to a line without removing
the wires that attached previous users.
In practical situations subscriber loops with as many as six bridged taps
have been reported.
Bridged taps cause a large impedance mismatch (reflect a large amount
of the signal) and radiate energy (causing loss of signal strength and
potential problems with radio frequency emission).
Loading Coils
On typical telephone twisted pair cable, the effects of capacitance
between the two conductors dominates the characteristics of the circuit
and limits the transmission distance. For many years it has been a
practice to counteract some of the effect of capacitance by adding
inductance to the circuit. This was done (especially over longer
distances) by the insertion of “loading coils” into the loop.
It is estimated that up to 25% of the subscriber loops in the US have
loading coils in the circuit.
There is no available digital transmission technique which will work in
the presence of loading coils. They need to be removed if the circuit is
to be used for digital transmission.
In the traditional analog telephone environment, two-wire transmission is used
from the subscriber to the nearest exchange and “four wire” (true full-duplex)
transmission is used between exchanges. In this traditional environment,
echoes can be a major source of annoyance to telephone users.
A better way of handling echoes is to use an echo canceller. The basic principle
of echo cancellation is shown in Figure 22.
In recent times devices called “adaptive filters” have become practical with
improvements in VLSI technology. An adaptive filter automatically adjusts itself
to its environment. 26 Echo cancellers can′t handle interference that is systematic
but not an echo (such as crosstalk from adjacent circuits), nor can they handle
the effects of impulse noise or (for that matter) multiple echoes. Adaptive filters
can do a very good job of cleaning up a signal when transmitting full-duplex on a
two-wire line.
Of course, echo cancellers can be digital and adaptive also, so that they can
adjust automatically to the characteristics of the circuit they are attached to.
26 An excellent account of adaptive filtering in the loop plant is given in Waring, D.L., et al., 1991.
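The adaptive-filter idea can be sketched with the classic LMS (least mean squares) update. Everything here - the echo path, the step size and the symbol alphabet - is an assumption chosen to keep the illustration small, not a description of any particular product:

```python
# LMS adaptive echo canceller sketch: the filter taps converge toward the
# (unknown) echo path, so the estimated echo can be subtracted.
import random

random.seed(1)
echo_path = [0.5, -0.3, 0.1]          # the "unknown" echo to be learned
taps = [0.0, 0.0, 0.0]                # the canceller's adaptive filter
mu = 0.05                             # adaptation step size
history = [0.0, 0.0, 0.0]             # recent transmitted symbols

for _ in range(5000):
    x = random.choice([-1.0, 1.0])    # transmitted symbol
    history = [x] + history[:-1]
    echo = sum(h * s for h, s in zip(echo_path, history))
    estimate = sum(w * s for w, s in zip(taps, history))
    error = echo - estimate           # residual echo after cancellation
    taps = [w + mu * error * s for w, s in zip(taps, history)]

print([round(w, 3) for w in taps])    # converges toward echo_path
```

After a few thousand symbols the taps match the echo path closely, so the subtracted estimate removes nearly all of the echo - without the filter ever being told what the echo path was.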
2.3 LAN Cabling with Unshielded Twisted Pair
All other things being equal, we know that as the data rate increases, the
drive distance available on copper wire falls approximately with the square
root of the data rate. So a 16 Mbps signal can travel only one-half of
the distance of a 4 Mbps signal using a copper wire with constant specifications.
However, all other things are almost never equal among signaling methods of
devices using different protocols, implementations, or data rates. All of these
considerations are further complicated by the fact that increasing data rates
cause increasing radiation. The regulatory agencies reasonably set an absolute
standard for radiation emission that does not vary with the data rate of the
signal.
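The square-root rule of thumb can be stated as a one-line calculation (a sketch only; real drive distances also depend on the cabling and signaling details discussed here):

```python
# Drive distance relative to a 4 Mbps reference, using the text's rule of
# thumb that distance scales with 1/sqrt(data rate).
import math

def relative_distance(rate_mbps, ref_rate_mbps=4.0):
    return math.sqrt(ref_rate_mbps / rate_mbps)

print(relative_distance(16.0))   # a 16 Mbps signal reaches half as far
```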
However, this does not solve all the problems of UTP. As far as the issue of
electromagnetic compatibility (EMC) is concerned, neither of these choices offers
the protection provided by shielded twisted pair (STP) by the mere existence of
the shielding. In order to prevent unacceptable levels of EMC, radiation filters
are needed at every station and the cost of these must be considered in the
overall network media cost. In addition, when using UTP, impulse noise and
crosstalk from nearby cables can be a source of significant performance degradation.
If we take a very good transmission environment (such as STP) and use only a
small fraction of the available bandwidth we can send a very simple signal, use
low-cost transceivers and have effective communication. Such is the case with
token-ring (either 4 or 16 Mbps) on the STP medium. However, when the
bandwidth of the medium is limited and there are other factors (such as noise
and EMC emissions), then we have a very different story.
The combined effects of cable characteristics and EMC potential limit the usable
bandwidth of UTP-3 cable to around 30 MHz and UTP-5 to about 70 MHz.
Therefore, to get UTP to handle data at 100 Mbps we have to resort to more
complex methods.
FDDI over UTP-5
This is now standardized and uses MLT-3 coding described in 2.1.15.3,
“Multi-Level Transmit - 3 Levels (MLT-3)” on page 35.
100 Mbps Ethernet (100BaseTX) over UTP-5
This protocol uses two pairs of wire (one in each direction) in a UTP-5
cable. The same coding scheme as FDDI over UTP-5 (MLT-3) is used.
100 Mbps Ethernet (100BaseT4) over UTP-3 Using Four Pairs
The first thing to be said about this protocol is that it does not send 100
Mbps over a single pair of UTP-3 wires. A typical UTP-3 cable contains
four pairs of wires. The protocol sends data on three pairs in parallel
and the data rate on each pair is 33.3 Mbps. The fourth pair is used for
collision detection. One restriction is that if there are more than four
pairs in the UTP cable then those additional pairs may not be used for
any purpose (this is to cut down on noise due to crosstalk). This is
discussed further in 13.3.1.1, “100VG AnyNet Transmission Protocol on
UTP-3” on page 273.
100 Mbps Ethernet over UTP-3 Using Two Pairs (100BaseT2)
This is a strong requirement that is under discussion in the IEEE
standards committee. A number of proposals exist but the problem is
complex and a resolution is expected to take some time.
100VG AnyNet
This protocol again uses all four pairs in the UTP-3 cable but is quite
different from 100BaseT4. It is described in 13.3, “100VG AnyNet - IEEE
802.12 (100BaseVG)” on page 271.
Many people think that because the use of fiber optics is growing very rapidly,
there is no progress being made in digital transmission on copper media.
Nothing could be further from the truth.
Better circuitry (higher density, higher speed, lower cost) enables us to use
much more sophisticated techniques for digital transmission than have been
possible in the past. As mentioned above, there are vast numbers of copper
“subscriber loops” installed around the world and there is a big incentive to get
better use out of them.
In the previous chapter, and later in 8.1.3, “ISDN Basic Rate” on page 161, the
principles of basic rate ISDN are discussed. This was an enormously successful
research effort, so much so that the American National Standards Institute
(committee T1.E1.4) is working on two projects aimed at increasing the link
speeds even further.
The primary reason for ADSL is for the distribution of video and image-based
services to private homes and small businesses over existing copper wire local
loops.
ADSL is a developing standard. The initial proposal was to provide a 1.5 Mbps
connection from the exchange to the end user with a 64 Kbps connection from
the end user to the exchange. An important point is that this is in addition to the
“regular” analog voice connection over the same pair of wires.
When ADSL was first proposed as a standards project, the objective looked
attainable but very difficult. Recent research results, however, have proven that
DMT transmission can achieve a much greater throughput than the original
objective. On good quality loops a data rate of 6 Mbps in one direction is
possible (and this has been demonstrated in field trials). Simultaneously with
this, a data rate of 384 Kbps has been demonstrated in the other direction.
The form of ADSL currently undergoing field trials gives 1.544 Mbps downstream,
64 Kbps upstream plus simultaneous analog voice. The system is arranged so
that the analog voice transmission can still continue even if the ADSL NTUs
(Network Terminating Units) are powered off.
It is expected that the final version of ADSL will extend the downstream channel
frequency band down to 10 kHz overlapping with the upstream channel
frequency. The system requires complex echo cancellation in order to make this
possible but it is predicted that this function can be incorporated in the same
chip that performs all the other ADSL functions. Hence the cost of the additional
bandwidth capability is minimal.
The concept involved in DMT is not new, but its practical implementation has
had to wait for cost-effective hardware technology to become available. Digital
Signal Processing now provides such cost-effective technology.
Note: The details of DMT presented in the following discussion are as they were
proposed for use with HDSL. The details of ADSL implementation are slightly
different. (Mainly, the symbol rate in HDSL is 2000 baud - or one block every 500
µsec. In ADSL it is 4000 baud - or one block every 250 µsec.)
29 The concept of DMT was pioneered by A. Peled and A. Ruiz of IBM Research (see Peled and Ruiz, 1980). However, most of
the current research work relating to the use of DMT for HDSL and ADSL was done by Amati Communications Corporation
and Stanford University. Thanks are due to Amati for providing much of the DMT information presented here.
30 Also called “subscriber loops”
When sending a coherent signal (whether it is digital baseband or modulations
on a carrier) across this type of transmission channel, impairments that are
present in the band affect the whole signal. Thus the “trick” of designing a
transmission scheme is to pick a scheme that avoids or minimizes the problems
inherent in the channel.
The basic principle of DMT is to break up the transmission channel into many
(narrower) subchannels by Frequency Division Multiplexing (FDM).
The fact is, however, that the scheme is far from absurd! DMT is the best
transmission scheme yet developed for the telephone subscriber loop
environment. This comes about for a number of reasons:
1. If we treat the available bandwidth as a single channel and send any kind of
signal over it, impairments in one part of the channel affect the whole signal.
If we break up the channel into many narrower subchannels, then different
impairments and channel characteristics will apply to each subchannel. In
other words, different subchannels will have different levels of quality. We
now have an opportunity to optimize each subchannel individually. For
example, the high attenuation that affects subchannels in the higher parts of
the frequency range will not affect the subchannels in the lower parts of the
range. Different types of noise will affect some subchannels without affecting
others.
Shannon′s laws are still unbroken - they assume that impairments in a
channel uniformly affect all frequencies over the width of the channel. But
this is not the case here.
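Point 1 is the basis of "bit loading": each subchannel carries as many bits as its own measured quality allows. The sketch below is illustrative only - the SNR figures, the 9.8 dB SNR gap and the 3-bit cap are assumptions for the example, not measured loop data:

```python
# Per-subchannel bit loading sketch: better subchannels carry more bits.
import math

def bits_for_subchannel(snr_db, gap_db=9.8, max_bits=3):
    snr = 10 ** ((snr_db - gap_db) / 10)      # SNR after the coding gap
    return max(0, min(max_bits, int(math.log2(1 + snr))))

# Illustrative SNRs falling with frequency (attenuation rises with frequency).
snrs_db = [30, 24, 18, 12, 6, 2]
allocation = [bits_for_subchannel(s) for s in snrs_db]
print(allocation)   # the worst subchannels may carry nothing at all
```

With these assumed figures the low-frequency subchannels carry the 3-bit maximum while the highest-frequency ones carry nothing, mirroring the behavior described in the text.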
4. Each sub-band is 2 kHz wide so the total bandwidth (potentially) used is 512
kHz.
5. Each signal uses Quadrature Amplitude Modulation (QAM) and may
potentially carry 0, 1, 2, or 3 bits per block. Fractions of a bit cannot be
carried - a bit time cannot extend past the end of the current block. (Some
subchannels may not be good enough to carry any data at all.)
6. “Trellis Coding” of the bit stream is used. This uses redundant signal states
to improve the performance of the receiver (detector). Sixteen signal states
are used to encode a maximum of 3 bits. This is a 2 to 1 redundancy but
more than pays for itself in improved noise rejection in the receiver.
The system is capable of being used for 768 Kbps full-duplex over two wires or
for 1.544 Mbps FDX over four wires. Operation takes place as follows:
1. A block of 400 bits arrives at the transmitter and the bits are allocated to
subchannels according to the previously agreed subchannel capacities. Any
integer number of bits from zero (for a very bad subchannel) to three (for a
very good one) is possible.
2. For each individual subchannel, the bits to be transmitted are digitally
modulated. A version of Quadrature Amplitude modulation is used. That is,
there are four possible phase states and four possible amplitude states.
This allows for 16 signal states and potentially 4 bits may be transmitted per
signal period (per symbol). 32 However, the redundancy used in the Trellis
Coding system reduces this to 3 bits per symbol.
32 The technology allows for the number of signal states to be different on different subchannels. A maximum of 2048 signal
states yielding a capacity of up to 11 bits per subchannel (10 bits with Trellis Coding) is possible.
5. The FFT processing results in 256 complex numbers (pairs of values) which
represent the amplitude and phase of the signal as received on each
channel.
6. The subchannels are then equalized, decoded and converted to a serial bit
stream of 768 Kbps of data and 16 Kbps of control information.
7. In the “two-wire full-duplex” configuration an echo cancellation device will
typically be situated at the line side of the receiver.
8. In the case of Trellis Coding being used at the transmitter (as in this system),
the signal decoding takes place using a “Viterbi Decoder”. (This makes use
of the redundant code states in the Trellis Code to provide better noise
rejection - however, it causes additional delay in end-to-end data transfer.)
The Fast Fourier Transform (FFT) is just a computer algorithm for performing
the DFT quickly. The IFFT is a computer algorithm for performing the IDFT
quickly.
In DMT transmission, the FFT and its inverse are used in an unusual way.
The IFFT is being used to produce a frequency multiplexed signal, and the
FFT is used to decode that signal back into its original components.
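This unusual use of the transforms can be demonstrated end to end. The sketch below uses a plain DFT in place of the FFT (same result, simpler code) and far fewer subchannels than a real modem; the QAM points are arbitrary illustrative values:

```python
# DMT round trip sketch: subchannel values -> inverse DFT -> line signal
# -> forward DFT -> recovered subchannel values.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

# Four subchannels, each carrying one illustrative QAM point. Mirroring the
# points as complex conjugates makes the time-domain signal real-valued.
points = [1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]
spectrum = [0] + points + [0] + [p.conjugate() for p in reversed(points)]

signal = idft(spectrum)        # the (real) samples sent on the line
recovered = dft(signal)        # the receiver's view of each subchannel
```

The inverse transform produces the frequency-multiplexed line signal in one operation, and the forward transform at the receiver separates it back into the per-subchannel amplitude/phase values.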
3.2.5 Characteristics
DMT has a number of important characteristics that make it optimal in the
subscriber loop environment.
Noise Rejection
The most significant characteristic of DMT transmission is its excellent
noise rejection. The biggest problem in the subscriber loop environment
is impulse noise. These impulses are typically around 30 µsec in
duration but vary from about 10 µsec to 50 µsec. Because each signal
period is 500 µsec, each impulse noise event is a relatively small part of
the signal period. When it is processed by the Fourier transform at the
receiver, the impulse has a relatively small effect on the output.
Power Control
The signal amplitude of each subchannel can be separately controlled
simply by multiplying the amplitude/phase number by a constant before
sending it to the IFFT. So power can be adjusted dynamically to suit the
subchannel.
Line Quality Monitoring
When the system is started, the medium is analyzed at each of its
subchannels to determine subchannel quality and calculate what bit rate each can support.
Chapter 4. An Introduction to Fiber Optical Technology
The use of light to send messages is not new. Fires were used for signaling in
biblical times, smoke signals have been used for thousands of years and
flashing lights have been used to communicate between warships at sea since
the days of Lord Nelson.
The idea of using a glass fiber to carry an optical signal originated with
Alexander Graham Bell, but it had to wait some 80 years for better glasses
and low-cost electronics before becoming useful in practical situations.
During the 1980s, optical communication in the public communication
networks developed from a curiosity into the dominant technology.
4.1.1 Concept
The basic components of an optical communication system are shown in
Figure 29, above.
• A serial bit stream in electrical form is presented to a modulator, which
encodes the data appropriately for fiber transmission.
• A light source (laser or Light Emitting Diode - LED) is driven by the
modulator and the light focused into the fiber.
• The light travels down the fiber (during which time it may experience
dispersion and loss of strength).
• At the receiver end the light is fed to a detector and converted to electrical
form.
• The signal is then amplified and fed to another detector, which isolates the
individual state changes and their timing. It then decodes the sequence of
state changes and reconstructs the original bit stream. 33
• The timed bit stream so received may then be fed to a using device.
33 This overview is deliberately simplified. There are many ways to modulate the transmission and the details will vary from this
example but the general principle remains unchanged.
34 Practical fiber systems don′t attempt to do this because it costs less to put multiple fibers in a cable than to use sophisticated
multiplexing technology.
• In some tropical regions of the world, lightning poses a severe
hazard even to buried telephone cables! Of course, optical fiber isn′ t
subject to lightning problems but it must be remembered that
sometimes optical cables carry wires within them for strengthening
or to power repeaters. These wires can be a target for lightning.
No Electromagnetic Interference
Because the connection is not electrical, you can neither pick up nor
create electrical interference (the major source of noise). This is one
reason that optical communication has so few errors. There are very few
sources of things that can distort or interfere with the signal.
In a building this means that fiber cables can be placed almost anywhere
that electrical cables would have problems (for example, near a lift motor or
in a cable duct with heavy power cables). In an industrial plant such as
a steel mill, this gives much greater flexibility in cabling than previously
available.
In the wide area networking environment there is much greater flexibility
in route selection. Cables may be located near water or power lines
without risk to people or equipment.
Distances between Repeaters
As a signal travels along a communication line it loses strength (is
attenuated) and picks up noise. The traditional way to regenerate the
signal, restoring its power and removing the noise, is to use a repeater.
These are discussed later in 4.1.7, “Repeaters” on page 85.35 (Indeed it
is the use of repeaters to remove noise that gives digital transmission its
high quality.)
In long-line transmission cables now in use by the telephone companies,
the repeater spacing is typically 24 miles. This compares with 8 miles
for the previous coaxial cable electrical technology. The number of
required repeaters and their spacing is a major factor in system cost.
Some recently installed systems (1995) have spacings of up to 70 miles. 36
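The relationship between power budget and repeater spacing can be sketched with a short calculation. The transmitter power, receiver sensitivity, and safety margin below are illustrative assumptions, not figures from the text:

```python
def max_repeater_spacing_km(tx_power_dbm, rx_sensitivity_dbm,
                            attenuation_db_per_km, margin_db=3.0):
    # The available power budget (in dB) divided by the per-kilometer
    # loss gives the distance at which the signal must be regenerated.
    budget_db = tx_power_dbm - rx_sensitivity_dbm - margin_db
    return budget_db / attenuation_db_per_km

# Assumed figures: a 0 dBm transmitter, a -30 dBm receiver, and
# 0.2 dB/km fiber leave a 27 dB budget after a 3 dB safety margin.
print(max_repeater_spacing_km(0.0, -30.0, 0.2))   # 135.0 (km)
```

A result of 135 km (about 84 miles) is the same order of magnitude as the spacings quoted above; halving the attenuation doubles the spacing, which is why fiber quality dominates system cost.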
Open Ended Capacity
The data transmission speed of installed fiber may be changed
(increased) whenever a new technology becomes available. All that
must be done is change the equipment at either end and change the
repeaters.
Better Security
It is possible to tap fiber optical cable. But it is very difficult to do and
the additional loss caused by the tap is relatively easy to detect. There
is an interruption to service while the tap is inserted and this can alert
operational staff to the situation. In addition, there are fewer access
points where an intruder can gain the kind of access to a fiber cable
necessary to insert a tap.
Insertion of active taps where the intruder actually inserts a signal is
even more difficult.
35 Repeaters have been in use for many years in digital electronic connections.
36 As will be seen later, optical amplifiers are replacing repeaters as the technology of choice in long-line communications.
Optics for Transmission Only
Until very recently there was no available optical amplifier. The signal
had to be converted to electrical form and put through a complex
repeater in order to boost its strength. Recently, optical amplifiers have
emerged and look set to solve this problem (see 4.1.8, “Optical
Amplifiers” on page 86).
However, optical logic processing and/or switching systems seem to be a
few years off yet.
Gamma Radiation
Gamma radiation comes from space and is always present. It can be
thought of as a high-energy X-ray. Gamma radiation can cause some
types of glass to emit light (causing interference) and also gamma
radiation can cause glass to discolor and hence attenuate the signal. In
normal situations these effects are minimal. However, fibers are
probably not the transmission medium of choice inside a nuclear reactor
or on a long-distance space probe. (A glass beaker placed inside a
nuclear reactor for even a few hours comes out black in color and quite
opaque.)
Electrical Fields
Very high-voltage electrical fields also affect some glasses in the same
way as gamma rays. One proposed route for fiber communication
cables is wrapped around high-voltage electrical cables on transmission
towers. This actually works quite well where the electrical cables are
only of 30 000 volts or below. Above that (most major transmission
systems are many times above that), the glass tends to emit light and
discolor. Nevertheless, this is a field of current research - to produce a
glass that will be unaffected by such fields. It is a reasonable
expectation that this will be achieved within a very few years.
Some electricity companies are carrying fibers with their high voltage
distribution systems by placing the fiber inside the earth wire (typically a
1 inch thick copper cable with steel casing). This works well, but
long-distance high-voltage distribution systems usually don't have earth
wires.
Sharks Eat the Cable(?)
In the 1980s there was an incident where a new undersea fiber cable was
broken on the ocean floor. Publicity surrounding the event suggested
that the cable was attacked and eaten by sharks. It wasn't just the
press; this was a serious claim. It was claimed that there was something
in the chemical composition of the cable sheathing that was attractive to
sharks!
Other people have dismissed this claim as a joke and suggest that the
cable was badly laid and rubbed against rocks. Nevertheless, the story
has passed into the folklore of fiber optical communication and some
people genuinely believe that sharks eat optical fiber cable.
Gophers Really Do Eat the Cable
Gophers are a real problem for fiber cables in the United States. There
is actually a standardized test (conducted by a nature and wildlife
organization) which involves placing a cable in a gopher enclosure for a
fixed, specified length of time.
If a short pulse of light from a source such as a laser or an LED is sent down a
narrow fiber, it will be changed (degraded) by its passage down the fiber. It will
emerge (depending on the distance) much weaker, lengthened in time
(“smeared out”), and distorted in other ways. The reasons for this are as
follows:
Attenuation
The pulse will be weaker because all glass absorbs light. The rate at
which light is absorbed is dependent on the wavelength of the light and
the characteristics of the particular glass. Typical absorption
characteristics of fiber for varying wavelengths of light are illustrated in
Figure 30 on page 72.
Polarization
Conventional communication optical fiber is cylindrically symmetric.
Light travelling down such a fiber is changed in polarization. (In current
optical communication systems this does not matter but in future
systems it may become a critical issue.)
Dispersion
Dispersion is when a pulse of light is spread out during transmission on
the fiber. A short pulse becomes longer and ultimately joins with the
pulse behind, making recovery of a reliable bit stream impossible. There
are many kinds of dispersion, each of which works in a different way, but
the most important three are discussed below:
1. Material dispersion (chromatic dispersion)
Both lasers and LEDs produce a range of optical wavelengths (a
band of light) rather than a single narrow wavelength. The fiber has
different refractive index characteristics at different wavelengths and
therefore each wavelength will travel at a different speed in the fiber.
Thus, some wavelengths arrive before others and a signal pulse
disperses (or smears out).
37 Another way of saying this is that light has many frequencies or colors.
2. Modal dispersion
When using multimode fiber, the light is able to take many different
paths or “modes” as it travels within the fiber. This is shown in
Figure 32 on page 74 under the heading “Multimode Step Index”.
The distance travelled by light in each mode is different from the
distance travelled in other modes. When a pulse is sent, parts of that
pulse (rays or quanta) take many different modes (usually all
available modes). Therefore, some components of the pulse will
arrive before others. The difference between the arrival time of light
taking the shortest mode versus the longest obviously gets greater
as the distance gets greater.
3. Waveguide dispersion
Waveguide dispersion is a very complex effect and is caused by the
shape and index profile of the core. However, this can be controlled
by careful design and, in fact, waveguide dispersion can be used to
counteract material dispersion as will be seen later.
Modal Noise
Modal noise is a phenomenon in multimode fiber. It occurs when a
connector doesn't make a completely accurate join and some of the
propagation modes are lost. This causes a loss in signal quality (and
some quantity) and is thus described as “noise”.
None of these effects is helpful to engineers wishing to transmit information
over long distances on a fiber. But much can be done about them.
1. Lasers transmit light at one wavelength only. 38 Furthermore, the light rays
are parallel with one another and in phase. Light Emitting Diodes (LEDs)
that emit light within only a very narrow range of frequencies can be
constructed. So the problem of dispersion due to the presence of multiple
wavelengths is lessened.
2. If you make the fiber thin enough, the light will have only one possible path -
straight down the middle. Light can't disperse over multiple paths because
there is only one path. This kind of fiber is called monomode or single-mode
fiber and is discussed in 4.1.2.3, “Fiber Geometry” on page 74.
3. The wavelength of light used in a particular application should be carefully
chosen, giving consideration to the different attenuation characteristics of
fiber at different wavelengths.
4. Types of dispersion that depend on wavelength can of course be minimized
by minimizing the linewidth of the light source (but this tends to add cost).
5. Material dispersion and waveguide dispersion are both dependent on
wavelength. Waveguide dispersion can be controlled (in the design of the
fiber) to act in the opposite direction from material dispersion. This more or
less happens naturally at 1300 nm 39 but can be adjusted to produce a
dispersion minimum in the 1500 nm band.
38 This is not exactly true. Lasers built for communications typically transmit a narrow range of different wavelengths. This is
discussed in 4.1.3, “Light Sources” on page 76.
39 This is a result of the core size and refractive index, and achieving this balance at 1300 nm was one reason for the choice of
core size and RI. 1300 nm was a good wavelength because in the early 1980s GaAs lasers were relatively easy to make
compared to longer-wavelength types.
Figure 30. Typical Fiber Infrared Absorption Spectrum. The curve represents the
characteristics of silicon dioxide (SiO2) glass.
There is a wide range of glasses available and their characteristics vary
depending on chemical composition. Over the past few years the transmission
properties of glass have been improved considerably. In 1970 the “ballpark”
attenuation of a silica fiber was 20 dB/km. By 1980 research had improved this
to 1 dB/km. In 1990 the figure was 0.2 dB/km. As the figures show, absorption
varies considerably with frequency and the two curves show just how different
the characteristics of different glasses can be.
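Because attenuation in dB adds linearly with distance, the improvement these figures represent is easy to quantify. A brief sketch:

```python
def power_fraction_remaining(atten_db_per_km, distance_km):
    # Convert the total loss in dB back to a linear power ratio
    total_loss_db = atten_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# The three generations of fiber quoted above, over a 10 km run:
for atten in (20.0, 1.0, 0.2):
    print(atten, power_fraction_remaining(atten, 10.0))
```

At 20 dB/km essentially nothing survives 10 km (a 200 dB loss); at 1 dB/km a tenth of the light remains; at 0.2 dB/km about 63% arrives.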
Some of the dopants (intentional impurities) added to the glass to modify the
refractive index of the fiber have the unwanted side effect of significantly
increasing the absorption. This is why single-mode fiber typically has lower
absorption than multimode - single-mode fiber contains less dopant. The conclusion
that can be drawn from the absorption spectrum is that some wavelengths will
be significantly better for transmission purposes than others. For ordinary silica
glass the wavelengths of 850 nm and 1100 nm look attractive. For the
better-quality, germanium-dioxide-rich glass, wavelengths of around 1300 nm and
1550 nm are better. All this depends on finding light sources that will operate in
the way we need at these wavelengths.
Figure 31. Typical Fiber Infrared Absorption Spectrum. The curve shows the
characteristics for a glass made from silicon dioxide with about 4% of germanium dioxide
(GeO2) added. The peak at around 1400 nm is due to the effects of traces of water in the
glass.
The above suggests that, even if fiber quality is not improved, we could get
10,000 times greater throughput from a single fiber than the current practical
limit.
This is the effect you see when looking upward from underwater. Except for
the part immediately above, the junction of the water and the air appears
silver like a mirror.
Light is transmitted (with very low loss) down the fiber by reflection from the
mirror boundary between the core and the cladding.
Multimode Fiber
The expectation of many people is that if you shine a light down a fiber,
then the light will enter the fiber at an infinitely large number of angles
and propagate by internal reflection over an infinite number of possible
paths. This is not true. What happens is that there is only a finite
number of possible paths for the light to take. These paths are called
“modes” and identify the general characteristic of the light transmission
system being used. Fiber that has a core diameter large enough for the
light used to find multiple paths is called “multimode” fiber. (For a fiber
with a core diameter of 62.5 microns using light of wavelength 1300 nm,
the number of modes is around 228.)
The problem with multimode operation is that some of the paths taken by
particular modes are longer than other paths. This means that light will
arrive at different times according to the path taken. Therefore the pulse
tends to disperse (spread out) as it travels through the fiber. This effect
is one cause of “intersymbol interference”. This restricts the distance
that a pulse can be usefully sent over multimode fiber.
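The number of modes can be estimated from the fiber's normalized frequency (the "V number"). The numerical aperture below is an assumption, and the simple V²/2 formula is only a step-index approximation, so this sketch will not necessarily reproduce the figure of 228 quoted above:

```python
import math

def v_number(core_diameter_um, wavelength_um, numerical_aperture):
    # V = (2 * pi * a / lambda) * NA, where a is the core radius
    radius = core_diameter_um / 2.0
    return 2.0 * math.pi * radius * numerical_aperture / wavelength_um

def approx_step_index_modes(v):
    # Classic step-index approximation: roughly V^2 / 2 modes
    return v * v / 2.0

v = v_number(62.5, 1.3, 0.275)   # 0.275 is an assumed NA, not from the text
print(round(v, 1), round(approx_step_index_modes(v)))
```

The key qualitative point survives any choice of NA: the mode count grows with the square of the core diameter and falls as the wavelength grows, which is why narrow cores and long wavelengths lead toward single-mode operation.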
Multimode Graded Index Fiber
One way around the problem of multimode fiber is to do something to
the glass such that the refractive index of the core changes gradually, so
that light travelling down the center of the fiber effectively travels more
slowly than light that is bouncing around. This system causes the light to
travel in a wave-like motion such that light in different modes travels in
glass of different refractive index. This means that light that travels the
longest distance goes faster. The net effect is that the light stays together as it
travels through the fiber and allows transmission for longer distances
than does regular multimode transmission. This type of fiber is called
“Graded Index” fiber. Within a GI fiber light typically travels in up to 800
modes.
Note that the refractive index of the core is the thing that is graded.
There is still a cladding of lower refractive index than the outer part of
the core.
Single-Mode Fiber
If the fiber core is very narrow compared to the wavelength of the light in
use then the light cannot travel in different modes and thus the fiber is
called “single-mode” or “monomode”. It seems obvious that the longer
the wavelength of light in use, the larger the diameter of fiber we can
use and still have light travel in a single mode. The core diameter used
in a typical monomode fiber is nine microns.
It is not quite as simple as this in practice. A significant proportion (up
to 20%) of the light in a single-mode fiber actually travels down the
cladding. For this reason the “apparent diameter” of the core (the
region in which most of the light travels) is somewhat wider than the
core itself. Thus the region in which light travels in a single-mode fiber
is often called the “mode field”, and it is the mode field diameter (rather than
the core diameter) that is usually quoted.
The big problem with fiber is joining it. The narrower the fiber, the harder it is to
join and the harder it is to build connectors. Connectors are needed in things
like patch cables or plugs and sockets. Single-mode fiber has a core diameter
of 4 to 10 µm (8 µm is typical). Multimode fiber can have many core diameters
but in the last few years the core diameter of 62.5 µm in the US and 50 µm
outside the US has become predominant. However, the use of 62.5 µm fiber
outside the US is gaining popularity - mainly due to the availability of equipment
(designed for the US) that uses this type of fiber.
4.1.3.1 Lasers
Laser stands for Light Amplification by the Stimulated Emission of Radiation.
Lasers produce far and away the best kind of light for optical communication.
• Laser light is single-wavelength only. This is related to the molecular
characteristics of the material being used in the laser. It is formed in
parallel beams and is in a single phase. That is, it is “coherent”.
This is not exactly true for communication lasers. See the discussion under
“Linewidth” below.
• Lasers can be controlled very precisely (the record is a pulse length of 0.5
femtoseconds 40).
• Lasers can produce relatively high power. In communication applications,
lasers of power up to 20 milliwatts are available.
• Because laser light is produced in parallel beams, a high percentage (50%
to 80%) can be transferred into the fiber.
• Most laser systems use a monitor diode to detect back facet power for
automatic power control. This can be used for diagnostic purposes.
40 10^-15 seconds.
41 So named because each frequency resonates on a different path within the cavity.
42 Often we want to limit the power somewhat artificially to stay within eye safety limits for example.
The wavelength of Fabry-Perot lasers varies by an enormous 0.4 nm per degree
Celsius of temperature variation. Most single-mode lasers are significantly
better than this, but temperature control is critical.
When most lasers are modulated by OOK (turning them on and off) they
produce a “chirp” at the beginning of each pulse (this is a transient
frequency shift of as much as several gigahertz). This is a problem in
WDM systems and ones using coherent receivers. This is caused by
instantaneous heating effects - after all, the energy (“light”) produced is
infrared (heat).
Switching Time and Modulation
Of course, a fundamental operational characteristic of any laser is which
modulation techniques are possible and how fast they can operate. In
general, all lasers can be modulated by OOK (on/off keying) and some
by FSK (frequency shift keying). Other modulation techniques require an
external modulator to be placed into the light beam after it is generated.
Tuning Range and Speed
In many proposed WDM systems, transmitters, and/or receivers need to
be switched between different wavelengths (channels). There are many
techniques for doing this. In general, the faster the device can switch,
the narrower will be the range of channels over which it can switch.
Another point is that tunable lasers are seldom capable of continuous
tuning over an unbroken range of wavelengths. When they are tuned
they “jump” from one wavelength to another (corresponding to the
resonance modes of the laser cavity).
Modes
In optics, a mode is a path that light may take through a system. Thus
multimode fiber is fiber that allows for multiple paths. A multimode laser is
one that allows multiple paths within its cavity and hence produces light of
multiple wavelengths. Such lasers do not produce multiple wavelengths
simultaneously. Rather, the laser will switch from one mode to another very
quickly (sometimes spending only a few picoseconds in any particular mode),
apparently at random, while sending a single pulse.
Thus the word “mode” relates to the path on which light is travelling at a
particular instant in time. Light produced in a “multimode laser” is not
“multimode light”. Light produced by a “multimode laser” travels in a single
mode along a single mode fiber perfectly well.
4.1.4.1 Coherent Detectors
It almost goes without saying that the output of the detectors discussed above is
not at the frequency (wavelength) of light (the electronics would not go that fast).
What you get is electrical pulses (hopefully) similar to the modulations in the
original signal.
This is just like the operation of a “crystal set”. A crystal set uses a tunable
resonant circuit to select the frequency and then the output is fed to a simple
half wave rectifier (the crystal). The output is just the original signal
rectified.
This, very simple, method of detection is called “incoherent detection”.
Most electronic radio receivers use a quite different method called “heterodyne
detection”. In the optical world heterodyne detection is called “coherent
detection”. (In electronics the word coherent is used in a much narrower sense.
It is reserved for systems where the detector “locks on” to the phase of the
received signal.)
Optical coherent detection (as illustrated in Figure 33 on page 82) has two major
advantages over incoherent detection:
1. Coherent receivers can be significantly more sensitive than incoherent ones (by 15 to
20 dB). In general this is true of systems reported in the research literature.
However, some researchers (Green 1992) claim that incoherent detectors
built with an optical preamplifier can be just as sensitive and are a lot less
complex.
2. In WDM systems, where there are many channels packed closely together,
coherent detection allows a much better rejection of interference from
adjacent channels. (This allows channels to be packed more closely
together.)
In addition, there are some other advantages, such as better noise rejection.
Optical coherent detection systems can be used for all the possible types of
modulation (ASK, FSK, PSK, PolSK); however, they require a linewidth
significantly less than the bandwidth of the modulating signal. This means that
you can't use coherent detectors with LED transmitters (or with unmodified
Fabry-Perot lasers). Also, they will not work in the presence of any significant
level of frequency chirp. Thus, if ASK (OOK) is used, you either have to do
something to the transmitter to minimize chirp or you should use a fixed laser
with an external modulator.
There are two significant problems with coherent reception, both of which can be
solved, but at a cost:
1. Optical coherent receivers are highly polarization sensitive and standard
single-mode optical fiber does not respect polarization. However,
polarization changes slowly and in single-channel systems it can be
compensated for. In multichannel WDM systems, acquiring the polarization
of a channel quickly after tuning is quite a challenge. (However in a WDM
system, if we used fiber that retained polarization, we could put alternate
channels at orthogonal polarizations and pack the channels even more
closely together.)
Various solutions (including using two detectors with orthogonal polarization)
have been suggested - but this would significantly increase the cost of such
a device.
2. Stabilization of the frequencies of both transmitter and receiver within close
tolerances is very difficult.
Prototype coherent receivers have been available in laboratories since the
middle of the 1980s and have been used in many experiments. However, to the
knowledge of the author, coherent receivers are not yet available in any
commercial system.
4.1.5 Filters
In current optical systems, filters are seldom used or needed. They are
sometimes used in front of an LED to narrow the linewidth before transmission,
but in few other roles.
In proposed future WDM networks, filters will be very important for many uses:
• A filter placed in front of an incoherent receiver can be used to select a
particular signal from many arriving signals.
• WDM networks are proposed which use filters to control which path through
a network a signal will take.
There are many filtering principles proposed and many different types of devices
have been built in laboratories.
One of the simplest is the Fabry-Perot filter. This consists of a cavity bounded
on each end by a half-silvered mirror. 43
• Light is directed onto the outside of one of the mirrors.
• Some is reflected and some enters the cavity.
• When it reaches the opposite mirror some passes out and some is reflected
back.
• This continues to happen with new light entering the cavity at the same rate
as light leaves it.
If you arrange the cavity to be exactly the right size, interference patterns
develop which cancel unwanted wavelengths. The device can be tuned quite
quickly by attaching one of the mirrors to a piezo-electric device (you can get
accuracy here to less than the diameter of an atom!).
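The behavior of such a cavity can be sketched numerically. Transmission follows the Airy function, peaking whenever the round trip is a whole number of wavelengths; the cavity length and finesse below are illustrative assumptions, not figures from the text:

```python
import math

def fp_transmission(wavelength_um, cavity_length_um, finesse_coeff=100.0):
    # Airy function for an ideal lossless Fabry-Perot cavity
    round_trip_phase = 4.0 * math.pi * cavity_length_um / wavelength_um
    return 1.0 / (1.0 + finesse_coeff * math.sin(round_trip_phase / 2.0) ** 2)

def free_spectral_range_ghz(cavity_length_um, refractive_index=1.0):
    # Spacing between adjacent transmission peaks: FSR = c / (2 n L)
    c_um_per_s = 2.998e14   # speed of light in micrometers per second
    return c_um_per_s / (2.0 * refractive_index * cavity_length_um) / 1e9

# A 30 um air-gap cavity passes 1.5 um light exactly: the 60 um round
# trip is a whole number (40) of wavelengths.
print(fp_transmission(1.5, 30.0))        # ~1.0 (a transmission peak)
print(free_spectral_range_ghz(30.0))     # ~5000 GHz between peaks
```

Moving the mirror changes the cavity length and hence the selected wavelength, which is exactly what the piezo-electric tuning described above does.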
There is an ingenious variation of the Fabry-Perot filter. Two pieces of fiber are
used with their ends polished and silvered. The ends are placed precisely
opposite one another with a measured gap (this is the hard part). This avoids
the cost of getting the light into and out of a “regular” FP filter - because it
arrives and leaves on its own fiber.
There are many kinds of active, tunable filters which will be important in WDM
networks. These, like lasers, have the characteristic that the wider the tuning
range, the slower the tuning time.
43 This is really an optical version of the electronic “tapped delay line”, “transversal” filter or Surface Acoustic Wave (SAW)
filter. In the digital signal processing world this is done with a shift register. This is the physical realization of the
mathematical process of determination of a single term in a Fourier series.
In fairness, you can call OOK a special case of amplitude shift keying (ASK), in
which a number of discrete signal amplitude levels are used to carry a digital
signal; OOK simply uses the two levels “on” and “off”.
Most digital communication systems using fiber optics use NRZI encoding. This
means that when you have a transition from light to no light or from no light to
light, a “1” bit is signaled. When there are two successive pulses of light or two
successive periods of dark then a zero bit is signaled. This is discussed in 2.1.2,
“Non-Return to Zero Inverted (NRZI) Coding” on page 16.
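The NRZI rule described above is easy to express in code. A minimal sketch (hypothetical helper functions, with light = 1 and dark = 0):

```python
def nrzi_encode(bits, initial_level=0):
    # A "1" bit is sent as a change of level (light <-> dark);
    # a "0" bit repeats the previous level.
    levels, level = [], initial_level
    for bit in bits:
        if bit == 1:
            level ^= 1
        levels.append(level)
    return levels

def nrzi_decode(levels, initial_level=0):
    # A transition between successive levels signals a "1" bit.
    bits, previous = [], initial_level
    for level in levels:
        bits.append(1 if level != previous else 0)
        previous = level
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
line = nrzi_encode(data)
print(line)                        # [1, 1, 0, 1, 1, 1, 0]
assert nrzi_decode(line) == data   # the receiver recovers the data
```

One weakness of plain NRZI is visible here: a long run of "0" bits produces no transitions at all, which is why systems such as FDDI pair it with a block code (4B/5B) that bounds the run length so the receiver can keep its timing.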
44 It is possible to modulate a laser signal in exactly the same way as regular AM radio. In fact, there are numerous commercial
systems doing just this. They just vary the amplitude of the optical signal in exactly the same way as we vary the amplitude
of the RF carrier in AM radio.
4.1.6.4 Polarization Modulation (PolSK)
Lasers produce linearly polarized light. Coherent detectors are very strongly
polarization sensitive (it is one of the big problems). Another modulation
dimension can be achieved (potentially) by introducing polarization changes.
Unfortunately, current fiber changes the polarization of light during transit - but
there are techniques to overcome this.
This is not an available technique (not even in the lab) but feasibility studies are
being undertaken to determine if PolSK could be productively used for fiber
communications.
4.1.7 Repeaters
Until the commercial availability of optical amplifiers in 1992, the only way to
boost an optical signal was to convert it to electrical form, amplify or regenerate
it, and then convert it to optical form again. See 4.1.8, “Optical Amplifiers” on
page 86.
Repeaters have been the method of choice for boosting an optical signal.
(Electronic repeaters were discussed in 2.1.6.3, “Repeaters” on page 22.) A
repeater is a full receiver which reconstructs the bit stream and its timing. This
bit stream and its timing are used to drive a transmitter. This means that the
repeated signal has all dispersion and noise removed. However, repeaters are
more complex and more costly than simple amplifiers. Repeaters are very
inflexible. They have to be constructed for exactly the wavelength, protocol, and
speed of the signal being carried.
Figure 34. Erbium Doped Optical Fiber Amplifier. Although the device is powered
electrically, the amplification process is totally optical and takes place within a short
section of rare earth doped, single-mode fiber.
This is very significant because the amplifier is much less prone to failure than
an electrical repeater, operates at almost any speed, and is not dependent on
the digital characteristics (such as the code structure) in the signal. It should
45 Repeaters in electrical communication systems are discussed in 2.1.6.3, “Repeaters” on page 22.
also cost significantly less. Many people believe that this device has begun a
“new generation” in optical systems.
The amplifier itself is simply a short (a few feet) section of fiber which has a
controlled amount of a rare earth element (Erbium) added to the glass. This
section of fiber is, itself, a laser.
The principle involved here is just the principle of a laser and is very simple.
Atoms of Erbium are able to exist in several energy states (these relate to the
alternative orbits which electrons may have around the nucleus). When an
Erbium atom is in a high-energy state, a photon of light will stimulate it to give
up some of its energy (also in the form of light) and return to a lower-energy
(more stable) state. This is called “stimulated emission”. “Laser” after all is an
acronym for “Light Amplification by the Stimulated Emission of Radiation”.
To make the principle work, you need a way of getting the Erbium atoms up to
the excited state. The laser diode in the diagram generates a high-powered
(10 milliwatt) beam of light at a frequency such that the Erbium atoms will
absorb it and jump to their excited state. (Light at 980 or 1,480 nanometer
wavelengths will do this quite nicely.) So, a (relatively) high-powered beam of
light is mixed with the input signal. (The input signal and the excitation light
must of course be at significantly different wavelengths.) This high-powered light
beam excites the Erbium atoms to their higher-energy state. When the photons
belonging to the signal (at a different wavelength which is not absorbed by
Erbium) meet the excited Erbium atoms, the Erbium atoms give up some of their
energy to the signal and return to their lower-energy state. 46 A significant point is
that the Erbium gives up its energy in exactly the same phase and direction as
the signal being amplified, so the signal is amplified along its direction of travel
only. (This is not unusual - when an atom “lasers” it always gives up its energy
in the same direction and phase as the incoming light. That is just the way
lasers work.)
The significant thing here is that Erbium only absorbs light (and jumps to a
higher-energy state) if that light is at one of a very specific set of
wavelengths. Light at other wavelengths takes energy from the Erbium and
is amplified.
So the device works this way. A constant beam of light (feedback controlled) at
the right frequency to excite Erbium atoms is mixed with the input signal. This
beam of light constantly keeps the Erbium atoms in an excited state. The signal
light picks up energy from excited Erbium atoms as it passes through the section
of doped fiber.
46 This doesn't happen for all wavelengths of signal light. There is a range of wavelengths approximately 24 nm wide that is
amplified.
Other types of optical amplifiers (solid-state ones) exist but are still in the
research stage.
4.1.8.1 Signal Mixing Devices
In any optical network it is necessary to combine signals and to split them
multiple ways. Such devices are in common use: 1x2, 2x2, and 2x1 couplers are
commercially available. Furthermore, they have very low loss, though of course
some loss is unavoidable.
In proposed WDM network designs, there is often the need for a “reflective star”
(a device that accepts many input signals, mixes them, and then splits the
combined signal in as many directions as there were inputs). In laboratories
these have been built of many 2x2 couplers cascaded together. Many
researchers feel that this becomes impractical for devices larger than 100x100.
New devices have been designed to perform the “star” function in a single
device. These are called “wavelength flattened fused fiber arrays”. So far these
have not become commercial, but to realize WDM LANs stars of up to 1000x1000
will be necessary.
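The difficulty of large stars can be quantified: an ideal NxN star divides each input's power N ways, and building one from 2x2 couplers takes log2(N) stages. A rough sketch:

```python
import math

def star_splitting_loss_db(n_ports):
    # An ideal N x N star delivers 1/N of each input's power per output
    return 10.0 * math.log10(n_ports)

def cascaded_2x2_stages(n_ports):
    # Stages of 2x2 couplers needed to reach N ports
    return math.ceil(math.log2(n_ports))

for n in (16, 100, 1000):
    print(n, round(star_splitting_loss_db(n), 1), cascaded_2x2_stages(n))
```

So a 1000x1000 star imposes at least 30 dB of splitting loss before any excess loss in the couplers themselves is counted - a budget that is hard to meet without the optical amplifiers discussed earlier.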
Fiber cables are made to suit the application they are to perform and there are
hundreds, perhaps thousands of types. The kinds of variations you see between
cables are as follows:
• Number of fibers in a single cable. This typically ranges from two to around
100. Outdoor telephone company single-mode fiber cables tend to have
about 30 fibers in the single cable. Multiple cables are usually installed on
routes where 30 fibers are not enough.
• Strength members. Many cables, particularly for outdoor applications, have
steel wire at the core to provide strength.
• The optical characteristics of the fiber itself. Single-mode or multimode, loss
in dB per km, susceptibility to temperature changes, etc.
• Electrical wire. In some cable situations it is necessary to power repeaters
over long distances. One example of this is in submarine cables. Electrical
power cabling is often included to deliver power to the repeaters.
• Waterproofing. It sounds illogical, but waterproofing is often more
important in the fiber optical environment than it is in the electrical world!
In the early days of optical data communication (1983), IBM specified an optical
cable for future use which was 100/140 µm in diameter. The 100 micron core is
very wide and is certainly not the best size for communication. However, at the
time it was the best specification for making joints in the field. (This
specification is still supported by some systems - including FDDI.)
A cable join is a type of weld. The cable ends are cut, polished, butted up to
one another and fused by heat. (Incidentally, with some silica fibers you need
quite a high temperature - much higher than the melting point of ordinary soda
glass.) In practice, a light loss of only .1 dB is the current budget for power loss
in a single-mode fiber join. But it should be realized that .1 dB is the total loss
of one kilometer of cable.
When two fibers of different diameters are joined with a connector, a lot of
light is usually lost. This is a common situation, for
example, in the IBM ESCON channel system where fibers with a 62.5 µm core
can be connected to fibers with a 50 µm core. Loss of light in this situation
(about 3 dB) is unavoidable.
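The core-diameter part of this loss can be estimated from the ratio of core
cross-sectional areas. The sketch below considers geometry only; mismatch in
numerical aperture adds further loss, which is how the total approaches the
3 dB quoted above.

```python
import math

def core_mismatch_loss_db(large_um: float, small_um: float) -> float:
    """Loss (dB) when light passes from a larger core into a smaller one,
    estimated from the ratio of core cross-sectional areas only."""
    area_ratio = (large_um / small_um) ** 2
    return 10 * math.log10(area_ratio)

# ESCON case: a 62.5 micron core feeding a 50 micron core
print(round(core_mismatch_loss_db(62.5, 50.0), 2))  # geometry alone: 1.94 dB
```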
4.1.10 Transmission System Characteristics
The characteristics of various transmission systems are summarized in Table 6.
Medium / System            Data Rate   Distance   BW x Distance   Status
Copper                       2 Mbps      2 km       4 M
Multimode LED (802.5)       32 Mbps      2 km      64 M           in use
FDDI                       100 Mbps      2 km     200 M
Calculating Dispersion
Waveguide dispersion is usually quoted in ps per nm per km at a given
wavelength. At 1500 nm a typical dispersion figure is 14 ps/nm/km. That is,
a pulse (regardless of its length) will disperse by 14 picoseconds per
nanometer of spectral linewidth per kilometer of distance travelled in the
fiber. So, in a typical single-mode fiber using a laser with a linewidth of 6 nm
over a distance of 10 km we have:
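Multiplying the quoted figures gives the total pulse spreading:

```python
dispersion_ps_per_nm_km = 14   # typical waveguide dispersion at 1500 nm
linewidth_nm = 6               # laser spectral linewidth
distance_km = 10

spread_ps = dispersion_ps_per_nm_km * linewidth_nm * distance_km
print(spread_ps)  # 840 ps of pulse dispersion over the link
```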
47 And that 1300 nm GaAs (Gallium Arsenide) lasers were easily available.
48 Recently researchers have succeeded in building Praseodymium doped fiber amplifiers which operate in the 1300 nm band but
these are inferior to the Erbium doped ones and are not yet commercially available. Nevertheless a gain of 23 dB at 1310 nm
has been reported in the literature.
the fiber) and it is now possible (indeed standard practice) to balance the two
forms of dispersion at 1500 nm. Another way of minimizing dispersion (both
material and waveguide) is to use a narrow linewidth laser. These techniques
combined have meant that almost all new long distance single-mode systems
are being installed at 1500 nm.
There is one subtlety: the bandwidth × distance product is not really a
constant for all lengths of fiber. There is a parameter called the “cutback
gamma” which is used to give the real variation of bandwidth versus distance.
The problem arises when you use short pieces of fiber to measure the
bandwidth and attempt to extrapolate to long distances. You end up predicting a
bandwidth that is too low. Conversely if you measure the bandwidth at really
long distances (which is what the fiber manufacturers do) and try to extrapolate
back to short distances you will predict a bandwidth that is actually much higher
than it is in reality. The simple formula is that BW is proportional to 1/length.
The refined formula is BW is proportional to 1/(length**gamma). 49 Typical
numbers for gamma are about 0.7-0.8.
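The difference between the simple 1/length rule and the refined
1/(length**gamma) rule can be illustrated with hypothetical figures (the
400 MHz measurement and the gamma of 0.75 below are assumptions chosen for
illustration):

```python
def bw_naive(bw_measured, l_measured, l_target):
    """Simple rule: bandwidth proportional to 1/length."""
    return bw_measured * l_measured / l_target

def bw_gamma(bw_measured, l_measured, l_target, gamma=0.75):
    """Refined rule: bandwidth proportional to 1/(length**gamma)."""
    return bw_measured * (l_measured / l_target) ** gamma

# Measure 400 MHz on a 1 km sample, then extrapolate to 10 km
print(round(bw_naive(400, 1, 10), 1))   # 40.0 MHz - too pessimistic
print(round(bw_gamma(400, 1, 10), 1))   # 71.1 MHz with gamma = 0.75
```

As the text says, the naive extrapolation from a short sample predicts a
bandwidth at long distance that is lower than the fiber actually delivers.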
To take proper account of this, many fiber suppliers give a table of available
bandwidth for each of many typical distances.
The critical issue is to look at what bandwidth the signal requires. Remember
that the cable is being quoted as an analog bandwidth and the signal is a digital
“baseband” signal. The correct thing to do here is to find the bandwidth
requirement of the signal you are using. What some people assume is that for
NRZ data you need a bandwidth that is 0.5 of the data frequency. So, FDDI
would be 1/2 of 125 Mbaud = 62.5 MHz.
This is not very accurate because it ignores the difference between the
frequency requirements of the pulsed (square wave) signal and the sine wave
analog signal with which the cable was measured. The requirement here is
determined by the characteristics of the receiver. For FDDI a practical system
might require 0.8 of the data frequency (baud rate) as a rule of thumb. It is
conceivable that a system might require up to 3 times the baud rate but this
seems quite unlikely.
Thus, if you have a transmitter of power -10 dBm and a receiver that requires a
signal of power -20 dBm (minimum) then you have 10 dB of link budget. So you
might allow:
• 10 connectors at .3 dB per connector = 3 dB
• 2 km of cable at 2 dB per km (at 1300 nm) = 4 dB
• Contingency of (say) 2 dB for deterioration due to aging over the life of the
system.
This leaves us with a total of 9 dB system loss. This is within our link budget
and so we would expect such a system to have sufficient power. Dispersion is a
different matter and may (or may not) provide a more restrictive limitation than
the link budget.
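The budget calculation above can be sketched as follows:

```python
def link_margin_db(tx_dbm, rx_min_dbm, losses_db):
    """Link margin = available budget minus the sum of all losses."""
    budget = tx_dbm - rx_min_dbm
    return budget - sum(losses_db)

losses = [
    10 * 0.3,  # 10 connectors at 0.3 dB each = 3 dB
    2 * 2.0,   # 2 km of cable at 2 dB/km (1300 nm) = 4 dB
    2.0,       # contingency for aging over the life of the system
]
print(round(link_margin_db(-10, -20, losses), 1))  # 1.0 dB of margin remains
```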
• High power is very important to achieve maximum distance between
repeaters.
• Component reliability is also very important because these systems
are typically multiplexed and a single failure affects many users.
Local Area Data Communications Systems
The most important thing about this kind of system is its need for
flexibility.
• The cost of transmitters and receivers, etc., is most critical (because
there are a large number of these and they form a large proportion
of the total).
• Cable cost is still important but much less so than in the wide area
environment. Compared to electrical cables, fiber cables are much
easier to install around a building (they are lighter and more
flexible).
• The critical thing in this environment is joining the cable and the
performance of connectors and patch cables.
• High power is important so that losses incurred in connectors and
patch cables can be accommodated.
• Reliability is also very important because a single failure can disrupt
the entire system.
So, in both types of application it is important to have high power and reliability.
These requirements lead to different system choices:
• For wide area telecommunications, single-mode fiber and long-wavelength
lasers constitute the system parameters of choice. In the 1980s, this meant
1300 nm wavelength lasers were predominant. In the 1990s, these have
been replaced by 1500 nm systems (in new plant) almost universally. The
remaining 1300 nm systems exist because the fiber that was installed has its
dispersion minimum at 1300 nm and changing to 1500 would have meant
digging up the ground and replacing the fiber. Recently, however, there has
been the invention of a device which can selectively correct
wavelength-dependent dispersion effects and can be used to equalize a fiber
designed for 1300 nm to 1500 nm.
• For Local Data Communications, the choice is for shortwave lasers (or LEDs)
and multimode fibers. In the 1980s, this meant wavelengths in the 850 nm
range, but in the 1990s, there has been a general move to 1300 nm (still with
LED transmitters and MM fiber). FDDI and the new ATM local area
connections have been standardized at 1300 nm.
However, we are about to witness a switch back to shorter wavelengths for
short-distance, high-speed connections. “CD lasers” are the kind of lasers
used in compact disk players and laser printers, etc. These lasers are very
low-cost (less than US $10) and are made by some 20 or so manufacturers.
Total industry volume of these lasers is about 4 million per year (1995).
These operate typically between 800 and 850 nm (for the CD player
application, the shorter the wavelength the better). The newly standardized
Fibre Channel (for interconnecting computers within a machine room) allows
for transmission at 1 Gbps over a few hundred meters using these lasers.
There is a current proposal to use these lasers at 622 Mbps for ATM local
area connections (up to 300 meters).
In the WAN, when a user (almost always a telephone company) wants to install a
link, then they do a full study of the characteristics of the link involved, design
and analyze a solution, and custom-build it. This is easily justified by the fact
that the cost of installing links of this kind is very high.
In the local area world, the user typically wants to install a system which works
with “off-the-shelf” components. This means that you need a few simple
“rules-of-thumb” for interconnection of standardized components. In this
situation components are over-specified to allow for extreme situations. For
example, using FDDI (100 Mbps) over MM fiber (62.5 micron GI) the standard
says that the maximum distance allowed is 2 kilometers. In fact, if you use
good-quality cable and not too many connectors you can go to 5 kilometers
safely with most available equipment. But suppliers generally will not warrant
their equipment for use outside the guidelines inherent in the standard. This is
because the cost of people to do optimal design exceeds the amount saved
through optimization.
Optical engineering is significantly more complex than this short chapter might
suggest - designing optimal networks outside of manufacturers′ guidelines is a
highly skilled job. Be warned.
4.1.13.2 Solitons
Some researchers (Mollenauer, 1991) are working on a novel method of
propagating pulses in a fiber without dispersion. The word “soliton” is a
contraction of the phrase “solitary solution” because the phenomenon
represents a single solution to the propagation equation. The effect works this
way:
• The presence of light changes (increases) the refractive index of glass (very
slightly).
• Light travels faster in glass of a lower refractive index.
• If you have a pulse of sufficient intensity and short-enough duration 50 the
faster (high-frequency components) at the beginning of the pulse are slowed
down a bit and the slower (low-frequency components) in the back are
speeded up.
• Thus, if the pulse length and the intensity are right, the pulse will stay
together without dispersion over quite a long distance. 51 Note, it still suffers
attenuation and therefore requires amplification to retain its peculiar
non-dispersive characteristics.
Making the system work involves using amplifiers at regular intervals and
working with a signal which is significantly more intense than typical existing
systems.
This technology has not yet reached the level of laboratory prototype, but the
technology looks very attractive for long-distance links in the future.
Many factors influence the degree of hazard. Light intensity, wavelength, and
exposure duration are the most obvious. The intensity of ambient light is also
important. Because the pupil in the eye closes in the presence of bright
sunlight, laser emissions (say from surveying instruments) viewed in bright
sunlight are much less of a hazard than when the same emissions are viewed in
a darkened room.
51 Solitons can happen in other media where waves propagate such as waves in water. The Scientific American magazine of
December 1994 has an excellent article on this subject.
A “Class 1” laser is defined as one that “is inherently safe (so that the maximum
permissible exposure level cannot be exceeded under any condition), or are safe
by virtue of their engineering design”. Most (but not all) communication lasers
fall into this category. It is common, however, for a communication laser to have
a Class 1 emission level at the point of entry to the fiber but a much higher level
once the covers are removed from the device.
The only good reference is the appropriate standard itself (these have minor
differences country by country). However, as a rough guide, the following are
the Class 1 limits for exposure durations of up to 10 000 seconds:
Wavelength 700 to 1050 nm.
1.2 x 10^-4 x C Watts, where C is a correction factor depending on
wavelength (varies from 1 to 5).
Wavelength 1050 to 1400 nm.
6 x 10^-4 x C Watts.
Wavelength longer than 1400 nm.
7 x 10^-3 Watts.
Notice the large variation in allowable levels with wavelength.
The maximum allowable launch power for FDDI is -6 dBm at 1300 nm. This
corresponds to 0.25 mW of power. Since the Class 1 limit at 1300 nm is 0.6 mW we
can conclude that any FDDI transmitter that meets the FDDI specification for
maximum power output also meets the Class 1 standard at launch into the fiber.
(Under the covers it may not.) Another thing that should be kept in mind is that
the limit at 1550 nm is significantly higher than the limit at 1300 nm. This is
another advantage for longer-wavelength systems.
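The dBm figures above convert to milliwatts as follows:

```python
def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts (0 dBm = 1 mW)."""
    return 10 ** (dbm / 10)

print(round(dbm_to_mw(-6), 2))  # 0.25 mW - the FDDI maximum launch power
```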
A careful analysis must be made of any optical fiber communication with laser
power emission levels approaching 1 milliwatt or more. This is particularly true
for short-wavelength diode systems, which have a lower allowable power level.
In the US, the Food and Drug Administration (FDA) regulates and
enforces a laser product performance standard through the Center for Devices
and Radiological Health (CDRH). The standard is Part 1040.10 of Title 21,
Subchapter J of the US Code of Federal Regulations. The standard applies to
laser communication products as well as all other laser products. All laser
products sold in the US must be certified by the manufacturer with CDRH.
Throughout the rest of the world, IEC 825 is the primary standard, and is the
base document for the European Norm 60825, which is a mandatory standard for
laser products in Europe. All IBM laser products are certified to IEC 825. This is
the most restrictive of the current standards.
A voluntary standard in the US, ANSI/Z136.2, addresses the specific topic of the
safe use of optical fiber communication systems utilizing laser diode and LED
sources. It contains all the basic mathematical relations to describe what are
safe levels as a function of laser (LED) wavelength, NA of multimode fibers or
mode field diameters of single-mode fibers, use of optical aids or not, viewing
distance, etc. The criteria document for the development of this standard is a
technical paper entitled “Toward the development of laser safety standards for
fiber-optic communication systems” by R.C. Petersen and D.H. Sliney (see
bibliography).
The kinds of existing networks that users want to integrate can be summarized
as follows:
• Traditional data networks
• Voice networks
• Interconnected LAN networks
• Multiprotocol networks
In addition there are opportunities for applications using:
• Image
• Full-motion video
Traditional data networks were built to handle both interactive and batch data
but were not built to handle image, voice or video traffic. The new types of
traffic put a completely new set of requirements onto the network.
52 Another peculiarity here is that the difference between the peaks and the troughs in data traffic becomes greater as the
network gets larger. This is not due to network size per se but rather is an effect that follows from the same cause.
Networks get larger because terminal costs decrease. As the cost of terminals and attachments decreases, users are able to
afford many terminals for dedicated functions. An example is in the development of banking networks. In the early networks
there was only one (expensive) terminal per branch and work was “queued” for it. It was in use all of the time with a
dedicated operator (with others taking over during lunch). Thus there was very little variance in the traffic over time (though
the mixture of transaction types changed quite radically). Now, with cheaper terminals, most bank branches have many
terminals and they are operated by their direct users, not by dedicated operators. Thus, in midmorning for example, after the
mail arrives, there is a processing peak with every terminal in use. At other times there can be little or no traffic.
53 There is an unfortunate conflict here in the usage of the word “block.” In the telephone world it describes the action of
preventing a call being set up due to lack of resources. In the data world a “block” is a logical piece of data which is kept
together for transport through the network.
54 At the link level, the sender is always known regardless of the content of the block. Later when released from the context of
the link, the only identification for the block is the routing information within the block itself.
A traditional IBM 3270 character screen showing multiple fields and many colors
averages about 2,500 bytes (the screen size is 1,920 bytes but other information
relating to formatting and field characteristics is present). The same screen
displayed as an image could be as much as 300 KB.
Data Rate
But this is altogether the wrong way to look at video. Over history we have
broadcast video (a PAL signal requires about seven MHz bandwidth) over a fixed
rate channel. Every point in the picture was sent (although via analog
transmission) in every frame. But the information content of a video frame is
inherently variable. The point about video is that the majority of frames are little
different from the frame before. If a still picture is transmitted through a video
system, all we need to transmit is the first frame and then the information
content of each subsequent frame is one bit. This bit says that this frame is the
same as the one before!
If a video picture is taken of a scene such as a room, then only a data rate of 1
bit per frame is necessary to maintain the picture (that is, 25 bps for PAL). As
soon as a person enters and walks across the room then there is much more
information required in the transmission. But even then much of the picture
area will remain unaffected. If the camera is “panned” across the room, then
each frame is different from the one before but all that has happened is that the
picture has moved. Most pixels (picture elements - bit positions) move by the
same amount and perhaps we don′t need to retransmit the whole thing.
There are many examples, including the typical head and shoulders picture of a
person speaking where most of the picture is still and only the lips are moving.
But in a picture of a waterfall many pixels will be different from ones before and
different in non-systematic ways. A video picture of a waterfall has a very high
information content because it contains many non-systematic changes.
The net result of the above is the conclusion that video is fundamentally variable
in the required rate of information transfer. It suggests that a variable rate
channel (such as a packet network) may be a better medium than a fixed rate
TDM channel for video traffic. Consider the figure below:
This is typical of existing systems that transmit video over a limited digital
transmission channel. Systems exist where quite good quality is achieved over
a 768 Kbps digital channel. When the signal is digitally encoded and
compressed, the output is a variable rate. But we need to send it down a fixed
capacity channel. Sometimes (most of the time) the required data rate is much
lower than the 768 Kbps provided. At other times the required data rate is much
higher than the rate of the channel. To even this out a buffer is placed before
the transmitter so that if/when the decoder produces too much data for the
channel it will not be lost. But when the data arrives at the receiver end of the
channel data may not arrive in time for the next frame, if that frame contained
too much data for the channel. To solve this, a buffer is inserted in the system
and a delay introduced so there will be time for irregularities in reception rate to
be smoothed out before presentation to the fixed rate screen.
Buffers, however, are not infinite and if the demands of the scene are for a high
data rate over an extended period of time, then data is lost when the buffers are
filled up (overrun). This is seen in “full motion” video conference systems which
typically operate over a limited channel. If the camera is “panned” too quickly
then the movement appears jerky and erratic to the viewer (caused by the loss
of data as buffers are overrun).
It is easy to see from the above example that it is quite difficult to fit video into a
limited rate channel. Always remember, however, that the average rate required
in the example above will be perhaps ten times less than the 768 Kbps provided
and that most of the channel capacity is wasted anyway!
Timing Considerations
Video traffic is like voice in one important respect - it is isochronous. Frames (or
lines) are delivered to the network at a constant rate and when displayed at the
other end must be displayed at the same rate. But packet networks tend to
deliver data at an uneven rate (this is sometimes called “packet jitter”).
Something needs to be done at the receiver end to even out the flow of packets
to a constant rate. As with voice, this can be done by inserting a planned delay
factor (just a queue of packets) at the receiver.
Redundancy
Even more than voice, video is very redundant indeed. The loss or corruption of
a few bits is undetectable. The loss of a few lines is not too much of a problem
since if we display the line from the previous frame unchanged, most times the
loss will be undetected. Even the loss of a frame or two here and there doesn′t
matter much because our eyes will barely notice. Of course it must be noted
that when video is digitally coded and compressed, loss or corruption of packets
will have a much larger effect (because the data is now a lot less redundant).
Video Applications
Very often video applications are for one-way transmission (as in viewing
television or a movie). In this case the amount of delay that we may insert into
the system without detriment can be quite great (perhaps ten seconds or more).
The discussion above concluded that packet networks are a natural medium for
video transmission. But certainly we don′t mean “traditional” packet networks.
Many, if not most, existing packet networks don′t have sufficient total capacity to
All networks of finite capacity encounter congestion at various times. But with
video (as with voice) you can′t slow down the input rate to the network in order
to control congestion (as we do in data networks) because a video frame arriving
too late is simply garbage. If the network is congested the best we can do is to
throw some packets away until the network returns to normal. If this happens
only very infrequently, then video and voice users will not get too upset, but if it
happens often then the system can become unusable.
The concept is very simple. Imagine that a particular byte of encoded data
represents the intensity level of a particular point on the screen. A simple HSC
technique might be to take the high order four bits and send them in one packet
(marked essential) and the low order four bits in a different packet (marked
non-essential). In the normal case when the packets arrive at the destination the
byte is reconstructed. In the case of congestion, perhaps the packet containing
the less important low order bits has been discarded. The receiver would then
assume the low order four bits have been lost and treat them as zeros. The
result would be to give 16 levels of intensity for the particular point rather than
the 256 levels that would have been available had the less important packet not
been discarded. In practice, HSC techniques need to be designed in conjunction
with the encoding (and compression) methods. These can be very complex
indeed.
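The four-bit split described above can be sketched as follows (a toy
illustration of the principle, not a real codec):

```python
def split_sample(byte):
    """Split an 8-bit intensity sample into an essential high nibble and a
    non-essential low nibble, carried in separate packets."""
    return byte & 0xF0, byte & 0x0F

def reconstruct(high, low):
    """Rebuild the sample; if the non-essential packet was discarded, treat
    the low-order bits as zero (16 intensity levels remain instead of 256)."""
    return high | (low if low is not None else 0)

high, low = split_sample(0xAB)
print(hex(reconstruct(high, low)))   # 0xab - both packets arrived
print(hex(reconstruct(high, None)))  # 0xa0 - low-order packet discarded
```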
In principle, this is not too different from what we do in the analog broadcasting
environment.
Error Control
The worst problem in processing video is packet jitter (erratic delays in packet
delivery). Recovery from link errors by retransmission of data is not usable
within a packet network containing video for this reason. The best thing to do
with erred packets is to discard them immediately. Mis-routing due to errors in
the destination field in the header can have catastrophic effects. Packets should
have a frame check sequence field which should be checked every time the
packet travels over a link and the packet discarded if an error is found.
The best technique for handling errors in video involves using the information
from the previous frame and whatever has been received of the current frame to
build an approximation of the lost information. A suitable strategy might be to
just continue displaying the corresponding line from the previous frame, or if
only a single line is lost, extrapolating the information from the lines on either
side of the lost one.
High-Quality Sound
High-quality sound (CD-quality stereo) involves a very high bit rate. Regular CDs
use a bit rate of 4 Mbps. Encoding sound is, in principle, the same problem as
for voice but with a few differences for the network:
• High-quality sound (such as a film soundtrack) is continuous - unlike voice
transmission where talk exists in “spurts”.
• The data rate is much higher (but the same compression techniques that
worked for voice also work here).
To meet the above requirements networks will need to have the following
characteristics:
Totally Hardware Controlled Switching
There is no way that current software-based packet switched
architectures can achieve even one hundredth of the required throughput
- even assuming much faster processors.
Also, traditional “rotating window” flow controls such as are used within
traditional packet networks require complex processing in software within the
network nodes. Processing in software would prevent a totally hardware-based
switching operation. It thus conflicts with the need for very fast routing
performed by hardware functions.
There are many ways to perform flow control external to the network. For
example, in TCP/IP networks, the IP switches do not participate in complex
congestion control procedures. Congestion control is performed through a
rotating window protocol operating between end users (TCP entities). This
works to a point but the network can′t enforce this on the end users and it is
quite possible for individual end users to adopt a non-conforming protocol (for
example UDP). In addition, the protocol involves substantial overhead and has
problems with fairness in certain situations. However, for data transport in
high-speed networks there will be some form of rotating window end-to-end
protocol which will perform a flow control function.
The method of operation suggested is that the attaching user node should
control its rate of data presentation to the network through a system called
“Leaky Bucket Rate Control” and that the network should monitor this traffic at
the network entry point to make sure that the end user node does not exceed its
allowance.
In a sense, input rate control mechanisms are not new. In traditional data
networks, end-user devices were connected to data switching nodes typically by
2,400 bps and 4,800 bps links. The data switching nodes themselves were
connected perhaps by “fast” 64 Kbps links. The speed of the attaching links and
the internal processing speed of the end-user devices themselves provided a
limitation on the rate at which data could be sent to the network. In addition,
these links were typically operated using “polling” link protocols. The networks
could (and many did) control the rate of data input by controlling the polling
process. In the emerging high-speed environment, devices are typically
connected to the network through a LAN and thus there is no longer the implicit
rate control provided by the slow speed of the attaching link.
This scheme has the effect of limiting the packet rate to a defined average, but
allowing short (definable size) bursts of packets to enter the network at
maximum rate. If the node tries to send packets at a high rate for a long period
of time, the rate will be equal to “n” per second. If however, there has been no
traffic for a while, then the node may send at full rate until the counter reaches
zero.
Figure 38. A Cascade of Leaky Buckets. Leaky bucket rate control is applied to
individual circuits and then to the total of a logical group.
The scheme can be dynamic in that the maximum value of the counter and/or
the rate at which the counter is incremented may be changed depending on
current conditions within the network (provided that the network has some
method of signaling these conditions to the end user).
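The counter mechanism described can be sketched as a credit counter (a
simplified variant; the parameter names and the exact replenishment rule are
illustrative):

```python
class LeakyBucket:
    """Credit-style leaky bucket: credits refill at 'rate' per second up to
    'burst'; each packet entering the network consumes one credit."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.credits = burst

    def tick(self, seconds=1.0):
        # Replenish credits as time passes, capped at the burst size
        self.credits = min(self.burst, self.credits + self.rate * seconds)

    def try_send(self):
        # A packet may enter the network only if a credit is available
        if self.credits >= 1:
            self.credits -= 1
            return True
        return False

bucket = LeakyBucket(rate=10, burst=5)   # average 10 pkts/s, bursts of 5
sent = sum(bucket.try_send() for _ in range(20))
print(sent)  # 5 - only the burst allowance gets through instantly
```

After a quiet period the counter is full again, so a fresh burst may enter at
full rate, which matches the behavior described in the text.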
The problem is that cells are very short. Cells do not have the space to contain
additional header overheads to enable error recovery (by retransmission) of
individual cells. Error recovery is done by retransmission of whole user data
blocks. For example, when a 2KB block is sent through a cell network, it is sent
as 43 cells. If an individual cell is lost (or discarded by the network) then the
error recovery operation is to re-send the whole logical data block! That is,
re-send all 43 cells.
This doesn′t matter too much for cells lost due to random errors on internode
links (error rates are extremely low). But in the case where the network
discards data due to congestion, there is a potential problem.
When a node discards, say, 1000 cells it is extremely unlikely that these cells will
come from a small number of logical data blocks. It is very likely that the 1000
cells will be from different logical blocks of user data. If the average user data
block length is 2KB (a low estimate) then the amount of data that must be
re-transmitted is 43,000 cells. So discarding 1000 cells has caused the
retransmission of 43,000 cells. But this only happens when the network is already
congested!
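The arithmetic behind this amplification effect, assuming the 48-byte ATM cell
payload:

```python
import math

CELL_PAYLOAD = 48                      # ATM cell payload in bytes
block_bytes = 2048                     # a 2 KB user data block
cells_per_block = math.ceil(block_bytes / CELL_PAYLOAD)
print(cells_per_block)                 # 43 cells per block

discarded = 1000                       # cells dropped under congestion
# Worst case: every discarded cell belongs to a different logical block,
# so each loss forces retransmission of a whole 43-cell block
print(discarded * cells_per_block)     # 43000 cells retransmitted
```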
It is quite clear that unless there are very effective protocols in place to prevent
congestion and to limit the effects when congestion occurs this type of network
just isn′t going to work at all!
The coding system is called “Pulse Code Modulation” (PCM). The basic concept
of PCM is that each 8-bit sample is simply a coded measure of the amplitude of
signal at the moment of sampling. But this can be improved upon by a system
called “companding” (Compression/Expansion). It happens that the signal
spends significantly more time in the lower part of the scale than it does at the
peaks. So what we do is apply a non-linear coding so that the lower amplitude
parts of the waveform are coded with more precision than the peaks. (In basic
concept this is just like the “Dolby” system for improving the quality of tape
recordings.) In practice, PCM is always encoded this way but the standard is
different in different parts of the world. One system is called “µ-law” and the
other “A-law”.
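The non-linear coding can be sketched with the µ-law compression curve (µ = 255
in the North American standard; a sketch of the curve only, not the full
quantization to 8 bits):

```python
import math

MU = 255  # mu-law parameter used in North America and Japan

def compand(x):
    """Map a sample in [-1, 1] through the mu-law curve: low-amplitude parts
    of the waveform get proportionally more of the coding range."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

print(round(compand(1.0), 3))   # 1.0 - full scale maps to full scale
print(round(compand(0.1), 3))   # 0.591 - small signals are expanded
```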
There is another problem here and that is that the bit timing of the “playout”
operation cannot ever be quite the same as the timing of the transmitter - unless
of course, there is a universal worldwide timing reference available. In
continuous transmission, there will be occasional overruns or underruns at the
receiver (clock slips) due to this lack of clock synchronization.
There are many ways of compressing voice which rely on the fact that a voice
signal has considerable redundancy. (You can predict the general
characteristics of the next few samples if you know the last few.)
PCM and ADPCM are very good indeed in terms of quality. It is very difficult for
a listener to detect the difference between an original analog signal and one that
has gone through encoding and later decoding. And because digital
transmission is perfectly accurate, there is no loss in quality no matter how far
the signal travels.
There are many techniques available for encoding voice in this way. In the
encoded form the conversation consists of short bursts of packets. A device
called a “Voice Activity Detector” (VAD) is used to turn the encoding process on
or off. It also should be noted that even within a period of speech the encoded
information rate is variable.
One characteristic of the VAD is that it suppresses echoes. Provided the echo is
at a relatively low level, the detector will stop encoding the signal. However, this
is not perfect because when both parties talk simultaneously (a not unknown
phenomenon) each party could hear an echo of his/her own speech mixed up
with the voice of the other speaker.
A reasonably good quality variable rate voice coding scheme should result in a
peak data rate of around 20 Kbps or a little more during talk spurts and an
average rate (in each direction) of around 10 Kbps.
Thus variable rate voice puts a statistical load onto the network, but variable
rate coding does not remove the need for fast and uniform network transit
delays.
One suggested method is to code the voice packets in such a way as to put
“essential” and “quality improvement” cells in different packets. This is
conceptually shown in Figure 41 on page 126. The most significant bits of each
sample are placed into the same packet and the least significant bits into a
different packet. The packets are marked in the header to say which packet may
be discarded and which one may not.
58 An excellent discussion on this subject may be found in “A Blind Voice Packet Synchronization Strategy”.
The example shown above is intentionally very simple. In practice the coding
schemes used in this way will be variable rate ones and the algorithm will be
much more complex than just a selection of bits by their significance.
Nevertheless, the principle is still the same.
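In the spirit of the simple example above, splitting samples by bit significance might look like this (a deliberately crude nibble split; the function names and the mid-range fill value used for a discarded packet are our own):

```python
def split_samples(samples: bytes):
    """Place the most significant nibbles of 8-bit samples in an
    'essential' packet and the least significant nibbles in a
    discardable 'quality improvement' packet."""
    essential = bytes(s >> 4 for s in samples)
    improvement = bytes(s & 0x0F for s in samples)
    return essential, improvement

def rebuild(essential: bytes, improvement=None) -> bytes:
    """Reassemble the samples; if the improvement packet was discarded
    by the network, fill in a mid-range value instead."""
    if improvement is None:
        improvement = bytes([8] * len(essential))
    return bytes((hi << 4) | lo for hi, lo in zip(essential, improvement))
```

With both packets delivered the samples are reconstructed exactly; with only the essential packet, every sample is still within 8 of its original value.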
As the number of users increases and the capacity of the network increases, the
problem becomes less and less significant. One hundred broadcast-quality
video users with characteristics as described above will require perhaps 1,000
Mbps but the maximum total peak requirement might be no more than 1,200
Mbps. In the previous example, the peak requirement of a single user was four
times the average requirement. In the case of a hundred users, the peak (for
practical purposes) is only 20% greater than the average. This is the result
described in Appendix C.1.4, “Practical Systems” on page 417.
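The smoothing effect of aggregation is easy to see in a small simulation (an illustrative on/off source model, not the analysis of Appendix C; the rate figures follow the video example, where each user′s peak is four times its average):

```python
import random

def peak_to_average(n_users, peak=40.0, avg=10.0, steps=10_000, seed=1):
    """Each user transmits at its peak rate a quarter of the time, so its
    average is a quarter of its peak. Returns the observed aggregate peak
    divided by the aggregate average over all sampling steps."""
    rng = random.Random(seed)
    p_on = avg / peak
    totals = [
        peak * sum(1 for _ in range(n_users) if rng.random() < p_on)
        for _ in range(steps)
    ]
    return max(totals) / (sum(totals) / steps)

# For one user the ratio is about 4; for a hundred users the aggregate
# rarely strays far above its mean, so the ratio falls toward 1.
```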
In the “paperless office” type of application, image users tend to spend more
“think time” looking at the screen once it is displayed. That means that the
transaction rates per terminal tend to be lower but perhaps that is because most
of the experience to date is with systems that are very slow in displaying the
image and the user is thereby encouraged to get all the information possible
from one display before looking at the next.
Image systems are only in their infancy, but many people consider that they will
become as common as interactive coded data systems are today. Storing the
enormous quantity of data required is a greater problem than transmitting it
(actually, transmitting it is the easy part).
An open question for the future is “what will be the effect of very high-quality
displays?” It is possible today to buy color displays with a resolution of 4,000
points by 4,000 points with 256 colors and excellent quality. (The main use of
these to date has been in air traffic control, military applications and in
engineering design.) The picture quality is so good that it rivals a color
photograph. The point here is that images with this level of resolution are many
times larger (even compressed) than the typical formats of today.
If these high resolution systems become popular then there will be significantly
higher requirements for the network.
Packets
The term “packet” has many different meanings and shades of meaning,
depending on the context in which it is used. In recent years the term has
become linked to the CCITT recommendation X.25 which specifies a data
network interface. In this context a packet is a fixed maximum length (default
128 bytes) and is preceded by a packet level header which determines its
routing within the network.
In the late 1960s the term “packet” came into being to denote a network in
which the switching nodes stored the messages being processed in main
storage instead of on magnetic disk. In the early days a “message switch”
stored received data on disk before sending it on towards its destination.
In a generic sense, “packet” is often used to mean any short block of data
which is part of a larger logical block.
The major advantages of breaking a block of data into packets for transmission
are:
1. The transit delay through the network is much shorter than it would be if the
data was transported in long blocks or “frames”.
2. Queues for intermediate links within the network are more easily managed
and offer a more uniform delay characteristic. See Appendix C.1.4,
“Practical Systems” on page 417. This results in less variation in the
end-to-end transit time.
3. Buffer pools and I/O buffers within intermediate nodes can be smaller and
are more easily managed.
4. When an error occurs on a link (whether it is an access link or a link within
the network itself) then there is less data to retransmit.
Now, if the 1024-byte block is broken up into four 256-byte packets, then the
following scenario will occur:
• User A sends the first packet to Node 1, taking 1 unit of time.
• Node 1 sends this packet to Node 2, but while this is happening User A is
sending packet 2 to node 1.
• While User A is sending the third packet, Node 1 is sending a packet to Node
2 and Node 2 is sending a packet to Node 3.
• This happens through the network until the last packet arrives at User B.
It is obvious from the diagram that sending the message as small packets has
reduced the network transit time to 7 units compared with the 16 units needed
without packetization. This is due to the effect of overlapping the sending of
parts of the message through the network.
The section of the figure headed “Direct Link” refers to what happens in the limit
as we keep reducing the packet size. When we finally get to a packet size of
one character, we have a network throughput equivalent to a direct link or TDM
operation of the nodes. Transit time in this case is 4 time units.
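The arithmetic behind the figure can be reproduced in a few lines (assuming, as the figure does, that one time unit is the time to send 256 bytes over one link, and ignoring header overhead and per-node processing):

```python
def transit_time(block_bytes, packet_bytes, hops, unit_bytes=256):
    """Store-and-forward transit time, in the figure's time units."""
    packets = -(-block_bytes // packet_bytes)     # ceiling division
    units_per_packet = packet_bytes / unit_bytes
    # the first packet crosses every hop; the rest are pipelined behind it
    return (hops + packets - 1) * units_per_packet

# A 1024-byte block over a 4-hop path:
print(transit_time(1024, 1024, 4))   # one big block: 16.0 units
print(transit_time(1024, 256, 4))    # four 256-byte packets: 7.0 units
print(transit_time(1024, 1, 4))      # tiny packets: just over 4 units
```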
Other people, while agreeing with the requirements, believe that ATM (and even
frame relay) will be able to meet the requirements even in the short term.
On this one, the jury has not yet even begun to consider its verdict.
After 30 years of building data networks, this is still an issue on which there is
considerable disagreement and which can evoke strong feelings.
59 In early SNA each node had a number. When a frame was routed by a node there was a single table showing on which link
traffic for a given node number must be sent. The switching node knew nothing about routes or about connections.
The IBM experimental high-speed packet switch called “Paris” uses a method of
switching similar to that described above for TRN bridges. (See the description
in 11.2, “Packetized Automatic Routing Integrated System (Paris)” on page 238.)
Paris is different because the data switching part has no record of the existence
of connections. The routing decision is made by the switching nodes without
reference to any connection state.
Connections do exist in both these systems, but in either system it is only the
source node that knows about them.
In the example there is a connection between the two end users. This
connection is known about and supported by nodes A and B, but the switching
nodes in the network do not know that a connection exists. The end-to-end
function holds the network address and the status of its partner end-to-end
function and looks after data integrity and secure delivery, etc.
Frames carry a header with a route number (in the case of SNA a destination
node number and a “virtual route” number). Determining a route is a very
simple and fast process - all the processor has to do is compare the route
number in the frame header with a routing table.
The great advantage of this technique is that it is probably the most efficient
software switching technique available. It requires less processing in the switch
than any other available method. The disadvantage is that determining routing
tables is a difficult and complex procedure. (Complexity increases exponentially
with the size of the network.)
This process can be very fast (for a software technique). It is the principle
behind “ARPANET” routing and that of TCP/IP. Special switches exist which use
this technique and achieve throughputs of 50,000 packets per second.
The rationale here is that devices at the end points of the network (such as user
workstations) do not have the throughput requirements of intermediate nodes but
typically have significant compute power available. Thus it is considered to be
more efficient (cost effective) to place the compute load related to routing in the
end station.
Using the source routing technique the sending workstation appends (in the
network header) an ordered list of nodes and links (a routing vector) to every
packet sent. This routing vector is used by intermediate nodes to direct the
packet towards its destination.
In order to use source routing the sending node must be able to discover
enough of the network topology to calculate a route. This could be done either
by a discovery process (such as sending a route setup message along the
physical path of a new connection) or by keeping (or having access to) an
up-to-date topology database for the whole network.
A drawback of this method is that the routing vector in the packet header takes
some storage and is an overhead. But this is quite small and the benefits of
being able to make a fast routing decision outweigh the small increase in
bandwidth overhead.
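A node′s forwarding step under source routing might be sketched like this (the header layout is hypothetical; real systems, such as the token-ring routing information field described below, encode the vector far more compactly):

```python
def forward(packet, my_node_id):
    """Forward using only the routing vector carried in the packet header:
    no routing tables are consulted in the intermediate node."""
    vector = packet["route"]      # ordered list of (node, outbound link)
    hop = packet["hop"]           # position of this node in the vector
    node, out_link = vector[hop]
    assert node == my_node_id, "packet arrived at the wrong node"
    packet["hop"] += 1            # advance the pointer for the next node
    return out_link               # link on which to transmit the packet

pkt = {"route": [("A", 1), ("C", 4), ("F", 2)], "hop": 0}
print(forward(pkt, "A"))   # node A transmits on link 1
print(forward(pkt, "C"))   # node C transmits on link 4
```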
Source routing is the method used in IBM Token-Ring Networks to control the
route a frame travels through a multisegment LAN. Source-routing bridges put
the responsibility of “navigating” through a multisegment LAN on the end
stations. A route through a multisegment token-ring LAN is described by a
sequence of ring and bridge numbers placed in the routing information field of a
token-ring frame. 60 The routing information field in a MAC frame, if present, is
part of the MAC header of a token-ring frame. The presence of routing
information means the end stations are aware of, and may make decisions
based upon, the routes available in a multisegment LAN. By contrast,
transparent bridges take the responsibility for routing frames through a
multisegment LAN, which leads to more complex bridges, and end stations that
are unaware of whether they are bridged, and if they are, what route they are
taking.
60 Each ring segment in a LAN has a ring number assigned when the bridges are configured. On an active ring it is held by the
Ring Parameter Server management function. Bridges are also given numbers when they are configured. Together, the ring
numbers and bridge numbers, known as route designators, are used to map a path through a multisegment LAN.
The broadcast indicators in the routing control field also control the way a bridge
treats the frame. The types of broadcast frames are:
Non-Broadcast, also known as Routed frames. The frame will travel a specific
route defined in the routing information field.
61 The high order bit in the destination MAC address is used to indicate a group address. A group address field would be
redundant in the source field because token-ring frames can only originate from a uniquely identified adapter.
62 IBM implementation of the IEEE 802.5 standard limits the number of bridge hops to seven.
Notice that logical ID swapping is an internal process for routing packets or cells
within a network. These internal network protocols are typically not specified by
international standards.
Of course, in most systems the tables do not exist in the form suggested in the
example - they will be set up in whichever way is most efficient within the using
system.
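A minimal sketch of logical ID swapping follows (the table layout is hypothetical, as the text notes; a real system holds the tables in whatever form is most efficient):

```python
# One entry per connection, installed at connection setup time:
# (inbound link, inbound ID) -> (outbound link, outbound ID).
table = {
    (0, 17): (2, 5),   # a connection entering on link 0 with ID 17
    (1, 17): (2, 9),   # IDs are significant per link only, so 17 can recur
}

def switch(in_link, header_id):
    """The entire per-packet routing decision: one lookup and an ID swap."""
    out_link, out_id = table[(in_link, header_id)]
    return out_link, out_id  # transmit on out_link with out_id in the header

print(switch(0, 17))   # -> (2, 5)
```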
6.7.4.3 Characteristics
The system has the following characteristics.
Minimal Bandwidth Overhead
The ID part of the header is the only essential field for this method of
routing. The systems described above use between 10 and 20 bits for
this field.
Fixed Route(s)
Frames (packets, cells, etc.) flow on the fixed predetermined route. This
means that frames will (in most systems) arrive in the same sequence in
which they were sent.
Efficient Switching
Relatively few instructions are required to perform the switching function.
In order to route a packet towards its destination, reference must be
made to the connection tables. So whatever process performs the
switching must have very fast access to the tables. In addition, when a
new connection is set up or an old one is terminated the tables must be
updated. (The database of network topology and loadings can, of course,
be maintained quite separately.)
In a software-based system (traditional packet switch) this is no problem
at all as the function that maintains the tables shares the same storage
as the code that does the switching.
In a system using a hardware-based routing mechanism, this mechanism
must have very fast (sub-microsecond) access to the tables. The
updating function must also have access though its demands for access
are not as critical (it can wait a bit).
This means that in a hardware implementation you need to have a
shared set of tables that is instantly accessible to the hardware switch
and also easily accessible from the control processor.
This can increase the cost of the implementation above that of the ANR
technique. (See 11.2.2, “Automatic Network Routing (ANR)” on
page 240.)
Existing network protocols that perform the above functions are generally viewed
as having too many “turnarounds” in them. This applies to OSI (layer 4), TCP
(especially) and most other common end-to-end protocols. An excellent
description of this problem may be found in A Survey of Light-Weight Transport
Protocols for High-Speed Networks.
The primary object of a good network layer protocol for the high-speed
environment is to minimize the number of “turnarounds” (when a sender must
wait for an acknowledgment from the receiver) in the protocol. To do this the
following principles may be employed:
• Optimism. Assume that the underlying node hardware and links are very
reliable. Errors will occur and they must be recovered, but they will be very
infrequent.
• Assume that the user (program) at the other end of the connection is there
and ready to receive all that we send. Don′t use a “handshaking protocol”
to find out first.
• Try to put as many small packets into one block as reasonably possible (if
your network can process large frames).
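The cost of turnarounds is easy to quantify. Compare a stop-and-wait exchange (one turnaround per packet) with an optimistic one that sends everything and waits for a single acknowledgment (a sketch that ignores recovery of the rare lost packet; the rate and delay figures are purely illustrative):

```python
def stop_and_wait(n_packets, t_send, rtt):
    """Each packet waits for its own acknowledgment: n turnarounds."""
    return n_packets * (t_send + rtt)

def optimistic(n_packets, t_send, rtt):
    """Send everything, then wait for one acknowledgment: one turnaround."""
    return n_packets * t_send + rtt

# 100 cells of 53 bytes at 155 Mbps (about 2.7 microseconds each),
# over a path with a 10 ms round-trip time:
t = 53 * 8 / 155e6
print(stop_and_wait(100, t, 0.010))   # about 1 second
print(optimistic(100, t, 0.010))      # about 10 milliseconds
```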
When SNA networks are run “over the top” of other networks (such as X.25 or
frame relay or a LAN) then we must do something to make sure that the
underlying network is very reliable. (This was a design decision in the early
days of SNA, to put the reliability (and its attendant cost) into the backbone part
of the network and avoid the cost of providing extensive network layer function.
At the time, this was optimal from a cost point of view.)
In the case of X.25, an X.25 virtual circuit is regarded as a link in SNA. But it is a
special link. No end-to-end link protocol is run over it because of the cost and
performance implications. This means that to run SNA over a packet network
(X.25), that network must be reliable. If the network loses data on a virtual
circuit and signals this to SNA, SNA will see this as the unrecoverable loss of a
link and all sessions travelling on that link are immediately terminated.
Application-level error recovery is then necessary in order to resume operation.
In some SNA systems there is an optional network layer protocol (called “ELLC”)
which does give full error recovery over an unreliable network connection. This
was developed in the early days of packet networks and today these networks
are generally very reliable and ELLC is considered unnecessary.
When SNA is run over a LAN network or a frame relay network, where the loss
of a frame or two occasionally is a normal event, an end-to-end protocol is used
to stabilize the connection. This is called IEEE 802.2 and is a “link layer”
protocol as far as the ISO model is concerned. Nevertheless, when an
end-to-end protocol is run across a network in order to recover from network
errors, it is just an academic point to discuss which OSI layer is being obeyed.
The function here is network layer class 4 regardless of how it is performed.
The big problem with running SNA over a disjoint packet network (even a LAN)
is that the network management and directory functions are not integrated with
those of SNA. This means that you run two networks, SNA and something else,
underneath where neither network knows about or can coordinate properly with,
the other.
If a fast packet switching architecture was to be fully integrated into SNA then a
different structure would be possible. 64 The “transmission network” part of SNA
could be replaced with a fast packet switching architecture. In addition to the
minimal adaptation function, a full network layer would need to be interposed
64 IBM has not announced any plan for doing what is suggested here. The discussion is included in order to bring into
perspective the interaction of SNA with high-speed networks. IBM cannot comment on future plans.
The degree of complexity can be best understood if the layer 1 and 2 functions
needed in non-ATM packet service are broken up into sublayers. 65 If the
functions performed by HDLC/SDLC link control are broken up in this way we get
the following structure:
Sublayer 1A
This layer adapts to the medium characteristics, transmission coding
and things like bit timing and jitter tolerance.
65 The sublayering concept described here is due to Vorstermans et al., 1988. See bibliography.
As mentioned elsewhere in this document, ATM requires that only the physical
layer be implemented in network nodes.
66 For the purposes of this discussion the word “packet” should be taken to mean “physical block of data” such that acceptable
alternatives would be “cell” or “frame”.
There are many detailed differences in the way data is received into and
transmitted out of storage. The tasks to be performed when data is received
include:
1. Detection of the boundaries of characters and assembling bits into
characters. Associated with this there is the task of recognizing control
characters and sequences.
2. Detection and synchronization of the boundaries between blocks of data.
3. Transfer of the data and control information into the memory.
4. Processing the logic of link control.
5. Processing the switching logic.
6. Processing control and management logic.
There are similar tasks to be performed when data is transmitted.
Depending on the switch design some (or all) of these functions except the
switching logic, control and management can be performed outboard of the main
processor. In the first IBM 3705 all these functions were performed by the
controller program (even assembling bits into characters). In the most recent
3745 hardware everything up to and including the link control logic is performed
outboard either in hardware or in an outboard programmable device.
All switch functions including routing are performed on each adapter card. Data
transfer between adapter cards is done on a “backplane”. The backplane is a
parallel bus capable of transferring many bits in parallel (typically 64 or 128 bits
at a time). All adapters connect to this backplane.
The backplane has a high throughput by virtue of the parallelism of data transfer.
However, it has significant disadvantages.
• Total system throughput is the throughput of the backplane (because only
one adapter can be sending at any one time).
• There is considerable overhead in arbitration (deciding which adapter can
use the backplane next).
• Backplanes get slower and less effective the longer they are. You can′t have
more than a fairly small number of adapters before the whole system is
slowed down.
Structures like this are also used by many devices and in general have a higher
potential throughput than memory-based switches, but they don′t have
sufficient potential for the job of a trunkline ATM switch.
External links are different from internal ones because they are subject to much
more distortion than inter-chip connections. There are practical problems in
receiving the signal and recovering its timing on external connections that don′t
exist on internal ones.
External links are serial by their nature and we can′t do much about that, but
connections between chips on a board can be parallel (to a limit imposed by the
number of wires you can attach to the chip). In practice, a multistage switch
such as the one described above could be built in CMOS technology with serial
internal connections up to a maximum speed of around 40 or 50 Mbps (on all
links). Above that, it is much more cost effective to use parallel connections
between switching elements (chips) than it is to use faster logic.
Most of this publication deals with technologies for transporting information from
one place to another. The effectiveness of these technologies in satisfying the
needs of an organization is determined by how the technology is realized into
practical systems. This chapter addresses the question of the effectiveness of
isolated transmission networks in fulfilling organizational needs. The discussion
applies equally to traditional packet switching networks and to new high-speed
services such as ATM.
Figure 50. A Hierarchy of Networks. One type of data network is built “above” the other.
This could be the common case of an SNA network running “over the top” of a private
“X.25 network”.
From a user perspective the network connects places where data is entered or
processed. That is, the users of the network are application programs within
processors (people using a terminal are inevitably connected to the network
through a program of some kind). These application programs exist within all
kinds of processors from PCs and workstations to dedicated data processing
elements such as servers and mainframes, etc. A good network should provide
seamless, transparent and reliable connection between these programs. This
means that from a user (organizational) perspective the interface to the network
is the application program interface within a processor.
The problem for an organization is how to build a network which will satisfy the
needs of the end users in a cost-effective way.
The thesis of this section is that while isolated transmission networks are
necessary in the public network environment, there are significantly better ways
available to users of private networks.
7.1.2.1 Adaptation
The first problem is making the outer network compatible with the data
structures and formats of the inner network. This is adaptation.
Again, using SNA as an example: In order to send an SNA frame over an X.25
service the data block must be split up into packets and sent as many smaller
blocks. In addition there is link error recovery and packet level flow control
which must be implemented on this interface. In the technology used for this
In sending SNA over a frame relay service we don′t need to packetize. But we
now have to detect and request retransmission of error frames. In addition link
level acknowledgments travel across the FR network and need to be processed
(in X.25 these are local).
To send SNA over ATM we will need adaptation to break a frame into cells,
reassemble the cells when received, perform error recovery by re-transmission,
and perform rate control at the input to the network.
The point is that whenever there is a difference between the architecture of the
inner and outer networks (there almost always is) then there is additional
processing and logic required for adaptation at the interface.
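For the ATM case, segmentation and reassembly can be sketched as follows (a simplification: the 4-byte length prefix stands in for the real adaptation-layer format, which this does not attempt to reproduce):

```python
CELL_PAYLOAD = 48  # bytes of user data carried in each ATM cell

def segment(frame: bytes):
    """Break a frame into fixed-size cell payloads, padding the last one."""
    data = len(frame).to_bytes(4, "big") + frame
    data += bytes((-len(data)) % CELL_PAYLOAD)   # zero padding
    return [data[i:i + CELL_PAYLOAD] for i in range(0, len(data), CELL_PAYLOAD)]

def reassemble(cells) -> bytes:
    """Strip the length prefix and padding to recover the original frame."""
    data = b"".join(cells)
    length = int.from_bytes(data[:4], "big")
    return data[4:4 + length]

frame = bytes(100)                      # a 100-byte frame
cells = segment(frame)
assert all(len(c) == CELL_PAYLOAD for c in cells)
assert reassemble(cells) == frame
```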
Figure 51. Nested Network. View as seen by the network management system of the
outer network. The transmission (inner) network is invisible to the user (outer network).
This resulted in SNA being a very highly reliable network structure but having a
relatively high cost if looked at separately from the devices connecting to it.
Reduction in function within the nodes would bring a lower cost but add cost to
each and every connecting device.
SNA therefore has only a minimal end-to-end network recovery function (OSI
layer 4 function).
When SNA users started attaching to the first X.25 public networks, a very bad
network stability problem was encountered. This was due to some early
networks being designed to discard data whenever congestion or errors
occurred. The designers of these networks believed that end users would
implement end-to-end protocols outside the network to give it stability. SNA did
not have these protocols (because of the cost tradeoff - it didn′t need them).
This point was covered obliquely in 7.1.2.1, “Adaptation” on page 154.
7.1.3 Conclusions
From the above discussion there are two important conclusions:
Private Network Environment
When the same organization owns all aspects of the network, it makes
sense to avoid the artificial barrier created by having a separate
transmission network. A single integrated, seamless structure will be
easier to manage, higher in performance, and lower in cost than a
separate structure.
Public Network
We can′t avoid having a separate transmission network when it is shared
among many users. A user must simply choose the most cost effective
service available (allowing for the differences in cost of the user′s
equipment dictated by the public network chosen). Nevertheless, when
all else is equal, public facilities should be chosen to match as closely as
possible the user′s network. When there is a choice the simplest public
offering will usually be the best (leased lines will usually be better for the
user than public packet networks - in all ways except cost).
Much of the impetus for ISDN comes from the desire to use the installed copper
wire “subscriber loop” at higher speeds, with more reliability and new services.
In most countries the total value of installed copper wire subscriber loop
connections represents the largest single capital asset in that country. There is
an enormous potential benefit in getting better use from it.
Existing copper subscriber loops vary widely in quality and characteristics, and
making use of them for digital connections represents a significant technical
challenge. See 2.2.4, “The Subscriber Loop” on page 46.
This document is not concerned with the details of narrowband ISDN. The
subject is discussed because it illustrates the use of modern digital signaling
techniques and TDM operation.
Thus, on the same physical wire as exists today a user has two independent
“telephone lines” instead of one, plus a limited ability to send data messages to
other users without the need to use either of the two B channels.
“B” (Bearer) Channel
A “B channel” is 64 Kbps in both directions simultaneously (64 Kbps
full-duplex). When a user places a “call” (for voice or for data) a
continuous path is allocated to that call until either party “hangs up”
(clears the call).
This is the principal service of (regular) ISDN. The B channel is an
end-to-end “clear channel” connection (derived by TDM techniques)
which may be used for voice or for data. It can be thought of in the
same way as an analog telephone connection but, of course, it is digital.
“D” Channel
The D channel is not an “end-to-end clear channel” like the B channels.
This D channel carries data in short packets and is primarily used for
signaling between the user and the network (when the user “dials” a
number, the number requested is carried as a packet on the D channel).
The D channel can also be used for sending limited amounts of data from
one end user to another through the network without using the high
capacity B channels 69 (this can be useful in a number of applications).
In Basic Rate the D channel operates at 16 Kbps.
This interface must transfer user data at a total rate of 144 Kbps full-duplex over
an existing two-wire subscriber loop. This is a complex problem, because
signals must travel in both directions over the same physical wires; and,
therefore, one device receiving a signal from another device will also receive an
echo of its own transmissions in the form of interference with the received
signal.
In the US, the situation is different. For legal reasons, all equipment on
customer premises must be open to competitive supply (especially the NT1). To
enable this the American National Standards Institute (ANSI) has published a
standard for the BRI U interface. In the US therefore, a supplier of EDP
equipment may decide to integrate the NT1 function within a terminal or a
personal computer and connect to the U interface. 70 In the rest of the world
suppliers of EDP equipment must connect to the S (or T) interface.
70 As of June 1995, all ISDN Basic Rate interface adapters announced by IBM use the S interface only.
Figure 53. “U” Interface Frame Structure. The M channel is used for service and
maintenance functions between the NT and the exchange.
8.1.3.2 Multiframing
The M channel used in Time Compression Mode (TCM) is a simple example of a
“superframe” or “multiframe”. Within each 38-bit TCM frame a single bit (the
M bit) is used to provide two independent channels. Two M bits in every four
frames belong to a “transparent channel”; since the frame rate is 4096 frames
per second, that channel has an aggregate rate of 2048 bits per second.
One bit in every four frames belongs to the “service channel”, which therefore
operates at 1024 bits per second. (The remaining bit in each group of four is a
code violation, described below.)
Of course there is a need to identify which bit is which. To the “higher level”
38-bit frame, the M bit forms a single clear channel, but within itself it has a
structure. A multiframe begins with a code violation (that is, the M channel bit
violates the code of the 38-bit frame). This delimits the beginning of the
multiframe. After the code violation (CV) the next M channel bit belongs to the
transparent channel, the next to the service channel, the next to the transparent
channel, and then the next multiframe starts with another CV.
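Given the multiframe structure just described (code violation, transparent bit, service bit, transparent bit), demultiplexing the M-bit stream is straightforward; this sketch assumes the stream starts at a code violation:

```python
def demultiplex(m_bits):
    """Split the M-bit stream into its transparent and service channels."""
    transparent, service = [], []
    for i, bit in enumerate(m_bits):
        slot = i % 4
        if slot in (1, 3):        # second and fourth bits of the multiframe
            transparent.append(bit)
        elif slot == 2:           # third bit of the multiframe
            service.append(bit)
        # slot 0 is the code violation that delimits the multiframe
    return transparent, service

bits = ["CV", 1, 0, 1, "CV", 0, 1, 1]   # two multiframes
print(demultiplex(bits))                # -> ([1, 1, 0, 1], [0, 1])
```

Two transparent bits and one service bit per four frames give the 2:1 ratio between the two channel rates.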
The passive bus consists of just four wires connected (through a transformer) to
the NT1. Two wires are used for transmission and two for reception. Up to eight
72 This same protocol is used for the “T” interface also, so it is often referred to as the S/T interface protocol.
The objective is for each TE to be able to communicate with the network using
either (or both) of the B channels and the D channel. Since a B channel is a
clear channel (that is, the device may send any sequence of bits without
restriction), only one TE may use a B channel at any one time. The D channel is
a packet channel and TEs are constrained to use a rigorous protocol (called
“LAPD” for Link Access Protocol D channel) for operation.
The frame formats and method of operation of the S and U interfaces are very
different and this means that NT1 has quite a lot of function to perform.
Nevertheless, it is important to realize that NT1 passes both the B and D channel
data through to the exchange without any form of modification or processing of
the data. NT1 does frame reformatting and electrical signal conversion only - it
does not change or look at the data.
Frame Formats
Figure 55 on page 166 shows the frame formats used on the S/T
interface. The first thing to notice about these is that the formats are
quite different depending on the direction of transmission (TE-to-NT, or
NT-to-TE).
Frame Timing
Frames are sent in each direction, as nearly as possible, exactly one
every 250 µsec. Since a frame is 48 bits long this means that the data
rate is 192 Kbps.
Each frame contains 2 bytes from each B channel and 4 bits from the D
channel. Note that these are spaced at regular intervals throughout the
frame.
Frame Generation
In the NT to TE direction (of course) the NT generates the frame and puts
data in each field. TEs can access (read) any or all fields in the frame as
appropriate.
In the TE to NT direction things are different. There is no single device
available to generate the frame so all TEs generate a frame and transmit
it onto the bus simultaneously .
This would normally be a formula for disaster: multiple devices writing
independent information onto the same wires at the same time.
However, the line code and the frame structure are
organized so that the framing bits will have the same polarity for all TEs.
Thus, provided the timing is accurate, all TEs may write the same frame
to the bus at the same time.
The line code used is Pseudoternary and is described in 2.1.5,
“Pseudoternary Coding” on page 18.
Frame Synchronization
Each TE derives the timing for its frame transmission from the timing of
the frame it receives from the NT. The TE starts its frame transmission
precisely 3 bits after it receives the start of a frame from the NT and
subsequent bits are transmitted using the clock derived from the
received bit stream.
There are problems here with jitter and with propagation delay. Each TE
will detect the frame from the NT at a (very slightly) different time (jitter).
A TE on the D channel sends and receives short packets of data. One primary
requirement is that TEs must share the D channel with some kind of fairness.
Consecutive packets may belong to different TEs. All communication is between
the network and individual TEs. There is no TE-to-TE communication possible
(except through the exchange).
There is no problem with transmission from the exchange to the TE. Since there
is only one device sending, it is able to intermix packets for different TEs at will.
But in the other direction, multiple TEs cannot transmit data on the same
channel at the same time. This is the same situation that existed in the past
with multiple terminal devices sharing a multidrop analog data link. As with
analog data links in the past, the problem is how to control which device (TE) is
allowed to send at a particular time. In the past “polling” was used for this
purpose but with digital signaling techniques available a much more efficient
technique is used.
As far as the TDM structure is concerned (that is, at the “physical layer”) the D
channel is a clear channel of 16 Kbps full-duplex. As frames are transmitted or
received by each TE, consecutive D channel bits are treated as though there was
no other information in between. A link control protocol is used between each
TE (through the NT) to the exchange.
Passive Bus - Theme and Variations: Within the above mode of operation,
several configurations of passive bus are possible.
LAPD Link Control: As mentioned above, LAPD is a member of the HDLC series
of link control protocols. 73
Each TE has at least one connection with the ISDN network. This connection is
used for passing call requests and call progress information. The TE may also
have other connections with services within the network. Thus, running on the
same link, there are multiple TEs, each (perhaps) with multiple simultaneous
connections to different points within the network. This means that there is a
need to address each endpoint uniquely.
In SDLC a single “control point” (which does not have an explicit link address)
identifies up to 255 secondary devices using a single-byte link address. In LAPB
(X.25), communication is always between peer entities and so the link address
may take only two values (X′01′ or X′03′). LAPD uses a 2-byte address which
contains two fields:
1. A SAPI (Service Access Point Identifier) which represents the service or
endpoint within the ISDN network.
2. A TEI (Terminal Endpoint Identifier) which identifies the TE.
Link control operation is conventional.
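As a sketch, the 2-byte address can be unpacked as follows (the bit layout - a 6-bit SAPI and 7-bit TEI, each octet ending in an address-extension bit - follows the usual Q.921 convention; the helper function itself is illustrative):

```python
def parse_lapd_address(octet1: int, octet2: int):
    """Split the 2-byte LAPD address field into SAPI, C/R and TEI."""
    sapi = octet1 >> 2          # top 6 bits: Service Access Point Identifier
    cr = (octet1 >> 1) & 0x1    # command/response bit
    tei = octet2 >> 1           # top 7 bits: Terminal Endpoint Identifier
    # The low bit of each octet is the address-extension (EA) bit: 0 then 1.
    assert (octet1 & 0x1) == 0 and (octet2 & 0x1) == 1, "bad EA bits"
    return sapi, cr, tei

# SAPI 0 (conventionally call control signaling), TEI 64:
print(parse_lapd_address(0x00, (64 << 1) | 1))  # (0, 0, 64)
```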
ISDN Frame Relay: Frame relay is a fast packet-switching service which was
designed to be used with ISDN. In fact its definition is part of the ISDN definition
- although, it is an additional “service” which may be provided by network
providers. The frame relay service may be accessed through a B channel, an H
channel (ISDN wideband access), or through the D channel.
The basic service of delivering low-rate packet data from one end user to
another using D channel access is not frame relay but rather is a basic service
of the ISDN network. Frame relay is more fully described in 11.1, “Frame Relay”
on page 227.
73 LAPD is described in detail in IBM ISDN Data Link Control - Architecture Reference.
In Europe, the transmission rate used is 2.048 Mbps (the “E1” rate). In the US, the
speed used is 1.544 Mbps (the same as “T1”). This results in the availability of
30 B channels plus one (64 Kbps) D channel in Europe, and 23 B channels plus
one D channel in the US. The systems are very similar and so only the
European system is described here.
In Europe the coding used is HDB3 which is described in 2.1.10, “High Density
Bipolar Three Zeros (HDB3) Coding” on page 27.
The frame structure is shown in Figure 56. Slot 0 is used for framing and
maintenance information. Slot 16 is a 64 Kbps D channel. All other slots may be
used for B channels or as part of an H channel.
Figure 56. ISDN Primary Rate Frame Structure (Europe). Arrows show the sequence of
slot concatenation for an H0 (wideband) channel, the D channel (slot 16) and a B channel
(slot 31).
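The slot arithmetic behind these figures can be checked with a short sketch (constants from the text; names are illustrative):

```python
# European primary rate (E1) frame arithmetic, using the figures above.
SLOTS = 32                # slots 0..31, 8 bits each
FRAMES_PER_SECOND = 8000  # one frame every 125 microseconds

line_rate = SLOTS * 8 * FRAMES_PER_SECOND  # 2048000 bps = 2.048 Mbps
slot_rate = 8 * FRAMES_PER_SECOND          # 64000 bps per slot

# Slot 0 carries framing and maintenance, slot 16 is the D channel,
# which leaves 30 slots for B (or H) channels.
b_channels = SLOTS - 2
print(line_rate, slot_rate, b_channels)  # 2048000 64000 30
```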
Sonet and SDH are of immense importance because of the vast cost savings that
they promise for public communications networks.
This basic frame is called the Synchronous Transport Signal level 1 (STS-1). It is
conceptualized as containing 9 rows of 90 columns each as shown in Figure 58.
Figure 58. Sonet STS-1 Frame Structure. The diagrammatic representation of the frame as
a square is done for ease of understanding. The 810 bytes are transmitted row by row
starting from the top left of the diagram. One frame is transmitted every 125
microseconds.
• The first three columns of every row are used for administration and control
of the multiplexing system. They are called “overhead” in the standard but
are very necessary for the system′s operation.
• The frame is transmitted row by row, from the top left of the frame to the
bottom right.
• The representation of the structure as a two-dimensional frame is just a
conceptual way of representing a repeating structure. In reality it is a string
of bits with a defined repeating pattern.
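The STS-1 numbers can be verified the same way (a sketch; names are illustrative):

```python
# Sonet STS-1 frame arithmetic, using the figures above.
ROWS, COLUMNS = 9, 90
FRAMES_PER_SECOND = 8000  # one 810-byte frame every 125 microseconds

frame_bytes = ROWS * COLUMNS                     # 810 bytes per frame
line_rate = frame_bytes * 8 * FRAMES_PER_SECOND  # 51840000 bps = 51.84 Mbps

overhead_bytes = ROWS * 3  # the first three columns of every row
payload_rate = (frame_bytes - overhead_bytes) * 8 * FRAMES_PER_SECOND

print(line_rate, payload_rate)  # 51840000 50112000
```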
The physical frame structure above is similar to every other TDM structure used
in the telecommunications industry. The big difference is in how the “payload”
is carried. The payload is a frame that “floats” within the physical frame
structure. The payload envelope is illustrated in Figure 59 on page 175.
Notice that the payload envelope fits exactly within a single Sonet frame.
The payload envelope is allowed to start anywhere within the physical Sonet
frame and in that case will span two consecutive physical frames. The start of
the payload is pointed to by the H1 and H2 bytes within the line overhead
sections.
Very small differences in the clock rates of the frame and the payload can be
accommodated by temporarily incrementing or decrementing the pointer (an
extra byte if needed is found by using one byte (H3) in the section header).
Nevertheless, big differences in clock frequencies cannot be accommodated by
this method.
Figure 60. Synchronous Payload Envelope Floating in STS-1 Frame. The SPE is pointed
to by the H1 and H2 bytes.
8.2.2 SDH
In the rest of the world, Sonet is not immediately useful because the “E-3” rate
of 34.368 Mbps does not fit efficiently into the 51.84 Mbps Sonet STS-1 signal. (The
comparable US PDH signal, the T-3, is roughly 45 Mbps and fits nicely.)
The CCITT has defined a worldwide standard called the Synchronous Digital
Hierarchy, which accommodates both Sonet and the European line speeds.
This was done by defining a basic frame that is exactly equivalent to (Sonet)
STS-3c. This has a new name. It is Synchronous Transport Module level 1 or
STM-1 and has a basic rate (minimum speed) of 155.52 Mbps. This is shown in
Figure 62.
Faster line speeds are obtained in the same way as in Sonet - by byte
interleaving of multiple STM-1 frames. For this to take place (as in Sonet) the
STM-1 frames must be 125-µsec frame aligned. Four STM-1 frames may be
multiplexed to form an STM-4 at 622.08 Mbps. This (again like Sonet) may carry
8.2.3 Tributaries
Within each payload, slower-speed channels (called tributaries) may be carried.
Tributaries normally occupy a number of consecutive columns within a payload.
A US T-1 payload (1.544 Mbps) occupies three columns, a European E-1 payload
(2.048 Mbps) occupies four columns. Notice that there is some wasted
bandwidth here. A T-1 really only requires 24 slots and three columns give it
27. An E-1 requires 32 slots and is given 36. This “wastage” is a very small
price to pay for the enormous benefit to be achieved by being able to
demultiplex a single tributary stream from within the multiplexed structure
without having to demultiplex the whole stream.
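A quick sketch of the column arithmetic (each byte column of the 9-row payload repeats 8000 times a second, so a column carries 9 × 64 Kbps; names are illustrative):

```python
# Tributary column capacity inside a Sonet/SDH payload, per the text.
ROWS = 9
FRAMES_PER_SECOND = 8000

def columns_capacity_bps(columns: int) -> int:
    """Capacity of `columns` byte-columns of the 9-row payload."""
    return columns * ROWS * 8 * FRAMES_PER_SECOND

t1 = columns_capacity_bps(3)  # 1728000 bps carrying a 1544000 bps T-1
e1 = columns_capacity_bps(4)  # 2304000 bps carrying a 2048000 bps E-1
print(t1, e1)  # 1728000 2304000
```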
The tributaries may be fixed within their virtual containers or they may float,
similar to the way a virtual container floats within the physical frame. Pointers
within the overhead are used to locate each virtual tributary stream.
8.2.5 Status
Sonet/SDH standards are now firm and equipment implementing them is
beginning to become available. However, there are many desirable extensions
that have not yet been standardized. For example, there is no generalized
standard for interfacing customer premises equipment to STS-3c (STM) available
as yet. The ITU-T has specified this for attachment to ATM networks and a
similar specification exists in FDDI.
8.2.6 Conclusion
Successful specification of a system which integrates and accommodates all of
the different line speeds and characteristics of US and European multiplexing
hierarchies was a formidable challenge. Sonet/SDH is a complex system but it
is also a very significant achievement. It is expected that equipment using SDH
will become the dominant form of network multiplexing equipment within a very
short time.
In the future, high-speed backbone wide area networks owned by the PTTs will
need to become a lot more flexible. It is predicted that there will be significant
demand for arbitrary amounts of bandwidth and for “variable bandwidth”, such
as is needed for a video signal. Many planners in the PTTs believe that TDM
technology is not sufficiently flexible to satisfy this requirement. This is partly
because of the perceived “waste” in using fixed rate services for variable traffic
(such as interactive image, variable-rate voice or variable-rate encoded video)
and partly because arbitrary variable amounts of bandwidth are very difficult to
allocate in a TDM system. This latter problem is called “bandwidth
fragmentation”.
For the example in Figure 63 assume we have a 4 Mbps link which consists of
64 slots of 64 Kbps each. The example is a trivial one but it illustrates the point. In
computer memory variable-length buffer pools (before virtual storage fixed the
problem), it was found that after operation for a long period of time perhaps only
20% of the memory would be in use and the other 80% would be broken up into
fragments that were too small to use. This is a significant waste of resource. In
addition, the control mechanisms needed to operate a scheme such as this
would be complex and more expensive than alternative schemes based on cell
multiplexing.
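A toy simulation makes the fragmentation effect concrete. It assumes, purely for illustration, that a wideband channel must occupy contiguous slots and that small channels come and go over time:

```python
# A toy illustration of TDM "bandwidth fragmentation": assume (for
# illustration only) that a wideband channel needs contiguous slots.
slots = [None] * 64  # a 4 Mbps link as 64 x 64 Kbps slots

def allocate(size, name):
    """First-fit allocation of `size` contiguous slots; None if impossible."""
    run = 0
    for i, s in enumerate(slots):
        run = run + 1 if s is None else 0
        if run == size:
            for j in range(i - size + 1, i + 1):
                slots[j] = name
            return i - size + 1
    return None

# Fill the link with 2-slot (128 Kbps) channels, then free every second one.
for n in range(32):
    allocate(2, n)
for n in range(0, 32, 2):
    for i, s in enumerate(slots):
        if s == n:
            slots[i] = None

free = slots.count(None)        # 32 slots (2 Mbps) are now free...
print(free, allocate(6, "H0"))  # ...yet no 6-slot (H0-size) channel fits: 32 None
```

Half the link is idle, but the free capacity is broken into 2-slot fragments that cannot be combined.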
This problem is the primary reason that Broadband ISDN uses a cell-based
switching system.
A cell is really not too different from a packet. A block of user data is broken up
into packets or cells for transmission through the network. But there are
significant differences between cell-based networks and packet networks.
1. A cell is fixed in length. In packet networks the packet size is a fixed
maximum (for a given connection) but individual packets may always be
shorter than the maximum. In a cell-based network cells are a fixed length,
no more and no less.
2. Cells tend to be a lot shorter than packets. This is really a compromise over
requirements. In the early days of X.25 many of the designers wanted a
packet size of 32 bytes so that voice could be handled properly. However,
the shorter the packet size, the more network overhead there is in sending a
given quantity of data over a wide area network. To efficiently handle data,
packets should be longer (in X.25 the default packet size supported by all
networks is 128 bytes).
3. Cell-based networks do not use link-level error recoveries. In some
networks there is an error checking mechanism that allows the network to
throw away cells in error. In others, such as ATM (described below), only
the header field is checked for errors and it is left to a “higher-layer”
protocol to provide a checking mechanism for the data portion of the cell if
needed by the application.
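The per-cell overhead implied by these numbers is easy to quantify (the 3-byte packet header used for comparison is an illustrative figure, not taken from the text):

```python
# Fraction of link capacity consumed by headers for a given payload size.
def header_overhead(payload_bytes: int, header_bytes: int) -> float:
    return header_bytes / (payload_bytes + header_bytes)

print(round(header_overhead(48, 5), 3))   # ATM cell: 0.094 (about 9.4%)
print(round(header_overhead(128, 3), 3))  # 128-byte packet, 3-byte header: 0.023
```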
Figure 64. Cell Multiplexing on a Link. Cells belonging to different logical connections
(identified by the VPI and VCI) are transmitted one after the other on the link. This is not
a new concept in the data switching world but it is quite different from the fixed
multiplexing techniques used in the TDM approach.
74 This discussion on ATM is condensed from Asynchronous Transfer Mode (Broadband ISDN) - Technical Overview, IBM number
GG24-4330, by the same author.
ATM has expanded to cover much that is not strictly B-ISDN. B-ISDN is a carrier
interface and carrier network service. ATM is a technology that may be used in
many environments unrelated to carrier services. Holding long and convoluted
discussions about the fine distinctions between ATM and B-ISDN is pointless in
the context of understanding the technology. For most practical purposes the
terms ATM and B-ISDN are interchangeable. Hence in this book we will use the
term ATM almost exclusively.
In 1990 industry forecasters were saying that ATM would begin an experimental
phase in 1993, have early commercial products in perhaps 1997 and that the year
2000 would be the year of the mass usage. In 1993 they said commercial
products in 1994 and mass acceptance as early as 1995! Reality seems to be
commercial products in 1995 and mass acceptance in 1996/1997. (Although
many of the standards are not scheduled for completion until the middle of 1996!)
In the past two years an unprecedented consensus has formed throughout the
communications industry that ATM will be the universal networking standard.
This consensus has caused the vast bulk of development in the industry to be
shifted towards ATM development. Within the last year or so some 50
organizations have either foreshadowed or announced ATM products.
ATM Switches
Four ATM switches are shown in Figure 65. These perform the
backbone data transport within the ATM network. They are usually
classified as either private ATM switches or public ATM switches. The
difference between private and public ATM equipment could be trivial in
some cases but will often be quite major. Public and private switches
will differ in the kinds of trunks (links) supported, in accounting and
control procedures and in the addressing modes supported. There is
also the obvious question of size. Public network equipment will usually
need much higher throughput than will private equipment.
Public ATM switches are sometimes referred to as network nodes (NNs).
This is incorrect as the term network node is not defined in ATM
standards - even though there is a network node interface.
Private ATM switches and networks are sometimes called customer
premises nodes (CPNs) or customer premises networks. Again, this
terminology, while useful, is not defined in the ATM standards.
75 A dark fiber is a fiber connection provided by a carrier with no restrictions on its use - just like a copper wire leased line. It is dark
because there is no light in the fiber until the user puts it there.
76 See 8.2, “SDH and Sonet” on page 172.
77 However, it must be noted that ATM switches will often contain ATM endpoint functions as well as switch functions.
As far as the architecture of ATM is concerned, each link may have all possible
VPs and each VP may have all possible VCs within it. (In practice, nodes will
limit the maxima to much smaller values for practical considerations, such as
table space within a node.)
It is important to note that the scope of the numbering of each entity is just
within the entity above it in the hierarchy. For example, all VPs may exist on all
links - so VP number 2 may exist on link number 3 and there may be a VP
number 2 on link number 7 and both VPs are unrelated to one another. There
could be a VC number 17 in every VP in a node.
VPs and VCs are only numbers! They identify a virtual (logical) path along which
data may flow. They have no inherent capacity restrictions in terms of data
throughput. 78 That is, dividing the link into VPs and VCs has nothing whatever to
do with division of link capacity. You could saturate any link no matter what the
speed with data on just one VC - even if the link had all possible VPs and VCs
defined!
A good analogy is the US road system, where a single physical road can have
many route numbers (sometimes up to 30 or more). The cars traveling on the
road may consider themselves to be following any one of the numbered routes.
But at any point in time all the cars on the road may consider themselves to be
using the same route number.
78 There are deliberate restrictions imposed on the rate of entry of data into the network, but this is not relevant here.
Cell Size
An ATM cell is always 48 bytes of data with a 5-byte header; it cannot be
longer or shorter. This cell size was determined by the CCITT (now
called the ITU-T) as a compromise between voice and data requirements.
Generic Flow Control (GFC)
At the present time, the GFC function has not been standardized.
However, from the two cell headers it should be noted that the GFC field
does not appear in the cell header on the NN interface. Therefore it is
not carried through the network and has only local significance between
the ATM endpoint and the ATM switch to which it is attached.
VPI and VCI
The most important fields in the cell header are the virtual path identifier
(VPI) and the virtual channel identifier (VCI). Together these identify the
connection (called a virtual connection) that this cell belongs to. There is
In this mode of operation the first cell (or cells) of a group carries the full
network address of the destination within its data (payload) field. Subsequent
cells belonging to the same user data block do not carry a full network address,
but rather are related to the first cell by having the same VPI/VCI as it had.
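As a sketch, the 5-byte UNI cell header described above can be unpacked as follows (the field widths - 4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit payload type, CLP bit, HEC byte - follow the standard UNI layout; the helper function itself is illustrative):

```python
def parse_uni_header(h: bytes):
    """Unpack the 5-byte ATM cell header as laid out at the UNI."""
    gfc = h[0] >> 4
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)                  # 8 bits at the UNI
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)   # 16 bits
    pt = (h[3] >> 1) & 0x7                                    # payload type
    clp = h[3] & 0x1                                          # cell loss priority
    hec = h[4]                                                # header error check
    return gfc, vpi, vci, pt, clp, hec

# VPI=2, VCI=17, CLP set (HEC byte left as a placeholder here):
print(parse_uni_header(bytes([0x00, 0x20, 0x01, 0x11, 0x00])))
```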
The payload (data) part of a cell may contain errors. Transmission errors within
the data portion of the cell are not detected by the network (this is up to either
the end-user equipment or the adaptation layer).
Multicasting takes place over a tree structure, as illustrated in Figure 71. Its
official name is “point-to-multipoint connection”.
79 This depends on which “release” or “phase” of ATM we are using. Leaf to root communication is not supported in ATM
Forum phase 1.
A VP also has a QoS associated with it. VCs within a VP may have a lower QoS
than the VP but they cannot have a higher one.
Some variable bit rate encoding schemes for voice and for video are structured
in such a way that two kinds of cells are produced:
• Essential cells which contain basic information to enable the continued
function of the service,
• Optional cells which contain information that improves the quality of the
service (voice or picture quality)
If the end-user equipment marks a cell as low priority, that cell will be discarded
first if cells need to be discarded due to network congestion.
In this case a lower quality of service may be specified, which would allow the
network to discard cells belonging to these virtual connections in situations of
extreme congestion.
This is a lot easier said than done. Any system that allocates capacity in excess
of real capacity on the basis of statistical parameters allows the possibility
(however remote) that by chance a situation will arise when demands on the
network exceed the network resources. In this case, the network will discard
cells. The first to be discarded will be arriving cells marked as lower priority in
the CLP bit (cells already queued in buffers will not be discarded). After that,
discarding will take place by service class or randomly, depending on the node′s
implementation.
The ATM endpoint is expected to operate a “leaky bucket” rate control scheme
(see 6.1.1.1, “Leaky Bucket Rate Control” on page 118) to prevent it from making
demands on the network in excess of its allowed capacity.
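A minimal leaky bucket sketch (the scheme itself is described in 6.1.1.1; this particular implementation and its parameters are illustrative):

```python
class LeakyBucket:
    """Bucket drains at a fixed rate; each cell adds one unit. A cell that
    would overflow the bucket is non-conforming."""

    def __init__(self, rate_cells_per_s: float, depth_cells: float):
        self.rate, self.depth = rate_cells_per_s, depth_cells
        self.level, self.last = 0.0, 0.0

    def conforming(self, now: float) -> bool:
        """True if a cell arriving at time `now` is within the contract."""
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.depth:
            self.level += 1
            return True
        return False  # the cell may be discarded or marked via the CLP bit

bucket = LeakyBucket(rate_cells_per_s=10, depth_cells=3)
burst = [bucket.conforming(t) for t in [0.0, 0.0, 0.0, 0.0, 0.5]]
print(burst)  # [True, True, True, False, True]
```

A burst of three cells is accepted, the fourth exceeds the bucket depth, and after a pause the bucket has drained enough to accept cells again.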
Depending on the network (and the user′s subscription parameters) the network
may either:
• Discard cells received in excess of the allowed maxima.
• Mark excess cells with the CLP (cell loss priority) bit to tell the network that
this cell is of low priority and may be discarded if required.
Since there is no real control of flow within the network this function is very
important.
Another function that is performed at the entry point to the network is collecting
traffic information for billing. There is no standard as yet but it seems certain
that many public networks will have a charging formula taking into account the
average and maximum allowed cell rate as well as the actual traffic sent. In
fact, it is quite possible that some administrations will charge only for the
maximum allowed capacity and connect time (the network must provide for the
capacity if the user has a circuit even if that capacity is not used). This kind of
system is used in other industries. In some countries, electricity is billed for
according to peak usage rate regardless of the total amount of electricity used.
9.2.3.10 Priorities
In traditional data networks connections often have priorities depending on the
type of traffic and its importance. Traffic of high priority will be transmitted on
any link before traffic of a lower priority. There are no priorities of this kind in
ATM. There is no priority field (as distinct from cell loss priority) in the cell
header.
In ATM there are two IDs for each virtual connection: the VPI and the VCI.
Some ATM switches may only know about and switch VPs. Other switches will
know about and switch both VPs and VCs.
An ATM switch must keep a table of VPIs relating to each physical link that it
has attached. This table contains a pointer to the outbound link where arriving
data must be routed. If the VP terminates in the particular ATM switch, then
each ATM switch must keep a table of VCs for each terminating VP. This table
contains pointers for the further routing of the data. The VC may be (for
example) a signaling channel and terminate in this particular ATM switch.
Alternatively, the VC may be logically connected to another VC through this ATM
switch in which case the ATM switch must route the data to the appropriate
outbound connection (identified by the link, VPI and VCI).
When the ATM switch routes the cell onward using only the VP then the VPI
number is changed. (VPIs only have meaning within the context of an individual
link.) When the ATM switch routes the cell by using both the VPI and the VCI
then the outgoing cell will have a different VPI and VCI.
Since the VPI and/or VCI fields in the cell header have been changed, the HEC
field in the cell header must be recalculated before the cell can be transmitted.
This is because, when a cell is switched from one link to another, the VPI is
always replaced 80 (regardless of whether the VCI is replaced or not) and the HEC
is always recalculated.
It is important to note that there is a separate VP table for every inbound link
and a separate VC table for each VP that terminates in this node.
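The table lookup, label swap and HEC recalculation can be sketched as follows (the VP table entry is invented for illustration; the HEC itself is the standard CRC-8, generator x^8 + x^2 + x + 1, computed over the first four header bytes and XORed with 0x55):

```python
def hec(header4: bytes) -> int:
    """CRC-8 (x^8 + x^2 + x + 1) over the first four header bytes, XOR 0x55
    (so an all-zero header gives an HEC of 0x55)."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# A toy VP switch: (inbound link, inbound VPI) -> (outbound link, new VPI).
vp_table = {(3, 2): (7, 9)}  # invented entry for illustration

def switch_cell(link: int, cell: bytearray) -> int:
    """Route a cell by VP only: replace the VPI and recompute the HEC."""
    vpi = ((cell[0] & 0x0F) << 4) | (cell[1] >> 4)
    out_link, new_vpi = vp_table[(link, vpi)]
    cell[0] = (cell[0] & 0xF0) | (new_vpi >> 4)
    cell[1] = (cell[1] & 0x0F) | ((new_vpi & 0x0F) << 4)
    cell[4] = hec(bytes(cell[:4]))  # the header changed, so the HEC must too
    return out_link

cell = bytearray([0x00, 0x20, 0x01, 0x10, 0x00])  # VPI=2, VCI=17 at the UNI
print(switch_cell(3, cell), hex(cell[1]))  # 7 0x90
```

Note that the VCI bits are left untouched: a pure VP switch rewrites only the VPI, exactly as the text describes.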
Figure 73. VPIs and VCIs within a Link. VPI and VCI numbers are the same at each end
of a link (because there is nothing within the link that can change them).
Although some of these terms were introduced earlier they can now be treated
with more precision.
Virtual Channel (VC)
The best definition of a virtual channel is “a logical association between
the endpoints of a link that enables the unidirectional transfer of cells
over that link”.
This is not the definition used by either the ATM Forum or the ITU-T. The
ATM Forum defines it as “a communication channel that provides for the
sequential unidirectional transport of ATM cells”. The ITU-T definition is
“A concept used to describe unidirectional transport of ATM cells
associated by a common unique identifier value”.
In the first versions of ATM the VPI and VPCI will be “numerically
equivalent”.
Thus you could consider that there are two interfaces to an ATM network - one
an equipment interface (the UNI) and the other (the real end-user program
interface) through the AAL.
Figure 76 shows the data flow from one end user to another over an ATM
network. There are several important things to notice about this structure.
An ATM network carries cells from one end of the network to the other. While
the cell header has error checking in it (this is necessary because of the
possibility of misrouting cells which had bit errors in the header), there is no
error check on the data part of the cell. In addition, cells can be lost or
discarded during transport, and in error situations (depending on the specific
network equipment) cells could arrive out of sequence and/or be duplicated.
For data, the adaptation layer provides the same service as the MAC layer of the
LAN - the non-assured transport of complete blocks. Figure 77 on page 205
shows the logical structure of an ATM network.
It can be seen that the functions provided by the AAL are very basic ones. They
are similar to those provided by a LAN at the MAC layer (although the AAL
handles many types of service in addition to data). For data applications a
logical link layer (such as IEEE 802.2) will still be needed for successful
operation.
There is also a Signaling AAL (SAAL) defined; this adaptation layer does not
provide user-to-user services but is a series of AAL functions that support
signaling connections between ATM switches and/or between an ATM endpoint
and an ATM switch (network).
The internal logical structure of AAL-3/4 and AAL-5 is shown in Figure 79.
AAL-1 has a much simpler structure but the principle is the same.
ATM was defined to make the physical data transport function as independent as
possible from the ATM switching function and the things that go on above the
ATM layer. It is able to operate over a wide variety of possible physical link
types. These vary in speed, medium (fiber or copper), and structure to suit the
particular environment in which the link has to operate.
By 1995 there were several research networks and many early commercial ATM
networks in operation around the world.
Work on the standards is progressing very rapidly. The basic set of standards
are now available although there are many aspects still not addressed.
Nevertheless, as of January 1995 many of the standards are not very firm and
many aspects could change even in quite fundamental ways.
The (billion dollar) question is “what is the statistical behavior of variable rate
packetized voice and video systems when operated on a large scale”. Many
people (perhaps the majority) believe that it will all add up statistically very
nicely and stable operation of these systems will be possible with link and node
utilizations as high as 80% or more. Some other people say no. They believe
that the variation in the aggregate traffic load due to statistical variance could be
so great as to prevent network operation at resource utilizations much above
20%. (Some speakers expressed this view at the CCITT Study Group XVIII
meeting in Melbourne in December 1991.)
If network utilization can approach 80%, then the economic viability of ATM is
difficult to question. At loadings of 20% the economics are problematic. Time
will tell.
There are two quite separate aspects of circuit switching which need to be
considered:
1. The speed of the switched links themselves
2. The speed at which a circuit can be set up and later cleared
If it takes a “long time” (whatever this means) to set up a circuit then the system
will only be efficient for dedicated transfers of large files. If the setup time is
“short” then we could perhaps make a new connection (call) for every block (or
transaction) that we want to send.
84 The system decides when the transaction is finished by waiting for a defined time interval before clearing the call.
All that said, circuit switching can be attractive and cost competitive in a number
of environments. Circuit setup time depends a lot on the attaching protocols but
the IBM ESCON channels can set up a circuit in 2 µsec. Another recently
announced IBM device performs this same function (in a more limited
environment) in 0.5 µsec.
Figure 82. Principle of the ESCON Director (Circuit Switch). Fiber optical links are used
to connect stations to an electronic space division switch.
The ESCON system was very well described in a special issue of the IBM
Journal of Research and Development (July 1992). The brief discussion here looks
at ESCON as a local communications architecture. This is only a small aspect of
the wider ESCON system.
Stations, whether controllers or processors (at the switching level the distinction
has no meaning), connect to the ESCON Director on optical fiber. The Director is
a (very) fast circuit switch. Above the physical connection structure known by
the Director there is a logical “virtual link” (channel) structure unknown to the
Director.
From the point of view of a computer engineer, the speed is quoted as 200
Mbps because this is consistent with the way it is expressed in processor
engineering. In communications, things are looked at differently. From the
point of view of a communications engineer the rate is 160 Mbps, because
that is how communications systems are usually expressed. For example, in
communications engineering the FDDI data rate is always quoted as 100
Mbps where the bit rate on the fiber is actually 125 Mbps (using 4/5 coding).
In most IBM documentation on ESCON the quoted data rate is 200 Mbps
because it is regarded as a channel.
10.1.1.2 Performance
Looking at this system as a form of packet switch, to transfer a block of data we
must:
1. Make a connection request.
2. Send the data.
3. Make a disconnection request.
Let us assume we want to transfer a 1,000-byte block. At a data rate of 160
Mbps, a 1,000-byte block takes 50 µsec to transmit. If it takes 3 µsec to establish
a connection and perhaps 2 µsec to clear it, the total “overhead” is 5 µsec. This
gives a total time of 55 µsec.
If the system were a real packet switch, we wouldn′t have the connection
establishment and clearing overheads. But a real packet switch would have to
receive the data block in full, perform processing to determine the routing, and
then retransmit the data towards its destination. For the 1,000-byte block above
that is 50 µsec times two for transmission time plus perhaps 2 µsec for route
determination within the packet switch itself. A total time of 102 µsec. In this
example the circuit switch clearly outperforms the packet switching technique. 85
(In the real environment of a computer channel I/O system performance will be
even better than this example suggests, because computers tend to transfer
more than one block of data and control information per connection.)
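The timings in this comparison can be reproduced in a few lines (the numbers are from the text):

```python
# Circuit switch vs store-and-forward timing for one 1,000-byte block.
RATE_BPS = 160e6             # ESCON user data rate
block = 1000 * 8 / RATE_BPS  # 50 microseconds on the link

circuit = 3e-6 + block + 2e-6  # connect, send, disconnect
packet = 2 * block + 2e-6      # receive in full, route, retransmit

print(round(circuit * 1e6), round(packet * 1e6))  # 55 102 (microseconds)
```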
10.1.1.3 Structure
Optical Links
The optical links use a wavelength of 1300 nm (dispersion minimum) over
either multimode or single-mode fiber. (The single-mode option is a
recently announced extension to allow for longer connection distances.)
LED transmitters and PIN detectors are used at a link speed of 200
Mbaud.
Data on the link is coded using an 8/10 coding system. The data rate on
the link is 160 Mbps (20 MBps). (This compares with the maximum
speed of the electronic channel system of 4.5 MBps.) Conceptually this
coding system is much the same as the 4/5 coding scheme used with
FDDI (this is described in more detail in 13.6.5.3, “Data Encoding” on
85 The example is a little simplistic because additional end-to-end protocol exchanges are needed in the real world. However,
they have the same effect on both circuit and packet switching systems, so they have been omitted.
86 DC balancing is not really needed on optical fiber in current single-channel systems like this one. However, balancing
enhances the operation of the electronics at either end of the circuit.
The frame switching technique is not new. Nor is it a “high speed” technology.
Understanding frame switching is, however, important to understanding the
genesis of frame relay.
In the diagram, when Box A needs to send something to Box B, header and
trailer information specific to the link control protocol is appended to the
beginning and end of the block to be sent. This block of data is called a
“frame”.
In a network situation, inside the frame there will typically be a network header
on the front of the user data.
87 In principle, the link protocol used doesn't matter at all. Frame switching interfaces exist for many different types of link
control, such as BSC and LAPB as well as SDLC.
Figure 83. Frame Switching. The network viewed from Box A “looks like” Box B but
polling and error recovery are handled locally and do not pass through the network.
Frame switching is like alternative 2 above, but the polling function is handled
locally at each end of the network. Only data and control messages are
forwarded through the network. The details of how this works vary from one
implementation to another. A number of characteristics should be noted:
• The link control from the user to the network is terminated in the nearest
network node. This means that link time-outs are no longer a problem.
Since SNA has no time-outs in its network protocols (outside the physical
and link layers), SNA systems will not have any problem with time-outs.
Some other protocols do rely on time-outs and can experience problems in
this situation.
• Many implementations of this kind allow the end-user boxes to initialize their
communications at different times. This precludes those boxes from
exchanging their characteristics using XID (eXchange IDentification)
protocols. It is this characteristic which prevents many networks from
correctly handling SNA type 2.1 nodes. These nodes rely on XID type 3 to
tell each other their characteristics and will not function correctly unless the
frame switching network understands the XID protocols and properly
synchronizes the initialization.
88 In SDLC one can address up to 255 secondary “boxes” but the control end of the link has no address - it is specified by
context.
89 Contrast this with the description of frame relay. In frame relay the link control operates across the network and thus can be
used to recover from network losses of data.
An interim technology?
While frame relay should not be regarded as a true high-speed technology, it
is extremely important because:
1. It can be implemented fairly easily on existing packet switching
equipment.
2. It can provide an immediate throughput improvement of between two to one
and ten to one over previous technologies using existing equipment.
The frame relay local address is just the address field in the data link
frame. In frame relay this is called the Data Link Connection Identifier
(DLCI).
Figure 85. Frame Format. Link control fields are absent since they are ignored by the
network.
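The DLCI occupies the address field itself. As a hedged illustration, the sketch below assumes the common two-octet address format (not spelled out in the text above), in which the first octet carries the six high-order DLCI bits and the second octet the four low-order bits, alongside the C/R, FECN, BECN, DE, and EA flag bits:

```python
# Hypothetical sketch: extract the 10-bit DLCI from a two-octet frame
# relay address field, assuming the usual two-octet layout.
def dlci_from_address(b1: int, b2: int) -> int:
    high = (b1 >> 2) & 0x3F    # upper 6 bits of the DLCI (octet 1)
    low = (b2 >> 4) & 0x0F     # lower 4 bits of the DLCI (octet 2)
    return (high << 4) | low

# DLCI 16 is a commonly assigned first PVC number
print(dlci_from_address(0x04, 0x01))  # -> 16
```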
At a detail level they are quite different. In X.25 the basic service is a virtual call
(switched virtual circuit) although permanent virtual circuits are available in
some networks. Frame relay is currently defined for permanent virtual circuits
only; however, standards bodies are working on a definition for switched
connections.
All of the above said, frame relay as an interface is much simpler than X.25 and
therefore can offer higher throughput rates on the same equipment.
90 In strict terms this kind of characterization is not very helpful any more. In the OSI model, networking is done at layer 3 -
therefore the layer at which we network IS layer 3. Functions of link control (layer 2) are paralleled in layer 4. There is a
real need to review our paradigms in this area.
SNA connections over FR networks 92 promise to be much more stable and more
efficient than similar connections over X.25 networks.
Error Recovery
Because there is a link control running end-to-end across the FR
network, network errors will be recovered by the link control.
When SNA is used over X.25 this does not happen. An error in the
network causes the catastrophic loss of all sessions using that virtual
circuit. 93
Interface Efficiency
Because there is no packetization or packet level protocol to perform,
the FR network interface is likely to use significantly less resource within
the attaching SNA product than is required to interface to X.25.
Network Management
FR has a signaling channel which allows the exchange of some network
management information between the device and the network. X.25 has
no such ability.
Nevertheless, at the current state of the definition it is not possible to
provide full seamless integration of SNA network management with the
management of the FR network over which it will run. The latest
releases of SNA equipment, however, are able to receive the network
diagnostic information from an FR network and present it to the user
integrated with the rest of the network management information.
Because you can't look past the interface into the FR network itself, to
some extent the FR network (like X.25 networks) forms a “black hole” in
the SNA network management system.
Multidrop
Although it is not in the FR standard, there is a potential ability to use FR
for limited “multidrop” connection for devices located in close physical
91 Information in this section is derived from an early SNA prototype implementation. The description is conceptual and may not
accurately reflect the operational detail of any product.
92 It is convenient to refer to “FR networks” and to “X.25 networks” to mean “networks that support FR interfaces” and
“networks that support X.25 interfaces” regardless of how they operate inside. The majority of FR networks do not use FR
protocols internally.
93 Except in the case of the IBM System/36 and IBM AS/400 which are able to use an end-to-end link protocol (called ELLC)
across the X.25 network. ELLC uses the elements of procedure of LAPB as an end-to-end network protocol.
Figure 86. Frame Relay in Relation to IEEE Standards. Frame relay functions as an
alternative MAC layer which is used by the logical link control and medium access control
functions.
The link control protocol used across the LAN is called IEEE 802.2. This is just
another link control protocol like SDLC, LAPB and LAPD (all forms of HDLC).
The difference is that SDLC, LAPB and LAPD perform the functions of framing,
transparency (via bit-stuffing), error detection and addressing. In the LAN
environment these functions are provided by the MAC protocol and in frame
relay they are provided by the frame relay link control. IEEE 802.2 is simply a
link control protocol that leaves the responsibility for framing, addressing, etc., to
the MAC function. Thus 802.2 provides exactly the function that is needed for
frame relay. In addition 802.2 uses an addressing structure that allows multiple
Service Access Points (SAPs), which provide a way of addressing multiple
independent functions within a single device.
Figure 87. Frame Relay Format as Used in SNA. The LPDU format is exactly the same as
that used in the LAN environment.
Thus the link control used by SNA across a frame relay connection is IEEE 802.2
(the LAN link control). In addition SNA devices use the congestion information
provided by the frame relay network to control the rate of data presented to the
network (flow control).
The HDLC family of link controls uses a “rotating window” scheme to provide
delivery confirmation. Multiple blocks may be sent before the sender must wait
for an acknowledgment from the receiver. This means that several blocks may
be in transit at any time, and helps compensate for propagation delays.
In IEEE 802.2 this rotating window mechanism is used for flow control across a
LAN. Since most LANs contain bridges, there is a need to control the flow of
data in the case where the bridge becomes congested. The mechanism is
exactly suited to use across a frame relay network.
The flow control mechanism operates at the sender end only and the receiver is
not involved in its operation. The transmitter is allocated a number (n) of blocks
that it may send before it must wait for an acknowledgment. The receiver
acknowledges blocks by number and an acknowledgment of block number 3 (for
example) implies correct receipt of blocks numbered 1 and 2 (but, in practice,
most receivers acknowledge every block).
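A minimal sketch of this rotating-window mechanism follows. The block numbers and window size are illustrative; the point is that the sender may have at most a window's worth of unacknowledged blocks outstanding, and that an acknowledgment is cumulative:

```python
# Sketch of a rotating (sliding) window sender, as described above.
class WindowSender:
    def __init__(self, window: int):
        self.window = window
        self.next_to_send = 1      # next block number to transmit
        self.lowest_unacked = 1    # oldest block not yet acknowledged

    def can_send(self) -> bool:
        # Sent-but-unacknowledged blocks must stay below the window size
        return self.next_to_send - self.lowest_unacked < self.window

    def send(self) -> int:
        assert self.can_send(), "window closed: must wait for an ack"
        n = self.next_to_send
        self.next_to_send += 1
        return n

    def ack(self, n: int):
        # Cumulative: acknowledging block n implies all blocks <= n arrived
        self.lowest_unacked = max(self.lowest_unacked, n + 1)

s = WindowSender(window=3)
print([s.send() for _ in range(3)])  # -> [1, 2, 3]
print(s.can_send())                  # -> False (window is full)
s.ack(2)                             # implies receipt of blocks 1 and 2
print(s.can_send())                  # -> True
```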
The described mechanism is about as much as any end-user device can do. It is
really a delivery rate control. However, it must be noted that if FR networks are
to give stable operation then they will need flow and congestion controls
internally.
11.1.8 Disadvantages
As discussed above (11.1.5, “Characteristics of a Frame Relay Network” on
page 232), faster links are required than would be needed in a better controlled
network (such as an SNA network) to handle the same amount of traffic. This is
partly because the network cannot know about priorities within the link and
partly because of the variability in frame lengths allowed.
In the environment where the cost of wide area links is dropping very quickly,
this may not be important.
The advantage of FR internally within the network then lies in the ability to
“throw away” error frames and not handle error recoveries. This means that far
fewer instructions are needed for the sending and receiving of data on
intermediate links. Buffer storage requirements within intermediate nodes are
reduced because there is no longer any need to hold “unacknowledged” data
frames after they have been sent. Standards work is under way to specify this
method of operation and the interconnections between FR nodes (using FR)
within a network.
However, it seems unlikely that FR will be widely used in this way. Many people
see FR as an interim technique which will give a substantial improvement in
data throughput on existing equipment. In the real world, networks offering FR
interfaces will also offer X.25 and perhaps LAN routing interfaces as well. These
networks will continue to use their existing internal network protocols (many use
TCP/IP internally).
94 The transmitter's “forward” path is the “backward” path of blocks being received. Hence it is the BECN bit that a transmitter
must examine.
95 Paris is a research project, not a product. Networking BroadBand Services (NBBS) architecture and its implementation in the
IBM 2220 Nways BroadBand Switch has been derived from the research work done in the Paris project.
Instead, the full route that the packet must take is included in the beginning of
the packet itself.
Note
In Figure 89, imagine that we wish to send a packet into the network at point x
and have the network deliver it to point y. The packet could go by several
routes but one possible route is through node B. The routing vector would
contain:
7,8,2,,,
• Since the packet arrives at node A, there is no entry in the vector to denote
node A.
In a practical system, there will be a Frame Check Sequence field either in the
header or at the end of the data or in both. A receiving node should check the
FCS field before routing the data, as an error in the header could cause
misrouting of the packet.
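The routing mechanism above can be sketched as follows, using the "7,8,2" routing vector from the example. Each switching node strips the leading entry from the vector and forwards the packet on that outbound link; the node and link objects here are purely illustrative:

```python
# Sketch of source routing with a routing vector of link numbers.
def forward(packet):
    route, payload = packet
    if not route:
        return ("deliver", payload)        # vector empty: we are the target
    next_link, rest = route[0], route[1:]  # strip this node's entry
    return ("send on link %d" % next_link, (rest, payload))

pkt = ([7, 8, 2], "data")
action, pkt = forward(pkt)      # at node A: send on link 7
print(action)                   # -> send on link 7
action, pkt = forward(pkt)      # at node B: send on link 8
action, pkt = forward(pkt)      # next node: send on link 2
action, _ = forward(pkt)        # vector now empty: deliver to y
print(action)                   # -> deliver
```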
This system places the responsibility for determining the route onto the sending
system. This means that the endpoint processor must have a means of
calculating an appropriate route. Hence the need for the endpoint processor to
have either a continuously updated topology database or access to a network
control function that has this information.
Sending data in the opposite direction is exactly the same process, except that
y is now the origin. This shows that a real connection between two end users
can take two completely different paths. In a practical system, for management
and control reasons, it would probably be necessary to have data belonging to a
single “call” (connection through the network) travel on the same path. There
are a number of potential ways of achieving this.
Figure 90 shows the format of a data packet. The “routing information” field
shown contains the ordered set of links specifying the route a packet must take.
Refer to Figure 89 on page 240. The numbers located within each node,
adjacent to the link attachment, represent link numbers within that node. These
are used in the following discussion.
Broadcast
In the Control field of the packet header there is a single bit which
causes the switching node to send a copy of the packet onto all its
outbound links (after removing the routing vector).
Selective Broadcast
The SID contains more information than just the identifier of the next link.
It contains 4 bits of control information. One of these is for selective
broadcast. When a SID is removed from the front of a packet, it is
examined and if the broadcast bit is “on” the packet is routed to all
outbound links on this node (except the link that the packet arrived on).
• The NCU calculates the route as a sequence of SIDs for both directions
between the end points and sends it back to the requesting EPP.
• The EPP then sends a connection setup request frame to the desired partner
EPP with the copy function set.
• The switching systems copy the request to every NCU along the desired
route. Each NCU will then check to make sure that sufficient capacity is
available for the desired connection.
This is a complex decision. For a data call, the amount of required capacity
will (on average) be many times less than the peak rate. The node must
allocate an appropriate level of capacity based on some statistical “guess”
about the characteristics of data connections. Voice and video traffic have
quite different characteristics and the NCU must allocate capacity
accordingly.
• For important traffic classes a second path may be precalculated so that it
will be immediately available in case of failure of the primary path.
• Each NCU along the path will reserve a certain amount of bandwidth for the
connection. This bandwidth reservation is repeated at intervals. If a
specified time elapses and there has not been another reservation for this
connection, the NCU will de-allocate the capacity.
• An NCU may disallow a connection request and notify the requesting EPP.
• The destination EPP replies (also copying each NCU along the path) using
the reverse path sent to it in the connection request. This ensures that both
The scheme used is called “leaky bucket” rate control. The “leaky bucket” is a
counter which has a defined maximum value. This counter is incremented (by
one) n times per second. When a packet arrives, it may pass the leaky bucket if
(and only if) the counter is non-zero. When the packet passes the barrier to
enter the network, the counter is decremented.
This scheme has the effect of limiting the packet rate to a defined average, but
allowing short (definable size) bursts of packets to enter the network at
maximum rate. If the node tries to send packets at a high rate for a long period
of time, the rate will be equal to “n” per second. If, however, there has been no
traffic for a while, then the node may send at full rate until the counter reaches
zero.
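A minimal sketch of this single leaky bucket follows. The clock is modelled by explicit tick() calls, and starting with a full counter (allowing an initial burst) is an assumption, since the text does not state the initial value:

```python
# Sketch of leaky-bucket rate control as described above.
class LeakyBucket:
    def __init__(self, max_count: int):
        self.max_count = max_count  # bucket size = largest allowed burst
        self.count = max_count      # assumed to start full

    def tick(self):
        # Called n times per second by a clock; counter never exceeds max
        self.count = min(self.count + 1, self.max_count)

    def try_admit(self) -> bool:
        if self.count > 0:
            self.count -= 1         # packet enters; counter decremented
            return True
        return False                # packet must wait for the next tick

b = LeakyBucket(max_count=3)
print([b.try_admit() for _ in range(5)])  # -> [True, True, True, False, False]
b.tick()                                  # clock adds one back
print(b.try_admit())                      # -> True
```

The long-run rate is thus bounded by the tick rate n, while the bucket size bounds the burst.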
Paris, in fact, uses two leaky buckets in series, with the second one using a
maximum bucket size of 1 but a faster clock rate. The total effect is to limit input
to a defined average rate but with short bursts allowed at a higher rate (but not
the full speed of the transmission medium). The scheme is a bit conservative
but allocates capacity fairly. Paris has an adaptive modification based on
network congestion. It can alter the maximum rates at which the buckets “leak”.
The network provides feedback via a congestion stamp on the reverse path.
This feedback is used to alter the rate at which the counters are incremented,
and thus the rate at which packets are able to enter the network.
In addition to the delivery rate control, which restricts the rate at which data may
enter the network, there are end-to-end protocols that allow for error recovery
and for flow control for each individual connection.
These controls are different depending on the type of connection. For data a
protocol is used that allows for error recovery by retransmission of error (or lost)
packets and also provides flow control to allow for speed matching between the
end users. Voice traffic does not have any retransmission for recovery from
errors, but it does need a method of playout that allows for the possibility of
irregular network delays and for lost packets, etc.
11.2.6 Interfaces
Paris requires much less adaptation function than does ATM. This is because:
• Paris sends full frames (there is no need to break logical user blocks of data
up into small units for transmission).
• Paris performs error detection on the data part of the frame as well as the
routing header. There is no need for any additional function to provide error
detection on the data.
In addition to the adaptation function there is also an interfacing function needed
which converts the network interface protocol to the internal network protocol.
This is not quite the same thing as adaptation.
12.1.1 Overview
The important characteristics of the system are as follows:
1. plaNET nodes attach to very high-speed (up to 1 Gbps) links.
2. Data is transported through the network either as ATM cells or as
variable-length “frames”. Frames and cells are intermixed on links and
within switching nodes.
96 As with other IBM Research projects, plaNET is a research prototype system, not a product. IBM can make no comment about
possible use of plaNET technology in future IBM products.
12.1.2 Origin
The plaNET system is an integration of concepts and techniques taken from
earlier IBM Research work on the Paris system (see 11.2, “Packetized Automatic
Routing Integrated System (Paris)” on page 238) and from ATM technology (see
9.1, “Asynchronous Transfer Mode (ATM)” on page 182). The Orbit LAN is a
buffer insertion ring and uses the protocols described under the heading
“MetaRing” (although there is a crucial difference in the structure of the LAN
address). See 15.1, “MetaRing” on page 343.
This is quite a different thing from the conception of a network common within
the telecommunications industry. In the PTT “world” the network is seen as
something that extends from one “service interface” to another service interface
usually across a wide area. 97 The service interface is rigorously defined and is
designed to isolate the end user from the network. (Because the network is
provided by the PTT, and there are many end users, an end user must never be
able to interfere with or even to know about the internals of the PTT network.)
X.25 and ATM (B_ISDN) are just such isolating interfaces. This issue is
discussed further in Chapter 7, “Private Networks in the High-Speed
Environment” on page 153.
97 That is, the network as conceived by PTTs covers only layers 1 to 3 of the ISO model. In contrast plaNET (and SNA) extends
from user interface (layer 6/7) to user interface. Thus plaNET includes layers 4 to 6 which are not considered in the
traditional PTT view.
98 This does not mean the user application program but rather the supporting system software.
99 This is not exactly true: the capacity allocation function of nodes along the path does know about connections (in order to
allocate capacity). However, there is no routing table and the switching function within each node is not aware of the
connection.
Integration of WAN and LAN environments has been achieved by using the same
addressing techniques (ANR, label swap, tree) with identical header formats for
both environments. When data is received on a point-to-point link (such as
defined in ATM) the receiver knows which node sent it. It came from the node at
the other end of the link (there is only one). On a LAN there are many peer
devices connected to a common medium. When a frame is sent on a LAN,
usually there are the 48-bit destination and origin addresses in the header for
the LAN adapters to use to decide which adapter should receive it. Using the
same addressing structure as the WAN means that the usual LAN addresses are
not there!
The key to system operation is that Orbit adapters do NOT recognize their 48-bit
address; they use a short form local address. When the LAN is initialized, each
Orbit station is dynamically allocated a number (6 bits). In either the label swap
or ANR routing environments, Orbit stations recognize this address, not their
48-bit addresses.
Switching Nodes
The switching nodes consist of a backplane and up to eight link or orbit
adapters. Each link adapter may operate at any speed up to 1.1 Gbps
but in practice links run at around 800 Mbps.
The backplane has an aggregate throughput limit of about 6.4 Gbps and
therefore there is a very low probability of congestion. (Congestion is
possible if a large number of broadcast requests are received at one
time, but the probability is very low.)
Link adapters are electronic with coaxial cable interfaces. These are
connected to wide area “dark” fiber or to Sonet (SDH) links by the use of
external converters.
Orbit Adapters
Orbit LANs can be dual ring, single ring, or bus in configuration. Each
Orbit LAN operates at a speed of just above 1 Gbps. A plaNET node may
connect up to eight Orbit LANs (but in that case could not connect to a
link because of the limit of eight adapters). A special Orbit adapter was
built for the IBM Micro Channel (the I/O bus in the IBM RISC
System/6000, the IBM PS/2 and some models of the IBM 4300 series).
The same Orbit adapter can be used in both RS/6000s and in PS/2s
although different software is, of course, required.
All end users in the system are connected through Orbit LANs; thus, no
link-connected end users are possible.
RS/6000 Control Point and Management Processor
In order for a plaNET node to operate it must be supported by a control
point processor. This is an application within any connected RS/6000.
Local area networks (LANs) and metropolitan area networks (MANs) consist of a
common cable to which many stations (devices) are connected. Connection is
made in such a way that when any device sends, all devices on the
common medium are able to receive the transmission. This means that any
device may send to any other device (or group of devices) on the cable.
This gives the very obvious benefit that each device only needs one connection
to the cable in order to communicate with any other device on the cable.
An alternative would be to have a pair of wires from each device to each
other device (meaning n × (n-1)/2 connections between n devices would be
needed - for 10 devices this would mean 45 separate connections).
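The n × (n-1)/2 figure quoted above is simply the number of distinct pairs among n devices:

```python
# Number of point-to-point links needed to fully mesh n devices.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

print(full_mesh_links(10))  # -> 45
```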
Because the LAN is a shared medium, if two devices try to send at the same
time, then (unless something is done to prevent it) they will interfere with each
other and meaningful communication will be impossible. What is needed is
some mechanism to either prevent devices from attempting to send at the same
time, or to organize transmission in such a way that mutual interference does
not result in an inability to operate.
13.1.1 Topologies
There is an almost infinite variety of ways to connect LAN cable. The two most
common types of LAN connection are illustrated in Figure 96.
The LAN protocol used, the configuration of the cable, and the type of cable are
always intimately related. Cables vary from two-wire twisted pair telephone
cable to optical fiber and speeds vary from 1 Mbps to many gigabits per second.
Figure 98 on page 260 shows a LAN with six devices (labelled A to F) attached.
The arrow above each device represents the flow of data being generated by the
device for transmission to some other device on the LAN. An “o” represents a
100 There is an exception in the case of analog radio-based LANs, where the use of Frequency Modulation (FM) transmission
ensures that the strongest signal is received correctly.
Figure 98. Transactions Arriving at a Hypothetical LAN. Arrivals are separated in time
and space. The problem is how to decide which device can send next.
If the system is to be satisfactory, then each user must get a “fair share” of the
LAN. This is usually taken to mean that:
• Data should be sent on the LAN in the order that it “arrives” (is generated).
• No device should be able to monopolize the LAN. Every device should get
equal service.
• Priority may be given to some types of traffic or user in which case higher
priority traffic should receive access to the LAN before lower priority traffic.
• Within each priority level data should be sent in the order that it was
generated and every device should get equal service.
Even though they are distributed over many locations separated perhaps by
great distances, transactions arriving at the LAN form a single logical queue.
The objective is to give access to the LAN to transactions in the queue in FIFO
(First In First Out) order.
If each device was able to know the state of the whole queue and schedule its
transmissions accordingly then the system could achieve its objective.
This, then, is the problem for any LAN system. A LAN system aims to provide
“fair” access for all attached devices but it is not possible for each device to
know about the true state of the notional “global queue”.
Fairness
A real communication system has other things to worry about than fairness.
Users are concerned with what functions a system delivers and, importantly,
at what cost. The challenge in designing a LAN protocol is to deliver the
optimal cost/performance. Fairness and efficiency are important, but only as
means to that overall objective.
SA Send Anyway
When a device has something to send it just sends anyway without
regard for any other device that may be sending.
This has never been seriously used for a cable LAN but was used in the
“Aloha” system where multiple devices used a single radio channel to
communicate one character at a time. Using Frequency Modulation
(FM), the strongest signal will be correctly received and the weaker
signal(s) will be lost.
This technique works but at very low utilizations. It requires a higher
layer protocol capable of retrying if data is lost.
Contention with Carrier Sense (Carrier Sense Multiple Access (CSMA) with or
without Collision Detection (CD))
Using this technique, before a device can send on the LAN it must
“listen” to see if another device is sending. If another device is already
sending, then the device must wait until the LAN becomes free. Even so,
if two devices start sending at the same time there will be a collision and
neither transmission will be received correctly. In CSMA/CD, devices
listen to their own signal to detect collisions. When a collision occurs the
devices must wait for different lengths of time before attempting to retry.
This collision detection feature is present in some techniques and not in
others. Either way, each user of the LAN must operate an “end-to-end”
protocol for error recovery and data integrity.
In all CSMA type LANs there is a gap in time between when one device
starts to send and before another potential sender can detect the
condition. The longer this gap is, the higher the chance that another
sender will try to send and, therefore, the higher the possibility of
collision. In practice, one of the major determinants of the length of the
gap is the physical length of the LAN. Thus the practical efficiency of this
kind of LAN is limited greatly by the physical length of the LAN. The
utilization of the carrier medium (usually a bus) is limited more by
collision probabilities than by data block sizes. In some situations, 20%
is considered quite good.
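The text says only that colliding devices "wait for different lengths of time before attempting to retry". One common concrete form of this, used by standard Ethernet though not named above, is truncated binary exponential backoff: after the k-th successive collision a station waits a random number of slot times chosen from [0, 2^min(k,10) - 1]:

```python
# Hedged sketch of truncated binary exponential backoff.
import random

def backoff_slots(collisions: int) -> int:
    k = min(collisions, 10)            # "truncated" after 10 doublings
    return random.randrange(2 ** k)    # uniform in [0, 2^k - 1]

# After each collision the range of possible delays doubles, so two
# colliding stations quickly pick different delays and stop colliding.
for c in (1, 2, 3):
    print(c, backoff_slots(c))
```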
Performance:
101 There are many kinds of insertion ring. One of the earliest was implemented on the IBM Series/1 computer in 1981.
MetaRing is a highly sophisticated version of an old principle.
The accepted formula for the absolute maximum throughput on a CSMA/CD LAN
is:
   Maximum Utilization = 1 / (1 + 6.44 × ρ)

   where ρ = (end-to-end delay) / (transmission time)
Applying the formula to a regular 802.3 LAN at 10 Mbps, for a block length of
1000 bits (100 µsec transmit time), and a LAN length of 2 km (a delay of 10.4
µsec), we get a maximum utilization of 58%. At 100 Mbps the maximum
utilization (from the formula) is 13%. It is important to realize that these
percentage figures represent a theoretical ceiling. To achieve this
utilization, attached stations would need an infinite queue length (and an
infinite access delay). Thus when evaluating queueing delays we need to treat
this “maximum utilization” as the media speed. (See Appendix C, “Queueing
Theory” on page 413.)
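The formula can be applied directly to the two cases in the text (the 10 Mbps result comes out near 60% here; the text quotes 58%, presumably with slightly different parameters):

```python
# Maximum utilization of a CSMA/CD LAN, from the formula above.
def max_utilization(delay_us: float, transmit_us: float) -> float:
    rho = delay_us / transmit_us
    return 1.0 / (1.0 + 6.44 * rho)

# 10 Mbps, 1,000-bit block (100 µsec transmit), 2 km LAN (10.4 µsec delay)
print(round(max_utilization(10.4, 100.0), 2))   # -> 0.6
# At 100 Mbps the same block takes only 10 µsec to transmit
print(round(max_utilization(10.4, 10.0), 2))    # -> 0.13
```

Because the delay term is fixed by the LAN's physical length, raising the data rate shrinks the transmit time, inflates ρ, and collapses the achievable utilization.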
If we want a “reasonable” access delay for a station, we can look at Figure 175
on page 414. A queue length of two represents 70% utilization (roughly) of the
LAN. This is 70% of the above maximum, not 70% of the medium speed.
Thus for queueing delay representing 70% utilization for the regular 802.3 case
above, the maximum allowable LAN utilization is 58% x 70% or a useful
maximum utilization of 40%.
Figure 99. Maximum Utilization of a CSMA/CD LAN. Block size is expressed in 10s of
microseconds so at 100 Mbps the block size units represent thousands of bits.
For a 100 Mbps speed the block size scale shown goes from 1,000 to 10,000 bits.
(The maximum block length allowed on an 802.3 LAN is 1,500 bytes - 12,000 bits.)
It is easy to reach conclusions from the graph:
Figure 100 shows what happens as more and more load is presented to an
Ethernet. By “presented” here we mean that a station on the LAN has the data
and is trying to send it. Notice that when load gets too high for the LAN, instead
of data continuing to be transported at some maximum figure, LAN operation
collapses and nothing at all gets through! Why, then, does it work so well? (And it
does!)
Figure 100. Relationship between Offered Load and Throughput on Ethernet. As “offered
load” increases so does throughput (the network handles the load) until a point is reached
where too many stations are trying to send. At this point the network collapses and no
traffic gets through.
Several proposals were considered by the IEEE. A good solution should provide
the following:
• It must be low in cost. Otherwise the user would be better off going to FDDI
(on copper or fiber).
This is not a big item, since all of the timings and parameters of 10 Mbps
Ethernet are maintained with 100Base-T. This means that the performance
characteristics (maximum utilization, etc.) are the same as for 10Base-T. The
major difference is in geographic extent, where all 100Base-T devices must be
located within 100 meters of a hub device. Bus connection as used in traditional
Ethernet is not possible.
It is widely believed that 100 Mbps data rates are achievable over relatively
short distances of UTP-3 (say 100 meters) but there is significant disagreement
over what the best approach might be and what the adapters might end up
costing. It may be that the required data rate is achievable only at the cost of
using very sophisticated digital signal processing techniques.
One significant problem is that the whole reason for wanting to use two pairs
(when most cables carry four at least) is so that the other pairs can be used for
other purposes (such as telephone). However, use of the additional pairs for
other things (especially decadic dialed telephone) introduces “crosstalk” noise
into the pairs we are using for 100Base-T2. But if you can't use the additional
pairs for other purposes then you might as well use the 100Base-T4 (4-pair)
solution.
Originally conceived as one alternative for "100 Megabit Ethernet", the
proposal was later expanded to allow the connection of token-ring stations as well.
This is a highly practical solution to 100 Mbps LAN networking which addresses
both the performance and the cabling problems and appears to meet the
objectives stated above. There are three key concepts:
2. The bits of the MAC frame are considered to be a stream of 5-bit quintets.
3. Quintets are allocated to transmission channels on a round-robin basis such
that channel 0 is allocated quintets 1,5,9,13... Channel 1 is allocated quintets
2,6,10,14... Channels 2 and 3 are likewise allocated quintets.
4. Operation of a single channel is shown in Figure 102.
Figure 102. 100VG Data Encoding and Scrambling Process (Single Channel)
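The round-robin allocation in step 3 can be sketched as follows (the function name is our own; quintet numbering follows the text):

```python
def allocate_quintets(quintets):
    # Distribute successive 5-bit quintets across the four 100VG
    # transmission channels: channel 0 gets quintets 1, 5, 9, 13...,
    # channel 1 gets quintets 2, 6, 10, 14..., and so on.
    channels = [[], [], [], []]
    for index, quintet in enumerate(quintets):
        channels[index % 4].append(quintet)
    return channels
```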
Figure 103. Isochronous Ethernet Network Topology. The dotted lines represent
6.144 Mbps isochronous channels.
104 You could call this the baud rate, but when 4/5 code is used it is probably better to say that the baud rate is 4.056 MBaud with
5-bit symbols.
13.4.1.1 Gateways
Of course, the system is not very useful without connection to the wide area.
ISDN, T1 and E1 type connections would form modules within the hub and these
would be connected to the isochronous switch. The gateway function is a part of
the hub. Thus continuous isochronous channels could be constructed from any
workstation through the wide area network.
13.5 Token-Ring
Many excellent descriptions of the detailed operation of token-ring protocol are
available and so it will not be described in detail here. However, there are a
number of important features of token-ring that need to be reviewed in order to
understand why we need to use different protocols at very high speeds.
Configuration
A number of devices called “stations” 105 are connected by a series of
point-to-point links such that the whole forms a “ring”. Data is sent in
one direction only so that if station A (in Figure 104 on page 278) sends
data to station B then the data will pass through stations D, E, F, G and C
before it is received by station B. Data links are usually electrical
(although they may be optical) and the data rate is either 4 or 16 million
bits per second.
A ring is usually wired in the form of a “star”. A device called a Ring
Wiring Concentrator (RWC), which may be active or passive, is used at
the center and two pairs of wires (in the same cable) are connected
(point-to-point) from the RWC to each station. This is done so that
individual stations may be added to or removed from the ring
conveniently.
Figure 105. Token-Ring Frame Format. A token consists of the SD (Start Delimiter), AC
(Access Control) and ED (End Delimiter) fields alone. Numbers denote the field lengths
in bytes.
The concept described above is very simple but a number of things must be
done to make sure that the concept works in practice:
Minimize Delay in Each Station
In some early LAN architectures, a whole data frame was sent from one
station to another and then retransmitted. This meant that the transmit
time for the frame was added to the transit time for every station that the
message passed through. This protocol actually works quite well where
there are very few (for example, four) devices in the LAN. In a LAN of
perhaps 100 devices, the “staging delay” becomes critical.
One key component of TRN is its use of Differential Manchester Coding
at the electrical level. This is discussed in 2.1.8, “Differential Manchester
Coding” on page 24. Potentially, this enables a ring station to have only
1 bit of delay, but in current adapters this is 2½ bits.
The monitor station generates a clock, and every other station derives its
timing from the received data stream and uses this derived timing to
drive its own transmitter. This is done to avoid having a large “elastic
buffer” (delay) in each node. But there is a down side. Jitter (the very
small differences in timing between the “real” timing and what the
receiver is able to recover - see 2.1.6.2, “Jitter” on page 21) adds up,
and after a while (240 stations) threatens to cause the loss of data. The
monitor station (but only the monitor station) indeed contains an elastic
buffer to compensate for jitter around the ring.
Structure of the Token
With only a one-bit delay in each station, how can a station receive the
token and then send data? It might repeat the token to the next station
before it had the opportunity to start transmission.
The technique used here relies on the fact that the token is a very short
(24 bits) fragment at the beginning of a frame header. Bit 3 of the Access
Control field determines whether this is a token or the beginning of a
frame. So, when a station has something to send, it monitors for a
token. When it detects the token it changes the token bit in the AC field
to mark the token as "busy" and then appends its data.
When the ring speed is increased from 4 Mbps to 16 Mbps several things
happen:
• Data transfer is faster. It takes less time to transmit a frame.
• Staging delay in each node is less. Delay stays at 2½ bits but a bit now
takes ¼ of the time.
• The speed of light (and of electricity) hasn't changed at all! A major
component of ring latency (propagation delay) is unchanged.
106 Electricity travels on twisted pair media at about 5 µsec per kilometer.
For operation at 16 Mbps, the token-ring protocol is modified such that when a
station finishes transmission it will immediately send a free token. This is called
“early token release”.
As ring speed is increased further, the TRN principle will still operate, but
throughput does not increase in the same ratio as the link speed:
Latency
When a station has transmitted its one frame it must send a token to let
the next station have a chance. It takes time for the token to travel to
another station and during this time no station can transmit - the ring is
idle. As ring speed is increased the transmission time becomes shorter
but this latency (between transmissions) is unchanged. This means that
in percentage terms, latency becomes more significant as speed is
increased.
The situation could be improved by allowing a station to send multiple
frames (up to some limit) at a single visit of the token. (FDDI does just
this.)
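A back-of-envelope model makes the point concrete. The numbers below are illustrative assumptions, not figures from the text: one frame is sent per token visit, so the ring is busy for the frame's transmission time and then idle while the token travels on.

```python
def utilization(frame_bits, speed_bps, latency_s):
    # Fraction of time the ring carries data when each token visit
    # carries exactly one frame, then idles for the ring latency.
    tx_time = frame_bits / speed_bps
    return tx_time / (tx_time + latency_s)

# For a 12,000-bit frame and 50 microseconds of ring latency
# (both illustrative), utilization falls from roughly 98% at
# 4 Mbps to roughly 71% at 100 Mbps: the latency is fixed while
# the transmission time shrinks.
```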
No “Guaranteed Bandwidth”
There is no mechanism available to guarantee a station a regular
amount of service for time-critical applications.
No “Isochronous” Traffic
Isochronous traffic (meaning non-packetized voice and video) is different
from the guaranteed bandwidth real-time characteristic mentioned in the
previous point. TRN does not allow this either.
Potentially Wasted Capacity
Only one active token may be on the ring at any time. This means that
only one station may transmit at any time. In the case (in the diagram
above) where station E is transmitting to station F, station C might
perhaps transmit to station A, thus doubling the throughput of the ring - if
a protocol could be found that made this possible.
107 In FDDI the word “synchronous” is used to mean “traffic which has a real time requirement”. That is, to transmit
synchronous traffic a station must gain access to the ring and transmit its frames within a specified time period. This is not
the usual meaning of the word synchronous. See the description in Appendix D, “Getting the Language into Synch” on
page 421.
Should there be a break in the ring, the stations can "wrap" the ring through
themselves. This is shown in Figure 107. The secondary ring is used to bridge
the break in the primary ring by carrying traffic back along the operational
route.
The ring wiring concentrator could be a simple device which performs only the
RWC function or it could be quite complex containing a class A station with the
ring monitor function as well.
FDDI is a little more complex than suggested above, due to the need to handle
synchronous traffic. There are three timers kept in each ring station:
Token Rotation Timer (TRT)
This is the elapsed time since the station last received a token.
Target Token Rotation Timer (TTRT)
This is a negotiated value which is the target maximum time between
opportunities to send (tokens) as seen by an individual station. TTRT has
a value of between 4 milliseconds and 165 milliseconds. A
recommended optimal value in many situations is 8 milliseconds.
Within the asynchronous class of service there are eight priority levels. In
token-ring a token is allocated a priority using three priority bits in the token - a
station with the token is allowed to send frames with the same or higher priority.
In FDDI the priority mechanism uses the Token Rotation Timers rather than a
specific priority field in the token.
The sending station must monitor its input side for frames that it transmitted and
remove them. A receiving station only copies the data from the ring. Removal
of frames from the ring is the responsibility of the sender.
When a station completes a transmission, it sends a new token onto the ring.
This is called “early token release”. Thus there can only be one station
transmitting onto the ring at one time.
In summary:
• A token circulates on the ring at all times.
• Any station receiving the token has permission to transmit synchronous
frames.
• If there is time left over in this rotation of the token, the station may send as
much data as it likes (multiple frames) until the target token rotation time is
reached.
• After transmission the station releases a new token onto the ring.
• Depending on the latency of the ring, there may be many frames on the ring
at any one time but there can be only one token.
The power levels are expressed in dBm. 109 Two different transmitter power
ranges and two different receiver sensitivity “categories” are specified. These
are:
Most of the time a station simply passes data on around the ring. The need is to
pass this data with minimal delay in each station. This means that we need to
start transmitting a block towards the next station before it is completely
received. The received signal arrives at the rate of the upstream station's
transmitter. When data is sent onward to the next station in the ring, it is sent at
the rate of this station′s local oscillator. So data is being received at a different
rate from the rate that it is being transmitted!
The FDDI specification constrains the clock speed to be ±0.005% of the nominal
speed (125 megahertz). This means that there is a maximum difference of 0.01%
between the speed of data received and that of data transmitted.
When a station has no data to send and is receiving idle patterns, the elasticity
buffer is empty. When data begins to arrive, the first 4 bits are placed into the
buffer and nothing is sent to the station. From then on, data bits are received
into the buffer and passed on out of the buffer in a FIFO manner.
If the transmit clock is faster than the receive clock, then there are (on average)
4.5 bit times available in the buffer to smooth out the difference. If the receive
clock is faster than the transmit clock, there are 5 bit positions in the buffer
available before received bits have to be discarded.
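The 4.5-bit figure follows directly from the clock tolerance and the maximum FDDI frame length of 4,500 bytes (a figure from the FDDI standard rather than from this section):

```python
CLOCK_TOLERANCE = 0.00005              # each clock: within 0.005% of nominal
MAX_CLOCK_DIFF = 2 * CLOCK_TOLERANCE   # receive vs. transmit: 0.01% worst case

# 4B/5B expands every 4 data bits into 5 line bits, so a maximum-length
# 4500-byte frame occupies 45,000 bit times on the medium.
line_bits = 4500 * 8 * 5 // 4
slip_bits = line_bits * MAX_CLOCK_DIFF   # worst-case slip over one frame
```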
While this mechanism introduces additional latency into each attached station, it
has the advantage that it prevents the propagation of code violations and invalid
line states.
The net effect of 4B/5B encoding and NRZI conversion is that the maximum
length of signal without a state change is 3 bits.
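This run-length property can be checked mechanically. The sketch below uses the standard FDDI 4B/5B data-symbol table; in NRZI a '1' produces a transition and a '0' does not, so the longest run without a state change equals the longest run of zeros across any pair of adjacent code groups:

```python
# Standard FDDI 4B/5B data symbols (data nibble -> 5-bit code group).
CODE_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def max_run_without_transition():
    # Checking every adjacent pair of code groups is sufficient,
    # since no code group consists entirely of zeros.
    worst = 0
    for a in CODE_4B5B.values():
        for b in CODE_4B5B.values():
            longest_zero_run = max(len(run) for run in (a + b).split("1"))
            worst = max(worst, longest_zero_run)
    return worst
```

The function returns 3, matching the statement in the text.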
This then is one of the major reasons for using the Manchester code for TRN.
Because of the guaranteed large number of state transitions in the code, the
recovery of accurate timing information is much easier and the necessary
circuitry is simpler and lower in cost.
It is assumed by FDDI that IEEE 802.2 logical link control will be used with FDDI,
but this is not mandatory.
The FDDI standard is structured in a different way from the others. Station
management is not a new function. Most of its functions are performed, for
example, by token-ring, but the functions are included in the physical and MAC
components. Also, the physical layer is broken into two to facilitate the use of
different physical media.
The delay in a TRN node is around two bits. In an FDDI node the delay will
depend on the design of the particular chip set but it is difficult to see how the
delay could be less than about 20 bits. This means that an FDDI ring will have a
longer latency than a 16 Mbps token-ring of the same size and number of
stations.
FDDI follows the same “early token release” discipline as 16 Mbps token-ring but
still only one station may transmit at any time.
Token Holding Time (THT) is the critical factor. If THT is relatively short, then a
station may only send a small amount of data on any visit of the token. If it is
set large then a station may transmit a lot of data. Since it can take a relatively
long time for the token to go from one station to another, during which no station
may transmit, the longer the THT the greater the data throughput of the ring.
A short THT means that “ring latency” can be relatively short, so that the delay
for a station to gain access to the ring is also short. A short THT therefore is
suitable for support of real-time applications. If the THT is very short, the system
gives better response time but low overall throughput. If it is set very long, then
you get a high throughput but a relatively poor response time.
The key tuning parameter is the “Target Token Rotation Time” (TTRT). At ring
initialization, all stations on the ring agree to the TTRT (the shortest TTRT
requested by any node is adopted). Stations then attempt to meet this target by
limiting their transmissions. TTRT is a parameter which may be set by system
definition in each node.
Work reported by Raj Jain (referenced in the bibliography) suggests that a value
of 8 milliseconds is a good compromise in most situations.
Of course, there may be only one token on the ring at any time and only one
station may transmit at any time. In a long ring (a large number of stations
and/or a great geographic distance), this represents some wasted potential.
13.7.1 SDDI
In 1992 a group of nine suppliers (one of which was IBM) announced a
specification and a series of products for the use of FDDI protocols over installed
STP cable plant. Operational equipment implementing this specification first
became available in 1992.
Transmission distance between workstation and hub (or between two directly
connected workstations) is limited to 100 meters.
This is intended for use in the single-ring configuration for the attachment of
individual workstations - it is not intended for use as the dual-ring backbone.
This is mainly because of the distance limitation.
1. Operation of the fiber-optic FDDI physical layer is unchanged . That is, the
TP-PMD (Twisted Pair Physical Medium Dependent) function presents data to
the FDDI physical layer and receives data from it exactly as if the data were
coming-from/going-to a fiber.
2. Figure 111 on page 290 illustrates the physical layer's operation. The
TP-PMD replaces only the detector and transmitter functions leaving
everything else the same.
3. In the transmit direction, data is input to the PMD in the top left-hand box of
Figure 113. Notice how the data, having been converted to NRZI by the FDDI
physical layer, is immediately converted back to NRZ form.
4. The data is scrambled (randomized) to provide a more uniform transmitted
spectrum. Scrambling is discussed in 2.1.17, “Scrambling a.k.a.
Enciphering” on page 36.
5. The data is now converted to MLT-3 code (see 2.1.15.3, “Multi-Level Transmit
- 3 Levels (MLT-3)” on page 35) and sent on the outbound link.
6. On the input side the reverse happens, but note that the input is clocked
from the speed of the input line and the output is clocked from the local
oscillator in the adapter (node).
Within the technical community (and within the standards community) it is widely
accepted that transmission of FDDI protocols over distances of up to 100 meters
on UTP is technically possible (there are a number of proposed techniques).
There are two problems:
1. It is felt by many that sophisticated coding and signal processing techniques
will probably be needed to achieve the objective.
2. Even with advanced techniques there is disagreement over whether the
spurious (EMC) emission limit (imposed by law in some countries) can ever
be met. Certainly, if these limits can be met without shielding then the cable
installation standards will need to be tight and rigorous. (A badly installed
STP system can cause excessive EMC.)
These problems can be summarized in one word - cost. The challenge is not so
much to build a system that will work as to build one that is cost effective.
13.8 FDDI-II
FDDI-II is an extension of FDDI that allows the transport of synchronized bit
streams, such as traditional (not packetized) digital voice or “transparent” data
traffic, across the LAN. This is called “isochronous” traffic. 110
Note: The FDDI-II standard is still in the process of development and is not yet
formally complete. While it seems unlikely that the broad concepts discussed
here will change, anything is possible.
Most varieties of LAN (Ethernet, TRN, etc.) can handle voice traffic if an
appropriate technique of buffering and assembly into packets is used. Regular
FDDI (1) is the best at this because its timed protocol allows a station to get
access at relatively regular intervals. However, these protocols are primarily
data protocols and cannot provide transparent carriage for an isochronous bit
stream such as unpacketized voice.
The key to understanding FDDI-II is the fact that the FDDI protocols are used
unchanged but travel within one channel of a time division multiplexed frame.
Isochronous traffic (voice, etc.) is handled by the TDM separately from the FDDI
data protocol. Looked at from the viewpoint of what happens on the LAN cable
itself, FDDI and FDDI-II are utterly different.
Figure 114 on page 296 shows a highly conceptualized structure of the FDDI and
FDDI-II nodes. The difference is that a time division multiplexor has been placed
between the FDDI media access control and the physical layer protocol. In
addition, there is an “isochronous medium access control” which provides
continuous data rate services.
110 It is very easy to get confused here by the terminology. See Appendix D, “Getting the Language into Synch” on page 421
for more information on the use of the words “synchronous”, “isochronous”, etc.
13.8.1 Framing
Like most voice-oriented TDM systems (ISDN_P, DQDB, SDH/Sonet, etc.), FDDI-II
uses a fixed format frame that is repeated every 125 µsec. 111 Each 125 µsec
frame is called a “cycle”. At 100 Mbps each cycle contains 1560 bytes plus a
preamble. The ring may operate at different speeds but can only change in
6.144 Mbps increments (because of the frame structure).
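The 6.144 Mbps granularity comes straight from the cycle arithmetic. The sketch below assumes a wideband channel carries 96 bytes per cycle, which is consistent with the 6.144 Mbps figure in the text:

```python
CYCLES_PER_SECOND = 8000          # one cycle every 125 microseconds

def channel_rate_bps(bytes_per_cycle):
    # Each byte carried in every cycle contributes 8 * 8000 = 64 kbps.
    return bytes_per_cycle * 8 * CYCLES_PER_SECOND

# One byte per cycle -> 64 kbps (standard digitized voice);
# a 96-byte wideband channel -> 6.144 Mbps.
```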
111 One frame per 125 µsec equals 8000 frames per second. If a single slot is 8 bits then this gives 64 Kbps - the speed of
“standard” digitized voice.
112 Because FDDI-II uses 4/5 code for sending on the link FDDI defines fields in terms of 4-bit “symbols”.
The WBCs and the DPG are byte interleaved with one another within the frame.
There is only one packet data channel. When one or more wideband channels
are allocated to packet data they are concatenated with each other and with the
dedicated packet group to form a single continuous bit stream. This continuous
bit stream is recovered and reconstructed at every node. The FDDI protocol is
used to operate this channel exactly as if it were the only bit stream on an
ordinary FDDI ring. Figure 116 shows the packet data channel as it exists within
the TDM frame. In the example four WBCs (labeled C to F) are concatenated
with the DPG to form a single contiguous “clear channel”.
Figure 116. Derivation of Packet Data Channel from the TDM Frame. In this example
WBCs 12 to 15 (labelled C, D, E and F) are concatenated with the DPG to form the packet
data channel.
Notice that each DPG contains a single byte from each WBC. So there are 96
DPGs in each frame. Because the frame rate is 8000 per second (frame is 125
µsec long), each byte represents a rate of 64 Kbps.
13.8.3 Operation
Initialization
The FDDI-II ring is initialized in “basic mode”. Basic mode is the name
given in FDDI-II for regular FDDI. This is used to set up the timers and
parameters for the FDDI data protocol.
If every active station is capable of operating in “hybrid mode” then after
initialization, the ring may switch its operation to this mode. In hybrid
mode the “cycle master” station creates and administers the TDM
structure thus making the isochronous circuit-switched service available.
Isochronous Bandwidth Allocation
The cycle master station may change the channel allocations (change a
WBC from isochronous operation to packet mode or vice versa) at almost
any time.
In order to change modes without disrupting anything, the cycle master
waits until it has the token. This means that no data traffic is using the
WBCs. The cycle master then changes the programming template in the
cycle header to reflect the new status. Stations inspect the cycle header
in each cycle to demultiplex the contents of that cycle, so the allocations
change in the very next cycle.
Allocating Isochronous Bandwidth to a Station
When a station wants to set up an isochronous circuit, its management
function sends a request to a channel allocator. Operation takes place
as follows:
1. The requesting station sends a control message (as a normal FDDI
frame) to its associated channel allocator. This request contains the
number of channels (64-Kbps slots) required as well as a minimum
number of channels that would be acceptable if the full number
cannot be allocated.
113 This is the reason we need the two functions of channel allocator and wideband channel allocator.
The operation of a gateway through ISDN primary rate at any isochronous speed
in excess of a single 64 Kbps channel is a technical challenge. While T1, E1, and
FDDI-II keep groups of 64 Kbps channels together and allow them to be used as
a single wideband channel, ISDN does not. When WAN connection is through
ISDN, you need some means of synchronizing data transfer on multiple channels
if you want to construct channels of bandwidth greater than 64 Kbps (such as 384
Kbps).
It should be pointed out that as yet there is no standard available for the
operation of FDDI-II to ISDN or FDDI-II to T1/E1 gateways.
The DQDB protocol was designed to handle both isochronous (constant rate,
voice) traffic and data traffic over a very high-speed optical link.
13.9.2 Concept
A DQDB MAN is in many ways just like any other LAN. A number of stations
(nodes) are connected to a common medium which allows data to be sent from
one node to another. Because of the use of a shared medium, there must be an
access protocol to control when a node is allowed access.
Unlike most LANs, it is intended for operation over a wide geographic
area and at bit rates of up to 155 Mbps. Another difference from many LANs is
that it is also designed to handle isochronous (for example, non-packetized
voice) traffic.
13.9.3 Structure
114 Notice this is the same length as for Asynchronous Transfer Mode (ATM).
Figure 118. DQDB Frame Format as Seen by the DQDB Protocol Layer. Not shown is a
4-byte slot prefix used by the physical layer.
At the physical layer, preceding each slot there is a 4-byte field consisting of a
2-byte slot delimiter and 2 bytes of control information, which are used by the
layer management protocol. The slots are maintained within a 125 µsec frame
structure so that isochronous services can be provided.
115 An enhancement aimed at developing an “eraser node” which could allow slot reuse is under consideration by the IEEE 802.6
committee.
116 See 13.1.2, “Access Control” on page 259 for an explanation of the notion of a global queue.
What the node really does is to keep track of its position in the order of requests
from itself and from nodes downstream of itself on the bus.
117 The description here is conceptual. In reality the counters operate slightly differently from the way described but the net
effect is the same.
13.9.4.1 Priorities
There are four priorities defined in DQDB. To implement this there are three
request bits in a cell header. What happens is that each node must keep three
request counters. A passing empty slot decrements the highest priority
non-zero counter. A passing request increments the request counter for the
corresponding priority level and the request counters of all lower priority levels.
High priority traffic is always sent before traffic of lower priority. 118
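A conceptual sketch of these counters follows. The text itself notes that real implementations operate slightly differently while producing the same net effect; the class and method names here are our own:

```python
class RequestCounters:
    def __init__(self, levels=3):
        # One counter per request priority level; index 0 is lowest.
        self.counters = [0] * levels

    def passing_request(self, level):
        # A passing request increments its own level's counter and
        # the counters of all lower priority levels.
        for lvl in range(level + 1):
            self.counters[lvl] += 1

    def passing_empty_slot(self):
        # A passing empty slot satisfies one outstanding request at
        # the highest non-zero priority level.
        for lvl in reversed(range(len(self.counters))):
            if self.counters[lvl] > 0:
                self.counters[lvl] -= 1
                return
```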
The node is described as being “external” to the buses. That is, data “passes
by” the node and the node writes into empty slots using an OR function. (Slots
are preformatted to all zeros and when data is written to them it is done by
ORing the data onto the empty slot.) It is said that this gives better fault
isolation than token-rings because node failures that don't result in a continuous
write operation will not affect the operation of the bus.
118 In detail there are some problems with priorities. In the presence of "bandwidth balancing" priorities don't work and in any
case, the need for priorities in this environment is questionable.
This structure is one of the best features of DQDB. The definition allows the use
of many different physical connection mechanisms and specifies rigorously the
boundary between the DQDB protocol layer and the physical medium dependent
layer.
Figure 121, which shows the OR writing to the bus, really shows the interface
between the DQDB layer and the physical convergence layer. The physical
convergence layer is different for each supported medium. Media defined so far
include:
• Fiber connection at 35 and 34 Mbps
• Sonet connection at 45 Mbps
• Fiber connection at 155 Mbps
• Copper connections at T1 (1.544 Mbps) and E1 (2 Mbps) are under study.
The line codes used are different depending on the medium. For example, on
optical media DQDB will use an 8B/10B code similar in principle to the 4B/5B
code used in FDDI. (See 13.6.5.3, “Data Encoding” on page 290.) On a copper
medium, a different code is used.
Also, the exact node structure with respect to link connection may be different
for different media. On optical media, a node structure similar to FDDI will be
The problem is propagation delay, not the amount of time it takes for a slot
(carrying a request or data) to pass from node to node along the bus. In
Figure 122, consider the extreme condition where propagation delay consists of
five slot times between Node A and Node B.
Figure 122. DQDB Fairness. Problem is propagation delay along the bus.
• If Node A has a lot of data to send, then it may send immediately provided
there are no outstanding requests.
• Let us assume that this is the case and Node A wants to send at full rate.
• Now Node B wants to send. It queues the first segment for sending and
sends a request upstream. (Nodes always send a request even when the
request counter is zero.)
• Node A continues to send until the request from Node B reaches it.
• When the request reaches Node A, it will dutifully allow the next slot on Bus
A to pass in order to satisfy the request.
• Then Node A will continue to send into every available slot on Bus A.
• When the empty slot arrives at Node B, it will use the slot and send some
data.
• Typically, Node B will then want to send another segment of data.
• To do this it must place another request.
Notice the effect: Node A is able to send 10 segments of data for every one
segment that Node B can send! This is because it takes 10 segment times after
Node B sends a request for a free slot to reach it.
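The 10:1 figure generalizes. The helper below is hypothetical, distilled from the example: the request travels upstream for the propagation delay, and the slot the upstream node then frees travels back for the same time, during which the upstream node fills every slot.

```python
def segments_per_grant(prop_delay_slots):
    # Round trip in slot times: request upstream plus freed slot
    # back downstream. The upstream node can send one segment in
    # every slot of that window.
    return 2 * prop_delay_slots
```

With the five-slot propagation delay of the example, the upstream node sends ten segments for each one the downstream node gets.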
This has a much greater effect than might be thought at first. In the example
above, while the situation described is happening, Node A lets another free slot
pass. Node B will get a free slot, “think” it was the result of its last request,
send a segment and another request. Thus Node B now has two requests in
transit between itself and Node A. If/when another slot is released there will be
three. What is happening is that the small number of free slots let pass by Node
A satisfy requests which then generate more requests.
In practical systems, at least for the moment, the nodes will not be fast enough
to use every available slot anyway, so bandwidth balancing will be unnecessary
in early systems.
Consider Figure 123 on page 310. Segments are placed into cells and sent on
the bus. But how can a receiver decide which ones are intended for it? The first
segment of a data block has the destination node address inside it. So a
receiver only has to monitor for cells containing its address to determine which
cells should be received. But what about all the rest of the cells in the data
block? There is no header and thus no destination LAN address.
119 The currently defined maximum data block length is 9188 bytes.
VCIs are reused by sending nodes. Each VCI only has meaning within an
individual data block, so each sending node would strictly only need one VCI. In
practice, each node is allocated a few (about 4) VCIs which are cyclically reused.
There is a problem here that is not present in other LAN protocols. The node
receives a block in multiple segments. What if two nodes decide to send a block
of data to the same receiving node simultaneously? Segments belonging to
multiple user data blocks will be received mixed up with one another. To handle
this each node must have multiple receive buffers capable of holding the
maximum size user data block. The receiver must be implemented in such a
way as to allow multiple blocks to be reassembled simultaneously.
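The reassembly requirement can be sketched as follows. The tuple layout and the segment-kind tags are illustrative, not the actual 802.6 segment encoding:

```python
def reassemble(segments):
    # segments: iterable of (vci, kind, payload) tuples where kind is
    # "first", "middle", or "last". One buffer is kept per in-progress
    # VCI, so blocks from different senders may interleave freely.
    in_progress = {}
    completed = []
    for vci, kind, payload in segments:
        if kind == "first":
            in_progress[vci] = [payload]
        elif vci in in_progress:
            in_progress[vci].append(payload)
            if kind == "last":
                completed.append(b"".join(in_progress.pop(vci)))
    return completed
```

Two interleaved blocks on different VCIs are reassembled independently, which is exactly the multiple-buffer behavior the text requires.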
The above description related to DQDB protocol operation has nothing whatever
to do with isochronous service. As mentioned earlier, there are two kinds of
slots:
1. Queued Arbitrated (QA) slots
2. Pre-Arbitrated (PA) slots
Queued Arbitrated slots are managed by the DQDB protocol.
Pre-Arbitrated slots are allocated by the head of bus function (node containing
the frame generator) at predetermined fixed time intervals (typically once every
125 µsec). Preformatted slots containing a flag to say they are pre-arbitrated are
created by the frame generator. They are identified by the VCI (Virtual Channel
Identifier) number in the slot header. Because a slot contains 48 usable bytes
(every 125 µsec), a single slot identified by a single VCI gives 48 separate
64-Kbps channels (the same as two US T1 circuits).
Nodes may use preallocated slots in any desired way. For example, single bytes
may be used to carry single voice channels, or a group of 32 contiguous bytes
may be used to carry a 2 Mbps clear channel link.
Precisely how the PA capability should be used is not defined by the IEEE 802.6
specification.
Figure 125. DQDB Looped Bus Configuration. In this configuration one node doubles as
the slot generator and the buses are looped into a ring configuration.
The network topology is still a dual bus but it is configured as a ring. All of the
nodes are capable of being the frame generator.
In addition, the “OR-WRITE” mode of attachment to the bus isolates the bus from
a large proportion of potential node malfunctions.
Figure 126. DQDB Reconfiguration. Each node is able to be a slot generator. Here, the
bus has been broken and the two nearest nodes have taken over the role of head of bus.
In some ways, a MAN is just like a big LAN - that is, a LAN covering a large
geographic area - but there is a critical difference: a user device must never
interface directly to the MAN. That is, the MAN must never pass through end
user premises. The reason is obvious: data on the MAN belongs to many
different organizations and (even with security precautions such as encryption)
most users are not willing to have their data pass through a competitor's
building. Thus, nodes which access the MAN are always on telephone company
(PTT) premises. End users are connected through “access” nodes on
point-to-point links.
From a user perspective the MAN is just a fast cell-switching network. The fact
that a LAN-type of structure is used by the PTT is irrelevant to the user provided
that the user interface stays the same.
Services offered by the MAN to the subscriber are the same as those offered by
a LAN - although they are implemented in quite a different way.
Send Data
The subscriber may send data to any other subscriber anywhere on the
MAN. Since other subscribers may not want to receive data from
anywhere, the access nodes must provide a filter to prevent the
reception of unwanted messages.
Closed User Groups
Each subscriber address may be a member of one or more closed user
groups. A closed user group is just a filter that allows free
communication within members of the group (a list of addresses) but
prevents communication with addresses that are not in the group (except
where specifically allowed).
Broadcast
Each subscriber may send a broadcast message (indeed some protocols
require this ability for correct functioning). However, it would be very
dangerous for a subscriber to be able to broadcast to every address on
the MAN. This must be prevented. To do this means that the true
“broadcast” function of the DQDB protocol cannot be accessed (even
indirectly) by a subscriber.
What actually happens is that the subscriber equipment sends a
broadcast message to the access node and this node replicates the
message and sends a copy to each member of the appropriate closed
user group. That is, although the user sends a “broadcast” it is treated
by the network as a number of separate messages.
As will be seen in 13.9.10.3, “Switched Multi-Megabit Data Service (SMDS)” on
page 317, the protocol used between the MAN and the subscriber is DQDB itself.
This means that the access nodes are really a type of bridge with an active
filtering function built in. Access nodes also have network management and
accounting (billing) functions.
In the US, the connection between the end user and the network is the “link to
end user” as shown in Figure 128. This link will use the SMDS protocol and the
customer (user) may purchase the end-user gateway equipment from any
supplier.
Outside of the US (at least in some countries) the network supplier (PTT,
telephone company) will supply the end-user gateway equipment and thus the
end-user interface to the network would be the LAN interface. How this will work
legally and administratively is not yet settled and further discussion is outside
the scope of this document.
Using the Australian “Fastpac” network as an example, there are two accesses
to the network available - a 2 Mbps (4-wire copper, E1) interface and a 35 Mbps
(fiber, E3). The 2 Mbps interface is published and users may attach equipment
from any supplier to this interface. The 35 Mbps interface is considered
proprietary and users purchase the end-user gateway equipment as part of the
network service. Different countries may adopt quite different approaches to this
problem.
Another feature of MAN networks is their pricing. Since the network is shared
the user will be billed for data traffic (just as with X.25). In addition, there will be
a charge for link connection to the network (again just as X.25). However, since
there are no virtual circuits (no connections) there can be no charge for
connection holding time (a significant part of the cost in many X.25 networks).
Precise prices must wait until network providers decide on their tariffs.
The solution adopted was to design a new interface constructed solely from
standardized protocol elements but put together in a new way. The concept is
extremely simple. The user equipment builds an IEEE 802.6 frame including the
frame header and sends it to the network over an E1 (2 Mbps G.703/G.704
connection) using LAPB link control to protect against errors on the local access
link.
The connection to Fastpac uses only two logical channels. The first logical
channel is used for data and is made up by aggregating as many slots as
needed from the G.704 frame. This is similar to the “wideband” mode of ISDN.
At first sight this system looks ridiculous. The access from the user to the
network is a point-to-point link which runs at 2 Mbps full-duplex regardless.
What purpose is served by limiting the throughput of the link? This is actually
very sensible. Australia is a very large geographic area (2.9 million square
miles) and one objective of Fastpac is to provide service to all locations in the
country. Fastpac will cover the major cities, but what about small towns a long
distance from the city (as much as 1500 miles)? The structure enables the
access link to be multiplexed through Telecom Australia′s digital backbone
network. By limiting the number of slots, Telecom can provide access to Fastpac
from anywhere in a reasonably cost effective way.
Reference Configuration
There may be one or many pieces of customer equipment attached to the same
link. In the case where multiple customer devices are attached, these devices
have concurrent access to the network over the same link.
Interface Characteristics
SMDS is defined to use two speeds - US “T1” (1.544 Mbps) and US “T3”
(45 Mbps). The interface protocol is IEEE 802.6 (DQDB).
Service Features
• Connectionless Transfer
The user sends “datagrams” to the network containing a header with both
source and destination network addresses included. All data is sent this
way, there is no such thing as a “virtual circuit”. Datagrams may be up to
9188 bytes in length.
• Closed User Groups
The service defines a number of features which provide much the same
facilities as the “closed user group” does in an X.25 public network.
Conclusion
Within the past year or so a number of manufacturers have begun to offer local
area networks based on very-low-power radio communication at speeds of 1
Mbps and above. 120 Radio 121 is one way to achieve “wireless” communication.
The other common method uses infrared optical broadcast.
The task of a radio LAN is the same as that of any LAN - to provide peer-to-peer
communication in a local area. Ideally, it should appear to the user to be exactly
the same as a wired LAN in all respects (including performance). The radio
medium is different in many ways to wired media and the differences give rise to
unique problems and solutions. This section will concentrate on the aspects
unique to the radio medium and will discuss only in passing aspects that are
held in common with wired media.
Thus in a radio system everyone shares essentially the same space and this
brings about the biggest problem - sharing the limited available bandwidth.
120 Slower speed systems have been available for some years.
121 Some brief background material on sending digital data as modulations on a carrier may be found in Appendix B,
“Transmitting Information by Modulating a Carrier” on page 407.
Figure 132. Multi-Path Effect. The signal travels from transmitter to receiver on multiple
paths and is reflected from room walls and solid objects.
In the office and factory environments, studies have shown that the delay
spread is typically from 30 ns to 250 ns, of course depending on the
geometry of the area in question. (In the outdoor, suburban environment,
delay spread is typically between .5 µsec and 3 µsec.) Delay spread has two
quite different effects which must be countered.
Rayleigh Fading
Figure 133. Rayleigh Fading. The signal strength pattern in an indoor area can look like
this. The strength can be relatively uniform except for small areas where the signal
strength can fall to perhaps 30 dB below areas even one meter away.
14.1.4 Security
Because there are no bounds for a radio signal, it is possible for unauthorized
people to receive it. This is not as serious a problem as would appear since the
signal strength decreases with the fourth power of the distance from the
transmitter (for systems where the antenna is close to the ground - such as
indoor systems). Nevertheless it is a problem which must be addressed by any
radio LAN proposal.
14.1.5 Bandwidth
Radio waves at frequencies above a few GHz do not bend much in the
atmosphere (they travel in straight lines) and are reflected from most solid
objects. Thus radio at this frequency will not normally penetrate a building even
if it is present in the outdoor environment. Inside the building this means that
14.1.6 Direction
In general radio waves will radiate from a transmitting antenna in all directions.
By smart antenna design it is possible to direct the signal into specific directions
or even into beams. In the indoor environment, however, this doesn′t make a lot
of difference because of the reflections at the wavelengths used.
14.1.7 Polarization
Radio signals are naturally polarized and in free space will maintain their
polarization over long distances. However, polarization changes when a signal
is reflected and effects that flow from this must be taken into consideration in the
design of any indoor radio system.
14.1.8 Interference
Depending on which frequency band is in use there are many sources of
possible interference with the signal. Some of these are from other transmitters
in the same band (such as radar sets and microwave installations nearby).
Electric motors, switches, and stray radiation from electronic devices are other
sources of interference.
The 18 GHz band is for narrowband microwave applications and is not wide
enough for spread spectrum techniques. Nevertheless, one major radio LAN
system on the market uses this band.
PS
Capacity = B log 2 (1 + )
2 N0 B
Security
Spread spectrum was invented by military communications people for
the purpose of battlefield communications. Spread spectrum signals
have an excellent rejection of intentional jamming (jammer power must
be very great to be successful). In addition, the Direct Sequence (DS)
technique results in a signal which is very hard to distinguish from
background noise unless you know the peculiar random code sequence
used to generate the signal. Thus, not only are DS signals hard to jam,
they are extremely difficult to decode (unless you have the key) and quite
hard to detect anyway even if all you want to know is when something is
being transmitted.
At first sight this can be quite difficult to understand. We have spread the
spectrum but in order to do it we have increased the bit rate by exactly the signal
spread ratio. Surely the benefits of spreading the spectrum (such as the capacity
gain hypothesized above) are negated by the higher bit rate?
The secret of DSSS is in the way the signal is received. The receiver knows the
pseudo-random bit stream (because it has the same random number generator).
Incoming signals (after synchronization) are correlated with the known
pseudo-random stream. Thus the chip stream performs the function of a known
waveform against which we correlate the input. (There are many ways to do this
but they are outside the scope of this discussion.)
Figure 135. Direct Sequence Spread Spectrum Modulation. A pseudo-random bit stream
much faster (here 9 times the speed) than the data rate is EORed with the data. The
resulting bit stream is then used to modulate a carrier signal. This results in a much
broader signal.
For this to work, a receiving filter is needed which can select a single DSSS
signal from among all the intermixed ones. In principle, you need a filter that
can correlate the complex signal with a known chipping sequence (and reject all
others). There are several available filtering techniques which will do just this.
The usual device used for this filtering process is called a Surface Acoustic
Wave (SAW) filter.
122 They actually have to be “narrower” (than the available frequency band) rather than “narrowband” as such. That is, it is
possible (and very reasonable) to frequency hop among a number of DSSS channels.
In an FFH system (say 10-100 hops per bit) then corrupted chips will have little
effect on the user data. However, in an SFH system user data will be lost and
The large area can be broken up into smaller areas using lower power (short
range) transmitters. Many transmitters can then use the same frequency
(channel), provided they are far enough apart so that they don′t interfere with
one another. Everyone is familiar with this since radio stations in different cities
transmit on the same frequencies and rely on the distance between stations to
prevent interference. This gives a significant capacity increase for the whole
system. The penalty is that each user can only communicate directly with close
by stations - if a user needs to communicate over a longer distance then a
method of connecting hub (base) stations within each cell must be provided.
Figure 141 shows the notional construction of a cellular radio system. The
problem here is that the boundaries in reality are fuzzy following geography
rather than lines on a diagram. The concept here is as follows:
• Users within a cell are allocated a channel or set of channels to use and
operate at sufficient power only to reach other users (or a base station)
within the same cell.
In other kinds of systems there may be no wired backbone and the cellular
structure is only used as a means of increasing capacity. (Of course in this
case, communication is limited to users within individual cells.)
The task of the MAC protocol 123 is to control which station is allowed to use the
medium (transmit on a particular channel) at a particular time. It does not define
the overall operation of the LAN system. (For example, in this case the MAC
must allow for mobile stations but cannot prescribe how stations are handed off
when a cell boundary is crossed.)
14.7.4.1 Characteristics
The proposed MAC protocol has the following characteristics:
1. A single channel is used for each LAN segment. A channel may be either a
unique frequency (narrowband system), a CDMA derived channel or an SFH
hopping pattern. Multiple separate LANs or LAN segments may be
collocated by using separate channels.
2. A base station (BS) schedules access. The system can operate without a
base station in which case one of the mobiles performs the functions of the
base station. All mobiles contain the basic base station functions.
123 Many of the features of the radio LAN environment are similar to features of passive optical networks. A discussion of MAC
protocols in the passive optical network environment is presented in 16.2.3, “Access Protocols” on page 374.
14.7.4.2 Operation
An overview of the method of operation is shown in Figure 142. Operation
proceeds as follows:
The Frame
Different from the usual conception of a TDM frame, MAC frames are a
concept in time only . That is, a frame is not a synchronous stream of
bits but rather an interval of time. The length of a frame is variable and
controlled by the base station.
Frame Structure
The frame is structured into three intervals:
1. In Interval 1 the base station sends data to mobiles (in MB).
2. In Interval 2 mobiles that have been scheduled for transmission by
the base station may transmit.
3. In Interval 3 mobiles may contend for access to the air. This period
can use either ordinary contention (Aloha) or CSMA type protocols.
The length of the frame can be varied but a typical length is thought to
be around 50,000 bits.
Slots
For allocation purposes the length of each interval is expressed in
numbers of slots. When time is allocated to a particular mobile it is done
in numbers of slots. The size of a slot is yet to be determined but a
figure of 500 bits is used in the performance studies.
Researchers throughout the world are developing many proposals for LAN
protocols to operate at speeds above 1 Gbps. Morten Skov (1989) 124 asserts that
more than 50 such protocols have been reported in the literature. This chapter
deals with two prototype LAN systems developed by IBM Research which are
designed to operate in the very high-speed environment. Although both systems
have been built in prototype form and publicly demonstrated it must be
emphasized that these are experimental prototypes only . They were built to gain
a better understanding of the principles and problems of operation at speeds
above 1 Gbps. Information about them is included here for educational purposes
only.
15.1 MetaRing
MetaRing is an experimental high-speed LAN protocol designed to improve the
functioning of token passing rings at very high speeds. It was developed by IBM
Research at Yorktown Heights, New York.
124 Implementation of Physical and Media Access Protocols for High Speed Communication.
125 The term LAN here is used to mean “LAN segment”. A LAN consisting of multiple segments connected by bridges or routers
can of course have one (or two) device(s) transmitting simultaneously on each LAN segment.
Looking again at Figure 144 on page 343, if any user starts transmitting around
the ring to a close upstream neighbor (an extreme example would be User B
sending to User A) then this user could “hog” the ring and lock all other devices
out until it ran out of data (exactly what happens on English traffic roundabouts
in peak hour). A means of ensuring “fairness” is needed.
15.1.1 Fairness
MetaRing uses a counter-rotating control signal to allocate ring capacity to
requesting nodes. This control signal is called a SAT (short for SATisfy). A SAT
is not like a token and it travels in the opposite direction to the data. In order for
the SAT to travel in the opposite direction to the data, a path is needed for it to
travel on.
In MetaRing there are two rings which rotate in opposite directions - just like
FDDI without a token. Both rings are used for data transport (different from FDDI
where only one of the two rings is used for data - the other is a standby). The
SAT on one ring allocates access rights for the other ring. Both rings operate in
exactly the same way. There are two SATs, one on each ring, and each SAT
allocates capacity for access to the other ring.
Because the SAT can preempt data transmission it travels around the ring at
maximal speed (with only link delays and minor buffering in each node).
When a SAT is received by a node the node is given a predefined quota of data
it may send onto the other ring.
• A node is given permission to send a frame (or a quota of frames) when it
receives a SAT.
• If it has nothing to send or if it has sent its quota since the last SAT was
received then the SAT is forwarded.
• If not (meaning the node does have data to send but the ring has been busy
all the time since the last SAT arrived), then the SAT is held until the node
has sent a frame (quota of frames).
More formally:
The SATisfied Condition
The node is SATisfied if a quota of data has been sent between two
successive visits of the SAT message or if its output queue is empty.
The SAT Algorithm
When the SAT message is received, do the following:
• If the node is SATisfied then forward the SAT.
• Else hold until SATisfied and then forward the SAT.
After forwarding the SAT the node obtains its next quota of frames.
The SAT algorithm results in fair access.
• Each rotation of the SAT message gives the subset of busy nodes permission
to transmit the same amount of data.
• The SAT algorithm is deadlock free.
15.1.2 Priorities
A priority mechanism is implemented by assigning a priority number to the SAT.
When a SAT has a priority number a receiving node may only send frames with
that priority or a higher one.
When a node has priority traffic to send it may increase the priority number
(when it forwards the SAT) to ensure that its priority traffic is given preference.
Other nodes may increase the priority number further if they have still higher
priority traffic. When the node that increased the priority number detects that
there is no more priority traffic, then it must decrease the priority number in the
SAT to what it was before that node increased it.
Control messages may travel on either ring. Some control messages (such as
the SAT) travel in the opposite direction to the function they control. Others
travel in the same direction as the controlled function.
15.1.5 Addressing
When the ring is initialized, nodes are allocated a temporary address called a
Physical Access Name. Physical access names are really just the sequence of
nodes on the ring (1, 2, 3, etc.). Each node must keep a table relating the
physical access name of each node on the ring to the node′s physical
(unchanging) address. When a new node enters the ring a reinitalization
process takes place and all physical access names are reassigned.
The physical access name is used partly to determine which ring should be used
to send data to another node. There is a selective copy ability in addition to the
usual broadcast and point-to-point addressing modes. These are implemented
using the physical access names.
A protocol inconsistency arises in that the SAT messages normally travel in the
opposite direction to the ring they control. When the rings are wrapped, they
become one ring (although one that passes through each node twice). There is
a means included in the protocol that takes account of the situation of
connecting the two rings as one and the resulting condition of possible multiple
SATs and lost SATs.
15.1.7 Throughput
In practical LAN situations (in LANs with a large number of nodes), a high
percentage of the LAN traffic is local in scope. That is, most LAN traffic is within
a work group. Also, people typically site servers close to the work group being
served (a print server will usually be close enough for a person to walk over and
pick up the printout).
When this happens one node performs the function of slot generator and there is
a busy/empty bit in the beginning of each slot. A node may send into an empty
slot subject to the fact that the SAT protocol still operates normally. This
reduces ring latency at the cost of losing the capability to send complete frames
longer than the fixed slot size in a contiguous manner.
The bus itself is unidirectional but passes through each node twice (once in each
direction). Nodes transmit data on the “outbound bus segment” (that part of the
bus traveling away from the head-end). They receive data on the “inbound bus
segment” (that is, where the data flow is toward the head-end. (This is one
advantage of the folded bus configuration, that the nodes do not need to know
the position of other nodes on the bus in order to send data to them.) As will be
seen later, nodes receive control information on the outbound bus segment as
well as transmitting on that segment.
In practical systems the head-end function (and the tail-end too) would be
incorporated into every node so that the network would be configured as a
physical loop. Only one node at any time would perform the role of head-end
(and also of the tail-end). Then, if a break occurs anywhere in the bus it may be
reconfigured around the break. A node on one side of the break would become
the new head-end and a node on the other side of the break the new tail-end.
This technique would enable the bus to continue operation after a single break.
DQDB networks use this same principle as described in 13.9.9, “Fault Tolerance”
on page 311.
CRMA solves this problem by appointing a single node (the head-end node) to:
1. Keep track of the global queue size as it builds up.
2. Control a time reference so that attached nodes can know their positions in
the queue.
3. Control when nodes are allowed to transmit.
Notice that the RESERVE command doesn′t give any node permission to send
anything. When it returns to the head-end node it contains a count of the
amount of data (expressed in slots) that has become available for transmission
127 The size of the interval is a tuneable value. It is normally predetermined but may be modified by the head-end depending on
traffic conditions on the bus.
The START command travels around the bus and each node in turn may send
the amount of data that it requested for this cycle.
The head-end node then keeps a queue of cycles that it will allocate in the
future. With each cycle number it keeps the number of slots needed for the
cycle and also keeps track of the total number of slots for all outstanding cycles.
This whole process results in a much better FIFO ordering in the sending of data
than does, for example, a token controlled process. In addition it removes the
throughput inefficiency of the token principle caused by the latency delays
between when a node finishes sending and when another node may start.
Under conditions of load, every slot time on the LAN is usable.
The process requires cooperation between the head-end node and the other
nodes on the LAN. The head-end node never knows any detail about individual
nodes. All it sees is the total amount of data requested by all nodes for each
cycle.
For example, in Figure 98 on page 260 data arrives (or is generated by the
node) at each node at different times. If this was a CRMA LAN a RESERVE
command sent out at time 1 (let′s call it cycle 12) would tell the head-end node
the total amount of data that was available at nodes (users) A and E. But user A
would keep a record that it requested a number of slots in cycle 12 (so would
user E). A RESERVE command sent out at time 2 (call it cycle 13) would only
show the amount of data that had arrived at node D (since cycle 12). At time 3
(or cycle 14) the head-end would hear about still more data, this time from nodes
B and C.
Let us assume that the bus was busy with previously queued data until after the
RESERVE for cycle 14 had returned to the head-end. At this time then, nodes A
The head-end then issues a START command for cycle 12. The START
command will occupy a slot and will be followed by the total number of empty
slots requested by all reservations for cycle 12. The START command contains a
cycle number. When a node receives this command, it has permission to send
data into the next empty slot. It is allowed to send into as many empty slots as
it reserved for cycle 12 in the previous RESERVE command. Notice that nodes B,
C and D all have data waiting but are not allowed to send until START
commands are received with a cycle number matching the reservation they
made in a previous RESERVE command.
As soon as the head-end has finished generating the correct number of empty
slots for cycle 12 it will immediately issue a START command for cycle 13.
Notice that when the data is sent on the bus it is sent in the order in which
reservations were made. This gives a much better FIFO characteristic than
token-controlled access.
Of course there are limits. Each node can be allocated a maximum number of
slots that it can RESERVE on any one cycle. (This is to stop a node from
“hogging” the bus.) But when a node sends a block (frame) of data, it sends
that data into contiguous slots. This means that a node must be able to
RESERVE sufficient slots for the maximum sized frame that it can send.
CRMA has two additional commands - REJECT and CONFIRM. These commands
are used to implement a “backpressure” mechanism which limits the size of the
forward global queue so that access delay can be bounded (contained within
some limits).
The basic CRMA protocol described above is modified. The RESERVE command
no longer implies certainty. The RESERVE command is a request from the nodes
and must be accepted by the head-end. When the head-end sees the return of a
RESERVE command it may CONFIRM the cycle (that is, accept the reservation),
REJECT the cycle (and all previous cycles that have not been confirmed), or
START the cycle (if there are no cycles queued ahead).
Figure 151. CRMA Dual Bus Configuration. The system is logically two separate parts.
Data flow on one bus is controlled (allocated) by RESERVE commands on the other bus.
The advantage of the dual bus configuration is that it can double the potential
throughput. However, all the functions of the single bus configuration must be
doubled (which adds significant cost to the adapter). In addition, the nodes must
know the location on the bus of all other nodes. (For example, referring to the
diagram, if Node 3 wants to send to Node 1 then it must use Bus B; if it wants to
send to node N then it must use Bus A.) To do this there must be an information
exchange protocol so that each node can discover the location (upstream or
downstream) of each other node that it may send to (in order to determine which
bus to send on). The upstream or downstream location may change when the
15.2.9 Priorities
A priority scheme can be implemented by associating each cycle number with a
priority. That is, there might be a cycle 2 for priority 1 and a cycle 2 for priority
2. RESERVEs would be issued separately for each priority at very different rates.
This could lead to cycle 2,110 at priority 1 interrupting cycle 3,125 at priority 2.
There is no necessary link between cycle numbers at each priority.
The priorities would operate with high priorities preempting the lower ones.
Thus, a priority 1 START command (and the slots associated with it) could be
issued in the middle of a sequence of vacant slots being generated for priority 2.
So, the whole protocol is repeated at each priority level and higher priorities
preempt lower ones.
15.2.10 Characteristics
As a result of the method of operation, CRMA exhibits the following
characteristics:
• Efficiency. Cycles may be scheduled with no time gaps between them so
there is no time wasted on the LAN looking for a device that has data to
send next.
• Speed insensitivity. The protocol results in high bus utilizations even at very
high data rates (Gbps).
• Because of its slotted structure the protocol is easily extendable to handle
isochronous traffic. In this case the head-end node would generate
15.3 CRMA-II
Cyclic Reservation Multiple Access - II (CRMA-II) represents the frontier of
on-going LAN and MAN research. Although not implemented, the protocol has
been extensively studied and simulated. It was developed as a result of the
experience gained from the CRMA and MetaRing prototype projects. 128
It should be noted that CRMA-II (like CRMA and MetaRing) is a Medium Access
Control (MAC) protocol. There are many other functions that must occur on a
real LAN or MAN that are not part of a MAC protocol. These are primarily
management functions such as error recovery, monitoring, initialization and the
like.
15.3.1 Objective
CRMA-II uses many detailed features of the existing LAN/MAN protocols already
discussed, especially CRMA and MetaRing. Each of these had characteristics
which were very desirable and other characteristics which needed to be
improved. The objective of CRMA-II is to adopt the best features of these
protocols so as to arrive at the best possible result.
Cyclic Reservation
The cyclic reservation principle of CRMA has proven excellent at high
utilizations. Access delay for low utilization nodes has a strict upper
bound and fairness of access is extremely good. But:
1. CRMA is less good at very low utilizations. The minimal access
delay on a lightly loaded bus is the waiting time for a RESERVE and a
START command - which corresponds to one network round-trip
delay.
2. There is no reuse of slots on the bus once data has been received at
the destination. On a bus with many active nodes, there is
considerable potential for re-use of slots which increases the
capacity of the LAN significantly.
Buffer Insertion
Buffer insertion (MetaRing) on the other hand is an excellent principle at
low and medium utilizations - it gives low access delay and maximal
reuse of LAN capacity. At high utilizations, however, some precautions
must be taken:
128 CRMA-II is the result of work performed at the IBM Research Division, Zurich, Switzerland. A list of journal articles and
conference papers relating to CRMA-II may be found in the list of related publications.
15.3.2.1 Topology
CRMA-II is able to use ring, bus or folded bus topologies. The protocol is
designed to allow the building of a common set of interface chips that can be
used for any of the three topologies.
With ADUs, only the circuitry that handles the bit stream in serial form must
operate at the speed of the medium and therefore continue to use
expensive technology (such as GaAs). As soon as a 40-bit group is received
it can be processed in parallel (as a 32-bit data group) at 1/40th of the
medium speed using significantly lower-cost circuit technology.
15.3.2.3 Slots
Data in CRMA-II is transferred in a slotted format to allow for capacity allocation
and scheduling.
The principle involved is very similar to that used in DQDB or CRMA, but there is
a basic difference. In other LAN/MAN slotted systems a slot is a fixed entity
identified by a delimiter (such as a code violation) followed by a fixed number of
bits and immediately followed by another slot. That is, on the medium slots are
synchronously coupled with one another. In CRMA-II a slot is fixed in size but
special variable length frames carrying scheduling information are inserted as
needed between slots. This means that slots are loosely coupled with one
another.
A slot is delimited by a START ADU and an END ADU. The first eight bits of the
start ADU are a synchronization character. The format for 32-bit ADUs is shown in
Figure 152.
MAC commands which travel between the “scheduler” and the nodes no longer
“piggyback” in the headers of data slots. They are carried as special
variable-length entities between regular data slots.
When a node sends data over multiple slots the basic slot marking is maintained
but much of the slot overhead is avoided by having only one END ADU (for the
whole frame) and only one addressing ADU at the start of the frame. See
Figure 153.
A Reserved slot may only be used by a node that has been granted a capacity
allocation through a CONFIRM command from the scheduler. (Whenever the
scheduler sends a group of CONFIRMs it immediately begins marking enough
slots Reserved to satisfy the amount of capacity it just confirmed.)
A node with a deferred allocation may access only after it has let pass the
indicated number of Free/Gratis slots.
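The slot-access rule described in the preceding sections can be sketched in a few lines. This is an illustrative model only; the slot states and field names are assumptions made for the sketch, not the actual CRMA-II design.

```python
# Hypothetical sketch of the CRMA-II slot-access rule. Slot states are
# modeled as two flags (Busy/Free, Reserved/Gratis); the real protocol
# encodes these in the slot-control ADU.

def may_transmit(slot_busy, slot_reserved, allocation, gratis_to_defer):
    """Decide whether a node may use an arriving slot.

    Returns (can_send, remaining_gratis_to_defer).
    allocation       -- slots granted by a CONFIRM command (0 if none)
    gratis_to_defer  -- Free/Gratis slots a deferred node must let pass
    """
    if slot_busy:
        return False, gratis_to_defer            # slot carries data
    if slot_reserved:
        # Free/Reserved slots are usable only with a confirmed allocation
        return allocation > 0, gratis_to_defer
    if gratis_to_defer > 0:
        # a node with a deferred allocation lets this Free/Gratis slot pass
        return False, gratis_to_defer - 1
    return True, gratis_to_defer                 # Free/Gratis, no deferral
```

For example, a node told to defer for two Free/Gratis slots refuses the first two that arrive and may use the third.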
15.3.2.8 Cycles
As described above, individual reservation cycles are run one at a time with no
reservation in advance for future cycles. The single-cycle concept simplifies the
reservation-based fairness control significantly because only two commands
(RESERVE, CONFIRM) are alternately on the medium. This makes the protocol
extremely robust and enables the system to recover from command failures
without any additional command.
When one cycle finishes (when the scheduler has marked the allocated total
number of slots as reserved) the next cycle is started by issuing the RESERVE
command to collect requests from the nodes. The reservation cycle itself starts
when the RESERVE command has returned (after having circulated on the LAN)
and the scheduler has allocated the reservations.
All this produces a significant gap between two reservation cycles (that is
between the end of slot marking and the start of the next cycle). This however
does not mean that the system loses throughput. On the contrary, transmissions
continue to take place in Free/Gratis slots and since access to these slots is less
restricted (only when a node must defer) system throughput must be higher than
for the case of back-to-back cycles. In fact, slots are only marked as reserved to
correct unfairness and to guarantee a low bounded access delay.
15.3.2.9 Addressing
The system uses “short” addresses. A single ADU contains both destination and
origin LAN addresses. This means that the full LAN/MAN address isn't used for
sending data. During the initialization procedure a set of local addresses are
allocated. Each node keeps a table relating the real (long form) LAN address
and the shortened addresses needed for send/receive operation.
If the scheduler were to mark only Free/Gratis slots there would be a problem:
a node immediately before the scheduler on the ring could use all the passing
Free/Gratis slots, thus preventing the scheduler from getting any slots to
reserve. Therefore the scheduler does not only mark Free/Gratis slots to
Free/Reserved, it also marks passing Busy/Gratis slots (these are slots
containing data) to the Busy/Reserved status. When a
Busy/Reserved slot is received by a node, the Busy status is changed to Free
resulting in the creation of a Free/Reserved slot that may be used by a node
having a capacity allocation. So, ultimately the scheduler has caused the
creation of the correct number of Free/Reserved slots. Thus when the scheduler
allocates x slots it satisfies the allocation by immediately marking passing Gratis
slots (either Busy or Free) to the reserved status.
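The marking rule just described can be expressed as a short sketch. The data model (a list of Busy/Reserved flag pairs flowing past the scheduler) is an assumption made for illustration.

```python
# Illustrative sketch: to satisfy an allocation of x slots the scheduler
# marks the next x passing Gratis slots (whether Busy or Free) as
# Reserved. A Busy/Reserved slot later becomes Free/Reserved when its
# data is removed at the destination.

def mark_slots(slots, x):
    """slots: list of (busy, reserved) pairs passing the scheduler."""
    marked = []
    for busy, reserved in slots:
        if x > 0 and not reserved:       # any Gratis slot, Busy or Free
            reserved = True
            x -= 1
        marked.append((busy, reserved))
    return marked

stream = [(True, False), (False, False), (True, True), (False, False)]
print(mark_slots(stream, 2))
# the first two Gratis slots come back Reserved; the allocation is met
```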
In order to use this principle the node has a buffer large enough to
accommodate the maximum sized frame between its receiver and its transmitter.
This is shown below:
The principle is basically the same as that described for MetaRing but with a few
differences:
• In MetaRing the rule is that a node may start sending provided there is
nothing being received from the ring and there is nothing in the insertion
buffer.
Because of the slotted transmission structure, the rule for sending in
CRMA-II is that when a Free/Gratis slot (or a Free/Reserved slot if the node
has reservations) is detected on the ring (or bus) segment and there is
nothing in the insertion buffer, the node may commence sending.
• Data is sent as a contiguous frame but with interspersed Start ADUs at slot
boundaries. This is illustrated in Figure 153 on page 360.
When the data is received at the destination node, the destination reformats
the multi-slot frame into a stream of single Free slots.
• While a node is transmitting a frame in this way, slots continue to arrive on
its inbound side. If a Free/Gratis slot arrives then it is discarded. If a
Free/Reserved slot arrives and the node has reservations then this slot is also
discarded.
As operation continues:
• Incoming data from the medium is delayed because of the data queued
ahead of it in the insertion buffer.
• When a Free/Gratis slot arrives it is discarded and since a slot full of data is
now being transmitted from the insertion buffer this empties the insertion
buffer of a slot full of data.
• In a busy ring, the node may find that the insertion buffer will not empty
quickly enough by just waiting for Free/Gratis slots. In this case the node
will request an allocation the next time the scheduler sends out a RESERVE
command. If the node has more data to send it will request slots from the
scheduler sufficient to both empty its insertion buffer and to send its next
frame.
• When the node receives a CONFIRM command containing a slot allocation it
will begin treating Free/Reserved slots in the same way as Free/Gratis slots
and discarding them. This process empties the insertion buffer.
• Once the insertion buffer is empty, data passing through the node from
upstream is no longer delayed.
• The node is allowed to send again as soon as it receives a Free/Gratis slot
or (if it has an allocation) a Free/Reserved slot.
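The buffer-draining behavior listed above can be modeled with a simple counter. This is a toy model under assumed slot names, not the published design.

```python
# Toy model of insertion-buffer draining: each usable Free slot that
# arrives is discarded and one slot's worth of buffered data is sent in
# its place, so the buffer shrinks by one slot per usable Free slot.

def drain_buffer(arriving_slots, buffered, have_allocation):
    """Return the buffer occupancy (in slots) after the arrivals.

    arriving_slots -- sequence of 'busy', 'free_gratis', 'free_reserved'
    """
    for slot in arriving_slots:
        usable = (slot == 'free_gratis' or
                  (slot == 'free_reserved' and have_allocation))
        if usable and buffered > 0:
            buffered -= 1       # discard the Free slot, send buffered data
    return buffered
```

With no allocation only Free/Gratis slots drain the buffer; once a CONFIRM grants an allocation, Free/Reserved slots drain it too, which is what guarantees the buffer eventually empties on a busy ring.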
15.3.3 Summary
CRMA-II is designed to provide optimal fairness under varying load conditions on
a very fast LAN or MAN. (The principle will work over a very wide range of
speeds but the objective is to operate well at 2.4 Gbps.)
1. The buffer insertion protocol is used to provide almost instant access at low
loadings and to allow for the sending of a frame of user data as a stream of
contiguous slots.
2. The reservation protocol allows fairness in operation (and in particular low
access delays) at from medium to very high utilizations.
3. Operation is such that both protocols operate at all times but one will tend to
dominate the other depending on load conditions.
4. It exhibits the same properties as discussed for MetaRing with respect to
efficient operation at any speed, throughputs well beyond the medium speed
due to slot reuse, insensitivity to ring length and the number of nodes, as
well as support of asynchronous, synchronous and isochronous traffic.
The optical networks so far described (FDDI, DQDB, MetaRing, SDH) all share a
common feature. Logically they could be implemented just as easily on copper
wire. In other words, fiber has been used as a substitute for copper wire
(although with many advantages, including speed).
It was mentioned earlier (4.1.2.2, “Transmission Capacity” on page 73) that the
potential data carrying capacity of fiber is enormous - at least ten thousand
times today's 2 Gbps practical limit. The aim is to make use of this to build
networks with capacities of two or three orders of magnitude greater than we
have today.
wavelength = speed of light / frequency
129 In the world of optical communications, it has been usual to talk about the wavelength of a signal or a device without
reference to frequency. However, when you get to very narrow linewidth lasers and coherent detection systems, these are
normally expressed in MHz or GHz.
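As a worked example of the relationship above (using the round figure c = 3 × 10^8 m/s), a signal in the 1550 nm fiber window corresponds to a carrier frequency of roughly 193 THz:

```python
# Wavelength/frequency conversion: frequency = speed of light / wavelength.

C = 3.0e8                       # speed of light in a vacuum, m/s

def frequency_hz(wavelength_m):
    return C / wavelength_m

print(frequency_hz(1550e-9) / 1e12)   # approximately 193 THz
```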
These will work but require amplifiers at frequent intervals down the
bus.
The problem is that as the fiber is tapped at each station the signal is
split and half carries on along the bus and half goes to the attached
station. In Figure 155, station 7 will get half the signal and then station 6
will get half of that (1/4 of the original signal). Station 5 will get half of
that again (1/8), and so on, until station 1 gets only 1/128th of the
original signal. If there are n stations on the bus the signal degrades to
1/2^n of what it was originally.
It is possible to build splitters that direct a smaller proportion of the
signal to the attached station but this is difficult and introduces other
problems.
Passive (Reflective) Stars
This configuration is illustrated in Figure 156 on page 372. These have
been used in several experimental systems because they can support
many more stations without amplification than can bus networks.
As shown in the figure, separate fibers are connected from each station
to the star. This device is really a combiner followed by a splitter. All
incoming signals are combined onto a single fiber and then this is split to
direct a 1/N part of the combined signal back to each station. That is, the
output signal is reduced by 10 × log10(N) dB, where N is the number of
stations. Of course, a topology like this can handle significantly more
stations than can the bus topology described above.
A feature of the topology is that the star is a passive (non-powered)
device - a characteristic considered very important for reliability.
Nevertheless, any fiber LAN or MAN topology will require amplifiers if a
meaningful number of devices is to be handled.
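The two splitting penalties described above (halving at every bus tap versus a single 1/N division in the star) can be compared numerically. This idealized sketch ignores excess losses in real couplers.

```python
import math

# Idealized splitting losses: a linear bus halves the signal at every
# tap (1/2**n at the far station), while a passive star divides the
# combined signal once (1/N at every station).

def bus_loss_db(n):
    return 10 * math.log10(2 ** n)     # about 3 dB per tap

def star_loss_db(n):
    return 10 * math.log10(n)

for n in (8, 32, 128):
    print(n, round(bus_loss_db(n), 1), round(star_loss_db(n), 1))
# bus loss grows linearly in dB with n; star loss only logarithmically,
# which is why stars support many more stations without amplification
```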
Tree Structures
Tree structures such as that shown in Figure 157 are also possible.
In single channel systems with incoherent receivers this doesn't matter a lot.
Even in the case of coherent receivers, if there is only one channel then the
receiver can “track” the frequency of the transmitter.
130 Optical transmitters (both LEDs and lasers) produce a band of wavelengths spread over a range. This is termed the “spectral
linewidth”. See the discussion in 4.1.3, “Light Sources” on page 76.
A connectionless (traditional LAN) approach means that you have the same
delay for every block transmitted. Of course, a virtual circuit approach is very
inefficient if a channel is occupied for a long period of time with only occasional
data transfer. In this case, it would be better to use a connectionless system
and be able to share the channel capacity.
When station A wants to send data to station B then they have to find a vacant
channel and both tune to it before station A can commence sending. The central
problem for the access protocol is: “How do we arrange for stations wanting to
communicate to use the same channel (wavelength)?” This is not a trivial
problem! First, the stations must have some ability to vary their operating
wavelength (switch from channel to channel). We could have:
• Each station allocated to a fixed transmitting channel and all stations able to
tune their receivers to any channel at will.
The biggest problem for the access protocol is exactly the same for all
high-speed LAN/MAN protocols. Although the speed of data transmission has
increased by many orders of magnitude, the speed of light in a fiber and of
electricity in a wire (at about 5 µsec per km) hasn't changed. Propagation delays
are exactly the same as they are for traditional LANs, so they become
relatively much longer than block transmission times and therefore
very significant in terms of efficiency. Of course the problem increases
significantly with distance. It has the most effect on systems:
1. Those which require any protocol exchange for pretransmission coordination
(because each round trip between sender and receiver incurs two
propagation delays).
2. Those which use CSMA/CD protocols (because the probability of collision
increases with propagation delay).
An extreme solution might be for each station to have a fixed transmitter and
receiver (tuned to the control channel) and tuneable ones for the data. Every
station would keep track of the channels in use and when a station wanted to
send, it would tell the receiver (and all other stations) that it was about to use
channel x. It would then notify other stations when the channel was free.
Although the chance of collision could be minimized by having each transmitter
select the next channel to use based on a (different) random number there is still
some chance of collision and such a system would be quite costly.
A few years later, other organizations (including IBM) were able to build on this
early work in projects to explore the use of WDM technology in the LAN
environment.
16.3.2.1 Lambdanet
Lambdanet is an experimental WDM system designed to explore a number of
different possible uses.
The network topology is a broadcast star as shown in Figure 156 on page 372
and discussed above.
• The experimental configuration used 18 stations.
• A 16x16 star coupler was used and extended to 18 connections by
attachment of two smaller coupling devices.
• The network itself is totally passive except for the attached stations.
• Each station transmits on a fixed unique frequency.
• The thing that makes Lambdanet different from other proposed architectures
is that nothing is tuneable. Each station separates the received signal (all 18
Overview
• Each station is equipped with a fixed frequency transmitter and a tuneable
receiver.
• Each station is allocated a unique wavelength from a band of wavelengths
from 1505 to 1545 nm.
• The tuneable receivers are actually fixed frequency incoherent detectors
each of which is preceded by a Fabry-Perot tuneable filter with a tuning
range of 50 nm. The tuning rate is about 10 µsec per nm, which gives an
average time to locate a channel of about 250 µsec.
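The 250 µsec figure follows directly from the quoted tuning range and rate, as this small check shows:

```python
# Checking the Rainbow tuning figures quoted above: the average hop
# across a 50 nm range is 25 nm; at 10 microseconds per nm that is
# 250 microseconds to locate a channel on average.

TUNING_RANGE_NM = 50            # Fabry-Perot filter tuning range
RATE_US_PER_NM = 10             # quoted tuning rate

average_hop_nm = TUNING_RANGE_NM / 2
average_locate_us = average_hop_nm * RATE_US_PER_NM
print(average_locate_us)        # 250.0
```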
131 Rainbow is a research project, not a product. IBM cannot make any comment on what uses, if any, may be made of the
Rainbow technology in future IBM products.
Figure 159. Rainbow-1 System Design. The design features a passive network with fixed
transmitters and tuneable receivers. The receivers are actually fixed incoherent detectors
preceded by tuneable Fabry-Perot filters.
Details
• The system was built with commercially available optical and electronic
components.
• The transmitter lasers had an unmodulated linewidth of less than 350 MHz.
These were modulated in such a way as to reduce the chirp problem but the
major factor in the control of chirp was the relatively wide channel spacings.
Conclusion: The system works well. A summary of the lessons learned may be
found in the paper by Paul E. Green (1992). Rainbow is a research prototype and
it is expected that the project will continue. The ultimate aim is to prove the
feasibility of a 1000 station WDM LAN/MAN operating at 1 Gbps.
Although these schemes have had some success as centralized switch fabrics,
they have never been adopted for use in WAN or LAN applications for a number
of reasons:
• A large number of separate link connections are required. If these have to
be physically separate, the configuration becomes very difficult to manage.
• Because a packet of data is received and retransmitted by a number of
nodes between sender and receiver there is an additional “staging delay”
which is significant if link speeds are low.
• Analog link error rates in the WAN environment meant that stage-by-stage
error recovery was desirable - something which adds complexity and is a
cause of congestion.
Figure 160. Multihop Optical Network - Logical View. Note that the first and last columns
are the same stations (A to D).
A node here is a simple 3x3 packet switch. Each data block is received
in full, the destination address examined and then passed to the
appropriate output queue. Queueing at the output side is suggested to
improve the performance in a congestion situation.
Network Capacity
The capacity of a network with this design is very high. All links may be
active simultaneously. The network capacity is therefore the number of
links multiplied by the speed of each link.
Since data packets may be sent one after another on all links with no
separation or tuning delay the bandwidth efficiency (compared to other
WDM architectures) is extremely high.
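The capacity claim above amounts to simple arithmetic: with every link able to carry traffic at once, aggregate capacity is links times link speed. The figures below are hypothetical, not taken from Figure 160.

```python
# Aggregate capacity of a multihop network: all links active at once,
# so capacity = number of links x speed of each link.

def network_capacity_gbps(num_links, link_speed_gbps):
    return num_links * link_speed_gbps

# e.g. a hypothetical mesh of 24 one-gigabit links
print(network_capacity_gbps(24, 1.0))    # 24.0 Gbps aggregate
```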
Network Addressing Structure
When a packet is received, it must be routed to the appropriate output
port very quickly. There is no time to perform complex address
calculation or database lookup.
There are many schemes to do this. Automatic Network Routing (ANR),
described in 11.2.2, “Automatic Network Routing (ANR)” on page 240 is
one appropriate technique. Another proposal is to arrange the nodes (as
per the figure) such that the destination address is just a binary number.
When a frame arrives at a node the first bit (or bits) of the destination
address are used to route the packet and then the bits are stripped
away. This is the basis of so-called “Banyan” networks.
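The bit-stripping idea behind Banyan routing can be sketched as follows; the two-port switches and three-stage network are assumptions made for illustration.

```python
# Self-routing sketch: each rank of 2x2 switches uses the leading bit of
# the destination address to pick an output port, then strips it, so no
# table lookup or address calculation is ever needed.

def banyan_route(dest, stages):
    """Return the output port taken at each of 'stages' switch ranks."""
    bits = format(dest, '0{}b'.format(stages))
    return [int(b) for b in bits]       # one port decision per rank

print(banyan_route(5, 3))    # destination 101 binary -> ports [1, 0, 1]
```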
Congestion Control
Since there is nothing in the network to prevent most of the stations
transmitting to the same destination at once, congestion is a problem
and must be controlled.
The best ways to control congestion in a network like this are a
combination of input rate control and packet discard techniques. These
are described in 6.1, “Control of Congestion” on page 117.
If you really want to reduce the cost, why not do all the internal connections
electrically and save the cost of optical transceivers?
Time will tell, but multihop networks are a real and serious contender for many
network switching roles in the future.
Figure 162. Wavelength Selective Network - Concept. Connections through the network
are formed by routing individual wavelengths.
The principles involved here look superficially like the ones we know well from
electronic networking systems such as ATM. However, the optical case is
significantly more difficult than the electronic one. The problem is that we
cannot change the wavelength of a single channel (by passive optical means)
within the network. The transmitting and receiving stations must use the same
wavelength. This is not a problem for the workstations themselves but is a very
significant problem for the network.
In traditional networks (such as ATM) the subchannel (in ATM the VPI/VCI)
changes at every switching node. Because we cannot do this with optical
signals the problem of allocation of wavelengths becomes very difficult in any
kind of complex network.
The need for MuxMaster arises primarily from users with two or more large sites
in the same city (such as a large mainframe complex and a backup site). “Dark
Fiber” is available in the U.S. (and to a very limited extent in some other
countries) but it costs around $150 per month per mile per strand. A typical
mainframe interconnection might require six ESCON channels using two fibers
each, for a total of 12 fibers. MuxMaster allows for carrying all of the traffic on a
single fiber strand (with provision to back up the system on a second, single
strand).
Figure 163. MuxMaster Principle. Multiple optical wavelengths are carried on a single
fiber point-to-point between two locations.
133 MuxMaster is a research project NOT a product. IBM can make no comment on the possible use of this technology in future
products.
Figure 164 shows the configuration used in the first operational field trial.
Another simple possibility is the use of MuxMaster to interconnect LANs through
bridges. This is shown in Figure 165 on page 390.
The figure shows a simple ring configuration. Other configurations (of arbitrary
complexity) are possible but then there is a significant problem in network
control and path allocation, etc.
Figure 167. Ring Wiring Concentrator - Logical View. Wiring is point-to-point from each
station to the WRC. The ring (or bus) structure of the LAN is constructed by wiring within
the WRC.
Over time several problems with these early LAN systems emerged.
• “Broadband” (modulated carrier) systems over coaxial cable did not allow
much flexibility in cabling. Devices had to be placed at nodes on the cable
and the whole cable needed to be tuned to an exact length (multiple of the
wavelength). When new devices were added or old ones deleted the cable
had to be re-tuned (because of the loading effect of attaching cable stubs).
• With small numbers of devices in a restricted area, bus cabling did not (and
does not) prove too much of a problem. However, as LAN sizes grew,
severe management problems arose due to extreme difficulty in locating
cable faults and malfunctioning devices.
134 Just how LANs developed in the late 1960s and 1970s is fascinating but outside the scope of this book. For our purposes let
us accept that the first serious production LAN system was Ethernet.
The solution to these problems was the use of “star ring” wiring and the use of a
Ring Wiring Concentrator (RWC). Star ring wiring is shown in Figure 167 on
page 393.
The problem with both these systems is the inherent limitations imposed by the
use of UTP (especially the lower grade Telephone Twisted Pair). Attenuation in
the cable severely limits the distance you can send. In addition, reactive
components in the cable distort the signal.
When you put power into the RWC and then regenerate the signal it becomes a
hub.
Time will tell, but we could be witnessing the beginning of the end of the LAN.
There are several general techniques available for the sharing of a facility
between different data connections and/or voice circuits. These are general
techniques and apply (although with different levels of efficiency) to each
element of the system separately, so they are described here first.
Figure 168. The Concept of Frequency Division Multiplexing. The physical carrier
provides a range of frequencies called a “spectrum”, within which many channels are
able to coexist. Notice the necessary “buffer zones” between frequency bands.
Frequency division multiplexing has, in the past, found use in telephone systems
for carrying multiple calls over (say) a microwave link. It is also the basis of
cable TV systems where many TV signals (each with a bandwidth of 4 or 7 MHz)
are multiplexed over a single coaxial cable. It is also used in some types of
computer local area networks (LANs).
Attaching equipment is able to insert data into any slot and to take data from any
slot. Thus while the medium can run at a very high speed, each attachment
operates at a much lower data rate.
A.1.3 Packetization
This technique involves the breaking of incoming bit streams (voice or data) into
short “packets”. Different techniques use variable or fixed length packets.
Packets have an identifier appended to the front of them which identifies the
circuit or channel to which they belong. 136 In the TDM example above, a time slot
was allocated for a low-speed channel within every frame even if there was no
data to be sent. In the packet technique blocks are sent only when a full block is
ready to send.
136 Alternatively, they could have a routing header which identifies the source and destination of the packet.
A.1.4 Sub-Multiplexing
It is quite possible, indeed usual, for multiplexors to be “cascaded” as suggested
in Figure 170. A “high order” multiplexor is used to derive a number of lower
speed channels that then are further reduced by other (lower order)
multiplexors. This may then be reduced even further by lower and lower order
multiplexors.
Since a derived channel is just like the original channel only “narrower”, then
different multiplexing techniques can be used within one another. For example,
it is possible for a wideband microwave channel to be frequency divided into a
number of slower channels, and then for one lower speed channel to be further
subdivided by time-division multiplexing.
The hierarchies of multiplexors used in different parts of the world are shown in
Table 15.
The technique offers savings for voice in that the gaps in speech and the
“half-duplex” characteristic of speech can potentially be exploited for other
conversations. Likewise, the gaps in traditional data traffic can be exploited.
There are multiplexors available that allocate a large number of voice channels
over a smaller number of real channels by this technique. Listening to any one
of the voice channels would provide the listener with intermixed phrases and
sentences from different conversations on the one real voice channel. In data
communications, the use of “statmuxes” that derive (for example) six or eight
slow (2400 bps) channels from a “standard” 9600 bps line are in common use.
All of these have problems in that they require some technique to recognize
“silence”, that is, to determine what not to send. In voice, a delay buffer is
needed so that when a word is spoken following a silence the need for a channel
is recognized and the channel made available WITHOUT chopping off the
beginning of the word.
In the past this technique has been used to improve utilization of expensive
undersea telephone cables but is not in common use for other situations
because of the cost and the impact on quality caused by the additional delays
and the interposition of yet another piece of equipment which degrades the
signal quality.
137 Another factor contributing to the expense is that it is difficult to apply large scale integration techniques to analog systems.
Analog equipment tends to have many more separate components than comparable digital systems (digital ones have more
circuits but many are packed together into a single component). This leads to a higher cost for the analog alternative.
138 Whether the routing header in the beginning of the block is correct is not determined until the end of the block has been
received and the Frame Check Sequence (FCS) is checked. So the block cannot be sent on until it is checked.
139 In the IBM 3725, for example, switching an intermediate block takes (total including link control) around 3,200 instructions.
Getting data into or out of the processor takes on the order of 100 nanoseconds per byte. So for INN (Intermediate Network
Node - meaning between two 3725s) operation a 3725 can “switch” somewhere around 350 to 400 200-byte blocks per second
(70% utilization) and maybe 360 to 410 blocks per second if blocks are 100 bytes. (These figures are approximate and
depend heavily on environmental conditions.)
B.1.1.1 Sidebands
When a sinusoidal carrier signal is generated it varies by only a very small
amount. That is, the range of frequencies over which the carrier is spread is
very narrow. When such a signal is modulated, it seems reasonable that the
frequency spread (at least for AM and PM techniques) should remain
unchanged. Sadly, it doesn't work quite this way. Modulation of a carrier always
produces a signal such as that shown in Figure 173.
You get a spread of frequencies equal to twice the maximum frequency of the
modulating signal: a carrier signal (carrying no information)
surrounded by two “sidebands” (above and below). Each sideband uses a
frequency spread of exactly the maximum modulating frequency. All of the
modulation information is carried in the sidebands - the carrier contains no
information (it is nevertheless quite useful to have in some systems).
It is important to note that sidebands are generated for all three modulation
schemes. They are different in the sense that the precise form of the sidebands
is different for each different modulating technique.
OOK is not often used as a modulation technique for radio transmissions. This
is partly because the receiver tends to lose track of the signal during the gaps (0
bits) but mostly because it requires a very wide bandwidth for a given data rate.
Other transmission techniques are significantly better.
As explained earlier in 4.1.6.1, “On-Off Keying (OOK)” on page 84, OOK is the
primary method used in optical fiber communication.
140 There are many exceptions here, such as the use of spread spectrum techniques in some radio environments and the use of
baseband techniques to get the maximum throughput from a telephone subscriber loop circuit.
141 The word “keying” in general implies that the carrier is shifted between states in an abrupt (even brutal) manner. That is,
there is no synchronization between the shifting of the carrier and its phase.
B.1.2.3 Bandwidth
If you examine the composition of a square wave (or stream of pulses with
square corners) you find that it is composed of a number of sinusoidal waves of
different frequencies and phases. A square wave (stream of pulses) can be
represented as follows:
cos(2π × t) − (1/3) cos(2π × 3t) + (1/5) cos(2π × 5t) − (1/7) cos(2π × 7t) + ... + (1/13) cos(2π × 13t) + ...
In the equation, t is time and the multiplier of t gives the frequency of each
component. Only the odd-numbered harmonics are present in a square wave.
Notice that this is an infinite series.
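The series above can be summed numerically to see how the odd harmonics build up a square wave. A minimal sketch:

```python
import math

# Partial sums of the square-wave series: odd harmonics with alternating
# signs and 1/k amplitudes. As terms are added the sum approaches a
# square wave (scaled by pi/4).

def square_wave_approx(t, n_terms):
    total, sign, k = 0.0, 1.0, 1
    for _ in range(n_terms):
        total += sign * math.cos(2 * math.pi * k * t) / k
        sign = -sign
        k += 2                  # only odd harmonics are present
    return total

# at t = 0 the normalized partial sum approaches 1 as terms are added
print(square_wave_approx(0.0, 50) * 4 / math.pi)
```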
The point here is that a square wave with repetition frequency of 1 kHz has
sinusoidal components strong enough to matter up to about 9 kHz. This means
that to faithfully reproduce a square wave signal through carrier modulation
requires the use of quite a wide bandwidth. 142 When digitally modulating a carrier
it is common to “shape” the modulating pulses in such a way that the required
bandwidth usage is minimized.
B.1.2.5 Scrambling
If we transmit the same symbol repetitively in many situations there will be a
problem with keeping the signal within its allocated frequency band. This
applies in both radio and voiceband telephone environments. If we use an
encoding scheme that provides frequent transitions and is DC balanced then this
is normally sufficient. If not, we need to use a “scrambler” to change the data
into a form suitable for transmission (and a descrambler in the receiver).
142 Of course this is also true when using a baseband medium. The medium must have capacity to pass quite a wide bandwidth
if it is to pass square pulses accurately.
The complicating factor here is noise. If it were not for noise a fairly simple
receiver could discriminate between a very large number of states and we could
get very large numbers of bits per symbol. Unfortunately, noise and distortion in
the channel prevents this. In some environments the effects of noise can be
mitigated by using more transmitter power (or a shorter distance) but most times
this is not possible. In practice, noise sets a maximum limit on the number of
usable states regardless of the complexity of the receiver.
A baudy story
The term “baud” when referring to the speed of data transmission is more
often misused than used correctly. The baud rate is the signaling rate. That
is, it is the rate of state changes of the modulating signal (the symbol rate).
Thus if you have three bits per symbol and the baud rate is 2000 then bit rate
is 6000 bps.
In QAM multiple states of amplitude and phase are used together. It is usual to
show this as a “quadrature” diagram, illustrated above. Note that the axes are positioned so that they bisect the constellation of states. This is just for illustration - there is no such thing as a negative amplitude. In the example shown (more correctly called QAM-16)
there are now 16 states and thus 4 bits can be represented. In higher quality
channels QAM-64 (representing 6 bits) and QAM-256 (representing 8 bits) are
sometimes used.
One of the big advantages of QAM is that the amplitude and phase of a signal
can be represented as a complex number and processed using Fourier
techniques. (This is the heart of the “DMT” modulation technique discussed in
3.2, “Discrete Multitone Transmission (DMT)” on page 58.)
When a QAM symbol is received, the receiver will measure its phase and
amplitude. Due to the effects of noise, these measured values seldom match the
points on the quadrature exactly and a QAM receiver typically selects the
nearest point and decides that this is the symbol received. But what if this
received value was in error because noise changed it? In regular QAM this is
just an error. Trellis coding is a way of avoiding a substantial proportion of
these errors.
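The nearest-point decision just described can be sketched as follows (Python; the square QAM-16 grid, the bit-to-point mapping, and the noise values are illustrative only, and real modems also apply Gray coding and pulse shaping):

```python
# Illustrative QAM-16 constellation: a square grid of 16 amplitude/phase
# states represented as complex numbers, each carrying 4 bits.
LEVELS = (-3, -1, 1, 3)
POINTS = [complex(i, q) for i in LEVELS for q in LEVELS]   # 16 states

def modulate(nibble):
    """Map a 4-bit value (0..15) onto one constellation point."""
    return POINTS[nibble]

def demodulate(received):
    """Minimum-distance decision: pick the nearest constellation point."""
    nearest = min(POINTS, key=lambda p: abs(received - p))
    return POINTS.index(nearest)

# A little channel noise moves the point, but not past a decision boundary:
sent = modulate(0b1011)
assert demodulate(sent + complex(0.4, -0.3)) == 0b1011
```

With larger constellations (QAM-64, QAM-256) the points sit closer together, so the same amount of noise is more likely to push a received value past the boundary into a wrong decision.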
The concept is that the transmitter only sends a limited sequence of symbols. If
a particular symbol has just been transmitted then the next symbol must be from
a subset (not all) of the possible symbols. Of course this reduces the number of
bits you can represent with a particular quadrature. Typically, in a 16-state
quadrature, only eight states will be valid at any point in time (which eight
depends on the last one sent). This means that you can represent only 3 bits
rather than 4.
This relies on the concept of a sequence of line states. If you start out
transmitting a particular line state then the next state must be one of eight
states. When you transmit that state then the next one must be one of eight
also. Therefore if you start at a particular line state there are 64 possible
combinations for the next two states transmitted. What the receiver does is
correlate the sequence of line states received with all the possible sequences of
states. This works as follows:
1. When the receiver detects a line state, it does not immediately decide which symbol has been received. Instead it assigns a “weight” (the mean squared distance between the detected point and each of the surrounding points).
2. As time progresses the receiver adds up the weights of all possible paths
(this is an enormous processing problem since the number of possible paths
grows exponentially).
3. After a number of symbol times a particular path is chosen as correct: the one with the minimum total weight accumulated among all the possible paths.
4. The sequence of symbols is then decoded into a sequence of bits.
The important characteristic here is that it is the sequence of line states that is
important and not any individual received line state. This gives very good
rejection of AWGN (Additive White Gaussian Noise) and many other types of
noise as well.
Viterbi decoding is the algorithm usually used in conjunction with trellis coding. It is a way of performing the path calculation such that the number of paths that must be tracked stays within the ability of a signal processor to handle the computation.
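The flavor of this sequence decoding can be sketched with a toy example (Python). The states, their one-dimensional line values, and the successor sets below are invented for illustration; a real trellis-coded modem works on two-dimensional constellation points, but the principle of keeping only the cheapest path into each state is the same:

```python
def viterbi(received, states, successors):
    """Toy sequence decoder in the spirit of trellis coding.

    'received' is a list of noisy measurements, 'states' maps each symbol
    to its nominal line value, and 'successors' restricts which symbol may
    follow which.  Whole sequences are scored by accumulated squared
    distance; keeping only the best path into each state is the Viterbi
    pruning that stops the path count growing exponentially.
    """
    # paths[s] = (total squared distance, decoded sequence ending in s)
    paths = {s: ((received[0] - v) ** 2, [s]) for s, v in states.items()}
    for r in received[1:]:
        new = {}
        for s, v in states.items():
            # extend every path whose last state is an allowed predecessor
            cands = [(paths[p][0] + (r - v) ** 2, paths[p][1] + [s])
                     for p in states if s in successors[p]]
            new[s] = min(cands)
        paths = new
    return min(paths.values())[1]

states = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 3.0}
successors = {"A": {"A", "B"}, "B": {"C", "D"},
              "C": {"A", "B"}, "D": {"C", "D"}}
# The middle sample (1.6) is nearer C, but the allowed sequences force B:
print(viterbi([0.1, 1.6, 2.9], states, successors))   # → ['A', 'B', 'D']
```

Note how the decision about the middle symbol is made from the whole sequence, not the individual measurement, which is exactly the source of the noise rejection described above.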
Depending on the number of points in the constellation, Trellis coding can show
a gain in signal-to-noise terms of 4 to 6 dB.
When the earliest interactive (or “online”) computer systems were built, the
designers often did not understand that queueing theory would dictate the
behavior of these new systems. Some serious and very costly mistakes were
made because the way communications systems actually behave is the opposite
of what normal intuition would suggest. However, queueing theory can be
applied to any situation where many users wish to obtain service from a finite
resource in a serial fashion. Such situations as people queueing for the
checkout at a supermarket or motor cars queueing at an intersection are much
the same as data messages queueing for the use of a link within a data
switching device.
The graph in Figure 175 on page 414 shows queue length as a function of server
utilization for single server queues.
In order to discuss the way queues behave there are some technical terms that
must be understood.
Service Time is the time taken by the checkout clerk to process a particular
person. This will be different for each person depending on the number of
items to be processed.
Average Service Time is the average of the service times of a number of people processed by the checkout.
Arrival Rate is the rate (people per hour) at which people arrive and join the
queue.
Queue Length is the number of people waiting in the queue at a particular time.
Average Queue Length is the average length of the queue over a given period of
time.
Server is the facility doing the processing; in this example, the job of the checkout clerk.
Utilization of the Server is the percentage of time that the server is busy. This is the average service time multiplied by the arrival rate. The utilization value is something between zero and one but is often expressed as a percentage.
C.1.1 Fundamentals
In order to describe the behavior of a queue mathematically we need the
following information:
Input Traffic
• Arrival Rate
The average (over some specified period of time) rate at which
transactions (messages, people, etc.) arrive at a facility requiring
service.
Many of the numbers above are averages (means). If the arrivals are random then there will be a deviation around the mean values. For example, the 90th percentile is that value of time below which the random variable under study falls 90% of the time.
C.1.2 Distributions
There are two things under discussion that are governed to some degree by
uncertainty:
1. The pattern of arrivals
2. The time it takes to service a given transaction
Both of these distributions may be exactly regular or completely random or
anything in between. Notice that the distribution of arrivals and the distribution
of service times are quite independent of one another. 143
The “in between” distributions are called “Erlang distributions with parameter
m”, or just “Erlang-m”. The “Erlang parameter” is a measure of randomness.
In Figure 175 on page 414 the different curves apply to different service time
distributions. Arrivals are assumed to be random.
143 This is not quite true. The rate of arrival of frames at a switching node is dependent on their length and the link speed. The
longer they are, the lower the rate of arrival. For the purpose of discussion this effect can be ignored.
Lq = ρ / (1 − ρ)

Lw = ρ² / (1 − ρ)

Then note that the waiting queue length Lw is shorter than the mean queue size Lq by the quantity ρ; that is, the difference is, on the average, less than one transaction:

Lq = Lw + ρ

Tq = Ts / (1 − ρ)

Tw = (ρ × Ts) / (1 − ρ)
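As a quick check of these formulas, the following sketch (Python; the arrival rate and service time are made-up numbers) evaluates them for a single-server queue with random arrivals and exponentially distributed service times:

```python
def mm1_metrics(arrival_rate, avg_service_time):
    """Single-server queue with random arrivals and exponential service
    times (M/M/1) - the 'exponential' curve in Figure 175.

    Returns utilization, mean numbers in the system and waiting, and the
    mean time in the system and mean waiting time.
    """
    rho = arrival_rate * avg_service_time        # utilization, must be < 1
    if rho >= 1:
        raise ValueError("server saturated: the queue grows without bound")
    lq = rho / (1 - rho)                         # mean number in the system
    lw = rho ** 2 / (1 - rho)                    # mean number waiting
    tq = avg_service_time / (1 - rho)            # mean time in the system
    tw = rho * avg_service_time / (1 - rho)      # mean waiting time
    return rho, lq, lw, tq, tw

# Example: 6 arrivals per second, 0.1 s average service time
rho, lq, lw, tq, tw = mm1_metrics(6.0, 0.1)
print(f"utilization {rho:.2f}: {lq:.2f} in system, waiting {tw:.3f} s")
```

At 60% utilization the mean waiting time already equals 1.5 average service times, which is the basis of the rule of thumb about loading.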
The general rule is that no resource should be loaded to more than 60% utilization, because of the effect of the waiting time on system response. This is still a good rule if
transactions arrive at truly random rates, are time sensitive and there are no
congestion control mechanisms in place to control things. But there are a
number of situations in which much higher utilizations are practical.
Regularity of Input Distribution and Service Time Distribution
If transactions arrive at a processing system (for example) at exactly one
millisecond intervals and if processing takes exactly one millisecond then
the facility will be utilized 100% and there will be no queue. It all hinges
on the word “exact”.
If the distribution of arrivals and of service times is truly random then the
“exponential” curve in Figure 175 on page 414 will hold true.
If arrivals are random and service times are constant then the “constant” curve in the figure is appropriate.
144 SNA does not packetize or segment data for transport on INN links. However, the maximum length of blocks entering the
network can be controlled.
145 Another way of saying this is that if we sum n sources of bursty traffic then the total rate exhibits less burstiness as a result.
As the number (n) of sources increases, the ratio of the standard deviation to the mean of a summed rate approaches zero.
When describing the timing relationship between two things there are a number
of words that must be understood. Unfortunately, these words are commonly
used in different contexts with quite different meanings.
In this document the strict engineering definitions have been used except where
otherwise noted such as in the description of FDDI. See 13.6, “Fiber Distributed
Data Interface (FDDI)” on page 281. When reading other documents it is
necessary to try to understand exactly what the author means by the various
terms.
The word “asynchronous” is usually used to mean a data stream where the characters are not synchronized with one another but the bits within each character are.
Sometimes these concepts get mixed up as when SDLC data link protocol is
used to transmit data over an asynchronous circuit or when ASCII protocols are
used to send data through a synchronous channel.
In the FDDI standard the term synchronous is used in a completely different way:
It means data traffic that has some real-time or “guaranteed bandwidth”
requirement.
All of the above terms describe a timing relationship between two things. When using these terms, it is essential to know just which things are being related and in what respect they are being compared.
Cyclic-Reservation Multiple-Access Scheme for Gbit/s LANs and MANs based on Dual-Bus Configuration. M. Mehdi Nassehi. EFOC/LAN 90.

Enterprise Systems Connection (ESCON) Architecture. J.C. Elliott and M.W. Sachs. IBM Journal of Research and Development, Vol. 36, No. 4, July 1992.

Paris: An Approach to Integrated High-Speed Private Networks. Israel Cidon and Inder S. Gopal. International Journal of Digital and Analog Cabled Systems, Vol. 1, 1988, pp 77-85.

Control Mechanisms for High Speed Networks. Israel Cidon and Inder S. Gopal. ICC ′90.

The PlaNET/Orbit High Speed Network. I. Gopal, P.M. Gopal, R. Guerin, J. Janniello and M. Kaplan. Where was this published?

Real-Time Packet Switching: A Performance Analysis. Israel Cidon, Inder Gopal, George Grover and Moshe Sidi. IEEE Journal on Selected Areas in Communications, Vol. 6, No. 9, December 1988.

Bandwidth Management and Congestion Control in plaNET. Israel Cidon, Inder Gopal and Roch Guerin. IEEE Communications Magazine, October 1991.

Congestion Control for High Speed Packet Networks. Krishna Bala, Israel Cidon and Khosrow Sohraby. Infocom ′90.

An Overview of the Aurora Gigabit Testbed. David D. Clark et al. Proceedings of Infocom ′92.

A Blind Voice Packet Synchronization Strategy. Joong Ma and Inder Gopal. IBM Research Report RC 13893, 1988.

Bibliography 427

Linear Broadcast Routing. Ching-Tsun Chou and Inder S. Gopal. Journal of Algorithms 10, 490-517 (1989).

Packet Video and Its Integration into the Network Architecture. Gunnar Karlsson and Martin Vetterli. IEEE Journal on Selected Areas in Communications, June 1989.

Surveys

Multimedia in a Network Environment. IBM ITSO Boca Raton Center. IBM Order Number GG24-3947.

Multimedia Networking Performance Requirements. James D. Russell. Proceedings of TriComm ′93. Plenum Press, New York.

Rationale, Directions and Issues Surrounding High Speed Networks. Imrich Chlamtac and William R. Franta. Proceedings of the IEEE, Vol. 78, No. 1, January 1990.
access unit. A unit that allows multiple attaching devices access to a token-ring network at a central point such as a wiring closet or in an open work area.
• Access unit refers to either IBM 8228s or IBM 8230s.
• Multistation access unit refers specifically to IBM 8228s.
• Controlled access unit refers specifically to IBM 8230s.

automatic single-route broadcast. A function used by some IBM bridge programs to determine the correct settings for, and set the bridge single-route broadcast configuration parameters dynamically, without operator intervention. As bridges enter and leave the network, the parameter settings may need to change to maintain a single path between any two LAN segments for single-route broadcast messages. See also single-route broadcast.

broadband local area network (LAN). A local area network (LAN) in which information is encoded, multiplexed, and transmitted through modulation of a carrier.

broadcast. Simultaneous transmission of data to more than one destination.

bus. (1) In a processor, a physical facility on which data is transferred to all destinations, but from which only addressed destinations may read in accordance with appropriate conventions. (2) A network configuration in which nodes are interconnected through a bidirectional transmission medium. (3) One or more conductors used for transmitting signals or power.

bus network. A network configuration that provides a bidirectional transmission facility to which all nodes are attached. A sending node transmits in both directions.

coaxial (coax) cable. A cable consisting of one conductor, usually a small copper tube or wire, within and insulated from another conductor of a larger diameter, usually copper tubing or copper braid.

coaxial tap. A physical connection to a coaxial cable.

communication network management (CNM). The process of designing, installing, operating, and managing distribution of information and control among users of communication systems.

controller. A unit that controls input/output operations for one or more devices.

crosstalk. The disturbance caused in a circuit by an unwanted transfer of energy from another circuit.

cyclic redundancy check (CRC). Synonym for frame check sequence (FCS).

data link control (DLC) layer. (1) In SNA or Open Systems Interconnection (OSI), the layer that schedules data transfer over a link between two nodes and performs error control for the link. Examples of DLC are synchronous data link control (SDLC) for serial-by-bit connection and DLC for the System/370 channel. (2) See logical link control (LLC) sublayer, medium access control (MAC) sublayer.
Note: The DLC layer is usually independent of the physical transport mechanism and ensures the integrity of data that reach the higher layers.

designated bridge. In a LAN using automatic single-route broadcast, a bridge that forwards single-route broadcast frames. See also root bridge, standby bridge.

destination. Any point or location, such as a node, station, or particular terminal, to which information is to be sent.

destination address. A field in the medium access control (MAC) frame that identifies the physical location to which information is to be sent. Contrast with source address.

destination service access point (DSAP). The service access point for which a logical link control protocol data unit (LPDU) is intended.

destination service access point (DSAP) address. The address of the link service access point (LSAP) for which a link protocol data unit (LPDU) is intended. Also, a field in the LPDU.

differential Manchester encoding. A transmission encoding scheme in which each bit is encoded as a two-segment signal with a signal transition (polarity change) at either the bit time or half-bit time. Transition at a bit time represents a 0. No transition at a bit time indicates a 1.
Note: This coding scheme allows simpler receive/transmit and timing recovery circuitry and a smaller delay per station than achieved with block codes. It also allows the two wires of a twisted pair to be interchanged without causing data errors.

downstream physical unit (DSPU). A controller or a workstation downstream from a gateway that is attached to a host.

EBCDIC. Extended binary-coded decimal interchange code. A coded character set consisting of 8-bit coded characters.

electromagnetic interference (EMI). A disturbance in the transmission of data on a network resulting from the magnetism created by a current of electricity.

Ethernet network. A baseband LAN with a bus topology in which messages are broadcast on a coaxial cable using a carrier sense multiple access/collision detection (CSMA/CD) transmission method.

F

Federal Communications Commission (FCC). A board of commissioners appointed by the President under the Communications Act of 1934, having the power to regulate all interstate and foreign communications by wire and radio originating in the United States.

Fiber Distributed Data Interface (FDDI). A high-performance, general-purpose, multi-station network designed for efficient operation with a peak data transfer rate of 100 Mbps. It uses token-ring architecture with optical fiber as the transmission medium over distances of several kilometers.

filtered frames. Frames that arrive at a bridge adapter but are not forwarded across the bridge, because of criteria specified in a filter program used with the bridge program.

frame. The unit of transmission in some LANs, including the IBM Token-Ring Network and the IBM PC Network. It includes delimiters, control characters, information, and checking characters. On a token-ring network, a frame is created from a token when the token has data appended to it. On a token bus network (IBM PC Network), all frames including the token frame contain a preamble, start delimiter, control address, optional data and checking characters, end delimiter, and are followed by a minimum silence period.

frame check sequence (FCS). (1) A system of error checking performed at both the sending and receiving station after a block check character has been accumulated. (2) A numeric value derived from the bits in a message that is used to check for any bit errors in transmission. (3) A redundancy check in which the check key is generated by a cyclic algorithm. Synonymous with cyclic redundancy check (CRC).

Glossary 431

frequency pair. In the broadband IBM PC Network, the two frequencies or channels used by an adapter: one to transmit data to the network, and one to receive data from the network.

functional address. In IBM network adapters, a special kind of group address in which the address is bit-significant, each “on” bit representing a function performed by the station (such as “Active Monitor,” “Ring Error Monitor,” “LAN Error Monitor,” or “Configuration Report Server”).

G

gateway. A device and its associated software that interconnect networks or systems of different architectures. The connection is usually made above the reference model network layer. Contrast with bridge and router.

group address. In a LAN, a locally administered address assigned to two or more adapters to allow the adapters to copy the same frame. Contrast locally administered address with universally administered address.

group SAP. A single address assigned to a group of service access points (SAPs). See also group address.

H

hard error. An error condition on a network that requires that the source of the error be removed or that the network be reconfigured before the network can resume reliable operation. See also beaconing. Contrast with soft error.

“hello” message. A message used by the automatic single-route broadcast function of IBM bridge programs to detect what bridges enter and leave the network and to cause single-route broadcast parameters to be reset accordingly. The root bridge sends a “hello” message on the network every 2 seconds.

hertz (Hz). A unit of frequency equal to one cycle per second.

hierarchical network. A multiple-segment network configuration providing only one path through intermediate segments between source segments and destination segments. Contrast with mesh network.

hop count. The number of bridges through which a frame has passed on the way to its destination.
Note: Hop count applies to all broadcast frames except single-route broadcast frames.

hop count limit. The maximum number of bridges through which a frame may pass on the way to its destination.

I

idles. Signals sent along a ring network when neither frames nor tokens are being transmitted.

impedance. The combined effect of resistance, inductance, and capacitance on a signal at a particular frequency.

individual address. An address that identifies a particular network adapter on a LAN. See also locally administered address and universally administered address.

International Organization for Standardization (ISO). An organization of national standards bodies from various countries established to promote development of standards to facilitate international exchange of goods and services, and develop cooperation in intellectual, scientific, technological, and economic activity.

J

jitter. Undesirable variations in the arrival time of a transmitted digital signal.

LAN multicast. The sending of a transmission frame intended to be accepted by a group of selected data stations on the same LAN.

LAN segment. (1) Any portion of a LAN (for example, a single bus or ring) that can operate independently but is connected to other parts of the establishment network via bridges. (2) An entire ring or bus network without bridges. See ring segment.

LAN segment number. The identifier that uniquely distinguishes a LAN segment in a multi-segment LAN.

latency. The time interval between the instant at which an instruction control unit initiates a call for data and the instant at which the actual transfer of the data starts.

limited broadcast. Synonym for single-route broadcast.

link. (1) The logical connection between nodes including the end-to-end link control procedures. (2) The combination of physical media, protocols, and programming that connects devices on a network. (3) In computer programming, the part of a program, in some cases a single instruction or an address, that passes control and parameters between separate portions of the computer program. (4) To interconnect items of data or portions of one or more computer programs. (5) In SNA, the combination of the link connection and link stations joining network nodes.

link station. (1) A specific place in a service access point (SAP) that enables an adapter to communicate with another adapter. (2) A protocol machine in a node that manages the elements of procedure required for the exchange of data traffic with another communicating link station. (3) A logical point within a SAP that enables an adapter to establish connection-oriented communication with another adapter. (4) In SNA, the combination of hardware and software that allows a node to attach to and provide control for a link.

lobe. In the IBM Token-Ring Network, the section of cable (which may consist of several cable segments) that connects an attaching device to an access unit.

local area network (LAN). A computer network located on a user’s premises within a limited geographical area.
Note: Communication within a local area network is not subject to external regulations; however, communication across the LAN boundary may be subject to some form of regulation.

local bridge function. Function of an IBM bridge program that allows a single bridge computer to connect two LAN segments (without using a telecommunication link). Contrast with remote bridge function.

logical link control protocol data unit (LPDU). The unit of information exchanged between network layer entities in different nodes. The LPDU consists of the destination service access point (DSAP) and source service access point (SSAP) address fields, the control field, and the information field (if present).

logical link control (LLC) sublayer. One of two sublayers of the ISO Open Systems Interconnection data link layer (which corresponds to the SNA data link control layer), proposed for LANs by the IEEE Project 802 Committee on Local Area Networks and the European Computer Manufacturers Association (ECMA). It includes those functions unique to the particular link control procedures that are associated with the attached node and are independent of the medium; this allows different logical link protocols to coexist on the same network without interfering with each other. The LLC sublayer uses services provided by the medium access control (MAC) sublayer and provides services to the network layer.

logical unit (LU). In SNA, a port through which an end user accesses the SNA network in order to communicate with another end user and through which the end user accesses the functions provided by system services control points (SSCPs). An LU can support at least two sessions, one with an SSCP and one with another LU, and may be capable of supporting many sessions with other logical units.

LU type 6.2. A type of logical unit that supports sessions between two application programs in a distributed data processing environment using the SNA general data stream, which is a structured-field data stream, between two type 5 nodes, a type 5 node and a type 2.1 node, and two type 2.1 nodes.

M

MAC frame. Frames used to carry information to maintain the ring protocol and for exchange of management information.

MAC protocol. (1) In a local area network, the protocol that governs communication on the transmission medium without concern for the physical characteristics of the medium, but taking into account the topological aspects of the network, in order to enable the exchange of data between data stations. See also logical link control protocol (LLC protocol). (2) The LAN protocol sublayer of data link control (DLC) protocol that includes functions for adapter address recognition, copying of message units from the physical network, and message unit format recognition, error detection, and routing within the processor.

MAC segment. An individual LAN communicating through the medium access control (MAC) layer within this network.

main ring path. In the IBM Token-Ring Network, the part of the ring made up of access units, repeaters, converters, and the cables connecting them. See also backup path.

Manchester encoding. See differential Manchester encoding.

Manufacturing Automation Protocol (MAP). A broadband LAN with a bus topology that passes tokens from adapter to adapter on a coaxial cable.

medium access control frame. See MAC frame.

medium access control (MAC) protocol. In a local area network, the part of the protocol that governs communication on the transmission medium without concern for the physical characteristics of the medium, but taking into account the topological aspects of the network, in order to enable the exchange of data between data stations.

medium access control sublayer (MAC sublayer). In a local area network, the part of the data link layer that applies medium access control and supports topology-dependent functions. The MAC sublayer uses the services of the physical layer to provide services to the logical link control sublayer and all higher layers.

mesh network. A multiple-segment network configuration providing more than one path through intermediate LAN segments between source and destination LAN segments. Contrast with hierarchical network.

Micro Channel. The architecture used by IBM Personal System/2 computers, Models 50 and above. This term is used to distinguish these computers from personal computers using a PC I/O channel, such as an IBM PC, XT, or an IBM Personal System/2 computer, Model 25 or 30.

N

NetView. A host-based IBM licensed program that provides communication network management (CNM) or communications and systems management (C&SM) services.

Network Basic Input/Output System (NetBIOS). A message interface used on LANs to provide message, print server, and file server functions. The IBM NetBIOS application program interface (API) provides a programming interface to the LAN so that an application program can have LAN communication without knowledge and responsibility of the data link control (DLC) interface.

network layer. (1) In the Open Systems Interconnection reference model, the layer that provides for the entities in the transport layer the means for routing and switching blocks of data through the network between the open systems in which those entities reside. (2) The layer that provides services to establish a path between systems with a predictable quality of service. See Open Systems Interconnection (OSI).

network management. The conceptual control element of a station that interfaces with all of the architectural layers of that station and is responsible for the resetting and setting of control parameters, obtaining reports of error conditions, and determining if the station should be connected to or disconnected from the network.

noise. (1) A disturbance that affects a signal and that can distort the information carried by the signal. (2) Random variations of one or more characteristics of any entity, such as voltage, current, or data. (3) Loosely, any disturbance tending to interfere with normal operation of a device or system.

non-broadcast frame. A frame containing a specific destination address and that may contain routing information specifying which bridges are to forward it. A bridge will forward a non-broadcast frame only if that bridge is included in the frame’s routing information.

O

observing link. The reporting link (or authorization level) between a bridge and a network management program that authorizes the network management program to perform all network management functions except those restricted to the controlling link. (The restricted functions include removing adapters from a ring, changing certain bridge configuration parameters, and enabling or disabling certain bridge functions.)

Request for Comment (RFC). The Internet Protocol suite is evolving through the mechanism of Request for Comments (RFC). Research ideas and new protocols (mostly application protocols) are brought to the attention of the internet community in the form of an RFC. Some protocols are so useful that they are recommended to be implemented in all future implementations of TCP/IP; that is, they become recommended protocols. Each RFC has a status attribute to indicate the acceptance and stage of evolution this idea has in the TCP/IP protocol suite. Software developers use RFCs as a reference to write TCP/IP software.

ring in (RI). In an IBM Token-Ring Network, the receive or input receptacle on an access unit or repeater.

ring latency. In an IBM Token-Ring Network, the time, measured in bit times at the data transmission rate, required for a signal to propagate once around the ring. Ring latency includes the signal propagation delay through the ring medium, including drop cables, plus the sum of propagation delays through each data station connected to the Token-Ring Network.

ring network. A network configuration in which a series of attaching devices is connected by unidirectional transmission links to form a closed path. A ring of an IBM Token-Ring Network is referred to as a LAN segment or as a Token-Ring Network segment.

ring out (RO). In an IBM Token-Ring Network, the transmit or output receptacle on an access unit or repeater.

ring segment. A ring segment is any section of a ring that can be isolated (by unplugging connectors) from the rest of the ring. A segment can consist of a single lobe, the cable between access units, or a combination of cables, lobes, and/or access units. See cable segment, LAN segment.

root bridge. In a LAN containing IBM bridges that use automatic single-route broadcast, the bridge that sends the “hello” message on the network every 2 seconds. Automatic single-route broadcast uses the message to detect when bridges enter and leave the network, and to change single-route broadcast parameters accordingly. See also designated bridge, standby bridge.

router. An attaching device that connects two LAN segments, which use similar or different architectures, at the reference model network layer. Contrast with bridge and gateway.

routing. (1) The assignment of the path by which a message will reach its destination. (2) The forwarding of a message unit along a particular path through a network, as determined by the parameters carried in the message unit, such as the destination network address in a transmission header.

S

segment. See LAN segment, ring segment.

server. (1) A device, program, or code module on a network dedicated to providing a specific service to a network. (2) On a LAN, a data station that provides facilities to other data stations. Examples are a file server, print server, and mail server.

service access point (SAP). (1) A logical point made available by an adapter where information can be received and transmitted. A single SAP can have many links terminating in it. (2) In Open Systems Interconnection (OSI) architecture, the logical point at which an n + 1-layer entity acquires the services of the n-layer. For LANs, the n-layer is assumed to be data link control (DLC). A single SAP can have many links terminating in it. These link “endpoints” are represented in DLC by link stations.

session. (1) A connection between two application programs that allows them to communicate. (2) In SNA, a logical connection between two network addressable units that can be activated, tailored to provide various protocols, and deactivated as requested. (3) The data transport connection resulting from a call or link between two devices. (4) The period of time during which a user of a node can communicate with an interactive system, usually the elapsed time between log on and log off. (5) In network architecture, an association of facilities necessary for establishing, maintaining, and releasing connections for communication between stations.

single-route broadcast. The forwarding of specially designated broadcast frames only by bridges which have single-route broadcast enabled. If the network is configured correctly, a single-route broadcast frame will have exactly one copy delivered to every LAN segment in the network. Synonymous with limited broadcast. See also automatic single-route broadcast.

socket. Synonym for port.

soft error. An intermittent error on a network that causes data to have to be transmitted more than once to be received. A soft error affects the network’s performance but does not, by itself, affect the network’s overall reliability. If the number of soft errors becomes excessive, reliability is affected. Contrast with hard error.

source address. A field in the medium access control (MAC) frame that identifies the location from which
of a message unit along a particular path through a information is sent. Contrast with destination address .
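The ring latency definition above is directly computable: convert the medium propagation delay to bit times at the data transmission rate, then add the per-station delays summed around the ring. The following sketch illustrates this; the function name and all numeric values (data rate, propagation speed, station delay) are hypothetical examples chosen for illustration, not figures from this book.

```python
# Illustrative ring-latency calculation, following the glossary definition:
# latency (in bit times) = medium propagation delay expressed in bit times
# plus the sum of per-station delays around the ring.
# All parameter values are hypothetical examples.

def ring_latency_bits(ring_length_m, stations, station_delay_bits,
                      data_rate_bps=4_000_000, v_prop=2.0e8):
    """Ring latency in bit times at the given data transmission rate."""
    medium_delay_s = ring_length_m / v_prop            # one trip around the medium
    medium_delay_bits = medium_delay_s * data_rate_bps # convert seconds -> bit times
    return medium_delay_bits + stations * station_delay_bits

# A 1 km ring at 4 Mb/s with 50 stations, each adding a 2.5-bit delay:
# 1000 m / 2e8 m/s = 5 us -> 20 bit times, plus 50 * 2.5 = 125, total 145.
latency = ring_latency_bits(1000, 50, 2.5)
print(f"{latency:.0f} bit times")
```

Note that latency grows with attached stations even when the cable length is unchanged, which is why the glossary counts station delays separately from the medium delay.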
Glossary 437
are unique. Contrast with locally administered address.

unshielded twisted pair (UTP). See telephone twisted pair.
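The service access point (SAP) entry above describes a demultiplexing point: many links terminate in one SAP, and a received frame is handed to whichever n+1-layer entity is bound to its destination SAP. The sketch below illustrates that dispatch with a simple lookup table; the handler functions and this registration API are invented for illustration (the SAP values themselves are the standard IEEE 802.2 assignments: 0x06 for IP, 0xAA for SNAP).

```python
# Hypothetical sketch of SAP-based demultiplexing, as described in the
# "service access point (SAP)" glossary entry: frames are dispatched to
# the entity registered for their destination SAP (DSAP) value.
# The handlers and helper names are invented for illustration.

handlers = {}

def register_sap(sap, handler):
    """Bind an n+1-layer entity (handler) to a SAP value."""
    handlers[sap] = handler

def deliver(dsap, payload):
    """Hand a received LLC frame to the entity at its destination SAP."""
    handler = handlers.get(dsap)
    if handler is None:
        return f"no entity bound to SAP 0x{dsap:02X}"  # frame is discarded
    return handler(payload)

register_sap(0x06, lambda p: f"IP layer got {len(p)} bytes")    # IEEE 802.2 IP SAP
register_sap(0xAA, lambda p: f"SNAP layer got {len(p)} bytes")  # IEEE 802.2 SNAP SAP

print(deliver(0x06, b"datagram"))   # IP layer got 8 bytes
print(deliver(0xE0, b"ipx"))        # no entity bound to SAP 0xE0
```

This mirrors the DLC picture in the definition: the SAP is the logical point, while the individual link stations (here, the callers of deliver) are the endpoints terminating in it.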
Index 441
HDLC 16
HDSL 55
header error check 193
HEC 193, 194, 200
heterochronous 421
hierarchical source coding 109
hierarchy of networks 153
high density bipolar three zeros 27
high speed digital coding 26
high-performance switch 152
high-quality sound 110, 183
high-speed digital subscriber line 55
HPS 152
hub 214
hybrid balanced network 49

I
IBM 3270 105
IBM 3705 147
IBM 3725 147
IBM 3745 147
IBM 9032 222
IBM 9076 152
IBM Rainbow 381
IDFT 63
IEEE 802.2 208, 230, 236
IEEE 802.3 265
IEEE 802.6 301
IFFT 63
Image 112, 183, 184
image traffic 105
implicit rate control 4
impulse noise 44
in-sequence delivery 194
indoor radio 319
inductance 43
information capacity (fiber) 66
Input Distribution 417
insertion ring 263
integrated preamplifier detector 80
Integrated Services Digital Network 159
inter-symbol interference 321
interactive applications 112
inverse discrete Fourier transform 63
IPD 80
irregular delivery of voice packets 123
ISDN 159
  basic rate 34, 161
  basic rate interface 161
  basic rate passive bus 164
  frame relay 170
  primary rate interface 171
  pseudoternary coding 18
ISI 321, 336
ISM frequency bands 322
isochronous 183, 206, 215, 421
isochronous data transfer (FDDI-II) 300
Isochronous Ethernet 275
isolated transmission network 154
ITU-T 187, 192, 200, 211

J
jitter 21, 111
joining fiber cables 68, 90

L
label swapping 250, 253
Lambdanet 380
LAN 183, 184, 187, 208, 213, 215, 344
  emulation 215
LAN access control 259
LAN bridges and routers 8
LAN hub development 393
LAN network management 394
LAN research 343
LAN segment 344
LAN topology 258
LAPD 167
LAPD link control 170
laser 76
laser safety 97
latency 111, 210
latency (token-ring) 281
law of large numbers 418
leakage 43
leaky bucket 197, 244
leaky bucket rate control 118
LED 71, 76, 79
length of connection 101
LFSID 140
light detectors 80
Light Emitting Diode 79
Light Emitting Diodes 71, 76
light sources 76
light transmission 70
lightwave network 367
line transmission termination 160
linewidth (laser) 77, 373
link control (frame relay) 229
link error recovery 114
LMI 230
loading coils 47
local area network 257, 368
local distribution network 368
local form session identifier 140
Local Management Interface 230
logical ID swapping 139, 237
loss priority 193
LPDU 236
LT 160
phase shift keying 84, 409
PHY 292
physical interfaces 210
physical layer 203
physical layer protocol 288, 292
Physical Medium Dependent Layer 292
PIN diodes 80
plaNET 148, 249
plesiochronous 173, 206, 421
Plesiochronous Digital Hierarchy 179
PLL 20
PM 407
PMD 292
point-to-multipoint 195
polarity modulation (optical) 85
polarization (fiber) 70
polarization (radio signals) 322
polarization division multiplexing 323
PolSK 85
power control 329
Practical ATM 212
pre-arbitrated slots 303
PRI 171
primary rate 171
primary rate ISDN 21
priority 196, 198
priority discard scheme 125
private 188, 189
private network 153
propagation delay 3, 217
protocol transparency 218
pseudo noise 325
pseudoternary coding 18
pseudoternary line code 168
PSK 84, 409
pulse amplitude modulation 34
pulse code modulation 13, 121

Q
QAM 411
QoS 196
QPSX 301
quadrature amplitude modulation 411
quality of service 196
queue length 414
queue size 417
queued arbitrated slots 303
queueing delay 251
queueing theory 413
queueing time 415

R
radio LAN 210, 319
Rainbow 381
Rayleigh fading 320
reassembly 206, 208, 209
reflected signal 45
reflective star 371, 372
repeater 22
repeaters 67
repeaters (optical) 85
request counter 305
resequence 194
RESERVE 354
resistance 43
RII field 137
ring synchronization (FDDI) 289
ring wiring concentrator 394
Ringmaster 264
route determination 135
route selection 239
router 215
routing 184, 198
routing information field 137
routing vector 136

S
SAAL 209
SAP 203
SAPI 170
SAR 209
SAT 346
SBA 298
scheduling (CRMA-II) 362
scrambling 410
screened twisted pair 39
SDDI 293
SDH 172, 179, 186, 193, 202, 210, 287
SDLC PAD 225
SDM 323
security 214
segmentation 206, 208, 209
selective broadcast 242
semi-permanent 195
sequential delivery of packets 116
server utilization 415
service
  access point 201
  class 205
  interface 205
  time 414
  time distribution 417, 420
service access point identifier 170
SFH 330
Shannon-Hartley law 324, 330
shielded twisted pair 38
shielded twisted pair (FDDI) 286
short hold mode 217
short passive bus 170
SID 242
sidebands 408
signal mixing devices (optical) 89
signaling 209
V
VAD 125
variable bit rate services 206
variable rate voice coding 125
variable-length frames 251
variation in transit time 113
VCC 190, 194, 201
VCI 191, 192, 193, 198, 201, 310
VCL 190, 201
VCO 20
VCS 201
video 183, 184, 188, 196, 206, 207, 215
  -conferencing 207
  applications 108
  in a packet network 108, 126
  requirements 112
virtual
  call 233
  channel 184, 188, 190, 200
  channel connection 190, 201
  channel identifier 191, 192
  channel link 190, 201
  channel switch 201
  circuit 190
  connection 192, 198
  link 228
  path 188, 201
  path connection 202
  path connection identifier 202
  path identifier 190, 192
  path link 202
  path switch 202
  path terminator 202
  route 135
Viterbi decoding 412
voice 183, 184, 188, 189, 196, 203, 206, 246
  activity detector 125
  compression 124
  in a packet network 122
  packetization 239
voltage controlled oscillator 20
VP 201
VP switching 200
VPC 202
VPCI 202
VPI 190, 192, 193, 198, 202
VPI/VCI swapping 195
VPL 202
VPS 202
VPT 202

W
waiting counter 305
waiting queue 415
WAN 183
waveguide dispersion (fiber) 71
wavelength division multiplexing 78, 96, 369
wavelength selective network 387
WBC 297
WDM 78, 89, 96, 369
WDM circuit switch 377
wideband 407
  channel allocator (FDDI-II) 299
  channels 171
  channels (FDDI-II) 297
  ISDN 159
WRC 393

X
X.25 139, 233
XID 226
Your feedback is very important to help us maintain the quality of ITSO Bulletins. Please fill out this
questionnaire and return it using one of the following methods:
• Mail it to the address on the back (postage paid in U.S. only)
• Give it to an IBM marketing representative for mailing
• Fax it to: Your International Access Code + 1 914 432 8246
• Send a note to REDBOOK@VNET.IBM.COM
Name
Address
Company or Organization
Phone No.
ITSO Technical Bulletin Evaluation RED000
Printed in U.S.A.
GG24-3816-02