TSN Unit 1 PDF

This document discusses multiplexing techniques used in telecommunications transmission systems. It describes: (1) open wire, paired cable, and two-wire versus four-wire transmission systems used to implement voice channels between switching systems; (2) the use of hybrid circuits to convert between two-wire subscriber loops and four-wire trunks for long-distance transmission; and (3) loading coils, which were inserted into wire pairs to reduce amplitude distortion on longer loops by improving the voiceband response.

UNIT-I

MULTIPLEXING
Transmission Systems
Functionally, the communications channels between switching systems are referred to as trunks. In
the past, these channels were implemented with a variety of facilities, including pairs of wires,
coaxial cable, and point-to-point microwave radio links. Except for special situations, trunk facilities
now utilize optical fibers.
Open Wire
A classical picture of the telephone network in the past consisted of telephone poles with crossarms
and glass insulators used to support uninsulated open-wire pairs. Except in rural environments, the
open wire has been replaced with multipair cable systems or fiber. The main advantage of an open-
wire pair is its relatively low attenuation (a few hundredths of a decibel per mile at voice
frequencies). Hence, open wire is particularly useful for long, rural customer loops. The main
disadvantages are having to separate the wires with cross arms to prevent shorting and the need for
large amounts of copper. (A single open-wire strand has a diameter that is five times the diameter of
a typical strand in a multipair cable. Thus open wire uses roughly 25 times as much copper as does
cable.) As a result of copper costs and the emergence of low electronics costs, open wire in rural
environments has been mostly replaced with cable systems using (digital) amplifiers to offset
attenuation on long loops.
Paired Cable

Figure 1.1 Multipair cable.

In response to overcrowded cross arms and high maintenance costs, multipair cable systems were
introduced as far back as 1883. Today a single cable may contain anywhere from 6 to 2700 wire
pairs. Figure 1.1 shows the structure of a typical cable. When telephone poles are used, a single
cable can provide all the circuits required on the route, thereby eliminating the need for crossarms.
More recently the preferred means of cable distribution is to bury it directly in the ground (buried
cable) or use underground conduit (underground cable).
Table 1.1 lists the most common wire sizes found within paired-cable systems. The lower gauge numbers (larger diameters) are used for longer distances, where signal attenuation and direct-current (dc) resistance can become limiting factors. Figure 1.5 shows attenuation curves for the common gauges of paired cable as a function of frequency. An important point to notice in Figure 1.5 is that the cable pairs are capable of carrying much higher frequencies than required by a telephone-quality voice signal (approximately 3.4 kHz).
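The attenuation trend just described can be sketched numerically. A common first-order approximation is that cable-pair attenuation grows roughly as the square root of frequency (skin effect); the constant `k` below is purely hypothetical and chosen only to make the trend visible, not a measured value for any gauge.

```python
import math

def attenuation_db_per_mile(f_hz, k=0.002):
    """Illustrative skin-effect model: loss per mile grows roughly as
    sqrt(frequency). k is a hypothetical constant, not measured data."""
    return k * math.sqrt(f_hz)

# Compare the top of the voiceband (3.4 kHz) with a DS1-rate signal
# (772 kHz center frequency, discussed later in this unit):
voice = attenuation_db_per_mile(3400)
ds1 = attenuation_db_per_mile(772000)
print(round(ds1 / voice, 1))  # ~15x more loss per mile at DS1 frequencies
```

Whatever the exact constants, the square-root growth is why the same pair that comfortably carries a 3.4-kHz voice signal needs regenerative repeaters when driven at megabit rates.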

Figure 1.2 Single-wire transmission with ground return.

Figure 1.3 Two-wire transmission.

Two-Wire Versus Four-Wire


All wire-line transmission in the telephone network is based on transmission through pairs of wires.
As shown in Figure 1.2, transmission through a single wire (with a ground return) is possible and
has been used in the past. However, the resulting circuit is too noisy for customer acceptance.
Instead, balanced pairs of wires as shown in Figure 1.3 are used with signals propagating as a
voltage difference between the two wires. The electrical current produced by the difference signal
flowing through the wires in opposite directions is called a “metallic current.” In contrast, current
propagating in the same direction in both wires is referred to as common-mode or longitudinal
current. Longitudinal currents are not coupled into a circuit output unless there is an imbalance in
the wires that converts some of the longitudinal signal (noise or interference) into a difference
signal. Thus the use of a pair of wires for each circuit provides much better circuit quality than does
single-wire transmission. Some older switching systems used single-wire (unbalanced) transmission
to minimize the number of contacts. Unbalanced circuits were only feasible in small switches where
noise and crosstalk could be controlled.
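The common-mode rejection described above is easy to demonstrate: a balanced receiver outputs the difference between the two wires, so noise induced equally on both (longitudinal) cancels while the metallic (difference) signal survives. The sample values below are illustrative only.

```python
# Desired voiceband samples and noise induced equally on BOTH wires.
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]
noise  = [0.3, -0.2, 0.4, 0.1, -0.3, 0.2, 0.05]

# Drive the pair differentially: half the signal on each wire, opposite
# polarity, with the same longitudinal noise added to both.
tip  = [ s / 2 + n for s, n in zip(signal, noise)]
ring = [-s / 2 + n for s, n in zip(signal, noise)]

# The balanced receiver takes the difference: the common-mode noise
# subtracts out and only the metallic signal remains.
received = [a - b for a, b in zip(tip, ring)]
print(received)  # matches the original signal samples
```

An imbalance between the wires (unequal series resistance, say) would make the cancellation imperfect, which is exactly the longitudinal-to-metallic conversion the text warns about.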
Virtually all subscriber loops in the telephone network are implemented with a single pair of
wires. The single pair provides for both directions of transmission. If users on both ends of a
connection talk simultaneously, their conversations are superimposed on the wire pair and can be
heard at the opposite ends. In contrast, wire-line (and fiber) transmission over longer distances, as
between switching offices, is best implemented if the two directions of transmission are separated
onto separate wire pairs. It is now commonplace to use fiber for the feeder portion of a subscriber
loop, but the drop to a residence is a single pair per telephone.
Longer distance transmission requires amplification and most often involves multiplexing.
These operations are implemented most easily if the two directions of transmission are isolated from
each other. Thus interoffice trunks typically use two pairs of wires or two fibers and are referred to
as four-wire systems. The use of two pairs of wires did not necessarily imply the use of twice as
much copper as a two-wire circuit.
Sometimes the bandwidth of a single pair of wires was separated into two subbands that were
used for the two directions of travel. These systems were referred to as derived four-wire systems.
Hence, the term four-wire has evolved to imply separate channels for each direction of transmission,
even when wires may not be involved. For example, fiber optic and radio systems that use separate
channels for each direction are also referred to as four-wire systems.
The use of four-wire transmission had a direct impact on the switching systems of the toll
network. Since toll network circuits were four-wire, the switches were designed to separately
connect both directions of transmission. Hence, two paths through the switch were needed for each
connection. A two-wire switch, as used in older analog end offices, required only one path through
the switch for each connection.
Two-Wire-to-Four-Wire Conversion
At some point in a long-distance connection it is necessary to convert from two-wire transmission of
local loops to four-wire transmission on long-distance trunks. In the past, the conversion usually
occurred at the trunk interface of the (two-wire) end office switch. Newer digital end office switches
are inherently “four-wire,” which means the two-wire-to-four-wire conversion point is on the
subscriber (line) side of the switch as opposed to the trunk side. A generalized interconnection of
two-wire and four-wire facilities for a connection is shown in Figure 1.4. The basic conversion
function is provided by hybrid circuits that couple the two directions of transmission as shown.
Hybrid circuits have been traditionally implemented with specially interconnected transformers.
More recently, however, electronic hybrids have been developed. Ideally a hybrid should couple all
energy on the incoming branch of the four-wire circuit into the two-wire circuit, and none of the
incoming four-wire signal should be transferred to the outgoing four-wire branch.

Figure 1.4 Two-wire to four-wire conversion

When the impedance matching network Z exactly matches the impedance of the two-wire circuit,
near-perfect isolation of the two four-wire branches can be realized. Impedance matching used to be
a time-consuming, manual process and was therefore not commonly used. Furthermore, the two-
wire circuits were usually switched connections so the impedance that had to be matched would
change with each connection. For these reasons the impedances of two-wire lines connected to
hybrids were rarely matched. The effect of an impedance mismatch is to cause an echo, the power
level of which is related to the degree of mismatch.
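The relationship between mismatch and echo level can be quantified with the standard reflection-coefficient formula. This is a textbook-style sketch, not a model of any particular hybrid: the balance network impedance `z_balance` and line impedance `z_line` are treated as simple resistances.

```python
import math

def echo_return_loss_db(z_line, z_balance):
    """Echo return loss at a hybrid, from the reflection coefficient
    rho = |Zl - Zb| / |Zl + Zb|. Perfect balance gives infinite return
    loss, i.e. no echo; a larger mismatch gives a stronger echo."""
    rho = abs(z_line - z_balance) / abs(z_line + z_balance)
    return float('inf') if rho == 0 else -20 * math.log10(rho)

print(echo_return_loss_db(600, 600))           # inf: matched, no echo
print(round(echo_return_loss_db(500, 600), 1)) # modest mismatch, finite echo
```

Because switched two-wire connections present a different impedance on every call, a fixed `z_balance` is always a compromise, which is why some echo is unavoidable on long analog connections.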
Loading Coils
The attenuation curves shown in Figure 1.5 indicate that the higher frequencies of the voice
spectrum (up to 3.4 kHz) experience more attenuation than the lower frequencies. This frequency-
dependent attenuation distorts the voice signal and is referred to as amplitude distortion. Amplitude
distortion becomes most significant on long cable pairs, where the attenuation difference is greatest.

Figure 1.5 Effect of loading on 24-gauge cable pair


The usual method of combating amplitude distortion on intermediate-length (3- to 15-mile) wire pairs
is to insert artificial inductance into the lines. The extra inductance comes from loading coils that are
inserted at 3000-, 4500-, or 6000-ft intervals. Figure 1.5 shows the effect of loading coils on a 24-
gauge loop. Notice that the voiceband response up to 3 kHz is greatly improved, but the effect on
higher frequencies is devastating.
Prior to the introduction of wire-line and fiber carrier systems, loading coils were used extensively
on exchange area interoffice trunks. Loading coils are also used on the longer, typically rural,
subscriber loops. Here, too, carrier systems have displaced most of the single pairs of wires being
used on long routes. (A related bandwidth-saving technique of the era was speech interpolation, first deployed in analog form as TASI on transatlantic cables; the same basic technique has since been used with digital speech in numerous satellite and land-line applications, where such systems are called digital speech interpolation (DSI) systems.)

FDM Multiplexing and Modulation

The introduction of cable systems into the transmission plant to increase the circuit packing density
of open wire is one instance of multiplexing in the telephone network. This form of multiplexing,
referred to as space division multiplexing, involves nothing more than bundling more than one pair
of wires into a single cable. The telephone network uses two other forms of multiplexing, both of
which use electronics to pack more than one voice circuit into the bandwidth of a single
transmission medium. Analog frequency division multiplexing (FDM) has been used extensively in
point-to-point microwave radios and to a much lesser degree on some obsolete coaxial cable and
wire-line systems. FDM is also utilized in fiber optic transmission systems, where it is referred to as
wavelength division multiplexing (WDM). Digital time division multiplexing (TDM) is the
dominant form of multiplexing used in the telephone networks worldwide.
Frequency Division Multiplexing
As indicated in Figure, an FDM system divides the available bandwidth of the transmission medium
into a number of narrower bands or subchannels. Individual voice signals are inserted into the
subchannels by amplitude modulating appropriately selected carrier frequencies. As a compromise
between realizing the largest number of voice channels in a multiplex system and maintaining
acceptable voice fidelity, the telephone companies established 4 kHz as the standard bandwidth of a
voice circuit. If both sidebands produced by amplitude modulation are used (as in obsolete N1 or N2
carrier systems on paired cable), the subchannel bandwidth is 8 kHz, and the corresponding carrier
frequencies lie in the middle of each subchannel. Since double-sideband modulation is wasteful of
bandwidth, single-sideband (SSB) modulation was used whenever the extra terminal costs were
justified. The carrier frequencies for single-sideband systems lie at either the upper or lower edge of
the corresponding subchannel, depending on whether the lower or upper sideband is selected. The
A5 channel bank multiplexer of AT&T used lower sideband modulation.
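The lower-sideband carrier plan for a 12-channel group can be enumerated directly. This sketch assumes the commonly cited A-type channel bank convention (carriers at 4-kHz spacing from 64 to 108 kHz, with channel 1 on the highest carrier); treat the numbering convention as an assumption rather than a normative detail.

```python
# 12-channel FDM group occupying 60-108 kHz, lower sideband selected.
# With lower-sideband modulation each carrier sits at the UPPER edge of
# its 4-kHz subchannel.
def group_carriers():
    """Carrier frequencies in kHz, channel 1 highest (assumed convention)."""
    return [112 - 4 * n for n in range(1, 13)]  # 108, 104, ..., 64

for n, fc in enumerate(group_carriers(), start=1):
    print(f"channel {n:2d}: carrier {fc} kHz, occupies {fc - 4}-{fc} kHz")
```

Note how the twelve 4-kHz subchannels tile the 60-108 kHz band exactly, which is why the group filters must be sharp: there are no guard bands between adjacent channels.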
FDM Hierarchy
In order to standardize the equipment in the various broadband transmission systems of the original
analog network, the Bell System established an FDM hierarchy as provided in Table 1.2. CCITT
recommendations specify the same hierarchy at the lower levels. (Optical technology, by contrast, is customarily characterized by the wavelength of the optical signal rather than by the corresponding frequency.)
Actually, the usable bandwidth of an FDM voice channel was closer to 3 kHz due to guard bands
needed by the FDM separation filters.
TABLE 1.2 FDM Hierarchy of the Bell Network

Multiplex Level    Number of Voice Circuits   Formation           Frequency Band (kHz)
Voice channel      1                          --                  0-4
Group              12                         12 voice circuits   60-108
Supergroup         60                         5 groups            312-552
Mastergroup        600                        10 supergroups      564-3,084
Mastergroup Mux    1,200-3,600                various             312-17,548
Jumbogroup         3,600                      6 mastergroups      564-17,548
Jumbogroup Mux     10,800                     3 jumbogroups       3,000-60,000

Each level of the hierarchy is implemented using a set of standard FDM modules. The
multiplex equipment is independent of particular broadband transmission media.
All multiplex equipment in the FDM hierarchy used SSB modulation. Thus, every voice circuit
required approximately 4 kHz of bandwidth. The lowest level building block in the hierarchy is a
channel group consisting of 12 voice channels. A channel group multiplex uses a total bandwidth of
48 kHz. Figure 1.20 shows a block diagram of an A5 channel group multiplexer, the most common
A-type channel bank used for first-level multiplexing. Twelve modulators using 12 separate carriers
generate 12 double-sideband signals as indicated. Each channel is then bandpass filtered to select
only the lower sideband of each double-sideband signal. The composite multiplex signal is
produced by superposing the filter outputs. Demultiplex equipment in a receiving terminal uses the
same basic processing in reverse order.
Notice that a sideband separation filter not only removes the upper sideband but also restricts the
bandwidth of the retained signal: the lower sideband. These filters therefore represented a basic
point in the analog telephone network that defined the bandwidth of a voice circuit. Since FDM was
used on all long-haul analog circuits,long-distance connections provided somewhat less than 4 kHz
of bandwidth. (The loading coils discussed previously also produce similar bandwidth limitations
into a voice circuit.)
As indicated in Table 1.2, the second level of the FDM hierarchy is a 60-channel multiplex
referred to as a supergroup. Figure shows the basic implementation of an LMX group bank that
multiplexes five first-level channel groups. The resulting 60- channel multiplex output is identical to
that obtained when the channels are individually translated into 4-kHz bands from 312 to 552 kHz.
Direct translation requires 60 separate SSB systems with 60 distinct carriers. The LMX group bank,
however, uses only five SSB systems plus five lower level modules. Thus two-stage multiplexing, as
implied by the LMX group bank, requires more total equipment but achieves economy through the
use of common building blocks.
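The equipment trade-off just described can be tallied explicitly. The counts below follow the text's figures (60 channels, five 12-channel groups) but are an illustrative accounting of SSB modulator/filter sets and distinct carrier designs, not a parts list for actual LMX hardware.

```python
# Direct translation: every one of the 60 channels needs its own SSB
# modulator/filter set and its own carrier frequency.
direct_ssb = 60
distinct_carriers_direct = 60

# Two-stage LMX approach: five identical 12-channel group banks, plus
# five group-level SSB translators to shift the groups into place.
two_stage_ssb = 5 * 12 + 5          # 65 SSB sets in total
distinct_carriers_two_stage = 12 + 5  # only 17 carrier designs, reused

print(direct_ssb, two_stage_ssb)    # more total units in two stages...
print(distinct_carriers_direct, distinct_carriers_two_stage)  # ...but far fewer designs
```

This is the economy the text refers to: slightly more hardware overall, but only two module designs (the group bank and the group translator) manufactured in volume.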
Because a second-level multiplexer packs individual first-level signals together without guard
bands, the carrier frequencies and bandpass filters in the LMX group bank must be maintained with
high accuracy. Higher level multiplexers do not pack the lower level signals as close together.
Notice that a master group, for example, does not provide one voice channel for every 4 kHz of
bandwidth. It is not practical to maintain the tight spacing between the wider bandwidth signals at
higher frequencies. Furthermore, higher level multiplex signals include pilot tones to monitor
transmission link quality and aid in carrier recovery.
1.3.2 Time Division Multiplexing
Basically, time division multiplexing (TDM) involves nothing more than sharing a transmission
medium by establishing a sequence of time slots during which individual sources can transmit
signals. Thus the entire bandwidth of the facility is periodically available to each source for a
restricted time interval. In contrast, FDM systems assign a restricted bandwidth to each source for
all time. Normally, all time slots of a TDM system are of equal length. Also, each subchannel is
usually assigned a time slot with a common repetition period called a frame interval. This form of
TDM is sometimes referred to as synchronous time division multiplexing to specifically imply that
each subchannel is assigned a certain amount of transmission capacity determined by the time slot
duration and the repetition rate. With this second form of multiplexing, subchannel rates are allowed
to vary according to the individual needs of the sources. The backbone digital links of the public
telephone network (T-carrier, digital microwave, and fiber optics) use a synchronous variety of
TDM.
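Synchronous TDM as described above, one fixed slot per subchannel per frame, reduces to simple interleaving. A minimal sketch:

```python
# Synchronous TDM: each source owns one time slot in every frame.
def tdm_multiplex(sources):
    """sources: equal-length sample lists, one per subchannel.
    Returns the interleaved line stream, frame after frame."""
    stream = []
    for frame in zip(*sources):   # one frame = one sample from each source
        stream.extend(frame)
    return stream

def tdm_demultiplex(stream, n_sources):
    """Recover each subchannel by taking every n-th slot."""
    return [stream[i::n_sources] for i in range(n_sources)]

a, b, c = [1, 2, 3], [10, 20, 30], [100, 200, 300]
line = tdm_multiplex([a, b, c])
print(line)                      # [1, 10, 100, 2, 20, 200, 3, 30, 300]
print(tdm_demultiplex(line, 3))  # recovers the three original streams
```

Because every source is visited once per frame, each subchannel's capacity is fixed by the slot size and frame rate, which is exactly the property the text uses to distinguish synchronous TDM from forms with varying subchannel rates.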
Time division multiplexing is normally associated only with digital transmission links. Although
analog TDM transmission can be implemented by interleaving samples from each signal, the
individual samples are usually too sensitive to all varieties of transmission impairments. In contrast,
time division switching of analog signals is more feasible than analog TDM transmission because
noise and distortion within the switching equipment are more controllable.
T-Carrier Systems
The volume of interoffice telephone traffic in the United States has traditionally grown more rapidly
than local traffic. This rapid growth put severe strain on the older interoffice transmission facilities
that are designed for lower traffic volumes. Telephone companies were often faced with the
necessary task of expanding the number of interoffice circuits. T-carrier systems were initially
developed as a cost-effective means for interoffice transmission: both for initial installations and for
relief of crowded interoffice cable pairs.

Despite the need to convert the voice signals to a digital format at one end of a T1 line and back
to analog at the other, the combined conversion and multiplexing cost of a digital TDM terminal was
lower than the cost of a comparable analog FDM terminal. The first T-carrier systems were designed
specifically for exchange area trunks at distances between 10 and 50 miles.
A T-carrier system consists of terminal equipment at each end of a line and a number of
regenerative repeaters at intermediate points in the line. The function of each regenerative repeater is
to restore the digital bit stream to its original form before transmission impairments obliterate the
identity of the digital pulses. The line itself, including the regenerative repeaters, is referred to as a
span line. The original terminal equipment was referred to as D-type (digital) channel banks, which
came in numerous versions. The transmission lines were wire pairs using 16- to 26-gauge cable. A
block diagram of a T-carrier system is shown in Figure 1.40.
The first T1 systems used D1A channel banks for interfacing, converting, and multiplexing 24
analog circuits. A channel bank at each end of a span line provided interfacing for both directions of
transmission. Incoming analog signals were time division multiplexed and digitized for
transmission. When received at the other end of the line, the incoming bit stream was decoded into
analog samples, demultiplexed, and filtered to reconstruct the original signals. Each individual TDM
channel was assigned 8 bits per time slot. Thus, there were (24)(8) = 192 bits of information in a
frame. One additional bit was added to each frame to identify the frame boundaries, thereby
producing a total of 193 bits in a frame. Since the frame interval is 125 μsec, the basic T1 line rate became
1.544 Mbps. This line rate has been established as the fundamental standard for digital transmission
in North America and Japan. The standard is referred to as a DS1 signal (for digital signal 1).
A similar standard of 2.048 Mbps has been established by ITU-T for most of the rest of the world.
This standard evolved from a T1-like system that provides 32 channels at the same rate as the North
American channels. Only 30 of the channels in the E1 standard, however, are used for voice. The
other two are used for frame synchronization and signaling. The greatly increased attenuation of a
wire pair at the frequencies of a DS1 signal (772 kHz center frequency) mandates the use of
amplification at intermediate points of a T1 span line. In contrast to an analog signal, however, a
digital signal can not only be amplified but also be detected and regenerated. That is, as long as a
pulse can be detected, it can be restored to its original form and relayed to the next line segment. For
this reason T1 repeaters are referred to as regenerative repeaters. The basic functions of these
repeaters are:
1. Equalization
2. Clock recovery
3. Pulse detection
4. Transmission
Equalization is required because the wire pairs introduce certain amounts of both phase and
amplitude distortion that cause intersymbol interference if uncompensated. Clock recovery is
required for two basic purposes: first, to establish a timing signal to sample the incoming pulses;
second, to transmit outgoing pulses at the same rate as at the input to the line.
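The decisive property of regeneration, as opposed to analog amplification, is that thresholding restores each pulse exactly as long as it is still detectable, so impairments do not accumulate across repeater sections. A toy simulation (noise levels and thresholds are illustrative):

```python
import random

def span_segment(bits, noise=0.3, seed=0):
    """One repeater section: add bounded random impairment to each pulse,
    then regenerate by threshold detection."""
    rng = random.Random(seed)
    received = [b + rng.uniform(-noise, noise) for b in bits]  # line impairments
    return [1 if v > 0.5 else 0 for v in received]             # pulse detection

bits = [1, 0, 1, 1, 0, 0, 1, 0]
signal = bits
for repeater in range(10):          # ten regenerative repeaters in tandem
    signal = span_segment(signal, seed=repeater)
print(signal == bits)               # True: restored perfectly at every hop
```

If the per-section noise were large enough to push a pulse across the threshold, the error would then propagate; spacing repeaters closely enough to keep that probability tiny is the design point behind the 6000-ft rule discussed next.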
Regenerative repeaters are normally spaced every 6000 ft in a T1 span line. This distance was
chosen as a matter of convenience for converting existing voice frequency cables to T-carrier lines.
Interoffice voice frequency cables typically used loading coils that were spaced at 6000-ft intervals.
Since these coils were located at convenient access points (manholes) and had to be removed for
high-frequency transmission, it was only natural that the 6000-ft interval be chosen. One general
exception is that the first regenerative repeater is typically spaced 3000 ft from a central office. The
shorter spacing of this line segment was needed to maintain a relatively strong signal in the presence
of impulse noise generated by older switching machines.
The operating experience of T1 systems was so favorable that they were continually upgraded
and expanded. One of the initial improvements produced T1C systems that provide higher
transmission rates over 22-gauge cable. A T1C line operates at 3.152 Mbps for 48 voice channels,
twice as many as a T1 system.
Another level of digital transmission became available in 1972 when the T2 system was
introduced. This system was designed for toll network connections. In contrast, T1 systems were
originally designed only for exchange area transmission. The T2 system provided for 96 voice
channels at distances up to 500 miles. The line rate was 6.312 Mbps, which is referred to as a DS2
standard. The transmission media was special low-capacitance 22-gauge cable. By using separate
cables for each direction of transmission and the specially developed cables, T2 systems could use
repeater spacings up to 14,800 ft in low-noise environments. The emergence of optical fiber systems made copper-based T2 transmission systems obsolete.
TDM Hierarchy
In a manner analogous to the FDM hierarchy, AT&T established a digital TDM hierarchy that has
become the standard for North America. The table below lists the digital TDM signals of North
America and Japan.

Digital TDM Signals of North America and Japan

Digital Signal   Number of        Multiplexer                         Bit Rate   Transmission
Number           Voice Circuits   Designation                         (Mbps)     Media
DS1              24               D channel bank (24 analog inputs)   1.544      T1 paired cable
DS1C             48               M1C (2 DS1 inputs)                  3.152      T1C paired cable
DS2              96               M12 (4 DS1 inputs)                  6.312      T2 paired cable
DS3              672              M13 (28 DS1 inputs)                 44.736     Radio, fiber
DS4              4032             M34 (6 DS3 inputs)                  274.176    T4M coax, WT4 waveguide, radio

Starting with a DS1 signal as a fundamental building block, all other levels are implemented as a combination of some number of
lower level signals. The designation of the higher level digital multiplexers reflects the respective
input and output levels. For example, an M12 multiplexer combines four DS1 signals to form a
single DS2 signal. Table lists the various multiplex levels, their bit rates, and the transmission
media used for each. Notice that the bit rate of a high-level multiplex signal is slightly higher than
the combined rates of the lower level inputs. A similar digital hierarchy has also been established by
ITU-T as an international standard.
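The "slightly higher than the combined rates" point can be verified from the table's numbers; the excess carries framing and pulse-stuffing overhead:

```python
# Payload (sum of tributary rates) versus actual line rate, in Mbps,
# taken from the North American hierarchy table above.
levels = {
    "DS2 (4 x DS1)":  (4 * 1.544, 6.312),
    "DS3 (28 x DS1)": (28 * 1.544, 44.736),
    "DS4 (6 x DS3)":  (6 * 44.736, 274.176),
}
for name, (payload, line_rate) in levels.items():
    overhead = line_rate - payload
    print(f"{name}: {overhead:.3f} Mbps overhead "
          f"({100 * overhead / line_rate:.1f}% of the line rate)")
```

For example, an M12 output runs at 6.312 Mbps while its four DS1 inputs total only 6.176 Mbps; the 0.136 Mbps difference is what allows the multiplexer to frame and rate-adapt tributaries that are not exactly synchronous.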
Digital Pair-Gain Systems
Following the successful introduction of T1 systems for interoffice trunks, most major
manufacturers of telephone equipment developed digital TDM systems for local distribution. These
systems are most applicable to long rural loops where the cost of the electronics is offset by the
savings in wire pairs. No matter what the distance is, unexpected growth can be most economically
accommodated by adding electronics, instead of wire, to produce a pair-gain system. The possibility
of trailer parks, apartment houses, or Internet service providers springing up almost overnight
causes nightmares in the minds of cable plant forecasters. Pair-gain systems provide a networking
alternative to dispel those nightmares. Digital pair-gain systems are also useful as alternatives to
switching offices in small communities. Small communities are often serviced by small automatic
switching systems normally unattended and remotely controlled from a larger switching office
nearby. These small community switches are referred to as community dial offices (CDOs). A CDO
typically provides only limited service features to the customers and often requires considerable
maintenance. Because digital pair-gain systems lower transmission costs for moderate-sized groups
of subscribers, they are a viable alternative to a CDO: stations in the small community are serviced
from the central office by way of pair-gain systems. A fundamental consideration in choosing
between pair-gain systems and remote switching involves the traffic volumes and calling patterns
within the small community.
(Note to the TDM hierarchy above: because T2 transmission systems have become obsolete, the M12 function exists only in a functional sense within M13 multiplexers, which multiplex 28 DS1 signals into one DS3 signal.)
The first two digital pair-gain systems used in the Bell System were the subscriber loop multiplex
(SLM) system and, its successor, the subscriber loop carrier (SLC-40) system. Although these
systems used a form of voice digitization (delta modulation) different from that used in T-carrier
systems (pulse code modulation), they both used standard T1 repeaters for digital transmission at
1.544 Mbps. Both systems also converted the digitized voice signals back into individual analog in-
terfaces at the end office switch to achieve system transparency. Notice that the SLM system
provided both concentration and multiplexing (80 subscribers for 24 channels) while the SLC-40
was strictly a multiplexer (40 subscribers assigned in a one-to-one manner to 40 channels).
ITU Digital Hierarchy

Level Number   Number of Voice Circuits   Multiplexer Designation   Bit Rate (Mbps)
E1             30                         --                        2.048
E2             120                        M12                       8.448
E3             480                        M23                       34.368
E4             1920                       M34                       139.264
E5             7680                       M45                       565.148

The SLM and SLC-40 systems used delta modulation voice coding because it was simpler than
pulse code modulation as used in T1 systems and was therefore less costly to implement on a
per-channel basis, a desirable feature for modular system implementations. The original T1 systems, on
the other hand, minimized electronics costs by using common encoders and decoders, which
precluded implementation of less than 24 channels (an unnecessary feature in an interoffice
application). By the late 1970s low-cost, integrated circuit implementations of standard pulse code
modulation became available that led the way to the first (1979) installation of the SLC-96, a
subscriber carrier system using voice coding that was compatible with T1 systems and the emerging
digital end office switching machines.

The SLC-96 system (which is functionally equivalent to four T1 lines) can interface directly with a
digital end office without being demultiplexed into 24 distinct analog interfaces. This capability,
referred to as integrated digital loop carrier (IDLC), greatly reduces the prove-in distance beyond
which the digital carrier becomes less expensive than separate subscriber pairs.
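The prove-in idea is a simple cost crossover: a pair-gain system trades a fixed electronics cost for per-mile copper savings. Every dollar figure and pair count below is hypothetical, chosen only to make the crossover visible.

```python
def copper_cost(subscribers, miles, cost_per_pair_mile=500):
    """Conventional plant: one dedicated pair per subscriber.
    (Hypothetical $/pair-mile, for illustration only.)"""
    return subscribers * miles * cost_per_pair_mile

def pair_gain_cost(miles, terminal_cost=60000, shared_pairs=4,
                   cost_per_pair_mile=500):
    """Pair-gain system: fixed terminal electronics plus a few shared
    pairs (e.g. four T1-like lines). All figures hypothetical."""
    return terminal_cost + shared_pairs * miles * cost_per_pair_mile

subscribers = 40
for miles in (1, 3, 5, 10):
    plain = copper_cost(subscribers, miles)
    gained = pair_gain_cost(miles)
    print(miles, "mi:", "pair-gain wins" if gained < plain else "copper wins")
```

With these made-up numbers the crossover falls between 3 and 5 miles; the real prove-in distance depends on actual plant costs, but the structure of the trade-off, fixed electronics versus per-mile copper, is the same.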
Data under Voice
After the technology of T-carrier systems had been established, AT&T began offering leased digital
transmission services for data communications. This service, known as Dataphone Digital Service
(DDS), uses T1 transmission links with special terminals (channel banks) that provide direct access
to the digital line. An initial drawback of DDS arose because T-carrier systems were originally used
only for exchange area and short toll network trunks. Without some form of long-distance digital
transmission, the digital circuits in separate exchange areas could not be interconnected. AT&T’s
original response to long-distance digital transmission was the development of a special radio
terminal called the 1A radio digital terminal (1A-RDT). This terminal encoded one DS1 signal
(1.544 Mbps) into less than 500 kHz of bandwidth. As shown in Figure, a signal of this bandwidth
was inserted below the lowest frequency of a master group multiplex. Since this frequency band is
normally unused in TD or TH analog radio systems, the DS1 signal could be added to existing
analog routes without displacing any voice channels. The use of frequencies below those used for
voice signals leads to the designation “data under voice” (DUV).
It is important to point out that DUV represented a special development specifically intended for
data transmission and not for voice services. In fact, DUV was used only to provide long-distance
digital transmission facilities for DDS.
PULSE TRANSMISSION
All digital transmission systems are designed around some particular form of pulse response. Even
carrier systems must ultimately produce specific pulse shapes at the detection circuitry of the
receiver. As a first step, consider the perfectly square pulse shown in Figure 4.1. The frequency
spectrum corresponding to the rectangular pulse is derived in Appendix A and shown in Figure 4.2.
It is commonly referred to as a sin(x)/x response:

    F(ω) = T · sin(ωT/2) / (ωT/2)

where ω = 2πf is the radian frequency and T is the duration of a signal interval.


The high percentage of energy within this band indicates that the signal can be confined to a
bandwidth of 1/T and still pass a good approximation to the ideal waveform. In theory, if only the
sample values at the middle of each signal interval are to be preserved, the bandwidth can be
confined to 1/2T. From this fact the maximum baseband signaling rate in a specified bandwidth is
determined as

    Rmax = 2 · BW    (4.2)
Figure 4.1 Definition of a square pulse

Figure 4.2 Spectrum of square pulse with duration T.


where R = signaling rate = 1/T and BW = available bandwidth.
The maximum signaling rate achievable through a low-pass bandwidth with no intersymbol
interference is equal to twice the bandwidth. This rate Rmax is sometimes referred to as the Nyquist
rate.
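Equation 4.2 is worth checking against numbers that appear elsewhere in this unit:

```python
def nyquist_rate(bandwidth_hz):
    """Maximum baseband symbol rate through an ideal low-pass channel
    of the given bandwidth (Eq. 4.2): Rmax = 2 * BW."""
    return 2 * bandwidth_hz

print(nyquist_rate(4000))    # 8000 symbols/s in a 4-kHz voice channel
print(nyquist_rate(772000))  # 1,544,000: ties the DS1 rate of 1.544 Mbps
                             # to the 772-kHz figure quoted for T1 lines
```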
Although discrete, square-shaped pulses are easiest to visualize, preservation of the square shape
requires wide bandwidths and is therefore undesirable. A more typical shape for a single pulse is
shown in Figure. The ringing on both sides of the main part of the pulse is a necessary
accompaniment to a channel with a limited bandwidth.
Normally, a digital transmission link is excited with square pulses (or modulated equivalents
thereof), but bandlimiting filters and the transmission medium itself combine to produce a response
like the one shown. Figure 4.3 shows pulse output in negative time so the center of the pulse occurs
at t = 0. Actually, the duration of the preringing is limited to the delay of the channel, the filters, and
the equalizers.
An important feature of the pulse response is that, despite the ringing, a pulse can be transmitted
once every T seconds and be detected at the receiver without interference from adjacent pulses.
Obviously, the sample time must coincide with the zero crossings of the adjacent pulses. Pulse
responses like the one shown in Figure can be achieved in channel bandwidths approaching the
minimum (Nyquist) bandwidth equal to one-half of the signaling rate.
Intersymbol Interference
As the signaling rate of a digital transmission link approaches the maximum rate for a given
bandwidth, both the channel design and the sample times become more critical. Small perturbations
in the channel response or the sample times produce nonzero overlap at the sample times called
intersymbol interference. The main causes of intersymbol interference are:
1. Timing inaccuracies
2. Insufficient bandwidth
3. Amplitude distortion
4. Phase distortion
Timing Inaccuracies
Timing inaccuracies occurring in either the transmitter or the receiver produce intersymbol
interference. In the transmitter, timing inaccuracies cause intersymbol interference if the rate of
transmission does not conform to the ringing frequency designed into the channel. Timing
inaccuracies of this type are insignificant unless extremely sharp filter cutoffs are used while
signaling at the Nyquist rate.
Since timing in the receiver is derived from noisy and possibly distorted receive signals,
inaccurate sample timing is more likely than inaccurate transmitter timing. Sensitivity to timing
errors is small if the transmission rate is well below the Nyquist rate.
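The sensitivity to sample-time errors can be illustrated numerically. The sketch below (an illustration assuming ideal sinc pulses; the truncation to a finite number of neighbors is arbitrary) sums the magnitudes of adjacent-pulse tails seen at a sample instant that is offset from the ideal time:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def isi_magnitude(offset, neighbors=500):
    """Sum of |adjacent-pulse amplitudes| at a sample instant displaced by
    `offset` signal intervals from the ideal time, for ideal sinc pulses."""
    return sum(abs(sinc(offset - k))
               for k in range(-neighbors, neighbors + 1) if k != 0)

print(isi_magnitude(0.0))   # effectively 0: perfect timing, no ISI
print(isi_magnitude(0.05))  # clearly nonzero: a 5% timing error causes ISI
```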
Insufficient Bandwidth
The ringing frequency shown in Figure is exactly equal to the theoretical minimum bandwidth of the
channel. If the bandwidth is reduced further, the ringing frequency is reduced and intersymbol
interference necessarily results.
Some systems purposely signal at a rate exceeding the Nyquist rate, but do so with prescribed
amounts of intersymbol interference accounted for in the receiver. These systems are commonly
referred to as partial-response systems so called because the channel does not fully respond to an
input during the time of a single pulse. The most common forms of partial-response systems are
discussed in a later section.
Amplitude Distortion
Digital transmission systems invariably require filters to bandlimit transmit spectrums and to reject
noise and interference in receivers. Overall, the filters are designed to produce a specific pulse
response. When a transmission medium with predetermined characteristics is used, these
characteristics can be included in the overall filter design. However, the frequency response of the
channel cannot always be predicted adequately. A departure from the desired frequency response is
referred to as amplitude distortion and causes pulse distortions (reduced peak amplitudes and
improper ringing frequencies) in the time domain. Compensation for irregularities in the frequency
response of the channel is referred to as amplitude equalization.
Phase Distortion
When viewed in the frequency domain, a pulse is represented as the superposition of frequency
components with specific amplitude and phase relationships. If the relative amplitudes of the
frequency components are altered, amplitude distortion results as above. If the phase relationships of
the components are altered, phase distortion occurs. Basically, phase distortion results when the
frequency components of a signal experience differing amounts of delay in the transmission link.
Compensation of phase distortion is referred to as phase equalization.
ASYNCHRONOUS VERSUS SYNCHRONOUS TRANSMISSION
There are two basic modes of digital transmission involving two fundamentally different techniques
for establishing a time base (sample clock) in the receiving terminal of a digital transmission link.
The first of these techniques is asynchronous transmission, which involves separate transmissions of
groups of bits or characters. Within an individual group a specific predefined time interval is used
for each discrete signal. However, the transmission times of the groups are unrelated to each other.
Thus the sample clock in the receiving terminal is reestablished for reception of each group. With
the second technique, called synchronous transmission, digital signals are sent continuously at a
constant rate. Hence the receiving terminal must establish and maintain a sample clock that is
synchronized to the incoming data for an indefinite period of time.

Figure 4.10 Spectral density of bipolar coding.
Code Space Redundancy
In essence, bipolar coding uses a ternary code space but only two of the levels during any particular
signal interval. Hence bipolar coding eliminates dc wander with an inefficient and redundant use of
the code space. The redundancy in the waveform also provides other benefits. The most important
additional benefit is the opportunity to monitor the quality of the line with no knowledge of the
nature of the traffic being transmitted. Since pulses on the line are supposed to alternate in polarity,
the detection of two successive pulses of one polarity implies an error. This error condition is known
as a bipolar violation. No single error can occur without a bipolar violation also occurring. Hence
the bipolar code inherently provides a form of line code parity. The terminals of T1 lines are
designed to monitor the frequency of occurrence of bipolar violations, and if the frequency of
occurrence exceeds some threshold, an alarm is set.
In T-carrier systems, bipolar violations are used merely to detect channel errors. By adding some
rather sophisticated detection circuitry, the same redundancy can be used for correcting errors in
addition to detecting them. Whenever a bipolar violation is detected, an error has occurred in one of
the bits between and including the pulses indicating the violation. Either a pulse should be a 0 or an
intervening 0 should have been a pulse of the opposite polarity. By examining the actual sample
values more closely, a decision can be made as to where the error was most likely to have occurred.
The bit with a sample value closest to its decision threshold is the most likely bit in error. This
technique belongs to a general class of decision algorithms for redundant signals called maximum
likelihood or Viterbi decoders . Notice that this method of error correction requires storage of pulse
amplitudes. If decision values only are stored, error correction cannot be achieved (only error
detection).
An additional application of the unused code space in bipolar coding is to purposely insert bipolar
violations to signify special situations such as time division multiplex framing marks, alarm
conditions, or special codes to increase the timing content of the line signals. Since bipolar
violations are not normally part of the source data, these special situations are easily recognized. Of
course, the ability to monitor the quality of the line is compromised when bipolar violations occur
for reasons other than channel errors.
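The alternation and violation-detection rules described above can be sketched in a few lines. In this illustration (not from the text), +1 and -1 represent positive and negative pulses and 0 represents no pulse; the initial polarity is an assumption:

```python
def ami_encode(bits, first_polarity=1):
    """Bipolar (AMI) coding: 0 -> no pulse, 1 -> pulse of alternating polarity."""
    out, polarity = [], -first_polarity
    for b in bits:
        if b:
            polarity = -polarity
            out.append(polarity)
        else:
            out.append(0)
    return out

def has_bipolar_violation(line):
    """True if two successive pulses share the same polarity (a line error)."""
    last = 0
    for level in line:
        if level:
            if level == last:
                return True
            last = level
    return False

line = ami_encode([1, 0, 1, 1, 0, 1])
print(line)                          # [1, 0, -1, 1, 0, -1]
line[2] = 1                          # corrupt one pulse: +1 followed by +1
print(has_bipolar_violation(line))   # True
```

Note that any single pulse error (a pulse deleted, inverted, or inserted) upsets the alternation and is caught by this check, which is the line-code parity property discussed above.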
4.3.3 Binary N-Zero Substitution
A major limitation of bipolar (AMI) coding is its dependence on a minimum density of 1's in the source code to maintain timing at the regenerative repeaters. Even when strings of 0's greater than 14 are precluded by the source, a low density of pulses on the line increases timing jitter and therefore produces higher error rates. Binary N-zero substitution (BNZS) augments a basic bipolar code by replacing all strings of N 0's with a special N-length code containing several pulses that purposely produce bipolar violations. Thus the density of pulses is increased while the original data are obtained by recognizing the bipolar violation codes and replacing them at the receiving terminal with N 0's.
As an example, a three-zero substitution algorithm (B3ZS) is described. This particular substitution algorithm is specified for the standard DS-3 signal interface in North America. It was also used in the LD-4 coaxial transmission system in Canada.
In the B3ZS format, each string of three 0's in the source data is encoded with either 00V or B0V. A 00V line code consists of two bit intervals with no pulse (00) followed by a pulse representing a bipolar violation (V). A B0V line code consists of a single pulse in keeping with the bipolar alternation (B), followed by no pulse (0), and ending with a pulse with a violation (V). With either substitution, the bipolar violation occurs in the last bit position of the three 0's replaced by the special code. Thus the position of the substitution is easily identified.
The decision to substitute with 00V or B0V is made so that the number of B pulses (unviolated pulses) between violations (V) is odd. Hence if an odd number of 1's has been transmitted since the last substitution, 00V is chosen to replace three 0's. If the intervening number of 1's is even, B0V is chosen. In this manner all purposeful violations contain an odd number of intervening bipolar pulses. Also, bipolar violations alternate in polarity so that dc wander is prevented. An even number of bipolar pulses between violations occurs only as a result of a channel error. Furthermore, every purposeful violation is immediately preceded by a 0. Hence considerable systematic redundancy remains in the line code to facilitate performance monitoring.
Example. Determine the B3ZS line code for the following data sequence: 101000110000000010001. Use + to indicate a positive pulse, - to indicate a negative pulse, and 0 to indicate no pulse.

B3ZS Substitution Rules

                              Number of Bipolar Pulses (1's)
                                  Since Last Substitution
Polarity of Preceding Pulse       Odd            Even
            -                     00-            +0+
            +                     00+            -0-

Solution. There are two possible sequences depending on whether an odd or even number of pulses has been transmitted following the previous violation.
The example indicates that the process of breaking up strings of 0's by substituting with bipolar violations greatly increases the minimum density of pulses in the line code. In fact, the minimum density is 33% while the average density is just over 60%. Hence the B3ZS format provides a continuously strong timing component. Notice that all BNZS coding algorithms guarantee continuous timing information with no restrictions on source data. Hence BNZS coding supports any application in a completely transparent manner.
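The substitution rules above can be exercised in code. The sketch below is illustrative, not from the text; it assumes the first pulse is positive and that an even number of pulses has been sent before the stream starts. The decoder exploits the fact that every substituted triple ends in a violation:

```python
def b3zs_encode(bits):
    """B3ZS sketch: replace each run of three 0's with 00V or B0V so the
    count of normal (B) pulses between violations is odd.  +1/-1 are
    pulses, 0 is no pulse."""
    out, last, b_count = [], -1, 0   # last = polarity of most recent pulse
    i = 0
    while i < len(bits):
        if bits[i:i + 3] == [0, 0, 0]:
            if b_count % 2 == 1:          # odd 1's since last V -> 00V
                out += [0, 0, last]       # V repeats the last polarity
            else:                         # even -> B0V
                b = -last
                out += [b, 0, b]          # B alternates, V repeats B
                last = b
            b_count = 0
            i += 3
        elif bits[i]:
            last = -last                  # normal bipolar alternation
            out.append(last)
            b_count += 1
            i += 1
        else:
            out.append(0)
            i += 1
    return out

def b3zs_decode(line):
    """Undo the substitutions: a violation marks the end of a 000 triple."""
    bits, last = [], 0
    for level in line:
        if level and level == last:       # bipolar violation detected
            bits[-2:] = [0, 0]            # the two preceding intervals
            bits.append(0)                # and the violation itself are 0's
        else:
            bits.append(1 if level else 0)
        if level:
            last = level
    return bits

data = [1,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,1,0,0,0,1]  # the example sequence
assert b3zs_decode(b3zs_encode(data)) == data
```

Encoding and then decoding recovers the source bits exactly, and the encoded stream never contains three consecutive 0's, which is the transparency property noted above.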
Another BNZS coding algorithm is the B6ZS algorithm used on obsolete T2 transmission lines.
This algorithm produces bipolar violations in the second and fifth bit positions of the substituted
sequence.
ITU recommends another BNZS coding format referred to as high-density bipolar (HDB) coding. As implemented in the E1 primary digital signal, HDB coding replaces strings of four 0's with sequences containing a bipolar violation in the last bit position. Since this coding format precludes strings of 0's greater than three, it is referred to as HDB3 coding. The encoding algorithm is basically the same as the B3ZS algorithm described earlier. Notice that substitutions produce violations only in the fourth bit position, and successive substitutions produce violations with alternating polarities.

A fundamental feature of end-to-end digital connectivity as provided by ISDN is 64-kbps transparent channels referred to as clear-channel capability (CCC) [18]. Two aspects of a bipolar/AMI line code as used on T1 lines preclude CCC: robbed signaling in the least significant bit of every sixth frame and the need to avoid all-0's codewords on the channel. Bit robbing for signaling is avoided with common-channel signaling (also an inherent requirement for ISDN deployment). Two means of augmenting T1 lines to allow transparent channels have been developed.
Digital Biphase
Bipolar coding and its extensions BNZS and PST use extra encoding levels for flexibility in achieving desirable features such as timing transitions, no dc wander, and performance monitorability. These features are obtained by increasing the code space and not by increasing the bandwidth. (The first spectral null of all codes discussed so far, including an NRZ code, is located at the signaling rate 1/T.)
Many varieties of line codes achieve strong timing and no dc wander by increasing the bandwidth
of the signal while using only two levels for binary data. One of the most common of these codes
providing both a strong timing component and no dc wander is the digital biphase code, also
referred to as “diphase” or a “Manchester” code.
A digital biphase code uses one cycle of a square wave at a particular phase to encode a 1 and one
cycle of an opposite phase to encode a 0. Notice that a transition exists at the center of every
signaling interval. Hence strong timing components are present in the spectrum. Furthermore, logic
0 signals and logic 1 signals both contain equal amounts of positive and negative polarities. Thus dc
wander is nonexistent. A digital biphase code, however, does not contain redundancy for
performance monitoring. If in-service performance monitoring is desired, either parity bits must be
inserted into the data stream or pulse quality must be monitored.
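The biphase waveform is easy to generate. The following sketch (illustrative; the phase convention of which cycle represents a 1 is an assumption, and the opposite convention is equally valid) represents each bit as two half-interval levels:

```python
def manchester_encode(bits):
    """Digital biphase (Manchester): one square-wave cycle per bit.
    Convention assumed here: 1 -> (+1, -1), 0 -> (-1, +1).  Each bit
    occupies two half-intervals with a guaranteed mid-bit transition."""
    out = []
    for b in bits:
        out += [1, -1] if b else [-1, 1]
    return out

line = manchester_encode([1, 0, 1, 1])
print(line)
# Every bit interval contains a mid-interval transition, and the waveform
# has zero mean, so strong timing content and no dc wander:
print(sum(line))  # 0
```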

The frequency spectrum of a digital biphase signal is derived in Appendix C and plotted in Figure 4.13, where it can be compared to the spectrum of an NRZ signal. Notice that a digital biphase signal has its first spectral null at 2/T. Hence the extra timing transitions and elimination of dc wander come at the expense of a higher frequency signal. In comparison to three-level bipolar codes, however, the digital biphase code has a lower error rate for equal signal-to-noise ratios. Examination of the frequency spectra in the figure shows that the diphase spectrum is similar to an NRZ spectrum but translated so it is centered about 1/T instead of direct current. Hence digital biphase actually represents digital modulation of a square wave carrier with one cycle per signal interval. Logic 1's cause the square wave to be multiplied by +1 while logic 0's produce multiplication by -1. The "Ethernet" IEEE 802.3 local area data network uses digital biphase (Manchester) coding.
4.3.6 Differential Encoding
One limitation of NRZ and digital biphase signals, as presented up to this point, is that the signal for
a 1 is exactly the negative of a signal for a 0. On many transmission media, it may be impossible to
determine an absolute polarity or an absolute phase reference. Hence the decoder may decode all 1's as 0's and vice versa. A common remedy for this ambiguity is to use differential encoding that
encodes a 1 as a change of state and encodes a 0 as no change in state. In this manner no absolute
reference is necessary to decode the signal. The decoder merely detects the state of each signal
interval and compares it to the state of the previous interval. If a change occurred, a 1 is decoded.
Otherwise, a 0 is determined.
Differential encoding and decoding do not change the encoded spectrum of purely random data (equally likely and uncorrelated 1's and 0's) but do double the error rate. If the detector makes an
error in estimating the state of one interval, it also makes an error in the next interval. An example
of a differentially encoded NRZ code and a differentially encoded diphase signal is shown in Figure
4.14. All signals of differentially encoded diphase retain a transition at the middle of an interval, but
only the 0’s have a transition at the beginning of an interval.
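Both the polarity-ambiguity fix and the error-doubling effect can be demonstrated directly. This sketch (illustrative, not from the text) differentially encodes NRZ data as state changes:

```python
def diff_encode(bits, initial_state=0):
    """Differential NRZ: a 1 toggles the line state, a 0 leaves it alone."""
    out, state = [], initial_state
    for b in bits:
        state ^= b
        out.append(state)
    return out

def diff_decode(states, initial_state=0):
    """Recover data by comparing each state to the previous one."""
    out, prev = [], initial_state
    for s in states:
        out.append(prev ^ s)
        prev = s
    return out

data = [1, 0, 1, 1, 0, 0, 1]
states = diff_encode(data)
assert diff_decode(states) == data

# Inverting the polarity of the entire line (including the reference
# state) leaves the decoded data intact -- no absolute reference needed:
inverted = [1 - s for s in states]
assert diff_decode(inverted, initial_state=1) == data
```

A single detection error in one state corrupts two decoded bits (that interval's comparison and the next one's), which is the error-rate doubling noted above.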
The 6 CRC bits (CB1 to CB6) of each extended superframe represent a CRC check of all 4608
information bits in the previous superframe. Besides providing end-to-end performance monitoring,
the CRC virtually precludes the chances of false framing on a data bit position. Even though static user data can easily simulate the framing pattern sequence, it is extremely unlikely that user data can spuriously generate valid CRC codes in successive superframes.
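A CRC check of this kind is a polynomial division over GF(2). The sketch below is illustrative only: the generator x^6 + x + 1 is the polynomial commonly cited for the ESF CRC-6, but the polynomial and the bit ordering here should be treated as assumptions rather than the exact specification:

```python
def crc6(bits, poly=0b1000011):
    """Bitwise CRC-6 sketch (assumed generator x^6 + x + 1).
    Returns the 6 check bits, most significant first."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg & 0b1000000:        # degree-6 term set: reduce modulo g(x)
            reg ^= poly
    for _ in range(6):             # flush six zero bits to push out remainder
        reg <<= 1
        if reg & 0b1000000:
            reg ^= poly
    return [(reg >> i) & 1 for i in range(5, -1, -1)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
check = crc6(msg)
# Appending the check bits makes the sequence divisible by the generator:
assert crc6(msg + check) == [0, 0, 0, 0, 0, 0]
```

In ESF the message is the 4608 information bits of a superframe (with the F bits forced to 1 for the calculation), and the 6 resulting bits are carried as CB1 to CB6 of the next superframe.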
The performance parameters measured and reported by the 4-kbps data link (DL) are framing bit
errors, CRC errors, out-of-frame (OOF) events, line code (bipolar) violations, and controlled slip
events. Individual events are reported as well as event summaries. The four performance summaries
reported are:
1. Errored seconds (ESs) (ES = at least one CRC event)
2. Bursty seconds (BSs) (BS = 2-319 CRC events)
3. Severely errored seconds (SESs) (SES = >319 CRC events or an OOF)
4. Failed seconds (FSs) (FS = 10 consecutive SESs)
ESF CSUs typically determine the above parameters on 15-min intervals and store them for up to 24 hr for polling by a controller. The SES report conforms to ITU recommendation G.821. In addition to supporting remote interrogation of performance statistics, the data link carries alarm information, loopback commands, and protection switching commands.
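The four summaries can be expressed as a simple classifier. This sketch reflects one plausible reading of the definitions quoted above; the exact thresholds and state-machine behavior are specified in ITU-T G.821 and the ESF standard, so treat the details as assumptions:

```python
def classify_second(crc_events, oof=False):
    """Classify one second per the ESF performance summaries (sketch)."""
    if oof or crc_events > 319:
        return "SES"      # severely errored second
    if crc_events >= 2:
        return "BS"       # bursty second
    if crc_events >= 1:
        return "ES"       # errored second
    return "OK"

def count_failed_seconds(history):
    """Seconds counted as failed once 10 consecutive SESs have occurred."""
    run = fs = 0
    for second in history:
        run = run + 1 if second == "SES" else 0
        if run >= 10:
            fs += 1
    return fs

print(classify_second(1), classify_second(5),
      classify_second(400), classify_second(0, oof=True))  # ES BS SES SES
```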
In addition to the previously mentioned features, ESF introduces a new option for per-channel
signaling via the robbed signaling bits in every sixth frame. Because an ESF is 24 frames long, there
are four signaling bits in every channel in every superframe as opposed to 2 bits in SF format
(Figure 4.34). Whereas the two signaling bits in the SF format are designated as A and B bits, the
four bits in the ESF case are designated A, B, C, and D. Three signaling modes are defined: 2-state
where all bits are A bits, 4-state where the signaling bits are ABAB, and 16-state where the
signaling bits are ABCD. The SF format provides the first two signaling modes but not the last.

TIME DIVISION MULTIPLEX LOOPS AND RINGS
In this section a particular form of a TDM network is described that is quite useful in
interconnecting distributed nodes. The basic structure of interest is referred to as a TDM loop or
TDM ring and is shown in Figure 4.36.
Basically, a TDM ring is configured as a series of unidirectional (two-wire) links arranged to
form a closed circuit or loop. Each node of the network is implemented with two fundamental
operational features. First, each node acts as a regenerative repeater, merely recovering the incoming bit stream and retransmitting it. Second, the network nodes recognize the TDM frame structure and communicate on the loop by removing and inserting data into specific time slots assigned to each node.
(Calculation of the ESF CRC described earlier actually includes the F bits, which are set to 1 for purposes of CRC calculation only. Thus, channel errors in the F bits do not create CRC errors unless they occur in the CRC bits themselves.)

Figure 4.36 Time division multiplex loop.

As indicated in the figure, a
full-duplex connection can be established between any two nodes by assigning a single time slot or
channel for a connection. One node inserts information into the assigned time slot that propagates
around the loop to the second node. The destination node removes data as the assigned time slot
passes by and inserts return data in the process. The return data propagates around the loop to the
original node where it is removed and replaced by new data, and so forth.
Since other time slots are not involved with the particular connection shown, they are free to be
used for other connections involving arbitrary pairs of nodes. Hence a TDM loop with C time slots
per frame can support C simultaneous full-duplex connections.
If, as channels become available, they are reassigned to different pairs of nodes, the transmission
facilities can be highly utilized with high concentration factors and provide low blocking
probabilities between all nodes. Thus a fundamental attraction of a loop network is that the
transmission capacity can be assigned dynamically to meet changing traffic patterns. In contrast, if a
star network with a centralized switching node is used to interconnect the nodes with four-wire
links, many of the links to particular nodes would be underutilized since they cannot be shared as in
a loop configuration.
Another feature of the loop-connected network is the ease with which it can be reconfigured to
accommodate new nodes in the network. A new access node is merely inserted into the nearest link
of the network and the new node has complete connectivity to all other nodes by way of the TDM
channels. In contrast, a star structured network requires transmission to the central node and
expansion of the centralized switching facilities. The ability to reassign channels to arbitrary pairs of nodes in a TDM loop implies that the loop is much more than a multiplexer. It is, in fact, a
distributed transmission and switching system. The switching capabilities come about almost as a
by-product of TDM transmission. TDM loops represent the epitome of integrated transmission and
switching.
TDM loops have been used within computer complexes to provide high capacity and high
interconnectivity between processors, memories, and peripherals. The loop structure in this
application is sometimes more attractive than more conventional bus structures since all
transmission is unidirectional and therefore avoids timing problems on bidirectional buses that limit
their physical length. Furthermore, as more nodes are added to a bus, the electrical loading increases,
causing a limitation on the number of nodes that can be connected to a bus. Loops, on the other
hand, have no inherent limits of transmission length or numbers of nodes.
The loop structure is topologically identical to the token-passing ring developed by IBM and standardized by the IEEE as the 802.5 local area network. However, a token-passing ring operates differently than a TDM loop in that there is only one channel. When a node on a ring becomes active, it uses the entire capacity of the outgoing link until it is through sending its message. In contrast, a node on a loop uses only specific time slots in the TDM structure, allowing other nodes to be simultaneously "connected" using other time slots. In essence, a TDM loop is a distributed-circuit switch and an 802.5 ring is a distributed-packet switch.
A particularly attractive use of a loop with high-bandwidth links is shown in Figure. This figure
illustrates the use of add-drop multiplexers (ADMs) that access whatever bandwidth is needed at a
local node but pass the rest on to other nodes. In typical applications the amount of bandwidth
allocated to each node is quasi-static: It is changed only in response to macroscopic changes in
traffic patterns, possibly as a function of the time of day. This basic operation is generally referred
to as a cross-connect function as opposed to a switching function, which involves call-by-call reconfigurations. An important point to note about Figure 4.37 is the ability to utilize a general-purpose
physical topology but define an arbitrary functional topology on top of it.
Figure 4.37 Functional mesh, fiber loop and ADMs.

Figure 4.38 Use of reverse loop to circumvent link failures in TDM loops.
One obvious limitation of a loop is its vulnerability to failures of any link or node. The effect of a
node failure can be minimized by having bypass capabilities included in each node. When bypassed,
a node becomes merely a regenerative repeater, as on T-carrier transmission links. Link failures can
be circumvented by providing alternate facilities. Figure 4.38 shows one particular structure using a
second, reverse-direction loop to provide backup capabilities in the case of failures. When fully
operational, the network can use the reverse loop as a separate, independent network for traffic as
needed. Whenever a failure occurs, the nodes adjacent to the break establish a new loop by
connecting the forward path to the reverse path at both places. Hence all nodes continue to have full
connectivity to any node on the new loop.
A particular example of the use of the dual reverse loop for both protection and distributed
queued access to the channels is the distributed queued dual-bus (DQDB) system developed by
QPSX in Australia and standardized by the IEEE as an 802.6 metropolitan area network.

SONET/SDH
The first generations of fiber optic systems in the public telephone network used proprietary
architectures, equipment, line codes, multiplexing formats, and maintenance procedures. Some
commonality with other systems in the network came from suppliers who also supplied digital radio
systems. In these cases, the multiplexing formats and maintenance protocols emulated counterparts
in the radio systems, which also had proprietary architectures. The only thing in common with all of
the radio and fiber systems from all of the suppliers was that the interface to the network was some
number of DS3 cross-connect signals. Proprietary multiplexing formats for multiple DS3 signals
evolved because there was no higher level standard compatible with the applications. A DS4 signal,
which is composed of six DS3 signals, requires too much bandwidth for radio systems and carries a
larger cross section of channels (4032) than needed in many applications.
The Regional Bell Operating Companies and interexchange carriers (IXCs), the users of the
equipment, naturally wanted standards so they could mix and match equipment from different
suppliers. This became particularly important as a result of competition among the IXCs who
desired fiber interfaces to the local exchange carriers (LECs) but did not want to necessarily buy
from the same suppliers as the LECs. (It might be necessary for an IXC to interface with a different
supplier at each LEC.) To solve these problems, and others, Bellcore initiated an effort that was
later taken up by the T1X1 committee of the Exchange Carriers Standards Association (ECS A) to
establish a standard for connecting one fiber system to another at the optical level (i.e., “in the
glass”). This standard is referred to as the synchronous optical network (SONET). In the late stages
of the development of this standard, CCITT became involved so that a single international standard
exists for fiber interconnect between telephone networks of different counties. Internationally, the
standard is known as the synchronous digital hierarchy (SDH), The SONET standard addresses the
following specific issues:
1. Establishes a standard multiplexing format using some number of 51.84-Mbps (STS-1) signals
as building blocks.
2. Establishes an optical signal standard for interconnecting equipment from different suppliers.

3. Establishes extensive operations, administration, maintenance, and provisioning (OAM&P) capabilities as part of the standard.
4. Defines multiplexing formats for carrying existing digital signals of the asynchronous
multiplexing hierarchy (DS1, DS1C, DS2, DS3).
5. Supports CCITT (ITU-T) digital signal hierarchy (E1, E2, E3, E4).
6. Defines a DS0 identifiable mapping format for DS1 signals.
7. Establishes a flexible architecture capable of accommodating other applications such as
broadband ISDN with a variety of transmission rates. Wide-bandwidth signals (greater than
51.84 Mbps) are accommodated by concatenating multiple STS-1 signals. An STS-3c signal, for example, is a 155.52-Mbps signal that is treated by the network as a single entity.
At the lowest level is the basic SONET signal referred to as the synchronous transport signal level 1
(STS-1). Higher level signals are referred to as STS-N signals. An STS-N signal is composed of N
byte-interleaved STS-1 signals. The optical counterpart of each STS-N signal is an optical carrier
level N signal (OC-N). Table also includes ITU nomenclature for the SDH, which refers to signals
as synchronous transport modules N (STM-N). Because common applications of the ITU signal
hierarchy cannot efficiently use a 51.84-Mbps signal, the lowest level STM signal is a 155.52-Mbps (STS-3c) signal.
Although the SONET specification is primarily concerned with OC-N interconnect standards,
STS-1 and STS-3 electrical signals within the SONET hierarchy are useful within a switching office
for interconnecting network elements (e.g., multiplexers, switching machines, and cross-connect
systems).

SONET Multiplexing Overview
The first step in the SONET multiplexing process involves generation of a 51.840-Mbps STS-1
signal for each tributary. The STS-1 signal contains the tributary (payload) traffic plus transport
overhead. As indicated in the figure, a variety of tributary types are accommodated:
1. A single DS3 per STS-1 that can be a standard asynchronous DS3 signal generated by an M13 or M23 multiplexer. Asynchronous DS3 inputs are passed transparently through the system to a DS3 output. Because this transparent option exists, any 44.736-Mbps signal can be carried within the payload envelope.
2. A group of lower rate tributaries such as DS1, DS1C, DS2, or E1 signals can be packed into the STS-1 payload.
3. A higher rate (wideband) signal can be packed into a multiple number of concatenated STS-1 signals. Prevalent examples of higher rate signals are 139.264-Mbps fourth-level multiplexes of ITU or a broadband ISDN signal at 150 Mbps. Each of these applications requires three STS-1 signals concatenated together to form an STS-3c signal. Higher levels of concatenation (to form STS-Nc signals) are possible for higher rate tributaries. Concatenated STS-1 signals contain internal control bytes that identify the signal as a component of a higher speed channel so the integrity of the concatenated data can be maintained as it passes through a network.

Functional block diagram of SONET multiplexing.
An STS-N signal is created by interleaving bytes from N STS-1 signals that are mutually
synchronized. All timing (frequency) adjustment is done in generating each of the individual STS-1
signals. STS-1 signals that originate in another SONET node with a possibly different frequency are
rate adjusted with the equivalent of byte stuffing (described later) to become synchronized to the
clock of the local node. No matter what the nature of the tributary traffic is, all STS-1 tributaries in a
STS-N signal have the same high-level format and data rate.
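Byte interleaving itself is a simple round-robin over the tributaries. The following sketch (illustrative only; real STS-N formation also involves the pointer and overhead processing described elsewhere) shows the interleaving step in isolation:

```python
def byte_interleave(tributaries):
    """Byte-interleave N equal-length, mutually synchronized byte streams,
    as in forming an STS-N from N STS-1 signals (illustrative sketch)."""
    assert len({len(t) for t in tributaries}) == 1, "streams must be same length"
    out = bytearray()
    for column in zip(*tributaries):   # one byte from each tributary in turn
        out.extend(column)
    return bytes(out)

sts1_a = b"AAAA"
sts1_b = b"BBBB"
sts1_c = b"CCCC"
print(byte_interleave([sts1_a, sts1_b, sts1_c]))  # b'ABCABCABCABC'
```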
Optical carrier level-N (OC-N) signals are generated by first scrambling the STS-N signal (except for framing bytes and STS-ID bytes) and then converting the electrical signal to an optical signal. Other than scrambling, the OC-N signal is generated with direct conversion to an optical signal. Thus, the data rates, formats, and framing of the STS-N and OC-N signals are identical.
A SONET system is defined as a hierarchy of three levels—sections, lines, and paths as indicated
in Figure. Each of these levels has overhead bandwidth dedicated to administering and maintaining
the respective level. As indicated in the figure above, one of the overhead functions provided within an
STS-N signal involves calculation and transmission of a parity byte for the entire STS-N signal.
Parity is also defined for the other levels of the architecture as described in the following section.
SONET Frame Formats
The frame format of an STS-1 signal is shown in Figure. As indicated, each frame consists of 9 rows
of 90 bytes each. The first 3 bytes of each row are allocated to transport overhead with the balance
available for path overhead and payload mapping.

SONET SYSTEM HIERARCHY
The transport overhead is itself composed of section overhead and line overhead. Path overhead is
contained within the information payload as indicated.
The 9 rows of 87 bytes (783 bytes in all) in the information payload block are referred to as the envelope capacity. Because the frame rate is 8 kHz, the composite data rate of each STS-1 signal can be represented as the sum of the transport overhead rate and the information envelope capacity:
STS-1 rate = overhead rate + information envelope rate
= (9 x 3 x 8 x 8000) + (9 x 87 x 8 x 8000)
= 1.728 x 10^6 + 50.112 x 10^6
= 51.840 Mbps
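The rate arithmetic above can be checked directly with a few lines of Python (a sketch of the calculation, not SONET-specific code):

```python
# STS-1 frame: 9 rows of 90 bytes, of which 3 columns are transport
# overhead, repeated at the 8-kHz frame rate.
ROWS, COLS, OH_COLS = 9, 90, 3
FRAME_RATE = 8000  # frames per second
BITS = 8           # bits per byte

overhead_rate = ROWS * OH_COLS * BITS * FRAME_RATE            # 1.728 Mbps
envelope_rate = ROWS * (COLS - OH_COLS) * BITS * FRAME_RATE   # 50.112 Mbps
sts1_rate = overhead_rate + envelope_rate

print(sts1_rate)  # 51840000 bps = 51.840 Mbps
```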
The internal format of the envelope capacity is dependent on the type of tributary traffic being
carried. One aspect of the envelope format that is common to all types of traffic is the 9 bytes of
path overhead indicated in Figure 8.13. The actual location and purpose of this overhead are
described in the next two sections.
STS-1 frame format.
As a specific example of a higher level (STS-N) signal, Figure 8.14 depicts the details of an STS-3
signal that also represents the STM-1 signal format in ITU terminology. Transmission of the bytes
occurs row by row and left to right. Thus, the first 3 bytes of an STS-3 frame are the three framing
bytes A1, A1, A1. Most of the section and line overhead functions within an STS-3 signal are carried
in the STS-1 number 1 overhead. Thus many of the corresponding bytes of the other STS-1 signals
are unused and are so designated with an asterisk. Notice, however, that path overhead is included in
the information envelope for each of the STS-1 signals.
After a frame of an STS-N signal is scrambled, a parity byte (BIP-8) is generated that provides even
parity over corresponding bits in all bytes of the STS-N frame. This parity byte is inserted into the
section overhead of the first STS-1 signal of the next STS-N frame
STS-3 frame format.
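Even parity over corresponding bits of every byte reduces to a bitwise XOR of all the bytes, so the BIP-8 calculation can be sketched in a few lines of Python (an illustration of the parity rule, not any particular framer implementation):

```python
from functools import reduce

def bip8(frame: bytes) -> int:
    """BIP-8: bit i of the result provides even parity over bit i of
    every byte in the frame, which is simply the XOR of all bytes."""
    return reduce(lambda acc, b: acc ^ b, frame, 0)

# Appending the parity byte makes the combined frame check to zero.
frame = bytes([0xF6, 0x28, 0x01, 0x55, 0xAA])
parity = bip8(frame)
assert bip8(frame + bytes([parity])) == 0
```

In SONET the byte is computed over the scrambled STS-N frame and carried in the B1 position of the next frame's section overhead.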

SONET Operations, Administration, and Maintenance


The SONET standard places significant emphasis on the need for operations, administration, and
maintenance (OAM) of an end-to-end system. As shown in Figure 8.15, the OAM architecture is
based on the section, line, and path layers described previously. OAM standardization is a
requirement for mixing equipment from multiple vendors and ease of management of all levels of a
system (an individual repeater section or an end-to-end path).
Section Overhead
The functional allocation of the 9 bytes of section overhead in each STS-1 frame shown in Figure
are:
 A1 Framing byte = F6 hex (11110110)
 A2 Framing byte = 28 hex (00101000)
 C1 STS-1 ID identifies the STS-1 number (1,..., N) for each STS-1 within an STS-N
multiplex
 B1 Bit-interleaved parity byte providing even parity over previous STS-N frame after
scrambling
 E1 Section-level 64-kbps PCM orderwire (local orderwire)
 F1 A 64-kbps channel set aside for user purposes
 D1-D3 A 192-kbps data communications channel for alarms, maintenance, control, and
administration between sections
The fact that there is such a richness of maintenance support at the section level (from one repeater
to another) is indicative of the recognized need for extensive OAM facilities and the availability of
economical technology to provide it.

SONET overhead layers.


Line Overhead
The functional allocation of the 18 bytes of line overhead in each STS-1 frame shown in Figure are
as follows:
 H1-H3 Pointer bytes used in frame alignment and frequency adjustment of payload data.
 B2 Bit-interleaved parity for line-level error monitoring
 K1, K2 Two bytes allocated for signaling between line-level automatic protection
switching equipment
 D4-D12 A 576-kbps data communications channel for alarms, maintenance, control,
monitoring, and administration at the line level
 Z1, Z2 Reserved for future use
 E2 A 64-kbps PCM voice channel for line-level orderwire
Notice that the line-level OAM facilities are similar to those available at the section level with the
addition of the protection switching signaling channel and the H1, H2, and H3 pointer bytes used for
payload framing and frequency adjustment.
Path Overhead
There are 9 bytes of path overhead included in every block (9 x 87 bytes) of information payload.
The important aspect of this overhead is that it is inserted when the tributary data are packed into the
synchronous payload envelope (SPE) and not removed (processed) until the tributary data are
unpacked. Thus, it provides end-to-end OAM support independent of the path through the
synchronous network, which may involve numerous intermediate multiplexers, cross-connect
switches, or add-drop multiplexers. The exact location of these 9 bytes within the payload envelope
is dependent on pointer values defined in the next section. The functions of the path overhead bytes
are:
 J1 A 64-kbps channel used to repetitively send a 64-byte fixed-length string so a receiving
terminal can continuously verify the integrity of a path; the contents of the message are user
programmable
 B3 Bit-interleaved parity at the path level
 C2 STS path signal label to designate equipped versus unequipped STS signals and, for
equipped signals, the specific STS payload mapping that might be needed in receiving
terminals to interpret the payloads
 G1 Status byte sent from path-terminating equipment back to path-originating equipment to
convey status of terminating equipment and path error performance (received BIP error
counts)
 F2 A 64-kbps channel for path user
 H4 Multiframe indicator for payloads needing frames that are longer than a single STS
frame; multiframe indicators are used when packing lower rate channels (virtual tributaries)
into the SPE
 Z3-Z5 Reserved for future use
Payload Framing and Frequency Justification
Payload Framing
The location of the 9 bytes of path overhead in the STS-1 envelope is not defined in terms of the
STS-1 transport framing. Instead, the path overhead is considered to be the first column of a frame
of data referred to as the SPE, which can begin in any byte position within the STS-1 payload
envelope (see Figure 8.16). The exact location of the beginning of the SPE (byte J1 of the path
overhead) is specified by a pointer in bytes H1 and H2 of the STS line overhead. Notice that this
means that an SPE typically overlaps two STS-1 frames.
The use of a pointer to define the location of the SPE frame location provides two significant
features. First, SPE frames do not have to be aligned with higher level multiplex frames. It may be
that when first generated, an SPE is aligned with the line overhead at the originating node (i.e., the
pointer value is 0). As the frame is carried through a network, however, it arrives at intermediate
nodes (e.g., multiplexers or cross connects) having an arbitrary phase with respect to the outgoing
transport framing. If the SPE had to be frame aligned with the outgoing signal, a full SPE frame of
storage and delay would be necessary. Thus, the avoidance of frame alignment allows SPEs on
incoming links to be immediately relayed to outgoing links without artificial delay. The location of
the SPE in the outgoing payload envelope is specified by setting the H1, H2 pointer to the proper
value (0-782).
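Because the H1/H2 pointer is a byte offset (0-782) into the 9 x 87 payload envelope, locating the start of the SPE is simple integer arithmetic. A sketch (the helper name and the row/column convention are illustrative; in the standard the offset numbering begins with the byte following H3):

```python
PAYLOAD_ROWS, PAYLOAD_COLS = 9, 87  # the STS-1 payload envelope

def j1_location(pointer: int) -> tuple:
    """Map an H1/H2 pointer value (0-782) to the (row, column)
    position of the J1 path overhead byte within the envelope."""
    if not 0 <= pointer < PAYLOAD_ROWS * PAYLOAD_COLS:
        raise ValueError("pointer out of range")
    return pointer // PAYLOAD_COLS, pointer % PAYLOAD_COLS

print(j1_location(0))    # (0, 0): SPE aligned with the envelope
print(j1_location(782))  # (8, 86): last byte of the envelope
```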
The second advantage of the pointer approach to framing SPE signals is realized when direct access
to subchannels such as DS1s is desired. Because the pointer provides immediate access to the start of
an SPE frame, any other position or time slot within the SPE is also immediately accessible. If the
tributary uses a byte-synchronous mapping format, individual channel bytes have fixed positions
with respect to the start of the SPE. This capability should be compared to the procedures required to
demultiplex a DS3 signal. In a DS3 signal there is no relationship between the higher level framing
and the lower level DS2 and DS1 framing positions. In essence, two more frame recovery processes
are needed to identify a DS0 time slot. The use of pointers in the SONET architecture eliminates the
need for more than one frame recovery process when accessing byte-synchronous lower level
signals.

Representative location of SPE.


Frequency Justification
Although it is generally intended that SONET equipment be synchronized to each other or to a
common clock, allowances must be made for the interworking of SONET equipment that operates
with slightly different clocks. Frequency offsets imply that an SPE may be generated with one clock
rate but be carried by a SONET transport running at a different rate. The means of accommodating a
frequency offset is to accept variable SPE frame rates using dynamic adjustments in the SPE
pointers. Pointer adjustments allow SPE frames to float with respect to the transport overhead to
maintain a nominal level of storage in interface elastic stores. Figure shows the basic means of
accommodating a slow incoming SPE. If the elastic store begins to empty, positive byte stuffing is
invoked to skip one information time slot (the slot immediately following the H3 byte) while the
pointer is simultaneously incremented to delay the SPE frame by one byte.
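The justification decision can be sketched as a threshold test on the elastic store fill. The helper below is an illustrative assumption, not logic from the standard: the threshold values are invented, and the negative-stuff case (an extra SPE byte carried in H3 for a fast source) is the complementary mechanism to the positive stuff described above.

```python
def pointer_action(fill_bytes: int, low: int = 4, high: int = 12) -> int:
    """One frame's justification decision for an SPE elastic store.
    Returns +1 for a positive stuff (slow SPE: skip the byte after H3
    and increment the pointer), -1 for a negative stuff (fast SPE:
    carry an extra byte in H3 and decrement the pointer), 0 otherwise.
    Thresholds are hypothetical values for illustration."""
    if fill_bytes < low:
        return +1
    if fill_bytes > high:
        return -1
    return 0

assert pointer_action(2) == +1   # store emptying: positive byte stuff
assert pointer_action(15) == -1  # store filling: negative byte stuff
assert pointer_action(8) == 0    # nominal fill: no adjustment
```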
Virtual Tributaries
To facilitate the transport of lower rate digital signals, the SONET standard uses sub-STS-1 payload
mappings referred to as virtual tributary (VT) structures, as shown in Figure 8.19. This mapping
divides the SPE frame into seven equal-sized subframes or VT blocks with 12 columns (108 bytes)
in each. Thus, the subframes account for
7 x 12 = 84 columns with the path overhead and two unused columns (reserved bytes R) accounting
for the remainder of the 87 columns in an SPE. The rate of each VT structure is determined as 108 x
8 x 8000 = 6.912 Mbps.
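The column bookkeeping and the VT-group rate can be verified with a quick Python check (plain arithmetic from the figures above):

```python
BITS = 8
FRAME_RATE = 8000

# 87 SPE columns = 1 path overhead + 7 VT groups x 12 columns + 2 reserved
assert 1 + 7 * 12 + 2 == 87

vt_group_rate = 12 * 9 * BITS * FRAME_RATE  # 108 bytes/frame -> 6.912 Mbps
vt15_rate = 3 * 9 * BITS * FRAME_RATE       # 27 bytes/frame  -> 1.728 Mbps

print(vt_group_rate, vt15_rate)  # 6912000 1728000
```

Note that the 1.728-Mbps VT1.5 structure comfortably carries a 1.544-Mbps DS1 plus VT overhead.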

SPE mapping for virtual tributaries


SONET Virtual Tributaries

The VT structures can be individually assigned to carry one of four types of signals. Depending
on the data rate of a particular signal, more than one signal may be carried within a VT structure as
a VT group. All signals within a VT group must be of the same type, but VT groups within a single
SPE can be different types. The particular lower rate signals accommodated as VTs are listed in
Table. The last column indicates how many of the lower rate signals are carried in a single SPE if all
seven VT groups are the same type.
VT-SPE payloads are allowed to float within an STS-1 SPE in the same fashion as pointers to
SPE payloads are allowed to float at the STS-1 level. Thus, a second level of pointer logic is defined
for VT payloads. Again, a floating VT-SPE allows for minimal framing delays at intermediate
nodes and for frequency justification of VT-SPEs undergoing transitions between timing
boundaries. High-rate VT-SPEs are accommodated by inserting an information byte into V3 while
slow-rate VT-SPEs are accommodated by stuffing into the information byte immediately following
V3 when necessary.
Each VT1.5 uses three columns of data to establish 108 bytes in a VT1.5 payload. There are four
such payloads in a 12-column VT group. The V1, V2, V3, V4 bytes of the payload have fixed
positions within the STS-1 payload. The remaining 104 bytes of the VT1.5 signal constitute the
VT1.5 payload, the start of which is the V5 byte pointed to by V1 and V2.
Asynchronous Mapping
The DS1 bit stream is inserted into the information bits (I) with no relationship to the VT-SPE
frame or byte boundaries. As indicated, there are two stuffing opportunities (S1 and S2) available in
every four-frame superframe. Thus, the VT1.5 superframe carries 771, 772, or 773 information bits
depending on the value of the stuff control bits C1 and C2. The nominal number of information bits
in each superframe is 193 x 4 = 772. Nominal frames carry information in S2 while stuffing in S1.
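The bit accounting can be expressed directly; a sketch (the function name is illustrative):

```python
def vt15_superframe_info_bits(s1_stuffed: bool, s2_stuffed: bool) -> int:
    """Information bits in one 500-us VT1.5 superframe. S1 and S2 are
    the two stuffing opportunities controlled by C1 and C2; each one
    used for stuffing removes an information bit from the 773 maximum."""
    return 773 - int(s1_stuffed) - int(s2_stuffed)

assert vt15_superframe_info_bits(True, False) == 772   # nominal: stuff S1, data in S2
assert vt15_superframe_info_bits(False, False) == 773  # fast DS1 tributary
assert vt15_superframe_info_bits(True, True) == 771    # slow DS1 tributary
assert 193 * 4 == 772  # four 193-bit DS1 frames per superframe, nominally
```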

Super frame structure for V T I . 5 tributaries


 A VT1.5 uses three columns of an SPE for 108 bytes in a 500-μsec superframe.
 V1, V2, V3, V4 bytes have fixed locations in an SPE identified by the last two bits of H4.
 V1 and V2 point to V5, which is the first byte of the floating VT1.5 SPE.
 SPE overhead bytes V5, J2, Z6, Z7 occur in identical relative positions of an SPE.
Because the asynchronous operation is compatible with the asynchronous network, it is the
format used in most SONET applications. The major advantage of the asynchronous mode of
operation is that it provides for totally transparent transmission of the tributary signal in terms of
information and in terms of information rate. The major disadvantage of the asynchronous mode is
that 64-kbps DS0 channels and signaling bits are not readily extracted.
Byte-Synchronous Multiplexing
In contrast to the asynchronous mapping, the byte-synchronous payload mapping allocates specific
bytes of the payload to specific bytes (channels) of the DS1 tributary signal. Hence, this mode of
operation overcomes the main drawback of the asynchronous mode in that 64-kbps DS0 channels
and signaling bits within the payload are easily identified. In fact, when the DS1 tributary arises
from legacy applications, the signaling bits of a DS1 are moved from the least significant bit (LSB)
of every sixth frame of respective channels and placed in dedicated signaling bit positions within the
VT-SPE. Thus byte-synchronous multiplexing offers an additional feature of converting from in-slot
signaling to out-slot signaling for DS1 signals.
An important aspect of the byte-synchronous format is the absence of timing adjustments for the
source DS1 signal. Thus, the DS1 interface necessarily requires a slip buffer to accommodate a
DS1 source that may be unsynchronized to the local SONET clock. Although slips in byte-
synchronously mapped DS1

DS1 mappings in VT1.5 SPE: (a) asynchronous; (b) byte synchronous.


signals may occur at the SONET network interface (e.g., SONET gateway), slips cannot occur within
the SONET network because internal nodes rate adjust the VT1.5 payloads with pointer adjustments.
E1 Mappings
E1 signals are mapped into VT2 signals with the same basic procedures used for DS1s. As shown in
Figure, the VT2 signal is composed of four columns of bytes in an STS-1 that produce a total of 144
bytes. After removing the V1, V2, V3, and V4 bytes, the VT2 payload has 140 bytes. The figure
shows formats for asynchronously mapped E1s and byte-synchronously mapped E1s. Notice that the
byte-synchronous mapping for a 30-channel E1 carries channel-associated signaling in slot 16, the
form of out-slot signaling designed into E1 signals at their inception. The same basic format supports
common-channel signaling in a 31-channel E1 format.

Super frame structure for VT2 tributaries


 A VT2 uses four columns of an SPE for 144 bytes in a 500-μsec superframe.
 V1, V2, V3, V4 bytes have fixed locations in an SPE identified by the last two bits of H4.
 V1 and V2 point to V5, which is the first byte of the floating VT2 SPE.
 SPE overhead bytes V5, J2, Z6, Z7 occur in identical relative positions of an SPE.
In the 31-channel E1 format, channel 16 is the CCS channel and channels 1-15 and 17-31 are
the bearer channels. Thus, the multiplex mapping is not changed, just the nomenclature of the
channels and the SPE type designation in the VT path overhead byte V5.
DS3 Payload Mapping
The previous section describes several alternatives for packing virtual tributaries into an STS-1
envelope. When all seven VTs in an envelope are VT1.5s, a total capacity of 28 DS1s is provided,
the same as a DS3 signal. Thus one method of carrying a DS3 signal involves demultiplexing it into
its constituent DS1 (or DS2) signals and packing the constituents as virtual tributaries. This
approach is attractive in that the virtual tributaries are individually accessible for cross-connect or
add-drop multiplexer systems. If the application does not need to access the individual tributaries, it
is simpler to pack the DS3 signal directly into an STS-1, as indicated in Figure 8.24. The payload
mapping in Figure 8.24 treats the DS3 signal simply as a 44.736-Mbps data stream with no implied
internal structure. Thus, this
mapping provides transparent transport of DS3-rate data streams.

El mappings in VT2 SPE: (a) synchronous; (b) byte synchronous.


Each row of a nine-row SPE envelope contains 87 x 8 = 696 bits, which can carry 621 or 622
DS3 data bits depending on the value of the C bits. Notice that this format has five C bits, which
allows for single and double bit error correction. The path overhead (POH) bytes carry the 9 bytes
of POH
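The stuffing range given above can be checked against the nominal DS3 rate with a short Python verification (the numbers are from the text; only the window arithmetic is added):

```python
ROWS, FRAME_RATE = 9, 8000
MIN_BITS_PER_ROW, MAX_BITS_PER_ROW = 621, 622  # selected by the C bits

min_rate = ROWS * MIN_BITS_PER_ROW * FRAME_RATE  # 44.712 Mbps
max_rate = ROWS * MAX_BITS_PER_ROW * FRAME_RATE  # 44.784 Mbps

# The nominal 44.736-Mbps DS3 rate falls inside the adjustable window.
assert min_rate < 44_736_000 < max_rate
print(min_rate, max_rate)  # 44712000 44784000
```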
E4 Payload Mapping
Asynchronous 44.736-Mbps (DS3) payload mapping

Asynchronous 139.264-Mbps (E4) payload mapping.


One example of a SONET super rate mapping is shown in Figure for a 139.264-Mbps fourth-level
ITU-T signal (E4). This signal is packed into a 155.52-Mbps STS-3c (or STM-1) signal. The figure
shows only the synchronous payload envelope (SPE-3c), not the 9 bytes of section and line
overhead in each row. Notice that there is only one column of POH within the SPE-3c envelope.
The POH bytes carry the 9 bytes of overhead.
The payload mapping in Figure 8.25 treats the 139.264-Mbps signal as a transparent data stream
with no implied internal structure. Each row of a nine-row SPE-3c envelope contains 87 x 3 = 261
bytes, which can carry 1934 or 1935 data bits depending on the value of the C bits. Notice that this
format also has five C bits, which allows for single and double bit error correction.
SONET Optical Standards
The optical interface standard defined for “mid-span-meet” of SONET equipment allows for either
NRZ or RZ line codes on single-mode fibers. Generation of the OC-N signal from the STS-N signal
requires a scrambler as shown in Figure 8.26. The scrambler is synchronized to each STS-N frame
by presetting the shift register to all 1s immediately after transmitting the last C1 byte of the STS-N
section overhead. Thus, the frame codes (A1, A2) and STS-1 ID (C1) code are not scrambled. A
minimum level of timing content is assured by the A1, A2, and C1 bytes along with the static
overhead bits of the STS-N frame that are anti-coincident with the scrambler sequence. Because the
scrambler is preset at the same point of every frame, every bit position in successive frames
experiences the same scrambler value. Thus, when static overhead is exclusive-ORed with the
scrambler, the same data values arise.
SONET scrambler.
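A bit-at-a-time sketch of the frame-synchronous scrambler (generating polynomial 1 + x^6 + x^7, register preset to all ones) is shown below. This is illustrative Python, not production framer code; real hardware operates byte- or word-wide, and the bit ordering of the register is an implementation assumption here.

```python
def sonet_scramble(bits, state=0b1111111):
    """Additive scrambler, generating polynomial 1 + x^6 + x^7. `state`
    holds the 7-bit shift register (bit 0 = oldest stage); it is preset
    to all ones each frame so that A1, A2, and C1 are sent unscrambled."""
    out = []
    for b in bits:
        seq = state & 1                    # scrambler sequence bit (stage 7)
        fb = ((state >> 1) & 1) ^ seq      # feedback from taps x^6 and x^7
        state = (state >> 1) | (fb << 6)   # shift; feedback enters stage 1
        out.append(b ^ seq)
    return out

payload = [1, 0, 1, 1, 0, 0, 1, 0] * 4
# Descrambling is the same additive operation, so applying it twice
# recovers the original data.
assert sonet_scramble(sonet_scramble(payload)) == payload
```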

The BER objective is 1 x 10^-10 for optical sections of 40 km or less. Equipment from separate
manufacturers can be freely interchanged for applications with distances up to 25 km. Longer
distances may require joint engineering.
SONET systems are specified to operate with central wavelengths at 1310 nm with SMF fibers or
at 1550 nm with DS-SMF fibers. Operation at 1310 nm with DS-SMF fibers or at 1550 nm with
SMF fibers is not disallowed but must be jointly engineered. A range of laser wavelength tolerances
and maximum allowable spectral widths is specified for both 1310 and 1550 nm. Table 8.10
provides representative values of the specifications.
Representative Maximum Spectral Widths of SONET Sources
SONET Networks
A basic block diagram of a SONET network is shown in Figure. Gateway network elements
(GNEs) provide interfaces to external (asynchronous) digital signals. These signals are mapped
(synchronized) and unmapped (desynchronized) by the gateway using the appropriate mapping
format. At this point only bit stuffing is used to synchronize the asynchronous tributaries to
SONET. No pointer adjustments occur in the GNE. As the STS-N signals propagate through the
network, pointer adjustments in pointer processing (PP) interfaces may be applied at internal
network elements (NEs), but the lower level interface mappings that occur at the GNEs are
untouched. If a particular NE accesses VT payloads, VT payloads in the same VT group that pass
through the node may experience VT pointer adjustments. Otherwise, VT pointer adjustments do
not occur (only the STS-1 level signals are rate adjusted). The following paragraphs summarize
pointer processing aspects of a SONET network:
1. Pointer justification events (PJEs) never occur in an originating GNE.
2. A desynchronizer experiences continuous PJEs only as a result of a synchronization difference
between the originating GNE and the terminating GNE. Synchronization differences or failures
at internal nodes of a SONET network produce continuous pointer adjustments, but these are
removed when the SPE passes through a node that is synchronized to the source GNE.

SONET network elements: S, synchronizer; PP, pointer processor; D, desynchronizer
3. PJE bursts occur for two possible reasons. The first is a result of a reference switch and a
subsequent phase adjustment of a node’s local clock to align it with the phase of the new
reference. Bursts can also occur as a result of clock noise in multiple nodes producing near-
simultaneous pointer adjustments. In order for all of these adjustments to propagate to a
desynchronizing gateway, all of the elastic stores in the path must be at the appropriate
threshold. This can only happen if the source GNE has previously produced some abnormal
behavior such as a loss of a reference or sustained a rather large amount of wander.
4. A pointer adjustment at the SPE level does not affect a VT signal unless it is passed to a node
that accesses the VT and that particular adjustment happens to cause a pointer movement at the
VT level. Even when this occurs, the VT pointer adjustment must pass through the network
(without absorption) to the desynchronizing gateway to affect the outgoing tributary signal. On
average, one of every 30 PJEs at the STS-1 level produces a PJE at the VT1.5 level.
A block diagram of an SPE synchronization circuit (PP) depicts two halves of pointer
processing: one half extracts (desynchronizes) the SPE payload from a received signal and the other
half synchronizes the SPE to the local STS-1 frame rate. The RX pointer processing block extracts
the payload data from the received signal and passes it to the elastic store. The TX pointer
processing block monitors the fill level of the elastic store and makes pointer adjustments to
maintain a nominal level of storage. The size of the elastic store only needs to be on the order
of bytes in length, not a full frame. The ability to use a relatively small elastic store (as compared to
frame-length elastic stores in the asynchronous network) is one of the features of a pointer-based
synchronization architecture: the payloads are allowed to float with respect to the STS-1 frame
boundaries.
Frequency of Pointer Justification Events
If all NEs of a SONET island use a timing reference that is traceable to a common primary reference
source (PRS), PJEs occur only as a result of distribution-induced clock wander that produces no
sustained frequency offset. Thus, when all NEs are synchronized to the same reference, PJEs occur
at random times and have equal numbers of positive and negative values over the long run.
Continuous PJEs occur only when there is a reference failure at some NEs within a SONET island
or the island is intentionally designed to operate in a plesiochronous mode. If the reference failure
occurs at some internal node of the SONET island, the resulting PJEs are removed at the next node
in the path that is still locked to the same reference as the gateway NE. Thus, a tributary
desynchronizer at a GNE must deal with continuous PJEs only when the originating and
terminating gateways are not synchronized to a common reference.

Block diagram of SPE synchronizing equipment RX, receiver, TX, transmitter


SONET Desynchronizers
SONET desynchronizers are necessarily designed with very low clock recovery bandwidths to
smooth the effects of (1) isolated pointer adjustments, (2) continuous pointer adjustments, (3)
pointer adjustment bursts, or (4) combinations of the latter two. A pointer burst is defined as the
occurrence of multiple pointer adjustments of one polarity occurring within the decay time of the
desynchronizer circuit (i.e., the reciprocal of the desynchronizer closed-loop PLL bandwidth). Thus,
it is ironic that as the clock recovery bandwidth is narrowed to smooth the effect of a burst, the
probability of a burst occurrence is increased (by definition only). Extremely narrow PLL
bandwidths are easiest to implement using digital filtering techniques commonly referred to as bit
leaking. Bit leaking is essentially a mechanism for converting byte-sized pointer adjustments into
bit- (or fractional-bit-) sized timing adjustments.
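Bit leaking can be sketched as spreading the 8-bit phase step of one byte pointer adjustment over many output frames. The helper below is a simplified illustration with an invented name and leak rate; real designs leak fractional bits through a low-bandwidth DSP filter rather than whole bits per frame.

```python
def leak_pointer_adjustment(pending_bits: int, leak_per_frame: int = 1):
    """Release a deposited phase error (in bits; +8 or -8 for one byte
    pointer adjustment) in small per-frame steps so the recovered clock
    moves smoothly. Returns the list of per-frame phase steps."""
    steps = []
    while pending_bits != 0:
        step = max(-leak_per_frame, min(leak_per_frame, pending_bits))
        pending_bits -= step
        steps.append(step)
    return steps

assert leak_pointer_adjustment(8) == [1] * 8    # one positive byte stuff
assert leak_pointer_adjustment(-8) == [-1] * 8  # one negative byte stuff
```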
VT1.5 desynchronizer hardware functional components

The microprocessor is used to perform long-term averaging of phase adjustments in lieu of


dedicated logic that requires large counters and wide word sizes for low-bandwidth DSP filtering.
The first function of the microprocessor is to determine the average DS1 payload frequency offset
represented by all frequency adjustment events (bit stuffs, VT pointer adjustments, and STS pointer
adjustments). After the average frequency adjustment is determined, a stuff ratio value is calculated
that allows insertion into a DS3 signal as shown. (The M12 stage is embedded in the M13
multiplexer.) The elastic store fill level is used for very long term adjustments in the output
frequency that arise from finite precision limits of the DSP calculations and for accommodating
variations in the DS3 clock, which is typically not synchronized to the SONET line clock.
SONET RINGS
As has been mentioned earlier in this book, the development of large switching machines and
transmission systems with extremely large cross sections has impacted telecommunications network
architectures with a trend toward fewer hierarchical levels. An undesirable consequence of this
trend is increased dependence on the operational status of individual switching machines and
transmission paths. A SONET self-healing ring, or more simply a SONET ring, is a network
architecture that specifically addresses network survivability. Two basic types of self-healing rings
are shown in Figure: a unidirectional ring and a bidirectional ring. The main difference between the
two types of rings is how the two directions of a duplex connection are established.
In a unidirectional ring a single time slot of the entire ring is assigned to both halves of a
connection. As indicated in Figure, traffic is normally carried only on the (unidirectional) working
path with the counterrotating path used for protection. In the example, an STS-1 (out of an OC-48)
might be carried directly from A to B, but the returning STS-1 would be carried from B through C
and D to A. A bidirectional ring, on the other hand, establishes both halves of the duplex connection
over the shortest path in the ring. Thus, no fiber is identified as a pure working fiber and another as
a pure protection fiber. Because bidirectional rings provide shorter round trip delays for most
connections and allow reuse of time slots on the ring, it is the preferred mode of operation for
interoffice networks. Rings for subscriber access applications do not carry much traffic between
ADM nodes and therefore are more suited to a unidirectional mode of operation.
Unidirectional Path-Switched Ring
As shown in Figure, a unidirectional path-switched ring (UPSR) transmits the same information
from A to B in both directions around the ring. Normally, only the working path is accessed by the
receiving node. If a failure occurs, a node can select the data on the protection channel. Notice that
in the example shown, selection of the protection path actually leads to a shorter path for the
connection from A to B.
(a) Unidirectional and (b) bidirectional rings.
Bidirectional Line-Switched Ring
Bellcore defines two versions of bidirectional line-switched rings (BLSRs): a two-fiber BLSR and
a four-fiber BLSR. On a two-fiber BLSR protection is provided by reserving bandwidth in each of
two counterrotating fiber paths. If all traffic is to be protected, only 50% of the total system
capacity can be used. Under normal conditions connections between two nodes utilize the shortest
path between the nodes. If a fault in either direction of transmission occurs, the nodes adjacent to
the fault perform ring switches as indicated. A ring switch involves switching traffic from working
channels of the failed facility to spare channels of the other facility on the side of the node on
which the fault occurs. The protection-switched traffic propagates all the way around the ring,
being ignored by intervening nodes, until it is switched back to the working channels by the other
node next to the fault. Notice that all nodes (including the nodes adjacent to the fault) communicate
on working channels in the same manner as they did before the protection switching. That is, the
path terminations are not part of the protection path. The main impact of the protection switch is an
increase in delay for affected traffic (and a momentary insertion of extraneous data when the
switch occurs).
UPSR protection switching.
On a four-fiber BLSR two pairs of fibers are provided for each direction of transmission—one
bidirectional working pair and another pair for protection of the first pair. Thus, working and
protection channels are carried on different physical facilities. Again, connections are normally set
up to use the shortest distance of

Two-Fiber BLSR protection switches

Four-Fiber BLSR protections switches


travel for each side of a connection. If a failure occurs on only a working facility, protection
switching occurs similar to “span switching” of a point-to-point system: The traffic is merely
switched to and from the protection facility by nodes adjacent to the fault. However, if a fault
affects both the working and the protection facilities, a ring switch is needed as shown. Again,
protection-switched traffic propagates all the way around the ring without being accessed by
intervening nodes. All traffic accesses still occur on the working channels even though the same
information is passing through the nodes in the protection path.
A four-fiber BLSR obviously requires more facilities than a two-fiber BLSR but has numerous
advantages. First, the protected capacity of the system is twice as large. Second, fiber failures on
only the working pair can be accommodated by a span switch with minimal disruption to traffic.
Third, multiple separate failures can occur on working pairs and be accommodated by multiple span
switches. Fourth, the presence of a spare pair simplifies maintenance testing and possible upgrading
of facilities. For these reasons, a four-fiber BLSR is generally favored.
