Section 3.5
NG SDH, MSPP, RPR, SYNCHRONISATION, PRINCIPLE OF DWDM & FTTH
Next Generation SDH enables operators to provide more data transport services while increasing the efficiency of the installed SDH/SONET base by adding just new edge nodes, sometimes known as Multi Service Provisioning Platforms (MSPP) / Multi Service Switching Platforms (MSSP). These can offer a combination of data interfaces such as Ethernet, 8B/10B, MPLS (Multi Protocol Label Switching) or RPR (Resilient Packet Ring), without removing those for SDH/PDH. This means that it is not necessary to install an overlay network or to migrate all the nodes or fibre optics. This reduces the cost per bit delivered and attracts new customers while keeping legacy services. In addition, in order to make data transport more efficient, SDH/SONET has adopted a new set of protocols that are implemented on the MSPP/MSSP nodes. These nodes can be interconnected with the old equipment that is still running.
Traditional SDH had two main limitations:
- Difficulty of mapping newer services (Ethernet, ESCON, FICON, Fibre Channel, etc.) onto the existing SDH transport network.
- Inability to increase or decrease the available bandwidth to meet the needs of data services without impacting traffic.
Next Generation SDH solves the above issues by adding three main features to traditional SDH:
1. Integrated data transport, i.e. Ethernet tributaries in addition to 2 Mb, 140 Mb and STM-1/4/16 (GFP).
2. An integrated non-blocking, wide-band cross-connect (2 Mb granularity), making efficient use of the transport network in delivering data services (VCAT).
3. Intelligence for topology discovery, route computation and mesh-based restoration (LCAS).
Next Generation SDH is packet friendly and has IP-router-like capabilities. It does not matter whether the client stream has a constant or variable bit rate.
“VCAT provides more granularity, LCAS provides more flexibility and GFP efficiently transports asynchronous or variable bit rate data signals over a synchronous or constant bit rate channel”.
Hence, GFP adds dynamism to legacy SDH. GFP is the most economical way of adding high-speed services, whether constant bit rate or variable bit rate, to SDH networks, and it can provide the basis for evolving towards RPR.
[Figure: Customer client signals (Ethernet, FICON, ESCON, Fibre Channel) enter through native interfaces and are encapsulated by the Generic Framing Procedure (GFP) or LAPS, grouped by Virtual Concatenation, adjusted by the Link Capacity Adjustment Scheme (LCAS), and multiplexed/demultiplexed on the operator side onto SONET/SDH/OTN transport.]
3.1 GFP-F:
GFP-F (Framed) is a layer 2 encapsulation in variable sized frames, optimised for data packet protocols such as IP, PPP, Ethernet, MPLS, etc. Frame mode supports rate adaptation and multiplexing at the packet/frame level for traffic engineering. This mode maps an entire client frame into one GFP frame; the gaps between client frames are discarded. The frame is first stored in a buffer prior to encapsulation in order to determine its length, which introduces delay and latency.
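To make the framing concrete, the following sketch builds a GFP core header for a client frame (a simplified illustration based on the commonly described G.7041 structure: a 2-byte Payload Length Indicator protected by a CRC-16 core HEC; the real payload area also carries a payload header, which is omitted here).

# Sketch: GFP-F style core header = 2-byte PLI + 2-byte cHEC (CRC-16 over the PLI,
# generator x^16 + x^12 + x^5 + 1). Simplified for illustration only.

def crc16_chec(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
    """Bitwise CRC-16 over 'data' (no reflection, no final XOR)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload: bytes) -> bytes:
    pli = len(payload).to_bytes(2, "big")        # payload length indicator
    chec = crc16_chec(pli).to_bytes(2, "big")    # error check over the PLI
    return pli + chec

# Each client frame becomes one GFP frame; inter-frame gaps are simply not sent.
client_frame = b"\x00" * 64                      # dummy client frame
print(gfp_core_header(client_frame).hex())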
3.2 GFP-T:
Transparent mode accepts native block-coded data signals and uses the SDH frame merely as a lightweight digital wrapper. GFP-T is very good for isochronous or delay-sensitive protocols and SAN protocols (e.g. ESCON). GFP-T is used for Fibre Channel, Gigabit Ethernet, etc.
SDH concatenation consists of linking more than one VC together to obtain a rate that is not part of the standard rates. Concatenation is used to transport payloads that do not fit efficiently into the standard set of VCs. There are two forms:
1. Contiguous concatenation
2. Virtual concatenation
In virtual concatenation the VCs are routed individually and may follow different paths within the network; only the path-originating and path-terminating equipment need to recognize and process the virtually concatenated signal structure, as shown in Fig. 5.
[Fig. 5: Contiguous concatenation keeps the C-4 containers together along a single path, whereas virtual concatenation (e.g. a VC-4-2v built from VC-4 #1 and VC-4 #2) routes the individual VC-4s over different paths (Path 1, Path 2) and compensates the differential delay at the terminating end.]
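To illustrate the granularity gain of virtual concatenation, the sketch below estimates how many members a VC-n-Xv group needs for a given Ethernet service and how efficiently the group is filled (the container payload capacities are the usual approximate figures; this is an illustration, not a provisioning tool).

# Rough VCAT sizing: members needed and fill efficiency for a service rate.
import math

PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.760}   # approx. payload capacities

def vcat_members(service_mbps, container):
    """Return (X, efficiency) for carrying 'service_mbps' in a VC-n-Xv group."""
    capacity = PAYLOAD_MBPS[container]
    x = math.ceil(service_mbps / capacity)       # members needed
    return x, service_mbps / (x * capacity)      # payload fill ratio

for service, rate in [("Fast Ethernet", 100.0), ("Gigabit Ethernet", 1000.0)]:
    for container in ("VC-12", "VC-3", "VC-4"):
        x, eff = vcat_members(rate, container)
        print(f"{service:16s} -> {container}-{x}v, efficiency ~{eff:.0%}")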
LCAS enables the payload size of a VCG (a group of VCs) to be adjusted in real time by adding or removing individual VCs from the VCG dynamically, without incurring hits to active traffic. In LCAS, signalling messages are exchanged between the two VCG end points to determine the number of concatenated payloads and to synchronize the addition or removal of SDH channels using LCAS control packets.
Benefits of LCAS:
A. Bandwidth on Demand (customer)
B. Bandwidth on Schedule (operator)
LCAS is not only used for dynamic bandwidth adjustment but also provides survivability options for next generation SDH. LCAS is a tool that gives operators greater flexibility in provisioning VCAT groups, adjusting their bandwidth in service and providing flexible end-to-end protection options. LCAS is defined for all high order and low order payloads of SDH.
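The hitless resizing idea can be pictured with a small sketch (a conceptual illustration only; the real member states and control packets are defined in ITU-T G.7042, and the member rate shown is an assumed VC-12 figure):

# Conceptual LCAS-style VCG: grow or shrink the payload by changing member
# states, without tearing down the group or hitting traffic on other members.
from enum import Enum

class MemberState(Enum):
    IDLE = "IDLE"    # not part of the group
    ADD = "ADD"      # being negotiated into the group
    NORM = "NORM"    # active, carrying payload

class VCG:
    def __init__(self, member_mbps=2.176):       # e.g. VC-12 members
        self.member_mbps = member_mbps
        self.members = {}                        # member id -> state

    def add_member(self, member_id):
        self.members[member_id] = MemberState.ADD    # negotiate first
        self.members[member_id] = MemberState.NORM   # then start carrying payload

    def remove_member(self, member_id):
        self.members[member_id] = MemberState.IDLE   # stop using it before removal
        del self.members[member_id]

    def payload_mbps(self):
        return self.member_mbps * sum(1 for s in self.members.values()
                                      if s is MemberState.NORM)

vcg = VCG()
for i in range(5):                # VC-12-5v, enough for 10 Mb Ethernet
    vcg.add_member(i)
print(vcg.payload_mbps())
vcg.add_member(5)                 # grow the group in service
print(vcg.payload_mbps())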
The main application of this system shall be multi-service traffic switching and aggregation at the MAC layer, traffic grooming, and traffic consolidation of TDM traffic at the SDH layer from the access network towards the core network. Another prominent application of MSPP shall be multiple SDH ring interconnection at STM-1 tributary interfaces as well as at STM-4 and STM-16 aggregate interfaces. The equipment shall provide an integrated cross-connect matrix to switch digital signals at the SDH layer.
The MSPP equipment shall be capable of simultaneously interfacing the PDH streams and mapping / de-mapping them into SDH payloads and vice versa, thus enabling the co-existence of SDH and PDH on the same equipment. This is a great advantage for the network, as the SDH and PDH already existing in the present network can integrate easily, which in turn enables quick bandwidth provisioning to the customer.
MSPP is implemented with two different backhaul transmission rates, viz. STM-16 and STM-64. TEC has also been working on STM-64 for BSNL metro networks. Apart from the standard interfaces on the tributary side, the revised STM-16 provides POS (Packet over SDH) capability on Ethernet interfaces at 10 Mb, 100 Mb and 1000 Mb. The equipment is also envisaged to support DS-3 of SONET. The encapsulation of Ethernet over SDH shall be in accordance with ITU-T G.7041. The system should support Tandem Connection Monitoring (TCM) on the N1 byte and N2 byte for the HO path and LO path respectively.
ADMs supporting GFP and VCAT are known as Multi Service Provisioning Platforms (MSPP). Service providers can now deliver packet-based transport services using the existing SDH infrastructure. GFP and VCAT are located at the endpoints of the network; therefore MSPPs need only be deployed at the edge of the transport network. MSPP targets all applications connecting ultra-high capacity backbones to end customers at their premises. The advent of GFP has spurred a range of customer-located equipment and MSPP cards that aggregate Ethernet traffic onto SDH rings. The generic structure of a next generation MSPP is shown in Fig. 1. This platform consists of the integration of metro WDM with Ethernet/RPR and SDH VC-4 switching fabrics. Integration means both direct interworking, in terms of WDM wavelengths, and full NMS/control-plane integration for management and path provisioning.
[Figure: MSPP nodes interconnected in a ring]
Typical MSPP features include:
- Layer 2 switching.
- Traffic consolidation of TDM traffic at the SDH layer from the access towards the core network.
- Interfacing the PDH streams (2 Mb, 34 Mb, 140 Mb) and mapping / de-mapping them into SDH payloads and vice versa.
Key Technologies
The key technologies for delivering client services efficiently via MSPP are VCAT, which provides better data granularity; GFP, which wraps the data for transport over a converged TDM network; and LCAS, which dynamically allocates and manages bandwidth. Two further mechanisms are:
1. SPRs
2. RPRs
Shared Protection Rings: MSPPs support SPRs to provide Ethernet and packet transport over the SDH infrastructure. The implementation of this technology varies from vendor to vendor. It allows bandwidth on the SDH ring to be provisioned for packet transport by statistically multiplexing Ethernet traffic onto a shared packet ring (circuit) that each MSPP node can access.
Over the last few years the demand for Internet Protocol traffic has been growing at a fast pace, while voice demand has remained more or less stable. Circuit-switched voice traffic has to be converted into packet-switched data traffic, which does not match the present SDH technology. Protocols like Frame Relay, ATM and PPP are inefficient, costly and complex to scale to the increasing demand for data services.
RPR is a MAC-layer, ring-based protocol that combines the intelligence of IP routing and statistical multiplexing with the bandwidth efficiency and resiliency of optical rings. An RPR network consists of two counter-rotating fibre rings that are fully utilised for transport at all times, for superior fibre utilisation. RPR permits more efficient use of bandwidth using statistical multiplexing. It also eliminates the need for manual provisioning, because the architecture lends itself to the implementation of automated provisioning. Moreover, there is no need for channel provisioning, as each ring member can communicate with every other member based on MAC address. RPR also provides two priority queues at the transmission level, which allow the delivery of delay- and jitter-sensitive applications such as voice and video.
RPR is a fibre-based ring network architecture. Data is carried in packets rather than over TDM circuits. RPR networks retain many of the performance characteristics of SDH, such as protection, low latency and low jitter. The RPR architecture is highly scalable, very reliable and easy to manage in comparison to legacy point-to-point topologies. RPR achieves a loop-free topology across the rings with rapid re-convergence on a ring break. RPR supports auto-discovery of other RPR network elements on the ring. New RPR nodes announce themselves to their direct neighbours with control messages and distribute changes in their settings or topologies.
The emerging solution for metro data transport applications is the Resilient Packet Ring (RPR), a newly proposed standard for Ethernet transport. The goal of RPR is to increase the manageability and resiliency of Ethernet services while providing maximum capacity and usage over an established SDH ring. As its name indicates, it has two defining features: it is packet based and it is resilient.
RPR originated from a protocol called Dynamic Packet Transport (DPT). RPR can be seen as a way towards a simpler network architecture for packet transport, because management is centralised and controls both switching and transport. Protection and restoration in the transport layer (SDH or WDM) can be switched off, reducing cost and complexity. Next-generation SDH devices such as MSPPs (multi-service provisioning platforms) are evolving to support RPR.
RPR is a MAC protocol supporting dual counter-rotating rings that can potentially replace traditional SDH rings. The RPR MAC introduces the concept of a transit path: at each node on an RPR ring, traffic that is not destined for the node simply passes through, avoiding queuing and scheduling on a hop-by-hop basis.
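The transit path can be pictured with a small sketch (an illustration of the concept only, not the IEEE 802.17 MAC):

# Frames not addressed to a node pass straight through its transit buffer,
# instead of being queued and scheduled hop by hop.
from collections import deque

class RprNode:
    def __init__(self, mac):
        self.mac = mac
        self.transit_buffer = deque()    # pass-through traffic
        self.delivered = []              # frames addressed to this node

    def receive(self, frame):
        if frame["dst"] == self.mac:
            self.delivered.append(frame)         # strip from the ring, deliver locally
        else:
            self.transit_buffer.append(frame)    # continue on the transit path

def send_around_ring(nodes, frame):
    """Push a frame around one ringlet until some node delivers it."""
    for node in nodes:
        node.receive(frame)
        if node.delivered and node.delivered[-1] is frame:
            return node.mac
        frame = node.transit_buffer.popleft()
    return None

ring = [RprNode(m) for m in ("A", "B", "C", "D")]
print(send_around_ring(ring, {"dst": "C", "payload": "hello"}))    # delivered at C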
SYNCHRONISATION
Synchronous is the first word in the term SDH for a very good reason. If
synchronization is not guaranteed, considerable degradation in network function, and even total
failure of the network can be the result.
New digital technologies and value-added, time-sensitive services, such as real-time video on demand, high-speed Internet access, videoconferencing, bank-to-bank encrypted data exchange and multimedia applications, are based on reliable network architectures (Internet, GSM, ATM, SDH, xDSL and many other network technologies).
All those architectures and services underlie one basic principle: networks must be
synchronized.
[Table: services and the consequences of losing synchronization]
The table clearly shows that the higher the added value of a service, the more it requires support from a very reliable synchronization function.
Synchronization is the means used in digital transmissions in order to ensure that all
network elements (NE) operate at the same frequency.
When a message has to be transmitted between two points (e.g. two cities), the internal clock of the sender controls the frequency of the information sent from this node (f1). A second clock, located at the receiver node, controls the frequency at which the received data is read (f2).
The basic principle of synchronization is that the two frequencies must be exactly the
same in order to allow the receiver to interpret the digital signal properly.
Should f1 > f2, the sender transmits information faster than the receiver can “understand” it, and some information will be lost (“slips of deletion”). Should f2 > f1, the sender is slower than the receiver, and part of the received information will be duplicated (“slips of repetition”).
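A short worked example makes the slip mechanism concrete (an illustrative calculation; the 1 × 10⁻¹¹ figure is the G.811 accuracy quoted below, the other offset is the G.813 free-run accuracy of an SEC):

# Time for two plesiochronous clocks to drift apart by one 125 microsecond frame.
FRAME_S = 125e-6    # one PCM frame

def time_to_slip(freq_offset):
    """Seconds until the accumulated phase error reaches one frame."""
    return FRAME_S / freq_offset

for offset in (2e-11,     # two G.811 clocks at opposite extremes of 1e-11
               4.6e-6):   # a free-running SEC (G.813 accuracy)
    print(f"offset {offset:.0e}: first slip after ~{time_to_slip(offset) / 86400:.3g} days")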
To avoid this worst-case scenario, all network elements are synchronized to a central clock. This central clock is generated by a high-precision primary reference clock (PRC) unit conforming to ITU-T Recommendation G.811, which specifies an accuracy of 1 × 10⁻¹¹. This clock signal must be distributed throughout the entire network. A hierarchical structure is used for this: the signal is passed on by the subordinate synchronization supply units (SSU) and synchronous equipment clocks (SEC). The synchronization signal paths can be the same as those used for SDH communications.
The clock signal is regenerated in the SSUs and SECs with the aid of phase-locked
loops. If the clock supply fails, the affected network element switches over to a clock source
with the same or lower quality, or if this is not possible, it switches to hold-over mode. In this
situation, the clock signal is kept relatively accurate by controlling the oscillator by applying
the stored frequency correction values for the previous hours and taking the temperature of the
oscillator into account. Clock islands must be avoided at all costs, as these would drift out of synchronization over time, and total failure would result. Such
islands are prevented by signaling the network elements with the aid of synchronization status
messages (SSM, part of the S1 byte). The SSM informs the neighboring network element about
the status of the clock supply and is part of the overhead. Special problems arise at gateways
between networks with independent clock supplies. SDH network elements can compensate for
clock offsets within certain limits by means of pointer operations. Pointer activity is thus a
reliable indicator of problems with the clock supply.
In general, the quality of timing will deteriorate as the number of synchronized clocks
in tandem increases and hence for practical synchronization network design, the number of
network elements in tandem should be minimized. Based on theoretical calculations, it is recommended that the longest chain should not exceed 10 SSUs, with no more than 20 SECs interconnecting any two SSUs, and with the total number of SECs limited to 60 (refer to Figure 3). It is
preferable that all SSUs and SECs are able to recover timing from at least two synchronization
trails. The slave clock shall reconfigure to recover timing from an alternative trail if the original
trail fails. Where possible synchronization trails should be provided over diversely routed
paths. In the event of a failure of synchronization distribution, all network elements will seek to
recover timing from the highest hierarchical level clock source available. To effect this, both
SSUs and SECs may have to reconfigure and recover timing from one of their alternate
synchronization trails.
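These limits can be turned into a simple planning check (a sketch; the limits are those quoted above):

# Check a planned PRC/SSU/SEC chain against the recommended reference chain:
# at most 10 SSUs, at most 20 SECs between SSUs, at most 60 SECs in total.
def check_sync_chain(chain):
    """'chain' is a list such as ["PRC", "SEC", "SEC", "SSU", "SEC", ...]."""
    ssus = chain.count("SSU")
    total_secs = chain.count("SEC")
    longest_run = run = 0
    for element in chain:
        run = run + 1 if element == "SEC" else 0
        longest_run = max(longest_run, run)
    problems = []
    if ssus > 10:
        problems.append(f"{ssus} SSUs (limit 10)")
    if longest_run > 20:
        problems.append(f"{longest_run} consecutive SECs (limit 20)")
    if total_secs > 60:
        problems.append(f"{total_secs} SECs in total (limit 60)")
    return problems or ["chain is within the recommended limits"]

print(check_sync_chain(["PRC"] + ["SEC"] * 18 + ["SSU"] + ["SEC"] * 12))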
SSM and squelching may be used on SDH trails for correct reference transfer between
the SSUs. The use of SSM also makes it possible to recover timing for the SEC clocks in the
chain from the opposite direction if the signal in the original direction fails.
When planning the placement of SSUs, the importance of the node locations for both the traffic networks to be synchronized and the synchronization network itself is considered. The maximum number of SEC clocks between two SSUs also has to be taken into account (refer to Figure 3). When planning the synchronization trails, the transmission systems for the transfer of synchronization are selected first; secondly, the timing configuration of the selected systems is planned in detail.
As fig. 1 shows, the progression of the technology can be seen as an increase in the
number of wavelengths accompanied by a decrease in the spacing of the wavelengths. Along
with increased density of wavelengths, systems also advanced in their flexibility of
configuration, through add-drop functions, and management capabilities.
3.1 WDM
Traditional, passive WDM systems are widespread, with 2, 4, 8, 12 and 16 channel counts being the normal deployments. This technique usually has a distance limitation of less than 100 km.
3.2 CWDM
Today, coarse WDM (CWDM) typically uses 20-nm spacing (3000 GHz) of up to 18
channels. The CWDM Recommendation ITU-T G.694.2 provides a grid of wavelengths for
target distances up to about 50 km on single mode fibers as specified in ITU-T
Recommendations G.652, G.653 and G.655. The CWDM grid is made up of 18 wavelengths
defined within the range 1270 nm to 1610 nm spaced by 20 nm.
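The grid itself is easy to reproduce (a sketch using the 20 nm spacing and range quoted above; note that current editions of G.694.2 centre the channels at 1271-1611 nm):

# Generate the 18-channel CWDM wavelength grid.
cwdm_grid_nm = [1270 + 20 * i for i in range(18)]    # 1270, 1290, ..., 1610
print(len(cwdm_grid_nm), "channels:", cwdm_grid_nm)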
3.3 DWDM
Dense WDM common spacing may be 200, 100, 50, or 25 GHz with channel count
reaching up to 128 or more channels at distances of several thousand kilometers with
amplification and regeneration along such a route.
A key advantage of DWDM is that it is protocol- and bit-rate-independent. DWDM-based networks can transmit data in SDH, IP, ATM, Ethernet, etc. Therefore, DWDM-based networks can carry different types of traffic at different speeds over an optical channel. DWDM is a core technology of the optical transport network.
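As an illustration of how the dense grid is built (a sketch anchored at the 193.1 THz reference of ITU-T G.694.1; the number of channels printed here is arbitrary):

# A few 100 GHz spaced DWDM channels and their approximate wavelengths.
C_M_PER_S = 299_792_458.0                      # speed of light

def dwdm_channels(n_below=3, n_above=3, spacing_ghz=100.0, anchor_thz=193.1):
    for k in range(-n_below, n_above + 1):
        f_thz = anchor_thz + k * spacing_ghz / 1000.0
        wavelength_nm = C_M_PER_S / (f_thz * 1e12) * 1e9
        yield f_thz, wavelength_nm

for f, lam in dwdm_channels():
    print(f"{f:7.2f} THz ~ {lam:7.2f} nm")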
The concepts of optical fiber transmission, loss control, packet switching, network
topology and synchronization play a major role in deciding the throughput of the network.
The losses caused by physical effects on the signal, due to the type of materials used to produce the fibres, limit the usable wavelengths to between 1280 nm and 1650 nm. Within this usable range, the techniques used to produce the fibres can cause particular wavelengths to have more loss, so the use of these wavelengths is avoided as well.
The main building blocks of a DWDM system include:
1. Light source (laser): is frequency specific.
2. Multiplexer / demultiplexer.
3. Amplifiers:
- A post-amplifier boosts signal pulses at the transmit side and a preamplifier does the same on the receive side.
- In-line amplifiers (ILA) are placed at intervals along the route to provide recovery of the signal before it is degraded by loss.
- This approach is less costly in the long run because increased fiber capacity is automatically available; there is no need to upgrade all the time.
4. Transponders.
5. Regenerators.
6. Optical add/drop multiplexers (OADM): add and drop only specific wavelengths from the combined optical signal; this may use complete de-multiplexing or other techniques.
7. Optical cross-connects (OXC): to cater for the huge amount of data expected in an optical network, even the cross-connects have to work on a purely optical level.
1.0 INTRODUCTION
Growing demand for high-speed Internet is the primary driver for the new access technologies that enable a true broadband experience. Today there is an increasing demand for high-bandwidth services in markets around the world. However, the traditional technologies commonly used for “broadband access,” such as Digital Subscriber Line (DSL) and cable modem, offer access speeds of the order of a megabit per second, with actual rates strongly dependent on the distance from the exchange (central office) and the quality of the copper infrastructure. They cannot fulfil today’s customer demand for bandwidth-hungry applications such as high-definition TV, high-speed Internet access, video on demand, IPTV, online gaming, distance learning, etc. Amongst the various technologies, access methods based on optical fiber have been given extra emphasis, keeping in view the long-term perspective of the country. Fiber has many advantages over other competing access technologies, of which being “future proof” and providing a “true converged network” for high-quality multi-play are the salient ones. The stable and long-term growth of broadband is, therefore, going to depend on robust growth of fiber in the last mile.
However, for providing multi-play services (voice, video, data, etc.) and other futuristic services, fiber in the local loop is a must. The subscriber market for multi-play is large and growing and includes both residences and businesses. Businesses need more bandwidth and many of the advanced services that only fiber can deliver. All view multi-play as a strong competitive service offering now and into the future and are looking at fiber as the way to deliver it. Optical fiber cables have conventionally been used for long-distance communications. However, with the growing use of the Internet by businesses and general households in recent years, coupled with demands for increased capacity, the need for optical fiber cable in the last mile has increased. A primary consideration for providers is to decide whether to deploy an active (point-to-point) or passive (point-to-multipoint) fiber network.
FTTH is now a cost-effective alternative to the traditional copper loop. “Fiber to the Home” is defined as a telecommunications architecture in which a communications path is provided over optical fiber cables extending from an Optical Line Terminal (OLT), located in the telecommunications operator’s switching centre, to an Optical Network Terminal (ONT) at each premises. Both OLTs and ONTs are active devices. This communications path is provided for the purpose of carrying telecommunications traffic to one or more subscribers and for one or more services (for example Internet access, telephony and/or video/television). FTTH consists of a single optical fiber cable from the base station to the home. The optical signal is converted to an electrical one and connected to the user’s PC via an Ethernet card. FTTH is the final configuration of access networks using optical fiber cable.
FTTB is regarded as a transitional stage to FTTH. By extending fiber cables from the fiber termination point into the home living space or business office space, FTTB can be converted to full FTTH. Such a conversion is desirable, as FTTH provides better capacity and longevity than FTTB. In FTTB, optical fiber cable is installed up to the metallic cable already installed within the building; a LAN or the existing telephone metallic cable is then used to connect to the user.
Another method installs optical fiber cable up to the curb near the user’s home. An optical communications system is then used from the installation centre to a remote unit (optical/electrical signal conversion unit) installed outside, such as near the curb or in a street cabinet. Finally, coaxial or other similar cable is used between the remote unit and the user.
FTTH provides service providers with the ability to offer “cutting edge” technology and “best-in-class” services. The choice of deployment depends on factors such as subscriber density and the return on investment (ROI). At present, different technology options are available for the FTTH architecture: the network can be installed as an active optical network or as a passive optical network (PON).
A Home Run Fiber architecture is one in which a dedicated fiber line is connected at the central office (CO) to a piece of equipment called an Optical Line Terminal (OLT). At the end-user location, the other side of the dedicated fiber connects to an Optical Network Terminal (ONT). Both OLTs and ONTs are active, or powered, devices, and each is equipped with an optical laser. The Home Run Fiber solution offers the most bandwidth for an end user and, therefore, also offers the greatest potential for growth. Over the long term Home Run Fiber is the most flexible architecture; however, it may be less attractive when the physical layer costs are considered. Because a dedicated fiber is deployed to each premises, Home Run Fiber requires the installation of much more fiber than other options, with each fiber running the entire distance between the subscriber and the CO.
[Figure: Point-to-point architecture, with a dedicated fiber from the CO to each user's premises, and point-to-multipoint switched Ethernet architecture from the CO to the users' premises.]
With the Active Star Ethernet (ASE) architecture, end users still get a dedicated fiber to their location; however, the fiber runs between their location and an Ethernet switch. As with Home Run Fiber, subscribers can be located far from the Ethernet switch, and each subscriber is provided a dedicated “pipe” with full bidirectional bandwidth. Active Star Ethernet reduces the amount of fiber deployed, lowering costs through the sharing of fiber.
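A back-of-the-envelope fibre count shows the trade-off between the architectures (all distances are assumed, purely for illustration):

# Feeder fibre needed for 32 subscribers under the three architectures.
SUBSCRIBERS = 32
CO_TO_NODE_KM = 5.0      # assumed CO -> remote node (switch or splitter) distance
NODE_TO_HOME_KM = 0.5    # assumed drop length per subscriber

home_run_km = SUBSCRIBERS * (CO_TO_NODE_KM + NODE_TO_HOME_KM)    # one fibre per home, end to end
active_star_km = CO_TO_NODE_KM + SUBSCRIBERS * NODE_TO_HOME_KM   # shared feeder to an active switch
pon_km = CO_TO_NODE_KM + SUBSCRIBERS * NODE_TO_HOME_KM           # shared feeder to a passive splitter

print(f"Home Run: {home_run_km:.0f} fibre-km, ASE: {active_star_km:.0f}, PON: {pon_km:.0f}")
# ASE and PON need similar amounts of fibre; the difference is that the ASE node
# is an active (powered) Ethernet switch while the PON node is a passive splitter.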
Two common splitter configurations are used in PON architectures, namely the centralized and the cascaded approaches.
[Figure: Centralized split configuration, with 1 x 32 splitters fed directly from the Central Office.]
A cascaded split configuration pushes splitters deeper into the network, as shown in Fig. 6. Passive Optical Networks (PONs) utilise splitter assemblies to increase the number of homes fed from a single fibre. In a cascaded PON there is more than one splitter location in the pathway from the central office to the customer. Currently, standard splitter formats are 1 x 2, 1 x 4, 1 x 8, 1 x 16 and 1 x 32, so a network might use a 1 x 4 splitter leading to 1 x 8 splitters further downstream in four separate locations. Optimally, there would eventually be 32 fibers reaching the ONTs of 32 homes.
[Fig. 6: Cascaded split configuration, with small splitters (e.g. 1 x 2, 1 x 4) near the Central Office feeding larger splitters (e.g. 1 x 8, 1 x 16) deeper in the network.]
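The split ratio also drives the optical power budget; the sketch below compares the two configurations (the excess loss per splitter, fibre attenuation and 28 dB budget are assumed, illustrative figures):

# Approximate downstream loss for centralized vs cascaded splitting.
import math

def split_loss_db(ratio, excess_db=1.0):
    return 10 * math.log10(ratio) + excess_db        # ideal split loss + excess

def link_loss_db(stages, fibre_km, fibre_db_per_km=0.35):
    return sum(split_loss_db(r) for r in stages) + fibre_km * fibre_db_per_km

BUDGET_DB = 28.0                                     # assumed link budget
for name, stages in (("centralized 1x32", [32]), ("cascaded 1x4 + 1x8", [4, 8])):
    loss = link_loss_db(stages, fibre_km=18)
    print(f"{name}: ~{loss:.1f} dB loss, margin ~{BUDGET_DB - loss:.1f} dB")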
There are several “flavors” of PON technology: APON (ATM Passive Optical Network), BPON (Broadband Passive Optical Network), EPON (Ethernet Passive Optical Network) and GPON (Gigabit Passive Optical Network), the last of which delivers gigabit-per-second bandwidths while offering low cost and reliability.
4.2.1 APON
ATM PON (APON) was standardized by the ITU in 1998 and was the first PON
standard developed. It uses ATM principles as the transport method and supports 622 Mbps
downstream services and 155 Mbps upstream service shared between 32-64 splits over a
maximum distance of 20 km.
4.2.2 BPON
Shortly after APON, Broadband PON (BPON) followed; it is very similar to APON. BPON also uses ATM, but it boasts superior features for enhanced broadband services like video. BPON has higher performance figures than APON, with a pre-splitting maximum of 1.2 Gbps downstream and 622 Mbps upstream.
4.2.3 EPON
The IEEE standardized Ethernet PON (EPON) in the middle of 2004. It uses Ethernet encapsulation to transport data over the network. EPON operates at rates of 1.25 Gbps both downstream and upstream (symmetrical) over a maximum reach of 20 km.
4.2.4 GPON
Gigabit PON (GPON) is the next generation of PONs in the line of APON and BPON. The ITU has approved the G.984.x series of standards for it. GPON will support both ATM and Ethernet for Layer 2 data encapsulation, so it is clearly an attractive proposition. It supports 2.5 Gbps downstream and 1.25 Gbps upstream.
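Since the PON capacity is shared by all ONTs on the splitter, a quick division gives the average bandwidth per subscriber (line rates are the approximate figures quoted above; the split ratios are typical assumptions, and framing overhead is ignored):

# Average downstream bandwidth per subscriber for common PON flavours.
DOWNSTREAM_MBPS = {"APON": 622, "BPON": 1200, "EPON": 1250, "GPON": 2500}

for flavour, rate in DOWNSTREAM_MBPS.items():
    for split in (32, 64):
        print(f"{flavour}: 1:{split} split -> ~{rate / split:.0f} Mbit/s per subscriber")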
Residential refers to private users in their homes. Residential users may live in
“MDU” (multi-dwelling units such as apartments/condominiums) or “SFU” (single
family dwelling units such as stand-alone houses/villas/landed property).
Business refers to large (corporate), medium, and small (Small Business, Small Office
Home Office) business users. Businesses may occupy “MTU” (multi-tenanted units such as
office blocks/towers) or “STU” (single-tenanted units such as a stand-alone office building or
warehouse).
1. OLT: The OLT resides in the Central Office (CO). The OLT system provides aggregation and switching functionality between the core network (various network interfaces) and the PON interfaces. The network interface of the OLT is typically connected to the IP network and backbone of the network operator. Multiple services are provided to the access network through this interface.
2. ONU/ONT: This provides access to the users, i.e. it is external plant / customer premises equipment providing the user interface for many customers or a single customer. The access node installed within the user's premises for network termination is termed an ONT, whereas access nodes installed at other locations, i.e. curb/cabinet/building, are known as ONUs. The ONU/ONT provides user network interfaces (UNI) towards the customers and uplink interfaces to carry local traffic towards the OLT.
Salient PON characteristics include:
- Provision to support protection, to take care of fiber cuts, card failures, etc.
- A typical distance between OLT and ONT can be greater than 15 km (with unequal splitting, up to 35 km).
- The PON and fiber infrastructure can also be used to support one-way distributive services, e.g. video, at a different wavelength.
PON is configured in full duplex mode in a single fiber point to multipoint (P2MP)
topology. Subscribers see traffic only from the head end, and not from each other. The OLT
(head end) allows only one subscriber at a time to transmit using the Time Division Multiplex
Access (TDMA) protocol. PON systems use optical splitter architecture, multiplexing signals
with different wavelengths for downstream and upstream.
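A toy grant schedule illustrates the upstream TDMA idea (a conceptual sketch, not the real dynamic bandwidth assignment protocol; the slot length and ONT names are invented):

# Round-robin upstream grants: the OLT lets one ONT transmit at a time, so
# bursts from different subscribers never collide on the shared fibre.
from itertools import cycle

def tdma_schedule(onts, n_cycles=2, slot_us=125):
    t, grants = 0, cycle(onts)
    for _ in range(n_cycles * len(onts)):
        yield t, next(grants)
        t += slot_us

for t, ont in tdma_schedule(["ONT-1", "ONT-2", "ONT-3", "ONT-4"]):
    print(f"t={t:4d} us: upstream grant to {ont}")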
Services that can be delivered over this infrastructure include:
- Broadcast video
- TDM telephony
- Video on demand
- On-line gaming
- IPTV, etc.
- Wireless services