
EtherNet Overview

If you have an existing network, there's a 90% chance it's Ethernet. If you're installing a new network, there's a 98% chance it's Ethernet: the Ethernet standard is the overwhelming favorite network standard today. Ethernet was developed by Xerox, DEC, and Intel in the mid-1970s as a 10-Mbps (megabits per second) networking protocol (very fast for its day) operating over a heavy coax cable (Standard Ethernet). Today, although many networks have migrated to Fast Ethernet (100 Mbps) or even Gigabit Ethernet (1000 Mbps), 10-Mbps Ethernet is still in widespread use and forms the basis of most networks. Ethernet is defined by international standards, specifically IEEE 802.3. It enables the connection of up to 1024 nodes over coax, twisted-pair, or fiber optic cable. Most new installations today use economical, lightweight cables such as Category 5 unshielded twisted-pair cable and fiber optic cable.

How Ethernet Works


Ethernet signals are transmitted from a station serially, one bit at a time, to every other station on the network. Ethernet uses a broadcast access method called Carrier Sense Multiple Access/Collision Detection (CSMA/CD) in which every computer on the network hears every transmission, but each computer listens only to transmissions intended for it. Each computer can send a message anytime it likes without having to wait for network permission. The signal it sends travels to every computer on the network. Every computer hears the message, but only the computer for which the message is intended recognizes it. This computer recognizes the message because the message contains its address. The message also contains the address of the sending computer so the message can be acknowledged. If two computers send messages at the same moment, a collision occurs, interfering with the signals. A computer can tell that a collision has occurred when it doesn't hear its own message within a given amount of time. When a collision occurs, each of the colliding computers waits a random amount of time before resending the message. The process of collision detection and retransmission is handled by the Ethernet adapter itself and doesn't involve the computer. The process of collision resolution takes only a fraction of a second under most circumstances. Collisions are normal and expected events on an Ethernet network. As more computers are added to the network and the traffic level increases, more collisions occur as part of normal operation. However, if the network gets too crowded, collisions increase to the point where they slow down the network considerably.
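The address-filtering behaviour described above can be sketched with a toy model (hypothetical code, not a real NIC driver): every station "hears" the frame, but only the station whose address matches the destination accepts it, and a broadcast destination is accepted by all.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def deliver(frame, stations):
    """Toy model of shared-medium delivery: every station hears the
    frame, but only the addressed station (or all stations, for a
    broadcast destination) accepts it for processing."""
    dst = frame["dst"]
    return [s for s in stations if s == dst or dst == BROADCAST]

stations = ["aa:00:04:00:01:02", "aa:00:04:00:03:04", "aa:00:04:00:05:06"]
frame = {"src": stations[0], "dst": stations[1], "data": b"hello"}
print(deliver(frame, stations))  # only the addressed station accepts
```

With `dst` set to the broadcast address, every station would accept the frame, matching the description above.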

Standard (Thick) Ethernet (10BASE5)

Uses thick coax cable with N-type connectors for a backbone and a transceiver cable with 9-pin connectors from the transceiver to the NIC. Both ends of each segment should be terminated with a 50-ohm resistor. Maximum segment length is 500 meters. Maximum total length is 2500 meters. Maximum length of transceiver cable is 50 meters. Minimum distance between transceivers is 2.5 meters. No more than 100 transceiver connections per segment are allowed.

Thin Ethernet (ThinNet) (10BASE2)

Uses thin coax cable. The maximum length of one segment is 185 meters. The maximum number of segments is five. The maximum total length of all segments is 925 meters. The minimum distance between T-connectors is 0.5 meters. No more than 30 connections per segment are allowed.

T-connectors must be plugged directly into each device.

Twisted-Pair Ethernet (10BASE-T)

Uses 22 to 26 AWG unshielded twisted-pair cable (for best results, use Category 4 or 5 unshielded twisted pair). The maximum length of one segment is 100 meters. Devices are connected to a 10BASE-T hub in a star configuration. Devices with standard AUI connectors may be attached via a 10BASE-T transceiver.

Fiber Optic Ethernet (10BASE-FL, FOIRL)

Uses 50-, 62.5-, or 100-micron duplex multimode fiber optic cable (62.5 micron is recommended). The maximum length of one 10BASE-FL (the new standard for fiber optic connections) segment is 2 kilometers. The maximum length of one FOIRL (the standard that preceded the new 10BASE-FL) segment is 1 km.

A General Overview Of Ethernet Networks


The Ethernet network is one of the most widely used Local Area Networks (LANs). Its popularity is based on the fact that Ethernet is cheaper to acquire and easier to manage than other networks. Ethernet was invented by Bob Metcalfe in 1973, while he was carrying out research at the Palo Alto Research Center; the purpose of the research was to connect a PC to a laser printer. The initial version of Ethernet was later superseded by Ethernet Version II, developed by the joint efforts of DEC, Intel and Xerox. This version of Ethernet has since been standardized by the IEEE 802 Committee and is known as IEEE 802.3. The initial problem identified with Ethernet was that it did not cope well with real-time traffic (e.g. video and audio) due to its limited bandwidth guarantee. Ethernet overcame this problem by increasing the bandwidth in line with current requirements: it has developed from 10 Mbps to 100 Mbps, and there is now 1 Gbps Ethernet. A shortfall of Ethernet is that there is no guarantee that computers or nodes contending for network access will get access within a specified time. Ethernet uses coaxial, fibre optic or twisted-pair cable, and a shared-media, bus-type network structure: all nodes on the network share a common bus and thus contend for network access, since only one node may communicate on the bus at a given time.
http://www.networkingnext.com/basicnetworking/ethernetlan.html

In order for Ethernet to allow the efficient communication of nodes, it uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD). Nodes on this type of network monitor the bus to determine whether it is busy; a node sends its signal only after it confirms that the bus is not in use. Collisions still occur occasionally on Ethernet buses. When a collision occurs, the transmitting nodes must stop the transmission and send out a jamming or busy signal, and then wait a random period before they can resend. Thus only one node can use the bus at a particular time.

The main Ethernet types are 10-Mbps Ethernet, Fast Ethernet (100 Mbps) and Gigabit Ethernet. Ethernet fits into the data link and physical layers of the OSI model.

Texts Consulted
Computer Networking - Ed Tattle
Data & Computer Communications - William Stallings
Distributed Systems & Networks - William Buchanan

Websites
http://www.convergedigest.com/tutorials/ethernet1/page1.asp
http://www.ixiacom.com/pdfs/DS-LM10GE-LANWAN.pdf
http://www.networkingnext.com/basicnetworking/ethernetlan.html
http://www.garrettcom.com/pdf/hickory_tech.pdf
http://www.rad.com/networks/1997/nettut/ethernet.html

Links
Ethernet innovations

Ethernet tutorial

From Lantronix.
The Ethernet network technology

Information on Ethernet (IEEE 802.3) local area network technology.


D-Link Systems, Inc.

Home page for D-Link Systems Inc.


Ethernet LAN analyzer product

A product description for a full duplex LAN analyzer product.


Ethernet technical reference page

Discussion of the various Ethernet network types.


Ethernet Tutorials and Resources

Ethernet information, including tutorials, FAQs, and guides.


Ethernet/Fast Ethernet Technology Center

Information on Ethernet/Fast Ethernet technology, including articles, columns


Ethernet/IEEE 802.3 LAN Overview

Cisco Systems provides links to sections of a document describing Ethernet/IEEE 802.3.


Fast Ethernet design rules

Details of procedures for designing a Fast Ethernet (100 Base-TX, 100 Base-FX) network.
Introduction to fast Ethernet

Introduction to 100Base-T: fast Ethernet technology, from the Fast Ethernet Alliance.
Kalpana Etherswitch product page

Product information on the Kalpana Etherswitch EPS-2015RS, a stackable 15-port Ethernet switch which features full duplex Ethernet communications capabilities.
Migration to Switched Ethernet LANs

Overview of Ethernet

History of Ethernet, as well as various cabling techniques.


Overview of Ethernet networks

Overview of Ethernet and information on the role of the IEEE in determining local area network standards.
Quick reference guide to Ethernet

Links to documents describing Ethernet, components, media, the Auto-Negotiation system, multisegment configuration guidelines, and information on the Ethernet Configuration Guidelines book.

Books:

Search for books about Ethernet

(amazon.com)

"Ethernet: The Definitive Guide" by Charles E. Spurgeon

Introduction: Ethernet

Introduction
Since the Local Area Network (LAN) concept was defined 30 years ago, many technologies have been developed to occupy this area of the market. Names such as Token Ring, Token Bus, DQDB, FDDI, LATM and 100VG-AnyLAN were once relatively common; however, Ethernet has outlived them all, becoming the de facto standard used in almost all LAN installations. Despite the limited performance achieved initially, a number of important reasons made Ethernet a winner, including low cost, simplicity, flexibility, and scalability. The most important factor was technological unification, because this guaranteed smooth interworking without the need for specialized gateways. In other words, the network is a means to connect computers, not the goal, and Ethernet received the necessary support from manufacturers and service providers to finally become universally accepted.

Brief History
Medium Access Control
Physical Media
Ethernet Frames
Ethernet Evolution
Use of Full Duplex
Topologies
Logical Link Control Layer
Gigabit Ethernet and 10 Gigabit Ethernet

A Brief History of Ethernet


The term Ethernet does not specify a unique technology but a family of technologies for local, metropolitan and access networks covered by IEEE 802.3 standards (see Figure 1.1). In Jan. 2005 there were four data rates defined for operation over different media including coaxial, twisted pair, fibre optics and wireless (see Table 1.1). Different versions are known as: Ethernet at 10 Mbit/s, Fast Ethernet at 100 Mbit/s, Gigabit Ethernet at 1000 Mbit/s, and 10 Gigabit Ethernet at 10 Gbit/s.

Figure 1.1 Ethernet layers vs. OSI model. Some layers are optional, depending on the version.

Ethernet was originally defined as a shared media technology where all the stations had to compete to get access to the common transmission medium. However, continuous evolution soon removed this limitation and new versions were developed where stations do not have to compete for transmission resources.

Figure 1.2 ALOHA, a pre-Ethernet network, was developed to communicate between Hawaiian islands.

ALOHAnet - the predecessor of Ethernet


One network has always been considered the precursor to Ethernet. It was known as ALOHAnet and was developed in the late 1960s by Norm Abramson at the University of Hawaii (see Figure 1.2).

ALOHA was a digital radio network designed to transmit independent packets of information between the islands. Stations willing to communicate had to follow a simple protocol:

1. Any station can transmit a packet at any time, indicating the destination address.

2. Once the packet has been sent, the transmitter keeps waiting for the acknowledgment (ACK) from the receiver.

3. Stations are always listening and reading the destination address of all packets. If a received packet matches the station's address, the station verifies that the CRC of the packet is correct before answering the transmitter with a short ACK packet. If the ACK is not received by the transmitter within a certain time, because of a bad CRC or any other reason, the packet is resent.

4. The time the transmitter waits for the ACK must be at least twice the latency of the network. That is to allow time for the packet to reach the most distant destination, and then for the ACK to reach the transmitter.

One of the common causes of CRC errors was two or more stations trying to transmit at the same time. The resulting interference made it impossible for any packet to be received; this situation was known as a collision. Collisions mean that the maximum theoretical efficiency of ALOHA-like systems is about 18%. An improved version, known as Slotted ALOHA, synchronised stations by dividing transmit time into windows: stations were able to start a transmission only at specific times, reducing the probability of collisions. This increased the maximum efficiency to 36%.

1.2.2 Ethernet in Palo Alto


The first Ethernet was designed in 1973 by Bob Metcalfe in Xerox Corporation's Palo Alto laboratory (see Figure 1.3). It was able to operate at 3 Mbps over a shared coaxial cable, using a remarkable access protocol known as Carrier Sense Multiple Access with Collision Detection (CSMA/CD). This was a simple algorithm (see Medium Access Control) that improved efficiency to up to 80%, depending on the network configuration and traffic load [3].

Figure 1.3 A drawing of the first Ethernet system by Bob Metcalfe. In 1980 a consortium formed by Digital, Intel and Xerox (known as the DIX cartel) developed the 10 Mbps Ethernet. Finally, in 1983, the IEEE standards board approved the first IEEE 802.3 standard, which was based on the DIX Ethernet.


Medium Access Control


The poor performance shown by ALOHA systems drove the development of CSMA/CD to provide a more efficient Medium Access Control (MAC) protocol that would minimize the impact of collisions on efficiency.


Figure 1.4 CSMA/CD flow chart operation in half duplex.

Carrier Sense Multiple Access with Collision Detection


The first part of this protocol, the CSMA, forces any station wanting to transmit to follow these steps:

1. Listen to the channel to check whether another transmission is in progress.
2. If the channel is idle, transmit immediately and then wait for the ACK.
3. If the channel is busy, go to step 1.
4. The receiving station checks the CRC and, if it is correct, sends the ACK.
5. If a time-out occurs because the transmitter does not receive the ACK, go to step 1.
6. If the transmitter receives the ACK, the operation has finished successfully.

Despite the precautions of the CSMA protocol, two or more stations may still attempt to transmit at about the same time, and then a collision will occur. Collisions cannot be avoided completely, but their effect can be minimized by reducing the duration of the collision. An important improvement can be made if the station continues listening to the channel while transmitting: it will then be able to stop the transmission immediately after a collision is detected (Collision Detection, or CD). A jamming signal is sent instead, to tell all the stations that a collision has happened (see Figure 1.4). This addition completes the CSMA/CD protocol.
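The collision-handling part of the transmit loop can be sketched as a toy model. Here `FakeMedium` is a hypothetical stand-in for the PHY (its method names are invented for this sketch), and the backoff delay is only computed, not actually slept:

```python
import random

SLOT_TIME = 512   # bit times: Ethernet's backoff unit
MAX_ATTEMPTS = 16

class FakeMedium:
    """Stand-in for the shared medium: collides on the first `collisions` attempts."""
    def __init__(self, collisions):
        self.collisions_left = collisions
    def busy(self):
        return False
    def start_tx(self, frame):
        pass
    def collision_seen(self):
        if self.collisions_left > 0:
            self.collisions_left -= 1
            return True
        return False
    def jam(self):
        pass  # a real NIC sends a jamming sequence here

def csma_cd_send(medium, frame, rng=None):
    """Sketch of the half-duplex CSMA/CD transmit loop described above."""
    rng = rng or random.Random(0)
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.busy():              # carrier sense: defer while busy
            pass
        medium.start_tx(frame)            # channel idle: transmit
        if not medium.collision_seen():   # keep listening while sending (CD)
            return attempt
        medium.jam()                      # abort and jam on collision
        k = min(attempt, 10)              # truncated binary exponential backoff
        _backoff = rng.randrange(2 ** k) * SLOT_TIME  # bit times to wait
    raise RuntimeError("excessive collision error: frame dropped")

print(csma_cd_send(FakeMedium(collisions=2), b"frame"))  # succeeds on attempt 3
```

The backoff range doubling after each collision is what spreads contending stations out in time.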

Collisions
Collisions are a normal part of Ethernet half duplex operation; however, if there is a high number of collisions, network efficiency is severely affected. We can also see that if the transmitter detects the collision before sending the last byte, this reduces the effects (see Carrier Sense Multiple Access with Collision Detection). To make this possible, frames have to be long enough to completely fill the medium; then, if a collision happens, the transmitter will detect it and restart the process rather than waiting for an ACK that never arrives. In order to completely fill the medium, frames must have a Minimum Frame Size (MFS) to compensate for propagation delays and the other delays suffered before they reach the edge of the network. Consequently, small frames must be padded out to reach the MFS (see Table 1.1). The MFS (bits) is directly related to the length of the LAN (metres) and the transmission rate (bits/sec). For Ethernet and Fast Ethernet the MFS is 64 bytes, and for Gigabit Ethernet it is 416 or 520 bytes. When full duplex versions of Ethernet are used, collisions are avoided, and so the MFS does not apply. Exceptionally, a late collision may occur after the last bit of the 64 bytes has been transmitted. In this case the CSMA/CD layer is unaware that a collision has occurred, and hence will not try to resend the packet; resending it becomes the responsibility of the higher layer protocols.
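The relationship between MFS, LAN length and transmission rate can be illustrated with a back-of-the-envelope calculation. The signal speed of roughly 2x10^8 m/s and the 2500 m collision domain are assumptions of this sketch, not figures from the text:

```python
def min_frame_bits(network_length_m, bit_rate_bps, signal_speed=2e8):
    """Illustrative lower bound: a frame must outlast the round trip so
    the sender is still transmitting when a collision echo returns.
    signal_speed of ~2e8 m/s (about 2/3 c in copper) is an assumption."""
    round_trip_s = 2 * network_length_m / signal_speed
    return round_trip_s * bit_rate_bps

# 10 Mbit/s Ethernet over a ~2500 m collision domain:
bits = min_frame_bits(2500, 10e6)
print(f"{bits:.0f} bits = {bits / 8:.0f} bytes")  # 250 bits, about 31 bytes
```

The real 512-bit (64-byte) minimum is larger than this raw propagation bound because it also budgets for repeater, encoding and interframe delays; the point of the sketch is only that MFS grows linearly with both network length and bit rate, which is why Gigabit Ethernet needed a larger slot.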


The Physical Media

Multiple Physical Media


Ethernet has adopted many of the transmission media available, including coaxial, UTP, STP, and multimode and monomode fibres, in order to support the various market demands (see Table 1.1). Generally, new Ethernet versions have adopted legacy physical media to allow smooth migration. In some cases Ethernet has also adopted reliable physical layers that were designed for other technologies (for example, the Fibre Channel physical layer reused by Gigabit Ethernet) to speed up development and time to market.

Media Independent Interfacing


One of the aims of Ethernet has been to provide media independence by separating Controllers and Transceivers functionally and physically:

1. Controllers hold the common functionalities, such as the MAC protocol and the interfaces with higher layers.

2. Transceivers are specific to each type of media and include media-dependent functions such as line encoding.


Figure 1.5 Line encoding technologies. Depending on the media, a '+' signal corresponds to high voltage on copper or high intensity on optical fibre, and a '-' signal to low voltage or low intensity. PAM5 uses 5 levels (-2, -1, 0, 1, 2), several pairs (two in 100BASE-T and four in 1000BASE-T), and a complex encoding rule to generate the symbols transmitted in parallel over each of the pairs.

1.4.2.1 Attachment Unit Interface


When 10BASE5, the first commercial Ethernet solution, was manufactured it was only able to operate over thick coaxial cable (see Table 1.1). The first move away from a single physical medium was made by the AUI (Attachment Unit Interface), which was developed for 10 Mbit/s Ethernet. The intention was to avoid the difficulty of routing thick and inflexible coaxial cable to each station (see Figure 1.6).

Figure 1.6 The AUI is little more than a connection cable between the Ethernet card and the transceiver. The AUI includes four types of signals: Transmit Data, Receive Data, Collision Presence, and Power. AUI cable can be used over distances up to 50 m.

Medium Independent Interface


The MII (Medium Independent Interface) was designed to guarantee the use of Fast Ethernet (100 Mbit/s) by different applications, for example desktops using UTP and backbones using fibre. This aim forced the transfer of some functions, such as line encoding, to the transceiver, because each medium uses a different scheme.


Figure 1.7 MII used for Fast Ethernet at 100 Mbit/s. The MII includes four groups of signals: transmit/receive data paths (clocked at one fourth of the data rate), control signals such as carrier sense (CS) and collision detect (CD), management signals such as the clock, and power.

Gigabit Medium Independent Interface


The Gigabit Medium Independent Interface (GMII) was defined as a logical interface which can only exist inside a chip; therefore it does not define any type of cable or connector. GMII functionality is very similar to MII; only minor differences apply. For example, the clock is provided by the controller, and data transfer is at byte size (8 circuits) rather than nibble size (4 circuits).


Figure 1.8 The basic 802.3 MAC Frame format.


The Ethernet Frames


There have been three basic formats of the Ethernet frame: DIX, IEEE 802.3 and IEEE 802.3x (see Figure 1.8).

Formats
The DIX frame was the first format, adopted by the Digital, Intel and Xerox cartel. In 1983, when the IEEE released the first 802.3 standard, the Start Frame Delimiter (SFD) field was defined, which was little more than a name change. More important was the Length field, since this allows management of the padding operation at the MAC layer, rather than passing this function to higher protocol layers. In 1997 the IEEE accepted both Type and Length interpretations of the field that had previously been Type in DIX frames and Length in IEEE 802.3 (1983) frames.

Frame Fields
The structure of an IEEE 802.3 `ethernet' frame is shown below:

Preamble - a sequence of 7 bytes, each set to `10101010'. Used to synchronize the receiver before actual data is sent.

SFD - Start Frame Delimiter, one byte of alternating 1s and 0s, the same as the preamble except that the last two bits are 1. This is an indication to the receiver that anything following the last two 1s is useful, and must be read into the network adapter's memory buffer for processing.

DA, SA - Destination and Source MAC addresses. There are three types of address: a) unique, a 48-bit address assigned to each adapter, with each manufacturer getting its own range; b) broadcast, all 1s, which means that all receivers must process the frame; c) multicast, where the first bit is 1 to refer to a group of stations (see Figure 1.9).

Figure 1.9 The 24-bit block administered by the IEEE is known as the OUI (Organizationally Unique Identifier). A vendor obtains an OUI and then has another 24-bit block to build up to 2^24 Ethernet devices.

Type - a descriptor of the client protocol being transported (IP, IPX, AppleTalk, etc.).

Length - the size of the `data field', not including any `pad field' added in order to reach the minimum frame size. The maximum frame size is 1518 bytes (preamble and SFD not included).

LLC (Logical Link Control) - the payload; can contain from 46 up to 1500 bytes of data.


PAD - all frames must be at least 64 bytes long (see Medium Access Control); if the frame is smaller, it contains a `pad field' to make it up to 64 bytes.

CRC (Cyclic Redundancy Check) - the value of this field is used to check whether the frame has been received successfully or the contents have been corrupted.
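The layout of the fields above (minus the preamble and SFD, which the hardware adds) can be illustrated by building a frame in software. This is an illustrative sketch rather than a wire-exact implementation, though `zlib.crc32` does use the same CRC-32 polynomial as the Ethernet FCS:

```python
import struct
import zlib

def build_frame(dst, src, ethertype, payload):
    """Assemble a DIX-style frame: DA + SA + Type + padded payload + FCS.
    Preamble and SFD are omitted, as the hardware prepends them."""
    if len(payload) < 46:                     # pad so the whole frame
        payload = payload.ljust(46, b"\x00")  # reaches the 64-byte minimum
    header = dst + src + struct.pack("!H", ethertype)
    body = header + payload
    fcs = struct.pack("<I", zlib.crc32(body))  # CRC-32 over header + payload
    return body + fcs

dst = bytes.fromhex("ffffffffffff")              # broadcast address (all 1s)
src = bytes.fromhex("aa0004000102")
frame = build_frame(dst, src, 0x0800, b"hello")  # 0x0800 = IPv4 EtherType
print(len(frame))  # 64 bytes: 14 header + 46 padded payload + 4 FCS
```

Note how a 5-byte payload is padded to 46 bytes, giving exactly the 64-byte minimum frame discussed under Medium Access Control.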

Ethernet Evolution
Since the first Ethernet card was manufactured, the technology has evolved continuously, showing a great ability to adapt to new technologies and increasing business requirements [2].

Figure 1.10 From shared media to dedicated media

The evolution from shared to dedicated media


Sharing media means not only sharing bandwidth but also sharing the problems: a simple discontinuity in a 10BASE2 or 10BASE5 cable could mean that all the attached devices become unavailable. 10BASE-T addressed this problem by dedicating a cable to each station and connecting all of them to a hub (see Figure 1.10). The first hubs were only central points of the cabling system, but soon enough intelligence was added to detect anomalies and to disconnect stations that were causing problems. New generations of hub were also able to support several bit rates and cable types.

The Evolution from Shared to Dedicated LAN Bandwidth

Figure 1.11 Segmentation and switching.

Stations that are part of a shared-bandwidth network have to compete for resources; this inevitably produces collisions that reduce performance. To cut down on collisions, the first strategy was segmentation using bridges, which subdivided the network into multiple collision domains and so reduced the number of stations competing for the same resource. The second step was to assign the whole of one segment to those stations with high bandwidth requirements. The final step was to configure a network that is totally switched (see Figure 1.11).


Figure 1.12 Frame bursting. In Gigabit Ethernet a station is allowed to send multiple frames to make the HDX mode more efficient. Only the first frame requests the extension

Frame Bursting in Half Duplex


Half duplex transmission can be very inefficient, especially when sending small frames with padding. To address this, in Gigabit Ethernet the frame bursting feature enables a device to send over 8000 bytes in a burst (see Figure 1.12). The first frame is sent in the normal manner; then, without dropping the carrier, the second is sent, and so on up to the limit allowed. Each frame is separated by a gap without data, but the carrier remains on to stop other stations starting a transmission.

Figure 1.13 Segmentation reduces the probability of collision; full duplex and switching removes it completely.

The Use of Full Duplex


One of the fundamental problems of CSMA/CD is that the more traffic there is on the network, the more collisions there will be. That is, as utilisation increases, the number of collisions also increases, and the network could become unmanageable and collapse. The use of Full Duplex (FDX) eliminates the possibility of collisions, making CSMA/CD (see Medium Access Control) unnecessary because: the Carrier Sense protocol is not needed, as the media is never busy thanks to a dedicated link for each transmitter/receiver pair; and the Collision Detection protocol is not needed either, because collisions never happen and no jamming signals are necessary.

In fact it is not necessary to have a MAC layer, because access to the media is always guaranteed; simultaneous transmission and reception by the same station occurs without any interference. The second key consequence of adopting FDX is that distance limitations are removed. Note that LAN distance and frame size were restricted to allow stations to detect collisions while transmitting (see Figure 1.4). In FDX systems the distance between stations depends on the characteristics of the media and the quality of the transmitters, but predefined limits do not apply (see Table 1.1).

Figure 1.14 Full duplex (FDX) operation allows two-way transmission simultaneously without contention, collisions, extension bits or retransmissions. The only restriction is that a gap must be allowed between two consecutive frames. FDX also requires flow control: a flow control frame is transmitted by the receiver to request that the transmitter temporarily stop transmitting.

Full Duplex and the Flow Control


One side-effect of FDX happens when a transmitter that is constantly sending packets causes the receiver buffer to overflow. To avoid this, a PAUSE protocol was defined. This provides a mechanism whereby a congested receiver can ask the transmitter to stop its transmissions. The protocol is based on a short packet known as a PAUSE frame (see Figure 1.15). It contains a timer value, expressed as a multiple of 512 bit-times, which specifies how long the transmitter should remain silent. If the receiver is no longer congested before this time has passed, it may send a second PAUSE frame with a value of zero to resume the transmission. The PAUSE protocol operates only on point-to-point links and cannot be forwarded through bridges, switches, or routers.

Figure 1.15 PAUSE frame used by the flow control protocol. The unit of PAUSE time is equivalent to 512 bit times; a PAUSE time of 0 requests that transmission resume.

Gigabit Ethernet introduces the asymmetric flow control concept, which lets a device indicate that it may send PAUSE frames but declines to respond to them. If the link partner is willing to cooperate, PAUSE frames will flow in only one direction on the link.
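Since the PAUSE timer is expressed in 512-bit-time quanta, the actual silence requested depends on the link speed. A small calculation makes this concrete:

```python
def pause_duration_s(pause_quanta, bit_rate_bps):
    """One pause quantum = 512 bit times, so the requested silence in
    seconds shrinks as the link gets faster."""
    return pause_quanta * 512 / bit_rate_bps

# The same maximum timer value (0xFFFF) requests very different real-time
# pauses on different link speeds:
for rate, name in [(10e6, "10 Mbps"), (100e6, "100 Mbps"), (1e9, "1 Gbps")]:
    print(f"{name}: {pause_duration_s(0xFFFF, rate) * 1000:.2f} ms")
```

For example, on a 1 Gbps link the maximum PAUSE timer corresponds to roughly 34 ms of silence, while on 10 Mbps the same value requests over 3 seconds.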

Virtual LAN
A VLAN is a network that is logically segmented on an organisational basis, by functions, project teams, or applications rather than on a physical or a geographical basis. The network can be reconfigured through software rather than by physically unplugging and moving devices or wires. Stations are connected by switches and routers to form broadcast domains (see Figure 1.16).


VLANs are created to provide segmentation of services regardless of the physical configuration of the network: two distant stations separated by thousands of kilometres could be part of the same virtual segment. VLANs address scalability, security, and network management. Routers in VLAN topologies are very important because they provide broadcast filtering, addressing, and traffic flow management.

Figure 1.16 Virtual LAN vs. Segmented LAN

Topologies
The first Ethernet networks were implemented with a coaxial bus structure, with up to 100 stations per segment. Individual segments could be interconnected with repeaters, as long as multiple paths did not exist between any two stations. During the 1980s, bridges and routers reduced the number of stations per segment to split traffic in a more logical way, according to user requirements. Separating traffic by departments, users, servers or any other criteria reduces collisions while increasing aggregated network performance. Since the early 1990s the network configuration of choice has been the star-connected topology. The centre of the star is a hub, a switch or a router, and all connections are point-to-point links from the centre to the station. This topology has proved to be the most flexible and easiest to manage in LAN networks, and is independent of the technology and the physical medium being used. New high speed versions have gained increasing acceptance since 2000, competing for the Campus and Metropolitan markets where point-to-point, ring, and even meshed topologies are common. The adoption of fibre optics has been key to increasing distance and bit rate. A new standard for the local loop has been approved (IEEE P802.3ah, June 2004) so that Ethernet can compete for broadband access where twisted copper pair is the common physical layer.

Figure 1.18 Topologies

The Logical Link Control Layer


The mission of the Logical Link Control (LLC) layer is to make Ethernet appear to be a point-to-point network regardless of whether the MAC layer is using a shared or dedicated transmission medium. LLC can provide three types of services:

1. Unacknowledged connectionless - a simple datagram service just for sending/receiving frames. Flow and error control is provided by higher layers.

2. Acknowledged connectionless - received frames are verified and an ACK is sent, even though no connection has been set up.

3. Connection oriented - this service establishes a virtual circuit between two stations.

Figure 1.19 LLC format. DSAP (Destination Service Access Point) and SSAP (Source Service Access Point) are each a one-byte field, assigned by the IEEE, used to identify the location of the memory buffer on the source and destination devices where the data from the frame should be stored. The `Control' field is either 1 or 2 bytes long, depending on which service is specified in the DSAP and SSAP fields. For example, a value of 3 indicates an `un-numbered format' frame, signifying that the LLC is using the unacknowledged connectionless service.
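Reading the LLC header fields described above can be sketched as follows. This is a simplified reader that assumes the 1-byte control field of un-numbered frames (a real parser would also handle the 2-byte control variants):

```python
def parse_llc(header):
    """Read DSAP, SSAP and a 1-byte Control field from an LLC header.
    Control value 3 marks an un-numbered frame, i.e. the unacknowledged
    connectionless service (assumption: 2-byte control formats ignored)."""
    dsap, ssap, control = header[0], header[1], header[2]
    return {
        "dsap": dsap,
        "ssap": ssap,
        "control": control,
        "unacknowledged_connectionless": control == 0x03,
    }

# 0xAA/0xAA/0x03 is the well-known header for SNAP encapsulation:
print(parse_llc(bytes([0xAA, 0xAA, 0x03])))
```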


Gigabit and 10 Gigabit Ethernet


Compared with previous Ethernet standards, Gigabit Ethernet has removed many limitations, such as restrictions on network size, that were associated with shared media, shared bandwidth and half duplex operation. The aim is to provide reliable, high speed services. The new Gigabit topologies rely on switches that connect stations using dedicated and, generally, full duplex optical links. The exception is 1000BASE-T, which was designed to provide a migration path for existing UTP Ethernet installations; the rest specifically address the metropolitan (MAN) and wide area network (WAN) markets.

1 Gbps Ethernet
Gigabit Ethernet (1000 Mbps) was standardized in 1998 as IEEE 802.3z/ab, which describes two architectures: 1000BASE-X (defining three versions: CX, LX and SX) and 1000BASE-T, which runs over UTP Cat 5 cable or better (see Figure 1.20).

Figure 1.20 1 Gigabit Ethernet defines several transmission media: 802.3z (1000BASE-X), based on the existing Fibre Channel technology and covering three different types of media, and 802.3ab (1000BASE-T), which uses the popular UTP.

10 Gbps Ethernet and Beyond


There are two main differences between 10 Gigabit Ethernet and previous Ethernet versions. First is the inclusion of a long-haul (40+ km) optical transceiver for single mode fibre that can be used to build MANs. The second is the WAN option, which allows 10 Gigabit Ethernet to be transparently transported across existing SDH/SONET infrastructures.

Future of Ethernet
Manufacturers have recently been working on a higher-speed version of SONET (Synchronous Optical Network), boosting its capacity from 10 Gbps to 40 Gbps, which may have an impact on Ethernet's future. One group of manufacturers wants to piggyback on this work and develop a 40 Gbps version of Ethernet based largely on the STM-256/OC-768 specification.

Figure 1.21 Ethernet layers. MII and Auto-negotiation are optional.

However, other groups want to maintain the multiple-of-10 strategy, which would see 100 Gbps Ethernet as the next logical step. There is also a suggestion that vendors are more interested in putting 10 Gigabit Ethernet into local telephone exchanges, in order to obtain better returns, than in investing in higher-speed Ethernet. Faster Ethernet definitely has a future, but its placement and timescale are very uncertain at the moment.

