
Barramento PCI

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Mon, 29 Apr 2013 11:40:35 UTC
Contents

Articles
Conventional PCI
PCI-X
PCI Express

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Conventional PCI
PCI Local Bus

[Image: Three 5-volt 32-bit PCI expansion slots on a motherboard (PC bracket on left side)]

Year created: July 1993
Created by: Intel
Supersedes: ISA, EISA, MCA, VLB
Superseded by: PCI Express (2004)
Width in bits: 32 or 64
Capacity: 133 MB/s (32-bit at 33 MHz, the standard configuration); 266 MB/s (32-bit at 66 MHz or 64-bit at 33 MHz); 533 MB/s (64-bit at 66 MHz)
Style: Parallel
Hotplugging interface: Optional

Conventional PCI (PCI is an initialism formed from Peripheral Component Interconnect,[1] part of the PCI
Local Bus standard and often shortened to just PCI) is a local computer bus for attaching hardware devices in a
computer. The PCI bus supports the functions found on a processor bus, but in a standardized format that is
independent of any particular processor. Devices connected to the bus appear to the processor to be connected
directly to the processor bus, and are assigned addresses in the processor's address space.[2]
Attached devices can take either the form of an integrated circuit fitted onto the motherboard itself, called a planar
device in the PCI specification, or an expansion card that fits into a slot. The PCI Local Bus was first implemented in
IBM PC compatibles, where it displaced the combination of ISA plus one VESA Local Bus as the bus configuration.
It has subsequently been adopted for other computer types. PCI and PCI-X are being replaced by PCI Express,[citation needed] but as of 2011,[3] most motherboards are still made with one or more PCI slots, which are sufficient for many uses.
The PCI specification covers the physical size of the bus (including the size and spacing of the circuit board edge
electrical contacts), electrical characteristics, bus timing, and protocols. The specification can be purchased from the
PCI Special Interest Group (PCI-SIG).
Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as USB or serial, TV tuner cards and disk controllers. PCI video cards replaced ISA and VESA cards until growing bandwidth requirements exceeded the capabilities of PCI; the preferred interface for video cards became AGP, and then PCI Express. PCI video cards remain available for use with old PCs without AGP or PCI Express slots.[4]
Many devices previously provided on PCI expansion cards are now commonly integrated onto motherboards or
available in universal serial bus and PCI Express versions.

History
Work on PCI began at Intel's Architecture Development Lab circa
1990.
A team of Intel engineers (composed primarily of ADL engineers)
defined the architecture and developed a proof of concept chipset
and platform (Saturn) partnering with teams in the company's
desktop PC systems and core logic product organizations. The
original PCI architecture team included, among others, Dave
Carson, Norm Rasmussen, Brad Hosler, Ed Solari, Bruce Young,
Gary Solomon, Ali Oztaskin, Tom Sakoda, Rich Haslam, Jeff
Rabe, and Steve Fischer.
[Image: A typical 32-bit, 5 V-only PCI card, in this case a SCSI adapter from Adaptec]

PCI (Peripheral Component Interconnect) was immediately put to use in servers, replacing MCA and EISA as the server expansion bus of choice. In mainstream PCs, PCI was slower to replace VESA Local Bus (VLB), and did not gain significant market penetration until late 1994 in second-generation Pentium PCs. By 1996, VLB was all but extinct, and manufacturers had adopted PCI even for 486 computers.[5] EISA continued to be used alongside PCI through 2000. Apple Computer adopted PCI for professional Power Macintosh computers (replacing NuBus) in mid-1995, and for the consumer Performa product line (replacing LC PDS) in mid-1996.

[Image: A motherboard with two 32-bit PCI slots and two sizes of PCI Express slots]

Later revisions of PCI added new features and performance improvements, including a 66 MHz 3.3 V standard and 133 MHz PCI-X, and the adaptation of PCI signaling to other form factors.
Both PCI-X 1.0b and PCI-X 2.0 are backward compatible with some PCI standards.
The PCI-SIG introduced the serial PCI Express in 2004. At the same time, they renamed PCI as Conventional PCI.
Since then, motherboard manufacturers have included progressively fewer Conventional PCI slots in favor of the
new standard.

PCI History[6]

Spec     Year[7]  Change summary
PCI 1.0  1992     Original issue
PCI 2.0  1993     Incorporated connector and add-in card specification
PCI 2.1  1995     Incorporated clarifications and added 66 MHz chapter
PCI 2.2  1998     Incorporated ECNs and improved readability
PCI 2.3  2002     Incorporated ECNs and errata; deleted 5-volt-only keyed add-in cards
PCI 3.0  2002     Removed support for the 5.0-volt keyed system board connector

Auto configuration
PCI provides separate memory and I/O port address spaces for the x86 processor family, 64 and 32 bits, respectively.
Addresses in these address spaces are assigned by software. A third address space, called the PCI Configuration
Space, which uses a fixed addressing scheme, allows software to determine the amount of memory and I/O address
space needed by each device. Each device can request up to six areas of memory space or I/O port space via its
configuration space registers.
In a typical system, the firmware (or operating system) queries all PCI buses at startup time (via PCI Configuration
Space) to find out what devices are present and what system resources (memory space, I/O space, interrupt lines,
etc.) each needs. It then allocates the resources and tells each device what its allocation is.
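As a minimal sketch of what such a query can look like on x86 PCs, the following C fragment uses the legacy configuration access mechanism (I/O ports 0xCF8 and 0xCFC) to read the vendor and device ID of each device on bus 0. It assumes an x86 Linux environment with port I/O privileges; the helper names are illustrative, not part of any standard API.

#include <stdint.h>
#include <stdio.h>
#include <sys/io.h>  /* outl/inl/iopl; x86 Linux, must run as root */

/* CONFIG_ADDRESS layout: enable bit 31, bus 23:16, device 15:11,
   function 10:8, register offset 7:2 (dword-aligned). */
static uint32_t pci_cfg_addr(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    return 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)(dev & 0x1F) << 11)
         | ((uint32_t)(fn & 0x07) << 8) | (off & 0xFC);
}

static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    outl(pci_cfg_addr(bus, dev, fn, off), 0xCF8); /* CONFIG_ADDRESS */
    return inl(0xCFC);                            /* CONFIG_DATA */
}

int main(void)
{
    if (iopl(3) < 0) { perror("iopl"); return 1; }
    for (int dev = 0; dev < 32; dev++) {
        uint32_t id = pci_cfg_read32(0, dev, 0, 0x00);
        if ((id & 0xFFFF) == 0xFFFF)
            continue; /* all-ones read: no device responded (master abort) */
        printf("bus 0 dev %2d: vendor %04x device %04x\n",
               dev, (unsigned)(id & 0xFFFF), (unsigned)(id >> 16));
    }
    return 0;
}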
The PCI configuration space also contains a small amount of device type information, which helps an operating
system choose device drivers for it, or at least to have a dialogue with a user about the system configuration.
Devices may have an on-board ROM containing executable code for x86 or PA-RISC processors, an Open Firmware
driver, or an EFI driver. These are typically necessary for devices used during system startup, before device drivers
are loaded by the operating system.
In addition, PCI Latency Timers provide a mechanism for PCI bus-mastering devices to share the PCI bus fairly. "Fair" in this case means that devices will not use such a large portion of the available PCI bus bandwidth that other devices are not able to get needed work done. Note that this does not apply to PCI Express.
Each PCI device that can operate in bus-master mode is required to implement a timer, called the Latency Timer, that limits the time that device can hold the PCI bus. The timer starts when the device gains bus ownership, and counts down at the rate of the PCI clock. When the counter reaches zero, the device is required to release the bus. If no other devices are waiting for bus ownership, it may simply grab the bus again and transfer more data.[8]
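The Latency Timer is the byte at offset 0x0D of the standard configuration header, so firmware or an operating system can program it with a read-modify-write of the containing dword. This sketch reuses the illustrative helpers from the earlier fragment:

/* Offset 0x0C holds, byte by byte: cache line size, latency timer,
   header type, BIST. Modify only the latency timer (byte 1). */
static void pci_set_latency_timer(uint8_t bus, uint8_t dev, uint8_t fn,
                                  uint8_t clocks)
{
    uint32_t v = pci_cfg_read32(bus, dev, fn, 0x0C);
    v = (v & ~0x0000FF00u) | ((uint32_t)clocks << 8);
    outl(pci_cfg_addr(bus, dev, fn, 0x0C), 0xCF8);
    outl(v, 0xCFC);
}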

Interrupts
Devices are required to follow a protocol so that the interrupt lines can be shared. The PCI bus includes four
interrupt lines, all of which are available to each device. However, they are not wired in parallel as are the other PCI
bus lines. The positions of the interrupt lines rotate between slots, so what appears to one device as the INTA# line is
INTB# to the next and INTC# to the one after that. Single-function devices use their INTA# for interrupt signaling,
so the device load is spread fairly evenly across the four available interrupt lines. This alleviates a common problem
with sharing interrupts.
PCI bridges (between two PCI buses) map the four interrupt traces on each of their sides in varying ways. Some
bridges use a fixed mapping, and in others it is configurable. In the general case, software cannot determine which
interrupt line a device's INTA# pin is connected to across a bridge. The mapping of PCI interrupt lines onto system
interrupt lines, through the PCI host bridge, is similarly implementation-dependent. The result is that it can be
impossible to determine how a PCI device's interrupts will appear to software. Platform-specific BIOS code is meant
to know this, and set a field in each device's configuration space indicating which IRQ it is connected to, but this
process is unreliable.
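The rotation between slots is deterministic, and the conventional "swizzle" used when tracing an interrupt pin across a PCI-to-PCI bridge can be written as a one-line function. This is a sketch of the usual convention (pins numbered 1 = INTA# through 4 = INTD#, as in the configuration-space Interrupt Pin register), not of any particular platform's wiring:

/* Interrupt pin as seen on the parent bus, given the device number on
   the child bus and the pin the device asserts (1=INTA# ... 4=INTD#). */
static int pci_swizzle(int device, int pin)
{
    return ((pin - 1 + device) % 4) + 1;
}

For example, device 0 asserting INTA# appears as INTA# upstream, while device 1 asserting INTA# appears as INTB#, matching the rotation between slots described above.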
PCI interrupt lines are level-triggered. This was chosen over edge-triggering in order to gain an advantage when
servicing a shared interrupt line, and for robustness: edge triggered interrupts are easy to miss.
Later revisions of the PCI specification add support for message-signaled interrupts. In this system, a device signals
its need for service by performing a memory write, rather than by asserting a dedicated line. This alleviates the
problem of scarcity of interrupt lines. Even if interrupt vectors are still shared, it does not suffer the sharing problems
of level-triggered interrupts. It also resolves the routing problem, because the memory write is not unpredictably
modified between device and host. Finally, because the message signaling is in-band, it resolves some
synchronization problems that can occur with posted writes and out-of-band interrupt lines.

PCI Express does not have physical interrupt lines at all. It uses message-signaled interrupts exclusively.

Conventional hardware specifications


These specifications represent the most common version of PCI used in normal PCs:
• 33.33 MHz clock with synchronous transfers
• Peak transfer rate of 133 MB/s (133 megabytes per second) for 32-bit bus width (33.33 MHz × 32 bits ÷ 8 bits/byte = 133 MB/s)
• 32-bit bus width
• 32- or 64-bit memory address space (4 gigabytes or 16 exabytes)
• 32-bit I/O port space
• 256-byte (per device) configuration space
• 5-volt signaling
• Reflected-wave switching

[Diagram showing the different key positions for 32-bit and 64-bit PCI cards]
The PCI specification also provides options for 3.3 V signaling, 64-bit bus width, and 66 MHz clocking, but these
are not commonly encountered outside of PCI-X support on server motherboards.
The PCI bus arbiter performs bus arbitration among multiple masters on the PCI bus. Any number of bus masters can reside on the PCI bus and request the bus; one pair of request and grant signals is dedicated to each bus master.

Card voltage and keying


Typical PCI cards have either one or two key notches, depending
on their signaling voltage. Cards requiring 3.3 volts have a notch
56.21 mm from the card backplate; those requiring 5 volts have a
notch 104.47 mm from the backplate. "Universal cards" accepting
either voltage have both key notches. This allows cards to be fitted
only into slots with a voltage they support.

Connector pinout
[Image: A PCI-X Gigabit Ethernet expansion card with both 5 V and 3.3 V support notches]

The PCI connector is defined as having 62 contacts on each side of the edge connector, but two or four of them are replaced by key notches, so a card has 60 or 58 contacts on each side. Pin 1 is closest to the backplate. B and A sides are as follows, looking down into the motherboard connector.[7][9][10]

32-bit PCI connector pinout


Pin Side B Side A Comments

1 −12 V TRST# JTAG port pins (optional)

2 TCK +12 V

3 Ground TMS

4 TDO TDI

5 +5 V +5 V

6 +5 V INTA# Interrupt lines (open-drain)

7 INTB# INTC#

8 INTD# +5 V

9 PRSNT1# Reserved Pulled low to indicate 7.5 or 25 W power required

10 Reserved IOPWR +5 V or +3.3 V

11 PRSNT2# Reserved Pulled low to indicate 7.5 or 15 W power required

12 Ground Ground Key notch for 3.3 V-capable cards

13 Ground Ground

14 Reserved 3.3 V aux Standby power (optional)

15 Ground RST# Bus reset

16 CLK IOPWR 33/66 MHz clock

17 Ground GNT# Bus grant from motherboard to card

18 REQ# Ground Bus request from card to motherboard

19 IOPWR PME# Power management event (optional); 3.3 V, open drain, active low.[11]

20 AD[31] AD[30] Address/data bus (upper half)

21 AD[29] +3.3 V

22 Ground AD[28]

23 AD[27] AD[26]

24 AD[25] Ground

25 +3.3 V AD[24]

26 C/BE[3]# IDSEL

27 AD[23] +3.3 V

28 Ground AD[22]

29 AD[21] AD[20]

30 AD[19] Ground

31 +3.3 V AD[18]

32 AD[17] AD[16]

33 C/BE[2]# +3.3 V

34 Ground FRAME# Bus transfer in progress

35 IRDY# Ground Initiator ready

36 +3.3 V TRDY# Target ready

37 DEVSEL# Ground Target selected



38 Ground STOP# Target requests halt

39 LOCK# +3.3 V Locked transaction

40 PERR# SMBCLK (SDONE) Parity error; SMBus clock (formerly Snoop done, obsolete)

41 +3.3 V SMBDAT (SBO#) SMBus data (formerly Snoop backoff, obsolete)

42 SERR# Ground System error

43 +3.3 V PAR Even parity over AD[31:00] and C/BE[3:0]#

44 C/BE[1]# AD[15] Address/data bus (lower half)

45 AD[14] +3.3 V

46 Ground AD[13]

47 AD[12] AD[11]

48 AD[10] Ground

49 M66EN (Ground) AD[09] 66 MHz enable; grounded where 66 MHz operation is unsupported

50 Ground Ground Key notch for 5 V-capable cards

51 Ground Ground

52 AD[08] C/BE[0]# Address/data bus (lower half)

53 AD[07] +3.3 V

54 +3.3 V AD[06]

55 AD[05] AD[04]

56 AD[03] Ground

57 Ground AD[02]

58 AD[01] AD[00]

59 IOPWR IOPWR

60 ACK64# REQ64# For 64-bit extension; no connect for 32-bit devices.

61 +5 V +5 V

62 +5 V +5 V

64-bit PCI extends this by an additional 32 contacts on each side which provide AD[63:32], C/BE[7:4]#, the PAR64
parity signal, and a number of power and ground pins.

Legend
Ground pin Zero volt reference

Power pin Supplies power to the PCI card

Output pin Driven by the PCI card, received by the motherboard

Initiator output Driven by the master/initiator, received by the target

I/O signal May be driven by initiator or target, depending on operation

Target output Driven by the target, received by the initiator/master

Input Driven by the motherboard, received by the PCI card

Open drain May be pulled low and/or sensed by multiple cards

Reserved Not presently used, do not connect

Most lines are connected to each slot in parallel. The exceptions are:

• Each slot has its own REQ# output to, and GNT# input from the motherboard arbiter.
• Each slot has its own IDSEL line, usually connected to a specific AD line.
• TDO is daisy-chained to the following slot's TDI. Cards without JTAG support must connect TDI to TDO so as
not to break the chain.
• PRSNT1# and PRSNT2# for each slot have their own pull-up resistors on the motherboard. The motherboard may
(but does not have to) sense these pins to determine the presence of PCI cards and their power requirements.
• REQ64# and ACK64# are individually pulled up on 32-bit only slots.
• The interrupt lines INTA# through INTD# are connected to all slots in different orders. (INTA# on one slot is
INTB# on the next and INTC# on the one after that.)
Notes:
• IOPWR is +3.3 V or +5 V, depending on the backplane. The slots also have a ridge in one of two places which
prevents insertion of cards that do not have the corresponding key notch, indicating support for that voltage
standard. Universal cards have both key notches and use IOPWR to determine their I/O signal levels.
• The PCI SIG strongly encourages 3.3 V PCI signaling,[7] requiring support for it since standard revision 2.3,[9]
but most PC motherboards use the 5 V variant. Thus, while many currently available PCI cards support both, and
have two key notches to indicate that, there are still a large number of 5 V-only cards on the market.
• The M66EN pin is an additional ground on 5 V PCI buses found in most PC motherboards. Cards and
motherboards that do not support 66 MHz operation also ground this pin. If all participants support 66 MHz
operation, a pull-up resistor on the motherboard raises this signal high and 66 MHz operation is enabled.
• At least one of PRSNT1# and PRSNT2# must be grounded by the card. The combination chosen indicates the total power requirements of the card (25 W, 15 W, or 7.5 W); a decoding sketch follows these notes.
• SBO# and SDONE are signals from a cache controller to the current target. They are not initiator outputs, but are
colored that way because they are target inputs.
• PME# (19A) - Power management event (optional) which is supported in PCI version 2.2 and higher. It's a 3.3 V,
open drain, active low signal.[11] PCI cards may use this signal to send and receive PME via the PCI socket
directly, which eliminates the need for a special Wake-on-LAN cable.[12]
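As a small illustrative sketch (the encoding follows from the PRSNT# pin comments in the table above; the function name is hypothetical), a motherboard that senses these two pins could decode a card's power budget like this:

/* prsnt1 and prsnt2 are the sensed levels of PRSNT1# and PRSNT2#:
   0 = grounded by the card, 1 = left open. */
static const char *pci_card_power(int prsnt1, int prsnt2)
{
    if (prsnt1 && prsnt2)  return "no card present";
    if (!prsnt1 && prsnt2) return "25 W maximum";
    if (prsnt1 && !prsnt2) return "15 W maximum";
    return "7.5 W maximum"; /* both grounded */
}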

Mixing of 32-bit and 64-bit PCI cards in different width slots


Most 32-bit PCI cards will function properly in 64-bit PCI-X slots, but the bus speed will be limited to the clock
frequency of the slowest card, an inherent limitation of PCI's shared bus topology. For example, when a PCI 2.3,
66-MHz peripheral is installed into a PCI-X bus capable of 133 MHz, the entire bus backplane will be limited to
66 MHz. To get around this limitation, many motherboards have multiple PCI/PCI-X buses, with one bus intended
for use with high-speed PCI-X peripherals, and the other bus intended for general-purpose peripherals.
Many 64-bit PCI-X cards are designed to work in 32-bit mode if inserted in shorter 32-bit connectors, with some loss
of speed.[13][14] An example of this is the Adaptec 29160 64-bit SCSI interface card.[15] However, some 64-bit
PCI-X cards do not work in standard 32-bit PCI slots.[16]
Installing a 64-bit PCI-X card in a 32-bit slot will leave the 64-bit portion of the card edge connector not connected
and overhanging. This requires that there be no motherboard components positioned so as to mechanically obstruct
the overhanging portion of the card edge connector.

Physical card dimensions

Full-size card
The original "full-size" PCI card is specified as a height of 107 mm (4.2 inches) and a depth of 312 mm
(12.283 inches). The height includes the edge card connector. However, most modern PCI cards are half-length or
smaller (see below) and many modern PCs cannot fit a full-size card.

Card backplate
In addition to these dimensions the physical size and location of a card's backplate are also standardized. The
backplate is the part that fastens to the card cage to stabilize the card and also contains external connectors, so it
usually attaches in a window so it is accessible from outside the computer case. The backplate is typically fixed to
the cage by either a 6-32 or M3 screw, or with a separate hold-down bracket.
The card itself can be a smaller size, but the backplate must still be full-size and properly located so that the card fits
in any standard PCI slot.

Half-length extension card (de facto standard)


This is in fact the practical standard now – the majority of modern PCI cards fit inside this length.
• Width: 0.6 inches (15.24 mm)
• Depth: 6.9 inches (175.26 mm)
• Height: 4.2 inches (106.68 mm)

Low-profile (half-height) card


The PCI organization has defined a standard for "low-profile" cards, which basically fit in the following ranges:
• Height: 1.42 inches (36.07 mm) to 2.536 inches (64.41 mm)
• Depth: 4.721 inches (119.91 mm) to 6.6 inches (167.64 mm)
The bracket is also reduced in height, to a standard 3.118 inches (79.2 mm). The smaller bracket will not fit a
standard PC case, but will fit in a 2U rack-mount case. Many manufacturers supply both types of bracket (brackets
are typically screwed to the card so changing them is not difficult).
These cards may be known by other names such as "slim".
• Low Profile PCI FAQ [17]
• Low Profile PCI Specification [18]

Mini PCI
Mini PCI was added to PCI version 2.2 for use in laptops; it uses
a 32-bit, 33 MHz bus with powered connections (3.3 V only; 5 V
is limited to 100 mA) and support for bus mastering and DMA.
The standard size for Mini PCI cards is approximately a quarter that of their full-sized counterparts. As there is no access to the card from
outside the case, unlike desktop PCI cards with brackets carrying
connectors, there are limitations on the functions they may
perform.
[Image: Mini PCI Wi-Fi card, Type IIIB]

Many Mini PCI devices were developed such as Wi-Fi, Fast Ethernet,
Bluetooth, modems (often Winmodems), sound cards, cryptographic
accelerators, SCSI, IDE–ATA, SATA controllers and combination
cards. Mini PCI cards can be used with regular PCI-equipped
hardware, using Mini PCI-to-PCI converters. Mini PCI has been
superseded by the much narrower PCI Express Mini Card.

Technical details of Mini PCI

[Image: PCI-to-MiniPCI converter, Type III]
Mini PCI cards have a 2 W maximum power consumption, which
limits the functionality that can be implemented in this form factor.
They also are required to support the CLKRUN# PCI signal used to
start and stop the PCI clock for power management purposes.
There are three card form factors: Type I, Type II, and Type III. Types I and II use a 100-pin stacking connector, while Type III uses a 124-pin edge connector; that is, the connector for Types I and II differs from that for Type III, where the connector is on the edge of the card, as with a SO-DIMM. The additional 24 pins provide the extra signals required to route I/O back through the system connector (audio, AC-Link, LAN, phone-line interface). Type II cards have mounted RJ11 and RJ45 connectors. These cards must be located at the edge of the computer or docking station so that the RJ11 and RJ45 ports can be mounted for external access.

[Image: MiniPCI and MiniPCI Express cards in comparison]

Type Card on outer edge of host system Connector Size Comments

IA No 100-Pin Stacking 7.5 × 70 × 45 mm Large Z dimension (7.5 mm)

IB No 100-Pin Stacking 5.5 × 70 × 45 mm Smaller Z dimension (5.5 mm)

IIA Yes 100-Pin Stacking 17.44 × 70 × 45 mm Large Z dimension (17.44 mm)

IIB Yes 100-Pin Stacking 5.5 × 78 × 45 mm Smaller Z dimension (5.5 mm)

IIIA No 124-Pin Card Edge 2.4 × 59.6 × 50.95 mm Larger Y dimension (50.95 mm)

IIIB No 124-Pin Card Edge 2.4 × 59.6 × 44.6 mm Smaller Y dimension (44.6 mm)

Mini PCI is distinct from 144-pin Micro PCI.[19]

PC/104-Plus and PCI-104


The PC/104-Plus and PCI-104 embedded form factors include a stacking 120-pin PCI connector.

Other physical variations


Typically consumer systems specify "N × PCI slots" without specifying actual dimensions of the space available. In
some small-form-factor systems, this may not be sufficient to allow even "half-length" PCI cards to fit. Despite this
limitation, these systems are still useful because many modern PCI cards are considerably smaller than half-length.

PCI bus transactions


PCI bus traffic consists of a series of PCI bus transactions. Each transaction consists of an address phase followed
by one or more data phases. The direction of the data phases may be from initiator to target (write transaction) or
vice-versa (read transaction), but all of the data phases must be in the same direction. Either party may pause or halt
the data phases at any point. (One common example is a low-performance PCI device that does not support burst
transactions, and always halts a transaction after the first data phase.)
Any PCI device may initiate a transaction. First, it must request permission from a PCI bus arbiter on the
motherboard. The arbiter grants permission to one of the requesting devices. The initiator begins the address phase
by broadcasting a 32-bit address plus a 4-bit command code, then waits for a target to respond. All other devices
examine this address and one of them responds a few cycles later.
64-bit addressing is done using a two-stage address phase. The initiator broadcasts the low 32 address bits,
accompanied by a special "dual address cycle" command code. Devices which do not support 64-bit addressing can
simply not respond to that command code. The next cycle, the initiator transmits the high 32 address bits, plus the
real command code. The transaction operates identically from that point on. To ensure compatibility with 32-bit PCI
devices, it is forbidden to use a dual address cycle if not necessary, i.e. if the high-order address bits are all zero.
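To make the rule concrete, here is a trivial sketch (the names are illustrative) of how an initiator decides whether a dual address cycle is needed and how it splits the address:

#include <stdint.h>

/* Splits a 64-bit target address into the two 32-bit halves broadcast
   during a dual address cycle. Returns 1 if a DAC is required; a DAC
   is forbidden when the high-order 32 bits are all zero. */
static int pci_split_address(uint64_t addr, uint32_t *lo, uint32_t *hi)
{
    *lo = (uint32_t)addr;
    *hi = (uint32_t)(addr >> 32);
    return *hi != 0;
}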
While the PCI bus transfers 32 bits per data phase, the initiator transmits 4 active-low byte enable signals indicating
which 8-bit bytes are to be considered significant. In particular, a write must affect only the enabled bytes in the
target PCI device. They are of little importance for memory reads, but I/O reads might have side effects. The PCI
standard explicitly allows a data phase with no bytes enabled, which must behave as a no-op.

PCI address spaces


PCI has three address spaces: memory, I/O address, and configuration.
Memory addresses are 32 bits (optionally 64 bits) in size, support caching, and can use burst transactions.
I/O addresses are for compatibility with the Intel x86 architecture's I/O port address space. Although the PCI bus
specification allows burst transactions in any address space, most devices only support it for memory addresses and
not I/O.
Finally, PCI configuration space provides access to 256 bytes of special configuration registers per PCI device. Each PCI slot gets its own configuration space address range. The registers are used to configure each device's memory and I/O address ranges, i.e. the ranges it should respond to from transaction initiators. When a computer is first turned on, all PCI devices respond only to their configuration space accesses. The computer's BIOS scans for devices and assigns memory and I/O address ranges to them.
If an address is not claimed by any device, the transaction initiator's address phase will time out causing the initiator
to abort the operation. In case of reads, it is customary to supply all-ones for the read data value (0xFFFFFFFF) in
this case. PCI devices therefore generally attempt to avoid using the all-ones value in important status registers, so
that such an error can be easily detected by software.

PCI command codes


There are 16 possible 4-bit command codes, and 12 of them are assigned. With the exception of the unique dual
address cycle, the least significant bit of the command code indicates whether the following data phases are a read
(data sent from target to initiator) or a write (data sent from an initiator to target). PCI targets must examine the
command code as well as the address and not respond to address phases which specify an unsupported command
code.
The commands that refer to cache lines depend on the PCI configuration space cache line size register being set up
properly; they may not be used until that has been done.
0000
Interrupt Acknowledge
This is a special form of read cycle implicitly addressed to the interrupt controller, which returns an interrupt
vector. The 32-bit address field is ignored. One possible implementation is to generate an interrupt
acknowledge cycle on an ISA bus using a PCI/ISA bus bridge. This command is for IBM PC compatibility; if
there is no Intel 8259 style interrupt controller on the PCI bus, this cycle need never be used.
0001
Special Cycle
This cycle is a special broadcast write of system events that PCI cards may be interested in. The address field of
a special cycle is ignored, but it is followed by a data phase containing a payload message. The currently
defined messages announce that the processor is stopping for some reason (e.g. to save power). No device ever
responds to this cycle; it is always terminated with a master abort after leaving the data on the bus for at least 4
cycles.
0010
I/O Read
This performs a read from I/O space. All 32 bits of the read address are provided, so that a device can (for
compatibility reasons) implement less than 4 bytes worth of I/O registers. If the byte enables request data not
within the address range supported by the PCI device (e.g. a 4-byte read from a device which only supports 2
bytes of I/O address space), it must be terminated with a target abort. Multiple data cycles are permitted, using
linear (simple incrementing) burst ordering.
The PCI standard discourages the use of I/O space in new devices, preferring that as much as possible be done through main memory mapping.
0011
I/O Write
This performs a write to I/O space.
010x
Reserved
A PCI device must not respond to an address cycle with these command codes.
0110
Memory Read
This performs a read cycle from memory space. Because the smallest memory space a PCI device is permitted to implement is 16 bytes, the two least significant bits of the address are not needed; equivalent information will
arrive in the form of byte select signals. They instead specify the order in which burst data must be returned. If
a device does not support the requested order, it must provide the first word and then disconnect.
If a memory space is marked as "prefetchable", then the target device must ignore the byte select signals on a
memory read and always return 32 valid bits.
0111
Memory Write
This operates similarly to a memory read. The byte select signals are more important in a write, as unselected
bytes must not be written to memory.
Generally, PCI writes are faster than PCI reads, because a device can buffer the incoming write data and
release the bus faster. For a read, it must delay the data phase until the data has been fetched.
100x
Reserved
A PCI device must not respond to an address cycle with these command codes.
1010
Configuration Read
This is similar to an I/O read, but reads from PCI configuration space. A device must respond only if the low
11 bits of the address specify a function and register that it implements, and if the special IDSEL signal is
asserted. It must ignore the high 21 bits. Burst reads (using linear incrementing) are permitted in PCI
configuration space.
Unlike I/O space, standard PCI configuration registers are defined so that reads never disturb the state of the
device. It is possible for a device to have configuration space registers beyond the standard 64 bytes which
have read side effects, but this is rare.[20]
Configuration space accesses often have a few cycles of delay in order to allow the IDSEL lines to stabilize,
which makes them slower than other forms of access. Also, a configuration space access requires a multi-step
operation rather than a single machine instruction. Thus, it is best to avoid them during routine operation of a
PCI device.
1011
Configuration Write
This operates analogously to a configuration read.
1100
Memory Read Multiple
This command is identical to a generic memory read, but includes the hint that a long read burst will continue
beyond the end of the current cache line, and the target should internally prefetch a large amount of data. A
target is always permitted to consider this a synonym for a generic memory read.
1101
Dual Address Cycle
When accessing a memory address that requires more than 32 bits to represent, the address phase begins with
this command and the low 32 bits of the address, followed by a second cycle with the actual command and the
high 32 bits of the address. PCI targets that do not support 64-bit addressing can simply treat this as another
reserved command code and not respond to it. This command code can only be used with a non-zero
high-order address word; it is forbidden to use this cycle if not necessary.
1110
Memory Read Line
This command is identical to a generic memory read, but includes the hint that the read will continue to the
end of the cache line. A target is always permitted to consider this a synonym for a generic memory read.
1111
Memory Write and Invalidate
This command is identical to a generic memory write, but comes with the guarantee that one or more whole
cache lines will be written, with all byte selects enabled. This is an optimization for write-back caches
snooping the bus. Normally, a write-back cache holding dirty data must interrupt the write operation long
enough to write its own dirty data first. If the write is performed using this command, the data to be written back
is guaranteed to be irrelevant, and can simply be invalidated in the write-back cache.
This optimization only affects the snooping cache, and makes no difference to the target, which may treat this
as a synonym for the memory write command.
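For reference, the assigned codes above can be collected into a small C enumeration. This is an illustrative sketch; the identifier names are invented here, not taken from any standard header:

/* The 4-bit C/BE[3:0]# command codes described above. */
enum pci_command {
    PCI_CMD_INT_ACK       = 0x0, /* Interrupt Acknowledge */
    PCI_CMD_SPECIAL       = 0x1, /* Special Cycle */
    PCI_CMD_IO_READ       = 0x2,
    PCI_CMD_IO_WRITE      = 0x3,
    /* 0x4 and 0x5 are reserved */
    PCI_CMD_MEM_READ      = 0x6,
    PCI_CMD_MEM_WRITE     = 0x7,
    /* 0x8 and 0x9 are reserved */
    PCI_CMD_CFG_READ      = 0xA,
    PCI_CMD_CFG_WRITE     = 0xB,
    PCI_CMD_MEM_READ_MULT = 0xC, /* Memory Read Multiple */
    PCI_CMD_DUAL_ADDR     = 0xD, /* Dual Address Cycle */
    PCI_CMD_MEM_READ_LINE = 0xE,
    PCI_CMD_MEM_WRITE_INV = 0xF  /* Memory Write and Invalidate */
};

/* Per the rule above: the least significant bit distinguishes reads
   from writes, except for the dual address cycle. */
static int pci_cmd_is_write(enum pci_command c)
{
    return (c & 1) && c != PCI_CMD_DUAL_ADDR;
}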

PCI bus latency


Soon after promulgation of the PCI specification, it was discovered that lengthy transactions by some devices, due to slow acknowledgments, long data bursts, or some combination, could cause buffer underrun or overrun in other devices. Recommendations on the timing of individual phases in revision 2.0 were made mandatory in revision 2.1:
• A target must be able to complete the initial data phase (assert TRDY# and/or STOP#) within 16 cycles of the
start of a transaction.
• An initiator must complete each data phase (assert IRDY#) within 8 cycles.
Additionally, as of revision 2.1, all initiators capable of bursting more than 2 data phases must implement a
programmable latency timer. The timer starts counting clock cycles when a transaction starts (initiator asserts
FRAME#). If the timer has expired and the arbiter has removed GNT#, then the initiator must terminate the
transaction at the next legal opportunity. This is usually the next data phase, but Memory Write and Invalidate
transactions must continue to the end of the cache line.

Delayed transactions
Devices unable to meet those timing restrictions must use a combination of posted writes (for memory writes) and
delayed transactions (for other writes and all reads). In a delayed transaction, the target records the transaction
(including the write data) internally and aborts (asserts STOP# rather than TRDY#) the first data phase. The initiator
must retry exactly the same transaction later. In the interim, the target internally performs the transaction, and waits
for the retried transaction. When the retried transaction is seen, the buffered result is delivered.
A device may be the target of other transactions while completing one delayed transaction; it must remember the
transaction type, address, byte selects and (if a write) data value, and only complete the correct transaction.
If the target has a limit on the number of delayed transactions that it can record internally (simple targets may impose
a limit of 1), it will force those transactions to retry without recording them. They will be dealt with when the current
delayed transaction is completed. If two initiators attempt the same transaction, a delayed transaction begun by one
may have its result delivered to the other; this is harmless.
A target abandons a delayed transaction when a retry succeeds in delivering the buffered result, when the bus is reset, or when 2^15 = 32768 clock cycles (approximately 1 ms at 33 MHz) elapse without seeing a retry. The latter should never happen in normal operation, but it prevents a deadlock of the whole bus if one initiator is reset or malfunctions.

PCI bus bridges


The PCI standard permits multiple independent PCI buses to be connected by bus bridges that will forward operations on one bus to another when required. Although conventional PCI tends not to use many bus bridges, PCI Express systems use many; each PCI Express slot appears to be a separate bus, connected by a bridge to the others.

Posted writes
Generally, when a bus bridge sees a transaction on one bus that must be forwarded to the other, the original
transaction must wait until the forwarded transaction completes before a result is ready. One notable exception
occurs in the case of memory writes. Here, the bridge may record the write data internally (if it has room) and signal
completion of the write before the forwarded write has completed. Or, indeed, before it has begun. Such "sent but not
yet arrived" writes are referred to as "posted writes", by analogy with a postal mail message. Although they offer
great opportunity for performance gains, the rules governing what is permissible are somewhat intricate.[21]

Combining, merging, and collapsing


The PCI standard permits bus bridges to convert multiple bus transactions into one larger transaction under certain
situations. This can improve the efficiency of the PCI bus.
Combining
Write transactions to consecutive addresses may be combined into a longer burst write, as long as the order of
the accesses in the burst is the same as the order of the original writes. It is permissible to insert extra data
phases with all byte enables turned off if the writes are almost consecutive.
Merging
Multiple writes to disjoint portions of the same word may be merged into a single write with multiple byte enables asserted. In this case, writes that were presented to the bus bridge in a particular order are merged so that they occur at the same time when forwarded.
Collapsing
Multiple writes to the same byte or bytes may not be combined, for example, by performing only the second
write and skipping the first write that was overwritten. This is because the PCI specification permits writes to
have side effects.

PCI bus signals


PCI bus transactions are controlled by five main control signals, two driven by the initiator of a transaction
(FRAME# and IRDY#), and three driven by the target (DEVSEL#, TRDY#, and STOP#). There are two additional
arbitration signals (REQ# and GNT#) which are used to obtain permission to initiate a transaction. All are
active-low, meaning that the active or asserted state is a low voltage. Pull-up resistors on the motherboard ensure
they will remain high (inactive or deasserted) if not driven by any device, but the PCI bus does not depend on the
resistors to change the signal level; all devices drive the signals high for one cycle before ceasing to drive the
signals.

Signal timing
All PCI bus signals are sampled on the rising edge of the clock. Signals nominally change on the falling edge of the
clock, giving each PCI device approximately one half a clock cycle to decide how to respond to the signals it
observed on the rising edge, and one half a clock cycle to transmit its response to the other device.
The PCI bus requires that every time the device driving a PCI bus signal changes, one turnaround cycle must elapse
between the time the one device stops driving the signal and the other device starts. Without this, there might be a
period when both devices were driving the signal, which would interfere with bus operation.
The combination of this turnaround cycle and the requirement to drive a control line high for one cycle before
ceasing to drive it means that each of the main control lines must be high for a minimum of two cycles when
changing owners. The PCI bus protocol is designed so this is rarely a limitation; only in a few special cases (notably
fast back-to-back transactions) is it necessary to insert additional delay to meet this requirement.

Arbitration
Any device on a PCI bus that is capable of acting as a bus master may initiate a transaction with any other device. To
ensure that only one transaction is initiated at a time, each master must first wait for a bus grant signal, GNT#, from
an arbiter located on the motherboard. Each device has a separate request line REQ# that requests the bus, but the
arbiter may "park" the bus grant signal at any device if there are no current requests.
The arbiter may remove GNT# at any time. A device which loses GNT# may complete its current transaction, but
may not start one (by asserting FRAME#) unless it observes GNT# asserted the cycle before it begins.
The arbiter may also provide GNT# at any time, including during another master's transaction. During a transaction,
either FRAME# or IRDY# or both are asserted; when both are deasserted, the bus is idle. A device may initiate a
transaction at any time that GNT# is asserted and the bus is idle.

Address phase
A PCI bus transaction begins with an address phase. The initiator, seeing that it has GNT# and the bus is idle, drives
the target address onto the AD[31:0] lines, the associated command (e.g. memory read, or I/O write) on the
C/BE[3:0]# lines, and pulls FRAME# low.
Each other device examines the address and command and decides whether to respond as the target by asserting
DEVSEL#. A device must respond by asserting DEVSEL# within 3 cycles. Devices which promise to respond
within 1 or 2 cycles are said to have "fast DEVSEL" or "medium DEVSEL", respectively. (Actually, the time to
respond is 2.5 cycles, since PCI devices must transmit all signals half a cycle early so that they can be received three
cycles later.)
Note that a device must latch the address on the first cycle; the initiator is required to remove the address and
command from the bus on the following cycle, even before receiving a DEVSEL# response. The additional time is
available only for interpreting the address and command after it is captured.
On the fifth cycle of the address phase (or earlier if all other devices have medium DEVSEL or faster), a catch-all
"subtractive decoding" is allowed for some address ranges. This is commonly used by an ISA bus bridge for
addresses within its range (24 bits for memory and 16 bits for I/O).
On the sixth cycle, if there has been no response, the initiator may abort the transaction by deasserting FRAME#.
This is known as master abort termination and it is customary for PCI bus bridges to return all-ones data
(0xFFFFFFFF) in this case. PCI devices therefore are generally designed to avoid using the all-ones value in
important status registers, so that such an error can be easily detected by software.

Address phase timing


_ 0_ 1_ 2_ 3_ 4_ 5_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/
___
GNT# \___/XXXXXXXXXXXXXXXXXXX (GNT# Irrelevant after cycle has started)
_______
FRAME# \___________________
___
AD[31:0] -------<___>--------------- (Address only valid for one cycle.)
___ _______________
C/BE[3:0]# -------<___X_______________ (Command, then first data phase byte enables)
_______________________
DEVSEL# \___\___\___\___
Fast Med Slow Subtractive
_ _ _ _ _ _ _
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/
0 1 2 3 4 5

On the rising edge of clock 0, the initiator observes FRAME# and IRDY# both high, and GNT# low, so it drives the
address, command, and asserts FRAME# in time for the rising edge of clock 1. Targets latch the address and begin
decoding it. They may respond with DEVSEL# in time for clock 2 (fast DEVSEL), 3 (medium) or 4 (slow).
Subtractive decode devices, seeing no other response by clock 4, may respond on clock 5. If the master does not see
a response by clock 5, it will terminate the transaction and remove FRAME# on clock 6.
TRDY# and STOP# are deasserted (high) during the address phase. The initiator may assert IRDY# as soon as it is
ready to transfer data, which could theoretically be as soon as clock 2.

Dual-cycle address
To allow 64-bit addressing, a master will present the address over two consecutive cycles. First, it sends the
low-order address bits with a special "dual-cycle address" command on the C/BE[3:0]#. On the following cycle, it
sends the high-order address bits and the actual command. Dual-address cycles are forbidden if the high-order
address bits are zero, so devices which do not support 64-bit addressing can simply not respond to dual cycle
commands.

_0_ 1_ 2_ 3_ 4_ 5_ 6_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/
___
GNT# \___/XXXXXXXXXXXXXXXXXXXXXXX
_______
FRAME# \_______________________
___ ___
AD[31:0] -------<___X___>--------------- (Low, then high bits)
___ ___ _______________
C/BE[3:0]# -------<___X___X_______________ (DAC, then actual command)
___________________________
DEVSEL# \___\___\___\___
Fast Med Slow
_ _ _ _ _ _ _ _
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/
0 1 2 3 4 5 6

Configuration access
Addresses for PCI configuration space access are decoded specially. For these, the low-order address lines specify
the offset of the desired PCI configuration register, and the high-order address lines are ignored. Instead, an
additional address signal, the IDSEL input, must be high before a device may assert DEVSEL#. Each slot connects a
different high-order address line to the IDSEL pin, and is selected using one-hot encoding on the upper address lines.

Data phases
After the address phase (specifically, beginning with the cycle that DEVSEL# goes low) comes a burst of one or
more data phases. In all cases, the initiator drives active-low byte select signals on the C/BE[3:0]# lines, but the data
on the AD[31:0] may be driven by the initiator (in case of writes) or target (in case of reads).
During data phases, the C/BE[3:0]# lines are interpreted as active-low byte enables. In case of a write, the asserted
signals indicate which of the four bytes on the AD bus are to be written to the addressed location. In the case of a
read, they indicate which bytes the initiator is interested in. For reads, it is always legal to ignore the byte enable
signals and simply return all 32 bits; cacheable memory resources are required to always return 32 valid bits. The
byte enables are mainly useful for I/O space accesses where reads have side effects.
A data phase with all four C/BE# lines deasserted is explicitly permitted by the PCI standard, and must have no
effect on the target (other than to advance the address in the burst access in progress).
The data phase continues until both parties are ready to complete the transfer and continue to the next data phase.
The initiator asserts IRDY# (initiator ready) when it no longer needs to wait, while the target asserts TRDY# (target
ready). Whichever side is providing the data must drive it on the AD bus before asserting its ready signal.
Once one of the participants asserts its ready signal, it may not become un-ready or otherwise alter its control signals
until the end of the data phase. The data recipient must latch the AD bus each cycle until it sees both IRDY# and
TRDY# asserted, which marks the end of the current data phase and indicates that the just-latched data is the word to
be transferred.
To maintain full burst speed, the data sender then has half a clock cycle after seeing both IRDY# and TRDY#
asserted to drive the next word onto the AD bus.

0_ 1_2_ 3_ 4_ 5_ 6_ 7_ 8_ 9_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/
___ _______ ___ ___ ___
AD[31:0] ---<___XXXXXXXXX_______XXXXX___X___X___ (If a write)
___ ___ _______ ___ ___
AD[31:0] ---<___>~~~<XXXXXXXX___X_______X___X___ (If a read)
___ _______________ _______ ___ ___
C/BE[3:0]# ---<___X_______________X_______X___X___ (Must always be valid)
_______________ | ___ | | |
IRDY# x \_______/ x \___________
___________________ | | | |
TRDY# x x \___________________
___________ | | | |
DEVSEL# \___________________________
___ | | | |
FRAME# \___________________________________
_ _ _ _ _ |_ _ |_ |_ |_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/
0 1 2 3 4 5 6 7 8 9

This continues the address cycle illustrated above, assuming a single address cycle with medium DEVSEL, so the
target responds in time for clock 3. However, at that time, neither side is ready to transfer data. For clock 4, the
initiator is ready, but the target is not. On clock 5, both are ready, and a data transfer takes place (as indicated by the
vertical lines). For clock 6, the target is ready to transfer, but the initiator is not. On clock 7, the initiator becomes
ready, and data is transferred. For clocks 8 and 9, both sides remain ready to transfer data, and data is transferred at
the maximum possible rate (32 bits per clock cycle).
In case of a read, clock 2 is reserved for turning around the AD bus, so the target is not permitted to drive data on the
bus even if it is capable of fast DEVSEL.

Fast DEVSEL# on reads


A target that supports fast DEVSEL could in theory begin responding to a read the cycle after the address is
presented. This cycle is, however, reserved for AD bus turnaround. Thus, a target may not drive the AD bus (and
thus may not assert TRDY#) on the second cycle of a transaction. Note that most targets will not be this fast and will
not need any special logic to enforce this condition.

Ending transactions
Either side may request that a burst end after the current data phase. Simple PCI devices that do not support
multi-word bursts will always request this immediately. Even devices that do support bursts will have some limit on
the maximum length they can support, such as the end of their addressable memory.

Initiator burst termination


The initiator can mark any data phase as the final one in a transaction by deasserting FRAME# at the same time as it
asserts IRDY#. The cycle after the target asserts TRDY#, the final data transfer is complete, both sides deassert their
respective RDY# signals, and the bus is idle again. The master may not deassert FRAME# before asserting IRDY#,
nor may it deassert FRAME# while waiting, with IRDY# asserted, for the target to assert TRDY#.
The only minor exception is a master abort termination, when no target responds with DEVSEL#. Obviously, it is
pointless to wait for TRDY# in such a case. However, even in this case, the master must assert IRDY# for at least
one cycle after deasserting FRAME#. (Commonly, a master will assert IRDY# before receiving DEVSEL#, so it
must simply hold IRDY# asserted for one cycle longer.) This is to ensure that bus turnaround timing rules are
obeyed on the FRAME# line.

Target burst termination


The target requests the initiator end a burst by asserting STOP#. The initiator will then end the transaction by
deasserting FRAME# at the next legal opportunity; if it wishes to transfer more data, it will continue in a separate
transaction. There are several ways for the target to do this:
Disconnect with data
If the target asserts STOP# and TRDY# at the same time, this indicates that the target wishes this to be the last
data phase. For example, a target that does not support burst transfers will always do this to force single-word
PCI transactions. This is the most efficient way for a target to end a burst.
Disconnect without data
If the target asserts STOP# without asserting TRDY#, this indicates that the target wishes to stop without
transferring data. STOP# is considered equivalent to TRDY# for the purpose of ending a data phase, but no
data is transferred.
Retry
A Disconnect without data before transferring any data is a retry, and unlike other PCI transactions, PCI
initiators are required to pause slightly before continuing the operation. See the PCI specification for details.
Target abort
Normally, a target holds DEVSEL# asserted through the last data phase. However, if a target deasserts
DEVSEL# before disconnecting without data (asserting STOP#), this indicates a target abort, which is a fatal
error condition. The initiator may not retry, and typically treats it as a bus error. Note that a target may not
deassert DEVSEL# while waiting with TRDY# or STOP# low; it must do this at the beginning of a data phase.
There will always be at least one more cycle after a target-initiated disconnection, to allow the master to deassert
FRAME#. There are two sub-cases, which take the same amount of time, but one requires an additional data phase:
Disconnect-A
If the initiator observes STOP# before asserting its own IRDY#, then it can end the burst by deasserting
FRAME# at the end of the current data phase.
Disconnect-B
If the initiator has already asserted IRDY# (without deasserting FRAME#) by the time it observes the target's
STOP#, it is already committed to an additional data phase. The target must wait through an additional data
phase, holding STOP# asserted without TRDY#, before the transaction can end.
If the initiator ends the burst at the same time as the target requests disconnection, there is no additional bus cycle.

Burst addressing
For memory space accesses, the words in a burst may be accessed in several orders. The unnecessary low-order
address bits AD[1:0] are used to convey the initiator's requested order. A target which does not support a particular
order must terminate the burst after the first word. Some of these orders depend on the cache line size, which is
configurable on all PCI devices.

PCI burst ordering


A[1] A[0] Burst order (with 16-byte cache line)

0 0 Linear incrementing (0x0C, 0x10, 0x14, 0x18, 0x1C, ...)

0 1 Cacheline toggle (0x0C, 0x08, 0x04, 0x00, 0x1C, 0x18, ...)

1 0 Cacheline wrap (0x0C, 0x00, 0x04, 0x08, 0x1C, 0x10, ...)

1 1 Reserved (disconnect after first transfer)

If the starting offset within the cache line is zero, all of these modes reduce to the same order.
Cache line toggle and cache line wrap modes are two forms of critical-word-first cache line fetching. Toggle mode
XORs the supplied address with an incrementing counter. This is the native order for Intel 486 and Pentium
processors. It has the advantage that it is not necessary to know the cache line size to implement it.
PCI version 2.1 obsoleted toggle mode and added the cache line wrap mode,[22] where fetching proceeds linearly,
wrapping around at the end of each cache line. When one cache line is completely fetched, fetching jumps to the
starting offset in the next cache line.
Note that most PCI devices only support a limited range of typical cache line sizes; if the cache line size is
programmed to an unexpected value, they force single-word access.
PCI also supports burst access to I/O and configuration space, but only linear mode is supported. (This is rarely used,
and may be buggy in some devices; they may not support it, but not properly force single-word access either.)
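To illustrate the two critical-word-first orders, here is a small C sketch that prints the word addresses of a burst for toggle and wrap modes. The function names are invented for illustration; with a 16-byte cache line it reproduces the sequences in the table above:

#include <stdint.h>
#include <stdio.h>

/* Toggle mode: XOR the start address with an incrementing word counter. */
static void burst_toggle(uint32_t start, int nwords)
{
    for (int i = 0; i < nwords; i++)
        printf("0x%02X ", start ^ (uint32_t)(i * 4));
    putchar('\n');
}

/* Wrap mode: proceed linearly, wrapping within each cache line; after a
   full line, continue at the same starting offset in the next line. */
static void burst_wrap(uint32_t start, int nwords, uint32_t line)
{
    for (int i = 0; i < nwords; i++) {
        uint32_t lineno = start / line + (uint32_t)i / (line / 4);
        uint32_t off    = (start + (uint32_t)i * 4) % line;
        printf("0x%02X ", lineno * line + off);
    }
    putchar('\n');
}

int main(void)
{
    burst_toggle(0x0C, 6);      /* prints 0x0C 0x08 0x04 0x00 0x1C 0x18 */
    burst_wrap(0x0C, 6, 0x10);  /* prints 0x0C 0x00 0x04 0x08 0x1C 0x10 */
    return 0;
}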

Transaction examples
This is the highest-possible speed four-word write burst, terminated by the master:

0_ 1_2_ 3_ 4_ 5_ 6_ 7_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \
___ ___ ___ ___ ___
AD[31:0] ---<___X___X___X___X___>---<___>
___ ___ ___ ___ ___
C/BE[3:0]# ---<___X___X___X___X___>---<___>
| | | | ___
IRDY# ^^^^^^^^\______________/ ^^^^^
| | | | ___
TRDY# ^^^^^^^^\______________/ ^^^^^


| | | | ___
DEVSEL# ^^^^^^^^\______________/ ^^^^^
___ | | | ___
FRAME# \_______________/ | ^^^^\____
_ _ |_ |_ |_ |_ _ _
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \
0 1 2 3 4 5 6 7

On clock edge 1, the initiator starts a transaction by driving an address and command and asserting FRAME#. The other signals are idle (indicated by ^^^), pulled high by the motherboard's pull-up resistors; this may be their turnaround cycle. On cycle 2, the target asserts both DEVSEL# and TRDY#. As the initiator is also ready, a data transfer occurs.
This repeats for three more cycles, but before the last one (clock edge 5), the master deasserts FRAME#, indicating
that this is the end. On clock edge 6, the AD bus and FRAME# are undriven (turnaround cycle) and the other control
lines are driven high for 1 cycle. On clock edge 7, another initiator can start a different transaction. This is also the
turnaround cycle for the other control lines.
The equivalent read burst takes one more cycle, because the target must wait 1 cycle for the AD bus to turn around
before it may assert TRDY#:

0_ 1_ 2_ 3_ 4_ 5_ 6_ 7_ 8_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \
___ ___ ___ ___ ___
AD[31:0] ---<___>---<___X___X___X___>---<___>
___ _______ ___ ___ ___
C/BE[3:0]# ---<___X_______X___X___X___>---<___>
___ | | | | ___
IRDY# ^^^^\___________________/ ^^^^^
___ _____ | | | | ___
TRDY# ^^^^ \______________/ ^^^^^
___ | | | | ___
DEVSEL# ^^^^\___________________/ ^^^^^
___ | | | ___
FRAME# \___________________/ | ^^^^\____
_ _ _ |_ |_ |_ |_ _ _
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \
0 1 2 3 4 5 6 7 8

A high-speed burst terminated by the target will have an extra cycle at the end:

0_ 1_ 2_ 3_ 4_ 5_ 6_ 7_ 8_
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \
___ ___ ___ ___ ___
AD[31:0] ---<___>---<___X___X___X___XXXX>----
___ _______ ___ ___ ___ ___
C/BE[3:0]# ---<___X_______X___X___X___X___>----
| | | | ___
IRDY# ^^^^^^^\_______________________/
_____ | | | | _______
TRDY# ^^^^^^^ \______________/
________________ | ___
STOP# ^^^^^^^ | | | \_______/
| | | | ___
DEVSEL# ^^^^^^^\_______________________/
___ | | | | ___
FRAME# \_______________________/ ^^^^
_ _ _ |_ |_ |_ |_ _ _
CLK _/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \_/ \
0 1 2 3 4 5 6 7 8

On clock edge 6, the target indicates that it wants to stop (with data), but the initiator is already holding IRDY# low,
so there is a fifth data phase (clock edge 7), during which no data is transferred.

Parity
The PCI bus detects parity errors, but does not attempt to correct them by retrying operations; it is purely a failure
indication. Because of this, there is no need to detect a parity error before the corresponding transfer has completed,
and the PCI bus actually detects it a few cycles later. During a data phase, whichever device is driving the AD[31:0] lines computes even
parity over them and the C/BE[3:0]# lines, and sends that out the PAR line one cycle later. All access rules and
turnaround cycles for the AD bus apply to the PAR line, just one cycle later. The device listening on the AD bus
checks the received parity and asserts the PERR# (parity error) line one cycle after that. This generally generates a
processor interrupt, and the processor can search the PCI bus for the device which detected the error.
The PERR# line is only used during data phases, once a target has been selected. If a parity error is detected during
an address phase (or the data phase of a Special Cycle), the devices which observe it assert the SERR# (System
error) line.
Even when some bytes are masked by the C/BE# lines and not in use, they must still have some defined value, and
this value must be used to compute the parity.
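
Since PAR must make the 36 protected bits plus PAR itself contain an even number of ones, the driven parity bit is simply the XOR of those bits. A minimal C sketch (the helper name is illustrative):

#include <stdint.h>

/* PAR value for one address or data phase: XOR of AD[31:0] and
 * C/BE[3:0]#, so that the 37 bits including PAR have even parity. */
static uint8_t pci_par(uint32_t ad, uint8_t cbe)
{
    uint64_t bits = ((uint64_t)(cbe & 0xF) << 32) | ad; /* 36 bits */
    bits ^= bits >> 32;
    bits ^= bits >> 16;
    bits ^= bits >> 8;
    bits ^= bits >> 4;
    bits ^= bits >> 2;
    bits ^= bits >> 1;
    return (uint8_t)(bits & 1);
}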

Fast back-to-back transactions


Due to the need for a turnaround cycle between different devices driving PCI bus signals, in general it is necessary to
have an idle cycle between PCI bus transactions. However, in some circumstances it is permitted to skip this idle
cycle, going directly from the final cycle of one transfer (IRDY# asserted, FRAME# deasserted) to the first cycle of
the next (FRAME# asserted, IRDY# deasserted).
Back-to-back transactions may only be performed when:
• both transactions are by the same initiator (or there would be no time to turn around the C/BE# and FRAME# lines),
• the first transaction was a write (so there is no need to turn around the AD bus), and
• the initiator still has permission (from its GNT# input) to use the PCI bus.
Additional timing constraints may come from the need to turn around the target control lines, particularly
DEVSEL#. The target deasserts DEVSEL#, driving it high, in the cycle following the final data phase, which in the
case of back-to-back transactions is the first cycle of the address phase. The second cycle of the address phase is then
reserved for DEVSEL# turnaround, so if the target is different from the previous one, it must not assert DEVSEL#
until the third cycle (medium DEVSEL speed).
One case where this problem cannot arise is if the initiator knows somehow (presumably because the addresses share
sufficient high-order bits) that the second transfer is addressed to the same target as the previous one. In that case, it
may perform back-to-back transactions. All PCI targets must support this.
It is also possible for the target to keep track of the requirements. If it never does fast DEVSEL, they are met trivially.
If it does, it must wait until medium DEVSEL time unless:
• the current transaction was preceded by an idle cycle (is not back-to-back), or
• the previous transaction was to the same target, or
• the current transaction began with a double address cycle.
Targets which have this capability indicate it by a special bit in a PCI configuration register, and if all targets on a
bus have it, all initiators may use back-to-back transfers freely.
A subtractive decoding bus bridge must know to expect this extra delay in the event of back-to-back cycles in order
to advertise back-to-back support.
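
Gathering the initiator-side rules above, a minimal sketch of the decision (all structure and field names are illustrative, not from the specification):

#include <stdbool.h>

struct pci_b2b_state {
    bool same_initiator;   /* no C/BE#/FRAME# turnaround needed       */
    bool prev_was_write;   /* no AD bus turnaround needed             */
    bool gnt_asserted;     /* arbiter still grants us the bus         */
    bool same_target;      /* second transfer known to hit same target */
    bool all_targets_b2b;  /* every target advertises the capability bit */
};

static bool may_start_back_to_back(const struct pci_b2b_state *s)
{
    return s->same_initiator && s->prev_was_write && s->gnt_asserted &&
           (s->same_target || s->all_targets_b2b);
}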

64-bit PCI
The PCI specification includes optional 64-bit support. This is provided via an extended connector which provides
the 64-bit bus extensions AD[63:32], C/BE[7:4]#, and PAR64, and a number of additional power and ground pins.
The 64-bit PCI connector can be distinguished from a 32-bit connector by the additional 64-bit segment.
Memory transactions between 64-bit devices may use all 64 bits to double the data transfer rate. Non-memory
transactions (including configuration and I/O space accesses) may not use the 64-bit extension. During a 64-bit burst,
burst addressing works just as in a 32-bit transfer, but the address is incremented twice per data phase. The starting
address must be 64-bit aligned; i.e. AD2 must be 0. The data corresponding to the intervening addresses (with AD2
= 1) is carried on the upper half of the AD bus.
To initiate a 64-bit transaction, the initiator drives the starting address on the AD bus and asserts REQ64# at the
same time as FRAME#. If the selected target can support a 64-bit transfer for this transaction, it replies by asserting
ACK64# at the same time as DEVSEL#. Note that a target may decide on a per-transaction basis whether to allow a
64-bit transfer.
If REQ64# is asserted during the address phase, the initiator also drives the high 32 bits of the address and a copy of
the bus command on the high half of the bus. If the address requires 64 bits, a dual address cycle is still required, but
the high half of the bus carries the upper half of the address and the final command code during both address phase
cycles; this allows a 64-bit target to see the entire address and begin responding earlier.
If the initiator sees DEVSEL# asserted without ACK64#, it performs 32-bit data phases. The data which would have
been transferred on the upper half of the bus during the first data phase is instead transferred during the second data
phase. Typically, the initiator drives all 64 bits of data before seeing DEVSEL#. If ACK64# is missing, it may cease
driving the upper half of the data bus.
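
As a small illustration of this fallback, the datum staged for one 64-bit data phase is simply re-sent as two 32-bit phases, low half first (names are illustrative):

#include <stdint.h>

/* If DEVSEL# arrives without ACK64#, the datum prepared for a single
 * 64-bit data phase is delivered as two 32-bit phases instead. */
static void split_64bit_phase(uint64_t datum,
                              uint32_t *phase1_ad, uint32_t *phase2_ad)
{
    *phase1_ad = (uint32_t)datum;          /* AD[31:0], first phase    */
    *phase2_ad = (uint32_t)(datum >> 32);  /* upper half, second phase */
}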
The REQ64# and ACK64# lines are held asserted for the entire transaction save the last data phase, and deasserted at
the same time as FRAME# and DEVSEL#, respectively.
The PAR64 line operates just like the PAR line, but provides even parity over AD[63:32] and C/BE[7:4]#. It is only
valid for address phases if REQ64# is asserted. PAR64 is only valid for data phases if both REQ64# and ACK64#
are asserted.
Conventional PCI 23

Cache snooping (obsolete)


PCI originally included optional support for write-back cache coherence. This required support by cacheable
memory targets, which would listen to two pins from the cache on the bus, SDONE (snoop done) and SBO# (snoop
backoff).[23]
Because this was rarely implemented in practice, it was deleted from revision 2.2 of the PCI specification,[7][24] and
the pins re-used for SMBus access in revision 2.3.[9]
The cache would watch all memory accesses, without asserting DEVSEL#. If it noticed an access that might be
cached, it would drive SDONE low (snoop not done). A coherence-supporting target would avoid completing a data
phase (asserting TRDY#) until it observed SDONE high.
In the case of a write to data that was clean in the cache, the cache would only have to invalidate its copy, and would
assert SDONE as soon as this was established. However, if the cache contained dirty data, the cache would have to
write it back before the access could proceed, so it would assert SBO# when raising SDONE. This would signal the
active target to assert STOP# rather than TRDY#, causing the initiator to disconnect and retry the operation later. In
the meantime, the cache would arbitrate for the bus and write its data back to memory.
Targets supporting cache coherency are also required to terminate bursts before they cross cache lines.

Development tools
When developing and/or troubleshooting the PCI bus, examination of
hardware signals can be very important. Logic analyzers and bus
analyzers are tools which collect, analyze, and decode signals for users
to view in useful ways.

A PCI card that displays POST numbers during BIOS startup.

References
[1] http://www.webopedia.com/TERM/P/PCI.html
[2] Hamacher et al., Computer Organization, Fifth Edition, McGraw-Hill, 2002
[3] http://en.wikipedia.org/w/index.php?title=Conventional_PCI&action=edit
[5] VLB was designed for 486-based systems, yet even the more generic PCI was to gain prominence on that platform.
[6] PCI Family History (http://www.pcisig.com/specifications/PCI_Family_History.pdf)
[7] PCI Local Bus Specification, revision 3.0
[9] PCI Local Bus Specification, revision 2.3
[10] PCI Connector Pinout (http://www.allpinouts.org/index.php/PCI)
[11] PCI Power Management Interface Specification v1.2
[12] Using Wake-On-LAN WOL/PME to power up your computer remotely (http://web.archive.org/web/20070308143030/http://xlife.zuavra.net/index.php/60/), archive.org/zuavra.net
[17] http://www.pcisig.com/news_room/faqs/#low_profile_pci
[18] http://www.pcisig.com/specifications/conventional/conventional_pci/lowp_ecn.pdf
[19] Micro PCI, Micro AGP FAQ at iBASE (http://www.ibase.com.tw/FAQ.htm)
[21] PCI-to-PCI Bridge Architecture Specification, revision 1.1
[22] http://download.intel.com/design/chipsets/applnots/27301101.pdf
[23] PCI Local Bus Specification, revision 2.1
[24] PCI Local Bus Specification, revision 2.2

Further reading
Official Technical Specifications
• PCI-SIG (March 29, 2002). PCI Local Bus Specification: Revision 2.3 (http://www.pcisig.com/specifications/
conventional/conventional_pci_23/). ($1000 for non-members or $50 for members. PCI-SIG membership is
$3000 per year.)
• PCI-SIG (August 12, 2002). PCI Local Bus Specification: Revision 3.0 (http://www.pcisig.com/specifications/
conventional/pci_30/). ($1000 for non-members or $50 for members. PCI-SIG membership is $3000 per year.)
Books
• PCI Bus Demystified; 2nd Ed; Doug Abbott; 250 pages; 2004; ISBN 978-0-7506-7739-4.
• PCI System Architecture; 4th Ed; Tom Shanley; 832 pages; 1999; ISBN 978-0-201-30974-4.
• PCI-X System Architecture; 1st Ed; Tom Shanley; 752 pages; 2000; ISBN 978-0-201-72682-4.
• PCI & PCI-X Hardware and Software Architecture & Design; 5th Ed; Ed Solari; 1140 pages; 2001; ISBN
978-0-929392-63-9.
• PCI HotPlug Application and Design; 1st Ed; Alan Goodrum; 162 pages; 1998; ISBN 978-0-929392-60-8.

External links
• PCI Special Interest Group (PCI-SIG) (http://www.pcisig.com/home)
Technical Details
• Introduction to PCI protocol (http://electrofriends.com/articles/computer-science/protocol/
introduction-to-pci-protocol/), electrofriends.com
• PCI bus pin-out and signals (http://pinouts.ru/Slots/PCI_pinout.shtml), pinouts.ru
• PCI card dimensions (http://www.interfacebus.com/Design_Connector_PCI.html#b), interfacebus.com
Lists of Vendors / Devices / IDs
• PCI Vendor and Device Lists (http://www.pcidatabase.com/index.php), pcidatabase.com
• PCI ID Repository (http://pciids.sourceforge.net), sourceforge.net
Tips
• Brief overview of PCI power requirements and compatibility with a nice diagram. (http://www.4crawler.com/
Developer/VisualWorkstation/PCI/index.shtml)
• Good diagrams and text on how to recognize the difference between 5 volt and 3.3 volt slots. (http://www94.
web.cern.ch/hsi/s-link/devices/s32pci64/slottypes.html)
• Installing a PCI card (http://www.pchardwaretutor.com/tutor/?p=54)
Linux
• Linux with miniPCI cards (http://tuxmobil.org/minipci_linux.html)
• GNU/Linux PCI device driver check page (http://kmuto.jp/debian/hcl/index.cgi)
• Decoding PCI data and lspci output on Linux hosts (http://prefetch.net/articles/linuxpci.html)
Development Tools
• Active PCI Bus Extender (http://www.dinigroup.com/product/data/pciextender/files/PCIExtender_brief_lo.
pdf), dinigroup.com
FPGA Cores
• PCI Interface Core (http://www.latticesemi.com/products/intellectualproperty/referencedesigns/
pcitarget32bit33mhz.cfm), Lattice Semiconductor
• PCI Bridge Core (http://opencores.org/websvn,listing?repname=pci), OpenCore.org
• IP Search for PCI Bus Cores (http://www.eecs.berkeley.edu/~newton/Classes/EE290sp99/pages/hw2/pci.htm)

PCI-X
PCI-X
PCI Local Bus

A PCI-X Gigabit Ethernet expansion card.


Year created 1998

Created by IBM, HP, and Compaq

Superseded by PCI Express (2004)

Width in bits 64

Capacity 1064 MB/s

Style Parallel

Hotplugging interface yes[citation needed]

PCI-X, short for Peripheral Component Interconnect eXtended, is a computer bus and expansion card standard that
enhances the 32-bit PCI Local Bus for higher bandwidth demanded by servers. It is a double-wide version of PCI,
running at up to four times the clock speed, but is otherwise similar in electrical implementation and uses the same
protocol.[] It has been replaced in modern designs[citation needed] by the similar-sounding PCI Express (officially
abbreviated as PCIe), with a completely different connector and a very different logical design, being a single narrow
but fast serial connection instead of a number of slower connections in parallel.

Background
PCI-X was developed jointly by IBM, HP, and Compaq and submitted for approval in 1998. It was an effort to
codify proprietary server extensions to the PCI local bus to address several shortcomings in PCI, and increase
performance of high bandwidth devices, such as Gigabit Ethernet, Fibre Channel, and Ultra3 SCSI cards, and allow
processors to be interconnected in clusters.
In PCI, a transaction that cannot be completed immediately is postponed by either the target or the initiator issuing
retry-cycles, during which no other agents can use the PCI bus. Since PCI lacks a split-response mechanism to
permit the target to return data at a later time, the bus remains occupied by the target issuing retry-cycles until the
read data is ready. In PCI-X, after the master issues the request, it disconnects from the PCI bus, allowing other
agents to use the bus. The split-response containing the requested data is generated only when the target is ready to
return all of the requested data. Split-responses increase bus efficiency by eliminating retry-cycles, during which no
data can be transferred across the bus.
PCI also suffered from the relative scarcity of unique interrupt lines. With only 4 interrupt lines (INTA/B/C/D),
systems with many PCI devices require multiple functions to share an interrupt line, complicating host-side
interrupt-handling. PCI-X added Message Signaled Interrupts, an interrupt system using writes to host-memory. In
MSI-mode, the function's interrupt is not signaled by asserting an INTx line. Instead, the function performs a
memory-write to a system-configured region in host-memory. Since the content and address are configured on a
per-function basis, MSI-mode interrupts are dedicated instead of shared. A PCI-X system allows both MSI-mode
interrupts and legacy INTx interrupts to be used simultaneously (though not by the same function.)
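
From the function's side, an MSI is just a posted memory write of a configured value to a configured address. A minimal sketch, with illustrative names and the bus write modeled as a plain store:

#include <stdint.h>

/* Illustrative MSI state, as programmed by the host into the
 * function's MSI capability registers at configuration time. */
struct msi_config {
    volatile uint32_t *address;  /* system-configured target address */
    uint32_t data;               /* system-configured message value  */
};

/* Raising the interrupt: no INTx line is asserted, just a memory write. */
static void raise_msi(const struct msi_config *msi)
{
    *msi->address = msi->data;
}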
The lack of registered I/Os limited PCI to a maximum frequency of 66 MHz. PCI-X I/Os are registered to the PCI
clock, usually through means of a PLL to actively control I/O delay to the bus pins. The improvement in setup time
allows an increase in frequency to 133 MHz.
Some devices, most notably Gigabit Ethernet cards, SCSI controllers (Fibre Channel and Ultra320), and cluster
interconnects could by themselves saturate the PCI bus's 133 MB/s bandwidth. Ports using a bus speed doubled to
66 MHz and a bus width doubled to 64 bits (with the pin count increased to 184 from 124), in combination or not,
have been implemented. These extensions were loosely supported as optional parts of the PCI 2.x standards, but
device compatibility beyond the basic 133 MB/s continued to be difficult.
Developers eventually used the combined 64-bit and 66-MHz extension as a foundation, and, anticipating future
needs, established 66-MHz and 133-MHz variants with a maximum bandwidth of 532 MB/s and 1064 MB/s
respectively. The joint result was submitted as PCI-X to the PCI Special Interest Group (Special Interest Group of
the Association for Computing Machinery). Subsequent approval made it an open standard adoptable by all
computer developers. The PCI SIG controls technical support, training, and compliance testing for PCI-X. IBM,
Intel, Microelectronics, and Mylex were to develop supporting chipsets. 3Com and Adaptec were to develop
compatible peripherals. To accelerate PCI-X adoption by the industry, Compaq offered PCI-X development tools at
their Web site. All major chip makers generally now have or have had some variant of PCI-X in their product lines.

Technical description
PCI-X revised the conventional PCI standard by doubling the maximum clock speed (from 66 MHz to 133 MHz)[]
and hence the amount of data exchanged between the computer processor and peripherals. Conventional PCI
supports up to 64 bits at 66 MHz (though anything above 32 bits at 33 MHz is seen only in high-end systems) and
additional bus standards move 32 bits at 66 MHz or 64 bits at 33 MHz. The theoretical maximum amount of data
exchanged between the processor and peripherals with PCI-X is 1.06 GB/s, compared to 133 MB/s with standard
PCI. PCI-X also improves the fault tolerance of PCI, allowing, for example, faulty cards to be reinitialized or
taken offline.

Dual-port network card for a single PCI-X slot, to save on PCI-X slots and use the full potential of the PCI-X
64-bit bus.

The two most fundamental changes are:


• The shortest time between a signal appearing on the PCI bus and a response to that signal occurring on the bus
has been extended to 2 cycles, rather than 1. This allows much faster clock rates, but causes many protocol
changes:
• The ability of the conventional PCI bus protocol to insert wait states on any cycle based on the IRDY# and
TRDY# signals has been deleted; PCI-X only allows bursts to be interrupted at 128-byte boundaries.
• The initiator must deassert FRAME# two cycles before the end of the transaction.
• The initiator may not insert wait states. The target may, but only before any data is transferred, and wait states
for writes are limited to multiples of 2 clock cycles.
• Likewise, the length of a burst is decided before it begins; it may not be halted on an arbitrary cycle using the
FRAME# and STOP# signals.
• Subtractive decode DEVSEL# takes place two cycles after the "slow DEVSEL#" cycle rather than on the next
cycle.
• After the address phase (and before any device has responded with DEVSEL#), there is an additional 1-cycle
"attribute phase", during which 36 additional bits (both AD and C/BE# lines are used) of information about the
operation are transmitted. These include 16 bits of requester identification (PCI bus, device and function number),
12 bits of burst length, 5 bits of tag (for associating split transactions), and 3 bits of additional status.
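
The 36 attribute bits group as follows; a sketch of the fields only (bit positions within the attribute phase are not reproduced here):

#include <stdint.h>

/* Field grouping of the PCI-X attribute phase (36 bits in total). */
struct pcix_attributes {
    uint16_t requester_id;  /* 16 bits: bus, device, and function number */
    uint16_t burst_length;  /* 12 bits: byte count of the transaction    */
    uint8_t  tag;           /* 5 bits: associates split transactions     */
    uint8_t  status;        /* 3 bits: additional status                 */
};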

Versions
All PCI-X cards or slots have a 64-bit implementation and vary as follows:
• Cards
  • 66 MHz (added in Rev. 1.0)[]
  • 100 MHz (implemented by a 133 MHz adapter on some servers)[1]
  • 133 MHz (added in Rev. 1.0)[]
  • 266 MHz (added in Rev. 2.0)[]
  • 533 MHz (added in Rev. 2.0)[]
• Slots
  • 66 MHz (can be found on older servers)
  • 133 MHz (most common on modern servers)
  • 266 MHz (rare, being replaced by PCI-e)
  • 533 MHz (rare, being replaced by PCI-e)

3.3 V and 5 V keying of 64-bit PCI cards (both PCI and PCI-X). While most 64-bit PCI-X slots are 5 V and are
backward compatible with common 32-bit 5 V PCI cards, a number of 64-bit PCI-X slots are 3.3 V and will not
accept 5 V cards, by far the most common voltage for 32-bit PCI cards.

Mixing of 32-bit and 64-bit PCI cards in different width slots


Most 32-bit PCI cards will function properly in 64-bit PCI-X slots, but the bus speed will be limited to the clock
frequency of the slowest card, an inherent limitation of PCI's shared bus topology. For example, when a PCI 2.3,
66-MHz peripheral is installed into a PCI-X bus capable of 133 MHz, the entire bus backplane will be limited to
66 MHz. To get around this limitation, many motherboards have multiple PCI/PCI-X buses, with one bus intended
for use with high-speed PCI-X peripherals, and the other bus intended for general-purpose peripherals.
Many 64-bit PCI-X cards are designed to work in 32-bit mode if inserted in shorter 32-bit connectors, with some loss
of speed.[2][3] An example of this is the Adaptec 29160 64-bit SCSI interface card.[4] However some 64-bit PCI-X
cards do not work in standard 32-bit PCI slots.[5]
Installing a 64-bit PCI-X card in a 32-bit slot will leave the 64-bit portion of the card edge connector not connected
and overhanging, which requires that there be no motherboard components positioned so as to mechanically obstruct
the overhanging portion of the card edge connector.

PCI-X 2.0
In 2003, the PCI SIG ratified PCI-X 2.0. It adds 266-MHz and 533-MHz variants, yielding roughly 2.15 GB/s and
4.3 GB/s throughput, respectively. PCI-X 2.0 makes additional protocol revisions that are designed to help system
reliability and add Error-correcting codes to the bus to avoid re-sends.[] To deal with one of the most common
complaints of the PCI-X form factor, the 184-pin connector, 16-bit ports were developed to allow PCI-X to be used
in devices with tight space constraints. Similar to PCI-Express, PtP functions were added to allow for devices on the
bus to talk to each other without burdening the CPU or bus controller.
Despite the various theoretical advantages of PCI-X 2.0 and its backward compatibility with PCI-X and PCI devices,
it has not been implemented on a large scale (as of 2008). This lack of implementation primarily is because hardware
vendors have chosen to integrate PCI Express instead.

Confusion with PCI-Express


PCI-X is often confused by name with similar-sounding PCI Express, commonly abbreviated as PCI-E or PCIe,
although the cards themselves are totally incompatible and look different. While they are both high-speed computer
buses for internal peripherals, they differ in many ways. The first is that PCI-X is a 64-bit parallel interface that is
backward compatible with 32-bit PCI devices. PCIe is a serial point-to-point connection with a different physical
interface that was designed to supersede both PCI and PCI-X.
PCI-X and standard PCI buses may run on a PCIe bridge, similar to the way ISA buses ran on standard PCI buses in
some computers. PCIe also matches PCI-X and even PCI-X 2.0 in maximum bandwidth. PCIe 1.0 x1 offers 250
MB/s in each direction, and up to 32 lanes (x32) is currently supported, giving a maximum of 8 GB/s in each
direction.
PCI-X has technological and economical disadvantages compared to PCI Express. The 64-bit parallel interface
requires difficult trace routing, because, as with all parallel interfaces, the signals from the bus must arrive
simultaneously or within a very short window, and noise from adjacent slots may cause interference. The serial
interface of PCIe suffers fewer such problems and therefore does not require such complex and expensive designs.
PCI-X buses, like standard PCI, are half-duplex bidirectional, whereas PCIe buses are full-duplex bidirectional.
PCI-X buses run only as fast as the slowest device, whereas PCIe devices are able to independently negotiate the bus
speed. Also, PCI-X slots are longer than PCIe ×1 through ×16 slots, which makes it impossible to make short cards
for PCI-X. PCI-X slots take quite a bit of space on motherboards, which can be a problem for ATX and smaller form
factors.

References

Further reading
• PCI Bus Demystified; 2nd Ed; Doug Abbott; 250 pages; 2004; ISBN 978-0-7506-7739-4.
• PCI-X System Architecture; 1st Ed; Tom Shanley; 752 pages; 2000; ISBN 978-0-201-72682-4.
• PCI & PCI-X Hardware and Software Architecture & Design; 5th Ed; Ed Solari; 1140 pages; 2001; ISBN
978-0-929392-63-9.

External links
• Good diagrams and text on how to recognize the difference between 5 volt and 3.3 volt PCI (and PCI-X) slots.
(http://hsi.web.cern.ch/HSI/s-link/devices/s32pci64/slottypes.html)

PCI Express
PCI Express

Year created 2004

Created by Intel · Dell · HP · IBM

Supersedes AGP · PCI · PCI-X

Width in bits 1–32

Number of devices One device each on each endpoint of each connection. PCI Express switches can create multiple endpoints out of one
endpoint to allow sharing one endpoint with multiple devices.

Capacity Per lane (each direction):


• v1.x: 250 MB/s (2.5 GT/s)
• v2.x: 500 MB/s (5 GT/s)
• v3.0: 985 MB/s (8 GT/s)
• v4.0: 1969 MB/s (16 GT/s)
So, a 16-lane slot (each direction):
• v1.x: 4 GB/s (40 GT/s)
• v2.x: 8 GB/s (80 GT/s)
• v3.0: 15.75 GB/s (128 GT/s)
• v4.0: 31.51 GB/s (256 GT/s)

Style Serial

Hotplugging interface Yes, if ExpressCard, Mobile PCI Express Module or XQD card

External interface Yes, with PCI Express External Cabling, such as Thunderbolt

PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe, is a high-speed serial
computer expansion bus standard designed to replace the older PCI, PCI-X, and AGP bus standards. PCIe has
numerous improvements over the aforementioned bus standards, including higher maximum system bus throughput,
lower I/O pin count and smaller physical footprint, better performance-scaling for bus devices, a more detailed error
detection and reporting mechanism (Advanced Error Reporting (AER) [1]), and native hot-plug functionality. More
recent revisions of the PCIe standard support hardware I/O virtualization.
The PCIe electrical interface is also used in a variety of other standards, most notably ExpressCard, a laptop
expansion card interface.
Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group), a group of more
than 900 companies that also maintain the conventional PCI specifications. PCIe 3.0 is the latest standard for
expansion cards that is in production and available on mainstream personal computers.[2][3]

Applications
PCI Express operates in consumer, server, and industrial applications, as a motherboard-level interconnect (to link
motherboard-mounted peripherals), a passive backplane interconnect and as an expansion card interface for add-in
boards.
In virtually all modern (as of 2012[4]) PCs, from consumer laptops and desktops to enterprise data servers, the PCIe
bus serves as the primary motherboard-level interconnect, connecting the host system-processor with both
integrated-peripherals (surface-mounted ICs) and add-on peripherals (expansion cards.) In most of these systems, the
PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI
peripherals.

Architecture
Conceptually, the PCIe bus is like a high-speed serial replacement of the older PCI/PCI-X bus,[] an interconnect bus
using shared address/data lines.
A key difference between PCIe bus and the older PCI is the bus topology. PCI uses a shared parallel bus
architecture, where the PCI host and all devices share a common set of address/data/control lines. In contrast, PCIe is
based on point-to-point topology, with separate serial links connecting every device to the root complex (host). Due
to its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to
one master at a time, in a single direction. Furthermore, the older PCI's clocking scheme limits the bus clock to the
slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCIe bus link
supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access
across multiple endpoints.
In terms of bus protocol, PCIe communication is encapsulated in packets. The work of packetizing and
de-packetizing data and status-message traffic is handled by the transaction layer of the PCIe port (described later).
Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and
expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCIe slots are not
interchangeable. At the software level, PCIe preserves backward compatibility with PCI; legacy PCI system software
can detect and configure newer PCIe devices without explicit support for the PCIe standard, though PCIe's new
features are inaccessible.
The PCIe link between two devices can consist of anywhere from 1 to 32 lanes. In a multi-lane link, the packet data
is striped across lanes, and peak data-throughput scales with the overall link width. The lane count is automatically
negotiated during device initialization, and can be restricted by either endpoint. For example, a single-lane PCIe (×1)
card can be inserted into a multi-lane slot (×4, ×8, etc.), and the initialization cycle auto-negotiates the highest
mutually supported lane count. The link can dynamically down-configure the link to use fewer lanes, thus providing
some measure of failure tolerance in the presence of bad or unreliable lanes. The PCIe standard defines slots and
connectors for multiple widths: ×1, ×4, ×8, ×16, ×32. This allows PCIe bus to serve both cost-sensitive applications
where high throughput is not needed, as well as performance-critical applications such as 3D graphics, network (10
Gigabit Ethernet, multiport Gigabit Ethernet), and enterprise storage (SAS, Fibre Channel.)
As a point of reference, a PCI-X (133 MHz 64-bit) device and PCIe device at 4-lanes (×4), Gen1 speed have roughly
the same peak transfer rate in a single-direction: 1064 MB/sec. The PCIe bus has the potential to perform better than
the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if
communication with the PCIe peripheral is bidirectional.

Interconnect
PCIe devices communicate via a logical connection called an interconnect[] or link. A link is a point-to-point
communication channel between two PCIe ports, allowing both to send/receive ordinary PCI-requests (configuration
read/write, I/O read/write, memory read/write) and interrupts (INTx, MSI, MSI-X). At the physical level, a link is
composed of 1 or more lanes.[] Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (×1) link,
while a graphics adapter typically uses a much wider (and thus, faster) 16-lane link.

Lane
A lane is composed of two differential signaling pairs: one pair for receiving data, the other for transmitting. Thus,
each lane is composed of four wires or signal traces. Conceptually, each lane is used as a full-duplex byte stream,
transporting data packets in eight-bit 'byte' format, between endpoints of a link, in both directions simultaneously.[5]
Physical PCIe slots may contain from one to thirty-two lanes, in powers of two (1, 2, 4, 8, 16 and 32).[] Lane counts
are written with an × prefix (e.g., ×16 represents a sixteen-lane card or slot), with ×16 being the largest size in
common use.[6]

Serial bus
The bonded serial format was chosen over a traditional parallel bus format due to the latter's inherent limitations,
including single-duplex operation, excess signal count and an inherently lower bandwidth due to timing skew.
Timing skew results from separate electrical signals within a parallel interface traveling down different-length
conductors, on potentially different printed circuit board layers, at possibly different signal velocities. Despite being
transmitted simultaneously as a single word, signals on a parallel interface experience different travel times and
arrive at their destinations at different moments. When the interface clock rate is increased to a point where its
inverse (i.e., its clock period) is shorter than the largest possible time between signal arrivals, the signals no longer
arrive with sufficient coincidence to make recovery of the transmitted word possible. Since timing skew over a
parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of
megahertz.
A serial interface does not exhibit timing skew because there is only one differential signal in each direction within
each lane, and there is no external clock signal since clocking information is embedded within the serial signal. As
such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCIe is just one example of a
general trend away from parallel buses to serial interconnects. Other examples include Serial ATA, USB, SAS,
FireWire (1394) and RapidIO.
Multichannel serial design increases flexibility by allocating slow devices to fewer lanes than fast devices.

Form factors

PCI Express (standard)


A PCIe card fits into a slot of its physical size or larger
(maximum ×16), but may not fit into a smaller PCIe
slot (e.g.,a ×16 card in a ×8 slot). Some slots use
open-ended sockets to permit physically longer cards
and negotiate the best available electrical connection.
The number of lanes actually connected to a slot may
also be less than the number supported by the physical
slot size.
An example is a ×8 slot that actually only runs at ×1. These slots allow any ×1, ×2, ×4 or ×8 card, though only
running at ×1 speed. This type of socket is called a ×8 (×1 mode) slot, meaning it physically accepts up to ×8
cards but only runs at ×1 speed. This is also sometimes specified as "×size (@×capacity)" (for example,
"×16 (@×8)"). The advantage is that it can accommodate a larger range of PCIe cards without requiring
motherboard hardware to support the full transfer rate. This keeps design and implementation costs down.

Various PCI slots, from top to bottom: PCI Express ×4, PCI Express ×16, PCI Express ×1, PCI Express ×16,
Legacy PCI (32-bit).

Pinout
The following table identifies the conductors on each side of the edge connector on a PCI Express card. The solder
side of the printed circuit board (PCB) is the A side, and the component side is the B side.[7] PRSNT1# and
PRSNT2# pins must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted. The WAKE#
pin uses full voltage to wake the computer, but must be pulled high from the standby power to indicate that the card
is wake capable.[8]

PCI express ×16 connector pinout


Pin Side B Side A Comments

1 +12 V PRSNT1# Must connect to furthest-apart PRSNT2#

2 +12 V +12 V

3 +12 V +12 V

4 Ground Ground

5 SMCLK TCK SMBus and JTAG port pins

6 SMDAT TDI

7 Ground TDO

8 +3.3 V TMS

9 TRST# +3.3 V

10 +3.3 V aux +3.3 V Standby power

11 WAKE# PERST# Link reactivation; power and REFCLK stabilized

Key notch

12 Reserved Ground

13 Ground REFCLK+ Reference clock differential pair


14 HSOp(0) REFCLK- Lane 0 transmit data, + and −

15 HSOn(0) Ground

16 Ground HSIp(0) Lane 0 receive data, + and −

17 PRSNT2# HSIn(0)

18 Ground Ground

PCI ×1 board ends at pin 18

19 HSOp(1) Reserved Lane 1 transmit data, + and −

20 HSOn(1) Ground

21 Ground HSIp(1) Lane 1 receive data, + and −

22 Ground HSIn(1)

23 HSOp(2) Ground Lane 2 transmit data, + and −

24 HSOn(2) Ground

25 Ground HSIp(2) Lane 2 receive data, + and −

26 Ground HSIn(2)

27 HSOp(3) Ground Lane 3 transmit data, + and −

28 HSOn(3) Ground

29 Ground HSIp(3) Lane 3 receive data, + and −

30 Reserved HSIn(3)

31 PRSNT2# Ground

32 Ground Reserved

PCI ×4 board ends at pin 32

33 HSOp(4) Reserved Lane 4 transmit data, + and −

34 HSOn(4) Ground

35 Ground HSIp(4) Lane 4 receive data, + and −

36 Ground HSIn(4)

37 HSOp(5) Ground Lane 5 transmit data, + and −

38 HSOn(5) Ground

39 Ground HSIp(5) Lane 5 receive data, + and −

40 Ground HSIn(5)

41 HSOp(6) Ground Lane 6 transmit data, + and −

42 HSOn(6) Ground

43 Ground HSIp(6) Lane 6 receive data, + and −

44 Ground HSIn(6)

45 HSOp(7) Ground Lane 7 transmit data, + and −

46 HSOn(7) Ground

47 Ground HSIp(7) Lane 7 receive data, + and −

48 PRSNT2# HSIn(7)

49 Ground Ground

PCI ×8 board ends at pin 49


50 HSOp(8) Reserved Lane 8 transmit data, + and −

51 HSOn(8) Ground

52 Ground HSIp(8) Lane 8 receive data, + and −

53 Ground HSIn(8)

54 HSOp(9) Ground Lane 9 transmit data, + and −

55 HSOn(9) Ground

56 Ground HSIp(9) Lane 9 receive data, + and −

57 Ground HSIn(9)

58 HSOp(10) Ground Lane 10 transmit data, + and −

59 HSOn(10) Ground

60 Ground HSIp(10) Lane 10 receive data, + and −

61 Ground HSIn(10)

62 HSOp(11) Ground Lane 11 transmit data, + and −

63 HSOn(11) Ground

64 Ground HSIp(11) Lane 11 receive data, + and −

65 Ground HSIn(11)

66 HSOp(12) Ground Lane 12 transmit data, + and −

67 HSOn(12) Ground

68 Ground HSIp(12) Lane 12 receive data, + and −

69 Ground HSIn(12)

70 HSOp(13) Ground Lane 13 transmit data, + and −

71 HSOn(13) Ground

72 Ground HSIp(13) Lane 13 receive data, + and −

73 Ground HSIn(13)

74 HSOp(14) Ground Lane 14 transmit data, + and −

75 HSOn(14) Ground

76 Ground HSIp(14) Lane 14 receive data, + and −

77 Ground HSIn(14)

78 HSOp(15) Ground Lane 15 transmit data, + and −

79 HSOn(15) Ground

80 Ground HSIp(15) Lane 15 receive data, + and −

81 PRSNT2# HSIn(15)

82 Reserved Ground

Legend
Ground pin Zero volt reference

Power pin Supplies power to the PCIe card

Output pin Signal from the card to the motherboard

Input pin Signal from the motherboard to the card

Open drain May be pulled low and/or sensed by multiple cards

Sense pin Tied together on card

Reserved Not presently used, do not connect

Power
All sizes of ×4 and ×8 PCI Express cards are allowed a maximum power consumption of 25 W. All ×1 cards are
initially 10 W; full-height cards may configure themselves as 'high-power' to reach 25 W, while half-height ×1 cards
are fixed at 10 W. All sizes of ×16 cards are initially 25 W; like ×1 cards, half-height cards are limited to this number
while full-height cards may increase their power after configuration. They can use up to 75 W (3.3 V/3 A +
12 V/5.5 A), though the specification demands that the higher-power configuration be used for graphics cards only,
while cards of other purposes are to remain at 25 W.[9][10] Optional connectors add 75 W (6-pin) and/or 150 W
(8-pin) power for up to 525 W total (75 W + 3×150 W).[11]

PCI Express Mini Card


PCI Express Mini Card (also known as Mini PCI
Express, Mini PCIe, and Mini PCI-E) is a replacement
for the Mini PCI form factor, based on PCI Express. It
is developed by the PCI-SIG. The host device supports
both PCI Express and USB 2.0 connectivity, and each
card may use either standard. Most laptop computers
built after 2005 are based on PCI Express and can have
several Mini Card slots.[citation needed]

Physical dimensions
A WLAN PCI Express Mini Card and its connector.
PCI Express Mini Cards are 30×50.95 mm. There is a
52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has eight contacts, a gap
equivalent to four contacts, then a further 18 contacts. A half-length card is also specified, at 30×26.8 mm. Cards have a
thickness of 1.0 mm (excluding components).

Electrical interface

PCI Express Mini Card edge connectors provide multiple connections and buses:
• PCIe ×1
• USB 2.0
• SMBus
• Wires to diagnostics LEDs for wireless network (i.e., Wi-Fi) status on computer's chassis
• SIM card for GSM and WCDMA applications. (UIM signals on spec)
• Future extension for another PCIe lane
• 1.5 and 3.3 volt power

MiniPCI and MiniPCI Express cards in comparison.

Mini PCI Express & mSATA


Despite sharing the mini-PCI Express form factor, an mSATA slot is not necessarily electrically compatible with
Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible
systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. In principle,
however, a combined mSATA/mini-PCIe connector only requires a switch that routes the slot to either the SATA
controller or a PCIe lane, so it can be implemented on any platform.
Notebooks like Lenovo's T-Series, W-Series, and X-Series ThinkPads released in March–April 2011 have support
for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s, and the Lenovo IdeaPad
Y460/Y560 also support mSATA.[12]
Some notebooks (notably the Asus Eee PC, the MacBook Air, and the Dell mini9 and mini10) use a variant of the
PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA
and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe 1x bus intact.[] This
makes the 'miniPCIe' flash and solid state drives sold for netbooks largely incompatible with true PCI Express Mini
implementations.
Also, the typical Asus miniPCIe SSD is 71 mm long, causing the Dell 51 mm model to often be (incorrectly)
referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers,
which allows for higher storage capacity. The announced design preserves the PCIe interface, making it compatible
with the standard mini PCIe slot. No working product has yet been developed.
Intel has numerous Desktop Boards with the PCIe x1 Mini-Card slot which typically do not support mSATA SSD. A
list of Desktop Boards that natively support mSATA in the PCIe x1 Mini-Card slot (typically multiplexed with a
SATA port) is provided on the Intel Support site.[13]

PCI Express External Cabling


PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications
were released by the PCI-SIG in February 2007.[][14]
Standard cables and connectors have been defined for ×1, ×4, ×8, and ×16 link widths, with a transfer rate of
250 MB/s per lane. The PCI-SIG also expects the standard to evolve to reach 500 MB/s, as in PCI Express 2.0.
The maximum cable length remains undetermined. An example of the uses of Cabled PCI Express is a metal
enclosure, containing a number of PCI slots and PCI-to-ePCIe adapter circuitry. This device would not be possible
had it not been for the ePCIe spec.

Derivative forms
There are several other expansion card types derived from PCIe. These include:
• Low height card
• ExpressCard: successor to the PC Card form factor (with ×1 PCIe and USB 2.0; hot-pluggable)
• PCI Express ExpressModule: a hot-pluggable modular form factor defined for servers and workstations
• XQD card: a PCI Express-based flash card standard by the CompactFlash Association
• XMC: similar to the CMC/PMC form factor (VITA 42.3)
• AdvancedTCA: a complement to CompactPCI for larger applications; supports serial based backplane topologies
• AMC: a complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards
(×1, ×2, ×4 or ×8 PCIe).
• FeaturePak: a tiny expansion card format (43 × 65 mm) for embedded and small form factor applications; it
implements two ×1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O.
• Universal IO: A variant from Super Micro Computer Inc designed for use in low profile rack mounted chassis. It
has the connector bracket reversed so it cannot fit in a normal PCI Express socket, but is pin compatible and may
be inserted if the bracket is removed.
• Thunderbolt: A variant from Intel and Apple that combines DisplayPort and PCIe protocols in a form factor
compatible with Mini DisplayPort.
• Serial Digital Video Out: some 9xx series Intel chipsets allow for adding an additional output for the integrated
video into a PCIe slot (mostly dedicated and 16 lanes)

History and revisions


While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a
name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. It was first
drawn up by a technical working group named the Arapaho Work Group (AWG) that, for initial drafts, consisted
only of Intel engineers. Subsequently the AWG expanded to include industry partners.
PCIe is a technology under constant development and improvement. The current PCI Express implementation is
version 3.0.

PCI Express 1.0a


In 2003, PCI-SIG [15] introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5
gigatransfers per second (GT/s). Transfer rate is expressed in transfers per second instead of bits per second because
the number of transfers includes the overhead bits, which do not provide additional throughput.[16]
PCIe 1.x uses an 8b/10b encoding scheme that results in a 20 percent ((10-8)/10) overhead on the raw bit rate. It uses
a 2.5 GHz clock rate, therefore delivering an effective 250 000 000 bytes per second (250 MB/s) maximum data
rate.[17]

PCI Express 1.1


In 2005, PCI-SIG [15] introduced PCIe 1.1. This updated specification includes clarifications and several
improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate.

PCI Express 2.0


PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007.[18] The PCIe 2.0
standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s and the per-lane throughput rises from 250
MB/s to 500 MB/s. This means a 32-lane PCIe connector (×32) can support throughput up to 16 GB/s aggregate.
PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally
backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall,
graphics cards or motherboards designed for v2.0 will work with the other being v1.1 or v1.0a.
The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its
software architecture.[19]
Intel's first PCIe 2.0 capable chipset was the X38 and boards began to ship from various vendors (Abit, Asus,
Gigabyte) as of October 21, 2007.[20] AMD started supporting PCIe 2.0 with its AMD 700 chipset series and nVidia
started with the MCP72.[21] All of Intel's prior chipsets, including the Intel P35 chipset, supported PCIe 1.1 or
1.0a.[22]
Like 1.x, PCIe 2.0 uses an 8b/10b encoding scheme, therefore delivering, per-lane, an effective 4 Gbit/s max transfer
rate from its 5 GT/s raw data rate.

PCI Express 2.1


PCI Express 2.1 supports a large proportion of the management, support, and troubleshooting systems planned for
full implementation in PCI Express 3.0. However, the speed is the same as PCI Express 2.0.

PCI Express 3.0


PCI Express 3.0 Base specification revision 3.0 was made available in November 2010, after multiple delays. In
August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second (GT/s),
and that it would be backward compatible with existing PCIe implementations. At that time, it was also announced
that the final specification for PCI Express 3.0 would be delayed until 2011.[23] New features for the PCIe 3.0
specification include a number of optimizations for enhanced signaling and data integrity, including transmitter and
receiver equalization, PLL improvements, clock data recovery, and channel enhancements for currently supported
topologies.[]
Following a six-month technical analysis of the feasibility of scaling the PCIe interconnect bandwidth, PCI-SIG's
analysis found that 8 gigatransfers per second can be manufactured in mainstream silicon process technology,
and can be deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with
negligible impact) to the PCIe protocol stack.

PCIe 3.0 upgrades the encoding scheme to 128b/130b from the previous 8b/10b, reducing the overhead to
approximately 1.54% ((130-128)/130), as opposed to the 20% of PCIe 2.0. This is achieved by a technique called
"scrambling" that applies a known binary polynomial to a data stream in a feedback topology. Because the
scrambling polynomial is known, the data can be recovered by running it through a feedback topology using the
inverse polynomial.[] PCIe 3.0's 8 GT/s bit rate effectively delivers 985 MB/s per lane, double PCIe 2.0 bandwidth.
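
The per-lane figures for each generation follow directly from the raw transfer rate and the coding efficiency; a quick check in C (the helper is illustrative, with MB meaning 10^6 bytes):

#include <stdio.h>

/* Effective per-lane throughput in MB/s from the raw transfer rate
 * (GT/s) and the line coding's payload/coded bit ratio. */
static double lane_MBps(double gtps, int payload_bits, int coded_bits)
{
    return gtps * 1e3 * payload_bits / coded_bits / 8.0;
}

int main(void)
{
    printf("PCIe 1.x: %.0f MB/s\n", lane_MBps(2.5, 8, 10));    /* 250  */
    printf("PCIe 2.0: %.0f MB/s\n", lane_MBps(5.0, 8, 10));    /* 500  */
    printf("PCIe 3.0: %.0f MB/s\n", lane_MBps(8.0, 128, 130)); /* ~985 */
    return 0;
}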
PCI-SIG expects the PCIe 3.0 specifications to undergo rigorous technical vetting and validation before being
released to the industry. This process, which was followed in the development of prior generations of the PCIe Base
and various form factor specifications, includes the corroboration of the final electrical parameters with data derived
from test silicon and other simulations conducted by multiple members of the PCI-SIG.
On November 18, 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0
specification to its members to build devices based on this new version of PCI Express.[24]
AMD's flagship graphics card, the Radeon HD 7970, launched on January 9, 2012, was the world's first PCIe 3.0
graphics card.[25] Initial reviews suggest that the new interface does not improve graphics performance compared to
earlier PCIe 2.0, whose bandwidth, at the time of writing, is still under-utilized. However, the new interface would prove
advantageous when used for general purpose computing with technologies like OpenCL, CUDA and C++ AMP.[26]

PCI Express 4.0


On November 29, 2011, PCI-SIG announced PCI Express 4.0 featuring 16 GT/s, still based on copper technology.
Additionally, active and idle power optimizations are to be investigated. Final specifications are expected to be
released in 2014/2015.[27]

Current status
As of 2013[4] PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all
models of graphics cards released since 2010 by AMD (ATI) and NVIDIA use PCI Express. NVIDIA uses the
high-bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allows multiple
graphics cards of the same chipset and model number to run in tandem, allowing increased performance. AMD has
also developed a multi-GPU system based on PCIe called CrossFire. AMD and NVIDIA have released motherboard
chipsets that support as many as four PCIe ×16 slots, allowing tri-GPU and quad-GPU card configurations.

Extensions and future directions


Some vendors offer PCIe over fiber products,[28][29] but these generally find use only in specific cases where
transparent PCIe bridging is preferable to using a more mainstream standard (such as InfiniBand or Ethernet) that
may require additional software to support it; current implementations focus on distance rather than raw bandwidth
and typically do not implement a full x16 link.
Certain data-center applications (such as large computer clusters) require the use of fiber-optic interconnects due to
the distance and latency limitations inherent in copper cabling. Typically, a network-oriented standard such as
Ethernet or Fibre Channel suffices for these applications, but in some cases the overhead introduced by routable
protocols is undesirable and a lower-level interconnect, such as InfiniBand, RapidIO, or NUMAlink is needed.
Local-bus standards such as PCIe and HyperTransport can in principle be used for this purpose,[30] but as of 2012[4]
no major vendors offer solutions in this vein.
Thunderbolt was developed by Intel as a general-purpose high speed interface combining a x2 PCIe link with
DisplayPort and was originally intended to be an all-fiber interface, but due to early difficulties in creating a
consumer-friendly fiber interconnect, most early implementations are hybrid copper-fiber systems. A notable
exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component to connect to an
outboard PCIe display adapter. Apple has been the primary driver of Thunderbolt adoption through 2011, but
industry-wide adoption is expected to pick up, with several vendors [31] announcing new products and systems
featuring Thunderbolt.

Hardware protocol summary


The PCIe link is built around dedicated unidirectional couples of serial (1-bit), point-to-point connections known as
lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share
the same bidirectional, 32-bit or 64-bit parallel bus.
PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The Data
Link Layer is subdivided to include a media access control (MAC) sublayer. The Physical Layer is subdivided into
logical and electrical sublayers. The Physical logical-sublayer contains a physical coding sublayer (PCS). The terms
are borrowed from the IEEE 802 networking protocol model.

Physical layer
The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two
sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided
into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification
published by Intel, the PHY Interface for PCI Express (PIPE),[] defines the MAC/PCS functional partitioning and the
interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA)
layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes
implementations vary greatly among ASIC vendors, PIPE does not specify an interface between the PCS and PMA.
At the electrical level, each lane consists of two unidirectional LVDS or PCML pairs operating at 2.5 Gbit/s (for PCIe 1.x). Transmit and
receive are separate differential pairs, for a total of four data wires per lane.
A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more
lanes. All devices must minimally support single-lane (×1) link. Devices may optionally support wider links
composed of 2, 4, 8, 12, 16, or 32 lanes. This allows for very good compatibility in two ways:
• A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., an ×1 sized card
will work in any sized slot);
• A slot of a large physical size (e.g., ×16) can be wired electrically with fewer lanes (e.g., ×1, ×4, ×8, or ×12) as
long as it provides the ground connections required by the larger physical slot size.
In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards
and BIOS versions are verified to support ×1, ×4, ×8 and ×16 connectivity on the same connection.
Even though the two would be signal-compatible, it is not usually possible to place a physically larger PCIe card
(e.g., a ×16 sized card) into a smaller slot, though if the PCIe slots are altered or a riser is used, most motherboards
will allow this. Typically the technique is used for displaying to multiple monitors in a simulator configuration.
The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section
of the connector is 11.65 mm in length and contains two rows of 11 (22 pins total), while the length of the other
section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the
card going into the connector is 1.8 mm.[][]

Lanes     Pins, total   Pins, variable   Length, total   Length, variable
×1 [32]   2×18 = 36     2×7 = 14         25 mm           7.65 mm
×4        2×32 = 64     2×21 = 42        39 mm           21.65 mm
×8        2×49 = 98     2×38 = 76        56 mm           38.65 mm
×16       2×82 = 164    2×71 = 142       89 mm           71.65 mm

Data transmission
PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can
never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines.
Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive
lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware
complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the
nth byte on a link. Due to padding requirements, striping may not necessarily reduce the latency of small data packets
on a link.
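As a rough illustration of striping, this Python sketch distributes the bytes of a packet round-robin across the lanes of a link (real hardware stripes encoded symbols and applies the padding rules mentioned above, which are not modeled here):

# Simplified sketch of PCIe-style data striping: successive bytes are
# sent down successive lanes. Padding and symbol encoding are omitted.
def stripe(data, num_lanes):
    """Distribute bytes round-robin across num_lanes lanes."""
    lanes = [bytearray() for _ in range(num_lanes)]
    for i, byte in enumerate(data):
        lanes[i % num_lanes].append(byte)
    return [bytes(lane) for lane in lanes]

# Eight bytes over a x4 link: lane 0 carries bytes 0 and 4, lane 1
# carries bytes 1 and 5, and so on.
print(stripe(b"ABCDEFGH", 4))  # [b'AE', b'BF', b'CG', b'DH']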
As with other high data rate serial transmission protocols, the clock is embedded in the signal. At the physical level,
the first two generations of PCI Express (1.x and 2.0) use the 8b/10b encoding scheme to ensure that strings of
consecutive ones or consecutive zeros are limited in length. This coding prevents the receiver from losing track of
where the bit edges are. In this coding scheme every eight (uncoded) payload bits of data are replaced with 10
(encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available
bandwidth, PCI Express version 3.0 employs 128b/130b encoding instead: a similar scheme with much lower overhead
(about 1.5%).
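The overhead figures follow directly from the coding ratios, as a quick check shows:

# Line-coding overhead: 8b/10b transmits 10 bits per 8 payload bits,
# while 128b/130b transmits 130 bits per 128 payload bits.
def coding_overhead(payload_bits, coded_bits):
    """Fraction of electrical bandwidth consumed by the line code."""
    return 1 - payload_bits / coded_bits

print(f"8b/10b:    {coding_overhead(8, 10):.1%}")     # 20.0%
print(f"128b/130b: {coding_overhead(128, 130):.2%}")  # 1.54%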
Many other protocols (such as SONET) use a different form of encoding known as scrambling to embed clock
information into data streams. The PCIe specification also defines a scrambling algorithm, but it is used to reduce
electromagnetic interference (EMI) by preventing repeating data patterns in the transmitted data stream.

Data link layer


The Data Link Layer performs three vital services for the PCIe link:
1. sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
2. ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK
signaling) that explicitly requires replay of unacknowledged/bad TLPs, and
3. initialize and manage flow control credits.
On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves
as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit
cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each
outgoing TLP.
On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the
LCRC check fails (indicating a data error), or the sequence-number is out of range (non-consecutive from the last
valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and
discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence-number of the invalid
TLP, requesting re-transmission of all TLPs forward of that sequence-number. If the received TLP passes the LCRC
check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence-number
(which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK
message is sent to the remote transmitter, indicating the TLP was successfully received (and, by extension, all TLPs
with past sequence-numbers).
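The following schematic Python sketch mirrors this receive-side logic; it is illustrative only, using zlib.crc32 as a stand-in for the actual LCRC polynomial and a plain integer for the 12-bit sequence counter:

# Schematic sketch of data link layer receive-side TLP validation:
# verify the LCRC, verify the sequence number, then ACK or NAK.
import zlib

expected_seq = 0  # sequence number of the next TLP we expect

def receive_tlp(seq, payload, lcrc):
    """Validate one incoming TLP and return an ACK or NAK decision."""
    global expected_seq
    if zlib.crc32(payload) != lcrc or seq != expected_seq:
        # Bad CRC or out-of-sequence: discard this TLP (and implicitly
        # all later ones) and request replay starting at expected_seq.
        return f"NAK {expected_seq}"
    expected_seq = (expected_seq + 1) % 4096  # 12-bit modular counter
    # ...a real receiver would forward the TLP to the transaction layer...
    return f"ACK {seq}"

tlp = b"example payload"
print(receive_tlp(0, tlp, zlib.crc32(tlp)))  # ACK 0
print(receive_tlp(5, tlp, zlib.crc32(tlp)))  # NAK 1 (out of sequence)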
If the transmitter receives a NAK message, or if no acknowledgement (ACK or NAK) is received before a timeout
period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a
persistent malfunction of the device or transmission medium, the link-layer presents a reliable connection to the
transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
In addition to sending and receiving TLPs generated by the transaction layer, the data-link layer also generates and
consumes DLLPs, data link layer packets. ACK and NAK signals are communicated via DLLPs, as are some power
management messages and flow control credit information (on behalf of the transaction layer).
In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the
transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them),
and the flow control credits issued by the receiver to a transmitter. PCI Express requires all receivers to issue a
minimum number of credits, to guarantee a link allows sending PCIConfig TLPs and message TLPs.

Transaction layer
PCI Express implements split transactions (transactions with request and response separated by time), allowing the
link to carry other traffic while the target device gathers data for the response.
PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each
received buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this
device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a
TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device
finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the
credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed
credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as
wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance,
provided that the credit limit is not encountered. This assumption is generally met if each device is designed with
adequate buffer sizes.
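A minimal Python sketch of the transmit side of this scheme follows; it collapses PCIe's separate header and data credit types (tracked per traffic class) into a single toy counter, and the 12-bit counter width is chosen only for illustration:

# Toy model of credit-based flow control on the transmit side.
MOD = 1 << 12  # modular counter width (illustrative)

class CreditTransmitter:
    def __init__(self, initial_limit):
        self.credit_limit = initial_limit % MOD  # advertised by the receiver
        self.credits_consumed = 0

    def can_send(self, tlp_cost):
        # Modular comparison: sending must not push consumed past the limit.
        return (self.credit_limit - self.credits_consumed - tlp_cost) % MOD < MOD // 2

    def send(self, tlp_cost):
        if not self.can_send(tlp_cost):
            return False  # stall until the receiver returns credits
        self.credits_consumed = (self.credits_consumed + tlp_cost) % MOD
        return True

    def credits_returned(self, amount):
        # The receiver freed buffer space; raise the limit accordingly.
        self.credit_limit = (self.credit_limit + amount) % MOD

tx = CreditTransmitter(initial_limit=8)
print(tx.send(4), tx.send(4), tx.send(1))  # True True False
tx.credits_returned(4)
print(tx.send(1))                          # True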
PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation
from the physical signaling rate (2.5 Gbaud) divided by the encoding overhead (10 bits per byte). This means a
sixteen-lane (×16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While
this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which
depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate
protocol levels.
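The quoted figures can be reproduced from the signaling rates and line codes discussed earlier, as in this Python calculation:

# Theoretical per-lane, per-direction throughput from signaling rate
# and line code, for the generations discussed in this article.
GENERATIONS = {
    # name: (signaling rate in Gbit/s, payload bits, coded bits)
    "PCIe 1.x": (2.5, 8, 10),     # 8b/10b
    "PCIe 2.0": (5.0, 8, 10),     # 8b/10b
    "PCIe 3.0": (8.0, 128, 130),  # 128b/130b
}

for name, (gbps, payload, coded) in GENERATIONS.items():
    mb_per_s = gbps * 1e9 * (payload / coded) / 8 / 1e6  # bits -> MB/s
    print(f"{name}: {mb_per_s:6.1f} MB/s per lane, x16 = {mb_per_s * 16 / 1000:.2f} GB/s")

# PCIe 1.x:  250.0 MB/s per lane, x16 = 4.00 GB/s
# PCIe 3.0:  984.6 MB/s per lane, x16 = 15.75 GB/s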
Like other high data rate serial interconnect systems, PCIe has a protocol and processing overhead due to the
additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those
typical in high-performance storage controllers) can approach more than 95% of PCIe's raw (lane) data rate. These
transfers also benefit the most from an increased number of lanes (×2, ×4, etc.). But in more typical applications
(such as a USB or Ethernet controller), the traffic profile is characterized by short data packets with frequent
enforced acknowledgements. This type of traffic reduces the efficiency of the link, due to overhead from packet
parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the
same printed circuit board, it does not require the same tolerance for transmission errors as a protocol for
communication over longer distances, and thus, this loss of efficiency is not particular to PCIe.

Uses

External PCIe cards


Theoretically, external PCIe could give a notebook the graphics power of a desktop, by connecting a notebook with
any PCIe desktop video card (enclosed in its own external housing, with a strong power supply and cooling); this is
possible with an ExpressCard interface or a Thunderbolt interface. The ExpressCard interface provides bit rates of
5 Gbit/s (0.5 GB/s throughput), whereas the Thunderbolt interface provides bit rates of up to 10 Gbit/s (1 GB/s
throughput). However, high-end video cards typically use PCIe 3.0 ×16, which transfers at 128 Gbit/s (15.75 GB/s),
meaning data transfer to an external video card may be more than ten times slower than to a video card connected
directly to the motherboard.[33][34][35][36][37]
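A quick Python comparison of the raw figures quoted above:

# Ratio of internal PCIe 3.0 x16 bandwidth to the external interfaces.
external = {"ExpressCard": 0.5, "Thunderbolt": 1.0}  # GB/s throughput
internal = 15.75  # GB/s, PCIe 3.0 x16

for name, rate in external.items():
    print(f"{name}: {rate} GB/s, {internal / rate:.1f}x slower than internal")
# ExpressCard: 0.5 GB/s, 31.5x slower than internal
# Thunderbolt: 1.0 GB/s, 15.8x slower than internal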

IBM/Lenovo has also included a PCI Express slot in their Advanced Docking Station 250310U. It provides a
half-sized slot with an ×16-length socket, but only ×1 connectivity.[38] However, docking stations with expansion
slots are becoming less common as laptops gain more advanced video cards and either DVI-D interfaces or DVI-D
pass-through for port replicators and docking stations.
Additionally, Nvidia has developed Quadro Plex external PCIe video cards that can be used for advanced graphic
applications. These video cards require a PCI Express ×8 or ×16 slot for the interconnection cable.[39] In 2008, AMD
announced the ATI XGP technology, based on a proprietary cabling solution that is compatible with PCIe ×8 signal
transmissions.[40] This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Only Fujitsu
has an actual external box available, which also works on the Ferrari One. More recently, Acer launched the Dynavivid
graphics dock for XGP.
Card hubs are also in development that connect to a laptop through an ExpressCard slot, though they are currently
rare, obscure, or unavailable on the open market. These hubs can accept full-sized cards. Magma and ViDock also make
use of ExpressCard to attach external graphics solutions: the ViDock is an expansion chassis tailored specifically
for adapting PCI Express graphics cards for use with ExpressCard-equipped laptop PCs, enabling users to connect
PCIe cards externally. Development of these technologies is ongoing; other examples include the MSI GUS and the
Asus XG Station.
Recently, Intel and Apple introduced Thunderbolt, which allows external PCIe devices to transfer at double the
speed of the ExpressCard interface. However, even a mid-range external video card would still be severely throttled
by the relatively slow connection.
Thunderbolt has spurred companies to release new and faster products for connecting to a PCIe card externally.
Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at ×8 and one at ×4); this allows
for a better workstation when a notebook lacks many ports. MSI also released a new product with Thunderbolt, the
GUS II, a PCIe chassis dedicated to video cards.[41] Other products, such as Sonnet's Echo Express and mLogic's
mLink, are Thunderbolt PCIe chassis in a smaller form factor; these allow connectivity to low-profile video cards,
sound cards, network cards, memory, storage, etc.[42] However, all of these products require a Thunderbolt port,
which makes them incompatible with the vast majority of computers.

External memory
The PCI Express protocol can be used as a data interface to flash memory devices, such as memory cards and solid
state drives. Formats using it include the XQD card, developed by the CompactFlash Association, as well as SATA
Express[43] and SCSI Express.[44]
Many high-performance, enterprise-class solid state drives are designed as PCI Express RAID controller cards with
flash memory chips placed directly on the circuit board; this allows much higher transfer rates (over 1 GB/s) and
IOPS (I/O operations per second; over 1 million) compared to Serial ATA or SAS drives.
OCZ and Marvell co-developed the native PCIe solid state drive controller Kilimanjaro, which is used in OCZ's
Z-Drive 5. The Z-Drive 5 is designed for a PCIe 3.0 ×16 slot; when the highest-capacity (12 TB) version is
installed in such a slot, it can reach up to 7.2 GB/s in sequential transfers and up to 2.52 million IOPS in
random transfers.[45]

Competing protocols
Several communications standards have emerged based on high bandwidth serial architectures. These include
InfiniBand, RapidIO, HyperTransport, QPI and StarFabric. The differences are based on the tradeoffs between
flexibility and extensibility vs latency and overhead. An example of such a tradeoff is adding complex header
information to a transmitted packet to allow for complex routing (PCI Express is not capable of this). The additional
overhead reduces the effective bandwidth of the interface and complicates bus discovery and initialization software.
Making a system hot-pluggable also requires that software track network topology changes. Examples of buses
suited for this purpose are InfiniBand and StarFabric.
Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory
interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the
effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.
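This tradeoff is easy to quantify: with a fixed per-packet header, link efficiency is payload / (payload + header). The Python illustration below uses a hypothetical 24-byte header, not the actual overhead of any protocol named here:

# Illustrative effect of packet size on link efficiency when every
# packet carries a fixed amount of header and CRC. The 24-byte figure
# is a hypothetical round number.
HEADER = 24  # bytes of per-packet overhead (hypothetical)

for payload in (16, 64, 256, 1024):
    efficiency = payload / (payload + HEADER)
    print(f"{payload:5d}-byte payload: {efficiency:.0%} efficient")
# 16 bytes -> 40%, 64 -> 73%, 256 -> 91%, 1024 -> 98%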
PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a
device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the
protocol and raises its latency somewhat.

Development tools
When developing or troubleshooting the PCI Express bus, examination of hardware signals can be very important
for finding problems. Oscilloscopes, logic analyzers, and bus analyzers are tools that collect, analyze, decode,
and store signals so that people can view the high-speed waveforms at their leisure.


Further reading
• PCI Express System Architecture; 1st Ed; Ravi Budruk, Don Anderson, Tom Shanley; 1120 pages; 2003; ISBN 978-0-321-15630-3.
• Introduction to PCI Express: A Hardware and Software Developer's Guide; 1st Ed; 325 pages; 2003; ISBN 978-0-9702846-9-3.
• Complete PCI Express Reference: Design Implications for Hardware and Software Developers; 1st Ed; 1056 pages; 2003; ISBN 978-0-9717861-9-6.

External links
• PCI-SIG: PCI Express Specification Info (http://www.pcisig.com/specifications/pciexpress/)
• Introduction to PCI Protocol (http://electrofriends.com/articles/computer-science/protocol/introduction-to-pci-protocol/)
• PCI Express Base Specification Revision 1.0 (http://www.pcisig.com/specifications/pciexpress/base). PCI-SIG. 29 April 2002. (Requires PCI-SIG membership)
• PCI-SIG, the industry organization that maintains and develops the various PCI standards (http://www.pcisig.com/)
• An introduction to how PCIe works at the TLP level (http://xillybus.com/tutorials/pci-express-tlp-pcie-primer-tutorial-guide-1)
• Intel Developer Network for PCI Express Architecture (http://www.intel.com/technology/pciexpress/devnet/)
• IDT PCI Express solutions (http://www.idt.com/go/pcie)
• PCI-E Graphics Cards and Specs (http://www.gpubench.com/)
• Everything You Need to Know About the PCI Express (http://www.hardwaresecrets.com/article/Everything-You-Need-to-Know-About-the-PCI-Express/190)
