JNCIE-SP-12.a LG v2
12.a
Lab Guide
Volume 2
Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents
Lab 6: BGP Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Implementing BGP with Route Reflectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Implementing IBGP with Confederations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-77
This five-day course is designed to serve as the ultimate preparation for the Juniper Networks Certified Internet Expert—
Service Provider (JNCIE-SP) exam. The course focuses on caveats and tips useful for potential test candidates and
emphasizes hands-on practice through a series of timed lab simulations. On the final day of the course, students are
given a six-hour lab simulation emulating the testing topics and environment from the real exam. All labs in this course
are facilitated by Junosphere Cloud (formerly known as Junosphere) virtual lab devices and are available after hours for
additional practice time. This course is based on Junos OS Release 12.3.
Objectives
After successfully completing this course, you should:
• Be better prepared for success in taking the actual JNCIE-SP exam.
• Be well-versed in exam topics, environment, and conditions.
Intended Audience
This course benefits individuals who have already honed their skills on service provider technologies and could use
some practice and tips in preparation for the JNCIE-SP exam.
Course Level
JNCIE Service Provider Bootcamp is an advanced-level course.
Prerequisites
Students should have passed the Juniper Networks Certified Internet Professional—Service Provider (JNCIP-SP) written
exam or achieved an equal level of expertise through Education Services courseware and hands-on experience.
Day 1
Chapter 1: Course Introduction
Chapter 2: Exam Strategies
Chapter 3: Device Infrastructure
Implementing Device Infrastructure Lab
Chapter 4: IGP Implementation
IS-IS Implementation Lab
OSPF Implementation Lab
Day 2
Chapter 5: IGP Troubleshooting
IS-IS Troubleshooting Lab
OSPF Troubleshooting Lab
Chapter 6: BGP Implementation
BGP Implementation Lab
Chapter 7: BGP Troubleshooting
BGP Troubleshooting Lab
Day 3
Chapter 8: Multicast Implementation
Multicast Implementation and Troubleshooting Lab
Chapter 9: Class of Service Implementation
Class of Service Implementation and Troubleshooting Lab
Day 4
Chapter 10: MPLS Implementation
MPLS Implementation and Troubleshooting Lab
Chapter 11: MPLS VPN Implementation
MPLS VPN Implementation and Troubleshooting Lab
Day 5
JNCIE-SP Full Lab Simulation
Franklin Gothic: Normal text. Most of what you read in the Lab Guide and Student Guide.
CLI Input: Text that you must enter. Example: lab@San_Jose> show route
GUI Input: Text that you must enter. Example: Select File > Save, and type config.ini in the Filename field.
CLI Undefined: Text where the variable's value is at the user's discretion, or text where the variable's value as shown in the lab guide might differ from the value the user must input according to the lab topology. Examples: Type set policy policy-name. ping 10.0.x.y
GUI Undefined: Text where the variable's value is at the user's discretion, or where the value shown in the lab guide might differ from the value the user must input according to the lab topology. Example: Select File > Save, and type filename in the Filename field.
Lab 6: BGP Implementation
Overview
In this lab, you will implement a BGP network including IBGP, EBGP, and routing policies
according to the provided task list. You will have 2.5 hours to complete the lab.
By completing this lab, you will perform the following tasks:
• Configure the IBGP network. Your IBGP network must be designed using route
reflection and must contain one route reflection cluster. All IBGP sessions must use
the lo0.0 interface IP address. The failure of a link or router in the network must
not result in any connectivity issues or isolation of clients.
• All IBGP sessions in your autonomous system (AS) must be authenticated using MD5
authentication.
• Configure a BGP session to the customer 2 (C2), peer (P), and transit (T) neighbors.
Configure the EBGP session to C2 to load-balance over the two links that connect R5
and C2. Only one BGP session should be used. A static route is permissible to
complete this task.
• Configure the R2 router to use load balancing over the two peering sessions with T1
and T2 routers.
• All peer (P), transit provider (T1, T2), and C2 IPv4 prefixes should be active and
reachable on all routers in your AS.
• Routers C1 and C3 belong to the same customer, which uses IPv6 routing. Provide
communication between C1 and C3 over your AS. Both C1 and C3 routers must
be able to communicate with the Transit routers T1 and T2 using IPv6. You must
share IPv6 routes with the Transit routers over your existing IPv4 peerings. You must
use the IPv4-compatible address on your peering from R1 to T1. You are allowed to
use the IPv4-mapped address on the peerings from R2. IPv6 packet forwarding in
your AS is not permitted.
• The direct IPv6 routes on C1-R3 and C3-R4 links must be reachable from the
customer remote routers C3 and C1, respectively.
• Ensure that no more than 12 prefixes are accepted from customer routers C1 and
C3. If this limit is exceeded, the router should generate a syslog message, but the
session should remain active.
• All BGP session state changes should be logged to syslog.
TASK 2
Configure the IBGP network. Your IBGP network must be designed using
Route Reflection and must contain one Route Reflection cluster. All
IBGP sessions must use the lo0.0 interface IP address. The failure
of a link or router in the network must not result in any
connectivity issues or isolation of clients.
TASK COMPLETION
• R1:
[edit]
lab@R1# set routing-options autonomous-system 3895077211
[edit]
lab@R1# show routing-options
router-id 172.27.255.1;
autonomous-system 3895077211;
[edit]
lab@R1# edit protocols bgp group cluster-1
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# show routing-options
router-id 172.27.255.2;
autonomous-system 3895077211;
[edit]
lab@R2# edit protocols bgp group cluster-1
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set routing-options autonomous-system 3895077211
[edit]
lab@R3# show routing-options
router-id 172.27.255.3;
autonomous-system 3895077211;
[edit]
lab@R3# edit protocols bgp group cluster-1
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set routing-options autonomous-system 3895077211
[edit]
lab@R4# show routing-options
router-id 172.27.255.4;
autonomous-system 3895077211;
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# show routing-options
router-id 172.27.255.5;
autonomous-system 3895077211;
[edit]
lab@R5# edit protocols bgp group cluster-1
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Verify that IBGP sessions are established successfully.
• R1:
lab@R1> show bgp summary
Groups: 1 Peers: 2 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.3 3895077211 130 129 0 0 57:15 0/0/
0/0 0/0/0/0
172.27.255.4 3895077211 27 26 0 0 11:06 0/0/
0/0 0/0/0/0
• R2:
lab@R2> show bgp summary
Groups: 1 Peers: 2 Down peers: 0
• R3:
lab@R3> show bgp summary
Groups: 2 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.1 3895077211 131 132 0 0 58:28 0/0/
0/0 0/0/0/0
172.27.255.2 3895077211 130 130 0 0 58:24 0/0/
0/0 0/0/0/0
172.27.255.4 3895077211 29 29 0 0 12:07 0/0/
0/0 0/0/0/0
172.27.255.5 3895077211 17 15 0 0 6:24 0/0/
0/0 0/0/0/0
• R4:
lab@R4> show bgp summary
Groups: 2 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.1 3895077211 28 30 0 0 12:25 0/0/
0/0 0/0/0/0
172.27.255.2 3895077211 28 29 0 0 12:21 0/0/
0/0 0/0/0/0
172.27.255.3 3895077211 28 29 0 0 12:13 0/0/
0/0 0/0/0/0
172.27.255.5 3895077211 16 16 0 0 6:26 0/0/
0/0 0/0/0/0
• R5:
lab@R5> show bgp summary
Groups: 1 Peers: 2 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.3 3895077211 15 17 0 0 6:36 0/0/
0/0 0/0/0/0
172.27.255.4 3895077211 15 16 0 0 6:32 0/0/
0/0 0/0/0/0
TASK 3
All IBGP sessions in your autonomous system must be authenticated
using MD5 authentication.
TASK INTERPRETATION
The task is straightforward. You must configure MD5 authentication for all the IBGP sessions
on each of your routers. The task does not specify which key must be used, so you can use
any key you wish. In this detailed guide, we use “juniper”.
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set protocols bgp group cluster-1 authentication-key juniper
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# set protocols bgp group cluster-1 authentication-key juniper
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set protocols bgp group cluster-1 authentication-key juniper
[edit]
lab@R3# set protocols bgp group internal authentication-key juniper
[edit]
lab@R3# commit and-quit
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols bgp group cluster-1 authentication-key juniper
[edit]
lab@R4# set protocols bgp group internal authentication-key juniper
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set protocols bgp group cluster-1 authentication-key juniper
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
You can verify that authentication is configured by reviewing the neighbors on each router. The output
does not display the key itself. To simplify the output, use the show bgp neighbor
| match "Peer: 172.27.255|Authentication key" command.
• R1:
lab@R1> show bgp neighbor | match "Peer: 172.27.255|Authentication key"
Peer: 172.27.255.3+56190 AS 3895077211 Local: 172.27.255.1+179 AS 3895077211
Authentication key is configured
Peer: 172.27.255.4+179 AS 3895077211 Local: 172.27.255.1+56737 AS 3895077211
Authentication key is configured
• R2:
lab@R2> show bgp neighbor | match "Peer: 172.27.255|Authentication key"
Peer: 172.27.255.3+179 AS 3895077211 Local: 172.27.255.2+56748 AS 3895077211
• R3:
lab@R3> show bgp neighbor | match "Peer: 172.27.255|Authentication key"
Peer: 172.27.255.1+179 AS 3895077211 Local: 172.27.255.3+56190 AS 3895077211
Authentication key is configured
Peer: 172.27.255.2+56748 AS 3895077211 Local: 172.27.255.3+179 AS 3895077211
Authentication key is configured
Peer: 172.27.255.4+50303 AS 3895077211 Local: 172.27.255.3+179 AS 3895077211
Authentication key is configured
Peer: 172.27.255.5+61030 AS 3895077211 Local: 172.27.255.3+179 AS 3895077211
Authentication key is configured
• R4:
lab@R4> show bgp neighbor | match "Peer: 172.27.255|Authentication key"
Peer: 172.27.255.1+56737 AS 3895077211 Local: 172.27.255.4+179 AS 3895077211
Authentication key is configured
Peer: 172.27.255.2+51719 AS 3895077211 Local: 172.27.255.4+179 AS 3895077211
Authentication key is configured
Peer: 172.27.255.3+179 AS 3895077211 Local: 172.27.255.4+50303 AS 3895077211
Authentication key is configured
Peer: 172.27.255.5+57711 AS 3895077211 Local: 172.27.255.4+179 AS 3895077211
Authentication key is configured
• R5:
lab@R5> show bgp neighbor | match "Peer: 172.27.255|Authentication key"
Peer: 172.27.255.3+179 AS 3895077211 Local: 172.27.255.5+61030 AS 3895077211
Authentication key is configured
Peer: 172.27.255.4+179 AS 3895077211 Local: 172.27.255.5+57711 AS 3895077211
Authentication key is configured
TASK 4
Configure a BGP session to C2 Customer, Peer (P) and Transit (T)
neighbors. Configure the EBGP session to C2 to load balance over the
two links that connect R5 and C2. There should only be one BGP
session used. A static route is permissible to complete this task.
Note
It might take a few minutes for the BGP
session with C2 to establish. If the BGP
session does not establish immediately,
wait three to five minutes before you begin
troubleshooting the session.
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set protocols bgp group T1 type external
[edit]
lab@R1# set protocols bgp group T1 peer-as 1342930876
[edit]
lab@R1# set protocols bgp group T1 neighbor 172.27.0.34
[edit]
lab@R1# set protocols bgp group P type external
[edit]
lab@R1# set protocols bgp group P peer-as 2087403078
[edit]
lab@R1# set protocols bgp group P neighbor 172.27.0.30
[edit]
lab@R1# show protocols bgp
group cluster-1 {
type internal;
local-address 172.27.255.1;
authentication-key "$9$v5P8xd24Zk.5bs.5QFAtM8X"; ## SECRET-DATA
neighbor 172.27.255.3;
neighbor 172.27.255.4;
}
group T1 {
type external;
peer-as 1342930876;
neighbor 172.27.0.34;
}
group P {
type external;
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# set protocols bgp group T1-T2 type external
[edit]
lab@R2# set protocols bgp group T1-T2 peer-as 1342930876
[edit]
lab@R2# set protocols bgp group T1-T2 neighbor 172.27.0.66
[edit]
lab@R2# set protocols bgp group T1-T2 neighbor 172.27.0.38
[edit]
lab@R2# show protocols bgp
group cluster-1 {
type internal;
local-address 172.27.255.2;
authentication-key "$9$AMDcuBElK8db2cyb24aiHtuO"; ## SECRET-DATA
neighbor 172.27.255.3;
neighbor 172.27.255.4;
}
group T1-T2 {
type external;
peer-as 1342930876;
neighbor 172.27.0.66;
neighbor 172.27.0.38;
}
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set protocols bgp group P type external
[edit]
lab@R3# set protocols bgp group P peer-as 2087403078
[edit]
lab@R3# set protocols bgp group P neighbor 172.27.0.62
[edit]
lab@R3# show protocols bgp
group cluster-1 {
type internal;
local-address 172.27.255.3;
authentication-key "$9$XeSNVYJGifT3goT369OBxNd"; ## SECRET-DATA
cluster 0.0.0.1;
neighbor 172.27.255.1;
neighbor 172.27.255.2;
neighbor 172.27.255.5;
}
group internal {
type internal;
local-address 172.27.255.3;
authentication-key "$9$j9kmT69pRhrz3hrev7Nik."; ## SECRET-DATA
neighbor 172.27.255.4;
}
group P {
type external;
peer-as 2087403078;
neighbor 172.27.0.62;
}
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set routing-options static route 202.202.0.1/32 next-hop 172.27.0.50
[edit]
lab@R5# set routing-options static route 202.202.0.1/32 next-hop 172.27.0.74
[edit]
lab@R5# show routing-options
static {
route 202.202.0.1/32 next-hop [ 172.27.0.50 172.27.0.74 ];
[edit]
lab@R5# set protocols bgp group C2 type external
[edit]
lab@R5# set protocols bgp group C2 multihop
[edit]
lab@R5# set protocols bgp group C2 local-address 172.27.255.5
[edit]
lab@R5# set protocols bgp group C2 peer-as 65512
[edit]
lab@R5# set protocols bgp group C2 neighbor 202.202.0.1
[edit]
lab@R5# show protocols bgp
group cluster-1 {
type internal;
local-address 172.27.255.5;
authentication-key "$9$xfz-b2ZUH5Qn4aQn/CB17-V"; ## SECRET-DATA
neighbor 172.27.255.3;
neighbor 172.27.255.4;
}
group C2 {
type external;
multihop;
local-address 172.27.255.5;
peer-as 65512;
neighbor 202.202.0.1;
}
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Verify that the EBGP sessions are established successfully. You should also verify that the routes
received from the C2 neighbor at the R5 router show two physical next hops.
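Two next hops in the routing table do not by themselves guarantee that traffic is balanced; the forwarding table installs only one next hop unless a per-flow load-balancing policy is exported to it. If the verification shows a single forwarding next hop, a sketch of the additional configuration on R5 might be the following (assuming such a policy is not already preconfigured in the topology; the policy name lb is arbitrary):
[edit]
lab@R5# set policy-options policy-statement lb then load-balance per-packet
[edit]
lab@R5# set routing-options forwarding-table export lb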
• R1:
lab@R1> show bgp summary
Groups: 3 Peers: 4 Down peers: 0
• R2:
lab@R2> show bgp summary
Groups: 2 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1745 871 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.38 1342930876 576 95 0 0 42:45 11/
861/861/0 0/0/0/0
172.27.0.66 1342930876 504 96 0 0 42:49 860/
860/860/0 0/0/0/0
172.27.255.3 3895077211 153 576 0 0 1:07:47 0/24/
24/0 0/0/0/0
172.27.255.4 3895077211 145 568 0 0 1:05:04 0/0/
0/0 0/0/0/0
• R3:
lab@R3> show bgp summary
Groups: 3 Peers: 5 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1786 24 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.62 2087403078 89 90 0 0 39:57 24/
24/24/0 0/0/0/0
172.27.255.1 3895077211 639 155 0 0 1:09:00 0/884/
884/0 0/0/0/0
172.27.255.2 3895077211 578 157 0 0 1:09:09 0/871/
871/0 0/0/0/0
172.27.255.4 3895077211 148 150 0 0 1:06:10 0/0/
0/0 0/0/0/0
172.27.255.5 3895077211 146 146 0 0 1:04:46 0/7/
7/0 0/0/0/0
• R5:
lab@R5> show bgp summary
Groups: 2 Peers: 3 Down peers: 0
TASK 5
Configure the R2 router to use load balancing over the two peering
sessions with T1 and T2 routers.
TASK COMPLETION
• R2:
[edit]
lab@R2# set protocols bgp group T1-T2 multipath
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
TASK VERIFICATION
Verify that the routes received from both the T1 and T2 neighbors at the R2 router show two physical
next hops.
• R2:
lab@R2> show route protocol bgp 6/8 terse active-path
TASK 6
All peer (P), transit provider (T1, T2), and C2 IPv4 prefixes should
be active and reachable on all routers in your AS.
TASK INTERPRETATION
The EBGP next hops are not reachable through your IGP, so the border routers must apply a
next-hop-self export policy toward their IBGP neighbors.
TASK COMPLETION
• R1:
[edit]
lab@R1# edit policy-options policy-statement nhs
[edit]
lab@R1# set protocols bgp group cluster-1 export nhs
[edit]
lab@R1# show protocols bgp group cluster-1
type internal;
local-address 172.27.255.1;
authentication-key "$9$v5P8xd24Zk.5bs.5QFAtM8X"; ## SECRET-DATA
export nhs;
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# edit policy-options policy-statement nhs
[edit]
lab@R2# set protocols bgp group cluster-1 export nhs
[edit]
lab@R2# show protocols bgp group cluster-1
type internal;
local-address 172.27.255.2;
authentication-key "$9$AMDcuBElK8db2cyb24aiHtuO"; ## SECRET-DATA
export nhs;
neighbor 172.27.255.3;
neighbor 172.27.255.4;
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# edit policy-options policy-statement nhs
[edit]
lab@R3# set protocols bgp group cluster-1 export nhs
[edit]
lab@R3# show protocols bgp group cluster-1
type internal;
local-address 172.27.255.3;
authentication-key "$9$XeSNVYJGifT3goT369OBxNd"; ## SECRET-DATA
export nhs;
cluster 0.0.0.1;
neighbor 172.27.255.1;
neighbor 172.27.255.2;
neighbor 172.27.255.5;
[edit]
lab@R3# show protocols bgp group internal
type internal;
local-address 172.27.255.3;
authentication-key "$9$j9kmT69pRhrz3hrev7Nik."; ## SECRET-DATA
export nhs;
neighbor 172.27.255.4;
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit policy-options policy-statement nhs
[edit]
lab@R5# set protocols bgp group cluster-1 export nhs
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
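The body of the nhs policy is not captured above. A minimal next-hop-self export policy, assuming its only purpose is to rewrite the BGP next hop on EBGP-learned routes advertised to IBGP neighbors, might look like the following sketch (the term name is arbitrary):
[edit policy-options policy-statement nhs]
lab@R1# show
term self {
    from {
        protocol bgp;
        route-type external;
    }
    then {
        next-hop self;
    }
}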
TASK VERIFICATION
Verify that all the routers in your AS can resolve BGP next hops.
• R1:
lab@R1> show route 202.202/24
• R2:
lab@R2> show route 202.202/24
• R3:
lab@R3> show route 111.111.1/24
• R5:
lab@R5> show route 111.111.1/24
TASK 7
Routers C1 and C3 belong to the same customer, which uses IPv6
routing. Provide communication between C1 and C3 over your AS.
Both C1 and C3 routers must be able to communicate with the Transit
routers T1 and T2 using IPv6. You must share IPv6 routes with the
Transit routers over your existing IPv4 peerings. You must use the
IPv4-compatible address on your peering from R1 to T1. You are
allowed to use the IPv4-mapped address on the peerings from R2. IPv6
packet forwarding in your AS is not permitted.
TASK INTERPRETATION
In this task, IPv6 forwarding in your network is not allowed, but communication must be
provided between C1, C3, T1, and T2. 6PE is the application that solves this problem.
6PE requires MPLS in the network, which is preconfigured in your topology. Your
task is to configure 6PE on the four PE routers servicing the IPv6 topology. You must also
ensure that IPv6 routes are shared over the IPv4 sessions.
TASK COMPLETION
Configure core-facing interfaces on R1, R2, R3, and R4 to support family inet6. Configure
AS-external interfaces on R1 and R2 to support family inet6 with the appropriate
IPv4-compatible or IPv4-mapped IPv6 addresses. Configure AS-external interfaces on R3 and R4
to support family inet6 with native IPv6 addresses.
Configure IBGP on R1, R2, R3, and R4 to support 6PE signaling. Configure EBGP on R1 and R2 to
support the IPv6 family. Configure EBGP on R3 and R4 as native IPv6 BGP.
Configure MPLS on R1, R2, R3, and R4 to support IPv6 tunneling.
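For reference, the 6PE-specific statements that recur in the per-router captures below are the inet6 labeled-unicast family on the IBGP sessions and IPv6 tunneling under MPLS; on R1, for example:
set protocols bgp group cluster-1 family inet unicast
set protocols bgp group cluster-1 family inet6 labeled-unicast explicit-null
set protocols mpls ipv6-tunneling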
• R1:
[edit]
lab@R1# edit interfaces
[edit interfaces]
lab@R1# set ge-0/0/2 unit 0 family inet6 address ::172.27.0.33/126
[edit interfaces]
lab@R1# set ge-0/0/3 unit 0 family inet6
[edit interfaces]
lab@R1# set ge-0/0/6 unit 0 family inet6
[edit interfaces]
lab@R1# set ae0 unit 0 family inet6
[edit interfaces]
lab@R1# show ge-0/0/2
description "Connection to T1";
unit 0 {
family inet {
address 172.27.0.33/30;
}
family inet6 {
address ::172.27.0.33/126;
}
}
[edit interfaces]
lab@R1# show ge-0/0/3
description "Connection to R2";
unit 0 {
family inet {
address 172.27.0.1/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R1# show ae0
description "Connection to R4";
aggregated-ether-options {
lacp {
active;
}
}
unit 0 {
family inet {
address 172.27.0.10/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R1# top
[edit]
lab@R1# set protocols bgp group cluster-1 family inet unicast
[edit]
lab@R1# set protocols bgp group cluster-1 family inet6 labeled-unicast
explicit-null
[edit]
lab@R1# show protocols bgp group cluster-1
type internal;
local-address 172.27.255.1;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$v5P8xd24Zk.5bs.5QFAtM8X"; ## SECRET-DATA
export nhs;
neighbor 172.27.255.3;
neighbor 172.27.255.4;
[edit]
lab@R1# set protocols bgp group T1 accept-remote-nexthop
[edit]
lab@R1# set protocols bgp group T1 family inet unicast
[edit]
lab@R1# set protocols bgp group T1 family inet6 unicast
[edit]
lab@R1# show protocols bgp group T1
[edit]
lab@R1# edit policy-options policy-statement accept-T1-ipv6
[edit]
lab@R1# set protocols bgp group T1 import accept-T1-ipv6
[edit]
lab@R1# set protocols mpls ipv6-tunneling
[edit]
lab@R1# show protocols mpls
ipv6-tunneling;
interface ge-0/0/3.0;
interface ge-0/0/6.0;
interface ae0.0;
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# edit interfaces
[edit interfaces]
lab@R2# set ge-0/0/1 unit 0 family inet6
[edit interfaces]
lab@R2# set ge-0/0/2 unit 0 family inet6 address ::FFFF:172.27.0.37/126
[edit interfaces]
lab@R2# set ge-0/0/3 unit 0 family inet6 address ::FFFF:172.27.0.65/126
[edit interfaces]
lab@R2# set ge-0/0/4 unit 0 family inet6
[edit interfaces]
lab@R2# show ge-0/0/1
description "Connection to R1";
unit 0 {
family inet {
address 172.27.0.2/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R2# show ge-0/0/2
description "Connection to T2";
unit 0 {
family inet {
address 172.27.0.37/30;
}
family inet6 {
address ::ffff:172.27.0.37/126;
}
}
[edit interfaces]
lab@R2# show ge-0/0/3
description "Connection to T1";
[edit interfaces]
lab@R2# show ge-0/0/4
description "Connection to R4";
unit 0 {
family inet {
address 172.27.0.5/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R2# top
[edit]
lab@R2# set protocols bgp group cluster-1 family inet unicast
[edit]
lab@R2# set protocols bgp group cluster-1 family inet6 labeled-unicast
explicit-null
[edit]
lab@R2# show protocols bgp group cluster-1
type internal;
local-address 172.27.255.2;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$AMDcuBElK8db2cyb24aiHtuO"; ## SECRET-DATA
export nhs;
neighbor 172.27.255.3;
neighbor 172.27.255.4;
[edit]
lab@R2# set protocols bgp group T1-T2 accept-remote-nexthop
[edit]
lab@R2# set protocols bgp group T1-T2 family inet unicast
[edit]
lab@R2# set protocols bgp group T1-T2 family inet6 unicast
[edit]
lab@R2# show protocols bgp group T1-T2
type external;
accept-remote-nexthop;
family inet {
unicast;
}
family inet6 {
unicast;
}
peer-as 1342930876;
multipath;
neighbor 172.27.0.66;
neighbor 172.27.0.38;
[edit]
lab@R2# set protocols mpls ipv6-tunneling
[edit]
lab@R2# show protocols mpls
ipv6-tunneling;
interface ge-0/0/1.0;
interface ge-0/0/4.0;
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# edit interfaces
[edit interfaces]
lab@R3# set ge-0/0/1 unit 0 family inet6
[edit interfaces]
lab@R3# set ge-0/0/2 unit 0 family inet6
[edit interfaces]
lab@R3# set ge-0/0/3 unit 0 family inet6
[edit interfaces]
lab@R3# set ge-0/0/4 unit 0 family inet6 address 2008:4498::1/64
[edit interfaces]
lab@R3# show ge-0/0/1
[edit interfaces]
lab@R3# show ge-0/0/2
description "Connection to R4";
unit 0 {
family inet {
address 172.27.0.17/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R3# show ge-0/0/3
description "Connection to R5";
unit 0 {
family inet {
address 172.27.0.26/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R3# show ge-0/0/4
description "Connection to C1";
unit 0 {
family inet6 {
address 2008:4498::1/64;
}
}
[edit interfaces]
lab@R3# top
[edit]
lab@R3# set protocols bgp group cluster-1 family inet unicast
[edit]
lab@R3# set protocols bgp group cluster-1 family inet6 labeled-unicast
explicit-null
[edit]
lab@R3# show protocols bgp group cluster-1
type internal;
local-address 172.27.255.3;
family inet {
[edit]
lab@R3# set protocols bgp group internal family inet unicast
[edit]
lab@R3# set protocols bgp group internal family inet6 labeled-unicast explicit-null
[edit]
lab@R3# show protocols bgp group internal
type internal;
local-address 172.27.255.3;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$j9kmT69pRhrz3hrev7Nik."; ## SECRET-DATA
export nhs;
neighbor 172.27.255.4;
[edit]
lab@R3# set protocols bgp group C1 type external
[edit]
lab@R3# set protocols bgp group C1 peer-as 65432
[edit]
lab@R3# set protocols bgp group C1 as-override
[edit]
lab@R3# set protocols bgp group C1 neighbor 2008:4498::2
[edit]
lab@R3# show protocols bgp group C1
type external;
peer-as 65432;
as-override;
neighbor 2008:4498::2;
[edit]
lab@R3# show protocols mpls
ipv6-tunneling;
interface ge-0/0/1.0;
interface ge-0/0/2.0;
interface ge-0/0/3.0;
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# edit interfaces
[edit interfaces]
lab@R4# set ge-0/0/1 unit 0 family inet6
[edit interfaces]
lab@R4# set ge-0/0/2 unit 0 family inet6 address 2008:4498:0:1::1/64
[edit interfaces]
lab@R4# set ge-0/0/4 unit 0 family inet6
[edit interfaces]
lab@R4# set ge-0/0/5 unit 0 family inet6
[edit interfaces]
lab@R4# set ae0 unit 0 family inet6
[edit interfaces]
lab@R4# show ge-0/0/1
description "Connection to R2";
unit 0 {
family inet {
address 172.27.0.6/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R4# show ge-0/0/2
description "Connection to C3";
[edit interfaces]
lab@R4# show ge-0/0/4
description "Connection to R5";
unit 0 {
family inet {
address 172.27.0.21/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R4# show ge-0/0/5
description "Connection to R3";
unit 0 {
family inet {
address 172.27.0.18/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R4# show ae0
description "Connection to R1";
aggregated-ether-options {
lacp {
passive;
}
}
unit 0 {
family inet {
address 172.27.0.9/30;
}
family inet6;
family mpls;
}
[edit interfaces]
lab@R4# top
[edit]
lab@R4# set protocols bgp group cluster-1 family inet unicast
[edit]
lab@R4# set protocols bgp group cluster-1 family inet6 labeled-unicast
explicit-null
[edit]
lab@R4# set protocols bgp group internal family inet unicast
[edit]
lab@R4# set protocols bgp group internal family inet6 labeled-unicast
explicit-null
[edit]
lab@R4# show protocols bgp group internal
type internal;
local-address 172.27.255.4;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$EFaSlM7-waZj8XZjHqQzhSr"; ## SECRET-DATA
neighbor 172.27.255.3;
[edit]
lab@R4# set protocols bgp group C3 type external
[edit]
lab@R4# set protocols bgp group C3 peer-as 65432
[edit]
lab@R4# set protocols bgp group C3 as-override
[edit]
lab@R4# set protocols bgp group C3 neighbor 2008:4498:0:1::2
[edit]
lab@R4# show protocols bgp group C3
type external;
[edit]
lab@R4# set protocols mpls ipv6-tunneling
[edit]
lab@R4# show protocols mpls
ipv6-tunneling;
interface ge-0/0/1.0;
interface ge-0/0/4.0;
interface ge-0/0/5.0;
interface ae0.0;
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
TASK VERIFICATION
Verify that BGP sessions with family inet6 support are established successfully.
Verify that IPv4-mapped IPv6 loopback addresses are reachable in inet6.3 table.
Verify that R1, R2, R3, and R4 exchange IPv6 routes.
• R1:
lab@R1> show bgp summary
Groups: 3 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 930 895 0 0 0 0
inet6.0 65 33 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.30 2087403078 455 1393 0 0 3:27:21 24/
24/24/0 0/0/0/0
172.27.0.34 1342930876 583 177 0 0 1:17:52 Establ
inet.0: 860/860/860/0
inet6.0: 1/1/1/0
172.27.255.3 3895077211 84 491 0 1 34:26 Establ
inet.0: 11/35/35/0
inet6.0: 16/32/32/0
172.27.255.4 3895077211 46 455 0 1 18:07 Establ
inet.0: 0/11/11/0
inet6.0: 16/32/32/0
• R2:
lab@R2> show bgp summary
Groups: 2 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 3489 1745 0 0 0 0
inet6.0 68 34 0 0 0 0
::ffff:172.27.255.1/128
*[LDP/20] 01:00:08, metric 10
> to 172.27.0.1 via ge-0/0/1.0
::ffff:172.27.255.3/128
*[LDP/20] 01:00:08, metric 20
to 172.27.0.6 via ge-0/0/4.0, Push 299792
> to 172.27.0.1 via ge-0/0/1.0, Push 299808
::ffff:172.27.255.4/128
*[LDP/20] 01:00:08, metric 10
> to 172.27.0.6 via ge-0/0/4.0
::ffff:172.27.255.5/128
*[LDP/20] 01:00:08, metric 20
> to 172.27.0.6 via ge-0/0/4.0, Push 299776
• R3:
lab@R3> show bgp summary
Groups: 4 Peers: 6 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1786 895 0 0 0 0
inet6.0 34 33 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.62 2087403078 470 1423 0 0 3:35:12 24/
24/24/0 0/0/0/0
172.27.255.1 3895077211 527 119 0 0 50:49 Establ
inet.0: 860/884/884/0
inet6.0: 1/1/1/0
172.27.255.2 3895077211 525 529 0 0 50:45 Establ
inet.0: 11/871/871/0
inet6.0: 0/1/1/0
172.27.255.4 3895077211 550 548 0 1 34:18 Establ
inet.0: 0/0/0/0
inet6.0: 16/16/16/0
172.27.255.5 3895077211 113 527 0 0 50:41 Establ
inet.0: 0/7/7/0
2008:4498::2 65432 112 116 0 0 50:33 Establ
inet6.0: 16/16/16/0
::ffff:172.27.255.1/128
*[LDP/20] 00:51:15, metric 10
> to 172.27.0.14 via ge-0/0/1.0
::ffff:172.27.255.2/128
*[LDP/20] 00:51:15, metric 20
> to 172.27.0.14 via ge-0/0/1.0, Push 299824
to 172.27.0.18 via ge-0/0/2.0, Push 299824
::ffff:172.27.255.4/128
*[LDP/20] 00:51:15, metric 10
> to 172.27.0.18 via ge-0/0/2.0
::ffff:172.27.255.5/128
*[LDP/20] 00:51:15, metric 10
> to 172.27.0.25 via ge-0/0/3.0
• R4:
lab@R4> show bgp summary
Groups: 3 Peers: 5 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1786 895 0 0 0 0
inet6.0 34 33 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.1 3895077211 499 89 0 0 37:58 Establ
inet.0: 884/884/884/0
inet6.0: 1/1/1/0
172.27.255.2 3895077211 497 501 0 0 37:54 Establ
inet.0: 11/871/871/0
inet6.0: 0/1/1/0
172.27.255.3 3895077211 555 556 0 0 37:46 Establ
inet.0: 0/24/24/0
inet6.0: 16/16/16/0
172.27.255.5 3895077211 85 499 0 0 37:50 Establ
inet.0: 0/7/7/0
2008:4498:0:1::2 65432 84 88 0 0 37:42 Establ
inet6.0: 16/16/16/0
::ffff:172.27.255.1/128
*[LDP/20] 00:38:23, metric 5
> to 172.27.0.10 via ae0.0
::ffff:172.27.255.2/128
*[LDP/20] 00:38:23, metric 10
> to 172.27.0.5 via ge-0/0/1.0
::ffff:172.27.255.3/128
*[LDP/20] 00:38:23, metric 10
> to 172.27.0.17 via ge-0/0/5.0
::ffff:172.27.255.5/128
*[LDP/20] 00:38:23, metric 10
> to 172.27.0.22 via ge-0/0/4.0
TASK 8
The direct IPv6 routes on the C1-R3 and C3-R4 links must be reachable
from the customer remote routers C3 and C1, respectively.
TASK INTERPRETATION
You must apply a redistribution (export) policy at R3 and R4 that advertises the direct IPv6 link routes into BGP.
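The body of the IPv6-direct policy is not captured below. A minimal export policy that advertises the directly connected customer-facing IPv6 prefix into BGP might look like the following sketch (the term name is arbitrary, and a route filter could be added to limit the policy to the specific link prefix):
[edit policy-options policy-statement IPv6-direct]
lab@R3# show
term direct-v6 {
    from {
        protocol direct;
        family inet6;
    }
    then accept;
}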
TASK COMPLETION
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# edit policy-options policy-statement IPv6-direct
[edit]
lab@R3# set protocols bgp group internal export IPv6-direct
[edit]
lab@R3# show protocols bgp group internal
type internal;
local-address 172.27.255.3;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$j9kmT69pRhrz3hrev7Nik."; ## SECRET-DATA
export [ nhs IPv6-direct ];
neighbor 172.27.255.4;
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# edit policy-options policy-statement IPv6-direct
[edit]
lab@R4# set protocols bgp group internal export IPv6-direct
[edit]
lab@R4# show protocols bgp group internal
type internal;
local-address 172.27.255.4;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$EFaSlM7-waZj8XZjHqQzhSr"; ## SECRET-DATA
export IPv6-direct;
neighbor 172.27.255.3;
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
TASK VERIFICATION
Verify that the redistribution policy is applied at R3 and R4.
• R3:
lab@R3> show route advertising-protocol bgp 2008:4498::2 2008:4498:0:1::/64
• R4:
lab@R4> show route advertising-protocol bgp 2008:4498:0:1::2 2008:4498::/64
TASK 9
Ensure that no more than 12 prefixes are accepted from customer
routers C1 and C3. If this limit is exceeded, the router should
generate a syslog message, but the session should remain active.
TASK INTERPRETATION
When a prefix limit is configured in BGP, the default action is to generate a syslog message;
therefore, you must configure only the limit, without specifying any other options.
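For reference, the statement used on R4 is shown explicitly below; the equivalent statement on R3, whose set command is not captured but whose resulting configuration is shown, is:
lab@R3# set protocols bgp group C1 family inet6 unicast prefix-limit maximum 12
Adding the teardown option to the prefix-limit would instead drop the session when the limit is exceeded, which this task does not allow.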
TASK COMPLETION
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# show protocols bgp group C1
type external;
family inet6 {
unicast {
prefix-limit {
maximum 12;
}
}
}
peer-as 65432;
as-override;
neighbor 2008:4498::2;
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols bgp group C3 family inet6 unicast prefix-limit maximum 12
[edit]
lab@R4# show protocols bgp group C3
type external;
family inet6 {
unicast {
prefix-limit {
maximum 12;
}
}
}
peer-as 65432;
as-override;
neighbor 2008:4498:0:1::2;
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
TASK VERIFICATION
Verify that the router generates a syslog message when the prefix limit is exceeded.
• R4:
lab@R4> show log messages | match "Configured maximum"
Jan 26 15:00:52 R4 rpd[1267]: 2008:4498:0:1::2 (External AS 65432): Configured
maximum prefix-limit(12) exceeded for inet6-unicast nlri: 16
TASK 10
All BGP session state changes should be logged to syslog.
TASK INTERPRETATION
The task is fairly straightforward. You must configure the log-updown option under the BGP
protocol on every router.
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set protocols bgp log-updown
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# set protocols bgp log-updown
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set protocols bgp log-updown
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols bgp log-updown
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set protocols bgp log-updown
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Verify that all BGP session state changes are logged to syslog.
• R1:
lab@R1> clear bgp neighbor
Cleared 4 connections
Note
The next several steps comprise policy
tasks. To implement the BGP policy tasks
most efficiently, we discuss each policy
task in a separate step; however, the tasks
are completed together in a later step.
TASK 11
Implement an export policy that affects incoming traffic from
Transit routers. Traffic should enter your network through the T1
router.
TASK INTERPRETATION
The prefixes advertised by R2 to T2 should look inferior to the ones advertised by R1 and R2 to
T1. Routers to apply the policy: R2.
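One common way to make the advertisements toward T2 look inferior is AS-path prepending in the to-T2 export policy, as in the following sketch (the policy bodies are not shown in the completion below, and advertising a higher MED toward T2 would be an alternative):
[edit policy-options policy-statement to-T2]
lab@R2# show
term prepend {
    then as-path-prepend "3895077211 3895077211";
}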
TASK 12
Implement an import policy for the Transit routers that ensures the
outbound IPv4 traffic exits your AS at the R2 router.
TASK INTERPRETATION
The prefixes received from T1 and T2 by R2 should be advertised to the IBGP neighbors with a
better (higher) local preference. Routers to apply the policy: R2.
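A sketch of the corresponding import policy term on R2 follows; the local-preference value is arbitrary as long as it is higher than the default of 100, and the same term would appear in both the from-T1 and from-T2 policies:
[edit policy-options policy-statement from-T2]
lab@R2# show
term prefer-R2-exit {
    then local-preference 200;
}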
TASK 13
Ensure that the traffic going to the destinations advertised by the
P router prefers R3 as the exit point.
TASK INTERPRETATION
The task is straightforward: routes received from the P router at R3 should be given a higher
local preference than the copies received at R1. Routers to apply the policy: R3.
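A sketch of the matching term for R3's from-P import policy (again, any value above the default of 100 and above whatever R1 applies to routes from P will do):
[edit policy-options policy-statement from-P]
lab@R3# show
term prefer-R3-exit {
    then local-preference 200;
}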
TASK 14
Routes received from the P router should not be advertised to T1 or
T2 or vice versa.
Note
The example solution provided in this
section is one of several possible
approaches. You can accomplish the task
by designing your policies in a different way.
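One approach, consistent with the communities defined in the completion below, is to tag routes with a community on import and then reject routes carrying the P community in the export policies toward T1 and T2 (and, conversely, reject the transit communities in to-P). A sketch for R1 follows; the term names are arbitrary, and the aggregate term reflects the 172.27.0.0/16 aggregate configured below:
[edit policy-options]
lab@R1# show policy-statement from-P
term tag-P {
    then {
        community add P-routes;
        accept;
    }
}
[edit policy-options]
lab@R1# show policy-statement to-T1
term block-P {
    from community P-routes;
    then reject;
}
term send-aggregate {
    from {
        protocol aggregate;
        route-filter 172.27.0.0/16 exact;
    }
    then accept;
}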
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit routing-options
[edit routing-options]
lab@R1# set rib inet6.0 aggregate route 2008:4498::/32
[edit routing-options]
lab@R1# set aggregate route 172.27.0.0/16
[edit routing-options]
lab@R1# show
rib inet6.0 {
aggregate {
route 2008:4498::/32;
}
}
aggregate {
route 172.27.0.0/16;
}
router-id 172.27.255.1;
autonomous-system 3895077211;
[edit routing-options]
lab@R1# top edit policy-options
[edit policy-options]
lab@R1# set community C2-routes members 7211:65512
[edit policy-options]
lab@R1# set community P-routes members 7211:1111
[edit policy-options]
lab@R1# set community T1-routes members 7211:2222
[edit policy-options]
lab@R1# set community T2-routes members 7211:3333
[edit policy-options]
lab@R1# edit policy-statement from-T1
[edit policy-options]
lab@R1# edit policy-statement from-P
[edit]
lab@R1# set protocols bgp group T1 import from-T1
[edit]
lab@R1# set protocols bgp group T1 export to-T1
[edit]
lab@R1# set protocols bgp group P import from-P
[edit]
lab@R1# set protocols bgp group P export to-P
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# edit routing-options
[edit routing-options]
lab@R2# set rib inet6.0 aggregate route 2008:4498::/32
[edit routing-options]
lab@R2# set aggregate route 172.27.0.0/16
[edit routing-options]
lab@R2# show
rib inet6.0 {
aggregate {
route 2008:4498::/32;
}
}
aggregate {
route 172.27.0.0/16;
}
router-id 172.27.255.2;
autonomous-system 3895077211;
[edit routing-options]
lab@R2# top edit policy-options
[edit policy-options]
lab@R2# set community C2-routes members 7211:65512
[edit policy-options]
lab@R2# set community P-routes members 7211:1111
[edit policy-options]
lab@R2# set community T1-routes members 7211:2222
[edit policy-options]
lab@R2# set community T2-routes members 7211:3333
[edit policy-options]
lab@R2# edit policy-statement from-T1
[edit policy-options]
lab@R2# edit policy-statement to-T1
[edit policy-options]
lab@R2# edit policy-statement from-T2
[edit policy-options]
lab@R2# edit policy-statement to-T2
[edit]
lab@R2# set protocols bgp group T1-T2 neighbor 172.27.0.66 import from-T1
[edit]
lab@R2# set protocols bgp group T1-T2 neighbor 172.27.0.66 export to-T1
[edit]
lab@R2# set protocols bgp group T1-T2 neighbor 172.27.0.38 import from-T2
[edit]
lab@R2# set protocols bgp group T1-T2 neighbor 172.27.0.38 export to-T2
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# edit routing-options
[edit routing-options]
lab@R3# set aggregate route 172.27.0.0/16
[edit routing-options]
lab@R3# show
aggregate {
route 172.27.0.0/16;
}
router-id 172.27.255.3;
autonomous-system 3895077211;
[edit routing-options]
lab@R3# top edit policy-options
[edit policy-options]
lab@R3# set community C2-routes members 7211:65512
[edit policy-options]
lab@R3# set community P-routes members 7211:1111
[edit policy-options]
lab@R3# set community T1-routes members 7211:2222
[edit policy-options]
lab@R3# set community T2-routes members 7211:3333
[edit policy-options]
lab@R3# edit policy-statement from-P
[edit policy-options]
lab@R3# edit policy-statement to-P
[edit]
lab@R3# set protocols bgp group P import from-P
[edit]
lab@R3# set protocols bgp group P export to-P
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit routing-options
[edit routing-options]
lab@R5# set aggregate route 172.27.0.0/16
[edit routing-options]
lab@R5# show
static {
[edit routing-options]
lab@R5# top edit policy-options
[edit policy-options]
lab@R5# set community C2-routes members 7211:65512
[edit policy-options]
lab@R5# set community P-routes members 7211:1111
[edit policy-options]
lab@R5# set community T1-routes members 7211:2222
[edit policy-options]
lab@R5# set community T2-routes members 7211:3333
[edit policy-options]
lab@R5# edit policy-statement from-C2
[edit policy-options]
lab@R5# edit policy-statement to-C2
[edit]
lab@R5# set protocols bgp group C2 import from-C2
[edit]
lab@R5# set protocols bgp group C2 export to-C2
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Verify that all BGP policy tasks are correctly configured.
• R1:
lab@R1> show route advertising-protocol bgp 172.27.0.34 172.27.0.0/16
lab@R1> show route table inet.0 protocol bgp terse | match "(/2[5-9])|(/3[0-2])"
• R2:
lab@R2> show route advertising-protocol bgp 172.27.0.66 172.27.0.0/16
lab@R2> show route table inet.0 protocol bgp terse | match "(/2[5-9])|(/3[0-2])"
• R3:
lab@R3> show route advertising-protocol bgp 172.27.0.62 172.27.0.0/16
lab@R3> show route table inet.0 protocol bgp terse | match "(/2[5-9])|(/3[0-2])"
• R5:
lab@R5> show route advertising-protocol bgp 202.202.0.1 172.27.0.0/16
Implementing IBGP with Confederations
In this part of the lab, you reconfigure the IBGP design to use a BGP confederation with
member AS numbers 65000 through 65002 in place of route reflection.
• R1:
[edit]
lab@R1# delete routing-options autonomous-system
[edit]
lab@R1# edit routing-options
[edit routing-options]
lab@R1# set autonomous-system 65000
[edit routing-options]
lab@R1# set confederation 3895077211
[edit routing-options]
lab@R1# set confederation members 65000
[edit routing-options]
lab@R1# set confederation members 65001
[edit routing-options]
lab@R1# set confederation members 65002
[edit routing-options]
lab@R1# show
rib inet6.0 {
aggregate {
route 2008:4498::/32;
}
}
aggregate {
route 172.27.0.0/16;
}
[edit routing-options]
lab@R1# top
[edit]
lab@R1# delete protocols bgp group cluster-1
[edit]
lab@R1# edit protocols bgp group IBGP
commit complete
Exiting configuration mode
lab@R1>
• R2:
[edit]
lab@R2# delete routing-options autonomous-system
[edit]
lab@R2# edit routing-options
[edit routing-options]
lab@R2# set autonomous-system 65001
[edit routing-options]
lab@R2# set confederation 3895077211
[edit routing-options]
lab@R2# set confederation members 65000
[edit routing-options]
lab@R2# set confederation members 65001
[edit routing-options]
lab@R2# set confederation members 65002
[edit routing-options]
lab@R2# show
rib inet6.0 {
aggregate {
route 2008:4498::/32;
}
}
aggregate {
route 172.27.0.0/16;
}
router-id 172.27.255.2;
autonomous-system 65001;
confederation 3895077211 members [ 65000 65001 65002 ];
[edit routing-options]
lab@R2# top
[edit]
lab@R2# delete protocols bgp group cluster-1
[edit]
lab@R2# edit protocols bgp group IBGP
commit complete
Exiting configuration mode
lab@R2>
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# delete routing-options autonomous-system
[edit]
lab@R3# edit routing-options
[edit routing-options]
lab@R3# set autonomous-system 65000
[edit routing-options]
lab@R3# set confederation 3895077211
[edit routing-options]
lab@R3# set confederation members 65000
[edit routing-options]
lab@R3# set confederation members 65002
[edit routing-options]
lab@R3# show
aggregate {
route 172.27.0.0/16;
}
router-id 172.27.255.3;
autonomous-system 65000;
confederation 3895077211 members [ 65000 65001 65002 ];
[edit routing-options]
lab@R3# top
[edit]
lab@R3# delete protocols bgp group cluster-1
[edit]
lab@R3# delete protocols bgp group internal
[edit]
lab@R3# edit protocols bgp group IBGP
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# delete routing-options autonomous-system
[edit]
lab@R4# edit routing-options
[edit routing-options]
lab@R4# set autonomous-system 65001
[edit routing-options]
lab@R4# set confederation 3895077211
[edit routing-options]
lab@R4# set confederation members 65000
[edit routing-options]
lab@R4# set confederation members 65001
[edit routing-options]
lab@R4# show
router-id 172.27.255.4;
autonomous-system 65001;
confederation 3895077211 members [ 65000 65001 65002 ];
[edit routing-options]
lab@R4# top
[edit]
lab@R4# delete protocols bgp group internal
[edit]
lab@R4# edit protocols bgp group IBGP
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit routing-options
[edit routing-options]
lab@R5# set autonomous-system 65002
[edit routing-options]
lab@R5# set confederation 3895077211
[edit routing-options]
lab@R5# set confederation members 65000
[edit routing-options]
lab@R5# set confederation members 65001
[edit routing-options]
lab@R5# set confederation members 65002
[edit routing-options]
lab@R5# show
static {
route 202.202.0.1/32 next-hop [ 172.27.0.50 172.27.0.74 ];
}
aggregate {
route 172.27.0.0/16;
}
router-id 172.27.255.5;
autonomous-system 65002;
confederation 3895077211 members [ 65000 65001 65002 ];
[edit routing-options]
lab@R5# top
[edit]
lab@R5# set interfaces ge-0/0/1 unit 0 family inet6
[edit]
lab@R5# set interfaces ge-0/0/2 unit 0 family inet6
[edit]
lab@R5# set protocols mpls ipv6-tunneling
[edit]
lab@R5# delete protocols bgp group cluster-1
[edit]
lab@R5# edit protocols bgp group CBGP
commit complete
Exiting configuration mode
lab@R5>
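The bodies of the new IBGP and CBGP groups are not captured above. Based on the verification output that follows, a sketch of R5's confederation-BGP group might look like this (multihop and local-address are assumed because the sessions run between loopback addresses, and the address families and nhs export mirror those used earlier in the lab):
[edit protocols bgp group CBGP]
lab@R5# show
type external;
multihop;
local-address 172.27.255.5;
family inet {
    unicast;
}
family inet6 {
    labeled-unicast {
        explicit-null;
    }
}
export nhs;
neighbor 172.27.255.3 {
    peer-as 65000;
}
neighbor 172.27.255.4 {
    peer-as 65001;
}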
TASK VERIFICATION
Verify that the confederation BGP sessions are established successfully.
• R2:
lab@R2> show bgp summary
Groups: 4 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1744 1734 0 0 0 0
inet6.0 38 36 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.38 1342930876 650 171 0 0 1:15:34 Establ
inet.0: 856/861/856/0
inet6.0: 1/1/1/0
172.27.0.66 1342930876 578 172 0 0 1:15:38 Establ
inet.0: 855/860/855/0
inet6.0: 1/1/1/0
172.27.255.1 65000 1012 643 0 0 1:15:33 Establ
inet.0: 23/23/23/0
inet6.0: 17/18/18/0
172.27.255.4 65001 80 551 0 0 33:14 Establ
inet.0: 0/0/0/0
inet6.0: 17/18/18/0
• R3:
lab@R3> show bgp summary
Groups: 4 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 897 889 0 0 0 0
inet6.0 34 34 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.62 2087403078 158 161 0 0 1:11:18 16/
24/23/0 0/0/0/0
172.27.255.1 65000 626 163 0 0 1:11:10 Establ
inet.0: 866/866/866/0
inet6.0: 18/18/18/0
172.27.255.5 65002 542 575 0 0 29:20 Establ
inet.0: 7/7/7/0
inet6.0: 0/0/0/0
2008:4498::2 65432 158 162 0 0 1:11:14 Establ
inet6.0: 16/16/16/0
• R4:
lab@R4> show bgp summary
Groups: 3 Peers: 3 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 912 889 0 0 0 0
inet6.0 52 34 0 0 0 0
• R5:
lab@R5> show bgp summary
Groups: 2 Peers: 3 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1771 889 0 0 0 0
inet6.0 69 35 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.3 65000 701 708 0 0 1:44:42 Establ
inet.0: 882/882/882/0
inet6.0: 35/35/35/0
172.27.255.4 65001 707 648 0 0 1:44:33 Establ
inet.0: 0/882/882/0
inet6.0: 0/34/34/0
202.202.0.1 65512 699 699 0 0 1:44:51 7/7/
7/0 0/0/0/0
STOP Tell your instructor that you have completed this lab.
Lab 7: BGP Troubleshooting
Overview
In this lab, you will troubleshoot a BGP network including IBGP, EBGP, and routing
policies according to the provided task list. You will have 1.5 hours to complete the lab.
The initial lab setup is shown below:
• OSPF is the core IGP. The OSPF domain is divided into two areas: the R1 and R2
routers are located in Area 0, the R5 router is in Area 1, and the R3 and R4 routers are
ABRs with links in both Area 0 and Area 1.
• LDP is configured as the core MPLS protocol on all routers in your AS.
• Your IBGP network is configured using a route reflection design with one route
reflection cluster and two route reflectors: R3 and R4. All IBGP sessions use the lo0.0
interface IP address.
• All IBGP sessions in your autonomous system are authenticated using MD5
authentication with the key juniper.
• A BGP next-hop-self policy is used to resolve the BGP next hop for IPv4 prefixes on all
routers in your AS except for R4.
• EBGP over IPv4 sessions are configured to the C2 customer, peer (P), and transit (T)
neighbors. The EBGP session to C2 is configured to load balance over the two links that
connect R5 and C2 using only one BGP session.
• EBGP over IPv6 sessions are configured to the C1 and C3 routers. Communication
among C1, C3, and the transit routers T1 and T2 is provided using 6PE.
• R3 and R4 are configured with a prefix limit that allows a maximum of 12 prefixes
from customer routers C1 and C3. If this limit is exceeded, the routers generate a
syslog message, but the sessions remain active.
• All routers in your AS are configured to log BGP session state changes to syslog.
• Policies are implemented at the R1, R2, R3, and R5 routers that advertise a
summary route representing the local AS IPv4 range to the peer (P), transit provider (T1
and T2), and C2 customer neighbors.
• Policies are implemented at the R1 and R2 routers that advertise only a summary
route representing the local AS IPv6 range to the transit provider and block all other IPv6
routes.
The output shows that the sessions are configured to the peers 172.27.255.3 and
172.27.255.4 using the local address 172.27.255.1. The sessions use authentication. We first
check IP connectivity between the peers.
ops@R1> ping 172.27.255.3 source 172.27.255.1 count 2
PING 172.27.255.3 (172.27.255.3): 56 data bytes
64 bytes from 172.27.255.3: icmp_seq=0 ttl=64 time=3.468 ms
64 bytes from 172.27.255.3: icmp_seq=1 ttl=64 time=5.449 ms
login: lab
Password:
• R2:
R2 (ttyd0)
login: lab
Password:
• R3:
R3 (ttyd0)
login: lab
Password:
• R4:
R4 (ttyd0)
login: lab
Password:
login: lab
Password:
TASK 4
Using CLI operational and configuration mode, ensure that all IBGP
and EBGP sessions are up and running and support the appropriate
address families. You are not allowed to change the OSPF area design.
TASK INTERPRETATION
The task is straightforward.
TASK COMPLETION
• R1:
Synopsis: The probable source of the peering problem is authentication.
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set protocols bgp group IBGP authentication-key juniper
[edit]
lab@R1# commit
commit complete
[edit]
lab@R1# run show bgp summary
Groups: 3 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 908 879 0 0 0 0
inet6.0 1 1 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.30 2087403078 659 666 0 0 4:59:21 23/
24/23/0 0/0/0/0
172.27.0.34 1342930876 1070 669 0 0 4:59:17 Establ
inet.0: 855/860/855/0
inet6.0: 1/1/1/0
172.27.255.3 3895077211 34 511 0 0 13:05 Establ
inet.0: 1/24/24/0
inet6.0: 0/0/0/0
172.27.255.4 3895077211 32 441 0 0 12:50 Establ
inet.0: 0/0/0/0
inet6.0: 0/0/0/0
The output shows that all sessions are established now and the peers negotiated appropriate
address families.
• R2:
Synopsis: The R2 loopback address is not advertised into OSPF, so the IBGP sessions to R2
cannot establish.
[edit]
lab@R2# set protocols ospf area 0 interface lo0.0
[edit]
lab@R2# commit
commit complete
[edit]
lab@R2# run show bgp summary
Groups: 3 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 3478 1735 0 0 0 0
inet6.0 2 1 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.38 1342930876 1155 682 0 0 5:05:58 856/
861/856/0 0/0/0/0
172.27.0.66 1342930876 1154 682 0 0 5:05:54 855/
860/855/0 0/0/0/0
172.27.255.3 3895077211 416 481 0 0 58 Establ
inet.0: 1/879/879/0
inet6.0: 1/1/1/0
172.27.255.4 3895077211 413 410 0 0 6 Establ
inet.0: 23/878/878/0
inet6.0: 0/1/1/0
The output shows that all sessions are established now, but EBGP sessions have negotiated only
the IPv4 address family.
[edit]
lab@R2# run show bgp neighbor 172.27.0.66
Peer: 172.27.0.66+52140 AS 1342930876 Local: 172.27.0.65+179 AS 3895077211
Type: External State: Established Flags: <Sync>
Last State: OpenConfirm Last Event: RecvKeepAlive
Last Error: None
Export: [ to-T1 ] Import: [ from-T1 ]
Options: <Preference LogUpDown PeerAS Multipath Refresh>
Holdtime: 90 Preference: 170
Number of flaps: 0
Peer ID: 111.111.0.1 Local ID: 172.27.255.2 Active Holdtime: 90
Keepalive Interval: 30 Peer index: 0
BFD: disabled, down
Local Interface: ge-0/0/3.0
NLRI for restart configured on peer: inet-unicast
NLRI advertised by peer: inet-unicast inet6-unicast
NLRI for this session: inet-unicast
Peer supports Refresh capability (2)
Restart time configured on the peer: 120
Stale routes from peer are kept for: 300
Restart time requested by this peer: 120
NLRI that peer supports restart for: inet-unicast inet6-unicast
NLRI that restart is negotiated for: inet-unicast
NLRI of received end-of-rib markers: inet-unicast
NLRI of all end-of-rib markers sent: inet-unicast
Peer supports 4 byte AS extension (peer-as 1342930876)
Peer does not support Addpath
Table inet.0 Bit: 10001
RIB State: BGP restart is complete
Send state: in sync
Active prefixes: 855
Received prefixes: 860
Accepted prefixes: 855
Suppressed due to damping: 0
Advertised prefixes: 25
Last traffic (seconds): Received 3 Sent 21 Checked 48
Input messages:  Total 1164    Updates 482    Refreshes 0    Octets 43654
Output messages: Total 692     Updates 3      Refreshes 0    Octets 13406
Output Queue[0]: 0
[edit]
lab@R2# run show bgp neighbor 172.27.0.38
Peer: 172.27.0.38+51554 AS 1342930876 Local: 172.27.0.37+179 AS 3895077211
Type: External State: Established Flags: <Sync>
Last State: OpenConfirm Last Event: RecvKeepAlive
Last Error: None
Export: [ to-T2 ] Import: [ from-T2 ]
Options: <Preference LogUpDown PeerAS Multipath Refresh>
Holdtime: 90 Preference: 170
Number of flaps: 0
Peer ID: 111.111.0.2 Local ID: 172.27.255.2 Active Holdtime: 90
Keepalive Interval: 30 Peer index: 0
BFD: disabled, down
Local Interface: ge-0/0/2.0
NLRI for restart configured on peer: inet-unicast
NLRI advertised by peer: inet-unicast inet6-unicast
NLRI for this session: inet-unicast
Peer supports Refresh capability (2)
Restart time configured on the peer: 120
Stale routes from peer are kept for: 300
Restart time requested by this peer: 120
NLRI that peer supports restart for: inet-unicast inet6-unicast
NLRI that restart is negotiated for: inet-unicast
NLRI of received end-of-rib markers: inet-unicast
NLRI of all end-of-rib markers sent: inet-unicast
Peer supports 4 byte AS extension (peer-as 1342930876)
Peer does not support Addpath
Table inet.0 Bit: 10000
RIB State: BGP restart is complete
Send state: in sync
Active prefixes: 856
Received prefixes: 861
Accepted prefixes: 856
Suppressed due to damping: 0
Advertised prefixes: 25
Last traffic (seconds): Received 23 Sent 21 Checked 11
[edit]
lab@R2# show protocols bgp group T1-T2
type external;
peer-as 1342930876;
multipath;
neighbor 172.27.0.66 {
import from-T1;
export to-T1;
}
neighbor 172.27.0.38 {
import from-T2;
export to-T2;
}
[edit]
lab@R2# set protocols bgp group T1-T2 family inet unicast
[edit]
lab@R2# set protocols bgp group T1-T2 family inet6 unicast
[edit]
lab@R2# commit
commit complete
[edit]
lab@R2# run show bgp summary
Groups: 3 Peers: 4 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 3478 1735 0 0 0 0
inet6.0 4 2 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.38 1342930876 488 8 0 0 1:05 Establ
inet.0: 856/861/856/0
inet6.0: 1/1/1/0
172.27.0.66 1342930876 416 8 0 0 1:09 Establ
inet.0: 855/860/855/0
inet6.0: 1/1/1/0
172.27.255.3 3895077211 436 956 0 0 10:08 Establ
inet.0: 1/879/879/0
inet6.0: 0/1/1/0
172.27.255.4 3895077211 433 885 0 0 9:16 Establ
inet.0: 23/878/878/0
inet6.0: 0/1/1/0
The output now shows that all sessions are established and the peers negotiated appropriate
address families.
• R3:
Synopsis: The sources of the peering problems are as follows:
– R1: authentication key mismatch;
– R2: bidirectional IGP reachability; the R2 loopback address is not known in OSPF;
– R4: no IBGP session configured;
– R5: bidirectional IGP reachability, most probably an incorrect routing
configuration on R5;
– C1: misconfigured BGP parameters.
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set protocols bgp group Clients neighbor 172.27.255.4
[edit]
lab@R3# set protocols bgp group C1 traceoptions file bgp-trace.log
[edit]
lab@R3# set protocols bgp group C1 traceoptions flag open detail
[edit]
lab@R3# commit
commit complete
[edit]
lab@R3# run show log bgp-trace.log
Jan 27 10:28:34 trace_on: Tracing to "/var/log/bgp-trace.log" started
Jan 27 10:30:23.428658 advertising receiving-speaker only capability to neighbor
2008:4498::2 (External AS 65432)
Jan 27 10:30:23.428743 bgp_send: sending 59 bytes to 2008:4498::2 (External AS
65432)
Jan 27 10:30:23.428772
Jan 27 10:30:23.428772 BGP SEND 2008:4498::1+64511 -> 2008:4498::2+179
Jan 27 10:30:23.428800 BGP SEND message type 1 (Open) length 59
Jan 27 10:30:23.428827 BGP SEND version 4 as 23456 holdtime 90 id 172.27.255.3
parmlen 30
Jan 27 10:30:23.428851 BGP SEND MP capability AFI=2, SAFI=1
Jan 27 10:30:23.428874 BGP SEND Refresh capability, code=128
Jan 27 10:30:23.428898 BGP SEND Refresh capability, code=2
Jan 27 10:30:23.428924 BGP SEND Restart capability, code=64, time=120, flags=
Jan 27 10:30:23.430615 BGP SEND 4 Byte AS-Path capability (65), as_num 3895077211
Jan 27 10:30:23.434276 advertising receiving-speaker only capability to neighbor
2008:4498::2 (External AS 65432)
Jan 27 10:30:23.437365
Jan 27 10:30:23.437365 BGP RECV 2008:4498::2+179 -> 2008:4498::1+64511
Jan 27 10:30:23.437425 BGP RECV message type 1 (Open) length 59
Jan 27 10:30:23.437451 BGP RECV version 4 as 65422 holdtime 90 id 201.201.0.1
parmlen 30
Jan 27 10:30:23.437475 BGP RECV MP capability AFI=2, SAFI=1
Jan 27 10:30:23.437498 BGP RECV Refresh capability, code=128
Jan 27 10:30:23.437522 BGP RECV Refresh capability, code=2
Jan 27 10:30:23.437546 BGP RECV Restart capability, code=64, time=120, flags=
Jan 27 10:30:23.437570 BGP RECV 4 Byte AS-Path capability (65), as_num 65422
Jan 27 10:30:23.437636 bgp_process_open:2691: NOTIFICATION sent to 2008:4498::2
(External AS 65432): code 2 (Open Message Error) subcode 2 (bad peer AS number),
Reason: peer 2008:4498::2 (External AS 65432) claims 65422, 65432 configured
Jan 27 10:30:23.437662 bgp_send: sending 21 bytes to 2008:4498::2 (External AS
65432)
Jan 27 10:30:23.437689
Jan 27 10:30:23.437689 BGP SEND 2008:4498::1+64511 -> 2008:4498::2+179
Jan 27 10:30:23.437715 BGP SEND message type 3 (Notification) length 21
Jan 27 10:30:23.437739 BGP SEND Notification code 2 (Open Message Error) subcode 2
(bad peer AS number)
The output shows that the remote peer (C1) is configured with AS 65422 rather than the expected
65432. You cannot change the EBGP peer configuration, so you must change the peer-as setting on R3.
[edit]
lab@R3# set protocols bgp group C1 peer-as 65422
[edit]
lab@R3# commit
commit complete
[edit]
lab@R3# run show bgp summary
Groups: 3 Peers: 6 Down peers: 2
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1768 890 0 0 0 0
inet6.0 18 17 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.0.62 2087403078 733 1231 0 0 5:32:31 24/
24/24/0 0/0/0/0
172.27.255.1 3895077211 583 111 0 0 45:59 Establ
inet.0: 855/878/878/0
inet6.0: 1/1/1/0
172.27.255.2 3895077211 994 475 0 0 27:31 Establ
inet.0: 11/866/866/0
inet6.0: 0/1/1/0
172.27.255.4 3895077211 0 0 0 0 9:14 Active
172.27.255.5 3895077211 0 0 0 0 5:50:16 Connect
2008:4498::2 65422 7 9 0 0 2:08 Establ
inet6.0: 16/16/16/0
The output shows that all sessions except for R4 (172.27.255.4) and R5 (172.27.255.5) are
established successfully and the peers negotiated the required address families.
• R4:
Synopsis: The sources of the peering problems are as follows:
– R1: authentication key mismatch;
– R2: bidirectional IGP reachability; the R2 loopback address is not known in OSPF;
– R3: no IBGP session configured;
– R5: bidirectional IGP reachability, most probably an incorrect routing configuration
on R5;
– C3: misconfigured BGP prefix-limit action.
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols bgp group Clients neighbor 172.27.255.3
[edit]
lab@R4# delete protocols bgp group C3 family inet6 unicast prefix-limit teardown
[edit]
lab@R4# commit
commit complete
[edit]
lab@R4# run show bgp summary
Groups: 2 Peers: 5 Down peers: 1
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1768 890 0 0 0 0
inet6.0 34 33 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.1 3895077211 554 150 0 0 1:03:34 Establ
inet.0: 878/878/878/0
inet6.0: 1/1/1/0
172.27.255.2 3895077211 965 514 0 0 44:28 Establ
inet.0: 11/866/866/0
inet6.0: 0/1/1/0
172.27.255.3 3895077211 420 420 0 0 2:31 Establ
inet.0: 1/24/24/0
inet6.0: 16/16/16/0
172.27.255.5 3895077211 0 0 0 0 5:57:10 Connect
2008:4498:0:1::2 65432 8 11 0 0 2:27 Establ
inet6.0: 16/16/16/0
The output shows that all sessions except for R5 (172.27.255.5) are established successfully
and the peers negotiated the required address families.
• R5:
Synopsis: The sources of the peering problems are as follows:
– R3: incorrectly configured routing;
– R4: incorrectly configured routing;
– C2: misconfigured BGP parameters.
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set protocols bgp group C2 traceoptions file bgp-trace.log
[edit]
lab@R5# set protocols bgp group C2 traceoptions flag all detail
[edit]
lab@R5# delete routing-options aggregate route 172.27.0.0/16
[edit]
lab@R5# run show log bgp-trace.log
Jan 27 10:59:32 trace_on: Tracing to "/var/log/bgp-trace.log" started
Jan 27 10:59:55.538658 advertising receiving-speaker only capability to neighbor
202.202.0.1 (External AS 65512)
Jan 27 10:59:55.538714 bgp_4byte_aspath_add_cap():155 AS4-Peer 202.202.0.1
(External AS 65512)(SEND): 4 byte AS capability added, AS 3895077211
The output shows that R5 cannot send any BGP messages to the 202.202.0.1 peer.
lab@R5# show protocols bgp group C2
type external;
traceoptions {
file bgp-trace.log;
flag all detail;
}
local-address 172.27.255.5;
export to-C2;
peer-as 65512;
neighbor 202.202.0.1;
The session is an EBGP session, but the multihop setting is missing from the configuration.
[edit]
lab@R5# set protocols bgp group C2 multihop
[edit]
lab@R5# commit
commit complete
[edit]
lab@R5# run show bgp summary
Groups: 2 Peers: 3 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1787 890 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn
State|#Active/Received/Accepted/Damped...
172.27.255.3 3895077211 433 23 0 0 9:14 867/
890/890/0 0/0/0/0
172.27.255.4 3895077211 434 24 0 0 9:21 16/
890/890/0 0/0/0/0
202.202.0.1 65512 519 515 0 0 1:36 7/7/
7/0 0/0/0/0
The output shows that all sessions are established successfully and the peers negotiated the
required address families.
TASK 5
All Peer (P), Transit provider (T1, T2), and C2 IPv4 prefixes, except
for prefixes with a mask shorter than /8 or longer than /24, must
be active and reachable on all routers in your AS.
inet.0: 917 destinations, 3481 routes (912 active, 0 holddown, 1700 hidden)
+ = Active Route, - = Last Active, * = Both
inet.0: 917 destinations, 2636 routes (912 active, 0 holddown, 1700 hidden)
Prefix Nexthop MED Lclpref AS path
* 35.0.0.0/8 172.27.0.34 100 1342930876 8918
237 I
The output shows that the 35/8 route is received from R3 with the original BGP next hop of
172.27.0.34. The problem is a missing next-hop self policy on R1.
lab@R2> show route 172.27.0.34
inet.0: 917 destinations, 2636 routes (912 active, 0 holddown, 1700 hidden)
+ = Active Route, - = Last Active, * = Both
inet.0: 917 destinations, 3481 routes (912 active, 0 holddown, 1700 hidden)
Prefix Nexthop MED Lclpref AS path
* 35.0.0.0/8 172.27.255.4 100 1342930876 8918
237 I
The output shows another problem with BGP next hop. The 35/8 route is received from R4 with
BGP next hop set to R4 loopback address. This output indicates that R4 incorrectly changes next
hop to self for certain prefixes.
lab@R2> show route protocol bgp terse | match "(/2[5-9])|(/3[0-2])"
* ? 150.150.13.0/25 B 170 100 2087403078 I
inet.0: 917 destinations, 2636 routes (912 active, 0 holddown, 1700 hidden)
150.150.13.0/25 (2 entries, 1 announced)
*BGP Preference: 170/-101
Next hop type: Indirect
Address: 0x95848f8
Next-hop reference count: 72
Source: 172.27.255.4
Next hop type: Router, Next hop index: 602
Next hop: 172.27.0.6 via ge-0/0/4.0, selected
Session Id: 0x2
Protocol next hop: 172.27.255.4
Indirect next hop: 95c7928 262147 INH Session ID: 0xc
State: <Active Int Ext>
Local AS: 3895077211 Peer AS: 3895077211
Age: 15 Metric2: 1
Validation State: unverified
Task: BGP_3895077211.172.27.255.4+179
Announcement bits (3): 0-KRT 6-BGP_RT_Background 7-Resolve tree 2
AS path: 2087403078 I (Originator)
Cluster list: 0.0.0.1
Originator ID: 172.27.255.3
Accepted
Localpref: 100
Router ID: 172.27.255.4
BGP Preference: 170/-101
Next hop type: Indirect
Address: 0x963c18c
Next-hop reference count: 24
Source: 172.27.255.3
Next hop type: Router, Next hop index: 262143
Next hop: 172.27.0.1 via ge-0/0/1.0
Session Id: 0x1
Next hop: 172.27.0.6 via ge-0/0/4.0, selected
Session Id: 0x2
Protocol next hop: 172.27.255.3
Indirect next hop: 9594000 262142 INH Session ID: 0x6
State: <NotBest Int Ext>
Inactive reason: Not Best in its group - IGP metric
Local AS: 3895077211 Peer AS: 3895077211
Age: 18:42 Metric2: 2
Validation State: unverified
Task: BGP_3895077211.172.27.255.3+179
AS path: 2087403078 I
Accepted
Localpref: 100
Router ID: 172.27.255.3
The output reveals that a route with a mask longer than /24 is in the routing table. This route is
received from both R3 and R4 with the originator being R3 (172.27.255.3). Most probably an R3
EBGP policy is configured incorrectly.
inet.0: 917 destinations, 2636 routes (912 active, 0 holddown, 1700 hidden)
+ = Active Route, - = Last Active, * = Both
inet.0: 909 destinations, 949 routes (64 active, 0 holddown, 878 hidden)
+ = Active Route, - = Last Active, * = Both
...
[edit]
lab@R1# set policy-options policy-statement NHS term 1 from protocol bgp
[edit]
lab@R1# set policy-options policy-statement NHS term 1 from route-type external
[edit]
lab@R1# set policy-options policy-statement NHS term 1 then next-hop self
[edit]
lab@R1# set protocols bgp group IBGP export NHS
[edit]
lab@R1# commit
commit complete
[edit]
lab@R1# run show route advertising-protocol bgp 172.27.255.3 35/8
[edit]
lab@R1# run show route advertising-protocol bgp 172.27.255.4 35/8
[edit]
lab@R1# run show route advertising-protocol bgp 172.27.0.34 172.27/16
[edit]
lab@R1# set policy-options policy-statement to-T1 term 3 from rib inet6.0
[edit]
lab@R1# set policy-options policy-statement to-T1 term 3 from route-filter
2008:4498::/32 longer
[edit]
lab@R1# set policy-options policy-statement to-T1 term 3 then reject
[edit]
lab@R1# commit
commit complete
[edit]
lab@R1# run show route advertising-protocol bgp 172.27.0.34 table inet6.0
[edit]
lab@R2# show policy-options policy-statement from-T1
term 1 {
from {
as-path AS1342930876;
route-filter 0.0.0.0/0 prefix-length-range /8-/24;
}
to rib inet.0;
then accept;
}
term 2 {
to rib inet6.0;
then accept;
}
term 3 {
then reject;
}
[edit]
lab@R2# show policy-options policy-statement from-T2
term 1 {
from {
as-path AS1342930876;
route-filter 0.0.0.0/0 prefix-length-range /8-/24;
}
to rib inet.0;
then accept;
}
term 2 {
[edit]
lab@R2# delete policy-options policy-statement from-T1 term 1 from as-path
[edit]
lab@R2# delete policy-options policy-statement from-T2 term 1 from as-path
[edit]
lab@R2# commit
commit complete
[edit]
lab@R2# run show route hidden terse
[edit]
lab@R2# run show route advertising-protocol bgp 172.27.255.3 35/8
[edit]
lab@R2# run show route advertising-protocol bgp 172.27.255.4 35/8
[edit]
lab@R2# run show route advertising-protocol bgp 172.27.0.38 172.27/16
[edit]
lab@R2# run show route advertising-protocol bgp 172.27.0.38 table inet6.0
[edit]
lab@R3# show protocols bgp group P
type external;
export to-P;
peer-as 2087403078;
neighbor 172.27.0.62;
[edit]
lab@R3# set policy-options policy-statement from-P term 1 from protocol bgp
[edit]
lab@R3# set policy-options policy-statement from-P term 1 from route-filter
0.0.0.0/0 prefix-length-range /8-/24
[edit]
lab@R3# set policy-options policy-statement from-P term 1 then accept
[edit]
lab@R3# set policy-options policy-statement from-P term 2 then reject
[edit]
lab@R3# set protocols bgp group P import from-P
[edit]
lab@R3# commit
commit complete
[edit]
lab@R3# run show route protocol bgp terse | match "(/2[5-9])|(/3[0-2])"
[edit]
lab@R3# run show route hidden
[edit]
lab@R5# run show route advertising-protocol bgp 172.27.255.3 202.202/24
[edit]
lab@R5# run show route advertising-protocol bgp 172.27.255.4 202.202/24
[edit]
lab@R5# set policy-options policy-statement from-C2 term 1 then local-preference
200
[edit]
lab@R5# show policy-options policy-statement from-C2
term 1 {
then {
local-preference 200;
}
}
[edit]
lab@R5# set protocols bgp group C2 import from-C2
[edit]
lab@R5# commit
commit complete
[edit]
lab@R5# run show route advertising-protocol bgp 172.27.255.3 202.202/24
[edit]
lab@R5# run show route advertising-protocol bgp 172.27.255.4 202.202/24
[edit]
lab@R3# set policy-options policy-statement LOCAL-RANGE term 1 then next-hop
172.27.0.26
[edit]
lab@R3# set policy-options policy-statement LOCAL-RANGE term 1 then accept
[edit]
lab@R3# show policy-options policy-statement LOCAL-RANGE
term 1 {
from {
protocol aggregate;
route-filter 172.27.0.0/16 exact;
}
then {
next-hop 172.27.0.26;
accept;
}
}
[edit]
lab@R3# set protocols bgp group Clients neighbor 172.27.255.5 export NHS
[edit]
lab@R3# set protocols bgp group Clients neighbor 172.27.255.5 export LOCAL-RANGE
[edit]
lab@R3# show protocols bgp group Clients
type internal;
local-address 172.27.255.3;
family inet {
unicast;
}
family inet6 {
labeled-unicast {
explicit-null;
}
}
authentication-key "$9$H.fz9A0hSe36SevW-dk.P"; ## SECRET-DATA
export [ NHS IPv6-DIRECT ];
cluster 0.0.0.1;
neighbor 172.27.255.1;
neighbor 172.27.255.2;
neighbor 172.27.255.5 {
export [ NHS LOCAL-RANGE ];
}
neighbor 172.27.255.4;
[edit]
lab@R3# commit
commit complete
[edit]
lab@R3# run show route advertising-protocol bgp 172.27.255.5 172.27/16
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set policy-options policy-statement LOCAL-RANGE term 1 from protocol
aggregate
[edit]
lab@R4# set policy-options policy-statement LOCAL-RANGE term 1 from route-filter
172.27.0.0/16 exact
[edit]
lab@R4# set policy-options policy-statement LOCAL-RANGE term 1 then next-hop
172.27.0.21
[edit]
lab@R4# set policy-options policy-statement LOCAL-RANGE term 1 then accept
[edit]
lab@R4# show policy-options policy-statement LOCAL-RANGE
term 1 {
from {
protocol aggregate;
route-filter 172.27.0.0/16 exact;
}
then {
next-hop 172.27.0.21;
accept;
}
}
[edit]
lab@R4# set protocols bgp group Clients neighbor 172.27.255.5 export NHS
[edit]
lab@R4# set protocols bgp group Clients neighbor 172.27.255.5 export LOCAL-RANGE
[edit]
lab@R4# show protocols bgp group Clients
type internal;
local-address 172.27.255.4;
family inet {
unicast;
}
family inet6 {
[edit]
lab@R4# commit
commit complete
[edit]
lab@R4# run show route advertising-protocol bgp 172.27.255.5 172.27/16
The traceroute shows that IPv6 traffic takes the optimal path, but the detailed output for the
2008:4498:1::/64 prefix shows that BGP next hop is changed by R4 to self.
• R3:
[edit]
lab@R3# exit
Exiting configuration mode
• R4:
[edit]
lab@R4# delete protocols bgp group Clients neighbor 172.27.255.5 export NHS
[edit]
lab@R4# delete policy-options policy-statement NHS
[edit]
lab@R4# commit
commit complete
Now check that traffic to 150.150/24 destinations takes the optimal path at R2.
• R2:
lab@R2> traceroute 150.150.0.1
traceroute to 150.150.0.1 (150.150.0.1), 30 hops max, 40 byte packets
1 172.27.0.1 (172.27.0.1) 7.066 ms 6.874 ms 6.904 ms
2 150.150.0.1 (150.150.0.1) 7.875 ms 9.434 ms 9.811 ms
The output shows that the traffic takes the optimal path.
• R4:
[edit]
lab@R4# exit
Exiting configuration mode
STOP Tell your instructor that you have completed this lab.
Overview
In this lab, you will be given a list of tasks specific to implementing and troubleshooting
multicast, which you will need to accomplish within a specific time frame. You will have 1 hour to
complete the simulation.
By completing this lab, you will perform the following tasks:
• Configure all routers to participate in protocol independent multicast (PIM).
• Ensure that R1 and R2 are rendezvous points (RPs) for all groups in the PIM domain.
All routers should use the closest RP. You must use the virtual IP address of
172.27.255.11. The RP configuration must support only IPv4.
• Group 224.2.2.2 is critical for Rec2, and they have requested that the multicast
traffic always use the same path to keep traffic loss to a minimum (except in the
event of a failure). You cannot use policy, and you cannot alter routes in inet.0 to
accomplish this task. One static route can be used if needed to accomplish this task.
• Ensure that joins to source are load-balanced for groups sourced from S1.
Configuring PIM
In this lab part, you will log in to your assigned routers and ensure that you are running the
correct startup configuration file for this lab. Refer to the network diagram for this lab for
topological and configuration details. You will then configure PIM. You must ensure the RPs are
configured within the guidelines defined by the tasks in this lab.
Note
We recommend that you spend some time
investigating the current operation of your
routers. During the exam, you might be
given routers that are operating
inefficiently. Investigating operating issues
now might save you time troubleshooting
strange issues later.
INITIAL TASK
Access the CLI for your routers using either the console, Telnet, or SSH as directed by your
instructor. Refer to the management network diagram for the IP address associated with your
devices. Log in as user lab with the password lab123. Verify that OSPF is configured, that
neighborships are up, and that only the interfaces connecting the routers have an OSPF
neighborship.
TASK COMPLETION
• R1:
R1 (ttyd0)
login: lab
Password:
lab@R1>
• R2:
R2 (ttyd0)
login: lab
lab@R2>
• R3:
R3 (ttyd0)
login: lab
Password:
lab@R3>
• R4:
R4 (ttyd0)
login: lab
Password:
lab@R4>
• R5:
R5 (ttyd0)
login: lab
Password:
lab@R5>
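A quick way to confirm the initial OSPF state on each router is to check the adjacencies and the
OSPF-enabled interfaces, for example (output omitted here):
lab@R1> show ospf neighbor
lab@R1> show ospf interface brief
Repeat the check on each router and confirm that only the core-facing interfaces appear and
that the adjacencies are in the Full state.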
TASK 1
Configure all routers to participate in PIM.
Note
We recommend that you include the
configuration steps for the second task
while you are configuring the first task. This
approach will save you some time and
effort as you move through the tasks of this
lab.
TASK 2
Ensure that R1 and R2 are RPs for all groups in the PIM domain. All
routers should use the closest RP. You must use the virtual IP
address of 172.27.255.11. The RP configuration must only be able to
support IPv4.
[edit]
lab@R1# set interfaces lo0 unit 0 family inet address 172.27.255.1/32 primary
[edit]
lab@R1# set interfaces lo0 unit 0 family inet address 172.27.255.11/32
[edit]
lab@R1# show interfaces lo0
unit 0 {
family inet {
address 172.27.255.1/32 {
primary;
}
address 172.27.255.11/32;
}
}
[edit]
lab@R1# set protocols msdp group anycast-rp local-address 172.27.255.1
[edit]
lab@R1# set protocols msdp group anycast-rp peer 172.27.255.2
[edit]
lab@R1# set protocols pim rp local address 172.27.255.11
[edit]
lab@R1# set protocols pim interface all
[edit]
lab@R1# show protocols
msdp {
group anycast-rp {
local-address 172.27.255.1;
peer 172.27.255.2;
}
}
...
pim {
rp {
local {
address 172.27.255.11;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
}
[edit]
lab@R1# commit
commit complete
[edit]
lab@R1#
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# set interfaces lo0 unit 0 family inet address 172.27.255.2/32 primary
[edit]
lab@R2# set interfaces lo0 unit 0 family inet address 172.27.255.11/32
[edit]
lab@R2# show interfaces lo0
unit 0 {
family inet {
address 172.27.255.2/32 {
primary;
}
address 172.27.255.11/32;
}
}
[edit]
lab@R2# set protocols msdp group anycast-rp local-address 172.27.255.2
[edit]
lab@R2# set protocols msdp group anycast-rp peer 172.27.255.1
[edit]
lab@R2# set protocols pim rp local address 172.27.255.11
[edit]
lab@R2# set protocols pim interface all
[edit]
lab@R2# set protocols pim interface ge-0/0/0 disable
[edit]
lab@R2# show protocols
msdp {
group anycast-rp {
local-address 172.27.255.2;
peer 172.27.255.1;
}
}
...
pim {
rp {
local {
address 172.27.255.11;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
}
[edit]
lab@R2# commit
commit complete
[edit]
lab@R2#
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set protocols pim rp static address 172.27.255.11
[edit]
lab@R3# set protocols pim interface all
[edit]
lab@R3# set protocols pim interface ge-0/0/0 disable
[edit]
lab@R3# commit
commit complete
[edit]
lab@R3#
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols pim rp static address 172.27.255.11
[edit]
lab@R4# set protocols pim interface all
[edit]
lab@R4# set protocols pim interface ge-0/0/0 disable
[edit]
lab@R4# show protocols pim
rp {
static {
address 172.27.255.11;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
[edit]
lab@R4# commit
commit complete
[edit]
lab@R4#
[edit]
lab@R5# set protocols pim rp static address 172.27.255.11
[edit]
lab@R5# set protocols pim interface all
[edit]
lab@R5# set protocols pim interface ge-0/0/0 disable
[edit]
lab@R5# show protocols pim
rp {
static {
address 172.27.255.11;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
[edit]
lab@R5# commit
commit complete
[edit]
lab@R5#
TASK VERIFICATION
Begin your verification by reviewing the status of the RPs on R1 and R2. Verify that R1 and R2
are local RPs and that they are the RPs for all groups.
• R1:
[edit]
lab@R1# exit
Exiting configuration mode
RP: 172.27.255.11
Learned via: static configuration
Time Active: 2d 12:44:52
Holdtime: 0
Device Index: 130
Subunit: 32769
Interface: ppd0.32769
Group Ranges:
224.0.0.0/4
Anycast PIM local address used: 172.27.255.1
lab@R1>
• R2:
[edit]
lab@R2# exit
Exiting configuration mode
RP: 172.27.255.11
Learned via: static configuration
Time Active: 2d 12:34:33
Holdtime: 0
Device Index: 130
Subunit: 32769
Interface: ppd0.32769
Group Ranges:
224.0.0.0/4
Anycast PIM local address used: 172.27.255.2
lab@R2>
Now that you have verified the RPs, verify the MSDP status and source-actives. Using MSDP
within the domain for anycast RP covers the requirement for both R1 and R2 being the RP for all
groups, the requirement of a virtual IP address for the RP, and the requirement for IPv4-only
support. PIM anycast RP could also be used, except that it supports both IPv4 and IPv6, which
conflicts with the IPv4-only requirement.
• R2:
lab@R2> show msdp
Peer address Local address State Last up/down Peer-Group SA Count
172.27.255.1 172.27.255.2 Established 2d 13:03:21 anycast-rp 2/2
Finally, you must verify that all other routers use the closest RP. View the status of the RP on all
other routers. Make sure that the active groups using the RP match the joins to the RP. Then check
that the upstream neighbor for the join to the RP matches the shortest path to the RP.
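On each router, these checks can be made with operational commands such as the following
(output omitted here; the captures below show the resulting information):
lab@R3> show pim rps extensive
lab@R3> show pim join extensive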
• R3:
[edit]
lab@R3# exit
Exiting configuration mode
RP: 172.27.255.11
Learned via: static configuration
Time Active: 2d 13:08:19
Holdtime: 0
Group: 224.1.1.1
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.14
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.0.58 State: Join Flags: SRW Timeout: 156
Group: 224.1.1.1
Source: 172.27.0.30
Flags: sparse,spt
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.14
Upstream state: None, Join to Source
Keepalive timeout: 328
Downstream neighbors:
Interface: ge-0/0/3.0
172.27.0.25 State: Join Flags: S Timeout: 176
Interface: ge-0/0/4.0
172.27.0.58 State: Join Flags: S Timeout: 156
lab@R3>
• R4:
[edit]
lab@R4# exit
Exiting configuration mode
RP: 172.27.255.11
Learned via: static configuration
Time Active: 2d 13:21:14
Holdtime: 0
Device Index: 131
Subunit: 32769
Interface: ppe0.32769
Group Ranges:
224.0.0.0/4
Active groups using RP:
224.3.3.3
224.2.2.2
224.1.1.1
Group: 224.1.1.1
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ae0.0
Upstream neighbor: 172.27.0.10
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.0.22 State: Join Flags: SRW Timeout: 161
Group: 224.1.1.1
Source: 172.27.0.30
Group: 224.2.2.2
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ae0.0
Upstream neighbor: 172.27.0.10
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.0.22 State: Join Flags: SRW Timeout: 150
Group: 224.2.2.2
Source: 172.27.0.38
Flags: sparse,spt
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.5
Upstream state: Join to Source, Prune to RP
Keepalive timeout: 344
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.0.22 State: Join Flags: S Timeout: 150
lab@R4>
• R5:
[edit]
lab@R5# exit
Exiting configuration mode
Group: 224.1.1.1
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.4 State: Join Flags: SRW Timeout: 180
Group: 224.1.1.1
Source: 172.27.0.30
Flags: sparse,spt
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.26
Upstream state: Join to Source, Prune to RP
Keepalive timeout: 304
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.4 State: Join Flags: S Timeout: 180
Group: 224.2.2.2
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Group: 224.2.2.2
Source: 172.27.0.38
Flags: sparse,spt
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: None, Join to Source
Keepalive timeout: 356
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.2 State: Join Flags: S Timeout: 172
lab@R5>
Question: Why do the *,G and S,G entries for group 224.1.1.1 have
different upstream neighbors?
TASK 3
Group 224.2.2.2 is critical for Rec2, and they have requested that
the multicast traffic always use the same path to keep traffic loss
to a minimum (except in the event of a failure). You cannot use
policy, and you cannot alter routes in inet.0 to accomplish this
task. One static route can be used if needed to accomplish this
task.
TASK INTERPRETATION
The task reveals that group 224.2.2.2 should always use the shared tree and should not cut over
to the source tree or shortest-path tree (SPT). The traffic is critical to Rec2, and they do not want
to lose any traffic during the cutover to the source tree.
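The configuration that follows builds a separate multicast RPF view for PIM: a rib group copies
the interface and OSPF routes into inet.2, PIM performs its RPF lookups in inet.2, and (later in
this task) a static route in inet.2 pins the path toward the RP so that the shared tree and the
source tree resolve over the same links. A condensed sketch of the R5 configuration, using the
same names that appear below:
set routing-options rib-groups to_inet.2 import-rib [ inet.0 inet.2 ]
set routing-options rib-groups rpf_inet.2 import-rib inet.2
set routing-options interface-routes rib-group inet to_inet.2
set protocols ospf rib-group to_inet.2
set protocols pim rib-group inet rpf_inet.2
set routing-options rib inet.2 static route 172.27.255.11/32 next-hop 172.27.0.21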
[edit]
lab@R5# edit routing-options
[edit routing-options]
lab@R5# set rib-groups to_inet.2 import-rib [inet.0 inet.2]
[edit routing-options]
lab@R5# set rib-groups rpf_inet.2 import-rib inet.2
[edit routing-options]
lab@R5# set interface-routes rib-group inet to_inet.2
[edit routing-options]
lab@R5# show
max-interface-supported 0;
interface-routes {
rib-group inet to_inet.2;
}
rib-groups {
to_inet.2 {
import-rib [ inet.0 inet.2 ];
}
rpf_inet.2 {
import-rib inet.2;
}
}
[edit routing-options]
lab@R5# top edit protocols
[edit protocols]
lab@R5# set ospf rib-group to_inet.2
[edit protocols]
lab@R5# set pim rib-group inet rpf_inet.2
[edit protocols]
lab@R5# show
ospf {
rib-group to_inet.2;
area 0.0.0.0 {
interface all;
interface ge-0/0/0.0 {
disable;
}
}
}
pim {
rib-group inet rpf_inet.2;
rp {
static {
address 172.27.255.11;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
}
[edit protocols]
lab@R5# commit
commit complete
[edit protocols]
lab@R5#
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# edit routing-options
[edit routing-options]
lab@R4# set rib-groups to_inet.2 import-rib [inet.0 inet.2]
[edit routing-options]
lab@R4# set rib-groups rpf_inet.2 import-rib inet.2
[edit routing-options]
lab@R4# set interface-routes rib-group inet to_inet.2
[edit routing-options]
lab@R4# show
max-interface-supported 0;
interface-routes {
rib-group inet to_inet.2;
}
rib-groups {
to_inet.2 {
import-rib [ inet.0 inet.2 ];
}
[edit routing-options]
lab@R4# top edit protocols
[edit protocols]
lab@R4# set ospf rib-group to_inet.2
[edit protocols]
lab@R4# set pim rib-group inet rpf_inet.2
[edit protocols]
lab@R4# show
ospf {
rib-group to_inet.2;
area 0.0.0.0 {
interface all;
interface ge-0/0/0.0 {
disable;
}
}
}
pim {
rib-group inet rpf_inet.2;
rp {
static {
address 172.27.255.11;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
}
[edit protocols]
lab@R4# commit
commit complete
[edit protocols]
lab@R4#
TASK VERIFICATION
We begin by verifying which table the RPF check is using to the RP and source, and that the
routing table shows the correct routes for the RP and source.
• R5:
[edit protocols]
lab@R5# run show multicast rpf 172.27.0.38
Multicast RPF table: inet.2 , 22 entries
[edit protocols]
lab@R5# run show multicast rpf 172.27.255.11
Multicast RPF table: inet.2 , 22 entries
172.27.255.11/32
Protocol: OSPF
Interface: ge-0/0/1.0
Neighbor: 172.27.0.26
[edit protocols]
lab@R5# run show route 172.27.0.38
[edit protocols]
lab@R5# run show route 172.27.255.11
Answer: No. The route in inet.2 has not been altered so that the
next hop of 172.27.0.21 is preferred. The output may vary and
show that 172.27.0.21 is preferred, but you must ensure that
172.27.0.26 cannot be chosen.
TASK CORRECTION
To ensure that the path through 172.27.0.21 is preferred and matches the SPT path in the
inet.2 table, you must make 172.27.0.21 the preferred next hop. A static route can be used to
resolve this issue. Make sure to apply the equivalent inet.2 configuration on R4.
• R5:
[edit protocols]
lab@R5# top edit routing-options
[edit routing-options]
lab@R5# set rib inet.2 static route 172.27.255.11/32 next-hop 172.27.0.21
[edit routing-options]
lab@R5# show
max-interface-supported 0;
interface-routes {
rib-group inet to_inet.2;
}
rib inet.2 {
static {
route 172.27.255.11/32 next-hop 172.27.0.21;
}
}
rib-groups {
to_inet.2 {
import-rib [ inet.0 inet.2 ];
}
rpf_inet.2 {
import-rib inet.2;
}
}
[edit routing-options]
lab@R5# commit
commit complete
[edit routing-options]
lab@R5#
• R4:
[edit protocols]
lab@R4# top edit routing-options
[edit routing-options]
lab@R4# show
max-interface-supported 0;
interface-routes {
rib-group inet to_inet.2;
}
rib inet.2 {
static {
route 172.27.255.11/32 next-hop 172.27.0.5;
}
}
rib-groups {
to_inet.2 {
import-rib [ inet.0 inet.2 ];
}
rpf_inet.2 {
import-rib inet.2;
}
}
[edit routing-options]
lab@R4# commit
commit complete
[edit routing-options]
lab@R4#
Now that the route has a defined preferred next-hop, you can verify that the SPT and RPT match.
• R5:
[edit routing-options]
lab@R5# run show multicast rpf 172.27.0.38
Multicast RPF table: inet.2 , 22 entries
172.27.0.36/30
Protocol: OSPF
Interface: ge-0/0/2.0
Neighbor: 172.27.0.21
[edit routing-options]
lab@R5# run show multicast rpf 172.27.255.11
Multicast RPF table: inet.2 , 22 entries
172.27.255.11/32
Protocol: Static
Interface: ge-0/0/2.0
Neighbor: 172.27.0.21
[edit routing-options]
lab@R5# run show route 172.27.0.38
[edit routing-options]
lab@R5# run show route 172.27.255.11
[edit routing-options]
lab@R5# run show pim join extensive 224.2.2.2
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
Group: 224.2.2.2
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.2 State: Join Flags: SRW Timeout: 170
Group: 224.2.2.2
Source: 172.27.0.38
Flags: sparse,spt
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: None, Join to Source
Keepalive timeout: 318
Downstream neighbors:
• R4:
[edit routing-options]
lab@R4# run show multicast rpf 172.27.0.38
Multicast RPF table: inet.2 , 23 entries
172.27.0.36/30
Protocol: OSPF
Interface: ge-0/0/1.0
Neighbor: 172.27.0.5
[edit routing-options]
lab@R4# run show multicast rpf 172.27.255.11
Multicast RPF table: inet.2 , 23 entries
172.27.255.11/32
Protocol: Static
Interface: ge-0/0/1.0
Neighbor: 172.27.0.5
[edit routing-options]
lab@R4# run show route 172.27.0.38
[edit routing-options]
lab@R4# run show route 172.27.255.11
[edit routing-options]
lab@R4# run show pim join extensive 224.2.2.2
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
Group: 224.2.2.2
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.5
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.0.22 State: Join Flags: SRW Timeout: 188
Group: 224.2.2.2
Source: 172.27.0.38
Flags: sparse,spt
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.5
Upstream state: None, Join to Source
Keepalive timeout: 335
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.0.22 State: Join Flags: S Timeout: 188
Answer: Yes. Both R4 and R5 should show that the RPT and SPT
use the same path to group 224.2.2.2.
TASK 4
Ensure that joins to source are load-balanced for groups sourced
from S1.
TASK INTERPRETATION
If you view the lab diagram, R5 should have two equal paths to S1. Also, verify that R5 has equal
cost paths to S1. To load balance across the equal cost paths, simply configure the PIM option
join-load-balance on R5.
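The statement itself is a single PIM option; a minimal sketch of the standard Junos syntax:
set protocols pim join-load-balance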
TASK COMPLETION
First verify the status of the routes and PIM joins on R5.
[edit routing-options]
lab@R5# run show route 172.27.0.30
[edit routing-options]
lab@R5# run show pim join extensive 224.1.1.1
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
Group: 224.1.1.1
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.4 State: Join Flags: SRW Timeout: 150
Group: 224.1.1.1
Source: 172.27.0.30
Flags: sparse,spt
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: None, Join to Source
Keepalive timeout: 359
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.4 State: Join Flags: S Timeout: 150
[edit routing-options]
lab@R5# run show pim join extensive 224.3.3.3
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
Group: 224.3.3.3
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Group: 224.3.3.3
Source: 172.27.0.30
Flags: sparse,spt
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: None, Join to Source
Keepalive timeout: 347
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.3 State: Join Flags: S Timeout: 198
Answer: Yes. You should see two next-hops for the route
172.27.0.30.
Answer: No, both S,Gs for groups sourced from S1 use the
same upstream neighbor.
Now that you have verified load balancing is not occurring, configure the option to load balance
under PIM.
• R5:
[edit routing-options]
lab@R5# top edit protocols pim
commit complete
Group: 224.1.1.1
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.4 State: Join Flags: SRW Timeout: 205
Group: 224.1.1.1
Source: 172.27.0.30
Flags: sparse,spt
Upstream interface: ge-0/0/1.0
Upstream neighbor: 172.27.0.26
Upstream state: Join to Source, Prune to RP
Keepalive timeout: 353
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.4 State: Join Flags: S Timeout: 205
Group: 224.3.3.3
Source: *
RP: 172.27.255.11
Flags: sparse,rptree,wildcard
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.3 State: Join Flags: SRW Timeout: 200
Group: 224.3.3.3
Source: 172.27.0.30
Flags: sparse,spt
Upstream interface: ge-0/0/2.0
Upstream neighbor: 172.27.0.21
Upstream state: None, Join to Source
Keepalive timeout: 349
Downstream neighbors:
Interface: ge-0/0/4.0
172.27.1.3 State: Join Flags: S Timeout: 200
Question: Are the joins to source for the three groups from S1
now load-balanced?
STOP Tell your instructor that you have completed this lab.
Overview
In this lab, you will be given a list of tasks specific to implementing and troubleshooting class of
service that you will need to accomplish within a specific time frame. You will have 2 hours to
complete the simulation.
By completing this lab, you will perform the following tasks:
• Configure a scheduler named jncie-cos on all routers with the following criteria:
– The expedited-forwarding queue should have the high priority with 10%
allocation of traffic;
– The assured-forwarding queue should have medium-high priority with 5%
allocation of traffic;
– The best-effort queue should have low priority with 80% allocation of traffic;
– The network-connect queue should have low priority with 5% traffic allocation;
and
– Apply the scheduler on all interfaces.
• Configure a MF classifier named voice on R5:
– The classifier should match any traffic with DSCP EF markings and place this
traffic into the EF queue;
– The classifier should match any TCP traffic destined to port 2000 and place this
traffic on the AF queue; and
– Place this classifier on traffic coming from the C1 router.
• Configure a MF classifier named internet on R1:
– Match all traffic and place into the best effort queue and mark as high loss drop
profile; and
– Place this classifier on all traffic coming from the C2 router.
• Configure a rewrite marker named jncie-rw on R5:
– Mark all traffic on the expedited-forwarding queue as DSCP EF; and
– Mark all traffic on the assured-forwarding queue as DSCP AF21.
• Configure a behavior aggregate classifier named jncie-ba on R3, and R4:
– Place all traffic with inet-precedence 5 into the expedited-forwarding queue; and
– Place all traffic with inet-precedence 3 into the assured-forwarding queue.
• Configure a filter named jncie-police on R3 and R4:
– Send any traffic marked as DSCP 21 and exceeding 50 Mb to the best effort
queue and mark it as loss priority high;
– Send any traffic marked as DSCP 46 and exceeding 100 Mb to the best effort
queue and mark it as loss priority low; and
– Apply the policer to the interfaces facing R5.
• Configure a behavior aggregate classifier on R2:
– Place all traffic marked with 802.1p number 5 on the expedited forwarding
queue; and
– Apply this to the interface facing the VPLS CE2 device.
• Configure a rewrite marker named vpls-rw on R2:
– Mark all traffic in the expedited queue to EXP 5.
Configuring CoS
In this lab part, you will log in to your assigned routers and ensure that you are running the
correct startup configuration file for this lab. Refer to the network diagram for this lab for
topological and configuration details. You will then configure various CoS settings depending on
the outlined requirements. You must ensure that all the CoS requirements are met based on the
task guidelines.
Note
We recommend that you spend some time
investigating the current operation of your
routers. During the real exam, you might be
given routers that are operating
inefficiently. Investigating operating issues
now might save you a lot of time
troubleshooting strange issues later.
TASK 1
Configure a scheduler-map named jncie-cos on all routers. Map each
queue to the following set of criteria:
• The expedited-forwarding queue should have the high priority
with a 10% transmit rate;
• The assured-forwarding queue should have medium-high
priority with a 5% transmit rate;
• The best-effort queue should have low priority with an 80%
transmit rate;
• The network-connect queue should have low priority with a 5%
transmit rate; and
• Apply the scheduler on all gigabit interfaces.
Note
When you have a repetitive task on the
exam, take advantage of Notepad access
for copy and paste operations.
TASK INTERPRETATION
The task is requesting a simple scheduler-map configuration to be applied on all interfaces. It
lays out all the necessary criteria and it includes instructions to use a specific name for the
scheduler-map, but it does not seem to matter what you use to name the schedulers
themselves. The rest of the instructions are straightforward.
[edit class-of-service]
lab@R1# set schedulers ef transmit-rate percent 10
[edit class-of-service]
lab@R1# set schedulers ef priority high
[edit class-of-service]
lab@R1# set schedulers af priority medium-high
[edit class-of-service]
lab@R1# set schedulers af transmit-rate percent 5
[edit class-of-service]
lab@R1# set schedulers be transmit-rate percent 80
[edit class-of-service]
lab@R1# set schedulers be priority low
[edit class-of-service]
lab@R1# set schedulers nc transmit-rate percent 5
[edit class-of-service]
lab@R1# set schedulers nc priority low
[edit class-of-service]
lab@R1# set scheduler-maps jncie-cos forwarding-class expedited-forwarding
scheduler ef
[edit class-of-service]
lab@R1# set scheduler-maps jncie-cos forwarding-class assured-forwarding scheduler
af
[edit class-of-service]
lab@R1# set scheduler-maps jncie-cos forwarding-class best-effort scheduler be
[edit class-of-service]
lab@R1# set scheduler-maps jncie-cos forwarding-class network-control scheduler nc
[edit class-of-service]
lab@R1# set interfaces ge-* scheduler-map jncie-cos
lab@R1# show
interfaces {
ge-* {
scheduler-map jncie-cos;
}
}
scheduler-maps {
jncie-cos {
TASK 2
Configure a multifield classifier named voice on R5:
• The classifier should match any traffic with DSCP EF
markings and place this traffic into the EF queue;
• The classifier should match any TCP traffic destined to port
2000 and place this traffic on the AF queue; and
• Place this classifier on traffic coming from the C1 router.
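A minimal sketch of one way to build such a filter on R5 follows; the interface toward C1
(ge-0/0/5 here) is an assumption, so use the interface shown on the lab diagram:
[edit firewall family inet filter voice]
set term ef from dscp ef
set term ef then forwarding-class expedited-forwarding
set term af from protocol tcp
set term af from destination-port 2000
set term af then forwarding-class assured-forwarding
set term other then accept
[edit]
set interfaces ge-0/0/5 unit 0 family inet filter input voice   # assumed C1-facing interface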
TASK VERIFICATION
The best way to verify this task is to make sure your firewall filter is configured correctly on the
correct interface by examining the configuration. If the test provides you with access to the
Customer 1 router, you can generate a ping with the correct ToS byte set and verify that the
internal router is placing the traffic into the correct queue.
In this lab, you are given access to the external device. The Customer 1 router is in a routing
instance named C1 in the VR-device. The device can be accessed with SSH from the R5 router or
through the management IP with the user lab and password lab123. On R5, find which routes
are advertised to C1, and then from C1 generate a ping to one of those destinations with the
correct markings. After doing this, return to R5 and view an extensive output for the interface
facing the internal routers.
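For example, assuming 172.27.255.5 is among the routes advertised to C1, an EF-marked ping
sourced from the C1 instance on the VR-device could look like this (DSCP EF corresponds to a
ToS byte value of 184):
lab@vrdevice> ping 172.27.255.5 routing-instance C1 rapid count 20 tos 184
Then, back on R5, check the queue counters on the interface facing the internal routers, for
example (interface name assumed):
lab@R5> show interfaces ge-0/0/1 extensive | find "Queue counter"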
• R5:
[edit firewall family inet filter voice]
lab@R5# top
[edit]
lab@R5# exit
Note
Make sure you verify with the proctor whether the
external device is accessible and whether it is using
a routing instance.
Note
To be more efficient when doing the ping
command, take advantage of the rapid
and count statements.
TASK 3
Configure an MF classifier named internet on R1:
• Match all traffic and place into the best effort queue and
mark as high loss drop profile; and
• Place this classifier on all traffic coming from the C2
router.
TASK INTERPRETATION
This task is very similar to the previous task with the additional requirement of marking the
packets with loss priority high.
Create a firewall filter named internet and place all the traffic in the best effort queue with
loss priority to high. Configure this filter as input on the interface facing the Customer 2
router. Because the term matches all traffic and uses the then forwarding-class
terminating action, no subsequent accept term is necessary.
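A minimal sketch under those guidelines; the interface toward C2 (ge-0/0/6 here) is an
assumption, so take the actual interface from the lab diagram:
[edit firewall family inet filter internet]
set term all then loss-priority high
set term all then forwarding-class best-effort
[edit]
set interfaces ge-0/0/6 unit 0 family inet filter input internet   # assumed C2-facing interface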
TASK COMPLETION
• R1:
[edit class-of-service]
lab@R1# top edit firewall family inet filter internet
TASK VERIFICATION
As with the previous task, the easiest and most efficient way to verify this task is to simply
check the configuration and ensure everything is in place. This task is more complicated to
verify from an external device because, even without the filter configured, all traffic can end
up in the best-effort queue. If everything appears correct when viewing the configuration, it
should satisfy this task.
Note
During the exam, we recommend you verify
the success of tasks when possible.
However, if time is a factor, priority should
be given to any unfinished tasks.
TASK 4
Configure a rewrite marker named jncie-rw on R5:
• Mark all traffic on the expedited-forwarding queue as DSCP
EF;
• Mark all traffic on the assured-forwarding queue as DSCP
AF21; and
• Place the rewrite on the interfaces facing R3 and R4.
TASK INTERPRETATION
The task is asking for a simple rewrite marker on traffic in the expedited forwarding and assured
forwarding queues. Create a DSCP rewrite marker named jncie-rw and match on the correct
markings, and apply it to the correct forwarding class. Apply this rewrite marker to the interfaces
facing the internal network. Utilize the copy and replace Junos commands to speed up the
configuration. Because the task does not specify which loss-priority markings should be
matched, all of them are used at this time.
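A sketch of the rewrite-rule definition itself, matching the code points that appear in the
verification output later in this task (repeat for the medium-low and medium-high loss priorities
if your platform uses them):
[edit class-of-service rewrite-rules dscp jncie-rw]
set forwarding-class expedited-forwarding loss-priority low code-point ef
set forwarding-class expedited-forwarding loss-priority high code-point ef
set forwarding-class assured-forwarding loss-priority low code-point af21
set forwarding-class assured-forwarding loss-priority high code-point af21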
Note
During the exam, we recommend that you
over-configure rather than under-configure.
If a task does not explicitly mention a step,
and if the extra configuration does not
conflict with any other task in the exam, it is
a good idea to perform the additional
configuration steps.
TASK COMPLETION
• R5:
lab@R5> configure
[edit class-of-service]
lab@R5# set interfaces ge-0/0/1 unit 0 rewrite-rules dscp jncie-rw
[edit class-of-service]
lab@R5# set interfaces ge-0/0/2 unit 0 rewrite-rules dscp jncie-rw
[edit class-of-service]
lab@R5# commit
TASK VERIFICATION
Remember that the rewrite of bits is an egress operation in Junos OS. Ensure that the correct
rewrite marker is applied by looking at the output of the show class-of-service
interface and the show class-of-service rewrite-rule type dscp
operational commands.
• R5:
[edit class-of-service]
lab@R5# run show class-of-service interface ge-0/0/1
Physical interface: ge-0/0/1, Index: 134
Queues supported: 8, Queues in use: 4
Scheduler map: <default>, Index: 2
Congestion-notification: Disabled
[edit class-of-service]
lab@R5# run show class-of-service rewrite-rule type dscp name jncie-rw
Rewrite rule: jncie-rw, Code point type: dscp, Index: 56953
Forwarding class Loss priority Code point
expedited-forwarding low 101110
expedited-forwarding high 101110
expedited-forwarding medium-low 101110
expedited-forwarding medium-high 101110
assured-forwarding low 010010
assured-forwarding high 010010
assured-forwarding medium-low 010010
assured-forwarding medium-high 010010
TASK 5
Configure a behavior aggregate classifier named jncie-ba on all ge
interfaces of R3 and R4:
• Place all traffic with inet-precedence 5 into the
expedited-forwarding queue; and
• Place all traffic with inet-precedence 3 into the
assured-forwarding queue.
TASK INTERPRETATION
As with a previous task, this task requires classification of traffic into different
forwarding-classes. However, this task is explicit in requiring the use of a behavior aggregate
classifier and use of inet-precedence. Because it is not explicit as to what the loss priority should
be, it is safe to use loss-priority low.
As with previous tasks, because no change occurs in the configuration from router to router,
take advantage of Notepad for copy and paste operations.
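A sketch of the classifier and one example interface binding (inet-precedence 5 is code point
101 and 3 is 011; the ge-0/0/3 binding is only an example, and the task requires all ge
interfaces on R3 and R4):
[edit class-of-service classifiers inet-precedence jncie-ba]
set forwarding-class expedited-forwarding loss-priority low code-points 101
set forwarding-class assured-forwarding loss-priority low code-points 011
[edit class-of-service]
set interfaces ge-0/0/3 unit 0 classifiers inet-precedence jncie-ba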
• R1 and R2:
[edit class-of-service]
lab@R1# run ping 172.27.255.5 rapid count 20 tos 160
PING 172.27.255.5 (172.27.255.5): 56 data bytes
!!!!!!!!!!!!!!!!!!!!
--- 172.27.255.5 ping statistics ---
20 packets transmitted, 20 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.308/4.660/7.511/1.103 ms
[edit class-of-service]
lab@R1# run ping 172.27.255.5 rapid count 20 tos 96
PING 172.27.255.5 (172.27.255.5): 56 data bytes
!!!!!!!!!!!!!!!!!!!!
--- 172.27.255.5 ping statistics ---
20 packets transmitted, 20 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.268/4.578/6.781/1.004 ms
• R3 and R4:
[edit class-of-service classifiers inet-precedence jncie-ba]
lab@R3# run show interfaces ge-0/0/3 extensive | find "Queue counter"
[edit firewall]
lab@R3# set policer ef if-exceeding bandwidth-limit 100m burst-size-limit 15000
[edit firewall]
lab@R3# set policer ef then forwarding-class best-effort
[edit firewall]
lab@R3# set policer ef then loss-priority low
[edit firewall]
lab@R3# copy policer ef to policer af21
[edit firewall]
lab@R3# edit policer af21
[edit firewall]
lab@R3# set family inet filter jncie-police term 1 from dscp af21
[edit firewall]
lab@R3# set family inet filter jncie-police term 1 then policer af21
[edit firewall]
lab@R3# set family inet filter jncie-police term 2 from dscp ef
[edit firewall]
lab@R3# set family inet filter jncie-police term 2 then policer ef
[edit firewall]
lab@R3# set family inet filter jncie-police term 3 then accept
[edit firewall]
lab@R3# top set interfaces ge-0/0/3.0 family inet filter input jncie-police
[edit firewall]
lab@R3# commit
• R4:
[edit class-of-service classifiers inet-precedence jncie-ba]
lab@R4# top edit firewall
[edit firewall]
lab@R4# set policer ef then forwarding-class best-effort
[edit firewall]
lab@R4# set policer ef then loss-priority low
[edit firewall]
lab@R4# copy policer ef to policer af21
[edit firewall]
lab@R4# edit policer af21
[edit firewall]
lab@R4# set family inet filter jncie-police term 1 from dscp af21
[edit firewall]
lab@R4# set family inet filter jncie-police term 1 then policer af21
[edit firewall]
lab@R4# set family inet filter jncie-police term 2 from dscp ef
[edit firewall]
lab@R4# set family inet filter jncie-police term 3 then accept
[edit firewall]
lab@R4# top set interfaces ge-0/0/4.0 family inet filter input jncie-police
[edit firewall]
lab@R4# commit
TASK VERIFICATION
During the exam, you cannot generate enough traffic to see whether the filter is working properly.
Verification for this task can easily be done by double-checking the configuration. Confirm that
the filter has the correct name and is applied to the correct interface, and remember to include
an accept term in the filter.
TASK 7
Configure a behavior aggregate classifier on R2 named vpls-ba:
• Place all traffic marked with 802.1p number 5 on the
expedited forwarding queue; and
• Apply this classifier to the interface facing the VPLS CE1
device.
TASK INTERPRETATION
Refer to the topology diagram. This task is asking for another behavior aggregate, this time
based on 802.1p markings. A behavior aggregate named vpls-ba of type 802.1p must be
created that matches the number 5. The behavior aggregate must be placed on the interface
facing the VPLS CE1 device. As with the previous classifiers, if a loss priority marking is not
provided, loss-priority low can be used.
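A sketch of the classifier and its binding; 802.1p value 5 is code point 101, and the CE-facing
interface and unit (ge-0/0/5 unit 0 here) are assumptions, so take the actual values from the
lab diagram:
[edit class-of-service classifiers ieee-802.1 vpls-ba]
set forwarding-class expedited-forwarding loss-priority low code-points 101
[edit class-of-service]
set interfaces ge-0/0/5 unit 0 classifiers ieee-802.1 vpls-ba   # assumed CE-facing interface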
TASK COMPLETION
• R2:
[edit class-of-service]
lab@R2# edit classifiers ieee-802.1 vpls-ba
• VR-device:
lab@vrdevice> show route table VPLS-CE protocol local terse
STOP Tell your instructor that you have completed this lab.
Overview
In this lab, you will be given a list of tasks specific to implementing and troubleshooting MPLS,
which you will need to accomplish within a specific time frame. You will have 1.5 hours to
complete the simulation.
By completing this lab, you will perform the following tasks:
• Configure the RSVP LSPs, defined in the LSP tables, through your network and
ensure all LSPs are up and functional.
• R2 is not allowed to run RSVP to signal its LSPs. You must route between R2 and R5
using a LSP. You must also ensure that the failure of any transit router does not
prevent the exchange of labels between R2 and R5. LDP is prohibited on R3.
• Ensure that the r1-to-r5 LSP has two unique paths. The primary path should
traverse R4 while the secondary path should use a different path and be signaled
and ready for use.
• Configure the administrative groups defined in the Admin table on all RSVP routers.
Apply these administrative groups to the appropriate links as illustrated on the lab
diagram. Ensure that the r3-to-r4 LSP avoids the R3-R4 link.
• Configure the r5-to-r1 LSP to reserve 450 Mbps of bandwidth across the
network.
• Create a bypass to improve convergence time for the r5-to-r1 LSP in the event of
a R4-R1 link failure. Ensure bandwidth reservation is honored and the best available
path is chosen.
• Ensure that all MPLS packets that transit the R1-R4 link are load balanced across
both member links of the aggregated Ethernet bundle. The contents of the outer
label as well as the IP packet should be used by the load balancing algorithm.
• Ensure that the entire core MPLS network appears as two hops for any transit traffic.
Configuring LSPs
In this lab part, you will log in to your assigned routers and configure the label-switched paths
(LSPs) required to transport traffic through your core network. You must ensure all LSPs are
created within the guidelines defined by the tasks in this lab.
INITIAL TASK
Access the CLI for your routers using either the console, Telnet, or SSH as directed by your
instructor. Refer to the management network diagram for the IP address associated with your
devices. Log in as user lab with the password lab123.
TASK COMPLETION
• R1:
R1 (ttyd0)
login: lab
Password:
• R2:
login: lab
Password:
• R3:
login: lab
Password:
• R4:
login: lab
Password:
• R5:
login: lab
Password:
TASK 1
Configure the LSPs, defined in the following LSP tables, through
your network and ensure all LSPs are up and functional.
Note
We recommend that you include the
configuration steps for the third task while
you are configuring the first task. This
approach will save you time and effort as
you move through the tasks of this lab.
(The LSP tables for R1, R3, R4, and R5 are provided with the lab materials and are not reproduced here.)
TASK INTERPRETATION
The task appears to be a simple one, and in some respects it is. The difficult part of this task is
ensuring you properly configure each LSP and keep track of the LSPs you have configured.
A good way to track your progress is to check off each LSP as you configure it. This ensures
you do not overlook creating one of the LSPs, because the failure to configure any portion of the
task results in the loss of points for the entire task. Another aspect of this task to keep in mind
is that the LSPs must be defined exactly as shown in the LSP tables.
In this task, you configure standard individual RSVP LSPs, but looking ahead to the third task,
you know that for the LSP from R1 to R5 there are additional constraints that we need to
configure. Therefore, it makes good sense while configuring the LSP from R1 to R5 that you
combine these actions into a single configuration task. The third task requires that you configure
two unique paths to be applied to the LSP you configured to egress on R5. There is also a
requirement for the second path to be signaled and ready for use. This is accomplished by using
the standby option when creating the secondary path.
TASK COMPLETION
• R1:
[edit]
lab@R1# set protocols rsvp interface ae0
[edit]
lab@R1# set protocols rsvp interface ge-0/0/6
[edit]
lab@R1# edit protocols mpls
commit complete
Exiting configuration mode
lab@R1>
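The R1 capture above shows the RSVP interfaces but stops before the LSP definitions themselves. The following is a minimal sketch of the r1-to-r5 LSP with a primary and a standby secondary path, combining the first and third tasks as recommended in the note above. The loopback addresses come from the verification output later in this task; routing the primary through R4's loopback as a loose hop is an assumption, because the LSP tables are not reproduced here. Lines beginning with # are annotations, not CLI input.
# Named paths: the primary is constrained through R4, the secondary is left unconstrained
set protocols mpls path path-1 172.27.255.4 loose
set protocols mpls path path-2
# LSP to R5 using both paths; the standby option keeps the secondary signaled and ready for use
set protocols mpls label-switched-path r1-to-r5 to 172.27.255.5
set protocols mpls label-switched-path r1-to-r5 primary path-1
set protocols mpls label-switched-path r1-to-r5 secondary path-2 standby
# The MPLS interfaces must also be listed under protocols mpls
set protocols mpls interface ae0.0
set protocols mpls interface ge-0/0/6.0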
• R3:
lab@R3> configure
Entering configuration mode
[edit]
lab@R3# set protocols rsvp interface all
[edit]
lab@R3# edit protocols mpls
commit complete
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols rsvp interface ae0
[edit]
lab@R4# set protocols rsvp interface ge-0/0/4
[edit]
lab@R4# set protocols rsvp interface ge-0/0/5
[edit]
lab@R4# edit protocols mpls
commit complete
Exiting configuration mode
lab@R4>
[edit]
lab@R5# set protocols rsvp interface all
[edit]
lab@R5# edit protocols mpls
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Begin your verification by reviewing the status of your LSPs from the perspective of R1. If
everything is functioning well, move through the rest of the routers on which you configured
LSPs.
• R1:
lab@R1> show mpls lsp
Ingress LSP: 3 sessions
To From State Rt P ActivePath LSPname
172.27.255.3 0.0.0.0 Dn 0 - r1-to-r3
172.27.255.4 0.0.0.0 Dn 0 - r1-to-r4
172.27.255.3
From: 0.0.0.0, State: Dn, ActiveRoute: 0, LSPname: r1-to-r3
ActivePath: (none)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
Primary State: Dn
Priorities: 7 0
SmartOptimizeTimer: 180
No computed ERO.
Created: Thu Jan 29 11:30:17 2015
172.27.255.4
From: 0.0.0.0, State: Dn, ActiveRoute: 0, LSPname: r1-to-r4
ActivePath: (none)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
Primary State: Dn
Priorities: 7 0
SmartOptimizeTimer: 180
No computed ERO.
Created: Thu Jan 29 11:30:17 2015
172.27.255.5
From: 0.0.0.0, State: Dn, ActiveRoute: 0, LSPname: r1-to-r5
ActivePath: (none)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
Primary path-1 State: Dn
Priorities: 7 0
SmartOptimizeTimer: 180
No computed ERO.
Standby path-2 State: Dn
Priorities: 7 0
SmartOptimizeTimer: 180
No computed ERO.
Created: Thu Jan 29 11:30:17 2015
Total 3 displayed, Up 0, Down 3
lab@R1>
Question: Using the previous outputs from R1, why are the LSPs
down?
Answer: The answer lies with the last command that was
executed. No interfaces are participating in MPLS.
TASK CORRECTION
To correct the issue you have to enable family mpls on all interfaces that will be participating
in your MPLS network.
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit interfaces
[edit interfaces]
lab@R1# set ae0 unit 0 family mpls
[edit interfaces]
lab@R1# set ge-0/0/6 unit 0 family mpls
[edit interfaces]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R3:
lab@R3> configure
Entering configuration mode
[edit interfaces]
lab@R3# set ge-0/0/1 unit 0 family mpls
[edit interfaces]
lab@R3# set ge-0/0/2 unit 0 family mpls
[edit interfaces]
lab@R3# set ge-0/0/3 unit 0 family mpls
[edit interfaces]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# edit interfaces
[edit interfaces]
lab@R4# set ae0 unit 0 family mpls
[edit interfaces]
lab@R4# set ge-0/0/4 unit 0 family mpls
[edit interfaces]
lab@R4# set ge-0/0/5 unit 0 family mpls
[edit interfaces]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit interfaces
[edit interfaces]
lab@R5# set ge-0/0/1 unit 0 family mpls
[edit interfaces]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
Now that you have added the protocol family to the correct interfaces, you must review the state
of your LSPs. Begin with one router and then progress through the rest of the routers on which
you configured LSPs.
• R1:
lab@R1> show mpls lsp
Ingress LSP: 3 sessions
To From State Rt P ActivePath LSPname
172.27.255.3 172.27.255.1 Up 10 * r1-to-r3
172.27.255.4 172.27.255.1 Up 0 * r1-to-r4
172.27.255.5 172.27.255.1 Up 10 * path-1 r1-to-r5
Total 3 displayed, Up 3, Down 0
Answer: You should see that all your LSPs are Up and
functioning correctly. If you do not see all LSPs up, you can wait
a few minutes and try again. If they still do not come up, review your
changes and ask your instructor for help.
• R3:
lab@R3> show mpls lsp
Ingress LSP: 3 sessions
To From State Rt P ActivePath LSPname
172.27.255.1 172.27.255.3 Up 10 * r3-to-r1
172.27.255.4 172.27.255.3 Up 0 * r3-to-r4
172.27.255.5 172.27.255.3 Up 10 * r3-to-r5
Total 3 displayed, Up 3, Down 0
• R4:
lab@R4> show mpls lsp
Ingress LSP: 3 sessions
To From State Rt P ActivePath LSPname
172.27.255.1 172.27.255.4 Up 10 * r4-to-r1
172.27.255.3 172.27.255.4 Up 10 * r4-to-r3
172.27.255.5 172.27.255.4 Up 10 * r4-to-r5
Total 3 displayed, Up 3, Down 0
• R5:
lab@R5> show mpls lsp
Ingress LSP: 3 sessions
To From State Rt P ActivePath LSPname
172.27.255.1 172.27.255.5 Up 10 * r5-to-r1
172.27.255.3 172.27.255.5 Up 10 * r5-to-r3
172.27.255.4 172.27.255.5 Up 0 * r5-to-r4
Total 3 displayed, Up 3, Down 0
Answer: You should see that all your LSPs are Up and functioning
correctly.
TASK 2
R2 is not allowed to run RSVP to signal its LSPs. You must route
between R2 and R5 using an LSP. You must also ensure that the failure
of any transit router does not prevent the exchange of labels
between R2 and R5. LDP is prohibited on R3.
TASK INTERPRETATION
The task is telling you that you must configure LDP to signal LSPs, in addition to the RSVP LSPs.
As the task indicates, you are not allowed to run LDP on R3 and you must ensure redundancy.
To meet the requirements of this task, you must configure LDP tunneling through your RSVP LSP
network. You configure LDP tunneling for the LSPs from both R1 and R4 that terminate on R5.
This ensures that labels are still exchanged from R2 to R5 if there is a failure of any transit
device through the RSVP network.
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit interfaces
[edit interfaces]
lab@R1# set ge-0/0/3 unit 0 family mpls
[edit interfaces]
lab@R1# top
[edit]
lab@R1# set protocols ldp interface ge-0/0/3
[edit]
lab@R1# set protocols ldp interface lo0
[edit]
lab@R1# set protocols mpls label-switched-path r1-to-r5 ldp-tunneling
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
[edit]
lab@R2# edit interfaces
[edit interfaces]
lab@R2# set ge-0/0/1 unit 0 family mpls
[edit interfaces]
lab@R2# set ge-0/0/4 unit 0 family mpls
[edit interfaces]
lab@R2# top
[edit]
lab@R2# set protocols ldp interface all
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set interfaces ge-0/0/1 unit 0 family mpls
[edit]
lab@R4# set protocols ldp interface lo0
[edit]
lab@R4# set protocols ldp interface ge-0/0/1
[edit]
lab@R4# set protocols mpls label-switched-path r4-to-r5 ldp-tunneling
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set protocols ldp interface lo0
[edit]
lab@R5# set protocols mpls label-switched-path r5-to-r1 ldp-tunneling
[edit]
lab@R5# set protocols mpls label-switched-path r5-to-r4 ldp-tunneling
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Begin your verification by reviewing the status of your LSPs from the perspective of R1. If
everything is functioning well, move on through the rest of the routers on which you configured
LDP.
• R1:
lab@R1> show ldp interface
Interface Label space ID Nbr count Next hello
ge-0/0/3.0 172.27.255.1:0 1 4
lo0.0 172.27.255.1:0 1 0
• R2:
lab@R2> show ldp interface
Interface Label space ID Nbr count Next hello
lo0.0 172.27.255.2:0 0 0
ge-0/0/1.0 172.27.255.2:0 1 4
ge-0/0/4.0 172.27.255.2:0 1 2
• R4:
lab@R4> show ldp interface
Interface Label space ID Nbr count Next hello
lo0.0 172.27.255.4:0 1 0
ge-0/0/1.0 172.27.255.4:0 1 2
• R5:
lab@R5> show ldp interface
Interface Label space ID Nbr count Next hello
lo0.0 172.27.255.5:0 2 0
TASK 3
Ensure that the r1-to-r5 LSP has two unique paths. The primary path
should traverse R4 while the secondary path should use a different
path and be signaled and ready for use.
TASK INTERPRETATION
If you followed the instructions in the first task, you have already completed this task. If you did
not include this task when you configured your RSVP LSPs then you should complete this task
now. You can refer to the detailed steps outlined in the first task to complete this third task.
TASK 4
Configure the administrative groups, defined in the Admin Groups
table, on all RSVP routers. Apply these administrative groups to the
appropriate links as illustrated on the lab diagram. Ensure that the
r3-to-r4 LSP avoids the R3-R4 link.
Admin Groups
plat 1
gold 2
silver 3
bronze 4
TASK INTERPRETATION
This task requires you to configure the administrative groups defined in the table. Apply these
groups to the appropriate links and ensure that you apply the additional constraints to the
defined LSP r3-to-r4 by excluding the bronze admin group.
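The completion captures that follow are truncated at the [edit protocols mpls] hierarchy. As an illustration, a minimal sketch for R1 and R3 is shown here, using the group values from the Admin Groups table and the group-to-link mapping visible in the verification output for this task; lines beginning with # are annotations, not CLI input.
# R1: define the administrative groups and tag the core links
set protocols mpls admin-groups plat 1
set protocols mpls admin-groups gold 2
set protocols mpls admin-groups silver 3
set protocols mpls admin-groups bronze 4
set protocols mpls interface ae0.0 admin-group plat
set protocols mpls interface ge-0/0/6.0 admin-group gold
# R3: link tags (the same four admin-groups definitions are repeated here), plus the LSP constraint
set protocols mpls interface ge-0/0/1.0 admin-group gold
set protocols mpls interface ge-0/0/2.0 admin-group bronze
set protocols mpls interface ge-0/0/3.0 admin-group plat
set protocols mpls label-switched-path r3-to-r4 admin-group exclude bronze
The admin-groups name-to-value definitions must match on every RSVP router, so the four admin-groups statements are repeated on R3, R4, and R5 as well.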
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit protocols mpls
commit complete
Exiting configuration mode
lab@R1>
• R3:
[edit]
lab@R3# edit protocols mpls
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# edit protocols mpls
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit protocols mpls
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Begin your verification by ensuring that all MPLS interfaces have the correct administrative
groups applied. While on R3, you should also verify that the constraints that you applied to the
r3-to-r4 LSP have taken effect. You can do this by reviewing the extensive information for the
particular LSP. You may need to wait for the LSP to resignal or you can manually clear this LSP.
• R1:
lab@R1> show mpls interface
Interface State Administrative groups
ge-0/0/3.0 Up <none>
ge-0/0/6.0 Up gold
ae0.0 Up plat
• R3:
lab@R3> show mpls interface
Interface State Administrative groups
ge-0/0/1.0 Up gold
ge-0/0/2.0 Up bronze
ge-0/0/3.0 Up plat
172.27.255.4
From: 172.27.255.3, State: Up, ActiveRoute: 0, LSPname: r3-to-r4
ActivePath: (primary)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
SmartOptimizeTimer: 180
Exclude: bronze
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 15)
172.27.0.14 S 172.27.0.9 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt
20=Node-ID):
172.27.0.14 172.27.0.9
18 Jan 29 14:48:18.923 Selected as active path
17 Jan 29 14:48:18.904 Record Route: 172.27.0.14 172.27.0.9
16 Jan 29 14:48:18.904 Up
15 Jan 29 14:48:18.798 Originate Call
14 Jan 29 14:48:18.798 CSPF: computation result accepted 172.27.0.14 172.27.0.9
13 Jan 29 14:48:18.797 Clear Call
12 Jan 29 14:48:18.797 Deselected as active
11 Jan 29 14:43:56.589 Record Route: 172.27.0.18
10 Jan 29 14:43:56.589 Up
9 Jan 29 14:43:56.545 Originate Call
8 Jan 29 14:43:56.545 CSPF: computation result accepted 172.27.0.18
7 Jan 29 14:43:56.542 Clear Call
6 Jan 29 11:44:36.108 Selected as active path
5 Jan 29 11:44:36.088 Record Route: 172.27.0.18
4 Jan 29 11:44:36.088 Up
3 Jan 29 11:44:36.027 Originate Call
2 Jan 29 11:44:36.027 CSPF: computation result accepted 172.27.0.18
1 Jan 29 11:44:06.390 CSPF failed: no route toward 172.27.255.4[3 times]
Created: Thu Jan 29 11:32:20 2015
Total 1 displayed, Up 1, Down 0
Answer: The LSP should now use an alternative path to R4. This
LSP should avoid the more preferred, direct link between R3
and R4.
• R5:
lab@R5> show mpls interface
Interface State Administrative groups
ge-0/0/1.0 Up plat
ge-0/0/2.0 Up gold
TASK 5
Configure the r5-to-r1 LSP to reserve 450 Mbps of bandwidth across
the network.
TASK INTERPRETATION
This task indicates that you must assign a bandwidth reservation to the LSP that you created
from r5-to-r1.
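The completion capture below is truncated at the [edit protocols mpls] hierarchy; the single statement required on R5 would look something like this sketch.
# Request a 450 Mbps reservation along the r5-to-r1 LSP
set protocols mpls label-switched-path r5-to-r1 bandwidth 450m
After the commit, the LSP is re-signaled with the new bandwidth, which you can confirm in the Tspec of the RSVP session.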
TASK COMPLETION
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit protocols mpls
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
On R5, verify that the r5-to-r1 LSP is requesting the bandwidth and the LSP has been
signaled. You can also see the reservation by looking at the RSVP interfaces.
• R5:
lab@R5> show mpls lsp name r5-to-r1 extensive
Ingress LSP: 3 sessions
Answer: Yes, you should see that the LSP is reserving 450 Mbps
of bandwidth.
TASK 6
Create a bypass to improve convergence time for the r5-to-r1 LSP in
the event of a R4-R1 link failure. Ensure bandwidth reservation is
honored and the best available path is chosen.
TASK COMPLETION
• R4:
[edit]
lab@R4# set protocols rsvp interface ae0.0 link-protection bandwidth 450m
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set protocols mpls label-switched-path r5-to-r1 link-protection
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
TASK VERIFICATION
Begin your verification by looking at the LSP from the perspective of the ingress router (R5).
After determining that link-protection is being requested for the LSP, move to R4 and verify that
the RSVP interface you configured is creating a bypass LSP.
• R5:
lab@R5> show mpls lsp name r5-to-r1 detail
Ingress LSP: 3 sessions
172.27.255.1
• R4:
lab@R4> show rsvp interface ae0.0 extensive
ae0.0 Index 70, State Ena/Up
NoAuthentication, NoAggregate, NoReliable, LinkProtection
HelloInterval 9(second)
Address 172.27.0.9
ActiveResv 2, PreemptionCnt 0, Update threshold 10%
Subscription 100%,
bc0 = ct0, StaticBW 2Gbps
ct0: StaticBW 2Gbps, AvailableBW 1.55Gbps
MaxAvailableBW 2Gbps = (bc0*subscription)
ReservedBW [0] 450Mbps[1] 0bps[2] 0bps[3] 0bps[4] 0bps[5] 0bps[6] 0bps[7] 0bps
Protection: On, Bypass: 1, LSP: 1, Protected LSP: 1, Unprotected LSP: 0
1 Jan 29 14:52:38 New bypass Bypass->172.27.0.10
Bypass: Bypass->172.27.0.10, State: Up, Type: LP, LSP: 1, Backup: 0
3 Jan 29 14:52:39 Record Route: 172.27.0.17 172.27.0.14
2 Jan 29 14:52:39 Up
1 Jan 29 14:52:39 CSPF: computation result accepted
172.27.255.1
From: 172.27.255.4, LSPstate: Up, ActiveRoute: 0
LSPname: Bypass->172.27.0.10
LSPtype: Static Configured
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: 299792
Resv style: 1 SE, Label in: -, Label out: 299792
Time left: -, Since: Thu Jan 29 14:52:39 2015
Tspec: rate 450Mbps size 450Mbps peak Infbps m 20 M 1500
Port number: sender 1 receiver 40735 protocol 0
Type: Bypass LSP
Number of data route tunnel through: 1
Number of RSVP session tunnel through: 0
PATH rcvfrom: localclient
Adspec: sent MTU 1500
Path MTU: received 1500
PATH sentto: 172.27.0.17 (ge-0/0/5.0) 4 pkts
RESV rcvfrom: 172.27.0.17 (ge-0/0/5.0) 4 pkts
Explct route: 172.27.0.17 172.27.0.14
Record route: <self> 172.27.0.17 172.27.0.14
Total 1 displayed, Up 1, Down 0
Answer: Yes, you should see 450 Mbps for two interfaces now.
The second interface indicates that the bypass LSP is also
reserving bandwidth as the task required.
TASK 7
Ensure that all MPLS packets that transit the R1-R4 link are load
balanced across both member links of the Aggregated Ethernet bundle.
The contents of the outer label as well as the IP packet should be
used by the load balancing algorithm.
TASK INTERPRETATION
This task indicates that you must alter the hash key being used by the forwarding table when
deciding what interface next-hop to use for MPLS traffic traversing the aggregated Ethernet
interface.
Based on the requirements, you must use the first label as well as the IP payload when
calculating the physical interface to send the MPLS traffic out. You must make this configuration
change on both R1 and R4 to meet the requirements of the task.
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set forwarding-options hash-key family mpls label-1 payload ip
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set forwarding-options hash-key family mpls label-1 payload ip
[edit]
lab@R4# commit and-quit
lab@R4>
TASK VERIFICATION
Because no transit traffic is traversing your core network, you need no verification steps for this
particular task. If you configured the hash algorithm as illustrated in the detailed steps, then
everything should be working correctly.
TASK 8
Ensure that the entire core network appears as two hops for any
transit traffic.
TASK INTERPRETATION
This task indicates that you must alter the default TTL behavior. Even though all devices in your
MPLS network are running the Junos OS, you must use the no-propagate-ttl option. You
must use this option because LDP-signaled LSPs do not support the no-decrement-ttl
feature. You must configure the no-propagate-ttl option for all MPLS LSPs on all routers.
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set protocols mpls no-propagate-ttl
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
• R2:
lab@R2> configure
Entering configuration mode
[edit]
lab@R2# set protocols mpls no-propagate-ttl
[edit]
lab@R2# commit and-quit
commit complete
Exiting configuration mode
lab@R2>
• R3:
[edit]
lab@R3# set protocols mpls no-propagate-ttl
[edit]
lab@R3# commit and-quit
commit complete
Exiting configuration mode
lab@R3>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols mpls no-propagate-ttl
[edit]
lab@R4# commit and-quit
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set protocols mpls no-propagate-ttl
[edit]
lab@R5# commit and-quit
commit complete
Exiting configuration mode
lab@R5>
Note
For verification, you can traceroute through
your MPLS core using the vr-device router.
Each virtual routing instance acting as an
external provider has a loopback address
assigned to it. You can use these
addresses to verify TTL behavior. Before
verifying, you must resignal all the LSPs for
this change to take effect.
TASK VERIFICATION
Verify that the changes you made have taken effect using traceroute.
Clear your MPLS LSPs on all routers using the clear mpls lsp command. This allows the
TTL change to take effect when the sessions are re-signaled.
Move to your open session on the VR-device and verify your changes by tracerouting from one of
the virtual routers through your core network to another virtual router. For simplicity, use the
traceroute 174.100.0.1 source 177.100.0.1 routing-instance
customer2 command on the VR-device. This will traceroute through the core using the
r5-to-r1 LSP.
lab@vr-device> traceroute 174.100.0.1 source 177.100.0.1 routing-instance
customer2
traceroute to 174.100.0.1 (174.100.0.1) from 177.100.0.1, 30 hops max, 40 byte
packets
1 172.27.0.49 (172.27.0.49) 7.073 ms 7.696 ms 5.292 ms
2 172.27.0.10 (172.27.0.10) 8.747 ms 7.251 ms 7.771 ms
3 174.100.0.1 (174.100.0.1) 6.737 ms 9.255 ms 9.958 ms
Question: How many hops do you see when traversing your core
network?
Answer: You should see only two hops, the ingress and the
egress routers for your LSP.
STOP Tell your instructor that you have completed this lab.
Overview
In this lab, you will be given a list of tasks specific to implementing and troubleshooting MPLS
VPNs which you will need to accomplish within a specific time frame. You will have 3 hours to
complete the simulation.
By completing this lab, you will perform the following tasks:
• Create a Layer 3 VPN named vpn-1, connecting the following sites: CE-1, CE-2, CE-3,
and CE-4. The CE-3 and CE-4 sites peer using BGP, while CE-1 and CE-2 use OSPF
area 0. Ensure all the CE routers can ping the remote directly connected PE-CE
links.
• The CE-1 and CE-2 routers share a backdoor OSPF connection. Ensure that CE-1 and
CE-2 prefer to send traffic through the Layer 3 VPN. The internal connection between
CE-1 and CE-2 has an interface metric of 10.
• You are required to provide Internet access for vpn-1 on the R1 PE router. You are
allowed to use one static route to complete this task.
• On R1, ensure that vpn-1 traffic destined to CE-1 uses the r1-to-r5-one LSP
and traffic destined to CE-3 uses the r1-to-r5-two LSP.
• Configure a VPLS Layer 2 VPN named vpn-2 between CE-5 and CE-6 using VLAN
200. Make sure the VPN uses the RFC 4448 encapsulation and uses BGP as
the VPN signaling protocol. The maximum number of MAC addresses learned by the
VPLS domain should be limited to 500 on each PE-CE link. Ensure that broadcast
and multicast traffic will be policed to 50 Mbps for all sites before entering the MPLS
domain.
• You must extend vpn-3 connecting CE-7 to CE-8 using an inter-provider solution with
ISP-A. You must not configure a routing instance on R3. The address of the remote PE
will be learned from ISP-A. The remote PE is using the route target value of
target:60001:101. Use the information in the lab diagram for this lab to complete
this task.
Note
We recommend that you spend some time
investigating the current operation of your
routers. During the real exam, you might be
given routers that are operating
inefficiently. Investigating operating issues
now might save you a lot of time
troubleshooting strange issues later.
INITIAL TASK
Access the CLI for your routers using either the console, Telnet, or SSH as directed by your
instructor. Refer to the management network diagram for the IP address associated with your
devices. Log in as user lab with the password lab123.
TASK COMPLETION
• R1:
R1 (ttyd0)
login: lab
Password:
TASK 1
Create a Layer 3 VPN called vpn-1, connecting the following sites:
CE-1, CE-2, CE-3, and CE-4. The CE-3 and CE-4 sites peer using BGP.
The CE-1 and CE-2 are using OSPF area 0. Ensure all the CE routers
can ping the remote directly connected PE-CE links.
TASK INTERPRETATION
To complete this task, you must configure a VPN routing instance on routers R1, R5, and R4 to
connect the specified CE devices. Begin by configuring the routing instance on R5 because two
peerings exist. Include the appropriate interfaces for the VPN instance. Define a Type 1 route
distinguisher using the local loopback address, to uniquely identify the source of the route
advertisements. Define the VPN route target as target:65100:100. This target is used to
identify which MP-BGP routes to accept. Configure an external BGP peering to CE-3 from your
routing instance, using the information outlined on the lab diagram. Note that because both
sites peer using the same AS, you must configure the BGP groups with as-override.
Using this option allows the PE to advertise the remote routes into the site. Configure an OSPF
peering to the CE-1 router from your routing instance.
You must create a routing policy to export your BGP routes into OSPF on R4 and R5, so that the
routes learned from your MP-BGP and EBGP peers can be shared with the OSPF CE routers. On
R5, you must include the direct route for the interface connecting to CE-3 to ensure that this
route is sent from both R4 and R5 into the OSPF network.
Next, create a routing policy to export the OSPF routes on R5 to CE-3 through BGP. Remember to
include the directly connected network for the OSPF connection to CE-1.
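The completion captures that follow are heavily truncated, showing only the MP-BGP address families and the entry points into the routing instance and policies. As an illustration, here is a minimal sketch of what the vpn-1 instance on R5 might look like. The route target comes from the interpretation above; the CE-facing interfaces, the route-distinguisher suffix, the CE-3 neighbor address, and the customer AS number are assumed values that would come from the lab diagram. Lines beginning with # are annotations, not CLI input.
# VRF definition (RD suffix assumed; route target as stated in the interpretation)
set routing-instances vpn-1 instance-type vrf
set routing-instances vpn-1 interface ge-0/0/3.0
set routing-instances vpn-1 interface ge-0/0/4.0
set routing-instances vpn-1 route-distinguisher 172.27.255.5:100
set routing-instances vpn-1 vrf-target target:65100:100
# EBGP to CE-3; as-override lets the PE re-advertise routes that carry the customer AS
set routing-instances vpn-1 protocols bgp group ce-3 peer-as 65512
set routing-instances vpn-1 protocols bgp group ce-3 as-override
set routing-instances vpn-1 protocols bgp group ce-3 export ospf-to-bgp
set routing-instances vpn-1 protocols bgp group ce-3 neighbor 172.27.0.50
# OSPF to CE-1, exporting the BGP-learned VPN routes toward the CE
set routing-instances vpn-1 protocols ospf export bgp-to-ospf
set routing-instances vpn-1 protocols ospf area 0 interface ge-0/0/3.0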
TASK COMPLETION
• R5:
[edit]
lab@R5# set protocols bgp group internal family inet unicast
[edit]
lab@R5# set protocols bgp group internal family inet-vpn unicast
[edit]
lab@R5# edit routing-instances vpn-1
[edit]
lab@R5# edit policy-options policy-statement bgp-to-ospf
[edit policy-options]
lab@R5# edit policy-statement ospf-to-bgp
commit complete
Exiting configuration mode
lab@R5>
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set protocols bgp group internal family inet unicast
[edit]
lab@R4# set protocols bgp group internal family inet-vpn unicast
[edit]
lab@R4# edit routing-instances vpn-1
[edit]
lab@R4# edit policy-options policy-statement bgp-to-ospf
[edit]
lab@R4# edit routing-instances vpn-1
commit complete
Exiting configuration mode
lab@R4>
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# set protocols bgp group internal family inet unicast
[edit]
lab@R1# set protocols bgp group internal family inet-vpn unicast
[edit]
lab@R1# edit routing-instances vpn-1
commit complete
Exiting configuration mode
lab@R1>
TASK VERIFICATION
Begin your verification by reviewing the status of your PE to CE neighborships. To simplify the
outputs, you should include the instance option with the show command and specify the VPN
name. Review the vpn-1.inet.0 routing table to verify that you have the remote networks for
the directly connected interface. You can include the terse option to quickly see what networks
are there without all the extra detailed information.
You should also log into the VR-device and verify the routing tables for each of the CE devices.
Finally, verify that you can ping from the local CE interface to the remote CE interfaces for all of
your CE routers.
You do not need to verify every detail from each device because if it is working on one or two
routers, it should be working on all.
You might want to review the contents of the bgp.l3vpn.0 routing table to see which routes
are being learned from which PE router by using the route distinguisher that is prepended to the
prefix.
Note
During the verification phase of the first
task, you must determine which routes are
being sent from which CE device. You can
determine this by systematically reviewing
the VRF tables and isolating the routes. To
save you some time during this step, the CE
devices in your Layer 3 VPN are listed below
with the routes they should be sending:
CE-1 = 65.100.0.0/24 to 65.100.4.0/24
CE-2 = 65.100.5.0/24 to 65.100.9.0/24
CE-3 = 65.100.10.0/24 to 65.100.14.0/24
CE-4 = 65.100.15.0/24 to 65.100.19.0/24
• R5:
lab@R5> show bgp summary instance vpn-1
Groups: 1 Peers: 1 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
vpn-1.inet.0
29 13 0 0 0 0
vpn-1.mdt.0
0 0 0 0 0 0
• R1:
lab@R1> show bgp summary instance vpn-1
Groups: 1 Peers: 1 Down peers: 0
• R4:
lab@R4> show ospf neighbor instance vpn-1
Address Interface State ID Pri Dead
172.27.0.42 ge-0/0/3.0 Full 65.100.255.2 128 34
TASK 2
The CE-1 and CE-2 routers share a backdoor OSPF connection. Ensure
that CE-1 and CE-2 prefer to send traffic through the Layer 3 VPN.
The internal connection between CE-1 and CE-2 has an interface
metric of 10.
TASK INTERPRETATION
To complete this task, you must ensure that the VPN connection appears as an internal route,
which allows you to alter the link metric to make the VPN more preferred than the existing
connection between CE-1 and CE-2 to allow the VPN to appear as a internal link, you must
configure a sham link between R4 and R5. As a requirement for sham links, you must include a
loopback address. Configure a secondary loopback unit using 65.100.255.14 on R4 and
65.100.255.15 on R5. Add this interface to the VPN. The loopback interface address is used as
the local and remote address for the sham link. Finally, you must add a metric to the sham link
that is lower than the existing connection between CE-1 and CE-2, which has a metric of 10.
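The completion captures below show only the loopback addressing; the sham-link statements themselves are not captured. A minimal sketch of the R4 side is shown here (R5 mirrors it with the addresses reversed); the metric of 5 is an assumption, chosen only because it is lower than the backdoor metric of 10. Lines beginning with # are annotations, not CLI input.
# Secondary loopback unit inside the VRF, used as the sham-link endpoint
set interfaces lo0 unit 1 family inet address 65.100.255.14/32
set routing-instances vpn-1 interface lo0.1
# Sham link from R4 (local) to R5 (remote) with a metric lower than the backdoor link
set routing-instances vpn-1 protocols ospf sham-link local 65.100.255.14
set routing-instances vpn-1 protocols ospf area 0 sham-link-remote 65.100.255.15 metric 5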
TASK COMPLETION
• R4:
lab@R4> configure
Entering configuration mode
[edit]
lab@R4# set interfaces lo0.1 family inet address 65.100.255.14
[edit]
lab@R4# edit routing-instances vpn-1
commit complete
Exiting configuration mode
lab@R4>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set interfaces lo0.1 family inet address 65.100.255.15
[edit]
lab@R5# edit routing-instances vpn-1
commit complete
Exiting configuration mode
lab@R5>
TASK 3
You are required to provide Internet access for vpn-1 on the R1 PE
router. You are allowed to use one static route to complete this
task.
TASK INTERPRETATION
To complete this task, you must create a static route in the main instance that encompasses the
VPN networks (65.100.0.0/16) with the next-table action pointing to vpn-1.inet.0.
Advertise this static route into your IBGP network using an export policy; this policy allows
Internet traffic to reach your VPN. Because you do not have any EBGP peers for R1 in this lab,
you can simply export this route by adding a new term to your next-hop-self policy. Alternatively,
you could create a new export policy and apply it to your internal IBGP group. Next, you create a
RIB group designed to copy routes from inet.0 into the vpn-1.inet.0 routing table.
Finally, you must apply this RIB group to your IBGP, OSPF, and interface routes in the main
instance.
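The completion capture below shows the static route and the RIB group but not the export term that advertises the aggregate into IBGP. A minimal sketch of such a term follows; the policy name nhs is an assumption standing in for whatever the existing next-hop-self policy is called in your configuration.
# Advertise the static aggregate covering the VPN address space to the IBGP peers
set policy-options policy-statement nhs term vpn-internet from protocol static
set policy-options policy-statement nhs term vpn-internet from route-filter 65.100.0.0/16 exact
set policy-options policy-statement nhs term vpn-internet then accept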
TASK COMPLETION
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit routing-options
[edit routing-options]
lab@R1# set static route 65.100.0/16 next-table vpn-1.inet.0
[edit routing-options]
lab@R1# set rib-groups rib-1 import-rib [inet.0 vpn-1.inet.0]
[edit routing-options]
lab@R1# set interface-routes rib-group rib-1
[edit routing-options]
lab@R1# top edit protocols
[edit protocols]
lab@R1# set ospf rib-group rib-1
[edit protocols]
lab@R1# set bgp group internal family inet unicast rib-group rib-1
[edit protocols]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
TASK VERIFICATION
Begin your verification by reviewing the vpn-1.inet.0 routing table on R1 to verify that you
now have all the Internet routes. Next, verify that you have the Internet routes in the
vpn-1.inet.0 routing table on R5. While on R5, verify that you have the 65.100.0.0/16 route
in the inet.0 routing table. Once you have verified the routes are present, ping from the main
instance to the loopback address on R5 that is assigned to the routing instance. This action can
be accomplished using the ping 65.100.255.15 count 5 command. This command will
illustrate that you can pass traffic from the main instance through R1 into the VPN to R5. You
can do additional verification if you want.
• R1:
lab@R1> show route table vpn-1.inet.0 terse
• R5:
lab@R5> show route table vpn-1.inet.0 terse
Answer: Yes, you should be able to ping from the main instance
into the VPN.
TASK 4
On R1, ensure that vpn-1 traffic destined to CE-1 uses the
r1-to-r5-one LSP and traffic destined to CE-3 uses the r1-to-r5-two
LSP.
TASK INTERPRETATION
To complete this task, you must create two additional unique communities on R5. You must add
these communities to the routes learned from each of the CE neighbors before advertising
them through MP-BGP to the other PE routers. Remember to also create and add the target
community to these routes before you accept and advertise them to your MP-BGP peers, and
remember to include the direct routes when adding the communities to the BGP routes. To add
additional communities to your MP-BGP routes, you must manually create vrf-export and
vrf-import policies on R5 and remove the vrf-target statement.
You must then create a policy on R1 to alter the next-hop LSP in the forwarding table based on
which community tag is present in the BGP route. You must also define the communities and
values on R1.
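The R1 capture below defines the communities and enters the set-lsp policy but does not show its terms or how the policy is applied. A minimal sketch of the R1 side follows, using the community and LSP names given in this task; the term names are arbitrary, and lines beginning with # are annotations, not CLI input.
# Pin routes carrying each community to a specific LSP next hop
set policy-options policy-statement set-lsp term to-ce-1 from community ce-1
set policy-options policy-statement set-lsp term to-ce-1 then install-nexthop lsp r1-to-r5-one
set policy-options policy-statement set-lsp term to-ce-3 from community ce-3
set policy-options policy-statement set-lsp term to-ce-3 then install-nexthop lsp r1-to-r5-two
# Apply the policy to the forwarding table so it influences next-hop selection
set routing-options forwarding-table export set-lsp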
TASK COMPLETION
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# edit policy-options
[edit policy-options]
lab@R5# set community vpn-1 members target:65100:100
[edit policy-options]
lab@R5# set community ce-1 members 65100:1
[edit policy-options]
lab@R5# set community ce-3 members 65100:3
[edit policy-options]
lab@R5# edit policy-statement vpn-export
commit complete
Exiting configuration mode
lab@R5>
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit policy-options
[edit policy-options]
lab@R1# set community ce-1 members 65100:1
[edit policy-options]
lab@R1# set community ce-3 members 65100:3
[edit policy-options]
lab@R1# edit policy-statement set-lsp
[edit]
lab@R1# commit and-quit
commit complete
Exiting configuration mode
lab@R1>
TASK VERIFICATION
You can easily verify this task on R1 by reviewing the selected next hops for the CE prefixes
advertised by the R5 router in the VRF routing table. Routes from CE-1 should show only a
next-hop of LSP r1-to-r5-one and routes learned from CE-3 should show only the
next-hop of r1-to-r5-two.
• R1:
lab@R1> show route table vpn-1.inet.0
TASK 5
Configure a VPLS Layer 2 VPN named vpn-2 between CE-5 and CE-6 using
VLAN 200. Make sure the VPN uses the RFC 4448 encapsulation and uses
BGP as the VPN signaling protocol. The maximum number of MAC
addresses learned by the VPLS domain should be limited to 500 on each
PE-CE link. Ensure that broadcast and multicast traffic will be
policed to 50 Mbps for all sites before entering the MPLS domain.
TASK COMPLETION
• R2:
[edit]
lab@R2# set interfaces ge-0/0/3 vlan-tagging
[edit]
lab@R2# set interfaces ge-0/0/3 encapsulation extended-vlan-vpls
[edit]
lab@R2# set interfaces ge-0/0/3 unit 200 vlan-id 200 family vpls
[edit firewall]
lab@R2# set policer policer-1 if-exceeding bandwidth-limit 50m
[edit firewall]
lab@R2# set policer policer-1 if-exceeding burst-size-limit 1m
[edit firewall]
lab@R2# set policer policer-1 then discard
[edit firewall]
lab@R2# edit family vpls filter police-vpls
commit complete
Exiting configuration mode
lab@R2>
• R5:
lab@R5> configure
Entering configuration mode
[edit]
lab@R5# set interfaces ge-0/0/5 vlan-tagging
[edit]
lab@R5# set interfaces ge-0/0/5 encapsulation extended-vlan-vpls
[edit]
lab@R5# set interfaces ge-0/0/5 unit 200 vlan-id 200 family vpls
[edit]
lab@R5# edit protocols bgp group internal
[edit firewall]
lab@R5# set policer policer-1 if-exceeding bandwidth-limit 50m
[edit firewall]
lab@R5# set policer policer-1 if-exceeding burst-size-limit 1m
[edit firewall]
lab@R5# set policer policer-1 then discard
[edit firewall]
lab@R5# edit family vpls filter police-vpls
commit complete
Exiting configuration mode
lab@R5>
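The captures above cover the interface encapsulation and the policer but not the VPLS routing instance itself, the MAC limit, or the filter application. A minimal sketch of what the R2 side of vpn-2 might look like is shown here; the route-distinguisher and route-target values are assumed, the site name and identifier come from the verification output that follows, and lines beginning with # are annotations, not CLI input.
# BGP must carry the l2vpn signaling family for BGP-signaled VPLS
set protocols bgp group internal family l2vpn signaling
# VPLS instance (RD and target values assumed)
set routing-instances vpn-2 instance-type vpls
set routing-instances vpn-2 interface ge-0/0/3.200
set routing-instances vpn-2 route-distinguisher 172.27.255.2:200
set routing-instances vpn-2 vrf-target target:65100:200
set routing-instances vpn-2 protocols vpls site ce-5 site-identifier 5
# Limit MAC learning on the PE-CE attachment circuit to 500 addresses
set routing-instances vpn-2 protocols vpls interface ge-0/0/3.200 interface-mac-limit limit 500
# Apply the broadcast/multicast policing filter on the CE-facing unit
set interfaces ge-0/0/3 unit 200 family vpls filter input police-vpls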
TASK VERIFICATION
Begin your verification on R2 by reviewing the VPLS connections. After verifying that your VPLS
session is up and functioning, move to the VR-device and use the ping utility to ping through your
newly created VPLS connection.
• R2:
lab@R2> show vpls connections
Layer-2 VPN connections:
Instance: vpn-2
Local site: ce-5 (5)
connection-site Type St Time last up # Up trans
6 rmt Up Jan 27 02:30:21 2015 1
Remote PE: 172.27.255.5, Negotiated control-word: No
Incoming label: 262150, Outgoing label: 262157
Local interface: lsi.1048576, Status: Up, Encapsulation: VPLS
Description: Intf - vpls vpn-2 local site 5 remote site 6
• VR-device:
lab@vr-device> ping 51.100.0.2 routing-instance CE-5 count 5
PING 51.100.0.2 (51.100.0.2): 56 data bytes
64 bytes from 51.100.0.2: icmp_seq=0 ttl=64 time=10.073 ms
64 bytes from 51.100.0.2: icmp_seq=1 ttl=64 time=10.594 ms
64 bytes from 51.100.0.2: icmp_seq=2 ttl=64 time=11.183 ms
64 bytes from 51.100.0.2: icmp_seq=3 ttl=64 time=7.548 ms
64 bytes from 51.100.0.2: icmp_seq=4 ttl=64 time=14.563 ms
TASK 6
You must extend vpn-3 connecting CE-7 to CE-8 using an inter-provider
solution with ISP-A. You must not configure a routing instance on
R3. The address of the remote PE will be learned from ISP-A. The
remote PE is using the route target value of target:60001:101. Use
the information in the lab diagram for this lab to complete this
task.
TASK COMPLETION
• R3:
[edit]
lab@R3# set interfaces ge-0/0/4 unit 0 family mpls
[edit]
lab@R3# edit protocols bgp group internal
commit complete
Exiting configuration mode
lab@R3>
• R1:
lab@R1> configure
Entering configuration mode
[edit]
lab@R1# edit routing-instances vpn-3
commit complete
Exiting configuration mode
lab@R1>
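The R1 capture above enters the vpn-3 instance but does not show its contents. A minimal sketch of the import side on R1 follows; the remote route target comes from the task text, while the interface, route-distinguisher, and export target are assumptions that would come from the lab diagram. Lines beginning with # are annotations, not CLI input.
# Import the remote provider's route target so the remote site's routes land in vpn-3 (other values assumed)
set routing-instances vpn-3 instance-type vrf
set routing-instances vpn-3 interface ge-0/0/5.0
set routing-instances vpn-3 route-distinguisher 172.27.255.1:300
set routing-instances vpn-3 vrf-target import target:60001:101
set routing-instances vpn-3 vrf-target export target:60001:101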
TASK VERIFICATION
• R1:
lab@R1> show bgp summary
Groups: 4 Peers: 7 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0 1 1 0 0 0 0
bgp.l3vpn.0 42 42 0 0 0 0
inet.3 1 1 0 0 0 0
• VR-device:
lab@vr-device> ping 85.100.255.1 routing-instance CE-7 count 5
PING 85.100.255.1 (85.100.255.1): 56 data bytes
64 bytes from 85.100.255.1: icmp_seq=0 ttl=64 time=12.549 ms
64 bytes from 85.100.255.1: icmp_seq=1 ttl=64 time=18.650 ms
64 bytes from 85.100.255.1: icmp_seq=2 ttl=64 time=8.579 ms
64 bytes from 85.100.255.1: icmp_seq=3 ttl=64 time=9.561 ms
64 bytes from 85.100.255.1: icmp_seq=4 ttl=64 time=7.556 ms
STOP Tell your instructor that you have completed this lab.