SDX Product Training v5

NetScaler SDX is a highly differentiated application delivery controller (ADC) designed for multitenancy and consolidation, offering isolated instances with significant performance and scalability benefits. It supports independent lifecycles for applications and tenants, ensuring compliance and operational efficiency while reducing costs. The SDX product line includes various platforms with flexible licensing options to accommodate different deployment needs and maximize resource utilization.


NetScaler SDX

Sales Training
Why SDX

SDX Product Line

Under the Hood

Performance and Scalability

Networking

Use Cases

Workflow
Citrix TriScale Technology

• Scale up – Pay-As-You-Grow
• Scale in – Simplicity with Many-In-One
• Scale out – Elasticity and Expandability
NetScaler SDX – Highly Differentiated ADC

• SDX > 10% of NetScaler product sales in under 9 months

• You get all the benefits of NetScaler

• Designed specifically to meet multitenancy AND consolidation requirements

• Isolated multitenancy with up to 40:1 consolidation
True Multitenancy

• Isolated VPX per tenant


• Memory, CPU hardwalling
• Separate entity spaces
• Version independence
• Maintenance independence

• Completely isolated networks


• Single point of control (SVM)
• Fully contained networking appliance
Key Capabilities

• Integrated instance management for multitenancy control

• SR-IOV for reliably high performance and bandwidth hardwalling

• NIC-level VLAN support for L2 segmentation

• Hardware assist for fast SSL processing

Pendulum of Compromise

Multitenancy (Sprawl) vs. Consolidation (Chaos)
Drivers

• Independent per-app, per-tenant lifecycles

• New network topologies from app & DC overhaul

• Compliance and low OPEX still reign

• SLAs, ROI equally critical to success

Qualifying Questions

• Identifying Strategic Imperatives


○ Are you virtualizing your datacenter(s)?
○ Are you implementing new “flat” network topologies?
○ Are you required to make your network “multi-tenant”?

• More Tactical, LB-Specific


○ Are you concerned about how L4-7 networking services will be deployed in new DC topologies?
○ Do you have many “underutilized” ADCs you’d like to consolidate?
○ Are you concerned that if you consolidate, SLAs will suffer?

• Specific to XenApp / XenDesktop


○ How are you delivering XA/XD today?
Pendulum of Compromise

Multitenancy (Sprawl) vs. Consolidation (Chaos)
Applications Have Different Owners and Needs

[Diagram: applications such as Finance, Commerce, Collaboration, Manufacturing, Sales/Service, Network Comms and Desktop have different owners (desktop admins, LoB specialists) and different needs – throughput, functionality, policies, service levels]
L4-7 is Different

• Resource consumption is independent of the number and size of packets
• Pushing packets vs. processing payloads
Applications Have Individual Lifecycles

Maintenance windows

Infrastructure change frequency

Application change frequency

Desire for new ADC functionality


[Diagram: a dedicated LB pair per application/tenant]
Networks are Evolving

Traditional → Cloud-Centric

Virtualized Datacenter = Consolidation


Virtualized Data Centers Drive Flat Networks

Traditional – Pod per App (Inefficient)
• Segmented hierarchical network
• Most data flows North/South
• Network services deployed by pod

Cloud-centric – Any App, Any Pod
• Much East/West traffic
• Network services layer must span pods
• Driving virtualization and consolidation

[Diagram: App A, App B, App C – partition per app/tenant vs. shared instance]
Reality Checks 1 & 2

• Segmentation for compliance
• Drive down operational costs
• Compliance vs. OPEX

[Diagram: External DMZ – AppFW, SSL VPN; Internal DMZ – AppFW, LB; Internal & Lab – LB]
Reality Check 3 – How to Meet SLAs?

Capacity

Isolation
Partitions Fail for True Isolation

• 1 shared instance – partitioned by rate limits, RBA and ACLs
• Partitions NOT fully isolated
• No CPU or memory isolation
• No lifecycle independence
• No high availability independence

[Diagram: one ADC hosting Tenant 1–4 partitions]
Multitenancy vs. Consolidation

Multitenancy (M)
• SLAs – high isolation & RAS
• Lifecycle – app and tenant independence
• Compliance – high isolation, segmentation

Consolidation (C)
• CAP/OP/ROI – less hardware, high utilization
• Efficiency – high consolidation & utilization
• Management – single-point provisioning
Multitenancy vs. Consolidation

Multitenancy (Sprawl) vs. Consolidation (Chaos)
How to Avoid Tradeoffs

• Multitenancy (Lifecycle, Compliance, SLA) – traditionally delivered by a pair per app/tenant
• Consolidation (Efficiency, Management, ROI) – traditionally delivered by a shared instance with partitions
• SDX sits between the two and delivers both
NetScaler SDX

• CPU, memory, IO virtualization
○ XenServer + Intel + SR-IOV NICs
• Independent instances, versions
○ Direct hardware access
• Service VM
○ Single point for management
• HW-level SSL isolation
• HA across devices

[Diagram: Service VM and multiple NetScaler VPX instances on XenServer, running on NetScaler hardware]
Comparison of Options

                         Pair per App/Tenant   Shared Instance Partitions   SDX
Resource isolation       Strong                Weak                         Strong
Lifecycle isolation      Strong                Weak                         Strong
Delegated admin safety   OK                    Weak                         Strong
Efficiency               Weak                  Strong                       Strong
CAPEX/OPEX               Weak                  Strong                       Strong
Ease of mgmt             OK                    Strong                       Strong
Performance              Strong                Unpredictable                Strong
Scalability              Strong                Strong                       Strong
Very Compelling Economics

Scenario: 10 HA pairs installed, four new pairs needed

                      Individual NetScaler Devices   SDX
1-year product        $535K                          $305K
3-year product        $935K                          $394K
Space (rack units)    28                             4
Power (Watts)         6,400                          1,300
Heat (BTUs)           22,000                         4,450
Property Management Company
Upsell NS for XA/XD to NetScaler SDX for ALL load balancing
Customer: a large diversified property group, actively managing a portfolio of assets including
residential communities, retirement living, shopping centres, office and industrial assets.

Account Situation
• MPX 5500 for XA/XD
• Heavy Cisco skills; N/W team initially favored F5

Engagement Strategy & Winning Formula
• Leverage senior architecture relationship with XA/XD customer lead
• Consolidate ISA, LB for Exchange/SharePoint, LB and GSLB in DMZ
• Use integrated GSLB to reduce complex BGP config for datacentre failover
• Anticipate and head off aggressive F5 EoQ deal

Results
• Upsold from MPX 5500 to 3 x SDX 11500

Deal Size: $125,000


Webscale Deployment
NetScaler SDX provides “instance per game” with full isolation
Customer: an industry leader in “free to play” massively multiplayer online games

Account Situation
• Wanted an “ADC for each game property”
• Cisco (ACE) incumbent
• Big SPLUNK user

Engagement Strategy & Winning Formula
• Highlight key feature differences between ACE and NetScaler
• Leverage SPLUNK relationship
• Used competitive trade-in of Cisco ACE for additional discounts

Results
• 4 x SDX 11500
• SDX used in DMZ in two data centers
• Follow-up opportunity for two more pairs for internal database load balancing

Deal Size: $200,000


Enterprise DMZ Redesign
Datacenter consolidation with NS SDX for security zone redesign
Customer: an integrated healthcare delivery system with ~20,000 employees.

Account Situation
• 2 pairs of 10500 EE, 2 pairs of 7000 MPX
• NetScaler is the incumbent and standard
• Network redesign into multiple security zones

Engagement Strategy & Winning Formula
• Used competitive trade-in discounts for existing competitive load balancers that were being consolidated to NetScalers

Results
• Sold 2 pairs of SDX 11500 and 4 Enterprise-to-Platinum upgrades for 10500
• Gave customer capability to reduce their datacenter footprint
• Provided customer with various security zones (internal, external and database load balancing)

Deal Size: $383K


Qualifying Questions Revisited

• Identifying Strategic Imperatives


○ Are you virtualizing your datacenter(s)?
○ Are you implementing new “flat” network topologies?
○ Are you required to make your network “multi-tenant”?

• More Tactical, LB-Specific


○ Are you concerned about how L4-7 networking services will be deployed in new DC topologies?
○ Do you have many “underutilized” ADCs you’d like to consolidate?
○ Are you concerned that if you consolidate, SLAs will suffer?

• Specific to XenApp / XenDesktop


○ How are you delivering XA/XD today?
SDX Product Line
Available Platforms

• Corinth – 11500, 13500, 14500, 16500, 18500, 20500


○ 8 to 42 Gbps, max 20 instances

• Constantinople – 17500, 19500, 21500


○ 20 to 50 Gbps, max 20 instances

• Galata – 17550, 19550, 20550, 21550


○ 20 to 50 Gbps, max 40 instances
NetScaler SDX Licensing

• Platform license – entitles base SDX appliance


○ Default 5 instances allowed with base platform license

• 5-Instance Add-On Pack license (Instance Pay-Grow)


○ Enables adding of additional VPX instances, beyond the default 5

• Platform Upgrade license (Platform Pay-Grow)


○ Upgrade to higher throughput capacity on same hardware platform

• Platform Conversion license


○ Change MPX appliance to SDX appliance (not applicable for FIPS, 9500, 7500, 5500)
NetScaler SDX Licensing Example

• Purchased system = MPX 11500


• Apply platform conversion license
○ MPX 11500 → SDX 11500

• Apply platform license


○ up to 5 VPX instances, max system throughput 8 Gbps

• Add 3 x 5-pack instance licenses


○ add up to 20 VPX instances – max system throughput still 8 Gbps

• Apply SDX-11500-to-SDX-20500 platform upgrade license


○ max system throughput increases to 42 Gbps
SDX Licensing – Enforced Resource Parameters

• Total throughput (platform, upgrade, conversion licenses)


○ Defined at time of instance provisioning by SVM

• Per-instance rate limits


○ Instance throughput enforced by each individual instance

• Total number of instances (platform + add-on pack licenses)


○ Enforced by SVM based on # of instance packs + default base 5 instance allowance
SDX Licensing – Caveats

• Platinum Edition only for SDX


• Platform License, Add-On Pack, Platform Upgrade MUST be applied via SVM
• MPX-to-SDX Platform Conversion license is NOT applied via SVM
○ Requires installation of SSD, then applying of new platform license via SVM
○ http://support.citrix.com/article/CTX129423

• Certain SDX Platform Upgrade paths will require multiple SKUs


○ E.g., MPX 11500 to SDX 13500
○ MPX 11500 upgrade to SDX 11500
○ SDX 11500 upgrade to SDX 13500
Licensing

• Apply multiple pay-grow licenses without reboot


SDX Maintenance License Types

• Normal NetScaler maintenance SKUs


○ 1, 2, 3 Years @ Bronze, Silver, Gold

• Per-month platform upgrade license


○ For syncing maintenance cycles only – no per-month SKUs for product
Release Timeline (Major Milestones)

• SDX has been shipping for over 1 year


○ Over 10% of Citrix networking revenue

• 2nd major SDX release coming Q2 2013


○ Authentication, advanced networking, scalability and density

• 10 SDX releases since initial production release


○ 6 maintenance releases, 4 focused on enhancements
Release Timeline (Major Milestones) – cont’d

• 9.3, Build 48.6 – initial SDX production release


• 9.3, Build 49.5 – MPX-to-SDX conversion, performance testing
• 9.3, Build 50.3 – SVM UI and user account mgmt improvements
• 9.3, Build 51.5 – interface reliability, force shut down/reboot
• 9.3, Build 52.3 – internal database improvements
• 9.3, Build 53.5 – networking, admin security, scheduling tasks
• 9.3, Build 54.4 – manual backup / restore, tech support, IF mgmt
Release Timeline (Major Milestones) – cont’d

• 9.3, Build 55.6


○ Interface management improvements, instance reliability

• 9.3, Build 54.5006.e


○ Clock sync
○ SSL cert installation
○ SSO to SVM and VPX instances
○ Upgrading XenServer
Release Timeline (Major Milestones) – cont’d

• 9.3, Build 56.5 – major usability improvements


○ Progress bar
○ networkconfig utility
○ Provision / modify and setup wizards
○ Factory reset
○ Instance memory upgrade
Release Timeline (Major Milestones) – cont’d
• Version 10.0, Build 54.7
○ Multi-PE VPX support (SW RSS)
○ Dedicated / shared core assignment
○ Backup / Restore
○ XenServer 6.0 Upgrade
○ L2 mode and VMAC support
○ NTP Sync
○ Network config utility
○ Provisioning status
○ New look and feel (GUI)
Release Timeline (Major Milestones) – cont’d

• 9.3, Build 56.5006.e


○ Maintenance release

• 9.3, Build 57.5


○ SNMPv2
○ Tagged VLAN without NSVLAN
○ Streamlined, secure SVM-to-VPX and SVM-to-XS communications

• 10.0, Build 69.4


○ System health monitoring & reporting
○ VPX change management tracking via SVM
Under the Hood
What is SDX?

• Fully independent NetScaler instances on a networking appliance


• Leverages MPX hardware platform, XenServer, and HV
○ MPX – proven platform with SSL hardware assist
○ XenServer – fully integrated VM control
○ VT-d → direct access to hardware resources
○ SR-IOV → fast network I/O, eliminates latency from the vswitch

• Aggregate throughput and SSL processing comparable to MPX


• Bandwidth hardwalling can be achieved per instance
SDX: Multi-tenant NetScaler Appliance

• Instances (NetScaler 1, 2, 3, …) and the ServiceVM are separate VMs
• Separate management networks – management traffic goes through the vSwitch in the virtualization layer
• Data plane uses SR-IOV (bypasses the vSwitch)
• Physical ports: 0/1, 0/2, 1/1–1/8, 10/1–10/4
Architecture

• Hardware / Virt Layer – NetScaler MPX hardware platform


○ SR-IOV-capable NICs, Intel VT-x & VT-d enabled CPU, Cavium Nitrox for SSL offload

• Virtualization Layer – Citrix XenServer hypervisor


○ Only management ports 0/1 and 0/2 use vSwitch

• Management Service (SVM) – SDX appliance management


○ Pre-provisioned, GUI-based secure VM lifecycle management, PV drivers

• NetScaler VPX – virtualized NetScaler instances


○ Full HVM on SR-IOV dataplane interfaces – different from VPX on OTS XenServer
Hardware Virtualization

• VT-x – Intel x86 CPU virtualization


• VT-d – Intel virtualization for directed I/O
○ VPX instances directly access NIC & HD/SSD via I/O MMU
○ Done via direct memory access (DMA), interrupt remapping

• SR-IOV – single-root I/O virtualization


○ PCI-SIG standard for native PCI-E device sharing
I/O Virtualization

• Each VPX instance is assigned a virtual function (VF0 for Instance 1, VF1 for Instance 2, …) with its own RX and TX queues
• Inbound
○ MAC filtering – phase 1
○ VLAN filtering – phase 2
○ Queue the packet if both are passed
• Outbound
○ NIC fetches the packet directly from the TX queue and transmits it
• VM-to-VM on same L2 domain
○ The Virtual Ethernet Bridge forwards from the TX queue of one instance to the RX queue of another instance
○ The SR-IOV NIC’s Virtual Ethernet Bridge bypasses hypervisor dom0
Bypassing dom0

• NIC-to-VPX – DMA action – “move Rx packet to VM”


○ Intel chipset (VT-d) does address translation, packet DMA’d into VPX’s VF driver buffer

• NIC-to-XenServer – MSI-X interrupt – “Rx packet move complete”


○ Hypervisor receives MSI-X from NIC

• XS posts virtual interrupt to VPX


○ Hypervisor notifies destination VPX to process packet
XenServer Customizations on SDX

• Initial inventory – lists system devices, disables access to XenCenter


• Management plugins – XAPI-based plugins for SVM management of SDX
• Networking changes – default VLAN handling
• Driver modifications – allows NS-specific modes on NICs
• Dom0 tools for health monitoring – smartmontools, hdparm, ipmitool
• XS console/ssh root login via user-defined IP, for troubleshooting
○ User: root, Password: <svm-nsroot-password>

• XenCenter not supported for managing SDX VMs


Management Service (SVM)

• Single point of management via HTTP & HTTPS


○ License SDX appliance and VPX instances
○ Configure SDX appliance network topology
○ Provision, upgrade/downgrade, modify, delete, monitor VPX instances

• Default management ports: 0/1, 0/2


○ 192.168.100.1/24, nsroot / nsroot

• Access via XenServer console for troubleshooting


○ ssh from XS IP to 169.254.0.10, nsroot credentials
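• For illustration, assuming the defaults above, from the XenServer console:
○ ssh nsroot@169.254.0.10   (reach the SVM over its internal address with the nsroot credentials)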
NetScaler VPX for SDX

• NetScaler VPX image for XenServer (.XVA container)


• Functionally identical to NetScaler binary deployed on MPX bare metal
• Obtain from NetScaler SDX download site

• Same underlying licensing mechanism, but packaged & applied differently


Intra-SDX Communications

• SVM-to-VPX – uses NITRO API


○ Uses login credentials of ns_root_admin account (not changeable)
○ SVM network connectivity via vSwitch (hard wired to 0/1 and 0/2)

• SVM-to-XenServer – uses XenServer Management API (XML-RPC)


○ Authenticated VM / resource access over https and ssh
○ Via plugins @ Dom0

• VPX-to-VPX – uses switch fabric of NIC (Virtual Ethernet Bridge)


○ If NSIPs on different networks, then external route required
Physical & Logical CPU Cores on SDX

• Dual-socket 6-core Intel Xeon CPU


○ Corinth = E5645 (2.4GHz)
○ Constantinople & Galata = X5680 (3.33GHz)

• Total 12 physical cores per SDX


○ 6 per socket; 2 sockets

• Total 24 logical cores per SDX


○ 2 hyperthreads (HTs) per physical core; 1 logical core (LC) = 1 hyperthread
SDX Control Plane

• 1 NetScaler Management Engine (ME) per VPX instance


○ MEs do not require a dedicated LC

• Service VM – 1 dedicated LC + 1 reserved LC


○ 1 SVM per SDX appliance

• 1 XenServer Dom0 instance – 1 dedicated LC + 1 reserved LC


○ XenServer management via XenCenter not supported

• 2 physical cores total for control plane → 20 logical cores left for the data plane
SDX Data Plane – 9.3.x

• Traffic processing done by Packet Engines (PE)


○ Packet processing element inherent to NetScaler

• Fixed PE allocation per VPX in 9.3


○ SDX 9.3 VPX instance can support exactly 2 vCPUs – 1 PE + 1 ME
○ → Limit of 1 PE per VPX instance
SDX Data Plane – 10.0.x

• If instance is given dedicated CPU cores, multiple PEs per VPX instance
○ An instance can be allocated up to 5 physical cores
○ If instance is given dedicated cores then:
○ 1 PE per core
○ ME will share a core with one of the PEs

• Receive-side scaling (RSS) leverages multi-core architecture


○ Primary PE (controller) distributes Rx flows to optimal PE(s) for fast packet processing
SDX Data Plane – Shared Cores

• Shared = PEs from multiple instances share a logical core


○ Default assignment type in SDX 9.3
○ Admin-assigned in SDX 10.0
○ Admin can control max density vs. max instance size based on need

[Diagram: shared-core layout in 9.3 (classic) vs. 10.0 (nCore)]


Dedicated Core – SDX 10.0 Only

• Physical (not logical) cores can be dedicated to a VPX instance


○ Provides CPU resource isolation
○ Max 5 cores per instance
○ 2048 MB memory required per dedicated core

• Core(s) reserved strictly for PE(s) of assigned VPX


○ SVM will ensure all cores for an instance are on same CPU socket
○ SVM will not allow an instance to have cores on both CPU sockets
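• Worked example: a dedicated instance with the maximum 5 cores therefore needs at least 5 × 2048 MB = 10 GB of memory reserved for it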
Core Allocation: 1 to 10 Instances, Shared

[Diagram: 12 physical cores across 2 CPU sockets; PE01–PE10 each on their own logical core, XenServer (XS) and the SVM each on a dedicated logical core with its sibling hyperthread reserved, ME01–ME10 on the remaining logical cores (LC12–LC23)]

SDX 9.3 (default) and 10.0 (shared only)
- PE and ME core allocation is the same up to 10 instances
- Up to 10 instances, each instance has its own dedicated physical core
Core Allocation: 11 to 20 Instances, Shared

[Diagram: 12 physical cores across 2 CPU sockets; PE01–PE20 pinned to logical cores, ME01–ME20 floating]

SDX 9.3.x (default) and 10.0 (shared only) – physical core sharing from 11 to 20 instances:
- PEs are assigned to a specific logical core
- MEs are given a floating allocation, scheduled based on resource availability
Core Allocation: 21 to 40 Instances, Shared (Galata Platforms Only)

[Diagram: 12 physical cores across 2 CPU sockets; PE01–PE40 share logical cores, up to 2 PEs per LC]

SDX 9.3 (default) and 10.0 (shared only) – LC sharing from 21 to 40 instances:
- PEs assigned up to 2 per LC – context switching between PEs
- No fixed mapping based on provisioning order – PEs will run wherever resource is available
- MEs continue to float – scheduled based on available resource
SDX Data Plane – Considerations

• VPX instances are auto-allocated to cores


○ Shared – no resource contention up to 10 instances
○ CPU-based workload performance could degrade after 10 instances
○ Dedicated – no resource contention for dedicated instances

• Current licensing ensures that:


○ On Corinth and Constantinople – no more than 1 instance per LC
○ On Galata – no more than 2 PEs per LC
SDX Data Plane – Considerations – cont’d

• Only Galata platforms can have more instances than LCs


○ Corinth and Constantinople – max 20 instances per appliance
○ Galata – max 40 instances per appliance

• Context switching occurs when 3+ PEs assigned to a single physical core


○ On Galata – with >20 instances, up to 2 PEs share a single LC
Max Instance Density vs. Max Instance Size

• Max instance density on any SDX platform = 40


○ 9.3 & 10.0 → 2 PEs per LC → 20 LCs → max 40 VPX instances
○ Corinth & Constantinople instance density limits are license-enforced @ max. 20
○ Galata licensing allows up to max. 40 instances
○ 41st instance – SVM will not allow

• Max instance size


○ 9.3 → max 2 vCPUs per instance (1 ME, 1 PE) → 1 PE per instance
○ 10.0 → N PHY cores per instance → 1 ME, N PEs per instance
○ SVM only allows up to 5 dedicated cores → max 5 PEs per instance
Core Allocation: Dedicated + Shared (10.0)

[Diagram: 12 physical cores across 2 CPU sockets; a dedicated 5-core VPX instance on PHY cores 01–05, a dedicated 2-core VPX instance on PHY cores 08–09, XS on core 06, SVM on core 12, shared instances on the remaining cores]

Example: 1 dedicated 5-core instance + 1 dedicated 2-core instance + 7 shared instances
- PHY cores 01 through 05 reserved for the 5-core instance → 1 ME, 5 PEs
- PHY cores 08 and 09 reserved for the 2-core instance → 1 ME, 2 PEs
- Shared instances share PHY cores 07, 10, 11
- LC07 (on PHY core 07) context switches between PE10 and PE19
CPU Usage – Hypervisor View vs. Instance View

• Example: “vpx1” – 4-core dedicated VPX instance on SDX


○ 4 LCs on 4 different PHY cores on same CPU socket

• Corinth & Constantinople: ½ of PHY core (1 LC) is reserved by PE


○ XenServer sees Ave Core Usage stay @ 50% (PE) + “true” usage (ME)
○ Instance sees “true” CPU usage for PE + ME
CPU Usage – Hypervisor vs. Instance – cont’d

• Example: “vpx1” – 4-core dedicated VPX instance on SDX


○ 4 LCs on 4 different PHY cores on same CPU socket

• Galata: PE idles if not passing traffic → could get switched off the LC


○ XS sees same CPU utilization as instance sees
Multi-PE Support – Considerations

• Only supported on NetScaler 10.0 / SDX 10.0


○ Requires both VPX 10.0 image and SVM 10.0 build

• 2048 MB memory required per PE


○ Provisioning Wizard returns error if memory inadequate

• Shared instances can have only 1 PE


○ Dedicated instances can have 1 to 5 PEs



Multi-PE Support – Considerations (cont’d)

• 1 PE per Physical Core


• Entire physical core is reserved

• 10.0 SVM will allow 9.3 instance to be “assigned” 2 PEs → only 1 PE utilized
○ Upgrading instance to 10.0 → assigns 2 PEs to that instance

• Improves CPU tasks (compression, RPS performance)


○ Also improves network I/O characteristics



Performance
• SDX is not PPS rate limited
○ Additional SR-IOV layer impacts PPS performance but does not impact platform throughput
○ HTTP RPS and CPS performance is lower than MPX on a platform basis
○ Individual model performance will be similar at the lower end of each platform

• CPU based workloads


○ Compression, app firewall performance are based on number of packet engines
○ Leverages nCore architecture for scalability

• SDX uses all cores on all models


○ 9.3 – lower-density deployments (shared cores)
○ 10.0 – core usage is function of instance density and # of dedicated cores
Performance Characterization – Methodology

• SDX – additional variables = more ways to stress the system


○ Instance density
○ Instance size

• Instance density
○ All instances are symmetrically configured

• Instance size
○ Single nCore instance across varying resource configurations (1 to many PEs)
Galata Single-Instance Performance (21550)

                 Single VPX instance      MPX (for comparison)
                 1 Core      4 Cores      5500      7500      9500      11500
Tput (Gbps)      15.3        24.6         0.5       1         3         8
HTTP RPS         300k        1m           50,000    100,000   200,000   800,000
SYNs/sec         990k        4.4m         0.35m     0.5m      1m        3m
CMP (Mbps)       1,000       3,800        500       1,000     2,000     2,500
SSL TPS          13k         51.8k        5,000     10,000    20,000    80,000

Galata – Multiple-Instance Performance (21550)
                 # of NetScaler Instances (1 Packet Engine per Instance)
                 1           10
Tput (Gbps)      15.3        45.4
HTTP RPS         300k        2.4m
SYNs/sec         990k        8.6m
CMP (Mbps)       1,000       9.5k
SSL TPS          13k         288k

- Testing performed with 1 dedicated core per instance, up to 10 instances
- For 20 and 40 instances, the system allocates core sharing across instances
SDX Networking
SDX Networking

• Physical NIC ports can be assigned to VPX instances


○ SR-IOV virtual functions enable NIC sharing – each VF mapped to a VIF on a VPX

• VLANs can be configured within VPX instance


○ Standard 802.1Q tagging options within NetScaler interface

• NICs perform MAC and VLAN filtering


○ 63 VLAN filters per 10G interface; 31 VLAN filters per 1G interface
○ Disabling VLAN filters on an interface → 4096 VLANs

• SVM and instance NSIPs can be on different IFs or networks


○ Management or data interfaces, subnets

• 10.0 features VMAC, L2 Mode Support


○ Not supported on 9.3

• Manual link aggregation supported on 9.3 and 10.0


○ No LACP support
Corinth Interface Topology

• Management interfaces: 0/1, 0/2
• 1G data interfaces: 1/1 – 1/8
• 10G data interfaces: 10/1 – 10/4


Maximum Virtual Interface and VLANs

Each 1G Interface
• Virtual Interfaces: 7
• VLANs (w/ VLAN Filtering): 31
• VLANs (w/out VLAN Filtering): 4094



Maximum Virtual Interface and VLANs

Each 10G Interface


• Virtual Interfaces: 63
• VLANs (w/ VLAN Filtering): 63
• VLANs (w/out VLAN Filtering): 4094



VLAN Filtering

• L2 segmentation for instances sharing a physical interface


○ VPX1 on VLAN100, VPX2 on VLAN200, VPX1+VPX2 on port 1/x or 10/x

• Configure VLAN Filtering on interfaces from SVM


○ Make sure interface is not disabled for VLAN filtering

• Add VLAN on VPX instance listening on an interface


○ VLAN filters automatically added to interface as VLANs are created on instance

• Future: VLAN whitelists for defining VLANs from SVM


○ Prevents delegated instance admin from adding a VLAN he should not access



L2 Mode

• Requires XenServer supplemental pack


○ How to install: http://support.citrix.com/article/CTX132877/

• Multiple instances can be in L2 mode


○ E.g., VPX1 and VPX2 on different interfaces or different VLANs

• SVM has authoritative control over L2 mode


○ Can be enabled on the VPX only if configured during provisioning

• Configuration step in Create / Modify Instance wizard


○ Select “Allow L2 mode” on the Interfaces tab



L2 Mode – Cont’d

• Default disabled



VMACs

• specify comma separated list of VRIDs



VMACs – cont’d

• VRIDs must first be configured on SVM GUI


○ Configured on each VPX interface

• Add / Delete from SVM is applied immediately


○ No VPX reboot required

• Configure VRID on VPX


○ Same as MPX



VMACs – cont’d

• Typical config:
○ add vrid 10 -prio 100 …
○ bind vrid 10 -ifnum 10/4
○ set ns ip 192.168.1.1 -vrid 10

• VMAC format used in NetScaler


○ IPv4 00-00-5e-00-01-{VRID}
○ IPv6 00-00-5e-00-02-{VRID}
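○ Worked example: VRID 10 → IPv4 vMAC 00-00-5e-00-01-0a (0x0a = 10 decimal)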

• MAC filters are limited



Link Aggregation

• No LACP support
• Manual only



Network Isolation

• Management plane network isolation

• Data plane network isolation



Simplest Deployment

[Diagram: ServiceVM and Instances 1–5 with the ServiceVM and NSIPs on the same network (10.1.1.x); no sharing of data interfaces; ports 0/1, 0/2, 1/1–1/8, 10/1–10/4]


Simplest Deployment

• ServiceVM and NSIPs on same network (10.1.1.x)
• Deployments where compliance is not a concern
• Deployments where all instances are in the same security zone
• No sharing of data interfaces
• Instance density limited to number of physical interfaces
• Data plane isolation achieved via no sharing of physical interfaces
• No need for SDX to filter VLANs, given no sharing of physical interfaces
• 4096 VLANs per interface and instance


Further Management Plane Isolation Options
Separate Networks for Service VM and NSIPs

[Diagram: ServiceVM on 10.1.1.x, NSIPs of Instances 1–5 on 10.1.2.x; no sharing of data interfaces]


Separate Networks for Service VM and NSIPs

• Device admin doesn’t want instance admins on the Service VM network
• Deployments where all instances are in the same security zone
• ServiceVM on 10.1.1.x; instance NSIPs on 10.1.2.x
• No sharing of data interfaces
• Instance density limited to number of physical interfaces
• Data plane isolation achieved via no sharing of physical interfaces
• No need for SDX to filter VLANs, given no sharing of physical interfaces
• 4096 VLANs per interface and instance


Multiple Networks/VLANs for NSIPs

[Diagram: ServiceVM on 10.1.1.x; NSIPs of Instances 1–6 spread across multiple networks/VLANs (VLAN10, VLAN20, VLAN5, VLAN6)]
Further Data Plane Isolation Options
Sharing Physical Interfaces without VLAN Filtering

[Diagram: ServiceVM on 10.1.1.x; Instances 5 and 6 both carry VLANs 5 and 6 and share interface 10/4; VLAN Filtering NOT enabled on the 10/4 interface]
Sharing Physical Interfaces without VLAN Filtering

• Compliance not an issue
• Need a lot of instances, need a lot of VLANs per instance
• Same admin for Service VM and all instances sharing an interface
• Instance density limited only by platform maximum
• 4096 VLANs per interface
• SDX will forward all VLAN traffic on an interface to all instances on that interface
• Example: Instances 5 and 6 share 10/4, both on VLANs 5 and 6 – VLAN Filtering NOT enabled on 10/4
Sharing Physical Interfaces with VLAN Filtering

[Diagram: ServiceVM on 10.1.1.x; Instances 5 and 6 share interface 10/4, on VLAN5 and VLAN6 respectively; VLAN Filtering enabled on the 10/4 interface]


Sharing Physical Interfaces with VLAN Filtering

• Need more instances than physical ports
• Scenarios where conserving switch ports is important
• Instance density limited only by platform maximum
• SDX will NOT forward VLAN5 traffic to Instance 6
• VLANs/interface limited when VLAN filtering is used
• VLAN filtering can be enabled/disabled interface by interface
• Example: Instances 5 and 6 share 10/4 – VLAN Filtering enabled on 10/4


SDX Workflow
SDX Appliance Installation

• LCD Panel / Keypad & NMI button – inactive on SDX appliances


• 2 ways to initially configure SVM or XenServer
○ DB9 port for serial console access
○ 0/1 management port for initial login (ssh for CLI, http or https for GUI)

• Default SVM IP – 192.168.100.1


○ default login credentials are nsroot / nsroot

• Default XS IP – 192.168.100.2
○ default login credentials are root / nsroot
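• For illustration, with the factory defaults above, initial access from a workstation on the 192.168.100.0/24 network could look like:
○ https://192.168.100.1   (SVM GUI, nsroot / nsroot)
○ ssh nsroot@192.168.100.1   (SVM CLI)
○ ssh root@192.168.100.2   (XenServer dom0)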
Assigning Network Parameters
SDX Default User Account

• Change nsroot default user account password on SVM


Add Users

• 2 permission levels – superuser (admin) or readonly (no config changes)


• Local authentication
Configuring Clock Synchronization

• Sync SDX local time with NTP server


• Clock sync config not affected by restart or upgrade / downgrade
• Not propagated to secondary HA node
SNMP Traps on SDX

• SDX SNMP agent can generate asynchronous SNMP traps


○ Triggered by abnormal conditions

• Traps sent to SNMP trap listener


○ Signals abnormal conditions
SNMP Trap Destinations

• Destination server must be IPv4 address


• Port must match UDP port on trap listener – default 162, min. value 1
• Community password string can include letters, #s, symbols
○ Hyphen, period, dash, space
○ Also supported:
○ At (@)
○ Equals (=)
○ Colon (:)
○ Underscore (_)
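• For comparison, a trap destination on a regular NetScaler instance is added from its CLI; a minimal sketch (IP address and community string are placeholders – on the SDX itself, trap destinations are configured from the SVM GUI as described above):
○ add snmp trap generic 192.0.2.50 -destPort 162 -communityName public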
MIB Object Definitions

• NetScaler enterprise MIB for SNMPv2 managers and trap listeners


○ SDX-MIB-smiv2.mib file from Downloads on SDX appliance

• NetScaler SDX-specific events


○ CPU / memory utilization
○ SVM-to-XenServer communication / reachability
○ Firmware validation
○ Hardware / software validation
○ Fan / temp / electrical / PS condition
○ Interface state
SDX User Login Password

• Only change user passwords via the SVM
• Credentials apply for both XenServer and SVM
• Do not change user credentials via XenServer
Installing SSL Certificates

• Upload SSL Certificate to SVM and install


○ Requires Certificate File, Key File, and Password

• Re-login to SVM required


○ All SVM sessions terminated
Viewing SSL Certificate Details

• 9.3e, 10.0 only


Applying Licenses

• Platform license file required at a minimum


○ Enables 5 allowed VPX instances – without platform license, 0 allowed

• Add-On Pack license file


○ 5 additional allowed VPX instances

• Upload platform, add-on licenses


○ Can apply all licenses at once

• Backup license files


○ Download to local machine
Managing SDX Interfaces

• Configure physical interfaces, or reset to default values


○ All interfaces Auto-Negotiate by default
VLANs on SDX

• Within instance 1 – configure VLAN yellow on port 1/4
• Within instance 2 – configure VLAN red on port 1/4
• NIC 1/4 sees two VLANs – yellow and red
○ Yellow goes to instance 1
○ Red goes to instance 2
• SDX programs VLANs configured in instances as filters in the NIC
○ 10G interface supports 63 VLAN filters
○ 1G interface supports 31 VLAN filters
Configuring VLAN Filtering

• Default enabled on 9.3 and 10.0


○ Cannot disable without upgrading hypervisor to XS 6.0

• Configured per physical interface


○ Individual or batch selection

• Applies to associated VPXs


○ Can auto-reboot upon config change
○ Default no auto-reboot
Configuring VLAN Filtering – cont’d

• After adding interfaces to VLAN filter, configure VLANs on VPXs


○ Added to VLAN filter

• Packets filtered / dropped at the NIC


○ NIC acts as VLAN-aware L2 classifier / sorter

• When VLAN filter disabled, all instances on data plane see broadcast traffic
○ NIC acts as virtual ethernet bridge – packets dropped at the VPX
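• A minimal sketch of the instance-side step (VLAN ID and interface number are only examples); as the VLAN is created and bound on the VPX, the corresponding filter is programmed on the NIC as described above:
○ add vlan 100
○ bind vlan 100 -ifnum 1/1 -tagged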
Static (Manual) Link Aggregation

• 802.3ad link aggregation supported at VPX instance level


• Switch ports must be statically configured – LACP must be disabled
○ Etherchannel port group (etherchannel) or virtual port channel (VPC)
Static (Manual) Link Aggregation – cont’d

• Configure link aggregation (LA) channel on VPX


○ Physical interfaces corresponding to etherchannel / VPC specified within the LA channel

• Once configured, LA channel on VPX assigned single MAC address

[Diagram: VPX1 LA channel = MAC 1, VPX2 LA channel = MAC 2]
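• A minimal sketch on the VPX (interface numbers are only examples); the upstream switch ports must already be in a static etherchannel / VPC with LACP disabled:
○ add channel LA/1 -ifnum 10/1 10/2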
Static LA – Unicast, VLAN Filtering Disabled

Switch forwards packet with DEST MAC 1 on Etherchannel


Packet arrives at PHY NIC 10/1 or 10/2
NIC applies MAC filter, forwards to queue for VPX1
Packet arrives at VPX1 on vNIC 10/1 or 10/2
VLAN policy applied at VPX1

Static LA – Unicast, VLAN Filtering Enabled

Switch forwards packet with DEST MAC 1 on Etherchannel


Packet arrives at PHY NIC 10/1 or 10/2
NIC applies MAC filter, VLAN filter, forwards to queue for VPX1
Packet arrives at VPX1 on vNIC 10/1 or 10/2
VLAN policy applied at VPX1

NIC will apply L2 sorting/classification, which indicates which queues to


forward to.

Static LA – Broadcast, VLAN Filtering Disabled

Switch broadcasts VLAN100 packet on Etherchannel


Packet arrives at PHY NIC 10/1 or 10/2
NIC broadcasts packet to queues for VPX1 & VPX2
Packet arrives at VPX1 (10/1 or 10/2) & VPX2 (10/1 or 10/2)
VLAN policy applied at VPX1 and VPX2
Packet dropped at VPX2

Static LA – Broadcast, VLAN Filtering Enabled

Switch broadcasts VLAN100 packet on Etherchannel


Packet arrives at PHY NIC 10/1 or 10/2
NIC applies MAC filter, VLAN filter
NIC forwards packet to queue for VPX1 only
Packet arrives at VPX1 (10/1 or 10/2)
VLAN policy applied at VPX1

The SR-IOV capable NIC in the SDX will apply a layer 2


classification that includes MAC lookup and VLAN filtering, prior
to forwarding to any proper destination queue(s).

Viewing SDX Appliance Properties

• Click on “Configuration” or “Monitoring” tab


Viewing Real-Time Appliance Throughput

• Click on “Monitoring” tab  Throughput


• Incoming and Outgoing traffic
• Plotted in real-time
• Regular interval updates
Viewing CPU and Memory Usage

• Plotted in real time at regular interval update


• Click for CPU, memory, both
○ Default both
Viewing CPU Usage for All Cores

• Shows committed / available CPU resource from hypervisor perspective


System Health Monitoring

• Detects errors in monitored components on SDX appliance


○ HW & SW resources
○ Physical & virtual disks
○ Hardware sensors
○ Fan
○ Temperature
○ Voltage
○ Power supply sensors
○ Interfaces
Monitoring Resources on SDX

• For software components, all values are “NA” except BMC firmware version
• “Status” field
○ Applies to hardware and BMC firmware version
○ For calls to XenServer – ERROR means the SVM cannot reach XenServer via API, HTTP, PING, or SSH
○ For “Health Monitor Plugin” – ERROR means the plugin is not installed on XenServer

• “Expected Value” field


○ Does not apply to software calls to XenServer
Monitoring Resources on SDX – cont’d
Monitoring Storage Resources on SDX

• Details displayed for


○ Disk (physical disks)
○ Storage Repository (virtual disks or physical disk partitions)

• “Transactions / s” – number of blocks being read or written per second


○ Read from iostat output
Monitoring Hardware Sensors on SDX Appliance

• For Fan Speed “Status” field – state (condition) of the fan


○ ERROR = deviation, NA = fan not present

• For Temperature “Status” field


○ ERROR = current value out of range

• For Voltage “Status” field


○ ERROR = current value out of range

• For Power Supply “Status”


○ ERROR = only 1 PS connected / working
○ OK = both PSes connected & working
Monitoring Interfaces on SDX

• VFs Assigned / Total


○ # of virtual functions assigned to an interface
○ Up to 7 VFs per 1G interface
○ Up to 40 VFs per 10G interface

• All Tx / Rx statistics
○ Count indicates since appliance last started
Managing Client Sessions

• View or End client sessions (login sessions) to the SVM


○ Cannot end a session from the client initiating that session
Configuring Policies

• Backup and data-pruning policies for logged data


○ Event logs, audit logs, task logs

• Default 3-day pruning – configurable
○ Runs daily at 00:00
• Default 3 backups – configurable
○ Runs daily at 00:30
Restarting the Management Service

• Reboots SVM only – no downtime for VPX instances


Upgrading the Management Service

1. Upload SVM software image
2. Upload SVM documentation files
3. Apply upgrade
Upgrading the Management Service – cont’d

• File formats must be .tgz


○ Both SVM Software Image and SVM Documentation Files set

• Upgrade to later SVM build only – no downgrade


○ Factory reset restores SVM build that shipped with SDX appliance

• Delete old version files on SDX after backup and upgrade


○ Download to local client machine for backup
Upgrading XenServer

1. Upload XS software distribution
2. Apply upgrade
Upgrading XenServer – cont’d

• File format for XS software image must be .iso


• Upgrade to later XS version only – no downgrade
○ Factory reset restores XS version that shipped with SDX appliance

• Delete old version files on SDX after backup and upgrade


○ Download to local client machine for backup

• XS Upgrade available on recent SDX releases


○ 9.3-54.5006.e and later
○ 10.0 and later

• XenServer 6.0 required for L2 mode, VMAC support, disable VLAN filtering
Backing Up / Restoring SDX Configuration Data

• Auto backup runs at 00:30 AM daily (system policy)


• Run manual backup to immediately backup config data
• Restore config data via backup file
○ Restore appliance – restores XS, SVM, all VPXs
○ All instances – restores only VPX instances
○ Specified instances – restores only user-selected VPXs
Backing Up / Restoring SDX Config Data – cont’d
Factory Reset

• 3 choices to restore the SDX appliance to factory settings


○ Reset (Without Network Configuration) – current network config PRESERVED
○ Reset (With Network Configuration) – current network config DELETED
○ Reset Appliance – everything deleted (see next slide)

• Restoring from backup requires:


○ SDX appliance backup file, SSL certs and keys
○ License files, technical archive files
○ SVM & VPX files – builds, docs, XVA; XS – ISO
○ VPX instance configuration files
Factory Reset Actions
Factory Reset Actions – cont’d
Performing a Factory Reset

• Backup all files and save off-appliance


• Apply factory reset
Generating Tar Archive for Technical Support

• For submission of data and stats to Citrix Technical Support


• 3 modes: XS, SVM, XS+SVM
• Download to local client machine and send to Citrix Technical Support
Rebooting the SDX Appliance

• Shuts down all VPX instances


• Restarts XenServer
○ Boots SVM, all VPXs
Shutting Down the SDX Appliance
• Graceful hardware shutdown from SVM
• Disconnecting power supplies not recommended
• If the following are inaccessible:
○ SVM CLI
○ XS CLI
○ XS via serial port

• If unresponsive, use NMI button


○ Core dump + reboot
Managing Time Zones
• Default time zone is UTC
Modify System Settings
• Specify HTTPS-only for SVM-to-VPX communication
• Only allow HTTPS access to SVM GUI
Admin Profiles
• Define user / password for SDX admin login to VPX from GUI or CLI
○ Also used by SVM for provisioning VPX and retrieving VPX config data

• Default admin profile cannot be modified or deleted


○ VPX assigned default admin profile, unless user-defined admin profile is specified

• All password changes must be made from SVM


○ Changing password from VPX will break SVM-to-VPX communication
Admin Profiles – cont’d
• Create user-defined admin profile
• In HA – change password for secondary VPX first, primary VPX second
○ Both changes must be made from respective SVM(s)

• Can only delete admin profiles if not attached to an instance


SDX Admin Roles

SDX Admin (permission level: Appliance Admin)
• SVM access: Yes; VPX access: Yes
• Defined in SVM Users; stored on SVM
• Full VPX config: Yes

Instance Admin (permission level: VPX Admin)
• SVM access: depends on use case; VPX access: Yes
• Defined in SVM Admin Profile; stored on SVM and VPX
• Full VPX config: Yes

Instance Admin – No Networking (permission level: VPX Superuser)
• SVM access: No; VPX access: Yes
• Defined in SVM Instance Add / Modify Wizard; stored on VPX
• Full VPX config: No
SDX Admin Roles – Example
• SDX admin creates user-defined admin profile on SVM
○ vpx1-admin / password1 → used by the SDX admin to log in to the VPX as nsroot / password1
○ SVM will also be using nsroot and password1

• SDX admin provisions VPX, creates username & password for VPX admin
○ vpx1-admin-no-network / password2

• SDX admin can use vpx1-admin login for full config of VPX
○ VPX admin can use vpx1-admin-no-network login for limited config of VPX

• Only “vpx1-admin-no-network” appears in user accounts list on VPX


○ Do not change the nsroot password from the VPX instance
Uploading NetScaler .XVA Images
• File format: NSVPX-XEN-<release#>-<build#>_nc.xva
○ E.g.: NSVPX-XEN-10.0-54.7_nc.xva

• SDX ships with default 9.3 image


Adding a NetScaler Instance
• NSVLAN is VLAN to which subnet of NSIP address is bound
○ NSIP subnet available only on interfaces associated with NSVLAN

• Typically, SVM and VPX instance are in same subnet


○ Communication is over a management interface

• SVM and VPX can reside on different subnets


○ VLAN tag must be specified when provisioning VPX so the instance is reachable
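• For reference, a minimal sketch of the equivalent NetScaler CLI (VLAN and interface values are only examples); on SDX, the SVM provisioning wizard applies this when the NSVLAN tag is specified:
○ set ns config -nsvlan 100 -ifnum 1/1 -tagged YES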
Adding a NetScaler Instance – cont’d

• SVM and VPX instances on different subnets


○ Uncheck 0/1 and 0/2 interfaces
○ Select “Tagged Interfaces” in NSVLAN settings
Provisioning Status



Adding a NetScaler Instance – Considerations
• Multiple network interfaces can be assigned to an instance
○ For each TAGGED interface, specify a VLAN ID

• Interface ID #s for instances do not correspond to NIC numbering on SDX


○ E.g., SDX PHY interface 1/4 could appear as virtual interface 1/1, 1/2, or 1/x on instance

• Non-0 VLAN ID on instance IF → all packets from instance will be tagged


○ Tag IF with VLAN ID to ensure incoming packets of that VLAN ID forwarded to instance

• IFs can receive packets with several VLAN tags (trunking)


○ Specify VLAN ID = 0 for the IF, and specify required VLAN IDs for instance IF
Adding a NS Instance – Considerations (cont’d)
• Reboot ALL instances after modifying resource allocation on ANY instance
○ Could impact performance

• SSL cores cannot be shared


• Image and User Name parameters cannot be modified
• User Name / Password
○ Root user name for NS instance admin – superuser, but cannot configure VLANs / IFs

• NSVLAN ID parameter values


○ Min = 2, Max = 4095
Creating a MIP or SNIP on a NetScaler Instance
• SNIP or MIP can be assigned to instance from SVM
○ Done after provisioning the instance

• SNIP (subnet IP) – connection management and server monitoring


○ Not mandatory to specify a SNIP when initially configuring instance

• MIP (mapped IP) – used for server-side connections


○ MIP is a default SNIP – used when SNIP not available, or when USNIP mode disabled
○ Can be created or deleted without rebooting instance
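• For reference, the equivalent commands on the instance CLI are (addresses are only placeholders):
○ add ns ip 10.1.2.10 255.255.255.0 -type SNIP
○ add ns ip 10.1.2.11 255.255.255.0 -type MIP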
Creating a MIP or SNIP on a NetScaler Instance
Saving Configuration on Instances
• Save running configuration of instance from SVM
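• The equivalent on the instance CLI:
○ save ns config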
Uploading SSL Cert & Key Files to SDX Appliance
• SSL transactions on instances require valid certificate and private / public key pair
• Certs and keys must be uploaded to SDX appliance before installing to instances
• Can be downloaded to local computer for backup
Installing SSL Certificate on NetScaler Instance(s)
• Admin can apply to 1 or more instances after uploading SSL cert file & key
Updating SSL Certificate on a NetScaler Instance
• Cert file, key file, and cert format can be updated via SVM
• IP address and certificate name cannot be modified
• Password protection for PEM format only
Polling for SSL Certs on the NetScaler Instance
• SVM polls for new SSL certs installed on an instance from that instance
• Specify polling interval for SVM to check for new certs on all instances
• Real-time polling for immediate list of all SSL certs on all instances
Upgrading a NetScaler Instance
• Upload build file to upgrade an existing instance – .tgz format
○ E.g., build-10.0-54.7_nc.tgz

• Upload XVA file to install a new instance – .xva format


○ E.g., NSVPX-XEN-10.0-54.7_nc.xva

• Upload documentation set for all NS versions that are on SDX -- .tgz format
○ E.g.,ns-10.0-54.7-doc.tgz
Upgrading a NetScaler Instance
• Upgrade 1 or many instances -- check software versions first*
• To ensure running config persists, save config on instance before upgrading

*Today, only Platinum Level 5 Instance Add-On Packs supported


Managing a NetScaler Instance
• Operational tasks for NS instance management via SVM:
○ Start
○ Shut down
○ Reboot
○ Delete
Allowing L2 Mode on a NS Instance
• NS can receive and forward packets for non-owned MAC addresses
• NS will act as learning bridge and forward all packets not destined for itself
• Some features require L2 mode
○ E.g., CloudBridge

• Bridging loops can occur – take precautions
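• Once “Allow L2 mode” has been set for the instance from the SVM, L2 mode is turned on from the instance itself, e.g.:
○ enable ns mode l2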


L2 Mode on a NS Instance – PRECAUTIONS

• On 1/x interfaces, untagged packets must be allowed on only one instance


○ For all other instances enabled on the same interface, select TAGGED

• Select TAGGED for all interfaces assigned to instances in L2 mode


○ TAGGED interfaces cannot receive untagged packets
○ Log onto that instance and configure a VLAN to receive packets on that interface

• Shared 1/x and 10/x IFs, L2 mode allowed:


○ Enable VLAN filtering on all interfaces and ensure each IF is on a different VLAN
○ Only one instance can receive untagged packets on that IF
○ If IF is assigned to other instances, select TAGGED on that IF for those instances
L2 Mode on a NS Instance – PRECAUTIONS

• If untagged packets allowed for instance on 1/x IF, L2 mode allowed:


○ No other instance can receive untagged packets on that IF, regardless of L2 mode

• If untagged packets allowed for instance on 1/x IF, L2 mode not allowed:
○ No instance with L2 mode allowed can receive untagged packets on that IF

• For an L2 mode instance (VPX1) on 0/x IF shared with VPX2:


○ Select TAGGED for all other 1/x and 10/x IFs that are assigned to VPX2

• For L2 mode enabled on instance associated with 0/1 and 0/2:


○ Only 0/1 OR 0/2 can be associated with another L2 mode enabled instance
○ Cannot associate 0/1 AND 0/2 with more than one L2 mode enabled instance
Configuring L2 Mode on a NS Instance
VMACs

• HA primary node owns all floating IPs (MIP, SNIP, VIP addresses)
○ Responds to ARPs with its own MAC – upstream device ARP table has primary’s MAC

• Upon failover, secondary node takes over as primary


○ Uses gratuitous ARP to advertise floating IPs acquired from primary, with its own MAC

• Some devices do not accept gratuitous ARP


○ Old IP-to-MAC mapping persists – can result in connectivity loss

• vMAC – floating virtual MAC address, identical on both HA nodes


○ Does not rely on updating upstream switch / router
Configuring VMACs on an Interface

• Add 8-bit virtual router ID (VRID) during instance provision / modify


○ IPv4 and IPv6 supported – possible values for VRID = 1 to 255

• SVM internally generates vMAC for that interface


VMACs on SDX – Considerations

• VMACs supported for Active-Active and Active-Standby configurations


○ For Active-Active, same VRID must be specified on instances

• VRID must be added from SVM


○ If VRID added directly from instance, instance cannot Rx packet with VMAC as dest MAC

• Same VRID can be used on different instances on 10G interfaces


○ VLAN filtering must be enabled & instances must belong to different tagged VLANs

• Cannot use same VRID on different instances on 1G interfaces


VMACs on SDX – Considerations (cont’d)

• Cannot add / delete VRIDs for IF assigned to instance, if instance is running


• In Active-Active, more than 1 VRID allowed for an IF assigned to instance
• Max 86 vMACs allowed per 10G interface
• Max 16 vMACs allowed per 1G interface
• If no more vMAC filters are available, reduce # of VRIDs on another instance
Viewing VPX Instance Properties on SDX
Viewing Running Config of an Instance
Viewing Saved Config of an Instance
View Running / Saved Instance Config – cont’d

• New “Change Management” view


• Only in release 10.0-69.4 and later
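• For reference, the running configuration can also be inspected from the instance CLI:
○ show ns runningConfig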
Pinging an Instance
Traceroute of an Instance
Manually Rediscovering an Instance

• Default auto-rediscover to fetch latest configuration and state every 30 min


Viewing Audit Logs

• View / sort all tasks performed by the SVM, logged in appliance database
Viewing Task Logs

• View tasks performed by SVM on instances – upgrade, install SSL cert, etc.
Viewing Task Device Logs

• View and track progress and status of tasks performed on each instance
Viewing Events

• View events generated by SVM for tasks performed on the SVM
