
Two-Tier Server Access

Configure Two-Tier access switches as VSX pairs to provide redundant multi-chassis link aggregation (MC-LAG) connections upstream to the core and downstream to data center initiator hosts and target arrays.


Configure Access VSX ISL Interface

To establish a VSX relationship between each pair of access switches, a link aggregation (LAG) interface must be created and assigned as the VSX data plane’s inter-switch link (ISL). Standardizing on a single ToR switch model allows the same ports to be configured for the VSX ISL on all access switches at the UI group level.

Step 1 On the left navigation menu, click DC-RSVCORE, then click the data center access switch group name in the Groups column.

Step 2 On the left navigation menu, click Devices.

Step 3 At the upper right of the Switches pane, click Config.

Step 4 In the Interfaces tile, click Ports & Link Aggregations.

Step 5 Scroll to the right of the Ports & Link Aggregations table, and click the + (plus sign) in the upper right.

Step 6 On the Add LAG page, assign the following values:

  • Name: lag256
  • Description: VSX_ISL_LAG
  • Port Members: 1/1/49, 1/1/50
  • Speed Duplex: <no value> (default)
  • VLAN Mode: trunk
  • Native VLAN: 1 (default)
  • Allowed VLANs: <no value> (default)
  • Admin Up: checked
  • Aggregation Mode: LACP Active
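
For reference, the Add LAG settings above correspond approximately to the following CLI on the access switches. This is a sketch based on standard AOS-CX syntax; Central generates the actual configuration from the UI values.

interface lag 256
    no shutdown
    description VSX_ISL_LAG
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
interface 1/1/49
    lag 256
interface 1/1/50
    lag 256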

Step 7 In the Ports & Link Aggregations table’s title row, click the left arrow to return to the main configuration page.

Note: The remaining VSX configuration steps are completed using the HPE Aruba Central MultiEdit configuration tool.

Spanning Tree

MC-LAGs provide loop prevention in a Two-Tier architecture. Spanning tree (STP) is configured as a backup loop-prevention mechanism to guard against cabling errors between initiator hosts or target arrays and the ToR switches.

Step 1 In the Bridging tile, click Loop Prevention.

Step 2 In the Loop Prevention window, set the Spanning Tree Region to RSVDC. Leave all other settings at their defaults, then click SAVE.
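
For reference, the Spanning Tree Region setting corresponds approximately to the following MSTP CLI. This is a sketch using standard AOS-CX commands; all other MSTP parameters remain at their defaults.

spanning-tree mode mstp
spanning-tree config-name RSVDC
spanning-tree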

Enter MultiEdit Configuration

Step 1 At the upper left of the Switches pane, click the MultiEdit enable slider.

Step 2 Select all access switches in the Devices lists, then click EDIT CONFIG.

Configure Lossless Ethernet Features

Lossless Ethernet is enabled by deploying a coordinated set of QoS components covering packet classification, prioritization, queueing, and transmission, with Link Level Flow Control (LLFC) serving as the underlying congestion control mechanism. To ensure an end-to-end lossless path for storage traffic, all devices in the network are configured to apply, negotiate, or honor QoS settings. Each VSX member manages an active path to the storage target in active-active mode, enabled by the host’s Multi-Path I/O (MPIO) feature.

The following steps establish a Quality of Service (QoS) framework optimized for iSCSI traffic within the data center environment.

Step 1 Enter the following CLI commands for traffic classification. These classes match iSCSI protocol traffic for both IPv4 and IPv6, from any source or destination.

class ip tcp_iscsi_initiator
    10 match tcp any any eq iscsi count
class ip tcp_iscsi_target
    10 match tcp any eq iscsi any count
class ipv6 v6_tcp_iscsi_initiator
    10 match tcp any any eq iscsi count
class ipv6 v6_tcp_iscsi_target
    10 match tcp any eq iscsi any count

Step 2 Enter the following CLI commands to apply QoS markings for iSCSI Initiator and Target traffic.

policy remark_iSCSI_Initiator
    10 class ip tcp_iscsi_initiator action pcp 4 action local-priority 4 action dscp CS4
    20 class ipv6 v6_tcp_iscsi_initiator action pcp 4 action local-priority 4 action dscp CS4

policy remark_iSCSI_Target
    10 class ip tcp_iscsi_target action pcp 4 action local-priority 4 action dscp CS4
    20 class ipv6 v6_tcp_iscsi_target action pcp 4 action local-priority 4 action dscp CS4

Step 3 Enter the following CLI commands to create a Queue Profile. This profile defines how local priorities are mapped to output queues. Queue 1 is reserved for lossless iSCSI traffic (local priority 4).

qos queue-profile lossless4p
    map queue 0 local-priority 0,1,2,3,5
    map queue 1 local-priority 4
    map queue 2 local-priority 6,7

Step 4 Enter the following CLI commands to create a Scheduler Profile for bandwidth allocation. The profile services queue 2 with strict priority, then uses a Deficit Weighted Round Robin (DWRR) scheduler to share the remaining bandwidth in an 80:20 ratio favoring queue 1 (iSCSI traffic) over other traffic.

qos schedule-profile lossless_sp4p
    dwrr queue 0 weight 20
    dwrr queue 1 weight 80
    strict queue 2

Step 5 Enter the following CLI commands to configure global QoS settings:

  • Apply the queue and schedule profiles to all switch ports globally using the apply qos command.
  • Enable the switch to trust incoming DSCP values using the qos trust command.
  • Allocate lossless buffer memory for priority 4 traffic to absorb bursts, using the qos pool command.
  • Advertise iSCSI application priority 4 to attached devices over DCBX using the dcbx application command.

apply qos queue-profile lossless4p schedule-profile lossless_sp4p
qos trust dscp
qos pool 1 lossless size 50.00 percent headroom 5000 kbytes priorities 4
dcbx application iscsi priority 4
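
After saving, the applied QoS configuration can be spot-checked from each switch's CLI with commands like the following. This is a sketch; exact command availability and output vary by AOS-CX release.

show qos queue-profile
show qos schedule-profile
show qos trust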

Step 6 Initiator hosts use Multi-Path I/O (MPIO) to establish high-availability paths to the target array by leveraging a pair of VLANs, each representing a distinct path to the destination target array. The two-tier data center topology diagram at the beginning of this section outlines the VLAN assignments.

Enter the following CLI commands to create the required VLANs.

After entering the configuration, scroll to the bottom right of the MultiEdit Configuration window and click SAVE to apply the changes to all four access switches.

vlan 1
vlan 1101
   description iSCSI traffic vlan1101
vlan 1102
   description iSCSI traffic vlan1102
vlan 1201
   description iSCSI traffic vlan1201
vlan 1202
   description iSCSI traffic vlan1202
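
Once saved, the new VLANs can be confirmed from any access switch with the standard AOS-CX show vlan command.

show vlan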

Configure Access Switch VSX Pairs

The access switches are configured as VSX pairs to support Layer 2 multi-chassis link aggregation to the core layer. A two-port link aggregation is configured and assigned as the VSX data path inter-switch link (ISL). The out-of-band management (mgmt) interface carries VSX keepalives, maximizing the number of front-panel ports available on the access switches.

Step 1 Enter the initial VSX configuration.

vsx
    system-mac 02:00:00:00:10:01
    inter-switch-link lag 256
    role primary
    keepalive peer 172.16.117.84 source 172.16.117.83 vrf mgmt
    linkup-delay-timer 600

Step 2 In the MultiEdit window, mouse over the field values shown in the table below, right-click, and set the appropriate values for each switch.

Switch             system-mac                      role                  peer             source
RSVDC-ACCESS1-2    02:00:00:00:10:01 [no-change]   secondary             172.16.117.83    172.16.117.84
RSVDC-ACCESS2-1    02:00:00:00:10:02               primary [no-change]   172.16.117.86    172.16.117.85
RSVDC-ACCESS2-2    02:00:00:00:10:02               secondary             172.16.117.85    172.16.117.86
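
For example, after the per-switch edits above, the VSX stanza on RSVDC-ACCESS2-2 reads:

vsx
    system-mac 02:00:00:00:10:02
    inter-switch-link lag 256
    role secondary
    keepalive peer 172.16.117.85 source 172.16.117.86 vrf mgmt
    linkup-delay-timer 600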

Step 3 Assign a description, flow control (LLFC), and MTU value to the VSX ISL physical interfaces.

interface 1/1/49
    description VSX-ISL
    # Enable end-to-end LLFC
    flow-control rxtx pool 1 override-negotiation
    mtu 9198
interface 1/1/50
    description VSX-ISL
    # Enable end-to-end LLFC
    flow-control rxtx pool 1 override-negotiation
    mtu 9198

Configure Access to Core MC-LAGs

Step 1 Create the core-facing MC-LAG interface.

interface lag 255 multi-chassis
    no shutdown
    description DC-CORE
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active

Note: Tag all VLANs on all inter-switch MC-LAGs to support ubiquitous initiator host or target array mobility across all racks in the Two-Tier structure.

Step 2 Assign physical interfaces to the core MC-LAG interface. Enable LLFC for end-to-end congestion flow control.

interface 1/1/53
    no shutdown
    mtu 9198
    description RSVDC-CORE1-1
    flow-control rxtx pool 1 override-negotiation
    lag 255
interface 1/1/54
    no shutdown
    mtu 9198
    description RSVDC-CORE1-2
    flow-control rxtx pool 1 override-negotiation
    lag 255

Note: The same physical interface on each access switch in the data center should connect to the same upstream core switch. For example, interface 1/1/53 on every ToR access switch can be configured to connect to the primary switch in the VSX core pair. This creates a consistent configuration that is easy to troubleshoot.

Configure Access Switch to Initiator Host and Target Array

Each VSX member manages a path in active-active mode, governed by the MPIO feature. The access switch connects to the initiator host as a trunk interface. Enable LLFC and the QoS Policy for marking ingress iSCSI traffic.

Select the RSVDC-ACCESS2-1 and RSVDC-ACCESS2-2 access switches by following Step 1 and Step 2 under Enter MultiEdit Configuration. This switch pair has one initiator host and one target array that require configuration as part of the iSCSI deployment.

Step 1 Configure the initiator host access interface.

interface 1/1/1
    no shutdown
    mtu 9198
    # enable LLFC on the switch side
    flow-control rxtx pool 1 override-negotiation
    description Connecting to (ESXi-01) Initiator Host-1
    # bridged (L2) mode; required before VLAN trunk commands on AOS-CX
    no routing
    vlan trunk native 1
    vlan trunk allowed 1101-1102,1201-1202
    # apply policy for identifying and marking iSCSI traffic
    apply policy remark_iSCSI_Initiator in

Step 2 Configure the target array access interface. The access switch connects to the target array as a trunk interface. Enable LLFC and QoS policy for marking ingress iSCSI traffic.

interface 1/1/33
    no shutdown
    mtu 9198
    # enable LLFC on the switch side
    flow-control rxtx pool 1 override-negotiation
    description Connecting to Target Array
    # bridged (L2) mode; required before VLAN trunk commands on AOS-CX
    no routing
    vlan trunk native 1
    vlan trunk allowed 1101-1102,1201-1202
    # apply policy for identifying and marking iSCSI traffic
    apply policy remark_iSCSI_Target in

At the bottom right of the MultiEdit Configuration window, click SAVE to apply and save the configuration to the RSVDC-ACCESS2-1 and RSVDC-ACCESS2-2 access switches.

Select the RSVDC-ACCESS1-1 and RSVDC-ACCESS1-2 access switches by following Step 1 and Step 2 under Enter MultiEdit Configuration. This switch pair has one initiator host connected.

Step 1 Configure the initiator host access interface.

interface 1/1/1
    no shutdown
    mtu 9198
    # enable LLFC on the switch side
    flow-control rxtx pool 1 override-negotiation
    description Connecting to (ESXi-01) Initiator Host-1
    # bridged (L2) mode; required before VLAN trunk commands on AOS-CX
    no routing
    vlan trunk native 1
    vlan trunk allowed 1101-1102,1201-1202
    # apply policy for identifying and marking iSCSI traffic
    apply policy remark_iSCSI_Initiator in

At the bottom right of the MultiEdit Configuration window, click SAVE to apply and save the configuration to the RSVDC-ACCESS1-1 and RSVDC-ACCESS1-2 access switches.

Verify Configuration

Step 1 On the left navigation menu, click Tools.

Step 2 On the Tools menu at the top, click the Commands tab.

Step 3 Click the Available Devices dropdown, select all access switches, then click elsewhere on the page.

Step 4 In the Categories list, click All Category. Enter vsx in the commands list filter, click show vsx status, then click Add >.

Step 5 Add the following additional commands to the Selected Commands list.

  • show lacp interfaces
  • show spanning-tree mst detail
  • show ntp status

Step 6 At the lower left of the Commands pane, click RUN.

Step 7 Scroll down to review the CLI command output for each switch. Verify key result data for each command.

  • show vsx status
    • ISL channel: In-Sync
    • ISL mgmt channel: operational
    • Config Sync Status: In-Sync
    • Device Role: set to primary and secondary on corresponding switches
    • Other VSX attributes display equal values for both VSX members

  • show lacp interfaces
    • Both Actor and Partner have a corresponding interface for each MC-LAG.
    • All Actor interfaces have a State of “ALFNCD”.
    • All Actor interfaces have a Forwarding State of “up” for all upstream core switch facing MC-LAGs.
    • All Partner interfaces have a state of “PLFNCD” or “ALFNCD”.

Note: “(mc)” in the Aggr Name column indicates an MC-LAG. The switch running the show lacp interfaces command is considered the Actor. The other VSX member switch is considered the Partner.

  • show spanning-tree mst detail
    • Verify that the Root Address value is the virtual VSX MAC address on the core switches.
    • Verify that the Role for LAG 255 connected to the core switches is “Root” with a State of “Forwarding”.
    • Verify that the Role for all other LAGs and ports with connections is “Designated” with a State of “Forwarding”.

  • show ntp status
  • Verify that NTP Server is populated with a configured NTP server IP address.
    • Verify that the Time Accuracy field is populated.