16-Jun-25

Two-Tier Core

Configure the Two-Tier core switches as a VSX pair that provides Layer 2 aggregation for the data center access switches, spanning tree, lossless Ethernet features, and IP data center services.


Configure Core VSX ISL Interface

To establish a VSX relationship between the core switches, create a link aggregation (LAG) interface for assignment as the VSX data plane’s inter-switch link (ISL). The LAG can be defined at the Central UI group level when using the same ports for the VSX ISL on both core switches.

Step 1 On the left HPE Aruba Central menu, click the current context, then click the data center core switch group name in the Groups column.

Note: The current context in the screenshot above is Global.

Step 2 On the left navigation menu, click Devices.

Step 3 At the upper right of the Switches pane, click Config.

Step 4 In the Interfaces tile, click Ports & Link Aggregations.

Step 5 Scroll to the right of the Ports & Link Aggregations table, and click the + (plus sign) at the upper right.

Step 6 On the Add LAG page, enter the following values and click ADD:

  • Name: lag256
  • Description: VSX-ISL-LAG
  • Port Members: 1/1/31, 1/1/32
  • Speed Duplex: <no value> (default)
  • VLAN Mode: trunk
  • Native VLAN: 1 (default)
  • Allowed VLANs: <no value> (default)
  • Admin Up: checked
  • Aggregation Mode: LACP Active
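
For reference, the LAG settings above correspond roughly to the following CX CLI using the same names and port members. This is a sketch of the equivalent configuration, not an additional step; the exact lines generated by the UI may differ slightly.

interface lag 256
    description VSX-ISL-LAG
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
interface 1/1/31
    no shutdown
    lag 256
interface 1/1/32
    no shutdown
    lag 256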

Step 7 In the Ports & Link Aggregations table’s title row, click the left arrow to return to the main configuration page.

Note: The remaining VSX configuration steps are completed using the HPE Aruba Central MultiEdit Configuration tool.

Spanning Tree

Multi-chassis link aggregations (MC-LAGs) provide the primary loop prevention mechanism in a Two-Tier architecture. When configured on both core and access switches, MC-LAGs allow loop-free forwarding on all inter-switch links simultaneously in both directions.

MC-LAGs provide efficient, hash-based load balancing that performs better than mapping individual VLANs to Multiple Spanning Tree (MST) instances.

Spanning-tree (STP) is configured as a backup loop prevention mechanism in case of operator cabling errors when connecting initiator hosts/target arrays to top-of-rack switches.

Setting the spanning-tree priority to 0 ensures that the core VSX pair of switches is the STP root.

Step 1 In the Bridging tile, click Loop Prevention.

Step 2 In the Loop Prevention window, set the following Spanning Tree values, then click SAVE.

  • Priority: 0
  • Region: RSVDC
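
For reference, these settings correspond roughly to the following CX CLI on each core switch. This is a sketch of the equivalent configuration; the config-name keyword for the MST region name is an assumption based on standard AOS-CX MSTP syntax.

spanning-tree
spanning-tree config-name RSVDC
spanning-tree priority 0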

Enter MultiEdit Configuration

The Central UI provides simplified access to the most common switch configuration features. MultiEdit is a tool in the Central UI for CX switches that enables configuration of any CX feature using CLI syntax. MultiEdit provides syntax checking, colorization, and command completion.

For complete details on using MultiEdit, refer to the Editing Configuration on AOS-CX section of Central online help.

The text configuration snippets in the following steps are intended for copying and pasting into MultiEdit. To prevent potential copy/paste errors, scroll to the bottom of the configuration, create a new line, then paste the new configuration lines. MultiEdit automatically positions new lines in the correct configuration context.

Step 1 At the upper left of the Switches pane, click the MultiEdit enable slider.

Step 2 Click both core switches in the Devices list, then click EDIT CONFIG.

Note: When using MultiEdit, it is beneficial to save small sets of configuration at a time. This reduces the volume of configuration that must be inspected when errors occur and makes troubleshooting configuration elements faster.

Configure Lossless Ethernet Features

Lossless Ethernet is enabled through a coordinated set of QoS components covering packet classification, prioritization, queueing, and transmission, with Link Level Flow Control (LLFC) serving as the underlying congestion control mechanism. To ensure an end-to-end lossless path for storage traffic, all devices in the network are configured to apply, negotiate, or honor QoS settings. Each VSX member carries an active path to the storage target, and initiator hosts use the Multi-Path I/O (MPIO) feature to run both paths active-active.

The following steps establish a Quality of Service (QoS) framework optimized for iSCSI traffic within the data center environment.

Step 1 Enter the following CLI commands for traffic classification. The purpose is to match iSCSI protocol traffic for both IPv4 and IPv6, for any source or destination.

class ip tcp_iscsi_initiator
    10 match tcp any any eq iscsi count
class ip tcp_iscsi_target
    10 match tcp any eq iscsi any count
class ipv6 v6_tcp_iscsi_initiator
    10 match tcp any any eq iscsi count
class ipv6 v6_tcp_iscsi_target
    10 match tcp any eq iscsi any count

Step 2 Enter the following CLI commands to apply QoS markings for iSCSI Initiator and Target traffic.

policy remark_iSCSI_Initiator
    10 class ip tcp_iscsi_initiator action pcp 4 action local-priority 4 action dscp CS4
    20 class ipv6 v6_tcp_iscsi_initiator action pcp 4 action local-priority 4 action dscp CS4

policy remark_iSCSI_Target
    10 class ip tcp_iscsi_target action pcp 4 action local-priority 4 action dscp CS4
    20 class ipv6 v6_tcp_iscsi_target action pcp 4 action local-priority 4 action dscp CS4

Step 3 Enter the following CLI commands to create a Queue Profile. This profile defines how local priorities are mapped to output queues. Queue 1 is reserved for lossless iSCSI traffic (local priority 4).

qos queue-profile lossless4p
    map queue 0 local-priority 0,1,2,3,5
    map queue 1 local-priority 4
    map queue 2 local-priority 6,7

Step 4 Enter the following CLI commands to create a Scheduler Profile for bandwidth allocation. This profile uses a Deficit Weighted Round Robin (DWRR) scheduler that favors Queue 1 (iSCSI traffic) over Queue 0 with an 80:20 weight ratio, while Queue 2 (local priorities 6 and 7, typically network control traffic) is serviced with strict priority.

qos schedule-profile lossless_sp4p
    dwrr queue 0 weight 20
    dwrr queue 1 weight 80
    strict queue 2

Step 5 Enter the following CLI commands to configure global QoS settings:

  • Apply the queue and scheduler profiles to all switch ports globally using the apply qos command.
  • Enable the switch to trust incoming DSCP values using the qos trust command.
  • Allocate lossless buffer memory for priority 4 traffic to absorb bursts using the qos pool command.
  • Advertise the iSCSI application priority to attached devices using the dcbx application command.
apply qos queue-profile lossless4p schedule-profile lossless_sp4p
qos trust dscp
qos pool 1 lossless size 50.00 percent headroom 5000 kbytes priorities 4
dcbx application iscsi priority 4
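
After saving, the applied profiles and trust mode can be spot-checked from the switch CLI. The following show commands are a sketch and assume they are available on the installed AOS-CX release:

show qos trust
show qos queue-profile
show qos schedule-profile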

Step 6 Initiator hosts use MPIO (Multi-Path I/O) to establish high-availability paths to the target array by leveraging a pair of VLANs, each representing a distinct path to the array. The two-tier data center topology diagram at the beginning of this section outlines the VLAN assignments.

Enter the following CLI commands to create the required VLANs.

After entering the configuration, scroll to the bottom right of the MultiEdit Configuration window and click SAVE to apply the changes to both core switches.

vlan 1
vlan 1101
   description iSCSI traffic vlan1101
vlan 1102
   description iSCSI traffic vlan1102
vlan 1201
   description iSCSI traffic vlan1201
vlan 1202
   description iSCSI traffic vlan1202

Configure Core Switch VSX

The core switches are configured as a VSX pair to support Layer 2 multi-chassis link aggregation (MC-LAG) to the access layer switches. The previously defined LAG is assigned as the VSX data path inter-switch link (ISL). The out-of-band (OOB) mgmt interface is used for VSX keepalives to maximize the number of ports available to connect access switches.

Step 1 Enter the initial VSX configuration.

vsx
    system-mac 02:00:00:00:10:00
    inter-switch-link lag 256
    role primary
    keepalive peer 172.16.117.102 source 172.16.117.101 vrf mgmt
    linkup-delay-timer 600

Note: When the mgmt vrf is specified, the keepalive peer addresses are the IPs assigned to the out-of-band management interfaces. When using DHCP IP address assignments on the OOB management network, DHCP reservations must be created for VSX-paired switches to avoid future keepalive failures.

Step 2 Mouse-over the role value of primary to display the values for each individual switch, then right-click.

Step 3 In the Modify Parameters window, click primary under RSVDC-CORE1-2, select secondary from the menu, then click SAVE CHANGES.

Note: Hover the mouse over the per-switch values to display a switch’s assigned value.

Step 4 Modify the VSX keepalive peer and source parameters by right-clicking on the values.

  • RSVDC-CORE1-1: peer 172.16.117.82, source 172.16.117.81
  • RSVDC-CORE1-2: peer 172.16.117.81, source 172.16.117.82
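
After these overrides, the effective keepalive line on each switch should read as follows (shown only for reference; the values come from the list above):

RSVDC-CORE1-1:
    keepalive peer 172.16.117.82 source 172.16.117.81 vrf mgmt
RSVDC-CORE1-2:
    keepalive peer 172.16.117.81 source 172.16.117.82 vrf mgmt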

Step 5 Assign a description and maximum MTU value to the VSX ISL physical interfaces. LLFC is enabled to pause transmission temporarily and avoid packet loss during congestion; the assigned memory pool buffers the paused traffic.

interface 1/1/31
    description VSX-ISL
    # enable LLFC for end to end congestion control
    flow-control rxtx pool 1 override-negotiation
    mtu 9198
interface 1/1/32
    description VSX-ISL
    # enable LLFC for end to end congestion control
    flow-control rxtx pool 1 override-negotiation
    mtu 9198

Configure Core Switch MC-LAGs

Step 1 Create MC-LAG interfaces to connect redundant top-of-rack access switches.

interface lag 1 multi-chassis
    description RACK-1
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
    lacp fallback
    spanning-tree root-guard
interface lag 2 multi-chassis
    description RACK-2
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
    lacp fallback
    spanning-tree root-guard

Note: MC-LAG interfaces can scope trunked VLANs only to those required for a specific downstream rack. Tagging all VLANs on all core-to-access MC-LAGs supports ubiquitous Initiator host or target array mobility across all racks within the Two-Tier structure and reduces the administrative overhead of maintaining VLAN assignments per rack.
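
For example, if the Rack 1 hosts used only VLANs 1101 and 1102 (a hypothetical assignment used purely for illustration), the Rack 1 MC-LAG trunk could be scoped like this instead of allowing all VLANs:

interface lag 1 multi-chassis
    vlan trunk allowed 1101,1102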

Step 2 Assign physical interfaces to the MC-LAGs.

interface 1/1/1
    description RSVDC-ACCESS1-1
    # enable LLFC for end to end congestion control
    flow-control rxtx pool 1 override-negotiation 
    no shutdown
    mtu 9198
    lag 1
interface 1/1/2
    description RSVDC-ACCESS1-2
    # enable LLFC for end to end congestion control
    flow-control rxtx pool 1 override-negotiation
    no shutdown
    mtu 9198
    lag 1
interface 1/1/3
    description RSVDC-ACCESS2-1
    # enable LLFC for end to end congestion control
    flow-control rxtx pool 1 override-negotiation
    no shutdown
    mtu 9198
    lag 2
interface 1/1/4
    description RSVDC-ACCESS2-2
    # enable LLFC for end to end congestion control
    flow-control rxtx pool 1 override-negotiation
    no shutdown
    mtu 9198
    lag 2

Configure Routing Services

In a Two-Tier architecture, the core switches provide IP gateways to downstream Initiator hosts for routing non-iSCSI traffic, including traffic to external networks when connected.

Configure Routing Components

An OSPF adjacency is configured between the core switches.

Step 1 Set the switch profile.

profile l3-agg

Note: The available profile options are platform-dependent. Selecting a profile optimizes switch hardware resources for its role in the network. It is recommended to assign the l3-agg profile to CX 8325 and CX 10000 core switches. CX 8360 switches should use their default aggregation-leaf profile. CX 9300 switches should use their default leaf profile.

Step 2 Create the OSPF process.

router ospf 1
    router-id 10.250.12.1
    passive-interface default
    area 0.0.0.0

Step 3 Mouse-over the OSPF router ID value 10.250.12.1, right-click to set per switch values, set the router-id of RSVDC-CORE1-2 to 10.250.12.2, and click SAVE CHANGES.

Step 4 Create core switch loopback interfaces. The loopback IP should be the same value assigned to the OSPF router-id.

interface loopback 0
    ip address 10.250.12.1/32
    ip ospf 1 area 0.0.0.0

Step 5 Mouse-over the loopback 0 ip address value of 10.250.12.1, right-click to set per switch values, set the ip address of RSVDC-CORE1-2 to 10.250.12.2/32, and click SAVE CHANGES.

Configure Initiator Host VLAN SVIs

Step 1 Configure VLAN switched virtual interfaces (SVIs) for the data center initiator host VLANs. Core switches provide the default gateway to downstream data center initiator hosts for non-iSCSI traffic. An active gateway IP and MAC address are configured for each VLAN so that both core switches can represent the same IP gateway.

interface vlan1101
    description STORAGE-VLAN-1101-SVI
    ip mtu 9198
    ip address 10.12.101.2/24
    active-gateway ip mac 02:00:0a:01:65:01
    active-gateway ip 10.12.101.1
    ip ospf 1 area 0.0.0.0
interface vlan1102
    description STORAGE-VLAN-1102-SVI
    ip mtu 9198
    ip address 10.12.102.2/24
    active-gateway ip mac 02:00:0a:01:65:01
    active-gateway ip 10.12.102.1
    ip ospf 1 area 0.0.0.0
interface vlan1201
    description STORAGE-VLAN-1201-SVI
    ip mtu 9198
    ip address 10.12.201.2/24
    active-gateway ip mac 02:00:0a:01:65:01
    active-gateway ip 10.12.201.1
    ip ospf 1 area 0.0.0.0
interface vlan1202
    description STORAGE-VLAN-1202-SVI
    ip mtu 9198
    ip address 10.12.202.2/24
    active-gateway ip mac 02:00:0a:01:65:01
    active-gateway ip 10.12.202.1
    ip ospf 1 area 0.0.0.0

Step 2 Mouse-over the VLAN 1101 ip address value of 10.12.101.2, right-click to set per switch values, set the ip address of RSVDC-CORE1-2 to 10.12.101.3, and click SAVE CHANGES.

Step 3 Mouse-over the VLAN 1102 ip address value of 10.12.102.2, right-click to set per switch values, set the ip address of RSVDC-CORE1-2 to 10.12.102.3, and click SAVE CHANGES.

Step 4 Mouse-over the VLAN 1201 ip address value of 10.12.201.2, right-click to set per switch values, set the ip address of RSVDC-CORE1-2 to 10.12.201.3, and click SAVE CHANGES.

Step 5 Mouse-over the VLAN 1202 ip address value of 10.12.202.2, right-click to set per switch values, set the ip address of RSVDC-CORE1-2 to 10.12.202.3, and click SAVE CHANGES.

Step 6 At the bottom right of the MultiEdit Configuration window, click SAVE.

Verify Operational State

Step 1 On the left navigation menu, click Tools.

Step 2 On the Tools menu at the top, click the Commands tab.

Step 3 Click the Available Devices dropdown, select both data center core switches, then click elsewhere on the page.

Step 4 In the Categories list, click All Category. Enter vsx in the commands list filter, click show vsx status, then click Add >.

Step 5 Add the following additional commands to the Selected Commands list.

  • show lacp interfaces
  • show ip ospf interface all-vrfs
  • show spanning-tree mst detail
  • show ntp status

Step 6 At the lower left of the Commands pane, click RUN.

Step 7 Scroll down to review the CLI command output for each switch. Verify key results for each command.

  • show vsx status
    • ISL channel: In-Sync
    • ISL mgmt channel: operational
    • Config Sync Status: In-Sync
    • Device Role: set to primary and secondary on corresponding switches
    • Other VSX attributes display equal values for both VSX members

  • show lacp interfaces
    • Both Actor and Partner have corresponding interfaces for each MC-LAG.
    • All Actor interfaces have a Forwarding State of “up” for all access switch facing MC-LAGs.
    • All Actor and Partner interfaces have a state of “ALFNCD”.

  • show ip ospf interface all-vrfs
    • All interfaces display Area “0.0.0.0” and Process “1”.

  • show spanning-tree mst detail
    • Verify that the Bridge Address and Root Address values are the same.
    • Verify that all LAG interfaces have a Role of “Designated” and State of “Forwarding”.

  • show ntp status
    • Verify that NTP Server is populated with a configured NTP server IP address.
    • Verify that the Time Accuracy field is populated.
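
Optionally, the shared gateway state on the core SVIs can be spot-checked with the command below. This is an extra check beyond the commands listed above, and it assumes the show active-gateway command is available on the installed AOS-CX release.

show active-gateway

  • Verify that each storage VLAN SVI reports the active gateway IP and MAC configured earlier on both VSX members.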