27-Jan-25

Initialize Fabric Components

The first step for deploying a data center is the physical installation of the switches and computing hosts.


Switch Installation

Verify the airflow configuration for the products to be installed to ensure that they support the cooling design for the data center. If required, an optional air duct kit is available for Aruba data center top-of-rack (ToR) switches to redirect hot air away from servers inside the rack.

Before installing switches, download and review the Aruba Installation Guide for the specific models to be deployed. Carefully review the requirements for power, cooling, and mounting to ensure that the data center environment is outfitted adequately for safe, secure operations.

Step 1 Open a web browser, navigate to the HPE Networking Support Portal, and log in using appropriate credentials.

Step 2 On the landing page, click the Software and Documents Search panel.

Step 3 On the Software & Documents page, select the following filters.

  • File Type: Document
  • Product: Aruba Switches
  • File Category: Installation Guide

Step 4 Download the Installation Guide version for the switch model to be installed.

Step 5 Complete the physical installation of switches in the racks.

Note: Spine switches can be installed centrally, in middle-of-row or end-of-row locations depending on cabling requirements and space availability. The key consideration is cable distance and the types of media used between leaf and spine switches.
Leaf switches should be installed top-of-rack (ToR) in high-density environments or middle-of-row in low-density environments.

Physical Cabling

Consistent port selection across racks and in the spine switches increases the ease of configuration management, monitoring, reporting, and troubleshooting tasks in the data center.

Breakout cables are numbered consistently with their split port designation on the switch.

Document all connections.

Ensure that distance limitations are observed for your preferred host connection media and between switches.

Top of Rack Cabling

The illustrations below show the port configuration on two types of 48-port ToR switches. Redundant ToR switch pairs must be the same model.

Ports on an Aruba CX 8325-48Y8C:

**8325 ToR switch**

Ports on an Aruba CX 10000-48Y6C:

**10000 ToR switch**

In a redundant ToR configuration, the first two uplink ports should be allocated to interconnect redundant peers (ports 49-50 on 8325-48Y8C and 10000-48Y6C switches), which provides physical link redundancy and sufficient bandwidth to accommodate a spine uplink failure on one of the switches.

Two links between redundant peers are sufficient for most deployments, unless the design is likely to result in high utilization of the inter-switch links under normal operating conditions, for example, when many hosts in a rack are single-homed to only one of the redundant switches.

Additional uplink ports should be allocated to connect spine switches (ports 51-56 on an 8325-48Y8C and ports 51-54 on a 10000-48Y6C).

The highest numbered non-uplink port should be reserved as the VSX keepalive link between a ToR redundant pair.

Note: VSX automation in HPE Aruba Networking Fabric Composer requires a dedicated physical port or a loopback address for the VSX keepalive interface. The recommended configuration is a dedicated port.
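
For reference, a VSX inter-switch link and keepalive configuration on one member of a redundant pair might resemble the sketch below. The LAG ID, port numbers, and keepalive addressing are illustrative assumptions based on the port allocation described above; in this guide's workflow, Fabric Composer's VSX automation generates the equivalent configuration.

! Illustrative values only; Fabric Composer generates equivalent configuration
interface lag 256
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
interface 1/1/49
    no shutdown
    lag 256
interface 1/1/50
    no shutdown
    lag 256
interface 1/1/48
    no shutdown
    ip address 192.168.0.0/31
vsx
    inter-switch-link lag 256
    keepalive peer 192.168.0.1 source 192.168.0.0
    role primary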

Determine a consistent number of leaf-to-spine links required on each ToR to achieve the desired oversubscription ratio. The number of spine switches equals the number of leaf-to-spine links required per ToR.
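
For example, assuming a leaf switch with 48 host-facing ports at 25 GbE and 100 GbE uplinks (illustrative values):

  • Host-facing bandwidth: 48 × 25 Gbps = 1200 Gbps
  • A 3:1 oversubscription ratio requires 1200 ÷ 3 = 400 Gbps of uplink capacity, or four 100 GbE leaf-to-spine links per ToR.
  • Four leaf-to-spine links per ToR implies four spine switches, with one link from each ToR switch to each spine.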

Follow a similar approach when using lower density ToR designs. Before deploying ToR configurations that require server connectivity at multiple speeds, review the switch guide to determine if adjacent ports are affected.

Configuration steps for changing port speeds are covered later in this guide. Refer to the Data Center Reference Architecture section for guidance on port speed groups on different hardware platforms.

Spine-to-Leaf Cabling

The illustration below shows the port configuration on an 8325 32-port spine switch.

**Spine switch**

In a dual ToR configuration, a spine switch must be connected to each switch in the redundant ToR pair in each rack. A 32-port spine switch supports up to 16 racks in this design. Use the same port number on each spine switch to connect to the same leaf switch to simplify switch management and documentation. For example, assign port 1 of each spine switch to connect to the same leaf switch.
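
For example, a consistent spine-to-leaf cabling map might look like the following (rack and port assignments are illustrative only):

  • Spine port 1/1/1 on every spine: Rack 1, Leaf 1
  • Spine port 1/1/2 on every spine: Rack 1, Leaf 2
  • Spine port 1/1/3 on every spine: Rack 2, Leaf 1
  • Spine port 1/1/4 on every spine: Rack 2, Leaf 2

The pattern continues in pairs through rack 16 on ports 1/1/31 and 1/1/32.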

Border Leaf Cabling

In a VXLAN spine-and-leaf design, a pair of leaf switches serves as the single entry and exit point to the data center. This pair is called the border leaf, but it does not need to be dedicated to border leaf functions only. It may also provide services leaf functions and, in some cases, connectivity to directly attached data center workloads. Cabling the border leaf can vary among deployments, depending on how the external network is connected and whether services such as firewalls and load balancers are attached.

After all switches are physically installed with appropriate power and networking connections, continue to the next procedure.

Out-of-Band Management

The use of a dedicated management LAN for the data center is strongly recommended.

A dedicated management LAN on separate physical infrastructure ensures reliable connectivity to data center infrastructure for automation, orchestration, and traditional management access. The management LAN provides connectivity to the HPE Aruba Networking Fabric Composer, Aruba NetEdit, and AMD Pensando Policy and Services Manager (PSM) applications. Ensure that the host infrastructure needed for those applications can also be connected to the management LAN or is reachable from it.

Deploy management LAN switches top-of-rack with switch and host management ports connected. Plan for an IP subnet with enough capacity to support all management addresses in the data center. DNS and NTP services for the fabric should be reachable from the out-of-band management network.
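
For example (illustrative numbers), a fabric of two spines and 16 racks of redundant ToR switches requires 34 switch management addresses (2 + 16 × 2). Adding addresses for Fabric Composer, PSM VMs, and server BMC/iLO interfaces, a /24 subnet (254 usable addresses) is a reasonable starting point for a fabric of this size; plan a larger subnet when host management interfaces or future growth require it.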

Configuration steps for the management LAN are not covered in this guide.

Switch Initialization

Go to the HPE Networking Support Portal and, using the steps noted above in “Switch Installation,” download the AOS-CX Fundamentals Guide for the operating system version you plan to run.

Note: Refer to the operating system release notes and consult with an HPE Aruba Networking SE or TAC team member for assistance with determining and selecting the version.

The “Initial Configuration” section of each Fundamentals Guide presents detailed instructions for connecting to the switch console port. After connecting to the console port, follow the steps below.

Step 1 Enable power to the switch by connecting power cables to the switch power supplies.

Step 2 Log in with the username admin and an empty password.

Step 3 Enter a new password for the admin account.

Note: The “Initial Configuration” section of the Fundamentals Guide provides detailed instructions for logging into the switch the first time.

Step 4 Confirm that all CX 10000 switches in the fabric are running an AOS-CX version compatible with the Fabric Composer and PSM versions of a deployment. Table 2 in the Fabric Composer’s Pensando PSM & AOS-CX 10000 Software Selection Guidance document provides a matrix for compatibility. This guide uses the following versions of firmware and software:

  • AOS-CX: 10.13.1050
  • HPE Aruba Networking Fabric Composer: 7.0.5
  • PSM: 1.80.1-T-6

Step 5 Confirm that all other switches are running the AOS-CX 10.10 long-term stability release or AOS-CX 10.13 or later for compatibility with the Fabric Composer 7.0.5 release used in this guide.
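
The running firmware version on each switch can be confirmed from the CLI, for example:

show version

Compare the reported version against the compatibility guidance above before proceeding with fabric configuration.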

Step 6 If the switch was previously configured, reset it to the factory default configuration. Fabric Composer requires a factory default configuration for orchestration during the fabric configuration process.

8325# erase all zeroize
This will securely erase all customer data and reset the switch
to factory defaults. This will initiate a reboot and render the
switch unavailable until the zeroization is complete.
This should take several minutes to one hour to complete.
Continue (y/n)? y

Step 7 Configure 6300M VSF stacks using the Aruba AOS-CX VSF Guide.

Note: VSF stacks should be configured on 6300 switches before making any other configuration changes after zeroization.
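
A minimal sketch of the VSF link configuration on the switch that will act as member 1 is shown below; member and port numbers are illustrative assumptions, and the AOS-CX VSF Guide remains the authoritative procedure for adding and renumbering members.

! Illustrative member and port numbers; refer to the AOS-CX VSF Guide
vsf member 1
    link 1 1/1/25
    link 2 1/1/26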

Step 8 Configure switch hostnames.

hostname RSVDC-FB1-LF1-1

Note: It is important to use a canonical naming scheme to easily identify the function of each switch. The hostname scheme above uses <physical location>-<fabric identifier>-<role and unique VSX pair identifier>-<VSX pair member id> to identify the correct fabric and role when using Fabric Composer. When using this scheme for switches that are not in a VSX pair, the number in the role field is sufficient for unique identification (i.e., RSVDC-FB1-SP1).
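
For example, a fabric with one redundant leaf pair and two spines following this scheme would use hostnames such as the following, one per switch (site and fabric identifiers are illustrative):

hostname RSVDC-FB1-LF1-1
hostname RSVDC-FB1-LF1-2
hostname RSVDC-FB1-SP1
hostname RSVDC-FB1-SP2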

Step 9 Configure the Switch Management Interface. By default, the management interface uses DHCP for its configuration. DHCP reservations can be used to assign a consistent IP address, default gateway, and nameserver. Static IP configuration eliminates dependence on DHCP service availability.

interface mgmt
    no shutdown
    ip static 172.16.116.101/24
    default-gateway 172.16.116.1
    nameserver 172.16.1.98

Note: Based on the existing IP address management process, determine a subnet to be used for the management LAN, where out-of-band (OOB) management ports on your switches are connected. Aruba Fabric Composer must be reachable from this network. The “Initial Configuration” section of the Fundamentals Guide provides detailed instructions for configuring the management interface.
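
Management connectivity can then be verified from the switch CLI. The commands below are a minimal check using the example addressing above; the management interface resides in the mgmt VRF.

show interface mgmt
ping 172.16.116.1 vrf mgmt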

Step 10 When spines use breakout cabling, configure split ports with the appropriate number of child interfaces and connection speeds, then confirm the operational port change.

interface 1/1/1-1/1/3
    split 2 100g

Note: Typically, a spine uses a consistent split port strategy. An interface range is used to assign the same split configuration to multiple ports. The confirm parameter in the split configuration statement disables the operational warning. For example, split 2 100g confirm.

Split interfaces can also be configured in HPE Aruba Networking Fabric Composer.
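
After a split is applied and confirmed, each child interface appears with a colon-delimited suffix (for example, 1/1/1:1 and 1/1/1:2 for the configuration above) and can be verified with:

show interface brief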

Download HPE Aruba Networking Fabric Composer

Step 1 Navigate to the HPE Networking Support Portal.

Step 2 Click the Software and Documents pane.

Step 3 In the File Type filter, select Software.

Step 4 In the Product filter, select “Aruba Fabric Composer”, and click Apply.

Step 5 In the search results, select the appropriate OVA version and download it to your computer. This guide uses Fabric Composer 7.0.5.

Step 6 In the File Type filter, uncheck Software, then select Documents.

Step 7 Type release notes in the Search Files bar.

Step 8 Click on the HPE Aruba Networking Fabric Composer release notes for the version of software downloaded. The download link on the resulting page forwards the browser to Fabric Composer’s online help, install guide, and compatibility matrix. Review the installation considerations in the Install Guide to ensure that adequate host resources are available.

Note: HPE Aruba Networking Fabric Composer is also provided in ISO format for installation on other hypervisors. High availability Fabric Composer clusters are supported only when using an ISO image.

Install HPE Aruba Networking Fabric Composer

Install Fabric Composer using the best process for your organization. The following process installs the Fabric Composer OVA using VMware vCenter.

Step 1 In the Hosts and Clusters tab, right click on the location to install Fabric Composer and select Deploy OVF Template… to launch the installation wizard.

Step 2 On the Select an OVF template page, click Local file, choose the downloaded AFC OVA file, and click NEXT.

Step 3 On the Select a name and folder page, enter a virtual machine name, select a target folder for the installation, and click NEXT.

Step 4 On the Select a compute resource page, select a cluster or cluster member and click NEXT.

Step 5 On the Review details page, read the information presented and click NEXT.

Step 6 On the License agreements page, read the license agreement, select I accept all license agreements, and then click NEXT.

Step 7 On the Select storage page, select the preferred provisioning method and storage volume, then click NEXT.

Step 8 On the Select networks page, select a VM Network with connectivity to the data center out-of-band network and click NEXT.

Step 9 On the Customize template page, enter values for the following fields and click NEXT.

(A) Network - General settings

  • (1) Hostname: rsvdc-afc-01
  • (2) Domain Name: example.local
  • (3) Primary NTP Server: 172.16.1.99
  • (4) Secondary NTP Server: 172.16.1.98

(B) Network - Static IP settings

  • (1) IP Address: 172.16.1.50
  • (2) Network Mask: 255.255.255.0
  • (3) Default Gateway: 172.16.1.1
  • (4) Primary Name Server: 172.16.1.99
  • (5) Secondary Name Server: 172.16.1.98

(D) Linux Password

  • Password: <password>
  • Confirm Password: <password>

Note: Check Use DHCP when dynamic addressing is preferred over static IP assignment.

Step 10 On the Ready to complete page, click FINISH.

Step 11 Open a web browser and connect to Fabric Composer at the previously configured IP address.

Note: The software version is not displayed and login is not allowed while the system is initializing.

Step 12 On the Fabric Composer page, enter the following default credentials, and click LOGIN.

  • Username: admin

  • Password: aruba

Step 13 Enter the current and new password and click APPLY.

Add HPE Aruba Networking Fabric Composer Licenses

Step 1 On the Maintenance menu, select Licenses.

Step 2 On the ACTIONS menu in the Maintenance/Licenses pane, select ADD.

Step 3 On the License page, paste the JSON license string in the License field and click APPLY.

Step 4 Review the installed license to verify that the Start Date, End Date, Quantity, and Tier values display as expected.

Note: Fabric Composer manages two tiers of switches (Tier 3 and Tier 4). The datasheet for each switch model identifies the license tier required.

Install HPE Aruba Networking Fabric Composer for High Availability

Refer to the HPE Aruba Networking Fabric Composer Installation Guide available on the HPE Networking Support Portal. In the “Installing High Availability for HPE ANW Fabric Composer using ISO” section, review the installation requirements and ensure that adequate host resources are available. Follow the steps provided to deploy the HA cluster.

Download AMD Pensando Policy and Services Manager

When using the firewall capabilities of the CX 10000 switch in a data center, AMD Pensando Policy and Services Manager (PSM) VMs must be installed on a network that is accessible by Fabric Composer and switch management interfaces.

Step 1 Navigate to https://asp.arubanetworks.com/.

Step 2 On the menu at the top of the page, select Software & Documents.

Step 3 In the Search Files field at the top, type Pensando.

Step 4 In the search results, select the latest OVA version and download it to your computer.

Install AMD Pensando Policy and Services Manager

In the Aruba Support Portal search results, find the Pensando Policy and Services Manager for Aruba CX 10000: User Guide. Review the “PSM Installation” section and ensure that adequate host resources are available. PSM requires a minimum of three VM instances for a production deployment.

Step 1 Select the OVA file using the Deploy OVF Template workflow within vCenter and click NEXT.

Step 2 Choose the appropriate options in Select a compute resource and proceed through Review details.

Step 3 On the Configuration page, click the radio button for Production and click NEXT.

Step 4 Proceed with selecting the appropriate storage and network resources for the deployment.

Step 5 Complete the Customize template form using the example below.

Step 6 Complete the VM creation workflow.

Step 7 Create additional PSM VMs as needed.

Note: Additional VMs can be created by importing the OVA again or by cloning the first VM as a template as described in the “Installing OVA on ESXi” section of the Pensando Policy and Services Manager for Aruba CX 10000: User Guide.

Configure the AMD PSM Cluster

Step 1 In vCenter, log in to one of the Pensando PSM VM consoles.

  • Username: root

  • Password: < Specified during VM creation process >

Step 2 At the VM console, bootstrap the PSM cluster with the bootstrap_PSM.py utility using the following command-line switch/value pairs followed by a space-delimited list of IP addresses for all cluster members.

  • -enablerouting: < No value required >
  • -distributed_services_switch: < No value required >
  • -autoadmit: False
  • -clustername: < User supplied cluster name >
  • -domain: < Domain name >
  • -ntpservers: < Comma-separated list of NTP servers >

bootstrap_PSM.py -enablerouting -distributed_services_switch -autoadmit False -clustername FB1_PSM -domain example.local -ntpservers 172.16.1.98,172.16.1.99 172.16.104.51 172.16.104.52 172.16.104.53

Note: The -autoadmit command line switch is set to True by default. This automatically enables any Distributed Services Switch to join PSM. When a strict admission policy to PSM is required, set this command line switch to False.

Step 3 When prompted, read and accept the End User License Agreement.

Step 4 Verify that the PSM cluster bootstrap completes successfully.

Step 5 On the VM console, enter the following to generate a PSM security token.

/usr/pensando/bin/psmctl get node-token --psm-ip localhost --psm-port 443 --audience "*" --token-output ~/dse-tok

Note: The token can be used for disaster recovery and backup purposes. Store it with other sensitive network credentials.

Step 6 When prompted, enter the following default credentials:

  • User name: admin

  • Password: Pensando0$

Step 7 Open a web browser and connect to PSM at one of the configured VM IP addresses.

Step 8 On the AMD Pensando login page, enter the following default credentials and click SIGN IN.

  • Username: admin

  • Password: Pensando0$

Step 9 Go to System > Cluster and verify that each PSM VM is listed under Nodes in the Cluster Detail pane with the following values.

  • Quorum: true

  • Phase: Joined

Step 10 Go to Admin > User Management, mouse-over the admin user, and click the Change password icon.

Step 11 Enter the old and new passwords and click Save changes.

Note: Changing the password on one VM updates all cluster members.