Ansible Two-Tier Data Center
HPE Aruba is committed to providing effective, flexible network automation strategies tailored to customer needs. In addition to workflow-based automations provided by Aruba Central and Aruba Fabric Composer, the HPE Aruba Networking Developer Hub provides comprehensive tooling to support CX switch configuration using Ansible.
Overview
Ansible is an open-source orchestration framework maintained by Red Hat. It automates provisioning, configuration management, and application deployment.
An Ansible playbook automates CX switch configuration using the AOS-CX Ansible Collection, which applies changes through REST API calls and through CLI commands over SSH.
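For example, a single playbook task can create a VLAN on a CX switch through the REST API. The following is a minimal sketch, assuming the collection's aoscx_vlan module and a REST API connection defined in inventory (shown later in this guide); the group name and VLAN values are illustrative:

- name: Create a data center VLAN (illustrative sketch)
  hosts: core
  gather_facts: false
  tasks:
    - name: Create VLAN 100
      arubanetworks.aoscx.aoscx_vlan:
        vlan_id: 100
        name: DC-Web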
The Ansible workflow in this guide provides turnkey automation of an AOS-CX Two-Tier Data Center. The project can be copied from the GitHub repository to the Ansible control machine using the following git clone command:
$ git clone https://github.com/aruba/aoscx-ansible-dcn-workflows.git
Note: HPE Aruba’s Getting Started with Ansible and AOS-CX guide provides additional information on how to use the AOS-CX Ansible Collection.
Ansible Project Prerequisites
This project assumes a working knowledge of Ansible. If you are new to Ansible automation, please review HPE Aruba’s Getting Started with Ansible and AOS-CX guide on the Developer Hub.
An automation server or VM in the networking environment is required, with SSH reachability to the IP addresses assigned to the Aruba CX out-of-band management interfaces.
The Ansible control machine requires Python 3.5+ and Ansible 2.13.1+; Ansible can be installed by following the Installing Ansible guide.
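The installed versions can be verified on the control machine before proceeding:
$ python3 --version
$ ansible --version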
This project requires HPE Aruba’s AOS-CX Ansible Collection, which can be installed by executing the ansible-galaxy command using the requirements.yml file in the HPE Ansible data center repository.
$ cd aoscx-ansible-dcn-workflows
$ ansible-galaxy install -r requirements.yml
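The requirements.yml file tells ansible-galaxy which collections to install. Its contents resemble the following sketch; the file in the repository is authoritative:

collections:
  - name: arubanetworks.aoscx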
The following Python libraries are required for this project.
- Jinja2 2.10+
- paramiko 2.1.1+
- pip 6.0+
- requests 2.2.0+
- netaddr 0.7.5+
- pyaoscx 2.5.1+
- openpyxl
The Python libraries can be installed by running pip with the requirements.txt file in the HPE Ansible data center repository.
$ cd aoscx-ansible-dcn-workflows
$ pip install -r requirements.txt
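Based on the library list above, the contents of requirements.txt resemble the following sketch; the file in the repository is authoritative:

Jinja2>=2.10
paramiko>=2.1.1
pip>=6.0
requests>=2.2.0
netaddr>=0.7.5
pyaoscx>=2.5.1
openpyxl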
Ansible Project Structure
The Ansible files for the Two-Tier Data Center project are maintained in a general data center workflow repository on GitHub. The repository also contains files for additional projects; visit the AOS-CX Data Center Automation with Ansible page on the Developer Hub for details on the other workflows hosted within the data center repository.
The files necessary for the Two-Tier Data Center workflow are listed in the repository structure below.
configs # Directory for generated configurations
|- sample_configs # Sample final configurations for all workflows
templates # Directory for Jinja2 configuration templates
|- 2TierV2 # Jinja2 configuration templates for Two-Tier DCN V2
| |- access.j2 # Access switch Jinja2 template for Architecture II version 2
| |- core.j2 # Core switch Jinja2 template for Architecture II version 2
deploy_2tierv2_dcn.yml # Playbook for Architecture II version 2
inventory_2tierv2_dcn.yml # Inventory for Architecture II version 2
requirements.txt # Python library requirements for project
requirements.yml # Galaxy collection requirements for project
The inventory and template files are critical for running the Two-Tier Data Center playbook, as described in separate chapters in this guide.
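As a point of reference, a Two-Tier inventory groups the core and access switches and sets the connection variables the AOS-CX collection expects. The sketch below is illustrative rather than a copy of the repository file; hostnames, addresses, and credentials are placeholders, and the connection variables assume the collection's REST API (httpapi) plugin:

all:
  vars:
    ansible_user: admin
    ansible_password: "{{ vault_password }}"    # store real credentials in Ansible Vault
    ansible_network_os: arubanetworks.aoscx.aoscx
    ansible_connection: arubanetworks.aoscx.aoscx    # collection REST API connection plugin
    ansible_httpapi_use_ssl: true
    ansible_httpapi_validate_certs: false
  children:
    core:
      hosts:
        core1:
          ansible_host: 10.1.1.1
        core2:
          ansible_host: 10.1.1.2
    access:
      hosts:
        rack1-access1:
          ansible_host: 10.1.1.11
        rack1-access2:
          ansible_host: 10.1.1.12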
Two-Tier Data Center Topology
The Ansible workflow deploys the same sample topology used in the Aruba Central Two-Tier Data Center guide. Some information from the Aruba Central guide is repeated here for reference and readability.
HPE Aruba Two-Tier data centers meet the requirements of small- and medium-size data centers. For network resiliency, multi-chassis link aggregations (MC-LAGs) are used at both switch tiers. The diagram below summarizes the physical topology configured in this deployment guide and the relationship between components.
Two-Tier Core Layer
The core layer provides redundant Layer 2 connectivity to downstream access switches. A VSX pair of core switches is configured with an MC-LAG to each downstream rack. All links from the core layer to the access layer for a single rack are members of the same MC-LAG, whether the rack is populated with a single switch or with a VSX pair of access switches. MC-LAG provides network resiliency and load balancing, and it eliminates the need for loop avoidance mechanisms between the core and access layer switches.
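To illustrate, the sketch below uses the collection's CLI-based aoscx_config module (which requires an SSH network_cli connection) to define a downstream MC-LAG on a core switch. The LAG number and trunk settings are illustrative; in this project they are generated from the Jinja2 templates:

- name: Configure core MC-LAGs (illustrative sketch)
  hosts: core
  gather_facts: false
  tasks:
    - name: Create MC-LAG 101 toward the rack 1 access switches
      arubanetworks.aoscx.aoscx_config:
        parents: interface lag 101 multi-chassis
        lines:
          - no shutdown
          - no routing
          - vlan trunk native 1
          - vlan trunk allowed all
          - lacp mode active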
Layer 3 services for the data center are provided by the core layer. VLAN switched virtual interfaces (SVIs) define data center subnets, and Aruba Active Gateway provides redundant IP gateways to data center hosts. The core layer also provides redundant IP connectivity to upstream external networks. Typically, firewalls are placed between a data center and external networks for policy enforcement. The redundancy strategies between the data center core and external networks can vary, depending on device features and organizational requirements. In this guide, a traditional active/passive redundant pair of firewalls is connected to the core switch pair using MC-LAGs.
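As a sketch of the Layer 3 configuration, the collection's aoscx_vlan_interface module can define an SVI with an Active Gateway virtual IP and MAC. The addresses below are illustrative, and the parameter names assume the module's documented active gateway options:

- name: Configure a data center SVI with Active Gateway (illustrative sketch)
  hosts: core
  gather_facts: false
  tasks:
    - name: Create the SVI for VLAN 100 with a redundant gateway
      arubanetworks.aoscx.aoscx_vlan_interface:
        vlan_id: 100
        ipv4: ['10.10.100.2/24']    # unique per core switch
        active_gateway_ip: 10.10.100.1    # shared virtual gateway for hosts
        active_gateway_mac_v4: 00:00:00:00:01:00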
Two-Tier Access Layer
The access layer provides Layer 2 connectivity to downstream data center hosts.
When a single access switch is at the top-of-rack (ToR) position, the access layer connects to the core layer using a standard LAG. A single ToR switch can provide physical link redundancy using a standard LAG, but host connectivity is lost during firmware upgrades or when the ToR switch fails.
This example deployment uses a VSX pair of ToR switches at the access layer, which provides physical switch redundancy to directly attached hosts. This model supports uninterrupted host connectivity even when one of the ToR switches fails or a firmware upgrade is performed. Each access layer switch is also connected to each core switch using an MC-LAG for redundancy, load balancing, and loop avoidance.
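The difference between the two access models is visible in the LAG definition itself: a standard LAG omits the AOS-CX multi-chassis keyword, while a VSX pair includes it. A hedged sketch of the access-side uplink, again using aoscx_config with illustrative values:

- name: Configure access uplink LAGs (illustrative sketch)
  hosts: access
  gather_facts: false
  tasks:
    - name: Create the uplink MC-LAG toward the core VSX pair
      arubanetworks.aoscx.aoscx_config:
        parents: interface lag 1 multi-chassis    # omit 'multi-chassis' for a single ToR switch
        lines:
          - no shutdown
          - no routing
          - vlan trunk native 1
          - vlan trunk allowed all
          - lacp mode active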