Design Fundamentals and Concepts
- 1: Access Point Deployments
- 2: Gateway Deployments
- 3: Gateway Serviceability
- 4: Clusters
- 4.1: Types of Clusters
- 4.2: Cluster Roles
- 4.3: Automatic and Manual Modes
- 4.4: Formation Process
- 4.5: Features
- 4.6: Dynamic Authorization in a Cluster
- 4.7: Failover
- 4.8: Planning
- 5: Forwarding Modes of Operation
- 6: Access Point Port Usage
- 7: User VLANs
- 8: Roles
- 8.1: Fundamentals
- 8.2: Management and Configuration
- 8.3: Bridge Forwarding
- 8.4: Tunnel Forwarding
- 8.5: User-Based Tunneling
1 - Access Point Deployments
Access Points are the underpinning of the Campus wireless architecture. To provide maximum flexibility, an ArubaOS 10 AP can support WLANs configured to either bridge or tunnel user traffic. A special WLAN is also supported that can offer both forwarding types. One important feature of the AOS 10 architecture is that APs are no longer dependent on Gateways. Customers are free to deploy APs with or without Gateways depending on their traffic forwarding needs, feature requirements, and size of the network.
An AOS 10 AP-only deployment consists only of APs, and no Gateways. The APs are strategically deployed in one or more buildings to establish RF coverage areas. Since no Gateways are used, the APs publish WLANs that are configured to bridge the user traffic directly onto the wired network at the access layer.

AP Only Topology
An AP-only deployment can be considered for any environment where you must provide Wi-Fi access to client devices but do not require tunneling or other advanced features offered by Gateways. This includes:
- Small offices and branches
- Regional branches or headquarters
- Warehouses
- Campuses
There are some environments where AP-only deployments might not be suitable. For example, large hospitals and medical centers where high scale and seamless roaming are key requirements for specific medical applications and real-time communications.
Roaming Domains
A roaming domain is the population of APs in a common RF coverage area that share VLAN IDs and broadcast domains (IP networks). The coverage area may be contained within a single physical location such as a building or floor, or, if scaling and the LAN architecture permit, may be extended between physical locations such as co-located buildings in a campus environment.
As the WLANs bridge the user traffic directly into the access layer, VLANs must be created and extended between the APs. A typical AP-only deployment utilizes a single AP management VLAN and two or more wireless user VLANs. There is no hard requirement for using a single AP management VLAN, and a customer may implement multiple AP management VLANs, if required. The only requirement is that the AP management VLANs are dedicated for management and not shared with client devices.
The number of wireless user VLANs varies based on deployment size, broadcast domain tolerance, and customer segmentation requirements. These VLANs are extended between the APs within a given RF coverage area. This is needed so that the wireless client devices can seamlessly roam between the APs and maintain their VLAN membership and IP addressing.

Seamless Roaming in AP Only Deployments
The AP management and wireless user VLANs are terminated on a multilayer switch that resides in the core or aggregation layers. This may vary based on the customer environment and LAN architecture. The multilayer switch is the default Gateway for the AP-management and wireless user VLANs, and includes IP helper addresses to facilitate DHCP addressing.
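As a concrete illustration, the multilayer switch described above might carry an SVI per VLAN with a DHCP relay. A minimal AOS-CX sketch follows; the VLAN ID, addresses, and DHCP server are hypothetical examples, not values from this guide:

```
vlan 100
    name AP-MGMT
interface vlan 100
    ip address 10.10.100.1/24
    ip helper-address 10.1.1.5
```

A matching SVI with its own subnet and helper address would be created for each wireless user VLAN.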
Roaming Domain Scaling
Each roaming domain can scale to support a maximum number of APs and client devices. As the AP management and user VLANs are extended between the APs in a roaming domain, any broadcast/multicast frames forwarded over the AP or wireless user VLANs will be received and processed by all the APs in the roaming domain. Broadcast/multicast frames are normal and are used by both APs and clients for various functions. As a general rule of thumb, the more APs and clients you deploy in a roaming domain, the more broadcast/multicast frames will be transmitted and processed by all connected hosts in each VLAN.
The frequency of broadcast/multicast frames flooded over the AP and wireless user VLANs is the main limiting factor for scaling, as AP CPUs can only process so many of these frames before other software services are impacted. Aruba has validated that a single roaming domain can support a maximum of:
- 500 APs
- 5,000 clients
These are the maximum limits that have been tested and verified by Aruba with APs connected to a single management VLAN and wireless clients connected to a single wireless user VLAN. Broadcast/multicast traffic was also generated to ensure correct operation in heavier broadcast/multicast environments. These limits also apply when multiple AP management and user VLANs are deployed. The aggregate number of APs and clients should not exceed the verified limits across all the VLANs. For example, if a customer configures four user VLANs, we recommend that you do not exceed a total of 5,000 clients across all the user VLANs.
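The aggregate rule can be illustrated with a short sketch (a hypothetical helper for planning, not an Aruba tool):

```python
MAX_APS_PER_DOMAIN = 500
MAX_CLIENTS_PER_DOMAIN = 5000

def within_limits(aps_per_vlan, clients_per_vlan):
    """The verified limits apply to the aggregate across all AP management
    and user VLANs in a single roaming domain, not per VLAN."""
    return (sum(aps_per_vlan) <= MAX_APS_PER_DOMAIN
            and sum(clients_per_vlan) <= MAX_CLIENTS_PER_DOMAIN)

# Four user VLANs: the totals, not the per-VLAN counts, are what matter.
print(within_limits([480], [1500, 1500, 1000, 900]))  # aggregate within limits
print(within_limits([480], [2000, 2000, 1000, 900]))  # 5,900 clients exceeds the limit
```
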
Multiple Roaming Domains
An AP-only deployment can include as many roaming domains of up to 500 APs / 5,000 clients as needed, as long as the AP management and wireless user VLANs for each roaming domain are separated at layer 3 (that is, they reside on separate IP networks / broadcast domains).
For example, a 10-building campus design can include up to 500 APs per building, with each building supporting a maximum of 5,000 clients. The campus in this example would include 5,000 APs supporting a maximum of 50,000 clients across the campus. The AP management and wireless user VLANs in each building will be assigned unique IP subnets resulting in unique broadcast domains for each building. Larger buildings requiring higher scaling may implement multiple roaming domains as needed.

Multiple Roaming Domains in AP Only Deployments
Consider deploying multiple roaming domains for the following scenarios:
- Scaling—You have a coverage area such as a large building or co-located buildings that must support more than 500 APs and/or 5,000 clients. Additional roaming domains will be incorporated into the design to accommodate the additional APs and/or wireless client devices.
- LAN Architecture—The AP-management and wireless-user VLANs cannot be extended between access layer switches either between floors within a building or between buildings. If the LAN switches cannot be reconfigured to extend the needed VLANs, separate roaming domains will be required.
Overlapping RF Coverage
If the RF coverage areas between buildings or floors do not overlap, there is no expectation of network connectivity as you move between areas. But if the RF coverage areas overlap, you may expect continuous network connectivity as you move between buildings or floors. However, the IP network membership for client devices moving between two roaming domains will not be seamless. The client devices must obtain new IP addressing after the roam.
While the roam itself can be a fast roam, since the wireless-user VLANs in each roaming domain map to different broadcast domains, the client device must re-DHCP to obtain a new host address and default Gateway before it can continue to communicate over the intermediate IP network. The user VLAN ID can be consistent between roaming domains for simplified operations and management, but for scaling, the IP networks assigned to each roaming domain must be unique.
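The seamless-versus-hard distinction reduces to whether the two roaming domains share a broadcast domain; a minimal sketch (the subnets shown are hypothetical):

```python
import ipaddress

def roam_type(old_subnet: str, new_subnet: str) -> str:
    # Within one roaming domain the user VLAN maps to the same broadcast
    # domain everywhere, so the client keeps its IP (seamless roam).
    # Across roaming domains the subnets differ, forcing a re-DHCP (hard roam).
    same = ipaddress.ip_network(old_subnet) == ipaddress.ip_network(new_subnet)
    return "seamless" if same else "hard"

print(roam_type("10.10.20.0/24", "10.10.20.0/24"))  # seamless
print(roam_type("10.10.20.0/24", "10.20.20.0/24"))  # hard
```
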
Deployments requiring multiple roaming domains with overlapping RF coverage therefore require careful planning and consideration:
- User Experience—Do you expect uninterrupted network connectivity when moving between roaming domains? For example, between buildings or floors?
- RF Design—Can the design accommodate or implement RF boundaries to minimize the hard roaming points between adjacent roaming domains to provide the best user experience?
- Client Devices—Do you have any specialized or custom client devices deployed? Test to validate that they can tolerate and support hard roaming. Modern Apple, Android, and Microsoft operating systems will issue a DHCP discover and re-ARP after each roam.
- Applications—What applications do you have deployed, and can they tolerate hosts changing IP addresses? While some applications such as Teams and Zoom can automatically recover after host readdressing, others cannot.
A decision would need to be made on whether users, client devices, and applications can tolerate hard roaming in the environment before considering co-located roaming domains. If not, then Gateways and tunneling can be considered.
Types of Roaming
A client device can experience two types of roams in an AOS 10 deployment—hard roams and seamless roams. This section provides additional details for each roaming type and clarification on when we provide seamless and hard roaming in non-tunneled environments (i.e. AP only networks).
Hard Roaming
In a multiple roaming domain environment, client devices obtain new IP addressing and a default Gateway when transitioning between APs in separate roaming domains. Client devices may retain the same VLAN ID assignment depending on the LAN environment. While the user VLAN IDs may be common or unique between roaming domains, IP subnets or broadcast domains are always unique.
When a client device transitions between roaming domains, the following actions take place before the client device can continue to communicate over the intermediate IP network:
1. The client device issues a DHCP discover to obtain new addressing. A full DHCP exchange occurs, and new host addressing and options are assigned. As the DHCP discover is a broadcast frame, it also populates the MAC address tables on all the layer-2 switches where the new user VLAN extends.
2. The client device sends an ARP for the default Gateway. This permits the client device to communicate with hosts on other IP networks.
3. All running applications on the client device re-establish their sessions. This may occur automatically or require user intervention.
Seamless Roaming
In a single roaming domain, client devices experience a seamless roam since all the APs in the RF coverage share user VLANs and broadcast domains. Client devices can maintain their user VLAN ID, IP addressing, and default Gateway after each roam. The roaming time will vary based on the WLAN type, its configuration, and the fast-roaming capabilities of the client.
After a successful roam, the client device’s MAC address or port-bindings are updated on all the layer-2 switches where the user VLAN extends. There are two ways in which this happens:
- The client device sends a broadcast frame such as an ARP or DHCP discover, which is flooded over the user VLAN.
- The new AP sends a gratuitous ARP, if proxy ARP is enabled, which is flooded over the user VLAN.
FAQ
- Do ArubaOS 10 APs support layer-3 mobility?
  No, and there are no plans to support it. Gateways provide a scalable and resilient option if layer-3 mobility is required.
- Does Aruba Central limit how many instances of 500 APs or 5,000 clients can be deployed? Are there any Aruba Central group or site limits?
  There are no Aruba Central group or site restrictions.
- Must the AP-management and wireless-user VLANs be dedicated, or can they be shared with other hosts?
  Our best-practice recommendation is to use dedicated VLANs for AP management and wireless users, especially in larger environments where a large amount of broadcast or multicast traffic is expected.
- Is there anything I can do additionally to prevent unwanted broadcast or multicast frames from impacting the APs?
  In addition to implementing dedicated management and user VLANs, it is strongly recommended that you remove any unwanted VLANs from the AP switchports on the access layer switches. These switchports should be explicitly configured to untag the AP-management VLAN and tag only the user VLANs. No other VLANs should be extended to the APs. This ensures that broadcast or multicast frames from other VLANs are not received by the APs.
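On an AOS-CX access switch, an AP-facing port following this guidance might look like the fragment below; the port and VLAN IDs are hypothetical, with 100 as the AP-management VLAN and 200-201 as user VLANs:

```
interface 1/1/10
    description AP-uplink
    no shutdown
    vlan trunk native 100
    vlan trunk allowed 100,200,201
```

Restricting the allowed list to exactly these VLANs keeps broadcast and multicast traffic from any other VLAN off the AP uplink.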
- Can APs in a roaming domain be connected to different management VLANs?
  The APs may be distributed between multiple VLANs as long as you do not exceed 500 APs in the roaming domain. However, since DTLS tunnels are established between APs for session state synchronization and clean-up, the APs must be able to reach each other over IP.
- Can I deploy a single wireless-user VLAN supporting up to 5,000 clients in a roaming domain?
  A single user VLAN is supported; however, most real-world deployments include two or more user VLANs, which naturally establish smaller broadcast domains.
- Can I exceed 500 APs or 5,000 clients if I deploy additional VLANs?
  No. These recommended limits apply per roaming domain regardless of the number of AP management and user VLANs.
- When do I need to consider Gateways for my customers’ deployment?
  Refer to the Gateway Use Cases section.
- Can APs with foundation and advanced licenses be mixed in a roaming domain? What will happen if they are mixed?
  While you may mix APs with different licenses, it is not recommended. Certain features such as Live Upgrade, HPE Aruba Networking AirGroup (custom services), Air Slice, UCC, and MultiZone require an advanced license. Mixing APs with different license tiers in a building or floor results in feature discrepancies and an operationally challenging environment.
- What happens if the total number of APs or clients in a roaming domain exceeds 500 APs or 5,000 clients?
  These are soft limits that are not enforced and are provided as a recommended best practice. If either limit is exceeded, AP and client performance may degrade, especially in high-traffic environments.
- Can I deploy APs and tunneled WLANs across a WAN?
  Not all WANs are created equal:
  - Public Internet, including VPNs: Not supported.
  - Private WANs (MPLS, TDM, etc.): These deployments are not tested and are therefore not supported without additional validation. Contact your HPE Aruba Networking sales team to discuss these requirements further.
  - Metropolitan Ethernet services or Ethernet extensions between sites: These are supported since they are not much different from dark-fiber implementations. However, the service must support standard 1,518-byte or larger Ethernet frames. Gigabit Ethernet speeds or higher are also recommended.
- Are APs and bridged WLANs supported with EVPN-VXLAN?
  APs using bridge forwarding are now supported on the NetConductor solution, with the limitation of no role-to-role group-based policies. Support for third-party EVPN-VXLAN environments varies depending on the vendor’s ability to support rapid MAC address moves. Refer to the Validated Solution Guide (VSG) for more information.
2 - Gateway Deployments
Gateways are high-performance appliances that have evolved to support a wide range of use cases. They can act as (1) the wireless control plane for greater security and scalability or (2) an SD-Branch device with intelligent routing and tunnel-orchestration software. Gateways are not a refresh of wireless controllers; they are expressly designed to be both cloud and IoT ready.
Use Cases
While Gateways are optional, they offer certain features and capabilities that are not available in AP-only deployments. There are deployment scenarios when Gateways should be considered to provide a better end-user experience, simplify operations, or take advantage of advanced features. There are also scenarios where Gateways are mandatory and required.
The following are some common features and use cases for Gateway deployments:
- LAN Architecture—The LAN architecture does not permit management and user VLANs to be extended between the APs, and seamless roaming is required.
- Roaming Domain Scaling—Gateways and tunneled WLANs are required to establish roaming domains that exceed 500 APs and 5,000 clients.
- Layer-3 Mobility—Gateways and tunneled WLANs are required to centralize wireless-user VLANs and permit client devices to seamlessly roam between APs across layer-3 network boundaries.
- RADIUS Proxy—Avoids configuring large numbers of APs as clients on a RADIUS server. If Gateways are deployed, RADIUS messages can be proxied through the Gateway cluster.
- Security and Policy—For policy and compliance, user traffic needs to be segmented and/or terminated in different zones within the network where user VLANs are deemed insufficient. Newer Gateways can further enhance security by enabling IDS/IPS. The IDS/IPS engine performs deep packet inspection to monitor network traffic for malware and suspicious activity. When either is detected, the IDS function alerts network administrators, while the Intrusion Prevention System (IPS) takes immediate action to block threats.
- Traffic Optimization—For high broadcast or multicast environments, Gateways offer more granular controls that can be enabled per VLAN to prevent unwanted broadcast or multicast frames and/or datagrams from reaching the APs.
- Data Plane Termination—Gateways are required to terminate tunnels from Aruba devices, including APs, Gateways, and switches.
- Solutions—Gateways are required for Aruba SD-Branch, Microbranch, and VIA deployments.
- MultiZone—Two or more Gateway clusters are required to deploy MultiZone, where separate tunneled WLANs are terminated on different Gateway clusters within the network.
- Datacenter Redundancy—If layer-3 mobility and failover between datacenters is required.
- Dynamic Segmentation—Dynamic Segmentation unifies role-based access and policy enforcement across wired, wireless, and WAN networks with centralized policy definition and dedicated enforcement points, ensuring that users and devices can communicate only with destinations consistent with their role. Gateways play an essential role in policy enforcement, keeping traffic secure and segregated.
Personas
An AOS 10 Gateway can operate in one of three personas: Mobility, Branch, or VPN Concentrator. The persona is set when creating a new group in Aruba Central. Setting a group type to a persona dictates which configuration options are exposed in the group settings. For example, if the group type is set to Mobility, only WLAN-related configuration options are available in that group, whereas if the type is set to Branch, other SD-Branch-specific Branch Gateway options are available in addition to the WLAN configuration options.
The Mobility persona is used for WLAN deployments whereas the Branch and VPN Concentrator personas are used for SD-Branch deployments.

Gateway Personas
- Mobility
  The Mobility persona configures a Gateway to support wireless (WLAN) and wired (LAN) functionality in a campus network. When a Mobility Gateway is used in a WLAN deployment, all APs form Internet Protocol Security (IPsec) and Generic Routing Encapsulation (GRE) tunnels to the Gateways when a tunneled or mixed-mode WLAN is created.
  Gateways in this mode do not provide any WAN capabilities.
- Branch
  The Branch persona sets a Gateway to operate as an SD-Branch Gateway, supporting the optimization and control of WAN, LAN, WLAN, and cloud security services. The Branch Gateway provides features such as routing, firewall, security, Uniform Resource Locator (URL) filtering, and compression. With support for multiple WAN connection types, the Branch Gateway routes traffic over the most efficient link based on availability, application, user role, and link health. This allows organizations to take advantage of high-speed, lower-cost broadband links to supplement or replace traditional WAN links such as MPLS.
  In addition to Branch functionality, Branch Gateways also support all of the WLAN functionality of a Mobility Gateway.
- VPN Concentrator
  The VPN Concentrator persona sets a Gateway to act as a headend Gateway, or VPN Concentrator (VPNC), for all branch offices. Branch Gateways establish IPsec tunnels to one or more headend Gateways over the Internet or other untrusted networks. High-availability options support either multiple headend Gateways deployed at a single site or headend Gateways deployed in pairs at multiple sites for the highest availability. The most widely deployed topology is the dual hub-and-spoke, where branches are multi-homed to a primary and backup data center. Any of the headend Gateways can perform the VPNC function at the hub site. These devices offer high performance and support a large number of tunnels to aggregate data traffic from hundreds to thousands of branches.
  VPNCs can act as headend Gateways for either Branch Gateways or Microbranch APs.
Role Matrix
Some Gateways do not support all available personas; this restriction should be taken into account when choosing a Gateway model.
| Platform | Mobility | VPNC | Branch |
|---|---|---|---|
| 7000 Series | | | |
| 7005 | Yes | No | Yes |
| 7008 | Yes | No | Yes |
| 7010 | Yes | Yes | Yes |
| 7024 | Yes | Yes | Yes |
| 7030 | Yes | Yes | Yes |
| 7200 Series | | | |
| 7205 | Yes | Yes | Yes |
| 7210 | Yes | Yes | Yes |
| 7220 | Yes | Yes | Yes |
| 7240XM | Yes | Yes | Yes |
| 7280 | Yes | Yes | Yes |
| 9000 Series | | | |
| 9004 | Yes | Yes | Yes |
| 9004-LTE | No | Yes | Yes |
| 9012 | Yes | Yes | Yes |
| 9100 Series | | | |
| 9106 | Yes | Yes | Yes |
| 9114 | Yes | Yes | Yes |
| 9200 Series | | | |
| 9240 | Yes | Yes | Yes |
Roaming With Gateways
An AOS 10 deployment with Gateways supports the ability to configure WLAN profiles to tunnel the user traffic to a cluster of Gateways where the user VLANs reside. Client devices are statically or dynamically assigned to a user VLAN that is extended between all the Gateway nodes in the cluster. The user VLANs either terminate on the core switching layer or a dedicated aggregation switching layer that is also the default Gateway for the Gateway management and user VLANs.
For more details on Gateway clustering, refer to the Clusters topic.
With a centralized forwarding architecture, client devices can seamlessly roam between APs that tunnel user traffic to a common Gateway cluster. The client devices maintain their VLAN membership, IP addressing, and default Gateway since the user VLANs and broadcast domains are common between the cluster members. With the clustering architecture, the client’s MAC address is also anchored to a single cluster member irrespective of the AP to which the client device is attached. The client MAC address moves only in the event of a cluster node upgrade or outage.
Hard roaming is required in AP-Gateway deployments if a client device transitions between APs that tunnel the user traffic to separate Gateway clusters. While the user VLAN IDs may be common between clusters, the IP subnets or broadcast domains must be unique per cluster. Any client device that moves between Gateway clusters must obtain a new IP address and default Gateway after the roam.

AP-Gateway Roaming
Gateway Scaling with AOS 10
Scaling numbers related to clients, AOS 10 devices, tunnels and cluster sizes for various Gateway models can be accessed in the Capacity Planning section of the Validated Solution Guide.
Gateway Cluster Scaling Calculator
This calculator is used to determine the number of gateways required for AOS 10 tunneled WLAN and user based tunneling (UBT) deployments.
The calculator can be accessed in the Capacity Planning section of the Validated Solution Guide.
3 - Gateway Serviceability
The HPE Aruba Networking Central management platform performs essential operations for gateways, such as configuration, monitoring, debugging, and general management, so continuous reachability between the gateways and Central must be maintained.
Upon initial boot, HPE Aruba Networking AOS-10 gateways in the factory-default state establish a WebSocket connection with Central using provisioning data obtained via zero-touch provisioning or manual setup. The `show configuration setup-dialog` command can be used to verify the configuration applied during setup.
Subsequent connectivity disruptions might arise from user-initiated configuration errors, network anomalies, or device malfunctions. To mitigate these disruptions, robust device-side recovery mechanisms are supported.
Automatic recovery
Auto rollback
Configuration errors, including but not limited to the following, can disrupt communication between the gateway and HPE Aruba Networking Central:
- Incorrect uplink VLAN settings
- Uplink port misconfiguration
- Bandwidth contract policy restrictions
- Access control list conflicts
For example, these errors could stem from simple typographical mistakes, rendering corrective actions impossible once connectivity is lost.
To address this, AOS-10 gateways implement automatic rollback to the last known good configuration upon detection of connectivity loss due to configuration service updates.
The gateway also communicates the rollback event to the Central configuration service, enabling user visibility and diagnostic capabilities. After such an event, the `show switches` command output shows a configuration state of `rollback`.
Auto re-provisioning
HPE Aruba Networking Central provides an automated solution for connectivity loss detection and restoration, minimizing user intervention to the correction of erroneous configurations.
In scenarios where the initial connection to Central fails due to inaccurate provisioning parameters (e.g., incorrect management URL), or where automatic rollback is unsuccessful (e.g., expired certificates), gateways support self-reprovisioning from the Activate service. Upon user modification of provisioning data, the gateway will initiate a reset and reconnect to Central using the provided information.
4 - Clusters
A cluster is a group of HPE Aruba Networking Gateways operating as a single entity to provide high availability and service continuity for tunneled clients in a network. Gateway clusters provide redundancy for HPE Aruba Networking APs with mixed or tunneled WLANs, HPE Aruba Networking switches configured for user-based tunneling (UBT), and tunneled clients in the event of maintenance or failure.
Clustering provides the following features and benefits:
- Stateful Client Failover – When a Gateway is taken down for maintenance or fails, APs, UBT switches, and clients continue to receive service from another Gateway in the cluster without any disruption to applications.
- Load Balancing – Device and client sessions are automatically distributed and shared between the Gateways in the cluster. This distributes the workload between the cluster nodes, minimizes the impact of maintenance and failure events, and provides a better connection experience for clients.
- Seamless Roaming – When a client roams between APs, the client remains anchored to the same Gateway in the cluster to provide a seamless roaming experience. Clients maintain their VLAN membership and IP addressing as they roam.
- Ease of Deployment – A Gateway cluster is automatically formed when Gateways are assigned to a group or site in Central, without any manual configuration.
- Live Upgrades – Allows customers to perform in-service cluster upgrades of Gateways while the network remains fully operational. The Live Upgrade feature allows upgrades to be completely automated, a key capability for customers with mission-critical networks that must remain operational 24/7.

Reference diagram of a typical cluster in AOS 10.
4.1 - Types of Clusters
A resilient cluster consists of two or more Gateways that service clients and devices. A cluster consisting of Gateways of the same model is referred to as a homogeneous cluster, while a cluster consisting of Gateways of different models is referred to as a heterogeneous cluster. As a best practice, HPE Aruba Networking recommends deploying homogeneous clusters whenever possible.
Homogeneous Clusters
A homogeneous cluster is a cluster built with Gateways of the same model. The primary benefit of a homogeneous cluster is that each node provides equal client, device, and forwarding capacity along with common port configurations. This makes homogeneous clusters much easier to plan, design, and configure than heterogeneous clusters.

Example cluster consisting of gateways of same series and model.
The maximum number of nodes you can deploy in a homogeneous cluster will vary by series. The 7000 or 9000 series Gateways can support a maximum of four nodes, the 7200 series can support a maximum of twelve nodes, and the 9100 or 9200 series Gateways can support a maximum of six nodes.
| Gateway Series | Maximum Gateways per Cluster |
|---|---|
| 7000 | 4 |
| 7200 | 12 |
| 9000 | 4 |
| 9100 | 6 |
| 9200 | 6 |
Heterogeneous Clusters
A heterogeneous cluster is a cluster built with Gateways of different models. Heterogeneous cluster support is primarily provided to help customers migrate existing clusters using older Gateways to newer models. For example, migrating an existing cluster of 7005 series Gateways to 9004 series Gateways or 7200 series Gateways to 9200 series Gateways.

Example cluster consisting of gateways of differing series and models.
The primary benefit of a heterogeneous cluster is that multiple Gateways models can co-exist within a cluster during a migration, however this comes with some considerations:
- The maximum cluster size will be limited by the lowest common denominator Gateway series. For example, a heterogeneous cluster of 7200 series and 9200 series Gateways will be limited to a maximum of six nodes.
- Base and failover capacities are extremely difficult to calculate. Active and standby client and device sessions will be unevenly distributed between the available nodes based on the capacity of each node. Careful planning must be performed to ensure that the loss of a high-capacity node does not impact clients or devices.
- Forwarding performance, scaling, and uplink capacities will vary between the nodes.
- Configuration in Central may require device-level overrides to accommodate uplink port differences between Gateway models.
While heterogeneous clusters are supported, they are not recommended for long-term production use. Heterogeneous clusters should only be implemented when migrating Gateways in existing clusters to a new model. If a heterogeneous cluster must be implemented, the cluster should be limited to two models of Gateways. While more than two Gateway models can be supported, troubleshooting and debugging will be more complicated if technical issues occur.
| Gateway series | Maximum gateways per cluster |
|---|---|
| 7000 and 9000 | 4 |
| 7000 and 7200 | 4 |
| 9000 and 7200 | 4 |
| 7200 and 9100 | 6 |
| 7200 and 9200 | 6 |
| 9100 and 9200 | 6 |
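The lowest-common-denominator rule behind both cluster-size tables can be expressed as a short sketch (an illustrative helper, not an HPE tool):

```python
# Maximum cluster size per Gateway series (homogeneous clusters).
MAX_CLUSTER_NODES = {"7000": 4, "7200": 12, "9000": 4, "9100": 6, "9200": 6}

def max_cluster_size(*series: str) -> int:
    """A heterogeneous cluster is limited by the lowest-capacity series present."""
    return min(MAX_CLUSTER_NODES[s] for s in series)

print(max_cluster_size("7200"))          # 12 (homogeneous)
print(max_cluster_size("7200", "9200"))  # 6
print(max_cluster_size("7000", "7200"))  # 4
```
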
4.2 - Cluster Roles
Gateways in a cluster are assigned various roles to distribute client and device sessions between the available nodes. For each cluster, one gateway is elected cluster leader, which is responsible for device session assignment, bucket-map computation, and node-list distribution. In addition to the cluster leader role, a gateway may assume one or more of the following roles:
- Device Designated Gateway (DDG) or Standby Device Designated Gateway (S-DDG)
- Switch Designated Gateway (SDG) or Standby Switch Designated Gateway (S-SDG)
- User Designated Gateway (UDG) or Standby User Designated Gateway (S-UDG)
- VLAN Designated Gateway (VDG) or Standby VLAN Designated Gateway (S-VDG)
The roles that are assigned to gateways within a cluster will be dependent on the number of cluster nodes, persona of the gateways, and the types of devices that are tunneling client traffic to the cluster. The UDG/S-UDG roles are assigned to gateways for tunneled clients, DDG/S-DDG roles are assigned to gateways for APs, and SDG/S-SDG roles are assigned to gateways for UBT switches. VDG/S-VDG roles are assigned to Branch Gateways configured for Default Gateway mode that terminate user VLANs.
A cluster can consist of a single gateway or multiple gateways. A single gateway is still considered a cluster as the cluster name must be selected for profiles configured for mixed and tunnel forwarding. When a cluster consists of a single gateway, no standby sessions are assigned as there are no gateways available to assume the standby roles. Standalone gateways will assume the cluster leader and designated role for client and device sessions. When a cluster consists of two or more gateways, designated and standby roles are distributed between the available cluster nodes.
Bucket maps
The cluster leader is responsible for computing a bucket map for the cluster which is published to both APs and UBT switches by their assigned DDGs. Unlike AOS-8 where a bucket map was published per ESSID, in AOS-10 one bucket map is published per cluster. APs and UBT switches tunneling to multiple clusters will have a published bucket map for each cluster.
Bucket maps are used by APs and UBT switches to determine the UDG and S-UDG session assignments for each tunneled client. Each tunneled client is assigned a UDG to anchor north / south traffic. To determine the active and standby UDG role assignments, the last 3 bytes of each client's MAC address are XORed to derive a decimal value (0-255), which is used as an index into the bucket map table to determine the UDG and S-UDG assignments. Each AP and switch that is tunneling to a cluster is provided with the same bucket map. If multizone is deployed, each AP and UBT switch will receive a separate bucket map for each cluster.
The following illustration provides an example bucket map published by a two-node homogeneous cluster. Each gateway in the UDG list is assigned a numerical value (0 and 1 in this case), each with an equal number of active and standby assignments. Each client MAC address is hashed to provide a numerical index value (0-255) that determines each client's active and standby UDG assignment. In this example, the hashed index value 32 will assign node 0 as the UDG and node 1 as the S-UDG, while the index value 15 will assign node 1 as the UDG and node 0 as the S-UDG.

Bucket map output from a gateway cluster.
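The hashing and lookup described above can be sketched in a few lines of Python. This is an illustrative approximation, not the AOS implementation: the exact XOR of the last three MAC octets and the alternating two-node bucket map layout are assumptions based on the description.

```python
# Illustrative sketch of the bucket map lookup (not the AOS implementation).
# Assumption: the index is the XOR of the last three bytes of the client MAC.
def bucket_index(mac: str) -> int:
    octets = [int(part, 16) for part in mac.split(":")]
    b1, b2, b3 = octets[-3:]       # last 3 bytes of the MAC address
    return b1 ^ b2 ^ b3            # XOR of three bytes always yields 0-255

# Hypothetical bucket map for a two-node cluster: index -> (UDG, S-UDG).
# Alternating entries give each node an equal share of active assignments.
bucket_map = [("node0", "node1") if i % 2 == 0 else ("node1", "node0")
              for i in range(256)]

udg, s_udg = bucket_map[bucket_index("00:1a:2b:3c:4d:5e")]
```

Because every AP and UBT switch receives the same bucket map, every device that hashes the same client MAC arrives at the same UDG/S-UDG pair, which is what keeps a client anchored to one gateway while roaming.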
Roles and tunnels
Each AP and UBT switch that is tunneling clients to a cluster will establish tunnels to each gateway node within the cluster:
- Campus AP – Establishes IPsec and GRE tunnels to each cluster node; this operation is orchestrated by Central.
- EdgeConnect Microbranch AP – Establishes IPsec tunnels to each VPN Concentrator in a cluster; this operation is orchestrated by Central. When using centralized layer 2 (CL2) forwarding, GRE tunnels are encapsulated in the IPsec tunnels.
- UBT Switches – Establish GRE tunnels to each cluster node based on switch configuration.
The role of each gateway within a cluster determines which cluster node is responsible for exchanging signaling messages to APs and UBT switches in addition to the forwarding of broadcast (BC), multicast (MC), and unicast traffic destined to tunneled clients.
Device | Tunnel Type | Traffic Type | Gateway Role |
---|---|---|---|
Campus AP | IPsec | Device signaling & BC/MC to clients | DDG |
Campus AP | GRE | Unicast to / from clients & BC/MC from clients | UDG |
EdgeConnect Microbranch AP (CL2) | IPsec | Device signaling & BC/MC to clients | DDG |
EdgeConnect Microbranch AP (CL2) | GRE in IPsec | Unicast to / from clients & BC/MC from clients | UDG |
UBT Switch | GRE | Device signaling & BC/MC to clients (UBT 1.0) | SDG |
UBT Switch | GRE | Unicast to / from clients, BC/MC from clients (UBT 1.0), BC/MC to and from clients (UBT 2.0) | UDG |
Device designated gateway
Each AP is assigned a Device Designated Gateway (DDG) which is responsible for publishing the bucket map to the AP. The bucket map is used for UDG/S-UDG assignments for each tunneled client. One bucket map is published per cluster.
For each AP, the cluster leader selects a DDG and S-DDG as part of the initial orchestration and messaging. The assignments are performed in a round-robin fashion based on each cluster node’s device capacity and load. The resulting distribution will be even for homogeneous clusters and uneven for heterogeneous clusters as gateways will have uneven device capacities. Higher capacity nodes will have more DDG/S-DDG assignments than lower capacity nodes.
Gateways with a DDG role are responsible for the following functions:
- Bucket map distribution
- Forwarding of north / south broadcast and multicast traffic destined to wireless clients
- Forwarding IGMP/MLD group membership reports for IP multicast
The S-DDG assumes the role of publishing the bucket map and other forwarding functions if the DDG is taken down for maintenance or fails. New DDG/S-DDG role assignments are event driven as nodes are added and removed from the cluster. There is no periodic load-balancing. If a failover occurs, the S-DDG assumes the DDG role and a new bucket map is published. Impacted devices from failover are assigned a new S-DDG node.
A cluster can accommodate multiple node failures and assign DDG and S-DDG roles until the cluster’s maximum device capacity has been reached. Once a cluster’s device capacity has been reached and additional nodes are lost, impacted APs will become orphaned as there is no remaining device capacity available in the cluster to accommodate new DDG role assignments.
DDG and S-DDG assignments are performed by the cluster leader and done in a round-robin fashion.

A depiction of the DDG and S-DDG assignments for a four-node heterogeneous cluster.
Switch designated gateway
Each UBT switch is assigned a Switch Designated Gateway (SDG) which, like the DDG role, is responsible for publishing the bucket map to the switches. Unlike APs, where the cluster leader dynamically determines each AP’s DDG and S-DDG role assignment, a UBT switch’s initial SDG assignment is determined by the explicit configuration of the primary and backup gateways as part of the UBT configuration:
- AOS-S – The gateway's IP address specified as the controller-ip or backup-controller-ip
- AOS-CX – The gateway's IP address specified as the primary-controller-ip or backup-controller-ip
The switch's initial SDG assignment is based on the controller-ip or primary-controller-ip defined as part of the switch configuration. The switch's S-SDG assignment is automatic and is distributed between the cluster members based on capacity and load.
When a UBT switch first initializes, an attempt will be made to establish a PAPI session to the primary gateway IP address specified in the configuration. If the primary gateway IP does not respond, the secondary gateway IP is used. Once a connection is established, an S-SDG role is assigned by the gateway cluster leader.
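The primary/backup selection above amounts to a simple fallback, sketched here in Python. The `reachable` callable is a hypothetical stand-in for the switch's PAPI session attempt, not a real API.

```python
# Sketch: a UBT switch attempts the configured primary gateway IP first
# and falls back to the backup IP if the primary does not respond.
# `reachable` is a hypothetical stand-in for the PAPI session attempt.
def select_sdg(primary_ip: str, backup_ip: str, reachable) -> str:
    return primary_ip if reachable(primary_ip) else backup_ip

# Example: the primary is down, so the switch connects to the backup gateway.
sdg = select_sdg("10.1.1.1", "10.1.1.2", reachable=lambda ip: ip != "10.1.1.1")
```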
Gateways with an SDG role are responsible for the following functions:
- Bucket map distribution
- Forwarding of broadcast and multicast traffic destined to UBT version 1.0 clients
- Forwarding IGMP/MLD group membership reports for IP multicast (UBT version 1.0)
The S-SDG assumes the role of publishing the bucket map and other forwarding functions if the SDG is taken down for maintenance or fails. If a failover occurs, the S-SDG assumes the SDG role and a new bucket map is published. Impacted devices from failover are assigned a new S-SDG node.
The initial SDG assignments are based on the switch configuration while the S-SDG assignments are performed by the gateway cluster leader in a round-robin manner.

A depiction of the SDG and S-SDG assignments for a four-node heterogeneous cluster.
As the AOS-S / AOS-CX switch configuration influences the SDG role assignments, HPE Aruba Networking recommends assigning different primary and backup IP addresses to groups of switches to provide an even distribution of SDG roles between the available cluster nodes. The distribution must be performed manually by the switch admin when defining the golden configuration for each group of access layer switches.
An equal distribution of SDG roles between the available cluster nodes is especially important for UBT version 1.0 deployments as each cluster node with an SDG role for a group of UBT switches is responsible for replication and forwarding of broadcast and multicast traffic destined to UBT clients. Distributing the SDG role ensures that broadcast and multicast traffic replication and forwarding is distributed between all the available cluster nodes.
An example distribution of primary IP addresses for a four-node cluster is provided in the table below:
Switch Group | Primary IP |
---|---|
1 | GW-A |
2 | GW-B |
3 | GW-C |
4 | GW-D |
When failover between clusters is required, both the primary and secondary controller IP addresses are configured on each group of UBT switches, where the primary IP points to a cluster node residing in the primary cluster and the secondary IP points to a cluster node residing in the backup cluster. As with a single cluster deployment, the SDG roles should be evenly distributed between the available cluster nodes in each cluster. This ensures even SDG role distribution regardless of which cluster is servicing the UBT switches.
An example distribution of primary and secondary IP addresses for failover between a primary and secondary cluster for four-node clusters is provided in the table below:
Switch Group | Primary IP | Secondary IP |
---|---|---|
1 | GW-DC1-A | GW-DC2-A |
2 | GW-DC1-B | GW-DC2-B |
3 | GW-DC1-C | GW-DC2-C |
4 | GW-DC1-D | GW-DC2-D |
User designated gateway
Each tunneled client is assigned a User Designated Gateway (UDG) to anchor north / south traffic. Each client’s unique MAC address is assigned a UDG and S-UDG via the bucket map that is published by the cluster leader for each cluster.
The bucket indexes used for UDG and S-UDG assignments are allocated in a round-robin fashion based on each cluster node's client capacity. For homogeneous clusters, each gateway in the cluster is allocated an equal number of buckets, while for heterogeneous clusters higher capacity nodes are allocated more buckets than lower capacity nodes. Client MAC address hashing ensures good session distribution and also ensures that each client remains anchored to the same gateway while roaming.
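The capacity-proportional allocation can be sketched as follows. This is a rough illustration of the idea, not the cluster leader's internal algorithm; the node names and client capacities are made-up examples.

```python
# Sketch: allocate 256 bucket-map entries to cluster nodes in proportion
# to each node's client capacity (illustrative numbers, not AOS internals).
def allocate_buckets(capacities: dict, buckets: int = 256) -> dict:
    total = sum(capacities.values())
    shares = {node: (cap * buckets) // total for node, cap in capacities.items()}
    # Hand out any remainder one bucket at a time, largest node first.
    leftover = buckets - sum(shares.values())
    for node in sorted(capacities, key=capacities.get, reverse=True)[:leftover]:
        shares[node] += 1
    return shares

# Heterogeneous example: two larger nodes alongside one smaller node.
shares = allocate_buckets({"GW-A": 32000, "GW-B": 32000, "GW-C": 16000})
```

In this example the smaller node ends up with roughly half the buckets of each larger node, matching the behavior described above for heterogeneous clusters.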
Gateways with a UDG role are responsible for the following functions:
- Forwarding broadcast and multicast traffic received from clients.
- Forwarding of IP multicast traffic destined to UBT 2.0 clients.
- Forwarding of unicast traffic (bi-directional).
The S-UDG assumes the forwarding functions if the UDG is removed from the cluster through maintenance or failure. Bucket map updates are event driven: a new bucket map is published by the cluster leader when nodes are added to or removed from the cluster. With AOS 10 there is no periodic load-balancing. If a failover occurs, the S-UDG assumes the UDG role and a new bucket map is published. Clients impacted by the failover are assigned a new S-UDG node.
A cluster can accommodate multiple node failures and assign UDG and S-UDG roles until the cluster’s maximum client capacity has been reached. Once a cluster’s client capacity has been reached and additional nodes are lost, impacted clients will become orphaned as there is no remaining client capacity available in the cluster to accommodate new UDG role assignments.
UDG/S-UDG role assignments are determined using the published bucket map for the cluster by hashing each client’s MAC address to determine an index value (0-255).

In this example the hashing results in Client 1 being assigned GW-A for UDG and GW-B for S-UDG while Client 2 is assigned GW-C for UDG and GW-D for S-UDG.
Branch high availability
When high availability (HA) is required for branch office deployments, a pair of Branch Gateways is deployed to terminate the WAN uplinks and VLANs within the branches and provide resiliency. Each gateway is configured with an IP interface on the management and user VLANs, and Virtual Router Redundancy Protocol (VRRP) is automatically orchestrated to provide first-hop router redundancy and failover for clients and devices. Dynamic Host Configuration Protocol (DHCP) services may also be enabled to provide host addressing, which will also operate in HA mode.
With the convergence of clustering and branch HA, role assignments are further optimized to prevent client traffic from taking multiple hops within the cluster. Branch HA is enabled on pairs of gateways using auto-site clustering and requires the default gateway mode to be enabled within the Central configuration group. A peer connection is established between the gateways at each site where a preferred leader is configured by the admin or is automatically elected.
The cluster leader performs the following roles within the cluster during normal operation:
- VLAN designated gateway (VDG) and VRRP active role for the management and user VLANs
- DDG role for each AP
- SDG role for each UBT switch
- UDG role for each tunneled client
The leader is responsible for routing and forwarding of all branch management and client traffic during normal operation. The forwarding of WAN traffic is distributed between the gateways and may traverse the virtual peer link. The assignment of all the active roles to the preferred gateway ensures that all client traffic is anchored to the preferred gateway during normal operation, preventing unnecessary east-west traffic. The VDG and VRRP state for the management and user VLANs is synchronized and pinned to the active gateway. The secondary gateway operates in a standby mode and assumes all the standby roles. The only client traffic that is forwarded by a standby gateway is WAN traffic for any WAN uplinks it terminates.
If the active gateway is taken down for maintenance or fails, the standby gateway will take over all the active roles within the cluster along with all routing and forwarding functions. As multiple layers of convergence are required, failover is not seamless and will temporarily impact user traffic.

The DDG, SDG, UDG and VDG role assignments for a branch HA cluster.
4.3 - Automatic and Manual Modes
Cluster Modes
AOS 10 supports automatic and manual clustering modes to support Gateways that are deployed for wireless access, User Based Tunneling (UBT) or VPN Concentrators (VPNCs). A cluster can be automatically or manually established between Gateways that are assigned to the same configuration group. A cluster cannot be formed between Gateways that are assigned to separate configuration groups.
When the clustering mode for a configuration group is set to auto group or auto site, a cluster is automatically established between the Gateways within the group with no additional configuration required. A unique cluster name is automatically generated by Central, and the cluster configuration and establishment are automatically orchestrated by Central. When the clustering mode is set to manual, the admin must select the cluster members and specify a cluster name.
Additional cluster configuration options are available for both automatic and manual clustering modes based on the Mobility, Branch or VPN Concentrator role assigned to the Gateway configuration group. These additional options are available when the Manual Cluster configuration option is enabled within the configuration group. Different options are available for Mobility, Branch and VPN Concentrator roles.
The cluster mode is defined per configuration group and each configuration group may support Gateways using both automatic and manual clustering modes. The following cluster combinations are supported per group:
- One auto group cluster and one or more manual clusters
- One or more auto site clusters and one or more manual clusters
- Multiple manual clusters
The only limitation is that a configuration group cannot support multiple auto group clusters or an auto group and auto site cluster.
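The supported combinations above reduce to a small validity check, sketched here for illustration; the string labels are hypothetical, not Central API values.

```python
# Sketch of the supported cluster combinations per configuration group:
# at most one auto group cluster, auto group and auto site cannot coexist,
# and any number of manual clusters may sit alongside either automatic mode.
def valid_cluster_set(clusters: list) -> bool:
    auto_group = clusters.count("auto_group")
    auto_site = clusters.count("auto_site")
    if auto_group > 1:                 # only one auto group cluster per group
        return False
    if auto_group and auto_site:       # the two automatic modes cannot mix
        return False
    return True
```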
Auto Group Clustering
Auto group clustering mode is the default clustering mode for Mobility and VPN Concentrator Gateway configuration groups. Gateways within the configuration group with shared configuration will automatically form a cluster amongst themselves.
Gateways in configuration groups with auto group clustering enabled are assigned a unique cluster name using the auto_group_XXX format where XXX is the unique numerical ID of the configuration group. This applies to configuration groups with a single Gateway or multiple Gateways. Only one auto group cluster is permitted for each configuration group. Campus deployments with multiple clusters will implement one configuration group for each cluster. This is demonstrated in the following graphic where three configuration groups with auto group clustering are used to configure Gateways in two data centers and a DMZ:

Auto Group Clustering Mode
When auto group clusters are present in Central, they can be assigned to WLAN and wired-port profiles configured for tunnel or mixed forwarding modes. The APs can reside in the same configuration group as the Gateways or in a separate configuration group. The auto group cluster you assign to each profile determines where client traffic is tunneled. You can assign one auto group cluster as a Primary Gateway Cluster and one auto group cluster as a Secondary Gateway Cluster. If present, you may assign other cluster types as a Secondary Gateway Cluster. Once the profile configuration has been saved, Central will automatically orchestrate the IPsec and GRE tunnels from the APs to the Gateway cluster nodes selected for each profile.
The following graphic demonstrates the auto group cluster options that are presented for a WLAN profile when the Tunnel forwarding mode is selected:

Auto Group cluster profile assignment
Auto Site Clustering
Auto site clustering mode is the default clustering mode for Branch Gateway configuration groups. Auto site clusters simplify operation and configuration for branch office deployments by allowing APs to automatically tunnel to Gateways in their site. Only Gateways in the same configuration group and site will automatically form a cluster amongst themselves.
Gateways with auto site clustering enabled are assigned a unique cluster name using the auto_site_XX_YYY format, where XX is the unique numerical ID of the site and YYY is the unique numerical ID of the configuration group. A unique cluster name is generated for sites with standalone Gateways or multiple Gateways. Only one auto site cluster is permitted per site within a configuration group.
Branch office deployments will often include Branch Gateways of different models deployed in standalone or HA configurations depending on the size and needs of each branch site. One configuration group with auto site clustering is created for each Gateway model and variation. This is demonstrated below, where two configuration groups are used for 9004 series Gateways deployed standalone and in HA pairs. Each standalone Gateway and HA pair is assigned to its respective site and is automatically assigned a unique cluster name:

Auto Site clustering mode
When auto site clusters are present in Central, they can be assigned to WLAN and wired-port profiles configured for tunnel or mixed forwarding modes. The APs may reside in the same configuration group as the Gateways or a separate configuration group. If separate configuration groups are deployed, one AP configuration group will be required for each Gateway configuration group.
Unlike auto group clusters, where profiles are configured to tunnel traffic to a specific cluster, auto site clustering allows the admin to select a Gateway configuration group. The dropdown for the Primary Gateway Cluster lists each Gateway configuration group with auto site clustering enabled. Once the profile configuration has been saved, Central will automatically orchestrate the IPsec and GRE tunnels from the APs to the Gateway cluster nodes in their site.
The following graphic demonstrates the auto site cluster options that are presented for a WLAN profile when the Tunnel forwarding mode is selected. In this example four configuration groups configured for auto site clustering for 9004 and 9012 series Gateways in standalone and HA pairs are presented:

Auto Site cluster profile assignment
A site may also include a second auto site cluster if additional failover is required. As only one auto site cluster can be established between Gateways in the same configuration group and site, a second configuration group is required for the additional auto site cluster to be established. The Gateways in the second auto site cluster are assigned to the same site as the Gateways in the primary auto site cluster. The second auto site configuration group can then be assigned as a Secondary Gateway Cluster within the profile. This is demonstrated below where a primary and secondary auto site cluster is assigned:

Auto Site cluster failover
Manual Clustering
Manual clustering mode is optional for Branch Gateway, Mobility, and VPN Concentrator configuration groups. When automatic clustering is disabled, clusters can be manually created and named by the admin. Disabling the automatic clustering mode in a configuration group does not remove existing auto group or auto site clusters. Existing automatic clusters can either be retained as-is or removed and re-created manually.
Each manual cluster requires a unique cluster name and one or more Gateways in the group to be assigned. Each configuration group can support multiple manual mode clusters if required. Gateways within a configuration group can only be assigned to one automatic or one manual cluster at a time. Gateways can only form a manual cluster with other Gateways in the same configuration group.
Manual mode clusters are useful for situations where user-defined cluster names are required, members need to be deterministically assigned, or multiple clusters need to be formed between Gateways within the same configuration group. This is demonstrated below, where two configuration groups are used to configure and manage Mobility Gateways in two data centers. As VLANs and other configuration are shared, manual clustering is used to establish two clusters in each configuration group. This simplifies configuration and operation, as two configuration groups can be used instead of the four configuration groups that auto group clustering mode would require.

Manual clustering mode
When manual clusters are present in Central, they can be assigned to WLAN and wired-port profiles configured for tunnel or mixed forwarding modes. The APs can reside in the same configuration group as the Gateways or in a separate configuration group. The cluster you assign to each profile determines where client traffic is tunneled. You can assign one manual cluster as a Primary Gateway Cluster and one manual cluster as a Secondary Gateway Cluster. Once the profile configuration has been saved, Central will automatically orchestrate the IPsec and GRE tunnels from the APs to the Gateway cluster nodes selected for each profile.
The following graphic demonstrates the manual cluster options that are presented for a WLAN profile when the Tunnel forwarding mode is selected:

Manual cluster profile assignment
4.4 - Formation Process
Cluster Formation
Cluster formation between Gateways is determined by the cluster configuration within each configuration group. When an automatic cluster mode is enabled, Central orchestrates the cluster name and configuration for each cluster node:
- Auto group – A cluster is orchestrated between active Gateways within the same configuration group.
- Auto site – A cluster is orchestrated between active Gateways within the same configuration group and site.
When manual cluster mode is enabled, the admin defines the cluster name and cluster members. The admin configuration initiates the cluster formation between the active Gateways.
Handshake Process
The first step of cluster formation is a handshake process where messages are exchanged between all potential cluster members over the management VLAN using the Gateways' system IP addresses. The handshake occurs using PAPI hello messages that are exchanged between nodes to verify reachability between all cluster members. Information relevant to clustering is exchanged through these hello messages, including platform type, MAC address, system IP address, and version. After all members have exchanged hello messages, they establish IKEv2 IPsec tunnels with each other in a fully meshed configuration.
What follows is a depiction of cluster members engaging in the hello message exchange process as part of the handshake prior to cluster formation:

Handshake Process / Hello Messages
Cluster Leader Election
For each cluster one Gateway will be selected as the cluster leader. Depending on the persona of the Gateways, the cluster leader has multiple responsibilities including:
- Active and standby VLAN designated Gateway (VDG) assignment
- Active and standby device designated Gateway (DDG) assignment
- Active and standby user designated Gateway (UDG) assignment
- Standby switch designated Gateway (S-SDG) assignment
The cluster election takes place after the initial handshake as a parallel thread to VLAN probing and the heartbeat process.
WLAN Gateways
The cluster leader is elected as the result of the hello message exchange which includes each platform’s information, priority, and MAC address. The leader election process considers the following (in order):
- Largest Platform
- Configured Priority
- Highest MAC Address
For homogeneous clusters, the Gateway with the highest configured priority or MAC address is elected as the cluster leader. For heterogeneous clusters, the largest Gateway with the highest configured priority or MAC address is elected as the cluster leader. The MAC address is the tiebreaker when equal-capacity nodes with the same priority are evaluated.
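The three-step ordering can be expressed as a sort key. This is an illustrative sketch of the comparison order only; the field names (`capacity`, `priority`, `mac`) and values are hypothetical.

```python
# Sketch of the WLAN leader ordering: largest platform first, then highest
# configured priority, then highest MAC address as the final tiebreaker.
# Field names ("capacity", "priority", "mac") are illustrative, not AOS fields.
def election_key(node: dict) -> tuple:
    mac_value = int(node["mac"].replace(":", ""), 16)
    return (node["capacity"], node["priority"], mac_value)

nodes = [
    {"name": "DC-GW1", "capacity": 32000, "priority": 128, "mac": "00:0b:86:99:aa:01"},
    {"name": "DC-GW2", "capacity": 32000, "priority": 128, "mac": "00:0b:86:99:aa:05"},
]
leader = max(nodes, key=election_key)   # equal size and priority: MAC decides
```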
The following graphic depicts a cluster leader election for a four-node 7240XM cluster. In this example DC-GW2 has the highest MAC address and is elected as the cluster leader. All other nodes become members:

WLAN cluster leader election
Branch HA Gateways
When branch HA is configured on two branch Gateways, the leader can be either automatically elected or manually selected by the admin. When a preferred leader is manually selected, no automatic election occurs, and the selected node becomes the leader.
When no preferred leader is configured, the leader election process considers the following (in order):
- Number of Active WAN Uplinks (Uplink Tracking)
- Largest Platform
- Highest MAC Address
Most branch Gateway deployments implement a pair of Gateways of the same series and model, forming a homogeneous cluster. When uplink tracking is disabled, the branch Gateway with the highest MAC address is elected as the cluster leader. The MAC address is the tiebreaker when equal-capacity nodes are evaluated.
When uplink tracking is enabled, the number of active WAN uplinks are evaluated and the Gateway with the highest number of active WAN uplinks will be elected as the cluster leader. Inactive, virtual, and backup WAN uplinks are not considered.
VLAN Probes
Gateways in a configuration group share the same VLAN configuration and port assignments. The management and user VLANs are common between the Gateways in a cluster and must therefore be extended between the Gateways by the respective core / aggregation layer switches. A missing or isolated VLAN on one or more Gateways can result in blackholed clients.
VLAN probes are used by Gateways in a cluster to detect isolated or missing VLANs on each cluster node. Each cluster node transmits unicast EtherType 0x88b5 frames out each VLAN destined to the other cluster nodes. For a cluster consisting of four nodes, each node may transmit a VLAN probe per VLAN to three peers. To prevent unnecessary or duplicate probes, each Gateway keeps track of probe requests and responses to each cluster peer for each VLAN. If a Gateway responds to a probe for a given VLAN from a peer, the Gateway marks the VLAN as successful and will skip transmitting its own probe to that peer for that VLAN.
VLANs that receive a response on each node are marked as successful, while VLANs that do not receive a response are marked as failed and displayed as failed in Central. Prior to 10.6, Gateways probe all configured VLANs, including VLAN 1. As there is no configuration option to exclude specific VLANs, VLAN 1 will often show in Central as failed.
In 10.6 and above, VLAN probing has been enhanced to be more intelligent: only VLANs with assigned clients are probed. While the gateway's management VLAN is always probed, as it is required for cluster establishment, only user VLANs with active tunneled clients are probed. VLANs with no tunneled clients are no longer automatically probed, preventing unused VLANs from being displayed as failed in Central. Only user VLANs that have not been extended will be displayed as failed.
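The probe deduplication described above is essentially bookkeeping over (peer, VLAN) pairs. The sketch below illustrates the idea only; the class, peer names, and VLAN IDs are hypothetical, not AOS internals.

```python
# Sketch of the probe bookkeeping described above: answering a peer's probe
# for a VLAN proves the VLAN is extended between the two nodes, so the node
# marks the (peer, VLAN) pair successful and skips sending its own probe.
class VlanProber:
    def __init__(self, peers, vlans):
        self.pending = {(p, v) for p in peers for v in vlans}
        self.successful = set()

    def on_probe_received(self, peer, vlan):
        # The peer reached us on this VLAN; no duplicate probe is needed.
        self.successful.add((peer, vlan))
        self.pending.discard((peer, vlan))

    def probes_to_send(self):
        return sorted(self.pending)

prober = VlanProber(peers=["GW-B", "GW-C"], vlans=[100, 101])
prober.on_probe_received("GW-B", 100)   # skip probing GW-B on VLAN 100
```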
VLANs that have failed probes are listed in the cluster details view in Central. This is demonstrated below, where VLANs 100 and 101 have not been extended to one Gateway node in a cluster and are both listed as failed for that node. Note that in this example the Gateways are running 10.5; as such, VLAN 1 is also listed as failed for each node:

Cluster polling failed VLANs
Heartbeats
Cluster nodes exchange PAPI heartbeat messages to cluster peers at regular intervals in parallel to the leader election and VLAN probing messages. These heartbeat messages are bidirectional and serve as the primary detection mechanism for cluster node failures. A round trip delay (RTD) is computed for every request and response. Heartbeats are integral to the process the cluster leader uses to determine the role of each cluster node and detect node failures.
Failure detection and failover time are determined by the heartbeat threshold configured for the cluster. Failure detection is based on no response within the configured heartbeat threshold, which is configurable between 500 ms and 2000 ms. The default value of 900 ms is recommended for a single uplink, while a detection time of 2000 ms is recommended for a port-channel.
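The threshold check reduces to comparing elapsed time since the last heartbeat response against the configured value. This sketch is illustrative only; the timestamps and peer names are made up, and the real detection runs inside the gateway's clustering process.

```python
# Sketch: a peer is flagged as failed when no heartbeat response has arrived
# within the configured threshold (configurable 500-2000 ms; default 900 ms).
def failed_peers(last_response_ms: dict, now_ms: int,
                 threshold_ms: int = 900) -> list:
    return [peer for peer, seen in sorted(last_response_ms.items())
            if now_ms - seen > threshold_ms]

# GW-B last answered 1500 ms ago, beyond the 900 ms default threshold.
down = failed_peers({"GW-B": 10_000, "GW-C": 11_200}, now_ms=11_500)
```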
Connectivity and Verification
The Gateway Cluster dashboard displays a list of Gateway clusters provisioned and managed by Central. This can be accessed in Central by selecting Devices > Gateways > Clusters then selecting a specific cluster name. This view can be accessed with a global context filter or by selecting a specific configuration group or site.
The Summary view for a cluster provides important cluster information such as the leader, version, capacity, and the number of node failures that can be tolerated. The graphic below provides an example summary for a two-node 7220 cluster. Note that the summary view provides color-coded client capacity over time for each node, which is useful for determining client distribution during normal and peak times. In this example each node's client capacity is below 40% for the past 3 hours:

Cluster summary and capacity
The Gateways view provides a list of cluster nodes, operational status, per-node capacity, model, and role information. The following graphic demonstrates the status view for the above production cluster. This view shows that each cluster node is UP and that SJQAOS10-GW11 has been elected as the cluster leader. Note that the number of current active and standby client sessions for each node is also provided. Clients are distributed between the available nodes based on the published bucket map for the cluster:

Cluster gateway status
The Gateways view also provides additional heartbeat and VLAN probe information for each peer. You can view the peer details for each member of the cluster using the dropdown. This is demonstrated below, where the peer details for SJQAOS10-GW11 are shown. In this example the peer Gateway has a member role and is connected. Note that all VLANs (including VLAN 1) have been correctly extended between the Gateways, therefore no VLANs have failed probes:

Cluster peer status
4.5 - Features
Cluster Features
Seamless Roaming
The advantage of introducing the concept of the UDG is that it significantly enhances the experience for client roaming within a cluster. When a client associates to an AP, the AP hashes the client’s MAC address and assigns a UDG using the bucket map published for the cluster. Each client’s traffic is always anchored to its UDG, which remains the same regardless of which AP the client roams to. Because each AP maintains GRE tunnels to each cluster node, any AP the client roams to will automatically forward the traffic to the UDG upon association and authentication.
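The anchoring behavior can be sketched in a few lines. The actual AOS hash function, bucket count, and map format are internal to the platform, so everything below (SHA-256, 256 buckets, round-robin standby assignment) is an illustrative assumption, not the real algorithm:

```python
import hashlib

def build_bucket_map(nodes, buckets=256):
    """Illustrative bucket map: each bucket gets a (UDG, S-UDG) pair.
    In AOS the cluster leader computes and publishes the real map."""
    bucket_map = []
    for b in range(buckets):
        udg = nodes[b % len(nodes)]
        s_udg = nodes[(b + 1) % len(nodes)]  # standby always on a different node
        bucket_map.append((udg, s_udg))
    return bucket_map

def lookup(bucket_map, client_mac):
    """Hash the client MAC into a bucket. Every AP sharing the same
    published map resolves the same UDG, so roaming between APs
    never changes the client's anchor Gateway."""
    h = int(hashlib.sha256(client_mac.lower().encode()).hexdigest(), 16)
    return bucket_map[h % len(bucket_map)]

nodes = ["GW-A", "GW-B", "GW-C"]
bmap = build_bucket_map(nodes)
udg, s_udg = lookup(bmap, "aa:bb:cc:00:11:22")
# Deterministic: the same MAC always resolves to the same anchor
assert lookup(bmap, "aa:bb:cc:00:11:22") == (udg, s_udg)
```

The key property this models is determinism: the anchor is a pure function of the client MAC and the published map, so no roaming signaling is needed to locate a client's UDG.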
A visual representation of the roaming process within a cluster is displayed below. In this example, GW-B is the assigned UDG for the client:

Seamless client roaming
Stateful Failover
Stateful failover is a critical aspect of cluster operations that safeguards clients from any impacts associated with a Gateway failure event. When multiple Gateways are present in a cluster, each client’s state is fully synchronized between the UDG and the S-UDG, meaning that information such as the station table, user table, and layer 2 and layer 3 user state is shared between both Gateways.
In addition, high value sessions such as FTP and DPI-qualified sessions are also synced to the S-UDG. Synchronizing client state and high value session information enables the S-UDG to assume the role as the client’s new UDG if the client’s current UDG fails. This permits stateful failover with no client de-authentication when clients move from their UDG to their S-UDG.
Event Driven Load Balancing
Client and device distribution is greatly simplified in AOS 10. One major change is that load balancing is no longer periodically performed during run-time and is now event driven as Gateways are added or removed from the cluster. Client distribution between cluster nodes is performed using the published bucket map for the cluster, while device distribution is performed by the cluster leader based on each Gateway's device capacity.
The goal of load balancing during a node addition or removal is to avoid disruption to clients and devices. When a Gateway in a cluster is taken down for maintenance or fails, impacted UDG, DDG and S-DDG sessions seamlessly transition to their standby nodes with little or no impact to traffic:
- The cluster leader recomputes a new bucket map, which is published to all devices. The bucket map is not immediately republished, providing sufficient time to activate standby client entries. The new bucket map includes the new S-UDG assignments for the clients.
- The cluster leader reassigns the S-DDG/S-SDG sessions, which are published immediately.
If the cluster leader is taken down for maintenance or fails, a new cluster leader is elected, and a role change notification is sent to all devices. The new cluster leader is responsible for recomputing and distributing the new bucket map for the cluster and performing DDG/SDG reassignments.
When a Gateway is added to a cluster, the cluster leader recomputes UDG and S-UDG assignments in two passes to avoid disruption to clients. The bucket map from the first pass is published after 120 seconds, while the bucket map from the second pass is published after 165 seconds.
DDG assignments are also recomputed when Gateways are added to a cluster. If the cluster is operating with a single node, S-DDG assignments are made for all devices that don’t have an S-DDG assignment. The cluster leader also performs load-balancing and re-assigns DDG and S-DDG sessions based on each Gateway's capacity.
Live Upgrades
In AOS 10 Gateways are configured, managed, and upgraded independently from APs. AOS 10 APs and Gateways can run different AOS 10 software versions and can both be independently upgraded with zero network downtime as maintenance windows allow.
The live upgrade feature for Gateways allows cluster nodes to be upgraded with minimal or no impact to clients. When a live upgrade is initiated, the new firmware version is downloaded to the specified partition on all the Gateways in the cluster. Once the new firmware version has been downloaded and validated, the Gateways are upgraded and then sequentially rebooted so that all tunneled sessions remain synchronized as each UDG, DDG, and SDG node restarts.
When a live upgrade is initiated for a cluster, the upgrade status of each node is displayed. Each node will first download the specified firmware image from the cloud and will upgrade the target partition. Once upgraded, the nodes are sequentially rebooted to minimize the impact to clients and devices:

Example of Live Upgrade
Live upgrades can be performed on-demand or scheduled. Upgrades can be scheduled for any time within one week of the current date and time; a time zone, date, and start time in hours and minutes must be specified. Scheduled live upgrades can be cancelled any time prior to the scheduled event. Here’s an example of a live upgrade scheduled for an individual cluster where new firmware will be downloaded and installed on the Gateways’ primary partitions. The time zone is set to UTC and the date and time are specified.

Live Upgrade scheduling
4.6 - Dynamic Authorization in a Cluster
Change of Authorization
Change of Authorization (CoA) is a feature which extends the capabilities of the Remote Authentication Dial-In User Service (RADIUS) protocol and is defined in RFC 5176. CoA request messages are usually sent by a RADIUS server to a Network Access Server (NAS) device for dynamic modification of authorization attributes for an existing session. If the NAS device is able to successfully implement the requested authorization changes for the client, it will respond to the RADIUS server with a CoA acknowledgement also referred to as a CoA-ACK. Conversely, if the change is unsuccessful, the NAS will respond with a CoA negative acknowledgement or CoA-NAK.
For tunneled clients, CoA requests are sent to the target client’s user designated Gateway (UDG). The UDG returns an acknowledgement to the RADIUS server upon successful implementation of the changes, or a NAK if the implementation was unsuccessful. However, a client’s UDG may change during normal cluster operations due to maintenance or failures. These scenarios can cause CoA requests to be dropped, as the intended client would no longer be associated with the Gateway that received the request. HPE Aruba Networking has implemented cluster redundancy features to prevent this scenario.
Cluster CoA Support
The primary protocol used to provide CoA support for clusters in AOS 10 is Virtual Router Redundancy Protocol (VRRP). Every cluster runs the same number of VRRP instances as it has nodes, and each Gateway serves as the conductor of one instance. For example, a cluster with four Gateways would have four instances of VRRP and four virtual IP addresses (VIPs). The VRRP conductor receives messages intended for the VIP of its instance, while the remaining Gateways in the cluster act as backups for every instance in which they are not the conductor. This configuration ensures that each cluster is protected by a fault-tolerant and fully redundant design.
AOS 10 reserves VRRP instance IDs in the 220-255 range. When the conductor of each instance sends RADIUS requests to the RADIUS server, it injects the VIP of its instance into the message as the NAS-IP by default. This ensures that CoA requests from the RADIUS server will always be forwarded correctly regardless of which Gateway is the acting conductor for each instance. In other words, the RADIUS server sends CoA requests to the current conductor of a VRRP instance rather than to an individual Gateway; from its perspective, it is sending the request to the current holder of the instance's VIP address. Here’s a depiction of the sample architecture that will be used for the duration of the CoA section:

Example CoA implementation
This sample network consists of a four-node cluster with four instances of VRRP. The assigned VRRP ID range falls between 220 and 255, therefore the four instances in this cluster are assigned the VRRP IDs of 220, 221, 222, and 223. The priorities for the Gateways in each instance are dynamically assigned where the conductor of the instance is assigned a priority of 255, the first backup is assigned a priority of 235, the second backup is assigned a priority of 215 and the third backup is assigned a priority of 195.
VRRP Instance | Virtual IP | GW-A Priority | GW-B Priority | GW-C Priority | GW-D Priority |
---|---|---|---|---|---|
220 | VIP 1 | 255 | 235 | 215 | 195 |
221 | VIP 2 | 195 | 255 | 235 | 215 |
222 | VIP 3 | 215 | 195 | 255 | 235 |
223 | VIP 4 | 235 | 215 | 195 | 255 |
GW-A is the conductor of instance 220 with a priority of 255, GW-B is the first backup with a priority of 235, GW-C is the second backup with a priority of 215 and GW-D is the third backup with a priority of 195. Similarly, GW-B is the conductor for instance 221 due to having the highest priority of 255, GW-C is the first backup with a priority of 235, GW-D is the second backup with a priority of 215 and GW-A is the third backup with a priority of 195. Instances 222 and 223 follow the same pattern as instances 220 and 221.
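The rotation in the table above can be reproduced with a short calculation: the conductor of each instance holds priority 255, and each successive backup is 20 lower. The function name and structure below are purely illustrative; only the priority values come from the example:

```python
def vrrp_priorities(gateways, base_instance=220):
    """Reproduce the rotated VRRP priority table: for each instance,
    the conductor gets 255 and each successive backup drops by 20
    (255 / 235 / 215 / 195 in a four-node cluster)."""
    table = {}
    n = len(gateways)
    for i in range(n):
        prios = {}
        for j, gw in enumerate(gateways):
            offset = (j - i) % n  # position relative to this instance's conductor
            prios[gw] = 255 - 20 * offset
        table[base_instance + i] = prios
    return table

table = vrrp_priorities(["GW-A", "GW-B", "GW-C", "GW-D"])
# Instance 220: GW-A conducts with 255, backups follow at 235/215/195
assert table[220] == {"GW-A": 255, "GW-B": 235, "GW-C": 215, "GW-D": 195}
```

The rotation guarantees that every Gateway conducts exactly one instance and that priorities never tie, so a failure always produces a deterministic new conductor.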
CoA with Gateway Failure
The failure of a cluster node can adversely impact CoA operations if the network lacks the appropriate level of fault tolerance. If a user’s anchor Gateway fails, the RADIUS server will still send CoA requests to that Gateway’s address with the assumption that it will enforce the change and respond with an ACK. Without a redundancy mechanism such as VRRP, the request goes unanswered and no change occurs. In such a scenario, the users associated with the failed node fail over to their standby UDG as usual, but the new UDG never receives the change request because the RADIUS server is unaware of the cluster operations. VRRP instances must be implemented for each node to prevent such an occurrence and maintain CoA operations in the cluster.
In the figure below, GW-A is the conductor of instance 220 with GW-B serving as the first backup, GW-C serving as the second backup and GW-D serving as the third backup. A client associated to GW-A has been fully authenticated using 802.1X with GW-D acting as the client’s standby UDG. When communicating with ClearPass, GW-A automatically inserts the VIP for instance 220 as the NAS-IP. From the perspective of ClearPass, it is sending CoA requests to the current conductor of VRRP instance 220.

Client authentication against ClearPass
If GW-A fails, the client session fails over to GW-D, the client's standby UDG, which assumes the role of UDG for the client. Since GW-B has a higher priority than GW-C or GW-D, it assumes the role of conductor for instance 220 and takes ownership of the VIP.

GW-A failure
Any CoA requests sent by ClearPass for client 1 will be addressed to the VIP for instance 220. From the perspective of ClearPass, the VIP of instance 220 is the correct address for any CoA request intended for the client in the example. As GW-A has failed, GW-B is now the conductor of VRRP instance 220 and owns the VIP. When ClearPass sends a CoA request for the client, GW-B will receive it and then forward it to all nodes in the cluster. In this case GW-B forwards the request to GW-C and GW-D.

CoA message forwarded to GW-B
After the change in the CoA request has been successfully implemented, GW-D will send a CoA acknowledgement message back to ClearPass.

CoA ACK from GW-D
4.7 - Failover
Cluster Failover
Cluster failover is a new feature in AOS 10 which permits APs servicing mixed or tunnel profiles to fail over between datacenters in the event that all the cluster nodes in the primary datacenter fail or become unreachable. Cluster failover is enabled by selecting a secondary Gateway cluster when defining a new mixed or tunnel profile. Unlike failover within a cluster, which is non-impacting to clients and applications, failover between clusters is not hitless.
When a secondary cluster is selected in a profile, APs servicing the profile will tunnel the client traffic to the primary cluster during normal operation. IPsec and GRE tunnels are established from the APs to cluster nodes in both the primary and secondary cluster. Failover to the secondary cluster is initiated once all the tunnels to the cluster nodes in the primary cluster go down and at least one cluster node in the secondary cluster is reachable. A primary and secondary cluster selection within a WLAN profile is depicted below.

Configuring for primary and secondary cluster.
Primary cluster failure detection typically occurs within 60 seconds. When a failure is detected, the profiles are disabled for a further 60 seconds to bounce the tunneled clients, permitting broadcast domain changes when moving between datacenters. Once re-enabled, the tunneled clients obtain new IP addressing and are able to resume communications across the network through the secondary cluster. AP and client sessions are distributed between the secondary cluster nodes in the same way as the primary cluster. Each AP is assigned a DDG & S-DDG session based on each node’s capacity and load while each client is assigned a UDG & S-UDG session based on bucket map assignment.
Failover between clusters can be enabled with or without preemption. When preemption is enabled, APs can automatically fail-back to the primary cluster when one or more nodes in the primary cluster become available. When preemption is triggered, the APs include a default 5-minute hold-timer to prevent flapping. The primary cluster must be up and operational for 5 minutes (non-configurable) before fail-back to the primary cluster can occur. As with failover from the primary to secondary cluster, the profiles are disabled for 60 seconds to accommodate broadcast domain changes.
When considering cluster failover, careful planning is required to ensure that the Gateways in the secondary cluster have adequate client and device capacity to accommodate a failover. The capacity of the secondary cluster should be equal to or greater than that of the primary cluster.
In addition to capacity planning, VLAN assignments must also be considered. While the IP networks can be unique within each datacenter, any static or dynamically assigned VLANs must be present in both datacenters and configured in both clusters. This will ensure that tunneled clients are assigned the same static or dynamically assigned VLAN during a failover. If VLAN pools are implemented, the hashing algorithm will ensure that the tunneled clients are assigned the same VLAN in each cluster.
Cluster failover can be implemented and leveraged in different ways. Your profiles can all be configured to prefer a cluster in the primary datacenter and only fail over to a cluster residing in the secondary datacenter during a primary datacenter outage. In this model, all traffic is anchored to the primary datacenter during normal operation. A primary-secondary datacenter failover model is depicted below.

Datacenter workload failover
Alternatively, your WLAN profiles in different configuration groups can be configured to distribute the primary and secondary cluster assignments between the datacenters. For example, half the APs in a campus can be configured to prefer the primary datacenter and failover to the secondary datacenter while the other half of the APs in the campus can be configured to prefer the secondary datacenter and failover to the primary datacenter. With this model the traffic workload would be evenly distributed between both datacenters. This is sometimes referred to as salt-and-peppering as depicted below.

Datacenter workload distribution
4.8 - Planning
Planning a Gateway Cluster
Each cluster can support a specific number of tunneled clients and tunneling devices. The Gateway series, model, and number of cluster nodes determines each cluster’s capacity. When planning a cluster, the primary consideration is the number of Gateways that are required to meet the base client, device, and tunnel capacity needs in addition to how many Gateways are required for redundancy.

Total cluster capacity factors in the base and redundancy requirements.
Cluster Capacity
A cluster’s capacity is the maximum number of tunneled clients and tunneling devices each cluster can serve. This includes each AP and UBT switch/stack that establishes tunnels to a cluster and each wired or wireless client device that is tunneled to the cluster.
For each Gateway series, HPE Aruba Networking publishes the maximum number of clients and devices supported per Gateway and per cluster. The maximum number of cluster nodes that can be deployed per Gateway series is also provided. This information and other considerations such as uplink types and uplink bandwidth are used to select a Gateway model and the number of cluster nodes that are required to meet the base capacity needs.
Once your base capacity needs are met, you can then determine the number of additional nodes that are needed to provide redundant capacity to accommodate maintenance events and failures. The additional nodes added for redundancy are not dormant during normal operation and will carry user traffic. Additional nodes can be added as needed up to the maximum supported cluster size for the platform.
7000 / 9000 Series - Gateway Scaling
Scaling | 7005 | 7008 | 7010 | 7024 | 7030 | 9004 | 9012 |
---|---|---|---|---|---|---|---|
Max Clients / Gateway | 1,024 | 1,024 | 2,048 | 2,048 | 4,096 | 2,048 | 2,048 |
Max Clients / Cluster | 4,096 | 4,096 | 8,192 | 8,192 | 16,384 | 8,192 | 8,192 |
Max Devices / Gateway | 64 | 64 | 128 | 128 | 256 | 128 | 512 |
Max Devices / Cluster | 256 | 256 | 512 | 512 | 1,024 | 512 | 1,024 |
Max Tunnels / Gateway | 5,120 | 5,120 | 5,120 | 5,120 | 10,240 | 5,120 | 5,120 |
Max Cluster Size | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes | 4 Nodes |
7200 Series – Gateway Scaling
Scaling | 7205 | 7210 | 7220 | 7240XM | 7280 |
---|---|---|---|---|---|
Max Clients / Gateway | 8,192 | 16,384 | 24,576 | 32,768 | 32,768 |
Max Clients / Cluster | 98,304 | 98,304 | 98,304 | 98,304 | 98,304 |
Max Devices / Gateway | 1,024 | 2,048 | 4,096 | 8,192 | 8,192 |
Max Devices / Cluster | 2,048 | 4,096 | 8,192 | 16,384 | 16,384 |
Max Tunnels / Gateway | 12,288 | 24,576 | 49,152 | 98,304 | 98,304 |
Max Cluster Size | 12 Nodes | 12 Nodes | 12 Nodes | 12 Nodes | 12 Nodes |
9100 / 9200 Series – Gateway Scaling
Scaling | 9114 | 9240 Base | 9240 Silver | 9240 Gold |
---|---|---|---|---|
Max Clients / Gateway | 10,000 | 32,000 | 48,000 | 64,000 |
Max Clients / Cluster | 60,000 | 128,000 | 192,000 | 256,000 |
Max Devices / Gateway | 4,000 | 4,000 | 8,000 | 16,000 |
Max Devices / Cluster | 8,000 | 8,000 | 16,000 | 32,000 |
Max Tunnels / Gateway | 40,000 | 40,000 | 80,000 | 160,000 |
Max Cluster Size | 6 Nodes | 6 Nodes | 6 Nodes | 6 Nodes |
Maximum cluster capacity
Each cluster can support a maximum number of clients and devices that cannot be exceeded. The number of cluster nodes required to reach a cluster’s maximum client or device capacity will vary by Gateway series and model. In some cases the maximum number of clients and devices for a cluster can only be reached by ignoring any high availability requirements and running with no redundancy.
Gateway series | Gateway model | Max cluster client capacity |
---|---|---|
7000 | All | 4 Nodes |
9000 | All | 4 Nodes |
7200 | 7205 | 12 Nodes |
7200 | 7210 | 6 Nodes |
7200 | 7220 | 4 Nodes |
7200 | 7240XM / 7280 | 3 Nodes |
9100 | All | 6 Nodes |
9200 | All | 4 Nodes |
Gateway series | Gateway model | Max cluster device capacity |
---|---|---|
7000 | All | 4 Nodes |
9000 | All | 4 Nodes |
7200 | All | 2 Nodes |
9100 | All | 2 Nodes |
9200 | All | 2 Nodes |
When a cluster’s client or device maximum capacity has been reached, the addition of more cluster nodes will not provide any additional client or device capacity. A cluster cannot support more clients or devices than the stated maximum for the Gateway series or model. Once the maximum client or device capacity has been reached for a cluster, each additional node will add forwarding and uplink capacity for client traffic in addition to client and device capacity for failover.
What Consumes Capacity
Each tunneled client and tunneling device consumes resources within a cluster. Each Gateway model can support a specific number of clients and devices that directly correlates to the available processing, memory resources and forwarding capacity for each platform. HPE Aruba Networking tests and validates each platform at scale to determine these limits.
With AOS 10, the Gateway scaling capacity has changed from AOS 8. These new capacities should be considered when evaluating a Gateway series or model for deployment with AOS 10. As AP management and control is no longer provided by Gateways, the number of supported devices and tunnels has increased.
Client Capacity
Each tunneled client device (unique MAC) consumes one client resource within a cluster and counts against the cluster’s published client capacity. For each Gateway series and model, HPE Aruba Networking provides the maximum number of clients that can be supported per Gateway and per homogeneous cluster. Each Gateway model and cluster cannot support more clients than the stated maximum.
When determining client capacity needs for a cluster, consider all tunneled clients that are connected to Campus APs, Microbranch APs, and UBT switches. Each tunneled client consumes one client resource within the cluster. Clients that need to be considered include:
- WLAN clients connected to Campus APs.
- WLAN clients connected to Microbranch APs implementing Centralized Layer 2 (CL2) forwarding.
- Wired clients connected to tunneled downlink ports on APs.
- Wired clients connected to UBT ports.
{: .note } Only tunneled clients that terminate in a cluster need to be considered. WLAN and wired clients connected to Campus APs, Microbranch APs or UBT ports that are bridged by the devices are excluded. WLAN and wired clients connected to Microbranch APs implementing Distributed Layer 3 (DL3) forwarding may also be excluded.
Each AP and active UBT port establishes GRE tunnels to each cluster node. The bucket map published by the cluster leader determines each tunneled client’s UDG and S-UDG assignment. A client’s UDG assignment determines which GRE tunnel the AP or UBT switch uses to forward the client’s traffic. If the client’s UDG fails, the client’s traffic transitions to the GRE tunnel associated with the client’s assigned S-UDG.
The number of tunneled clients does not influence the number of GRE tunnels that APs or UBT switches establish to the cluster nodes. Each AP and active UBT port will establish one GRE tunnel to each cluster node regardless of the number of tunneled client devices the WLAN or UBT port is servicing. The number of WLAN and wired port profiles also does not influence the number of GRE tunnels. The GRE tunnels are shared by all the profiles that terminate within a cluster.
The figure below depicts the client resource consumption for a 4 node 7240XM cluster supporting 60K tunneled clients. A four node 7240XM cluster can support a maximum of 98K clients and each node can support a maximum of 32K clients. In this example each client is assigned a UDG and S-UDG using the cluster’s published bucket map, which distributes the sessions between the four cluster nodes. Each cluster node is allocated 15K UDG sessions and 15K S-UDG sessions during normal operation.

An example showing how a four node cluster of 7240XM Gateways supporting 60K tunneled clients will consume the available client capacity on each node within the cluster.
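The arithmetic behind this example is straightforward and can be sketched as follows (the even split assumes the bucket map distributes clients uniformly, as in the figure):

```python
def sessions_per_node(tunneled_clients, nodes):
    """Each tunneled client consumes one UDG session and one S-UDG
    session; with a uniform bucket map, both session types split
    evenly across the cluster nodes during normal operation."""
    per_node = tunneled_clients // nodes
    return per_node, per_node  # (UDG sessions, S-UDG sessions)

# The 60K-client, four-node 7240XM example from the figure:
udg, s_udg = sessions_per_node(60_000, 4)
assert (udg, s_udg) == (15_000, 15_000)
```

The same arithmetic applies to device capacity: the 8K-AP example later in this section yields 2K DDG plus 2K S-DDG sessions per node.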
Device Capacity
Each tunneling device consumes one device resource within a cluster and counts against the cluster’s published device capacity. For each Gateway series and model, HPE Aruba Networking provides the maximum number of devices that can be supported per Gateway and per homogeneous cluster. Each Gateway model and cluster cannot support more devices than the stated maximum.
When determining device capacity for a cluster, you need to consider all devices that are tunneling client traffic to the cluster. Each device that is tunneling client traffic to a cluster consumes a device resource within the cluster. Devices that need to be considered include:
- Campus APs
- Microbranch APs
- UBT Switches
Each AP and UBT switch that is tunneling client traffic to a cluster establishes IPsec tunnels to each cluster node for signaling, messaging and bucket map distribution. The cluster leader determines each AP’s DDG and S-DDG assignment, which are load balanced based on each cluster node's capacity and load. For UBT switches, the admin configuration determines each UBT switch’s SDG assignment while the cluster leader determines the S-SDG assignment. UBT switches implement a PAPI control-channel to the SDG node for signaling, messaging and bucket map distribution.
The figure below depicts the device resource consumption for a 4 node 7240XM cluster supporting 8K APs. A four node 7240XM cluster can support a maximum of 16K devices and each node can support a maximum of 8K devices. In this example each AP is assigned a DDG and S-DDG by the cluster leader that are distributed between the four cluster nodes. Each cluster node in this example is allocated 2K DDG sessions and 2K S-DDG sessions during normal operation.

An example showing how a four node cluster of 7240XM Gateways supporting 8K APs will consume the available device capacity on each node within the cluster.
Tunnel Capacity
APs and UBT switches establish IPsec and/or GRE tunnels to each cluster node. APs will only establish tunnels to a cluster when a WLAN or wired-port profile is configured for mixed or tunnel forwarding, and a cluster is selected as the primary or secondary cluster. UBT switches will only tunnel to the cluster that is configured as the primary or secondary IP as part of the switch configuration.
The following types of tunnels will be established:
- Campus APs – IPsec and GRE tunnels
- Microbranch APs (CL2) – IPsec and GRE tunnels. GRE tunnels are encapsulated in IPsec.
- UBT Switches – GRE tunnels
The tunnels from Campus APs and Microbranch APs are orchestrated by Central, while the GRE tunnels from UBT switches are initiated based on admin configuration. Each tunnel from an AP or UBT switch consumes tunnel resources on each Gateway within a cluster. Unlike client and device capacity, which are evaluated per cluster, tunnel capacity is evaluated per Gateway.
The number of tunnels that a device can establish to each Gateway in a cluster will vary by device. During normal operation, APs will establish 2 x IPsec tunnels (SPI-in and SPI-out) per Gateway for DDG sessions and 1 x GRE tunnel per Gateway for UDG sessions. The number of IPsec tunnels will periodically increase to 4 x IPsec tunnels per Gateway during re-keying (5 tunnels total, including the GRE tunnel). Microbranch APs configured for CL2 forwarding consume the same number of tunnels as Campus APs; the main difference is that each GRE tunnel is encapsulated in the IPsec tunnel.
Tunnel consumption for a Campus AP is depicted in the figure below. In this example the AP has established 2 x IPsec tunnels and 1 x GRE tunnel to each Gateway in the cluster. The 2 additional IPsec tunnels that are periodically established to each Gateway for re-keying are also shown. Worst case, each AP will establish a total of 5 tunnels to each Gateway in the cluster during re-keying.

An AP will potentially have five individual tunnels operational to each cluster node, each AP will reserve tunnel capacity appropriately.
For WLAN only deployments, the need for calculating the tunnel consumption per Gateway is not required as the maximum number of devices supported per Gateway already factors in the worst-case maximum of 5 tunnels per AP. As the maximum number of devices per Gateway is a hard limit, there will never be more tunnels established by APs than a Gateway can support.
The number of GRE tunnels established to each cluster node per UBT switch or stack is variable based on UBT version and number of UBT ports. For both UBT versions, 1 x GRE tunnel is established per UBT port to each Gateway in the cluster which are used for UDG sessions. The total number of UBT ports will therefore influence the total number of GRE tunnels that are established to each cluster node.
When UBT version 1.0 is deployed, two additional GRE tunnels are established from each UBT switch or stack to their SDG/S-SDG cluster nodes. These additional GRE tunnels are used to forward broadcast and multicast traffic destined to clients similar to how DDG tunnels are used on APs. Each UBT switch or stack configured for UBT version 1.0 will therefore consume two additional GRE tunnels per cluster.
Tunnel consumption for a UBT switch with two active UBT ports is depicted in the figure below. In this example the UBT switch is configured for UBT version 1.0 and has established 1 x GRE tunnel to each of its SDG and S-SDG Gateways for broadcast / multicast traffic destined to clients. Additionally, each active UBT port has established 1 x GRE tunnel to each Gateway for UDG sessions. If all 48 ports were active in this example, a total of 49 x GRE tunnels would be established per Gateway. Note that the number of clients per UBT port does not influence GRE tunnel count but would count against the cluster’s client capacity.

Example of how a switch will build GRE tunnels to a gateway when configured with support for UBT.
As the tunnel consumption for UBT deployments is variable, it is therefore important to understand the UBT version that will be implemented, the total number of UBT switches or stacks, and the total number of UBT ports. For UBT version 1.0, each switch / stack will consume 2 x GRE tunnels per cluster and each UBT port will consume 1 x GRE tunnel per Gateway in the cluster for UDG sessions. For UBT version 2.0, each UBT port will consume 1 x GRE tunnel per Gateway in the cluster for UDG sessions.
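The per-Gateway tunnel rules above can be rolled into a simple sizing sketch. This is an illustrative worst-case estimate only: it assumes 5 tunnels per AP (the re-keying maximum) and pessimistically assumes every UBT v1.0 switch-level broadcast/multicast tunnel lands on the same node:

```python
def tunnels_per_gateway(aps=0, ubt_ports=0, ubt_switches=0, ubt_version="2.0"):
    """Worst-case tunnel terminations on a single cluster node,
    based on the per-device counts described in this section."""
    # Each AP: 2 IPsec + 2 re-key IPsec + 1 GRE = 5 tunnels worst case
    total = aps * 5
    # Each active UBT port: 1 GRE tunnel to every Gateway in the cluster
    total += ubt_ports
    if ubt_version == "1.0":
        # 2 extra GRE tunnels per switch/stack per cluster (SDG + S-SDG);
        # worst case, assume this node terminates one from every switch
        total += ubt_switches
    return total

# 500 APs plus 10 UBT v1.0 switches with 48 active ports each:
print(tunnels_per_gateway(aps=500, ubt_ports=480, ubt_switches=10, ubt_version="1.0"))
```

Comparing the result against the Max Tunnels / Gateway column in the scaling tables above shows whether the chosen model has headroom for a mixed AP and UBT deployment.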
For mixed WLAN and UBT switch deployments, the number of tunnels consumed by both the APs and UBT switches may potentially exceed a Gateway's tunnel capacity. As such it is important to calculate the total number of tunnels needed to support your deployment, as each Gateway in the cluster will be terminating tunnels from both APs and UBT switches.
Determining Capacity
To successfully determine a cluster’s base capacity requirements, a good understanding of the environment is needed. Each Gateway model is designed to support a specific number of clients, devices and tunnels, and can forward a specific amount of encrypted and unencrypted traffic. The number of cluster nodes you deploy in a cluster will determine the total number of clients and devices that can be supported during normal operation and during maintenance or failure events.
Base Capacity
A successful cluster design starts by gathering requirements which will influence the Gateway model and number of cluster nodes you deploy. Once the base capacity has been determined, additional nodes can then be added to the base cluster as redundant capacity.
To determine a cluster's base capacity requirements, the following information needs to be gathered:
- Total Tunneled Clients – The total number of client devices that will be tunneled to the cluster. This includes wireless clients, clients connected to wired AP ports, and wired clients connected to UBT ports. Each unique client MAC address counts as one client.
- Total Tunneling Devices – The total number of devices establishing tunnels to the cluster. This includes Campus APs, Microbranch APs, and UBT switches. Each AP and UBT switch / stack counts as one device.
- Total UBT Ports – If UBT is deployed, the total number of UBT ports across all switches and stacks must be known.
- UBT Version – The UBT version determines whether additional GRE tunnels are established to the cluster from each UBT switch or stack for broadcast / multicast traffic destined to clients. This can be significant if the total number of UBT switches or stacks is high.
- Traffic Forwarding – The minimum aggregate amount of user traffic that will need to be forwarded by the cluster. This will help with Gateway model selection.
- Uplink Ports – The types of Ethernet ports needed to connect each Gateway to their respective switching layer and the number of uplink ports that need to be implemented.
Determining the number of clients and devices that need to be supported by a cluster is a straightforward process. Each tunneled client (wired and wireless) will consume one client resource within the cluster. Each AP and UBT switch or stack that is tunneling client traffic to a cluster will consume one device resource within that cluster. A Gateway model and number of nodes can then be selected to meet the client and device capacity needs. The primary goal is to deploy the minimum number of cluster nodes required to meet your base client and device capacity needs.
When evaluating client and device capacities to select a Gateway, the best practice is to use 80% of published Gateway and cluster scaling numbers to ensure that your base cluster design includes 20% additional capacity to accommodate future expansion. Designing a cluster at 100% scale is not recommended as there will be no headroom to support additional clients or devices after the initial deployment.
The general philosophy used to select a Gateway model and determine the minimum number of nodes required to meet the base capacity needs starts with referencing the tables below. These tables provide the maximum number of clients and devices supported per Gateway and per cluster and can aid by narrowing the choice of Gateways to a specific series or model.
For example, if your base cluster needs to support 50,000 clients and 5,000 APs, the 7000 and 9000 series Gateways can be quickly eliminated as can the 7205 and 7210 series Gateways. The remaining Gateway options are reduced to the 7220, 7240XM, 7280 and 9240 base models.
Using 80% scaling numbers, the minimum number of nodes required to meet the client and device capacity requirements for each Gateway model can be calculated and evaluated. For each model, the maximum clients and devices supported per platform are captured and the 80% values derived; the number of nodes required to meet the client and device requirements can then be determined for each platform. The minimum number of nodes needed for client capacity and for device capacity will likely differ. For example, a specific Gateway model may require 2 nodes to meet client capacity needs but only 1 node to meet device capacity needs.
This is demonstrated below, where the 80% client and device capacities for each Gateway model are listed under the per-node columns. Each value is multiplied out to determine how many nodes are required to meet the 50,000 client and 5,000 AP requirement. Using the 7220 as an example, a minimum of 3 nodes is required to meet the client capacity requirement (19,660 x 3 = 58,980) while a minimum of 2 nodes is required to meet the device capacity requirement (3,277 x 2 = 6,554).
Other Gateway models require a minimum of 1 or 2 nodes to meet the above client and device capacity requirements. As such the 7220 can be excluded from consideration as 3 nodes are required to meet the capacity needs vs. 2 nodes for other models.
Model | 80% client cap per node | Min Nodes | Cluster | 80% device cap per node | Min Nodes | Cluster |
---|---|---|---|---|---|---|
7220 | 19,660 | 3 | 58,980 | 3,277 | 2 | 6,554 |
7240XM | 26,214 | 2 | 52,428 | 6,554 | 1 | 6,554 |
7280 | 26,214 | 2 | 52,428 | 6,554 | 1 | 6,554 |
9240 Base | 25,600 | 2 | 51,200 | 3,200 | 2 | 6,400 |
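The node-count arithmetic above can be sketched in a few lines. The per-node figures are the 80% values from the table; the helper name is illustrative, not part of any product tooling.

```python
import math

# 80% scaling values per node, taken from the table above.
platforms = {
    "7220":      {"clients": 19_660, "devices": 3_277},
    "7240XM":    {"clients": 26_214, "devices": 6_554},
    "7280":      {"clients": 26_214, "devices": 6_554},
    "9240 Base": {"clients": 25_600, "devices": 3_200},
}

def min_nodes(per_node: dict, clients: int, devices: int) -> int:
    """Minimum cluster size is driven by whichever resource needs more nodes."""
    return max(math.ceil(clients / per_node["clients"]),
               math.ceil(devices / per_node["devices"]))

for model, caps in platforms.items():
    print(model, min_nodes(caps, clients=50_000, devices=5_000))
# 7220 needs 3 nodes; the other models need 2
```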
The next step is to evaluate the number of uplink ports and port types needed to connect the Gateways to their respective core / aggregation layer switches. As a best practice, each Gateway should connect to a redundant switching layer using a minimum of two ports in a LACP configuration. Each Gateway model is available with different Ethernet port configurations supporting different speeds; models are available with copper, SFP, SFP+, SFP28 and QSFP+ interfaces, as detailed in the datasheets.
In the above example, the 7240XM, 7280 and 9240 base models all support a minimum of four SFP+ ports, and any of them can be selected if 10Gbps uplinks are required. If higher speed uplinks such as 25Gbps or 40Gbps are needed, the 7240XM can be excluded.
In parallel, the forwarding performance of each Gateway model needs to be considered. The maximum amount of traffic that each Gateway model can forward is provided in the published datasheets. Each Gateway model can forward a specific amount of user traffic and the number of nodes in the cluster determines the aggregate throughput of the cluster. For example, a 9240 base Gateway can provide up to 20Gbps of forwarding capacity. A 2-node 9240 base cluster will offer an aggregate forwarding capacity of 40Gbps (2 x 20Gbps).
If more aggregate forwarding capacity is required, a different Gateway model and uplink type might be selected. For example, a 7280 series Gateway connected using QSFP+ interfaces can provide up to 80Gbps of forwarding capacity per Gateway, so a 2-node 7280 cluster offers an aggregate forwarding capacity of 160Gbps (2 x 80Gbps).
In the above example, both the 9240 base and 7280 series Gateways meet the base capacity requirements with a 2-node cluster. The ultimate decision as to which Gateway model to use will likely come down to uplink port preference based on the port types that are available on the switching layer and aggregate forwarding capacity requirements. Additional nodes can be added to the base cluster design if more uplink and aggregate forwarding capacity is required.
The above example captured the methodology used to select a Gateway model and determine the minimum cluster size for a wireless LAN only deployment and did not evaluate tunnel capacity. Because a Gateway cannot support more APs than its maximum device capacity, a Gateway's tunnel capacity cannot be exceeded in a wireless LAN only deployment.
When UBT is deployed, the number of clients and devices will influence your base cluster client and device capacity requirements while the UBT version and total number of UBT ports will influence tunnel capacity requirements. As the total number of UBT switches or stacks and UBT ports are variable, additional validation will be required to ensure that tunnel capacity on a selected Gateway model is not exceeded:
- UBT version 1.0 – Each UBT switch or stack will consume 2 x GRE tunnels to the cluster for broadcast / multicast traffic destined to clients. Additionally, each UBT port will consume 1 x GRE tunnel to each Gateway in the cluster.
- UBT version 2.0 – Each UBT port will consume 1 x GRE tunnel to each Gateway in the cluster.
Expanding on the previous example, let’s assume the base cluster needs to support 50,000 clients, 4,500 APs, 512 UBT switches / stacks and 12,288 UBT ports and UBT version 2.0 will be implemented. The total number of clients and devices remains the same, but we have now introduced additional GRE tunnels to support the UBT ports.
We have already determined that a 2-node cluster using 7240XM, 7280 or 9240 base series Gateways can meet the base client and device capacity needs. The next step is to calculate tunnel consumption. Each AP will establish up to 5 tunnels to each Gateway and each UBT port will establish 1 tunnel to each Gateway. With simple multiplication and addition, we can easily determine the total number of tunnels that are required:
- AP Tunnels / Gateway: 5 x 4,500 = 22,500
- UBT Port Tunnels / Gateway: 12,288
For this example, a total of 34,788 tunnels per Gateway is required. We can determine the maximum tunnel capacity for each Gateway model and calculate the 80% tunnel scaling number. The number of required tunnels is then subtracted to determine the remaining number of tunnels for each model.
This is demonstrated in the table below, which shows that the tunnel capacity requirement can be met by both the 7240XM and 7280 series Gateways but not by the 9240 base series Gateway. The 9240 base Gateway would not be a good choice for this mixed wireless LAN / UBT deployment unless a separate cluster is deployed.
Model | Capacity (80%) | Required | Remaining |
---|---|---|---|
7240XM | 76,800 | 34,788 | 42,012 |
7280 | 76,800 | 34,788 | 42,012 |
9240 Base | 32,000 | 34,788 | -2,788 |
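The tunnel arithmetic for this UBT version 2.0 example can be sketched as follows (the 5-tunnels-per-AP figure is the worst case used above; function names are illustrative):

```python
def tunnels_per_gateway(aps: int, ubt_ports: int, tunnels_per_ap: int = 5) -> int:
    """UBT 2.0: each UBT port consumes one GRE tunnel to each Gateway,
    and each AP establishes up to 5 tunnels to each Gateway."""
    return aps * tunnels_per_ap + ubt_ports

required = tunnels_per_gateway(4_500, 12_288)          # 34,788 tunnels
capacity_80 = {"7240XM": 76_800, "7280": 76_800, "9240 Base": 32_000}

for model, cap in capacity_80.items():
    print(model, cap - required)   # negative means the model is over capacity
```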
If UBT version 1.0 were deployed in the above example, two additional GRE tunnels would be consumed per UBT switch or stack to the cluster. In this example, 1,024 additional GRE tunnels would be established from the 512 UBT switches to different Gateways within the cluster based on the SDG/S-SDG assignments. To calculate the additional per-Gateway tunnel consumption for UBT version 1.0, the total number of tunnels is divided by the number of base cluster nodes. For a 2-node base cluster, 512 additional tunnels would be consumed per Gateway.
Redundant Capacity
Once a base cluster design has been determined, additional nodes can be added to provide redundant capacity. Each additional node provides additional forwarding capacity, uplink capacity, and redundant client and device capacity to accommodate maintenance and failure events. It's important to note that nodes added to your base cluster are not dormant; they support client and device sessions and forward traffic during normal operation.
The number of additional nodes that you add to your base cluster for redundant capacity will be influenced by your tolerance for how many cluster nodes can be lost before client or device capacity is impacted. Your cluster design may include as many redundant nodes as the maximum cluster size for the Gateway series supports.
Minimum redundancy is provided by adding one redundant node to the base cluster. This is referred to as N+1 redundancy, where the cluster can sustain the loss of a single node without impacting clients or devices. An N+1 redundancy model is typically employed for base clusters consisting of a single node but may also be used to provide redundancy for base clusters with multiple nodes. The following is an example of an N+1 redundancy model where one additional node is added to each base cluster:

N+1 redundancy is achieved by adding one Gateway to the cluster, allowing the cluster to sustain a single node failure without interruption.
The maximum number of redundant nodes that you add to your base cluster will typically be less than or equal to the number of nodes in the base cluster. The only limitation is the maximum number of cluster nodes the Gateway series can support.
When the number of redundant nodes equals the number of base cluster nodes, maximum redundancy is provided. This is referred to as 2N redundancy (also known as N+N redundancy) where the cluster can sustain the loss of half its nodes without impacting clients or devices. 2N redundancy is typically employed in mission critical environments where continuous operation is required. The cluster nodes may reside within the same datacenter or be distributed between datacenters when bandwidth and latency permits. The 2N redundancy model is depicted below where three redundant nodes are added to a three-node base cluster design:

2N Redundancy
Most cluster designs will not include more redundant nodes than the base cluster unless additional forwarding, uplink or firewall capacity is required. Your cluster design may include a single redundant node for N+1 redundancy, twice the base node count for 2N redundancy, or something in between.
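As a minimal sketch of the sizing rule described above (the scheme names are from this section; any real design must also respect the maximum cluster size for the Gateway series):

```python
def redundant_nodes(base: int, scheme: str) -> int:
    """Nodes added beyond the base cluster for a given redundancy scheme."""
    if scheme == "N+1":
        return 1        # survive the loss of one node
    if scheme == "2N":
        return base     # survive the loss of half the cluster
    raise ValueError(f"unknown scheme: {scheme}")

# A 3-node base cluster:
print(3 + redundant_nodes(3, "N+1"))   # 4 nodes total
print(3 + redundant_nodes(3, "2N"))    # 6 nodes total
```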
MultiZone
One main architectural change in AOS 10 is that WLAN and wired-port profiles in an AP configuration group can terminate on different clusters. This capability is referred to as MultiZone and is supported by Campus APs using profiles configured for mixed or tunnel forwarding and Microbranch APs with profiles configured for Centralized Layer 2 (CL2) forwarding.
MultiZone has various applications within an enterprise network. The most common use is segmentation where different classes of traffic are tunneled to different points within the network. For example, trusted traffic from an employee WLAN is tunneled to a cluster located in the datacenter while untrusted traffic from a guest/visitor WLAN is tunneled to a cluster located in a DMZ behind a firewall. Other common uses include departmental access and multi-tenancy.
When planning for capacity for a MultiZone deployment, the following considerations need to be made:
- Each AP will consume a device resource on each cluster it is tunneling client traffic to.
- Each AP will establish IPsec and GRE tunnels to each cluster node for each cluster it is tunneling client traffic to.
- Each tunneled client will consume a client resource on the cluster it is tunneled to.
- Each AP can tunnel to a maximum of twelve Gateways across all clusters.
MultiZone is enabled when WLAN or wired-port profiles configured for mixed or tunnel forwarding are provisioned that terminate on separate clusters within the Central instance. When enabled, APs will establish IPsec and GRE tunnels to each cluster node in each cluster. As with a single cluster implementation, the APs will establish 3 tunnels to each cluster node during normal operation and 5 tunnels during re-keying.
DDG and S-DDG sessions are allocated in each cluster by each cluster leader that also publishes the bucket map for their respective cluster. Each tunneled client is allocated a UDG and S-UDG session in their respective cluster based on the bucket map for that cluster.
Tunnel consumption for a MultiZone AP deployment is depicted below. In this example an AP is configured with three WLAN profiles where two WLAN profiles terminate on an employee cluster while one WLAN profile terminates on a guest cluster. The APs establish IPsec and GRE tunnels to each cluster and are assigned DDG sessions in each cluster and receive a bucket map for each cluster. Clients connected to WLAN A or WLAN B are assigned UDG sessions in the employee cluster while clients connected to WLAN C are assigned UDG sessions in the guest cluster.

Multizone capacity
Capacity planning for a MultiZone deployment follows the methodology described in previous sections where the base capacity for each cluster is designed to support the maximum number of tunneling devices and tunneled clients that terminate in each cluster. Additional nodes are then added for redundant capacity.
As mixed and tunneled WLAN and wired-port profiles can be distributed between multiple configuration groups in Central, a good understanding of the total number of APs that are assigned to profiles terminating in each cluster is required. Device capacity and tunnel consumption may be equal across clusters if profiles are common between all APs and configuration groups or unequal if different profiles are assigned to APs in each configuration group.
For example, if WLAN A, WLAN B and WLAN C in this illustration are assigned to 1,000 APs in configuration group A and WLAN A and WLAN B are assigned to 1,000 APs in configuration group B, 2,000 device resources would be consumed in the employee cluster while 1,000 device resources would be consumed in the guest cluster. Tunnel consumption would be 10,000 on the Gateways in the employee cluster and 5,000 on the Gateways in the guest cluster.
An understanding of the maximum number of tunneled clients per cluster across all WLANs is also required and this will typically vary between clusters. For example, the employee cluster may be designed to support a maximum of 10,000 employee devices while the guest cluster may be designed to support a maximum of 2,000 guest or visitor devices. In this case WLAN A and WLAN B would consume 10,000 client resources on the employee cluster while WLAN C would consume 2,000 client resources on the guest cluster.
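The per-cluster accounting in this example can be sketched as below; the group and cluster names are taken from the illustration, and the 5-tunnels-per-Gateway figure is the worst case noted earlier:

```python
# Each AP consumes one device resource, and establishes up to 5 tunnels to
# each Gateway, on every cluster that any of its profiles terminates on.
groups = [
    {"aps": 1_000, "clusters": {"employee", "guest"}},  # group A: WLAN A, B, C
    {"aps": 1_000, "clusters": {"employee"}},           # group B: WLAN A, B
]

def consumption(groups: list, cluster: str, tunnels_per_ap: int = 5) -> tuple:
    """Device resources consumed in a cluster and tunnels per Gateway in it."""
    devices = sum(g["aps"] for g in groups if cluster in g["clusters"])
    return devices, devices * tunnels_per_ap

print(consumption(groups, "employee"))  # (2000, 10000)
print(consumption(groups, "guest"))     # (1000, 5000)
```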
5 - Forwarding Modes of Operation
In AOS 10, client traffic can be bridged locally by the APs or be tunneled to a Gateway cluster. How the client traffic is forwarded by the APs is determined by the traffic forwarding mode configured in each WLAN and downlink wired port profile:
- Bridge – The AP bridges client traffic out its uplink interface on the desired VLAN.
- Tunnel – The AP tunnels client traffic to a Gateway cluster on the desired VLAN.
- Mixed – The AP bridges or tunnels client traffic based on VLAN assignment.

Traffic Forwarding Modes
The traffic forwarding mode configured in each profile determines if the APs or the Gateways are the authenticators and where the VLAN and user-role assignment decisions are made:
- Bridge Forwarding – APs are the authenticators and determine the static or dynamic VLAN and user role assignment for each client.
- Tunnel or Mixed Forwarding – Gateways are the authenticators and determine the static or dynamic VLAN and user role assignments for each client.
For mixed forwarding, the assigned VLAN ID determines if the client’s traffic is bridged locally by the AP or tunneled to a Gateway cluster. Client traffic is bridged if the assigned VLAN is not present within the assigned Gateway cluster and is tunneled if the VLAN is present within the assigned Gateway cluster.
The traffic forwarding modes are extremely flexible, permitting wireless client traffic to be bridged or tunneled as needed. Selecting bridge or tunnel forwarding restricts the WLAN profile to that single forwarding mode: a profile configured for bridge forwarding cannot tunnel user traffic and vice versa. Mixed forwarding permits both forwarding types but requires dedicated VLANs to be implemented for bridged and tunneled clients.
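The decision logic above can be summarized in a short sketch (a conceptual model, not the AP's actual implementation):

```python
def forwarding_decision(mode: str, client_vlan: int, cluster_vlans: set) -> str:
    """Bridge and tunnel modes are exclusive; mixed mode decides per client
    based on whether the assigned VLAN exists on the Gateway cluster."""
    if mode in ("bridge", "tunnel"):
        return mode
    if mode == "mixed":
        return "tunnel" if client_vlan in cluster_vlans else "bridge"
    raise ValueError(f"unknown mode: {mode}")

print(forwarding_decision("mixed", 73, {73, 75}))  # tunnel (VLAN on cluster)
print(forwarding_decision("mixed", 76, {73, 75}))  # bridge (VLAN not on cluster)
```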
Bridge Forwarding
When bridge traffic forwarding is configured in a WLAN or downlink wired port profile, client traffic is forwarded directly out of the AP's uplink port(s) onto the access switching layer with an appropriate 802.1Q VLAN tag. To support bridge forwarding, the AP management and bridged user VLANs are extended from the access switching layer to the AP's uplink ports. Each bridged client is assigned a VLAN that is 802.1Q tagged out the AP's uplink port. As a recommended security best practice, no bridged clients should be assigned to the AP management VLAN.
An example of a bridge forwarding deployment is depicted below where the AP management VLAN (not shown) and bridged user VLANs 76 and 79 are extended from the access switching layer to a hospitality AP that services wired and wireless clients. In this example the WLAN client is assigned VLAN 76 while the wired client is assigned VLAN 79. The core / aggregation switch has IP interfaces and IP helper addresses defined for each VLAN and is the default gateway for each VLAN.

Bridge Forwarding Mode
Seamless Roaming
To provide the best possible experience for bridged clients and their applications, the AP management and user VLANs are extended between APs that establish common RF coverage areas within a building or floor. The AP management and bridged user VLANs are shared between the APs and are allocated a specific IP network based on the number of hosts each VLAN needs to support.
Clients roaming between APs sharing VLANs are able to maintain their VLAN assignment, IP addressing and default gateway after a roam. This often is referred to as a seamless roam as it is the least disruptive to applications. The roam can be a fast roam or slow roam depending on the WLAN profile configuration and capabilities of the client.
A seamless roam is depicted below, where bridged user VLAN 76 and its associated IP network (10.200.76.0/24) have been extended between all the APs within a building. In this example the client is able to maintain its VLAN 76 membership, IP addressing and default gateway after each roam.

Bridge Forwarding & Seamless Roaming
Bridge Forwarding Scaling
The total number of APs and bridged clients supported across shared management and user VLANs will also influence your VLAN and IP network design. Broadcast / multicast frames and packets are normal in IP networks and are used by both APs and clients to function. The higher the number of active hosts connected to a given VLAN, the higher the quantity and frequency of broadcast / multicast frames and packets flooded over the VLAN. As broadcast / multicast frames are flooded, they must be received and processed by all active hosts in the VLAN.
When bridge forwarding is deployed, HPE Aruba Networking has validated support for a maximum of 500 APs and 5,000 bridged clients across all shared management and user VLANs. The total number of APs in a shared management VLAN cannot exceed 500 and the total number of clients across all bridged user VLANs cannot exceed 5,000.
When scaling beyond 500 APs and 5,000 clients is required for a deployment within a building or campus, two design options are available:
- A cluster of Gateways can be deployed with centralized user VLANs, offering higher scaling and seamless mobility.
- Multiple instances of 500 APs and 5,000 clients can be strategically deployed where the AP management and user VLANs for each instance are layer 3 separated (i.e., implement separate broadcast domains).
If Gateways are not an option, with careful planning and design multiple instances of APs can be deployed where the AP management and user VLANs for each instance connect to separate IP networks, providing scale. Each instance of APs and clients is limited to a floor, building or co-located buildings as needed. There is no limit to how many instances of 500 APs and 5,000 clients you can deploy as long as each instance is layer 3 separated from the others. The VLAN IDs used by each instance can be common to simplify configuration and operations, but the IP networks for each instance must be unique.
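Assuming the validated limits above, the number of layer 3 separated instances needed for a given AP and client count can be estimated with a simple helper (the function name is illustrative):

```python
import math

MAX_APS_PER_INSTANCE = 500
MAX_CLIENTS_PER_INSTANCE = 5_000

def instances_needed(aps: int, clients: int) -> int:
    """Whichever limit is reached first drives the instance count."""
    return max(math.ceil(aps / MAX_APS_PER_INSTANCE),
               math.ceil(clients / MAX_CLIENTS_PER_INSTANCE))

print(instances_needed(1_200, 9_000))  # 3 instances (the AP limit dominates)
```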
The compromise for this design is that roaming between separate instances of APs, such as between buildings, requires a hard roam as bridged clients must be assigned new IP addressing and a default gateway (see Hard Roaming). As such, user expectations and application and client behavior need to be well understood before considering a hard roaming design. If hard roaming is not acceptable, a cluster of Gateways must be deployed.
A good understanding of the LAN architecture is also helpful when scaling as larger LANs will typically include natural layer 3 boundaries such as aggregation switching layers within buildings that prevent AP management and user VLANs from being extended. These layer 3 boundaries provide natural segmentation boundaries between each instance of APs and clients.
An example of a campus design implementing bridge forwarding is depicted below. In this example each building implements a dedicated layer 3 aggregation switching layer that connects to a routed core. Each building is completely layer 3 separated from neighboring buildings preventing AP management and user VLANs from being extended. Each building in this example implements a specific number of APs that use the same VLAN IDs but with unique IP networks. Each building can potentially scale to support a maximum of 500 APs or 5,000 clients (whichever limit is reached first).

Bridge Forwarding Scaling
Hard Roaming
When scaling without Gateways is required or the LAN architecture precludes VLANs and IP networks from being extended between APs across buildings or floors, a bridged forwarding implementation is still possible but with compromises to application and user experience.
There are some situations where it is not possible to extend VLANs and their IP networks between APs in larger deployments such as within a building or between buildings. The local area network (LAN) design may include intentional layer 3 boundaries within the distribution switching layer that prevent VLANs and their associated IP networks from being extended between access layer switches servicing buildings or floors. Access layer switches configured for routed access will also prevent VLANs and IP networks from being extended between wiring closets within a building or floor.
When a client device roams between APs separated by a layer 3 device, a hard roam is performed as the client’s broadcast domain membership changes. While the APs in each building or floor may implement the same management and user VLAN IDs, the associated IP networks will be unique for each. Clients roaming between APs separated across a layer 3 device will require new IP addressing and a default gateway to be assigned after the roam. While modern clients are able to obtain new IP addressing to accommodate the IP network change, the transition between IP networks will impact active applications as the source IP addresses of the clients will change after a hard roam.
A hard roam between APs deployed in separate buildings within a campus separated by layer 3 aggregation switching layers is depicted below. In this example, a common bridged user VLAN ID 76 is deployed in both buildings but has a different IP network assigned in each building:
- VLAN 76 / Building A – 10.200.76.0/24
- VLAN 76 / Building B – 10.201.76.0/24
Bridged clients roaming between APs within each building will have a seamless roaming experience while bridged clients roaming between buildings have a hard roaming experience. The roam can be a fast roam or a slow roam depending on the configuration of the WLAN profile and client capabilities.

Bridge Forwarding & Hard Roaming
Depending on your LAN architecture and environment, a hard roam may be required for clients moving between buildings within a campus, between floors within a multi-story building or between co-located buildings. This will be dependent on where the layer 3 boundaries reside within the LAN for each environment. For most deployments these boundaries will reside between buildings.
Before considering a hard roaming design, the following needs to be investigated and considered:
- User Experience – Do the users expect uninterrupted network connectivity when moving between buildings or floors?
- RF Design – Can the AP placement and cell design implement RF boundaries that minimize hard roaming points across layer 3 boundaries, providing the best possible user and application experience?
- Client Devices – Do you have any specialized or custom client devices deployed? These will need to be tested to validate that they can tolerate and support hard roaming. Modern Apple, Android and Microsoft operating systems will initiate the DHCP DORA process after each roam.
- Applications – What applications are you using, and can they tolerate hosts changing IP addresses? While some applications such as Outlook, Teams and Zoom can automatically recover after host re-addressing, others cannot.
Ultimately you will need to decide if your users, client devices, and applications can tolerate hard roaming before considering and implementing a hard roaming design. If a hard roaming design cannot be tolerated and seamless roaming is required, a design using Gateways and tunnel forwarding should be considered.
MAC Address Learning
When bridge forwarding is enabled, client traffic is forwarded out the AP's uplink ports on the assigned VLAN to the access switching layer. Each bridged client MAC address will be learned by all the layer 2 devices participating in the VLAN using normal layer 2 learning. Each bridged client's MAC address is initially learned from DHCP and ARP broadcast messages transmitted by each client during association, authentication and roaming. Each switch participating in the VLAN will either learn a bridged client's MAC address from a switchport connected to the AP where the client is attached or from its uplink / downlink port connecting to a peer switch.
An example of MAC learning for a bridge forwarding deployment is depicted below. All the layer 2 switches participating in VLAN 76 in this example will learn client 1’s MAC address upon client 1 transmitting a broadcast frame or packet after a successful association and authentication:
- SW-2 – Will directly learn Client 1's MAC address on port 1/1/1, which connects to the AP (where Client 1 is attached).
- SW-1 – Will learn Client 1's MAC address on port 1/1/20, which connects to SW-2 (layer 2 path to Client 1).
- SW-3 – Will learn Client 1's MAC address on port 1/1/52, which connects to SW-1 (layer 2 path to Client 1).

MAC address learning in a bridged network.
When a bridged client roams between APs, a MAC move will occur. Upon a successful roam, a frame or packet from the roaming client will trigger the upstream switches to update their layer 2 forwarding tables to reflect the new layer 2 path to the roamed client.
A MAC address move resulting from a roaming bridged client is depicted below. In this example client 1 has roamed from an AP connected to SW-2 to an AP connected to SW-3. Upon client 1 transmitting a broadcast frame or packet after the roam, all the switches participating in VLAN 76 will update their MAC address forwarding tables to reflect the new layer 2 path to client 1:
- SW-3 – Will re-learn Client 1's MAC address on port 1/1/2, which connects to the AP (where Client 1 is attached).
- SW-1 – Will re-learn Client 1's MAC address on port 1/1/2, which connects to SW-3 (new layer 2 path to Client 1).
- SW-2 – Will re-learn Client 1's MAC address on port 1/1/52 (new layer 2 path to Client 1).

MAC Address Move
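The MAC move above can be modeled as a toy layer 2 learning sketch (switch and port names are those from the diagrams; this is purely illustrative):

```python
# Forwarding tables before the roam: each switch knows its L2 path to Client 1.
mac_tables = {
    "SW-2": {"client1": "1/1/1"},    # direct to the AP
    "SW-1": {"client1": "1/1/20"},   # via SW-2
    "SW-3": {"client1": "1/1/52"},   # via SW-1
}

def mac_move(tables: dict, mac: str, updates: dict) -> None:
    """A broadcast from the roamed client re-points every switch's entry."""
    for switch, port in updates.items():
        tables[switch][mac] = port

# Client 1 roams to an AP on SW-3; its first broadcast triggers re-learning.
mac_move(mac_tables, "client1",
         {"SW-3": "1/1/2", "SW-1": "1/1/2", "SW-2": "1/1/52"})
print(mac_tables["SW-1"]["client1"])  # 1/1/2 (new layer 2 path via SW-3)
```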
Tunnel Forwarding
When tunnel traffic forwarding is configured in a WLAN or downlink wired port profile, client traffic is encapsulated in GRE by the APs and tunneled to the primary Gateway cluster. Client traffic forwarded within the GRE tunnels is tagged with the client's assigned VLAN. As covered in the clustering section, the role of each Gateway within the primary cluster determines which Gateway is responsible for transmitting and receiving traffic for each tunneled client.
With tunnel traffic forwarding, the user VLANs are centralized and reside within each cluster. Each tunneled profile terminates within a primary cluster and can optionally fail over to a secondary cluster. For each primary cluster, the Gateways' management and user VLANs are extended from each Gateway in the cluster to their respective core / aggregation switching layer. As a best practice, all the VLANs are 802.1Q tagged. Each Gateway within a cluster shares the same management VLAN, user VLANs and associated IP networks. Each tunneled client is either statically or dynamically assigned to a centralized user VLAN within its primary cluster.
An example of a tunnel forwarding deployment is depicted below where the tunneled user VLANs 73 and 75 are extended from the Gateway to the core / aggregation switching layer. In this example the WLAN client is assigned VLAN 73 while the wired client is assigned VLAN 75. The core / aggregation switch has IP interfaces and IP helper addresses defined for each VLAN and is the default gateway for each VLAN.

Tunnel Forwarding Mode
The above example uses a single Gateway to simplify the datapath of each tunneled client. When multiple Gateways are deployed within a cluster, each AP establishes IPsec and GRE tunnels to each cluster node. The role of each Gateway within the cluster determines which Gateway is responsible for anchoring each client’s traffic and which Gateway is responsible for forwarding broadcast / multicast traffic destined to clients attached to each AP. The Gateway role effectively determines which GRE tunnel the AP selects when forwarding traffic from a client and which tunnel is selected by the Gateway for unicast, broadcast and multicast return traffic.
The figure below expands on the previous example by adding a second Gateway to the cluster and includes each Gateway's role assignment. In the diagram below:

- Client 1 is dynamically assigned VLAN 73, and GW-A is assigned the UDG role.
- Client 2 is dynamically assigned VLAN 73, and GW-B is assigned the UDG role.
- GW-A is assigned the DDG role for the AP.

Tunnel Forwarding by Role
In the above example GW-A is assigned the UDG role for client 1 and is responsible for receiving all traffic transmitted by client 1 and transmitting all unicast traffic destined to client 1. GW-B is assigned the UDG role for client 2 and is responsible for receiving all traffic transmitted by client 2 and transmitting all unicast traffic destined to client 2. This traffic is encapsulated and forwarded in the respective GRE tunnels that terminate on GW-A or GW-B based on the UDG role assignment for each client.
GW-A is also assigned the DDG role for the AP and is responsible for transmitting all broadcast and multicast traffic that is flooded on VLAN 73. This traffic is encapsulated and forwarded in the IPsec tunnel that terminates on GW-A.
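The role-based tunnel selection described above can be sketched as follows. This is a conceptual illustration only; the function and the gateway names are hypothetical, not an AOS API:

```python
def select_gre_tunnel(traffic: str, client_udg: str, ap_ddg: str) -> str:
    # Unicast traffic to or from a client uses the tunnel terminating on
    # the client's UDG Gateway; broadcast / multicast traffic flooded to
    # the clients behind an AP uses the tunnel to the AP's DDG Gateway.
    if traffic == "unicast":
        return client_udg
    return ap_ddg

# Client 2's unicast traffic is forwarded via its UDG (GW-B), while
# broadcast flooded on the VLAN reaches the AP via its DDG (GW-A).
assert select_gre_tunnel("unicast", "GW-B", "GW-A") == "GW-B"
assert select_gre_tunnel("broadcast", "GW-B", "GW-A") == "GW-A"
```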
Roaming
Each tunneled WLAN terminates in a primary cluster where the user VLANs are centralized. Each tunneled client is either statically or dynamically assigned a VLAN which is present on all the Gateways within the primary cluster. For each client, one Gateway in the cluster is assigned the UDG role, which determines the Gateway to which the client's traffic is anchored. Because the bucketmap is published per cluster, each client maintains its UDG assignment as it roams. Each client's traffic is always anchored to the same Gateway within a cluster regardless of which AP the client roams to.
When a client roams between APs that tunnel to the same primary cluster, the client is able to maintain its VLAN assignment, IP addressing and default gateway after each roam providing a seamless roaming experience. Clients can perform a slow roam or fast roam depending on the WLAN profile configuration and capabilities of the client. Seamless roaming can be achieved between APs in the same Central configuration group (same profile) or between APs in separate configuration groups (duplicated WLAN profiles). The only requirement for a seamless roam is that the primary cluster must be the same between the APs.
A seamless roam is depicted below where user VLAN 73 and its associated IP network (10.200.73.0/24) are centralized within a primary cluster consisting of four Gateways. In this example, APs in each building and floor connect to AP management VLANs implementing separate IP networks. Using the published bucketmap for the cluster, GW-B is assigned the UDG role for the client, which is maintained after each roam. Regardless of which AP the client is connected to, the client maintains its VLAN membership, IP address, and default gateway.

Tunnel Forwarding & Seamless Roaming
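The per-cluster bucketmap behavior can be illustrated with a short sketch. The bucket count, MAC-based hash, and round-robin Gateway assignment below are assumptions for illustration only; the actual AOS hashing and bucketmap distribution are internal to the platform:

```python
import hashlib

def bucket_for_client(client_mac: str, num_buckets: int = 256) -> int:
    # Hash the client MAC into one of the cluster's buckets.
    # (Illustrative hash -- the real AOS algorithm is not published.)
    digest = hashlib.sha256(client_mac.lower().encode()).digest()
    return digest[0] % num_buckets

def udg_for_client(client_mac: str, bucketmap: list) -> str:
    # Every AP tunneling to the same cluster holds the same published
    # bucketmap, so a client resolves to the same UDG on any AP.
    return bucketmap[bucket_for_client(client_mac)]

# A cluster of four Gateways publishes one bucketmap (assumed values).
gateways = ["GW-A", "GW-B", "GW-C", "GW-D"]
bucketmap = [gateways[i % len(gateways)] for i in range(256)]

# The same client keeps the same UDG before and after a roam, because
# the lookup depends only on the MAC and the shared bucketmap.
udg_before_roam = udg_for_client("aa:bb:cc:dd:ee:01", bucketmap)
udg_after_roam = udg_for_client("aa:bb:cc:dd:ee:01", bucketmap)
assert udg_before_roam == udg_after_roam
```

This is why a client's traffic remains anchored to one Gateway within the cluster no matter which AP it roams to.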
For some larger campus deployments, tunneled WLANs might be distributed between primary clusters located in separate datacenters to distribute traffic workloads. The IP networks for each user VLAN are unique in each cluster. Using configuration groups in Central, APs in each building are strategically distributed between the primary clusters to evenly distribute user traffic between the datacenters. For availability and failover, the alternate cluster is assigned as the secondary cluster.
As with bridged forwarding, clients roaming between APs serviced by a separate cluster will perform a hard roam as new IP addressing will be required. While the VLANs will be common, the IP networks for each user VLAN in each datacenter will be unique. Clients roaming between APs tunneling to separate primary clusters will require a new IP address and default gateway after each roam.
MAC Address Learning
When tunnel forwarding is enabled, the APs tunnel the client's traffic to the Gateway in the cluster that is assigned the UDG role. Each tunneled client's MAC address is learned by both the core / aggregation switch and the active Gateways within the cluster:

- Core / Aggregation Switch – Learns each client's MAC address from the physical or logical aggregated port that connects to the UDG Gateway for that client.
- Gateways – Learn each client's MAC address either from the GRE tunnel (UDG role) or from the physical or logical aggregated uplink port from the core / aggregation switch.
Each tunneled client's MAC address is anchored to the Gateway assuming the UDG role for that client. The layer 2 path for each tunneled client remains bound to the physical or logical port of its assigned UDG Gateway regardless of which AP the client roams to. No MAC address move occurs between the Gateways and the core / aggregation switching layer after a roam. A client's MAC address will only move between Gateways as a result of a UDG -> S-UDG transition.
An example of MAC learning for a tunnel forwarding deployment is depicted below. In this example, client 1 is dynamically assigned VLAN 73 and GW-A is assigned the UDG role. During normal operation, SW-GW-AGG learns client 1's MAC address on port 1/1/1 that connects to GW-A. GW-B learns client 1's MAC address on port 0/0/0 that connects to SW-GW-AGG.

MAC Address Learning
Mixed Forwarding
When mixed traffic forwarding is configured in a WLAN or downlink wired port profile, the client traffic is either forwarded directly out of the AP's uplink port(s) onto the access switching layer with an appropriate 802.1Q VLAN tag, or encapsulated in GRE by the AP and tunneled to the primary Gateway cluster:

- Bridged – The AP bridges the traffic when a client device is assigned a VLAN ID or name that is not present in the primary or secondary Gateway cluster.
- Tunneled – The AP tunnels the traffic when a client device is assigned a VLAN ID or name that is configured in the primary or secondary Gateway cluster.
When a profile configured for mixed forwarding is created and a primary cluster is selected, the VLANs present in the primary and secondary cluster are learned by the APs and are tagged in the GRE tunnels. The APs use this knowledge to determine when to bridge or tunnel clients when a VLAN is assigned.
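The bridge-or-tunnel decision can be sketched as follows. The function name and VLAN values are hypothetical; the AP performs this lookup internally using the VLAN list it learns from the cluster:

```python
def forwarding_mode(assigned_vlan: int, cluster_vlans: set) -> str:
    # Mixed forwarding: tunnel when the assigned VLAN exists in the
    # primary or secondary cluster, otherwise bridge locally at the AP.
    return "tunnel" if assigned_vlan in cluster_vlans else "bridge"

# VLANs learned from the primary and secondary clusters (assumed values).
cluster_vlans = {73, 74, 75}

assert forwarding_mode(73, cluster_vlans) == "tunnel"  # present in cluster
assert forwarding_mode(76, cluster_vlans) == "bridge"  # AP-local VLAN
```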
For branch deployments using Branch Gateways, the AP management and bridged user VLANs are typically extended from the Branch Gateways to the APs and are common to both. The Branch Gateways provide DHCP services and routing for each VLAN within the branch. When no layer 3 separation exists between the APs and the Gateways in a branch deployment, profiles implementing mixed traffic forwarding will always tunnel the clients. If bridge traffic forwarding is required and layer 3 separation between the APs and the Branch Gateways is not possible, separate profiles implementing bridge traffic forwarding must be implemented.
To support mixed forwarding, the AP management and bridged user VLANs are extended from the access switching layer to the AP uplink port(s). It is recommended that dedicated VLAN IDs be used for bridged and tunneled clients, and the VLANs must not overlap. As a recommended best practice, only the AP management and bridged user VLANs should be extended to your APs.
An example of a typical mixed WLAN deployment is depicted below where dedicated VLANs are implemented for bridged and tunneled clients. In this example, the untagged AP management VLAN (not shown) and the 802.1Q tagged bridged user VLAN 76 are extended from the access switching layer to an AP. VLAN 73 is centralized within a cluster and is 802.1Q tagged from the Gateway to the core / aggregation switching layer. Client 1 is dynamically assigned VLAN 73 and is tunneled to the primary cluster, while client 2 is dynamically assigned VLAN 76 and is locally bridged by the AP.

Mixed Forwarding Mode
Best Practices & Considerations
AOS 10 APs can support any combination of WLAN and downlink wired port profiles implementing bridge, tunnel, or mixed forwarding modes. When profiles of different forwarding types are serviced by APs, the following considerations and best practices should be followed:
- Implement dedicated VLAN IDs for bridged and tunneled clients. An AP can only bridge or tunnel clients for a given VLAN ID and cannot do both simultaneously.
- Prune all tunneled VLANs from the AP uplink ports at the access switching layer. A tunneled VLAN must not be extended to the uplink ports on the APs. As a recommended best practice, only the AP management and bridged user VLANs should be extended to the APs.
- Avoid using VLAN 1 whenever possible. VLAN 1 is the default management VLAN for APs and is also present on the Gateways. Assigning clients to VLAN 1 may have unintended consequences such as bridging clients onto the AP's native VLAN or blackholing tunneled clients.
- If profiles using bridge traffic forwarding are implemented, it is recommended that you change the AP's default management VLAN ID to match the native VLAN ID configured on your access layer switches.
- If the AP's default management VLAN 1 is retained, avoid assigning tunneled clients to a VLAN in the primary cluster that indirectly maps to the AP's untagged management VLAN. For example, if your APs are managed on untagged VLAN 70 which is terminated on a Branch Gateway, you must not assign tunneled clients to VLAN 70.
- If implementing mixed forwarding with Branch Gateways, bridged user VLANs must be layer 3 separated from the Gateways. If no layer 3 separation is implemented, all clients will be tunneled because all the VLANs will be present within the primary cluster. If layer 3 separation cannot be implemented, a dedicated profile using bridge forwarding must be implemented.
6 - Access Point Port Usage
AP ports can be used in different ways depending on the AP model and deployment type. Using wired port profiles, AP ports can be configured with an uplink or downlink persona. The persona of an AP’s Ethernet port determines how the port is used, where the port connects, and what type of traffic is carried.
- Uplink Ports – Are used to connect APs to the access switching layer. Uplink ports support the AP management VLAN, carry AP management traffic, establish tunnels to Gateways, and forward bridged client traffic.
- Downlink Ports – Are used to connect wired client devices to the APs, or APs operating as Mesh Points to downstream switches. Similar to WLAN profiles, downlink ports can bridge or tunnel client traffic, support authentication, and apply policies via user roles.
HPE Aruba Networking Central and the default configuration of the APs include default profiles that configure specific ports as uplinks or downlinks depending on the number of physical Ethernet ports that are installed on the AP and the intended use of the AP. With a few exceptions, uplink ports are used to connect APs to an access switching layer and by default, all models of HPE Aruba Networking APs will implement Ethernet 0/0 as an uplink port.
AP models equipped with dual Ethernet ports may implement both Ethernet 0/0 and Ethernet 0/1 as uplink ports, permitting both ports to be connected to the access switching layer in an active / active or active / standby configuration. Hospitality, remote, and PoE-providing AP models (H, R, or P variants) implement Ethernet 0/0 as an uplink port with all other ports configured as downlink ports.
An example of uplink and downlink port usage for various AP types is depicted below. In this example all APs connect to the access switching layer using their Ethernet 0/0 ports which have a default or user defined uplink wired port profile assigned. All APs will obtain a management IP address on their configured management VLAN, communicate with Central and forward client traffic using their uplink port.
Wired client devices connect to downlink ports, which vary by platform. Each wired client's traffic is either locally bridged or tunneled by the AP depending on the traffic forwarding configuration within the assigned downlink wired port profile. Wired client devices can optionally be MAC or 802.1X authenticated by a RADIUS server or the Cloud Auth service.

Uplink and Downlink Ports
Uplink ports
Uplink ports are used to connect APs to the access switching layer. Depending on the AP model, an AP can be connected using a single uplink port or dual uplink ports operating in an active / active or active / standby configuration. Both APs and Central include a default uplink wired port profile named default_wired_port_profile that is assigned to AP uplink ports by default. The default port assignment will vary based on AP series and model.
AP Family | AP Model | Default Assignment |
---|---|---|
300 Series | AP-303, AP-303H, AP-303P, AP-304, AP-305 | Ethernet 0/0 |
310 Series | AP-314, AP-315, AP-318 | Ethernet 0/0 |
320 Series | AP-324, AP-325 | Ethernet 0/0 & Ethernet 0/1 |
330 Series | AP-334, AP-335 | Ethernet 0/0 & Ethernet 0/1 |
340 Series | AP-344, AP-345 | Ethernet 0/0 & Ethernet 0/1 |
360 Series | AP-365, AP-367 | Ethernet 0/0 |
370 Series | AP-374, AP-375, AP-375EX, AP-375ATEX, AP-377, AP-377EX | Ethernet 0/0 & Ethernet 0/1 |
380 Series | AP-387 | Ethernet 0/0 |
500 Series | AP-503H, AP-504, AP-505, AP-505H | Ethernet 0/0 |
503 Series | AP-503, AP-503R | Ethernet 0/0 |
510 Series | AP-514, AP-515, AP-518 | Ethernet 0/0 & Ethernet 0/1 |
530 Series | AP-534, AP-535 | Ethernet 0/0 & Ethernet 0/1 |
550 Series | AP-555 | Ethernet 0/0 & Ethernet 0/1 |
560 Series | AP-565, AP-565EX, AP-567, AP-567EX | Ethernet 0/0 |
570 Series | AP-574, AP-575, AP-575EX, AP-577, AP-577EX | Ethernet 0/0 & Ethernet 0/1 |
580 Series | AP-584, AP-585, AP-585EX, AP-587, AP-587EX | Ethernet 0/0 & Ethernet 0/1 |
605R Series | AP-605R | Ethernet 0/0 |
610 Series | AP-615 | Ethernet 0/0 |
630 Series | AP-634, AP-635 | Ethernet 0/0 & Ethernet 0/1 |
650 Series | AP-654, AP-655 | Ethernet 0/0 & Ethernet 0/1 |
Defaults for uplink ports
The default uplink wired port profile default_wired_port_profile is present on all HPE Aruba Networking APs in a factory defaulted state as well as in each configuration group in Central. This default assignment permits both un-provisioned and provisioned APs to be connected to the access switching layer using a single uplink or dual uplinks without any additional configuration being required. When connected using dual uplink ports, a high-availability bonded link is automatically created by the APs that operates in either active / active configuration if LACP is detected or active / standby if LACP is absent.
APs using the default uplink wired port profile implement untagged VLAN 1 for management by default and require a dynamic host configuration protocol (DHCP) server to service the VLAN for host addressing. To successfully discover and communicate with Central, the DHCP server must provide a valid IPv4 address, subnet mask, default gateway, and one or more domain name servers. Internally, a switched virtual interface (SVI) with a DHCP client is bound to VLAN 1.
The default configuration of the uplink wired port profile will:
- Configure the port as a trunk
- Configure VLAN 1 as the native VLAN
- Permit all VLANs (1-4094)
- Enable port-bonding
With the default uplink wired port profile, APs can support both bridged and/or tunneled clients with no modification being required. The AP’s native VLAN is set to 1 and all other VLANs are permitted on the uplink ports. All AP management traffic will be forwarded on VLAN 1 untagged while bridged client traffic will be forwarded out the assigned VLAN with a corresponding VLAN tag.
The default wired port profile:
wired-port-profile default_wired_port_profile
switchport-mode trunk
allowed-vlan all
native-vlan 1
port-bonding
no shutdown
access-rule-name default_wired_port_profile
speed auto
duplex full
no poe
type employee
captive-portal disable
no dot1x
Default wired port profile assignments:
Port Profile Assignments
------------------------
Port Profile Name
---- ------------
0 default_wired_port_profile
1 default_wired_port_profile
2 wired-SetMeUp
3 wired-SetMeUp
4 wired-SetMeUp
USB wired-SetMeUp
AP deployments exclusively using tunnel forwarding only require an untagged management VLAN to be configured on the access switching layer. The switchports that each AP connects to are configured for access mode with the desired AP management VLAN ID assigned. Both the AP and the access layer switches forward all Ethernet frames untagged. The AP implements VLAN 1 while the peer access layer switch implements the configured access VLAN. This is identical to how Campus APs operated in AOS 6 and AOS 8.
An example of an AP implementing the default uplink wired port profile connected to an access switchport is depicted below. In this example the AP is connected to port 1/1/1 on an access layer switch that is configured with the access VLAN 70. The AP in this example implements VLAN 1 for management which indirectly maps to VLAN 70 on the access layer switch.

AP Connected to an Access Switchport
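For reference, a matching switchport configuration on an AOS-CX access layer switch might look as follows. This is a minimal sketch assuming the port 1/1/1 and access VLAN 70 from the example above:

```
interface 1/1/1
no shutdown
no routing
vlan access 70
```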
When WLAN and/or wired port profiles are configured with bridge or mixed forwarding, the AP management VLAN and one or more dedicated bridged user VLANs are extended from the access switching layer to the APs. The switchports that each AP connects to are configured for trunk mode with a native VLAN and 802.1Q tagged bridged VLANs assigned. As a recommended best practice, only the untagged management VLAN and the 802.1Q tagged bridged user VLANs should be extended to the APs. AP management traffic is forwarded untagged while bridged user traffic is forwarded 802.1Q tagged.
An example of an AP implementing the default uplink wired port profile connected to a trunk switchport is depicted below. In this example the AP is connected to port 1/1/1 on an access layer switch that is configured with the native VLAN 70 and allowed VLANs 70,76-79. The AP in this example implements VLAN 1 for management which indirectly maps to native VLAN 70 on the access layer switch. All bridged clients are assigned to VLAN IDs 76–79 which are 802.1Q tagged between the AP and the peer access layer switch.

AP Connected to a Trunk Switchport
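For reference, a matching trunk switchport configuration on an AOS-CX access layer switch might look as follows. This is a minimal sketch assuming the port 1/1/1, native VLAN 70, and bridged user VLANs 76-79 from the example above:

```
interface 1/1/1
no shutdown
no routing
vlan trunk native 70
vlan trunk allowed 70,76-79
```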
Management VLAN
By default, access points implement native VLAN 1 for management, which is untagged out the AP's uplink ports. APs utilize untagged VLAN 1 for IP addressing and communication with Central without any further configuration in Central. To communicate with Central, APs require Internet access and a DHCP server that provides an IP address, default gateway, and one or more name server IP addresses.
The default management VLAN for an AP can be seen by issuing the show ip interface brief command in the console. The br0 label indicates that the default VLAN 1 is being used by the AP for management; when a different VLAN is assigned as the management VLAN, the VLAN ID is appended to the br0 interface.

AP Default Management VLAN
AOS 10.5 introduces the option to change the management VLAN ID configuration of APs to a new value for deployments that require such a configuration. APs can be easily re-configured to use a new untagged VLAN for management that matches the management VLAN ID configured in the access switching layer or may implement an 802.1Q tagged management VLAN if required.
Changing the AP’s management VLAN to a different value requires a new uplink wired port profile to be configured and assigned to the AP’s Ethernet 0/0 and optionally Ethernet 0/1 uplink ports. A new uplink wired port profile is recommended to preserve the configuration in the default uplink port profile. This permits the default profile to be reassigned to the AP’s uplink ports in the event of a misconfiguration.
The new uplink wired port profile includes the Use AP Management VLAN as Native VLAN option that must be enabled for the management VLAN to be modified. The new profile can be configured for access or trunk mode depending on whether bridged user VLANs are required. When trunk mode is configured, the Native VLAN and Allowed VLANs must be configured, and the AP's management VLAN must be included in the Allowed VLAN list. The configuration is similar to how a trunk port is configured on a typical Ethernet switch.
The topic Configuring Wired Port Profiles on APs covers the configuration of wired port profiles for access points running AOS 10.
Example configuration of an access wired port profile applied to an AP’s configuration:
wired-port-profile uplink_profile_access
switchport-mode access
allowed-vlan all
native-vlan ap-ip-vlan
port-bonding
no shutdown
access-rule-name uplink_profile_access
speed auto
duplex auto
no poe
type employee
captive-portal disable
no dot1x
!
enet0-port-profile uplink_profile_access
enet1-port-profile uplink_profile_access
Example configuration of a trunk wired port profile applied to an AP’s configuration:
wired-port-profile uplink_profile_trunk
switchport-mode trunk
allowed-vlan all
native-vlan ap-ip-vlan
port-bonding
no shutdown
access-rule-name uplink_profile_trunk
speed auto
duplex auto
no poe
type employee
captive-portal disable
no dot1x
!
enet0-port-profile uplink_profile_trunk
enet1-port-profile uplink_profile_trunk
Once the uplink profile has been saved and applied, the APs management VLAN ID can then be changed under System > VLAN configuration. When the AP Management VLAN is changed, the Customize VLANs of Uplink Ports option will automatically change to Native VLAN Only. The APs will continue to use the default VLAN 1 for management until a new management VLAN ID is specified and saved.

AP Management VLAN
The change can be verified by issuing the show ip interface brief command on the AP. With the management VLAN changed to VLAN 71, the output now shows the br0.71 interface with an IPv4 address and network mask assigned. The br0.71 label indicates that VLAN 71 is now being used by the AP for management. The management VLAN in this example is untagged from the AP as VLAN 71 is configured as the Native VLAN in the uplink wired port profile.

AP New Management VLAN
VLAN enforcement
The uplink wired port profile is used to configure the operation of the uplink ports, which includes the VLAN configuration. By default, the AP uplink ports are assigned the default_wired_port_profile, which configures the native VLAN as 1 and accepts traffic from all VLANs.
While the ability to configure and apply a new uplink port profile with a more restrictive VLAN configuration has been supported for some time, this option is typically not needed because we recommend that VLANs be pruned at the access switching layer. As a recommended best practice, only the AP management and bridged user VLANs should be extended to the APs.
Some customers may not wish to prune VLANs at the access layer and instead extend all VLANs to the AP. By default, APs automatically discover VLANs based on traffic received on their uplink ports. If all VLANs are extended to the APs, the APs will automatically learn VLAN IDs and MAC addresses as flooded frames and packets are received on their uplink ports. If tunneled VLANs are also extended to the APs, MAC flapping may occur as MAC addresses can be learned on two traffic paths.
If VLANs cannot be pruned at the access switching layer, VLAN enforcement can be enabled on the APs to restrict which VLANs the APs accept. VLAN enforcement requires a new trunk uplink port profile to be configured and applied to the APs that includes the Native VLAN and a restrictive Allowed VLAN list. The Allowed VLAN list must only include the AP management VLAN and bridged user VLANs. All other VLANs must be excluded.
An example of an uplink wired port profile configured to only accept traffic from a specific range of VLANs is depicted below. In this example the APs management VLAN is 71 and the bridged user VLANs are 76-79. The Allowed VLAN list in this example includes the VLANs 71, 76-79:
wired-port-profile uplink_profile_trunk
switchport-mode trunk
allowed-vlan 71,76-79
native-vlan ap-ip-vlan
port-bonding
no shutdown
access-rule-name uplink_profile_trunk
speed auto
duplex auto
no poe
type employee
captive-portal disable
no dot1x
!
enet0-port-profile uplink_profile_trunk
enet1-port-profile uplink_profile_trunk
Once the new uplink wired port profile has been configured and applied to the uplink ports, VLAN enforcement can be enabled under System > VLAN within the configuration group. VLAN enforcement is enabled by setting the Customize VLANs of Uplink Ports option to All VLAN Settings. Once saved, the APs will only accept traffic from VLANs you configured in the Allowed VLAN list within the uplink wired port profile.
VLAN enforcement configuration within a configuration group is depicted below. The APs in this example have also been re-configured to use VLAN 71 for management which is configured as the Native VLAN in the above wired port profile. The management VLAN has been included to highlight that the management VLAN must be included in the Allowed VLAN list within the modified uplink profile.

VLAN Enforcement
Dual uplinks
HPE Aruba Networking APs equipped with a second Ethernet port can optionally be dual connected to an access switching layer. If LACP is implemented, traffic can also be load-balanced between the uplink ports. Each AP's uplink ports can be strategically distributed between switchports in separate I/O modules within a chassis or between members of a stack. APs may also be connected to separate chassis or stacks placed in separate wiring closets if VLANs and broadcast domains are common to both uplink ports. Dual uplinks allow APs to maintain network connectivity to the access switching layer in the event of an I/O module, stack member, or wiring closet failure.
APs can be connected using dual uplinks operating in an active / active or active / standby configuration without any additional configuration being required in Central. The default uplink wired port profile permits port-bonding by default and will place the APs Ethernet 0/0 and Ethernet 0/1 ports into either an active / active or active / standby state:
- Active / Active – If LACPDUs from the same LACP group are received on both the AP's Ethernet 0/0 and Ethernet 0/1 ports.
- Active / Standby – If no LACPDUs are received on either of the AP's Ethernet 0/0 and Ethernet 0/1 ports.
Active / standby
With an active / standby dual-uplink deployment, both the Ethernet 0/0 and Ethernet 0/1 ports are connected to the access switching layer. During normal operation the APs Ethernet 0/0 uplink port is used for AP management and traffic forwarding while the APs Ethernet 0/1 uplink port is in a standby state and will not transmit or receive management or user traffic. The APs Ethernet 0/1 port will only become active if the link on the Ethernet 0/0 uplink port is lost.

Active-Standby Failover
The primary LAN requirement to support APs using an active / standby uplink configuration is that the VLANs and associated IP networks (broadcast domains) must be common to both AP uplink ports. APs implementing active / standby uplinks do not support layer 3 failover and cannot be connected to switchports implementing separate VLAN IDs or broadcast domains. The switchport configuration and broadcast domains for both uplink ports must be identical for failover to work. If the link to the Ethernet 0/0 interface is lost, the APs will transition their management IP interface, orchestrated tunnels, and bridged client traffic to their Ethernet 0/1 link. From the access switching layer perspective, the APs management IP address, MAC address and all bridged clients MAC addresses will move.
For most active / standby deployments, each AP will be connected to a common access layer switch or stack where the AP uplink ports are distributed between I/O modules within a chassis or members of a stack. This permits the APs to continue operation in the event that an I/O module or stack member fails.
An example of a typical active / standby deployment using a stack of CX switches is depicted below. In this example the APs Ethernet 0/0 and Ethernet 0/1 ports implement the default uplink wired port profile where each uplink port connects to a separate stack member within a VSF stack:
- Ethernet 0/0 – The active uplink port is connected to switchport 1/1/10 (first stack member)
- Ethernet 0/1 – The standby uplink port is connected to switchport 2/1/10 (second stack member)
Within the VSF stack, both switchports are configured as trunks with the same Native VLAN and Allowed VLANs configured. The AP in this example will implement untagged VLAN 71 for management and 802.1Q tagged VLANs 76-79 to service bridged clients.

Illustration of a switch stack and AP setup for active-standby failover within a single closet.
interface 1/1/10
no shutdown
description [BLD10-FL1-AP-1-0/0]
no routing
vlan trunk native 71
vlan trunk allowed 71,76-79
...
interface 2/1/10
no shutdown
description [BLD10-FL1-AP-1-0/1]
no routing
vlan trunk native 71
vlan trunk allowed 71,76-79
If additional redundancy is required, APs implementing active / standby uplinks can be connected to separate switches or stacks located in the same wiring closet or separate wiring closets. This permits additional redundancy in the event of a power failure. Both deployments are supported as long as the same VLAN IDs and broadcast domains are present on both uplink ports. Connecting APs to switchports using different VLAN IDs or broadcast domains is not supported.
An example of a typical active / standby deployment using separate stacks of CX switches is depicted below. In this example the AP's Ethernet 0/0 and Ethernet 0/1 ports implement the default uplink wired port profile where each uplink port connects to a member of a separate VSF stack:
- Ethernet 0/0 – The active uplink port is connected to switchport 1/1/10 (first VSF stack)
- Ethernet 0/1 – The standby uplink port is connected to switchport 1/1/10 (second VSF stack)
Within each VSF stack, the switchports are configured as trunks with the same Native VLAN and Allowed VLANs configured. The AP in this example will implement untagged VLAN 71 for management and 802.1Q tagged VLANs 76-79 to service bridged clients. VLANs 71,76-79 in this example are extended between both VSF stacks.

Illustration of multiple switches or switch stacks and AP setup for active-standby failover across switches or closets.
interface 1/1/10
no shutdown
description [BLD10-FL1-AP-1-0/0]
no routing
vlan trunk native 71
vlan trunk allowed 71,76-79
interface 1/1/10
no shutdown
description [BLD10-FL1-AP-1-0/1]
no routing
vlan trunk native 71
vlan trunk allowed 71,76-79
Active / active
With an active / active dual-uplink deployment, both the Ethernet 0/0 and Ethernet 0/1 ports are connected to a common access layer switch or stack using Link Aggregation Control Protocol (LACP). During normal operation, both ports are active and, using hashing algorithms, carry both management and user traffic. If either link or path fails, management and user traffic automatically fail over to the remaining active link.

Active-active load sharing
An active / active configuration requires that both AP uplink ports be connected to peer switchports in the same LACP link aggregation group. The LACP bond will not establish if the uplink ports are connected to switchports configured in separate LACP groups. Note that HPE Aruba Networking switches will detect this mismatch condition and place one of the switchports into an LACP blocking state. Additionally, for the LACP bond to become active, all AP uplinks and peer switchports in the bond must negotiate at the same speed. If one link in the bond negotiates at a slower speed than the other, the LACP bond will not establish.
An active / active uplink deployment using LACP requires each AP to be connected to a common access layer switch. This can be a chassis, a stack, or a logical switch implementing virtualization technology that permits LACP links to be distributed between two physical switches. The AP uplink ports are distributed between I/O modules within a chassis, members of a stack, or the logical switches.
An example of a typical active / active deployment using a stack of CX switches is depicted below. In this example the AP's Ethernet 0/0 and Ethernet 0/1 ports implement the default uplink wired port profile, and each uplink port connects to a separate member within a VSF stack:
- Ethernet 0/0 – Is connected to switchport 1/1/10 in LACP LAG group 110 (first stack member)
- Ethernet 0/1 – Is connected to switchport 2/1/10 in LACP LAG group 110 (second stack member)

Illustration of an AP using an active / active connection to a switch stack.
interface 1/1/10
no shutdown
description [BLD10-FL1-AP-1-0/0]
lag 110
...
interface 2/1/10
no shutdown
description [BLD10-FL1-AP-1-0/1]
lag 110
...
interface lag 110
no shutdown
description BLD10-FL1-AP1
no routing
vlan trunk native vlan 71
vlan trunk allowed 71,76-79
lacp mode active
During normal operation, traffic transmitted by the AP to the access switching layer is hashed and distributed across both of the AP's Ethernet 0/0 and Ethernet 0/1 ports. This includes AP management, tunneled user traffic, and bridged user traffic. The fields that APs use to hash egress traffic depend on the traffic type and the headers that are available:
- Layer 2 Frames – APs will hash egress traffic across both uplinks based on source MAC / destination MAC.
- Layer 3 Packets – APs will hash egress traffic across both uplinks based on source MAC / destination MAC and source IP / destination IP.
For tunneled user traffic to a primary cluster consisting of two or more cluster nodes, multiple layers of traffic distribution will occur. The IPsec and GRE tunnels are distributed between the AP's uplink ports based on layer 2 and layer 3 headers, while tunneled clients are distributed between GRE tunnels based on each tunneled client's bucketmap assignment:
- GRE Tunnels – APs will hash GRE tunnels based on source MAC / destination MAC and source IP / destination IP.
- Tunneled Clients – Traffic for each tunneled client is anchored to a specific cluster node based on bucketmap assignment.
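The bucketmap behavior can be sketched conceptually. This is an illustrative model only, not the actual AOS 10 algorithm — the real bucket count, hash function, and map distribution are internal to the platform:

```python
import hashlib

def build_bucketmap(nodes, num_buckets=256):
    """Assign each bucket to a cluster node (round-robin for illustration)."""
    return [nodes[i % len(nodes)] for i in range(num_buckets)]

def anchor_node(client_mac, bucketmap):
    """Hash the client MAC into a bucket; the bucket fixes the anchor node."""
    digest = hashlib.sha256(client_mac.lower().encode()).digest()
    return bucketmap[digest[0] % len(bucketmap)]

nodes = ["GW-1", "GW-2"]
bmap = build_bucketmap(nodes)

# The anchor is a pure function of the MAC and the published bucketmap, so
# every AP and every uplink path resolves a client to the same cluster node.
assert anchor_node("aa:bb:cc:dd:ee:01", bmap) == anchor_node("AA:BB:CC:DD:EE:01", bmap)
```

Because the GRE tunnels themselves are hashed across the AP's two uplinks independently of the bucketmap, uplink selection and cluster-node anchoring remain decoupled.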
PoE redundancy
When utilizing dual uplinks, APs may receive power from the Ethernet 0/0 and/or Ethernet 0/1 uplink ports. Depending on the AP series and model, APs may either simultaneously source power from both uplink ports (sharing) or source power from one port at a time (failover). With the exception of the 510 series, which can only source power from Ethernet 0/0, each AP model supports either sharing or failover.
PoE standards and failover options for dual Ethernet equipped AP models:
AP Series | PoE Standards | PoE Redundancy |
---|---|---|
320 Series | 802.3af, 802.3at | Failover |
330 Series | 802.3af, 802.3at | Failover |
340 Series | 802.3af, 802.3at | Failover |
510 Series | 802.3af, 802.3at, 802.3bt | No |
530 Series | 802.3at, 802.3bt | Sharing |
550 Series | 802.3at, 802.3bt | Sharing |
570 Series | 802.3at, 802.3bt | Sharing |
630 Series | 802.3at, 802.3bt | Failover |
650 Series | 802.3af, 802.3at, 802.3bt | Sharing |
The AP-530, AP-550, and AP-570 series balance the power draw across both uplink ports and generally draw a 40% / 60% split per port in the best case. The AP-650 series draws power from Ethernet 0/0 first and then from Ethernet 0/1 once Ethernet 0/0 is maxed out. The maximum power budget on the AP-650 series is therefore the sum of both ports, whereas on the AP-530, AP-550, and AP-570 series it is the power of the lower port divided by 0.6.
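As a rough worked example of those budget rules, assume an illustrative 25.5 W available per 802.3at port (actual per-port and per-model budgets are listed in the AP datasheets):

```python
def budget_650_series(port0_w, port1_w):
    # AP-650 series: Ethernet 0/0 is drawn to its maximum first, then
    # Ethernet 0/1, so the total budget is the sum of both ports.
    return port0_w + port1_w

def budget_530_550_570(port0_w, port1_w):
    # AP-530/550/570 series: draw is balanced roughly 40%/60% best case,
    # so the lower-powered port caps the total at (lowest port / 0.6).
    return min(port0_w, port1_w) / 0.6

at_power = 25.5  # illustrative watts per 802.3at port
print(budget_650_series(at_power, at_power))   # 51.0 W total
print(budget_530_550_570(at_power, at_power))  # ~42.5 W total
```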
Downlink ports
Downlink ports are used to connect wired client devices to APs but may also be used to connect APs operating as Mesh Points to clients or downstream switches when Mesh bridging is deployed. The number of ports that can be implemented as downlinks will vary based on the number of physical Ethernet ports available on the AP and the number of Ethernet ports that are employed as uplinks.
When downlinks are implemented to connect wired client devices, user traffic can be bridged or tunneled based on the traffic forwarding mode configured in the profile. Client devices can also be optionally MAC, 802.1X or Captive Portal authenticated with static or dynamic VLAN and user role assignments.
Defaults for downlink ports
The default downlink wired port profile wired-SetMeUp is present on all HPE Aruba Networking APs in a factory defaulted state but is absent in Central. The default downlink profile is assigned to non-uplink ports by default on Hospitality APs.
wired-port-profile wired-SetMeUp
no shutdown
switchport-mode access
allowed-vlan all
native-vlan guest
access-rule-name wired-SetMeUp
speed auto
duplex auto
type guest
captive-portal disable
inactivity-timeout 1000
Port Profile Assignments
------------------------
Port  Profile Name
----  ------------
0     default_wired_port_profile
1     default_wired_port_profile
2     wired-SetMeUp
3     wired-SetMeUp
4     wired-SetMeUp
USB   wired-SetMeUp
Bridged
Downlink ports configured for bridge forwarding can be used to connect wired client devices to APs or to connect Mesh Points to downstream access layer switches when Mesh bridging is deployed. The downlink wired port profile can be configured for access supporting a single untagged access VLAN or as a trunk supporting a single Native VLAN and one or more 802.1Q tagged VLANs.
When the downlink profile is configured for bridge forwarding, the AP bridges traffic received on a downlink port to an uplink port on the assigned VLAN. The VLAN assignment and uplink port profile configuration determines if the bridged traffic is forwarded out the uplink port untagged or tagged.
When configuring a downlink port profile with bridge forwarding, the VLANs that are configured must be present on the AP's uplink ports. If the default uplink port profile is implemented, all VLANs are allowed by default. If a user defined uplink port profile is implemented, the bridged VLANs must be included in the Allowed VLAN list. The VLANs must also be extended to the APs from the access switching layer.
An example of a downlink bridged port profile configured for access is depicted below. In this example an IP camera is connected to the AP's Ethernet 0/1 downlink port and is assigned to access VLAN 79. VLAN 79 is extended between the access switching layer and the AP's Ethernet 0/0 uplink port and is 802.1Q tagged between both ports.

Access bridged downlink port
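A matching downlink wired port profile for the IP camera scenario might look like the following sketch. The profile name is hypothetical and the exact CLI keywords can vary by AOS 10 release; the structure mirrors the access-mode wired-SetMeUp profile shown earlier:

```
wired-port-profile BLD10-IPCAM
 no shutdown
 switchport-mode access
 allowed-vlan 79
 native-vlan 79
 type employee
```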
An example of a downlink bridged port profile configured for trunk is depicted below. In this example an IP phone is connected to the AP's Ethernet 0/1 downlink port where untagged VLAN 76 is used for data and 802.1Q tagged VLAN 77 is used for voice. Both VLANs are extended between the access switching layer and the AP's Ethernet 0/0 uplink port and are 802.1Q tagged between both ports.

Trunk bridged downlink port
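The IP phone scenario maps to a trunk-mode profile along these lines — again a hypothetical profile name, with syntax that can vary by release — carrying untagged data on native VLAN 76 and tagged voice on VLAN 77:

```
wired-port-profile BLD10-VOIP
 no shutdown
 switchport-mode trunk
 native-vlan 76
 allowed-vlan 76-77
 type employee
```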
An example of a downlink bridged port profile configured for trunk used for Mesh bridging is depicted below. In this example a user defined uplink port profile with native VLAN 71 and allowed VLANs 71,76-79 has been assigned to the Mesh Portal's Ethernet 0/0 uplink port that connects to the access switching layer. A user defined downlink port profile has been assigned to the Mesh Point's Ethernet 0/0 port with the same native VLAN 71 and allowed VLANs 71,76-79. VLANs 71,76-79 are effectively extended from the access switching layer over the mesh link to the remote access layer switch.

Mesh trunk bridged downlink port
Tunneled
Downlink ports configured for tunnel forwarding can be used to connect wired client devices to APs. A downlink wired port profile can only be configured for access mode supporting a single untagged VLAN. Tunneled trunk ports configured with multiple VLANs are not currently supported.
When the downlink profile is configured for tunnel forwarding, the AP tunnels traffic received on a downlink port to the selected primary cluster. As with tunneled WLAN clients, each tunneled wired client is assigned a UDG and S-UDG session within the primary cluster via the published bucketmap. If datacenter redundancy is required, failover between a primary and secondary cluster is also supported.
Each tunneled downlink port profile can be configured to tunnel traffic to a specified primary cluster. APs supporting multiple downlink ports can implement port profiles that all tunnel to the same primary cluster or may implement port profiles tunneling to separate primary clusters (MultiZone).
An example of downlink tunneled port profiles applied to hospitality APs is depicted below. In this example two downlink port profiles with tunnel forwarding have been assigned to the AP's downlink ports to support in-room services and guest devices:
- Ethernet 0/1 – A downlink port profile is assigned to support a SmartTV which is MAC authenticated and assigned to VLAN 74.
- Ethernet 0/2 to Ethernet 0/4 – A downlink port profile is assigned to support hotel guest devices which are Captive Portal authenticated and assigned to VLAN 75.

Access Tunnel Downlink Ports
7 - User VLANs
Each client is either statically or dynamically assigned a VLAN upon connecting to a WLAN or downlink port on an AP. The VLAN assignment decision is either made by the AP or the Gateway depending on the traffic forwarding mode configured in the profile:
- Bridge Forwarding – The VLAN assignment decision is made by the APs
- Mixed or Tunnel Forwarding – The VLAN assignment decision is made by the Gateways
For mixed forwarding, the static or dynamically assigned VLAN Id determines if the client’s traffic is locally bridged by the AP or tunneled to the primary cluster. If the assigned VLAN is present within the cluster, the client’s traffic is tunneled. If the assigned VLAN is not present within the cluster, the client’s traffic is locally bridged by the AP.
When planning an AOS 10 deployment that implements a combination of bridged, tunneled, and mixed forwarding profiles, it is extremely important to dedicate VLAN IDs for bridged and tunneled clients. The VLAN IDs must be unique for each forwarding mode and must not overlap. For a given VLAN ID, an AP can either bridge or tunnel client traffic, but it cannot do both simultaneously. Mixing bridged and tunneled client traffic on the same VLAN ID is not recommended or supported.
Static VLANs
Each profile requires a static VLAN to be configured, which is assigned to client devices if no dynamic VLAN assignment is derived. The VLAN you assign can be considered a catchall VLAN when no other VLAN is derived. For WLAN profiles the static VLAN can be an individual VLAN ID or VLAN Name that you specify or select depending on the configured forwarding mode. If named VLANs are implemented, additional configuration is required to map the VLAN names to their respective VLAN IDs. The mapping configuration is performed either within the WLAN profile (bridge forwarding) or within the Gateway configuration group (tunnel forwarding).
As a recommended best practice, the static VLANs you assign within a profile should not be the management VLAN of the AP or the Gateway. The assigned VLAN should be dedicated to bridged or tunneled clients. VLAN 1 should also be avoided for bridged and mixed forwarding, as VLAN 1 is the AP's default management VLAN and is also present within each primary cluster. VLAN 1 should only be used if the AP's management VLAN has been changed to a different value or if VLAN 1 legitimately serves client traffic within the primary cluster.
An example of VLAN ID and VLAN Name assignments for a WLAN profile is depicted in the figure below. In this example, WLAN clients with no dynamic VLAN assignment will be assigned to VLAN 75.

WLAN Profile Example 1

WLAN Profile Example 2
For downlink wired port profiles, the static VLAN configuration options will vary depending on the configured traffic forwarding mode. For bridge forwarding, the downlink ports can be configured as access supporting a single VLAN or trunk supporting multiple VLANs. The configuration is identical to how access or trunk ports are configured on an Ethernet switch:
- Access – A single access VLAN ID is required which determines the VLAN ID the AP uses to forward the bridged client's traffic out its uplink port.
- Trunk – A Native VLAN ID and a list of Allowed VLANs must be configured. This determines which 802.1Q tagged VLANs are accepted by the downlink port and which VLAN is used to forward untagged traffic received from a wired client.
An example of a bridged downlink wired port profile configured as access is depicted in the figure below. In this example the APs will assign bridged wired clients with no dynamic VLAN assignment to VLAN 75. The AP will forward the wired client's traffic out its uplink port onto the access switching layer with 802.1Q tag 75.

Bridged Access Wired Port Profile Example
An example of a bridged downlink wired port profile configured as a trunk is depicted in the figure below. This example represents a downlink port profile that is configured to support a VoIP telephone that implements an untagged data VLAN (72) and an 802.1Q tagged voice VLAN (73). Untagged traffic from the VoIP phone is placed into the native VLAN 72 while 802.1Q tagged voice traffic is accepted. As only VLANs 72-73 are accepted by the downlink port, all other tagged VLAN IDs received on the downlink port are discarded by the AP.

Bridged Trunk Wired Port Profile Example
Wired port profiles configured for tunnel forwarding support a single VLAN and must be configured for access mode. The access VLAN can be a VLAN ID or Named VLAN that resides within the primary cluster. Tunneled downlink ports will only accept untagged traffic from wired clients; all traffic received with an 802.1Q tag will be discarded by the AP.
An example of a valid tunneled downlink wired port profile is depicted in the figure below. In this example the APs will assign tunneled wired clients with no dynamic VLAN assignment to VLAN 73, which resides within the primary cluster.

Tunnel Access Wired Port Profile Example
Default VLAN
WLAN profiles configured for mixed traffic forwarding, or with dynamic VLAN assignment enabled, require a default VLAN to be assigned. As with a static VLAN, the default VLAN is assigned to WLAN clients if no dynamic VLAN assignment is derived from RADIUS or a VLAN assignment rule. The default VLAN can be an individual VLAN ID or VLAN Name that you specify or select depending on the configured traffic forwarding mode.
The default VLAN that you define within each WLAN profile can be considered a catchall VLAN when no dynamic VLAN assignment is made. As a recommended security best practice, the default VLAN should be different from the AP's or Gateway's management VLAN. If the default AP management VLAN is used, it is recommended that you change the default VLAN to a different value (below figure). This ensures that no bridged clients are accidentally assigned to the AP's management VLAN.

Default VLAN
VLAN Pools
A VLAN pool allows bridged or tunneled WLAN clients to be distributed across multiple VLANs. When VLAN pooling is implemented, each client is assigned to a VLAN within the pool using a MAC address hashing algorithm. The implemented algorithm is consistent on both the APs and Gateways, ensuring each client maintains its VLAN assignment each time it connects or roams.
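Conceptually, the pool assignment behaves like the deterministic hash sketched below. This is an illustrative model only; the actual hashing algorithm shared by the APs and Gateways is internal to the platform:

```python
import hashlib

def pool_vlan(client_mac, vlan_pool):
    """Deterministically map a client MAC to one VLAN in the pool."""
    digest = hashlib.sha256(client_mac.lower().encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(vlan_pool)
    return vlan_pool[index]

pool = [76, 77, 78, 79]
# The mapping depends only on the MAC and the pool, so a client keeps the
# same VLAN every time it connects or roams to another AP.
assert pool_vlan("aa:bb:cc:dd:ee:01", pool) == pool_vlan("AA:BB:CC:DD:EE:01", pool)
assert pool_vlan("aa:bb:cc:dd:ee:01", pool) in pool
```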
For bridge traffic forwarding, a VLAN pool can consist of a range of contiguous VLAN IDs, a list of non-contiguous VLAN IDs, or a list of selected VLAN Names. VLAN Names require the respective VLAN Name to VLAN ID mappings to be configured within the WLAN profile before selection. A VLAN pool using a contiguous range of VLAN IDs and a list of VLAN Names is depicted in the figures below:

Bridged WLAN Profile VLAN Pool 1

Bridged WLAN Profile VLAN Pool 2
For tunnel traffic forwarding, a VLAN pool is selected by configuring a Named VLAN within the Gateway configuration group that includes a range of VLAN IDs. The Named VLAN is selected and assigned to the WLAN profile after the primary cluster is selected. A VLAN pool assigned to a WLAN profile configured for tunnel forwarding is depicted in figure below. The Named VLAN and VLAN IDs mappings were configured and managed within the Gateway configuration group:

Tunneled WLAN Profile VLAN Pool
RADIUS Assigned
Clients connected to WLANs or downlink ports requiring MAC or 802.1X authentication can be dynamically assigned a VLAN from a RADIUS server such as ClearPass or the Cloud Auth service. A RADIUS server can assign a VLAN directly by providing a VLAN ID or VLAN Name to an AP or Gateway, or indirectly by providing a user role name which includes the VLAN assignment.
APs and Gateways can directly assign a VLAN provided by a RADIUS server or Cloud Auth service that is configured to provide HPE Aruba Networking vendor-specific attribute value pairs (AVPs) or standard IETF AVPs in RADIUS Access-Accept or change of authorization (CoA) messages. A VLAN can be dynamically assigned from a RADIUS server that is configured to return one of the following AVPs:
- Aruba-User-VLAN – HPE Aruba Networking AVP that provides a numerical VLAN ID (1-4094).
- Aruba-Named-User-VLAN – HPE Aruba Networking AVP that provides a VLAN Name that maps to a VLAN ID (1-4094) configured in the WLAN profile (bridged) or Gateway configuration group (tunneled).
- Tunnel-Private-Group-ID – IETF AVP that provides a numerical VLAN ID (1-4094).
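For example, a RADIUS policy might return one of these AVPs in an Access-Accept. The snippet below uses FreeRADIUS users-file syntax purely for illustration — the device MACs and VLAN values are hypothetical, and the Aruba vendor dictionary must be loaded for the VSAs to resolve:

```
# Assign VLAN 75 directly by ID to a MAC-authenticated device
"aa-bb-cc-dd-ee-01" Cleartext-Password := "aa-bb-cc-dd-ee-01"
        Aruba-User-VLAN = 75

# Assign by name; "iot" must map to a VLAN ID on the AP or Gateway
"aa-bb-cc-dd-ee-02" Cleartext-Password := "aa-bb-cc-dd-ee-02"
        Aruba-Named-User-VLAN = "iot"
```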
A VLAN can also be dynamically assigned based on user role assignment. Each user role may optionally include a VLAN ID assignment that is configured within the user role as part of the WLAN profile configuration. For tunneled or mixed WLANs, the configured VLAN ID is automatically copied to the user role orchestrated on the Gateways. A RADIUS server or Cloud Auth service dynamically assigns the user role to the clients which determines the bridged or tunneled VLAN assignment. A user role can be dynamically assigned by a RADIUS server by returning the following HPE Aruba Networking vendor-specific AVP:
- Aruba-User-Role – HPE Aruba Networking AVP that provides the user role name.
To simplify operations and troubleshooting, only one dynamic VLAN assignment method should be implemented at a given time. A VLAN ID or VLAN Name should either be directly assigned or indirectly assigned via a user role. If both VLAN assignment options are provided to an AP or a Gateway, the directly assigned VLAN ID or VLAN Name takes precedence.
VLAN Assignment Rules
A VLAN assignment may also be determined by creating VLAN assignment rules as part of the profile configuration. VLAN assignment rules are optional and permit dynamic VLAN assignments based on admin defined rules that include an attribute, operator, string value and resulting VLAN assignment.
VLAN assignment rules are often implemented during migrations to HPE Aruba Networking by permitting VLAN assignments to be made by an existing RADIUS server and policies that implement IETF or third-party vendor-specific AVPs. Assignment rules permit customers to easily migrate to AOS 10 without having to modify their existing RADIUS policies.
An example of how VLAN assignment rules can be leveraged during a migration is depicted in the figure below. In this example the RADIUS server policies implement the IETF filter-id AVP that returns string values such as employee-acl, iot-acl, and guest-acl. Each VLAN assignment rule matches a partial string value and assigns an appropriate VLAN ID. For example, if filter-id returns the string value employee-acl, VLAN 75 is assigned. If no assignment rule is matched, the clients are assigned to VLAN 76.

VLAN Assignment Rules Example
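The rule evaluation in the example above behaves like a first-match substring check, sketched below. The employee-acl to VLAN 75 mapping and the VLAN 76 fallback come from the example; the iot-acl and guest-acl VLAN values are hypothetical:

```python
# (substring to match in the filter-id value, VLAN to assign), in order
RULES = [
    ("employee-acl", 75),
    ("iot-acl", 77),    # hypothetical VLAN value
    ("guest-acl", 78),  # hypothetical VLAN value
]
DEFAULT_VLAN = 76  # assigned when no rule matches

def derive_vlan(filter_id_value):
    """Return the VLAN for the first rule whose substring matches."""
    for needle, vlan in RULES:
        if needle in filter_id_value:
            return vlan
    return DEFAULT_VLAN

assert derive_vlan("employee-acl") == 75
assert derive_vlan("contractor-acl") == 76  # no rule matched
```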
VLAN assignment rules are supported for bridged, tunneled, and mixed forwarding profiles. When VLAN assignment rules are configured for tunneled or mixed profiles, the assignment rules are automatically orchestrated on the Gateways. Each VLAN assignment rule will be added as a server defined rule (SDR) under the server group for the tunneled or mixed profile.
Assignment Priorities
Depending on the traffic forwarding mode configured within a profile, the AP or the Gateway will make the VLAN assignment decision. For bridge profiles the AP makes the VLAN assignment decision while for tunneled and mixed profiles the Gateway makes the VLAN assignment decision. By default, if no VLAN is dynamically derived, the static VLAN or default VLAN configured in the profile will be assigned.
When multiple VLAN outcomes are possible for a client, an assignment priority is followed by the APs and Gateways: the AP or Gateway assigns the VLAN from the assignment option with the highest priority. As a general rule, the VLAN ID or VLAN Name assigned by a RADIUS server or the Cloud Auth service has the highest priority.
Below table provides the AP VLAN assignment priority for bridged profiles. When multiple VLAN assignment outcomes are presented for a bridged client, the AP will select the assignment option with the highest priority:
Priority | Assignment | Notes |
---|---|---|
1 (Lowest) | Static or default within the WLAN profile | VLAN ID, VLAN Range or VLAN Name |
2 | VLAN derived from user role | Default, Aruba-User-Role (RADIUS) or Role Assignment Rule |
3 | VLAN assignment rule | User defined derivation rule |
4 (Highest) | RADIUS | Aruba-User-VLAN, Aruba-Named-User-VLAN, or Tunnel-Private-Group-ID |
Below table provides the Gateway VLAN assignment priority for tunneled WLAN profiles. When multiple VLAN assignment outcomes are presented, the Gateway will select the assignment option with the highest priority:
Priority | Assignment | Notes |
---|---|---|
1 (Lowest) | Static or default within the WLAN profile | VLAN ID or VLAN Name |
2 | VLAN from initial user role | |
3 | VLAN from UDR user role | |
4 | VLAN from UDR rule | |
5 | VLAN from DHCP option 77 UDR user role | Wired Clients |
6 | VLAN from DHCP option 77 UDR rule | Wired Clients |
7 | VLAN from MAC-based Authentication default user role | |
8 | VLAN from SDR user role during MAC-based Authentication | |
9 | VLAN from SDR rule during MAC-based Authentication | |
10 | VLAN from Aruba VSA user role during MAC-based Authentication | Aruba-Named-VLAN Aruba-User-VLAN |
11 | VLAN from Aruba VSA during MAC-based Authentication | |
12 | VLAN from IETF tunnel attributes during MAC-based Authentication | Tunnel-Private-Group-ID |
13 | VLAN from 802.1X default user role | |
14 | VLAN from SDR user role during 802.1X | |
15 | VLAN from SDR rule during 802.1X | |
16 | VLAN from Aruba VSA user role during 802.1X | Aruba-User-Role |
17 | VLAN from Aruba VSA during 802.1X | Aruba-Named-VLAN Aruba-User-VLAN |
18 | VLAN from IETF tunnel attributes during 802.1X | Tunnel-Private-Group-ID |
19 | VLAN from DHCP options User Role | VLAN inherited by user role assigned from DHCP options |
20 (Highest) | VLAN from DHCP options |
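Both tables reduce to the same selection rule: gather every VLAN candidate presented for the client and take the one from the highest-priority source. A minimal sketch using the four tiers of the bridged (AP) table:

```python
# Priority tiers from the bridged AP table above (higher number wins)
PRIORITY = {
    "profile-static": 1,
    "user-role": 2,
    "assignment-rule": 3,
    "radius": 4,
}

def resolve_vlan(candidates):
    """candidates: {source_name: vlan}; return the winning VLAN."""
    best = max(candidates, key=lambda source: PRIORITY[source])
    return candidates[best]

# A RADIUS-supplied VLAN overrides the profile's static/default VLAN.
assert resolve_vlan({"profile-static": 75, "radius": 80}) == 80
assert resolve_vlan({"profile-static": 75}) == 75
```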
VLAN Best Practices & Considerations
AOS 10 APs can support any combination of profiles implementing bridge, tunnel, or mixed forwarding modes. When profiles of different forwarding types are implemented, the following considerations should be followed:
- Avoid using VLAN 1 whenever possible. VLAN 1 is the default management VLAN for APs and is also present on the Gateways. VLAN 1 should only be implemented if the default management VLAN on the APs is changed from 1 to a different value.
- If the default AP management VLAN 1 is retained, avoid assigning tunneled clients to a VLAN on the Gateway that indirectly maps to the AP's untagged management VLAN. For example, if APs are managed on untagged VLAN 70 on the access layer switch and this VLAN is extended to a Branch Gateway, do not assign tunneled clients to VLAN 70.
- Implement dedicated VLAN IDs and broadcast domains for bridged and tunneled clients. An AP can either bridge or tunnel clients on a given VLAN ID and cannot do both simultaneously.
- Prune all tunneled VLANs from the AP's uplink ports at the access switching layer. A tunneled VLAN must not be extended to the uplink ports on the APs. As a best practice, only the AP management and bridged user VLANs should be extended to the APs.
- If implementing mixed forwarding with Branch Gateways, bridged user VLANs must be layer 3 separated from the Gateways. If no layer 3 separation is implemented, all clients will be tunneled because all the VLANs will be present within the cluster. If layer 3 separation cannot be implemented, a dedicated WLAN using bridge forwarding must be implemented.
- Each user VLAN can support a maximum of one IPv4 subnet and one IPv6 prefix. Multinetting (multiple IPv4 subnets or IPv6 prefixes on a single VLAN) is not supported.
8 - Roles
8.1 - Fundamentals
Roles are policy and configuration containers that are assigned to client devices connected to HPE Aruba Networking access points (APs), gateways, and access layer switches. Usage of roles is mandatory for APs and gateways but optional for access layer switches except when User-Based Tunneling (UBT) is deployed.
Roles are a differentiating foundational architectural element supported by HPE Aruba Networking infrastructure devices. They can be used to implement dynamic segmentation and policy enforcement between different sets of client devices and may optionally include other attributes for assignment. Initially introduced for use on wireless controllers and controllerless APs (AOS-8), roles are now supported by all current infrastructure devices, including APs, gateways, and switches.

HPE Aruba Networking devices that support roles.
Role uses
Roles are used to apply network access policies and other attributes to client devices or user identities. The policy language and supported attributes are specific to the network infrastructure device type and vary between APs, gateways, and switches. The available policy options and attributes are limited by the capabilities and supported features of each device type.
In AOS-10, roles contain policy language used to determine host, network, and application permissions. They may optionally include other configuration attributes such as VLAN assignment, captive portal configuration or bandwidth contracts. Global client roles applied to gateways also include group policy identifiers (GPIDs) used by gateways and switches for role-to-role policy enforcement.

AOS-10 role attributes.
On switches, roles are used to dynamically apply configuration to access ports when port-access security is enabled. When a wired client device or user successfully authenticates, the RADIUS authentication server or Central NAC service can return a role name that determines the port's operation mode, forwarding behavior, switchport mode, and access or trunk VLAN assignments. If UBT is enabled, the assigned role also determines the cluster (zone) that traffic is tunneled to and the role assigned on the gateways.

Switch role attributes.
Role assignment
Roles can be assigned to client devices or user identities on APs, gateways, or access layer switches at the point where each client device connects to the network. When traffic is tunneled from an AP or UBT access layer switch to gateways, a role is assigned both on the tunneling device where the client is attached and on the gateways.
APs
Roles are assigned to each wired and wireless client device (unique MAC) that connects to an AP regardless of the forwarding mode configured in the profile. This includes:
- Wired devices connected to a downlink port.
- Wireless devices connected to WLANs.
Each client device is assigned a default role or a user defined role from a RADIUS authentication server, Central NAC service, or role assignment rule. If no role is dynamically assigned or the assigned role does not exist, a default role is assigned. As wireless clients are nomadic, the assigned role follows each client as it roams between APs within a roaming domain, with the assigned role cached and automatically distributed by services within Central to neighboring APs.

Default and user defined roles assigned to AOS-10 APs.
Gateways
When a wired or wireless client on an AP or a wired client connected to a UBT switch is tunneled to a gateway cluster, two roles are assigned:
- A role is assigned on the AP or UBT switch where the client device is attached.
- A role is assigned within the gateway cluster on the client's user designated gateway (UDG).
Within a cluster, each tunneled client device (unique MAC address) is assigned an active and standby User Designated Gateway (UDG) via the published bucket map for the cluster (see Cluster Roles). Each client’s assigned UDG gateway is the anchor point for all traffic and is persistent. The only time a tunneled client’s UDG gateway assignment is changed is if a gateway is added or removed from a cluster, a failover to a secondary cluster occurs, or the wireless client roams to an AP that is tunneling to a different cluster.

Default and user defined roles assigned to AOS-10 gateways.
A role may also be assigned to wired client devices that are serviced by a switchport on a gateway. When a port or VLAN is untrusted, each wired device can be optionally authenticated where a user defined role can be dynamically provided by a RADIUS server or Central NAC service. For non-authenticated ports or VLANs, a user defined role may be statically assigned.
Access layer switches
When port-access security is configured on an access layer switch, a role can be dynamically assigned to wired devices from a RADIUS authentication server or Central NAC service. The attributes in each role determine the configuration that is applied to the switchport and if user based tunneling (UBT) is activated for forwarding.
When a wired UBT client is tunneled to a gateway cluster, two roles are assigned:
- A role is assigned on the access layer switch where the wired client device is attached.
- A role is assigned on the user designated gateway (UDG) for each UBT client.
For UBT to function, a user defined role assigned on the access layer switch must include attributes that specify the cluster (zone) the UBT client's traffic is tunneled to and the user defined role that is assigned on each UDG gateway. For flexibility, the role mapping configured for each role permits the same role name or different role names to be assigned on the access layer switches and gateways. Additionally, CX access layer switches implement zones, allowing UBT client traffic to be terminated on different clusters within the network.

Roles assignments on access layer switches, gateways, and mappings.
Role types
AOS-10 APs and gateways support default roles, user defined roles, and global client roles. Default roles are applied to wired or wireless client devices when no user defined role is assigned while user defined roles and global client roles are assigned by either an authentication server or role derivation rule.
Default roles
Default roles are automatically created for each downlink port profile and WLAN profile that are configured within an AP configuration group. Each default role has the same name as its parent profile and is assigned to client devices when no user defined role is assigned.

Default roles
Default roles are either created within an AP configuration group or both AP and gateway configuration groups depending on the forwarding mode of the profile:
- Bridge forwarding – The default role is created in the AP configuration group only.
- Mixed / tunnel forwarding – The default role is created in both the AP and gateway configuration groups. When both a primary and secondary cluster are assigned, they are created in both the primary and secondary gateway configuration groups.
Default roles are mandatory and must exist on the AP for each profile. They can be used to apply security policies to client devices as well as assign other attributes such as VLANs, captive portal configuration, or bandwidth contracts. They may be used exclusively when no dynamic role assignment is required or be employed as a fall-through/catchall role when no dynamic user defined role is assigned.
While a default role can be dynamically assigned to client devices or user identities connected to other profiles, this is not recommended as default roles are deleted when their parent profile is deleted. If a role needs to be assigned to multiple profiles, a user defined role should be used. A default role should only be used within the context of the parent profile.
User defined roles
User defined roles are configured and named by the administrator. They can be independently configured per AP or gateway configuration group or be orchestrated by Central to the necessary configuration groups by a profile creation workflow. They are assigned to client devices or users either by a RADIUS authentication server, Central NAC service, or role derivation rule. A default user role is assigned to client devices when no user defined role is dynamically assigned or if a dynamically assigned role does not exist on the AP or gateway.

User defined roles
When user defined user roles are added or modified using a profile creation workflow, the roles and associated policies are either created in the AP configuration group or both the AP and gateway configuration groups depending on the forwarding mode of the profile:
-
Bridge forwarding – User defined roles are created in the AP configuration group only.
-
Tunnel forwarding – User defined roles are created in the respective gateway configuration groups. When both primary and secondary clusters are assigned, they are created in both the primary and secondary gateway configuration groups.
-
Mixed forwarding - User defined roles are created in both the AP and gateway configuration groups.
If no user defined roles are configured using the profile creation workflow, they must be manually created in the respective AP and gateway configuration groups by the admin. Only roles added or modified using a profile creation workflow are automatically orchestrated between AP and gateway configuration groups. When a profile creation workflow is used, policies, attributes and derivation rules are also orchestrated between AP and gateway configuration groups. The orchestrated roles can be used across profiles as needed.
For most AOS-10 deployments, user defined roles will be created in either their respective AP or gateway configuration groups, as the profiles on the APs will implement either a bridge or tunnel forwarding mode. User defined roles only need to be created in both AP and gateway configuration groups if the AP is simultaneously bridging and tunneling user traffic and the same user defined role is assigned to client devices or user identities for both forwarding modes. For example, an employee role is assigned to tunneled wireless clients in addition to bridged wired clients connected to wall-plate APs. In this scenario the employee role would be created in both the AP and the respective gateway configuration groups.
Global client roles
Global client roles are centrally configured and managed in Central and then propagated to CX switches, branch gateways, and mobility gateways. Unlike user defined roles, which are configured and managed per configuration group, global client roles are managed in one place; they are not supported on APs or AOS-S switches.
When propagated to branch or mobility gateways, each global client role is listed in the roles table of each applicable gateway configuration group and is identified with a 'Yes' flag in the Global column. Each global client role must have a unique name and cannot overlap with existing default or user defined roles.

Gateway configuration group roles table with global client roles
A global client role can be assigned to tunneled client devices terminating on a gateway cluster in addition to wired client devices that are connected to an untrusted port or VLAN on a gateway. They can be used the same way as user defined roles and can include IP-based policies and attributes.
Unlike default and user defined roles, global client roles do not contain any IP-based network access permissions by default; these must be assigned by the admin after propagation. If used in an unmodified state, client devices will be unable to obtain IP addressing or communicate over the intermediate IP network. For each propagated role, the admin must assign one or more session access control lists (SACLs) that allow basic network services such as Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS), in addition to the necessary destination host and network permissions.
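Global client roles are edited in Central, so the CLI below is shown only to illustrate the resulting gateway configuration. This is a hedged sketch using standard AOS session-ACL syntax; the role name, SACL names, and host address are hypothetical:

```
! Illustrative session ACLs permitting basic services (DHCP and DNS)
ip access-list session basic-services
  any any svc-dhcp permit
  any any svc-dns permit
! Illustrative destination permission for a specific application host
ip access-list session corp-apps
  user host 10.10.20.5 tcp 443 permit
! Attach the SACLs to the propagated global client role
user-role gcr-employee
  access-list session basic-services
  access-list session corp-apps
```

Without at least the basic-services style entries, clients in the propagated role cannot complete DHCP or resolve names, matching the behavior described above.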
Global client roles may also be used to apply role-to-role group-based policy enforcement with a NetConductor solution, in addition to role-to-role enforcement across gateways as detailed in the VSG.
8.2 - Management and Configuration
Role management and configuration in Central is separated into two management functions. The first management function involves role creation or removal which can be performed in different areas within the Central UI depending on the role type:
-
Default roles – Are supported on APs and gateways. They are added or removed to AP and gateway configuration groups with their parent profile. Default roles cannot be manually created or removed.
-
User defined roles – Are supported on APs and gateways. They are added or removed using either the profile creation workflow or are manually added or removed directly within each AP or gateway configuration group.
-
Global client role – Are added or removed globally within a Central instance then propagated to gateways and switches.
As roles are policy and configuration containers, the second management function involves adding, removing, or modifying network access policies and attributes for each role. For default and user defined roles, the forwarding mode selected for a profile will influence where role management can be performed:
-
Bridge forwarding – Network access policies and attributes can be configured and managed using the profile creation workflow or by directly modifying each role within an AP configuration group.
-
Mixed or tunnel forwarding – Network access policies and attributes are configured and managed directly per AP and gateway configuration group. This recent change permits different network access policies and attributes to be assigned to a role on APs and gateways.
Role to role permission management and group policy identifier configuration for global client roles is performed globally within each Central instance. For global client roles that are propagated to mobility gateways, additional network access policies and attributes are configured and managed directly within each gateway configuration group.
Profile creation workflow
The profile creation workflow provides a convenient way to configure default and user defined roles as part of an intuitive workflow. Roles can be added and removed without requiring the admin to exit the profile workflow. The access slider in the workflow determines the level of role configuration that is exposed:
-
Unrestricted – No role configuration is exposed within the workflow.
-
Network Based – Network access permissions and attributes can be configured and modified for the default role only.
-
Role Based – Full role configuration is exposed.
For bridge forwarding profiles, roles can be added, removed, and configured using the workflow. When Role Based access is selected, adding, editing, or removing user defined roles is possible.

Bridge profile role configuration within the workflow.
The current state of the slider in the user interface is dependent on the current configuration of the WLAN profile and the associated default user role.
-
Default is Unrestricted.
-
Setting an access control policy other than Allow any to all destinations within the default user role will result in the slider showing Network Based.
-
Creating any assignment rules will result in the slider showing Role Based.
The current state of the slider has no impact on the ability of the access point to utilize or assign roles returned by RADIUS or Central NAC.
For mixed and tunnel forwarding profiles, roles can be added and removed using the profile creation workflow, but policies cannot be configured. User defined roles added or removed using the workflow are added or removed from their respective AP and gateway configuration groups. Note that network access policies and attributes are no longer configurable using the profile creation workflow for mixed and tunnel forwarding profiles and must be manually configured in the respective AP and gateway configuration groups. A warning is displayed in the configuration workflow advising of this requirement.

Mixed / tunnel profile role configuration within the profile creation workflow
Configuration groups
User defined roles can be added, removed, and configured directly per AP and gateway configuration group using the Central UI. The admin can configure network access permissions and attributes for existing roles or add, delete, and configure user defined roles. The UI also offers a convenient way to pre-configure user defined roles, network access permissions and attributes prior to creating profiles.
For AP configuration groups, default and user defined roles can be configured and managed under Security > Roles. User defined roles can be added, removed, or configured, but default roles can only be configured and not removed. Default roles can only be removed by removing the parent profile.
Each role is configured by selecting a role from the list which presents the network access policies and attributes that are configured for the selected role. An example of role management within an AP configuration group is depicted below.

AP group role configuration and management.
For gateway configuration groups, default and user defined roles can be configured and managed under Security > Roles. The role table lists all the roles configured in the gateway configuration group which includes predefined roles, default roles, user defined roles, and global client roles. Global client roles are identified with a Global “Yes” flag.
Each role is configured by selecting a role in the table which displays an additional table that presents the network access policies and attributes that are assigned to the selected role.

Gateway group role configuration and management.
8.3 - Bridge Forwarding
Please refer to the Forwarding Modes of Operation for a detailed overview of bridge forwarding.
Supported role types
For bridge forwarding, the AP makes the role assignment decision. Bridged clients can be assigned a default role or a user defined role, but not a global client role. A user defined role is assigned when one is dynamically derived from an authentication server or role derivation rule; otherwise the default role is assigned.
Role derivation and assignment
For bridge forwarding, the APs operate as authenticators and make the role assignment decision. When a client device attaches to an AP or a device/user identity is authenticated, a default or user defined role is assigned:
-
Default role – Is assigned when no user defined role is dynamically assigned, or the dynamically assigned role is not present on the AP.
-
User defined role – Is dynamically assigned from a RADIUS authentication server, Central NAC service or role assignment rule.
A user defined role may also be assigned post-authentication using a DHCP role assignment rule. DHCP role assignment rules are evaluated post authentication as a DHCP message exchange must occur. A default or user defined role may also be changed post-authentication by an authentication server that sends a change of authorization (CoA) message.
Default role
A default role is created for every bridge profile with a default role assignment rule that cannot be modified. The default role assignment for a profile can be viewed in the profile creation workflow when Role Based access is selected. An example of a default role assignment for a profile named BridgeProfile is depicted below.

Bridge profile default role assignment rule.
A default role is assigned to client devices or user identities when no role is dynamically assigned from a RADIUS authentication server, Central NAC service or role assignment rule. They are also assigned if a dynamically assigned role is not present in the AP configuration.
Assignment rules
User defined roles can be dynamically assigned to client sessions by creating role assignment rules within the profile creation workflow. They are optional and permit dynamic user defined role assignment based on admin defined rules that include an attribute, operator, string value, and the resulting role assignment. They operate like security access control lists (ACLs) where rules are evaluated in order (top down). The first assignment rule that is matched is applied. Assignment rules may also be re-ordered at any time.
Role assignment rules are often implemented during migrations to HPE Aruba Networking by allowing role assignments to be made using the attribute value pairs (AVP) from existing RADIUS server policies that implement IETF or vendor specific attributes (VSA).
As an example, a third-party RADIUS server is configured with policies that return the IETF Filter-Id AVP that provides unique string values that can be used by the APs to assign a user defined role. Each condition in the UDR includes a match condition and user defined role assignment.

Role assignment rule using the Filter-Id AVP to determine the role to assign.
Assignment rules can also be used for dynamic role assignment for non-authenticated sessions. For example, assignment rules can be created to dynamically assign user defined roles based on MAC OUI or DHCP options. This can be useful if dynamic VLAN assignments or unique network access policies need to be applied to sets of headless devices that do not support 802.1X or for profiles that do not have 802.1X or MAC authentication enabled.
DHCP option-based rules are evaluated post authentication and are only applicable once a VLAN assignment has been made, as the assignment rules operate by matching option fields exchanged in DHCP discover and request messages. DHCP option-based rules are not applicable for profiles with Captive Portal enabled and should not be used to assign user defined roles that result in a VLAN assignment change.
RADIUS assigned
Clients connected to WLANs or downlink ports requiring MAC or 802.1X authentication can be directly assigned a user defined role from a RADIUS authentication server or Central NAC service that return the HPE Aruba Networking Aruba-User-Role vendor-specific AVP. Policies on the RADIUS authentication server or Central NAC service can be configured to directly return a user defined role name based on the authenticating device/user identity, user identity store attributes such as department, or other contextual conditions such as date or time, location, or posture.
APs performing MAC or 802.1X authentication will accept the Aruba-User-Role AVP from a RADIUS Server or Central NAC with no additional configuration being required. If the user defined role is present on the AP and no role assignment rule is matched, the role name provided by the Aruba-User-Role AVP is assigned.
A role assignment rule can be configured to use a specific role based on the received role name if required. For example, if the Aruba-User-Role is returned with the value Employees, a role assignment rule can be configured to match the received role name and apply a different role. This can be a useful tool for migrations and troubleshooting.
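As a hedged illustration of how a RADIUS server can return the Aruba-User-Role AVP, the fragment below uses FreeRADIUS users-file syntax; the user name, password, and role name are hypothetical, and the server must have the Aruba vendor dictionary (vendor ID 14823) loaded:

```
# FreeRADIUS "users" file entry (illustrative only)
# On successful authentication, the reply includes the Aruba VSA,
# and the AP assigns the matching user defined role if it exists.
bob  Cleartext-Password := "example-password"
     Aruba-User-Role = "employee-role"
```

If `employee-role` is not configured on the AP, the client falls back to the profile's default role, as described earlier.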
Assignment order
When multiple role assignment outcomes are possible for a client device or user identity, an assignment priority is followed by the AP. As a rule, a user defined role that is derived from a role assignment rule will take precedence over a user defined role received from the Aruba-User-Role AVP. If no user defined role is derived or the derived role does not exist on the AP, a default role is assigned.
Bridge forwarding role assignment order
Priority | Assignment | Notes |
---|---|---|
1 (Highest) | Role Assignment Rule | Evaluated in order |
2 | Aruba VSA | Aruba-User-Role |
3 (Lowest) | Default role | If no user defined role is derived |
User defined roles can also be dynamically assigned post authentication which is not captured in the above assignment flow. A user defined role change can occur as the result of a DHCP assignment rule during attachment or change of authorization (CoA) message received from a RADIUS authentication server or the Central NAC service. User defined roles assigned from a DHCP assignment rule or CoA will take precedence over a previously assigned default or user defined role post authentication.
For example, if an 802.1X client device is assigned a user role using the Aruba-User-Role AVP and a DHCP assignment rule is matched that assigns a different role, the role derived from the DHCP assignment rule will take precedence.
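The precedence described above can be sketched as a small selection function. This is an illustrative model, not AP firmware logic; the function and role names are hypothetical:

```python
def resolve_bridge_role(assignment_rule_role, vsa_role, ap_roles, default_role):
    """Sketch of the AP's bridge-mode role selection order:
    role assignment rule > Aruba-User-Role VSA > default role.
    A derived role is ignored if it is not present on the AP."""
    for candidate in (assignment_rule_role, vsa_role):
        if candidate and candidate in ap_roles:
            return candidate
    return default_role

# A matched assignment rule wins over the VSA-supplied role:
print(resolve_bridge_role("guest", "employee", {"guest", "employee"}, "BridgeProfile"))
# A VSA role that is absent from the AP falls through to the default role:
print(resolve_bridge_role(None, "contractor", {"guest", "employee"}, "BridgeProfile"))
```

A post-authentication DHCP assignment rule or CoA, as noted above, would simply override whatever this initial selection returned.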
Policy enforcement
When bridge forwarding is selected in a profile, the APs operate as the sole policy enforcement point. The APs inspect all user traffic and can make forwarding and drop decisions based on each client device’s role assignment and the network access policies that are configured in each role.
Each AP has a deep packet inspection (DPI) capable firewall that can permit or deny traffic flows based on available information contained within IP headers. When application visibility or unified communications (UCC) is enabled, the APs can also identify applications and real-time application flows by leveraging deep packet inspection (DPI), application layer gateways (ALGs) and advanced heuristics.
Each AP is fully capable of inspecting traffic received from attached client devices and making a forward or drop decision based on the network access rules that are configured within each assigned role. All north / south and east / west traffic flows are inspected and can be acted on by the firewall. Client devices can either be assigned a default role or be dynamically assigned a user defined role. When dynamic role assignment is used, individual clients connected to a WLAN or downlink port can be assigned separate roles each with the necessary network access policies assigned.

Bridge forwarding policy enforcement.
Scaling considerations
When configuring user defined roles within an AP configuration group, scaling must be considered as each AP can only support a specific number of default and user defined roles which is dependent on the version of AOS-10 in use.
AP maximum supported roles
AOS-10 version | Max roles |
---|---|
10.5 and below | 32 |
10.6 and above | 128 |
Each wired-port profile and WLAN profile includes a default role that counts against the maximum number of roles supported by the APs. This also includes the two default wired-port profiles that are present on each AP and cannot be removed.
To determine the number of user defined roles that can be configured in an AP group, you must subtract the total number of wired-port and WLAN profiles that are present on the AP from the maximum number of roles that are supported. For example, an AP running 10.6 that has a total of 6 wired-port + WLAN profiles configured in the group can support a total of 122 user defined roles (128 – 6 = 122).
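The capacity arithmetic above can be expressed as a short helper. This is a sketch of the calculation only; the function name is hypothetical, and the ceilings come from the table above:

```python
def max_user_defined_roles(aos_version, profile_count):
    """Remaining user defined role capacity in an AP group.
    Role ceiling: 32 through AOS-10.5, 128 from AOS-10.6 onward.
    profile_count is the total wired-port + WLAN profiles, including
    the two non-removable default wired-port profiles on each AP."""
    ceiling = 128 if aos_version >= (10, 6) else 32
    return ceiling - profile_count

# The worked example from the text: AOS-10.6 with 6 profiles
print(max_user_defined_roles((10, 6), 6))  # 122
```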
8.4 - Tunnel Forwarding
Please refer to the Forwarding Modes of Operation for a detailed overview of tunnel forwarding.
Supported role types
When mixed or tunnel forwarding mode is enabled in a profile, the gateway makes the role assignment decision, and a role is assigned at both the AP and the gateway:
-
AP – A default or user defined role
-
Gateway – A default, user defined, or global role.
Split role assignment
Tunneled clients can be assigned the same or different roles on the AP and gateway. A default role is assigned on both the AP and gateway if no role is dynamically assigned from an authentication server, Central NAC service, server derivation rule (SDR), or user derivation rule (UDR). Additionally, an AP will assign a default role to a tunneled client if a dynamically assigned role is not present on the AP. As global client roles are not supported by APs, an AP can only assign a default or user defined role to a tunneled client.
The following combinations of role assignments are supported for tunnel forwarding:
-
Default role – Assigned on both APs and gateways if no dynamic role assignment is made.
-
User defined role – Assigned on both APs and gateways if a dynamic role assignment is made and the role is present in both the AP and gateway configuration groups.
-
Separate roles – A default role is assigned on the APs and a user defined or global role is assigned on the gateways.
Separate roles can only be assigned on the AP and gateway when a dynamically assigned role is not present on the AP. When a role is dynamically assigned to a tunneled client device or user identity that is not present on the AP, the gateway will assign the dynamically assigned role while the AP will assign the default role. For most deployments, the default role on the AP will only contain the default network access policy (allow all) while the user defined or global role on the gateway will contain more restrictive network access rules and attributes.
Role derivation and assignment
For mixed and tunnel forwarding, the gateway operates as the authenticator and makes the role assignment decision. When a client device attaches to an AP or a device/user identity is authenticated, a role is assigned on both the AP and the gateway.
Default role
A default role is created for every mixed or tunnel mode profile. The default role assignment for a profile can be viewed in the profile creation workflow when Role Based access is selected. The default assignment rule cannot currently be modified.

Tunnel profile default role assignment rule.
A default role is assigned to client devices or user identities when no role is dynamically assigned from a RADIUS authentication server, Central NAC service or derivation rule. They are also assigned if a dynamically assigned role is not present on the AP or gateway.
Assignment rules
User defined and global client roles can be dynamically assigned to client devices or user identities by creating role assignment rules. Gateways support two types of role assignment rules:
-
Server derivation rules (SDR) – Can assign roles based on rules that match IETF or vendor-specific RADIUS attributes and values that are returned from a RADIUS server or Central NAC service.
-
User derivation rules (UDR) – Can assign roles based on rules that match MAC OUIs or DHCP options.
SDR and UDR assignment rules are optional and permit dynamic user defined role or global role assignment based on admin defined rules that include an attribute, operator, string value and the resulting role assignment. They operate like security access control lists (ACLs) where rules are evaluated in order (top down). The first assignment rule that is matched is applied. Assignment rules may also be re-ordered at any time.
Server derivation rules
SDR rules can be either configured within a profile creation workflow or directly within each gateway configuration group. They can be implemented for profiles that use MAC or 802.1X authentication.
SDR rules configured using a profile creation workflow are automatically orchestrated to the respective primary/secondary gateway cluster configuration groups. Each mixed or tunnel mode profile includes a corresponding authentication server group in the primary/secondary cluster gateway configuration groups. SDR rules configured in the workflow are automatically added as server rules in the respective tunnel profile authentication server groups.
When both a primary and secondary gateway cluster are assigned to the profile, the server derivation rules should be managed directly in the mixed or tunnel mode profile. This ensures that the derivation rules are identical for each cluster by automatically modifying the authentication server group configurations in both locations. If SDR rules are instead defined directly within each gateway configuration group, additional care must be taken to keep the rules identical in both authentication server groups; otherwise unpredictable role assignments will occur.

Tunnel profile SDR rule example.
User derivation rules
UDR rules are configured per gateway configuration group and can be used to dynamically assign user defined or global client roles to tunneled client devices based on MAC address or DHCP signatures. Each UDR ruleset can contain multiple rules that are evaluated in order (top-down). The first rule that is matched is applied.
UDR rules are configured per gateway configuration group by selecting Security > Advanced > Local User Derivation Rules. Each ruleset has a unique name and can contain multiple rules in order of priority. Existing rules can be re-ordered at any time by selecting a rule and moving it above or below another rule.

Example UDR: a ruleset named tunnelprofile containing two DHCP option rules. The first rule matches the option 55 signature of a MacBook Pro running Sonoma; the second matches the option 55 signature of an HP Windows 11 notebook.
A ruleset must be assigned to an orchestrated AAA profile by selecting Security > Role Assignment (AAA Profiles). Each mixed or tunnel mode forwarding profile will have a corresponding AAA profile orchestrated on the applicable gateway configuration groups. Only one UDR ruleset can be applied per orchestrated AAA profile.

UDR ruleset assignment to an AAA profile.
DHCP option-based rules are evaluated post authentication and are only applicable once a VLAN assignment has been made, as they operate by matching option fields transmitted by client devices in DHCP discover and request messages. DHCP option-based rules should not be used to assign user defined roles or global client roles that result in a VLAN assignment change and are not applicable for profiles with Captive Portal enabled.
RADIUS assigned
Clients connected to mixed or tunnel mode forwarding WLANs or downlink ports requiring MAC or 802.1X authentication can be directly assigned a user defined or global role from a RADIUS authentication server or Central NAC service configured to return the Aruba-User-Role AVP.
APs forward RADIUS access requests to their assigned designated device gateway (DDG) which is proxied to the configured external RADIUS server or the Central NAC service. The gateways will accept the Aruba-User-Role AVP from a RADIUS Server or Central NAC with no additional configuration being required in the profile. If the user defined role is present on the gateway, the role name supplied by the Aruba-User-Role AVP is assigned.
A role assignment rule can be configured to change the received role name if required. For example, if the Aruba-User-Role is returned with the value Employees, a role assignment rule can be configured to match the received role name and apply a different role name such as employee-role. This can be a useful tool for migrations and troubleshooting.
Assignment order
When multiple role assignment outcomes are possible for a client device or user identity, an assignment priority is followed by the gateway. As a rule, a user defined role received in the Aruba-User-Role AVP or from an SDR will take precedence over a user defined role assigned from a UDR. If no user defined role is derived, or the derived role does not exist on the AP or gateway, a default role is assigned.
Mixed/tunnel forwarding role assignment order
Priority | Assignment | Notes |
---|---|---|
1 (Highest) | Aruba VSA | Aruba-User-Role |
2 | Server derivation rule (SDR) | Evaluated in order |
3 | User derivation rule (UDR) | Evaluated in order |
4 (Lowest) | Default role | If no user defined role is derived |
User defined roles can also be dynamically assigned post authentication which is not captured in the above assignment order. A user defined role change can occur as the result of a DHCP UDR assignment rule during attachment or change of authorization (CoA) message received from a RADIUS authentication server or the Central NAC service. User defined roles assigned from a DHCP UDR assignment rule or CoA will take precedence over a previously assigned default or user defined role post authentication.
For example, if an 802.1X client device is assigned a user role using the Aruba-User-Role AVP and a DHCP UDR assignment rule is matched that assigns a different role, the role derived from the DHCP assignment rule will take precedence.
Policy enforcement
When tunnel forwarding is enabled in a profile, the APs and gateways can both operate as policy enforcement points. Both can inspect user traffic and make forwarding and drop decisions based on the network access policies defined within each assigned role.
The network access policies included in the roles assigned at the AP and gateway determine which device inspects the traffic and makes the drop or forward decision. For most tunneled deployments, the client device or user identity will be assigned a default role on the AP and a user defined role on the gateway. The default role on the AP includes a default allow-all rule that permits all traffic to be forwarded, while the user defined role on the gateway includes more restrictive network access policies and provides enforcement.

Tunnel forwarding policy enforcement.
For mixed forwarding, the enforcement point depends on the forwarding mode utilized for each client device or user identity.
Ultimately the network access policies assigned to the default and user defined roles determine if the AP, gateway, or both perform the packet inspection and enforcement. As a general recommendation, use the AP as the enforcement point for bridged forwarding and the gateway as the enforcement point for tunnel forwarding.
While the roles on both the AP and gateway can each contain separate network access policies, this should be avoided, as doing so results in a more complex policy deployment model with firewall functions distributed between the two devices. If network access rules must be implemented on both the AP and gateway for tunneled traffic, the less restrictive policies should be applied at the AP and the more restrictive or complex policies at the gateway.
8.5 - User-Based Tunneling
This section provides an overview of how user defined roles are implemented on AOS-10 gateways and AOS-CX access layer switches for User-Based Tunneling (UBT) deployments. This section covers the role types that are supported on gateways and switches and how the roles are configured and managed. This section also provides details for how roles are assigned and where network access permissions are enforced.
Role types
UBT deployments implement user defined roles on the access layer switches and gateways, which are independently configured and managed by the administrator. The access layer switches may optionally implement downloadable user roles (DUR) from a ClearPass Policy Manager (CPPM) server, in which case user defined roles are dynamically downloaded and installed on the access layer switches upon successful authentication and authorization.
User defined roles
User defined roles are configured and named by the administrator and must be configured per gateway and switch configuration group. They may also be directly configured per access layer switches that are not managed by Central.
User defined roles are assigned to UBT client devices or user identities by either a RADIUS authentication server or the Central NAC service. As the access layer switch is the authenticator, the user defined role cannot be assigned by gateways using role derivation rules.
The user defined role assigned on the access layer switch and the gateway can share the same role name or use different names. Each user defined role on the access layer switch that is used for UBT includes specific attributes that determine the cluster to which UBT traffic is tunneled and the gateway role that is assigned.
As gateways and switches are often managed and configured by separate IT teams, the gateway role mapping allows for discrepancies between role names. For example, a VoIP phone can be assigned a user defined role named ip_phone on the access layer switch and a role named voip-role on the gateway. The same role name may also be assigned on both.

UBT switch roles and gateway mappings
Role configuration and management
User defined roles must be configured and managed separately per gateway and switch configuration group. For access layer switches, UBT configuration, user defined roles and gateway mappings can be applied either using configuration templates or the MultiEdit configuration editor. For gateways, user defined roles, attributes, and network access permissions are configured per gateway configuration group using the Central UI.
Access layer switches
User defined roles can be added, removed, and configured directly per switch configuration group using either templates or the MultiEdit configuration editor. Template groups allow the same configuration to be applied to all CX switches within a configuration group, or different configurations to be applied to groups of switches based on model and version, with the UBT zone configuration, user defined roles, and gateway role mappings defined in each respective template.
The MultiEdit configuration editor applies configuration to multiple CX switches simultaneously, or to individual switches, based on selection within the Central UI. UBT zone configuration, user defined roles, and gateway role mappings can be added, removed, or modified by selecting one or more CX access layer switches, editing the configuration, and adding the necessary CLI commands, all within a single workflow in the Central UI that includes syntax checking.

An example switch group role configuration using MultiEdit, with three user defined roles named contractor, employee, and ip_phone, each with a common UBT zone assignment but unique gateway role mappings.
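A configuration of this shape can be sketched in AOS-CX CLI, as it might be entered through MultiEdit. The zone name, controller IP, and gateway role names below are assumptions for illustration; verify the exact syntax against your AOS-CX release.

```
! UBT zone: defines the cluster that tunneled traffic is sent to
! (zone name "cluster1" and controller IP are illustrative)
ubt zone cluster1 vrf default
    primary-controller ip 10.10.101.10
    enable

! Three user defined roles sharing one UBT zone, each mapped to a
! different gateway role (gateway role names are illustrative)
port-access role contractor
    gateway-zone zone cluster1 gateway-role contractor-role
port-access role employee
    gateway-zone zone cluster1 gateway-role employee-role
port-access role ip_phone
    gateway-zone zone cluster1 gateway-role voip-role
```

Because all three roles reference the same zone, their traffic is tunneled to the same cluster; only the role assigned on the gateway side differs.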
Gateways
User defined roles can be added, removed, and configured directly per gateway configuration group using the Central UI. The admin can configure network access permissions and attributes for existing roles or add, delete, and configure user defined roles.
For gateway configuration groups, default and user defined roles are configured and managed under Security > Roles. The role table lists all roles configured in the gateway configuration group, including predefined roles, default roles, user defined roles, and global client roles.
Each role is configured by selecting it in the table, which displays an additional table presenting the network access policies and attributes assigned to the selected role. An example of role management within a gateway configuration group is depicted below, where a role named contractor-role is selected and its network access policies are displayed.

Gateway user defined role configuration and management
Each user defined role on the gateway that is used for UBT must include a VLAN assignment, defined as an attribute within the role. A VLAN ID or VLAN name is assigned using a dropdown selection under the More option for each role; the VLAN ID or name must already be configured and present within the configuration group.
An example of VLAN assignment for a user defined role named contractor-role is depicted below. In this example, UBT clients will be assigned to VLAN ID 82 within the cluster.

Gateway user defined role VLAN assignment.
Role derivation and assignment
When User-Based Tunneling (UBT) is deployed, a user defined role configured on the access layer switch initiates the user based tunneling session to a cluster of gateways. For a typical deployment, the UBT ports are configured with MAC and/or 802.1X port-access security where each wired device (unique MAC) is authenticated against a RADIUS server or Central NAC service. Upon successful authentication, the RADIUS authentication server or Central NAC service returns the Aruba-User-Role AVP that determines the user defined role assignment.
Each user defined role used for UBT includes additional attributes that specify a UBT zone and gateway role:
- UBT zone – References configuration within the access layer switch that determines the primary and, optionally, secondary cluster that traffic is tunneled to. Each role supports one zone assignment.
- Gateway role – Determines the role that is assigned on the gateway.
The user defined role assigned to the UBT client device or user identity must include both the UBT zone and gateway role attributes, as they determine the primary or secondary cluster the traffic is tunneled to as well as the role assigned within the cluster. The assigned role on the cluster determines the network access policies that are applied and the VLAN assignment within the cluster.
When a wired client device or user identity is authenticated by the access layer switch and a user defined role with UBT attributes is assigned, the traffic is tunneled to the respective primary or secondary cluster. Each UBT client is anchored to a user designated gateway (UDG) node within the cluster based on the published bucket map. The configured gateway role determines the user defined role assigned to the UBT session on the UDG, along with the VLAN assignment. Each UBT session can be assigned the same role name on the access layer switch and UDG, or separate role names if required.
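As a sketch of the role derivation step, a FreeRADIUS users-file entry could return the Aruba-User-Role VSA after MAC authentication. The MAC address and role name below are hypothetical, and any RADIUS server that supports Aruba VSAs can return the same attribute.

```
# FreeRADIUS users-file sketch (MAC authentication entry; the MAC
# address and role name are hypothetical).
# On success, the Aruba-User-Role VSA selects the user defined role on
# the AOS-CX authenticator, which in turn carries the UBT zone and
# gateway role attributes that drive tunneling and UDG role assignment.
"aabbccddeeff" Cleartext-Password := "aabbccddeeff"
        Aruba-User-Role = "ip_phone"
```

The same attribute can equally be returned from an 802.1X authentication or from the Central NAC service; the switch-side role lookup behaves identically in each case.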
Policy enforcement
For UBT, the access layer switches and gateways can both operate as policy enforcement points, however the traffic inspection capabilities of both devices are quite different. The access layer switches do not implement a stateful packet inspection firewall and only support stateless access control lists (ACLs) which can be applied to ingress or egress traffic. Gateways implement a deep packet inspection (DPI) firewall that is stateful and application aware. Traffic is inspected on ingress.
For most UBT deployments, the network access policies will be defined within the user defined roles on the gateways, and all north/south and east/west traffic flows can be inspected and enforced by the gateways. Gateway enforcement also allows the same user defined roles, network access policies, and attributes to be applied to both wireless and UBT clients, although the different client types should be assigned separate VLANs.
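Although gateway roles are normally managed through the Central UI as described above, the resulting enforcement configuration can be sketched in gateway CLI form. The ACL rules below are assumptions for illustration; the role name and VLAN follow the contractor-role example earlier in this section.

```
! Stateful session ACL enforced at the gateway on ingress
! (rule set is illustrative)
ip access-list session contractor-acl
    user any svc-dhcp permit
    user any svc-dns permit
    user any svc-http permit
    user any any deny

! User defined role: UBT sessions mapped to this role are placed in
! VLAN 82 and inspected against the session ACL above
user-role contractor-role
    access-list session contractor-acl
    vlan 82
```

Because the gateway firewall is stateful, return traffic for permitted sessions is allowed automatically, unlike the stateless ACLs available on the access layer switches.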

UBT policy enforcement