WLAN Features
The WLAN Features section describes the technologies and features used to implement an HPE Aruba Networking wireless LAN. Topics covered include wireless security, visitor Wi-Fi, wireless multicast, wireless QoS, wireless network traffic engineering, and WLAN resiliency.
Wireless Security
Wireless security is a key component of the HPE Aruba Networking WLAN solution. The latest improvements to wireless security are included in a protocol update called WPA3, which HPE Aruba Networking was instrumental in defining. Migrate wireless clients to WPA3 as soon as they support it to ensure a reliable and secure WLAN. More information on security models can be found in the AOS-10 Wi-Fi Design and Deployment Guide.
WPA3
WPA3 can be deployed using WPA3-Personal (SAE) or WPA3-Enterprise. WPA3 incorporates increased security, while the complexity remains the same as WPA2. WPA3 requires no changes in workflows or usage, with no new steps or caveats to remember. The Simultaneous Authentication of Equals (SAE) protocol was added to the IEEE 802.11s mesh networking standard and certified in 2012. SAE is an implementation of the dragonfly key exchange that performs a password-authenticated key exchange using a zero-knowledge proof. Each side proves it knows the password without exposing the password or password-derived data. The WPA3-SAE user experience is identical to WPA2-PSK, where a user simply enters the password and connects.
The Wi-Fi Alliance has published a list of WPA3-Certified Client Devices.
WPA3-Personal
WPA3-Personal is a replacement for WPA2-PSK. It uses password-based authentication using the dragonfly key exchange (RFC 7664), which is resistant to active, passive, and dictionary attacks. For backward compatibility, enable “Transition Mode” so that WPA3 capable clients connect using WPA3-SAE and legacy clients connect using WPA2-PSK.
WPA3-Enterprise (CCM 128)
CCM 128 is WPA3 with AES CCM encryption and dynamic keys using 802.1X.
CCM 128 is the correct choice for networks moving to WPA3 today. The operating mode is backward compatible with WPA2, but adds optional support for 802.11w Protected Management Frame (PMF). Clients that are PMF capable (support 802.11w) and legacy clients can connect to the same SSID. The mode is supported in bridge, tunnel, and mixed-mode SSIDs.
WPA3-Enterprise (GCM 256)
WPA3 with AES GCM-256 encryption requires new key management (SHA-256), new ciphers, and PMF. Legacy clients are not supported. The operating mode can be used for sites requiring stronger key management and encryption when the client population supports GCM 256.
WPA3-Enterprise (CNSA)
WPA3 with AES GCM-256 encryption uses the Commercial National Security Algorithm (CNSA) suite (192-bit), new key management (SHA-384), and mandatory PMF endpoint support. The WPA3-Enterprise CNSA (192-bit) mode requires a compatible EAP server (such as ClearPass Policy Manager, version 6.8 or later) and requires EAP-TLS. Strict key exchange and cipher requirements may not be supported on all devices. The mode is supported in bridge, tunnel, and mixed-mode SSIDs. It is used primarily by government agencies.
Enhanced Open
Enhanced Open uses Opportunistic Wireless Encryption (OWE) to provide unauthenticated data encryption for open Wi-Fi networks. To the user, an Enhanced Open network appears just like an open network with no padlock symbol, but data are encrypted. OWE performs an unauthenticated Diffie-Hellman key exchange when the client associates with the AP.
This key is used to derive keys to encrypt all management and data traffic sent and received by the client and AP. Central proactively copies the keys to neighboring APs.
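The exchange can be illustrated with a toy finite-field Diffie-Hellman sketch. OWE itself uses elliptic-curve groups (RFC 8110); the tiny parameters below are for illustration only and are not secure:

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman illustrating the unauthenticated key
# exchange at the heart of OWE. Real OWE uses ECDH with standard curves;
# this small prime is purely for demonstration -- NOT cryptographically safe.
P = 0xFFFFFFFB  # largest prime below 2^32
G = 5           # generator

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

def dh_shared(priv, peer_pub):
    # Both sides arrive at the same value; hash it into a session key.
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(8, "big")).hexdigest()

# Client and AP each generate a keypair and exchange only public values
# during association; no password or credential is ever sent or needed.
c_priv, c_pub = dh_keypair()
a_priv, a_pub = dh_keypair()
assert dh_shared(c_priv, a_pub) == dh_shared(a_priv, c_pub)
```

Because neither side proves its identity, OWE protects against passive eavesdropping but not against an active on-path attacker, which is why it suits open visitor networks rather than corporate access.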
No additional device provisioning is required for Enhanced Open. HPE Aruba Networking recommends enabling Enhanced Open for visitor networks where encryption is needed but authentication is not required, such as coffee shops, bars, schools, public venues, and stadiums.
Transition Mode enables an administrator to configure a single open SSID for backward compatibility. When Enhanced Open is enabled, the AP automatically creates two basic service sets (BSSIDs) with separate beacons.
BSSID 1 — An open network for non-Enhanced Open stations with an information element (IE) to indicate a BSSID 2 is available. Legacy clients connect to this BSSID, and their traffic is not encrypted.
BSSID 2 — A hidden Enhanced Open network with the Robust Secure Network Indicator Element (RSN-IE) Authentication Key Management (AKM) field indicating the use of suite 18 (the OWE suite) for authentication. In addition, an IE to indicate BSSID 1 is available. Enhanced Open-capable clients connecting to the hidden SSID receive PMF and encryption benefits.
HPE Aruba Networking supports configuring Enhanced Open SSID in bridge or tunnel mode.
Multiple Pre-Shared Key
The Multiple Pre-Shared Keys (MPSK) feature enables devices connecting to the same SSID to use different PSKs. One helpful example is headless Internet-of-Things (IoT) devices that do not support 802.1X. MPSK enhances WPA2 pre-shared key mode by enabling device-specific or group-specific passphrases. Using ClearPass Policy Manager, passphrases are assigned administratively to individual devices or to groups of devices based on common attributes such as profiling data, or assigned uniquely to individual device registrations. This establishes a one-to-one relationship between devices and a specific user to provide visibility, accountability, and management, and subsequently reduces the administrative burden when changing the passphrase for a set of devices.
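The lookup model behind MPSK can be sketched as follows; the table contents, MAC addresses, and role names are hypothetical and do not represent a ClearPass API:

```python
# Sketch of the MPSK idea: one SSID accepts different passphrases,
# resolved per device (or device group) rather than one shared PSK.
# All entries below are illustrative placeholders.
MPSK_TABLE = {
    # client MAC              (passphrase,        assigned role)
    "aa:bb:cc:00:00:01": ("sensor-group-key", "iot-sensor"),
    "aa:bb:cc:00:00:02": ("printer-key",      "printer"),
}

def lookup_psk(mac: str):
    """Return (passphrase, role) for a known device, or None if unknown."""
    return MPSK_TABLE.get(mac.lower())

print(lookup_psk("AA:BB:CC:00:00:01"))  # ('sensor-group-key', 'iot-sensor')
```

Revoking or rotating one device's key touches only its table entry, which is the administrative win over a single shared passphrase.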
Note: MPSK is not compatible with WPA3-Personal (SAE).
Visitor Wireless
HPE Aruba Networking can provide access to visitors and employees over the same infrastructure, while ensuring that visitor access does not compromise corporate network security.
Using the organization’s existing WLAN provides a convenient, cost-effective way to offer Internet access for visitors and contractors. The wireless visitor network:
- Provides Internet access to visitors through an open wireless SSID, with web access control in the gateway’s firewall.
- Supports the creation of temporary visitor authentication credentials that an authorized internal user can manage.
- Keeps visitor network traffic separate from the internal network.
Every AP can be provisioned with controlled, open access to wireless connectivity to the Internet. Visitor traffic is tunneled securely from the wireless AP back to the gateway and into a separate VLAN with Internet-only access. The figure below shows how traffic is passed from the wireless visitor network VLAN to the firewall.
Visitor Wireless Network
A visitor network should require a username and password entered on a captive portal. Lobby ambassadors or other administrative staff can issue temporary visitor accounts. This design provides the flexibility to tailor control and administration to the organization’s requirements while maintaining a secure network infrastructure.
It is common for the gateway to act as a DHCP server and router for visitor clients. As long as the projected load metrics are below the gateway’s recommended limits, Layer 3 operations can be enabled for visitors or IoT networks.
When routing is enabled on a gateway, use firewall policies to control traffic between VLANs. The DHCP service on the gateway is not redundant, so an external DHCP server is recommended for mission-critical visitor access.
MultiZone
If an organization’s security policy mandates that wireless guest traffic be tunneled to the DMZ, AOS-10’s MultiZone feature can be configured to send client traffic from a campus AP or switch to other gateway clusters through GRE tunnels, with optional IPsec encryption for APs. It is supported by the following:
- Campus APs using profiles configured for mixed or tunnel forwarding
- Microbranch APs with profiles configured for Centralized Layer 2 (CL2) forwarding
- CX switches configured for User-Based Tunneling (UBT)
The illustration below depicts an example campus topology with an AOS-10 gateway pair in the DMZ. Tunneling from the campus AP and UBT-enabled CX switch directly to the cluster in the DMZ allows for the segmentation of untrusted guest traffic required by the organization.
Visitor Wireless with Multizone
More details about wireless and wired MultiZone can be found in the MultiZone section of the AOS-10 Design Fundamentals and Concepts Guide and the MultiZone in UBT section of the TechDocs User-Based Tunneling page, respectively.
Bridge Mode Deployment
Bridge mode provides a simplified deployment option when tunneled traffic and advanced gateway features are not required. In this mode, wireless traffic is bridged directly from the AP into the wired network. AP access switch ports must be configured as trunks to extend SSID-to-VLAN connectivity. The AP performs packet encryption, user authentication, and policy enforcement, while Aruba Central manages RF optimization, key management, live upgrades, monitoring, and troubleshooting.
An AP-only deployment is suitable for environments that require Wi-Fi connectivity but do not need traffic tunneling or other advanced gateway functions. Typical use cases include small offices and branch locations.
More information on AP-only deployments and Bridge Forwarding can be found in the AOS-10 Fundamentals Guide.
The figure below illustrates an AOS-10 bridge mode topology, which does not include Mobility Gateways.
Roaming
A roaming domain consists of APs within a common RF coverage area that share VLANs and broadcast domains (IP networks). This coverage area may be limited to a single floor or building or, if the network architecture allows, it can span multiple buildings within a campus.
In an AP-only deployment, WLANs bridge user traffic directly into user VLANs, while a management VLAN facilitates communication between APs. To support seamless client roaming and maintain VLAN membership, these VLANs must be trunked across AP uplinks, access switches, and the broader switching infrastructure.
A typical deployment includes a dedicated AP management VLAN and multiple wireless user VLANs. The number of user VLANs depends on deployment scale, broadcast domain size, and segmentation requirements.
Each roaming domain supports a defined maximum number of APs and clients. Because AP management and user VLANs are extended across all APs in the domain, broadcast and multicast frames are forwarded over these VLANs and processed by all APs. These frames are essential for AP and client functions but can introduce scalability limitations. As the number of APs and clients increases, so does the volume of broadcast and multicast traffic, which impacts AP processing capacity.
The maximum validated scale for a single roaming domain is 500 APs or 5,000 clients, whichever limit is reached first. These limits have been tested and verified by HPE Aruba Networking.
An AP-only deployment can support multiple roaming domains, each with up to 500 APs and 5,000 clients, provided that the AP management and wireless user VLANs for each domain are isolated at Layer 3 (residing on separate IP networks and broadcast domains).
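These limits can be turned into a simple planning check, using the validated figures quoted above:

```python
from math import ceil

# Validated scale limits for a single roaming domain (quoted above).
MAX_APS = 500
MAX_CLIENTS = 5000

def within_roaming_limits(ap_count: int, client_count: int) -> bool:
    """True if one roaming domain can absorb the planned load."""
    return ap_count <= MAX_APS and client_count <= MAX_CLIENTS

def domains_needed(ap_count: int, client_count: int) -> int:
    """Minimum number of Layer 3-isolated roaming domains required."""
    return max(ceil(ap_count / MAX_APS), ceil(client_count / MAX_CLIENTS), 1)

print(domains_needed(1200, 9000))  # 3 (the AP count is the binding limit)
```

Whichever limit is reached first drives the domain count, matching the "whichever limit is reached first" rule above.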
Tunnel Mode Deployment
Gateways can be introduced into a greenfield deployment or an existing bridge mode environment to enable tunnel mode. This deployment model enhances security and operational flexibility by centralizing traffic forwarding through gateways. Gateways can be deployed individually or in a cluster for improved redundancy and scale. Clusters are formed automatically when gateways are assigned to the same group in Central.
Tunnel mode provides increased visibility into application traffic, enabling prioritization of business-critical data. Additional benefits include micro-segmentation, dynamic RADIUS proxy, and encryption over the LAN. Seamless roaming is supported across an entire Layer 3 campus, ensuring a consistent user experience.
While AOS-10 allows multiple versions of gateways and APs within a single deployment, HPE Aruba Networking recommends matching AP and gateway software versions. This ensures consistent feature support and maintains overall WLAN stability.
More information on Gateway Deployments and Tunneled Forwarding can be found in the AOS-10 Fundamentals Guide.
The diagram below shows the AOS-10 tunneled mode topology.
Gateway Cluster Design
Tunneled WLANs use gateways as a critical component of the data plane for wireless traffic. A well-designed gateway cluster is essential for ensuring the reliability and scalability of a WLAN deployment.
A cluster is a group of HPE Aruba Networking Gateways operating as a single entity to provide scale, high availability, and service continuity for tunneled clients in a network.
Clustering provides the following features and benefits:
- Stateful Client Failover – When a gateway is taken down for maintenance or fails, APs and clients continue to receive service from another gateway in the cluster without any disruption to applications.
- Load Balancing – Device and client sessions are automatically distributed and shared among the gateways in the cluster. This distributes the workload between the cluster nodes, minimizes the impact of maintenance and failure events, and provides a better connection experience for clients.
- Seamless Roaming – When a client roams between APs, the client remains anchored to the same gateway in the cluster to provide a seamless roaming experience. Clients maintain their VLAN membership and IP addressing as they roam.
- Ease of Deployment – A gateway cluster is automatically formed when assigned to a group or site in Central without requiring manual configuration.
- Live Upgrades – Customers can perform in-service cluster upgrades of gateways while the network remains fully operational. The Live Upgrade feature enables fully automated upgrades. This is a key feature for customers with mission-critical networks that must remain operational 24/7.
More information on clustering can be found in the AOS-10 Fundamentals Guide.
The AOS-10 Cluster Scale Calculator provides guidance for cluster design and helps determine the optimal number of nodes needed to meet capacity and redundancy requirements. Proper planning ensures efficient resource utilization, resulting in a resilient and scalable tunneled WLAN deployment. For a detailed guide on gateway cluster planning, refer to the AOS-10 Gateway Planning documentation.
Key design considerations include:
- Base Capabilities: Choose a platform that meets requirements for the number of clients, throughput, and supported cluster nodes to ensure long-term scalability.
- Redundancy: Deploy an N+1 gateway cluster to ensure failover capability and maintain service continuity.
- Cluster Uniformity: Use homogenous clusters with gateways of the same model whenever possible to optimize performance and simplify operations.
- Platform Compatibility: Select a gateway platform that supports the AP models deployed in the environment.
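A rough sizing sketch follows, assuming a hypothetical per-gateway client capacity; consult the AOS-10 Cluster Scale Calculator for real platform figures:

```python
from math import ceil

# N+1 cluster sizing sketch: compute gateways needed for the client
# load, then add one node for failover. The capacity argument is a
# placeholder, not a real platform specification.
def cluster_size(clients: int, clients_per_gateway: int) -> int:
    n = ceil(clients / clients_per_gateway)
    return n + 1  # N+1 redundancy: one spare node for failover

print(cluster_size(12000, 8000))  # 3 gateways: 2 for load + 1 spare
```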
Network Integration
In campus environments, gateways should be deployed at Layer 2, with Layer 3 operations handled by an upstream device such as a VSX switch pair. The default gateway for client traffic should reside on the upstream infrastructure rather than on the WLAN gateway. This design approach maintains consistency, optimizes performance, and ensures seamless Layer 3 operations. The upstream switching infrastructure should be configured for high availability, such as with VSX, to provide redundancy.
A dedicated management VLAN should be provisioned to assign system IP addresses to gateways, which are required for clustering. Wireless clients can be assigned manually to a single VLAN at the WLAN profile level or assigned dynamically to different VLANs based on their role. VLANs configured on gateways must be properly trunked to ensure Layer 2 reachability across the cluster.
When selecting switching platforms to connect the gateways to the LAN, consider support for large MAC address tables and high interface speeds. This is particularly critical in large-scale deployments, such as public venues and universities, where a high volume of wireless clients is expected.
Following these best practices ensures that a gateway cluster is designed for high availability, scalability, and end-to-end redundancy.
Underlay Requirements
A well-designed underlay is critical for a successful tunneled WLAN deployment, ensuring reliable reachability between infrastructure components. The primary requirement is connectivity between the management interfaces of APs and the system IPs of the gateways in a cluster.
AP to Gateway Reachability
For tunneled WLANs, each AP establishes tunnels to the gateway cluster. This requires consistent IP reachability between the AP management IP and the gateway system IP. Any routing or firewall policies in the underlay must allow this communication.
Jumbo MTU Support (Optional)
Tunneled WLANs use GRE encapsulation, which adds overhead to the packet size. Standard MTU values are sufficient in most deployments. However, when clients generate large frames or when deploying User-Based Tunneling (UBT), enabling jumbo MTU support in the underlay is recommended. HPE Aruba Networking recommends configuring an MTU of 9198 on AOS-10 gateways and AOS-CX switches when large frames are expected. Improper MTU sizing can result in fragmentation, latency, and performance degradation under specific traffic conditions.
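The overhead arithmetic can be sketched as follows; the GRE header size varies with options, so the byte counts are illustrative:

```python
# Back-of-the-envelope check of tunnel overhead. A client frame carried
# in a GRE tunnel gains an outer IPv4 header (20 B) and a GRE header
# (4-8 B depending on options); the figures here are illustrative.
OUTER_IPV4 = 20
GRE_HEADER = 8

def required_underlay_mtu(client_payload_mtu: int) -> int:
    """Underlay MTU needed to carry a client frame without fragmentation."""
    return client_payload_mtu + OUTER_IPV4 + GRE_HEADER

# A standard 1500-byte client frame needs roughly 1528 bytes in the
# underlay, which is why jumbo frames avoid fragmentation when clients
# send full-size or large frames.
print(required_underlay_mtu(1500))  # 1528
```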
Underlay validation should include path testing, proper routing configuration, and confirmation that all intermediate devices support the required MTU where applicable.
Roaming
With a centralized forwarding architecture, client devices can roam seamlessly among APs that are tunneling user traffic to a common gateway cluster. The client devices can maintain their VLAN membership, IP addressing, and default gateway because the user VLANs and broadcast domains are common between the cluster members. With the clustering architecture, the client’s MAC address is also anchored to a single cluster member regardless of the AP to which the client device is attached. The client MAC address moves only during a cluster node upgrade or outage.
Hard roaming is required in AP-Gateway deployments if a client device transitions between APs that tunnel the user traffic to separate gateway clusters. While the user VLAN IDs may be common between clusters, the IP subnets or broadcast domains must be unique per cluster. Any client device that moves between gateway clusters must obtain a new IP address and default gateway after the roam.
WLAN Policy
Both APs and gateways support robust enforcement mechanisms to enhance network security and traffic control:
- Role-Based Stateful Firewall – Enforces security policies using firewall aliases, Application Layer Gateways (ALGs), and role-based rules, allowing for scalable and granular access control.
- Deep Packet Inspection (DPI) – Uses Qosmos’s application engine and signature database to identify nearly 3,500 applications, enabling advanced traffic classification and policy enforcement.
- Web Content, Reputation, and Geo-location Filtering – Uses Webroot’s machine learning-based classification system to assess content categories, website reputation, and geographic origin for billions of URLs.
The Rogue Access Point Intrusion Detection System (RAPIDS) automatically detects and locates unauthorized APs, regardless of the deployment persona, using a combination of wireless and wired network scans. RAPIDS uses existing, authorized APs to scan the RF environment for unauthorized devices in range. RAPIDS also scans a wired network to determine if the rogues detected wirelessly are connected physically. RAPIDS can be deployed with “hybrid” APs serving as both APs and sensors or as an overlay architecture where APs act as dedicated sensors called air monitors (AMs). RAPIDS uses data from both the dedicated sensors and deployed APs to provide the most complete view of the wireless environment. The solution improves network security, manages compliance requirements, and reduces the cost of manual security efforts. For more details on RAPIDS, refer to the AOS-10 Fundamentals Guide.
Gateways provide additional security capabilities, including Intrusion Detection and Prevention Systems (IDS/IPS). The IDS/IPS engine performs deep packet inspection to analyze network traffic for malware and suspicious activity. When a threat is detected, IDS generates alerts for administrators, while IPS actively blocks malicious traffic to prevent network compromise.
Role-Based Policy
Roles are a foundational architectural element of HPE Aruba Networking infrastructure, enabling dynamic segmentation and policy enforcement for client devices. A role represents the identity of a client or device. It is assigned a configurable set of policies and attributes that define network access privileges for users and devices connected to APs and gateways. In addition to access control, roles may include attributes such as VLAN assignments, captive portal configurations, or bandwidth contracts. Policy language within a role determines host, network, and application permissions. More details on roles can be found in the AOS-10 Fundamentals Guide.
AOS-10 APs and gateways support default roles, user-defined roles, and global client roles. Default roles apply to wired or wireless clients when no user-defined role is assigned by an authentication server or role derivation rule. Global client roles are used in the NetConductor solution and are out of scope for this chapter.
Bridge Mode Deployment
In bridge mode, the AP determines the role assignment. Typically, bridged clients receive a user-defined role assigned by an authentication server such as ClearPass or Central NAC.
APs performing MAC or 802.1X authentication accept the Aruba-User-Role RADIUS attribute without requiring additional configuration.
When bridge forwarding is enabled, the AP acts as the policy enforcement point, inspecting user traffic and applying forwarding or drop decisions based on the assigned role and its associated network access policies.
Tunnel Mode Deployments
In tunnel mode, the gateway determines the role assignment. Tunneled clients typically receive a user-defined role from an authentication server such as ClearPass or Central NAC. Role assignment occurs at both the AP and the gateway:
- AP – Assigned a default or user-defined role.
- Gateway – Assigned a default, user-defined, or global role.
APs forward RADIUS authentication requests to their assigned gateway, which proxies them to the configured external RADIUS server. The gateway accepts the Aruba-User-Role RADIUS attribute without additional configuration.
If roles are not used for a WLAN, a default role can be specified in the WLAN profile to assign the same role to all clients connecting to that WLAN.
In most tunneled-mode deployments, APs use a default role with an allow-all rule, while role-based enforcement occurs on the gateway cluster.
Role Enforcement
Policy enforcement occurs twice when using roles: once at the source role and again at the destination role.
When traffic is initiated from Role A, it is subject to policy enforcement based on that role. The traffic is then routed to Role B, where policies associated with the destination role also are enforced.
This dual enforcement model applies to both gateway-based and AP-only deployments. It enables granular control and enhances overall network security.
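A minimal sketch of the dual enforcement model, with hypothetical roles and rules:

```python
# Sketch of dual role enforcement: a flow is permitted only if the
# source role's policy AND the destination role's policy both allow it.
# Role names and rule sets are illustrative, not product configuration.
POLICIES = {
    "employee": {"allow": {"https", "dns"}},
    "printer":  {"allow": {"ipp", "dns"}},
}

def permitted(src_role: str, dst_role: str, service: str) -> bool:
    src_ok = service in POLICIES[src_role]["allow"]   # enforced at source
    dst_ok = service in POLICIES[dst_role]["allow"]   # enforced at destination
    return src_ok and dst_ok

print(permitted("employee", "printer", "dns"))    # True
print(permitted("employee", "printer", "https"))  # False (blocked at destination role)
```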
Note: If the deployment contains more than two gateways in a cluster using role-based policy, consult an HPE Aruba Networking account team for design assistance.
WLAN Multicast and Broadcast
Dynamic Multicast Optimization
The 802.11 standard states that multicast traffic over a WLAN must be transmitted at the lowest basic rate. Dynamic Multicast Optimization (DMO) is an HPE Aruba Networking technology that converts multicast frames to unicast before forwarding from a gateway to an AP. Unicast frames are acknowledged by the client and can be retransmitted if a frame is lost over the air. Unicast frames also are transmitted at the highest possible data rate supported by the client, which greatly reduces duty cycle in the cell, freeing up bandwidth for all users.
For performance optimization, avoid having more than one multicast source broadcasting the same data on the same WLAN datapath. Use the largest possible Layer 2 network to avoid converting multiple multicast streams simultaneously.
HPE Aruba Networking recommends leaving DMO disabled by default. Only enable DMO when a specific IP multicast application requires it, such as video streaming to many clients. Multicast implementations should be accompanied by a design review to ensure proper behavior and performance. Improper or unnecessary use of DMO can lead to increased overhead or unexpected results, particularly in large or complex deployments.
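Conceptually, DMO replicates one multicast frame into per-client unicast copies sent at each client's negotiated rate; a sketch with illustrative data structures:

```python
# Conceptual DMO sketch: instead of one multicast frame at the lowest
# basic rate, the AP replicates the frame as unicast to each subscribed
# client at that client's best data rate. Structures are illustrative.
def dmo_forward(frame: bytes, group_members: dict):
    """group_members maps client MAC -> negotiated data rate (Mbps)."""
    transmissions = []
    for mac, rate in group_members.items():
        # Each unicast copy is acknowledged and retransmittable, and is
        # sent at a high per-client rate instead of the basic rate.
        transmissions.append((mac, rate, frame))
    return transmissions

members = {"aa:aa:aa:00:00:01": 866, "aa:aa:aa:00:00:02": 1201}
for mac, rate, _ in dmo_forward(b"video-chunk", members):
    print(f"unicast to {mac} at {rate} Mbps")
```

The trade-off is clear from the loop: airtime per frame drops sharply, but AP work scales with group size, which is why DMO should be enabled only when a multicast application needs it.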
The figures below show a typical IP multicast topology with DMO enabled.
IP Multicast BSR, RP, MSDP, IGMP Snooping, and DMO Placement
Broadcast to Unicast Conversion
HPE Aruba Networking WLANs can convert broadcast packets into unicast frames to optimize airtime usage. Broadcast frames over the air must be transmitted at the lowest configured data rate (the “basic rate”). Because broadcasts have no delivery acknowledgment, there is no option to retransmit a lost broadcast frame. When the frame is converted to unicast, the AP can send it at a much higher data rate and receive delivery confirmation, so a lost frame can be retransmitted.
Unicast greatly decreases channel duty cycle and delivers frames at the highest possible data rate per client.
Broadcast Filtering
An SSID can be configured for broadcast filtering to optimize WLAN performance. The default setting for a WLAN managed in Central is ARP.
- ARP - The WLAN drops broadcast and multicast frames except DHCP, ARP, IGMP group queries, and IPv6 neighbor discovery protocols. Additionally, it converts ARP requests to unicast and sends frames directly to the associated clients.
- All - The WLAN drops all broadcast and multicast frames except DHCP, ARP, IGMP group queries, and IPv6 neighbor discovery protocols.
- Unicast ARP Only - This option enables the WLAN to convert ARP requests to unicast frames and send them to the associated clients.
- Disabled - The AP forwards all broadcast and multicast traffic to the wireless interfaces.
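The four modes can be summarized as a decision function; the frame classification names are illustrative:

```python
# Decision sketch for the broadcast-filtering modes listed above.
# Frame "kind" labels are illustrative classifications, not real fields.
ESSENTIAL = {"dhcp", "arp", "igmp-query", "ipv6-nd"}

def handle_frame(mode: str, kind: str, is_bcast_or_mcast: bool) -> str:
    if not is_bcast_or_mcast or mode == "disabled":
        return "forward"                       # unicast or filtering off
    if mode == "all":
        return "forward" if kind in ESSENTIAL else "drop"
    if mode == "arp":
        if kind == "arp":
            return "convert-to-unicast"        # ARP requests become unicast
        return "forward" if kind in ESSENTIAL else "drop"
    if mode == "unicast-arp-only":
        return "convert-to-unicast" if kind == "arp" else "forward"
    raise ValueError(f"unknown mode: {mode}")

print(handle_frame("arp", "arp", True))      # convert-to-unicast
print(handle_frame("arp", "netbios", True))  # drop
```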
WLAN Quality of Service
Quality of Service (QoS) is implemented in WLAN networks for two purposes:
- Optimize limited wireless channel resources for voice and video applications.
- Support an end-to-end QoS strategy across wired and wireless network infrastructure.
The Wi-Fi shared medium is a challenge for network applications that are sensitive to delay and jitter. As the number of devices contending for a single channel’s transmission resources grows, the probability of delayed access to the medium increases. Additionally, if one or more devices sharing the same channel use high bandwidth applications, it is possible that channel resources can be starved out for other devices. Due to the susceptibility of congestion on Wi-Fi’s shared medium, the benefits of prioritizing traffic sensitive to delay and jitter are more pronounced than typically found on wired networks.
Implementing QoS prioritizes sensitive application traffic access to a transmission medium over lower priority traffic. Applying QoS only to wireless traffic can provide significant benefits for real-time voice and video applications, even without a broader QoS strategy that encompasses the wired network.
Some applications may require end-to-end QoS configurations across both wired and wireless networks to ensure optimal operation. It also may be required by some vendors for application support.
HPE Aruba Networking gateways and APs support a default wireless prioritization strategy, dynamic application identification through the Unified Communications and Collaboration (UCC) feature, and QoS customization that enables end-to-end traffic prioritization across both wired and wireless networks.
When a client roams between access points, delay in Wi-Fi transmissions can reduce the quality of voice and video applications. In addition to QoS prioritization, network administrators should implement the fast roaming best practices documented in the Transmit and Basic Data Rates section of the VSG’s Radio Frequency Design chapter.
Wi-Fi Multimedia
Wi-Fi Multimedia (WMM) is a certification program created by the Wi-Fi Alliance that defines QoS for wireless transmission over Wi-Fi networks. WMM prioritizes network traffic into one of four access categories (ACs). Based on the assigned AC, differentiated access to the wireless medium is applied. The strategy provides a statistically higher probability of wireless medium access to frames associated with voice and video applications that are sensitive to delay and jitter.
Both Wi-Fi access points and endpoints can implement WMM to gain statistically higher prioritized channel access.
Note: WMM is supported in all Aruba Wi-Fi products.
Wi-Fi manages airtime contention using carrier-sense, multiple-access with collision avoidance (CSMA/CA). CSMA/CA requires that each device monitors the wireless channel for nearby Wi-Fi transmissions before transmitting a frame. The Wi-Fi standard defines a distributed system in which there is no central coordination or scheduling of clients or APs.
Based on the AC priority of the frame to be transmitted, the WMM protocol adjusts two CSMA/CA parameters: the random back-off timer and the arbitration inter-frame space. High-priority frames are assigned shorter random back-off times and arbitration inter-frame spaces, while low-priority frames must wait longer. This provides high priority frames a greater probability of gaining access to the shared medium ahead of lower priority traffic. The strategy improves the user experience of voice and video applications.
WMM defines four AC traffic levels in ascending priority:
- Background (AC_BK)
- Best-Effort (AC_BE)
- Video (AC_VI)
- Voice (AC_VO)
Back-off and Arbitration Inter-frame Timers for WMM
By default, traffic is forwarded using the best-effort WMM AC.
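The contention parameters behind this behavior follow the 802.11 EDCA station defaults; a sketch of how the backoff draw differs by access category:

```python
import random

# 802.11 EDCA default contention parameters for a non-AP station:
# a lower AIFSN and smaller contention window give voice and video a
# statistically earlier transmit opportunity than best-effort/background.
EDCA = {
    # AC       AIFSN  CWmin  CWmax
    "AC_VO": (2,     3,     7),
    "AC_VI": (2,     7,     15),
    "AC_BE": (3,     15,    1023),
    "AC_BK": (7,     15,    1023),
}

def backoff_slots(ac: str) -> int:
    """Draw the AIFS plus a random backoff (in slots) for this AC."""
    aifsn, cwmin, _ = EDCA[ac]
    return aifsn + random.randint(0, cwmin)

# Voice waits at most 2 + 3 slots before transmitting; background may
# wait 7 + 15 slots, so voice usually wins contention for the channel.
print(backoff_slots("AC_VO"), backoff_slots("AC_BK"))
```

(On collision the contention window doubles toward CWmax, which this sketch omits for brevity.)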
Marking Traffic for Prioritization
Network infrastructure requires a mechanism to assess how received traffic should be prioritized for further transmission.
There are two primary methods to mark network traffic for prioritization: Differentiated Services Code Point (DSCP) and Class of Service (CoS). Both methods modify header fields in network traffic to inform network infrastructure of traffic priority. DSCP is included in the Layer 3 IP header, and CoS is included in the 802.1Q header of a Layer 2 Ethernet frame. DSCP is the recommended method for marking traffic priority.
When wired traffic is received by an AP, QoS markings associate traffic to a WMM AC. When traffic is received by a wired network port, QoS markings determine the QoS policy and queueing strategy.
DSCP
Assigning DSCP values is the most flexible method of marking traffic prioritization. DSCP values are 6 bits in length, carried in the upper bits of the Type of Service (ToS) field in the IP header. There are 64 possible values (0-63), with zero as the default value. DSCP and industry-standard values used to identify traffic priority are defined in RFC 4594.
DSCP values can be assigned by any host originating traffic. For example, a host can be configured to assign a DSCP value to Teams or SIP traffic. Some network devices, including HPE Aruba Networking gateways and APs, also can modify DSCP values based on traffic characteristics such as source, destination, port, protocol, or application.
Because DSCP is included in the Layer 3 IP header, it can be preserved end-to-end between hosts across all network infrastructure in a communication path, including over routed links. This facilitates consistent, end-to-end QoS prioritization.
HPE Aruba Networking APs, gateways, and CX switches support DSCP for both IPv4 and IPv6.
CoS
A Class of Service (CoS) marking uses the 3-bit Priority Code Point (PCP) value defined by the IEEE 802.1p task group as part of the IEEE 802.1Q standard. A CoS priority has eight possible values (0-7), with zero as the default value.
A CoS marking has only link-level significance and can be applied only to Ethernet frames with VLAN tags. CoS is not a field in an 802.11 Wi-Fi frame header, so it cannot be used to mark prioritization over the wireless medium; a wireless client must use DSCP to mark traffic prioritization.
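The PCP placement within the 802.1Q Tag Control Information (TCI) field can be illustrated with a small sketch (helper names are hypothetical):

```python
def pcp_from_tci(tci: int) -> int:
    """The 3-bit PCP (CoS) value occupies the top bits of the 16-bit TCI."""
    return (tci >> 13) & 0x7

def build_tci(pcp: int, dei: int, vlan_id: int) -> int:
    """Assemble a TCI: 3-bit PCP, 1-bit DEI, 12-bit VLAN ID."""
    return ((pcp & 0x7) << 13) | ((dei & 0x1) << 12) | (vlan_id & 0xFFF)
```

Because the TCI exists only in the 802.1Q tag, this marking disappears on untagged links, which is why CoS cannot carry priority end to end the way DSCP can.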
Note: The 802.11 header includes a Traffic Identifier (TID) field to classify traffic prioritization. This 4-bit TID value is not used by HPE Aruba Networking WLAN products to automatically assign CoS or DSCP values on traffic forwarded from the WLAN to the wired network.
WMM Implementation
The following information describes the WMM implementation for AOS 10. Please consult product documentation or an HPE Aruba Networking account team for additional information when using AOS 8 or prior firmware versions.
HPE Aruba Networking APs running AOS 10 assign traffic to WMM ACs based on the DSCP value. They do not associate CoS values to WMM ACs. It is recommended that Wi-Fi only and end-to-end QoS strategies use DSCP markings.
The following table includes the default DSCP to WMM AC mapping:
DSCP Value Range | WMM AC |
---|---|
48-63 | AC_VO (Voice) |
32-47 | AC_VI (Video) |
0-7, 24-31 | AC_BE (Best Effort) |
8-23 | AC_BK (Background) |
The default DSCP mapping can be modified for both tunneled and bridged SSIDs. Individual DSCP values can be reassigned to another WMM AC priority to meet individual business needs. When reassigning one or more DSCP values to a non-default WMM AC, all remaining values (not specifically reassigned) persist at their default mapping.
The following table shows the default mapping of common DSCP class names often used in wired switch configuration to WMM ACs. The general categories below are Class Selector (CS), Assured Forwarding (AF), and Expedited Forwarding (EF), where a numerical value differentiates treatment within a class.
Class Name | DSCP Value | WMM AC |
---|---|---|
CS7 | 56 | AC_VO (Voice) |
CS6 | 48 | AC_VO (Voice) |
EF | 46 | AC_VI (Video) |
CS5 | 40 | AC_VI (Video) |
AF43 | 38 | AC_VI (Video) |
AF42 | 36 | AC_VI (Video) |
AF41 | 34 | AC_VI (Video) |
CS4 | 32 | AC_VI (Video) |
AF33 | 30 | AC_BE (Best Effort) |
AF32 | 28 | AC_BE (Best Effort) |
AF31 | 26 | AC_BE (Best Effort) |
CS3 | 24 | AC_BE (Best Effort) |
CS0 | 0 | AC_BE (Best Effort) |
SIP and other voice applications typically use DSCP 46 (EF), when marked by endpoints or third-party network infrastructure. The UCC feature also dynamically assigns a DSCP value of 46 to voice traffic. On HPE Aruba Networking APs, DSCP 46 is mapped by default to the WMM video AC. Therefore, it is recommended to manually reassign DSCP 46 to the WMM voice AC for any SSID that requires QoS. The value is assigned using the SSID’s WiFi Multimedia advanced settings.
DSCP priorities are assigned using the following methods:
- Gateways and APs can dynamically mark traffic with a DSCP value using the Unified Communication and Collaboration (UCC) feature set. This method automatically optimizes over-the-air Wi-Fi transmissions, where congestion is most likely to occur. Many endpoints do not mark sensitive traffic, making UCC an invaluable tool for improving the quality of common voice and video collaboration tools.
- Endpoints can mark traffic with a DSCP value when originating traffic. This method enables consistent, end-to-end QoS for both wired and wireless traffic. This typically requires administrative configuration, and it is primarily implemented on corporate assets with centralized management tools.
- Gateways and APs can mark traffic with a DSCP value using a role-based firewall policy. This provides network administrators the flexibility to prioritize wireless traffic based on traffic characteristics, which benefits sensitive application traffic that UCC does not identify.
- Third-party network infrastructure can re-mark traffic based on policy and application profiling.
Note: DSCP assignment using firewall policy should be focused on prioritizing over-the-air wireless transmission of wired traffic received with a default DSCP value of 0. HPE Aruba Networking gateways and APs cannot use DSCP or CoS values as matching criteria in firewall policy.
When endpoints assign a non-default DSCP value, administrators should modify the DSCP to WMM AC mapping to assign traffic to the desired WMM AC.
UCC Prioritization
When optimization of Wi-Fi resources is the primary objective and there is no requirement to implement a full end-to-end QoS strategy, the UCC feature can automatically identify traffic that benefits from WMM prioritization and assign a DSCP value. UCC provides a useful method to optimize over-the-air transmissions for common voice and video applications. This optimization is applied to both tunneled and bridged traffic.
The following table identifies UCC prioritized applications and their default DSCP reassignments:
Protocol | Voice DSCP Assignment | Video DSCP Assignment |
---|---|---|
SIP | 46 | 34 |
Skype for Business | 46 | 34 |
Teams | 46 | 34 |
Wi-Fi Calling | 46 | - |
Zoom | 46 | - |
Note: SIP TLS traffic is not detected using DPI and cannot be dynamically prioritized using UCC. In this case, endpoints can specify a DSCP value or a WLAN administrator can modify a role-based firewall policy to associate traffic with the appropriate WMM AC.
UCC voice and video DSCP assignments can be modified from their default value, but it is best practice to modify the DSCP to WMM AC assignment to change the WMM AC transmission selection.
UCC is enabled globally, and it is applied to all groups and sites in an HPE Aruba Networking Central account with the following prerequisites:
- Advanced licenses for HPE Aruba Networking APs.
- For each Central group using UCC, the deep packet inspection configuration must be set to App in Access Point > Services under the AppRF tab.
- AOS firmware version 10.5.0.0 or above.
End-to-End QoS
When end-to-end traffic prioritization is required between wireless and wired hosts, wired network infrastructure also must be configured to support QoS.
HPE Aruba Networking switches use QoS policy to prioritize egress traffic, when a port experiences congestion and must queue traffic for delayed transmission on the wire. The switch places specified traffic in higher priority queues that gain access to the link before lower priority queues using a weighted round-robin scheduling method. This provides predictable behavior for sensitive traffic during congested periods, while some lower priority traffic may be dropped.
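The weighted round-robin scheduling described above can be sketched as a simplified model (this is not switch firmware; queue names and weights are illustrative):

```python
from collections import deque

def wrr_schedule(queues: dict[str, deque], weights: dict[str, int]) -> list:
    """Drain egress queues one WRR round at a time.

    Each queue may transmit up to its weight in frames per round, so
    higher-weight (higher-priority) queues get more link time while
    lower-priority queues are still served and never fully starved.
    """
    sent = []
    while any(queues.values()):
        for name in sorted(weights, key=weights.get, reverse=True):
            for _ in range(weights[name]):
                if queues[name]:
                    sent.append(queues[name].popleft())
    return sent
```

For example, with a voice queue weighted 2 and a best-effort queue weighted 1, two voice frames are sent for every best-effort frame until a queue empties.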
Each wired switch in the network must be configured with a consistent QoS policy that specifies traffic queueing behavior. QoS policy handling is assessed based on the DSCP or CoS trust configuration of the ingress switch port; therefore, the DSCP or CoS values assigned to traffic leaving the WLAN are critical to establishing end-to-end QoS.
HPE Aruba Networking switch ports do not trust QoS markings by default. Switch ports can trust either DSCP or CoS for prioritization, but not both. Ports connected to gateways, access points, and hosts must be configured to trust the preferred marking strategy. Interswitch links also require QoS trust configuration.
Most wired networks operate below capacity with very little congestion. When congestion is routinely experienced on a port, it is common to mitigate the need to implement QoS by increasing link capacity, which may not be possible due to hardware or budgetary constraints.
DSCP Considerations
DSCP is the recommended method of marking traffic for end-to-end QoS prioritization. The value is placed in the IP header, making DSCP persistent over bridged and routed links, and it is maintained between WLAN and wired infrastructure, enabling easy creation of a consistent QoS policy across both wired and wireless networks. HPE Aruba Networking APs also use DSCP to make WMM AC assignments.
When a gateway or AP receives traffic for a tunneled SSID, the DSCP value is copied to the outer IP header of the GRE tunnel, which enables consistent QoS treatment of both tunneled and native traffic for the same application in the campus wired network.
No additional configuration is required in the WLAN, except optional adjustments of DSCP value mappings to the desired WMM AC priorities for specific application traffic.
CoS Considerations
CoS traffic prioritization may be required when wired campus switches do not support DSCP. CoS traffic prioritization is more complex, and GRE tunnel traffic can only be partially prioritized. If available, DSCP marking is the recommended basis for QoS prioritization.
In AOS 10, CoS values are not associated with WMM ACs. Regardless of the CoS 802.1p value assigned in Ethernet frames arriving from the wired network, APs forward wireless traffic using the WMM AC associated with the DSCP value in the IP header. The default DSCP value of 0 is transmitted using the best-effort WMM AC.
When the wired network uses CoS priority marking, a bridged SSID can achieve end-to-end QoS by manually assigning CoS and DSCP values to application traffic using role-based firewall policy. The DSCP assignment is required to associate the correct WMM AC at the AP. The CoS assignment is added to wireless traffic bridged to a tagged VLAN on the AP’s wired uplink. CoS marking is not supported when bridging wireless traffic to the untagged, native VLAN.
When using a tunneled SSID, CoS and DSCP values are assigned in a similar manner as a bridged SSID. The DSCP value is used by the AP to transmit application traffic with the correct WMM AC. The CoS priority is assigned to decapsulated traffic forwarded to the wired network by a gateway.
GRE tunnel traffic is typically sent between gateways and APs on untagged VLANs, which do not include the required 802.1Q header for marking CoS priority. Tunnel traffic sent from the gateway to the AP can be optimized by assigning the gateway’s management interface to a tagged VLAN. When configured in this manner, a gateway copies the CoS priority value it receives from VLAN tagged wired traffic to the outer 802.1Q header of GRE tunnel traffic sent to an AP. CoS priority assignment in firewall policy does not modify this behavior. APs always send tunnel traffic to the gateway on the native VLAN, which cannot mark CoS priority.
WLAN Resiliency
HPE Aruba Networking provides a variety of components useful for designing a highly available, fault-tolerant network. This section provides general guidelines for software features that increase fault tolerance and allow for upgrades with minimal service impact.
Authentication State/Key Sync
Authentication keys are synchronized across APs by the Key Management Service (KMS) in Central. This allows clients to roam between APs without reauthenticating or rekeying encrypted traffic. Key sync reduces the load on the RADIUS servers and speeds the roaming process for a seamless experience. Key synchronization and management are handled automatically by the APs and Central; no additional user configuration is required.
Firewall State Sync
Traffic state for a client can be synchronized across primary and secondary gateways when using a cluster. This allows the client to fail over from the primary gateway to the secondary seamlessly. Combined with key synchronization between APs, a client moving to its secondary gateway does not need to reauthenticate or rekey its encrypted traffic. To the client, moving between gateways or APs is transparent.
This is a crucial component of a high availability design and Live Upgrade features. When using a bridged SSID, the firewall state is synced for each roaming event, so the client experiences seamless roaming with no traffic disruption.
Cluster Design Failure Domain
When a gateway fails, clients left with a single gateway connection are rebalanced across the cluster. The length of time required for this operation depends on the number of clients on the network. If a second gateway fails before the rebalancing can occur, the client is disassociated and reconnected to an available gateway. The client can reestablish a connection as long as other gateways are not at capacity.
To mitigate the risk of multiple gateway failures, minimize common points of failure. To limit the size of the failure domain, connect gateways to disparate line cards or switches, provision multiple uplinks spanning line cards or switches, validate port configurations, and deploy multiple gateways.
Campus Wireless Summary
The campus WLAN provides network access for employees, visitors, and IoT devices. Regardless of their location, wireless devices have the same experience when connecting to their services.
The benefits of the HPE Aruba Networking wireless solution include:
- Seamless network access for employees, visitors, and IoT devices.
- Plug-and-play deployment for wireless APs.
- Wi-Fi 6 enhancements that address connectivity issues for high-density deployments and improve the performance of the network.
- Live upgrades to perform operating system updates with little to no impact on service.