23-Feb-26

Small Campus Network Overview

The small campus architecture is designed to deliver a simple, cost-effective, and resilient campus network for smaller environments. Unlike the Large Campus architecture, which uses Layer 3 routing at the core and aggregation layers, the small campus design uses a flattened Layer 2-only switching topology, with all Layer 3 services centralized on WAN gateway devices.

Consolidating routing, policy enforcement, and WAN connectivity at the gateway layer reduces operational complexity, while still providing redundancy and consistent security across wired and wireless access.

The following sections describe the functional components of the small campus design and how they interact to provide end-to-end connectivity.


Small Campus Overview Diagram

Gateway Layer

Configuration of the WAN gateway devices is outside the scope of this guide. In this context, the Gateway Layer refers to the functional Layer 3 routing boundary of the site rather than a specific wireless mobility appliance. While an AOS-10 Gateway could be deployed to fulfill this role, the term here specifically denotes the default gateway and policy enforcement point.

In the small campus design, gateway devices function as both the Internet/WAN edge and the default gateway for all client and management subnets. All Layer 3 routing, including inter-VLAN traffic and Internet-bound traffic, is handled at this layer.

Gateway devices also enforce policy for Internet traffic and traffic routed between subnets.

Aggregation Layer

A VSF pair of switches operates as an aggregation layer for the small campus, providing redundant Layer 2 transport between the gateway and access layers using LAGs.

Layer 3 functions traditionally implemented at the core are moved to the gateway layer. No SVIs are configured on aggregation switches, which simplifies the design while still providing redundancy and high availability through VSF and redundant uplinks.

Access Layer

Access switches are deployed as VSF stacks in a ring topology. This provides resilient uplinks to the aggregation layer and simplifies operations by reducing the number of managed devices. The access layer remains Layer 2 only and extends all required VLANs to wireless access points operating in bridge mode.

Redundancy and Loop Protection

Because the small campus design utilizes a Layer 2-only switching fabric, robust redundancy and loop prevention mechanisms are critical for maintaining network stability. This design employs a multi-layered approach to eliminate loops and protect the environment from both infrastructure and user-induced faults.

  • Primary Prevention: Multi-chassis and standard LAGs create a physically loop-free hierarchy that allows all uplinks to forward traffic simultaneously.
  • Secondary Defense: MSTP (Single Instance) provides a standards-based safety net for initial staging and protection against cabling errors.
  • Edge Hardening: A combination of MSTP and HPE Loop Protect secures the network against unmanaged downstream devices and user-induced loops.

Non-blocking Link Aggregation Groups (LAGs) serve as the foundation of this strategy, providing redundancy between the access, aggregation, and gateway layers. These LAGs establish a loop-free topology that maximizes available bandwidth by allowing all uplinks to actively forward traffic. This is the primary mechanism for loop prevention, as it creates a physical architecture that eliminates logical loops from the data plane by design.
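As an illustrative sketch, a LAG from the aggregation VSF pair down to an access stack might look like the following AOS-CX CLI. The LAG ID, member interfaces, and VLAN list are assumptions for this example; in this design, configuration would normally be pushed from Central rather than entered by hand. Because both member ports belong to the same VSF stack (members 1 and 2), a standard LAG spans both chassis:

```
interface lag 11
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed 1,20,25,30,40,50,51
    lacp mode active

interface 1/1/49
    no shutdown
    lag 11

interface 2/1/49
    no shutdown
    lag 11
```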

Multiple Spanning Tree Protocol (MSTP) is enabled globally as a secondary loop prevention mechanism using a single instance for all VLANs. MSTP provides critical protection during initial topology formation and blocks potential loops prior to the full establishment of infrastructure LAGs. To ensure a predictable and hierarchical topology, the aggregation layer is configured as the STP root with a priority of 4, while access switches retain the default priority of 8. Beyond initial provisioning, MSTP remains active to provide a fallback defense against loop formation caused by operator error or cabling mistakes after the topology is in production.
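The single-instance MSTP arrangement described above could be expressed on AOS-CX switches roughly as follows. On AOS-CX, the configured priority value is multiplied by 4096 internally; all other spanning-tree values are left at design defaults:

```
! Aggregation VSF pair (root bridge)
spanning-tree
spanning-tree priority 4

! Access stacks (default priority, shown for clarity)
spanning-tree
spanning-tree priority 8
```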

Additional STP safeguards and edge-port protections are implemented. BPDU filtering is applied on links toward the gateway layer to prevent unintended STP interaction with the WAN edge, while Root Guard is enabled on aggregation-layer downlinks to ensure the root bridge placement remains consistent. On the network edge, all non-AP ports are configured with BPDU Guard, TCN Guard, and Root Guard to block loops and topology changes caused by user-attached devices. These ports are also designated as admin-edge to allow for faster convergence.

Finally, HPE’s proprietary Loop Protect is enabled on these same edge ports to catch loops created by downstream devices, such as unmanaged switches, that do not originate or forward BPDUs and would otherwise go undetected by MSTP.
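Taken together, the STP safeguards and Loop Protect settings above might be sketched in AOS-CX CLI as follows; the LAG IDs and port numbers are illustrative assumptions:

```
! Aggregation: uplink LAG toward the WAN gateways
interface lag 1
    spanning-tree bpdu-filter

! Aggregation: downlink LAG toward an access stack
interface lag 11
    spanning-tree root-guard

! Access: non-AP edge port
interface 1/1/10
    spanning-tree port-type admin-edge
    spanning-tree bpdu-guard
    spanning-tree tcn-guard
    spanning-tree root-guard
    loop-protect
```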

Wireless

The small campus wireless network uses a modern AOS-10 architecture fully managed by HPE Aruba Networking Central. All WLANs operate in bridge mode (local switching), where client traffic is bridged directly to the appropriate VLAN at the AP uplink. This ensures efficient traffic paths while maintaining centralized visibility and control.

Although traffic is locally switched, access control is enforced on the AP with user roles, which define firewall policies, rate limits, and VLAN assignment based on user or device identity after authentication. The following SSIDs are deployed as part of this design:

| SSID Name | Authentication Type | Purpose |
| --- | --- | --- |
| Employee_Corp | Dot1X | Provide secure access for employees to corporate resources and the Internet. |
| Guest_Access | Captive Portal | Allow guests restricted Internet access. |
| IoT_Devices | MAC-Auth | Allow IoT devices access to limited corporate resources and restricted Internet access. |

Services

HPE Aruba Networking Central is used for configuration, monitoring, and lifecycle management of both wired and wireless infrastructure. This example uses the cloud-delivered version of Central, which aligns well with the operational and consumption requirements of most small campus environments. An on-premises Central deployment is also supported for organizations that require local management.

Core network services such as network access control (NAC), DHCP, DNS, and NTP can be hosted on-site, centrally within a data center, or delivered as cloud-based services. For standalone small campus deployments, locally hosted or cloud-based services are typically the most practical options. When the small campus is part of a larger enterprise network, centrally hosted or cloud-based services often provide greater consistency and operational efficiency.

In this design, ClearPass is used as the network access control solution.

VLANs and Subnets

VLAN configuration is managed by Aruba Central. In this design, all IP default gateways reside on the gateway devices, which are out-of-scope for this guide.

The table below provides a summary of the VLANs used in this guide.

IP Address Space: 10.10.0.0/16

| VLAN ID | Name | Subnet | Purpose |
| --- | --- | --- | --- |
| 1 | NET-MGMT | 10.10.0.1/24 | Management network for network infrastructure. |
| 20 | EMPLOYEE-WIRED | 10.10.20.1/24 | Wired employee devices. |
| 25 | EMPLOYEE-WLAN | 10.10.25.1/24 | Wireless employee devices. |
| 30 | IOT | 10.10.30.1/24 | IoT devices connected via wired or wireless. |
| 40 | GUEST | 10.10.40.1/24 | Guest wireless access. |
| 50 | REJECT-AUTH | 10.10.50.1/24 | Restricted network for wired devices that fail authentication. |
| 51 | CRITICAL-AUTH | 10.10.51.1/24 | Limited access network for wired devices when the RADIUS server is unreachable. |
| 999 | BLACKHOLE | No IP services | Default VLAN assigned to the WLAN and wired ports. |

Note: This guide maps a VLAN’s number to the third octet in its associated IPv4 address space. This allows the reader to easily track VLAN and IP relationships. This is for readability purposes only. This association is not practical for production deployments.

VLAN 999 (BLACKHOLE) serves as the default VLAN assignment for all wired ports and WLANs to ensure no IP connectivity is provided prior to user identification. Because this design relies on RADIUS to dynamically assign a VLAN to the user, clients remain isolated in this non-routable VLAN until a successful authentication response overrides the default and moves the user into a functional VLAN. This restricted-by-default approach is a security best practice that prevents unauthorized network access during the initial connection phase.
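A sketch of an access edge port that defaults to the BLACKHOLE VLAN until RADIUS returns an authorization might look like the following; the port number is illustrative, and the exact authenticator syntax varies by AOS-CX release:

```
interface 1/1/10
    no shutdown
    no routing
    vlan access 999
    aaa authentication port-access dot1x authenticator
        enable
```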

VLAN Propagation

  • Aggregation Layer: All VLANs are present at the aggregation layer, except VLAN 999. They are tagged on all downstream LAGs to access switches and upstream links to the WAN gateways.
  • Access Layer: All VLANs are present on VSF access switch stacks.
    • AP Ports: Ports are auto-configured using port profiles triggered by LLDP neighbor information: VLANs 1, 25, 30, and 40 are present on AP downlink ports.
    • User Ports: Ports are auto-configured using port profiles and returned RADIUS attributes from 802.1X and MAC authentication.
  • Access Points: VLANs 1, 25, 30, and 40 are defined in the AP uplink profile. Named VLANs are used to map specific SSIDs to their respective VLAN IDs.
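For reference, the AP downlink trunk that the port profile ultimately produces might resemble the following sketch, with VLAN 1 untagged for AP management and VLANs 25, 30, and 40 tagged for the bridged SSIDs; the port number is an assumption for illustration:

```
interface 1/1/24
    no routing
    vlan trunk native 1
    vlan trunk allowed 1,25,30,40
```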

Multicast

Layer 2 multicast traffic is optimized using IGMP snooping, which is enabled on all switches for all VLANs. This limits multicast forwarding to only those ports with active listeners. Layer 3 multicast routing is handled by the gateway devices and is outside the scope of this guide.
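On AOS-CX switches, IGMP snooping is enabled per VLAN within the VLAN context; a sketch for one client VLAN:

```
vlan 25
    ip igmp snooping enable
```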

Security Roles and Policies

Network access control leverages user roles for both wired and wireless network policy enforcement.

The Access Point applies these policies to a specific user session regardless of the underlying VLAN, IP assignment, or SSID association. This allows the network to distinguish between different types of users, such as employees and contractors, even if they reside on the same wireless segment.

For wired clients, policy enforcement is VLAN-based. During authentication, the client is assigned a user role and VLAN from RADIUS. Once the client is placed into that VLAN, policy is enforced at the WAN gateway. Assigning a user role as part of authentication also enables enhanced policy enforcement when the gateway is role-aware.

Note: When HPE Aruba Networking AOS-10 gateways are deployed as the WAN gateways, User-Based Tunneling (UBT) can be implemented. This provides a unified policy enforcement model by using a common set of user roles and a centralized enforcement point for both wired and wireless traffic.

User Roles

The design defines multiple user roles to separate access policy from network topology for wireless clients. While users may share the same VLAN or SSID, roles are used to enforce security and segmentation based on identity and function rather than physical or logical attachment to the network.

Roles are utilized on wired switches to enhance 802.1X and MAC authentication processes. They facilitate the presentation of remediation options for devices failing authentication, provide limited network access when authentication servers are unreachable, and dynamically configure wired ports to support access points.

The following user roles are used in this guide:

| User Role | Purpose |
| --- | --- |
| EMPLOYEE | Access to corporate resources and the internet. |
| GUEST-USER | Internet-only access; deny internal subnets. |
| HR | Department-specific access policy. |
| FINANCE | Department-specific access policy. |
| IT | Administrative and management access. |
| IOT | Restricted access with limited external destinations. |
| BLACKHOLE | Default wireless user role with no access. |
| REJECT-AUTH | Deny all traffic following an 802.1X reject. |
| CRITICAL-AUTH | Limited access when RADIUS is unavailable. |
| ARUBA-AP | Infrastructure role for access points. |
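For wired ports, the fallback roles in the table could be defined locally on AOS-CX switches along the lines of the following sketch. The role contents here are assumptions for illustration; in this design, the actual roles and their policies are delivered via Central and ClearPass:

```
port-access role REJECT-AUTH
    vlan access 50

port-access role CRITICAL-AUTH
    vlan access 51
```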