Optimization Policies Tab
Configuration > Templates & Policies > Policies > Optimization Policies
Use the Optimization Policies tab to view the optimization policies that exist on the appliances selected in the appliance tree. This includes the appliance-based defaults, entries applied manually by using the Appliance Manager or CLI, and entries that result from applying an Orchestrator Optimization Policy template or Business Intent Overlay.
To directly manage optimization policies for a particular appliance, click the associated edit icon. To create and manage optimization policies by using a template, click Manage Optimization Policies with Templates.
For details about fields on this tab, see Optimization Policies Dialog Box below.
Optimization Policies Dialog Box
Use the Optimization Policies dialog box to directly manage optimization policies for a particular appliance.
Priority
- If you are using Orchestrator templates to add rules, Orchestrator deletes all entries from 1000 – 9999 before applying its policies.
- You can create rules with higher priority than Orchestrator rules (1 – 999) and rules with lower priority (10000 – 19999 and 25000 – 65534).

  NOTE: The priority range from 20000 to 24999 is reserved for Orchestrator.
- When adding a rule, the priority is incremented by 10 from the previous rule. You can change the priority, but this default behavior helps ensure that you can insert new rules without having to renumber the rules that follow.
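The priority numbering behavior above can be sketched in a few lines of Python. The range constants come from this page; the helper functions themselves are illustrative, not an Orchestrator API.

```python
# Priority ranges documented above. Rules in 1000-9999 are deleted and
# rewritten when Orchestrator applies its templates; 20000-24999 is reserved.
ORCHESTRATOR_MANAGED = range(1000, 10000)
ORCHESTRATOR_RESERVED = range(20000, 25000)

# Ranges in which you can create your own rules.
USER_RANGES = [range(1, 1000), range(10000, 20000), range(25000, 65535)]

def next_priority(existing, step=10):
    """Default behavior: previous rule's priority plus 10."""
    return (max(existing) if existing else 0) + step

def is_user_editable(priority):
    """True if the priority falls in a user-configurable range."""
    return any(priority in r for r in USER_RANGES)

print(next_priority([100, 110, 120]))  # 130
print(is_user_editable(22000))         # False: reserved for Orchestrator
```

Spacing new rules 10 apart leaves room to insert a rule between two existing ones without renumbering.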
Match Criteria
Match criteria are used universally across all policy maps for route, QoS, optimization, SaaS NAT, and firewall zone security policies.
To use the same match criteria in different maps, create an ACL (Access Control List), which is a named, reusable set of rules. For efficiency, navigate to Configuration > Templates & Policies > ACLs > Access Lists to create ACLs, and then apply them across your appliances.
Use the Match Criteria dialog box to select and configure a variety of match criteria options. Some are explained below:
- Address Map: Use this option to sort by country, IP address owner, or SaaS application. You can also select and configure Microsoft Instance, Microsoft Category, and Proxy attributes for an address map. These attributes are secondary parameters to the address map; they are evaluated for a policy match only when the configured address map matches the flow. To select and configure these attributes, click +Attributes.
- Match criteria options related to Secure Web Services include:
  - URL: Omit the protocol in URLs and include a slash character (/) or a slash followed by a query parameter. For example, google.com/ or google.com/maps are valid; https://google.com/ is not valid. You can also use the asterisk (*) wildcard character, such as in google.com/*, to specify a domain, but it is more appropriate to use the Domain match criteria option to specify google.com. Separate multiple URL addresses with the pipe character (|). For example, you can specify two URL addresses as bing.com/*|google.com/*.
  - Web Category: If you select this option, click Select, and then click Strict, Moderate, or Custom. Strict and Moderate include predetermined web category selections, which you cannot change. However, if you attempt to select or clear a web category for these, the Custom group of web category selections opens automatically. You can customize web category selections in the Custom group. After you make your selections, click Ok.
  - Web Reputation: If you select this option, select one or more web reputation categories (High Risk, Suspicious, Moderate Risk, Low Risk, or Trustworthy) from the list. To select more than one, press and hold the Shift key, and then click the appropriate categories.
  - Bad IP Reputation: Selecting this option matches on flows for which the source or destination IP address (or both) has a High Risk reputation score.
- Match criteria options related to RADIUS user role include:
  - User Role: This is the user role as specified in the authentication exchange with the ClearPass RADIUS server.
  - User Name
  - User Group
  - User Device
  - User MAC
  - User Vlan

  Configuring these match criteria related to user role enables an EdgeConnect to automatically assign traffic steering and firewall zone policies.
- Use the Src:Dest check box associated with several match criteria options to specify separate criteria for inbound and outbound traffic. You can configure source and destination role-based policies when both source and destination users are in the same network.
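The URL match-criteria syntax described above (no protocol, a required slash, `*` wildcards, and `|` between alternatives) can be mimicked with the standard library's `fnmatch` module. This is an illustrative sketch of the documented syntax, not the appliance's actual matcher.

```python
import fnmatch

def url_matches(pattern_list, url):
    """Return True if `url` matches any "|"-separated wildcard pattern.

    Patterns follow the documented form: protocol omitted, path included,
    "*" as the wildcard, "|" separating multiple URL addresses.
    """
    return any(fnmatch.fnmatchcase(url, p) for p in pattern_list.split("|"))

print(url_matches("bing.com/*|google.com/*", "google.com/maps"))  # True
print(url_matches("google.com/", "google.com/"))                  # True
print(url_matches("google.com/*", "bing.com/search"))             # False
```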
Source or Destination
- An IP address can specify a subnet; for example, 10.10.10.0/24 (IPv4) or fe80::204:23ff:fed8:4ba2/64 (IPv6).
- To allow any IP address, use 0.0.0.0/0 (IPv4) or ::/0 (IPv6).
- Ports are available only for the protocols tcp, udp, and tcp/udp.
- To allow any port, use 0.
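The subnet and "any address" conventions above can be checked with Python's standard `ipaddress` module; this is a quick sketch for reference, not appliance code.

```python
import ipaddress

def in_subnet(ip, cidr):
    """Return True if `ip` falls inside the subnet given in CIDR notation."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

print(in_subnet("10.10.10.42", "10.10.10.0/24"))      # True
print(in_subnet("192.168.1.1", "0.0.0.0/0"))          # True: 0.0.0.0/0 matches any IPv4
print(in_subnet("fe80::204:23ff:fed8:4ba2", "::/0"))  # True: ::/0 matches any IPv6
```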
Wildcard-based Prefix Matching Rules
- Even when using a range or a wildcard, the IPv4 address must be specified in the 4-octet, dot-separated format; for example, A.B.C.D.
- A range is specified using a single dash; for example, 128-129.
- A wildcard is specified as an asterisk (*).
- A range and a wildcard can both be used in the same address, but an octet can contain only one or the other; for example, 10.136-137.*.64-95.
- A wildcard can be used only to define an entire octet. For example, 10.13*.*.64-95 is not supported; use 10.130-139.*.64-95 to specify this range.
- The same rules apply to IPv6 addressing.
- CIDR notation and ranges or wildcards are mutually exclusive in the same address. For example, 192.168.0.1-127/24 is not supported; use either 192.168.0.0/24 or 192.168.0.1-127.
- These prefix-matching rules apply to the following policies only: Route, QoS, Optimization, NAT, Security, and ACLs.
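The octet-level range and wildcard rules above can be sketched as a small matcher. The parsing here is illustrative only; the appliance's implementation may differ.

```python
def octet_matches(spec, value):
    """Match one octet spec: "*", a "lo-hi" range, or an exact number."""
    if spec == "*":
        return True
    if "-" in spec:
        lo, hi = map(int, spec.split("-"))
        return lo <= value <= hi
    return int(spec) == value

def prefix_matches(pattern, addr):
    """Match a dotted IPv4 address against a pattern like '10.136-137.*.64-95'."""
    specs = pattern.split(".")
    octets = [int(o) for o in addr.split(".")]
    return len(specs) == 4 and all(octet_matches(s, o) for s, o in zip(specs, octets))

print(prefix_matches("10.136-137.*.64-95", "10.136.200.80"))  # True
print(prefix_matches("10.136-137.*.64-95", "10.138.200.80"))  # False: 138 outside 136-137
```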
Set Actions
| Set Action | Description |
|---|---|
| Network Memory | Addresses limited bandwidth. This technology uses advanced fingerprinting algorithms to examine all incoming and outgoing WAN traffic. Network Memory localizes information and transmits only modifications between locations. maximize reduction: Optimizes for maximum data reduction at the potential cost of slightly lower throughput and/or some increase in latency. It is appropriate for bulk data transfers, such as file transfers and FTP, when bandwidth savings are the primary concern. minimize latency: Ensures that Network Memory processing adds no latency. This might come at the cost of lower data reduction. It is appropriate for extremely latency-sensitive interactive or transactional traffic. It is also appropriate when the primary objective is to fully utilize the WAN pipe to increase the LAN-side throughput, as opposed to conserving WAN bandwidth. balanced: This is the default setting. It dynamically balances latency and data reduction objectives. It is the best choice for most traffic types. disabled: Turns off Network Memory. |
| IP Header Compression | Process of compressing excess protocol headers before transmitting them on a link and uncompressing them to their original state at the other end. It is possible to compress the protocol headers due to the redundancy in header fields of the same packet as well as in consecutive packets of a packet stream. |
| Payload Compression | Uses algorithms to identify relatively short byte sequences that are repeated frequently. These are then replaced with shorter segments of code to reduce the size of transmitted data. Simple algorithms can find repeated bytes within a single packet; more sophisticated algorithms can find duplication across packets and even across flows. |
| TCP Accel | TCP Acceleration. Uses techniques such as selective acknowledgments, window scaling, and maximum segment size adjustment to mitigate poor performance on high-latency links. NOTE: The slow LAN alert is issued when the loss has fallen below 80% of the specified value configured in the TCP Accel Options dialog box. To open the TCP Accel Options dialog box and view the available advanced options, click the info icon. For more information, see TCP Acceleration below. |
| Protocol Accel | Protocol Acceleration. Provides explicit configuration for optimizing CIFS, SSL, SRDF, Citrix, and iSCSI protocols. In a network environment, it is possible that not every appliance has the same optimization configurations enabled. Therefore, the site that initiates the flow (the client) determines the state of the protocol-specific optimization. |
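To make the Payload Compression idea in the table above concrete: repeated byte sequences compress well. In this toy sketch, Python's `zlib` stands in for the appliance's compression algorithms, which are not specified here.

```python
import zlib

# A highly repetitive payload, as produced by chatty protocols that resend
# near-identical headers and records.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50
compressed = zlib.compress(payload)

# The repeated sequences collapse to short back-references, so the
# compressed copy is a small fraction of the original size.
print(len(payload), len(compressed))
assert len(compressed) < len(payload) // 10
```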
TCP Acceleration
TCP acceleration uses techniques such as selective acknowledgment, window scaling, and message segment size adjustment to compensate for poor performance on high latency links.
The TCP Accel Options dialog box includes a set of advanced options, provided with default settings.

CAUTION: Because changing the default settings can affect service, it is highly recommended that you contact Support for guidance before modifying them.
| Option | Description |
|---|---|
| Adjust MSS to tunnel MTU | Limits the TCP MSS (Maximum Segment Size) advertised by the end hosts in the SYN segment to a value derived from the Tunnel MTU (Maximum Transmission Unit). This is TCP MSS = Tunnel MTU – Tunnel Packet Overhead. This feature is enabled by default so that the maximum value of the end host MSS is always coupled to the Tunnel MSS. If the end host MSS is smaller than the tunnel MSS, the end host MSS is used instead. A use case for disabling this feature is when the end host uses Jumbo frames. |
| Preserve packet boundaries | Preserves the packet boundaries end to end. If this feature is disabled, the appliances in the path can coalesce consecutive packets of a flow to use bandwidth more efficiently. This feature is enabled by default so that applications that require packet boundaries to match do not fail. |
| Enable TCP SYN option exchange | Controls whether the proprietary TCP SYN option is forwarded on the LAN side. Enabled by default, this feature detects whether more than two EdgeConnect appliances are in the flow’s data path and optimizes accordingly. Disable this feature if there is a LAN-side firewall or a third-party appliance that would drop a SYN packet when it encounters an unfamiliar TCP option. |
| Route policy override | Attempts to override asymmetric route policy settings. It emulates auto-opt behavior by using the same tunnel for the returning SYN+ACK as it did for the original SYN packet. Disable this feature if the asymmetric route policy setting is necessary to correctly route packets. In this case, you might need to configure flow redirection to ensure optimization of TCP flows. |
| Auto reset flows | If enabled, this feature resets all TCP flows that are not accelerated but should be, based on policy and internal criteria such as a Tunnel Up event. For example, if TCP acceleration is enabled but no SYN packet was seen, the flow was either part of WCCP redirection or already existed when the appliance was inserted in the data path. NOTE: Regardless of whether this feature is enabled, the default behavior on a Tunnel Down event is to automatically reset all TCP accelerated flows. |
| IP block listing | If selected, and if the appliance does not receive a TCP SYN-ACK from the remote end within five seconds, the flow proceeds without acceleration and the destination IP address is blocked for one minute. |
| End to end FIN handling | Assists in fine-tuning TCP behavior during a connection's graceful shutdown. When this feature is enabled (the default), TCP on the local appliance synchronizes the graceful shutdown of the local LAN side with the LAN side of the remote appliance. When disabled (standard TCP behavior), no synchronization occurs and the two LAN segments at the ends shut down gracefully and independently. |
| WAN window scale | WAN-side TCP Window scale factor that is used internally for WAN-side traffic. This works independently of the WAN-side factor advertised by the end hosts. |
| Slow LAN defense | Resets all flows that consume a disproportionate amount of buffer and have very slow throughput on the LAN side. Because of a few slower end hosts or a lossy LAN, these flows affect the performance of all other flows so that no flows see the customary throughput improvement gained through TCP acceleration. This feature is enabled by default. The number relates indirectly to the amount of time the system waits before resetting slow flows. |
| WAN congestion control | Selects the internal congestion control mode: optimized: This is the default setting and offers optimized performance in almost all scenarios. standard: In some unique cases, it might be necessary to downgrade to standard performance to better interoperate with other flows on the WAN link. aggressive: Provides aggressive performance. Use with caution; recommended mostly for data replication scenarios. |
| Per-Flow buffer | Provides settings for Max LAN to WAN buffer and Max WAN to LAN buffer. These settings clamp the maximum buffer space that can be allocated to a flow in each direction. |
| Slow LAN window penalty | Penalizes flows that are slow to send data on the LAN side by artificially reducing their TCP receive window. This causes less data to be received and helps to reach a balance with the data sending rate on the LAN side. This setting is not selected by default. |
| LAN side window scale factor clamp | Allows the appliance to present an artificially lowered Window Scale Factor (WSF) to the end host. This reduces the need for memory in scenarios in which many out-of-order packets are being received from the LAN side. These out-of-order packets cause significant buffer utilization and maintenance. |
| Persist timer timeout | Allows TCP to terminate connections that are in the Persist timeout stage after the configured number of seconds. |
| Keep alive timer | Allows you to change the keep alive timer for TCP connections. Probe interval: Time interval in seconds between two consecutive keep alive probes. Probe count: Maximum number of keep alive probes to send. First timeout (idle): Time interval until the first keep alive timeout. |
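The arithmetic behind the Adjust MSS to tunnel MTU option above is simple to sketch. The 50-byte overhead below is a hypothetical example value; the real tunnel packet overhead depends on the tunnel configuration.

```python
def effective_mss(end_host_mss, tunnel_mtu, tunnel_overhead=50):
    """Clamp the end-host MSS to the value derived from the tunnel MTU.

    As documented: TCP MSS = Tunnel MTU - Tunnel Packet Overhead.
    If the end-host MSS is already smaller, it is used instead.
    """
    derived = tunnel_mtu - tunnel_overhead
    return min(end_host_mss, derived)

print(effective_mss(1460, 1500))  # 1450: derived value clamps the host MSS
print(effective_mss(1200, 1500))  # 1200: the smaller end-host MSS is used
```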