Networking
This section provides an overview of Citrix Hypervisor networking, including networks, VLANs, and NIC bonds. It also discusses how to manage your networking configuration and troubleshoot it.
Important:
vSwitch is the default network stack of Citrix Hypervisor. If you want to revert to the Linux network stack, follow the instructions in vSwitch networks.
If you are already familiar with Citrix Hypervisor networking concepts, you can skip ahead to Manage networking for information about the following sections:
- Create networks for standalone Citrix Hypervisor servers
- Create networks for Citrix Hypervisor servers that are configured in a resource pool
- Create VLANs for Citrix Hypervisor servers, either standalone or part of a resource pool
- Create bonds for standalone Citrix Hypervisor servers
- Create bonds for Citrix Hypervisor servers that are configured in a resource pool
Note:
The term ‘management interface’ is used to indicate the IP-enabled NIC that carries the management traffic. The term ‘secondary interface’ is used to indicate an IP-enabled NIC configured for storage traffic.
Networking support
Citrix Hypervisor supports up to 16 physical network interfaces (or up to 4 bonded network interfaces) per host and up to 7 virtual network interfaces per VM.
Note:
Citrix Hypervisor provides automated configuration and management of NICs using the xe command line interface (CLI). Do not edit the host networking configuration files directly.
vSwitch networks
vSwitch networks support OpenFlow.
- Supports fine-grained security policies to control the flow of traffic sent to and from a VM.
- Provides detailed visibility into the behavior and performance of all traffic sent in the virtual network environment.
A vSwitch greatly simplifies IT administration in virtualized networking environments. All VM configuration and statistics remain bound to the VM even when the VM migrates from one physical host in the resource pool to another.
To determine what networking stack is configured, run the following command:
xe host-list params=software-version
In the command output, look for network_backend. When the vSwitch is configured as the network stack, the output appears as follows:
network_backend: openvswitch
When the Linux bridge is configured as the network stack, the output appears as follows:
network_backend: bridge
To revert to the Linux network stack, run the following command:
xe-switch-network-backend bridge
Restart your host after running this command.
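To return to the vSwitch network stack later, run the following command and restart your host again:
xe-switch-network-backend openvswitch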
Citrix Hypervisor networking overview
This section describes the general concepts of networking in the Citrix Hypervisor environment.
Citrix Hypervisor creates a network for each physical NIC during installation. When you add a server to a pool, the default networks are merged to ensure that all physical NICs with the same device name are attached to the same network.
Typically, you add a network to create an internal network, set up a new VLAN using an existing NIC, or create a NIC bond.
You can configure the following different types of networks in Citrix Hypervisor:
- External networks have an association with a physical network interface. External networks provide a bridge between a virtual machine and the physical network interface connected to the network. External networks enable a virtual machine to connect to resources available through the server’s physical NIC.
- Bonded networks create a bond between two or more NICs to create a single, high-performing channel between the virtual machine and the network.
- Single-Server Private networks have no association to a physical network interface. Single-server private networks can be used to provide connectivity between the virtual machines on a given host, with no connection to the outside world.
Note:
Some networking options behave differently when used with standalone Citrix Hypervisor servers compared to resource pools. This section contains general information that applies to both standalone hosts and pools, followed by specific information and procedures for each.
Network objects
This section uses three types of server-side software objects to represent networking entities. These objects are:
- A PIF, which represents a physical NIC on a host. PIF objects have a name and description, a UUID, the parameters of the NIC they represent, and the network and server they are connected to.
- A VIF, which represents a virtual NIC on a virtual machine. VIF objects have a name and description, a UUID, and the network and VM they are connected to.
- A network, which is a virtual Ethernet switch on a host. Network objects have a name and description, a UUID, and the collection of VIFs and PIFs connected to them.
XenCenter and the xe CLI allow you to configure networking options. You can control the NIC used for management operations, and create advanced networking features such as VLANs and NIC bonds.
Networks
Each Citrix Hypervisor server has one or more networks, which are virtual Ethernet switches. Networks that are not associated with a PIF are considered internal. Internal networks can be used to provide connectivity only between VMs on a given Citrix Hypervisor server, with no connection to the outside world. Networks associated with a PIF are considered external. External networks provide a bridge between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF’s NIC.
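For example, you can inspect the networks on a host and create an internal network from the xe CLI. The name-label in this sketch is only illustrative:
# List the networks on the host and the bridges they correspond to
xe network-list params=uuid,name-label,bridge
# Create a network with no associated PIF; it remains internal until a PIF, VLAN, or bond is attached
xe network-create name-label=private-net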
VLANs
VLANs, as defined by the IEEE 802.1Q standard, allow a single physical network to support multiple logical networks. Citrix Hypervisor servers support VLANs in multiple ways.
Note:
- We recommend not to use a GFS2 SR with a VLAN due to a known issue where you cannot add or remove hosts on a clustered pool if the cluster network is on a non-management VLAN.
- All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and non-bonded configurations.
Using VLANs with virtual machines
Switch ports configured as 802.1Q VLAN trunk ports can be used with the Citrix Hypervisor VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case, the Citrix Hypervisor server performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
Citrix Hypervisor VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. You can connect Citrix Hypervisor networks to the PIF representing the physical NIC to see all traffic on the NIC. Alternatively, connect networks to a PIF representing a VLAN to see only the traffic with the specified VLAN tag. You can also connect a network such that it only sees the native VLAN traffic, by attaching it to VLAN 0.
For procedures on how to create VLANs for Citrix Hypervisor servers, either standalone or part of a resource pool, see Creating VLANs.
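As a brief sketch of the objects involved (the UUIDs and VLAN tag are placeholders; see Creating VLANs for the full procedures), a VLAN is created by attaching a new network to a VLAN PIF on top of the physical PIF:
# Create the network that carries the tagged traffic
xe network-create name-label="VLAN 100 network"
# Create a VLAN PIF with tag 100 on the physical PIF and connect it to the new network
xe vlan-create network-uuid=<network-uuid> pif-uuid=<physical-pif-uuid> vlan=100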
If you want the guest to perform the VLAN tagging and untagging functions, the guest must be aware of the VLANs. When configuring the network for your VMs, configure the switch ports as VLAN trunk ports, but do not create VLANs for the Citrix Hypervisor server. Instead, use VIFs on a normal, non-VLAN network.
Using VLANs with management interfaces
The management interface can be configured on a VLAN using a switch port configured as a trunk port or an access mode port. Use XenCenter or the xe CLI to set up a VLAN and make it the management interface. For more information, see Management interface.
Using VLANs with dedicated storage NICs
Dedicated storage NICs can be configured to use native VLAN or access mode ports as described in the previous section for management interfaces. Dedicated storage NICs are also known as IP-enabled NICs or secondary interfaces. You can configure dedicated storage NICs to use trunk ports and Citrix Hypervisor VLANs as described in the previous section for virtual machines. For more information, see Configuring a dedicated storage NIC.
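As a minimal sketch (the PIF UUID and addresses are placeholders; see Configuring a dedicated storage NIC for the full procedure), a storage NIC is typically given a static IP address and marked so that it is not unplugged:
# Assign a static IP address to the PIF of the storage NIC
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.168.10.20 netmask=255.255.255.0
# Keep the interface plugged and record its purpose (key values here are illustrative)
xe pif-param-set uuid=<pif-uuid> disallow-unplug=true other-config:management_purpose="Storage"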
Combining management interfaces and guest VLANs on a single host NIC
A single switch port can be configured with both trunk and native VLANs, allowing one host NIC to be used for a management interface (on the native VLAN) and for connecting guest VIFs to specific VLAN IDs.
Jumbo frames
Jumbo frames can be used to optimize the performance of traffic on storage networks and VM networks. Jumbo frames are Ethernet frames containing more than 1,500 bytes of payload. Jumbo frames are typically used to achieve better throughput, reduce the load on system bus memory, and reduce the CPU overhead.
Note:
Citrix Hypervisor supports jumbo frames only when using vSwitch as the network stack on all hosts in the pool.
Requirements for using jumbo frames
Note the following when using jumbo frames:
- Jumbo frames are configured at a pool level
- vSwitch must be configured as the network back-end on all servers in the pool
- Every device on the subnet must be configured to use jumbo frames
- Enabling jumbo frames on the management network is not supported
To use jumbo frames, set the Maximum Transmission Unit (MTU) to a value between 1500 and 9216. You can use XenCenter or the xe CLI to set the MTU.
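For example, to set jumbo frames on a network from the xe CLI (the UUID is a placeholder):
xe network-param-set uuid=<network-uuid> MTU=9000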
NIC Bonds
NIC bonds, sometimes also known as NIC teaming, improve Citrix Hypervisor server resiliency and bandwidth by enabling administrators to configure two or more NICs together. NIC bonds logically function as one network card and all bonded NICs share a MAC address.
If one NIC in the bond fails, the host’s network traffic is automatically redirected through the second NIC. Citrix Hypervisor supports up to eight bonded networks.
Citrix Hypervisor supports active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:
- LACP bonding is only available for the vSwitch, whereas active-active and active-passive are available for both the vSwitch and the Linux bridge.
- When the vSwitch is the network stack, you can bond either two, three, or four NICs.
- When the Linux bridge is the network stack, you can only bond two NICs.
In the illustration that follows, the management interface is on a bonded pair of NICs. Citrix Hypervisor uses this bond for management traffic.
All bonding modes support failover. However, not all modes allow all links to be active for all traffic types. Citrix Hypervisor supports bonding the following types of NICs together:
- NICs (non-management). You can bond NICs that Citrix Hypervisor is using solely for VM traffic. Bonding these NICs not only provides resiliency, but doing so also balances the traffic from multiple VMs between the NICs.
- Management interfaces. You can bond a management interface to another NIC so that the second NIC provides failover for management traffic. Although configuring a LACP link aggregation bond provides load balancing for management traffic, active-active NIC bonding does not. You can create a VLAN on bonded NICs and the host management interface can be assigned to that VLAN.
- Secondary interfaces. You can bond NICs that you have configured as secondary interfaces (for example, for storage). However, for most iSCSI software initiator storage, we recommend configuring multipathing instead of NIC bonding as described in Designing Citrix Hypervisor Network Configurations.
Throughout this section, the term IP-based storage traffic is used to describe iSCSI and NFS traffic collectively.
You can create a bond if a VIF is already using one of the interfaces that will be bonded: the VM traffic migrates automatically to the new bonded interface.
In Citrix Hypervisor, an additional PIF represents a NIC bond. Citrix Hypervisor NIC bonds completely subsume the underlying physical devices (PIFs).
Notes:
- Creating a bond that contains only one NIC is not supported.
- The bonded NICs can be different models from each other.
- NIC bonds are not supported on NICs that carry FCoE traffic.
Best practices
Follow these best practices when setting up your NIC bonds:
- Connect the links of the bond to different physical network switches, not just ports on the same switch.
- Ensure that the separate switches draw power from different, independent power distribution units (PDUs).
- If possible, in your data center, place the PDUs on different phases of the power feed or even on feeds provided by different utility companies.
- Consider using uninterruptible power supply units to ensure the network switches and servers can continue to function or can perform an orderly shutdown in the event of a power failure.
These measures add resiliency against software, hardware, or power failures that can affect your network switches.
Key points about IP addressing
Bonded NICs either have one IP address or no IP addresses, as follows:
- Management and storage networks.
  - If you bond a management interface or secondary interface, a single IP address is assigned to the bond. That is, each NIC does not have its own IP address. Citrix Hypervisor treats the two NICs as one logical connection.
  - When bonds are used for non-VM traffic, for example, to connect to shared network storage or XenCenter for management, configure an IP address for the bond. However, if you have already assigned an IP address to one of the NICs (that is, created a management interface or secondary interface), that IP address is assigned to the entire bond automatically.
  - If you bond a management interface or secondary interface to a NIC without an IP address, the bond assumes the IP address of the respective interface.
  - If you bond a tagged VLAN management interface and a secondary interface, the management VLAN is created on that bonded NIC.
- VM networks. When bonded NICs are used for VM traffic, you do not need to configure an IP address for the bond. This is because the bond operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this layer. IP addresses for virtual machines are associated with VIFs.
Bonding types
Citrix Hypervisor provides three different types of bonds, all of which can be configured using either the CLI or XenCenter:
- Active-Active mode, with VM traffic balanced between the bonded NICs. See Active-active bonding.
- Active-Passive mode, where only one NIC actively carries traffic. See Active-passive bonding.
- LACP Link Aggregation, in which active and stand-by NICs are negotiated between the switch and the server. See LACP Link Aggregation Control Protocol bonding.
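As an illustrative sketch using the xe CLI (UUIDs are placeholders), a bond is created by attaching the PIFs of the member NICs to a new network; the mode parameter selects the bonding type:
# Create the network that the bond is attached to
xe network-create name-label=bond0
# Create the bond; mode can be balance-slb (active-active), active-backup (active-passive), or lacp
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2> mode=balance-slb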
Note:
Bonding is set up with an Up Delay of 31,000 ms and a Down Delay of 200 ms. The seemingly long Up Delay is deliberate because of the time some switches take to enable the port. Without a delay, when a link comes back after failing, the bond can rebalance traffic onto it before the switch is ready to pass traffic. To move both connections to a different switch, move one, then wait 31 seconds for it to be used again before moving the other. For information about changing the delay, see Changing the up delay for bonds.
Bond status
Citrix Hypervisor provides status for bonds in the event logs for each host. If one or more links in a bond fails or is restored, it is noted in the event log. Likewise, you can query the status of a bond’s links by using the links-up parameter as shown in the following example:
xe bond-param-get uuid=bond_uuid param-name=links-up
Citrix Hypervisor checks the status of links in bonds approximately every five seconds. Therefore, if more links in the bond fail within the same five-second window, the failure is not logged until the next status check.
Bonding event logs appear in the XenCenter Logs tab. For users not running XenCenter, event logs also appear in /var/log/xensource.log on each host.
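You can also search the log directly on a host, for example:
grep -i bond /var/log/xensource.log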
Active-active bonding
Active-active is an active/active configuration for guest traffic: both NICs can route VM traffic simultaneously. When bonds are used for management traffic, only one NIC in the bond can route traffic: the other NIC remains unused and provides failover support. Active-active mode is the default bonding mode when either the Linux bridge or vSwitch network stack is enabled.
When active-active bonding is used with the Linux bridge, you can only bond two NICs. When using the vSwitch as the network stack, you can bond either two, three, or four NICs in active-active mode. However, in active-active mode, bonding three, or four NICs is only beneficial for VM traffic, as shown in the illustration that follows.
Citrix Hypervisor can only send traffic over two or more NICs when there is more than one MAC address associated with the bond. Citrix Hypervisor can use the virtual MAC addresses in the VIF to send traffic across multiple links. Specifically:
- VM traffic. Provided you enable bonding on NICs carrying only VM (guest) traffic, all links are active and NIC bonding can balance the VM traffic across the NICs. An individual VIF’s traffic is never split between NICs.
- Management or storage traffic. Only one of the links (NICs) in the bond is active and the other NICs remain unused unless traffic fails over to them. Configuring a management interface or secondary interface on a bonded network provides resilience.
- Mixed traffic. If the bonded NIC carries a mixture of IP-based storage traffic and guest traffic, only the guest and control domain traffic are load balanced. The control domain is essentially a virtual machine so it uses a NIC like the other guests. Citrix Hypervisor balances the control domain’s traffic the same way as it balances VM traffic.
Traffic balancing
Citrix Hypervisor balances the traffic between NICs by using the source MAC address of the packet. Because management traffic has only one source MAC address, active-active mode can use only one NIC for that traffic, and it is not balanced. Traffic balancing is based on two factors:
- The virtual machine and its associated VIF sending or receiving the traffic
- The quantity of data (in kilobytes) being sent
Citrix Hypervisor evaluates the quantity of data (in kilobytes) each NIC is sending and receiving. If the quantity of data sent across one NIC exceeds the quantity of data sent across the other NIC, Citrix Hypervisor rebalances which VIFs use which NICs. The VIF’s entire load is transferred. One VIF’s load is never split between two NICs.
Though active-active NIC bonding can provide load balancing for traffic from multiple VMs, it cannot provide a single VM with the throughput of two NICs. Any given VIF only uses one of the links in a bond at a time. As Citrix Hypervisor periodically rebalances traffic, VIFs are not permanently assigned to a specific NIC in the bond.
Active-active mode is sometimes described as Source Load Balancing (SLB) bonding as Citrix Hypervisor uses SLB to share load across bonded network interfaces. SLB is derived from the open-source Adaptive Load Balancing (ALB) mode and reuses the ALB functionality to rebalance load across NICs dynamically.
When rebalancing, the number of bytes going over each secondary (interface) is tracked over a given period. If a packet to be sent contains a new source MAC address, it is assigned to the secondary interface with the lowest utilization. Traffic is rebalanced at regular intervals.
Each MAC address has a corresponding load and Citrix Hypervisor can shift entire loads between NICs depending on the quantity of data a VM sends and receives. For active-active traffic, all the traffic from one VM can be sent on only one NIC.
Note:
Active-active bonding does not require switch support for EtherChannel or 802.3ad (LACP).
Active-passive bonding
An active-passive bond routes traffic over only one of the NICs. If the active NIC loses network connectivity, traffic automatically fails over to the passive NIC in the bond.
Active-passive bonding is available in the Linux bridge and the vSwitch network stack. When used with the Linux bridge, you can bond two NICs together. When used with the vSwitch, you can bond two, three, or four NICs together. However, regardless of the traffic type, when you bond NICs in active-passive mode, only one link is active and there is no load balancing between links.
The illustration that follows shows two bonded NICs configured in active-passive mode.
Active-active mode is the default bonding configuration in Citrix Hypervisor. If you are configuring bonds using the CLI, you must specify a parameter for the active-passive mode. Otherwise, an active-active bond is created. You do not need to configure active-passive mode just because a network carries management traffic or storage traffic.
Active-passive can be a good choice for resiliency as it offers several benefits. With active-passive bonds, traffic does not move around between NICs. Similarly, active-passive bonding lets you configure two switches for redundancy but does not require stacking. If the management switch dies, stacked switches can be a single point of failure.
Active-passive mode does not require switch support for EtherChannel or 802.3ad (LACP).
Consider configuring active-passive mode in situations when you do not need load balancing or when you only intend to send traffic on one NIC.
Important:
After you have created VIFs or your pool is in production, be careful about changing bonds or creating bonds.
LACP Link Aggregation Control Protocol bonding
LACP Link Aggregation Control Protocol is a type of bonding that bundles a group of ports together and treats it like a single logical channel. LACP bonding provides failover and can increase the total amount of bandwidth available.
Unlike other bonding modes, LACP bonding requires configuring both sides of the links: creating a bond on the host, and creating a Link Aggregation Group (LAG) for each bond on the switch. See Switch configuration for LACP bonds. You must configure the vSwitch as the network stack to use LACP bonding. Also, your switches must support the IEEE 802.3ad standard.
A comparison of active-active SLB bonding and LACP bonding:
Active-active SLB bonding
Benefits:
- Can be used with any switch on the Hardware Compatibility List.
- Does not require switches that support stacking.
- Supports four NICs.
Considerations:
- Optimal load balancing requires at least one NIC per VIF.
- Storage or management traffic cannot be split on multiple NICs.
- Load balancing occurs only if multiple MAC addresses are present.
LACP bonding
Benefits:
- All links can be active regardless of traffic type.
- Traffic balancing does not depend on source MAC addresses, so all traffic types can be balanced.
Considerations:
- Switches must support the IEEE 802.3ad standard.
- Requires switch-side configuration.
- Supported only for the vSwitch.
- Requires a single switch or stacked switch.
Traffic balancing
Citrix Hypervisor supports two LACP bonding hashing types (the term hashing describes how the NICs and the switch distribute the traffic):
- Load balancing based on IP and port of source and destination addresses
- Load balancing based on source MAC address
Depending on the hashing type and traffic pattern, LACP bonding can potentially distribute traffic more evenly than active-active NIC bonding.
Note:
You configure settings for outgoing and incoming traffic separately on the host and the switch: the configuration does not have to match on both sides.
Load balancing based on IP and port of source and destination addresses.
This hashing type is the default LACP bonding hashing algorithm. If there is a variation in the source or destination IP or port numbers, traffic from one guest can be distributed over two links.
If a virtual machine is running several applications which use different IP or port numbers, this hashing type distributes traffic over several links. Distributing the traffic gives the guest the possibility of using the aggregate throughput. This hashing type lets one guest use the whole throughput of multiple NICs.
As shown in the illustration that follows, this hashing type can distribute the traffic of two different applications on a virtual machine to two different NICs.
Configuring LACP bonding based on IP and port of source and destination address is beneficial when you want to balance the traffic of two different applications on the same VM. For example, when only one virtual machine is configured to use a bond of three NICs.
The balancing algorithm for this hashing type uses five factors to spread traffic across the NICs: the source IP address, source port number, destination IP address, destination port number, and source MAC address.
Load balancing based on source MAC address.
This type of load balancing works well when there are multiple virtual machines on the same host. Traffic is balanced based on the virtual MAC address of the VM from which the traffic originated. Citrix Hypervisor sends outgoing traffic using the same algorithm as it does in active-active bonding. Traffic coming from the same guest is not split over multiple NICs. As a result, this hashing type is not suitable if there are fewer VIFs than NICs: load balancing is not optimal because the traffic cannot be split across NICs.
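As a hedged sketch of switching the hashing type from the CLI (the properties key shown here is an assumption; check xe bond-param-list on your host for the exact name), the bond's properties parameter controls the algorithm:
# Assumed property key; switch to source-MAC-based hashing
xe bond-param-set uuid=<bond-uuid> properties:hashing_algorithm=src_mac
# Switch back to IP-and-port-based hashing (the default)
xe bond-param-set uuid=<bond-uuid> properties:hashing_algorithm=tcpudp_ports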
Switch configuration
Depending on your redundancy requirements, you can connect the NICs in the bond to either the same or separate stacked switches. If you connect one of the NICs to a second, redundant switch and a NIC or switch fails, traffic fails over to the other NIC. Adding a second switch prevents a single point-of-failure in your configuration in the following ways:
- When you connect one of the links in a bonded management interface to a second switch, if the switch fails, the management network remains online and the hosts can still communicate with each other.
- If you connect a link (for any traffic type) to a second switch and the NIC or switch fails, the virtual machines remain on the network as their traffic fails over to the other NIC/switch.
Use stacked switches when you want to connect bonded NICs to multiple switches and you configured the LACP bonding mode. The term ‘stacked switches’ is used to describe configuring multiple physical switches to function as a single logical switch. You must join the switches together physically and through the switch-management software so the switches function as a single logical switching unit, as per the switch manufacturer’s guidelines. Typically, switch stacking is only available through proprietary extensions and switch vendors may market this functionality under different terms.
Note:
If you experience issues with active-active bonds, the use of stacked switches may be necessary. Active-passive bonds do not require stacked switches.
The illustration that follows shows how the cables and network configuration for the bonded NICs have to match.
Switch configuration for LACP bonds
Because the specific details of switch configuration vary by manufacturer, there are a few key points to remember when configuring switches for use with LACP bonds:
- The switch must support LACP and the IEEE 802.3ad standard.
- When you create the LAG group on the switch, you must create one LAG group for each LACP bond on the host. For example, if you have a five-host pool and you created a LACP bond on NICs 4 and 5 on each host, you must create five LAG groups on the switch, one group for each set of ports corresponding with the NICs on the host.
  You may also need to add your VLAN ID to your LAG group.
- Citrix Hypervisor LACP bonds require the Static Mode setting in the LAG group to be set to Disabled.
As previously mentioned in Switch configuration, stacked switches are required to connect LACP bonds to multiple switches.
Initial networking configuration after setup
The Citrix Hypervisor server networking configuration is specified during initial host installation. Options such as IP address configuration (DHCP/static), the NIC used as the management interface, and host name are set based on the values provided during installation.
When a host has multiple NICs, the configuration present after installation depends on which NIC is selected for management operations during installation:
- PIFs are created for each NIC in the host
- The PIF of the NIC selected for use as the management interface is configured with the IP addressing options specified during installation
- A network is created for each PIF (“network 0”, “network 1”, and so on)
- Each network is connected to one PIF
- The IP addressing options are left unconfigured for all PIFs other than the PIF used as the management interface
When a host has a single NIC, the following configuration is present after installation:
- A single PIF is created corresponding to the host’s single NIC
- The PIF is configured with the IP addressing options specified during installation and to enable management of the host
- The PIF is set for use in host management operations
- A single network, network 0, is created
- Network 0 is connected to the PIF to enable external connectivity to VMs
When an installation of Citrix Hypervisor is done on a tagged VLAN network, the following configuration is present after installation:
- PIFs are created for each NIC in the host
- The PIF for the tagged VLAN on the NIC selected for use as the management interface is configured with the IP address configuration specified during installation
- A network is created for each PIF (for example: network 1, network 2, and so on). An additional VLAN network is created (for example, a pool-wide network associated with eth0 on VLAN<TAG>)
- Each network is connected to one PIF. The VLAN PIF is set for use in host management operations
In both cases, the resulting networking configuration allows connection to the Citrix Hypervisor server by XenCenter, the xe CLI, and any other management software running on separate machines through the IP address of the management interface. The configuration also provides external networking for VMs created on the host.
The PIF used for management operations is the only PIF ever configured with an IP address during Citrix Hypervisor installation. External networking for VMs is achieved by bridging PIFs to VIFs using the network object which acts as a virtual Ethernet switch.
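For example, you can confirm which PIF is the management interface and review its IP configuration with the following command:
xe pif-list params=uuid,device,management,IP,netmask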
The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage traffic are covered in the sections that follow.
Changing networking configuration
You can change your networking configuration by modifying the network object. To do so, you run a command that affects either the network object or the VIF.
Modifying the network object
You can change aspects of a network, such as the frame size (MTU), name-label, name-description, purpose, and other values. Use the xe network-param-set command and its associated parameters to change the values.
When you run the xe network-param-set command, the only required parameter is uuid.
Optional parameters include:
- default_locking_mode. See Simplifying VIF locking mode configuration in the Cloud.
- name-label
- name-description
- MTU
- purpose. See Adding a purpose to a network.
- other-config
If a value for a parameter is not given, the parameter is set to a null value. To set a (key, value) pair in a map parameter, use the syntax map-param:key=value.
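For example (the UUID and values are placeholders), the following commands rename a network and set a key in its other-config map:
xe network-param-set uuid=<network-uuid> name-label="Storage network"
xe network-param-set uuid=<network-uuid> other-config:automatic=false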
Changing the up delay for bonds
Bonding is set up with an Up Delay of 31,000 ms by default to prevent traffic from being rebalanced onto a NIC after it fails. While seemingly long, the up delay is important for all bonding modes and not just active-active.
However, if you understand the appropriate settings to select for your environment, you can change the up delay for bonds by using the procedure that follows.
Set the up delay in milliseconds:
xe pif-param-set uuid=<uuid of bond master PIF> other-config:bond-updelay=<delay in ms>
To make the change take effect, you must unplug and then replug the physical interface:
xe pif-unplug uuid=<uuid of bond master PIF>
xe pif-plug uuid=<uuid of bond master PIF>