Resource pools
A resource pool comprises multiple XenServer host installations, bound together into a single managed entity that can host virtual machines. When combined with shared storage, a resource pool enables VMs to be started on any XenServer host that has sufficient memory.
The pool coordinator (formerly “pool master”) is a server in the resource pool that exposes an administration interface (used by XenCenter and the XenServer command line interface, known as the xe CLI). The pool coordinator forwards commands to individual members as necessary.
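For example, a remote machine with the xe CLI installed can run commands against the coordinator, which forwards them to other pool members as needed. A minimal sketch (the coordinator address and password are placeholders):

```
# Run a pool-wide query against the coordinator from a remote xe CLI.
# -s gives the coordinator address; the default port is 443.
xe -s <coordinator-address> -u root -pw <password> host-list params=name-label
```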
This article describes the concepts, requirements, and best practices with regard to resource pools. For information about how to create and manage your pools, see Manage your pools.
Advantages of resource pools
While you can organize your XenServer hosts as standalone hosts, effectively a pool of one, XenServer is optimized for hosts grouped into resource pools with shared storage. Several of our features, such as high availability and live migration, are only available in resource pools of multiple hosts. The advantages of organizing your XenServer hosts into resource pools include the following:
- VM mobility: When your VMs are running on a pool and have their disks on the pool shared storage, these VMs can be dynamically moved among XenServer hosts while still running (live migration; see the example after this list). Also, if an individual XenServer host suffers a hardware failure, the administrator can restart failed VMs on another XenServer host in the same resource pool.
- High availability: This feature works to protect the VMs in your workload and ensure that they have minimal downtime in cases of hardware or host failure. When high availability is enabled on the resource pool, VMs can automatically restart on another host when their host fails. If the pool coordinator fails, high availability elects another pool coordinator. For more information, see High availability.
- Workload balancing: Workload Balancing can evaluate your resource pool and the VM workload running on it to recommend the optimum placement for your VMs. You can also choose to have Workload Balancing automatically follow its placement recommendations and migrate your VMs between hosts in the pool. For more information, see Workload Balancing.
- Anti-affinity VM placement: Ensure that the VMs in a group are evenly spread across the hosts in a pool. For more information, see VM placement.
- Ease of management: Rather than managing each XenServer host individually, add the resource pool to XenCenter and manage all the hosts, SRs, and VMs within it together. For more information, see XenCenter.
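As an illustration of VM mobility, the following minimal sketch live-migrates a running VM to another host in the pool with the xe CLI (the VM and host names are placeholders):

```
# Live-migrate a running VM to another host in the same pool. The VM
# keeps running during the move when its disks are on shared storage.
xe vm-migrate vm=<vm-name> host=<destination-host> live=true
```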
Requirements for creating resource pools
When designing your XenServer deployment and deciding about your pool configuration, consider the following requirements:
Hardware requirements
All of the servers in a XenServer resource pool must have broadly compatible CPUs, that is:
- The CPU vendor (Intel, AMD) must be the same on all CPUs on all servers.
- All CPUs must have virtualization enabled.
Depending on how similar the CPUs are, the pool is one of the following types:
- Homogeneous pool: A homogeneous resource pool is an aggregate of servers with identical CPUs. CPUs on a server joining a homogeneous resource pool must have the same vendor, model, and features as the CPUs on servers already in the pool.
- Heterogeneous pool: Heterogeneous pool creation is made possible by using technologies in Intel (FlexMigration) and AMD (Extended Migration) CPUs that provide CPU masking or leveling. These features allow a CPU to be configured to appear as providing a different make, model, or feature set than it actually does. These capabilities enable you to create pools of hosts with different CPUs but still safely support live migration. As a result of this feature masking or leveling, you might not get the full performance of your CPUs.
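To see the CPU details that these checks compare, you can read each host's cpu_info field with the xe CLI. A minimal sketch (the host UUID is a placeholder):

```
# Show the CPU details, including vendor and feature flags, that are
# compared when a host joins a pool.
xe host-param-get uuid=<host-uuid> param-name=cpu_info

# Read a single key from the map, for example the CPU vendor.
xe host-param-get uuid=<host-uuid> param-name=cpu_info param-key=vendor
```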
Joining host requirements
XenServer checks that the following conditions are true for the host joining the pool:
- The joining host is running the same version of XenServer, at the same patch level, as the hosts already in the pool.
- The joining host is not a member of an existing resource pool.
- The joining host has no shared storage configured.
- The joining host is not hosting any running or suspended VMs.
- No active operations are in progress on the VMs on the joining host, such as a VM shutting down or being exported.
- The clock on the joining host is synchronized to the same time as the pool coordinator (for example, by using NTP).
- The management interface of the joining host is not bonded. You can configure the management interface when the host successfully joins the pool.
- The joining host’s management IP address is static, either configured on the host itself or by using an appropriate configuration on your DHCP server.
- The joining host’s management interface is on the same tagged VLAN as that of the resource pool.
- The joining host is configured with the same supplemental packs at the same revision as the hosts already in the pool.
- The joining host has the same XenServer license as the hosts already in the pool. You can change the license of any pool member after it joins the pool. The host with the lowest license determines the features available to all members in the pool.
- The joining host is in the same site as the pool and connected by a low-latency network.
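When a host meets all of these conditions, you can join it to the pool by running the xe pool-join command from the joining host's console. A minimal sketch (the coordinator address and password are placeholders):

```
# Run on the joining host: authenticate against the pool coordinator
# and join its pool.
xe pool-join master-address=<coordinator-address> master-username=root master-password=<password>

# For a heterogeneous pool (different CPU models from the same vendor),
# add --force to override the strict CPU check.
xe pool-join master-address=<coordinator-address> master-username=root master-password=<password> --force
```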
Storage requirements
Shared storage used by the resource pool has the following requirements:
- Servers providing shared NFS or iSCSI storage for the pool must have a static IP address or a static DHCP lease.
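For example, a shared NFS SR can be created for the pool with the xe CLI. This is a minimal sketch; the host UUID, server address, and export path are placeholders, and the server address must correspond to the static IP address or static DHCP lease mentioned above:

```
# Create a pool-wide shared NFS SR on the given host.
xe sr-create host-uuid=<host-uuid> content-type=user type=nfs shared=true \
    name-label="Shared NFS storage" \
    device-config:server=<nfs-server-address> \
    device-config:serverpath=/export/vms
```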
Recommended pool size
The listed configuration limit of 64 is the maximum number of hosts that we support in a pool. However, it is not a recommended pool size: for most workloads, it is not optimal from a management or performance point of view. Consider the following factors when deciding on the best size for your deployment:
- Management considerations: Most XenServer management is performed at the pool level. A larger pool reduces the amount of management required and enables you to manage more hosts together. However, some management operations, for example applying updates to a pool, can take longer if you have more hosts, as the operation must be performed sequentially on each host in the pool. In this case, you might need a longer maintenance window to complete certain operations in a large pool than you need for multiple smaller pools.
- Resource sharing: A XenServer pool typically shares resources such as storage repositories between hosts. A larger pool allows more hosts to share resources, which can have benefits. For example, in your Citrix Virtual Apps and Desktops environment you can have more hosts sharing a gold image, rather than having to create multiple copies on different pools. However, the particular devices that you choose for your shared resources might have specific performance considerations that make it better to use smaller groupings of hosts.
- Control plane performance: In a XenServer pool, all operations are managed by the pool coordinator. The load on the toolstack of this host increases as you add more hosts to the pool: there are more background activities from each host and the expected number of concurrent operations increases. As the load on the toolstack increases, the time taken for each operation is likely to increase. As a result, one large pool can perform noticeably slower than two smaller pools.
- Fault isolation: If XenServer or another component that is critical to the pool (for example, a storage device) has a problem, in a larger pool this might have a greater impact on your workloads than if that workload is split over multiple smaller pools.
- GFS2 storage: If you use GFS2 storage for thin provisioning on block storage, XenServer supports a maximum of 16 hosts. This is due to limitations in the GFS2 implementation and the increased communications required between hosts to manage the storage.
- High availability: If you use our high availability functionality to protect your VMs, all hosts in the pool continually monitor each other and communicate their status. As the pool increases in size, the volume of messages that each host has to send and receive as part of this monitoring increases. If there is a high load on the control domain, this can increase the probability of some of these monitoring messages being lost. In extreme scenarios, lost messages can cause hosts to unexpectedly fence for safety. When using the high availability feature, we recommend that you use smaller pools, with a maximum of 16 hosts, to reduce the risk of unexpected fencing (see the sketch after this section).
For most Citrix Virtual Apps and Desktops use cases, we recommend a pool size of 16 hosts, with a maximum size of 32.
In addition to considering these factors when planning your pool size, observe and monitor your pool behavior to determine whether you need to change the pool size in your running environment.
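For example, if you follow the high availability recommendation above and keep the pool at 16 hosts or fewer, enabling the feature takes one command against a shared SR that hosts the storage heartbeat. A minimal sketch (the SR UUID is a placeholder):

```
# List shared SRs and choose one to host the high availability
# storage heartbeat.
xe sr-list shared=true params=uuid,name-label

# Enable high availability on the pool.
xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>
```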
Communicate with XenServer hosts and resource pools
TLS
XenServer uses the TLS 1.2 protocol to encrypt management API traffic. Any communication between XenServer and management API clients (or appliances) uses the TLS 1.2 protocol.
Important:
We do not support customer modifications to the cryptographic functionality of the product.
XenServer uses the following cipher suites:
- ECDHE-RSA-AES256-GCM-SHA384
- ECDHE-RSA-AES128-GCM-SHA256
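To confirm from a client machine which protocol and cipher suite a host negotiates, you can use a standard tool such as openssl s_client. A minimal sketch (the host address is a placeholder):

```
# Perform a TLS 1.2 handshake against the management interface
# (port 443) and report the negotiated protocol and cipher.
openssl s_client -connect <host-address>:443 -tls1_2 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
```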
SSH
When using an SSH client to connect directly to the XenServer host, the following algorithms can be used:
Ciphers:
- aes128-ctr
- aes256-ctr
- aes128-gcm@openssh.com
- aes256-gcm@openssh.com
MACs:
- hmac-sha2-256
- hmac-sha2-512
- hmac-sha1
KexAlgorithms:
- curve25519-sha256
- ecdh-sha2-nistp256
- ecdh-sha2-nistp384
- ecdh-sha2-nistp521
- diffie-hellman-group14-sha1
HostKeyAlgorithms:
- ecdsa-sha2-nistp256
- ecdsa-sha2-nistp384
- ecdsa-sha2-nistp521
- ssh-ed25519
- ssh-rsa
Important:
We do not support customer modifications to the cryptographic functionality of the product. However, if you want to disable SSH access to a XenServer host, see Disable SSH access.
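If your security policy requires pinning the SSH client to particular algorithms from these lists, you can request them with standard OpenSSH options. A minimal sketch (the host address is a placeholder):

```
# Connect to the host's control domain, restricting the client to
# specific algorithms advertised by the host. With an AEAD cipher
# such as aes256-gcm@openssh.com, the MACs setting is not used.
ssh -o Ciphers=aes256-gcm@openssh.com \
    -o KexAlgorithms=curve25519-sha256 \
    -o HostKeyAlgorithms=ssh-ed25519 \
    -o MACs=hmac-sha2-256 \
    root@<host-address>
```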
What happens when a host joins a resource pool?
When a new host joins a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:
- VM, local, and remote storage configuration are added to the pool-wide database. This configuration is applied to the joining host in the pool unless you explicitly make the resources shared after the host joins the pool.
- The joining host inherits existing shared storage repositories in the pool. Appropriate PBD records are created so that the new host can access existing shared storage automatically.
- Networking information is partially inherited by the joining host: the structural details of NICs, VLANs, and bonded interfaces are all inherited, but policy information is not. This policy information, which must be reconfigured, includes:
  - The IP addresses of the management NICs, which are preserved from the original configuration.
  - The location of the management interface, which remains the same as the original configuration. For example, if the other pool hosts have management interfaces on a bonded interface, the joining host must be migrated to the bond after joining.
  - Dedicated storage NICs, which must be reassigned to the joining host from XenCenter or the CLI, and the PBDs replugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC works only when this is correctly configured. For more information on how to dedicate a storage NIC from the CLI, see Manage networking. A sample reconfiguration follows this list.
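For example, to assign an IP address to the joining host's dedicated storage NIC and replug its storage connections, you can use the xe CLI. This is a minimal sketch; all UUIDs and addresses are placeholders:

```
# Find the PIF (physical interface) record for the storage NIC on the
# joining host.
xe pif-list host-uuid=<host-uuid> params=uuid,device,IP

# Give that PIF a static IP address for storage traffic.
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.0.2.50 netmask=255.255.255.0

# Replug the host's connection (PBD) to the shared SR so that storage
# traffic flows over the reconfigured NIC.
xe pbd-list host-uuid=<host-uuid> params=uuid,sr-name-label
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>
```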