Configuration limits
Use the following configuration limits as a guideline when selecting and configuring your virtual and physical environment for Citrix Hypervisor. These tested and recommended configuration limits are fully supported for Citrix Hypervisor.
- Virtual machine limits
- Citrix Hypervisor server limits
- Resource pool limits
Factors such as hardware and environment can affect the limitations listed below. More information about supported hardware can be found on the Hardware Compatibility List. Consult your hardware manufacturers’ documented limits to ensure that you do not exceed the supported configuration limits for your environment.
Virtual machine (VM) limits
Item | Limit |
---|---|
Compute | |
Virtual CPUs per VM (Linux) | 32 (see note 1) |
Virtual CPUs per VM (Windows) | 32 |
Memory | |
RAM per VM | 1.5 TiB (see note 2) |
Storage | |
Virtual Disk Images (VDI) (including CD-ROM) per VM | 255 (see note 3) |
Virtual CD-ROM drives per VM | 1 |
Virtual Disk Size (NFS) | 2040 GiB |
Virtual Disk Size (LVM) | 2040 GiB |
Virtual Disk Size (GFS2) | 16 TiB |
Networking | |
Virtual NICs per VM | 7 (see note 4) |
Graphics Capability | |
vGPUs per VM | 8 |
Passed through GPUs per VM | 1 |
Devices | |
Pass-through USB devices | 6 |
Notes:
1. Consult your guest OS documentation to ensure that you do not exceed the supported limits.

2. The maximum amount of physical memory addressable by your operating system varies. Setting the memory to a level greater than the operating system supported limit may lead to performance issues within your guest. Some 32-bit Windows operating systems can support more than 4 GiB of RAM through use of the physical address extension (PAE) mode. For more information, see your guest operating system documentation and Guest operating system support. A CLI sketch for checking and adjusting a VM's vCPU and memory settings follows these notes.

3. The maximum number of VDIs supported depends on the guest operating system. Consult your guest operating system documentation to ensure that you do not exceed the supported limits.

4. Several guest operating systems have a lower limit; other guests require installation of the XenServer VM Tools to achieve this limit.
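The per-VM compute and memory values can be checked and adjusted from the host CLI. The following is a minimal sketch using the standard `xe` commands; the VM name `example-vm`, the UUID placeholder, and the specific vCPU and memory values are illustrative only, and the VM must be shut down before its limits are changed.

```
# Look up the VM's UUID by its name label (example-vm is a placeholder).
xe vm-list name-label=example-vm params=uuid --minimal

# Review the current vCPU and memory configuration.
xe vm-param-list uuid=<vm-uuid> | grep -E 'VCPUs-max|VCPUs-at-startup|memory-static-max'

# Keep the vCPU count within the 32 vCPUs-per-VM limit and your guest OS limit.
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8

# Keep memory within the 1.5 TiB per-VM limit and the guest OS supported limit.
xe vm-memory-limits-set uuid=<vm-uuid> static-min=2GiB dynamic-min=8GiB dynamic-max=8GiB static-max=8GiB
```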
Citrix Hypervisor server limits
Item | Limit |
---|---|
Compute | |
Logical processors per host | 512 (see note 1) |
Concurrent VMs per host | 1000 (see note 2) |
Concurrent protected VMs per host with HA enabled | 500 |
Virtual GPU VMs per host | 128 (see note 3) |
Memory | |
RAM per host | 6 TB |
Storage | |
Concurrent active virtual disks per host | 2048 (see note 4) |
Storage repositories per host (NFS) | 400 |
Networking | |
Physical NICs per host | 16 |
Physical NICs per network bond | 4 |
Virtual NICs per host | 512 |
VLANs per host | 800 |
Network Bonds per host | 4 |
Graphics Capability | |
GPUs per host | 8 (see note 5) |
Notes:
1. The maximum number of logical physical processors supported differs by CPU. For more information, see the Hardware Compatibility List.

2. The maximum number of VMs per host supported depends on VM workload, system load, network configuration, and certain environmental factors. We reserve the right to determine what specific environmental factors affect the maximum limit at which a system can function. For larger pools (over 32 hosts), we recommend allocating at least 8 GB RAM to the Control Domain (Dom0). For systems running over 500 VMs or when using the PVS Accelerator, we recommend allocating at least 16 GB RAM to the Control Domain. For information about configuring Dom0 memory, see CTX220763 - How to change dom0 Memory. A CLI sketch for checking and raising Dom0 memory follows these notes.

3. For NVIDIA vGPU, 128 vGPU-accelerated VMs per host with 4xM60 cards (4x32=128 VMs) or 2xM10 cards (2x64=128 VMs). For the current supported limits, see the Hardware Compatibility List.

4. The number of concurrent active virtual disks per host is also constrained by the number of SRs attached to the host and the number of attached VDIs allowed for each SR (600). For more information, see the "Attached VDIs per SR" entry in the Resource pool limits table.

5. This figure might change. For the current supported limits, see the Hardware Compatibility List.
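As context for note 2, the Control Domain memory can be inspected and raised from the host console. The following is a minimal sketch; the `/opt/xensource/libexec/xen-cmdline` utility and the 16 GiB value are assumptions based on the sizing guidance above, so confirm the exact procedure against CTX220763 before applying it. The new setting takes effect only after the host is rebooted.

```
# Show how much memory is currently assigned to the Control Domain (dom0).
xe vm-list is-control-domain=true params=name-label,memory-static-max

# Raise the dom0 allocation, for example to 16 GiB for hosts running more than
# 500 VMs or using the PVS Accelerator (adjust the value to your environment).
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=16384M,max:16384M

# Reboot the host for the new dom0 memory setting to take effect.
```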
Resource pool limits
Item | Limit |
---|---|
Compute | |
VMs per resource pool | 2400 |
Hosts per resource pool | 64 (see note 1 and 2) |
Networking | |
VLANs per resource pool | 800 |
Disaster recovery | |
Integrated site recovery storage repositories per resource pool | 8 |
Storage | |
Paths to a LUN | 16 |
Multipathed LUNs per host | 150 (see note 3) |
Multipathed LUNs per host (used by storage repositories) | 150 (see note 3) |
VDIs per SR (NFS, SMB, EXT, GFS2) | 20000 (see note 4) |
VDIs per SR (LVM) | 1000 (see note 4) |
Attached VDIs per SR (all types) | 600 |
Storage repositories per pool (NFS) | 400 |
Storage repositories per pool (GFS2) | 62 |
Maximum file system size (GFS2) | 100 TiB |
Storage live migration | |
(non-CDROM) VDIs per VM | 6 |
Snapshots per VM | 1 |
Concurrent transfers | 3 |
XenCenter | |
Concurrent operations per pool | 25 |
Notes:
1. Clustered pools that use GFS2 storage support a maximum of 16 hosts in the resource pool.

2. If you apply hotfix XS82ECU1074 to a pool with more than 32 hosts, the hosts might disconnect intermittently due to tighter resource limits introduced by the hotfix. Future updates are planned to resolve this issue. For pools larger than 32 hosts, either avoid installing this hotfix or apply the following workaround before installation:

   1. Add the following entries to the `/etc/xapi.conf` file on each host in the pool:

      ```
      conn_limit_unix = 800
      conn_limit_tcp = 800
      ```

   2. Restart the toolstack.

   A scripted version of this workaround is sketched after these notes.

3. When HA is enabled, we recommend increasing the default timeout to at least 120 seconds when more than 30 multipathed LUNs are present on a host. For information about increasing the HA timeout, see Configure high availability timeout.

4. The VDI count limit includes VDIs used for internal purposes such as snapshot management. Each snapshot is internally represented as 2 VDIs: one is the snapshot and one is the shared parent with the active writeable VDI.
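If you prefer to script the workaround from note 2, a minimal sketch is shown below. It assumes a root shell on each host in the pool and uses the host's `xe-toolstack-restart` helper; treat it as illustrative and follow your own change-management process.

```
# Append the higher connection limits to /etc/xapi.conf on this host,
# unless they are already present.
grep -q '^conn_limit_unix' /etc/xapi.conf || cat >> /etc/xapi.conf <<'EOF'
conn_limit_unix = 800
conn_limit_tcp = 800
EOF

# Restart the toolstack so xapi picks up the new limits. Running VMs are not
# affected, but management operations are briefly interrupted.
xe-toolstack-restart
```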