Create a storage repository
You can use the New Storage Repository wizard in XenCenter to create storage repositories (SRs). The wizard guides you through the configuration steps. Alternatively, use the CLI and the `sr-create` command. The `sr-create` command creates an SR on the storage substrate (potentially destroying any existing data). It also creates the SR API object and a corresponding PBD record, enabling VMs to use the storage. On successful creation of the SR, the PBD is automatically plugged. If the SR's `shared=true` flag is set, a PBD record is created and plugged for every XenServer host in the resource pool.
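To verify that the SR attached successfully, you can check that its PBDs are plugged. A minimal sketch, assuming the SR UUID that `sr-create` prints:

```
# Confirm that a PBD exists and is plugged for each host
xe pbd-list sr-uuid=<sr_uuid> params=uuid,host-uuid,currently-attached
```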
If you are creating an SR for IP-based storage (iSCSI or NFS), you can configure one of the following as the storage network: the NIC that handles the management traffic or a new NIC for the storage traffic. To assign an IP address to a NIC, see Configure a dedicated storage NIC.
All XenServer SR types support VDI resize, fast cloning, and snapshot. SRs based on the LVM SR type (local, iSCSI, or HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types (EXT3/EXT4, NFS, GFS2) support full thin provisioning, including for virtual disks that are active.
Warnings:
- When VHD VDIs are not attached to a VM, for example for a VDI snapshot, they are stored as thinly provisioned by default. If you attempt to reattach the VDI, ensure that there is sufficient disk space available for the VDI to become thickly provisioned. VDI clones are thickly provisioned.
- XenServer does not support snapshots at the external SAN-level of a LUN for any SR type.
- Do not attempt to create an SR where the LUN ID of the destination LUN is greater than 255. Ensure that your target exposes the LUN with a LUN ID that is less than or equal to 255 before using this LUN to create an SR.
- If you use thin provisioning on a file-based SR, ensure that you monitor the free space on your SR. If the SR usage grows to 100%, further writes from VMs fail. These failed writes can cause the VM to freeze or crash.
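As a quick way to watch for the condition described in the last warning, you can compare each SR's physical utilisation against its physical size. A minimal sketch using standard `xe` parameters:

```
# Values are reported in bytes; an SR nearing physical-size is nearly full
xe sr-list params=name-label,physical-utilisation,physical-size
```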
The maximum supported VDI sizes are:

| Storage Repository Format | Maximum VDI size |
|---|---|
| EXT3/EXT4 | 2 TiB |
| GFS2 (with iSCSI or HBA) | 16 TiB |
| XFS | 16 TiB |
| LVM | 2 TiB |
| LVMoFCOE (deprecated) | 2 TiB |
| LVMoHBA | 2 TiB |
| LVMoiSCSI | 2 TiB |
| NFS | 2 TiB |
| SMB | 2 TiB |
Local LVM
The Local LVM type presents disks within a locally attached Volume Group.
By default, XenServer uses the local disk on the physical host on which it is installed. The Linux Logical Volume Manager (LVM) is used to manage VM storage. A VDI is implemented in VHD format in an LVM logical volume of the specified size.
Note:
The block size of an LVM LUN must be 512 bytes. To use storage with 4 KB physical blocks, the storage must also support emulation of 512 byte allocation blocks (the logical block size must be 512 bytes).
LVM performance considerations
The snapshot and fast clone functionality for LVM-based SRs comes with an inherent performance overhead. When optimal performance is required, XenServer supports creation of VDIs in the raw format in addition to the default VHD format. The XenServer snapshot functionality is not supported on raw VDIs.
Warning:
Do not try to snapshot a VM that has `type=raw` disks attached. This action can result in a partial snapshot being created. In this situation, you can identify the orphan snapshot VDIs by checking the `snapshot-of` field and then deleting them.
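The following sketch shows both operations mentioned above: creating a raw VDI and listing snapshot VDIs with their `snapshot-of` parents. The SR UUID, size, and label are placeholders:

```
# Create a raw-format VDI (no XenServer snapshot support)
xe vdi-create sr-uuid=<sr_uuid> type=user name-label="Raw VDI" \
    virtual-size=20GiB sm-config:type=raw

# List snapshot VDIs and their parents to spot orphans after a
# failed snapshot of a VM with raw disks
xe vdi-list is-a-snapshot=true params=uuid,name-label,snapshot-of
```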
Creating a local LVM SR
An LVM SR is created by default on host install.
Device-config parameters for LVM SRs are:
| Parameter Name | Description | Required? |
|---|---|---|
| `device` | Device name on the local host to use for the SR. You can also provide a comma-separated list of names. | Yes |
To create a local LVM SR on `/dev/sdb`, use the following command:

```
xe sr-create host-uuid=valid_uuid content-type=user \
    name-label="Example Local LVM SR" shared=false \
    device-config:device=/dev/sdb type=lvm
```
Local EXT3/EXT4
Using EXT3/EXT4 enables thin provisioning on local storage. However, the default storage repository type is LVM as it gives consistent write performance and prevents storage over-commit. If you use EXT3/EXT4, you might see reduced performance in the following cases:
- When carrying out VM lifecycle operations such as VM create and suspend/resume
- When creating large files from within the VM
Local disk EXT3/EXT4 SRs must be configured using the XenServer CLI.
Whether a local EXT SR uses EXT3 or EXT4 depends on what version of XenServer created it:
- If you created the local EXT SR on an earlier version of Citrix Hypervisor or XenServer and then upgraded to XenServer 8, it uses EXT3.
- If you created the local EXT SR on XenServer 8, it uses EXT4.
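If you want to confirm which filesystem an existing EXT SR uses, one option is to check its mount on the host. A minimal sketch, assuming shell access to the host and that the attached SR is mounted under `/run/sr-mount/<sr_uuid>` (this path is an assumption, not stated above):

```
# On the XenServer host: show the filesystem type (ext3 or ext4)
# of the plugged SR
mount | grep <sr_uuid>
```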
Note:
The block size of an EXT3/EXT4 disk must be 512 bytes. To use storage with 4 KB physical blocks, the storage must also support emulation of 512 byte allocation blocks (the logical block size must be 512 bytes).
Creating a local EXT4 SR (`ext`)
Device-config parameters for EXT SRs:
| Parameter Name | Description | Required? |
|---|---|---|
| `device` | Device name on the local host to use for the SR. You can also provide a comma-separated list of names. | Yes |
To create a local EXT4 SR on `/dev/sdb`, use the following command:

```
xe sr-create host-uuid=valid_uuid content-type=user \
    name-label="Example Local EXT4 SR" shared=false \
    device-config:device=/dev/sdb type=ext
```
Local XFS
Using XFS enables thin provisioning on local storage. The local XFS type allows you to create local storage devices with 4 KB physical blocks without requiring a logical block size of 512 bytes.
Creating a local XFS SR
Device-config parameters for XFS SRs:
| Parameter Name | Description | Required? |
|---|---|---|
| `device` | Device name on the local host to use for the SR. You can also provide a comma-separated list of names. | Yes |
To create a local XFS SR on `/dev/sdb`, use the following command:

```
xe sr-create host-uuid=valid_uuid content-type=user \
    name-label="Example Local XFS SR" shared=false \
    device-config:device=/dev/sdb type=xfs
```
udev
The udev type represents devices plugged in using the udev device manager as VDIs.
XenServer has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in the physical CD or DVD-ROM drive of the XenServer host. The other is for a USB device plugged into a USB port of the XenServer host. VDIs that represent the media come and go as disks or USB sticks are inserted and removed.
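To inspect these SRs on a host, you can filter the SR list by type. A minimal sketch:

```
# List the udev SRs on this host
xe sr-list type=udev params=uuid,name-label,content-type

# List the VDIs that currently represent inserted media in one of them
xe vdi-list sr-uuid=<udev_sr_uuid> params=uuid,name-label
```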
ISO
The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries.
The following ISO SR types are available:
- `nfs_iso`: The NFS ISO SR type handles CD images stored as files in ISO format available as an NFS share.
- `cifs`: The Windows File Sharing (SMB/CIFS) SR type handles CD images stored as files in ISO format available as a Windows (SMB/CIFS) share.

If you do not specify the storage type to use for the SR, XenServer uses the `location` device-config parameter to decide the type.
Device-config parameters for ISO SRs:
| Parameter Name | Description | Required? |
|---|---|---|
| `location` | Path to the mount. | Yes |
| `type` | Storage type to use for the SR: `cifs` or `nfs_iso`. | No |
| `nfsversion` | For the storage type NFS, the version of the NFS protocol to use: `3`, `4`, `4.0`, or `4.1`. | No |
| `vers` | For the storage type CIFS/SMB, the version of SMB to use: 1.0 or 3.0. The default is 3.0. | No |
| `username` | For the storage type CIFS/SMB, if a username is required for the Windows file server. | No |
| `cifspassword_secret` | (Recommended) For the storage type CIFS/SMB, you can pass a secret instead of a password for the Windows file server. | No |
| `cifspassword` | For the storage type CIFS/SMB, if a password is required for the Windows file server. We recommend you use the `cifspassword_secret` parameter instead. | No |
Note:
When running the `sr-create` command, we recommend that you use the `device-config:cifspassword_secret` argument instead of specifying the password on the command line. For more information, see Secrets.
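A secret can be created with the `xe secret-create` command, which prints the UUID to pass as the secret argument. A minimal sketch with placeholder values:

```
# Create a secret holding the file server password; note the returned UUID
xe secret-create value=<windows_file_server_password>

# Pass the secret UUID instead of a plain-text password, for example:
# device-config:cifspassword_secret=<secret_uuid>
```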
For storage repositories that store a library of ISOs, the `content-type` parameter must be set to `iso`, for example:

```
xe sr-create host-uuid=valid_uuid content-type=iso type=iso name-label="Example ISO SR" \
    device-config:location=<path_to_mount> device-config:type=nfs_iso
```
You can use NFS or SMB to mount the ISO SR. For more information about using these SR types, see NFS and SMB.
We recommend that you use SMB version 3 to mount an ISO SR on a Windows file server. Version 3 is selected by default because it is more secure and robust than SMB version 1.0. However, you can mount an ISO SR using SMB version 1.0 with the following command:
```
xe sr-create content-type=iso type=iso shared=true device-config:location=<path_to_mount> \
    device-config:username=<username> device-config:cifspassword=<password> \
    device-config:type=cifs device-config:vers=1.0 name-label="Example ISO SR"
```
Software iSCSI support
XenServer supports shared SRs on iSCSI LUNs. iSCSI is supported using the Open-iSCSI software iSCSI initiator or by using a supported iSCSI Host Bus Adapter (HBA). The steps for using iSCSI HBAs are identical to the steps for Fibre Channel HBAs. Both sets of steps are described in Create a Shared LVM over Fibre Channel / Fibre Channel over Ethernet / iSCSI HBA or SAS SR.
Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM). This feature provides the same performance benefits provided by LVM VDIs in the local disk case. Shared iSCSI SRs using the software-based host initiator can support VM agility using live migration: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime.
iSCSI SRs use the entire LUN specified at creation time and cannot span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.
Note:
The block size of an iSCSI LUN must be 512 bytes. To use storage with 4 KB physical blocks, the storage must also support emulation of 512 byte allocation blocks (the logical block size must be 512 bytes).
XenServer host iSCSI configuration
All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these names are called iSCSI Qualified Names, or IQNs.
XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.
iSCSI targets commonly provide access control using iSCSI initiator IQN lists. All iSCSI targets/LUNs that your XenServer host accesses must be configured to allow access by the host’s initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.
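To find the initiator IQN that must be added to the target's access list, you can read it from each host's other-config map (the same key that the command later in this section sets). A minimal sketch:

```
# Display the iSCSI initiator IQN of a host
xe host-param-get uuid=<host_uuid> param-name=other-config param-key=iscsi_iqn
```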
Note:
iSCSI targets that do not provide access control typically default to restricting LUN access to a single initiator to ensure data integrity. If an iSCSI LUN is used as a shared SR across multiple hosts in a pool, ensure that multi-initiator access is enabled for the specified LUN.
The XenServer host IQN value can be adjusted using XenCenter, or using the CLI with the following command when using the iSCSI software initiator:
```
xe host-param-set uuid=valid_host_id other-config:iscsi_iqn=new_initiator_iqn
```
Warning:
- Each iSCSI target and initiator must have a unique IQN. If a non-unique IQN identifier is used, data corruption or denial of LUN access can occur.
- Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.
Software FCoE storage (deprecated)
Software FCoE provides a standard framework to which hardware vendors can plug in their FCoE-capable NICs and get the same benefits as hardware-based FCoE. This feature eliminates the need to use expensive HBAs.
Note:
Software FCoE is deprecated and will be removed in a future release.
Before you create a software FCoE storage, manually complete the configuration required to expose a LUN to the host. This configuration includes configuring the FCoE fabric and allocating LUNs to your SAN’s public world wide name (PWWN). After you complete this configuration, the available LUN is mounted to the host’s CNA as a SCSI device. The SCSI device can then be used to access the LUN as if it were a locally attached SCSI device. For information about configuring the physical switch and the array to support FCoE, see the documentation provided by the vendor.
Note:
Software FCoE can be used with Open vSwitch and Linux bridge as the network back-end.
Create a Software FCoE SR
Before creating a software FCoE SR, ensure that FCoE-capable NICs are attached to the host.
Device-config parameters for FCoE SRs are:
| Parameter Name | Description | Required? |
|---|---|---|
| `SCSIid` | The SCSI bus ID of the destination LUN | Yes |
Run the following command to create a shared FCoE SR:
```
xe sr-create type=lvmofcoe \
    name-label="FCoE SR" shared=true device-config:SCSIid=SCSI_id
```
Hardware host bus adapters (HBAs)
This section covers various operations required to manage SAS, Fibre Channel, and iSCSI HBAs.
Sample QLogic iSCSI HBA setup
For details on configuring QLogic Fibre Channel and iSCSI HBAs, see the Cavium website.
Once the HBA is physically installed into the XenServer host, use the following steps to configure the HBA:
1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the appropriate values if using static IP addressing or a multi-port HBA.

   ```
   /opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0
   ```

2. Add a persistent iSCSI target to port 0 of the HBA.

   ```
   /opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 iscsi_target_ip_address
   ```

3. Use the `xe sr-probe` command to force a rescan of the HBA controller and display available LUNs. For more information, see Probe an SR and Create a Shared LVM over Fibre Channel / Fibre Channel over Ethernet / iSCSI HBA or SAS SR.
Remove HBA-based SAS, FC, or iSCSI device entries
Note:
This step is not required. We recommend that only power users perform this process if it is necessary.
Each HBA-based LUN has a corresponding global device path entry under `/dev/disk/by-scsibus` in the format `<SCSIid>-<adapter>:<bus>:<target>:<lun>` and a standard device path under `/dev`. To remove the device entries for LUNs no longer in use as SRs, use the following steps:

1. Use `sr-forget` or `sr-destroy` as appropriate to remove the SR from the XenServer host database. See Remove SRs for details.

2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.

3. Use the `sr-probe` command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to the LUN to be removed. For more information, see Probe an SR.

4. Remove the device entries with the following command:

   ```
   echo "1" > /sys/class/scsi_device/adapter:bus:target:lun/device/delete
   ```
Warning:
Make sure that you are certain which LUN you are removing. Accidentally removing a LUN required for host operation, such as the boot or root device, renders the host unusable.
Shared LVM storage
The Shared LVM type represents disks as Logical Volumes within a Volume Group created on an iSCSI, FC, or SAS LUN.
Note:
The block size of an iSCSI LUN must be 512 bytes. To use storage with 4 KB physical blocks, the storage must also support emulation of 512 byte allocation blocks (the logical block size must be 512 bytes).
Create a shared LVM over iSCSI SR by using the Software iSCSI initiator
Device-config parameters for LVMoiSCSI SRs:
| Parameter Name | Description | Required? |
|---|---|---|
| `target` | The IP address or host name of the iSCSI target on the SAN that hosts the SR. This can also be a comma-separated list of values to connect to multiple targets. | Yes |
| `targetIQN` | The iSCSI Qualified Name (IQN) of the target on the iSCSI SAN that hosts the SR, or `*` to connect to all IQNs. | Yes |
| `SCSIid` | The SCSI bus ID of the destination LUN | Yes |
| `multihomed` | Enable multi-homing to this target | No (defaults to the same value as `host.other_config:multipathing`) |
| `chapuser` | The user name to be used for CHAP authentication | No |
| `chappassword_secret` | (Recommended) Secret ID for the password to be used for CHAP authentication. Pass a secret instead of a password. | No |
| `chappassword` | The password to be used for CHAP authentication. We recommend you use the `chappassword_secret` parameter instead. | No |
| `port` | The network port number on which to query the target | No |
| `usediscoverynumber` | The specific iSCSI record index to use | No |
| `incoming_chapuser` | The user name that the iSCSI filter uses to authenticate against the host | No |
| `incoming_chappassword_secret` | (Recommended) Secret ID for the password that the iSCSI filter uses to authenticate against the host. | No |
| `incoming_chappassword` | The password that the iSCSI filter uses to authenticate against the host. We recommend you use the `incoming_chappassword_secret` parameter instead. | No |
Note:
When running the `sr-create` command, we recommend that you use the `device-config:chappassword_secret` argument instead of specifying the password on the command line. For more information, see Secrets.
To create a shared LVMoiSCSI SR on a specific LUN of an iSCSI target, use the following command.
```
xe sr-create host-uuid=valid_uuid content-type=user \
    name-label="Example shared LVM over iSCSI SR" shared=true \
    device-config:target=target_ip device-config:targetIQN=target_iqn \
    device-config:SCSIid=scsi_id \
    type=lvmoiscsi
```
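If you do not yet know the SCSI ID of the LUN, probing the target first returns XML that lists each LUN together with its SCSIid. A minimal sketch against the same placeholder target:

```
# Probe the iSCSI target; the output lists available LUNs and SCSI IDs
xe sr-probe type=lvmoiscsi \
    device-config:target=target_ip \
    device-config:targetIQN=target_iqn
```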
Create a Shared LVM over Fibre Channel / Fibre Channel over Ethernet / iSCSI HBA or SAS SR
SRs of type LVMoHBA can be created and managed using the xe CLI or XenCenter.
Device-config parameters for LVMoHBA SRs:
| Parameter name | Description | Required? |
|---|---|---|
| `SCSIid` | Device SCSI ID | Yes |
To create a shared LVMoHBA SR, perform the following steps on each host in the pool:
1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN equipment in use. For more information, see your SAN documentation.

2. If necessary, use the HBA CLI included in the XenServer host to configure the HBA:

   - Emulex: `/bin/sbin/ocmanager`
   - QLogic FC: `/opt/QLogic_Corporation/SANsurferCLI`
   - QLogic iSCSI: `/opt/QLogic_Corporation/SANsurferiCLI`

   For an example of QLogic iSCSI HBA configuration, see Hardware host bus adapters (HBAs) in the previous section. For more information on Fibre Channel and iSCSI HBAs, see the Broadcom and Cavium websites.

3. Use the `sr-probe` command to determine the global device path of the HBA LUN. The `sr-probe` command forces a rescan of HBAs installed in the system to detect any new LUNs that have been zoned to the host. The command returns a list of properties for each LUN found. Specify the `host-uuid` parameter to ensure that the probe occurs on the desired host.

   The global device path returned as the `<path>` property is common across all hosts in the pool. Therefore, this path must be used as the value for the `device-config:device` parameter when creating the SR.

   If multiple LUNs are present, use the vendor, LUN size, LUN serial number, or the SCSI ID from the `<path>` property to identify the desired LUN.

   ```
   xe sr-probe type=lvmohba \
       host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31
   Error code: SR_BACKEND_FAILURE_90
   Error parameters: , The request is missing the device parameter, \
   <?xml version="1.0" ?>
   <Devlist>
       <BlockDevice>
           <path> /dev/disk/by-id/scsi-360a9800068666949673446387665336f </path>
           <vendor> HITACHI </vendor>
           <serial> 730157980002 </serial>
           <size> 80530636800 </size>
           <adapter> 4 </adapter>
           <channel> 0 </channel>
           <id> 4 </id>
           <lun> 2 </lun>
           <hba> qla2xxx </hba>
       </BlockDevice>
       <Adapter>
           <host> Host4 </host>
           <name> qla2xxx </name>
           <manufacturer> QLogic HBA Driver </manufacturer>
           <id> 4 </id>
       </Adapter>
   </Devlist>
   ```

4. On the pool coordinator, create the SR. Specify the global device path returned in the `<path>` property from `sr-probe`. PBDs are created and plugged for each host in the pool automatically.

   ```
   xe sr-create host-uuid=valid_uuid \
       content-type=user \
       name-label="Example shared LVM over HBA SR" shared=true \
       device-config:SCSIid=device_scsi_id type=lvmohba
   ```
Note:
You can use the XenCenter Repair Storage Repository function to retry the PBD creation and plugging portions of the `sr-create` operation. This function can be valuable in cases where the LUN zoning was incorrect for one or more hosts in a pool when the SR was created. Correct the zoning for the affected hosts and use the Repair Storage Repository function instead of removing and re-creating the SR.
Thin-provisioned shared GFS2 block storage
Thin provisioning better utilizes the available storage by allocating disk storage space to VDIs as data is written to the virtual disk, rather than allocating the full virtual size of the VDI in advance. Thin provisioning enables you to significantly reduce the amount of space required on a shared storage array, and with it your total cost of ownership (TCO).
Thin provisioning for shared block storage is of particular interest in the following cases:
- You want increased space efficiency. Images are sparsely allocated rather than thickly allocated.
- You want to reduce the number of I/O operations per second on your storage array. The GFS2 SR is the first SR type to support storage read caching on shared block storage.
- You use a common base image for multiple virtual machines. The images of individual VMs will then typically utilize even less space.
- You use snapshots. Each snapshot is an image and each image is now sparse.
- Your storage does not support NFS and only supports block storage. If your storage supports NFS, we recommend you use NFS instead of GFS2.
- You want to create VDIs that are greater than 2 TiB in size. The GFS2 SR supports VDIs up to 16 TiB in size.
Note:
We recommend that you do not use a GFS2 SR with a VLAN due to a known issue where you cannot add or remove hosts on a clustered pool if the cluster network is on a non-management VLAN.
The shared GFS2 SR type creates a GFS2 filesystem on an iSCSI or HBA LUN. VDIs are stored in the GFS2 SR as files in the QCOW2 image format.
For more information about using GFS2 storage, see Thin-provisioned shared GFS2 block storage.
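As an illustration only (the authoritative parameters are in the GFS2 documentation linked above), creating a GFS2 SR on an iSCSI LUN follows the same pattern as the other block-based types. All values below are placeholders, and clustering must already be enabled on the pool:

```
# Sketch: create a thin-provisioned, clustered GFS2 SR on an iSCSI LUN
xe sr-create type=gfs2 name-label="Example GFS2 SR" shared=true \
    device-config:provider=iscsi \
    device-config:target=target_ip \
    device-config:targetIQN=target_iqn \
    device-config:SCSIid=scsi_id
```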
NFS and SMB
Shares on NFS servers (that support any version of NFSv4 or NFSv3) or on SMB servers (that support SMB 3) can be used immediately as an SR for virtual disks. VDIs are stored in the Microsoft VHD format only. Additionally, as these SRs can be shared, VDIs stored on shared SRs allow:

- VMs to be started on any XenServer host in a resource pool
- VMs to be migrated between XenServer hosts in a resource pool using live migration (without noticeable downtime)
Important:

- Support for SMB3 is limited to the ability to connect to a share using the SMB 3 protocol. Extra features like Transparent Failover depend on feature availability in the upstream Linux kernel and are not supported in XenServer 8.
- Clustered SMB is not supported with XenServer.
- For NFSv4, only the authentication type `AUTH_SYS` is supported.
- SMB storage is available for XenServer Premium Edition customers.
- It is highly recommended for both NFS and SMB storage that a dedicated storage network be used, using at least two bonded links, ideally to independent network switches with redundant power supplies.
- When using SMB storage, do not remove the share from the storage before detaching the SMB SR.
VDIs stored on file-based SRs are thinly provisioned. The image file is allocated as the VM writes data into the disk. This approach has the considerable benefit that the VM image files take up only as much space on the storage as is required. For example, if a 100 GB VDI is allocated for a VM and an OS is installed, the VDI file only reflects the size of the OS data written to the disk rather than the entire 100 GB.
VHD files may also be chained, allowing two VDIs to share common data. In cases where a file-based VM is cloned, the resulting VMs share the common on-disk data at the time of cloning. Each VM proceeds to make its own changes in an isolated copy-on-write version of the VDI. This feature allows file-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
Note:
The maximum supported length of VHD chains is 30.
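For example, fast-cloning a VM on a file-based SR produces a clone whose VDIs chain off the original's VHD files. A minimal sketch with placeholder names; the source VM must be shut down (or be a template):

```
# Fast-clone a VM; the clone shares on-disk VHD data with the source
# and writes its own changes to a copy-on-write child VDI
xe vm-clone vm="Source VM" new-name-label="Cloned VM"
```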
File-based SRs and VHD implementations in XenServer assume that they have full control over the SR directory on the file server. Administrators must not modify the contents of the SR directory, as this action can risk corrupting the contents of VDIs.
XenServer has been tuned for enterprise-class storage that uses non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of data protection from failure. XenServer has been tested extensively against Network Appliance FAS2020 and FAS3210 storage, using Data OnTap 7.3 and 8.1.
Warning:
As VDIs on file-based SRs are created as thin provisioned, administrators must ensure that the file-based SRs have enough disk space for all required VDIs. XenServer hosts do not enforce that the space required for VDIs on file-based SRs is present.
Ensure that you monitor the free space on your SR. If the SR usage grows to 100%, further writes from VMs fail. These failed writes can cause the VM to freeze or crash.
Create a shared NFS SR (NFS)
Note:
If you attempt to attach a read-only NFS SR, this action fails with the following error message: “SR_BACKEND_FAILURE_461 - The file system for SR cannot be written to.”
To create an NFS SR, you must provide the hostname or IP address of the NFS server. You can create the SR on any valid destination path; use the `sr-probe` command to display a list of valid destination paths exported by the server.
In scenarios where XenServer is used with lower-end storage, it cautiously waits for all writes to be acknowledged before passing acknowledgments on to VMs. This approach incurs a noticeable performance cost, which can be mitigated by setting the storage to present the SR mount point as an asynchronous mode export. Asynchronous exports acknowledge writes that are not actually on disk. Consider the risks of failure carefully in these situations.
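For reference, asynchronous mode is configured on the NFS server itself rather than in XenServer. A minimal sketch of an `/etc/exports` entry on a Linux NFS server, with a placeholder path and subnet:

```
# /etc/exports on the NFS server: "async" acknowledges writes before
# they reach disk; weigh the data-loss risk on server failure
/export1 192.168.1.0/24(rw,async,no_root_squash)
```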
Note:
The NFS server must be configured to export the specified path to all hosts in the pool. If this configuration is not done, the creation of the SR and the plugging of the PBD record fails.
The XenServer NFS implementation uses TCP by default. If your situation allows, you can configure the implementation to use UDP in scenarios where there may be a performance benefit. To do this configuration, when creating an SR, specify the `device-config` parameter `useUDP=true`.
The following `device-config` parameters are used with NFS SRs:

| Parameter Name | Description | Required? |
|---|---|---|
| `server` | IP address or hostname of the NFS server | Yes |
| `serverpath` | Path, including the NFS mount point, to the NFS server that hosts the SR | Yes |
| `nfsversion` | Specifies the version of NFS to use. If you specify `nfsversion="4"`, the SR uses NFS v4.0, v4.1, or v4.2, depending on what is available. If you want to select a more specific version of NFS, you can specify `nfsversion="4.0"` and so on. Only one value can be specified for `nfsversion`. | No |
| `useUDP` | Configure the SR to use UDP rather than the default TCP. | No |
For example, to create a shared NFS SR on `192.168.1.10:/export1`, using any version 4 of NFS that is made available by the filer, use the following command:

```
xe sr-create content-type=user \
    name-label="shared NFS SR" shared=true \
    device-config:server=192.168.1.10 device-config:serverpath=/export1 type=nfs \
    device-config:nfsversion="4"
```
To create a non-shared NFS SR on `192.168.1.10:/export1`, using specifically NFS version 4.0, run the following command:

```
xe sr-create host-uuid=host_uuid content-type=user \
    name-label="Non-shared NFS SR" \
    device-config:server=192.168.1.10 device-config:serverpath=/export1 type=nfs \
    device-config:nfsversion="4.0"
```
Create a shared SMB SR (SMB)
To create an SMB SR, provide the hostname or IP address of the SMB server, the full path of the exported share, and appropriate credentials.
Device-config parameters for SMB SRs:
| Parameter Name | Description | Required? |
|---|---|---|
| `server` | Full path to share on server | Yes |
| `username` | User account with RW access to share | Optional |
| `password_secret` | (Recommended) Secret ID for the password for the user account, which can be used instead of the password. | Optional |
| `password` | Password for the user account. We recommend that you use the `password_secret` parameter instead. | Optional |
Note:
When running the `sr-create` command, we recommend that you use the `device-config:password_secret` argument instead of specifying the password on the command line. For more information, see Secrets.
For example, to create a shared SMB SR on `192.168.1.10:/share1`, use the following command:

```
xe sr-create content-type=user \
    name-label="Example shared SMB SR" shared=true \
    device-config:server=//192.168.1.10/share1 \
    device-config:username=valid_username device-config:password_secret=valid_password_secret type=smb
```
To create a non-shared SMB SR, run the following command:
```
xe sr-create host-uuid=host_uuid content-type=user \
    name-label="Non-shared SMB SR" \
    device-config:server=//192.168.1.10/share1 \
    device-config:username=valid_username device-config:password_secret=valid_password_secret type=smb
```
LVM over Hardware HBA
The LVM over hardware HBA type represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN that provides, for example, hardware-based iSCSI or FC support.
XenServer hosts support Fibre Channel SANs through Emulex or QLogic host bus adapters (HBAs). All Fibre Channel configuration required to expose a Fibre Channel LUN to the host must be completed manually. This configuration includes storage devices, network devices, and the HBA within the XenServer host. After all FC configuration is complete, the HBA exposes a SCSI device backed by the FC LUN to the host. The SCSI device can then be used to access the FC LUN as if it were a locally attached SCSI device.
Use the `sr-probe` command to list the LUN-backed SCSI devices present on the host. This command forces a scan for new LUN-backed SCSI devices. The path value returned by `sr-probe` for a LUN-backed SCSI device is consistent across all hosts with access to the LUN. Therefore, this value must be used when creating shared SRs accessible by all hosts in a resource pool.
The same features apply to QLogic iSCSI HBAs.
See Create storage repositories for details on creating shared HBA-based FC and iSCSI SRs.
Note:
XenServer support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based LUNs must be mapped to the host and specified for use in an SR. VDIs within the SR are exposed to VMs as standard block devices.
The block size of an LVM over HBA LUN must be 512 bytes. To use storage with 4 KB physical blocks, the storage must also support emulation of 512 byte allocation blocks (the logical block size must be 512 bytes).