Manage storage repositories
This section covers creating storage repository types and making them available to your XenServer host. It also covers various operations required in the ongoing management of Storage Repositories (SRs), including Live VDI Migration.
Create storage repositories
This section explains how to create Storage Repositories (SRs) of different types and make them available to your XenServer host. The examples provided cover creating SRs using the xe CLI. For details on using the New Storage Repository wizard to add SRs using XenCenter, see the XenCenter documentation.
Note:
Local SRs of type lvm, ext, and xfs can only be created using the xe CLI. After creation, you can manage all SR types by either XenCenter or the xe CLI.
There are two basic steps to create a storage repository for use on a host by using the CLI:
1. Probe the SR type to determine values for any required parameters.

2. Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.
These steps differ in detail depending on the type of SR being created. In all examples, the sr-create
command returns the UUID of the created SR if successful.
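For example, the following sketch creates a local LVM SR by using the xe CLI. The host UUID, the device path /dev/sdb, and the name label are placeholder assumptions for illustration:

xe sr-create host-uuid=<host_uuid> content-type=user \
    name-label="Local LVM SR" shared=false type=lvm \
    device-config:device=/dev/sdb

If successful, the command prints the UUID of the new SR, which you can then use with other xe commands such as sr-param-list.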
SRs can be destroyed when no longer in use to free up the physical device. SRs can also be forgotten to detach the SR from one XenServer host and attach it to another. For more information, see Remove SRs in the following section.
Probe an SR
The sr-probe
command can be used in the following ways:
- To identify unknown parameters for use in creating an SR
- To return a list of existing SRs
In both cases, sr-probe works by specifying an SR type and one or more device-config parameters for that SR type. If an incomplete set of parameters is supplied, the sr-probe command returns an error message indicating which parameters are missing and the possible options for the missing parameters. When a complete set of parameters is supplied, a list of existing SRs is returned. All sr-probe output is returned as XML.
For example, a known iSCSI target can be probed by specifying its name or IP address. The set of IQNs available on the target is returned:
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10
Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
<TGT>
<Index>
0
</Index>
<IPAddress>
192.168.1.10
</IPAddress>
<TargetIQN>
iqn.192.168.1.10:filer1
</TargetIQN>
</TGT>
</iscsi-target-iqns>
Probing the same target again and specifying both the name/IP address and desired IQN returns the set of SCSIids
(LUNs) available on the target/IQN.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
device-config:targetIQN=iqn.192.168.1.10:filer1
Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
<LUN>
<vendor>
IET
</vendor>
<LUNid>
0
</LUNid>
<size>
42949672960
</size>
<SCSIid>
149455400000000000000000002000000b70200000f000000
</SCSIid>
</LUN>
</iscsi-target>
Probing the same target and supplying all three parameters returns a list of SRs that exist on the LUN, if any.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
device-config:targetIQN=iqn.192.168.1.10:filer1 \
device-config:SCSIid=149455400000000000000000002000000b70200000f000000
<?xml version="1.0" ?>
<SRlist>
<SR>
<UUID>
3f6e1ebd-8687-0315-f9d3-b02ab3adc4a6
</UUID>
<Devlist>
/dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000
</Devlist>
</SR>
</SRlist>
The following parameters can be probed for each SR type:
| SR type | The device-config parameters, in order of dependency | Can be probed? | Required for sr-create? |
|---|---|---|---|
| lvmoiscsi | target | No | Yes |
| | chapuser | No | No |
| | chappassword | No | No |
| | targetIQN | Yes | Yes |
| | SCSIid | Yes | Yes |
| lvmohba | SCSIid | Yes | Yes |
| lvmofcoe | SCSIid | Yes | Yes |
| nfs | server | No | Yes |
| | serverpath | Yes | Yes |
| smb | server | No | Yes |
| | username | No | No |
| | password | No | No |
| lvm | device | No | Yes |
| ext | device | No | Yes |
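For example, based on the table above, an NFS SR can be probed by supplying only the server address; following the pattern shown for iSCSI, the probe then reports the possible serverpath options. Supplying both the server and serverpath parameters returns any SRs that already exist on that export. The server address and export path below are assumptions for illustration:

xe sr-probe type=nfs device-config:server=192.168.1.20

xe sr-probe type=nfs device-config:server=192.168.1.20 \
    device-config:serverpath=/export/storage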
For information about probing a GFS2 SR, see Create a GFS2 SR.
Remove SRs
A Storage Repository (SR) can be removed either temporarily or permanently.
Detach: Breaks the association between the storage device and the pool or host (PBD Unplug). The SR (and its VDIs) becomes inaccessible. The contents of the VDIs and the meta-information used by VMs to access the VDIs are preserved. Detach can be used when you temporarily take an SR offline, for example, for maintenance. A detached SR can later be reattached.
Forget: Preserves the contents of the SR on the physical disk, but the information that connects a VM to its VDIs is permanently deleted. Forgetting an SR allows you, for example, to reattach it to another XenServer host without removing any of the SR contents.
Destroy: Deletes the contents of the SR from the physical disk.
Note:
When using SMB storage, do not remove the share from the storage before detaching the SMB SR.
For Destroy or Forget, the PBD connected to the SR must be unplugged from the host.
1. Unplug the PBD to detach the SR from the corresponding XenServer host:

   xe pbd-unplug uuid=pbd_uuid

2. Use the sr-destroy command to remove an SR. The command destroys the SR, deletes the SR and corresponding PBD from the XenServer host database, and deletes the SR contents from the physical disk:

   xe sr-destroy uuid=sr_uuid

3. Use the sr-forget command to forget an SR. The command removes the SR and corresponding PBD from the XenServer host database but leaves the actual SR content intact on the physical media:

   xe sr-forget uuid=sr_uuid
Note:
It can take some time for the software object corresponding to the SR to be garbage collected.
Introduce an SR
To reintroduce a previously forgotten SR, create a PBD. Manually plug the PBD to the appropriate XenServer hosts to activate the SR.
The following example introduces an SR of type lvmoiscsi.
1. Probe the existing SR to determine its UUID:

   xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
       device-config:targetIQN=iqn.192.168.1.10:filer1 \
       device-config:SCSIid=149455400000000000000000002000000b70200000f000000

2. Introduce the existing SR UUID returned from the sr-probe command. The UUID of the new SR is returned:

   xe sr-introduce content-type=user name-label="Example Shared LVM over iSCSI SR" \
       shared=true uuid=valid_sr_uuid type=lvmoiscsi

3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:

   xe pbd-create type=lvmoiscsi host-uuid=valid_uuid sr-uuid=valid_sr_uuid \
       device-config:target=192.168.1.10 \
       device-config:targetIQN=iqn.192.168.1.10:filer1 \
       device-config:SCSIid=149455400000000000000000002000000b70200000f000000

4. Plug the PBD to attach the SR:

   xe pbd-plug uuid=pbd_uuid

5. Verify the status of the PBD plug. If successful, the currently-attached property is true:

   xe pbd-list sr-uuid=sr_uuid
Note:
Perform steps 3 through 5 for each host in the resource pool. These steps can also be performed using the Repair Storage Repository function in XenCenter.
Live LUN expansion
To fulfill capacity requirements, you may need to add capacity to the storage array to increase the size of the LUN provisioned to the XenServer host. Live LUN Expansion allows you to increase the size of the LUN without any VM downtime.
After adding more capacity to your storage array, enter the following command:

xe sr-scan sr-uuid=sr_uuid
This command rescans the SR, and any extra capacity is added and made available.
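If you need to look up the SR UUID for this command, a minimal sketch, assuming you know the SR's name label (a placeholder here):

xe sr-list name-label="<sr_name_label>" params=uuid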
This operation is also available in XenCenter. Select the SR to resize, and then click Rescan.
Warnings:
- It is not possible to shrink or truncate LUNs. Reducing the LUN size on the storage array can lead to data loss.
Live VDI migration
Live VDI migration allows the administrator to relocate the VM's Virtual Disk Image (VDI) without shutting down the VM. This feature enables administrative operations such as:
- Moving a VM from cheap local storage to fast, resilient, array-backed storage.
- Moving a VM from a development to production environment.
- Moving between tiers of storage when a VM is limited by storage capacity.
- Performing storage array upgrades.
Limitations and caveats
Live VDI Migration is subject to the following limitations and caveats:
- There must be sufficient disk space available on the target repository.
To move virtual disks by using XenCenter
1. In the Resources pane, select the SR where the Virtual Disk is stored and then click the Storage tab.

2. In the Virtual Disks list, select the Virtual Disk that you would like to move, and then click Move.

3. In the Move Virtual Disk dialog box, select the target SR that you would like to move the VDI to.

   Note:

   Ensure that the SR has sufficient space for another virtual disk: the available space is shown in the list of available SRs.

4. Click Move to move the virtual disk.
For xe CLI reference, see vdi-pool-migrate.
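For example, the following sketch migrates a single virtual disk to another SR while its VM keeps running; both UUIDs are placeholders:

xe vdi-pool-migrate uuid=<vdi_uuid> sr-uuid=<destination_sr_uuid>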
Cold VDI migration between SRs (offline migration)
VDIs associated with a VM can be copied from one SR to another to accommodate maintenance requirements or tiered storage configurations. XenCenter enables you to copy a VM and all of its VDIs to the same or a different SR. A combination of XenCenter and the xe CLI can be used to copy individual VDIs.
For xe CLI reference, see vm-migrate.
Copy all of a VM’s VDIs to a different SR
The XenCenter Copy VM function creates copies of all VDIs for a selected VM on the same or a different SR. The source VM and VDIs are not affected by default. To move the VM to the selected SR rather than creating a copy, select the Remove original VM option in the Copy Virtual Machine dialog box.
- Shut down the VM.
- Within XenCenter, select the VM and then select the VM > Copy VM option.
- Select the desired target SR.
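A comparable CLI sketch, assuming the VM is already shut down; the VM name, copy name, and SR UUID are placeholders:

xe vm-copy vm=<vm_name_label> new-name-label=<copy_name> sr-uuid=<destination_sr_uuid>

Unlike the Remove original VM option in XenCenter, vm-copy always leaves the source VM in place.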
Copy individual VDIs to a different SR
A combination of the xe CLI and XenCenter can be used to copy individual VDIs between SRs.
1. Shut down the VM.

2. Use the xe CLI to identify the UUIDs of the VDIs to be moved. If the VM has a DVD drive, its vdi-uuid is listed as not in database and can be ignored.

   xe vbd-list vm-uuid=valid_vm_uuid

   Note:

   The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather than the VBD UUIDs.

3. In XenCenter, select the VM Storage tab. For each VDI to be moved, select the VDI and click the Detach button. This step can also be done using the vbd-destroy command.

   Note:

   If you use the vbd-destroy command to detach the VDI UUIDs, first check if the VBD has the parameter other-config:owner set to true. Set this parameter to false, as shown in the sketch after this list. Issuing the vbd-destroy command with other-config:owner=true also destroys the associated VDI.

4. Use the vdi-copy command to copy each of the VM VDIs to be moved to the desired SR.

   xe vdi-copy uuid=valid_vdi_uuid sr-uuid=valid_sr_uuid

5. In XenCenter, select the VM Storage tab. Click the Attach button and select the VDIs from the new SR. This step can also be done by using the vbd-create command.

6. To delete the original VDIs, select the Storage tab of the original SR in XenCenter. The original VDIs are listed with an empty value for the VM field. Use the Delete button to delete the VDI.
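The following is a minimal sketch of the other-config:owner check described in the note for step 3; the VBD UUID is a placeholder:

# Check whether other-config:owner is set to true on the VBD
xe vbd-param-list uuid=<vbd_uuid> | grep other-config
# Clear the flag so that vbd-destroy does not also destroy the associated VDI
xe vbd-param-set uuid=<vbd_uuid> other-config:owner=false
# Detach the VDI by destroying the VBD
xe vbd-destroy uuid=<vbd_uuid>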
Convert local Fibre Channel SRs to shared SRs
Use the xe CLI and the XenCenter Repair Storage Repository feature to convert a local FC SR to a shared FC SR:
1. Upgrade all hosts in the resource pool to XenServer 8.

2. Ensure that all hosts in the pool have the SR’s LUN zoned appropriately. See Probe an SR for details on using the sr-probe command to verify that the LUN is present on each host.

3. Convert the SR to shared:

   xe sr-param-set shared=true uuid=local_fc_sr

4. The SR is moved from the host level to the pool level in XenCenter, indicating that it is now shared. The SR is marked with a red exclamation mark to show that it is not currently plugged on all hosts in the pool.

5. Select the SR and then select the Storage > Repair Storage Repository option.

6. Click Repair to create and plug a PBD for each host in the pool.
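To confirm from the CLI that the SR is now attached on every host in the pool, a minimal sketch using a placeholder SR UUID:

xe pbd-list sr-uuid=<sr_uuid> params=host-uuid,currently-attached

Each PBD returned corresponds to one host; currently-attached should be true for all of them once the repair completes.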
Reclaim space for block-based storage on the backing array using discard
You can use space reclamation to free up unused blocks on a thinly provisioned LUN. After the space is released, the storage array can then reuse this reclaimed space.
Note:
Space reclamation is only available on some types of storage arrays. To determine whether your array supports this feature and whether it needs a specific configuration, see the Hardware Compatibility List and your storage vendor specific documentation.
To reclaim the space by using XenCenter:
1. Select the Infrastructure view, and then choose the host or pool connected to the SR.

2. Click the Storage tab.

3. Select the SR from the list, and click Reclaim freed space.

4. Click Yes to confirm the operation.

5. Click Notifications and then Events to view the status of the operation.
For more information, press F1
in XenCenter to access the Online Help.
To reclaim space by using the xe CLI, you can use the following command:
xe host-call-plugin host-uuid=host_uuid \
plugin=trim fn=do_trim args:sr_uuid=sr_uuid
Notes:
- The operation is only available for LVM-based SRs that are based on thinly provisioned LUNs on the array. Local SSDs can also benefit from space reclamation.
- Space reclamation is not required for file-based SRs such as NFS and EXT3/EXT4. The Reclaim Freed Space button is not available in XenCenter for these SR types.
- If you run the space reclamation xe command for a file-based SR or a thick-provisioned LVM-based SR, the command returns an error.
- Space reclamation is an intensive operation and can lead to a degradation in storage array performance. Therefore, only initiate this operation when space reclamation is required on the array. We recommend that you schedule this work outside of peak array demand hours.
Automatically reclaim space when deleting snapshots
When deleting snapshots with XenServer, space allocated on LVM-based SRs is reclaimed automatically and a VM reboot is not required. This operation is known as ‘online coalescing’. Online coalescing applies to all types of SR.
In certain cases, automated space reclamation might be unable to proceed. We recommend that you use the offline coalesce tool in these scenarios:
- Under conditions where a VM's I/O throughput is considerable
- In conditions where space is not being reclaimed after a period
Notes:
- Running the offline coalesce tool incurs some downtime for the VM, due to the suspend/resume operations performed.
- Before running the tool, delete any snapshots and clones you no longer want. The tool reclaims as much space as possible given the remaining snapshots/clones. If you want to reclaim the entire space, delete all snapshots and clones.
- VM disks must be either on shared or local storage for a single host. VMs with disks in both types of storage cannot be coalesced.
Reclaim space by using the offline coalesce tool
1. Enable hidden objects using XenCenter: click View > Hidden objects. In the Resources pane, select the VM for which you want to obtain the UUID. The UUID is displayed in the General tab.

2. In the Resources pane, select the resource pool coordinator (the first host in the list). The General tab displays the UUID. If you are not using a resource pool, select the VM’s host.

3. Open a console on the host and run the following command:

   xe host-call-plugin host-uuid=host-UUID \
       plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=VM-UUID

   For example, if the VM UUID is 9bad4022-2c2d-dee6-abf5-1b6195b1dad5 and the host UUID is b8722062-de95-4d95-9baa-a5fe343898ea, run the following command:

   xe host-call-plugin host-uuid=b8722062-de95-4d95-9baa-a5fe343898ea \
       plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=9bad4022-2c2d-dee6-abf5-1b6195b1dad5

4. This command suspends the VM (unless it is already powered down), initiates the space reclamation process, and then resumes the VM.
Notes:
We recommend that you shut down or suspend the VM manually before running the offline coalesce tool. You can shut down or suspend the VM using either XenCenter or the XenServer CLI. If you run the coalesce tool on a running VM, the tool automatically suspends the VM, performs the required VDI coalesce operations, and resumes the VM. Agile VMs might restart on a different host.
If the Virtual Disk Images (VDIs) to be coalesced are on shared storage, you must run the offline coalesce tool on the pool coordinator.
If the VDIs to be coalesced are on local storage, run the offline coalesce tool on the host to which the local storage is attached.
Working with disk I/O
You can configure the disk I/O scheduler and the disk I/O priority settings to change the performance of your disks.
Note:
The disk I/O capabilities described in this section do not apply to EqualLogic, NetApp, or NFS storage.
Adjust the disk I/O scheduler
For general performance, the default disk scheduler noop
is applied on all new SR types. The noop
scheduler provides the fairest performance for competing VMs accessing the same device.
1. Adjust the disk scheduler by using the following command:

   xe sr-param-set other-config:scheduler=<option> uuid=<sr_uuid>

   The value of <option> can be one of the following terms: noop, cfq, or deadline.

2. Unplug and replug the corresponding PBD for the scheduler parameter to take effect:

   xe pbd-unplug uuid=<pbd_uuid>
   xe pbd-plug uuid=<pbd_uuid>
To apply disk I/O request prioritization, override the default setting and assign the cfq
disk scheduler to the SR.
Virtual disk I/O request prioritization
Virtual disks have optional I/O request priority settings. You can use these settings to prioritize I/O to a particular VM’s disk over others.
Before configuring any disk I/O request priority parameters for a VBD, ensure that the disk scheduler for the SR has been set appropriately. The scheduler parameter must be set to cfq
on the SR and the associated PBD unplugged and replugged. For information about how to adjust the scheduler, see Adjust the disk I/O scheduler.
For shared SRs, where multiple hosts are accessing the same LUN, the priority setting is applied to VBDs accessing the LUN from the same host. These settings are not applied across hosts in the pool.
The host issues a request to the remote storage, but the request prioritization is done by the remote storage.
Setting disk I/O request parameters
These settings can be applied to existing virtual disks by using the xe vbd-param-set
command with the following parameters:
- qos_algorithm_type - This parameter must be set to the value ionice, which is the only algorithm supported for virtual disks.

- qos_algorithm_params - Use this parameter to set key/value pairs. For virtual disks, qos_algorithm_params takes a sched key, and depending on the value, also requires a class key.

  The key qos_algorithm_params:sched can have one of the following values:

  - sched=rt or sched=real-time - This value sets the scheduling parameter to real time priority, which requires a class parameter to set a value.

  - sched=idle - This value sets the scheduling parameter to idle priority, which requires no class parameter to set any value.

  - sched=anything - This value sets the scheduling parameter to best-effort priority, which requires a class parameter to set a value.

  The key qos_algorithm_params:class can have one of the following values:

  - One of the following keywords: highest, high, normal, low, lowest.

  - An integer between 0 and 7, where 7 is the highest priority and 0 is the lowest. For example, I/O requests with a priority of 5 are given priority over I/O requests with a priority of 2.
Example
For example, the following CLI commands set the virtual disk’s VBD to use real time priority 5:
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5
xe sr-param-set uuid=<sr_uuid> other-config:scheduler=cfq
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>