Manage Storage

Configure storage pools automatically (physical deployments only)

Storage pools are the groups of disks, called disk groups, on which you create storage resources. The system can automatically configure storage pools by selecting the appropriate disk groups based on the type and availability of disks in the system. Configure custom storage pools explains how to configure custom storage pools.

Before you create storage resources, you must configure at least one storage pool.

The following table lists the attributes for automatic pool configuration.

Table 1. Automatic pool configuration attributes
Attribute
Description
Target
Type of disk configuration. Value is one of the following:
  • pool - Disks configured in a pool.
  • spares - Disks assigned to storage pools as spares. The number of spares assigned to a pool depends on the disk type and pool type:
    • For Capacity pools, no spare disks are assigned.
    • For Performance and Flash pools, one spare disk is assigned for the first thirty disks, and another spare disk is assigned for each additional group of thirty disks after that.
Name
Name of the pool. The system allocates disks to one or more of the following pools based on the types and characteristics of the disks on the system:
  • Capacity - Storage allocated from near-line (NL) serial attached SCSI (SAS) disks. Provides high-capacity storage, but with lower overall performance than regular SAS and Enterprise Flash Drive (EFD) disks. Use NL-SAS disks to provide extremely economical storage for operations, such as data backup, that do not require high I/O performance.
  • Performance - Storage allocated from SAS disks. Provides medium-performance, medium-capacity storage for applications that require a balance of performance and capacity.
  • Flash - Storage allocated from EFD disks. Provides extremely high performance, but at a relatively high cost per GB of storage. EFDs are most applicable to applications that require high I/O performance and energy efficiency.
Depending on the pool type, the system configures the disks into different RAID groups and assigns disks to pools as spares. The Unisphere online help provides more details about storage pools and spares.
Drives (current)
List of disks currently in the pool.
Drives (new)
List of disks to be added to the pool.
RAID level
RAID level applied.
Stripe length
Comma-separated list of the stripe lengths (number of disks per stripe) used in the pool.
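The spare-assignment rule for Performance and Flash pools described in the table above amounts to one spare per started group of thirty disks. A minimal sketch (illustrative only, not system code):

```shell
# Hedged sketch of the spare-assignment rule for Performance and Flash
# pools: one spare per started group of thirty disks, i.e. ceil(disks / 30).
spares() { echo $(( ($1 + 29) / 30 )); }

spares 12   # → 1
spares 30   # → 1
spares 31   # → 2
```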

Initiate automatic storage pool configuration

Start configuring storage pools automatically. View configuration settings for automatic storage pool creation displays the configuration settings that the system will apply when you run this command.

Format
/stor/config/auto set
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command initiates automatic storage pool configuration:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/auto set
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

View configuration settings for automatic storage pool creation

View the settings for automatic storage pool creation that will be applied to the system. Initiate automatic storage pool configuration explains how to apply these settings to the system.

The show action command explains how to change the output format.
Format
/stor/config/auto show
Example

The following command shows how storage pools and spares will be configured automatically on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/auto show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:   Target           = Pool 
     Name             = Performance
     Drives (current) = 5 x 600GB SAS; 5 x 300GB SAS
     Drives (new)     = 5 x 600GB SAS
     RAID level       = 5
     Stripe length    = 5,9

2:   Target           = Pool 
     Name             = Capacity
     Drives (current) = 10 x 1TB NL-SAS
     Drives (new)     = 2 x 1TB NL-SAS
     RAID level       = 5
     Stripe length    = 5,9

3:   Target           = Pool 
     Name             = Extreme Performance
     Drives (current) = 10 x 100GB EFD
     Drives (new)     = 10 x 100GB EFD
     RAID level       = 1
     Stripe length    = 2

4:   Target           = Spares 
     Name             = Unused / Hot Spare Candidates
     Drives (current) = 1 x 600GB SAS; 1 x 300GB SAS; 1 x 1TB NL-SAS
     Drives (new)     = 1 x 100GB EFD 
     RAID level       =
     Stripe length    =

Configure custom storage pools

Storage pools are the groups of disks on which you create storage resources. Configure storage pools based on the type of storage resource and usage that will be associated with the pool, such as file system storage optimized for database usage. The storage characteristics differ according to the following:

  • Type of disk used to provide the storage.
  • RAID level implemented for the storage.
Before you create storage resources, you must configure at least one storage pool.

Configure storage pools automatically (physical deployments only) explains how to have the system configure storage pools automatically.

The following table lists the attributes for storage pools:

Table 2. Custom storage pool attributes
Attribute
Description
ID
ID of the storage pool.
Name
Name of the storage pool.
Description
Brief description of the storage pool.
Total space
Total storage capacity of the storage pool.
Current allocation
Amount of storage in the storage pool allocated to storage resources.
Remaining space
Amount of storage in the storage pool not allocated to storage resources.
Subscription
For thin provisioning, the total storage space subscribed to the storage pool. All storage pools support both standard and thin-provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin-provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin-provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over-provisioned to support more storage capacity than they actually possess.
The system automatically generates an alert when the total pool usage reaches 85% of the pool's physical capacity. -alertThreshold specifies the alert threshold value.
Subscription percent
For thin provisioning, the percentage of the total space in the storage pool that is subscription storage space.
Alert threshold
Threshold for the system to send an alert when hosts have consumed a specific percentage of the subscription space. Value range is 50 to 85.
Drives
List of disk types in the storage pool, including the number of disks of each type.
Number of drives
Total number of disks in the storage pool.
Number of unused drives
Number of disks in the storage pool that are not being used.
RAID level (physical deployments only)
RAID level of the disks in the storage pool.
Stripe length (physical deployments only)
Number of disks the data is striped across.
Rebalancing
Indicates whether a pool rebalancing is in progress. Value is one of the following:
  • Yes
  • No
Rebalancing progress
Indicates the progress of the pool rebalancing as a percentage.
System defined pool
Indicates whether the system configured the pool automatically. Value is one of the following:
  • Yes
  • No
Health state
Health state of the storage pool. The health state code appears in parentheses. Value is one of the following:
  • Unknown (0) - Health is unknown.
  • OK (5) - Operating normally.
  • OK BUT (7) - Pool has exceeded its user-specified threshold or the system-specified threshold of 85%.
  • Degraded/Warning (10) - Pool is operating, but degraded due to one or more of the following:
    • Pool has exceeded the user-specified threshold.
    • Pool is nearing capacity.
    • Pool is almost full.
    • Pool performance has degraded.
  • Major failure (20) - Dirty cache has made the pool unavailable.
  • Critical failure (25) - Pool is full. To avoid data loss, add more storage to the pool, or create more pools.
  • Non-recoverable error (30) - Two or more disks in the pool have failed, possibly resulting in data loss.
Health details
Additional health information. See Appendix A, Reference, for health information details.
FAST Cache enabled
Indicates whether FAST Cache is enabled on the storage pool. Value is one of the following:
  • Yes
  • No
Protection size used
Quantity of storage used for data protection.
Auto-delete state
Indicates the state of an auto-delete operation on the storage pool. Value is one of the following:
  • Idle
  • Running
  • Could not reach LWM
  • Could not reach HWM
    If the auto-delete operation cannot satisfy the high water mark, and there are snapshots in the storage pool, the auto-delete operation sets the auto-delete state for that watermark to Could not reach HWM, and generates an alert.
  • Failed
Auto-delete paused
Indicates whether an auto-delete operation is paused. Value is one of the following:
  • Yes
  • No
Auto-delete pool full threshold enabled
Indicates whether the system will check the pool full high water mark for auto-delete. Value is one of the following:
  • Yes
  • No
Auto-delete pool full high water mark
The pool full high watermark on the storage pool.
Auto-delete pool full low water mark
The pool full low watermark on the storage pool.
Auto-delete snapshot space used threshold enabled
Indicates whether the system will check the snapshot space used high water mark for auto-delete. Value is one of the following:
  • Yes
  • No
Auto-delete snapshot space used high water mark
High watermark for snapshot space used on the storage pool.
Auto-delete snapshot space used low water mark
Low watermark for snapshot space used on the storage pool.
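The subscription percentage in the table above is simply subscribed space divided by total pool capacity. For illustration, computing it from the raw byte counts shown in the sample show output later in this section:

```shell
# Illustration using figures from the sample show output in this section:
# a 4.5 TB pool with 10 TB of subscribed (thin-provisioned) space.
total=4947802324992        # Total space in bytes (4.5T)
subscribed=10995116277760  # Subscription in bytes (10T)

echo "$(( subscribed * 100 / total ))%"   # → 222%
```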

Configure storage pools

Configure a storage pool.

Format
/stor/config/pool create [-async] -name <value> [-descr <value>] {-diskGroup <value> -drivesNumber <value> [-storProfile <value>] | -disk <value> [-tier <value>]} [-alertThreshold <value>] [-FASTCacheEnabled {yes|no}] [-snapPoolFullThresholdEnabled {yes|no}] [-snapPoolFullHWM <value>] [-snapPoolFullLWM <value>] [-snapSpaceUsedThresholdEnabled {yes|no}] [-snapSpaceUsedHWM <value>] [-snapSpaceUsedLWM <value>]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the storage pool.
-descr
Type a brief description of the storage pool.
-storProfile (physical deployments only)
Type the IDs of the storage profiles, separated by commas, to apply to the storage pool, based on the type of storage resource that will use the pool and the intended usage of the pool. View storage profiles (physical deployments only) explains how to view the IDs of available storage profiles on the system. If this option is not specified, a default RAID configuration is selected for each particular drive type in the selected disk group: NL-SAS (RAID 6 with a stripe length of 8), SAS (RAID 5 with a stripe length of 5), or Flash (RAID 5 with a stripe length of 5).
-diskGroup (physical deployments only)
Type the IDs of the disk groups to use in the storage pool. Specifying disk groups with different disks types causes the creation of a multi-tier storage pool. View disk groups explains how to view the IDs of the disk groups on the system.
-drivesNumber (physical deployments only)
Specify the number of disks from each of the selected disk groups, separated by commas, to use in the storage pool. If this option is specified when -storProfile is not specified, the operation may fail when the -drivesNumber value does not match the default RAID configuration for each drive type in the selected disk group.
-disk (virtual deployments only)
Specify the list of disks, separated by commas, to use in the storage pool. Specified disks must be reliable storage objects that do not require additional protection.
-tier (virtual deployments only)
Specify the list of tiers, separated by commas, to which the disks are assigned. If a tier is omitted, it will be assigned automatically if tiering information for the associated disk is available. Valid values include:
  • capacity
  • performance
  • extreme
-alertThreshold
For thin provisioning, specify the threshold, as a percentage, when the system will alert on the amount of subscription space used. When hosts consume the specified percentage of subscription space, the system sends an alert. Value range is 50% to 85%.
-FASTCacheEnabled
Specify whether to enable FAST Cache on the storage pool. Value is one of the following:
  • Yes
  • No
Default value is Yes.
-snapPoolFullThresholdEnabled
Indicate whether the system should check the pool full high water mark for auto-delete. Value is one of the following:
  • Yes
  • No
Default value is Yes.
-snapPoolFullHWM
Specify the pool full high watermark for the storage pool. Valid values are 1-99. Default value is 95.
-snapPoolFullLWM
Specify the pool full low watermark for the storage pool. Valid values are 0-98. Default value is 85.
-snapSpaceUsedThresholdEnabled
Indicate whether the system should check the snapshot space used high water mark for auto-delete. Value is one of the following:
  • Yes
  • No
Default value is Yes.
-snapSpaceUsedHWM
Specify the snapshot space used high watermark to trigger auto-delete on the storage pool. Valid values are 1-99. Default value is 95.
-snapSpaceUsedLWM
Specify the snapshot space used low watermark to trigger auto-delete on the storage pool. Valid values are 0-98. Default value is 20.
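The default RAID selection applied when -storProfile is omitted can be sketched as a simple mapping (illustrative only; the drive-type strings here are assumptions, not uemcli output):

```shell
# Hedged sketch of the default RAID configuration per drive type when
# -storProfile is not specified (drive-type names are illustrative).
default_raid() {
  case "$1" in
    NL-SAS)    echo "RAID 6, stripe length 8" ;;
    SAS|Flash) echo "RAID 5, stripe length 5" ;;
    *)         echo "unknown drive type" ;;
  esac
}

default_raid NL-SAS   # → RAID 6, stripe length 8
default_raid Flash    # → RAID 5, stripe length 5
```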
Example 1 (physical deployments only)

The following command creates a storage pool that uses storage profiles SP_1 and SP_2, and seven disks from disk group DG_1 and five disks from disk group DG_2. The configured storage pool receives ID SPL_4:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name GlobalPool1 -descr "Oracle databases" -storProfile SP_1,SP_2 -diskGroup DG_1,DG_2 -drivesNumber 7,5
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = SPL_4
Operation completed successfully.
Example 2 (virtual deployments only)

The following command creates a storage pool with two virtual disks, vdisk_0 and vdisk_2 in the extreme tier. The configured storage pool receives ID pool_4.

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name vPool -descr "my virtual pool" -disk vdisk_0,vdisk_2 -tier extreme
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = pool_4
Operation completed successfully.

View storage pools

View a list of storage pools. You can filter on the storage pool ID.

The show action command explains how to change the output format.
Format
/stor/config/pool [-id <value>] show
Object qualifier
Qualifier
Description
-id
Type the ID of a storage pool.
Example 1 (physical deployments only)

The following command shows details about all storage pools on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1: ID                                                = SPL_1
   Name                                              = Performance
   Description                                       = Multi-tier pool
   Total space                                       = 4947802324992 (4.5T)
   Current allocation                                = 3298534883328 (3T)
   Remaining space                                   = 1649267441664 (1.5T)
   Subscription                                      = 10995116277760 (10T)
   Subscription percent                              = 222%
   Alert threshold                                   = 70%
   Drives                                            = 6 x 100GB EFD; 6 x 300GB SAS
   Number of drives                                  = 12
   RAID level                                        = Mixed
   Stripe length                                     = Mixed
   Rebalancing                                       = no
   Rebalancing progress                              =
   Health state                                      = OK (5)
   Health details                                    = "The component is operating normally.  No action is required."
   FAST Cache enabled                                = no
   Protection size used                              = 1099511625 (1G)
   Auto-delete state                                 = Running
   Auto-delete paused                                = no
   Auto-delete pool full threshold enabled           = yes
   Auto-delete pool full high water mark             = 95%
   Auto-delete pool full low water mark              = 85%
   Auto-delete snapshot space used threshold enabled = yes
   Auto-delete snapshot space used high water mark   = 25%
   Auto-delete snapshot space used low water mark    = 20%
       


2: ID                                                = SPL_2
   Name                                              = Capacity
   Description                                       =
   Total space                                       = 4947802324992 (4.5T)
   Current allocation                                = 3298534883328 (3T)
   Remaining space                                   = 1649267441664 (1.5T)
   Subscription                                      = 10995116277760 (10T)
   Subscription percent                              = 222%
   Alert threshold                                   = 70%
   Drives                                            = 12 x 2TB NL-SAS
   Number of drives                                  = 12
   Unused drives                                     = 7
   RAID level                                        = 6
   Stripe length                                     = 6       
   Rebalancing                                       = yes
   Rebalancing progress                              = 46%
   Health state                                      = OK (5)
   Health details                                    = "The component is operating normally.  No action is required."
   FAST Cache enabled                                = yes
   Protection size used                              = 10995116238 (10G)
   Auto-delete state                                 = Running
   Auto-delete paused                                = no
   Auto-delete pool full threshold enabled           = yes
   Auto-delete pool full high water mark             = 95%
   Auto-delete pool full low water mark              = 85%
   Auto-delete snapshot space used threshold enabled = yes
   Auto-delete snapshot space used high water mark   = 25%
   Auto-delete snapshot space used low water mark    = 20%
Example 2 (virtual deployments only)

The following command shows details for all storage pools on a virtual system.

uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
Storage system address: 10.0.0.2
Storage system port: 443
HTTPS connection

1:     ID                                                = pool_1
       Name                                              = Capacity
       Description                                       =
       Total space                                       = 4947802324992 (4.5T)
       Current allocation                                = 3298534883328 (3T)
       Remaining space                                   = 1649267441664 (1.5T)
       Subscription                                      = 10995116277760 (10T)
       Subscription percent                              = 222%
       Alert threshold                                   = 70%
       Drives                                            = 1 x 120GB Virtual; 1 x 300GB Virtual
       Number of drives                                  = 2
       Health state                                      = OK (5)
       Health details                                    = "The component is operating normally.  No action is required."
       Protection size used                              = 1099511625 (1G)
       Auto-delete state                                 = Running
       Auto-delete paused                                = no
       Auto-delete pool full threshold enabled           = yes
       Auto-delete pool full high water mark             = 95%
       Auto-delete pool full low water mark              = 85%
       Auto-delete snapshot space used threshold enabled = yes
       Auto-delete snapshot space used high water mark   = 25%
       Auto-delete snapshot space used low water mark    = 20%

Change storage pool settings

Change the settings for a storage pool, such as the subscription alert threshold.

Format
/stor/config/pool -id <value> set [-async] [-name <value>] [-descr <value>] [-alertThreshold <value>] [-FASTCacheEnabled {yes|no}] [-snapPoolFullThresholdEnabled {yes|no}] [-snapPoolFullHWM <value>] [-snapPoolFullLWM <value>] [-snapSpaceUsedThresholdEnabled {yes|no}] [-snapSpaceUsedHWM <value>] [-snapSpaceUsedLWM <value>] [-snapAutoDeletePaused {yes|no}]
Object qualifier
Qualifier
Description
-id
Type the ID of the storage pool to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the storage pool.
-descr
Type a brief description of the storage pool.
-alertThreshold
For thin provisioning, specify the threshold, as a percentage, when the system will alert on the amount of subscription space used. When hosts consume the specified percentage of subscription space, the system sends an alert. Value range is 50% to 85%.
-FASTCacheEnabled
Specify whether to enable FAST Cache on the storage pool. Value is one of the following:
  • Yes
  • No
Default value is Yes.
-snapPoolFullThresholdEnabled
Indicate whether the system should check the pool full high water mark for auto-delete. Value is one of the following:
  • Yes
  • No
Default value is Yes.
-snapPoolFullHWM
Specify the pool full high watermark for the storage pool. Valid values are 1-99. Default value is 95.
-snapPoolFullLWM
Specify the pool full low watermark for the storage pool. Valid values are 0-98. Default value is 85.
-snapSpaceUsedThresholdEnabled
Indicate whether the system should check the snapshot space used high water mark for auto-delete. Value is one of the following:
  • Yes
  • No
Default value is Yes.
-snapSpaceUsedHWM
Specify the snapshot space used high watermark to trigger auto-delete on the storage pool. Valid values are 1-99. Default value is 95.
-snapSpaceUsedLWM
Specify the snapshot space used low watermark to trigger auto-delete on the storage pool. Valid values are 0-98. Default value is 20.
-snapAutoDeletePaused
Specify whether to pause snapshot auto-delete. Typing no resumes the auto-delete operation.
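As a hedged sketch of the watermark semantics behind the -snapSpaceUsed qualifiers above (not actual system code): auto-delete begins when snapshot space used reaches the high water mark and continues until usage falls to the low water mark. The values below are taken from the sample show output in this chapter:

```shell
# Hedged sketch of auto-delete watermark behavior (illustrative only).
hwm=25   # snapshot space used high water mark, percent
lwm=20   # snapshot space used low water mark, percent

auto_delete_should_start() { [ "$1" -ge "$hwm" ] && echo yes || echo no; }
auto_delete_should_stop()  { [ "$1" -le "$lwm" ] && echo yes || echo no; }

auto_delete_should_start 26   # → yes (usage reached the HWM)
auto_delete_should_stop 21    # → no (usage still above the LWM)
```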
Example

The following command sets the subscription alert threshold for storage pool SPL_1 to 70% and disables FAST Cache:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id SPL_1 set -alertThreshold 70 -FASTCacheEnabled no
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = SPL_1
Operation completed successfully.

Add disks or tiers to storage pools

Add new disks to a storage pool to increase its storage capacity.

Format
/stor/config/pool -id <value> extend [-async] {-diskGroup <value> -drivesNumber <value> [-storProfile <value>] | -disk <value> [-tier <value>]}
Object qualifier
Qualifier
Description
-id
Type the ID of the storage pool to extend.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-diskGroup (physical deployments only)
Type the IDs of the disk groups, separated by commas, to add to the storage pool.
-drivesNumber (physical deployments only)
Type the number of disks from the specified disk groups, separated by commas, to add to the storage pool. If this option is specified when -storProfile is not specified, the operation may fail when the -drivesNumber value does not match the default RAID configuration for each drive type in the selected disk group.
-storProfile (physical deployments only)
Type the IDs of the storage profiles, separated by commas, to apply to the storage pool. If this option is not specified, a default RAID configuration is selected for each particular drive type in the selected disk group: NL-SAS (RAID 6 with a stripe length of 8), SAS (RAID 5 with a stripe length of 5), or Flash (RAID 5 with a stripe length of 5).
-disk (virtual deployments only)
Specify the list of disks, separated by commas, to add to the storage pool. Specified disks must be reliable storage objects which do not require additional protection.
-tier (virtual deployments only)
Specify the list of tiers, separated by commas, to which the disks are assigned. If a tier is omitted, it will be assigned automatically if tiering information for the associated disk is available. Valid values include:
  • capacity
  • performance
  • extreme
Example 1 (physical deployments only)

The following command extends storage pool SPL_1 with seven disks from disk group DG_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id SPL_1 extend -diskGroup DG_1 -drivesNumber 7 -storProfile profile_12
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = SPL_1
Operation completed successfully.
Example 2 (virtual deployments only)

The following command extends storage pool pool_1 by adding two virtual disks, vdisk_1 and vdisk_5.

uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 extend -disk vdisk_1,vdisk_5
Storage system address: 10.0.0.2
Storage system port: 443
HTTPS connection

ID = pool_1
Operation completed successfully.

Delete storage pools

Delete a storage pool.

Format
/stor/config/pool -id <value> delete [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of the storage pool to delete.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command deletes storage pool SPL_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id SPL_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage storage pool tiers

Storage tiers allow users to move data between different types of disks in a storage pool to maximize storage efficiency. Storage tiers are defined by the following characteristics:

  • Disk performance.
  • Disk capacity.

The following table lists the attributes for storage tiers:

Table 3. Storage tier attributes
Attribute
Description
Name
Storage tier name.
Drives
The list of disk types, and the number of disks of each type in the storage tier.
RAID level (physical deployments only)
RAID level of the storage tier.
Stripe length (physical deployments only)
Comma-separated list of the stripe lengths of the disks in the storage tier.
Total space
Total capacity in the storage tier.
Current allocation
Currently allocated space.
Remaining space
Remaining space.

View storage tiers

View a list of storage tiers. You can filter on the storage pool ID.

The show action command explains how to change the output format.
Format
/stor/config/pool/tier -pool <value> show
Object qualifier
Qualifier
Description
-pool
Type the ID of a storage pool.
Example 1 (physical deployments only)

The following command shows details about the storage tiers in storage pool SPL_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/tier -pool SPL_1 show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:    Name                = Extreme Performance
      Drives              =
      RAID level          =
      Stripe length       = 
      Total space         = 0
      Current allocation  = 0 
      Remaining space     = 0


2:    Name                = Performance
      Drives              = 5 x 300GB SAS
      RAID level          = 5
      Stripe length       = 5
      Total space         = 928180076544 (864.4G)
      Current allocation  = 8606711808 (8.0G)
      Remaining space     = 919573364736 (856.4G)


3:    Name                = Capacity
      Drives              =
      RAID level          =
      Stripe length       = 
      Total space         = 0
      Current allocation  = 0 
      Remaining space     = 0
Example 2 (virtual deployments only)

The following command shows details about storage pool pool_1 on a virtual system.

uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool/tier -pool pool_1 show -detail
Storage system address: 10.0.0.2
Storage system port: 443
HTTPS connection

1:    Name                = Extreme Performance
      Drives              =
      Total space         = 0
      Current allocation  = 0 
      Remaining space     = 0


2:    Name                = Performance
      Drives              = 1 x 500GB Virtual
      Total space         = 631242752000 (500.0G)
      Current allocation  = 12624855040 (10.0G)
      Remaining space     = 618617896960 (490.0G)


3:    Name                = Capacity
      Drives              =
      Total space         = 0
      Current allocation  = 0 
      Remaining space     = 0

Manage FAST VP pool settings

Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a storage efficiency technology that automatically moves data between storage tiers within a storage pool based on data access patterns.

The following table lists the attributes for FAST VP pool settings.

Table 4. FAST VP pool attributes
Attribute
Description
Pool
Identifies the storage pool.
Status
Identifies the status of data relocation on the storage pool. Value is one of the following:
  • Not started - Data relocation has not started.
  • Paused - Data relocation is paused.
  • Completed - Data relocation is complete.
  • Stopped by user - Data relocation was stopped by the user.
  • Active - Data relocation is in progress.
  • Failed - Data relocation failed.
Relocation type
Type of data relocation. Value is one of the following:
  • Manual - Data relocation was initiated by the user.
  • Scheduled or rebalancing - Data relocation was initiated by the system because it was scheduled, or because the system rebalanced the data.
Schedule enabled
Identifies whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
  • Yes
  • No
Start time
Indicates the time the current data relocation started.
End time
Indicates the time the current data relocation is scheduled to end.
Data relocated
The amount of data relocated during an ongoing relocation, or the previous relocation if a data relocation is not occurring. The format is:
 <value>[suffix]
where:
  • value - Identifies the size of the data relocated.
  • suffix - Identifies that the value relates to the previous relocation session.
Rate
Identifies the transfer rate for the data relocation. Value is one of the following:
  • Low - Least impact on system performance.
  • Medium - Moderate impact on system performance.
  • High - Most impact on system performance.
Default value is medium.
This field is blank if data relocation is not in progress.
Data to move up
The amount of data in the storage pool scheduled to be moved to a higher storage tier.
Data to move down
The amount of data in the storage pool scheduled to be moved to a lower storage tier.
Data to move within
The amount of data in the storage pool scheduled to be moved within the same storage tiers for rebalancing.
Data to move up per tier
The amount of data per tier that is scheduled to be moved to a higher tier. The format is:
 <tier_name>:[value]
where:
  • tier_name - Identifies the storage tier.
  • value - Identifies the amount of data in that tier to be moved up.
Data to move down per tier
The amount of data per tier that is scheduled to be moved to a lower tier. The format is:
 <tier_name>:[value]
where:
  • tier_name - Identifies the storage tier.
  • value - Identifies the amount of data in that tier to be moved down.
Data to move within per tier
The amount of data per tier that is scheduled to be moved within the same tier for rebalancing. The format is:
 <tier_name>:[value]
where:
  • tier_name - Identifies the storage tier.
  • value - Identifies the amount of data in that tier to be rebalanced.
Estimated relocation time
Identifies the estimated time required to perform the next data relocation.
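The per-tier fields above all use the `<tier_name>:[value]` layout. If you post-process uemcli output in a script, a small helper along these lines (an illustrative sketch, not part of uemcli) can split such a field into tier-to-bytes pairs:

```python
import re

def parse_per_tier(field):
    """Parse a per-tier value such as
    'Performance: 500182324992 (500G), Capacity: 1000114543245 (1.0T)'
    into a {tier_name: bytes} dict. Assumes the comma-separated
    '<tier>: <bytes> (<shorthand>)' layout shown in the table above."""
    result = {}
    for match in re.finditer(r'([A-Za-z ]+?):\s*(\d+)\s*\(([^)]+)\)', field):
        tier, raw_bytes, _shorthand = match.groups()
        result[tier.strip()] = int(raw_bytes)
    return result
```

Tier names containing spaces (for example, Extreme Performance) are handled because the pattern matches up to the colon.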

Change FAST VP pool settings

Modify FAST VP settings on an existing pool.

Format
/stor/config/pool/fastvp -pool <value> set [-async] -schedEnabled {yes | no}
Object qualifier
Qualifier
Description
-pool
Type the ID of the storage pool.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-schedEnabled
Specify whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
  • Yes
  • No
Example

The following example enables the rebalancing schedule on storage pool pool_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 set -schedEnabled yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Pool ID = pool_1
Operation completed successfully.

View FAST VP pool settings

View FAST VP settings on a storage pool.

Format
/stor/config/pool/fastvp [-pool <value>] show
Object qualifier
Qualifier
Description
-pool
Type the ID of the storage pool.
Example

The following command lists the FAST VP settings on the storage system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1: Pool                          = pool_1
   Relocation type               = manual
   Status                        = Active
   Schedule enabled              = no
   Start time                    = 2013-09-20 12:55:32
   End time                      = 2013-09-20 21:10:17
   Data relocated                = 100111454324 (100G)
   Rate                          = high
   Data to move up               = 4947802324992 (4.9T)
   Data to move down             = 4947802324992 (4.9T)
   Data to move within           = 4947802324992 (4.9T)
   Data to move up per tier      = Performance: 500182324992 (500G), Capacity: 1000114543245 (1.0T)
   Data to move down per tier    = Extreme Performance: 1000114543245 (1.0T), Performance: 500182324992 (500G)
   Data to move within per tier  = Extreme Performance: 500182324992 (500G), Performance: 500182324992 (500G), Capacity: 500182324992 (500G)
   Estimated relocation time     = 7h 30m
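When scripting against output like the example above, the numbered records of `Key = value` lines can be parsed generically. The following is a sketch based only on the sample shown here; real output may wrap long values differently:

```python
def parse_show_output(text):
    """Parse uemcli 'show'-style output (numbered records of
    'Key = value' lines) into a list of dicts, one per record.
    A record begins with an 'N:' prefix, e.g. '1: Pool = pool_1'."""
    records, current = [], None
    for line in text.splitlines():
        if '=' not in line:
            continue  # skip banner lines such as 'HTTPS connection'
        key, _, value = line.partition('=')
        key = key.strip()
        if ':' in key and key.split(':', 1)[0].strip().isdigit():
            current = {}
            records.append(current)
            key = key.split(':', 1)[1].strip()
        if current is not None:
            current[key] = value.strip()
    return records
```

Splitting on the first `=` keeps values such as timestamps (`2013-09-20 12:55:32`) intact.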

Start data relocation

Start data relocation on a storage pool.

Format
/stor/config/pool/fastvp -pool <value> start [-async] [-rate {low | medium | high}] [-endTime <value>]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-pool
Type the ID of the storage pool.
-endTime
Specify the time to stop the data relocation. The format is:
[HH:MM]
where:
  • HH — Hour.
  • MM — Minute.
Default value is eight hours from the current time.
-rate
Specify the transfer rate for the data relocation. Value is one of the following:
  • Low — Least impact on system performance.
  • Medium — Moderate impact on system performance.
  • High — Most impact on system performance.
Default value is the value set at the system level.
Example

The following command starts data relocation on storage pool pool_1, and directs it to end at 04:00:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 start -endTime 04:00
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.
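As noted above, -endTime defaults to eight hours from the current time. If a script needs to compute an equivalent explicit -endTime value in the documented HH:MM format, it might look like this sketch:

```python
from datetime import datetime, timedelta

def default_end_time(now=None):
    """Return an -endTime value (HH:MM) eight hours from 'now',
    mirroring the documented default. 'now' is injectable so the
    helper can be tested with a fixed timestamp."""
    now = now or datetime.now()
    return (now + timedelta(hours=8)).strftime('%H:%M')
```

Note that the HH:MM value wraps past midnight, as in the example's 04:00 end time.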

Stop data relocation

Stop data relocation on a storage pool.

Format
/stor/config/pool/fastvp -pool <value> stop [-async]
Object qualifier
Qualifier
Description
-pool
Type the ID of the storage pool.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command stops data relocation on storage pool pool_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 stop
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

View storage pool resources

This command displays a list of storage resources allocated in a storage pool. This includes storage resources provisioned on the specified storage pool and NAS servers that have file systems allocated in the pool.

The following table lists the attributes for storage pool resources.

Table 5. Storage pool resources
Attribute
Description
ID
Storage resource identifier.
Name
Name of the storage resource.
Resource type
Type of the resource. Valid values are LUN, File system, LUN group, VMware NFS, VMware VMFS, and NAS server.
Pool
Name of the storage pool.
Total pool space used
Total space used by the storage pool. This includes primary data used size, snapshot used size, and metadata size.
Total pool snapshot space used
Total space used by the storage pool for snapshots.
Health state
Health state of the file system. The health state code appears in parentheses.
Health details
Additional health information. See Appendix A, Reference, for health information details.
The show action command explains how to change the output format.
Format
/stor/config/pool/sr [-pool <value>] show
Object qualifier
Qualifier
Description
-pool
Type the name of the storage pool.
Example

The following command shows details for all storage resources associated with the storage pool pool_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/sr -pool pool_1 show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:       ID                             = res_1
         Name                           = FileSystem00
         Resource type                  = File system
         Pool                           = pool_1
         Total pool space used          = 23622320128 (22GB)
         Total pool snapshot space used = 2147483648 (2GB)
         Health state                   = OK (5)
         Health details                 = "The component is operating normally. No action is required."

2:       ID                             = res_2
         Name                           = LUNGroup00
         Resource type                  = LUN group
         Pool                           = pool_1
         Total pool space used          = 57982058496 (54GB)
         Total pool snapshot space used = 4294967296 (4GB)
         Health state                   = OK (5)
         Health details                 = "The component is operating normally. No action is required."

3:       ID                             = nas_1
         Name                           = NASServer00
         Resource type                  = NAS server
         Pool                           = pool_1
         Total pool space used          = 
         Total pool snapshot space used = 
         Health state                   = OK (5)
         Health details                 = "The component is operating normally. No action is required."

Manage FAST VP general settings

Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a storage efficiency technology that automatically moves data between storage tiers within a storage pool based on data access patterns.

The following table lists the attributes for FAST VP general settings.

Table 6. FAST VP general attributes
Attribute
Description
Paused
Identifies whether the data relocation is paused. Value is one of the following:
  • Yes
  • No
Schedule enabled
Identifies whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
  • Yes
  • No
Frequency
Data relocation schedule. The format is: Every <days_of_the_week> at <start_time> until <end_time> where:
  • <days_of_the_week> - List of the days of the week that data relocation will run.
  • <start_time> - Time the data relocation starts.
  • <end_time> - Time the data relocation finishes.
Rate
Identifies the transfer rate for the data relocation. Value is one of the following:
  • Low - Least impact on system performance.
  • Medium - Moderate impact on system performance.
  • High - Most impact on system performance.
Default value is medium.
This field is blank if data relocation is not in progress.
Data to move up
The amount of data in the storage pool scheduled to be moved to a higher storage tier.
Data to move down
The amount of data in the storage pool scheduled to be moved to a lower storage tier.
Data to move within
The amount of data in the storage pool scheduled to be moved within the same storage tier for rebalancing.
Estimated relocation time
Identifies the estimated time required to perform the next data relocation.
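The Frequency attribute follows the fixed pattern Every <days_of_the_week> at <start_time> until <end_time>. A script can recover the pieces with a sketch like this (assuming the exact pattern shown above):

```python
import re

def parse_frequency(freq):
    """Split a Frequency value such as
    'Every Mon, Fri at 22:30 until 8:00' into (days, start, end).
    Raises ValueError if the string does not match the documented
    pattern."""
    m = re.match(r'Every (?P<days>.+) at (?P<start>\S+) until (?P<end>\S+)', freq)
    if not m:
        raise ValueError('unrecognized Frequency format: %r' % freq)
    days = [d.strip() for d in m.group('days').split(',')]
    return days, m.group('start'), m.group('end')
```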

Change FAST VP general settings

Change FAST VP general settings.

Format
/stor/config/fastvp set [-async] [-schedEnabled {yes | no}] [-days <value>] [-at <value>] [-until <value>] [-rate {low | medium | high}] [-paused {yes | no}]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-paused
Specify whether to pause data relocation on the storage system. Value is one of the following:
  • Yes
  • No
-schedEnabled
Specify whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following:
  • Yes
  • No
-days
Specify a comma-separated list of the days of the week to schedule data relocation. Valid values are:
  • Mon – Monday
  • Tue – Tuesday
  • Wed – Wednesday
  • Thu – Thursday
  • Fri – Friday
  • Sat – Saturday
  • Sun – Sunday
-at
Specify the time to start the data relocation. The format is:
[HH:MM]
where:
  • HH – Hour
  • MM – Minute
Valid values are between 00:00 and 23:59. Default value is 00:00.
-until
Specify the time to stop the data relocation. The format is:
[HH:MM]
where:
  • HH – Hour
  • MM – Minute
Valid values are between 00:00 and 23:59. Default value is eight hours after the time specified with the -at parameter.
-rate
Specify the transfer rate for the data relocation. Value is one of the following:
  • Low – Least impact on system performance.
  • Medium – Moderate impact on system performance.
  • High – Most impact on system performance.
Default value is medium.
Example

The following command changes the data relocation schedule to run on Mondays and Fridays from 23:00 to 07:00:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastvp set -schedEnabled yes -days "Mon,Fri" -at 23:00 -until 07:00
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

View FAST VP general settings

View the FAST VP general settings.

Format
/stor/config/fastvp show -detail
Example

The following command displays the FAST VP general settings:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastvp show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1: Paused                              = no
   Schedule enabled                    = yes
   Frequency                           = Every Mon, Fri at 22:30 until 8:00
   Rate                                = high
   Data to move up                     = 4947802324992 (4.9T)
   Data to move down                   = 4947802324992 (4.9T)
   Data to move within                 = 4947802324992 (4.9T)
   Estimated scheduled relocation time = 7h 30m

Manage FAST Cache (physical deployments only)

FAST Cache is a storage efficiency technology that uses disks to expand the cache capability of the storage system to provide improved performance.

The following table lists the attributes for FAST Cache:

Table 7. FAST Cache attributes
Attribute
Description
Capacity
Capacity of the FAST Cache.
Drives
The list of disk types, and the number of disks of each type in the FAST Cache.
Number of drives
Total number of disks in the FAST Cache.
RAID level
RAID level applied to the FAST Cache disks. This value is always RAID 1.
Health state
Health state of the FAST Cache. The health state code appears in parentheses.
Health details
Additional health information. See Appendix A, Reference, for health information details.

Create FAST Cache

Configure FAST Cache. The storage system generates an error if FAST Cache is already configured.

Format
/stor/config/fastcache create [-async] -diskGroup <value> -drivesNumber <value> [-enableOnExistingPools]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-diskGroup
Specify the disk group to include in the FAST Cache.
-drivesNumber
Specify the number of disks to include in the FAST Cache.
-enableOnExistingPools
Specify whether FAST Cache is enabled on all existing pools.
Example

The following command configures FAST Cache with six disks from disk group DG_1, and enables FAST Cache on existing storage pools:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache create -diskGroup DG_1 -drivesNumber 6 -enableOnExistingPools
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

View FAST Cache settings

View the FAST Cache parameters.

Format
/stor/config/fastcache show
Example

The following command displays the FAST Cache parameters:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache show -detail
1:     Total space      = 536870912000 (500G)
       Drives           = 6 x 100GB SAS Flash
       Number of drives = 6
       RAID level       = 1
       Health state     = OK (5)
       Health details   = "The component is operating normally. No action is required."

Delete FAST Cache

Delete the FAST Cache configuration. The storage system generates an error if FAST Cache is not configured on the system.

Format
/stor/config/fastcache delete [-async]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command deletes the FAST Cache configuration:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

View storage profiles (physical deployments only)

Storage profiles are preconfigured settings for configuring storage pools based on the following:

  • Types of storage resources that will use the pools.
  • Intended usage of the pool.

For example, create a storage pool for file system storage resources intended for general use. When configuring a storage pool, specify the ID of the storage profile to apply to the pool.

Storage profiles are not restrictive with regard to storage provisioning. For example, you can provision file systems from an FC or iSCSI database storage pool. However, the characteristics of the storage will be best suited to the indicated storage resource type and use.

Each storage profile is identified by an ID.

The following table lists the attributes for storage profiles.

Table 8. Storage profile attributes
Attribute
Description
ID
ID of the storage profile.
Description
Brief description of the storage profile.
Drive type
Types of disks for the storage profile.
RAID level
RAID level number for the storage profile. Value is one of the following:
  • 1 - RAID level 1.
  • 5 - RAID level 5.
  • 6 - RAID level 6.
  • 10 - RAID level 1+0.
Maximum capacity
Maximum storage capacity for the storage profile.
Stripe length
Number of disks the data is striped across.
For best fit profiles, this value is Best fit.
The show action command explains how to change the output format.
Format
/stor/config/profile [-id <value> | -driveType <value> [-raidLevel <value>]] show
Object qualifier
Qualifier
Description
-id
Type the ID of a storage profile.
-driveType
Specify the type of disk drive.
-raidLevel
Specify the RAID type of the profile.
Example

The following command shows details for all storage profiles on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/profile show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID               = SP_1
       Description      = Best Performance
       Drive type       = SAS
       RAID level       = 5
       Maximum capacity = 1099511627776 (1TB)
       Stripe length    = 6

2:     ID               = SP_2
       Description      = High Capacity
       Drive type       = FAT-SAS
       RAID level       = 6
       Maximum capacity = 21990232555520 (20TB)
       Stripe length    = 6

3:     ID               = SP_3
       Description      = Performance
       Drive type       = SAS
       RAID level       = 5
       Maximum capacity = 5937362789990 (5.4TB)
       Stripe length    = 5

Manage disk groups (physical deployments only)

Disk groups are the groups of disks on the system with similar characteristics, including type, capacity, and spindle speed. When configuring storage pools, you select the disk group to use and the number of disks from the group to add to the pool.

Each disk group is identified by an ID.

The following table lists the attributes for disk groups.

Table 9. Disk group attributes
Attribute
Description
ID
ID of the disk group.
Drive type
Type of disks in the disk group.
Drive size
Capacity of one disk in the disk group.
Rotational speed
Rotational speed of the disks in the group.
Number of drives
Total number of disks in the disk group.
Unconfigured drives
Total number of disks in the disk group that are not in a storage pool.
Capacity
Total capacity of all disks in the disk group.
Recommended number of spares
Number of spares recommended for the disk group.
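The automatic pool configuration rules earlier in this chapter assign one spare for the first 0-30 disks and another spare for each further group of thirty. Assuming that rule also underlies this attribute, an estimate could be sketched as follows; always trust the value reported by /stor/config/dg over this illustration:

```python
import math

def recommended_spares(disk_count):
    """Estimate spares from the automatic-configuration rule stated
    earlier in this chapter (1 spare per group of up to 30 disks).
    Illustrative only; Capacity pools are assigned no spares, and the
    system's reported 'Recommended number of spares' is authoritative."""
    if disk_count <= 0:
        return 0
    return math.ceil(disk_count / 30)
```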

View disk groups

View details about disk groups on the system. You can filter on the disk group ID.

The show action command explains how to change the output format.
Format
/stor/config/dg [-id <value>] show
Object qualifier
Qualifier
Description
-id
Type the ID of a disk group.
Example

The following command shows details about all disk groups:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1: ID                           = DG_1
   Drive type                   = FAT-SAS
   Drive size                   = 536870912000 (500GB)
   Rotational speed             = 10000 rpm 
   Number of drives             = 21
   Unconfigured drives          = 7
   Capacity                     = 11544872091648 (10.5TB)
   Recommended number of spares = 1

2: ID                           = DG_2
   Drive type                   = FAT-SAS
   Drive size                   = 1099511627776 (1TB)
   Rotational speed             = 7200 rpm 
   Number of drives             = 14
   Unconfigured drives          = 0
   Capacity                     = 15393162788864 (14TB)
   Recommended number of spares = 1

3: ID                           = DG_3
   Drive type                   = SAS
   Drive size                   = 107374182400 (100GB)
   Rotational speed             = 10000 rpm 
   Number of drives             = 10
   Unconfigured drives          = 3
   Capacity                     = 1099511627776 (1TB)
   Recommended number of spares = 1
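Sizes in this output pair a raw byte count with a shorthand, for example 536870912000 (500GB). Assuming the shorthand uses binary (1024-based) units, a formatter reproducing that style might look like the sketch below; the CLI itself may round differently:

```python
def human_size(num_bytes):
    """Render a byte count in the '<bytes> (<shorthand>)' style seen in
    the output above, using binary (1024-based) units. Illustrative
    only."""
    value = float(num_bytes)
    for unit in ['B', 'KB', 'MB', 'GB', 'TB', 'PB']:
        if value < 1024 or unit == 'PB':
            break
        value /= 1024
    # Trim a trailing '.0' so 500.0 renders as '500'.
    text = ('%.1f' % value).rstrip('0').rstrip('.')
    return '%d (%s%s)' % (num_bytes, text, unit)
```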

View recommended disk group configurations

View the recommended disk groups from which to add disks to a storage pool based on a specified storage profile or pool type.

The show action command explains how to change the output format.
Format
/stor/config/dg recom {-profile <value> | -pool <value>}
Action qualifier
Qualifier
Description
-profile
Type the ID of a storage profile. The output will include the list of disk groups recommended for the specified storage profile.
-pool
Type the ID of a storage pool. The output will include the list of disk groups recommended for the specified storage pool.
Example

The following command shows the recommended disk groups for storage pool SPL_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg recom -pool SPL_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID                        = DG_1
       Drive type                = SAS
       Drive size                = 536870912000 (500GB)
       Number of drives          = 8
       Allowed numbers of drives = 4,8
       Capacity                  = 4398046511104 (4TB)

2:     ID                        = DG_2
       Drive type                = SAS
       Drive size                = 268435456000 (250GB)
       Number of drives          = 4
       Allowed numbers of drives = 4
       Capacity                  = 1099511627776 (1TB)

Manage file systems

File systems are logical containers on the system that provide file-based storage resources to hosts. You configure file systems on NAS servers, which maintain and manage the file systems. You create network shares on the file system, which connected hosts map or mount to access the file system storage. When creating a file system, you can enable support for the following network shares:

  • Common Internet File System (CIFS) shares, which provide storage access to Windows hosts.
  • Network file system (NFS) shares, which provide storage access to Linux/UNIX hosts.

Each file system is identified by an ID.

The following table lists the attributes for file systems:

Table 10. File system attributes
Attribute
Description
ID
ID of the file system.
Name
Name of the file system.
Description
Description of the file system.
Health state
Health state of the file system. The health state code appears in parentheses. Value is one of the following:
  • OK (5) - File system is operating normally.
  • Degraded/Warning (10) - Working, but one or more of the following may have occurred:
    • One or more of its storage pools are degraded.
    • It has almost reached full capacity. Increase the primary storage size, or create additional file systems to store your data, to avoid data loss. Change file system settings explains how to change the primary storage size.
  • Minor failure (15) - One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • The associated NAS server has failed.
  • Major failure (20) - One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • File system is unavailable.
  • Critical failure (25) - One or more of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • File system is unavailable.
    • File system has reached full capacity. Increase the primary storage size, or create additional file systems to store your data, to avoid data loss. Change file system settings explains how to change the primary storage size.
  • Non-recoverable error (30) - One or both of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • File system is unavailable.
Health details
Additional health information. See Appendix A, Reference, for health information details.
File system
Identifier for the file system. The output of some metrics commands displays only the file system ID, which lets you easily identify the file system in the output.
Server name
Name of the primary NAS server that the file system uses.
Storage pool ID
ID of the storage pool the file system is using.
Storage pool
Name of the storage pool that the file system uses.
Protocol
Protocol used to enable network shares from the file system. Value is one of the following:
  • nfs - Protocol for Linux/UNIX hosts.
  • cifs - Protocol for Windows hosts.
  • multiprotocol - Protocol for UNIX and Windows hosts.
Access policy
Access policy type for this file system. Value is one of the following:
  • native (default) - When this policy is selected, Unix mode bits are used for Unix/Linux clients, and Windows permissions (ACLs) are used for Windows clients.
  • Unix - When this policy is selected, Unix mode bits are used to grant access to each file on the file system.
  • Windows - When this policy is selected, permissions defined in Windows ACLs are honored for both Windows and Unix/Linux clients (Unix mode bits are ignored).
Size
Quantity of storage reserved for primary data.
Size used
Quantity of storage currently used for primary data.
Maximum size
Maximum size to which you can increase the primary storage capacity.
Thin provisioning enabled
Indication of whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over-provisioned to support more storage capacity than they actually possess.
The Unisphere online help provides more details on thin provisioning.
Current allocation
If enabled, the quantity of primary storage currently allocated through thin provisioning.
Protection size used
Quantity of storage currently used for protection data.
Protection schedule
ID of an applied protection schedule. View protection schedules explains how to view the IDs of schedules on the system.
Protection schedule paused
Indication of whether an applied protection schedule is currently paused. Value is yes or no.
FAST VP policy
FAST VP policy of the file system. Value is one of the following:
  • Start high then auto-tier
  • Auto-tier
  • Highest available tier
  • Lowest available tier
FAST VP distribution
Percentage of the file system storage assigned to each tier. The format is:
<tier_name>:<value>%
where, <tier_name> is the name of the storage tier and <value> is the percentage of storage in that tier.
File level retention
Indication of whether file-level retention (FLR) is enabled. Value is yes or no. FLR provides a way to set file-based permissions to limit write access to the files for a specific period of time. In this way, file-level retention can ensure the integrity of data during that period by creating an unalterable set of files and directories.
File-level retention prevents files from being modified or deleted by NAS clients and users. Once you enable FLR for a Windows file system, you cannot disable it. Leave FLR disabled unless you intend to implement self-regulated archiving and you intend the administrator to be the only trusted user of the file system on which FLR is enabled. The Unisphere online help and the host documentation provide more details on FLR.
CIFS synchronous write
Indication of whether CIFS synchronous writes option is enabled. Value is yes or no.
  • The CIFS synchronous writes option provides enhanced support for applications that store and access database files on Windows network shares. On most CIFS file systems, read operations are synchronous and write operations are asynchronous. When you enable the CIFS synchronous writes option for a Windows (CIFS) file system, the system performs immediate synchronous writes for storage operations, regardless of how the CIFS protocol performs write operations.
  • Enabling synchronous write operations allows you to store and access database files (for example, MySQL) on CIFS network shares. This option guarantees that any write to the share is done synchronously and reduces the chances of data loss or file corruption in various failure scenarios, for example, loss of power.
Do not enable CIFS synchronous writes unless you intend to use the Windows file systems to provide storage for database applications.
The Unisphere online help provides more details on CIFS synchronous write.
CIFS oplocks
Indication of whether opportunistic file locks (oplocks) for CIFS network shares are enabled. Value is yes or no.
  • Oplocks allow CIFS clients to buffer file data locally before sending it to a server. CIFS clients can then work with files locally and periodically communicate changes to the system, rather than having to communicate every operation to the system over the network.
  • This feature is enabled by default for Windows (CIFS) file systems. Unless your application handles critical data or has specific requirements that make this mode of operation unfeasible, leave oplocks enabled.
The Unisphere online help provides more details on CIFS oplocks.
CIFS notify on write
Indication of whether write notifications for CIFS network shares are enabled. Value is yes or no. When enabled, Windows applications receive notifications each time a user writes or changes a file on the CIFS share.
If this option is enabled, the value for CIFS directory depth indicates the lowest directory level to which the notification setting applies.
CIFS notify on access
Indication of whether file access notifications for CIFS shares are enabled. Value is yes or no. When enabled, Windows applications receive notifications each time a user accesses a file on the CIFS share.
If this option is enabled, the value for CIFS directory depth indicates the lowest directory level to which the notification setting applies.
CIFS directory depth
For write and access notifications on CIFS network shares, the subdirectory depth permitted for file notifications. Value range is 1-512. Default is 512.
Deduplication enabled
Indication of whether deduplication is enabled on the file system. Value is:
  • Yes
  • No
Creation time
Date and time when the file system was created.
Last modified time
Date and time when the file system settings were last changed.
Snapshot count
Number of snapshots created on the file system.

Create file systems

Create an NFS file system or CIFS file system. You must create a file system for each type of share (NFS or CIFS) you plan to create. Once you create a file system, create the NFS or CIFS network shares and use the ID of the file system to associate it with a share.

Size qualifiers provides details on using size qualifiers to specify a storage size.
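Values such as 100M combine a number with a size qualifier. A parser along these lines (a sketch assuming the K/M/G/T qualifiers are binary, 1024-based; the Size qualifiers section is authoritative) shows the conversion:

```python
def parse_size(size):
    """Convert a size argument such as '100M' to bytes. Assumes the
    K/M/G/T qualifiers are binary (1024-based); consult the 'Size
    qualifiers' section for the CLI's authoritative definition."""
    multipliers = {'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}
    size = size.strip()
    if size and size[-1].upper() in multipliers:
        return int(size[:-1]) * multipliers[size[-1].upper()]
    return int(size)  # a bare number is taken as bytes
```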
Format
/stor/prov/fs create [-async] -name <value> [-descr <value>] -server <value> -pool <value> -size <value> [-thin {yes | no}] -type {nfs | cifs | multiprotocol} [-cifsSyncWrites {yes | no}] [-cifsOpLocks {yes | no}] [-cifsNotifyOnWrite {yes | no}] [-cifsNotifyOnAccess {yes | no}] [-cifsNotifyDirDepth <value>] [-accessPolicy {native | Windows | Unix}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-fileLevelRet {yes | no}] [-sched <value> [-schedPaused {yes | no}]]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the file system.
-descr
Type a brief description of the file system.
-server
Type the ID of the NAS server that will be the primary NAS server for the file system. View NAS servers explains how to view the IDs of the NAS servers on the system.
-pool
Type the name of the storage pool that the file system will use.
Value is case-insensitive.
View storage pools explains how to view the names of the storage pools on the system.
-size
Type the quantity of storage to reserve for the file system. Storage resource size limitations explains the limitations on storage size.
-thin
Enable thin provisioning on the file system. Value is yes or no. Default is no.
-type
Specify the type of network shares to export from the file system. Value is one of the following:
  • nfs — Network shares for Linux/UNIX hosts.
  • cifs — Network shares for Windows hosts.
  • multiprotocol — Network shares for multiprotocol sharing.
Values are case-insensitive.
-cifsSyncWrites
Enable synchronous write operations for CIFS network shares. Value is yes or no. Default is no.
-cifsOpLocks
Enable opportunistic file locks (oplocks) for CIFS network shares. Value is yes or no. Default is yes.
-cifsNotifyOnWrite
Enable to receive notifications when users write to a CIFS share. Value is yes or no. Default is no.
-cifsNotifyOnAccess
Enable to receive notifications when users access a CIFS share. Value is yes or no. Default is no.
-cifsNotifyDirDepth
If the value for -cifsNotifyOnWrite or -cifsNotifyOnAccess is yes (enabled), specify the subdirectory depth to which the notifications will apply. Value range is 1–512. Default is 512.
-accessPolicy
Access policy type for this file system. Valid values (case insensitive):
  • native (default)
  • Unix
  • Windows
-fileLevelRet
Enable file-level retention on the file system. Value is yes or no. Default is no.
-sched
Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Specify whether to pause the protection schedule specified for -sched. Value is yes or no.
Example

The following command creates a file system with these settings:

  • Name is FileSystem01.
  • Description is “NFS shares.”
  • Uses the capacity storage pool.
  • Uses NAS server NAS_1 as the primary NAS server.
  • Primary storage size is 100 MB.
  • Supports NFS network shares.

The file system receives the ID FS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs create -name FileSystem01 -descr "NFS shares" -pool capacity -server nas_1 -size 100M -type nfs
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = FS_1
Operation completed successfully.

View file systems

View details about a file system. You can filter on the file system ID.

The show action command explains how to change the output format.
Format
/stor/prov/fs [-id <value>] show
Object qualifier
Qualifier
Description
-id
Type the ID of a file system.
Example

The following command lists details about all file systems on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID                    = fs1
       Name                  = MyFS
       Description           = my file system
       Health state          = OK (5)
       File system           = FS_1
       Server                = nas_1
       Storage pool          = Performance
       Protocol              = nfs
       Size                  = 1099511627776 (1T)
       Size used             = 128849018880 (120G)
       Protection size used  = 1099511627776 (1T)
       Deduplication enabled = no
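
The name = value blocks in show output lend themselves to scripting. The following Python sketch parses this layout into dictionaries; it is an illustration only, not part of uemcli, and it assumes each object begins with a numbered ID line (for example, `1:     ID = fs1`) as in the output above:

```python
def parse_show(text):
    """Parse uemcli 'show' output ('Name = value' lines) into a list of dicts."""
    objects, current = [], None
    for line in text.splitlines():
        if "=" not in line:
            continue  # skip blank lines and connection banner lines
        left, _, value = line.partition("=")
        left = left.strip()
        # A new object starts with an 'N:     ID' line; the numbered prefix
        # contains a colon, which distinguishes it from attributes such as
        # 'Storage pool ID' that merely end with 'ID'.
        if left.endswith("ID") and ":" in left:
            current = {}
            objects.append(current)
            left = "ID"
        if current is not None:
            current[left] = value.strip()
    return objects
```

Feeding the sample output above through this function yields one dictionary per file system, keyed by attribute name.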

Change file system settings

Change the settings for a file system.

Size qualifiers explains how to use the size qualifiers when specifying a storage size.
Format
/stor/prov/fs -id <value> set [-async] [-descr <value>] [-size <value>] [-thin {yes | no}] [-cifsSyncWrites {yes | no}] [-fastvpPolicy { startHighThenAuto | auto | highest | lowest | none}] [-cifsOpLocks {yes | no}] [-cifsNotifyOnWrite {yes | no}] [-cifsNotifyOnAccess {yes | no}] [-cifsNotifyDirDepth <value>] [{-sched <value> | -noSched}] [-schedPaused {yes | no}]
Object qualifier
Qualifier
Description
-id
Type the ID of the file system to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-descr
Type a brief description of the file system.
-size
Type the amount of storage in the pool to reserve for the file system. Storage resource size limitations explains the limitations on storage size.
-thin
Enable thin provisioning on the file system. Value is yes or no. Default is no.
-cifsSyncWrites
Enable synchronous write operations for CIFS network shares. Value is yes or no. Default is no.
-cifsOpLocks
Enable opportunistic file locks (oplocks) for CIFS network shares. Value is yes or no. Default is yes.
-cifsNotifyOnWrite
Enable to receive notifications when users write to a CIFS share. Value is yes or no. Default is no.
-cifsNotifyOnAccess
Enable to receive notifications when users access a CIFS share. Value is yes or no. Default is no.
-cifsNotifyDirDepth
If the value for -cifsNotifyOnWrite or -cifsNotifyOnAccess is yes (enabled), specify the subdirectory depth to which the notifications will apply. Value range is 1–512. Default is 512.
-sched
Type the ID of the schedule to apply to the file system. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Pause the schedule specified for the -sched qualifier. Value is yes or no (default).
-noSched
Unassigns the protection schedule.
-fastvpPolicy
Specify the FAST VP policy of the file system. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
  • none
Example

The following command enables thin provisioning on file system FS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id FS_1 set -thin yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = FS_1
Operation completed successfully.

Delete file systems

Delete a file system.

Deleting a file system removes all network shares, and optionally all snapshots, associated with the file system. After the file system is deleted, the files and folders inside it cannot be restored from snapshots. Back up the data from a file system before deleting it from the system.
Format
/stor/prov/fs -id <value> delete [-deleteSnapshots {yes | no}] [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of the file system to delete.
Action qualifier
Qualifier
Description
-deleteSnapshots
Specifies whether snapshots of the file system are deleted along with the file system itself. Value is yes or no. Default is no.
-async
Run the operation in asynchronous mode.
Example

The following command deletes file system FS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id FS_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage NFS network shares

Network file system (NFS) network shares use the NFS protocol to provide an access point for configured Linux/UNIX hosts, or IP subnets, to access file system storage. NFS network shares are associated with an NFS file system.

Each NFS share is identified by an ID.

The following table lists the attributes for NFS network shares:

Table 11. NFS network share attributes
Attribute
Description
ID
ID of the share.
Name
Name of the share.
Description
Brief description of the share.
Local path
Name of the directory on the system where the share resides.
Export path
Export path, used by hosts to connect to the share.
The export path is a combination of the name of the associated NAS server and the name of the share.
File system
ID of the parent file system associated with the NFS share.
Default access
Default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
  • ro - Read-only access to primary storage and snapshots associated with the share.
  • rw - Read/write access to primary storage and snapshots associated with the share.
  • root - Read/write root access to primary storage and snapshots associated with the share. This includes the ability to set access controls that restrict the permissions for other login accounts.
  • na - No access to the share or its snapshots.
Read-only hosts
ID of each host that has read-only permission to the share and its snapshots.
Read/write hosts
ID of each host that has read/write permissions to the share and its snapshots.
Root hosts
ID of each host that has root permission to the share and its snapshots.
No access hosts
ID of host that has no access to the share or its snapshots.
Creation time
Creation time of the share.
Last modified time
Last modified time of the share.
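
As the table notes, the export path is simply the NAS server name (or IP address) joined with the share name. A hypothetical helper, for illustration only:

```python
def export_path(nas_server, share_name):
    # The export path combines the NAS server (name or IP) and the share name.
    return f"{nas_server}/{share_name}"
```

For example, export_path("10.64.75.10", "MyNFSshare1") produces the form of path shown in the show output later in this section.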

Create NFS network shares

Create an NFS share to export a file system through the NFS protocol.

Share access permissions set for specific hosts take effect only if the host-specific setting is less restrictive than the default access setting for the share. Additionally, setting access for a specific host to “No Access” always takes effect over the default access setting.
  • Example 1: If the default access setting for a share is Read-Only, setting the access for a specific host configuration to Read/Write will result in an effective host access of Read/Write.
  • Example 2: If the default access setting for the share is Read-Only, setting the access permission for a particular host configuration to No Access will take effect and prevent that host from accessing the share.
  • Example 3: If the default access setting for a share is Read-Write, setting the access permission for a particular host configuration to Read-Only will result in an effective host access of Read/Write.
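
These precedence rules can be expressed compactly. The following Python sketch is an illustration of the rules above, not system code; it orders the access levels from most to least restrictive and applies the two exceptions:

```python
# Access levels ordered from most restrictive to least restrictive.
ORDER = ["na", "ro", "rw", "root"]

def effective_access(default, host=None):
    """Effective access for a host, per the precedence rules above."""
    if host is None:
        return default        # no host-specific setting: the default applies
    if host == "na":
        return "na"           # No Access always overrides the default
    # A host-specific setting takes effect only if it is less restrictive
    # than the default; otherwise the default setting remains in force.
    return host if ORDER.index(host) > ORDER.index(default) else default
```

Applied to the examples: effective_access("ro", "rw") is rw, effective_access("ro", "na") is na, and effective_access("rw", "ro") stays rw.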
Prerequisite

Configure a file system to which to associate the NFS network shares. Create file systems explains how to create file systems on the system.

Format
/stor/prov/fs/nfs create [-async] -name <value> [-descr <value>] -fs <value> -path <value> [-defAccess {ro|rw|root|na}] [-roHosts <value>] [-rwHosts <value>] [-rootHosts <value>] [-naHosts <value>]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the share.
This value, along with the name of the NAS server, constitutes the export path by which hosts access the share.
-descr
Type a brief description of the share.
-fs
Type the ID of the parent file system associated with the NFS share.
-path
Type a name for the directory on the system where the share will reside. This path must correspond to an existing directory name within the file system that was created from the host side.
  • Each share must have a unique local path. The system automatically creates this path for the initial share created when you create the file system.
  • Before you can create additional network shares within an NFS file system, you must create directories within it from a Linux/UNIX host that is connected to the file system. After a directory has been created from a mounted host, you can create a corresponding share on the system and set access permissions accordingly.
-defAccess
Specify the default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
  • ro — Read-only access to primary storage and snapshots associated with the share.
  • rw — Read/write access to primary storage and snapshots associated with the share.
  • root — Read/write root access to primary storage and snapshots associated with the share. This includes the ability to set access controls that restrict the permissions for other login accounts.
  • na — No access to the share or its snapshots.
-roHosts
Type the ID of each host configuration you want to grant read-only permission to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses can access the share and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
-rwHosts
Type the ID of each host configuration you want to grant read-write permission to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses can access the share and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
-rootHosts
Type the ID of each host configuration you want to grant root permission to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses can access the share and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
-naHosts
Type the ID of each host configuration you want to block access to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses cannot access the share and its snapshots. To limit access for specific IPs, type the IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
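
The ID[IP,IP] syntax accepted by the host qualifiers above can be generated programmatically. A hypothetical Python helper (the function and variable names are illustrative, not part of the CLI):

```python
def format_hosts(hosts):
    """Build a -roHosts/-rwHosts style value from (host_id, ip_list) pairs:
    each item is 'ID' or 'ID[IP,IP]', and items are comma-separated."""
    parts = []
    for host_id, ips in hosts:
        parts.append(f"{host_id}[{','.join(ips)}]" if ips else host_id)
    return ",".join(parts)
```

For example, format_hosts([("1014", []), ("1016", ["10.0.0.5", "10.0.0.6"])]) yields 1014,1016[10.0.0.5,10.0.0.6], suitable for passing to -rwHosts.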
Example

The following command creates an NFS share with these settings:

  • Name is NFSshare.
  • Description is “My share.”
  • Associated to file system fs1.
  • Local path on the system is directory nfsshare.
  • Host HOST_1 has read-only permissions to the share and its snapshots.
  • Hosts HOST_2 and HOST_3 have read and write access to the share and its snapshots.

The share receives ID NFS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs create -name NFSshare -descr "My share" -fs fs1 -path "nfsshare" -roHosts "HOST_1" -rwHosts "HOST_2,HOST_3"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = NFS_1
Operation completed successfully.

View NFS share settings

View details of an NFS share. You can filter on the NFS share ID or view the NFS network shares associated with a file system ID.

The show action command explains how to change the output format.
Format
/stor/prov/fs/nfs [{-id <value>|-fs <value>}] show
Object qualifier
Qualifier
Description
-id
Type the ID of an NFS share.
-fs
Type the ID of an NFS file system to view the associated NFS network shares.
Example

The following command lists details for all NFS network shares on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID               = NFS_1
       Name             = MyNFSshare1
       Description      = My nfs share
       Resource         = fs1
       Local path       = nfsshare1
       Export path      = 10.64.75.10/MyNFSshare1
       Default access   = na
       Read-only hosts  = 1014, 1015
       Read/write hosts = 1016
       Root hosts       =
       No access hosts  =


2:     ID               = NFS_2
       Name             = MyNFSshare2
       Description      = This is my second share
       Resource         = fs1
       Local path       = nfsshare2
       Export path      = 10.64.75.10/MyNFSshare2
       Default access   = na
       Read-only hosts  = 1014, 1015
       Read/write hosts = 1016
       Root hosts       =
       No access hosts  =

Change NFS share settings

Change the settings of an NFS share.

Format
/stor/prov/fs/nfs -id <value> set [-async] [-descr <value>] [-defAccess {ro|rw|root|na}] [-roHosts <value>] [-rwHosts <value>] [-rootHosts <value>] [-naHosts <value>]
Object qualifier
Qualifier
Description
-id
Type the ID of an NFS share to change. View NFS share settings explains how to view the IDs of the NFS network shares on the system.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-descr
Type a brief description of the share.
-defAccess
Specify the default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
  • ro – Read-only access to primary storage and snapshots associated with the share.
  • rw – Read/write access to primary storage and snapshots associated with the share.
  • root – Read/write root access to primary storage and snapshots associated with the share. This includes the ability to set access controls that restrict the permissions for other login accounts.
  • na – No access to the share or its snapshots.
-roHosts
Type the ID of each host configuration you want to grant read-only permission to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses can access the share and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
-rwHosts
Type the ID of each host configuration you want to grant read-write permission to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses can access the share and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
-rootHosts
Type the ID of each host configuration you want to grant root permission to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses can access the share and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
-naHosts
Type the ID of each host configuration you want to block access to the share and its snapshots. Separate each ID with a comma.
  • For host configurations of type 'host,' by default, all of the host's IP addresses cannot access the share and its snapshots. To limit access for specific IPs, type the IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address.
  • View host configurations explains how to view the ID of each host configuration.
Example

The following command changes NFS share NFS_1 to block access to the share and its snapshots for host HOST_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -id NFS_1 set -descr "My share" -naHosts "HOST_1"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = NFS_1
Operation completed successfully.

Delete NFS network shares

Delete an NFS share.

Deleting a share removes any files and folders associated with the share from the system. You cannot use snapshots to restore the contents of a share. Back up the data from a share before deleting it from the system.
Format
/stor/prov/fs/nfs -id <value> delete [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of an NFS share to delete. View NFS share settings explains how to view the IDs of the NFS network shares on the system.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command deletes NFS share NFS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -id NFS_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage CIFS network shares

Common internet file system (CIFS) network shares use the CIFS protocol to provide an access point for configured Windows hosts, or IP subnets, to access file system storage. CIFS network shares are associated with a CIFS file system.

Each CIFS share is identified by an ID.

The following table lists the attributes for CIFS network shares:

Table 12. CIFS network share attributes
Attribute
Description
ID
ID of the share.
Name
Name of the share.
Description
Brief description of the share.
Local path
Name of the directory on the system where the share resides.
Export path
Export path, used by hosts to connect to the share.
The export path is a combination of the name of the associated NAS server and the name of the share.
File system
ID of the parent file system associated with the CIFS share.
Creation time
Creation time of the share.
Last modified time
Last modified time of the share.
Availability enabled
Continuous availability state.
Encryption enabled
CIFS encryption state.
Umask
Indicates the default Unix umask for new files created on the share. If not specified, the umask defaults to 022.
ABE enabled
Indicates whether an Access-Based Enumeration (ABE) filter is enabled. Valid values include:
  • yes — Filters the list of available files and folders on a share to include only those that the requesting user has access to.
  • no (default)
DFS enabled
Indicates whether Distributed File System (DFS) is enabled. Valid values include:
  • yes — Allows administrators to group shared folders located on different shares by transparently connecting them to one or more DFS namespaces.
  • no (default)
BranchCache enabled
Indicates whether BranchCache is enabled. Valid values include:
  • yes — Copies content from the main office or hosted cloud content servers and caches the content at branch office locations. This allows client computers at branch offices to access content locally rather than over the WAN.
  • no (default)
Offline availability
Indicates whether Offline availability is enabled. When enabled, users can use this feature on their computers to work with shared folders stored on a server, even when they are not connected to the network. Valid values include:
  • none — Prevents clients from storing documents and programs in offline cache. (default)
  • documents — All files that clients open from the share will be available offline.
  • programs — All programs and files that clients open from the share will be available offline. Programs and files will preferably open from offline cache, even when connected to the network.
  • manual — Only specified files will be available offline.

Create CIFS network shares

Create a CIFS share to export a file system through the CIFS protocol.

Prerequisite

Configure a file system to which to associate the CIFS network shares. Create file systems explains how to create file systems on the system.

Format
/stor/prov/fs/cifs create [-async] -name <value> [-descr <value>] -fs <value> -path <value> [-enableContinuousAvailability {yes|no}] [-enableCIFSEncryption {yes|no}] [-umask <value>] [-enableABE {yes | no}] [-enableBranchCache {yes | no}] [-offlineAvailability {none | documents | programs | manual}]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the share.
This value, along with the name of the NAS server, constitutes the export path by which hosts access the share.
-descr
Type a brief description of the share.
-fs
Type the ID of the parent file system associated with the CIFS share.
-path
Type a name for the directory on the system where the share will reside. This path must correspond to an existing directory name within the file system that was created from the host side.
  • Each share must have a unique local path. The system automatically creates this path for the initial share created when you create the file system.
  • Before you can create additional network shares within a CIFS file system, you must create directories within it from a Windows host that is connected to the file system. After a directory has been created from a mounted host, you can create a corresponding share on the system and set access permissions accordingly.
-enableContinuousAvailability
Specify whether continuous availability is enabled.
-enableCIFSEncryption
Specify whether CIFS encryption is enabled.
-umask
Type the default Unix umask for new files created on the share.
-enableABE
Specify if Access-based Enumeration (ABE) is enabled. Valid values include:
  • yes
  • no
-enableBranchCache
Specify if BranchCache is enabled. Valid values include:
  • yes
  • no
-offlineAvailability
Specify the type of offline availability. Valid values include:
  • none (default) — Prevents clients from storing documents and programs in offline cache.
  • documents — Allows all files that clients open to be available offline.
  • programs — Allows all programs and files that clients open to be available offline. Programs and files will open from offline cache, even when connected to the network.
  • manual — Allows only specified files to be available offline.
Example

The following command creates a CIFS share with these settings:

  • Name is CIFSshare.
  • Description is “My share.”
  • Associated to file system fs1.
  • Local path on the system is directory cifsshare.
  • Continuous availability is enabled.
  • CIFS encryption is enabled.

The share receives ID CIFS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs create -name CIFSshare -descr "My share" -fs fs1 -path "cifsshare" -enableContinuousAvailability yes -enableCIFSEncryption yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = CIFS_1
Operation completed successfully.
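
The -umask qualifier above follows standard Unix semantics: permission bits set in the umask are cleared from the mode a client requests for a new file. A short sketch of that masking:

```python
def apply_umask(requested_mode, umask=0o022):
    """Permission bits actually set on a new file: requested & ~umask."""
    return requested_mode & ~umask
```

With the default umask of 022, a file requested with mode 666 is created with mode 644 (group and other write bits are masked off).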

View CIFS share settings

View details of a CIFS share. You can filter on the CIFS share ID or view the CIFS network shares associated with a file system ID.

The show action command explains how to change the output format.
Format
/stor/prov/fs/cifs [{-id <value>|-fs <value>}] show
Object qualifier
Qualifier
Description
-id
Type the ID of a CIFS share.
-fs
Type the ID of a CIFS file system to view the associated CIFS network shares.
Example

The following command lists details for all CIFS network shares on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID           = CIFS_1
       Name         = MyCIFSshare1
       Description  = This is my CIFS share
       File system  = fs1
       Local path   = cifsshare1
       Export path  = 10.64.75.10/MyCIFSshare1

2:     ID           = CIFS_2
       Name         = MyCIFSshare2
       Description  = This is my second share
       File system  = fs1
       Local path   = cifsshare2
       Export path  = 10.64.75.10/MyCIFSshare2

Change CIFS share settings

Change the settings of a CIFS share.

Format
/stor/prov/fs/cifs -id <value> set [-async] [-name <value>] [-descr <value>] [-enableContinuousAvailability {yes|no}] [-enableCIFSEncryption {yes|no}] [-umask <value>] [-enableABE {yes | no}] [-enableBranchCache {yes | no}] [-offlineAvailability {none | documents | programs | manual}]
Object qualifier
Qualifier
Description
-id
Type the ID of a CIFS share to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-descr
Specifies the description for the CIFS share.
-enableContinuousAvailability
Specifies whether continuous availability is enabled.
-enableCIFSEncryption
Specifies whether CIFS encryption is enabled.
-umask
Type the default Unix umask for new files created on the share.
-enableABE
Specify if Access-Based Enumeration (ABE) is enabled. Valid values include:
  • yes
  • no
-enableBranchCache
Specify if BranchCache is enabled. Valid values include:
  • yes
  • no
-offlineAvailability
Specify the type of offline availability. Valid values include:
  • none (default) — Prevents clients from storing documents and programs in offline cache.
  • documents — Allows all files that users open to be available offline.
  • programs — Allows all programs and files that users open to be available offline. Programs and files will open from offline cache, even when connected to the network.
  • manual — Allows only specified files to be available offline.
Example

The following command sets the description of CIFS share CIFS_1 to My share.

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -id CIFS_1 set -descr "My share"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = CIFS_1
Operation completed successfully.

Delete CIFS network shares

Delete a CIFS share.

Deleting a share removes any files and folders associated with the share from the system. You cannot use snapshots to restore the contents of a share. Back up the data from a share before deleting it from the system.
Format
/stor/prov/fs/cifs -id <value> delete [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of a CIFS share to delete.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command deletes CIFS share CIFS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -id CIFS_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage LUNs

A LUN is a single unit of storage that represents a specific quantity of Fibre Channel (FC) or iSCSI storage allocated from a storage pool. Each LUN is associated with a name and a logical unit number identifier.

Each LUN is identified by an ID.

The following table lists the attributes for LUNs:

Table 13. LUN attributes
Attribute
Description
ID
ID of the LUN.
Name
Name of the LUN.
Description
Brief description of the LUN.
Storage pool ID
ID of the storage pool the LUN is using.
Storage pool
Name of the storage pool the LUN is using.
Health state
Health state of the LUN storage. The health state code appears in parentheses. Value is one of the following:
  • OK (5) — The LUN storage is operating normally.
  • Degraded/Warning (10) — Working, but one or more of the following may have occurred:
    • One or more of its storage pools are degraded.
    • Resource is degraded.
    • Resource is running out of space and needs to be increased.
  • Minor failure (15) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • Resource is unavailable.
  • Major failure (20) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • Resource is unavailable.
  • Critical failure (25) — One or more of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • Resource is unavailable.
    • Resource has run out of space and needs to be increased.
  • Non-recoverable error (30) — One or both of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • Resource is unavailable.
Health details
Additional health information.
Size
Current size of the LUN.
Maximum size
Maximum size of the LUN.
Thin provisioning enabled
Indication of whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over-provisioned to support more storage capacity than they actually possess.
The Unisphere online help provides more details on thin provisioning.
Current allocation
If thin provisioning is enabled, the quantity of primary storage currently allocated through thin provisioning.
Protection size used
Quantity of storage currently used for protection data.
Snapshot count
Number of snapshots created on the LUN.
Protection schedule
ID of a protection schedule applied to the LUN. View protection schedules explains how to view the IDs of the schedules on the system.
Protection schedule paused
Indication of whether an applied protection schedule is currently paused.
WWN
World Wide Name of the LUN.
Creation time
The time the resource was created.
Last modified time
The time resource was last modified.
SP owner
Indicates the default owner of the LUN. Value is one of the following:
  • SP A
  • SP B
Trespassed
Indicates whether the LUN is trespassed to the peer SP. Value is one of the following:
  • Yes
  • No
FAST VP policy
FAST VP policy of the LUN storage. Value is one of the following:
  • Start high then auto-tier
  • Auto-tier
  • Highest available tier
  • Lowest available tier
FAST VP distribution
Percentage of the LUN storage assigned to each tier. The format is:
<tier_name>:<value>%
where <tier_name> is the name of the storage tier and <value> is the percentage of storage in that tier.
LUN access hosts
List of hosts with access permissions to the LUN.
Snapshots access hosts
List of hosts with access to snapshots of the LUN.
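The thin-provisioning behavior described in the table can be illustrated with a short sketch. The function name and the pool numbers below are illustrative only, not part of the CLI:

```python
# Sketch of the thin-provisioning arithmetic described above.
# All names and numbers are illustrative, not part of uemcli.

def pool_subscription(pool_capacity, resources):
    """Sum subscribed and allocated space for a list of
    (requested_size, thin, used) tuples and report whether
    the pool is over-provisioned."""
    subscribed = sum(size for size, thin, used in resources)
    # Standard resources allocate their full size at creation;
    # thin resources allocate only incremental portions as used.
    allocated = sum(used if thin else size
                    for size, thin, used in resources)
    return {
        "subscribed": subscribed,
        "allocated": allocated,
        "oversubscribed": subscribed > pool_capacity,
    }

# A hypothetical 1 TiB pool with one 600 GiB standard LUN and one
# 800 GiB thin LUN that has only 100 GiB in use: 1400 GiB is
# subscribed against 1024 GiB of capacity, but only 700 GiB is
# actually allocated.
GiB = 1024**3
report = pool_subscription(1024 * GiB, [
    (600 * GiB, False, 600 * GiB),   # standard: fully allocated
    (800 * GiB, True, 100 * GiB),    # thin: allocated on demand
])
```

The pool is over-provisioned even though most of its capacity is still free, which is exactly the condition the Degraded/Warning health states above warn about as usage grows.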

Create LUNs

Create a LUN to which host initiators connect to access storage.

Prerequisites

Configure at least one storage pool for the LUN to use and allocate at least one storage disk to the pool. Configure storage pools automatically (physical deployments only) explains how to create storage pools automatically on the system and Configure custom storage pools explains how to create a custom storage pool on the system.

Format
/stor/prov/luns/lun create [-async] -name <value> [-descr <value>] [-group <value>] -pool <value> -size <value> -thin {yes | no} [-sched <value> [-schedPaused {yes | no}]] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-lunHosts <value>] [-snapHosts <value>]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type the name of the LUN.
-descr
Type a brief description of the LUN.
-group
Type the ID of a LUN group to associate the new LUN with. View LUN groups explains how to view information on LUN groups.
If no LUN group is specified, the LUN will not be assigned to a LUN group.
-pool
Type the name of the storage pool that the LUN will use.
Value is case-insensitive.
View storage pools explains how to view the names of the storage pools on the system.
-size
Type the quantity of storage to allocate for the LUN. Storage resource size limitations explains the limitations on storage size.
-thin
Enable thin provisioning on the LUN. Value is yes or no. Default is no.
-sched
Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Pause the schedule specified for the -sched qualifier. Value is yes or no. Default is no.
-fastvpPolicy
Specify the FAST VP policy of the LUN. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
-lunHosts
Specifies a comma-separated list of hosts with access to the LUN.
-snapHosts
Specifies a comma-separated list of hosts with access to snapshots of the LUN.
Example

The following command creates a LUN with these settings:

  • Name is MyLUN.
  • Description is “My LUN.”
  • Associated with LUN group group_1.
  • Uses the pool_1 storage pool.
  • Primary storage size is 100 MB.

The LUN receives the ID lun_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun create -name "MyLUN" -descr "My LUN" -group group_1 -pool pool_1 -size 100M
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = lun_1
Operation completed successfully.

View LUNs

Display the list of existing LUNs.

The show action command explains how to change the output format.
Format
/stor/prov/luns/lun [{-id <value> | -group <value> | -standalone}] show
Object qualifier
Qualifier
Description
-id
Type the ID of a LUN.
-group
Type the ID of a LUN group. The LUNs in the specified LUN group are displayed.
-standalone
Displays only LUNs that are not part of a LUN group.
Example

The following command displays details about all LUNs on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID                   = lun_1
       Name                 = MyLUN
       Description          = My LUN
       Group                = group_1
       Storage pool ID      = pool_1
       Storage pool         = Capacity
       Health state         = OK (5)
       Size                 = 2199023255552 (2T)
       Protection size used = 0
       SP owner             = SPA
       Trespassed           = no

2:     ID                   = lun_2
       Name                 = MyLUN2
       Description          = My second LUN
       Group                = group_1
       Storage pool ID      = pool_2
       Storage pool         = Performance
       Health state         = OK (5)
       Size                 = 104857600 (100M)
       Protection size used = 0
       SP owner             = SPB
       Trespassed           = no
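The sizes in the output above pair a byte count with a binary size qualifier (104857600 is 100M, 2199023255552 is 2T). A minimal sketch of that conversion; the helper name is ours, not part of the CLI:

```python
# Convert a size with a binary qualifier (K, M, G, T) to bytes,
# matching the byte counts shown in the show output above.
QUALIFIERS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def to_bytes(size):
    """'100M' -> 104857600; a bare number is taken as bytes."""
    if size[-1].upper() in QUALIFIERS:
        return int(size[:-1]) * QUALIFIERS[size[-1].upper()]
    return int(size)

# The values from the example output above:
assert to_bytes("100M") == 104857600
assert to_bytes("2T") == 2199023255552
```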

Change LUNs

Change the settings for a LUN.

Format
/stor/prov/luns/lun -id <value> set [-async] [-name <value>] [-descr <value>] [-size <value>] [{-group <value> | -standalone}] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-spOwner {spa | spb}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest | none}] [-lunHosts <value>] [-snapHosts <value>]
Object qualifier
Qualifier
Description
-id
Type the ID of the LUN to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type the name of the LUN.
-descr
Type a brief description of the LUN.
-group
Type the ID of a LUN group to associate the LUN with. View LUN groups explains how to view information on LUN groups.
If no LUN group is specified, the LUN will not be assigned to a LUN group.
-size
Type the quantity of storage to allocate for the LUN. Storage resource size limitations explains the limitations on storage size.
-standalone
Removes the LUN from the LUN group.
-sched
Type the ID of the schedule to apply to the LUN. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Pause the schedule specified for the -sched qualifier. Value is yes or no. Default is no.
-noSched
Unassigns the protection schedule.
-spOwner
Specifies the default owner of the LUN. Value is one of the following:
  • spa
  • spb
-fastvpPolicy
Specify the FAST VP policy of the LUN storage. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
-lunHosts
Specifies a comma-separated list of hosts with access to the LUN.
-snapHosts
Specifies a comma-separated list of hosts with access to snapshots of the LUN.
Example

The following command updates LUN lun_1 with these settings:

  • Name is NewName.
  • Description is “My new description.”
  • Primary storage size is 150 MB.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 set -name NewName -descr "My new description" -size 150M
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = lun_1
Operation completed successfully.

Delete LUNs

Delete a LUN.

Deleting a LUN removes all associated data from the system. After a LUN is deleted, you cannot restore the data inside it from snapshots. Back up the data from a LUN to another host before deleting it from the system.
Format
/stor/prov/luns/lun -id <value> delete [-deleteSnapshots {yes | no}] [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of the LUN to delete.
Action qualifier
Qualifier
Description
-deleteSnapshots
Specifies that snapshots of the LUN can be deleted along with the LUN itself. Value is one of the following:
  • Yes
  • No (default)
-async
Run the operation in asynchronous mode.
Example

The following command deletes LUN lun_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage LUN groups

LUN groups provide a way to organize and group LUNs together to simplify storage tiering and snapshots when an application spans multiple LUNs.

The following table lists the attributes for LUN groups:

Table 14. LUN group attributes
Attribute
Description
ID
ID of the LUN group.
Name
Name of the LUN group.
Description
Brief description of the LUN group.
Health state
Health state of the LUN group. The health state code appears in parentheses. Value is one of the following:
  • OK (5) — The LUN group is operating normally.
  • Degraded/Warning (10) — Working, but one or more of the following may have occurred:
    • One or more of its storage pools are degraded.
    • Resource is degraded.
    • Resource is running out of space and needs to be increased.
  • Minor failure (15) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • Resource is unavailable.
  • Major failure (20) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • Resource is unavailable.
  • Critical failure (25) — One or more of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • Resource is unavailable.
    • Resource has run out of space and needs to be increased.
  • Non-recoverable error (30) — One or both of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • Resource is unavailable.
Health details
Additional health information. See Appendix A, Reference, for health information details.
Total capacity
Total capacity of all associated LUNs.
Total current allocation
Total current allocation of all associated LUNs.
Thin provisioning enabled
Indication of whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created; for thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over-provisioned to support more storage capacity than they actually possess.
The Unisphere online help provides more details on thin provisioning.
Total protection size used
Total quantity of storage used for protection data.
Snapshot count
Total number of snapshots created on the associated LUNs.
Protection schedule
ID of a protection schedule applied to the LUN group. View protection schedules explains how to view the IDs of the schedules on the system.
Protection schedule paused
Indication of whether an applied protection schedule is currently paused.
LUN access hosts
List of hosts with access permissions to the associated LUNs.
Hosts that have access to some, but not all, of the associated LUNs are marked as Mixed.
Snapshots access hosts
List of hosts with access to snapshots of the associated LUNs.
Hosts that have access to the snapshots of some, but not all of the associated LUNs are marked as Mixed.
Replication destination
Indication of whether the LUN group is a destination for a replication session (local or remote). Value is yes or no. Manage replication sessions explains how to configure replication sessions on the system.
Creation time
The time the resource was created.
Last modified time
The time the resource was last modified.
FAST VP policy
FAST VP policy of the LUN storage. Value is one of the following:
  • Start high then auto-tier
  • Auto-tier
  • Highest available tier
  • Lowest available tier
FAST VP distribution
Percentage of the LUN storage assigned to each tier. The format is:
<tier_name>:<value>%
where <tier_name> is the name of the storage tier and <value> is the percentage of storage in that tier.
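The FAST VP distribution string follows the <tier_name>:<value>% pattern above (see the sample output in View LUN groups, e.g. "Best Performance: 55%, High Performance: 10%, High Capacity: 35%"). A small parsing sketch; the function name is ours, not part of the CLI:

```python
# Parse a FAST VP distribution string of the form
# "<tier_name>: <value>%, <tier_name>: <value>%, ..." into a dict.

def parse_distribution(text):
    tiers = {}
    for part in text.split(","):
        # rpartition tolerates tier names that contain no colon issues
        name, _, pct = part.rpartition(":")
        tiers[name.strip()] = float(pct.strip().rstrip("%"))
    return tiers

dist = parse_distribution(
    "Best Performance: 55%, High Performance: 10%, High Capacity: 35%")
# The tier percentages of a LUN or LUN group sum to 100.
assert sum(dist.values()) == 100.0
```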

Create a LUN group

Create a LUN group.

Format
/stor/prov/luns/group create [-async] -name <value> [-descr <value>] [-sched <value> [-schedPaused {yes | no}]] [-replDest {yes | no}]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the storage resource.
Use a name that reflects the type and version of the application that will use it, which can facilitate how the storage resource is managed and monitored through Unisphere.
-descr
Type a brief description of the storage resource.
-sched
Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Specify whether to pause the protection schedule specified for -sched. Value is yes or no.
-replDest
Specifies whether the resource is a replication destination. Valid values are:
  • Yes
  • No (default)
Values are case insensitive.
Example

The following command creates a LUN group with these settings:

  • Name is GenericStorage01.
  • Description is “MyStorage.”
  • Uses protection schedule SCHD_1.

The storage resource receives the ID group_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group create -name GenericStorage01 -descr "MyStorage" -sched SCHD_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = group_1
Operation completed successfully.

View LUN groups

Display the list of existing LUN groups.

The show action command explains how to change the output format.
Format
/stor/prov/luns/group [-id <value>] show
Object qualifier
Qualifier
Description
-id
Type the ID of a LUN group.
Example

The following command displays details about the LUN groups on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID                          = group_1
       Name                        = MyLUNGroup
       Description                 = My LUN group
       Health state                = OK (5)
       Health details              = "The component is operating normally.  No action is required."
       Total capacity              = 107374182400 (100G)
       Thin provisioning enabled   = no
       Total current allocation    = 107374182400 (100G)
       Total protection size used  = 0
       Snapshot count              = 0
       Protection schedule         = SCHD_1
       Protection schedule paused  = no
       LUNs access hosts           = 1014, 1015
       Snapshots access hosts      = 1016(mixed)
       Creation time               = 2012-12-21 12:55:32            
       Last modified time          = 2013-01-15 10:31:56
       FAST VP policy              = mixed
       FAST VP distribution        = Best Performance: 55%, High Performance: 10%, High Capacity: 35%

Change LUN groups

Change the settings for a LUN group.

Format
/stor/prov/luns/group -id <value> set [-async] [-name <value>] [-descr <value>] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-lunHosts <value>] [-snapHosts <value>] [-replDest {yes | no}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest | none}]
Object qualifier
Qualifier
Description
-id
Type the ID of the LUN group to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type the name of the LUN group.
-descr
Type a brief description of the LUN group.
-sched
Type the ID of the schedule to apply to the LUN group. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Pause the schedule specified for the -sched qualifier. Value is yes or no (default).
-noSched
Unassigns the protection schedule.
-lunHosts
Specifies a comma-separated list of hosts with access to the associated LUNs.
-snapHosts
Specifies a comma-separated list of hosts with access to snapshots of the associated LUNs.
-replDest
Specifies whether the resource is a replication destination. Valid values are:
  • Yes
  • No (default)
Values are case insensitive.
-fastvpPolicy
Specify the FAST VP policy of the LUN storage. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
Example

The following command updates the LUN group group_1 with these settings:

  • Name is NewName.
  • Description is “New description.”
  • Uses protection schedule SCHD_2.
  • The selected schedule is currently paused.
  • The FAST VP policy is start high then auto-tier.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group -id group_1 set -name NewName -descr "New description" -sched SCHD_2 -schedPaused yes -fastvpPolicy startHighThenAuto
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = group_1
Operation completed successfully.

Delete LUN groups

Delete a LUN group.

Deleting a LUN group removes all LUNs and data associated with the LUN group from the system. After a LUN group is deleted, you cannot restore the data from snapshots. Back up the data from the LUN group before deleting it.
Format
/stor/prov/luns/group -id <value> delete [-deleteSnapshots {yes | no}] [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of the LUN group to delete.
Action qualifier
Qualifier
Description
-deleteSnapshots
Specifies that snapshots of the LUNs in the group can be deleted along with the LUN group itself. Value is one of the following:
  • Yes
  • No (default)
-async
Run the operation in asynchronous mode.
Example

The following command deletes LUN group storage resource group_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group -id group_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage VMware NFS datastores

VMware NFS datastores provide file-based storage to VMware ESX Servers for hosting virtual machines (VM). You can provision and manage NFS datastores and view details about each NFS datastore on the system, such as their storage capacity and health.

Each NFS datastore is identified by an ID.

The following table lists the attributes for NFS datastores:

Table 15. NFS datastore attributes
Attribute
Description
ID
ID of the NFS datastore.
Name
Name of the NFS datastore.
Description
Description of the NFS datastore.
Health state
Health state of the NFS datastore. The health state code appears in parentheses. Value is one of the following:
  • OK (5) — NFS datastore is operating normally.
  • Degraded/Warning (10) — Working, but one or more of the following may have occurred:
    • One or more of its storage pools are degraded.
    • It has almost reached full capacity. Increase the primary storage size, or create additional NFS datastores to store your data, to avoid data loss. Change NFS datastore settings explains how to change the primary storage size.
  • Minor failure (15) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • The associated NAS server has failed.
  • Major failure (20) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • NFS datastore is unavailable.
  • Critical failure (25) — One or more of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • NFS datastore is unavailable.
    • NFS datastore has reached full capacity. Increase the primary storage size, or create additional NFS datastore to store your data, to avoid data loss. Change NFS datastore settings explains how to change the primary storage size.
  • Non-recoverable error (30) — One or both of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • NFS datastore is unavailable.
Health details
Additional health information. See Appendix A, Reference, for health information details.
File system
Identifier of the file system underlying the datastore. The output of some metrics commands displays only this ID, which lets you identify the datastore in that output.
Server
Name of the primary NAS server that the NFS datastore uses.
Storage pool ID
Identifier of the storage pool that the NFS datastore uses.
Storage pool
Name of the storage pool that the NFS datastore uses.
Format
Datastore format (applies to NFS datastores only). Valid values are:
  • UFS32 - Indicates a 32-bit NFS datastore.
  • UFS64 - Indicates a 64-bit NFS datastore.
Size
Quantity of storage reserved for primary data.
Size used
Quantity of storage currently used for primary data.
Maximum size
Maximum size to which you can increase the primary storage capacity.
Thin provisioning enabled
Indication of whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created; for thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over-provisioned to support more storage capacity than they actually possess.
The Unisphere online help provides more details on thin provisioning.
Current allocation
If enabled, the quantity of primary storage currently allocated through thin provisioning.
Protection size used
Quantity of storage currently used for protection data.
Snapshot count
Number of snapshots created on the datastore.
Protection schedule
ID of an applied protection schedule. View protection schedules explains how to view the IDs of schedules on the system.
Protection schedule paused
Indication of whether an applied protection schedule is currently paused. Value is yes or no.
FAST VP policy
FAST VP policy of the datastore. Value is one of the following:
  • Start high then auto-tier
  • Auto-tier
  • Highest available tier
  • Lowest available tier
FAST VP distribution
Percentage of the datastore assigned to each tier. The format is:
<tier_name>:<value>%
where <tier_name> is the name of the storage tier and <value> is the percentage of storage in that tier.
Local path
Local path to be exported.
Export path
Export path to datastore.
Default access
Default share access settings for host configurations and for unconfigured hosts that can reach the NFS datastore. Value is one of the following:
  • ro — Read-only access to primary storage and snapshots associated with the NFS datastore.
  • rw — Read/write access to primary storage and snapshots associated with the NFS datastore.
  • root — Read/write root access to primary storage and snapshots associated with the NFS datastore. This includes the ability to set access controls that restrict the permissions for other login accounts.
  • na — No access to the NFS datastore or its snapshots.
Read-only hosts
ID of each host that has read-only permission to the NFS datastore and its snapshots.
Root hosts
ID of each host that has root permission to the NFS datastore and its snapshots.
No access hosts
ID of each host that has no access to the NFS datastore or its snapshots.
Deduplication enabled
Indication of whether deduplication is enabled on the NFS datastore. Valid values are:
  • Yes
  • No
Creation time
The time the resource was created.
Last modified time
The time the resource was last modified.

Create NFS datastores

Create an NFS datastore.

Prerequisites
Share access permissions set for specific hosts take effect only if the host-specific setting is less restrictive than the default access setting for the share. Additionally, setting access for a specific host to “No Access” always takes effect over the default access setting.
  • Example 1: If the default access setting for a share is Read-Only, setting the access for a specific host configuration to Read/Write will result in an effective host access of Read/Write.
  • Example 2: If the default access setting for the share is Read-Only, setting the access permission for a particular host configuration to No Access will take effect and prevent that host from accessing the share.
  • Example 3: If the default access setting for a share is Read-Write, setting the access permission for a particular host configuration to Read-Only will result in an effective host access of Read/Write.
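The precedence rules above can be modeled as: a host-specific setting takes effect only when it is less restrictive than the default, except that No Access always takes effect. A sketch under that reading; the ranking and function name are ours, not CLI values:

```python
# Model of the share-access precedence rules described above.
# Access levels ranked from most to least restrictive; "na"
# (No Access) is handled as an unconditional override.
RANK = {"na": 0, "ro": 1, "rw": 2, "root": 3}

def effective_access(default, host_setting=None):
    if host_setting is None:
        return default            # unconfigured host: default applies
    if host_setting == "na":      # No Access always takes effect
        return "na"
    # Other host settings apply only when less restrictive than default.
    return host_setting if RANK[host_setting] > RANK[default] else default

# The three examples above:
assert effective_access("ro", "rw") == "rw"   # Example 1
assert effective_access("ro", "na") == "na"   # Example 2
assert effective_access("rw", "ro") == "rw"   # Example 3
```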
Format
/stor/prov/vmware/nfs create [-async] -name <value> [-replDest {yes|no}] [-descr <value>] -server <value> -pool <value> -size <value> [-thin {yes|no}] [-sched <value> [-schedPaused {yes|no}]] [-defAccess {ro|rw|root|na}] [-fastvpPolicy {startHighThenAuto|auto|highest|lowest}] [-roHosts <value>] [-rootHosts <value>] [-naHosts <value>] [-format {UFS32|UFS64}]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the NFS datastore.
-descr
Type a brief description of the NFS datastore.
-server
Type the ID of the NAS server that will be the primary NAS server for the NFS datastore. View NAS servers explains how to view the IDs of the NAS servers on the system.
-pool
Type the name of the storage pool that the NFS datastore will use.
Value is case-insensitive.
View storage pools explains how to view the names of the storage pools on the system.
-size
Type the quantity of storage to reserve for the NFS datastore. Storage resource size limitations explains the limitations on storage size.
-thin
Enable thin provisioning on the NFS datastore. Value is yes or no. Default is no.
-sched
Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Specify whether to pause the protection schedule specified for -sched. Value is yes or no.
-fastvpPolicy
Specify the FAST VP policy of the datastore. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
-defAccess
Specify the default share access settings for host configurations and for unconfigured hosts that can reach the NFS datastore. Value is one of the following:
  • ro — Read-only access to primary storage and snapshots associated with the NFS datastore.
  • rw — Read/write access to primary storage and snapshots associated with the NFS datastore.
  • root — Read/write root access to primary storage and snapshots associated with the NFS datastore. This includes the ability to set access controls that restrict the permissions for other login accounts.
  • na — No access to the NFS datastore or its snapshots.
-roHosts
Type the ID of each host configuration you want to grant read-only permission to the NFS datastore and its snapshots. Separate each ID with a comma. For host configurations of type 'host,' by default, all of the host's IP addresses can access the NFS datastore and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address. View host configurations explains how to view the ID of each host configuration.
-rootHosts
Type the ID of each host configuration you want to grant root permission to the NFS datastore and its snapshots. Separate each ID with a comma. For host configurations of type 'host,' by default, all of the host's IP addresses can access the NFS datastore and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address. View host configurations explains how to view the ID of each host configuration.
-naHosts
Type the ID of each host configuration you want to block access to the NFS datastore and its snapshots. Separate each ID with a comma. For host configurations of type 'host,' by default, all of the host's IP addresses cannot access the NFS datastore and its snapshots. To limit access for specific IPs, type the IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address. View host configurations explains how to view the ID of each host configuration.
-format
Datastore format (applies to NFS datastores only). Valid values are:
  • UFS32 - Indicates a 32-bit NFS datastore.
  • UFS64 - Indicates a 64-bit NFS datastore.
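The -roHosts, -rootHosts, and -naHosts values use the ID[IP,IP] form described above. A sketch of how such a list can be split into host IDs and optional IP restrictions; the regular expression and function name are ours, not part of uemcli:

```python
import re

# Parse a host-access list such as "1014,1015[10.0.0.5,10.0.0.6],1016"
# into (host_id, [ip, ...]) pairs, following the ID[IP,IP] form above.
HOST_RE = re.compile(r"([^,\[\]]+)(?:\[([^\]]*)\])?")

def parse_host_list(value):
    hosts = []
    for host_id, ips in HOST_RE.findall(value):
        # An empty bracket group means no per-IP restriction:
        # all of the host's IP addresses are covered.
        hosts.append((host_id, ips.split(",") if ips else []))
    return hosts

parsed = parse_host_list("1014,1015[10.0.0.5,10.0.0.6],1016")
# -> [('1014', []), ('1015', ['10.0.0.5', '10.0.0.6']), ('1016', [])]
```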
Example

The following command creates an NFS datastore with these settings:

  • Named Accounting.
  • Description is “Accounting VMs.”
  • Uses NAS server nas_1 as the primary NAS server.
  • Uses the capacity storage pool.
  • Primary storage size is 100 GB.
  • No protection schedule.

The file system receives the ID NFSDS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/nfs create -name Accounting -descr "Accounting VMs" -server nas_1 -pool capacity -size 100G
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = NFSDS_1
Operation completed successfully.

View NFS datastores

View details about an NFS datastore. You can filter on the NFS datastore ID.

The show action command explains how to change the output format.
Format
/stor/prov/vmware/nfs [-id <value> [-shrinkToSize <value>]] show
Object qualifier
Qualifier
Description
-id
Type the ID of the VMware NFS datastore.
-shrinkToSize
Specify the targeted shrink size to view an estimate of the minimum size and reclaimable size.
Minimum size and reclaimable size are populated only when this qualifier is specified.
Example 1

The following command lists details about all NFS datastores on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/nfs show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection


1:     ID                   = NFSDS_1
       Name                 = MyVMware
       Description          = My VMware
       Health state         = OK (5)
       Server               = nas_1
       Storage pool ID      = pool_1
       Storage pool         = capacity
       Size                 = 536870912000 (500G)
       Size used            = 128849018880 (120G)
       Protection size used = 0
       Local path           = /MyVMware
       Export path          = 10.64.75.10/MyVMware
       Minimum size         = 
       Reclaimable size     = 
Example 2

The following command lists details about the vmware_1 NFS datastore with a shrink estimate:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/nfs -id vmware_1 -shrinkToSize 200G show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection


1:     ID                   = vmware_1
       Name                 = MyVMware
       Description          = My VMware
       Health state         = OK (5)
       File system          = fs_1
       Server               = SFServer00
       Storage pool ID      = pool_1
       Storage pool         = capacity
       Format               = UFS64
       Size                 = 536870912000 (500G)
       Size used            = 128849018880 (120G)
       Protection size used = 0
       Local path           = /
       Export path          = 10.64.75.10/MyVMware
       Minimum size         = 134217728000 (125G)
       Reclaimable size     = 322122547200 (300G)

Change NFS datastore settings

Change the settings for an NFS datastore.

Size qualifiers explains how to use the size qualifiers when specifying a storage size.
Format
/stor/prov/vmware/nfs -id <value> set [-async] [-descr <value>] [-size <value>] [-thin {yes|no}] [{-sched <value> | -noSched} [-schedPaused {yes|no}]] [-fastvpPolicy {startHighThenAuto|auto|highest|lowest}] [-defAccess {ro|rw|root|na}] [-roHosts <value>] [-rootHosts <value>] [-naHosts <value>] [-replDest {yes | no}]
Object qualifier
Qualifier
Description
-id
Type the ID of the NFS datastore to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-descr
Type a brief description of the NFS datastore.
-size
Type the amount of storage in the pool to reserve for the NFS datastore. Storage resource size limitations explains the limitations on storage size.
-thin
Enable thin provisioning on the NFS datastore. Value is yes or no. Default is no.
-sched
Type the ID of the schedule to apply to the datastore. View protection schedules explains how to view the IDs of the schedules on the system.
-noSched
Unassigns the protection schedule.
-fastvpPolicy
Specify the FAST VP policy of the datastore. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
-schedPaused
Pause the schedule specified for the -sched qualifier. Value is yes or no (default).
-defAccess
Specify the default share access settings for host configurations and for unconfigured hosts that can reach the datastore. Value is one of the following:
  • ro — Read-only access to primary storage and snapshots associated with the datastore.
  • rw — Read/write access to primary storage and snapshots associated with the datastore.
  • root — Read/write root access to primary storage and snapshots associated with the datastore. This includes the ability to set access controls that restrict the permissions for other login accounts.
  • na — No access to the datastore or its snapshots.
Values are case-insensitive.
-roHosts
Type the ID of each host configuration you want to grant read-only permission to the datastore and its snapshots. Separate each ID with a comma. For host configurations of type 'host,' by default, all of the host's IP addresses can access the datastore and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address. View host configurations explains how to view the ID of each host configuration.
-rootHosts
Type the ID of each host configuration you want to grant root permission to the datastore and its snapshots. Separate each ID with a comma. For host configurations of type 'host,' by default, all of the host's IP addresses can access the datastore and its snapshots. To allow access to only specific IPs, type those specific IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address. View host configurations explains how to view the ID of each host configuration.
-naHosts
Type the ID of each host configuration you want to block access to the datastore and its snapshots. Separate each ID with a comma. For host configurations of type 'host,' by default, all of the host's IP addresses cannot access the datastore and its snapshots. To limit access for specific IPs, type the IPs in square brackets after the host ID. For example: ID[IP,IP], where 'ID' is a host configuration ID and 'IP' is an IP address. View host configurations explains how to view the ID of each host configuration.
Example

The following command changes NFS datastore NFSDS_1 to provide read-only access permissions to host configurations HOST_1 and HOST_2 and blocks access for HOST_3:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/nfs -id NFSDS_1 set -roHosts "HOST_1,HOST_2" -naHosts "HOST_3"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = NFSDS_1
Operation completed successfully.
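The bracketed ID[IP,IP] form described for -roHosts, -rootHosts, and -naHosts can be sketched as follows; the IP addresses here are hypothetical, not from the document:

```shell
# Hypothetical sketch: grant read-only access to only two specific IP
# addresses of host configuration HOST_1 (the IPs are example values).
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/nfs -id NFSDS_1 set -roHosts "HOST_1[10.64.75.21,10.64.75.22]"
```

All other IP addresses of HOST_1 would then have no read-only access through this qualifier.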

Delete NFS datastores

Delete an NFS datastore.

Deleting an NFS datastore removes any files and folders associated with it from the system. You cannot use snapshots to restore the contents of the datastore. Back up the data from the datastore before deleting it from the system.
Format
/stor/prov/vmware/nfs -id <value> delete [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of the NFS datastore to delete.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command deletes NFS datastore NFSDS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/nfs -id NFSDS_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.

Manage VMware VMFS datastores

Virtual Machine File System (VMFS) datastores provide block storage for ESX Server hosts. VMFS datastores appear to ESX Server hosts as LUNs, to which the hosts connect through the iSCSI protocol. You can provision and manage VMFS datastores and view details about each VMFS datastore on the system, such as its storage capacity and health.

Each VMFS datastore is identified by an ID.

The following table lists the attributes for VMFS datastores.

Table 16. VMFS datastore attributes
Attribute
Description
ID
ID of the VMFS datastore.
LUN
Logical unit number (LUN) ID of the VMFS datastore.
Name
Name of the VMFS datastore.
Description
Brief description of the VMFS datastore.
Health state
Health state of the VMFS datastore. The health state code appears in parentheses. Value is one of the following:
  • OK (5) — VMFS datastore is operating normally.
  • Degraded/Warning (10) — Working, but one or more of the following may have occurred:
    • One or more of its storage pools are degraded.
    • Its replication session is degraded.
    • Its replication session has faulted.
    • It has almost reached full capacity. Increase the primary storage size, or create additional datastores to store your data, to avoid data loss. Change VMware VMFS datastore settings explains how to change the primary storage size.
  • Minor failure (15) — One or both of the following may have occurred:
    • One or more of its storage pools have failed.
    • The associated iSCSI node has failed.
  • Major failure (20) — One or both of the following may have occurred:
    • Datastore is unavailable.
    • One or more of the associated storage pools have failed.
  • Critical failure (25) — One or more of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • Datastore is unavailable.
    • Datastore has reached full capacity. Increase the primary storage size, or create additional datastores to store your data, to avoid data loss. Change VMware VMFS datastore settings explains how to change the primary storage size.
  • Non-recoverable error (30) — One or both of the following may have occurred:
    • One or more of its storage pools are unavailable.
    • Datastore is unavailable.
Health details
Additional health information. See Appendix A, Reference, for health information details.
Storage pool ID
ID of the storage pool the datastore uses.
Storage pool
Name of the storage pool the LUN is using.
Size
Quantity of storage reserved for primary data.
Maximum size
Maximum size to which you can increase the primary storage capacity.
Thin provisioning enabled
Indication of whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over-provisioned to support more storage capacity than they actually possess.
The Unisphere online help provides more details on thin provisioning.
Current allocation
If thin provisioning is enabled, the quantity of primary storage currently allocated through thin provisioning.
Protection size used
Quantity of storage currently used for protection data.
Snapshot count
Total number of snapshots on the datastore.
Maximum protection size
Maximum size to which you can increase the protection storage size.
Protection schedule
ID of a protection schedule applied to the VMFS datastore. View protection schedules explains how to view the IDs of the schedules on the system.
Protection schedule paused
Indication of whether an applied protection schedule is currently paused.
Snapshot auto-delete
Indicates whether application snapshots can be deleted automatically. Value is one of the following:
  • Yes
  • No
SP owner
Indicates the default owner of the LUN. Value is one of the following:
  • SP A
  • SP B
Trespassed
Indicates whether the LUN is trespassed to the peer SP. Value is one of the following:
  • Yes
  • No
LUN access hosts
List of hosts with access permissions to the VMFS datastore, presented to the hosts as a LUN.
Snapshots access hosts
List of hosts with access permissions to the VMFS datastore snapshots.
WWN
World Wide Name of the VMware resource.
Replication destination
Flag indicating whether the resource is a destination for a replication session. Value is one of the following:
  • Yes
  • No
Creation time
The time the resource was created.
Last modified time
The time the resource was last modified.
FAST VP policy
FAST VP policy of the datastore. Value is one of the following:
  • Start high then auto-tier
  • Auto-tier
  • Highest available tier
  • Lowest available tier
FAST VP distribution
Percentage of the datastore assigned to each tier. The format is:
<tier_name>:<value>%
where <tier_name> is the name of the storage tier and <value> is the percentage of storage in that tier.
Version
Indicates the VMFS version of the datastore. Value is one of the following:
  • 3
  • 5
Block size
Indicates the block size in megabytes. Value is one of the following:
  • 1
  • 2
  • 4
  • 8

Create VMware VMFS datastores

Create a VMFS datastore.

Format
/stor/prov/vmware/vmfs create [-async] -name <value> [-descr <value>] -pool <value> -size <value> [-thin {yes | no}] [-sched <value> [-schedPaused {yes | no}]] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-vdiskHosts <value>] [-snapHosts <value>] [-replDest {yes | no}] [-version {3 [-blockSize {1 | 2 | 4 | 8}] | 5}]
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the VMFS datastore.
Use a name that reflects the type and version of the application that will use it, which can facilitate how the VMFS datastore is managed and monitored through Unisphere.
-descr
Type a brief description of the VMFS datastore.
-pool
Type the name of the storage pool that the VMFS datastore will use.
Value is case-insensitive.
View storage pools explains how to view the names of the storage pools on the system.
-size
Type the quantity of storage to reserve for the VMFS datastore. Storage resource size limitations explains the limitations on storage size.
-thin
Enable thin provisioning on the VMFS datastore. Value is yes or no. Default is no.
-sched
Type the ID of a protection schedule to apply to the storage resource. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Specify whether to pause the protection schedule specified for -sched. Value is yes or no.
-vdiskHosts
Type the ID of each host configuration to give access to the VMFS datastore. Separate each ID with a comma. By default, all iSCSI initiators on the host can access the VMFS datastore. To allow access for specific initiators, type the IQN of each initiator in square brackets after the host ID. For example: ID[IQN,IQN], where 'ID' is a host configuration ID and 'IQN' is an initiator IQN. View host configurations explains how to view the ID of each host configuration.
-snapHosts
Type the ID of each host configuration to give access to snapshots of the VMFS datastore. Separate each ID with a comma. By default, all iSCSI initiators on the host can access all VMFS datastore snapshots. To allow access for specific initiators, type the IQN of each initiator in square brackets after the host ID. For example: ID[IQN,IQN], where 'ID' is a host configuration ID and 'IQN' is an initiator IQN. View host configurations explains how to view the ID of each host configuration.
-replDest
Specifies whether the resource is a replication destination. Valid values are:
  • Yes
  • No (default)
Values are case insensitive.
-fastvpPolicy
Specify the FAST VP policy of the datastore. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
-version
Type the VMFS version of the datastore. Value is one of the following:
  • 3 (default)
  • 5
-blockSize
Type the block size in megabytes of the datastore. Value is one of the following:
  • 1
  • 2
  • 4
  • 8 (default)
Example

The following command creates a VMFS datastore with these settings:

  • Name is Accounting3.
  • Description is “Accounting Group 3.”
  • Uses the capacity storage pool.
  • Provides host access permissions to the VMFS datastore (presented as a LUN) to host configurations 1014 and 1015.
  • No protection schedule.

The VMFS datastore receives the ID VMFS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/vmfs create -name "Accounting3" -descr "Accounting Group 3" -pool capacity -size 100G -thin yes -vdiskHosts "1014,1015"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = VMFS_1
Operation completed successfully.
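The example above grants access to all iSCSI initiators of hosts 1014 and 1015. The bracketed ID[IQN,IQN] form described for -vdiskHosts restricts access to specific initiators; a hedged sketch, in which the IQN values are hypothetical:

```shell
# Hypothetical sketch: restrict LUN access to two specific iSCSI
# initiators of host configuration 1014 (the IQNs are example values,
# not from the document).
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/vmfs -id VMFS_1 set -vdiskHosts "1014[iqn.1998-01.com.vmware:esx1-a,iqn.1998-01.com.vmware:esx1-b]"
```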

View VMware VMFS datastores

Display the list of existing VMFS datastores. You can filter on the ID of a VMFS datastore.

The show action command explains how to change the output format.
Format
/stor/prov/vmware/vmfs [-id <value>] show
Object qualifier
Qualifier
Description
-id
Type the ID of a VMFS datastore.
Example

The following command displays details about the VMFS datastore on the system:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/vmfs show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

1:     ID                   = VMFS_1
       LUN                  = sv_1
       Name                 = MyVMware
       Description          = My description
       Health state         = OK (5)
       Storage pool ID      = pool_2
       Storage pool         = capacity
       Size                 = 107374182400 (100G)
       Protection size used = 0
       SP owner             = SPA
       Trespassed           = no

Change VMware VMFS datastore settings

Change the settings for a VMFS datastore.

Format
/stor/prov/vmware/vmfs -id <value> set [-async] [-name <value>] [-descr <value>] [-size <value>] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-vdiskHosts <value>] [-snapHosts <value>] [-spOwner {spa | spb}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest | none}] [-replDest {yes | no}]
Object qualifier
Qualifier
Description
-id
Type the ID of the VMFS datastore to change.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-name
Type a name for the VMFS datastore.
Use a name that reflects the type and version of the application that will use it, which can facilitate how the VMFS datastore is managed and monitored through Unisphere.
-descr
Type a brief description of the VMFS datastore.
-size
Type the quantity of storage to allocate for the VMFS datastore. Storage resource size limitations explains the limitations on storage size.
-sched
Type the ID of a protection schedule to apply to the VMFS datastore. View protection schedules explains how to view the IDs of the schedules on the system.
-schedPaused
Specify whether to pause the protection schedule specified for -sched. Value is yes or no.
-noSched
Unassign the protection schedule.
-vdiskHosts
Type the ID of each host configuration to give access to the VMFS datastore. Separate each ID with a comma. By default, all iSCSI initiators on the host can access the VMFS datastore. To allow access for specific initiators, type the IQN of each initiator in square brackets after the host ID. For example: ID[IQN,IQN], where 'ID' is a host configuration ID and 'IQN' is an initiator IQN. View host configurations explains how to view the ID of each host configuration.
-snapHosts
Type the ID of each host configuration to give access to snapshots of the VMFS datastore. Separate each ID with a comma. By default, all iSCSI initiators on the host can access all VMFS datastore snapshots. To allow access for specific initiators, type the IQN of each initiator in square brackets after the host ID. For example: ID[IQN,IQN], where 'ID' is a host configuration ID and 'IQN' is an initiator IQN. View host configurations explains how to view the ID of each host configuration.
-spOwner
Specify the default SP that owns the datastore.
-replDest
Specifies whether the resource is a replication destination. Valid values are:
  • yes
  • no
Values are case insensitive.
-fastvpPolicy
Specify the FAST VP policy of the datastore. Value is one of the following:
  • startHighThenAuto
  • auto
  • highest
  • lowest
Example

The following command updates VMFS datastore VMFS_1 with these settings:

  • Name is Accounting4.
  • Description is “Accounting Group 4.”
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/vmfs -id VMFS_1 set -name Accounting4 -descr "Accounting Group 4"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = VMFS_1
Operation completed successfully.

Delete VMware VMFS datastores

Delete a VMFS datastore.

Deleting a VMFS datastore removes all data and snapshots of it from the system. After the VMFS datastore is deleted, you cannot restore the data from snapshots. Back up all data from the VMFS datastore before deleting it.
Format
/stor/prov/vmware/vmfs -id <value> delete [-deleteSnapshots {yes | no}] [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of the VMFS datastore to delete.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-deleteSnapshots
Specify whether to delete the datastore along with its snapshots. Value is yes or no (default).
Example

The following command deletes VMFS datastore VMFS_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/vmfs -id VMFS_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.
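To also remove remaining snapshots in the same operation, the -deleteSnapshots qualifier can be added; a minimal sketch:

```shell
# Hypothetical sketch: delete VMFS datastore VMFS_1 together with any
# snapshots it still has (-deleteSnapshots defaults to no).
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/vmware/vmfs -id VMFS_1 delete -deleteSnapshots yes
```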

Manage data deduplication

Deduplication increases storage efficiency by eliminating redundant data from the files stored in a file system, which saves storage space. When you enable deduplication on a storage resource, the system scans the file system on that resource for redundant data and deduplicates it to free storage space. The scan runs once every week.

When the system is busy, scanning is limited or suspended so as not to further reduce system performance. When the system returns to normal operation, normal scanning resumes. After a file system is deduplicated, the amount of storage used by the storage resource can be significantly reduced, by as much as 50 percent. The Unisphere online help provides more details about deduplication.

You can enable deduplication for file systems.

This command supports asynchronous execution.

The following table lists the attributes for deduplication:

Table 17. Deduplication attributes
Attribute
Description
ID
ID of the storage resource on which deduplication is enabled.
Enabled
Indication of whether deduplication is enabled. Value is yes or no.
State
State of the deduplication scan, which runs once a week. Value is one of the following:
  • paused — System is not currently scanning the storage resource.
  • running — System is currently scanning the storage resource. This is the default value when deduplication is enabled.
Excluded file extensions
List of file extensions that specify the files that will not be deduplicated. Each file extension is separated by a colon.
Excluded paths
List of paths on the filesystem that contains files that will not be deduplicated. Each path is separated by a semi-colon.
Last scan
Date and time when the system last scanned the filesystem.
Files deduplicated
The number and percentage of files deduplicated. Value appears in the <num> (<perc>%) format, where:
  • <num> — Number of files deduplicated.
  • <perc> — Percentage of files deduplicated.
Percent complete
Status (as a percentage) of the deduplication scan process.
Total size
Total capacity size of the storage resource on which deduplication is enabled.
Original size used
Amount of storage used by the storage resource before its files are deduplicated.
Current size used
Amount of storage used by the storage resource after its files are deduplicated.
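The <num> (<perc>%) value of the Files deduplicated attribute can be split with standard shell tools; a minimal sketch, assuming the value looks like the 10 (30%) shown in the view example below:

```shell
# Hypothetical sketch: split a "Files deduplicated" value such as
# "10 (30%)" into the file count and the percentage.
value='10 (30%)'
num=${value%% *}                                      # text before the first space -> 10
perc=$(echo "$value" | sed 's/.*(\([0-9]*\)%).*/\1/') # digits inside "(...%)" -> 30
echo "$num files, $perc percent"                      # prints "10 files, 30 percent"
```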

View deduplication settings

View details about the deduplication settings on the system.

The show action command explains how to change the output format.
Format
/eff/dedup [-id <value>] show
Object qualifier
Qualifier
Description
-id
Type the ID of a storage resource on which deduplication is enabled.
Example

The following command displays the deduplication settings:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /eff/dedup show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1:     ID                 = SFS_1
       Resource type      = sf
       State              = running
       File exclude list  = .jpg:.gif
       Path exclude list  = /home/photo
       Last scan          = 2014-04-25 04:42:28
       Files deduplicated = 10 (30%)
       Percent complete   = 100%
       Total size         = 2147483648 (2.0G)
       Original size used = 8192 (8.0K)
       Current size used  = 2818048 (2.6M)

Configure deduplication settings

Configure deduplication settings for a storage resource.

Format
/eff/dedup -id <value> set [-async] [-enabled {yes|no}] [-state {running|paused}] [-fileExcList <value>] [-pathExcList <value>]
Object qualifier
Qualifier
Description
-id
Type the ID of the storage resource on which to configure deduplication.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
-enabled
Enable deduplication. Valid values are:
  • Yes
  • No
When you disable deduplication, all files on the storage resource are reduplicated, which returns storage usage to its original size before the files were deduplicated. Ensure the storage pool can accommodate the added storage use before disabling deduplication.
-state
Specify whether to run or pause deduplication scanning, which scans the target storage resource once a week. Value is one of the following:
  • running — System will scan the storage resource. This is the default value when -enabled is yes.
  • paused — System will not scan the storage resource.
To change this qualifier, deduplication must be enabled.
-fileExcList
Type a list of file extensions for files that will not be deduplicated. Use a colon to separate each file extension.
To change this qualifier, deduplication must be enabled and -state must be paused.
-pathExcList
List of paths on the file system that contain files that will not be deduplicated. Use a semicolon to separate the paths.
To change this qualifier, deduplication must be enabled and -state must be paused.
Example

The following command pauses deduplication scanning for file system fs_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /eff/dedup -id fs_1 set -state paused
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

ID = fs_1
Operation completed successfully.
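The exclusion-list qualifiers require deduplication to be enabled and the scan paused; a sketch using the separators shown in the view example (colon for extensions, semicolon for paths; the second path is a hypothetical example):

```shell
# Hypothetical sketch: with the scan paused, exclude .jpg/.gif files and
# two directory paths from deduplication on file system fs_1.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /eff/dedup -id fs_1 set -fileExcList ".jpg:.gif" -pathExcList "/home/photo;/home/tmp"
```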

Force a rescan

Rescan a target storage resource to deduplicate it immediately. By default, the system performs a scan once every week.

Format
/eff/dedup -id <value> rescan [-async]
Object qualifier
Qualifier
Description
-id
Type the ID of a storage resource on which deduplication is enabled.
Action qualifier
Qualifier
Description
-async
Run the operation in asynchronous mode.
Example

The following command forces deduplication scanning of file system fs_1:

uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /eff/dedup -id fs_1 rescan
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection

Operation completed successfully.