Tuesday, January 28, 2020

How To Move Volumes from One SVM to Another SVM Netapp - Rehost

volume rehost - Rehost a volume from one Vserver (SVM) into another Vserver (SVM)

This command is available to cluster administrators at the admin privilege level.

The volume rehost command rehosts a volume from source Vserver onto destination Vserver. The volume name must be unique among the other volumes on the destination Vserver.

Before rehosting a volume, verify whether it is serving NFS, CIFS, or LUNs, and plan the rehost process accordingly.

Netapp volume rehost Pre-checks

  • Release any SnapMirror relationships associated with the volume
  • Unmount the volume
  • Execute the volume rehost command
  • Remount the volume and remap the LUNs
  • Recreate CIFS shares
  • Recreate export policies

Before moving the volume, the destination SVM (Vserver) should have the same protocols enabled as the source, otherwise the rehost will fail. A quick protocol check is sketched below.
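As a quick pre-check (a minimal sketch using the SVM names from the steps further down), confirm that both SVMs have the required protocols enabled before rehosting:

ARKIT-NA::> vserver show -vserver ARKIT-SVM,ARKIT-SVM1 -fields allowed-protocols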
ARKIT-NA::*> volume rehost -vserver ARKIT-SVM -volume Volume1 -destination-vserver ARKIT-SVM1

Warning: Rehosting a volume from one Vserver to another Vserver does not change the security information on that volume.
         If the security domains of the Vservers are not identical, unwanted access might be permitted, and desired access might be denied. An attempt to rehost a volume will disassociate the volume from all
         volume policies and policy rules. The volume must be reconfigured after a successful or unsuccessful rehost operation.
Do you want to continue? {y|n}: y
[Job 23760] Job is queued: Volume rehost operation on volume "Volume1" on Vserver "ARKIT-SVM" to destination Vserver "ARKIT-SVM1" by administrator "admin".

Error: command failed: [Job 23760] Job failed:
       Volume rehost pre-check failed for reasons:


       Cannot rehost volume "Volume1" on Vserver "ARKIT-SVM" because the volume is type "RW" and is in a SnapMirror relationship. To rehost the volume, use the "snapmirror delete"
       command on the destination volume, followed by "snapmirror release -relationship-info-only" on the source volume, and then try the command again.
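For reference, a minimal sketch of that cleanup, assuming a hypothetical SnapMirror destination path of DEST-SVM:Volume1_dr (use snapmirror show to find the actual relationship for Volume1 and substitute your own path):

#### On the destination cluster: find and delete the relationship ####
::> snapmirror show -source-path ARKIT-SVM:Volume1
::> snapmirror delete -destination-path DEST-SVM:Volume1_dr

#### On the source cluster: release the relationship metadata ####
::> snapmirror release -destination-path DEST-SVM:Volume1_dr -relationship-info-only true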



Step 1: volume unmount -vserver ARKIT-SVM -volume Volume1

Step 2: set -privilege advanced


Step 3: volume rehost -vserver ARKIT-SVM -volume Volume1 -destination-vserver ARKIT-SVM1


Do you want to continue? {y|n}: y
[Job 23761] Job succeeded: Successful


Step 4: volume show -fields junction-path -vserver ARKIT-SVM1


volume mount -vserver ARKIT-SVM1 -volume Volume1 -junction-path /Volume1


Explanation: Unmount the volume and switch to the advanced privilege level, then execute the volume rehost command.

After a successful rehost, check whether the volume is mounted on the destination SVM; if it is not, mount it again at the desired junction path.

If the volume hosts LUNs, use the -auto-remap-luns true option, as sketched below.
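A minimal sketch of that variant, reusing the SVM and volume names from the steps above, followed by a quick check of the resulting LUN mappings on the destination SVM:

ARKIT-NA::*> volume rehost -vserver ARKIT-SVM -volume Volume1 -destination-vserver ARKIT-SVM1 -auto-remap-luns true
ARKIT-NA::*> lun mapping show -vserver ARKIT-SVM1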

More options


-vserver <vserver name> - Source Vserver name
         This specifies the Vserver on which the volume is located.

-volume <volume name> - Target volume name
         This specifies the volume that is to be rehosted.

-destination-vserver <vserver name> - Destination Vserver name
         This specifies the destination Vserver where the volume must be located post rehost operation.

{ [-force-unmap-luns {true|false}] - Unmap LUNs in volume
           This specifies whether the rehost operation should unmap the LUNs present on the volume. The default setting is false (the rehost operation will not unmap LUNs). When set to true, the command
           unmaps all mapped LUNs on the volume.

| [-auto-remap-luns {true|false}] } - Automatic Remap of LUNs
           This specifies whether the rehost operation should map LUNs at the destination Vserver in the same way they were mapped on the volume at the source Vserver. The default setting is false (the
           rehost operation will not map LUNs at the destination Vserver). When set to true, the command creates initiator groups, along with their initiators (if present), at the destination Vserver with
           the same names as on the source Vserver, and then maps the LUNs on the volume to those initiator groups as they were mapped at the source Vserver.
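As the rehost warning above notes, the volume is disassociated from its policies, so NAS access has to be reconfigured on the destination SVM. A minimal sketch, assuming a hypothetical export policy name (Volume1_policy) and client subnet (192.168.1.0/24); adjust the rules and share properties to match what was configured on the source SVM:

ARKIT-NA::> vserver export-policy create -vserver ARKIT-SVM1 -policyname Volume1_policy
ARKIT-NA::> vserver export-policy rule create -vserver ARKIT-SVM1 -policyname Volume1_policy -clientmatch 192.168.1.0/24 -rorule sys -rwrule sys
ARKIT-NA::> volume modify -vserver ARKIT-SVM1 -volume Volume1 -policy Volume1_policy
ARKIT-NA::> vserver cifs share create -vserver ARKIT-SVM1 -share-name Volume1 -path /Volume1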





Monday, January 28, 2019

How To Collect NDMP Logs from Netapp Cluster mode


Using this simple method you can collect Netapp NDMP logs and troubleshoot NDMP issues on a Netapp cluster.

Check whether node-scoped NDMP mode is enabled:

::> system services ndmp node-scope-mode status
NDMP node-scope-mode is disabled.


If the status is disabled, enable it by running the command below:

 ::> system services ndmp node-scope-mode on
NDMP node-scope-mode is enabled.


Check the status again to confirm it is enabled:

::> system services ndmp node-scope-mode status
NDMP node-scope-mode is enabled.


When log collection is complete, disable node-scoped NDMP mode using the command below:
 ::> system services ndmp node-scope-mode off
NDMP node-scope-mode is disabled.


If you would like to collect NDMP logs at the Vserver (SVM) level, first enable debugging at the Vserver level using the commands below.

Check which Vserver is enabled for NDMP service

::> vserver services ndmp show
::> vserver services ndmp status

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y




Now enable debug mode on the Vserver using the command below:

::*> vserver services ndmp modify -vserver ARKIT-NA -debug-filter normal -debug-enable true
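To confirm the setting took effect, you can display the NDMP configuration for that Vserver (the debug fields are visible at the diag privilege level set above):

::*> vserver services ndmp show -vserver ARKIT-NA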

Once log collection is complete, disable NDMP debug mode using the command below:

::*> vserver services ndmp modify -vserver ARKIT-NA -debug-filter normal -debug-enable false 

Now download the NDMP logs from SPI; SPI is enabled by default on C-Mode (clustered Data ONTAP) 8.2 and earlier versions.

Open a browser and log in to SPI:

 http://FILER-IP-ADDRESS/spi



Browse to /etc/logs/mlog and download the ndmpd.log* files.

That's how you can collect NDMP Debug logs for troubleshooting Netapp NDMP issues.



Monday, October 29, 2018

How To Shutdown Netapp Cluster Gracefully

Shutdown Netapp Cluster Gracefully 

1. Go to advanced mode and run a configuration backup
2. Identify the primary and secondary nodes
3. Stop the Vservers
4. Disable storage failover (HA)
5. Shut down the nodes one by one

ARK-NA::> set advanced

##### Repeat for All nodes ####
 ARK-NA::> system configuration backup create -node ARK-NA01 -backup-type cluster -backup-name BACKUP_BEFORE_SHUTDOWN_NODE1 


ARK-NA::*> system configuration backup show
Node       Backup Name                               Time               Size
---------  ----------------------------------------- ------------------ --------

ARK-NA01   ARK-NA.8hour.2018-10-25.10_15_00.7z       10/25 10:15:00     81.52MB
ARK-NA01   ARK-NA.8hour.2018-10-25.18_15_00.7z       10/25 18:15:00     86.11MB
ARK-NA01   ARK-NA.8hour.2018-10-26.02_15_00.7z       10/26 02:15:00     86.85MB


Check the primary and secondary node details (the Master column in the cluster ring show output identifies the primary node) and shut down the secondary nodes first, leaving the primary node for last:

ARK-NA::> cluster ring show

Repeat for all the data Vservers:

ARK-NA::> vserver stop -vserver VSERVER-NAME 
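After stopping all data Vservers, a quick check that their admin state now reads stopped:

ARK-NA::> vserver show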

Disable storage failover (HA) before halting the nodes:

ARK-NA::> storage failover modify -node * -enabled false
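To confirm failover is disabled on all nodes before halting them:

ARK-NA::> storage failover show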

Log in to each node and run this command:

ARK-NA::> halt -node ARK-NA01 -inhibit-takeover true -ignore-quorum-warnings true -reason "Maintenance Activity Planned"

Or

ARK-NA::> system node halt -node ARK-NA01 -inhibit-takeover true -skip-lif-migration-before-shutdown true -reason Shutdown