Saturday 4 April 2015

Moving OCR / Voting Disk to a Different ASM Diskgroup

1st Method

11gR2_OCR_VOTE

In case you wish to move the OCR and voting disk out of the current diskgroup, you can follow the steps below. These steps can also be used to move these files from an ASM diskgroup to a NetApp filer and vice versa. In this example we move the OCR/voting disk from diskgroup DATA to DG_DATA01.
1) Create the new diskgroup DG_DATA01
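A minimal sketch of step 1 (run as the grid owner), assuming external redundancy and the ASM disk ORCL:DISK4 that appears later in this example; adjust disk names and redundancy for your environment:

sqlplus / as sysasm
SQL> create diskgroup DG_DATA01 external redundancy
     disk 'ORCL:DISK4'
     attribute 'compatible.asm'='11.2';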
2) Ensure that ASM compatibility is set to 11.2
 select name,COMPATIBILITY,DATABASE_COMPATIBILITY from V$ASM_DISKGROUP where name='DG_DATA01';
 
NAME       COMPATIBILITY      DATABASE_COMPATIBILITY
--------  ----------------   --------------------------
DG_DATA01  10.1.0.0.0        10.1.0.0.0
If it is not, you can change it using the following command:
alter diskgroup DG_DATA01 SET ATTRIBUTE 'compatible.asm'='11.2';
 
3) Replace the voting disk using the crsctl command as the oracle user. This can be done without stopping the clusterware.
 crsctl replace votedisk +DG_DATA01
Successful addition of voting disk 241b1e0a36344f7bbfaca4a576d514e9.
Successful deletion of voting disk 72e47e5e3afb4fe9bfb502b1b4340503.
Successfully replaced voting disk group with +DG_DATA01.
CRS-4266: Voting file(s) successfully replaced
To list the voting disks:
crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   241b1e0a36344f7bbfaca4a576d514e9 (ORCL:DISK4) [DG_DATA01]
 
Located 1 voting disk(s).
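As an optional sanity check after the replacement (output will vary by environment), you can confirm overall clusterware health:

crsctl check cluster -all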
4) To replace the OCR, we need to have multiple OCR locations configured. In our case we add an OCR location in the new diskgroup and then delete the old one. These commands must be run as root.
[root ~]# ocrconfig -add +DG_DATA01
[root ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
    Version                  :          3
    Total space (kbytes)     :     262120
    Used space (kbytes)      :       2424
    Available space (kbytes) :     259696
    ID                       :  579998313
    Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
    Device/File Name         : +DG_DATA01
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
    Cluster registry integrity check succeeded
    Logical corruption check succeeded
   
   
[root ~]# ocrconfig -delete +DATA
[root ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
    Version                  :          3
    Total space (kbytes)     :     262120
    Used space (kbytes)      :       2424
    Available space (kbytes) :     259696
    ID                       :  579998313
    Device/File Name         : +DG_DATA01
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
    Cluster registry integrity check succeeded
    Logical corruption check succeeded
Verify that the OCR location is pointing to the correct diskgroup on both nodes.
# cat /etc/oracle/ocr.loc
#Device/file +DATA getting replaced by device +DG_DATA01 
ocrconfig_loc=+DG_DATA01
local_only=false
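You can also let cluvfy verify OCR integrity across all nodes (a quick sketch, run as the grid/oracle owner):

cluvfy comp ocr -n all -verbose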
5) The ASM server parameter file (spfile) is also present in the same diskgroup. If you wish to move it to the new diskgroup as well, follow the steps below:
SQL> create pfile from spfile;
File created.
 
SQL> create spfile='+DG_DATA01' from pfile;
File created.
You will need to restart the clusterware stack on both nodes, as the ASM instance is still using the old spfile. Run the following commands on both nodes:
crsctl stop cluster
crsctl start cluster
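After the restart, a quick sketch to confirm that ASM now uses the spfile in the new diskgroup (both commands should report a path under +DG_DATA01):

asmcmd spget
sqlplus / as sysasm
SQL> show parameter spfile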


NOTE: The same is not true for the voting disk. Once you move voting disks from raw devices to an ASM diskgroup, it is not possible to add or delete individual voting files, as shown below.


[root@RAC2 bin]# ./crsctl add css votedisk /dev/raw/raw2
CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM

SOLUTION: multiplex the voting disk

1) check voting disk
[oracle@myrac-1 bin]$  crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   37b49fb451214f06bf9cb720358bb189 (/rac/rhdiskdata01) [DATA]
Located 1 voting disk(s).


2) Multiplex the voting disk (as root or via sudo)
Use 'crsctl replace votedisk' to relocate/multiplex the voting disk to another ASM disk group (with normal redundancy):
/u01/app/grid/product/11.2.0/bin/crsctl replace votedisk +CRS

3) check newly added voting disk
[oracle@myrac-1 bin]$ crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   afb77b2693a24f1ebfe876784103e82a (/rac/rhdiskdata03) [CRS]
2. ONLINE   3e2542c5b1154ffdbfc5b6dea7dce190 (/rac/rhdiskdata04) [CRS]
3. ONLINE   5e0f3c5921cc4f93bf223de1465d87cc (/rac/rhdiskdata05) [CRS]
Located 3 voting disk(s).


Note that we cannot use 'crsctl add css votedisk' to add a voting disk on an ASM disk group or ACFS file system. We can use 'crsctl replace votedisk' to move the voting disk to an ASM disk group with normal redundancy. The new ASM disk group should have a minimum of 3 failure groups (a total of 3 disks). This configuration provides 3 voting disks (one per failure group) and a single OCR, which takes on the redundancy of the disk group. Hence, a separate disk group with normal redundancy (optionally with one quorum disk) is the way to multiplex voting disks.
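A hedged sketch of such a disk group, using the three disks from the example above (one disk per failure group); paths and redundancy settings should be adapted to your environment:

SQL> create diskgroup CRS normal redundancy
     failgroup fg1 disk '/rac/rhdiskdata03'
     failgroup fg2 disk '/rac/rhdiskdata04'
     failgroup fg3 disk '/rac/rhdiskdata05'
     attribute 'compatible.asm'='11.2';

Then, as root:
/u01/app/grid/product/11.2.0/bin/crsctl replace votedisk +CRS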

clusterware startup sequence

Oracle 11gR2 

ohasd
  |-- orarootagent
  |     |-- ora.cssdmonitor : monitors CSSD and node health (along with the cssdagent); tries to restart the node if the node is unhealthy
  |     |-- ora.ctssd       : Cluster Time Synchronization Services daemon
  |     |-- ora.crsd
  |           |-- oraagent
  |           |     |-- ora.LISTENER.lsnr
  |           |     |-- ora.LISTENER_SCAN.lsnr
  |           |     |-- ora.ons
  |           |     |-- ora.eons
  |           |     |-- ora.asm
  |           |     |-- ora.DB.db
  |           |-- orarootagent
  |                 |-- ora.nodename.vip
  |                 |-- ora.net1.network
  |                 |-- ora.gns.vip
  |                 |-- ora.gnsd
  |                 |-- ora.SCANn.vip
  |-- cssdagent
  |     |-- ora.cssd : Cluster Synchronization Services
  |-- oraagent
        |-- ora.mdnsd : used for DNS lookup
        |-- ora.evmd
        |-- ora.asmd
        |-- ora.gpnpd : Grid Plug and Play; makes adding a node to the cluster easier (less configuration is needed for the new node)


In the original (colour-coded) diagram, a resource written in blue and bold is owned by root; the other resources are owned by oracle (all of this on a UNIX environment). When a resource is managed by root, the corresponding crsctl command must be run as root rather than as oracle.
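To see the lower-stack resources started by ohasd (and which agent/user owns them), a quick sketch (run as root or the grid owner):

crsctl stat res -t -init     # ohasd-managed resources (ora.cssd, ora.ctssd, ora.crsd, ...)
crsctl stat res -t           # crsd-managed resources (listeners, VIPs, databases, ...)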


DIAG TOOL

11gR2 RAC 

Cluster Diagnostic Collection Tool

$GRID_HOME/bin/diagcollection.pl --> Script

The cluster diagnostic collection tool helps you collect diagnostic information for all the required components, such as HOST, OS, CLUSTER, AGENT, etc.


------ OPTIONS AVAILABLE ------

[root@rac1 ~]# diagcollection.pl

Production Copyright 2004, 2008, Oracle.  All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
diagcollection
    --collect
             [--crs] For collecting crs diag information
             [--adr] For collecting diag information for ADR
             [--ipd] For collecting IPD-OS data
             [--all] Default.For collecting all diag information.
             [--core] UNIX only. Package core files with CRS data
             [--afterdate] UNIX only. Collects archives from the specified date. Specify in mm/dd/yyyy format
             [--aftertime] Supported with -adr option. Collects archives after the specified time. Specify in YYYYMMDDHHMISS24 format
             [--beforetime] Supported with -adr option. Collects archives before the specified date. Specify in YYYYMMDDHHMISS24 format
             [--crshome] Argument that specifies the CRS Home location
             [--incidenttime] Collects IPD data from the specified time.  Specify in MM/DD/YYY24HH:MM:SS format
                  If not specified, IPD data generated in the past 24 hours are collected
             [--incidentduration] Collects IPD data for the duration after the specified time.  Specify in HH:MM format.
                 If not specified, all IPD data after incidenttime are collected
             NOTE:
             1. You can also do the following
                ./diagcollection.pl --collect --crs --crshome <CRS Home>

     --clean        cleans up the diagnosability
                    information gathered by this script

     --coreanalyze  UNIX only. Extracts information from core files
                    and stores it in a text file




The undocumented --nocore option can be used to ignore core files while collecting diagnostic information with the Oracle Cluster Diagnostic Collection Tool.
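A hedged usage sketch combining the documented syntax with this undocumented option (behaviour may vary by version):

./diagcollection.pl --collect --crs --nocore --crshome /app/gridHome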




Example:

[root@rac1 ~]# diagcollection.pl --collect --crs --crshome /app/gridHome
Production Copyright 2004, 2008, Oracle.  All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following CRS diagnostic archives will be created in the local directory.
crsData_rac1_20150314_1940.tar.gz -> logs,traces and cores from CRS home. Note: core files will be packaged only with the --core option.
ocrData_rac1_20150314_1940.tar.gz -> ocrdump, ocrcheck etc
coreData_rac1_20150314_1940.tar.gz -> contents of CRS core files in text format

osData_rac1_20150314_1940.tar.gz -> logs from Operating System
Collecting crs data
/bin/tar: log/rac1/cssd/ocssd.log: file changed as we read it
/bin/tar: log/rac1/ohasd/ohasd.log: file changed as we read it
/bin/tar: log/rac1/agent/ohasd/orarootagent_root/orarootagent_root.log: file changed as we read it
/bin/tar: log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log: file changed as we read it
/bin/tar: log/rac1/agent/crsd/oraagent_oracle/oraagent_oracle.log: file changed as we read it
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting OS logs.

[root@rac1 ~]#


Note: check the collected logs one by one, for example:

tail -100 $GRID_HOME/log/rac1/cssd/ocssd.log
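A small shell sketch to scan the main clusterware logs for recent errors (log paths follow the 11gR2 layout shown above; the node name rac1 is specific to this environment):

for f in cssd/ocssd ohasd/ohasd crsd/crsd; do
  echo "== $f =="
  tail -200 $GRID_HOME/log/rac1/$f.log | grep -iE 'error|fatal|fail'
done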

OHASD

11gR2 RAC

How does ohasd know which disk contains the voting disk? The following is the ASM disk header metadata (dumped using kfed); ohasd uses the vfstart and vfend markers.

If both the vfstart and vfend markers are 0, the disk does *NOT* contain a voting disk.

[root@rac1] kfed read '/dev/sdi' | grep vf
kfdhdb.vfstart: 130 ; 0x0ec: 0x00000060
kfdhdb.vfend:   168 ; 0x0f0: 0x000000d0

[root@rac1] kfed read '/dev/sdg' | grep vf
kfdhdb.vfstart: 0 ; 0x0ec: 0x00000000
kfdhdb.vfend:   0 ; 0x0f0: 0x00000000

[root@rac1] kfed read '/dev/sdh' | grep vf
kfdhdb.vfstart: 0 ; 0x0ec: 0x00000000
kfdhdb.vfend:   0 ; 0x0f0: 0x00000000

So, from the above output, disk '/dev/sdi' contains the voting disk.
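A short shell sketch to check several candidate disks at once (device names as in the example above; run as root or the grid owner):

for d in /dev/sdg /dev/sdh /dev/sdi; do
  echo "== $d =="
  kfed read $d | grep -E 'vfstart|vfend'
done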

Conclusion: the way Voting Disk and OCR are handled has changed significantly in 11gR2; in particular, they can now be kept inside an ASM diskgroup.

oifcfg


11gR2 RAC - oifcfg (Oracle Interface Configuration tool)


This utility is useful for managing hostname/IP address changes during datacenter migrations, or network reconfigurations that require changing the IP addresses of cluster members. (Its output reports interfaces as global public or global cluster_interconnect.)

[root@rac1 peer]# oifcfg

Name:
        oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [{-node <nodename> | -global} [<if_name>[/<subnet>]]]
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public }


[root@rac1 peer]# oifcfg iflist
eth0  192.168.0.0
eth1  192.168.1.0
eth2  192.168.2.0

[root@rac1 peer]# oifcfg getif
eth0  192.168.0.0  global  public
eth1  192.168.1.0  global  cluster_interconnect


[root@rac1 peer]# oifcfg delif -global 

[root@rac1 peer]# oifcfg setif -global <interface name>/<subnet>:public

[root@rac1 peer]# oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
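Putting it together, a hedged example of moving the cluster interconnect to the eth2/192.168.2.0 network listed by iflist above (a clusterware restart on each node is normally required for the change to take effect):

oifcfg setif -global eth2/192.168.2.0:cluster_interconnect   # register the new interconnect
oifcfg getif                                                  # verify both definitions exist
oifcfg delif -global eth1/192.168.1.0                         # drop the old interconnect definition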