Tuesday, 21 July 2015

UNDO TABLESPACE QUERIES

CALCULATE UNDO_RETENTION 

----Total Size of undo tablespace 
select trim(sum(d.bytes)) "undo"
from v$datafile d,
v$tablespace t,
dba_tablespaces s
where s.contents='UNDO'
and s.status='ONLINE'
and t.name=s.tablespace_name
and d.ts#=t.ts#;



---Peak undo blocks generated per second (undo_block_per_sec)
select max(undoblks/((end_time-begin_time)*3600*24)) "undo_block_per_sec" from v$undostat;




---Calculate UNDO_RETENTION (in seconds) using the formula
UNDO_RETENTION = undo tablespace size (in bytes) / (undo_block_per_sec * DB_BLOCK_SIZE)

Set UNDO_RETENTION to a value rounded up from this result.
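
As a convenience, the following query (a sketch that simply combines the three queries above; the column aliases are my own) reports the current UNDO_RETENTION setting next to the value suggested by the formula:

select d.undo_size/(1024*1024)                                       "undo_size_mb",
       to_number(e.value)                                            "current_undo_retention_sec",
       round(d.undo_size/(to_number(f.value)*g.undo_block_per_sec))  "suggested_undo_retention_sec"
from (select sum(a.bytes) undo_size
        from v$datafile a, v$tablespace b, dba_tablespaces c
       where c.contents = 'UNDO'
         and c.status = 'ONLINE'
         and b.name = c.tablespace_name
         and a.ts# = b.ts#) d,
     v$parameter e,
     v$parameter f,
     (select max(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
        from v$undostat) g
where e.name = 'undo_retention'
  and f.name = 'db_block_size';

--Then, for example, round the suggested value up and set it:
--alter system set undo_retention=<rounded value> scope=both;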




------Oracle automatically tunes undo retention based on the current instance workload
select to_char(begin_time,'hh24:mi:ss') BEGIN_TIME,
to_char(end_time,'hh24:mi:ss') END_TIME,
maxquerylen,nospaceerrcnt,tuned_undoretention from v$undostat;


select to_char(begin_time,'hh24:mi:ss') BEGIN_TIME,
to_char(end_time,'hh24:mi:ss') END_TIME,
maxquerylen,ssolderrcnt,nospaceerrcnt,undoblks,txncount  from v$undostat order by undoblks;




----Undo Usage
select (select sum(bytes)/1024/1024 from dba_data_files
         where tablespace_name like 'UND%') allocated_mb,
       (select sum(bytes)/1024/1024 from dba_free_space
         where tablespace_name like 'UND%') free_mb,
       (select sum(bytes)/1024/1024 from dba_undo_extents
         where tablespace_name like 'UND%') used_mb
from dual
/



----Undo Used by session
select s.sid,s.username,t.used_urec,t.used_ublk
from v$session s,v$transaction t
where s.saddr=t.ses_addr
and s.sid=107
order by t.used_ublk desc;



select s.sid,t.statistic#,s.value
from v$sesstat s,v$statname t
where s.statistic#=t.statistic#
and t.name='undo change vector size'
order by s.value desc;



---Sessions generating the most undo, by username and SID
select sql.sql_text sql_text,
       t.used_urec records,
       t.used_ublk blocks,
       (t.used_ublk*8192/1024) kbytes   -- assumes an 8 KB block size
from v$transaction t,
     v$session s,
     v$sql sql
where t.addr=s.taddr
and s.sql_id=sql.sql_id
and s.username='&USERNAME';

Monday, 20 July 2015

11gR2 Remove Node from Existing Oracle Clusterware on Linux - (RHEL 5)

RAC 11gR2 

Note: If you are using a non-shared ORACLE_HOME, GRID_HOME, and ORACLE_BASE, then follow all of the steps below.

If you are using shared homes, then see this link: 11g R2 RAC Delete Node



Introduction

Although not as exciting as building an Oracle RAC or adding a new node and instance to a cluster database, removing a node from a clustered environment is just as important to understand for a DBA managing Oracle RAC. While it is true that most of the attention in a cluster database environment is focused on extending the database tier to support increased demand, the exact opposite is just as likely to be encountered when the DBA needs to remove a node from an existing Oracle RAC. One scenario may be a node failure; another is that an underutilized server in the database cluster could be better used by another business unit. In either case, a node can be removed from the cluster while the remaining nodes continue to service ongoing requests.

This document is an extension to two of my articles: "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5)" and "Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5)". Contained in this new article are the steps required to remove a single node from an existing three-node Oracle RAC 11g Release 2 (11.2.0.3.0) environment on the CentOS 5 Linux platform. The node being removed is the third node I added in the second article. Although this article was written and tested on CentOS 5 Linux, it should work unchanged with Red Hat Enterprise Linux 5 or Oracle Linux 5.

As part of removing a node from Oracle RAC, you must first remove the Oracle instance, de-install the Oracle Database software and then remove Oracle Grid Infrastructure from the node you are deleting. In other words, you remove the software components from the node you are deleting in the reverse order that you originally installed them. It is important that you perform each step contained in this article in the order provided.
This article assumes the following:
·         The reader has already built and configured a three-node Oracle RAC using the articles "Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5)" and "Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux - (RHEL 5) ".
Note: The current Oracle RAC has been upgraded from its base release 11.2.0.1 to 11.2.0.3 by applying the 10404530 patchset.
·         The third node in the existing Oracle RAC named racnode3 (running the racdb3 instance) will be removed from the cluster making it a two-node cluster. It is assumed that the node to be removed is available.
·         All shared disk storage for the existing Oracle RAC is based on iSCSI using a Network Storage Server; namely Openfiler Release 2.3 (Final) x86_64.
·         The existing Oracle RAC is not using Grid Naming Service (GNS) to assign IP addresses.
·         The existing Oracle RAC database is administrator-managed (not policy-managed).
·         The existing Oracle RAC does not use shared Oracle homes for the Grid Infrastructure or Database software.
·         The Oracle Grid Infrastructure and Oracle Database software is installed using the optional Job Role Separation configuration. One OS user is created to own each Oracle software product — "grid" for the Oracle Grid Infrastructure owner and "oracle" for the Oracle Database software.
·         Oracle ASM is being used for all Clusterware files, database files, and the Fast Recovery Area. The OCR file and the voting files are stored in an ASM disk group named +CRS. The ASMLib support library is configured to provide persistent paths and permissions for storage devices used with Oracle ASM.
All database files are configured using Oracle Managed Files (OMF) with four (4) ASM disk groups (CRS, RACDB_DATA, DOCS, FRA).
·         The existing Oracle RAC is configured with Oracle ASM Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (ADVM) which is being used as a shared file system to store files maintained outside of the Oracle database. The mount point for the cluster file system is /oradocs on all Oracle RAC nodes which mounts the docsvol1 ASM dynamic volume created in the DOCSDG1 ASM disk group.
·         User equivalence is configured for the grid and oracle OS accounts between all nodes in the cluster so that the Oracle Grid Infrastructure and Oracle Database software can be securely removed from the node to be deleted (racnode3). User equivalence will need to be configured from a node that is to remain a member of the Oracle RAC (racnode1 in this guide) to the node being removed so that ssh can be executed without being prompted for a password (a quick check is shown below).
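
As a quick sanity check (not an official verification step), user equivalence can be confirmed by running a remote command from racnode1 as each software owner and verifying that no password prompt appears:

[grid@racnode1 ~]$ ssh racnode3 "hostname; date"
[oracle@racnode1 ~]$ ssh racnode3 "hostname; date"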

Oracle Documentation

While this guide provides detailed instructions for removing a node from an existing Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation. In addition to this guide, users should also consult the relevant Oracle documentation to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.

Example Configuration

The example configuration used in this guide stores all physical database files (data, online redo logs, control files, archived redo logs) on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area is created in a separate ASM disk group named +FRA.
The existing three-node Oracle RAC and the network storage server are configured as described in the table below.
Oracle RAC / Openfiler Nodes

Node Name           Instance Name   Database Name             Processor                            RAM   Operating System
racnode1            racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode2            racdb2                                    1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
racnode3 [Remove]   racdb3                                    1 x Dual Core Intel Xeon, 3.00 GHz   4GB   CentOS 5.5 - (x86_64)
openfiler1                                                    2 x Intel Xeon, 3.00 GHz             6GB   Openfiler 2.3 - (x86_64)

Network Configuration

SCAN Name: racnode-cluster-scan   SCAN IPs: 192.168.1.187, 192.168.1.188, 192.168.1.189

Node Name           Public IP       Private IP      Virtual IP
racnode1            192.168.1.151   192.168.2.151   192.168.1.251
racnode2            192.168.1.152   192.168.2.152   192.168.1.252
racnode3 [Remove]   192.168.1.153   192.168.2.153   192.168.1.253
openfiler1          192.168.1.195   192.168.2.195


Oracle Software Components

Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid
                                                                                             /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle
                                                                                             /u01/app/oracle/product/11.2.0/dbhome_1

Storage Components

Storage Component         File System   Volume Size   ASM Volume Group Name   ASM Redundancy   Openfiler Volume Name
OCR/Voting Disk           ASM           2GB           +CRS                    External         racdb-crs1
Database Files            ASM           32GB          +RACDB_DATA             External         racdb-data1
ASM Cluster File System   ASM           32GB          +DOCS                   External         racdb-acfsdocs1
Fast Recovery Area        ASM           32GB          +FRA                    External         racdb-fra1
The following is a conceptual look at what the environment will look like after removing the third Oracle RAC node (racnode3) from the cluster.

     

Figure 1: Remove racnode3 from the existing Oracle RAC 11g Release 2 System

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final Release).

Remove Oracle Instance

Remove Instance from OEM Database Control Monitoring

If Oracle Enterprise Manager (Database Control) is configured for the existing Oracle RAC, remove the instance from the DB Control cluster configuration before removing it from the cluster database.
The URL for this example is: https://racnode1.idevelopment.info:1158/em

[oracle@racnode1 ~]$ emca -displayConfig dbcontrol -cluster
 
STARTED EMCA at May 4, 2012 12:42:12 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.
 
Enter the following information:
Database unique name: racdb
Service name: racdb.idevelopment.info
Do you wish to continue? [yes(Y)/no(N)]: y
May 4, 2012 12:42:24 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/racdb/emca_2012_05_04_12_42_12.log.
May 4, 2012 12:42:27 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------
 
racdb             racnode1             racnode1.idevelopment.info
racdb             racnode2             racnode1.idevelopment.info
racdb             racnode3             racnode1.idevelopment.info
 
 
Enterprise Manager configuration completed successfully
FINISHED EMCA at May 4, 2012 12:42:27 PM

Run the emca command from any node in the cluster except the node that is running the instance we want to stop monitoring.
Refer to My Oracle Support
Doc ID: 578011.1 - How to manage DB Control 11.x for RAC Database with emca
Doc ID: 394445.1 - emca -deleteInst db fails with Database Instance unavailable

 
[oracle@racnode1 ~]$ emca -deleteInst db
 
STARTED EMCA at May 4, 2012 12:43:37 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.
 
Enter the following information:
Database unique name: racdb
Service name: racdb.idevelopment.info
Node name: racnode3
Database SID: racdb3
 
Do you wish to continue? [yes(Y)/no(N)]: y
May 4, 2012 12:43:59 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/racdb/racdb3/emca_2012_05_04_12_43_36.log.
May 4, 2012 12:44:00 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
May 4, 2012 12:44:07 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------
 
racdb             racnode1             racnode1.idevelopment.info
racdb             racnode2             racnode1.idevelopment.info
 
 
Enterprise Manager configuration completed successfully
FINISHED EMCA at May 4, 2012 12:44:07 PM




     

Figure 2: Oracle Enterprise Manager - (Database Console)

Backup OCR

Backup the OCR using ocrconfig -manualbackup from a node that is to remain a member of the Oracle RAC.


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup

Note that voting disks are automatically backed up in OCR after the changes we will be making to the cluster.
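
To confirm that the manual backup completed, the available OCR backups can be listed (an optional check; the backup name and location will vary by environment):

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual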

Remove Instance from any Services - (if necessary)

The instance racdb3 is hosted on node racnode3 which is part of the existing Oracle RAC and the node being removed in this guide. The racdb3 instance is in the preferred list of the service racdbsvc.idevelopment.info.


[oracle@racnode1 ~]$ srvctl config service -d racdb -s racdbsvc.idevelopment.info -v
Service name: racdbsvc.idevelopment.info
Service is enabled
Server pool: racdb_racdbsvc.idevelopment.info
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: racdb3,racdb1,racdb2
Available instances:

Update the racdbsvc.idevelopment.info service to remove the racdb3 instance.


[oracle@racnode1 ~]$ srvctl modify service -d racdb -s racdbsvc.idevelopment.info -n -i racdb1,racdb2
 
[oracle@racnode1 ~]$ srvctl config service -d racdb -s racdbsvc.idevelopment.info -v
Service name: racdbsvc.idevelopment.info
Service is enabled
Server pool: racdb_racdbsvc.idevelopment.info
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Preferred instances: racdb1,racdb2
Available instances:
 
[oracle@racnode1 ~]$ srvctl status service -d racdb -s racdbsvc.idevelopment.info -v
Service racdbsvc.idevelopment.info is running on instance(s) racdb1,racdb2

Remove Instance from the Cluster Database

As the Oracle software owner, run the Oracle Database Configuration Assistant (DBCA) in silent mode from a node that will remain in the cluster to remove the racdb3 instance from the existing cluster database. The instance that's being removed by DBCA must be up and running.


[oracle@racnode1 ~]$ dbca -silent -deleteInstance -nodeList racnode3 \
                          -gdbName racdb.idevelopment.info -instanceName racdb3 \
                          -sysDBAUserName sys -sysDBAPassword ******
Deleting instance
20% complete
21% complete
22% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/racdb.log" for further details.

Review the DBCA trace file which is located in /u01/app/oracle/cfgtoollogs/dbca/trace.log_OraDb11g_home1_<DATE>.
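
If you only want to see the clean-up statements that DBCA issued, an optional shortcut is to filter the trace file, for example:

[oracle@racnode1 ~]$ grep -iE 'alter database|drop tablespace|alter system reset' \
                          /u01/app/oracle/cfgtoollogs/dbca/trace.log_OraDb11g_home1_<DATE>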

Verify that the racdb3 database instance was removed from the cluster database.


[oracle@racnode1 ~]$ srvctl config database -d racdb -v
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Mount point paths:
Services: racdbsvc.idevelopment.info
Type: RAC
Database is administrator managed
 
[oracle@racnode1 ~]$ sqlplus / as sysdba
 
SQL> select inst_id, instance_name, status, 
   to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME"
    from gv$instance order by inst_id;
 
   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ --------------------
         1 racdb1           OPEN         01-MAY-2012 11:30:01
         2 racdb2           OPEN         01-MAY-2012 11:30:00

As seen from the output above, the racdb3 database instance was removed and only racdb1 and racdb2 remain.

When DBCA is used to delete the instance, it also takes care of removing any Oracle dependencies like the public redo log thread, undo tablespace, and all instance related parameter entries for the deleted instance. This can be examined in the trace file produced by DBCA /u01/app/oracle/cfgtoollogs/dbca/trace.log_OraDb11g_home1_<DATE>:


...
...  thread SQL = SELECT THREAD# FROM V$THREAD WHERE UPPER(INSTANCE) = UPPER('racdb3')
...  threadNum.length=1
...  threadNum=3
...  redoLog SQL =SELECT GROUP# FROM V$LOG WHERE THREAD# = 3
...  redoLogGrNames length=2
...  Group numbers=(5,6)
...  logFileName SQL=SELECT MEMBER FROM V$LOGFILE WHERE GROUP# IN (5,6)
...  logFiles length=4
...  SQL= ALTER DATABASE DISABLE THREAD 3
...  archive mode = false
...  SQL= ALTER DATABASE DROP LOGFILE GROUP 5
...  SQL= ALTER DATABASE DROP LOGFILE GROUP 6
...  SQL=DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES
...  sidParams.length=2
...  SQL=ALTER SYSTEM RESET undo_tablespace SCOPE=SPFILE SID = 'racdb3'
...  SQL=ALTER SYSTEM RESET instance_number SCOPE=SPFILE SID = 'racdb3'
...

Check whether the redo log thread and UNDO tablespace for the deleted instance have been removed. If not, remove them manually as shown below.


SQL> select thread# from v$thread where upper(instance) = upper('racdb3');
 
   THREAD#
----------
         3
 
SQL> select group# from v$log where thread# = 3;
 
    GROUP#
----------
         5
         6
 
SQL> select member from v$logfile where group# in (5,6);
 
MEMBER
--------------------------------------------------
+RACDB_DATA/racdb/onlinelog/group_5.270.781657813
+FRA/racdb/onlinelog/group_5.281.781657813
+RACDB_DATA/racdb/onlinelog/group_6.271.781657815
+FRA/racdb/onlinelog/group_6.289.781657815
 
SQL> alter database disable thread 3;
 
Database altered.
 
SQL> alter database drop logfile group 5;
 
Database altered.
 
SQL> alter database drop logfile group 6;
 
Database altered.
 
SQL> drop tablespace undotbs3 including contents and datafiles;
 
Tablespace dropped.
 
SQL> alter system reset undo_tablespace scope=spfile sid = 'racdb3';
 
System altered.
 
SQL> alter system reset instance_number scope=spfile sid = 'racdb3';
 
System altered.

Remove Oracle Database Software

In this step, the Oracle Database software will be removed from the node that will be deleted. In addition, the inventories of the remaining nodes will be updated to reflect the removal of the node's Oracle Database software home.

Log in as the Oracle Database software owner when executing the tasks in this section.

Verify Listener Not Running in Oracle Home

If any listeners are running from the Oracle home being removed, they will need to be disabled and stopped.
Check if any listeners are running from the Oracle home to be removed.


[oracle@racnode3 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) racnode3,racnode2,racnode1
End points: TCP:1521

In Oracle 11g Release 2 (11.2), the default listener runs from the Grid home. Since the listener is running from the Grid home (shown above), disabling and stopping it can be skipped in release 11.2.

If any listeners were explicitly created to run from the Oracle home being removed, they would need to be disabled and stopped.


$ srvctl disable listener -l <listener_name> -n <name_of_node_to_delete>
$ srvctl stop listener -l <listener_name> -n <name_of_node_to_delete>

Update Oracle Inventory - (Node Being Removed)

As the Oracle software owner, execute runInstaller from Oracle_home/oui/bin on the node being removed to update the inventory. Set "CLUSTER_NODES={name_of_node_to_delete}".


[oracle@racnode3 ~]$ cd $ORACLE_HOME/oui/bin
 
[oracle@racnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode3}" -local
 
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 9983 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Make sure to specify the -local flag so as not to update the inventory on all nodes in the cluster.

After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will show only the node to be deleted under the Oracle home name.


...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode3"/>
   </NODE_LIST>
</HOME>
...

The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the Oracle Database software.


...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode1"/>
      <NODE NAME="racnode2"/>
      <NODE NAME="racnode3"/>
   </NODE_LIST>
</HOME>
...

De-install Oracle Home

Before attempting to de-install the Oracle Database software, review the /etc/oratab file on the node to be deleted and remove any entries that contain a database instance running out of the Oracle home being de-installed. Do not remove any +ASM entries.


...
#
+ASM3:/u01/app/11.2.0/grid:N                        # line added by Agent
racdb3:/u01/app/oracle/product/11.2.0/dbhome_1:N    # line added for DBA scripts
...
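
For example, the racdb3 entry could be removed with a one-line edit as root (shown here with sed as one possible approach; editing the file manually with a text editor works just as well):

[root@racnode3 ~]# sed -i '/^racdb3:/d' /etc/oratab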

If a rogue entry exists in the /etc/oratab file that contains the Oracle home being deleted, then the deinstall described in the next step will fail:


  ERROR: The option -local will not modify any database configuration for this Oracle home.
Following databases have instances configured on local node : 'racdb3'. Remove these
database instances using dbca before de-installing the local Oracle home.

When using a non-shared Oracle home (as is the case in this example guide), run deinstall as the Oracle Database software owner from the node to be removed in order to delete the Oracle Database software.


[oracle@racnode3 ~]$ cd $ORACLE_HOME/deinstall
 
[oracle@racnode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
 
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
 
 
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
 
 
... <SNIP> ...
 
 
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: 
'/u01/app/oraInventory/logs/deinstall_deconfig2012-05-04_01-39-32-PM.out'
Any error messages from this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2012-05-04_01-39-32-PM.err'
 
 
... <SNIP> ...
 
 
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
 
 
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Review the complete deinstall output.
After the de-install completes, the Oracle home node will be successfully removed from the inventory.xml file on the local node.
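
One optional way to spot-check this is to search for the Oracle home entry in the local inventory, for example:

[oracle@racnode3 ~]$ grep -A 3 'OraDb11g_home1' /u01/app/oraInventory/ContentsXML/inventory.xml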


Make sure to specify the -local flag so as not to remove more than just the local node's software. If -local is not specified, then the deinstall would apply to the entire cluster.
If this were a shared home then instead of de-installing the Oracle Database software, you would simply detach the Oracle home from the inventory.
$ ./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

Update Oracle Inventory - (All Remaining Nodes)

From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Oracle software owner to update the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.


[oracle@racnode1 ~]$ cd $ORACLE_HOME/oui/bin
 
[oracle@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racnode1,racnode2}"
 
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 9521 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Review the inventory.xml file on each remaining node in the cluster to verify the Oracle home name does not include the node being removed.


...
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode1"/>
      <NODE NAME="racnode2"/>
   </NODE_LIST>
</HOME>
...

Remove Node from Clusterware

This section describes the steps to remove a node from Oracle Clusterware.

Verify Grid_home

Most of the commands in this section will be run as root. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.


[root@racnode1 ~]# GRID_HOME=/u01/app/11.2.0/grid
[root@racnode1 ~]# export GRID_HOME

Unpin Node

Run the following command as root to determine whether the node you want to delete is active and whether it is pinned.


[root@racnode1 ~]# $GRID_HOME/bin/olsnodes -s -t
racnode1        Active  Unpinned
racnode2        Active  Unpinned
racnode3        Active  Unpinned

If the node being removed is already unpinned then you do not need to run the crsctl unpin css command below and can proceed to the next step.

If the node being removed is pinned with a fixed node number, then run the crsctl unpin css command as root from a node that is to remain a member of the Oracle RAC in order to unpin the node and expire the CSS lease on the node you are deleting.


[root@racnode1 ~]# $GRID_HOME/bin/crsctl unpin css -n racnode3
CRS-4667: Node racnode3 successfully unpinned.

If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails.


An Oracle RAC node will only be pinned if it is using CTSS or if it hosts a database release earlier than 11.2.

Disable Oracle Clusterware

Before executing the rootcrs.pl script described in this section, you must ensure EMAGENT is not running on the node being deleted.


[oracle@racnode3 ~]$ emctl stop dbconsole

If you have been following along in this guide, the EMAGENT should not be running since the instance on the node being deleted was removed from OEM Database Control Monitoring.
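
As an optional sanity check, you can verify that no Enterprise Manager agent processes remain on the node being deleted before running rootcrs.pl (the [e] in the pattern simply keeps grep from matching its own process):

[oracle@racnode3 ~]$ ps -ef | grep [e]magent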

Next, disable the Oracle Clusterware applications and daemons running on the node to be deleted from the cluster. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted.


[root@racnode3 ~]# GRID_HOME=/u01/app/11.2.0/grid
[root@racnode3 ~]# export GRID_HOME
 
[root@racnode3 ~]# cd $GRID_HOME/crs/install
 
[root@racnode3 ~]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.1.0/255.255.255.0/eth0, type static
VIP exists: /racnode1-vip/192.168.1.251/192.168.1.0/255.255.255.0/eth0, hosting node racnode1
VIP exists: /racnode2-vip/192.168.1.252/192.168.1.0/255.255.255.0/eth0, hosting node racnode2
VIP exists: /racnode3-vip/192.168.1.253/192.168.1.0/255.255.255.0/eth0, hosting node racnode3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode3'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'racnode3'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode3'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode3'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode3'
CRS-2673: Attempting to stop 'ora.DOCS.dg' on 'racnode3'
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.DOCS.dg' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'racnode2'
CRS-2676: Start of 'ora.oc4j' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode3'
CRS-2677: Stop of 'ora.asm' on 'racnode3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode3'
CRS-2673: Attempting to stop 'ora.crf' on 'racnode3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode3'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode3'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode3'
CRS-2677: Stop of 'ora.mdnsd' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racnode3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode3'
CRS-2677: Stop of 'ora.cssd' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racnode3'
CRS-2677: Stop of 'ora.gipcd' on 'racnode3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode3'
CRS-2677: Stop of 'ora.gpnpd' on 'racnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

If you do not use the -force option in the preceding command or the node you are deleting is not accessible for you to execute the preceding command, then the VIP resource remains running on the node. You must manually stop and remove the VIP resource using the following commands as root from any node that you are not deleting:
# srvctl stop vip -i vip_name -f
# srvctl remove vip -i vip_name -f
Where vip_name is the VIP for the node to be deleted. If you specify multiple VIP names, then separate the names with commas and surround the list in double quotation marks ("").
For example,
[root@racnode1 ~]# srvctl stop vip -i racnode3-vip -f
[root@racnode1 ~]# srvctl remove vip -i racnode3-vip -f
If you are deleting all nodes from a cluster, then append the -lastnode option to the preceding command to clear OCR and the voting disks, as follows:
# ./rootcrs.pl -deconfig -deinstall -force -lastnode
Only use the -lastnode option if you are deleting all cluster nodes because that option causes the rootcrs.pl script to clear OCR and the voting disks of data.

Delete Node from Clusterware Configuration

From a node that is to remain a member of the Oracle RAC, run the following command from the Grid_home/bin directory as root to update the Clusterware configuration to delete the node from the cluster.


[root@racnode1 ~]# $GRID_HOME/bin/crsctl delete node -n racnode3
CRS-4661: Node racnode3 successfully deleted.
 
[root@racnode1 ~]# $GRID_HOME/bin/olsnodes -t -s
racnode1        Active  Unpinned
racnode2        Active  Unpinned

where racnode3 is the node to be deleted.

Update Oracle Inventory - (Node Being Removed)

As the Oracle Grid Infrastructure owner, execute runInstaller from Grid_home/oui/bin on the node being removed to update the inventory. Set "CLUSTER_NODES={name_of_node_to_delete}". Note that this step is missing in the official Oracle Documentation (Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E10717-11 April 2010).


[grid@racnode3 ~]$ cd $GRID_HOME/oui/bin
 
[grid@racnode3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={racnode3}" CRS=TRUE -local
 
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 9983 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Make sure to specify the -local flag so as not to update the inventory on all nodes in the cluster.

After executing runInstaller on the node to be deleted, the inventory.xml file on that node (/u01/app/oraInventory/ContentsXML/inventory.xml) will show only the node to be deleted under the Grid home name.


...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="racnode3"/>
   </NODE_LIST>
</HOME>
...

The inventory.xml file on the other nodes will still show all of the nodes in the cluster. The inventory on the remaining nodes will be updated after de-installing the Oracle Grid Infrastructure software.


...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="racnode1"/>
      <NODE NAME="racnode2"/>
      <NODE NAME="racnode3"/>
   </NODE_LIST>
</HOME>
...

De-install Oracle Grid Infrastructure Software

When using a non-shared Grid home (as is the case in this example guide), run deinstall as the Grid Infrastructure software owner from the node to be removed in order to delete the Oracle Grid Infrastructure software.


[grid@racnode3 ~]$ cd $GRID_HOME/deinstall
 
[grid@racnode3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2012-05-07_01-21-53PM/logs/
 
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
 
 
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
 
 
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: racnode3
Checking for sufficient temp space availability on node(s) : 'racnode3'
 
## [END] Install check configuration ##
 
Traces log file: /tmp/deinstall2012-05-07_01-21-53PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "racnode3"[racnode3-vip]
 >
[ENTER]
 
The following information can be collected by running "/sbin/ifconfig -a" on node "racnode3"
Enter the IP netmask of Virtual IP "192.168.1.253" on node "racnode3"[255.255.255.0]
 >
[ENTER]
 
Enter the network interface name on which the virtual IP address "192.168.1.253" is active
 >
[ENTER]
 
Enter an address or the name of the virtual IP[]
 >
[ENTER]
 
 
Network Configuration check config START
 
Network de-configuration trace file location: 
/tmp/deinstall2012-05-07_01-21-53PM/logs/netdc_check2012-05-07_01-22-16-PM.log
 
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured 
[LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER
 
At least one listener from the discovered listener list 
[LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener 
list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available 
after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration 
Assistant instead. Do you want to continue? (y|n) [n]: y
 
Network Configuration check config END
 
Asm Check Configuration START
 
ASM de-configuration trace file location: 
/tmp/deinstall2012-05-07_01-21-53PM/logs/asmcadc_check2012-05-07_01-25-11-PM.log
 
 
######################### CHECK OPERATION END #########################
 
 
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:racnode3
Since -local option has been specified, the Oracle home will be deinstalled only on the 
local node, 'racnode3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: 
'/tmp/deinstall2012-05-07_01-21-53PM/logs/deinstall_deconfig2012-05-07_01-21-56-PM.out'
Any error messages from this session will be written to: 
'/tmp/deinstall2012-05-07_01-21-53PM/logs/deinstall_deconfig2012-05-07_01-21-56-PM.err'
 
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: 
/tmp/deinstall2012-05-07_01-21-53PM/logs/asmcadc_clean2012-05-07_01-25-16-PM.log
ASM Clean Configuration END
 
Network Configuration clean config START
 
Network de-configuration trace file location: 
/tmp/deinstall2012-05-07_01-21-53PM/logs/netdc_clean2012-05-07_01-25-16-PM.log
 
De-configuring RAC listener(s): LISTENER
 
De-configuring listener: LISTENER
    Stopping listener on node "racnode3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
 
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
 
De-configuring backup files...
Backup files de-configured successfully.
 
The network configuration has been cleaned up successfully.
 
Network Configuration clean config END
 
 
---------------------------------------->
 
The deconfig command below can be executed in parallel on all the remote nodes. 
Execute the command on  the local node after the execution completes on all the 
remote nodes.
 
Run the following command as the root user or the administrator on node "racnode3".
 
/tmp/deinstall2012-05-07_01-21-53PM/perl/bin/perl \
-I/tmp/deinstall2012-05-07_01-21-53PM/perl/lib \
-I/tmp/deinstall2012-05-07_01-21-53PM/crs/install \
/tmp/deinstall2012-05-07_01-21-53PM/crs/install/rootcrs.pl \
-force \
-deconfig \
-paramfile "/tmp/deinstall2012-05-07_01-21-53PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
 
Press Enter after you finish running the above commands
 
<----------------------------------------




Run the above command as root on the specified node(s) from a different shell:
[root@racnode3 ~]# /tmp/deinstall2012-05-07_01-21-53PM/perl/bin/perl \
-I/tmp/deinstall2012-05-07_01-21-53PM/perl/lib \
-I/tmp/deinstall2012-05-07_01-21-53PM/crs/install \
/tmp/deinstall2012-05-07_01-21-53PM/crs/install/rootcrs.pl \
-force \
-deconfig \
-paramfile "/tmp/deinstall2012-05-07_01-21-53PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2012-05-07_01-21-53PM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node




Once completed press [ENTER] on the first shell session:
Remove the directory: /tmp/deinstall2012-05-07_01-21-53PM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
 
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
 
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
 
Delete directory '/u01/app/oraInventory' on the local node : Done
 
Delete directory '/u01/app/grid' on the local node : Done
 
Oracle Universal Installer cleanup was successful.
 
Oracle Universal Installer clean END
 
 
## [START] Oracle install clean ##
 
Clean install operation removing temporary directory '/tmp/deinstall2012-05-07_01-21-53PM' on node 'racnode3'
 
## [END] Oracle install clean ##
 
 
######################### CLEAN OPERATION END #########################
 
 
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "racnode3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
 
 
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'racnode3' at the end of the session.
 
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'racnode3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
 
 
############# ORACLE DEINSTALL & DECONFIG TOOL END #############




Remove specified files as root:
[root@racnode3 ~]# rm -rf /etc/oraInst.loc
[root@racnode3 ~]# rm -rf /opt/ORCLfmap
[root@racnode3 ~]# rm -rf /u01/app/11.2.0
[root@racnode3 ~]# rm -rf /u01/app/oracle

Review the complete deinstall output.
After the de-install completes, verify that the /etc/inittab file does not start Oracle Clusterware.


[root@racnode3 ~]# diff /etc/inittab /etc/inittab.no_crs
[root@racnode3 ~]#

Make sure to specify the -local flag so as not to remove more than just the local node's software. If -local is not specified, then the deinstall would apply to the entire cluster.
If this were a shared home then instead of de-installing the Grid Infrastructure software, you would simply detach the Grid home from the inventory.
$ ./runInstaller -detachHome ORACLE_HOME=Grid_home_location

Update Oracle Inventory - (All Remaining Nodes)

From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Grid Infrastructure software owner to update the inventories with a list of the nodes that are to remain in the cluster. Use the CLUSTER_NODES option to specify the nodes that will remain in the cluster.


[grid@racnode1 ~]$ cd $GRID_HOME/oui/bin
 
[grid@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={racnode1,racnode2}" CRS=TRUE
 
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 9559 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Review the inventory.xml file on each remaining node in the cluster to verify the Grid home name does not include the node being removed.


...
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="racnode1"/>
      <NODE NAME="racnode2"/>
   </NODE_LIST>
</HOME>
...

Verify New Cluster Configuration

Run the following CVU command to verify that the specified node has been successfully deleted from the cluster.


[grid@racnode1 ~]$ cluvfy stage -post nodedel -n racnode3 -verbose
 
Performing post-checks for node removal
 
Checking CRS integrity...
 
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "racnode2"
The Oracle Clusterware is healthy on node "racnode1"
 
CRS integrity check passed
Result:
Node removal check passed
 
Post-check for node removal was successful.

At this point, racnode3 is no longer a member of the cluster. However, if an OCR dump is taken from one of the remaining nodes, information about the deleted node is still contained in the OCRDUMPFILE.


[grid@racnode1 ~]$ ocrdump

[SYSTEM.crs.e2eport.racnode3]
ORATEXT : (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.3.153)(PORT=50989))
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : 
           PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}
 
...
 
[SYSTEM.OCR.BACKUP.2.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : 
           PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
 
...
 
[SYSTEM.OCR.BACKUP.DAY.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION :
           PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
 
...
 
[SYSTEM.OCR.BACKUP.DAY_.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : root, GROUP_NAME : root}
 
...
 
[SYSTEM.OCR.BACKUP.WEEK_.NODENAME]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : root, GROUP_NAME : root}
 
...
 
[DATABASE.ASM.racnode3]
ORATEXT : racnode3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
 
[DATABASE.ASM.racnode3.+asm3]
ORATEXT : +ASM3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
 
[DATABASE.ASM.racnode3.+asm3.ORACLE_HOME]
ORATEXT : /u01/app/11.2.0/grid
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
 
[DATABASE.ASM.racnode3.+asm3.ENABLED]
ORATEXT : true
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
 
[DATABASE.ASM.racnode3.+asm3.VERSION]
ORATEXT : 11.2.0.3.0
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : 
           PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

This does not mean that the node wasn't removed properly. It is still possible to add the node again with the same hostname, IP address, VIP, etc. anytime in the future.

Remove Remaining Components

This section provides instructions for deleting any remaining components from the removed node (racnode3).

Remove ASMLib

Remove the ASMLib kernel driver, supporting software, and associated directories from racnode3.


[root@racnode3 ~]# /usr/sbin/oracleasm exit
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm
 
[root@racnode3 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-274.el5-2.0.5-1.el5
oracleasm-support-2.1.7-1.el5
 
[root@racnode3 ~]# rpm -ev oracleasmlib-2.0.4-1.el5 oracleasm-2.6.18-274.el5-2.0.5-1.el5 oracleasm-support-2.1.7-1.el5
 
warning: /etc/sysconfig/oracleasm saved as /etc/sysconfig/oracleasm.rpmsave
 
[root@racnode3 ~]# rm -f /etc/sysconfig/oracleasm.rpmsave
[root@racnode3 ~]# rm -f /etc/sysconfig/oracleasm-_dev_oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc2.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc0.d/K20oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc5.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc4.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc1.d/K20oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc3.d/S29oracleasm
[root@racnode3 ~]# rm -f /etc/rc.d/rc6.d/K20oracleasm

Disable iSCSI Initiator Service

Modify the iSCSI initiator service on racnode3 so it will not automatically start and therefore will not attempt to discover iSCSI volumes from the Openfiler server.

Manually Logout of iSCSI Targets

[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --logout
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --logout
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --logout
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --logout

Verify we are logged out of the iSCSI target by looking at the /dev/disk/by-path directory. If no other iSCSI targets exist on the client node, then after logging out from the iSCSI target, the mappings for all targets should be gone and the following command should not find any files or directories:


[root@racnode3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
 
ls: *openfiler*: No such file or directory

Delete Target and Disable Automatic Login

Update the record entry on the client node to disable automatic logins to the iSCSI targets.


[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.acfs1 -p 192.168.2.195 --op update -n node.startup -v manual
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v manual
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v manual
[root@racnode3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v manual

Delete the iSCSI target.


[root@racnode3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-01.com.openfiler:racdb.acfs1
[root@racnode3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-01.com.openfiler:racdb.crs1
[root@racnode3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-01.com.openfiler:racdb.data1
[root@racnode3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-01.com.openfiler:racdb.fra1

Remove udev Rules Files

If the iSCSI targets being removed are the only remaining targets and you don't plan on adding any further iSCSI targets in the future, then it is safe to remove the iSCSI rules file and its call-out script.
[root@racnode3 ~]# rm -f /etc/udev/rules.d/55-openiscsi.rules
[root@racnode3 ~]# rm -f /etc/udev/scripts/iscsidev.sh

Disable the iSCSI (Initiator) Service

If the iSCSI targets being removed are the only remaining targets and you don't plan on adding any further iSCSI targets in the future, then it is safe to disable the iSCSI Initiator Service.


[root@racnode3 ~]# service iscsid stop
[root@racnode3 ~]# chkconfig iscsid off
[root@racnode3 ~]# chkconfig iscsi off

Remove Access Permissions on Network Storage Server

Network access to racnode3 will need to be revoked from Openfiler so that it cannot access the iSCSI volumes through the storage (private) network.
From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:
https://openfiler1.idevelopment.info:446/

Username: openfiler
Password: password

From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Under the "Target Configuration" sub-tab, use the pull-down menu to select one of the current RAC iSCSI targets in the section "Select iSCSI Target" and then click the [Change] button.



Figure 3: Select iSCSI Target


Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the currently selected iSCSI target, change the "Access" for the Oracle RAC node being removed from 'Allow' to 'Deny' and click the [Update] button. This needs to be performed for all of the RAC iSCSI targets.




Figure 4: Update Network ACL for the Deleted Oracle RAC Node


Navigate to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. Remove racnode3 from the network access configuration.


Figure 5: Revoke Openfiler Network Access for the Deleted Oracle RAC Node

Remove Oracle Software Owner Accounts

Finally, remove the Grid and Oracle user accounts and all associated UNIX groups from racnode3.


[root@racnode3 ~]# userdel -r grid
[root@racnode3 ~]# userdel -r oracle
[root@racnode3 ~]# groupdel oinstall
[root@racnode3 ~]# groupdel asmadmin
[root@racnode3 ~]# groupdel asmdba
[root@racnode3 ~]# groupdel asmoper
[root@racnode3 ~]# groupdel dba
[root@racnode3 ~]# groupdel oper

This concludes the removal of a node from the Oracle RAC.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.