Friday, 13 March 2015

OMOTION

RAC 11g R2 11.2.0.2 -- OMOTION -- Let's try the online relocation ourselves and see what happens to the existing instance and on the target instance.


After installing the Oracle 11gR2 Grid Infrastructure and doing a "software only" installation of 11gR2 RAC, I installed patch 9004119.
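To confirm the patch is actually registered in the inventory of the RAC home, OPatch can be queried. A quick check along these lines (assuming the standard OPatch location under the Oracle home):

[oracle@host01 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 9004119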

If you want to read about this with the full explanation, see the links below.

The steps below were performed after patching on 11.2.0.1.0:
[oracle@host01 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +FRA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances:
Disk Groups: FRA
Mount point paths:
Services: srvc1
Type: RAC One Node
Online relocation timeout: 30
Instance name prefix: orcl
Candidate servers: host01,host02,host03
Database is administrator managed
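Since the service srvc1 is what client connections follow during an online relocation, its configuration and status can be checked as well, for example:

[oracle@host01 ~]$ srvctl config service -d orcl -s srvc1
[oracle@host01 ~]$ srvctl status service -d orcl -s srvc1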

Before we start the migration, let's check the current status of the database and its instance.
[oracle@host01 ~]$ srvctl status database -d orcl
 Instance orcl_1 is running on node host01
 Online relocation: INACTIVE

So what we have here is a RAC One Node database with the SID orcl, running on node HOST01 with the instance orcl_1. So now, we shall try to relocate the instance from this node to the target node HOST02. The output also shows that no online relocation is active at the moment.
It's important to mention that in version 11.2.0.1 this task was done by a utility called Omotion, but from 11.2.0.2 onwards the utility is no longer required. The release used for this demo was 11.2.0.3, so obviously the utility wasn't needed.
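In 11.2.0.2 and later the relocation is driven entirely by SRVCTL. The general form of the command looks roughly like this (the -w timeout in minutes and the -v verbose flag are optional):

srvctl relocate database -d <db_unique_name> -n <target_node> [-w <timeout_minutes>] [-v]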
The relocation is done using the command SRVCTL RELOCATE DATABASE, to which we pass the name of the target node and the verbose option for detailed output. Below is the output of this command:
[oracle@host01 ~]$ srvctl relocate database -d orcl -n host02 -w 30 -v
Configuration updated to two instances
Instance orcl_2 started
Services relocated
Waiting for up to 30 minutes for instance orcl_1 to stop ...
Instance orcl_1 stopped
Configuration updated to one instance


And from another session, we can see that the migration is going on.
[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_2 is running on node host02
Online relocation: ACTIVE
Source instance: orcl_1 on host03
Destination instance: orcl_2 on host02

We can see that the second instance has come up and the relocation status is shown as ACTIVE, which means the relocation is still in progress. We may need to run the command a couple of times, as it can take a while for the source instance to stop (a small polling loop is sketched after the outputs below).
[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_2 is running on node host02
Online relocation: ACTIVE
Source instance: orcl_1 on host03
Destination instance: orcl_2 on host02

[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_2 is running on node host02
Online relocation: ACTIVE
Source instance: orcl_1 on host03
Destination instance: orcl_2 on host02
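Rather than re-running the status command by hand, a small shell loop can poll until the relocation flag returns to INACTIVE. This is just a minimal sketch; the 30-second interval is an arbitrary choice:

[oracle@host01 ~]$ while srvctl status database -d orcl | grep -q "Online relocation: ACTIVE"; do srvctl status database -d orcl; sleep 30; done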

Finally, when the relocation is over, this is the output:
[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_2 is running on node host02
Online relocation: INACTIVE

As we can see, once the relocation is complete, only the instance "orcl_2" is running and the ONLINE RELOCATION status returns to INACTIVE.
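If needed, the same command can later move the database back to the original node, for example:

[oracle@host01 ~]$ srvctl relocate database -d orcl -n host01 -w 30 -v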

