Thursday, 21 January 2016

SMART_FLASH_CACHE

DB Smart Flash Cache in Oracle 11gR2 and later (12c) is supported only on Oracle Enterprise Linux (OEL) and Solaris.

If you don't have the budget to buy Exadata, you can still buy a large number of flash disks and put part of your database on them. But what should be stored on flash disks (very fast) and what on magnetic disks (very slow)?

You don't have to decide – let the database do it.
Introduction

DB Smart Flash Cache is a new extension of the buffer cache area. This extra area should be defined on solid state disks (SSD) and has the following features:
          improves performance at moderate cost (flash is cheaper than DRAM)
          lower latency than magnetic disks
          higher throughput than magnetic disks
          easy to set up
          easy to control
          can be used with RAC – cache fusion keeps it consistent
          direct I/O bypasses the buffer cache, so it also bypasses the DB Smart Flash Cache
          caches only clean blocks from the buffer cache
          the flash cache is not auto-tuned
          only blocks from the standard (DEFAULT) buffer pool are cached in the DB Smart Flash Cache
Oracle recommends:
          flash disks should have comparable read and write IOPS
          this new layer should be 2 to 10 times bigger than the buffer cache in DRAM
          it is intended mainly for OLTP systems
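The 2x-10x sizing recommendation can be checked against your current buffer cache with a quick query; this is only a sketch using the standard v$sga_dynamic_components view:

```sql
-- A sketch: derive the recommended 2x-10x flash cache size range
-- from the current DEFAULT buffer cache size
SELECT ROUND(current_size/1024/1024)    AS buffer_cache_mb,
       ROUND(2*current_size/1024/1024)  AS min_flash_mb,
       ROUND(10*current_size/1024/1024) AS max_flash_mb
FROM   v$sga_dynamic_components
WHERE  component = 'DEFAULT buffer cache';
```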

Architecture


If an Oracle server process needs to read a block from the database, it must first read it from magnetic disk (a physical read). The block is then stored in the buffer cache and added to the standard "LRU chain" list.
When the in-memory buffer area fills up, Oracle must decide which blocks to remove from the cache. If DB Smart Flash Cache is enabled, "clean" blocks are written to the flash cache by the DBWR process, so next time they can be read into memory from the flash cache, improving performance.
NOTE: "dirty" blocks are never stored in the flash cache.
The list of blocks cached in the DB Smart Flash Cache is kept in the buffer cache area on two dedicated flash "LRU lists", depending on the object's FLASH_CACHE attribute:

          DEFAULT – the standard least recently used algorithm decides how long blocks stay in the flash cache. This is the default value assigned to every object in the database.
          KEEP – such blocks are not removed from the flash cache as long as the flash cache is large enough.

alter|create table|index objectname
storage
(
  buffer_pool { keep | recycle | default }
  flash_cache { keep | none    | default }
);

The NONE value for FLASH_CACHE disables flash caching for a given object.
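For example (the table names below are hypothetical):

```sql
-- Try to keep blocks of a hot lookup table in the flash cache
ALTER TABLE customers STORAGE (FLASH_CACHE KEEP);

-- Exclude a write-heavy table from flash caching
ALTER TABLE audit_log STORAGE (FLASH_CACHE NONE);
```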

Statistics
All I/O operations from the DB Smart Flash Cache are counted as physical I/O; however, Oracle also collects this information in new columns.
V$SQL - OPTIMIZED_PHY_READ_REQUESTS
V$SQLAREA - OPTIMIZED_PHY_READ_REQUESTS
V$FILESTAT - OPTIMIZED_PHYBLKRD
select name from v$statname where name like 'physical%optimized%';

NAME                                                           
----------------------------------------------------------------
physical read requests optimized                                 
physical read total bytes optimized
You can also see these statistics in V$SESSTAT and V$SYSSTAT.
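A simple way to gauge how much the flash cache is helping is to compare the optimized read statistics with the totals, for example:

```sql
-- Compare optimized (flash cache) reads with total physical read requests
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('physical read total IO requests',
                'physical read requests optimized');
```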
Setup
Two parameters must be set at the database level to turn on DB Smart Flash Cache:
 DB_FLASH_CACHE_FILE – defines the location (an OS path or an ASM disk group) and the file name used to store this data
 DB_FLASH_CACHE_SIZE – defines the size of the flash cache

DB_FLASH_CACHE_FILE='/os path/flash_cache_file.dbf'
DB_FLASH_CACHE_FILE='+FLASH_DISK_GROUP/flash_cache_file.dbf'
DB_FLASH_CACHE_SIZE=200m

After setting both parameters you need to restart the database.
DB_FLASH_CACHE_FILE
          can't be shared between multiple databases or instances
DB_FLASH_CACHE_SIZE
          can't be dynamically resized
          can be set to 0 to disable DB Smart Flash Cache
          can be set back to the original size to re-enable DB Smart Flash Cache
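Disabling and re-enabling therefore looks like this (a sketch; the size used when re-enabling must match the original value):

```sql
-- Disable DB Smart Flash Cache
ALTER SYSTEM SET db_flash_cache_size = 0;

-- Re-enable it at the original size
ALTER SYSTEM SET db_flash_cache_size = 200M;
```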




Performance improvements
Oracle conducted an interesting test on a 70GB OLTP database with an 8GB SGA. The picture below shows transaction throughput versus DB Smart Flash Cache size.

The following picture shows the improvement in transaction response time versus DB Smart Flash Cache size.

Example
I simulated an SSD by creating a ramdisk – a disk backed by memory – using the following steps:
1. create a directory to mount the ramdisk and change its owner to oracle, group dba
[root@oel5 /]# mkdir /ramdisk
[root@oel5 /]# chown oracle:dba -R /ramdisk
2. mount ramdisk and check it
[root@oel5 /]# mount -t tmpfs none /ramdisk -o size=256m

[root@oel5 /]# mount | grep ramdisk

none on /ramdisk type tmpfs (rw,size=256m)

3. set the parameters for the database and restart it as user oracle
SQL> alter system set db_flash_cache_file='/ramdisk/ram.dbf' scope=spfile;
System altered.

SQL> alter system set db_flash_cache_size=200M scope=spfile;
System altered. 
SQL> startup force
ORACLE instance started.
Total System Global Area  835104768 bytes
Fixed Size                  2232960 bytes
Variable Size             507514240 bytes
Database Buffers          322961408 bytes
Redo Buffers                2396160 bytes
Database mounted.
Database opened.
SQL> show parameter flash_cache

NAME                    TYPE        VALUE
----------------------- ----------- ------------------------------
db_flash_cache_file     string      /ramdisk/ram.dbf
db_flash_cache_size     big integer 200M
4. Check that the new file exists in /ramdisk
[root@oel5 ramdisk]# ll
total 8
-rw-r----- 1 oracle asmadmin 209715200 Feb 24 22:54 ram.dbf
5. Let's create a table with the flash_cache keep option in the storage clause so Oracle will try to keep its blocks in the DB Smart Flash Cache as long as possible.
create table test_tbl1(id number,id1 number,id2 number)storage(flash_cache keep);
begin
  for i in 1..1000000
  loop
    insert into test_tbl1 values(i, i, i);
  end loop;
  commit;
end;
/


6. Eventually, after some time, you can see data in the flash cache in the v$bh view.
select status, count(*) from v$bh
group by status;
STATUS       COUNT(*)
---------- ----------
xcur            36915
flashcur        25583
cr                 13
7. If you flush the buffer cache, the DB Smart Flash Cache is purged as well.
alter system flush buffer_cache;
System altered.
Re-running the v$bh query now shows:
STATUS       COUNT(*)
---------- ----------
xcur              520
free            36411

ERROR:
I followed all the steps of this manual, but after 'startup force' I got an error:

SQL> startup force
ORA-00439: feature not enabled: Server Flash Cache
What am I doing wrong?
My configuration:
Oracle Linux Server release 6.4
2.6.39-400.24.1.el6uek.x86_64
Oracle Database 11g 11.2.0.3.0

Patch 12949806: FLASH CACHE CHECK IS AGAINST ENTERPRISE-RELEASE
Now it works!


------FOR TESTING PURPOSE------

    # fdisk -l /dev/sdc

    Disk /dev/sdc: 24.5 GB, 24575868928 bytes
    255 heads, 63 sectors/track, 2987 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdc doesn't contain a valid partition table
    # chmod 777 /dev/sdc

set Oracle initialization parameters:

    $ sqlplus / as sysdba
    SQL> alter system set db_flash_cache_file='/dev/sdc' scope=spfile;
    System altered.

    SQL> alter system set db_flash_cache_size=10G scope=spfile;
    System altered.


Stop/start the database

    SQL> shutdown immediate
    SQL> startup



Reference:

Below is one more of my favourite links for configuring and understanding Smart Flash Cache in 12c.

ORA-65096: invalid common user or role name


SQL> CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;
create user remote_clone_user identified by remote_clone_user
            *
ERROR at line 1:
ORA-65096: invalid common user or role name


In 12c, a user created in the root container must be a common user whose name starts with C##. Setting the undocumented parameter "_ORACLE_SCRIPT" bypasses this check (use with care):

SQL> alter session set "_ORACLE_SCRIPT"=true;


SQL> CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;


SQL> grant create session,create pluggable database to remote_clone_user;
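The documented alternative to the hidden-parameter workaround is to create a proper common user with the C## prefix:

```sql
CREATE USER c##remote_clone_user IDENTIFIED BY remote_clone_user CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO c##remote_clone_user CONTAINER=ALL;
```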

Saturday, 16 January 2016

RAC INSTALLATION


Below are some links for installing Oracle RAC 11g R2.

OS: Enterprise-R5-U6-Server-i386-dvd

Oracle Installation Doc :  Oracle RAC installation on VM Workstation


English Version Video Demo :

Arabic Version Video Demo: Oracle RAC 11g R2 Installation

Friday, 15 January 2016

Backup Pluggable Databases

Oracle 12c New Feature: How to backup pluggable databases

Oracle 12c introduced the new multi-tenant feature called Pluggable Databases (PDB). We will show how to take a backup of the pluggable database components in this post.

Setup for RMAN with Oracle 12c

In order to use the Oracle 12c Recovery Manager (RMAN) utility for pluggable database backups, you need to first enable archivelog mode.
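The original screenshots are missing from this page; enabling archivelog mode typically looks like this:

```sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;
```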





Once archivelog mode is enabled, we can take a backup of the pluggable database

rman target sysbackup
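The screenshot with the backup command did not survive; assuming a PDB named pdb1, it would be:

```sql
RMAN> BACKUP PLUGGABLE DATABASE pdb1;
```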





Now we can verify that the backup image is available from RMAN for our pluggable database
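Listing the backup would look like this (pdb1 assumed, as above):

```sql
RMAN> LIST BACKUP OF PLUGGABLE DATABASE pdb1;
```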






Backup for the root component of Oracle 12c Pluggable Databases
RMAN> BACKUP DATABASE;    # backs up the entire CDB, including all PDBs
The command above also backs up the pluggable databases.

In a nutshell, an Oracle 12c CDB consists of a root component (CDB$ROOT) and a seed (PDB$SEED), alongside the pluggable databases that hold the user data. Earlier we performed a full backup of the entire container database, but suppose we want to back up just the root. We can do so with the RMAN command BACKUP DATABASE ROOT as shown in the following example:
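The screenshot is absent from the page; the command itself is simply:

```sql
RMAN> BACKUP DATABASE ROOT;
```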





Now let us verify the root backup with Oracle 12c:
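Again the screenshot is missing; the verification would be:

```sql
RMAN> LIST BACKUP OF DATABASE ROOT;
```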




Stay tuned when we visit how to restore pluggable databases with RMAN and Oracle 12c!



RESTORE & RECOVER root (Container)
RMAN> restore datafile 6;
RMAN> restore database root;
RMAN> recover database root;



RESTORE & RECOVER PDB

RMAN> restore datafile 29;                       # datafile numbers are CDB-wide
RMAN> restore pluggable database pdb1;           # or: restore pluggable database pdb1,pdb2,pdb3;
RMAN> recover pluggable database pdb1;


PDB Create & Drop in GUI / CLI MODE

12c

This method copies the files for the seed to a new location and associates the copied files with the new PDB, which will be called PDB1. Although you have many options for creating PDBs, this example is one of the simplest ways to get up and running. Using this method leaves you with a PDB with no customizations.

1.    Log in to your CDB using SQL*Plus as SYSDBA. To make sure you’re in the correct location, type
show con_name
You should see something like this:
CON_NAME
------------------------------
CDB$ROOT
The out-of-the box file location for PDBs is in a subdirectory under the oradata directory for the CDB.
2.    Create a subdirectory for the new PDB under the CDB file location from the OS oracle software owner by typing
mkdir /u01/app/oracle/oradata/CDB1/pdb1
If this command succeeds, you get no output. You can list the new directory by typing
ls -l /u01/app/oracle/oradata/CDB1 |grep pdb1
You should see something like this:
drwxr-xr-x. 2 oracle oinstall   4096 Aug 17 01:56 pdb1

3.    Back in SQL*Plus as SYSDBA, run the create pluggable database command by typing
select CON_ID, OPEN_MODE, NAME from v$containers;
select FILE_NAME from cdb_data_files;
alter session set container=PDB$SEED;
select FILE_NAME from dba_data_files;
CREATE PLUGGABLE DATABASE pdb1 ADMIN USER vinay identified by oracle  ROLE=(CONNECT)
DEFAULT TABLESPACE USERS
DATAFILE '/u01/app/oracle/oradata/CDB1/pdb1/users01.dbf'
SIZE 250M AUTOEXTEND ON
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/CDB1/datafile/o1_mf_system_c62dlts4_.dbf', '/u01/app/oracle/oradata/CDB1/pdb1/system01.dbf',
'/u01/app/oracle/oradata/CDB1/datafile/o1_mf_sysaux_c62dltrv_.dbf', '/u01/app/oracle/oradata/CDB1/pdb1/sysaux01.dbf',
'/u01/app/oracle/oradata/CDB1/datafile/pdbseed_temp012015-12-04_11-59-21-AM.dbf', '/u01/app/oracle/oradata/CDB1/pdb1/temp01.dbf');
You should see this:
Pluggable database created.
The new PDB is left in a mount state.
4.    Show the new PDB and open it by typing
show pdbs
alter pluggable database pdb1 open;
You should see this:
CON_ID CON_NAME                       OPEN MODE  RESTRICTED
------ ------------------------------ ---------- ----------
     2 PDB$SEED                       READ ONLY  NO
     3 PDB1                           READ WRITE NO
Pluggable database altered.

5.    Verify the status by typing
show pdbs
You should see this:
CON_ID CON_NAME                       OPEN MODE  RESTRICTED
------ ------------------------------ ---------- ----------
     2 PDB$SEED                       READ ONLY  NO
     3 PDB1                           READ WRITE NO

   



Monday, 11 January 2016

IMR (INSTANCE MEMBERSHIP RECOVERY)


Hi guys, today's topic is the RAC IMR process – a deeper level of the node eviction process, called IMR (Instance Membership Recovery).

RAC TO NON-RAC (PHYSICAL STANDBY)


-----------------------------------------------------------------------------------------
11g R2 Oracle RAC 2-Node (ASM) to Non-RAC (OMF) Data Guard
-----------------------------------------------------------------------------------------

ENV: 2 RAC nodes & shared storage

Node Name - Node1, Node2
Shared Storage


------------------------------
1. PRIM MACHINE
------------------------------

On both nodes, add the standby (STDRAC) IP as the last entry in /etc/hosts:

192.168.76.20  rac1.localdomain        rac1
192.168.76.21  rac2.localdomain        rac2
# Private
192.168.76.22  rac1-priv.localdomain   rac1-priv
192.168.76.23  rac2-priv.localdomain   rac2-priv
# Virtual
192.168.76.24  rac1-vip.localdomain    rac1-vip
192.168.76.25  rac2-vip.localdomain    rac2-vip
# SCAN
192.168.76.26  rac-scan.localdomain    rac-scan
# SAN
192.168.76.10  shared.localdomain      shared

# Standby Machine stdrac
192.168.76.15 stdrac.localdomain       stdrac



rac2.__db_cache_size=226492416
rac1.__db_cache_size=234881024
rac2.__java_pool_size=4194304
rac1.__java_pool_size=4194304
rac2.__large_pool_size=4194304
rac1.__large_pool_size=4194304
rac2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
rac2.__pga_aggregate_target=364904448
rac1.__pga_aggregate_target=339738624
rac2.__sga_target=478150656
rac1.__sga_target=503316480
rac2.__shared_io_pool_size=0
rac1.__shared_io_pool_size=0
rac2.__shared_pool_size=234881024
rac1.__shared_pool_size=251658240
rac2.__streams_pool_size=0
rac1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/rac/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/rac/controlfile/current.260.844530091'
*.db_block_checking='TRUE'
*.db_block_checksum='TYPICAL'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain='localdomain'
*.db_file_name_convert='/u01/app/oracle/oradata/STDRAC/','+DATA/rac/datafile/'
*.db_name='rac'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=racXDB)'
*.fal_client='RAC'
*.fal_server='STDRAC'
rac2.instance_number=2
rac1.instance_number=1
*.log_archive_config='DG_CONFIG=(RAC,STDRAC)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/product/11.2.0/db_1/dbs/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_dest_2='SERVICE=STDRAC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STDRAC'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=5
*.log_file_name_convert='/u01/app/oracle/oradata/STDRAC/','+DATA/rac/onlinelog'
*.memory_target=843055104
*.open_cursors=300
*.processes=150
*.remote_listener='rac-scan:1521'
*.remote_login_passwordfile='exclusive'
*.standby_file_management='AUTO'
rac2.thread=2
rac1.thread=1
rac2.undo_tablespace='UNDOTBS2'
rac1.undo_tablespace='UNDOTBS1'
SPFILE='+DATA/rac/spfilerac.ora' # line added by Agent







--------------------------------
2. Standby Machine
--------------------------------
A single VM machine using OMF
Standard Oracle 11g software installed


Add the standby (STDRAC) IP as the last entry in /etc/hosts:

192.168.76.20  rac1.localdomain        rac1
192.168.76.21  rac2.localdomain        rac2
# Private
192.168.76.22  rac1-priv.localdomain   rac1-priv
192.168.76.23  rac2-priv.localdomain   rac2-priv
# Virtual
192.168.76.24  rac1-vip.localdomain    rac1-vip
192.168.76.25  rac2-vip.localdomain    rac2-vip
# SCAN
192.168.76.26  rac-scan.localdomain    rac-scan
# SAN
192.168.76.10  shared.localdomain      shared

# Standby Machine stdrac
192.168.76.15 stdrac.localdomain       stdrac



stdrac.__db_cache_size=226492416
stdrac.__java_pool_size=4194304
stdrac.__large_pool_size=4194304
stdrac.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
stdrac.__pga_aggregate_target=364904448
stdrac.__sga_target=478150656
stdrac.__shared_io_pool_size=0
stdrac.__shared_pool_size=234881024
stdrac.__streams_pool_size=0

*.audit_file_dest='/u01/app/oracle/diag/adump'
*.audit_trail='db'
*.db_unique_name='STDRAC'
*.compatible='11.2.0.0.0'
*.control_files='/u01/app/oracle/oradata/STDRAC/controlfile/control_file.ctl'
*.db_block_checking='TRUE'
*.db_block_checksum='TYPICAL'
*.db_block_size=8192
*.db_create_file_dest='/u01/app/oracle/oradata/STDRAC/'
*.db_domain='localdomain'
*.db_file_name_convert='+DATA/rac/datafile/','/u01/app/oracle/oradata/STDRAC/'
*.db_name='rac'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=racXDB)'
*.fal_client='STDRAC'
*.fal_server='RAC'
*.log_archive_config='DG_CONFIG=(RAC,STDRAC)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/STDRAC/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STDRAC'
*.log_archive_dest_2='SERVICE=RAC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RAC'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=5
*.log_file_name_convert='+DATA/rac/onlinelog','/u01/app/oracle/oradata/STDRAC/'
*.memory_target=843055104
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='exclusive'
*.standby_file_management='AUTO'
stdrac.undo_tablespace='UNDOTBS'



-----------------------------
3. PRIM MACHINE
-----------------------------

Add the following TNS entries on both nodes:

RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac.localdomain)
    )
  )


STDRAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stdrac.localdomain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = stdrac)
    )
  )



-----------------------------------
4. STANDBY MACHINE
-----------------------------------


RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac.localdomain)
    )
  )


STDRAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stdrac.localdomain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = stdrac)
    )
  )


-----------------------------------
5. TNSPING BOTH SIDE
-----------------------------------
PRIMARY - tnsping RAC
        - tnsping STDRAC

STANDBY - tnsping STDRAC
        - tnsping RAC





-- Stop the database using srvctl (e.g. srvctl stop database -d rac), then on one instance:

SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list
SQL>alter database force logging;
SQL>shutdown immediate;

Start the database using srvctl (e.g. srvctl start database -d rac)

--Create standby redo logs (the number of groups varies; same size as the online redo logs)
SQL>alter database add standby logfile group 7  ('+REDO1','+REDO2') size 50M;
SQL>alter database add standby logfile group 8  ('+REDO1','+REDO2') size 50M;
SQL>alter database add standby logfile group 9  ('+REDO1','+REDO2') size 50M;
SQL>select * from v$log;
SQL>select * from v$standby_log;


--ON PRIMARY 
--Copy the password file. Create a pfile. Start the standby instance.
% scp $ORACLE_HOME/dbs/orapwRAC stdby-serv1:$ORACLE_HOME/dbs/orapwSTDRAC

--On the standby server create the adump directory
mkdir -p /u01/app/oracle/admin/STDRAC/adump

--On the standby server create initSTDRAC.ora in /tmp containing only db_name
cd /tmp; vi initSTDRAC.ora
db_name=STDRAC
export ORACLE_SID=STDRAC

sqlplus / as sysdba
SQL> startup nomount pfile='/tmp/initSTDRAC.ora';






--------------------------------
6. RMAN COMMAND
---------------------------------
--test connection: 
sqlplus sys/mypasswd@<STDBY>_DGMGRL as sysdba

--ON PRIMARY 
--Use RMAN to create standby database. 
--on prim-serv1
cd to <script-dir>
export ORACLE_SID=RAC
echo $ORACLE_SID
rman
connect target /
connect auxiliary sys/password@STDBY_DGMGRL



--- RMAN reports the auxiliary as "connected to (not started)"
@create_phys_sby.cmd
RMAN COMMAND FOR AN ACTIVE-DATABASE STANDBY DUPLICATE
-------------------------------------------------------------------------------------------
run {
ALLOCATE CHANNEL c1 TYPE DISK;
ALLOCATE AUXILIARY CHANNEL sby TYPE DISK;
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK dorecover;
}
-----------------------------------------------------------------------------------------------------




NOTE: if you get the error below when duplicating an 11.2.0.1 database from ASM to non-ASM, apply the 64-bit patch p9530594_112010_Linux-x86-64 (ASM to non-ASM).



Check the alert logs on both sides; the errors below came from the standby:
=====================================================================
latest log file
RMAN DUPLICATE: Errors in krbm_getDupCopy
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_ora_5542.trc:
ORA-19625: error identifying file /u01/app/oracle/oradata/STDRAC/example.264.844530143
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
RMAN DUPLICATE: Errors in krbm_getDupCopy
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_ora_5542.trc:
ORA-19625: error identifying file /u01/app/oracle/oradata/STDRAC/undotbs2.265.844530431
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_lgwr_4990.trc:
ORA-00313: open failed for members of log group 8 of thread 0
ORA-00312: online log 8 thread 0: '/u01/app/oracle/oradata/STDRAC/stby08.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_lgwr_4990.trc:
ORA-00313: open failed for members of log group 8 of thread 0
ORA-00312: online log 8 thread 0: '/u01/app/oracle/oradata/STDRAC/stby08.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Aug 09 20:41:00 2015
ERROR: slave communication error with ASM; terminating process 5542
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_ora_5542.trc:

==================================*******======================================================


Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_lgwr_6750.trc:
ORA-00313: open failed for members of log group 8 of thread 0
ORA-00312: online log 8 thread 0: '/u01/app/oracle/oradata/STDRAC//stby08.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sat Aug 08 14:21:33 2015
ERROR: slave communication error with ASM; terminating process 6862
Errors in file /u01/app/oracle/diag/rdbms/stdrac/stdrac/trace/stdrac_ora_6862.trc:


EXFAT HDD MOUNT ON RHEL 6/5

Install the following RPMs:

     exfat-utils-1.2.0-1.el6.i686.rpm

-Alternatively:
     fuse-exfat-1.0.1-1.el6.i686.rpm
     fuse-exfat-1.0.1-2.el6.i686.rpm


mount.exfat /dev/sdb1 /media/

cp -rpvnf * /home/abc/

-rpvnf --- recursive, preserve attributes, verbose, no-clobber (skip existing files), force

Friday, 8 January 2016

FAILOVER PHYSICAL STANDBY

Steps to failover to physical standby database.

In this document we will see the steps to failover to a physical standby database. 

1) Stop Redo Apply.
   Issue the following SQL statement on the target standby database:
           SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


2) Finish applying all received redo data.

This tells the standby database that the primary no longer exists.
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

Once the FINISH command has completed, the protection mode of the primary database is lowered to Maximum Performance, regardless of its original protection mode. This is done because the new primary can be activated without a standby.
           
                         SQL> select protection_mode, protection_level from v$database;

                         PROTECTION_MODE      PROTECTION_LEVEL
                         -------------------- --------------------
                         MAXIMUM PERFORMANCE  UNPROTECTED


 3) Verify that the target standby database is ready to become a primary database. 
             Query the SWITCHOVER_STATUS column of the V$DATABASE view on the target standby database.
           
                       SQL> select SWITCHOVER_STATUS from v$database ;

4) Switch the physical standby database to the primary role.
               SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

5) Open the new primary database
               SQL> ALTER DATABASE OPEN;
       
       At this stage the protection level changes from 'Unprotected' to 'Maximum Performance'.
        
       6) Back up the new primary database.