Monday, 25 January 2016

OLR (Oracle Local Registry) in RAC 11gR2

In Oracle Clusterware 11g Release 2, an additional component related to the OCR, called the Oracle Local Registry (OLR), is installed on each node in the cluster. The OLR is a local registry for node-specific resources, and it is installed and configured when Oracle Clusterware installs the OCR.

It is the very first file accessed to start up the clusterware when the OCR is stored in ASM. The OCR must be readable to determine which resources need to be started on a node, but if the OCR is in ASM it cannot be read until ASM (which is itself a node resource whose definition is stored in the OCR) is up. To resolve this chicken-and-egg problem, the information about the resources that must be started on a node is kept in an operating system file called the Oracle Local Registry (OLR). Since the OLR is an ordinary operating system file, processes on the node can read and write it regardless of the state of the clusterware stack (up or down). Hence, when a node joins the cluster, the OLR on that node is read and the various resources, including ASM, are started on the node. Once ASM is up, the OCR becomes accessible.

By default, OLR is located at $GRID_HOME/cdata/hostname.olr.

The OLR contains information such as ORA_CRS_HOME, the local host version, the active version, GPnP details, the time and location of the latest OCR backup, the locations of the daily and weekly OCR backups, and the node name.

You can manage the OLR by using the ocrcheck, ocrdump, and ocrconfig utilities with the -local option.


To see the location of the OLR, use the ocrcheck utility:
$ ocrcheck -config -local


You can check the status of the OLR as follows:
# ocrcheck -local


You can display the content of OLR to the text terminal that initiated the program using the OCRDUMP utility, as follows:
$ ocrdump -local -stdout


You can perform administrative tasks on OLR using the OCRCONFIG utility. To export OLR to a file:
# ocrconfig -local -export file_name


To import a specified file to OLR:
# ocrconfig -local -import file_name


To modify the OLR file on the local node:
# ocrconfig -local -repair olr file_name


Note: The olr keyword used with the -repair option is valid only when -local is used.
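
To take a manual backup of the OLR before making configuration changes, use ocrconfig with the -local option; a minimal sketch (the backup is written to the default Grid home backup location):

# ocrconfig -local -manualbackup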

Friday, 22 January 2016

12c New Features

--------------------
Part 1 Covers:
--------------------
CDB & PDB
Online migration of an active data file
Online table partition or sub-partition migration
Invisible column
Multiple indexes on the same column
DDL logging ($DIAG)
Temporary undo (ins and outs)
New backup user privilege
How to execute SQL statement in RMAN
Table level recovery in RMAN
Restricting PGA size
Smart Flash Cache
Automatic Big Table Cache (ABTC), 12.1.0.2


--------------------
Part 2 Covers:
--------------------
Table partition maintenance enhancements
Database upgrade improvements
Restore/Recover data file over the network
Data Pump enhancements
Real-time ADDM
Concurrent statistics gathering
TDE Re-create master key after deleting

--------------------------------------
Part 3 RAC & ASM Covers:
--------------------------------------
Additions/Enhancements in ASM
* Flex ASM
* Increased ASM storage limits
* Tuning ASM rebalance operations
* ASM Disk Scrubbing
* Active Session History (ASH) for ASM

Additions/Enhancements in Grid Infrastructure
* Flex Clusters
* OCR backup in ASM disk group
* IPv6 support

Additions/Enhancements in Real Application Cluster (database)
* What-If command evaluation
* Miscellaneous srvctl improvements

--------------------
Part 4 Covers:
---------------------
How to truncate a master table while child tables contain data
Limiting ROWS for Top-N query results
Miscellaneous SQL*Plus enhancements
Session level sequences
WITH clause improvements
Extended data types



Many thanks to Mr. Syed Jaffar Hussain and Mr. Tim Hall for their nice explanations of some of the 12c features.

Thursday, 21 January 2016

GoldenGate Basic Architecture.

Source Database - GoldenGate extracts information from the source database, which may be any of the following:

- Oracle Database
- TimesTen
- MySQL
- IBM DB2
- Microsoft SQL Server
- Teradata
- Sybase
- Enscribe
- SQL/MX


Capture (Local Extract)
 - Capture is the primary process, which extracts all the committed DML and
DDL changes from the source database.

Source Trail
 - The data extracted from the source database is written to the source trail,
which is essentially a staging file used by other processes (e.g., the data pump).


Datapump
 - The data pump, which is a GoldenGate process (not to be confused with Oracle
Data Pump), reads the information from the source trail, transfers it to the target host via
TCP/IP, and writes it to the remote trail.

Network
 - GoldenGate sends the data from the source trail via TCP/IP to the target host
and writes it to the remote trail.


Collector
 - The collector process is started automatically by the manager. It runs in the
background, capturing the data received and writing it to the remote trail.

Remote Trail
 - The remote trail is similar to the source trail; the only difference between the
two is that the remote trail resides on the target host while the source trail resides on the
source database host.


Delivery (Replicat)
 - The delivery process applies all the changes to the target database by
reading the remote trail. The changes are applied to the target database in the same order
in which they were made on the source database.

Target Database
 - The target database is the database to which all the changes are applied.
It can be the same type of database as the source or a different one.
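
To make the flow concrete, here is a minimal sketch of the three GoldenGate parameter files involved; the process names (ext1, pmp1, rep1), trail prefixes, schema, host name, port, and credentials are all hypothetical:

-- capture (local extract): writes committed changes to the source trail
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
EXTTRAIL ./dirdat/st
TABLE scott.*;

-- datapump: reads the source trail and ships it over TCP/IP to the remote trail
EXTRACT pmp1
USERID ggadmin, PASSWORD ggadmin
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE scott.*;

-- delivery (replicat): reads the remote trail and applies the changes in order
REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;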

SMART_FLASH_CACHE

DB Smart Flash Cache, available in 11gR2 and later (including 12c), is supported only on Oracle Enterprise Linux (OEL) and Solaris.

In case you don’t have the budget to buy Exadata, you can still buy a large number of flash disks and put part of your database on them. But what should be stored on the flash disks (very fast) and what on the magnetic disks (very slow)?

You don’t have to decide that yourself; let the database decide.
Introduction

DB Smart Flash Cache is a new extension of the buffer cache area. This extra area should be defined on solid state disks (SSD) and has the following features:
* performance improvements at moderate cost (cheaper than DRAM)
* low latency compared to magnetic disks
* higher throughput compared to magnetic disks
* easy to set up
* easy to control
* can be used with RAC; cache fusion consistency is maintained
* direct I/O bypasses the buffer cache, and therefore also bypasses the DB Smart Flash Cache
* only clean blocks from the buffer cache are cached
* the flash cache is not auto-tuned
* only blocks from the standard buffer pool are cached in the DB Smart Flash Cache
Oracle recommends:
* flash disks with comparable read and write IOPS
* sizing this new layer at 2-10 times the size of the buffer cache in DRAM
* using it mainly for OLTP systems

Architecture


If an Oracle server process needs to read a block from the database, it must first read it from magnetic disk (a physical read). The block is then stored in the buffer cache and added to the standard “LRU chain” list.
When the in-memory buffer area fills up, Oracle must decide which blocks to remove from the cache. If DB Smart Flash Cache is enabled, “clean” blocks are written to the flash cache by the DBWR process, so the next time they are needed they can be read into memory from the flash cache, improving performance.
NOTE: “Dirty” blocks are never stored in the flash cache.
The list of blocks cached in the DB Smart Flash Cache is kept in the buffer cache area on two dedicated flash “LRU lists”, depending on the object’s FLASH_CACHE storage attribute:

* DEFAULT - the standard least recently used algorithm decides how long blocks stay in the flash cache. This is the default value assigned to every object in the database.
* KEEP - such blocks are not removed from the flash cache as long as the flash cache is large enough.

alter|create table|index objectname
storage
(
  buffer_pool { keep | recycle | default }
  flash_cache { keep | none    | default }
);

The NONE value for FLASH_CACHE disables flash caching for the given object.
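
For example, to ask Oracle to keep the blocks of a hypothetical ORDERS table in the flash cache:

alter table orders storage (flash_cache keep);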

Statistics
All I/O operations from the DB Smart Flash Cache are counted as physical I/O; however, Oracle also collects this information in new columns:
V$SQL - OPTIMIZED_PHY_READ_REQUESTS
V$SQLAREA - OPTIMIZED_PHY_READ_REQUESTS
V$FILESTAT - OPTIMIZED_PHYBLKRD
select name from v$statname where name like 'physical%optimized%';

NAME                                                           
----------------------------------------------------------------
physical read requests optimized                                 
physical read total bytes optimized
You can also see these statistics in V$SESSTAT and V$SYSSTAT.
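
For example, a quick check for statements that benefited from the flash cache:

select sql_id, optimized_phy_read_requests from v$sql where optimized_phy_read_requests > 0;
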
Setup
Two parameters must be set at the database level to turn on DB Smart Flash Cache:
 DB_FLASH_CACHE_FILE - defines the location (OS disk path or ASM disk group) and file name of the flash cache file
 DB_FLASH_CACHE_SIZE - defines the size of the flash cache

DB_FLASH_CACHE_FILE='/os path/flash_cache_file.dbf'
DB_FLASH_CACHE_FILE='+FLASH_DISK_GROUP/flash_cache_file.dbf'
DB_FLASH_CACHE_SIZE=200m

After setting both parameters you need to restart the database.

DB_FLASH_CACHE_FILE
* can’t be shared between databases or instances

DB_FLASH_CACHE_SIZE
* can’t be dynamically resized
* can be set to 0 to disable DB Smart Flash Cache
* can be set back to the original size to re-enable DB Smart Flash Cache
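
For example, to disable the cache and later re-enable it at the original size configured above:

SQL> alter system set db_flash_cache_size=0;
SQL> alter system set db_flash_cache_size=200M;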




Performance improvements
Oracle conducted an interesting test on a 70 GB OLTP database with an 8 GB SGA. The results show transactions per second improving as the DB Smart Flash Cache grows, and transaction response time improving with the cache size as well.

Example
I simulated an SSD by creating a ramdisk (a disk backed by memory) using the following steps:
1. create a directory to mount the ramdisk on and change its owner to oracle, group dba
[root@oel5 /]# mkdir /ramdisk
[root@oel5 /]# chown oracle:dba -R /ramdisk
2. mount ramdisk and check it
[root@oel5 /]# mount -t tmpfs none /ramdisk -o size=256m

[root@oel5 /]# mount | grep ramdisk

none on /ramdisk type tmpfs (rw,size=256m)

3. set the parameters for the database and restart it as the oracle user
SQL> alter system set db_flash_cache_file='/ramdisk/ram.dbf' scope=spfile;
System altered.

SQL> alter system set db_flash_cache_size=200M scope=spfile;
System altered.
SQL> startup force
ORACLE instance started.
Total System Global Area  835104768 bytes
Fixed Size                  2232960 bytes
Variable Size             507514240 bytes
Database Buffers          322961408 bytes
Redo Buffers                2396160 bytes
Database mounted.
Database opened.
SQL> show parameter flash_cache

NAME                    TYPE        VALUE
----------------------- ----------- ------------------------------
db_flash_cache_file     string      /ramdisk/ram.dbf
db_flash_cache_size     big integer 200M
4. Check that the new file exists in /ramdisk
[root@oel5 ramdisk]# ll
total 8
-rw-r----- 1 oracle asmadmin 209715200 Feb 24 22:54 ram.dbf
5. Let’s create a table with flash_cache keep in the storage clause so Oracle will try to keep its blocks in the DB Smart Flash Cache as long as possible.
create table test_tbl1(id number, id1 number, id2 number) storage (flash_cache keep);
begin
  for i in 1..1000000
  loop
    insert into test_tbl1 values(i, i, i);
  end loop;
  commit;
end;
/


6. Eventually, after some time, you can see some data in the flash cache via the v$bh view.
select status, count(*) from v$bh
group by status;
STATUS       COUNT(*)
---------- ----------
xcur            36915
flashcur        25583
cr                 13
7. If you flush the buffer cache, the DB Smart Flash Cache is purged as well:
alter system flush buffer_cache;
System altered.

select status, count(*) from v$bh group by status;
STATUS       COUNT(*)
---------- ----------
xcur              520
free            36411

ERROR:
I followed all the steps of this guide, but after ‘startup force’ I got an error:

SQL> startup force
ORA-00439: feature not enabled: Server Flash Cache
What am I doing wrong?
My configuration:
Oracle Linux Server release 6.4
2.6.39-400.24.1.el6uek.x86_64
Oracle Database 11g 11.2.0.3.0

Applying Patch 12949806 (FLASH CACHE CHECK IS AGAINST ENTERPRISE-RELEASE) fixed it. Now it works!


------FOR TESTING PURPOSE------

    # fdisk -l /dev/sdc

    Disk /dev/sdc: 24.5 GB, 24575868928 bytes
    255 heads, 63 sectors/track, 2987 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdc doesn't contain a valid partition table
    # chmod 777 /dev/sdc

set Oracle initialization parameters:

    $ sqlplus / as sysdba
    SQL> alter system set db_flash_cache_file='/dev/sdc' scope=spfile;
    System altered.

    SQL> alter system set db_flash_cache_size=10G scope=spfile;
    System altered.


Stop/Start database

    SQL> shutdown immediate
    SQL> startup



Reference:

Below is one more of my favourite links for configuring and understanding Smart Flash Cache in 12c.

ORA-65096: invalid common user or role name


SQL> CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;
create user remote_clone_user identified by remote_clone_user
            *
ERROR at line 1:
ORA-65096: invalid common user or role name


SQL> alter session set "_ORACLE_SCRIPT"=true;


SQL> CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;


SQL> grant create session,create pluggable database to remote_clone_user;
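
Note: the error occurs because, in a multitenant container database, a user created in the root must be a common user whose name begins with C##; the "_ORACLE_SCRIPT" workaround above is undocumented and best avoided in production. The supported alternatives are a common user in the root or a local user inside a PDB (pdb1 below is a hypothetical PDB name):

SQL> CREATE USER c##remote_clone_user IDENTIFIED BY remote_clone_user CONTAINER=ALL;

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> CREATE USER remote_clone_user IDENTIFIED BY remote_clone_user;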

Saturday, 16 January 2016

RAC INSTALLATION


Below are some links for the installation of Oracle RAC 11g R2.

OS: Enterprise-R5-U6-Server-i386-dvd

Oracle Installation Doc: Oracle RAC installation on VM Workstation


English Version Video Demo:

Arabic Version Video Demo: Oracle RAC 11g R2 Installation

Friday, 15 January 2016

Backup Pluggable Databases

Oracle 12c New Feature: How to backup pluggable databases

Oracle 12c introduced the new multi-tenant feature called Pluggable Databases (PDB). We will show how to take a backup of the pluggable database components in this post.

Setup for RMAN with Oracle 12c

In order to use the Oracle 12c Recovery Manager (RMAN) utility for pluggable database backups, you need to first enable archivelog mode.
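
A minimal sketch of enabling archivelog mode (connected to the CDB as SYSDBA):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;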

Once archivelog mode is enabled, we can take a backup of the pluggable database:

rman target sysbackup
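
A minimal sketch, assuming a PDB named pdb1:

RMAN> BACKUP PLUGGABLE DATABASE pdb1;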

Now we can verify that the backup image is available from RMAN for our pluggable database:
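
For example, again assuming the hypothetical pdb1:

RMAN> LIST BACKUP OF PLUGGABLE DATABASE pdb1;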

Backup for root component of Oracle 12c Pluggable Databases

RMAN> BACKUP DATABASE;    # backs up the whole container: the root and all PDBs

The command above backs up both the CDB and all of its pluggable databases. In a nutshell, an Oracle 12c container database consists of a root component, a seed, and the pluggable databases that hold the data. Earlier we performed a full backup of the entire container, but let us say that we just want to back up the root itself. We can do so with the RMAN command BACKUP DATABASE ROOT, as shown in the following example:
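
A minimal sketch:

RMAN> BACKUP DATABASE ROOT;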

Now let us verify the root backup for our PDB with Oracle 12c:
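
For example:

RMAN> LIST BACKUP OF DATABASE ROOT;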

Stay tuned when we visit how to restore pluggable databases with RMAN and Oracle 12c!



RESTORE & RECOVER root (Container)
RMAN> restore datafile 6;
RMAN> restore database root;
RMAN> recover database root;



RESTORE & RECOVER PDB

RMAN> restore datafile 29;                # datafile numbers are unique across the whole CDB
RMAN> restore pluggable database pdb1;    # or: restore pluggable database pdb1, pdb2, pdb3;
RMAN> recover pluggable database pdb1;