Thursday 1 November 2018

OGG-01033 There is a problem in network communication, a remote file problem, encryption keys for target and source do not match (if using ENCRYPT) or an unknown error.

Initial load issue in GoldenGate 12c.

-- Report error.

2018-10-28 10:08:02  ERROR   OGG-01033  There is a problem in network communication, a remote file problem, encryption keys for target and source do not match (if using ENCRYPT) or an unknown error. (Remote file used is /u01/app/oracle/product/ogg/dirdat/initload.dat, reply received is Failed resolving output file /u01/app/oracle/product/ogg/dirdat/initload.dat (error: 2, No such file or directory)).
2018-10-28 10:08:02  
ERROR   OGG-01668  PROCESS ABENDING.

-- Solution.

This error occurs in GoldenGate 12.2 because a new parameter, ALLOWOUTPUTDIR, has been introduced; it must be added to the ./GLOBALS file on the target.

Syntax: ALLOWOUTPUTDIR [path of the remote trail location]

1. Stop the Manager and the Replicats.

ggsci> stop mgr
ggsci> stop *

2. Edit ./GLOBALS and add the ALLOWOUTPUTDIR parameter.

ggsci> edit params ./GLOBALS
ALLOWOUTPUTDIR /u01/ogg/dirdat/

3. Start the Manager and the Replicats.

ggsci> start mgr
ggsci> start *

-- Source 
On the source, run Extract manually to check for any issue before automating it:
./extract paramfile <param_file> reportfile <report_file>
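The GLOBALS change above can also be scripted. A minimal sketch, assuming a demo OGG home under /tmp/ogg-demo (point OGG_HOME at your real GoldenGate home); the ALLOWOUTPUTDIR path is the one from this post's error message.

```shell
# Append ALLOWOUTPUTDIR to the GLOBALS file if it is not already there.
# OGG_HOME defaults to a throwaway demo directory for illustration.
OGG_HOME=${OGG_HOME:-/tmp/ogg-demo}
mkdir -p "$OGG_HOME"
touch "$OGG_HOME/GLOBALS"
grep -q '^ALLOWOUTPUTDIR' "$OGG_HOME/GLOBALS" \
  || echo 'ALLOWOUTPUTDIR /u01/app/oracle/product/ogg/dirdat' >> "$OGG_HOME/GLOBALS"
cat "$OGG_HOME/GLOBALS"
```

After the edit, stop and start MGR and the Replicats from GGSCI so the new GLOBALS parameter takes effect.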




Monday 16 July 2018

rtld: 0712-001 Symbol CreateIoCompletionPort was referenced

In my case, the cause of the issue was at the storage level: the installation was failing due to an LVM inconsistency.

For more detail, follow the reference links below.

Ref: 
http://oracle-help.com/ora-errors/rtld-0712-001-symbol-createiocompletionport-was-referenced/
AIX 12.1.0.2 Installation Fails with "rtld: 0712-001 Symbol CreateIoCompletionPort was referenced " (Doc ID 1949184.1)

Highly Available VIP (HAVIP)

Highly Available IP (HAIP) was introduced in 11.2.0.2, and a maximum of 4 HAIPs can be defined for the private network.

Oracle 12c introduces the Highly Available VIP (HAVIP), a Clusterware-monitored VIP that can be used for non-database applications.

Typically, VIPs are created for the VIP or SCAN listeners used for database connections. In some cases, the need arises to create a Clusterware-monitored VIP designed to support other applications.


New feature in Oracle 11g database to kill session


Oracle 11g lets us kill a session only after its current transaction finishes; in previous releases there was no way to let an in-flight transaction complete first. Instead of "alter system kill session", we can use "alter system disconnect session" with the POST_TRANSACTION clause: the session's active transaction is allowed to complete, and the session is then disconnected automatically. An example of the syntax is given below.

SQL> alter system disconnect session '10,251' post_transaction;
System altered.

Thursday 12 July 2018

LREG New BG Process In 12c

In 11.2, the PMON process propagates the service metrics to the listeners registered in the LOCAL_LISTENER and REMOTE_LISTENER initialization parameters.

As REMOTE_LISTENER specifies the address of the SCAN listener and LOCAL_LISTENER specifies the address of the VIP listener, PMON propagates the service metrics to both the SCAN and VIP listeners.

You can trace listener registration using the following command:

alter system set events 'immediate trace name listener_registration level 15';

In 12c, listener registration is performed by a new mandatory background process named LREG, and if you want to trace registration, the LREG trace file contains all the information about it.

This is not a recommendation to create 50+ unnecessary services; you should create only as many services as you need to split the application into manageable and disjoint workloads.

In 11gR2, with 50+/100+ services and listeners, there was a possibility that the PMON process might spend too much time on service registration with the listeners.

In 12c this possibility is eliminated: the LREG background process is totally dedicated to registering services with the listeners, and PMON is freed from listener registration.
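A quick way to confirm LREG on a 12c host is to look for the ora_lreg background process at the OS level. A minimal sketch; the process name pattern assumes the typical ora_lreg_<SID> naming.

```shell
# Look for the LREG background process (12c onwards).
# On pre-12c hosts, or when no instance is running, nothing will match.
ps -ef | grep '[o]ra_lreg' || echo "no LREG process found (pre-12c or instance down)"
```

You can also query V$BGPROCESS from SQL*Plus and look for a row named LREG.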

Monday 2 July 2018

PRCD-1027 PRCD-1229

PRCD-1027 : Failed to retrieve database racdb
PRCD-1229 : An attempt to access configuration of database racdb was rejected because its version 11.2.0.1.0 differs from the program version 11.2.0.4.0. Instead run the program from /oracle/product/11.2.0/db_1.



[root@rac1 ~]# srvctl status database -d racdb
PRCD-1027 : Failed to retrieve database racdb
PRCD-1229 : An attempt to access configuration of database racdb was rejected because its version 11.2.0.1.0 differs from the program version 11.2.0.4.0. Instead run the program from /oracle/product/11.2.0/db_1.

[root@rac1 ~]# srvctl config database -d racdb

PRCD-1027 : Failed to retrieve database racdb
PRCD-1229 : An attempt to access configuration of database racdb was rejected because its version 11.2.0.1.0 differs from the program version 11.2.0.4.0. Instead run the program from /oracle/product/11.2.0/db_1.

[root@rac1 ~]# srvctl config database
racdb


-- First Attempt
[root@rac1 ~]# srvctl modify database -d racdb -o /oracle/product/11204/db_1

PRCD-1027 : Failed to retrieve database racdb
PRCD-1229 : An attempt to access configuration of database racdb was rejected because its version 11.2.0.1.0 differs from the program version 11.2.0.4.0. Instead run the program from /oracle/product/11.2.0/db_1.

-- Second Attempt
[root@rac1 ~]# srvctl upgrade database -d  racdb -o /oracle/product/11204/db_1

PRCD-1231 : Failed to upgrade configuration of database racdb to version 11.2.0.4.0 in new Oracle home /oracle/product/11204/db_1
PRKH-1014 : Current user "root" is not the oracle owner user "oracle" of oracle home "/oracle/product/11204/db_1"

[root@rac1 ~]# su - oracle

-- Final Attempt
[oracle@rac1 ~]$ srvctl upgrade database -d  racdb -o /oracle/product/11204/db_1

[oracle@rac1 ~]$ exit
logout

[root@rac1 ~]# srvctl status database -d racdb
Instance racdb1 is running on node rac1
Instance racdb2 is running on node rac2

Saturday 30 June 2018

mv: cannot move `OPatch' to `OPatch_old': Permission denied

Applying the latest OPatch (patch 6880880) to GRID_HOME


[grid@dbapath.com patch]$ mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch_old
mv: cannot move `/u01/app/11.2.0/grid/OPatch' to `/u01/app/11.2.0/grid/OPatch_old': Permission denied

You must either unlock GRID_HOME or you can move OPatch directory as a superuser.

[oracle@grid_server1 ]$ cd /u01/app/11.2.0/grid/crs/install
[oracle@grid_server1 ]$ sudo perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Undefined subroutine &main::read_file called at /u01/app/11.2.0/grid/crs/install/crspatch.pm line 86.

If you hit an error like the one above, here is the workaround (per the MOS note "roothas.pl -patch or rootcrs.pl -patch Fails with 'Undefined subroutine'" [ID 1268390.1]):

# Take a backup of the file /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm

grid$ cd /u01/app/11.2.0/grid/crs/install

grid$ cp crsconfig_lib.pm crsconfig_lib.pm.bak

# Make the following change in that file crsconfig_lib.pm
#From
# my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR
#To
# my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR read_file
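The manual edit above can also be done with sed. A minimal sketch that works on a throwaway copy in /tmp; point LIB at your real crsconfig_lib.pm (keeping the backup step) before running it against a grid home.

```shell
# Demo copy: simulate the original @exp_func line from crsconfig_lib.pm.
LIB=/tmp/crsconfig_lib.pm
echo 'my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR' > "$LIB"

# Back up first, then append read_file to the export list.
cp "$LIB" "$LIB.bak"
sed -i 's/validate_olrconfig validateOCR$/& read_file/' "$LIB"
grep 'exp_func' "$LIB"
```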

#Execute relock:

[grid@dbapath.com ]$ cd /u01/app/11.2.0/grid/crs/install

[grid@dbapath.com ]$ sudo perl rootcrs.pl -patch -crshome /u01/app/11.2.0/grid
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
ACFS-9200: Supported
CRS-4123: Oracle High Availability Services has been started.

Thursday 28 June 2018

Quiescing RAC Databases


To quiesce a RAC database, use the ALTER SYSTEM QUIESCE RESTRICTED statement from one instance.

It is not possible to open the database from any instance while the database is in the process of being quiesced from another instance.

After all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement executes and the database is considered to be quiesced.
In a RAC environment, this statement affects all instances.

Cold backups cannot be taken when the database is in a quiesced state, because the Oracle background processes may still perform updates for internal purposes even while the database is quiesced.
Also, the file headers of online data files continue to appear as if they are being accessed; they do not look the same as after a clean shutdown.

Wednesday 27 June 2018

Mongo ReplicaSet

Hi guys, this is my first post on MongoDB, and today I will give you a small demonstration of configuring a replica set.

  • First, get to know the basics of replica sets.
  • What is a replica set?
  • How does it work? A quick search will answer these kinds of questions.


Now we are going to configure replicaset.

=============
My Environment
=============

3 servers -> 1 primary -> 2 secondaries
OS: RHEL 7
MongoDB version: 3.4

=========
# Host File
=========

- Entry in /etc/hosts in all members

[root@db1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.219.130         mongodb1.com    db1
192.168.219.140         mongodb2.com    db2
192.168.219.141         mongodb3.com    db3

==============
# Disable Selinux
==============

Check the SELinux status with this command:

[root@db1 ~]# getenforce
Enforcing

Set SELINUX=disabled in /etc/selinux/config, reboot, and verify:

[root@db1 ~]# cat /etc/selinux/config
SELINUX=disabled

[root@db1 ~]# getenforce
Disabled

==================
# Configure Firewalld
==================
In the first step, we already disabled SELinux. For security reasons, we will now enable firewalld on all nodes and open only the ports that are used by MongoDB and SSH.

- Install Firewalld with the yum command.

[root@db1 ~]# yum -y install firewalld

- Start firewalld and enable it to start at boot time.

[root@db1 ~]# systemctl start firewalld
[root@db1 ~]# systemctl enable firewalld

- Next, open your ssh port and the MongoDB default port 27017.

[root@db1 ~]# firewall-cmd --permanent --add-port=22/tcp
[root@db1 ~]# firewall-cmd --permanent --add-port=27017/tcp

- Reload firewalld to apply the changes.

[root@db1 ~]# firewall-cmd --reload



=====================
# Mongo Configuration File
=====================

- Below are the configuration file parameters, which are the same on all members:

/etc/mongod.conf
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # listen on all interfaces so the other members can connect; bind to specific IPs to restrict access

replication:
  replSetName: rs0


=========================
# MongoDB Replica Set initiate
=========================
In this step, we will create the replica set. We will use the 'db1' server as the 'PRIMARY' node, and 'db2' and 'db3' as 'SECONDARY' nodes.

Initiate the replica set from the 'db1' server with the query below; note that rs.initiate() runs in the mongo shell, not at the OS prompt.

[root@db1 ~]# mongo
> rs.initiate()

Make sure the 'ok' value in the result is 1.

- Now add the 'db2' and 'db3' nodes to the replica set from the mongo shell.

rs0:PRIMARY> rs.add("db2:27017")
rs0:PRIMARY> rs.add("db3:27017")

Check the results and make sure there is no error.


- Next, check the replica set status with the rs query below.

rs0:PRIMARY> rs.status()

- Another query to check the status is:

rs0:PRIMARY> rs.isMaster()



==================
# Test the Replication
==================

Test the data set replication from the 'PRIMARY' instance 'db1' to the 'SECONDARY' nodes 'db2' and 'db3'.

In this step, we will try to write or create a new database on the 'PRIMARY' node 'db1',
then check if the replication is working by checking the database on 'SECONDARY' nodes 'db2' and 'db3'.

- Login to the 'db1' server and open mongo shell.

[root@db1 ~]# mongo

- Now create a new database 'lemp' and new 'stack' collection for the database.

> use lemp
> db.stack.save({
    "desc": "LEMP Stack",
    "apps": ["Linux", "Nginx", "MySQL", "PHP"]
  })




- Next, go to the 'SECONDARY' node 'db2' and open the mongo shell.

[root@db2 ~]# mongo

Enable reading from the 'SECONDARY' node with the query 'rs.slaveOk()', and then check if the 'lemp' database exists on the 'SECONDARY' nodes.

> rs.slaveOk()
> show dbs
> use lemp
> show collections
> db.stack.find()


===================
# Check Replication Lag
===================

> rs.printReplicationInfo()
> rs.printSlaveReplicationInfo()  # shows all the members with their replication lag.

rs0:PRIMARY> rs.printSlaveReplicationInfo();
source: db2:27017
        syncedTo: Mon Jan 15 2018 21:49:07 GMT+0530 (IST)
        0 secs (0 hrs) behind the primary
source: db3:27017
        syncedTo: Thu Jan 01 1970 05:30:00 GMT+0530 (IST)
        1516033147 secs (421120.32 hrs) behind the primary



rs0:PRIMARY> rs.status();
Test demo: shut down the db3 member instance with db.shutdownServer(), then execute the command above again and observe the status.



Monday 18 June 2018

Udev Rule For ASM Disks

In the case of RAC configured on a VM: after adding an ASM disk on node 1, add the ASM disk to the udev rules so that the device naming is persistent.

------------------
IN RHEL/OEL-5/6/7
------------------
Get the SCSI ID for each ASM disk using scsi_id:
/sbin/scsi_id -g -u -d /dev/sdb
/sbin/scsi_id -g -u -d /dev/sdc
/sbin/scsi_id -g -u -d /dev/sdd
/sbin/scsi_id -g -u -d /dev/sde
/sbin/scsi_id -g -u -d /dev/sdf
/sbin/scsi_id -g -u -d /dev/sdg

- The commands above did not return any scsi_id output to use in the udev rules. I have not tested much further, but I found some docs and fixed this issue on the Linux versions above.

- Add the rules below to "/etc/udev/rules.d/50-udev.rules"; in my case the OS version is 5. For more detail, follow oracle-base.

# ASM DISK RULES
KERNEL=="sdb1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"

- Now execute the command again, and this time scsi_id returns output.
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdb
36000c299cae37d62af51ab7c55768959
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdc
36000c2989736710c1b2bd8efea742e61
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdd
36000c295eae87b8a4e919b5b3b077827
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sde
36000c293bc54c062ff70f20ae8c3078e
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdf
36000c29759c3a92862de21e908f8516e
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdg
36000c292f86ba892199aa8fd6bb7ffe8

- Create the file "/etc/udev/rules.d/99-oracle-asmdevices.rules" and insert the rules below into it:
KERNEL=="sdb1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c299cae37d62af51ab7c55768959",NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c2989736710c1b2bd8efea742e61",NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c295eae87b8a4e919b5b3b077827",NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c293bc54c062ff70f20ae8c3078e",NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29759c3a92862de21e908f8516e",NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c292f86ba892199aa8fd6bb7ffe8",NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"
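Typing one rule per disk is error-prone; the rules above can also be generated from a device-to-WWID list. A minimal sketch writing to /tmp for illustration; the WWIDs are the example values from this post, so substitute your own scsi_id output and target /etc/udev/rules.d/99-oracle-asmdevices.rules on a real node.

```shell
# Generate udev rules for ASM disks from "device wwid" pairs on stdin.
gen_asm_rules() {
  i=1
  while read -r dev wwid; do
    printf 'KERNEL=="%s1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="%s", NAME="asm-disk%d", OWNER="oracle", GROUP="dba", MODE="0660"\n' "$dev" "$wwid" "$i"
    i=$((i+1))
  done
}

# Example WWIDs taken from this post; replace with your scsi_id output.
gen_asm_rules > /tmp/99-oracle-asmdevices.rules <<'EOF'
sdb 36000c299cae37d62af51ab7c55768959
sdc 36000c2989736710c1b2bd8efea742e61
EOF
cat /tmp/99-oracle-asmdevices.rules
```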

- Add the following to the "/etc/scsi_id.config" file to configure SCSI devices as trusted. Create the file if it doesn't already exist.

options=-g

- Execute below commands
/sbin/partprobe /dev/sdb1
/sbin/partprobe /dev/sdc1
/sbin/partprobe /dev/sdd1
/sbin/partprobe /dev/sde1
/sbin/partprobe /dev/sdf1
/sbin/partprobe /dev/sdg1

- On OL5 and OL6, reload the udev rules:
/sbin/start_udev

ls -lrt /dev/asm*

Now proceed with the same action plan on node 2.