Saturday, 30 June 2018

mv: cannot move `OPatch' to `OPatch_old': Permission denied

Applying the latest OPatch (patch 6880880) in GRID_HOME


[grid@dbapath.com patch]$ mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch_old
mv: cannot move `/u01/app/11.2.0/grid/OPatch/' to `/u01/app/11.2.0/grid/OPatch_old': Permission denied

You must either unlock GRID_HOME or move the OPatch directory as a superuser.

[oracle@grid_server1 ]$ cd /u01/app/11.2.0/grid/crs/install
[oracle@grid_server1 ]$ sudo perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Undefined subroutine &main::read_file called at /u01/app/11.2.0/grid/crs/install/crspatch.pm line 86.

If you hit an error like the one above, here is the workaround (per the MOS note "roothas.pl -patch or rootcrs.pl -patch Fails with 'Undefined subroutine'" [ID 1268390.1]):

# Take a backup of the file $GRID_HOME/crs/install/crsconfig_lib.pm

grid$ cd $GRID_HOME/crs/install

grid$ cp crsconfig_lib.pm crsconfig_lib.pm.bak

# Make the following change in the file crsconfig_lib.pm
# From:
#   my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR
# To:
#   my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR read_file
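
As a sketch, the same edit can be scripted with sed (assuming the qw( line matches exactly as shown above; keep the .bak copy in case it does not):

grid$ sed -i 's/qw(check_CRSConfig validate_olrconfig validateOCR/qw(check_CRSConfig validate_olrconfig validateOCR read_file/' crsconfig_lib.pm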

# Execute the relock:

[grid@dbapath.com ]$ cd /u01/app/11.2.0/grid/crs/install

[grid@dbapath.com ]$ sudo perl rootcrs.pl -patch -crshome /u01/app/11.2.0/grid
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
ACFS-9200: Supported
CRS-4123: Oracle High Availability Services has been started.
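
- Once Oracle High Availability Services is back up, a quick sanity check (standard crsctl command from the grid home; output omitted):

[grid@dbapath.com ]$ /u01/app/11.2.0/grid/bin/crsctl check crs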

Thursday, 28 June 2018

Quiescing RAC Databases


To quiesce a RAC database, use the ALTER SYSTEM QUIESCE RESTRICTED statement from one instance.

It is not possible to open the database from any instance while the database is in the process of being quiesced from another instance.

After all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement completes and the database is considered quiesced.
In a RAC environment, this statement affects all instances.
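
A minimal sketch of the round trip from one instance (QUIESCE RESTRICTED and UNQUIESCE are the standard statements; run them as a DBA user):

$ sqlplus / as sysdba
SQL> ALTER SYSTEM QUIESCE RESTRICTED;  -- waits until all non-DBA sessions are inactive
SQL> -- perform DBA-only maintenance here
SQL> ALTER SYSTEM UNQUIESCE;           -- return the database to normal operation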

Cold backups cannot be taken when the database is in a quiesced state, because the Oracle background processes may still perform updates for internal purposes even while the database is quiesced.
Also, the file headers of online data files continue to appear as if they are being accessed; they do not look the same as they would after a clean shutdown.

Wednesday, 27 June 2018

Mongo ReplicaSet

Hi guys, this is my first post on MongoDB, and today I'll give you a small demonstration of how to configure a replica set.

  • But the first thing is to get to know replica sets. 
  • What is a replica set? 
  • How does it work? Go through Google for these kinds of questions before you start.


Now we are going to configure the replica set.

=============
My Environment
=============

3 servers -> 1 primary, 2 secondaries
OS: RHEL 7
MongoDB version: 3.4

=========
# Host File
=========

- Make these entries in /etc/hosts on all members:

[root@db1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.219.130         mongodb1.com    db1
192.168.219.140         mongodb2.com    db2
192.168.219.141         mongodb3.com    db3
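
- A quick check that each member can reach the others by name (output omitted):

[root@db1 ~]# ping -c 1 db2
[root@db1 ~]# ping -c 1 db3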

==============
# Disable SELinux
==============

Check the SELinux status with the getenforce command.

[root@db1 ~]# getenforce
Enforcing

Set SELINUX=disabled in /etc/selinux/config, reboot, and check again.

[root@db1 ~]# cat /etc/selinux/config
SELINUX=disabled

[root@db1 ~]# getenforce
Disabled
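
If you don't want to reboot right away, you can also switch SELinux to permissive mode for the current session (permissive, not fully disabled):

[root@db1 ~]# setenforce 0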

==================
# Configure Firewalld
==================
In the first step, we already disabled SELinux. For security reasons, we will now enable firewalld on all nodes and open only the ports that are used by MongoDB and SSH.

- Install Firewalld with the yum command.

[root@db1 ~]# yum -y install firewalld

- Start firewalld and enable it to start at boot time.

[root@db1 ~]# systemctl start firewalld
[root@db1 ~]# systemctl enable firewalld

- Next, open your ssh port and the MongoDB default port 27017.

[root@db1 ~]# firewall-cmd --permanent --add-port=22/tcp
[root@db1 ~]# firewall-cmd --permanent --add-port=27017/tcp

- Reload firewalld to apply the changes.

[root@db1 ~]# firewall-cmd --reload
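
- Optionally, verify that both ports are open:

[root@db1 ~]# firewall-cmd --list-ports
22/tcp 27017/tcp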



=====================
# Mongo Configuration File
=====================

- Below are the configuration file parameters that are the same on all members:

/etc/mongod.conf
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0 # Listen on all interfaces so the other members can connect; restrict to specific IPs in production.

replication:
  replSetName: rs0
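
- Restart mongod on every member so the settings take effect (assuming the standard mongod service unit from the MongoDB RPM):

[root@db1 ~]# systemctl restart mongod
[root@db1 ~]# systemctl enable mongod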


=========================
# Initiate the MongoDB Replica Set
=========================
In this step, we will create the replica set. We will use the 'db1' server as the 'PRIMARY' node, and 'db2' and 'db3' as the 'SECONDARY' nodes.

Initiate the replica set from the db1 server with the query below.

[root@db1 ~]# mongo
> rs.initiate()

Make sure the 'ok' value is 1.

- Now add the 'db2' and 'db3' nodes to the replica set.

> rs.add("db2:27017")
> rs.add("db3:27017")

Check the output of each command and make sure there is no error; each should return { "ok" : 1 }.


- Next, check the replica set status with the rs query below.

> rs.status()

- Another query to check the status is:

> rs.isMaster()
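
- You can also check the member states non-interactively from the OS shell (a small sketch using mongo --eval; the member fields come from rs.status()):

[root@db1 ~]# mongo --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name + " : " + m.stateStr); })'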



==================
# Test the Replication
==================

Test the data set replication from the 'PRIMARY' instance 'db1' to the 'SECONDARY' nodes 'db2' and 'db3'.

In this step, we will try to write or create a new database on the 'PRIMARY' node 'db1',
then check if the replication is working by checking the database on 'SECONDARY' nodes 'db2' and 'db3'.

- Login to the 'db1' server and open mongo shell.

[root@db1 ~]# mongo

- Now create a new database 'lemp' and a new 'stack' collection in that database.

> use lemp
> db.stack.save({
      "desc": "LEMP Stack",
      "apps": ["Linux", "Nginx", "MySQL", "PHP"]
  })




- Next, go to the 'SECONDARY' node 'db2' and open the mongo shell.

[root@db2 ~]# mongo

Enable reading from the 'SECONDARY' node with the query 'rs.slaveOk()', and then check if the 'lemp' database exists on the 'SECONDARY' nodes.

> rs.slaveOk()
> show dbs
> use lemp
> show collections
> db.stack.find()
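
If replication is working, db.stack.find() on 'db2' returns the document saved on 'db1' (the ObjectId value will differ):

{ "_id" : ObjectId("..."), "desc" : "LEMP Stack", "apps" : [ "Linux", "Nginx", "MySQL", "PHP" ] }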


===================
# Check Replication Lag
===================

> rs.printReplicationInfo()
> rs.printSlaveReplicationInfo() # shows all the members with their replication lag

rs0:PRIMARY> rs.printSlaveReplicationInfo();
source: db2:27017
        syncedTo: Mon Jan 15 2018 21:49:07 GMT+0530 (IST)
        0 secs (0 hrs) behind the primary
source: db3:27017
        syncedTo: Thu Jan 01 1970 05:30:00 GMT+0530 (IST)
        1516033147 secs (421120.32 hrs) behind the primary


rs0:PRIMARY> rs.status();
Test demo: shut down the db3 member instance with db.shutdownServer(), then execute the commands above and watch the status change.
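
For reference, db.shutdownServer() has to be run from the mongo shell on db3 against the admin database:

[root@db3 ~]# mongo
> use admin
> db.shutdownServer()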



Monday, 18 June 2018

Udev Rule For ASM Disks

In the case of RAC configured on a VM: after adding the ASM disks on node 1, add them to udev rules so that the device names persist across reboots.

------------------
IN RHEL/OEL-5/6/7
------------------
Set up the ASM disks using udev. First, try to read the SCSI IDs of the disks:
/sbin/scsi_id -g -u -d /dev/sdb
/sbin/scsi_id -g -u -d /dev/sdc
/sbin/scsi_id -g -u -d /dev/sdd
/sbin/scsi_id -g -u -d /dev/sde
/sbin/scsi_id -g -u -d /dev/sdf
/sbin/scsi_id -g -u -d /dev/sdg

- The commands above did not return any scsi_id output for the udev rules. I have not tested much further, but I found some documentation and fixed this issue on the Linux versions above as follows.

- Add the rules below to "/etc/udev/rules.d/50-udev.rules" (in my case, the OS version is 5; replace each RESULT value with the scsi_id of the matching disk). For more detail, follow oracle-base.

# ASM DISK RULES
KERNEL=="sdb1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="SATA_VBOX_HARDDISK_VBd306dbe0-df3367e3_", NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"

- Now execute the commands again, and this time we get the scsi_id output:
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdb
36000c299cae37d62af51ab7c55768959
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdc
36000c2989736710c1b2bd8efea742e61
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdd
36000c295eae87b8a4e919b5b3b077827
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sde
36000c293bc54c062ff70f20ae8c3078e
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdf
36000c29759c3a92862de21e908f8516e
[root@rac1 rules.d]# /sbin/scsi_id -g -u -s /block/sdg
36000c292f86ba892199aa8fd6bb7ffe8

- Create the file "/etc/udev/rules.d/99-oracle-asmdevices.rules" and insert the rules below into it:
KERNEL=="sdb1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c299cae37d62af51ab7c55768959",NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c2989736710c1b2bd8efea742e61",NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c295eae87b8a4e919b5b3b077827",NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c293bc54c062ff70f20ae8c3078e",NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29759c3a92862de21e908f8516e",NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c292f86ba892199aa8fd6bb7ffe8",NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"

- Add the following to the "/etc/scsi_id.config" file to configure SCSI devices as trusted. Create the file if it doesn't already exist.

options=-g

- Execute the commands below to re-read the partition tables:
/sbin/partprobe /dev/sdb1
/sbin/partprobe /dev/sdc1
/sbin/partprobe /dev/sdd1
/sbin/partprobe /dev/sde1
/sbin/partprobe /dev/sdf1
/sbin/partprobe /dev/sdg1

# On OL5 and OL6, reload the udev rules:
/sbin/start_udev
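
# On OL7, start_udev no longer exists; reloading and triggering the rules with udevadm should have the same effect:
udevadm control --reload-rules
udevadm trigger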

- Check that the ASM devices are present:
ls -lrt /dev/asm*
Now proceed with the same steps for node 2.