Tuesday, 26 December 2023

Oracle 19c Feature Active Data Guard DML Redirection

Today, I will write about an extremely cool feature introduced in the Data Guard component of Oracle Database: Active Data Guard DML Redirection.

It was officially introduced in Oracle 19c, but it was also present in 18c via the underscore parameter "_enable_proxy_adg_redirect=true". In 19c, you enable it at the session level with:

SQL> alter session enable adg_redirect_dml;

With this feature, you can run DML operations on Active Data Guard standby databases. This enables you to run read-mostly applications, which occasionally execute DML, on the standby database. Imagine a reporting application that needs to create some staging tables: before, you could not run it against the ADG standby because that was a fully read-only environment. Now, this is no longer a problem.
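For illustration, once redirection is enabled as shown above, a DML issued in the standby session is transparently forwarded to the primary (the staging table RPT_STAGE and its columns are made up for this example):

-- Run in an Active Data Guard standby session with ADG_REDIRECT_DML enabled.
-- The insert is transparently redirected to and executed on the primary, and
-- the resulting redo is shipped back and applied to the standby.
SQL> insert into rpt_stage (run_id, run_date) values (1, sysdate);
SQL> commit;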

The only issue I see with this feature is that it is controlled by a session-level modifiable parameter. In other words, any database user can enable it for their own session on the standby side.

Drawback:

However, starting with 19c, any user connected to the Data Guard environment can potentially change data in production, as long as that user has the appropriate grants on the primary to do so. Relying on the standby being a "read-only" environment is therefore no longer enough: all the protections put in place on the primary should be extended to the Data Guard standbys.

Solution:

This seems to me to be a potentially insecure feature. I found two options on the internet to mitigate it, but I haven't tested them yet:

*.Totally disabling the database links on the standby database.

*.Creating a logon trigger on the primary that blocks connections coming from the standby database (a sketch follows below).
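For the second option, here is a minimal, untested sketch of such a logon trigger (the host name 'standby-host' is a placeholder; adapt and test it in your own environment before relying on it):

-- Hypothetical example only: rejects sessions whose client host is the standby server.
CREATE OR REPLACE TRIGGER block_standby_connections
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'HOST') = 'standby-host' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Connections from the standby are not allowed.');
  END IF;
END;
/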


Reference Doc:

Active Data Guard DML Redirection 19c (Doc ID 2465016.1)

Role of Standby Redo Logs: Why and How?

I came across an article about Standby Redo Logs, and I wanted to share it with you.

Purpose of Data Guard:

High Availability
Disaster Recovery
Data Protection
Workload Offloading & Testing


How Data Guard Works: Data Guard tracks the changes on the primary database and replicates them to the standby databases through redo logs/archived logs.


In 19c, Oracle introduced two exciting features:

*.Active DG DML Redirect, through which even DML statements can be executed directly on the standby database; they are applied to the primary database as well, which reduces the workload on the primary database.

*.Automatic standby recovery through the Flashback feature, by which we don't need to flashback/rebuild the standby database after a primary flashback.


Protection modes of standby databases:


Maximum Protection mode ensures that a transaction on the primary commits only after its redo has been written on the standby. There is no data loss in Max Protection mode.
Here the redo transport type is SYNC & AFFIRM.

Maximum Availability mode. In this mode too, a transaction on the primary commits only after its redo has been written on the standby database. But if the standby is not available due to an outage or a network issue, transactions on the primary continue without any impact. Usually there is no data loss. Here also the redo transport is SYNC & AFFIRM.

Maximum Performance mode, in which primary database transactions do not wait for standby replication, which improves the performance of the primary database. But there is a greater chance of data loss. Here the redo transport is ASYNC & NOAFFIRM.
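For reference, a minimal sketch of how these modes map to redo transport settings on the primary (the destination number and the DB_UNIQUE_NAME 'stby' are placeholders for your standby):

-- SYNC/AFFIRM transport, used for Max Protection and Max Availability.
SQL> alter system set log_archive_dest_2='SERVICE=stby SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';

-- ASYNC/NOAFFIRM transport, used for Maximum Performance.
SQL> alter system set log_archive_dest_2='SERVICE=stby ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';

-- Then switch the configuration-wide protection mode itself, for example:
SQL> alter database set standby database to maximize availability;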


Why use SRLs?

If you configure your standby for Maximum Protection, then Standby Redo Logs are required. Most implementations are configured for Maximum Performance because they do not want the performance hit Max Protect may impart on their application. 
Even if you are using Max Performance, you still want to implement SRLs. 

To understand why, we first need to start by examining how redo transport works when SRLs do not exist. 



1. A transaction writes redo records into the Log Buffer in the System Global Area (SGA).

2. The Log Writer process (LGWR) writes redo records from the Log Buffer to the Online Redo Logs (ORLs).

3. When the ORL switches to the next log sequence (normally when the ORL fills up), the Archiver process (ARC0) will copy the ORL to the Archived Redo Log.

4. Because a standby database exists, a second Archiver process (ARC1) will read from a completed Archived Redo Log and transmit the redo over the network to the Remote File Server (RFS) process running for the standby instance.

5. RFS sends the redo stream to the local Archiver process (ARCn).

6. ARCn then writes the redo to the archived redo log location on the standby server.

7. Once the archived redo log is complete, the Managed Recovery Process (MRP0) reads it and applies the redo to the standby database.



With SRLs, not only do we have more resources, we also have different choices, i.e. different paths to get the redo from the primary to the standby. The first choice depends on whether we are configured for Max Protect or Max Performance, as discussed below.



1. Just like without SRLs, a transaction generates redo in the Log Buffer in the SGA.

2. The LGWR process writes the redo to the ORL.

3. Are we in Max Protect or Max Performance mode?

4. If Max Protect, then we are performing SYNC redo transport. The Network Server SYNC process (NSSn) is a slave process to LGWR. It ships redo to the RFS process on the standby server.

5. If Max Performance mode, then we are performing ASYNC redo transport. The Network Server ASYNC process (NSAn) reads from the ORL and transports the redo to the RFS process on the standby server.

6. RFS on the standby server simply writes the redo stream directly to the SRLs.

7. How the redo gets applied depends on whether we are using Real Time Apply or not.

8. If we are using Real Time Apply, MRP0 will read directly from the SRLs and apply the redo to the standby database.

9. If we are not using Real Time Apply, MRP0 will wait for the SRL's contents to be archived; once archived, and once any defined delay has elapsed, MRP0 will apply the redo to the standby database.


 Step 3 above is the entire reason we want to use Standby Redo Logs. If we are in Max Protect (SYNC) mode, then SRLs are required; otherwise this process will not work. If we are in Max Performance mode, we still want SRLs. Why? Because they reduce data loss to seconds rather than minutes or hours; Max Performance mode with SRLs often achieves a near-zero data loss solution. The other big benefit of SRLs comes when Real Time Apply is being performed. As soon as the redo is in the SRL, it can be replayed on the standby database; we do not have to wait for a log switch to occur. Real Time Apply, only possible with SRLs, means the recovery time to open the standby database in a failover operation is as low as it can be.

 I often find that people operate under the misconception that if you configure for ASYNC (Max Performance), then only ARCn can transport redo from the primary to the standby. This used to be true in much older versions, but since 10g (maybe 9i), ARCn is used to transport redo only if SRLs are not configured. If SRLs are configured, then for ASYNC, NSAn is used to transport redo, and it does so in near real time. I only ever configure Max Performance mode in my standby configurations, and I often see only one or two seconds of data loss.

This is where two newer processes come in: the Network Server ASYNC process (NSA) and the Network Server SYNC process (NSS). Prior to 12c, the Log Writer Network Server process (LNS) was used instead of NSA and NSS.

 Without SRLs, I must wait for a log switch to occur on the primary before the redo can be transported. If it takes one hour for the log switch to occur, I can lose up to one hour's worth of data. If it takes six hours, I can lose up to six hours' worth. DBAs traditionally ameliorated this behavior by setting the ARCHIVE_LAG_TARGET initialization parameter on the primary. If the DBA sets this parameter to 3600 seconds, a log switch occurs at least once per hour. Even with this parameter, one hour of data loss is a lot for most companies, especially when you can do better.
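For reference, that parameter is set on the primary like this (3600 seconds, as in the example above):

-- Force a log switch at least once every 3600 seconds, so redo is shipped
-- at least hourly even without SRLs.
SQL> alter system set archive_lag_target=3600 scope=both;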

 All you have to do to enjoy data loss measured in a few seconds is to create Standby Redo Logs in your standby database. That's it. It couldn't be simpler.
 
 
Best Practices:

*.Make sure your ORL groups all have the same exact size. You want every byte in the ORL to have a place in its corresponding SRL.

*.Create the SRLs with the same exact byte size as the ORL groups. If they can’t be the same exact size, make sure they are bigger than the ORLs.

*.Do not assign the SRLs to any specific thread. That way, the SRLs can be used by any thread, even with Oracle RAC primary databases.

*.When you create SRLs in the standby, create SRLs in the primary. They will normally never be used. But one day you may perform a switchover operation. When you do switchover, you want the old primary, now a standby database, to have SRLs. Create them at the same time.

*.For an Oracle RAC primary database, create the number of SRLs equal to the number of ORLs in all primary instances. For example, if you have a 3-node RAC database with 4 ORLs in each thread, create 12 SRLs (3x4) in your standby. No matter how many instances are in your standby, the standby needs enough SRLs to support all ORLs in the primary, for all instances.
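Putting these practices together, here is a minimal sketch of adding SRLs on the standby (the group numbers and the 200M size are placeholders; match the size to your own ORLs, match the count to your primary's thread/ORL layout, and specify a file name if you are not using OMF):

-- Check the size and count of the primary's online redo logs first.
SQL> select thread#, group#, bytes/1024/1024 as size_mb from v$log;

-- Add standby redo logs on the standby, same size as the ORLs, with no THREAD clause.
SQL> alter database add standby logfile group 11 size 200M;
SQL> alter database add standby logfile group 12 size 200M;
SQL> alter database add standby logfile group 13 size 200M;
SQL> alter database add standby logfile group 14 size 200M;

-- Verify.
SQL> select group#, thread#, bytes/1024/1024 as size_mb, status from v$standby_log;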



Sunday, 10 September 2023

Jenkins: Role-Based Access Control (RBAC)

Jenkins provides Role-Based Access Control (RBAC) as a way to manage user permissions and access control more granularly. RBAC allows you to define roles with specific permissions and assign those roles to users and groups. Here's how you can set up Role-Based Access Control in Jenkins:


1. Install the Role-based Authorization Strategy Plugin:

To enable RBAC in Jenkins, you need to install the "Role-based Authorization Strategy" plugin. You can install it via the Jenkins plugin manager:


a. Go to the Jenkins dashboard.
b. Click on "Manage Jenkins."
c. Select "Manage Plugins."
d. Navigate to the "Available" tab.
e. In the "Filter" box, type "Role-based Authorization Strategy."
f. Check the checkbox next to the "Role-based Authorization Strategy" plugin.
g. Click "Install without restart."







2. Configure Global Roles:


a. After installing the plugin, go to "Manage Jenkins" > "Configure Global Security."

b. Under the "Access Control" section, select "Role-Based Strategy."


In my case the plugin is already installed, but you can follow the same steps as shown above.






3.Define Global Roles:

*.Scroll down to the "Role-Based Authorization Strategy" section and click on "Add global role."
*.Define the roles you want to create, giving them meaningful names (e.g., Administrator, Developer, QA, etc.).
*.For each role, specify the desired permissions by checking the corresponding checkboxes. Jenkins provides a list of common permissions you can assign.
*.Click "Add" to save the global roles.





4.Assign Users or Groups to Roles:


*.After defining global roles, you can assign users or groups to these roles.

*.Scroll down to the "Role to User/Group Mapping" section.

*.Select a role from the "Role" dropdown.

*.Enter the usernames or group names (if using groups) in the "User/Group Names" field. You can separate multiple names with commas.

*.Click "Add" to map users or groups to the role.

*.Repeat this step for each role and its corresponding users or groups.



5. Apply and Save:


    *.Click the "Apply" button to apply the RBAC configuration.

    *.Then, click the "Save" button to save the changes.


6. Test Permissions:

Log in as different users and verify that they have the expected permissions based on the roles you assigned to them.


7. Fine-Tune Role Permissions:

You can further refine role permissions by modifying the roles and their associated permissions as needed.


Role-Based Access Control allows you to manage access control in a more flexible and organized manner, making it easier to control who can do what within your Jenkins instance. It's especially useful in larger Jenkins installations with many users and complex access requirements.


Jenkins: Create a User

 To create a new user in Jenkins, you'll need administrative privileges.

 Follow these steps to add a user to your Jenkins instance:

1.Log into Jenkins: 

Open a web browser and access your Jenkins instance by navigating to your Jenkins URL (in my case, http://mongodb:8080).


2.Access User Management:

Click on "Manage Jenkins" in the Jenkins dashboard.



3.Access User Management Page:

Click on "Manage Users" to access the User Management page.

4.Create a New User:

Click on the "Create User" link.


5.Fill in User Details:

Fill out the user details for the new user, including:

Username: Choose a unique username for the new user.

Password: Set a secure password for the user. You can click the "Generate" button to have Jenkins generate a random password.

Full Name: Enter the user's full name.

Email Address: Provide the user's email address.

Click the "Create User" button.


6.Configure User Permissions:

By default, new users are given read-only access to Jenkins.

To grant additional permissions, click on the user's name on the User Management page.


7.Configure User Permissions:

Scroll down to the "Add user to roles" section.

Check the roles that you want to assign to the user. For example, you can give them "Overall" or "Job" permissions based on your requirements.


8.Save User Configuration:

Click the "Save" button to save the user's configuration.


9.Verify User Creation:

The new user is now created and should have access to Jenkins based on the assigned permissions.


10.Notify the User:

Share the username and password with the new user.

It's advisable to have users change their password upon their first login for security reasons.

Keep in mind that Jenkins also supports various authentication methods, including LDAP, Active Directory, and others. If you have an existing user directory (e.g., LDAP or Active Directory), you can configure Jenkins to use that directory for user authentication and authorization, which can simplify user management.

Remember to manage user permissions carefully to ensure that users have access to the appropriate Jenkins resources and functions while maintaining security and access control.


Jenkins: Change the Theme or Appearance

To change the theme or appearance of the Jenkins web interface, you can use Jenkins plugins that provide themes or styles. One such plugin is the "Simple Theme Plugin," which allows you to customize the CSS and JavaScript of the Jenkins UI. Here's how you can change the theme in Jenkins:


1. Log into Jenkins: 

    Open your web browser and access your Jenkins instance by navigating to (in my case) http://mongodb:8080


2. Install the Simple Theme Plugin:

    a. Click on "Manage Jenkins" in the Jenkins dashboard.
    b. Select "Manage Plugins" from the dropdown menu.
    c. Go to the "Available" tab.
    d. In the "Filter" box, type "Simple Theme Plugin."
    e. Check the checkbox next to "Simple Theme Plugin."
    f. Click the "Install without restart" button.






3. Create or Edit a Theme:

    a. After the plugin is installed, go back to the Jenkins dashboard and click on "Manage Jenkins" again.
    b. Select "Configure System."
    c. Scroll down to the "Theme" section. Here, you can add or edit themes.


4. Add/Edit a Theme:

    a. Click on the "Add" button to add a new theme or edit an existing one.
    b. Provide a name for your theme in the "Name" field.
    c. In the "URL of theme CSS" field, you can specify the URL of a CSS file that defines your custom styles. This file should be hosted on a web server accessible to your Jenkins server.
    d. You can also add JavaScript files to customize the behavior of the Jenkins UI.
    e. Click "Save" to save your theme.

CSS URL where you can download/use theme: http://afonsof.com/jenkins-material-theme/


5. Apply the Theme:

    a. Once you've created or edited a theme, you can apply it to your Jenkins instance.
    b. Go to the Jenkins dashboard and click on your username in the top right corner.
    c. Select "Configure" from the dropdown menu.
    d. In the "User Themes" section, select the theme you created or edited from the "Theme" dropdown list.
    e. Click "Save" to apply the theme.


6. View the New Theme:

Refresh your Jenkins dashboard, and you should see the changes applied by your custom theme.


Please note that modifying the Jenkins UI through custom themes can be powerful but should be used judiciously. 

Ensure that any changes you make do not compromise the functionality or security of your Jenkins instance. Additionally, be aware that Jenkins may undergo updates, and custom themes may need to be maintained accordingly.

Jenkins: Forgot Admin Password

 Prerequisites:

*.A superuser (root) or any user with sudo privileges.

*.vim/vi or any text editor to edit the configuration files.


Steps to recover a forgotten password in Jenkins

Step 1: Take a backup of the Jenkins configuration file and save it:

# cp -v /var/lib/jenkins/config.xml ./config.xml_0923_passwordreset


Step 2: Open the /var/lib/jenkins/config.xml configuration file and turn off the protection.


# vi /var/lib/jenkins/config.xml 

And now, find the <useSecurity> tag and change the value from true to false.

  <useSecurity>true</useSecurity>   ####Change this value to false

  

Step 3: After making the above changes, restart the Jenkins service. After restarting, make sure the Jenkins service is running and enabled.


[root@jenkins]# systemctl stop jenkins.service

[root@jenkins]# systemctl start jenkins.service


Step 4: Now open your browser and try to open Jenkins. You will not be asked to enter credentials.

After this, navigate to "Manage Jenkins" from the left-side menu and click on "Configure Global Security".



Step 5: Now navigate to Security Realm, select "Jenkins' own user database" from the dropdown menu, click the Save button, and return to the Dashboard.



Step 6: From the Dashboard, go to the "People" menu and select the user whose password you want to change. Update the password on that user's configuration page and save.






Step 7: Now go back to your server and either restore the backup configuration file that you made in Step 1, or change the <useSecurity> value back from "false" to "true" as edited in Step 2.


mv ./config.xml_0923_passwordreset /var/lib/jenkins/config.xml

or

vi /var/lib/jenkins/config.xml


Step 8: Now restart the Jenkins service and test your login to Jenkins using the browser.


[root@jenkins]# systemctl stop jenkins.service

[root@jenkins]# systemctl start jenkins.service


And this is how you recover the forgotten password of the admin user in Jenkins.


Jenkins Installation on RHEL 7

Note: Before installation, make sure the yum repository is configured (step 3 below), then proceed with the installation.

To install Jenkins on Red Hat Enterprise Linux 7 (RHEL 7), you can follow these steps. Jenkins is a widely used automation server for building, testing, and deploying code. Before you begin, make sure you have administrative access to your RHEL 7 server.

Here's how you can install Jenkins on RHEL 7:

1. Update Your System:

It's a good practice to update your system's package repository to ensure you have the latest software packages:

sudo yum update

2. Install Java:

Jenkins requires Java to run. You can install Java using the following command:

sudo yum install java-1.8.0-openjdk

Verify that Java has been installed correctly by running:

java -version


3. Add Jenkins Repository:

Jenkins provides an official repository for RHEL. You can add it to your system using the following commands:

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import http://pkg.jenkins.io/redhat-stable/jenkins.io.key


4. Install Jenkins:

Now that you've added the Jenkins repository, you can install Jenkins with the following command:

sudo yum install jenkins


5. Start and Enable Jenkins:
Once Jenkins is installed, you can start the Jenkins service and enable it to start on boot:

# systemctl start jenkins
# systemctl enable jenkins


6. Check Jenkins Status:
You can verify that Jenkins is running by checking its status:

# systemctl status jenkins

If Jenkins is running correctly, you should see a status message indicating that it's active and running.


7. Firewall Configuration:
If you have a firewall enabled on your RHEL 7 system, you need to open port 8080 to access the Jenkins web interface. You can do this with the following command:


# firewall-cmd --zone=public --permanent --add-port=8080/tcp
# firewall-cmd --reload


8. Access Jenkins Web Interface:
Open a web browser and access Jenkins by navigating to http://your-server-IP-or-domain:8080. You should see the Jenkins setup wizard.

To get the initial admin password required for setup, you can run:


# cat /var/lib/jenkins/secrets/initialAdminPassword

Copy and paste the generated password into the Jenkins setup wizard to complete the installation.


9. Install Plugins and Configure Jenkins:
Follow the Jenkins setup wizard to install the recommended plugins and configure your Jenkins instance according to your needs.


10. Start Using Jenkins:

Once the setup is complete, you can start using Jenkins for your automation and CI/CD needs.

That's it! You now have Jenkins installed and running on your RHEL 7 server. You can customize Jenkins further and install additional plugins as needed for your projects.

Useful links:

https://www.jenkins.io/doc/book/installing/linux/

https://sysadminxpert.com/how-to-install-jenkins-on-centos-7-or-rhel-7/

Monday, 21 August 2023

Multiplexing Controlfile in ASM

 Manual Method:

================

1. Check the current controlfile location:

SQL> show parameter control_files

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

control_files                        string      +DATA/racdb/controlfile/current.260.979677429


[oracle@rac1 ~]$ srvctl stop database -d racdb


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

2.

ASMCMD> lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  NORMAL  N         512   4096  1048576     30717    27208             1171           13018              0             N  DATA/

MOUNTED  NORMAL  N         512   4096  1048576      3069     2143              309             917              0             N  OCR/


ASMCMD> cd OCR

ASMCMD> mkdir racdb

ASMCMD> cd racdb

ASMCMD> mkdir controlfile


ASMCMD> pwd

+DATA/RACDB/CONTROLFILE


ASMCMD> cp +DATA/RACDB/CONTROLFILE/Current.260.979677429 +OCR/racdb/controlfile/current

copying +DATA/RACDB/CONTROLFILE/Current.260.979677429 -> +OCR/racdb/controlfile/current


ASMCMD> cd +OCR/racdb/controlfile/


ASMCMD> ls -l

Type         Redund  Striped  Time             Sys  Name

                                               N    current => +OCR/ASM/CONTROLFILE/current.256.1144794611


--asm_alert.log

Sun Aug 13 22:27:42 2023

SQL> /* ASMCMD */alter diskgroup /*ASMCMD*/ "OCR" add directory '+OCR/racdb'

SUCCESS: /* ASMCMD */alter diskgroup /*ASMCMD*/ "OCR" add directory '+OCR/racdb'

Sun Aug 13 22:27:56 2023

SQL> /* ASMCMD */alter diskgroup /*ASMCMD*/ "OCR" add directory '+OCR/racdb/controlfile'

SUCCESS: /* ASMCMD */alter diskgroup /*ASMCMD*/ "OCR" add directory '+OCR/racdb/controlfile'


--Output

ASMCMD> pwd

+OCR/racdb/controlfile

ASMCMD> ls -l

Type         Redund  Striped  Time             Sys  Name

                                               N    current => +OCR/ASM/CONTROLFILE/current.256.1144794611


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


3.

[oracle@rac1 ~]$ srvctl start database -d racdb -o nomount

SQL> show parameter control_files

alter system set control_files='+DATA/racdb/controlfile/current.260.979677429','+OCR/racdb/controlfile/current' scope=spfile sid='*';


srvctl stop database -d racdb


[oracle@rac1 ~]$ srvctl start database -d racdb

[oracle@rac1 ~]$ srvctl status database -d racdb

Instance racdb1 is running on node rac1

Instance racdb2 is running on node rac2


SQL> show parameter control_files

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

control_files                        string      +DATA/racdb/controlfile/current.260.979677429, +OCR/racdb/controlfile/current


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

4. Clean up the controlfile from the old diskgroup if you are moving it (rather than multiplexing).

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


===============================

RMAN CONTROLFILE Multiplexing:

===============================


1. Identify the location of the current controlfile:

SQL> select name from v$controlfile;


NAME

--------------------------------------------------------------------------------

+DATA/racdb/controlfile/current.260.979677429



2. Shutdown the database and start the instance:

    SQL> shutdown normal

    SQL> startup nomount


3. Use RMAN to duplicate the controlfile:

    $ rman nocatalog

    RMAN>connect target

    RMAN>restore controlfile to '<DISKGROUP_NAME>' from '<OLD_PATH>';

    RMAN> restore controlfile to '+OCR' from '+DATA/racdb/controlfile/current.260.979677429';


4. On the ASM instance, identify the name of the controlfile:

    Using ASMCMD:

    $ asmcmd

    ASMCMD> cd OCR

    ASMCMD> find -t controlfile . *


 Changing the current directory to the diskgroup where the controlfile was created will speed the search.

    Output:

ASMCMD> find -t controlfile . *

WARNING:option 't' is deprecated for 'find'

please use 'type'

+OCR/RACDB/CONTROLFILE/current.256.1144796825


5. On the database side:

## OPTIONAL STEP: spfile and init file manual changes.

            * Modify init.ora or spfile, adding the new path to parameter control_files.

            * if using init<SID>.ora, just modify the control_files parameter and restart the database.

            * If using spfile,


            1) startup nomount the database instance

            2) alter system set control_files='+DG1/P10R2/CONTROLFILE/backup.308.577785757','/oradata2/102b/oradata/P10R2/control01.ctl' scope=spfile;


            For RAC instance:

            alter system set control_files='+DATA/racdb/controlfile/current.260.979677429','+OCR/RACDB/CONTROLFILE/current.256.1144796825' scope=spfile sid='*';


            3) shutdown immediate



        * start the instance.


            Verify that the new control file has been recognized. If the new controlfile was not used, the complete procedure needs to be repeated.
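            A quick way to verify, once the database is back up, is to check the parameter and the view again:

            SQL> select name from v$controlfile;
            SQL> show parameter control_files

            -- Both entries (the original +DATA copy and the new +OCR copy) should be listed.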


6. 

[oracle@rac1 ~]$ srvctl start database -d racdb

[oracle@rac1 ~]$ srvctl status database -d racdb

Instance racdb1 is running on node rac1

Instance racdb2 is running on node rac2


Thursday, 3 August 2023

What is MongoDB?

MongoDB is an open-source database system. It's a document database with the scalability and flexibility that you want, and the querying and indexing that you need.

Its main features include support for dynamic schemas, map-reduce, and geospatial data.

  • MongoDB stores data in flexible, JSON-like documents, meaning fields can vary from document to document and data structure can be changed over time
  • The document model maps to the objects in your application code, making data easy to work with
  • Ad hoc queries, indexing, and real time aggregation provide powerful ways to access and analyze your data
  • MongoDB is a distributed database at its core, so high availability, horizontal scaling, and geographic distribution are built in and easy to use
  • MongoDB is free and open-source, published under the GNU Affero General Public License

 

For more detail about Key Feature: Click Here

Overview of MongoDB Collections and Documents

 MongoDB works on the concept of collections and documents.

 

RDBMS            MongoDB
Database         Database
Table            Collection
Tuple/Row        Document
Column           Field
Table Join       Embedded Documents
Primary Key      Primary Key (default key _id provided by MongoDB itself)

 

Database Server and Client

RDBMS            MongoDB
mysqld/Oracle    mongod (server)
mysql/sqlplus    mongo (client)

Advantages of MongoDB compared to SQL databases

 MongoDB's data format is based on JSON (JavaScript Object Notation). JSON allows for the transfer of data between web applications and servers using a human-readable format.

Before JSON, XML was used for this.
MongoDB stores JSON documents internally as BSON (Binary JSON). The binary format of BSON provides reliability and greater efficiency when it comes to speed and storage space.

 

JSON & BSON
JSON can only represent a subset of the types supported by BSON.

For More Details about JSON Click Here and BSON Click Here

MongoDB Installation

 MongoDB provides two editions of the software:

  1. Community Edition: free, with no MongoDB support included.
  2. Enterprise Edition: license based.

For More Detail about Installation and Packages and Environment Support : Click Here

For any query regarding MongoDB installation/upgrade, feel free to contact the Oracle-Help team.

Terminology with MongoDB

 

Database

Database is a physical container for collections. Each database gets its own set of files on the file system. A single MongoDB server typically has multiple databases.

Collection

A grouping of MongoDB documents. A collection is the equivalent of an RDBMS table. A collection exists within a single database. Collections do not enforce a schema.

Document

MongoDB stores data records as BSON documents. BSON is a binary representation of JSON documents, though it contains more data types than JSON.

CRUD Operations

 CRUD operations create, read, update, and delete documents, much like SELECT and DML statements in an RDBMS.

 

First, let's see how to create a database in MongoDB.

In MongoDB, "use DATABASE_NAME" is used to create a database. The command creates a new database if it doesn't exist; otherwise, it switches to the existing database.

Execute a simple command: > use TEST

Switch the database: > use admin

Check your currently selected database: > db

List the available databases with their sizes: > show databases / show dbs     <---- This command works directly only when authorization is disabled. Authorization is discussed later.

 

 

Create Collection 

There are two types of collections: normal collections and capped collections. A capped collection auto-purges and is used for things like auditing or the oplog (the oplog, which is part of replication, is discussed later). MongoDB is schema-less, so when you create a database it is highly recommended to create at least one collection in it; if you don't, then after you switch from the newly created database to another one, the new database is flushed, i.e. it is not visible when you check with the "show dbs" command.

Below is the syntax for creating a collection:

> db.createCollection(name, options);

> use Test
switched to db Test

> db.createCollection("student")

> db.createCollection("Audit", { capped : true, autoIndexId : true, size : 6142800, max : 10000 } )  <----- Creating a capped collection

You can check the created collection

> show collections      

 

Document Insert 

The command above works, but you don't actually need to create the collection explicitly. In MongoDB, collections are created automatically when you insert a document.

Syntax: > db.collection.insert({"field" : "value"});

> use Test

> db.student.insert({ "name" : "Aman"});

> db.student.insert({"name" : "Steve", "rollno" : 101, "contact" : "99xxxxxxxx"});

> db.student.insert({"name" : "Kate", "rollno" : 101, "contact" : "98xxxxxxxx"});

> db.student.insert({"name" : "Tarun", "rollno" : 102, "contact" : "97xxxxxxxx"});

> db.student.insert({"name" : "Dan", "rollno" : 103, "contact" : "96xxxxxxxx"});

> db.student.insert({"name" : "Tom", "rollno" : 104, "contact" : "95xxxxxxxx"});

> db.student.insert({"name" : "Mohan", "rollno" : 105, "contact" : "93xxxxxxxx"});

> db.student.insert({"name" : "Tin", "rollno" : 104, "contact" : "92xxxxxxxx"});

> db.student.insert({"name" : "Vince", "rollno" : 105, "contact" : "91xxxxxxxx"});

 

 

Document Read

You can just check the details available in collections.

Syntax: db.collection.find();

> db.student.find()    <---- Shows the complete collection

> db.student.findOne()    <---- Shows only a single document

> db.student.find().pretty()    <---- Displays the results in a formatted way

Options like sorting, order by, less than, greater than, etc. are discussed later.

 

 

Document Update

In MongoDB, you update documents in a collection with the update() method, which updates values in the existing document, while the save() method replaces the existing document with the document passed to it.

Syntax: db.collection.update(selection_data, update_data)

> db.student.update({"name" : "Kate"}, {$set: {"rollno": 104}});

> db.student.find();

 

Document Delete

To delete documents from a collection, the remove() method is used.

Syntax: > db.collection.remove(DELETION_CRITERIA)

> db.student.remove({ "rollno" : 104 })      <---- Removes all documents where rollno is 104

db.student.find()

> db.student.remove({ "rollno" : 105 }, 1)   <---- Removes only the first document where rollno is 105

db.student.find()

> db.student.remove({}) <---- Removes all documents from the collection

db.student.find()

For More Detail about the CRUD operations: Click Here

Drop Collection and Drop Database

 Dropping a collection from a database in MongoDB

db.collection.drop() is used to drop a collection from the database.

Syntax: db.collection.drop()

use TEST

show collections     <---- Check the available collections in your database TEST.

db.student.drop()    <---- Then check and verify that the student collection has been dropped from the TEST database.

 

Drop Database in mongoDB

The db.dropDatabase() command is used to drop an existing database.

Syntax: db.dropDatabase()

Example:

show dbs     <---- Check the database list and pick the one you want to drop.

use TEST      <---- Switch to the database you want to drop.

db.dropDatabase()    <---- Now the TEST database has been dropped; check and verify with the show dbs command.

Where Condition in MongoDB

 Below are some examples of where conditions in MongoDB.

pretty(): used just for formatting the output.
Equal To:
{<key>:<value>}
          > db.student.find({"name": "Mohan" }).pretty()
Less Than:
{<key>:{$lt:<value>}}
          > db.student.find({"marks":{$lt:50}}).pretty()
Less Than Equals:
{<key>:{$lte:<value>}}
          > db.student.find({"marks":{$lte:50}}).pretty()
Greater Than:
{<key>:{$gt:<value>}}
          > db.student.find({"marks":{$gt:50}}).pretty()
Greater Than Equals:
{<key>:{$gte:<value>}}
          > db.student.find({"marks":{$gte:50}}).pretty()
Not Equals:
{<key>:{$ne:<value>}}
          > db.student.find({"marks":{$ne:50}}).pretty()
AND Condition in MongoDB
Separate multiple key/value pairs with ',' and MongoDB treats them as an AND condition; the $and operator can also be used explicitly:
          > db.student.find({$and:[{"rollno":105},{"name": "Dan"}]})
OR Condition in MongoDB
          > db.student.find({$or:[{"rollno":104},{"rollno": 105}]})