Monday 27 August 2012

Monitoring Space in the Flash Recovery Area


SQL:


SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     16
Next log sequence to archive   18
Current log sequence           18


alter system set db_recovery_file_dest='D:\oracle\product\10.2.0\flashback_recovery_area\' scope=spfile;

alter system set db_recovery_file_dest_size=10G scope=spfile;

select * from v$flash_recovery_area_usage;
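
To see which file types are consuming that space, the usage view can be broken down against the configured limit. A minimal sketch (assuming 10g+, where both views exist; the GB arithmetic is illustrative):

-- FRA usage by file type, with the used percentage converted to gigabytes.
SELECT u.file_type,
       u.percent_space_used,
       u.percent_space_reclaimable,
       ROUND(d.space_limit * u.percent_space_used / 100 / 1024 / 1024 / 1024, 2) AS used_gb
FROM   v$flash_recovery_area_usage u
       CROSS JOIN v$recovery_file_dest d;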

RMAN:

rman target 'sys@orcl'
password:

RMAN> show all;

RMAN> backup database;
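
Space in the recovery area is reclaimed only when files become eligible for deletion, so it is worth asking RMAN what it considers obsolete. A minimal sketch (what, if anything, gets listed depends on your configured retention policy):

RMAN> report obsolete;

RMAN> delete obsolete;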


OR

SELECT
ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1)
AS PERCENT_FULL
FROM V$RECOVERY_FILE_DEST;

OR

SELECT
    NAME,
    TO_CHAR(SPACE_LIMIT, '999,999,999,999') AS SPACE_LIMIT,
    TO_CHAR(SPACE_LIMIT - SPACE_USED + SPACE_RECLAIMABLE, '999,999,999,999')
       AS SPACE_AVAILABLE,
    ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1)
       AS PERCENT_FULL
    FROM V$RECOVERY_FILE_DEST;

Oracle Applications (apps) e-Business Suite 11i architecture



Oracle Applications 11i eBusiness Suite (Oracle ERP) uses the n-tier architecture (the architecture made for the web). The tiers are explained below:

Desktop tier :  A client accesses Oracle Applications only through a Web browser, which must have Oracle JInitiator installed. Oracle JInitiator is the Oracle Java Virtual Machine used to run Oracle Forms applets.

Application (or middle) tier includes
·         Web Server:  listens for HTTP requests and returns an HTTP response to the client. The response usually consists of an HTML document, but can also be a raw file, an image, or some other type of document (defined by MIME types);

·         Concurrent Processing Server:  runs reporting and data-update programs (periodically or ad hoc). These programs (called concurrent programs) run on this server in the background so as not to interfere with normal operations. When a request is made, a row is inserted into the FND_CONCURRENT_REQUESTS table; the concurrent managers read these requests from the table and start the relevant concurrent programs (see the query sketch at the end of this section);

·         Administration Server:  contains scripts which are used to administer the Oracle Applications eBusiness Suite. From this server the following operations can be performed:
       1.       Applying patches (using adpatch);
       2.       Maintaining Oracle Applications (using adadmin);
       3.       Upgrading Oracle Applications from an earlier release to  11.5.x (using adaimgr);

·         Forms server:  includes the Forms listener and the Forms Runtime Engine. The Forms Runtime Engine maintains a connection to the database on behalf of the Java client. It uses the same Forms, Menus, and Libraries files that are used for running in client/server mode. No application code changes are required to deploy a legacy client/server application to the Internet;

·         Reports Server:   runs Oracle Applications reports and is always installed on the same node as the Concurrent Processing Server. However, these reports are administered separately from the concurrent processing reports;

·         Discoverer Server (optional):  is installed if Oracle Discoverer will be used.

Database tier :  The database tier contains the Oracle Database Server, which stores all the data needed by Oracle Applications, including the Oracle Applications online help. The database server communicates with the middle tier and does not communicate directly with the desktop tier.
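
As a concrete illustration of the concurrent request flow described above, a DBA can watch the request table directly. A minimal sketch, assuming the standard 11i APPS schema and its FND_CONCURRENT_REQUESTS table (the filter and column list are illustrative):

-- Concurrent requests submitted in the last day, with their
-- phase codes (P = Pending, R = Running, C = Completed, I = Inactive).
SELECT request_id,
       phase_code,
       status_code,
       requested_start_date
FROM   apps.fnd_concurrent_requests
WHERE  requested_start_date > SYSDATE - 1
ORDER  BY request_id;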



Thursday 9 August 2012

Log Miner



The LogMiner utility reads the online redo log files and archived log files for audit purposes; these files contain a record of the changes made to the database.
Using Log Miner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

LogMiner Configuration
There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest:

The source database is the database that produces all the redo log files that you want LogMiner to analyze.

The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.
For example, consider the following SQL statement:
INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);
Without the dictionary, LogMiner will display:
insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),
HEXTORAW('c229'),HEXTORAW('c3020b'));
The redo log files contain the changes made to the database or database dictionary.


LogMiner Dictionary Options
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

Using the Online Catalog
Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.


Extracting a LogMiner Dictionary to the Redo Log Files
Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

Extracting the LogMiner Dictionary to a Flat File
This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.


Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner. This is the most efficient and easiest option when LogMiner has access to the database to which the log files belong and no column definition changes are expected in the tables of interest. Specify the option when the LogMiner session is started, as follows:

begin
dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog );
end;
/

OR

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);


Extracting a LogMiner Dictionary to the Redo Log Files
To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

Use this option when LogMiner will be used against a database to which the log files don't belong, or when column definition changes are expected in the tables of interest. For this to work, you must embed the dictionary information into the redo logs at the database from which the redo logs for analysis are coming.

begin
dbms_logmnr_d.build( options => dbms_logmnr_d.store_in_redo_logs );
end;
/

This may embed the dictionary information in multiple log files. Use the following SQL statements to see where the dictionary embedding starts and where it ends:

SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';

SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

You will have to add these files, and the files between them, to the LogMiner analysis, as the sketch below illustrates.
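
A minimal sketch of finding that file range, assuming a single redo thread and that the most recent dictionary build is the one of interest:

-- Every archived log from the start of the latest dictionary build through
-- its end; add all of these with DBMS_LOGMNR.ADD_LOGFILE.
SELECT name, sequence#
FROM   v$archived_log
WHERE  sequence# BETWEEN
         (SELECT MAX(sequence#) FROM v$archived_log WHERE dictionary_begin = 'YES')
     AND (SELECT MAX(sequence#) FROM v$archived_log WHERE dictionary_end   = 'YES')
ORDER  BY sequence#;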

To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);


Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

1. Set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:

UTL_FILE_DIR = /oracle/database

2. Start the Database

SQL> startup

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/database/',DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

Redo Log File Options
To mine data in the redo log files, LogMiner needs information about which redo log files to mine.
You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

Automatically
If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner.
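
A minimal sketch (assuming the session runs on the source database; CONTINUOUS_MINE also needs a starting point, here an illustrative SCN):

-- Let LogMiner find and register the redo log files itself.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTSCN => 100, -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);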

Manually
Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.

Example: Finding All Modifications in the Current Redo Log File (CATALOG)
The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner.

Step 1 Specify the list of redo log files to be analyzed.
Specify the redo log files which you want to analyze.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/ica/log1.ora',OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oracle/ica/log2.ora',OPTIONS => DBMS_LOGMNR.ADDFILE);

Step 2 Start LogMiner.
Start LogMiner and specify the dictionary to use.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Step 3 Query the V$LOGMNR_CONTENTS view.
Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.

SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');

USR  XID        SQL_REDO / SQL_UNDO
---- ---------  ----------------------------------------------------
HR   1.11.1476  REDO: set transaction read write;

HR   1.11.1476  REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME",
                      "EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT",
                      "MANAGER_ID","DEPARTMENT_ID") values ('306','Mohammed','Sami','MDSAMI',
                      '1234567890',TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss'),
                      'HR_REP','120000','.05','105','10');
                UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '306' and
                      "FIRST_NAME" = 'Mohammed' and "LAST_NAME" = 'Sami' and
                      "EMAIL" = 'MDSAMI' and "PHONE_NUMBER" = '1234567890' and
                      "HIRE_DATE" = TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss')
                      and "JOB_ID" = 'HR_REP' and "SALARY" = '120000' and
                      "COMMISSION_PCT" = '.05' and "DEPARTMENT_ID" = '10' and
                      ROWID = 'AAAHSkAABAAAY6rAAO';

OE   1.1.1484   REDO: set transaction read write;

OE   1.1.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                      TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1799' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                      ROWID = 'AAAHTKAABAAAY9mAAB';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                      TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1799' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                      ROWID = 'AAAHTKAABAAAY9mAAB';

OE   1.1.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                      TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1801' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                      ROWID = 'AAAHTKAABAAAY9mAAC';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                      TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1801' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                      ROWID = 'AAAHTKAABAAAY9mAAC';

HR   1.11.1476  REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME",
                      "EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT",
                      "MANAGER_ID","DEPARTMENT_ID") values ('307','John','Silver','JSILVER',
                      '5551112222',TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss'),
                      'SH_CLERK','110000','.05','105','50');
                UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '307' and
                      "FIRST_NAME" = 'John' and "LAST_NAME" = 'Silver' and
                      "EMAIL" = 'JSILVER' and "PHONE_NUMBER" = '5551112222' and
                      "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss')
                      and "JOB_ID" = '105' and "DEPARTMENT_ID" = '50' and
                      ROWID = 'AAAHSkAABAAAY6rAAP';

OE   1.1.1484   REDO: commit;

HR   1.15.1481  REDO: set transaction read write;

HR   1.15.1481  REDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '205' and
                      "FIRST_NAME" = 'Shelley' and "LAST_NAME" = 'Higgins' and
                      "EMAIL" = 'SHIGGINS' and "PHONE_NUMBER" = '515.123.8080' and
                      "HIRE_DATE" = TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss')
                      and "JOB_ID" = 'AC_MGR' and "SALARY" = '12000' and
                      "COMMISSION_PCT" IS NULL and "MANAGER_ID" = '101' and
                      "DEPARTMENT_ID" = '110' and ROWID = 'AAAHSkAABAAAY6rAAM';
                UNDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME",
                      "EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT",
                      "MANAGER_ID","DEPARTMENT_ID") values ('205','Shelley','Higgins',
                      'SHIGGINS','515.123.8080',
                      TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss'),
                      'AC_MGR','12000',NULL,'101','110');

OE   1.8.1484   REDO: set transaction read write;

OE   1.8.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                      TO_YMINTERVAL('+12-06') where "PRODUCT_ID" = '2350' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and
                      ROWID = 'AAAHTKAABAAAY9tAAD';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                      TO_YMINTERVAL('+20-00') where "PRODUCT_ID" = '2350' and
                      "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and
                      ROWID = 'AAAHTKAABAAAY9tAAD';

HR   1.11.1476  REDO: commit;

Step 4 End the LogMiner session.
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();


Example of Mining Without Specifying the List of Redo Log Files Explicitly
The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.

Example : Mining Redo Log Files in a Given Time Range (CATALOG)
This example assumes that you want to use the data dictionary extracted to the redo log files.

Step 1 Determine the timestamp of the redo log file that contains the start of the data dictionary.
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#)
FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
NAME FIRST_TIME
-------------------------------------------- --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf 10-jan-2003 12:01:34

Step 2 Display all the redo log files that have been generated so far.
This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS
WHERE LOW_TIME > '10-jan-2003 12:01:34';
NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf

Step 3 Start LogMiner.
Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR
(STARTTIME => '10-jan-2003 12:01:34',
ENDTIME => SYSDATE,
OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS +
DBMS_LOGMNR.COMMITTED_DATA_ONLY +
DBMS_LOGMNR.PRINT_PRETTY_SQL +
DBMS_LOGMNR.CONTINUOUS_MINE);

Step 4 Query the V$LOGMNR_LOGS view.
This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;
NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf

Step 5 Query the V$LOGMNR_CONTENTS view.
To reduce the number of rows returned by the query, exclude all DML statements done in the sys or system schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.)
Note that all reconstructed SQL statements returned by the query are correctly translated.
SQL> SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO
FROM V$LOGMNR_CONTENTS WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'))
AND TIMESTAMP > '10-jan-2003 15:59:53';
USR XID SQL_REDO
----------- -------- -----------------------------------
SYS 1.2.1594 set transaction read write;
SYS 1.2.1594 create table oe.product_tracking (product_id number not null,
modified_time date,
old_list_price number(8,2),
old_warranty_period interval year(2) to month);
SYS 1.2.1594 commit;
SYS 1.18.1602 set transaction read write;
SYS 1.18.1602 create or replace trigger oe.product_tracking_trigger
before update on oe.product_information
for each row
when (new.list_price <> old.list_price or
new.warranty_period <> old.warranty_period)
declare
begin
insert into oe.product_tracking values
(:old.product_id, sysdate,
:old.list_price, :old.warranty_period);
end;
SYS 1.18.1602 commit;
OE 1.9.1598 update "OE"."PRODUCT_INFORMATION"
set
"WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
"LIST_PRICE" = 100
where
"PRODUCT_ID" = 1729 and
"WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
"LIST_PRICE" = 80 and
ROWID = 'AAAHTKAABAAAY9yAAA';
OE 1.9.1598 insert into "OE"."PRODUCT_TRACKING"
values
"PRODUCT_ID" = 1729,
"MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03',
'dd-mon-yyyy hh24:mi:ss'),
"OLD_LIST_PRICE" = 80,
"OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');
OE 1.9.1598 update "OE"."PRODUCT_INFORMATION"
set
"WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
"LIST_PRICE" = 92
where
"PRODUCT_ID" = 2340 and
"WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
"LIST_PRICE" = 72 and
ROWID = 'AAAHTKAABAAAY9zAAA';
OE 1.9.1598 insert into "OE"."PRODUCT_TRACKING"
values
"PRODUCT_ID" = 2340,
"MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07',
'dd-mon-yyyy hh24:mi:ss'),
"OLD_LIST_PRICE" = 72,
"OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');
OE 1.9.1598 commit;

Step 6 End the LogMiner session.
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Extracting the LogMiner Dictionary to a Flat File
This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.
This article is prepared using "Extracting the LogMiner Dictionary to a Flat File" as it requires a little bit of setup and I wanted to demonstrate that.

$ mkdir -p /u01/apps/logminer_dir

$ sqlplus / as sysdba

/*
The location where dictionary will be created should be set in utl_file_dir initialization
parameter.
*/

SQL> alter system set utl_file_dir='/u01/apps/logminer_dir' scope=spfile;
System altered.

shutdown immediate

startup

show parameter utl_file_dir

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir string /u01/apps/logminer_dir

Normally Oracle records only the change vectors in the redo log files, i.e. just the information required to reconstruct the operation at recovery time. If you want additional information in the redo logs, you need to enable supplemental logging before generating the log files that will be analyzed by LogMiner. Therefore, at the very least, we will enable minimal supplemental logging, as the following SQL statements show:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
--------
NO

/* Minimum supplemental logging is not enabled. */
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
--------
YES

/* Minimum supplemental logging is now enabled. */
SQL> alter system switch logfile;
System altered.

SQL> select g.group# , g.status , m.member
from v$log g, v$logfile m
where g.group# = m.group#
and g.status = 'CURRENT';
GROUP# STATUS MEMBER
---------- -------- -------------------------------------------
2 CURRENT +DG1/ora10g/onlinelog/group_2.264.718794539

/* Start fresh with a new log file, which is group 2. */
SQL> create table scott.test_logmnr (id number, name varchar2(10) );
Table created.

SQL> begin
dbms_logmnr_d.build
(
dictionary_filename => 'dictionary.dic',
dictionary_location => '/u01/apps/logminer_dir',
options => dbms_logmnr_d.store_in_flat_file
);
end;
/
PL/SQL procedure successfully completed.

/*
This has recorded the dictionary information into the file
"/u01/apps/logminer_dir/dictionary.dic".
*/

Now let's make a few user data changes by logging in as user SCOTT.
SQL> conn scott/tiger
connected.

SQL> insert into test_logmnr values (1,'TEST1');
1 row created.
SQL> insert into test_logmnr values (2,'TEST2');
1 row created.
SQL> commit;
Commit complete.
SQL> update test_logmnr set name = 'TEST';
2 rows updated.
SQL> commit;
Commit complete.
SQL> delete from test_logmnr;
2 rows deleted.
SQL> commit;
Commit complete.
After these changes let's log back in as sysdba and start a LogMiner session.
SQL> conn / as sysdba
connected.

SQL> select g.group# , g.status , m.member
from v$log g, v$logfile m
where g.group# = m.group#
and g.status = 'CURRENT';

GROUP# STATUS MEMBER
---------- -------- -------------------------------------------
2 CURRENT +DG1/ora10g/onlinelog/group_2.264.718794539


/*
Log group 2 is still current, which means all of the changes we made are in this redo group.
*/

SQL> begin
dbms_logmnr.add_logfile
(
logfilename => '+DG1/ora10g/onlinelog/group_2.264.718794539',
options => dbms_logmnr.new
);
end;
/
PL/SQL procedure successfully completed.


/*
DBMS_LOGMNR.ADD_LOGFILE builds up a list of redo log files for LogMiner analysis.
The first file is added with options => dbms_logmnr.new and the rest are added
with options => dbms_logmnr.addfile.
*/

SQL> select filename from v$logmnr_logs;
FILENAME
--------------------------------------------
+DG1/ora10g/onlinelog/group_2.264.718794539

/*
Dictionary view v$logmnr_logs contains the list of log files that are added
via DBMS_LOGMNR.ADD_LOGFILE.
*/

SQL> begin
dbms_logmnr.start_logmnr
(
dictfilename => '/u01/apps/logminer_dir/dictionary.dic',
options => dbms_logmnr.print_pretty_sql +
dbms_logmnr.no_sql_delimiter +
dbms_logmnr.ddl_dict_tracking
);
end;
/
PL/SQL procedure successfully completed.

DBMS_LOGMNR.START_LOGMNR starts a LogMiner session. It populates the dictionary view v$logmnr_contents with the contents of the log files in the list we built with DBMS_LOGMNR.ADD_LOGFILE.
v$logmnr_contents is accessible only to the session that started LogMiner, and only until DBMS_LOGMNR.END_LOGMNR is called. Several options to START_LOGMNR affect how the data is presented in v$logmnr_contents, e.g.:
dbms_logmnr.print_pretty_sql formats the SQL statements to enhance readability.
dbms_logmnr.no_sql_delimiter omits the ";" from the end of the SQL statements, which is useful when the SQL is meant to be re-executed in PL/SQL routines.
dbms_logmnr.ddl_dict_tracking tracks the DDL statements in the log files.

SQL> DROP TABLE myLogAnalysis;
Table dropped.

SQL> create table myLogAnalysis
as
select * from v$logmnr_contents;
Table created.

/*
It's always better to copy the contents of v$logmnr_contents to a user table and then
perform the analysis, as querying v$logmnr_contents is quite expensive. Moreover, the
user table can be indexed for better query performance (see the sketch below).
*/
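A minimal sketch of such an index, assuming the analysis filters on username and seg_owner as in the query further below (the index name is illustrative):

SQL> CREATE INDEX myloganalysis_idx ON myLogAnalysis (username, seg_owner);
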
SQL> begin
DBMS_LOGMNR.END_LOGMNR();
end;
/
PL/SQL procedure successfully completed.

DBMS_LOGMNR.END_LOGMNR() ends the LogMiner session, after which v$logmnr_contents is no longer accessible; our user table myLogAnalysis, which is a copy of v$logmnr_contents, is still available.

set lines 1000
set pages 500
column scn format 999999
column username format a8
column seg_name format a11
column sql_redo format a33
column sql_undo format a33

/*
The query below shows the changes made by the user SCOTT where the segment is either
TEST_LOGMNR or there is no segment at all (i.e. transaction start and end statements).
The output shows the system change number of each change, the segment on which the
change was made, the SQL statement to redo the change, and the SQL statement to undo
the change.
*/

select scn , seg_name , sql_redo , sql_undo
from myLogAnalysis
where username = 'SCOTT'
AND (seg_owner is null OR seg_owner = 'SCOTT');
SCN    SEG_NAME    SQL_REDO / SQL_UNDO
------ ----------- ---------------------------------------------------
639968 TEST_LOGMNR REDO: create table scott.test_logmnr
                         (id number,
                          name varchar2(10)
                         )

640039             REDO: set transaction read write

640039 TEST_LOGMNR REDO: insert into "SCOTT"."TEST_LOGMNR" values
                         "ID" = 1,
                         "NAME" = 'TEST1'
                   UNDO: delete from "SCOTT"."TEST_LOGMNR" where
                         "ID" = 1 and
                         "NAME" = 'TEST1' and
                         ROWID = 'AAAM7vAAEAAAALcAAA'

640041 TEST_LOGMNR REDO: insert into "SCOTT"."TEST_LOGMNR" values
                         "ID" = 2,
                         "NAME" = 'TEST2'
                   UNDO: delete from "SCOTT"."TEST_LOGMNR" where
                         "ID" = 2 and
                         "NAME" = 'TEST2' and
                         ROWID = 'AAAM7vAAEAAAALcAAB'

640044             REDO: commit

640047             REDO: set transaction read write

640047 TEST_LOGMNR REDO: update "SCOTT"."TEST_LOGMNR" set
                         "NAME" = 'TEST' where
                         "NAME" = 'TEST1' and
                         ROWID = 'AAAM7vAAEAAAALcAAA'
                   UNDO: update "SCOTT"."TEST_LOGMNR" set
                         "NAME" = 'TEST1' where
                         "NAME" = 'TEST' and
                         ROWID = 'AAAM7vAAEAAAALcAAA'

640047 TEST_LOGMNR REDO: update "SCOTT"."TEST_LOGMNR" set
                         "NAME" = 'TEST' where
                         "NAME" = 'TEST2' and
                         ROWID = 'AAAM7vAAEAAAALcAAB'
                   UNDO: update "SCOTT"."TEST_LOGMNR" set
                         "NAME" = 'TEST2' where
                         "NAME" = 'TEST' and
                         ROWID = 'AAAM7vAAEAAAALcAAB'

640050             REDO: commit

640052             REDO: set transaction read write

640058 TEST_LOGMNR REDO: delete from "SCOTT"."TEST_LOGMNR" where
                         "ID" = 1 and
                         "NAME" = 'TEST' and
                         ROWID = 'AAAM7vAAEAAAALcAAA'
                   UNDO: insert into "SCOTT"."TEST_LOGMNR" values
                         "ID" = 1,
                         "NAME" = 'TEST'

640058 TEST_LOGMNR REDO: delete from "SCOTT"."TEST_LOGMNR" where
                         "ID" = 2 and
                         "NAME" = 'TEST' and
                         ROWID = 'AAAM7vAAEAAAALcAAB'
                   UNDO: insert into "SCOTT"."TEST_LOGMNR" values
                         "ID" = 2,
                         "NAME" = 'TEST'

640066             REDO: commit
The following restrictions apply; these are not supported:
Data types LONG and LOB
Simple and nested abstract data types (ADTs)
Collections (nested tables and VARRAYs)
Object REFs
Index-organized tables (IOTs)

Examples follow for both dictionary build options, DBMS_LOGMNR_D.STORE_IN_FLAT_FILE and DBMS_LOGMNR_D.STORE_IN_REDO_LOGS.


----with flat file option
SQL> EXECUTE DBMS_LOGMNR_D.BUILD( DICTIONARY_FILENAME => 'mydictionaryname.ora',
DICTIONARY_LOCATION => '/usr/mydictionary/location');

----OR
SQL> EXECUTE DBMS_LOGMNR_D.BUILD( 'flatdictionary.ora', '/oracle/logminor/',
options => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);


--with redo logs option
SQL> EXECUTE DBMS_LOGMNR_D.BUILD( options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
If you get the following error:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
BEGIN DBMS_LOGMNR_D.BUILD(OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;
*
ERROR at line 1:
ORA-01347: Supplemental log data no longer found
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2562
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2617
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: at line 1
First check whether supplemental logging is enabled:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If it is not, enable it:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
(Adding or Removing the Redo Log Files for Analysis)
You can specify the redo log files to analyze as in the following examples (the database need not be open; it can be started with the MOUNT or NOMOUNT option).

---- for a new list, or the first log file.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/archivelogs/log_01_132_6576654328.ora',
OPTIONS => DBMS_LOGMNR.NEW);

-----for adding an additional log file.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/archivelogs/log_01_133_6576654328.ora',
OPTIONS => DBMS_LOGMNR.ADDFILE);

---- for removing a log file.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/archivelogs/log_01_133_6576654328.ora',
OPTIONS => DBMS_LOGMNR.REMOVEFILE);

(-----Starting LogMiner-------)
After you have created a dictionary file and specified which redo log files to analyze, you can start LogMiner and begin your analysis. Take the following steps:

----To start LogMiner with a flat file dictionary:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( DICTFILENAME =>'/oracle/database/dictionary.ora');

----To start LogMiner using the dictionary from the redo logs:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS =>DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

----To start LogMiner using the online catalog dictionary:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS =>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

----To start LogMiner using a starting and ending time:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( DICTFILENAME => '/oracle/flatdictionary.ora',
STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

-----To start LogMiner using an SCN range:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( DICTFILENAME => '/oracle/dictionary.ora',
STARTSCN => 100,
ENDSCN => 150);

----To start LogMiner using combinations of the following OPTIONS:
COMMITTED_DATA_ONLY
SKIP_CORRUPTION
DDL_DICT_TRACKING
NO_DICT_RESET_ONSELECT
DICT_FROM_ONLINE_CATALOG
DICT_FROM_REDO_LOGS

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING +
DBMS_LOGMNR.NO_DICT_RESET_ONSELECT +
DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
Now you should be able to do this:
SQL> SELECT count(*) FROM v$logmnr_contents;
SQL> DESC v$logmnr_contents

Querying LogMiner
(EXAMPLES of how to read from v$logmnr_contents)
To read the log file contents, run the following query:
SQL> COL table_name FORMAT a20
SQL> SELECT sql_redo FROM SYS.V$LOGMNR_CONTENTS;

To query the V$LOGMNR_CONTENTS view to see changes done by a specific user:
SQL> SELECT sql_redo, sql_undo FROM V$LOGMNR_CONTENTS
WHERE USERNAME = 'SAASUB' AND TABLE_NAME = 'EVENT';
SQL> SELECT rownum, sql_redo FROM V$LOGMNR_CONTENTS
WHERE sql_redo like '%SAABUD%' and sql_redo NOT like '%SYS%' and
rownum < 10;

--with time stamp
SQL> SELECT 'Row Number: ' || rownum, 'Date-Time: ' || to_char(timestamp,'DD-MM HH24:MI:SS'),
'Transaction on table: ' ||table_name || '--->' ||
SUBSTR(sql_redo,1,20) FROM V$LOGMNR_CONTENTS
WHERE sql_redo like '%SAABUD%' AND
sql_redo NOT like '%SYS%' AND rownum < 10;

To determine which tables were modified in a given time range:
SQL> SELECT seg_owner, seg_name, count(*) AS Hits
FROM V$LOGMNR_CONTENTS WHERE seg_name NOT LIKE '%$'
GROUP BY seg_owner, seg_name;
SQL> SELECT rownum, to_char(timestamp,'DD-MM HH24:MI:SS') as "Date/Time",
table_name, SUBSTR(sql_redo,1,40) FROM V$LOGMNR_CONTENTS
WHERE sql_redo like '%SAABUD%' AND sql_redo NOT like '%SYS%';

To determine who dropped any objects:
SQL> SELECT rownum, to_char(timestamp,'DD-MM HH24:MI:SS') as "Date/Time",
table_name, SUBSTR(sql_redo,1,40) FROM V$LOGMNR_CONTENTS
WHERE sql_redo like '%SAABUD%' AND
sql_redo NOT like '%SYS%' AND
UPPER(sql_redo) like '%DROP%';

Ending LogMiner
To end the LogMiner session:
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

Showing Only Committed Transactions
When you use the COMMITTED_DATA_ONLY option to DBMS_LOGMNR.START_LOGMNR, only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions, transactions that are in progress, and internal operations.
To enable this option, specify it when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);
SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
USERNAME, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE USERNAME != 'SYS'
AND (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'));

Filtering Data By Time :
To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure expects date values. Use the TO_DATE function to specify date and time, as in this example:

SQL> execute DBMS_LOGMNR.START_LOGMNR (DICTFILENAME =>'c:\dict\dictionary.ora',
STARTTIME => TO_DATE('20-Jul-2011 04:30:00', 'DD-MON-YYYY HH:MI:SS'),
ENDTIME => TO_DATE('20-Jul-2011 04:45:00', 'DD-MON-YYYY HH:MI:SS'));

If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read from start to end for each SELECT statement issued.
Timestamps should not be used to infer the ordering of redo records; infer the order by using the SCN, as sketched below.
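
A minimal sketch of ordering by SCN instead:

SQL> SELECT scn, timestamp, operation, sql_redo
FROM V$LOGMNR_CONTENTS
ORDER BY scn;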

Filtering Data By SCN :
To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters, as in this example:

SQL> execute DBMS_LOGMNR.START_LOGMNR (dictfilename=>'c:\dict\dictionary.ora', STARTSCN => 100, ENDSCN => 150);

The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters when all are specified, as sketched below.
If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read from start to end for each SELECT statement issued.
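
A minimal sketch of that precedence, reusing the dictionary path from above (the ranges are illustrative; the SCN range 100-150 wins):

SQL> execute DBMS_LOGMNR.START_LOGMNR (DICTFILENAME =>'c:\dict\dictionary.ora',
STARTSCN => 100, ENDSCN => 150,
STARTTIME => TO_DATE('20-Jul-2011 04:30:00', 'DD-MON-YYYY HH:MI:SS'),
ENDTIME => TO_DATE('20-Jul-2011 04:45:00', 'DD-MON-YYYY HH:MI:SS'));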

DEMONSTRATION: Mining the Current Redo Log Files (CATALOG)
Constants for ADD_LOGFILE Options flag

NEW
DBMS_LOGMNR.NEW purges the existing list of logfiles, if any, and places the specified logfile in the list of logfiles to be analyzed.

ADDFILE
DBMS_LOGMNR.ADDFILE adds this logfile to the list of logfiles to be analyzed. This only works if there was at least one invocation of ADD_LOGFILE with the Options parameter set to NEW.

REMOVEFILE
DBMS_LOGMNR.REMOVEFILE removes the logfile from the list of logfiles to be analyzed. This has no effect if the logfile was not previously added to the list.
===========================================================================
STEP 1:
conn sys/oracle@orcl as sysdba
(ALL LOGFILES)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO01.LOG',OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO02.LOG',OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO03.LOG',OPTIONS => DBMS_LOGMNR.ADDFILE);
===========================================================================
STEP 2:
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
===========================================================================
STEP 3:
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS WHERE username IN ('HR','SCOTT');
===========================================================================
STEP 4:
EXECUTE DBMS_LOGMNR.END_LOGMNR();
===========================================================================
STEP 5:
conn scott/tiger
create table scott.test_logmnr (id number,name varchar2(10) );
insert into test_logmnr values (1,'TEST1');
insert into test_logmnr values (2,'TEST2');
commit;
update test_logmnr set name = 'TEST';
commit;
delete from test_logmnr;
commit;
===========================================================================
STEP 6:
select * from hr.employees where department_ID=60;
update hr.employees set salary=salary+2000 where department_ID=60;
===========================================================================
STEP 7:
conn sys/oracle@orcl as sysdba
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO01.LOG',OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO02.LOG',OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO03.LOG',OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS WHERE username IN ('HR','SCOTT');
EXECUTE DBMS_LOGMNR.END_LOGMNR();
===========================================================================
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO01.LOG',OPTIONS => DBMS_LOGMNR.REMOVEFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO02.LOG',OPTIONS => DBMS_LOGMNR.REMOVEFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'D:\oracle\product\10.2.0\oradata\orcl\REDO03.LOG',OPTIONS => DBMS_LOGMNR.REMOVEFILE);
EXECUTE DBMS_LOGMNR.END_LOGMNR();
===========================================================================
===========================================================================
set echo on
set feed on
-- add a first file to the logminer list
begin
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84670.arc',
options=> dbms_logmnr.new);
end;
/
-- now add the rest of the files
begin
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84671.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84672.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84673.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84674.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84675.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84676.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84677.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84678.arc', options=> dbms_logmnr.addfile);
dbms_logmnr.add_logfile( logfilename=> '/usr/tmp/logmine/PLYX84679.arc', options=> dbms_logmnr.addfile);
end;
/
-- execute logminer
begin
dbms_logmnr.start_logmnr( dictfilename=>'/usr/tmp/dictionary.ora');
end;
/
-- ANALYZE using TIME
-- ==================
-- begin
-- dbms_logmnr.start_logmnr( DICTFILENAME=> '/oracle/dictionary.ora',
-- STARTTIME=> to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
-- ENDTIME=> to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
-- end;
-- /
--
-- ANALYZE using SCN
-- =================
-- begin
-- dbms_logmnr.start_logmnr( dictfilename=> '/oracle/dictionary.ora',
-- STARTSCN=> 100,
-- ENDSCN=> 150);
-- end;
-- /

Extracting the LogMiner Dictionary to a Flat File
1st TIME STEP

alter system set utl_file_dir='D:\logminer\' scope=spfile;

shutdown immediate

startup

show parameter utl_file_dir

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

execute dbms_logmnr_d.build('dictionary.dic','D:\logminer\',options=>dbms_logmnr_d.store_in_flat_file);

----Add logfile
execute dbms_logmnr.add_logfile (logfilename => 'D:\Archive\ARC_1_1_779021390.DBF',options => dbms_logmnr.new);
execute dbms_logmnr.add_logfile (logfilename => 'D:\Archive\ARC_1_2_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\Archive\ARC_1_3_779021390.DBF',options => dbms_logmnr.addfile);

select filename from v$logmnr_logs;

execute dbms_logmnr.start_logmnr(dictfilename=>'D:\logminer\dictionary.dic',options=>dbms_logmnr.print_pretty_sql+dbms_logmnr.no_sql_delimiter+dbms_logmnr.ddl_dict_tracking);

create table mylog as select * from v$logmnr_contents;

execute DBMS_LOGMNR.END_LOGMNR();

set lines 1000
set pages 500
column scn format 999999
column username format a8
column seg_name format a11
column sql_redo format a33
column sql_undo format a33
select scn , seg_name , sql_redo , sql_undo from mylog where username = 'SCOTT'
AND (seg_owner is null OR seg_owner = 'SCOTT');

==============================================================
2nd TIME STEP
conn scott/tiger

create table scott.test_logmnr (id number,name varchar2(10) );

insert into test_logmnr values (1,'TEST1');
insert into test_logmnr values (2,'TEST2');
commit;

update test_logmnr set name = 'TEST';
commit;

delete from test_logmnr;
commit;

conn / as sysdba
select max(FIRST_CHANGE#),name from v$archived_log group by name order by 2;
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_1_779021390.DBF',options => dbms_logmnr.new);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_2_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_3_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_4_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_5_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_6_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_7_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_8_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_9_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_10_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_11_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_12_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_13_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_14_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_15_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_16_779021390.DBF',options => dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_17_779021390.DBF',options => dbms_logmnr.addfile);

select filename from v$logmnr_logs;

execute dbms_logmnr.start_logmnr(dictfilename=>'D:\logminer\dictionary.dic',options=>dbms_logmnr.print_pretty_sql+dbms_logmnr.no_sql_delimiter+dbms_logmnr.ddl_dict_tracking);

drop table mylog;

create table mylog as select * from v$logmnr_contents;

execute DBMS_LOGMNR.END_LOGMNR();

set lines 1000
set pages 500
column scn format 999999
column username format a8
column seg_name format a11
column sql_redo format a33
column sql_undo format a33
select scn , seg_name , sql_redo , sql_undo from mylog where username = 'SCOTT'
AND (seg_owner is null OR seg_owner = 'SCOTT');

execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_40_779021390.DBF',options => dbms_logmnr.removefile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_41_779021390.DBF',options => dbms_logmnr.removefile);
execute dbms_logmnr.add_logfile (logfilename => 'D:\ARCHIVE\ARC_1_42_779021390.DBF',options => dbms_logmnr.removefile);


-----------------------------------------------------------------------------------------------------------

---- SIMPLE STEPS ----


-------- Create table by user SCOTT
create table abc (id number(10), name varchar2(10));
insert into abc values (1,'ARUN');
alter system switch logfile;
insert into abc values (2,'MOHAN');
alter system switch logfile;
insert into abc values (3,'KARAN');
alter system switch logfile;
update abc set name='TEST' where id=2;
alter system switch logfile;
delete from abc where id in (1,2);
alter system switch logfile;

Note: You often do not know exactly when, where, and what DML or DDL was performed on your database. Coordinate with the development team if possible; otherwise, estimate when the suspect DDL or DML (deletes, inserts, or drops of important objects) happened, follow the steps below to recover, register the archived logs covering that period with LogMiner, and supply the time range based on your best guess.
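
A minimal sketch of locating the archived logs that cover a guessed window (the bounds here are illustrative and match the ones used in the Step-3 query below):

SQL> SELECT name, first_time, next_time
FROM v$archived_log
WHERE next_time >= TO_DATE('26-01-2009 12:00:00','dd-mm-yyyy hh24:mi:ss')
AND first_time <= TO_DATE('26-01-2014 14:00:00','dd-mm-yyyy hh24:mi:ss')
ORDER BY first_time;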



-----------------------  Prerequisites ----------------------------

Supplemental logging must be enabled before the redo/archive logs to be analyzed are generated; this option puts additional information into those logs, which LogMiner will analyze later.

SQL>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; 
SQL>SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE; 

How do I do that? 

We need to do all of this as a SYS user. Otherwise, some special grants are required explicitly: the EXECUTE_CATALOG_ROLE role and the SELECT ANY TRANSACTION privilege, as sketched below.
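
A minimal sketch of those grants, assuming a hypothetical analyst account named LOGMNR_USER:

SQL> GRANT EXECUTE_CATALOG_ROLE TO logmnr_user;
SQL> GRANT SELECT ANY TRANSACTION TO logmnr_user;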




--------- Step-1: Add archived logs

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/archive_standby/1_325_837208662.dbf', DBMS_LOGMNR.NEW); 
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/archive_standby/1_326_837208662.dbf', DBMS_LOGMNR.ADDFILE); 
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/archive_standby/1_327_837208662.dbf', DBMS_LOGMNR.ADDFILE); 
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/archive_standby/1_328_837208662.dbf', DBMS_LOGMNR.ADDFILE); 
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/archive_standby/1_329_837208662.dbf', DBMS_LOGMNR.ADDFILE); 



---------Step-2: Start LogMiner with data dictionary information 

LogMiner requires data dictionary information to translate Object ID (kept in redo/archive logs) to Object Names when it returns data as a part of data analysis. 

The dictionary options are - 

1. Using the Online Catalog 
2. Extracting a LogMiner Dictionary to the Redo Log Files 
3. Extracting the LogMiner Dictionary to a Flat File 

I used the online catalog option as I could use the database during off-peak hours for log analysis. 

EXECUTE DBMS_LOGMNR.START_LOGMNR( - 
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG); 





----------- On our database, Streams is configured and running! To stop its LogMiner session we have to stop the capture process.
select capture_name, capture_user from dba_capture;





---------- Step-3: Query the LogMiner view to retrieve the desired information 
The main source of data is V$LOGMNR_CONTENTS. Just describe the view and query it as needed.

SELECT OPERATION, SQL_REDO, SQL_UNDO, TIMESTAMP 
FROM V$LOGMNR_CONTENTS 
WHERE table_name='ABC'
AND TIMESTAMP  BETWEEN TO_DATE('26-01-2009 12:00:00','dd-mm-yyyy hh24:mi:ss') AND TO_DATE('26-01-2014 14:00:00','dd-mm-yyyy hh24:mi:ss') ORDER BY TIMESTAMP; 




--------- Step-4: Close LogMiner 
EXECUTE DBMS_LOGMNR.END_LOGMNR();