Monday 29 August 2016

Data Guard Archive Tracing



- You can trace archived redo log activity on either the primary or the standby database by setting the LOG_ARCHIVE_TRACE initialization parameter:

 alter system set log_archive_trace=8;

- Trace files are written to the USER_DUMP_DEST location.

- On the primary database, the log_archive_trace parameter controls the output of the
ARCn (archiver),
FAL (fetch archived log), and
LGWR (log writer) background processes.
On the standby database, it traces the work of the
ARCn,
RFS (remote file server), and
FAL processes.

- You can specify any of 17 levels of archive log tracing.

 0: Disables archivelog tracing (default)
 1: Tracks archival of redo log file
 2: Tracks archival status of each archivelog destination
 4: Tracks archival operational phase
 8: Tracks archivelog destination activity
 16: Tracks detailed archivelog destination activity
 32: Tracks archivelog destination parameter modifications
 64: Tracks ARCn process state activity
 128: Tracks FAL (fetch archived log) server related activities
 256: Tracks RFS Logical Client
 512: Tracks LGWR redo shipping network activity
 1024: Tracks RFS Physical Client
 2048: Tracks RFS/ARCn Ping Heartbeat
 4096: Tracks Real Time Apply
 8192: Tracks Redo Apply (Media Recovery or Physical Standby)
 16384: Tracks redo transport buffer management
 32768: Tracks LogMiner dictionary

The levels are bit flags and can be combined by summing them; for example, if you specify Level 31 (1 + 2 + 4 + 8 + 16), the trace file will include trace information from Levels 1, 2, 4, 8 and 16.
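A quick sketch of combining levels, assuming an spfile (the value 136 = 8 + 128 is just an example):

-- Trace archived log destination activity (8) plus FAL server activity (128)
ALTER SYSTEM SET log_archive_trace = 136 SCOPE=BOTH;

-- The trace files are written to the user dump destination
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

-- Disable tracing again
ALTER SYSTEM SET log_archive_trace = 0 SCOPE=BOTH;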

Stopping a Data Pump Job


Alert log:
=======

kupprdp: master process DM00 started with pid=42, OS id=7353
         to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_TABLE_01', 'S2', 'KUPC$C_1_20160821122303', 'KUPC$S_1_20160821122303', 0);
kupprdp: worker process DW01 started with worker id=1, pid=25, OS id=7355
         to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_01', 'S2');

Solutions.
========


SQL> select job_name,state from dba_datapump_jobs;

JOB_NAME                       STATE
------------------------------ ------------------------------
SYS_EXPORT_FULL_02             EXECUTING
SYS_EXPORT_FULL_01             NOT RUNNING

Notes:

- Take note of the JOB_NAME and OWNER values above.
- In SQL*Plus, execute DBMS_DATAPUMP.STOP_JOB against a handle returned by DBMS_DATAPUMP.ATTACH:
exec DBMS_DATAPUMP.STOP_JOB(DBMS_DATAPUMP.ATTACH('JOB_NAME','OWNER'), 1, 0);
- '1' (immediate) aborts the job immediately.
- '0' (keep_master) drops the master table, so the job is removed from the job list and is not restartable.

SQL> exec DBMS_DATAPUMP.STOP_JOB(DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','SYSTEM'), 1, 0);
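Alternatively, a sketch using the expdp client's interactive mode (job name and credentials as in the example above); KILL_JOB aborts the job and removes the master table, so the job cannot be restarted:

$ expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> KILL_JOB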

COMPRESSED INCREMENTAL BACKUP

COMPRESSED INCREMENTAL ONLINE BACKUP PLUS ARCHIVELOG FILES AND AUTO DELETE OF ARCHIVELOG


INCREMENTAL LEVEL 0 BACKUP

RUN
{
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0
DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DISK;
}


INCREMENTAL LEVEL 1 BACKUP

RUN
{
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1
DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DISK;
}
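A sketch of running the level 1 backup non-interactively, e.g. from cron; ORACLE_HOME, ORACLE_SID, and the log path are assumptions for illustration:

#!/bin/sh
# Hypothetical environment -- adjust for your server
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH

rman target / log=/tmp/inc1_backup.log <<EOF
RUN
{
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1
DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DISK;
}
EOF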




Saturday 27 August 2016

SQL Tuning Advisor (Manually)

You can generate ADDM/ASH/AWR reports first to identify candidate SQL statements.
Step 0) To execute the SQL Tuning Advisor API, the user must be granted the ADVISOR privilege:
grant advisor to <user>;

Step 1) The first step in using the SQL Tuning Advisor is to create a tuning task using DBMS_SQLTUNE.CREATE_TUNING_TASK.
-- for a specific statement from AWR

SET SERVEROUTPUT ON

-- Tuning task created for specific a statement from the AWR.
DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          begin_snap  => 764,
                          end_snap    => 938,
                          sql_id      => '19v5guvsgcd1v',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => '19v5guvsgcd1v_AWR_tuning_task',
                          description => 'Tuning task for statement 19v5guvsgcd1v in AWR.');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

Step 2) -- or for a specific statement from the library cache

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_id      => 'fhahkc71k304u',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => 'fhahkc71k304u_tuning_task',
                          description => 'Tuning task for statement fhahkc71k304u.');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

Step 3) -- or for a specific statement supplied manually (see the sketch below).
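A minimal sketch for this case, assuming a hypothetical statement text and bind value -- adjust sql_text, bind_list, and user_name for your own statement:

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_text    => 'SELECT * FROM sales WHERE time_id > :t',
                          bind_list   => sql_binds(anydata.ConvertDate(SYSDATE - 30)),
                          user_name   => 'USER1',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => 'manual_tuning_task',
                          description => 'Tuning task for a manually supplied statement.');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/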
During the execution phase you may wish to pause and restart the task, cancel it, or reset it so that it can be re-executed:

-- Interrupt and resume a tuning task.
EXEC DBMS_SQLTUNE.interrupt_tuning_task (task_name => '19v5guvsgcd1v_AWR_tuning_task');
EXEC DBMS_SQLTUNE.resume_tuning_task (task_name => '19v5guvsgcd1v_AWR_tuning_task');

-- Cancel a tuning task.
EXEC DBMS_SQLTUNE.cancel_tuning_task (task_name => '19v5guvsgcd1v_AWR_tuning_task');

-- Reset a tuning task allowing it to be re-executed.
EXEC DBMS_SQLTUNE.reset_tuning_task (task_name => '19v5guvsgcd1v_AWR_tuning_task');

------------------------------------------------------------------
Step 4) Create a tuning task from AWR
------------------------------------------------------------------
Get the snapshot IDs for today:

sys@goldprod> select SNAP_ID, BEGIN_INTERVAL_TIME from dba_hist_snapshot where BEGIN_INTERVAL_TIME > trunc(sysdate) order by snap_id;

   SNAP_ID BEGIN_INTERVAL_TIME
---------- ---------------------------------------------------------------------------
      4042 28-APR-11 12.00.29.410 AM
      4043 28-APR-11 01.01.01.094 AM
      4044 28-APR-11 02.00.32.909 AM
      4045 28-APR-11 03.00.07.558 AM
      4046 28-APR-11 04.00.40.121 AM
      4047 28-APR-11 05.00.14.894 AM
      4048 28-APR-11 06.00.59.123 AM
      4049 28-APR-11 07.00.23.056 AM
      4050 28-APR-11 08.00.51.205 AM
      4051 28-APR-11 09.00.19.892 AM
      4052 28-APR-11 10.00.35.227 AM
      4053 28-APR-11 11.00.02.168 AM
      4054 28-APR-11 12.00.37.690 PM
      4055 28-APR-11 01.00.09.106 PM

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          begin_snap  => 4042,
                          end_snap    => 4055,
                          sql_id      => 'fhahkc71k304u',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => 'fhahkc71k304u_AWR_tuning_task',
                          description => 'Tuning task for statement fhahkc71k304u in AWR.');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

PL/SQL procedure successfully completed.

Step 5) execute the tuning task

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'fhahkc71k304u_AWR_tuning_task');
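While the task runs, you can check its progress with a quick query against the standard advisor view:

SELECT task_name, status
FROM   dba_advisor_log
WHERE  task_name = 'fhahkc71k304u_AWR_tuning_task';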


Step 6) report tuning task findings

SET LONG 100000;
SET PAGESIZE 1000
SET LINESIZE 300
SELECT DBMS_SQLTUNE.report_tuning_task('fhahkc71k304u_AWR_tuning_task') AS recommendations FROM dual;
SET PAGESIZE 24

Step 7) accept recommendations (applicable when the report recommends a SQL Profile)

execute dbms_sqltune.accept_sql_profile(task_name =>'fhahkc71k304u_AWR_tuning_task', replace => TRUE);

Step 8) verify that the SQL Profile is actually used.
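A sketch for this check (standard dictionary views; when the profile is picked up, the execution plan's Note section also reports "SQL profile ... used for this statement"):

-- Accepted profiles and their status
SELECT name, status, created FROM dba_sql_profiles;

-- Cursors currently running with a SQL Profile attached
SELECT sql_id, child_number, sql_profile
FROM   v$sql
WHERE  sql_profile IS NOT NULL;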

Wednesday 24 August 2016


Partition Table Expdp and Impdp



Oracle 11g Data Pump has several new features.

Here are the main ones:
- Compression
- Encryption
- Transportable
- Partition Option
- Data Options
- Reuse Dumpfile(s)
- Remap_table
- Remap Data

One of the most useful is the Partition option: when a partitioned table runs to many gigabytes, how do you transport individual partitions using EXPDP?
  
You can now export one or more partitions of a table without having to move the entire table.  On import, you can choose to load partitions as is, merge them into a single table, or promote each into a separate table. 

To understand the partition feature of Data Pump (expdp/impdp), let's set up the following scenario.

We will create two schemas:

1. User1 - owns the partitioned table SALES, which we will export.
2. User2 - the schema into which we will import the SALES table under the new name SALES_T.

1. Create users:

[oracle@DL-HOST01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Jun 18 13:15:44 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user user1 identified by user1;

User created.

SQL> create user user2 identified by user2;

User created.

SQL> grant connect, resource to user1, user2;

Grant succeeded.

2. Create the Data Pump directory.

SQL> CREATE OR REPLACE DIRECTORY EXPDP_DIR AS '/oracle/data1/expdp';
SQL> GRANT READ, WRITE ON DIRECTORY EXPDP_DIR TO user1, user2;

3. Create the partitioned table SALES.

SQL> conn user1/user1
Connected.

SQL> CREATE TABLE sales
( name varchar2(5),
time_id DATE)
PARTITION BY RANGE (time_id)
(PARTITION sales_AUG VALUES LESS THAN (TO_DATE('01-AUG-2016','dd-MON-yyyy')) TABLESPACE users,
PARTITION sales_SEP VALUES LESS THAN (TO_DATE('01-SEP-2016','dd-MON-yyyy')) TABLESPACE users,
PARTITION sales_OCT VALUES LESS THAN (TO_DATE('01-OCT-2016','dd-MON-yyyy')) TABLESPACE users,
PARTITION sales_NOV VALUES LESS THAN (TO_DATE('01-NOV-2016','dd-MON-yyyy')) TABLESPACE users,
PARTITION sales_DEV VALUES LESS THAN (TO_DATE('01-DEC-2016','dd-MON-yyyy')) TABLESPACE users );

Table created.
  
SQL> insert into sales values('prod1','01-AUG-2016');

1 row created.

SQL> insert into sales values('prod2','01-SEP-2016');

1 row created.

SQL> commit;

Commit complete.

SQL> insert into sales values('prod3','01-OCT-2016');

1 row created.

SQL> commit;
  

SQL> SELECT partitioned FROM user_tables WHERE table_name = 'SALES';

PAR
---
YES

Let's check:

SQL> conn user1/user1

Connected.

SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name = 'SALES';

PARTITION_NAME
------------------------------
SALES_AUG
SALES_DEV
SALES_NOV
SALES_OCT
SALES_SEP


EXPDP:

1. Export entire table including all partitions

[oracle@DL-HOST01 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales logfile=sales_table.log

Export: Release 11.2.0.3.0 - Production on Sat Jun 18 11:13:37 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER1"."SYS_EXPORT_TABLE_01":  user1/******** directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales logfile=sales_table.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "USER1"."SALES":"SALES_NOV"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_OCT"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_SEP"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_AUG"                    0 KB       0 rows
. . exported "USER1"."SALES":"SALES_DEV"                    0 KB       0 rows
Master table "USER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table.dmp
Job "USER1"."SYS_EXPORT_TABLE_01" successfully completed at 11:13:46


2. Export a specific partition of the table:

[oracle@DL-HOST01 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp tables=sales:SALES_NOV logfile=sales_table_partition.log

Export: Release 11.2.0.3.0 - Production on Sat Jun 18 12:31:06 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER1"."SYS_EXPORT_TABLE_01":  user1/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp tables=sales:SALES_NOV logfile=sales_table_partition.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "USER1"."SALES":"SALES_NOV"                5.421 KB       1 rows
Master table "USER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table_partition.dmp
Job "USER1"."SYS_EXPORT_TABLE_01" successfully completed at 12:31:13

IMPDP

Move the dump file to the target host (FTP, scp, etc.),

or, as in this case, load the data into another schema on the same database using REMAP_SCHEMA.

1. Import the entire partitioned table into a new schema under a new name.

[oracle@DL-HOST01 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append

Import: Release 11.2.0.3.0 - Production on Sat Jun 18 11:19:25 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER2"."SYS_IMPORT_FULL_01":  user2/******** directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "USER2"."SALES_T" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."SALES_T":"SALES_SEP"               5.421 KB       1 rows
. . imported "USER2"."SALES_T":"SALES_AUG"                   0 KB       0 rows
. . imported "USER2"."SALES_T":"SALES_DEV"                   0 KB       0 rows
. . imported "USER2"."SALES_T":"SALES_OCT"               5.421 KB       1 rows
. . imported "USER2"."SALES_T":"SALES_NOV"               5.421 KB       1 rows
Job "USER2"."SYS_IMPORT_FULL_01" successfully completed at 11:19:30

Let's check:

SQL> conn user2/user2

Connected.

SQL> select * from sales_t;

NAME  TIME_ID
----- ---------
prod1 01-AUG-16
prod2 01-SEP-16
prod3 01-OCT-16

We have successfully imported the entire table under the new name SALES_T.

2. Now import a single partition only.

[oracle@DL-HOST01 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append

Import: Release 11.2.0.3.0 - Production on Sat Jun 18 13:05:26 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER2"."SYS_IMPORT_FULL_01":  user2/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "USER2"."SALES_T" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."SALES_T":"SALES_NOV"               5.421 KB       1 rows
Job "USER2"."SYS_IMPORT_FULL_01" successfully completed at 13:05:28


Here we have successfully imported a single partition.

If a partition name is specified, it must be the name of a partition or subpartition in the associated table. 

Only the specified set of tables, partitions, and their dependent objects are unloaded.
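For instance, a sketch exporting two partitions of the SALES table above in a single job (dump and log file names are hypothetical):

expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_two_parts.dmp tables=sales:SALES_NOV,sales:SALES_OCT logfile=sales_two_parts.log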
  
When you use the partition option (PARTITION_OPTIONS) of Data Pump import, you can choose one of the following:

None - Tables are imported just as they were on the system the export was taken from (the default).

Departition - Each partition is promoted to an individual table rather than becoming a partition of a partitioned table.

Merge - Combines all partitions into a single non-partitioned table.
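For example, a sketch that merges all partitions of SALES into one non-partitioned table on import (same dump file as above; SALES_MERGED is a hypothetical target name):

impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_schema=user1:user2 remap_table=sales:sales_merged partition_options=merge logfile=sales_merge.log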