Wednesday, August 22, 2018

Oracle: Add SQL Hints Using SQL Patches


This is a lesser-known feature. Most of us are aware of SQL profiles for fixing a plan change, but SQL patches are far less known. They can be used to influence the execution plan by enforcing hints when changing the application code is not possible.

Please note that a SQL patch applies to a single SQL ID, unlike a SQL profile, which applies to a SQL signature. If the SQL text keeps changing (for example, literals instead of bind variables), we need to handle it through a SQL profile instead, as described later in this post.



Create SQL patch:

declare
   v_sql clob;
begin
   -- pull the SQL text of the statement we want to patch from AWR
   select sql_text into v_sql
     from dba_hist_sqltext
    where sql_id = 'bn7zqwkfgtr38';

   -- create the patch; the hint is applied whenever this text is parsed
   sys.dbms_sqldiag_internal.i_create_patch(
      sql_text  => v_sql,
      hint_text => 'DYNAMIC_SAMPLING(4)',
      name      => 'user_extents_patch');
end;
/
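
From 12.2 onwards the same functionality is exposed as the documented function DBMS_SQLDIAG.CREATE_SQL_PATCH. A minimal sketch, assuming the 12.2+ signature (including the overload that takes a SQL ID directly):

declare
   v_patch_name varchar2(128);
begin
   -- assumes the 12.2+ overload that accepts a SQL ID instead of the SQL text
   v_patch_name := dbms_sqldiag.create_sql_patch(
                      sql_id    => 'bn7zqwkfgtr38',
                      hint_text => 'DYNAMIC_SAMPLING(4)',
                      name      => 'user_extents_patch');
   dbms_output.put_line('created patch: ' || v_patch_name);
end;
/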


Listing commonly used hints in a patch (a combined example follows after this list):
--> hint_text => 'IGNORE_OPTIM_EMBEDDED_HINTS',
--> hint_text => 'PARALLEL(big_table,10)',
--> hint_text => 'PARALLEL(8)',
--> hint_text => 'OPT_PARAM("optimizer_index_caching" 80)',
--> hint_text => 'OPT_PARAM("optimizer_index_cost_adj" 1)',
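
Several hints can normally be combined in one patch by separating them with spaces inside a single hint_text string. A sketch reusing the v_sql variable from the block above; the table name and hint combination are purely illustrative:

   sys.dbms_sqldiag_internal.i_create_patch(
      sql_text  => v_sql,
      hint_text => 'FULL(big_table) PARALLEL(big_table,8)',
      name      => 'multi_hint_patch');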


Drop SQL patch:

exec dbms_sqldiag.drop_sql_patch(name => 'user_extents_patch');
PL/SQL procedure successfully completed.



Check SQL patch:

--> select name, status, created, sql_text from dba_sql_patches where name = 'user_extents_patch';

--> explain plan:
Note ----- - SQL patch "PATCH_gz85dtvwaj4fw" used for this statement
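
The same note appears in DBMS_XPLAN output, which is a quick way to confirm the patch is actually being picked up (a sketch; the sql_id and child number are placeholders):

select * from table(dbms_xplan.display_cursor('bn7zqwkfgtr38', 0));
-- look for: SQL patch "user_extents_patch" used for this statement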



How to add a hint when the SQL ID changes:

When there are multiple SQL IDs (for example with bind-aware or literal SQL), we have the two options below.

Option 1)
Modify the SQL generated by the COE script to add the hint, or add the hint
to the create-SQL-profile statement. Optionally, you can add the hint to an explain plan
taken with the OUTLINE format and use that same outline to generate the SQL profile:


DECLARE
     l_sql               clob;
     BEGIN
     l_sql := q'!select su_pk,su_name,su_comment,inner_view.maxamount from
                 t_supplier_su,
                 ( select max(or_totalamount) maxamount,su_fk from t_order_or group by su_fk ) inner_view
                 where t_supplier_su.su_pk = inner_view.su_fk(+) and t_supplier_su.su_name is not null!';
      dbms_sqltune.import_sql_profile( sql_text => l_sql, 
                                     name => 'SQLPROFILE_01',
                                     profile => sqlprof_attr
(q'!USE_HASH_AGGREGATION(@"SEL$639F1A6F")!',
             q'!FULL(@"SEL$639F1A6F" "T_ORDER_OR"@"SEL$2")!',
             q'!USE_NL(@"SEL$1" "INNER_VIEW"@"SEL$1")!',
             q'!LEADING(@"SEL$1" "T_SUPPLIER_SU"@"SEL$1" "INNER_VIEW"@"SEL$1")!',
             q'!NO_ACCESS(@"SEL$1" "INNER_VIEW"@"SEL$1")!',
             q'!FULL(@"SEL$1" "T_SUPPLIER_SU"@"SEL$1")!',
             q'!OUTLINE(@"SEL$1")!',
             q'!OUTLINE(@"SEL$2")!',
             q'!OUTLINE_LEAF(@"SEL$1")!',
             q'!PUSH_PRED(@"SEL$1" "INNER_VIEW"@"SEL$1" 1)!',
             q'!OUTLINE_LEAF(@"SEL$639F1A6F")!',
             q'!ALL_ROWS!',
             q'!DB_VERSION('11.2.0.3')!',
             q'!OPTIMIZER_FEATURES_ENABLE('11.2.0.3')!',
             q'!IGNORE_OPTIM_EMBEDDED_HINTS!'),
             force_match => true );
     end;
     /
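
After the profile is created, a quick check that it exists and is force matching (assuming the profile name used above):

select name, category, status, force_matching
  from dba_sql_profiles
 where name = 'SQLPROFILE_01';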


Option 2 )

This is undocumented method to update sqlobj$ which your client will not allow   . First create a sql profile with any available

sql plan . Then update sqlobj$  to add int to sql profile

declare
   pln_sql_id          varchar2(20) := '4sdd3343222';
   pln_plan_hash_value number       := 2949544139;
   orig_sql_id         varchar2(20) := '4sdd3343222';
   new_prof_name       varchar2(20) := 'SQL_PROFILE_2';
   ar_profile_hints    sys.sqlprof_attr;
   cl_sql_text         clob;
begin
   -- extract the outline hints of the chosen plan from OTHER_XML
   select extractvalue(value(d), '/hint') as outline_hints
     bulk collect into ar_profile_hints
     from xmltable('/*/outline_data/hint'
            passing (select xmltype(other_xml) as xmlval
                       from v$sql_plan
                      where sql_id = pln_sql_id
                        and plan_hash_value = pln_plan_hash_value
                        and other_xml is not null)) d;

   -- get the full SQL text of the original statement
   select sql_fulltext into cl_sql_text
     from v$sql
    where sql_id = orig_sql_id;

   -- create a force-matching SQL profile from those hints
   dbms_sqltune.import_sql_profile(
      sql_text    => cl_sql_text,
      profile     => ar_profile_hints,
      name        => new_prof_name,
      force_match => true);
end;
/

select comp_data from sqlobj$data where signature=(select signature from sqlobj$ where name='SQL_PROFILE_2');




- For 10.2, use the following SQL to get the profile hints:
-- select attr#, attr_val from sqlprof$attr where signature = (select signature from sqlprof$ where sp_name = 'SQL_PROFILE_2');

-- 11g: replace the stored hints for this profile only (the new COMP_DATA value is omitted here)
update sqlobj$data set
comp_data = ''
where signature = (select signature from sqlobj$ where name = 'SQL_PROFILE_2');
1 row updated.
--
-- For 10.2 use the below update statement:
-- update sqlprof$attr set ATTR_VAL='FULL(@"SEL$1" "EMP"@"SEL$1")
-- NOPARALLEL(@"SEL$1" "EMP"@"SEL$1")' where attr#=5 and
-- signature='784334333455334';
--
commit;
alter system flush shared_pool;


Export / import SQL patch:

SQL> select name from dba_sql_patches;
SQL> exec dbms_sqldiag.create_stgtab_sqlpatch('SQLPATCH_STAGE','SYSTEM');
SQL> exec dbms_sqldiag.pack_stgtab_sqlpatch(staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

-- export the staging table from the source, import it on the target (Data Pump sketch below), then:

SQL> exec dbms_sqldiag.unpack_stgtab_sqlpatch(replace=>true, staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');
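
The staging table itself can be moved with Data Pump; a rough sketch (directory, dump file name and credentials are placeholders):

$ expdp system directory=DATA_PUMP_DIR dumpfile=sqlpatch_stage.dmp tables=SYSTEM.SQLPATCH_STAGE
$ impdp system directory=DATA_PUMP_DIR dumpfile=sqlpatch_stage.dmp tables=SYSTEM.SQLPATCH_STAGE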




Reference :

How to Create a SQL Patch to add Hints to Application SQL Statements (Doc ID 1931944.1)


Wednesday, January 27, 2016

RMAN -- Cookbook









########################################################################
########################################################################


Enable & Disable Block Change Tracking
With BCT active, only the changed blocks are tracked and need to be read during incremental backups:

SQL>alter database enable block change tracking using file '/rman_bkups/change.log';
SQL>ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+ASM_QGISC_DATA_DG1';

Disabling Block change tracking:

SQL>alter database disable block change tracking;


SQL> select filename, status, bytes
  2  from   v$block_change_tracking;

STATUS      FILENAME                            BYTES
---------   ----------------------------   ----------
ENABLED     /dba/backup/01_mf_yzmrr7.chg    10,000,000




Block change tracking records the blocks modified since the last backup and stores this information in the block
change tracking file. During backups, RMAN uses this file to identify the specific blocks
(changed blocks) that must be backed up.

Benefits: BCT improves RMAN backup performance because RMAN no longer needs to scan whole datafiles
to detect changed blocks. It only reads the changed blocks identified through the block change tracking file
and backs up those blocks.

Whenever data blocks change, the Change Tracking Writer (CTWR) background process tracks the changed
blocks in a private area of memory.

When a commit is issued against the data block, the block change tracking information is copied
to a shared area in Large Pool called the CTWR buffer.
During the checkpoint, the CTWR process writes the information from the CTWR RAM buffer to the block
change-tracking file.

To see the status of BCT
SQL>select status from v$block_change_tracking;

To Enable BCT
SQL>alter database enable block change tracking;

OR

SQL>alter database enable block change tracking USING FILE '/u01/app/oracle/bct/rman_bct.f' reuse;

The "USING FILE os_file_name" syntax allows you to define the location of the change tracking file on the OS; you can omit this clause if OMF is enabled.

TO disable BCT
SQL>alter database disable block change tracking;

To Monitor BCT
SQL>select filename,status, bytes from v$block_change_tracking;

To view the size of the CTWR

SQL>select * from v$sgastat where name like 'CTWR%';
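
Once BCT is in use, you can check how much of each datafile the incremental backups actually had to read. A sketch against V$BACKUP_DATAFILE (column names as of 11g); for quiet files only a few percent should be read:

select file#,
       used_change_tracking,
       round(blocks_read / datafile_blocks * 100, 2) pct_read
  from v$backup_datafile
 where incremental_level > 0
 order by completion_time desc;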



########################################################################
Parallel backup of same datafile – 11g  ########################################################################

backup as COMPRESSED BACKUPSET section size 500M tag "tdybkp" database;
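
The same section-size approach also works with explicitly allocated channels, letting several channels work on one large datafile in parallel. A sketch; the channel count, section size and file number are arbitrary:

RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  BACKUP SECTION SIZE 500M DATAFILE 5;
}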





########################################################################
########################################################################


select CONTROLFILE_TYPE from v$database ;
select FUZZY,count(*) from v$datafile_header group by fuzzy ;


set lines 187
set numwidth 20
set pages 999
alter session set nls_date_format='DD-MON-YYYY:HH24:MI:SS';
select * from v$recover_file;
select min(fhrba_Seq) min_seq, max(fhrba_Seq) max_seq_required from x$kcvfh;
select distinct fuzzy from v$datafile_header;



alter session set nls_date_format='YYYY/MON/DD hh24:mi:ss';

select checkpoint_time,fuzzy,count(*),status
from ( select checkpoint_time,fuzzy,status
       from v$datafile_header
       union all
       select controlfile_time,'CTL',null from v$database)
group by checkpoint_time,fuzzy,status;





########################################################################
read only tablespace restore  ########################################################################



1)  Take at least one backup of the tablespace after it has been made read only; thereafter we can use the SKIP READONLY option to exclude these tablespaces from the daily or weekly database backups.
2)  While doing a restore we need to use the CHECK READONLY keywords, otherwise by default the read only tablespaces will not be restored and the recovery will also bypass them. Subsequent attempts to open the database will fail.

RESTORE DATABASE CHECK READONLY;
and
RECOVER DATABASE CHECK READONLY;


3) The BACKUP OPTIMIZATION feature of Oracle RMAN identifies files not changed since the last backup as files for which the backup can be "optimized" -- i.e. *skipped*. A combined example follows below.
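
Putting the pieces together, a combined sketch (schedules and retention left out):

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> BACKUP DATABASE SKIP READONLY;

-- at restore/recover time:
RMAN> RESTORE DATABASE CHECK READONLY;
RMAN> RECOVER DATABASE CHECK READONLY;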






########################################################################
Rman New features ########################################################################



RMAN 9i new features:
1) configure option for persistent configuration
2) block media recovery
3) "PLUS ARCHIVELOG " to take backup of archivelogs
4) "NOT BACKED UP [SINCE TIME….]. " -- to take backup of not been backed up
5) "recover database allow 2 corruption" -- to allow certain number of corruptions while recovery
6) trial recovery
7) REPORT OBSOLETE RECOVERY WINDOW OF 7 DAYS  -- report obsolete and delete obsolete
8) crosscheck backup
9) show command
10)retention period / recovery window .
11) controlfile auto backup
12)  RMAN now supports the multiple block sizes for tablespaces
13) skip of tablespace .




10g RMAN new features:
1) scn level backup -- to take after certain scn
2) Fast incremental backup -- enabling block change tracking file . V$BLOCK_CHANGE_TRACKING
   alter database enable block change tracking using file 'C:/changetracking/chg01.dbf';
3) drop database including backups;  -- dropping database through rman
4) Automatic Channel Failover in case of one channel failure . error recorded in V$RMAN_OUTPUT
5) BACKUP DURATION 2:00 TABLESPACE users; -- "BACKUP......DURATION " option .
   replaces RATE and READRATE in previous version of RMAN.
6) backup duration 2:00 partial tablespace users filesperset 1;
   BACKUP.....DURATION.....PARTIAL  option
7) BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;
   BACKUP.....DURATION.....MINIMIZE LOAD  option
8) backup duration 2:00 minimize time tablespace users;
   BACKUP....DURATION....MINIMIZE TIME.
9) BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
  new compression option

10) backup copy of database;
11) catalog :  backupset
12) configure archive log deletion policy to ensure that archives applied on standby




11g RMAN new features:
1) recovery advisor .(list failure , advice failure , repair failure preview , repair failure.
2) validate database; -- checks database for physical corruption
3) multiple channels backing up a single datafile:
   In 10g each datafile is backed up by only one channel. In Oracle Database 11g RMAN,
   multiple channels can back up one datafile in parallel by breaking the datafile
   into chunks known as "sections."
4) In Oracle Database 11g, RMAN bypasses backing up the committed undo data that is
   not required in recovery.
5) Improved Block Media Recovery Performance if flashback logs are present
6) improved compression option:
   CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
   CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
   RMAN now supports the ZLIB binary compression algorithm as part of the Oracle Advanced Compression
   option. The ZLIB algorithm is optimized for CPU efficiency, but produces larger backup files
   than the BZIP2 algorithm available previously, which is optimized for compression ratio.
7) Network-Enabled Database Duplication Without Backups  -- online clone .
   Once the DUPLICATE command is initiated, RMAN automatically performs the following steps:
   Copies the spfile to the destination server.
   Starts the auxiliary instance with the spfile.
   Copies the relevant database files and archived redo logs over the network to the destination server.
   Recovers the database.
   Opens the database with the RESETLOGS option.
8) virtual private catalog : --> recovery catalog enhancement
9) import catalog , --> to import previous version catalog in 11g
10) SECTION SIZE parameter to control backupset size --- different backupset with different section size
11) Improved Scripting with RMAN Substitution Variables:
    CONNECT TARGET /
BACKUP DATABASE TAG '&1';
BACKUP ARCHIVELOG ALL TAG '&2';
EXIT;
$ rman @'/tmp/backup.cmd' USING DB_20070108 ARCH_20070108

12) duplicate database enhancements:
    - active database duplication: no backup files need to be transferred
    - if backup files are transferred, there is no need to connect to the target database






########################################################################
########################################################################

One of the good new features in 11g Release 2 is that it enables us to recover from a dropped tablespace. TSPITR (tablespace point-in-time recovery) has been around in earlier releases, but we could not recover a dropped tablespace.
What 11gR2 does is perform a fully automated, RMAN-managed TSPITR. It creates and starts the auxiliary instance and restores just the datafiles it requires (SYSTEM, SYSAUX, UNDO and the files belonging to the dropped tablespace, in this case datafiles 1, 2, 3 and 7) in the location we specify as the "Auxiliary Destination". It first performs a recovery of the tablespace on the auxiliary instance and then uses Data Pump and Transportable Tablespace technology to extract the tablespace metadata and import it into the original source database.


TSPITR has a number of restrictions, a quick summary of which follows:

The database has to be open and the tablespace accessible for TSPITR.

1.  You can’t restore tablespaces with objects owned by SYS. Any tablespace with replicated master tables cannot be recovered with TSPITR.
2. Tablespaces with snapshot logs are not supported.
3. You can’t restore tablespaces that contain rollback segments.
4. If an object within the tablespace to be recovered has one of the following types, then TSPITR is not supported:
    1. VARRAY 
    2. Nested tables  
    3. External files
5. The current physical structure of the undo and rollback segments in the target database 
must be unchanged between the point you wish to recover to and the current point.
6. Once you have completed TSPITR on a given tablespace, all previous backups of that tablespace are no longer usable for future TSPITR recoveries of that tablespace. So you should backup your tablespace after TSPITR in case you need to run another TSPITR.



Automated tablespace point-in-time recovery:
Make sure that the objects in the tablespace are self-contained.
Let's say we have a tablespace called "TEST". It contains the table "TEST_TABLE", but the indexes on that table reside in another tablespace called "TEST_IDX". In this case, if you want to recover the "TEST" tablespace to some point in time, then you must also recover the "TEST_IDX" tablespace to the same point in time (or you will lose the indexes).
To identify dependent tablespaces, run the following:
SQL> select obj1_owner
       ,obj1_name
       ,obj1_type
       ,ts2_name
       ,reason
from sys.ts_pitr_check
where ( ts1_name in ('TEST')
and ts2_name not in ('TEST') )
or ( ts1_name not in ('TEST')
and ts2_name in ('TEST') );

Identify what objects will be dropped after recovering tablespace.
Run the following,
SQL> select *
from ts_pitr_objects_to_be_dropped
where tablespace_name = 'TEST';

Clean up a previously failed TSPITR
SQL> exec dbms_backup_restore.manageauxinstance('instance_name', 1);


Check Tablespace Recoverability
SQL> begin
2 dbms_tts.transport_set_check('TBSPIT',TRUE,TRUE);
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;



The activities that happen upon kicking off the TSPITR RMAN command are listed below:

*Create auxiliary instance – pvqe with db_create_file_dest set to the location we provide on the rman script.
*Reboots the target instance - RPROD
*Run TRANSPORT_SET_CHECK on recovery set tablespaces (here TBSPIT) to check dependency. (We will manually check this before starting tbspitr lest it fails at this point).
*Restore the controlfile and mount the clone database in the auxiliary location
*Restore SYSTEM, SYSAUX, UNDO datafiles at the auxiliary destination.
*Restore TBSPIT datafile at target database (original) destination (RPROD) itself.
*Switch SYSTEM, SYSAUX & UNDO datafiles to clone copy.
*Bring the datafiles of these clone tablespaces online.
*Recover SYSTEM, SYSAUX, UNDO & TBSPIT clone tablespaces.
*Open resetlogs clone database.
*Make TBSPIT tablespace read-only
*Create directory for data pump as the same auxdest location.
*Do a transportable tablespace (Metadata) datapump export
*Shutdown clone database.
*Drop tablespace TBSPIT including contents. Keep datafile (for transport import)
*Do metadata datapump import.
*Make tablespace TBSPIT READ WRITE
*Take tablespace TBSPIT offline.
*Clean up the clone instance – remove restored datafiles, onlinelogs and controlfiles.


Using RMAN, back up the database and archivelogs
rman target /
backup database plus archivelog;
9) Get the latest log sequence number
SQL> select sequence#, thread# from v$log;

 SEQUENCE#    THREAD#
---------- ----------
       720          1
       718          1
       719          1
9) Create a small parameter file
cat /u01/dba/temp/auto_tsptir_check/initEEeE.ora
db_name=TESTDB1
db_unique_name=EEeE_tspitr_TESTDB1
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=20
db_create_file_dest=/u01/dba/temp/auto_tsptir_check
log_archive_dest_1='location=/u01/dba/temp/auto_tsptir_check'
**Note: Automated TSPITR creates its own parameter file, but the SGA_TARGET in it is only 200M, which can cause the whole procedure to fail due to insufficient memory. That is why we created our own parameter file and pass it in during the automated TSPITR.
RMAN> set auxiliary instance parameter file to '/u01/dba/temp/auto_tsptir_check/initEEeE.ora';
executing command: SET auxiliary parameter file
using target database control file instead of recovery catalog
10) Perform Automated Tablespace point in time recovery.
RMAN> RECOVER TABLESPACE ts_auto UNTIL LOGSEQ 720 THREAD 1 AUXILIARY DESTINATION '/u01/dba/temp/auto_tsptir_check';
RMAN> recover tablespace "TEST","TEST_IDX" until scn 2091595 auxiliary destination 'D:\oracle';
RMAN> recover tablespace "TEST" until time "to_date('23-SEP-2005 10:00:00','DD-MON-YYYY HH24:MI:SS')" auxiliary destination 'D:\BAckup\temp';


**AUXILIARY DESTINATION is where rman stores all the database related files for the auxiliary instance. 
11) Change the status of the tablespace to online
alter tablespace ts_auto online;





########################################################################
Incomplete recovery ########################################################################


until a log sequence number:

We need to determine the log sequence we need to recover until :
select sequence#,first_change#, to_char(first_time,'HH24:MI:SS') from v$log order by 3 ;


RMAN> run {
2> set until sequence=14;  >>> add one to the sequence number we have to recover until
3> restore database;
4> recover database;
5> }


Rman :
set until time '2007-12-10 11:29:00'; (note: this is based on the NLS_DATE_FORMAT)
set until sequence 10305 thread 1; (note: recovery stops just before the specified sequence)
set until scn 67543;

RMAN> RECOVER DATABASE UNTIL LOGSEQ=
RMAN> recover database until sequence 162280;
RMAN> recover database until SCN 34527024;
RMAN> recover database until time '10-11-08 06:31:15'


Sql plus :
recover database until cancel (type CANCEL at the prompt to stop the recovery)
recover database until change 309121
recover database until time '2007-12-10 11:29:00' (a fuller SQL*Plus sequence is sketched below)
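
For reference, an end-to-end SQL*Plus incomplete recovery usually runs roughly like this (a sketch; the time is a placeholder and it assumes the datafiles have already been restored):

SQL> shutdown immediate
SQL> startup mount
SQL> recover database until time '2007-12-10 11:29:00';
SQL> alter database open resetlogs;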



SCN:

LOOK AT THE CURRENT SCN NUMBER AND NOTE IT

SQL> SELECT GROUP# ,SEQUENCE# ,STATUS , ARCHIVED , FIRST_CHANGE#
  2  FROM V$LOG;

    GROUP#  SEQUENCE# STATUS           ARC FIRST_CHANGE#
---------- ---------- ---------------- --- -------------
         1          2 ACTIVE           YES        689996
         2          3 CURRENT          NO         689998
         3          1 ACTIVE           YES        678956



Time :

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YY HH24:MI:SS';
SQL> select NAME, SEQUENCE#, THREAD#, FIRST_TIME, FIRST_CHANGE#, NEXT_TIME, NEXT_CHANGE# from v$archived_log where SEQUENCE# > 166;






########################################################################
Restore preview   ########################################################################


restore datafile '/usr05/oradata/WMST/WMST/test01.dbf' preview;
restore database preview;
restore tablespace users preview;

# Preview
RESTORE DATABASE PREVIEW;
RESTORE TABLESPACE users PREVIEW;

# Preview Summary
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE TABLESPACE users PREVIEW SUMMARY;
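
Closely related to PREVIEW, the VALIDATE option actually reads the backup pieces to confirm they are usable, instead of just listing them (a quick sketch):

RESTORE DATABASE VALIDATE;
RESTORE TABLESPACE users VALIDATE;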






########################################################################
########################################################################


11g compressed backupset enhancement:
-- uses the new ZLIB algorithm instead of the old BZIP2 algorithm; ZLIB is about 40% faster.
-- CONFIGURE COMPRESSION ALGORITHM 'ZLIB';

CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO COMPRESSED BACKUPSET;



RUN {
      BACKUP AS COMPRESSED BACKUPSET CHECK LOGICAL FULL DATABASE
      PLUS ARCHIVELOG
      INCLUDE CURRENT CONTROLFILE FOR STANDBY ;
}




11g advance compression :

Oracle 11g Release 2 introduced compression algorithm levels which can be used for compressing table data, Data Pump exports and RMAN backups as well.

The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved.

To use this option, we can run the following RMAN commands

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

followed by ..

RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';

Tests carried out on a small 1.5 GB database returned compression ratios of about 4.7 at best case. However, it should be noted that use of LOW, MEDIUM and HIGH requires the Advanced Compression license. The backupset size and backup durations are shown below.

Compression Level ‘HIGH’

backupset size: 226.18M
time: 00:02:21

Compression Level ‘Medium’

backupset size: 293.80M
time: 00:00:30

Compression Level ‘Low’

backupset size: 352.59M
time: 00:00:20

Compression Level ‘Basic’

backupset size: 276.55M
time: 00:00:50

To summarise we can conclude:
LOW – corresponds to LZO (11gR2) – smallest compression ratio, fastest
MEDIUM – corresponds to ZLIB (11gR1) – good compression ratio, slower than LOW
HIGH – corresponds to unmodified BZIP2 (11gR2) – highest compression ratio, slowest
BASIC (which is free) – corresponds to BZIP2 (10g style compression) – compression ratio in the range of MEDIUM, but slower
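
To double-check what the instance is currently configured to use, and which algorithms the release offers, something like this can be used (a sketch; V$RMAN_COMPRESSION_ALGORITHM is available from 11g onwards):

RMAN> SHOW COMPRESSION ALGORITHM;

SQL> select algorithm_name, algorithm_description, is_default
       from v$rman_compression_algorithm;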