Saturday, September 14, 2024

Oracle 19c New Feature High-Frequency Statistics : No more stale statistics in 19c


Since there has been a lot of hype around the 19c new feature High-Frequency Statistics, I thought of documenting a few notes.

Since the main focus of this blog is High-Frequency Statistics, it is covered first; later sections also document notes on the routine statistics maintenance window.



#########################
19c High-Frequency Automatic Optimizer Statistics Collection:
#########################

The new feature introduced in Oracle 19c called High-Frequency Automatic Optimizer Statistics Collection complements the standard automatic statistics collection job.

AutoTask schedules tasks to run automatically in maintenance windows. By default, one window is scheduled for each day of the week. Automatic optimizer statistics collection (DBMS_STATS) runs in all predefined maintenance windows.

Statistics can go stale between two consecutive statistics collection tasks. If data changes frequently, the stale statistics could cause performance problems. For example, a brokerage company might receive a huge volume of data during trading hours, leading the optimizer to use stale statistics for queries executed during this period.


By default, the high-frequency statistics collection occurs every 15 minutes and as such there is less possibility of having stale statistics even for those tables where data is changing continuously.

The DBMS_STATS.SET_GLOBAL_PREFS procedure is used to enable and disable the high-frequency statistics gather task as well as change the execution interval (default 15 minutes) and the maximum run time (60 minutes).



select
s.START_TIME, s.END_TIME
,s.STALENESS
,s.OSIZE
,s.OBJ#, name, NVL(subname,'<NULL>') subname
,s.TYPE#
,decode(s.type#, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
19, 'TABLE PARTITION', 20, 'INDEX PARTITION',
34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION',
'UNDEFINED') object_type
,s.FLAGS, s.STATUS
,s.SID, s.SERIAL#
,s.PART#, s.BO#
from sys.stats_target$ s
left outer join sys.obj$ o on o.obj# = s.obj#
Where start_time >= sysdate-10/1440
order by start_time
/



==> To change settings (AUTO_TASK_INTERVAL and AUTO_TASK_MAX_RUN_TIME are given in seconds)

EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN_TIME','600');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','240');

 


==> To check current settings

SET LINESIZE 100 pages 99
COLUMN auto_task_status HEADING 'Auto Task|Status' FORMAT a10
COLUMN auto_task_max_run_time HEADING 'Auto Task|Max Run Time' FORMAT a15
COLUMN auto_task_interval HEADING 'Auto Task|Interval' FORMAT a10
SELECT DBMS_STATS.GET_PREFS('AUTO_TASK_STATUS') AS auto_task_status
, DBMS_STATS.GET_PREFS('AUTO_TASK_MAX_RUN_TIME') AS auto_task_max_run_time
, DBMS_STATS.GET_PREFS('AUTO_TASK_INTERVAL') AS auto_task_interval
FROM dual;



==> To check execution history


COL OPID FORMAT 99999999
COL STATUS FORMAT a11
COL ORIGIN FORMAT a20
COL COMPLETED FORMAT 99999
COL FAILED FORMAT 99999
COL TIMEOUT FORMAT 99999
COL INPROG FORMAT 99999

SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS' ) AS BEGIN_TIME,
       TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
       TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
FROM  DBA_AUTO_STAT_EXECUTIONS
ORDER BY OPID;



with x as (
SELECT x.*
, end_time-start_time diff
, start_time-(LAG(end_time,1) over (order by start_time)) start_lag
FROM DBA_AUTO_STAT_EXECUTIONS x
ORDER BY x.start_time
)
select opid, origin, status
, ((extract( day from start_lag )*24+
extract( hour from start_lag ))*60+
extract( minute from start_lag ))*60+
extract( second from start_lag ) start_lag
, start_time, end_time
, ((extract( day from diff )*24+
extract( hour from diff ))*60+
extract( minute from diff ))*60+
extract( second from diff ) secs
, completed, failed, timed_out, in_progress
from x
Where start_time >= sysdate-1/24
/





#########################
To enable or disable the old auto task for the daily maintenance window
#########################


BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE (  
    client_name  => 'auto optimizer stats collection'
,   operation    => NULL
,   window_name  => NULL 
);
END;
/


BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE (  
    client_name  => 'auto optimizer stats collection'
,   operation    => NULL
,   window_name  => NULL 
);
END;
/


-- disable sql tuning advisor on monday
BEGIN
  dbms_auto_task_admin.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => 'MONDAY_WINDOW');
END;
/




#########################
Alter old auto stats settings for the daily maintenance window
#########################

EXEC DBMS_STATS.alter_stats_history_retention(90);
EXEC DBMS_STATS.set_global_prefs('estimate_percent', '5');
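
To verify the retention change took effect, a small sketch using the corresponding DBMS_STATS getter functions:

SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;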


BEGIN
  DBMS_SQLTUNE.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'LOCAL_TIME_LIMIT', 1200);
  DBMS_SQLTUNE.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'ACCEPT_SQL_PROFILES', 'FALSE');
  DBMS_SQLTUNE.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'MAX_SQL_PROFILES_PER_EXEC', 20);
  DBMS_SQLTUNE.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'MAX_AUTO_SQL_PROFILES', 10000);
END;
/



#########################
Old auto task detail views for the daily maintenance window:
#########################


select log_id,owner,job_name,status,actual_start_date,instance_id from dba_scheduler_job_run_details
where (job_name = 'GATHER_STATS_JOB' or job_name like 'ORA$AT_OS_OPT_SY%')
and actual_start_date > sysdate-3
order by log_date desc;



COL CLIENT_NAME FORMAT a31
select client_name, status, window_group from dba_autotask_client;

SELECT client_name, status FROM dba_autotask_operation;

SELECT task_name, status, TO_CHAR(execution_end,'DD-MON-YY HH24:MI')
FROM dba_advisor_executions WHERE task_name='SYS_AUTO_SQL_TUNING_TASK'
ORDER BY execution_end;

SELECT WINDOW_NAME, to_char(START_TIME,'DD-Mon-RR hh24:mi') START_TIME, DURATION
FROM DBA_AUTOTASK_SCHEDULE
ORDER BY WINDOW_NAME, START_TIME DESC;

select client_name,JOB_NAME,JOB_STATUS,JOB_START_TIME,JOB_DURATION from DBA_AUTOTASK_JOB_HISTORY 
where JOB_START_TIME >systimestamp -7 and client_name='auto optimizer stats collection'  order by 4 desc ; 

select parameter_name, parameter_value
from dba_advisor_parameters
where task_name = 'SYS_AUTO_SQL_TUNING_TASK'
and parameter_name IN ('ACCEPT_SQL_PROFILES',
'MAX_SQL_PROFILES_PER_EXEC',
'MAX_AUTO_SQL_PROFILES');




select window_name, repeat_interval, duration from dba_scheduler_windows where window_name like 'WEEK%';

select window_name,  next_start_date , enabled , resource_plan from dba_scheduler_windows ; 

SELECT window_name,TO_CHAR(window_next_time,'DD-MON-YY HH24:MI:SS')
,sql_tune_advisor, optimizer_stats, segment_advisor
FROM dba_autotask_window_clients;

select * from dba_scheduler_wingroup_members order by 1,2;





col target  for a20
col start_time for a20
col end_time  for a20
col notes for a20
select target,start_time,end_time,notes 
from DBA_OPTSTAT_OPERATION_TASKS 
-- where target like '%DEMO%' 
order by OPID desc;


 
set linesize 250
set pagesize 100
column preference_name format a30
column preference_value format a50
 
-- global preferences
with preflist (preference_name,global_only)
as (select 'APPROXIMATE_NDV_ALGORITHM',0 from dual union all
    select 'AUTO_STAT_EXTENSIONS'     ,0 from dual union all
    select 'AUTO_TASK_STATUS'         ,0 from dual union all
    select 'AUTO_TASK_MAX_RUN_TIME'   ,0 from dual union all
    select 'AUTO_TASK_INTERVAL'       ,0 from dual union all
    select 'AUTOSTATS_TARGET'         ,1 from dual union all
    select 'CASCADE'                  ,0 from dual union all
    select 'CONCURRENT'               ,0 from dual union all
    select 'DEGREE'                   ,0 from dual union all
    select 'ESTIMATE_PERCENT'         ,0 from dual union all
    select 'GLOBAL_TEMP_TABLE_STATS'  ,0 from dual union all
    select 'GRANULARITY'              ,0 from dual union all
    select 'INCREMENTAL'              ,0 from dual union all
    select 'INCREMENTAL_STALENESS'    ,0 from dual union all
    select 'INCREMENTAL_LEVEL'        ,0 from dual union all
    select 'METHOD_OPT'               ,0 from dual union all
    select 'NO_INVALIDATE'            ,0 from dual union all
    select 'OPTIONS'                  ,0 from dual union all
    select 'PREFERENCE_OVERRIDES_PARAMETER',0 from dual union all
    select 'PUBLISH'                  ,0 from dual union all
    select 'STALE_PERCENT'            ,0 from dual union all
    select 'STAT_CATEGORY'            ,0 from dual union all
    select 'TABLE_CACHED_BLOCKS'      ,0 from dual union all
    select 'WAIT_TIME_TO_UPDATE_STATS',0 from dual )
select preference_name, 
       sys.dbms_stats.get_prefs(preference_name) as preference_value,
       global_only
from preflist;




#########################
Pdb Parameters related to AutoTask 
#########################

AUTOTASK_MAX_ACTIVE_PDBS
ENABLE_AUTOMATIC_MAINTENANCE_PDB
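
A minimal sketch of setting these (assuming a CDB; the container name PDB1 and the values shown are illustrative):

-- at the CDB root: cap how many PDBs may run maintenance tasks concurrently
ALTER SYSTEM SET AUTOTASK_MAX_ACTIVE_PDBS = 4 SCOPE=BOTH;

-- inside a PDB: opt this PDB out of automatic maintenance
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET ENABLE_AUTOMATIC_MAINTENANCE_PDB = FALSE SCOPE=BOTH;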



#########################
Old auto task windows for the daily maintenance window:
#########################

select window_name, repeat_interval, duration from dba_scheduler_windows order by window_name;
select * from dba_scheduler_window_groups;
select * from dba_scheduler_wingroup_members order by 1,2;
select client_name, status, con_id from cdb_autotask_client;


EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKNIGHT_WINDOW','repeat_interval','freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=05;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKEND_WINDOW',  'repeat_interval','freq=daily;byday=SAT;byhour=07;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('MONDAY_WINDOW',   'repeat_interval','freq=daily;byday=MON;byhour=05;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('TUESDAY_WINDOW',  'repeat_interval','freq=daily;byday=TUE;byhour=05;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('WEDNESDAY_WINDOW','repeat_interval','freq=daily;byday=WED;byhour=05;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('THURSDAY_WINDOW', 'repeat_interval','freq=daily;byday=THU;byhour=05;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('FRIDAY_WINDOW',   'repeat_interval','freq=daily;byday=FRI;byhour=05;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('SATURDAY_WINDOW', 'repeat_interval','freq=daily;byday=SAT;byhour=13;byminute=0;bysecond=0');
EXECUTE DBMS_SCHEDULER.SET_ATTRIBUTE('SUNDAY_WINDOW',   'repeat_interval','freq=daily;byday=SUN;byhour=13;byminute=0;bysecond=0');


BEGIN
  dbms_scheduler.disable(
    name  => 'WINDOW_NAME');
  dbms_scheduler.set_attribute(
    name      => 'WINDOW_NAME',
    attribute => 'DURATION',
    value     => numtodsinterval(6, 'hour'));
  dbms_scheduler.enable(
    name => 'WINDOW_NAME');
END;
/



BEGIN
  DBMS_SCHEDULER.disable(
    name  => 'SYS.MAINTENANCE_WINDOW_GROUP',
    force => TRUE);

  DBMS_SCHEDULER.enable(
    name  => 'SYS.MAINTENANCE_WINDOW_GROUP');
END;
/


 
BEGIN
dbms_scheduler.create_window(
window_name=>'MORNING_WINDOW',
resource_plan=>'DEFAULT_MAINTENANCE_PLAN',
repeat_interval=>'freq=daily;byhour=10;byminute=0;bysecond=0',
duration=>interval '2' hour,
comments=>'Maintenance window');

dbms_scheduler.add_window_group_member('MAINTENANCE_WINDOW_GROUP',
'MORNING_WINDOW');
END;
/



BEGIN
dbms_scheduler.remove_window_group_member('MAINTENANCE_WINDOW_GROUP',
'MONDAY_WINDOW');
END;
/



#########################
Data Dictionary Views:
#########################

DBA_AUTOTASK_CLIENT_JOB
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_JOB_HISTORY
DBA_AUTOTASK_WINDOW_CLIENTS
DBA_AUTOTASK_CLIENT_HISTORY
DBA_AUTOTASK_OPERATION
DBA_AUTOTASK_SCHEDULE




#########################
Reference : 
#########################

https://tinky2jed.wordpress.com/technical-stuff/oracle-stuff/changing-the-oracle-database-default-maintenance-window-time/
https://docs.oracle.com/database/121/ADMIN/tasks.htm


Tuesday, August 27, 2024

Oracle SQL*Loader handy commands

 

Data can be modified as it is loaded into the Oracle database. One can also populate columns with static or derived values. However, this only applies to the conventional load path (and not to direct path loads).
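
For example, a hedged control file sketch of a conventional path load that applies a SQL expression and a derived value (the table, columns and the load_date column are illustrative):

LOAD DATA
 INFILE '/home/ramesh/employee.txt'
 APPEND
 INTO TABLE employee
 FIELDS TERMINATED BY ","
 ( id,
   name   "UPPER(:name)",   -- SQL expression applied to the incoming field
   dept,
   salary,
   load_date SYSDATE )      -- derived value; consumes no field from the datafile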



SQL*Loader provides the following methods to load data:

Conventional path loads – construct INSERT statements from the contents of the input datafile based on the predefined specification and execute the inserts.

Direct path loads – creates data blocks in Oracle database block format from the datafile and directly writes the data block to the database. This way is much faster than the conventional path but subject to some restrictions.

External table loads – create an external table for the data stored in the datafile and execute INSERT statements to insert the data from the datafile into the target table. External table loads support parallel loading if the datafile is big enough (see the sketch below).
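
A hedged sketch of the external table route (assumes the employee target table and a DUMP_DIR directory object exist; all names are illustrative):

-- create an external table over the datafile, then insert-select into the target
CREATE TABLE employee_ext (
  id     NUMBER,
  name   VARCHAR2(50),
  dept   VARCHAR2(50),
  salary NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('employee.txt')
);

-- APPEND enables a direct-path insert; add parallel hints if the datafile is big
INSERT /*+ APPEND */ INTO employee SELECT * FROM employee_ext;
COMMIT;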



To execute the SQL*Loader tool, you need at least three files:

The input data file stores delimited or raw data.
The parameter file stores the location of the input/output files.
The control file contains the specification of how the data is loaded.
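
A minimal sketch tying the three files together with a parameter file (file names and credentials are illustrative):

$ cat loader.par
userid=scott/tiger
control=example1.ctl
log=example1.log
bad=example1.bad
data=/home/ramesh/employee.txt

$ sqlldr parfile=loader.par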




Sample SQL*Loader command:

sqlldr username/password control=controlfile.ctl bad=bad.log data=data.dat log=log1.log direct=true parallel=true multithreading=true discard=discard.log silent=feedback errors=1000000 bindsize=1000000 readsize=1000000 rows=1000000



SQL*Loader tuning options:

1) Direct path mode (direct=true)
2) bindsize=1000000 readsize=1000000 rows=1000000
3) Multiple control files using LOAD and SKIP
4) Use the APPEND hint
5) Multithreading and parallel loading




Sample control file contents:

OPTIONS (ERRORS=10000)
LOAD DATA
APPEND INTO TABLE AAA ( COLUMN )
INSERT INTO TABLE SSS ( DDDD )


  cat example1.ctl
load data
 infile '/home/ramesh/employee.txt'
 into table employee
 fields terminated by ","
 ( id, name, dept, salary )



If you would like to insert more data into the tables without deleting the existing rows, use the APPEND clause as shown in the following control file.

$ vi sqlldr-append-more.ctl
load data
 infile '/home/ramesh/newemployee.txt'
 append
 into table employee
 fields terminated by ","
 ( id, name, dept, salary )


LOAD DATA
 INFILE *
 INTO TABLE tab1 WHEN tab = 'tab1'
   ( tab  FILLER CHAR(4),
     col1 INTEGER
   )
 INTO TABLE tab2 WHEN tab = 'tab2'
   ( tab  FILLER POSITION(1:4),
     col1 INTEGER
   )
BEGINDATA
tab1|1
tab1|2
tab2|2
tab3|3





LOAD DATA
CHARACTERSET WE8EBCDIC500
INFILE data.ebc "fix 86 buffers 1024"
BADFILE 'data.bad'
DISCARDFILE 'data.dsc'
REPLACE
INTO TABLE temp_data
(
 field1    POSITION (1:4)     INTEGER EXTERNAL,
 field2    POSITION (5:6)     INTEGER EXTERNAL,
 field3    POSITION (7:12)    INTEGER EXTERNAL,
 field4    POSITION (13:42)   CHAR,
 field5    POSITION (43:72)   CHAR,
 field6    POSITION (73:73)   INTEGER EXTERNAL,
 field7    POSITION (74:74)   INTEGER EXTERNAL,
 field8    POSITION (75:75)   INTEGER EXTERNAL,
 field9    POSITION (76:86)   INTEGER EXTERNAL
)




Unload data from a database table to a flat file:


set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
  from tab1
where col2 = 'XYZ';
spool off



set colsep ','
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1, col2, col3
  from tab1
 where col2 = 'XYZ';
spool off



declare
  fp utl_file.file_type;
begin
  -- 'c:\oradata' must be listed in UTL_FILE_DIR, or use a directory object name instead
  fp := utl_file.fopen('c:\oradata','tab1.txt','w');
  utl_file.putf(fp, '%s, %s\n', 'TextField', 55);
  utl_file.fclose(fp);
end;
/




Reference :

https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-sql-loader-commands.html#GUID-CD662CD8-DAA7-4A30-BC84-546E4C40DB31




Tuesday, July 16, 2024

Oracle options to check fragmentation and perform Table and Index reorganization

 

There were many requests coming in for reorgs, hence I thought of documenting a handy article on checking fragmentation and performing a reorganization.

After performing a reorganization we also need to gather fresh statistics, and check for invalid objects before and after the reorganization.

This article has two main topics:
1) Ways to check fragmentation
2) Performing reorganization



For LOBs, I have documented the procedure in my previous blog post:

https://abdul-hafeez-kalsekar.blogspot.com/2024/01/oracle-database-lob-maintenance.html




Ways to check fragmentation:



==> High water mark (HWM) check query:

set verify off
column owner format a10
column alcblks heading 'Allocated|Blocks' just c
column usdblks heading 'Used|Blocks'      just c
column hgwtr heading 'High|Water'         just c
break on owner skip page
select a.owner, a.table_name, b.blocks alcblks, a.blocks usdblks,
       (b.blocks-a.empty_blocks-1) hgwtr
from dba_tables a, dba_segments b
where a.table_name=b.segment_name
and a.owner=b.owner
and a.owner not in ('SYS','SYSTEM')
and a.blocks <> (b.blocks-a.empty_blocks-1)
and a.owner like upper('&owner')||'%'
and a.table_name like upper('&table_name')||'%'
order by 1,2;

Enter value for owner:  ABDUL 
Enter value for table_name: TABLE1



==> Check fragmented tables


col TABLE_NAME for a30
col fragmented_size for a15
col table_size for a15
col Used_size for a15
select table_name,round(((blocks*8)/1024/1024),2)||'GB' "table_size",round((num_rows*avg_row_len/1024/1024/1024),2)||'GB' "Used_size",
(round(((blocks*8/1024/1024)),2)-round((num_rows*avg_row_len/1024/1024/1024),2))|| 'GB' "fragmented_size" from dba_tables
where owner not in ('SYS','SYSTEM','PERFSTAT') and last_analyzed is not null and num_rows>0 and table_name='SECURE' and owner='HAFEEZ'
and blocks >0
order by 4 asc
/




==> Checking partition table fragmentation (these queries assume a 16 KB block size via blocks*16; adjust the multiplier to your db_block_size):

  SELECT table_name,
         partition_name,
         ROUND ( (blocks / 1024 * 16), 2) "size (mb)",
         ROUND ( (num_rows * avg_row_len / 1024 / 1024), 2) "actual_data (mb)",
         (  ROUND ( (blocks / 1024 * 16), 2)
          - ROUND ( (num_rows * avg_row_len / 1024 / 1024), 2))
            "wasted_space (mb)"
    FROM dba_tab_partitions
   WHERE     (ROUND ( (blocks / 1024 * 16), 2) >
                 ROUND ( (num_rows * avg_row_len / 1024 / 1024), 2))
         AND table_name = 'TBK_XXX'
ORDER BY 5 DESC;

 select 
  table_owner
  ,table_name
  ,partition_name
  ,high_value
  ,compression
  ,compress_for
  ,avg_row_len
  ,round(((blocks*16/1024)),2)/1024 "TOTAL_SIZE_GB" --The amount of space used by the partition in gigabytes.
  ,round((num_rows*avg_row_len/1024/1024),2)/1024 "ACTUAL_SIZE_GB" -- The amount of space used by the partition in gigabytes, calculated based on the number of rows and the average row length.
  ,round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/1024 "FRAGMENTED_SPACE_GB" --The amount of space that is not used by the partition, in gigabytes.
  ,decode(round(((blocks*16/1024)),2),0,0, (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100) "percentage" --The percentage of unused space in the partition, calculated by dividing the unused space by the total space and multiplying by 100. 
from dba_tab_partitions 
WHERE 
1=1
and table_owner='DEV_DW'
and table_name='TEST_TABLE'
and partition_name='SYS_P21822227'
and round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/1024 > 0
order by 6 desc
;






==> Checking top fragmented tables

select t1.owner, t1.table_name, nvl(t1.blocks*8,1) "allocated size KB",
round(nvl((t1.num_rows*t1.avg_row_len/1024),1),2) "actual size KB",
round(nvl(t1.blocks*8,1) - nvl((t1.num_rows*t1.avg_row_len/1024),1),2) "fragmentation KB"
from dba_tables t1
order by t1.owner, round(nvl(t1.blocks*8,1) - nvl((t1.num_rows*t1.avg_row_len/1024),1),2) desc;



select blocks,last_analyzed,owner,table_name,round((blocks*8),2) "size (kb)" , 
                            round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
                            (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
and owner in ('HAFEEZ')
order by 4  ;




==>  Checking  Index fragmentation : 

You can run the ANALYZE INDEX <index> VALIDATE STRUCTURE command on the affected indexes. Each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE (see the sketch below). The 'badness' of the index can then be judged by the ratio of DEL_LF_ROWS to LF_ROWS.
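
A sketch of preserving INDEX_STATS between runs (my_index_stats is a hypothetical scratch table):

-- one-time: create an empty history table with the columns we care about
CREATE TABLE my_index_stats AS
SELECT name, lf_rows, del_lf_rows, SYSDATE snap_time FROM index_stats WHERE 1 = 0;

-- after each ANALYZE INDEX ... VALIDATE STRUCTURE:
INSERT INTO my_index_stats
SELECT name, lf_rows, del_lf_rows, SYSDATE FROM index_stats;
COMMIT;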

You may decide that an index should be rebuilt if more than 20% of its rows are deleted:


ANALYZE INDEX &&index_name VALIDATE STRUCTURE;
 
    col name         heading 'Index Name'          format a30
    col del_lf_rows  heading 'Deleted|Leaf Rows'   format 99999999
    col lf_rows_used heading 'Used|Leaf Rows'      format 99999999
    col ibadness     heading '% Deleted|Leaf Rows' format 999.99999
     SELECT name,
       del_lf_rows,
       lf_rows - del_lf_rows lf_rows_used,
       to_char(del_lf_rows / (lf_rows)*100,'999.99999') ibadness
    FROM index_stats
       where name = upper('&index_name');
    undefine index_name 
 

select del_lf_rows * 100 / decode(lf_rows,0,1,lf_rows) from index_stats
where name = 'INDEX_NAME';






==>  Checking Fragmentation  using segment space advisor  

set echo off
set feedback off
set verify off
set linesize 80
set serveroutput on size unlimited

spool /home/oracle/scripts/segment_advisor.txt

declare
 v_task_name varchar2(100);
 v_task_desc varchar2(500);
 v_objid number;
begin
 begin
 v_task_name := 'SEGMENT_ADVISOR_RUN';
 v_task_desc := 'MANUAL SEGMENT ADVISOR RUN';


-- Create Segment Advisor task.

 dbms_advisor.create_task
 (
 advisor_name => 'Segment Advisor',
 task_name => v_task_name,
 task_desc => v_task_desc
 );

-- Add all segments to the task.

 for s in (select owner, segment_name, segment_type
 from dba_segments
 where segment_type in ('TABLE', 'INDEX', 'LOB')
 and owner not in ('SYS','SYSTEM','OLAPSYS','SYSMAN','ODM','RMAN','ORACLE_OCM','EXFSYS','OUTLN','DBSNMP','OPS','DIP','ORDSYS','WMSYS','XDB','CTXSYS','DMSYS','SCOTT','TSMSYS','MDSYS','WKSYS','ORDDATA','OWBSYS','ORDPLUGINS','SI_INFORMTN_SCHEMA','PUBLIC','OWBSYS_AUDIT','APPQOSSYS','APEX_030200','FLOWS_030000','WK_TEST','SWBAPPS','WEBDB','OAS_PUBLIC','FLOWS_FILES','QMS'))
 loop
 dbms_advisor.create_object
 (
 task_name => v_task_name,
 object_type => s.segment_type,
 attr1 => s.owner,  -- owner of the segment (the original passed USER, which limits the run to the current schema)
 attr2 => s.segment_name,
 attr3 => null,
 attr4 => null,
 attr5 => null,
 object_id => v_objid
 );
 end loop;


-- Set task parameter to recommend all.

 dbms_advisor.set_task_parameter
 (
 task_name => v_task_name,
 parameter => 'RECOMMEND_ALL',
 value => 'TRUE'
 );


-- Run Segment Advisor.

 dbms_advisor.execute_task(v_task_name);

exception when others then
 dbms_output.put_line('Exception: ' || SQLERRM);
 end;


-- Output findings.

 dbms_output.put_line(chr(10));
 dbms_output.put_line('Segment Advisor Recommendations');
 dbms_output.put_line('--------------------------------------------------------------------------------');

for r in (select segment_owner, segment_name, segment_type, partition_name,
 tablespace_name, allocated_space, used_space,
 reclaimable_space, chain_rowexcess, recommendations, c1, c2, c3
 from table(dbms_space.asa_recommendations('TRUE', 'TRUE', 'FALSE'))
 where segment_owner not in ('SYS','SYSTEM','OLAPSYS','SYSMAN','ODM','RMAN','ORACLE_OCM','EXFSYS','OUTLN','DBSNMP','OPS','DIP','ORDSYS','WMSYS','XDB','CTXSYS','DMSYS','SCOTT','TSMSYS','MDSYS','WKSYS','ORDDATA','OWBSYS','ORDPLUGINS','SI_INFORMTN_SCHEMA','PUBLIC','OWBSYS_AUDIT','APPQOSSYS','APEX_030200','FLOWS_030000','WK_TEST','SWBAPPS','WEBDB','OAS_PUBLIC','FLOWS_FILES','QMS')
 order by reclaimable_space desc)
 loop
 dbms_output.put_line('');
 dbms_output.put_line('Owner : ' || r.segment_owner);
 dbms_output.put_line('Segment : ' || r.segment_name);
 dbms_output.put_line('Segment Type : ' || r.segment_type);
 dbms_output.put_line('Partition Name : ' || r.partition_name);
 dbms_output.put_line('Tablespace : ' || r.tablespace_name);
 dbms_output.put_line('Allocated Space : ' || r.allocated_space);
 dbms_output.put_line('Used Space : ' || r.used_space);
 dbms_output.put_line('Reclaimable Space in MB : ' || r.reclaimable_space/1024/1024);
 dbms_output.put_line('Chain Rowexcess : ' || r.chain_rowexcess);
 dbms_output.put_line('Recommendations : ' || r.recommendations);
 dbms_output.put_line('Run First : ' || r.c3);
 dbms_output.put_line('Run Second : ' || r.c2);
 dbms_output.put_line('Run Third : ' || r.c1);
 dbms_output.put_line('--------------------------------------------------------------------------------');
 end loop;

-- Remove Segment Advisor task.
 dbms_advisor.delete_task(v_task_name);
end;
/


spool off;






Ways to remove fragmentation:



The main methods commonly followed are:

Shrink 
Alter table Move
CTAS method
Datapump or exp/imp




==>   Options  to Perform Reorg of  Table 

SQL> ALTER TABLE <table_name> ENABLE ROW MOVEMENT;
SQL> ALTER TABLE <table_name> SHRINK SPACE CASCADE;
SQL> ALTER TABLE <table_name> DISABLE ROW MOVEMENT;


ALTER TABLE mytable MOVE ONLINE PARALLEL 8 UPDATE INDEXES;




==>  Options  to  Perform  Reorg of  Partition Table  
 

ALTER TABLE TEST_TABLE enable row movement;
ALTER TABLE order MODIFY PARTITION SYS_P21674395 SHRINK SPACE;  

ALTER TABLE order MOVE PARTITION SYS_P21674395 UPDATE INDEXES;
ALTER TABLE order MOVE PARTITION SYS_P21674395 ONLINE UPDATE INDEXES;

ALTER TABLE  TEST_TABLE move PARTITION SYS_P21822227 NOCOMPRESS UPDATE INDEXES;  

ALTER TABLE  order MOVE PARTITION SYS_P21674395  COMPRESS FOR QUERY HIGH UPDATE INDEXES;
 




==>  Options  to  Perform rebuild of index / Remove Index Fragmentation 


ALTER TABLE four_seasons MODIFY PARTITION quarter_two REBUILD UNUSABLE LOCAL INDEXES;

ALTER INDEX sales_area_ix REBUILD PARTITION jan99_ix;

ALTER INDEX <index_name> SHRINK SPACE;

ALTER INDEX SCOTT.EMP_IDX REBUILD ONLINE PARALLEL 10;






Check the progress of an ALTER ... SHRINK SPACE command

-- The following query finds running ALTER TABLE statements and shows how far they have progressed relative to the table size.
-- GB_READ shows how much has been read so far.
select a.event, a.WAIT_TIME, c.SQL_TEXT,
c.PHYSICAL_READ_BYTES / 1024 / 1024 / 1024 "GB_READ",
c.PHYSICAL_WRITE_BYTES / 1024 / 1024 / 1024 "GB_WRITE"
from v$session_wait a , v$session b , v$sql c
where UPPER(c.SQL_TEXT) like UPPER('%ALTER TABLE%')
and a.sid = b.sid
and b.SQL_ID = c.SQL_ID;



--If you have the session id (SID) of the session running the command, use the following (&session_id is a placeholder):
select a.event, a.WAIT_TIME, c.SQL_TEXT,
c.PHYSICAL_READ_BYTES / 1024 / 1024 / 1024 "GB_READ",
c.PHYSICAL_WRITE_BYTES / 1024 / 1024 / 1024 "GB_WRITE"
from v$session_wait a , v$session b , v$sql c
where a.sid = &session_id
and a.sid = b.sid
and b.SQL_ID = c.SQL_ID;








Online Redefinition  to perform Reorg  : 



A primary key is mandatory since materialized views and logs are created during the start of redefinition.
 
The user performing the re-organization requires the following
privileges:

* CREATE ANY TABLE
* ALTER ANY TABLE
* DROP ANY TABLE
* LOCK ANY TABLE
* SELECT ANY TABLE
* CREATE ANY INDEX
* CREATE ANY TRIGGER



GENERAL STEPS TO ONLINE REDEFINE A TABLE

1) Determine if the table to be moved can be redefined online (DBMS_REDEFINITION.CAN_REDEF_TABLE)

2) Create the interim table
The interim table need not be the same 'shape' (have similar structure) as the original table

3) Start the redefinition of the table (DBMS_REDEFINITION.START_REDEF_TABLE)
4) Copy the dependents from the original table to the interim table (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS)
5) Execute a final synchronization between the original and interim tables (DBMS_REDEFINITION.SYNC_INTERIM_TABLE)
  This step will minimize the amount of time needed to execute the finish of the redefinition

6) Finish the redefinition (DBMS_REDEFINITION.FINISH_REDEF_TABLE)
7) Verify that the redefinition succeeded by comparing the original table (formerly interim) to the interim table (formerly original)
8) Drop the interim table




---------------------------------------------------------------------
-- Test if table can be redefined...
---------------------------------------------------------------------

 SET SERVEROUTPUT ON
begin
DBMS_REDEFINITION.CAN_REDEF_TABLE
(
uname=>'SCOTT',
tname=>'emp' );
end;
/




---------------------------------------------------------------------
-- CREATE THE INTERIM TABLE
---------------------------------------------------------------------

create table emp_work (
COL1 NUMBER,
COL2 VARCHAR2(1000),
COL3 VARCHAR2(1000),
COL4 VARCHAR2(1000));



---------------------------------------------------------------------
-- Start table redefinition...
---------------------------------------------------------------------

During this phase Oracle will copy (and transform) the data from the production table to the interim table. Oracle will also create a materialized view (snapshot) log on the table to track DML changes.

SET SERVEROUTPUT ON
 begin
 DBMS_REDEFINITION.start_redef_table( 
 uname=>'SCOTT',
 orig_table=>'emp',
 int_table=>'emp_work');
 end;
 /
 



---------------------------------------------------------------------
-- COPY THE TABLE DEPENDENTS FROM THE ORIGINAL TABLE TO THE INTERIM TABLE
---------------------------------------------------------------------
  
 SET SERVEROUTPUT ON
   DECLARE
  l_num_errors PLS_INTEGER;
    begin
   DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
   uname=>'SCOTT',
   orig_table=>'emp',
   int_table=>'emp_work',
  copy_indexes => DBMS_REDEFINITION.cons_orig_params,
   copy_triggers => TRUE,
  copy_constraints => TRUE,
  copy_privileges => TRUE,
   ignore_errors => FALSE,
    num_errors => l_num_errors);
  end;
  /


--------------------------------------------------------------------
-- Make any necessary changes: here we create constraints on the interim table (skip this if only reorganization is needed)
---------------------------------------------------------------------

SQL> alter table emp_work add constraint int_empx_pk primary key(empno);
SQL> alter table emp_work add constraint int_empx_fk foreign key(deptno) references dept(deptno);
SQL> alter table emp_work modify constraint int_empx_fk disable keep index;



---------------------------------------------------------------------
-- Sync intermediate changes to interim table (optional)
---------------------------------------------------------------------

This step will apply changes captured in the materialized view log to the interim table. Perform this step frequently for high transaction tables.

SQL> exec dbms_redefinition.sync_interim_table('scott', 'emp', 'emp_work');




---------------------------------------------------------------------
-- Finish the redefinition process (this will swap the two tables)...
---------------------------------------------------------------------
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('scott', 'emp', 'emp_work');



---------------------------------------------------------------------
-- Drop the interim working table...
---------------------------------------------------------------------
DROP TABLE emp_work;







Redefine a table without a primary key:


The reason the redefinition failed is that online redefinition works either by using a primary key or by using ROWID;
the default is the primary key.

If we don't have a primary key we need to use the ROWID option. For that, options_flag=>DBMS_REDEFINITION.cons_use_rowid needs to be added to the DBMS_REDEFINITION.CAN_REDEF_TABLE and DBMS_REDEFINITION.start_redef_table calls.

Eg 

 SET SERVEROUTPUT ON
begin
DBMS_REDEFINITION.CAN_REDEF_TABLE(
uname=>'SCOTT',
tname=>'emp',
options_flag=>DBMS_REDEFINITION.cons_use_rowid);
end;
/


SET SERVEROUTPUT ON
 begin
 DBMS_REDEFINITION.start_redef_table( 
 uname=>'SCOTT',
 orig_table=>'emp',
 int_table=>'emp_work',
 options_flag=>DBMS_REDEFINITION.cons_use_rowid);
 end;
 /
 


 
-- To perform online redefinition in parallel 
 
alter session force parallel dml parallel 32;
alter session force parallel query parallel 32;







References : 

https://oracle-base.com/articles/12c/online-move-partitions-and-subpartitions-12cr1#google_vignette

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/vldbg/maintenance-partition-tables-indexes.html#GUID-CDAA2363-B83A-408C-82C9-3E3FA3928D2D
Or 
https://docs.oracle.com/en/database/oracle/oracle-database/18/vldbg/maintenance-partition-tables-indexes.html#GUID-BAFFE31C-07A2-4ED6-BDCF-8ECB79D7FE7D


Attempt to reduce the size of a LOB segment after the table / LOB becomes sparse -> How to Shrink (make less sparse) a LOB (BASICFILE or SECUREFILE)? (Doc ID 1451124.1)

 How to Re-Organize a Table Online (Doc ID 177407.1)

How to release unused space (SHRINK) occupied by a LOB segment by adding / dropping columns? (Doc ID 1417697.1) 

How to Shrink (make less sparse) a LOB (BASICFILE or SECUREFILE)? (Doc ID 1451124.1)

Sunday, June 16, 2024

Restore Table into a Dump File Using an Oracle RMAN Backup of a Pluggable Database

 
There are many documents online covering this, but since we had a requirement to perform the same, I thought of documenting my own.
 
Instead of importing the table into the same database, it is preferable to import it into a dump file.




Below is the high-level procedure describing how the restoration works.

1) Identifies the set of tablespaces that need to be recovered for this table restore.
2) Creates a temporary instance with a random unique name.
3) Starts the database in nomount state and performs a control file restore based on the until time/SCN value defined.
4) Sets new destinations for the identified data files to be restored.
5) Initiates the restoration of the SYSTEM, SYSAUX and UNDO tablespaces for the CDB, and the SYSTEM, SYSAUX and EXAMPLE tablespaces for the PDB.
6) Performs recovery of the database until time/SCN by restoring archive logs and applying them to the temporary database.
7) Once done, opens the database read-only along with any PDB under which the table is available.
8) Restarts in nomount state and initiates restoration of the datafiles related to the application table (user tablespace datafiles).
9) Recovers archive logs and applies them until time/SCN.
10) RMAN opens the database and creates a temporary DBA directory to hold the expdp dump.
11) Performs an expdp dump of the table.
12) Shuts down (abort) the temporary database.
13) Imports the table dump into the source database.
14) Cleans up all files created at the end of the session.




1) Restore the table into a dump file

Create a pfile for the auxiliary DB, e.g. /tmp/init_aux.ora. The username and table name must be in caps.


Add the parameter below to it (ensure other parameters like db_name/sga are specified along with it):
_disable_health_check=true


The main parameters to be added to the auxiliary pfile are:

enable_pluggable_database=true
_clone_one_pdb_recovery=true
wallet_root
tde_configuration
compatible
db_block_size
db_files
diagnostic_dest
_system_trigger_enabled=false
db_domain
sga_target
db_create_file_dest   (set to the auxiliary destination used in the restore)
log_archive_dest_1    (set to the auxiliary destination used in the restore)
db_name
db_unique_name
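
Putting it together, a minimal illustrative /tmp/init_aux.ora might look like the following; every value is a placeholder and must match your source database and the auxiliary destination:

# illustrative auxiliary pfile - all values are placeholders
db_name=MYDB
db_unique_name=MYDB
db_domain=example.com
compatible=19.0.0
db_block_size=8192
db_files=1024
sga_target=4G
diagnostic_dest=/tmp/aux
enable_pluggable_database=true
_clone_one_pdb_recovery=true
_system_trigger_enabled=false
_disable_health_check=true
wallet_root=/tmp/aux/wallet
tde_configuration='KEYSTORE_CONFIGURATION=FILE'
db_create_file_dest=/tmp/aux
log_archive_dest_1='LOCATION=/tmp/aux'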



rman target=/

RUN {
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/init_aux.ora';
recover table  "MYUSER"."MYTABLE"   OF PLUGGABLE DATABASE pdb2
  until scn 37128783
  auxiliary destination '/tmp/aux'
  datapump destination '/var/oracle/data/export'
  dump file 'saved_copy.dmp'
  notableimport;
}

Or 

RUN {
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/init_aux.ora';
recover table "MYUSER"."MYTABLE" OF PLUGGABLE DATABASE nameofpluggabledb
until time "to_date('07/11/2022 16:34:10','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/var/oracle/backup/aux'
datapump destination '/var/oracle/backup/datapump'
dump file 'saved_copy.dmp'
notableimport;
}



If we want to import directly into the database, we can use the below:

RUN {
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/init_aux.ora';
recover table "MYUSER"."MYTABLE" OF PLUGGABLE DATABASE nameofpluggabledb
until time "to_date('07/11/2022 16:34:10','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/var/oracle/backup/aux'
REMAP TABLE 'SMEDS'.'RECTEST':'TEST_RECTEST';
}




2) Import the dump file produced above into the database

impdp youruser/yourpassword@yourhost/nameofpluggabledb full=Y directory=DUMP_DIR dumpfile=saved_copy.dmp






References :

Rman Recover Table Doesnot Clean up Files Created Under Dbs Folder For the Auxiliary Instance After it Completes (Doc ID 2882639.1)
RMAN Recover Table Feature in Oracle Database 12c and Higher (Doc ID 1521524.1)
Undo Datafile at Auxiliary Instance Remains after Executing Recover Table (Doc ID 2407419.1)

Monday, May 20, 2024

Oracle Handling Distributed, Pending, In-Doubt and Remote Transactions

 

A distributed transaction modifies data related to two or more databases; it contains DML statements that span many nodes. For a distributed transaction to be successful, all or none of the database nodes involved in the transaction need to commit or roll back the whole transaction.

Note the difference between a distributed and a remote transaction: a remote transaction contains one or more DML statements that are executed on the SAME remote node.


Manual Resolution of In-Doubt Transactions
–The in-doubt transaction has locks on critical data or undo segments.
–The cause of the system, network, or software failure cannot be repaired quickly.




Commit or Rollback Pending Transaction -- UNSTUCK TRANSACTION : 


SQL> select local_tran_id,global_tran_id, state,mixed, commit# from dba_2pc_pending;

If the state of the transaction is “prepared” and there is no inconsistency, the transaction can be forced to rollback, or maybe if the underlying problem which caused the in-doubt transaction is resolved the transaction can be forced to commit as follows:

SQL> ROLLBACK FORCE '97.33.166765';  /* replace with your own trx_id */
or
SQL> COMMIT FORCE '97.33.166765';  /* replace with your own trx_id */


If the state of the transaction is “collecting” and you execute the above command, you may see an error like:

ERROR at line 1:
ORA-02058: no prepared transaction found with ID 97.33.166765




The DBA_2PC_PENDING view has entries about our transaction but there is no transaction in reality.

If the state of the transaction (in DBA_2PC_PENDING) is committed, rollback forced or commit forced then it can be cleaned by:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('97.33.166765'); /* replace with your own trx_id */





HANDLING STUCK TRANSACTIONS

Our ultimate goal is not seeing the transaction in X$KTUXE table; and ensuring that the dictionary tables like PENDING_TRANS$ to be consistent with this information.

Stuck transactions can be examined under the below conditions:



2.1. Condition 1: DBA_2PC_PENDING view has entries about our transaction but there is no transaction in reality

The condition is that; when we issue select to the dictionary views like the DBA_2PC_PENDING, PENDING_TRANS$, etc. we see the transaction, but the transaction does not exist in X$KTUXE view.

The actual transaction entry view is X$KTUXE ([K]ernel [T]ransaction [U]ndo Transa[X]tion [E]ntry) where the columns correspond to the following sections of the transaction id:

KTUXEUSN.KTUXESLT.KTUXESQN = 96.22.163456   (the concatenation of KTUXEUSN, KTUXESLT and KTUXESQN gives us the transaction number)
KTUXEUSN=96
KTUXESLT=22
KTUXESQN=163456

Therefore, the condition1 holds when DBA_2PC_PENDING has the entry but X$KTUXE does not.

SQL> SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, COMMIT# FROM DBA_2PC_PENDING;

    -- Returns: 96.22.163456 TESTDB.723a8559.96.22.163456    prepared    no    56759427464

SQL> SELECT * FROM X$KTUXE WHERE KTUXEUSN=96 AND KTUXESLT=22 AND KTUXESQN =163456;
  
  -- Returns: No Rows

Solution 1 to Condition 1: If the state of the transaction (in DBA_2PC_PENDING) is committed, rollback forced or commit forced then it can be cleaned by:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456'); 
Solution 2 to Condition 1: If the state of the transaction is prepared, we have to clean manually as follows:

SQL> DELETE FROM SYS.PENDING_TRANS$ WHERE LOCAL_TRAN_ID = '96.22.163456'; 
SQL> DELETE FROM SYS.PENDING_SESSIONS$ WHERE LOCAL_TRAN_ID ='96.22.163456' ; 
SQL> DELETE FROM SYS.PENDING_SUB_SESSIONS$ WHERE LOCAL_TRAN_ID = '96.22.163456';
SQL> COMMIT;




2.2. Condition 2: DBA_2PC_PENDING view does NOT have entries about our transaction but there IS a transaction.

This is something like an orphan transaction that the dictionary is not aware of.
Trying to force commit or rollback this transaction may result in an error like the one below, since the dictionary is not aware:

SQL> ROLLBACK FORCE '96.22.163456'

-- ORA-02058: no prepared transaction found with ID 96.22.163456

Solution to Condition 2: What we need to do at this point is recover our transaction from being an orphan by inserting some dummy records into the dictionary tables (and so the views) and then force a rollback or commit. You do not have to change the parameters in the insert command other than the transaction id.

SQL> ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
    
SQL> INSERT INTO PENDING_TRANS$ (LOCAL_TRAN_ID, GLOBAL_TRAN_FMT, GLOBAL_ORACLE_ID, STATE, STATUS, SESSION_VECTOR, RECO_VECTOR, TYPE#, FAIL_TIME,RECO_TIME)
    VALUES
    (
         '96.22.163456', 
         299354,
         'XXXXXXX.12345.1.2.3',
         'prepared','P',
         hextoraw( '00000001' ),
         hextoraw( '00000000' ),
         0, sysdate, sysdate
    );
    
SQL> INSERT INTO PENDING_SESSIONS$
    VALUES
    (
     '96.22.163456', 
     1, hextoraw('05004F003A1500000104'),
     'C', 0, 30258592, '',
     146
    );
SQL>  COMMIT;

Now, we should be able to rollback or commit.


SQL> ROLLBACK FORCE '96.22.163456' 
or
SQL> COMMIT FORCE '96.22.163456'

Lastly, we remove the dummy entry from the dictionary:

SQL> ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
    
SQL> ALTER SYSTEM SET "_smu_debug_mode" = 4;
    
SQL> COMMIT;
    
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');
    
SQL> ALTER SYSTEM SET "_smu_debug_mode" = 0;
    
SQL> COMMIT;
Check to see whether the transaction has gone:

SQL> SELECT * FROM X$KTUXE WHERE KTUXEUSN=96 AND KTUXESLT=22 AND KTUXESQN =163456;
    -- Returns: No Rows    



2.3. Condition 3: DBA_2PC_PENDING has entry and there is a transaction but COMMIT or ROLLBACK HANGS!

In the situation where COMMIT FORCE or ROLLBACK FORCE hangs,
trying to purge the transaction will give an error like:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');
    -- ERROR at line 1:
    -- ORA-06510: PL/SQL: unhandled user-defined exception
    -- ORA-06512: at "SYS.DBMS_TRANSACTION", line 94
    -- ORA-06512: at line 1


Solution to Condition 3: The solution is the combination of Cond1 and Cond2:

First, delete the dictionary entries:

SQL> DELETE FROM SYS.PENDING_TRANS$ WHERE LOCAL_TRAN_ID = '96.22.163456'; 
    
SQL> DELETE FROM SYS.PENDING_SESSIONS$ WHERE LOCAL_TRAN_ID ='96.22.163456' ; 
    
SQL> DELETE FROM SYS.PENDING_SUB_SESSIONS$ WHERE LOCAL_TRAN_ID = '96.22.163456';
    
SQL> COMMIT;
Then, insert the dummy records, force the commit and finally purge the transaction:

SQL> ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
    
SQL> INSERT INTO PENDING_TRANS$ (LOCAL_TRAN_ID, GLOBAL_TRAN_FMT, GLOBAL_ORACLE_ID,
STATE, STATUS, SESSION_VECTOR, RECO_VECTOR, TYPE#, FAIL_TIME,RECO_TIME)
    VALUES
    (
         '96.22.163456',
         306206,
         'XXXXXXX.12345.1.2.3',
         'prepared','P',
         hextoraw( '00000001' ),
         hextoraw( '00000000' ),
         0, sysdate, sysdate
    );
     
SQL> INSERT INTO PENDING_SESSIONS$
    VALUES
    (
         '96.22.163456',
         1, hextoraw('05004F003A1500000104'),
         'C', 0, 30258592, '',
         146
    );
SQL> COMMIT;
    
SQL> COMMIT FORCE '96.22.163456';
    
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');







Two-Phase Commit (2PC)

The two-phase commit mechanism is used to ensure data integrity in a distributed transaction. It is automatically used during all distributed transactions and coordinates either the commit or roll back of all the changes in the transaction as a single, self-contained unit.

In short, there are three phases for the 2PC:
PREPARE: The initiating node asks each of its referenced nodes to promise to perform a commit or rollback when told to do so. The preparing node flushes the redo log buffer to the online redo log, converts its locks to in-doubt transaction locks on the data blocks, and passes its highest SCN value to the initiating node.
COMMIT: The initiating node commits and writes the committed SCN to its redo log. The data block locks are released.
FORGET: Pending transaction tables and related database views are cleared (DBA_2PC_PENDING/DBA_2PC_NEIGHBORS).
All the above phases take place quickly and transparently to the application where the transaction originated.

NOTE:
A crash during the PREPARE Phase results in a ROLLBACK
A crash during the COMMIT Phase results in either COMMIT or ROLLBACK





Recoverer Process (RECO)

RECO is a mandatory background process that, in a distributed database, automatically resolves failures in distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction; it can use an existing connection or establish a new connection to other nodes involved in the failed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.

At exponentially growing time intervals, the RECO background process of a node attempts to recover the local portion of an in-doubt distributed transaction.

init.ora parameter distributed_transactions > 0  (for RECO to start)

If init.ora open_links is set to 0, then no distributed transactions are allowed.

The RECO process is present only if the instance permits distributed transactions.
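
A quick sketch to confirm RECO is running (the OPEN_LINKS check is illustrative):

SELECT pname, pid, spid FROM v$process WHERE pname = 'RECO';
SHOW PARAMETER open_links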

You can enable and disable RECO using the ALTER SYSTEM statement with the ENABLE/DISABLE DISTRIBUTED RECOVERY options.

You can disable distributed recovery if the database is mounted but not open.

select dbid,name,log_mode,open_mode from v$database;
select instance_number,instance_name,status from v$instance;


Disabling and Enabling RECO

You can enable and disable RECO using the ALTER SYSTEM statement with the ENABLE/DISABLE DISTRIBUTED RECOVERY options. For example, you can temporarily disable RECO to force the failure of a two-phase commit and manually resolve the in-doubt transaction.

-To disable RECO:
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;

-To enable RECO and let it automatically resolve in-doubt transactions:
ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;




Database Parameters related to distributed Transactions :

DISTRIBUTED_LOCK_TIMEOUT:
DISTRIBUTED_LOCK_TIMEOUT is used to specify the maximum time in seconds that the instance waits for locked resources. The default is 60 seconds.
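
For illustration (120 is an arbitrary value; this is a static parameter, so it needs SCOPE=SPFILE and an instance restart):

ALTER SYSTEM SET DISTRIBUTED_LOCK_TIMEOUT = 120 SCOPE=SPFILE;
SHOW PARAMETER distributed_lock_timeout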



COMMIT_POINT_STRENGTH:
In a two-phase commit process, one site or node is the designated "Commit Point Site". This is determined by the values of the
COMMIT_POINT_STRENGTH parameter in all the database instances involved in the transaction which are adjacent to and include the Global Coordinator.

The values range from 0 to 255, and if the parameter is not set, the value is determined by the software.

The COMMIT_POINT_STRENGTH parameter in nonadjacent nodes is only used to resolve who will be the recursive "Commit Point Site" when a Local
Coordinator is the main "Commit Point Site".
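
For illustration (200 is an arbitrary value; this too is a static parameter, so it needs SCOPE=SPFILE and a restart):

ALTER SYSTEM SET COMMIT_POINT_STRENGTH = 200 SCOPE=SPFILE;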





Known Issues :

ORA-02053: transaction committed, some remote DBs may be in-doubt
The transaction has been locally committed, however we have lost communication with one or more local coordinators.

ORA-02054: transaction in-doubt
The transaction is neither committed or rolled back locally, and we have lost communication with the global coordinator.

ORA-02050: transaction rolled back, some remote DBs may be in-doubt
Indicates that a communication error occurred during the two-phase commit.

ORA-01591: lock held by in-doubt distributed transaction
Encountering the above error and users/applications unable to proceed with their work. In this case, Oracle automatically rolls back the user attempted transaction and the DBA has now to manually commit or rollback the in-doubt transaction.






Views  : 

DBA_2PC_PENDING and DBA_2PC_NEIGHBORS are the two main views used for checking details.



DBA_2PC_PENDING 

Use this view to determine the global commit number for a particular transaction ID. It lists all in-doubt distributed transactions.
The view is empty until populated by an in-doubt transaction; after the transaction is resolved, the view is purged.


Note: When the LOCAL_TRAN_ID and GLOBAL_TRAN_ID values are the same, the node is the global coordinator of the transaction.

1) LOCAL_TRAN_ID: Local transaction identifier
2) GLOBAL_TRAN_ID: Global database identifier
3) STATE:
–Collecting: the node is currently collecting information from other database servers before it can decide whether it can prepare.
–Prepared: the node has prepared and is holding locks on resources. It may or may not have acknowledged to the local coordinator.
–Committed: the node has committed the transaction, but other nodes may or may not have done so.
–Forced commit: an administrator manually forced the pending transaction to commit at the local node.
–Forced rollback: a pending transaction was forced to roll back at the local node.

4) MIXED: YES means part of the transaction was committed on one node and rolled back at another node.
5) TRAN_COMMENT: Transaction comment
6) HOST: Hostname
7) COMMIT#: Global commit number for the committed transaction



COL LOCAL_TRAN_ID FORMAT A13
COL GLOBAL_TRAN_ID FORMAT A30
COL STATE FORMAT A8
COL MIXED FORMAT A3
COL HOST FORMAT A10
COL COMMIT# FORMAT A10
SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT# FROM DBA_2PC_PENDING;




DBA_2PC_NEIGHBORS :

Lists all incoming and outgoing in-doubt distributed transactions. It also indicates whether the local node is the commit point site in the transaction.


1) LOCAL_TRAN_ID: Local transaction identifier
2) IN_OUT: IN for incoming transactions; OUT for outgoing transactions
3) DATABASE: For incoming transactions, the name of the client database; for outgoing transactions, the name of the database link used to access information on a remote server.
4) DBUSER_OWNER: For incoming transactions, the local account used to connect; for outgoing transactions, the owner of the database link.
5) INTERFACE: C is a commit message; N indicates either a prepared state or a read-only commit.

COL LOCAL_TRAN_ID FORMAT A13
COL IN_OUT FORMAT A6
COL DATABASE FORMAT A25
COL DBUSER_OWNER FORMAT A15
COL INTERFACE FORMAT A3
SELECT LOCAL_TRAN_ID, IN_OUT, DATABASE, DBUSER_OWNER, INTERFACE FROM DBA_2PC_NEIGHBORS;





Other views : 

SQL> SELECT * FROM GLOBAL_NAME;
SQL> select * from DBA_2PC_PENDING;
SQL> select * from DBA_2PC_NEIGHBORS;
SQL> select * from sys.pending_trans$;
SQL> select * from SYS.PENDING_SESSIONS$;
SQL> select * from SYS.PENDING_SUB_SESSIONS$;
SQL> select * from SYS.V$GLOBAL_TRANSACTION  ;

SELECT
    ktuxeusn            usn#     -- 65535 = no-undo transaction
  , ktuxeslt            slot#    -- 65535 = invalid slot#
  , ktuxesqn            seq#
  , ktuxesta            status
  , ktuxecfl            flags
  , ktuxesiz            undo_blks
  , ktuxerdbf           curfile 
  , ktuxerdbb           curblock
  , ktuxescnw * power(2, 32) + ktuxescnb cscn -- commit/prepare commit SCN
  , ktuxeuel            
  -- distributed xacts
  --, ktuxeddbf           r_rfile
  --, ktuxeddbb           r_rblock
  --, ktuxepusn           r_usn#
  --, ktuxepslt           r_slot#
  --, ktuxepsqn           r_seq#
FROM
    x$ktuxe
WHERE ktuxesta != 'INACTIVE'
ORDER BY
    ktuxeusn
  , ktuxeslt
/


SELECT
    "AREA NAME"                               -- VARCHAR2(32)
  , "NUMAPG"                                  -- NUMBER
  , "PAGESIZE" / 1024 mempage_kb                 -- NUMBER
  , ROUND("SEGMENT SIZE" / 1048576) segsize_mb -- NUMBER
  , ROUND("SIZE" / 1048576) area_size_mb            -- NUMBER
  , ROUND("REMAINING ALLOC SIZE" / 1048576) remain_mb                   -- NUMBER
  , "SHMID"                                   -- NUMBER
  , "SEGMENT DISTRIBUTED"                     -- VARCHAR2(20)
  , "AREA FLAGS"                              -- NUMBER
  , "SEGMENT DEFERRED"                        -- VARCHAR2(20)
  , "SEG_START ADDR"                          -- RAW(8)
  , "START ADDR"                              -- RAW(8)
FROM
    x$ksmssinfo
/





Before rolling back or committing, please capture the details below; if these were XA transactions, the responsibility for cleaning them up would lie with the external transaction manager.



***********************
.
SET MARKUP HTML ON SPOOL ON HEAD "<TITLE>2PC - INFO </title> -
<STYLE TYPE='TEXT/CSS'><!--BODY {background: ffffc6} --></STYLE>"
SET ECHO OFF
REM Spooling to html file
REM Insert the queries to be executed between the SPOOL commands.
.
define COLLNAME='SUP_'
define SRDCNAME='2PC'
column SRDCSPOOLNAME new_val SRDCSPOOLNAME
--select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS')||'.htm' SRDCSPOOLNAME from v$instance;
select '&&COLLNAME'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_CDB_'||upper(cdb)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS')||'.htm' SRDCSPOOLNAME from v$instance, v$database;
spool &SRDCSPOOLNAME

ALTER SESSION set NLS_DATE_FORMAT='MM/DD/YY HH24:Mi:SS';
set pagesize 1000
select sysdate from dual;

SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') "Current Container" FROM DUAL;
select con_id, name, cdb, current_scn from gv$database order by inst_id;
select * from gv$containers order by inst_id,con_id;
select * from gv$pdbs order by inst_id,con_id;
select * from gv$version;


select * from global_name;
select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from sys.pending_trans$;
select * from sys.pending_sessions$;
select * from sys.pending_sub_sessions$;

select * from v$session_wait order by SECONDS_IN_WAIT;

SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
KTUXESTA Status,
KTUXECFL Flags
FROM x$ktuxe
WHERE ktuxesta!='INACTIVE';

prompt +++ In memory transaction +++
select /*+ ORDERED */
'----------------------------------------'||'
Curent Time : '|| substr(to_char(sysdate,' HH24.MI.SS'),1,22) ||'
'||'TX start_time: '||t.KTCXBSTM||'
'||'FORMATID: '||g.K2GTIFMT ||'
'||'GTXID: '||g.K2GTITID_EXT ||'
'||'Branch: '||g.K2GTIBID ||'
Local_Tran_Id ='||substr(t.KXIDUSN||'.'||t.kXIDSLT||'.'||t.kXIDSQN,1,15)||'
'||'KTUXESTA='|| x.KTUXESTA ||'
'||'KTUXEDFL='|| x.KTUXECFL ||'
Lock_Info: ID1: ' || ((t.kXIDUSN*64*1024)+ t.kXIDSLT)
||' ID2: '|| t.kXIDSQN
XA_transaction_INFO
from x$k2gte g, x$ktcxb t, x$ktuxe x
where g.K2GTDXCB =t.ktcxbxba and
x.KTUXEUSN = t.KXIDUSN(+) and
x.KTUXESLT = t.kXIDSLT(+) and
x.KTUXESQN =t.kXIDSQN(+);

prompt +++ Timed out, prepared XA transactions +++

select global_tran_fmt, global_foreign_id, branch_id,state,
tran.local_tran_id
from sys.pending_trans$ tran, sys.pending_sessions$ sess
where tran.local_tran_id = sess.local_tran_id
and tran.state = 'prepared'
and dbms_utility.is_bit_set(tran.session_vector, sess.session_id)=1;

select sid, type, id1, id2, lmode, request from GV$lock
where type = 'TM' or type = 'TX';

select LOCAL_TRAN_ID, GLOBAL_TRAN_ID,to_char(FAIL_TIME,'dd-mm-yyyy HH24:MI:SS') FAIL_TIME,STATE, MIXED from DBA_2PC_PENDING;

select NAME, VALUE from V$PARAMETER where upper(NAME) = 'COMPATIBLE';
select NAME, VALUE from V$PARAMETER where upper(NAME) = 'DISTRIBUTED_LOCK_TIMEOUT';

select OWNER, OBJECT_NAME, OBJECT_TYPE, STATUS from DBA_OBJECTS where OBJECT_NAME like 'DBMS_XA%' order by OWNER, OBJECT_NAME, OBJECT_TYPE;

select blocking_session, sid, serial#,event, wait_class, seconds_in_wait
from v$session
where blocking_session is not NULL
order by blocking_session;

spool off
.
SET MARKUP HTML OFF
SET ECHO ON
.
***********************
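Once the details above are captured, an in-doubt transaction can be resolved manually if needed. A minimal sketch, assuming the local transaction id '1.23.456' was taken from DBA_2PC_PENDING (replace it with your own value, and choose commit vs rollback based on the outcome at the other sites):

-- Force the in-doubt transaction to complete locally
COMMIT FORCE '1.23.456';
-- or: ROLLBACK FORCE '1.23.456';

-- Then purge the entry from the pending tables
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.23.456');
COMMIT;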








References : 

Distributed Database, Transactions and Two Phase Commit (Doc ID 13229.1)

https://dbatrain.wordpress.com/wp-content/uploads/2009/01/database-links-masterclass_2.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/managing-distributed-transactions.html#GUID-AB19431E-BF9F-4181-BC57-699F8A06149D

Primary Note for Troubleshooting Oracle Managed Distributed Transactions (Doc ID 100664.1)

https://www.linkedin.com/pulse/resolve-in-doubt-transactions-handling-unstuck-mohsen-taheri/

https://oraclefiles.com/2019/03/04/resolving-in-doubt-transactions/

Saturday, May 18, 2024

Storing Oracle user password in Oracle wallet on client side


We have moved to ExaCC, where using OS authentication for the application user was a challenge, so we had to arrange an alternative: the Secure External Password Store.

Thanks to Oracle Doc 340559.1, which shares the steps for using an Oracle wallet on the client side.


When clients are configured to use the secure external password store, applications can connect to a database with the following CONNECT statement syntax, without specifying database login credentials:
connect /@db_connect_string
 
where db_connect_string is a valid connect string to access the intended database. In this case the database credentials (username and password)
are securely stored in an Oracle wallet created for this purpose. The autologin feature of this wallet is turned on, so the system does not need a password to open the wallet.

From the wallet, the client retrieves the credentials of the user they represent and uses them to access the database.



Configuring Clients to Use the External Password Store 


1) Create a wallet on the client by using the following syntax at the command line:

$ORACLE_HOME/oracle_common/bin/mkstore -wrl <wallet_location> -create    (you will be prompted to choose a wallet password)



2) Create database connection credentials in the wallet by using the following syntax at the command line:

mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>

example:

("N102" in the following example is a connect descriptor located in the tnsnames.ora.)

mkstore -wrl /home/mseibt/pstore -createCredential N102 <user> <password>
Enter password: welcome1          
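For reference, a minimal tnsnames.ora entry for the N102 alias used above could look like the following (host, port, and service name are placeholders, not values from the original note):

N102 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = N102))
  )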



3) In the client sqlnet.ora file, enter the WALLET_LOCATION parameter and set it to the directory location of the wallet you created in Step 1.


WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /home/mseibt/pstore))
)


SQLNET.WALLET_OVERRIDE = TRUE
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_VERSION = 0


4) In the client sqlnet.ora file, enter the SQLNET.WALLET_OVERRIDE parameter and set it to TRUE 

SQLNET.WALLET_OVERRIDE = TRUE



5) With the external password store configured, connect as <user>:

sqlplus /@N102



--Datapump using wallet
nohup expdp /@BSA1EP directory=DUMP_DIR dumpfile=scott_%U.dmp logfile=EXP_SCOTT.log schemas=scott parallel=4 &

-- JDBC using wallet
Connection conn = DriverManager.getConnection ("jdbc:oracle:oci:/@BSA1EP");

-- sqlplus 
sqlplus /@BSA1EP




Managing External Password Store Credentials 


1) Listing the contents of the external password store: 

mkstore -wrl /home/mseibt/pstore -listCredential



2) Adding database login credentials to an existing client wallet:

mkstore -wrl /home/mseibt/pstore -createCredential N101 <user> <password>



3) Modifying database login credentials in a wallet: 

mkstore -wrl /home/mseibt/pstore -modifyCredential N102 <user> newpassword



4) Deleting database login credentials from a wallet: 

mkstore -wrl /home/mseibt/pstore -deleteCredential N101



5) Listing wallet entries: 

mkstore -wrl /home/mseibt/pstore -list
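Illustrative output (entry names follow the oracle.security.client.* pattern; the exact numbering depends on the order in which credentials were added):

oracle.security.client.connect_string1
oracle.security.client.username1
oracle.security.client.password1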



6) Listing entry values: 
The entry names follow the pattern oracle.security.client.connect_stringN, oracle.security.client.usernameN, and oracle.security.client.passwordN, where N is the entry number. Open an entry using a name obtained in step 5:

mkstore -wrl /home/mseibt/pstore -viewEntry oracle.security.client.connect_string1



7) Modifying entry values: 

mkstore -wrl /home/mseibt/pstore -modifyEntry oracle.security.client.password1 newpass




Known Issues: 


Issue 1: Oracle Wallet Cannot Be Created using mkstore (Doc ID 951276.1)

SOLUTION
1. Stop the Database
2. cd $ORACLE_HOME/bin
3. relink all
4. mkstore -wrl $ORACLE_HOME/admin/wallet -create



Reference : 

Using The Secure External Password Store (Doc ID 340559.1)

Wednesday, May 8, 2024

Changing max_string_size to extended for Oracle Pluggable database



Recently the application team reported "ORA-00910: specified length too long for its datatype". Upon checking, we planned to change MAX_STRING_SIZE to EXTENDED.
 
MAX_STRING_SIZE can be set at the PDB level alone; when the change is made only in the PDB, it is replicated to the standby through the redo logs.


We will therefore be changing MAX_STRING_SIZE only at the PDB level.

In order to expand the maximum length of the VARCHAR2 (also NVARCHAR2 and RAW) datatypes in Oracle 12c and beyond, the MAX_STRING_SIZE parameter needs to be set to "extended". This raises the maximum length from 4000 bytes (2000 for RAW) to 32k (32767 bytes).
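As a quick illustration with a hypothetical table (T_WIDE is our own example name):

-- Fails with ORA-00910 while MAX_STRING_SIZE=STANDARD (4000-byte limit):
CREATE TABLE t_wide (big_col VARCHAR2(32767));

-- The same DDL succeeds once MAX_STRING_SIZE=EXTENDED and utl32k.sql have been run;
-- verify the column definition afterwards:
SELECT column_name, data_type, data_length FROM user_tab_columns WHERE table_name = 'T_WIDE';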

WARNING:  You MUST run utl32k.sql immediately after changing max_string_size=extended, else you risk invalidating the database columns.




Notes  : 

1) The COMPATIBLE initialization parameter must be set to 12.0.0.0 or higher to set MAX_STRING_SIZE = EXTENDED.

2) The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns for the views where this is required. The script does not increase the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.

3) When the 32k varchar2 feature is enabled, any function-based indexes that rely on PL/SQL functions returning a VARCHAR2 will become unusable when the utl32k.sql script is run. Such indexes subsequently have to be dropped, and it will not be possible to re-create them: any attempt fails with ORA-1450, because such an index would now have a key size of 32k, which is larger than the maximum allowed key size. Prior to enabling the 32k varchar2 feature, the same indexes had a key size of 4000 bytes, within the allowed limit. A query to inventory such indexes follows these notes.


4) In some cases, running the rdbms/admin/utl32k.sql script may raise one of the errors below:

a- ORA-14415: index in partially dropped state, submit DROP INDEX
This is caused by Bug 21450985 - ORA-14415: INDEX IN PARTIALLY DROPPED STATE, SUBMIT DROP INDEX.

b- ORA-30556: either functional or bitmap join index is defined on the column to be modified
This is caused by Bug 20539050 - ORA-30556 ON USING /RDBMS/ADMIN/UTL32K.
Both bugs are fixed in 12.2. For earlier versions, a one-off patch can be requested by raising an SR with Oracle Support.

c- ORA-02019 when running @?/rdbms/admin/utl32k.sql on a materialized view with a private DB link
This is caused by Bug 19063812 - ORA-02019 WHEN RUNNING @?/RDBMS/ADMIN/UTL32K.SQL ON MV WITH PRIVATE DB LINK.
It is fixed in 12.2.
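The inventory query referenced in note 3 (a hedged sketch; each index's underlying expression still needs manual review to confirm it relies on a PL/SQL function returning a VARCHAR2):

SELECT owner, index_name, table_name, index_type, funcidx_status
FROM   dba_indexes
WHERE  index_type LIKE 'FUNCTION-BASED%'
ORDER  BY owner, index_name;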





Steps : 

1)  Purge the recyclebin in the PDB and take a pre-snapshot of invalid objects in the PDB, as sketched below.
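A minimal sketch (the snapshot table name INVALID_PRE is our own choice, not from any Oracle script):

PURGE DBA_RECYCLEBIN;

CREATE TABLE invalid_pre AS
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';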

2) Shut down the PDB and restart it in UPGRADE mode:

alter pluggable database PDB1 close immediate instances=all;
alter pluggable database PDB1 open upgrade;


3)  Change the setting of MAX_STRING_SIZE in the PDB to EXTENDED.

alter system set MAX_STRING_SIZE=EXTENDED  sid='*';
@?/rdbms/admin/utl32k.sql

 Note:
The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns for the views where this is required. 
The script does not increase the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.
Run the rdbms/admin/utlrp.sql script in the PDB to recompile invalid objects. You must be connected AS SYSDBA to run the script.



4) Reopen the PDB in NORMAL mode

alter pluggable database PDB1 close immediate instances=all;
alter pluggable database PDB1 open;

Verify client connectivity using the service.


5)  Compile invalid objects.

Generate a post-snapshot of invalid objects and compare it with the pre-snapshot taken in step 1, as sketched below.
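One way to compare (assuming the INVALID_PRE snapshot table from step 1; any rows returned are objects that became invalid during the change):

@?/rdbms/admin/utlrp.sql

SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
MINUS
SELECT owner, object_name, object_type
FROM   invalid_pre;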


6)  Replicating to standby:

Since we are changing the parameter at the PDB level, the change should replicate to the standby through the archived redo.
Keep the standby in MOUNT state temporarily, then return it to read-only mode after some time.
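To sanity-check that the standby is keeping up while the change flows through the redo (run on the standby; a quick check, not a full verification):

SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');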





Views : 

select NAME,VALUE,ISPDB_MODIFIABLE,CON_ID from v$system_parameter where upper(NAME)='MAX_STRING_SIZE';



Reference : 

https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/MAX_STRING_SIZE.html
Effect of Max_String_Size on Data guard (Doc ID 2010497.1)
How to Increase the Maximum Size of VARCHAR2, NVARCHAR2, and RAW Columns in 12C Database using MAX_STRING_SIZE ? (Doc ID 1570297.1)
Alter Pluggable Database Close Instances=ALL Not Working (Doc ID 2062080.1)