Tuesday, August 27, 2024

Oracle SQL*Loader handy commands

 

Data can be modified as it is loaded into the Oracle database, and columns can be populated with static or derived values. However, this applies only to the conventional load path (not to direct path loads).



SQL*Loader provides the following methods to load data:

Conventional path loads – construct INSERT statements from the contents of the input datafile based on the predefined specification, and execute the inserts.

Direct path loads – create data blocks in Oracle database block format from the datafile and write them directly to the database. This approach is much faster than the conventional path but is subject to some restrictions.

External table loads – create an external table for the data stored in the datafile and execute INSERT statements to move the data into the target table. External table loads support parallel loading if the datafile is big enough (see the sketch below).
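
For the external table route, a hedged sketch (the table, directory object and file names are illustrative, and the directory object is assumed to already exist):

CREATE TABLE employee_ext (
  id     NUMBER,
  name   VARCHAR2(50),
  dept   VARCHAR2(30),
  salary NUMBER )
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir                      -- assumed directory object
  ACCESS PARAMETERS ( FIELDS TERMINATED BY ',' )
  LOCATION ('employee.txt') );

-- then load with a direct-path insert:
INSERT /*+ APPEND */ INTO employee SELECT * FROM employee_ext;
COMMIT;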



To execute the SQL*Loader tool, you need at least three files:

The input data file stores delimited or raw data.
The parameter file stores command-line options such as the location of the input/output files (a minimal example follows).
The control file contains the specification of how the data is loaded.
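
A minimal parameter file might look like this (all names are illustrative; the control file is the example1.ctl shown further below):

$ cat emp_load.par
userid=scott/tiger
control=example1.ctl
log=example1.log
bad=example1.bad
errors=100

$ sqlldr parfile=emp_load.par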




Sample SQL*Loader command:

sqlldr username/password control=controlfile.ctl data=data.dat log=log1.log bad=bad.log discard=discard.log direct=true parallel=true multithreading=true silent=feedback errors=1000000 bindsize=1000000 readsize=1000000 rows=1000000



SQL*Loader tuning options:

1) Direct path mode
2) bindsize=1000000 readsize=1000000 rows=1000000
3) Multiple control files using LOAD and SKIP (example below)
4) Use the APPEND hint
5) Multithreading and parallel
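
A hedged sketch of option 3, splitting one input file across two concurrent conventional-path sessions with SKIP and LOAD (file names and record counts are illustrative):

sqlldr scott/tiger control=emp.ctl skip=0 load=500000 log=part1.log
sqlldr scott/tiger control=emp.ctl skip=500000 load=500000 log=part2.log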




Sample control file contents:

options (errors=10000) 
load data 
APPEND INTO TABLE AAA  ( COLUMN ) 
INSERT INTO TABLE SSS ( DDDD  ) 


$ cat example1.ctl
load data
 infile '/home/ramesh/employee.txt'
 into table employee
 fields terminated by ","
 ( id, name, dept, salary )
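
Since data can be modified during a conventional-path load (as noted at the top of this post), here is a hedged sketch that populates columns with derived and static values; the load_date and source columns are assumptions, not part of the original example:

load data
 infile '/home/ramesh/employee.txt'
 into table employee
 fields terminated by ","
 ( id,
   name      "UPPER(:name)",
   dept,
   salary,
   load_date SYSDATE,
   source    CONSTANT "FLATFILE" )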



If you would like to insert more data into the table without deleting the existing rows, use the "append" clause as shown in the following control file.

$ vi sqlldr-append-more.ctl
load data
 infile '/home/ramesh/newemployee.txt'
 append
 into table employee
 fields terminated by ","
 ( id, name, dept, salary )


LOAD DATA
 INFILE *
 INTO TABLE tab1 WHEN tab = 'tab1'
   ( tab  FILLER CHAR(4),
     col1 INTEGER
   )
 INTO TABLE tab2 WHEN tab = 'tab2'
   ( tab  FILLER POSITION(1:4),
     col1 INTEGER
   )
BEGINDATA
tab1|1
tab1|2
tab2|2
tab3|3





LOAD DATA
CHARACTERSET WE8EBCDIC500
INFILE 'data.ebc' "fix 86 buffers 1024"
BADFILE 'data.bad'
DISCARDFILE 'data.dsc'
REPLACE
INTO TABLE temp_data
(
 field1    POSITION (1:4)     INTEGER EXTERNAL,
 field2    POSITION (5:6)     INTEGER EXTERNAL,
 field3    POSITION (7:12)    INTEGER EXTERNAL,
 field4    POSITION (13:42)   CHAR,
 field5    POSITION (43:72)   CHAR,
 field6    POSITION (73:73)   INTEGER EXTERNAL,
 field7    POSITION (74:74)   INTEGER EXTERNAL,
 field8    POSITION (75:75)   INTEGER EXTERNAL,
 field9    POSITION (76:86)   INTEGER EXTERNAL
)




Unload data from database Table To Flat File : 


set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
  from tab1
where col2 = 'XYZ';
spool off



set colsep ','
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1, col2, col3
  from tab1
 where col2 = 'XYZ';
spool off



declare
  fp utl_file.file_type;
begin
  -- 'c:\oradata' must be listed in utl_file_dir or mapped to a directory object
  fp := utl_file.fopen('c:\oradata', 'tab1.txt', 'w');
  utl_file.putf(fp, '%s, %s\n', 'TextField', 55);
  utl_file.fclose(fp);
end;
/
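
On recent releases the first argument to utl_file.fopen is normally a directory object rather than a literal path; a hedged sketch (ORA_DUMP is an illustrative name):

CREATE DIRECTORY ora_dump AS 'c:\oradata';
GRANT READ, WRITE ON DIRECTORY ora_dump TO scott;
-- then: fp := utl_file.fopen('ORA_DUMP', 'tab1.txt', 'w');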




Reference :

https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-sql-loader-commands.html#GUID-CD662CD8-DAA7-4A30-BC84-546E4C40DB31




Tuesday, July 16, 2024

Oracle options to check fragmentation and perform Table and Index reorganization

 

There were many requests coming in for reorgs, hence I thought of documenting a handy article on how to check fragmentation and perform a reorg.

After performing a reorganization we also need to gather fresh statistics, and check invalid objects pre and post reorganization.

This article has two main topics:
1)  Ways to check fragmentation
2)  Performing reorganization



For LOBs, I have documented the steps in my previous blog post below:

https://abdul-hafeez-kalsekar.blogspot.com/2024/01/oracle-database-lob-maintenance.html




Ways to check fragmentation:



==> High water mark (HWM) check query:

set verify off
column owner format a10
column alcblks heading 'Allocated|Blocks' just c
column usdblks heading 'Used|Blocks'      just c
column hgwtr heading 'High|Water'         just c
break on owner skip page
select a.owner, a.table_name, b.blocks alcblks, a.blocks usdblks,
       (b.blocks - a.empty_blocks - 1) hgwtr
from dba_tables a, dba_segments b
where a.table_name = b.segment_name
  and a.owner = b.owner
  and a.owner not in ('SYS','SYSTEM')
  and a.blocks <> (b.blocks - a.empty_blocks - 1)
  and a.owner like upper('&owner')||'%'
  and a.table_name like upper('&table_name')||'%'
order by 1, 2;

Enter value for owner:  ABDUL 
Enter value for table_name: TABLE1



==> Check fragmented table:


col TABLE_NAME for a30
col fragmented_size for a15
col table_size for a15
col Used_size for a15
-- note: blocks*8 assumes an 8 KB database block size
select table_name,
       round(((blocks*8)/1024/1024),2)||'GB' "table_size",
       round((num_rows*avg_row_len/1024/1024/1024),2)||'GB' "Used_size",
       (round(((blocks*8/1024/1024)),2) - round((num_rows*avg_row_len/1024/1024/1024),2))||'GB' "fragmented_size"
from dba_tables
where owner not in ('SYS','SYSTEM','PERFSTAT')
  and last_analyzed is not null
  and num_rows > 0
  and table_name = 'SECURE'
  and owner = 'HAFEEZ'
  and blocks > 0
order by 4 asc
/




==> Checking partition table fragmentation (the blocks/1024*16 arithmetic assumes a 16 KB block size):

  SELECT table_name,
         partition_name,
         ROUND ( (blocks / 1024 * 16), 2) "size (mb)",
         ROUND ( (num_rows * avg_row_len / 1024 / 1024), 2) "actual_data (mb)",
         (  ROUND ( (blocks / 1024 * 16), 2)
          - ROUND ( (num_rows * avg_row_len / 1024 / 1024), 2))
            "wasted_space (mb)"
    FROM dba_tab_partitions
   WHERE     (ROUND ( (blocks / 1024 * 16), 2) >
                 ROUND ( (num_rows * avg_row_len / 1024 / 1024), 2))
         AND table_name = 'TBK_XXX'
ORDER BY 5 DESC;

 select 
  table_owner
  ,table_name
  ,partition_name
  ,high_value
  ,compression
  ,compress_for
  ,avg_row_len
  ,round(((blocks*16/1024)),2)/1024 "TOTAL_SIZE_GB" --The amount of space used by the partition in gigabytes.
  ,round((num_rows*avg_row_len/1024/1024),2)/1024 "ACTUAL_SIZE_GB" -- The amount of space used by the partition in gigabytes, calculated based on the number of rows and the average row length.
  ,round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/1024 "FRAGMENTED_SPACE_GB" --The amount of space that is not used by the partition, in gigabytes.
  ,decode(round(((blocks*16/1024)),2),0,0, (round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100) "percentage" --The percentage of unused space in the partition, calculated by dividing the unused space by the total space and multiplying by 100. 
from dba_tab_partitions 
WHERE 
1=1
and table_owner='DEV_DW'
and table_name='TEST_TABLE'
and partition_name='SYS_P21822227'
and round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/1024 > 0
order by 6 desc
;






==> Checking top fragmented tables:

select t1.owner, t1.table_name,
       nvl(t1.blocks*8,1) "size fragmented KB",
       round(nvl((t1.num_rows*t1.avg_row_len/1024),1),2) "actual size KB",
       round(nvl(t1.blocks*8,1) - nvl((t1.num_rows*t1.avg_row_len/1024),1),2) "fragmentation KB"
from dba_tables t1
order by t1.owner, round(nvl(t1.blocks*8,1) - nvl((t1.num_rows*t1.avg_row_len/1024),1),2) desc;



select blocks,last_analyzed,owner,table_name,round((blocks*8),2) "size (kb)" , 
                            round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
                            (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
and owner in ('HAFEEZ')
order by 4  ;




==> Checking index fragmentation:

You can run the ANALYZE INDEX <index> VALIDATE STRUCTURE command on the affected indexes. Each invocation of this command creates a single row in the INDEX_STATS view. That row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE (a sketch follows the queries below). The 'badness' of the index can then be judged by the ratio of DEL_LF_ROWS to LF_ROWS.

For example, you may decide that an index should be rebuilt if more than 20% of its rows are deleted:


ANALYZE INDEX &&index_name VALIDATE STRUCTURE;
 
    col name         heading 'Index Name'          format a30
    col del_lf_rows  heading 'Deleted|Leaf Rows'   format 99999999
    col lf_rows_used heading 'Used|Leaf Rows'      format 99999999
    col ibadness     heading '% Deleted|Leaf Rows' format 999.99999
     SELECT name,
       del_lf_rows,
       lf_rows - del_lf_rows lf_rows_used,
       to_char(del_lf_rows / (lf_rows)*100,'999.99999') ibadness
    FROM index_stats
       where name = upper('&index_name');
    undefine index_name 
 

select del_lf_rows * 100 / decode(lf_rows,0,1,lf_rows) from index_stats
where name = upper('&index_name');
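
A hedged sketch of keeping INDEX_STATS rows across multiple ANALYZE runs, as suggested above (the history table name is illustrative):

create table index_stats_hist as
select sysdate snap_time, s.* from index_stats s where 1 = 0;

-- after each ANALYZE INDEX ... VALIDATE STRUCTURE:
insert into index_stats_hist select sysdate, s.* from index_stats s;
commit;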






==> Checking fragmentation using the Segment Advisor:

set echo off
set feedback off
set verify off
set linesize 80
set serveroutput on size unlimited

spool /home/oracle/scripts/segment_advisor.txt

declare
 v_task_name varchar2(100);
 v_task_desc varchar2(500);
 v_objid number;
begin
 begin
 v_task_name := 'SEGMENT_ADVISOR_RUN';
 v_task_desc := 'MANUAL SEGMENT ADVISOR RUN';


-- Create Segment Advisor task.

 dbms_advisor.create_task
 (
 advisor_name => 'Segment Advisor',
 task_name => v_task_name,
 task_desc => v_task_desc
 );

-- Add all segments to the task.

 for s in (select owner, segment_name, segment_type
           from dba_segments
           where segment_type in ('TABLE', 'INDEX', 'LOB')
             and owner not in ('SYS','SYSTEM','OLAPSYS','SYSMAN','ODM','RMAN','ORACLE_OCM','EXFSYS','OUTLN','DBSNMP','OPS','DIP','ORDSYS','WMSYS','XDB','CTXSYS','DMSYS','SCOTT','TSMSYS','MDSYS','WKSYS','ORDDATA','OWBSYS','ORDPLUGINS','SI_INFORMTN_SCHEMA','PUBLIC','OWBSYS_AUDIT','APPQOSSYS','APEX_030200','FLOWS_030000','WK_TEST','SWBAPPS','WEBDB','OAS_PUBLIC','FLOWS_FILES','QMS'))
 loop
 dbms_advisor.create_object
 (
 task_name => v_task_name,
 object_type => s.segment_type,
 attr1 => s.owner,   -- segment owner; passing USER here would mis-attribute other schemas' segments
 attr2 => s.segment_name,
 attr3 => null,
 attr4 => null,
 attr5 => null,
 object_id => v_objid
 );
 end loop;


-- Set task parameter to recommend all.

 dbms_advisor.set_task_parameter
 (
 task_name => v_task_name,
 parameter => 'RECOMMEND_ALL',
 value => 'TRUE'
 );


-- Run Segment Advisor.

 dbms_advisor.execute_task(v_task_name);

exception when others then
 dbms_output.put_line('Exception: ' || SQLERRM);
 end;


-- Output findings.

 dbms_output.put_line(chr(10));
 dbms_output.put_line('Segment Advisor Recommendations');
 dbms_output.put_line('--------------------------------------------------------------------------------');

for r in (select segment_owner, segment_name, segment_type, partition_name,
 tablespace_name, allocated_space, used_space,
 reclaimable_space, chain_rowexcess, recommendations, c1, c2, c3
 from table(dbms_space.asa_recommendations('TRUE', 'TRUE', 'FALSE'))
 where segment_owner not in ('SYS','SYSTEM','OLAPSYS','SYSMAN','ODM','RMAN','ORACLE_OCM','EXFSYS','OUTLN','DBSNMP','OPS','DIP','ORDSYS','WMSYS','XDB','CTXSYS','DMSYS','SCOTT','TSMSYS','MDSYS','WKSYS','ORDDATA','OWBSYS','ORDPLUGINS','SI_INFORMTN_SCHEMA','PUBLIC','OWBSYS_AUDIT','APPQOSSYS','APEX_030200','FLOWS_030000','WK_TEST','SWBAPPS','WEBDB','OAS_PUBLIC','FLOWS_FILES','QMS')
 order by reclaimable_space desc)
 loop
 dbms_output.put_line('');
 dbms_output.put_line('Owner : ' || r.segment_owner);
 dbms_output.put_line('Segment : ' || r.segment_name);
 dbms_output.put_line('Segment Type : ' || r.segment_type);
 dbms_output.put_line('Partition Name : ' || r.partition_name);
 dbms_output.put_line('Tablespace : ' || r.tablespace_name);
 dbms_output.put_line('Allocated Space : ' || r.allocated_space);
 dbms_output.put_line('Used Space : ' || r.used_space);
 dbms_output.put_line('Reclaimable Space in MB : ' || r.reclaimable_space/1024/1024);
 dbms_output.put_line('Chain Rowexcess : ' || r.chain_rowexcess);
 dbms_output.put_line('Recommendations : ' || r.recommendations);
 dbms_output.put_line('Run First : ' || r.c3);
 dbms_output.put_line('Run Second : ' || r.c2);
 dbms_output.put_line('Run Third : ' || r.c1);
 dbms_output.put_line('--------------------------------------------------------------------------------');
 end loop;

-- Remove Segment Advisor task.
 dbms_advisor.delete_task(v_task_name);
end;
/


spool off;






Ways to remove fragmentation:



The main methods commonly followed are:

Shrink
Alter table move
CTAS method (see the sketch below)
Datapump or exp/imp
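
A hedged sketch of the CTAS method (names are illustrative; it needs an outage window, and indexes, constraints, grants and triggers must be recreated manually):

create table emp_new as select * from emp;
-- recreate indexes / constraints / grants / triggers on emp_new, then:
drop table emp;
rename emp_new to emp;
-- gather fresh statistics afterwards, as noted at the top of this article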




==>   Options  to Perform Reorg of  Table 

SQL> ALTER TABLE <table_name> ENABLE ROW MOVEMENT;
SQL> ALTER TABLE <table_name> SHRINK SPACE CASCADE;
SQL> ALTER TABLE <table_name> DISABLE ROW MOVEMENT;


ALTER TABLE mytable MOVE ONLINE PARALLEL (DEGREE 8) UPDATE INDEXES;




==>  Options  to  Perform  Reorg of  Partition Table  
 

ALTER TABLE TEST_TABLE enable row movement;
ALTER TABLE order MODIFY PARTITION SYS_P21674395 SHRINK SPACE;  

ALTER TABLE order MOVE PARTITION SYS_P21674395 UPDATE INDEXES;
ALTER TABLE order MOVE PARTITION SYS_P21674395 ONLINE UPDATE INDEXES;

ALTER TABLE  TEST_TABLE move PARTITION SYS_P21822227 NOCOMPRESS UPDATE INDEXES;  

ALTER TABLE  order MOVE PARTITION SYS_P21674395  COMPRESS FOR QUERY HIGH UPDATE INDEXES;
 




==>  Options  to  Perform rebuild of index / Remove Index Fragmentation 


ALTER TABLE four_seasons MODIFY PARTITION quarter_two REBUILD UNUSABLE LOCAL INDEXES;

ALTER INDEX sales_area_ix REBUILD PARTITION jan99_ix;
 
ALTER INDEX <index_name> SHRINK SPACE;

ALTER INDEX SCOTT.EMP_IDX REBUILD ONLINE parallel 10 ;






Check the progress of an ALTER ... SHRINK SPACE command

-- The following query finds sessions running an ALTER TABLE statement and shows how much
-- I/O they have done so far; compare GB_READ against the table size (from the queries
-- earlier in this post) to estimate progress.
select a.event, a.WAIT_TIME, c.SQL_TEXT,
c.PHYSICAL_READ_BYTES / 1024 / 1024 / 1024 "GB_READ",
c.PHYSICAL_WRITE_BYTES / 1024 / 1024 / 1024 "GB_WRITE"
from v$session_wait a , v$session b , v$sql c
where UPPER(c.SQL_TEXT) like UPPER('%ALTER TABLE%')
and a.sid = b.sid
and b.SQL_ID = c.SQL_ID;



-- If you have the session id (SID) of the session running the command, use the following, supplying the SID:
select a.event, a.WAIT_TIME, c.SQL_TEXT,
c.PHYSICAL_READ_BYTES / 1024 / 1024 / 1024 "GB_READ",
c.PHYSICAL_WRITE_BYTES / 1024 / 1024 / 1024 "GB_WRITE"
from v$session_wait a , v$session b , v$sql c
where a.sid = &sid
and a.sid = b.sid
and b.SQL_ID = c.SQL_ID;








Online Redefinition to perform a Reorg:



A primary key is mandatory since materialized views and logs are created during the start of redefinition.
 
The user performing the re-organization requires the following
privileges:

* CREATE ANY TABLE
* ALTER ANY TABLE
* DROP ANY TABLE
* LOCK ANY TABLE
* SELECT ANY TABLE
* CREATE ANY INDEX
* CREATE ANY TRIGGER



GENERAL STEPS TO ONLINE REDEFINE A TABLE

1) Determine if the table to be moved can be redefined online (DBMS_REDEFINITION.CAN_REDEF_TABLE)

2) Create the interim table
The interim table need not be the same 'shape' (have similar structure) as the original table

3) Start the redefinition of the table (DBMS_REDEFINITION.START_REDEF_TABLE)
4) Copy the dependents from the original table to the interim table (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS)
5) Execute a final synchronization between the original and interim tables (DBMS_REDEFINITION.SYNC_INTERIM_TABLE)
  This step will minimize the amount of time needed to execute the finish of the redefinition

6) Finish the redefinition (DBMS_REDEFINITION.FINISH_REDEF_TABLE)
7) Verify that the redefinition succeeded by comparing the original table (formerly interim) to the interim table (formerly original)
8) Drop the interim table




---------------------------------------------------------------------
-- Test if table can be redefined...
---------------------------------------------------------------------

 SET SERVEROUTPUT ON
begin
DBMS_REDEFINITION.CAN_REDEF_TABLE
(
uname=>'SCOTT',
tname=>'emp' );
end;
/




---------------------------------------------------------------------
-- CREATE THE INTERIM TABLE
---------------------------------------------------------------------

create table emp_work (
COL1 NUMBER,
COL2 VARCHAR2(1000),
COL3 VARCHAR2(1000),
COL4 VARCHAR2(1000));



---------------------------------------------------------------------
-- Start table redefinition...
---------------------------------------------------------------------

During this phase Oracle will copy (and transform) the data from the production table to the interim table. Oracle will also create a materialized view (snapshot) log on the table to track DML changes.

SET SERVEROUTPUT ON
 begin
 DBMS_REDEFINITION.start_redef_table( 
 uname=>'SCOTT',
 orig_table=>'emp',
 int_table=>'emp_work');
 end;
 /
 



---------------------------------------------------------------------
-- COPY THE TABLE DEPENDENTS FROM THE ORIGINAL TABLE TO THE INTERIM TABLE
---------------------------------------------------------------------
  
 SET SERVEROUTPUT ON
   DECLARE
  l_num_errors PLS_INTEGER;
    begin
   DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
   uname=>'SCOTT',
   orig_table=>'emp',
   int_table=>'emp_work',
  copy_indexes => DBMS_REDEFINITION.cons_orig_params,
   copy_triggers => TRUE,
  copy_constraints => TRUE,
  copy_privileges => TRUE,
   ignore_errors => FALSE,
    num_errors => l_num_errors);
  end;
  /


---------------------------------------------------------------------
-- Make any necessary changes: here we create constraints on the interim table (skip this if only reorganization is needed)
---------------------------------------------------------------------

SQL> alter table emp_work add constraint int_empx_pk primary key(empno);
SQL> alter table emp_work add constraint int_empx_fk foreign key(deptno) references dept(deptno);
SQL> alter table emp_work MODIFY CONSTRAINT int_empx_fk DISABLE KEEP INDEX;



---------------------------------------------------------------------
-- Sync intermediate changes to interim table (optional)
---------------------------------------------------------------------

This step will apply changes captured in the materialized view log to the interim table. Perform this step frequently for high transaction tables.

SQL> exec dbms_redefinition.sync_interim_table('scott', 'emp', 'emp_work');




---------------------------------------------------------------------
-- Finish the redefinition process (this will swap the two tables)...
---------------------------------------------------------------------
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('scott', 'emp', 'emp_work');



---------------------------------------------------------------------
-- Drop the interim working table...
---------------------------------------------------------------------
DROP TABLE emp_work;







Redefine a table without a primary key:


Online redefinition works either by using a primary key or by using ROWID; the default is the primary key.

If we don't have a primary key, we need to use the ROWID option. For that, "options_flag=>DBMS_REDEFINITION.cons_use_rowid" needs to be added to the DBMS_REDEFINITION.CAN_REDEF_TABLE and DBMS_REDEFINITION.start_redef_table calls.

Eg 

 SET SERVEROUTPUT ON
begin
DBMS_REDEFINITION.CAN_REDEF_TABLE(
uname=>'SCOTT',
tname=>'emp',
options_flag=>DBMS_REDEFINITION.cons_use_rowid);
end;
/


SET SERVEROUTPUT ON
 begin
 DBMS_REDEFINITION.start_redef_table( 
 uname=>'SCOTT',
 orig_table=>'emp',
 int_table=>'emp_work',
 options_flag=>DBMS_REDEFINITION.cons_use_rowid);
 end;
 /
 


 
-- To perform online redefinition in parallel 
 
alter session force parallel dml parallel 32;
alter session force parallel query parallel 32;







References : 

https://oracle-base.com/articles/12c/online-move-partitions-and-subpartitions-12cr1#google_vignette

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/vldbg/maintenance-partition-tables-indexes.html#GUID-CDAA2363-B83A-408C-82C9-3E3FA3928D2D
Or 
https://docs.oracle.com/en/database/oracle/oracle-database/18/vldbg/maintenance-partition-tables-indexes.html#GUID-BAFFE31C-07A2-4ED6-BDCF-8ECB79D7FE7D


Attempt to reduce the size of a LOB segment after the table / LOB becomes sparse -> How to Shrink (make less sparse) a LOB (BASICFILE or SECUREFILE)? (Doc ID 1451124.1)

 How to Re-Organize a Table Online (Doc ID 177407.1)

How to release unused space (SHRINK) occupied by a LOB segment by adding / dropping columns? (Doc ID 1417697.1) 


Sunday, June 16, 2024

Restore a table into a dump file using an Oracle RMAN backup of a pluggable database

 
There are many documents online for this, but since we had a requirement to perform the same, I thought of documenting my own.

Instead of importing the table back into the same database, it is often preferable to restore it into a dump file.




Below is the high-level procedure of how the restoration works.

1) Identifies the set of tablespaces that needs to be recovered for this table restore.
2) Creates a temporary instance with a random unique name.
3) Starts the database in nomount state and performs a control file restore based on the until time/SCN value defined.
4) Sets new destinations for the identified data files to be restored.
5) Then initiates the restoration of the SYSTEM, SYSAUX and UNDO tablespaces for the CDB, and the SYSTEM, SYSAUX and EXAMPLE tablespaces for the PDB.
6) Performs recovery of the database until time/SCN by restoring archive logs and applying them to the temporary database.
7) Once done, opens the database read-only, along with any PDB under which the table is available.
8) Restarts in nomount state and initiates restoration of the datafiles related to the application table (user tablespace datafiles).
9) Recovers archive logs and applies until time/SCN.
10) RMAN opens the database and creates a temporary DBA directory to hold the expdp dump.
11) Performs the expdp dump of the table.
12) Shutdown abort of the temporary database.
13) Imports the table dump into the source database.
14) Cleans up all files created at the end of the session.




1) Restore table into dump file

Create a pfile for the auxiliary DB to use, /tmp/init_aux.ora. The username and table name must be in caps.


Add the below parameter in it (ensure other parameters like db_name/sga are specified along with it):
_disable_health_check=true


The main parameters to be added to the auxiliary pfile are (a minimal sketch follows the list):

enable_pluggable_database=true
_clone_one_pdb_recovery=true
wallet_root
tde_configuration
compatible
db_block_size
db_files
diagnostic_dest
_system_trigger_enabled=false
db_domain
sga_target
db_create_file_dest       ( set to the auxiliary destination used in the restore )
log_archive_dest_1        ( set to the auxiliary destination used in the restore )
db_name
db_unique_name
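
A minimal sketch of /tmp/init_aux.ora; every value below is illustrative and must match your environment:

*.db_name='CDB1'                        # must match the target CDB db_name
*.compatible='19.0.0'
*.db_block_size=8192
*.db_files=1024
*.sga_target=4G
*.diagnostic_dest='/tmp/aux'
*.db_create_file_dest='/tmp/aux'        # auxiliary destination used in the restore
*.log_archive_dest_1='LOCATION=/tmp/aux'
*.enable_pluggable_database=true
*._clone_one_pdb_recovery=true
*._system_trigger_enabled=false
*._disable_health_check=true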



rman target=/

RUN {
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/init_aux.ora';
recover table  "MYUSER"."MYTABLE"   OF PLUGGABLE DATABASE pdb2
  until scn 37128783
  auxiliary destination '/tmp/aux'
  datapump destination '/var/oracle/data/export'
  dump file 'saved_copy.dmp'
  notableimport;
}

Or 

RUN {
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/init_aux.ora';
recover table "MYUSER"."MYTABLE" OF PLUGGABLE DATABASE nameofpluggabledb
until time "to_date('07/11/2022 16:34:10','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/var/oracle/backup/aux'
datapump destination '/var/oracle/backup/datapump'
dump file 'saved_copy.dmp'
notableimport;
}



If we want to import the table directly into the database, we can use the below:

RUN {
SET AUXILIARY INSTANCE PARAMETER FILE TO '/tmp/init_aux.ora';
recover table "MYUSER"."MYTABLE" OF PLUGGABLE DATABASE nameofpluggabledb
until time "to_date('07/11/2022 16:34:10','dd/mm/yyyy hh24:mi:ss')"
auxiliary destination '/var/oracle/backup/aux'
REMAP TABLE 'SMEDS'.'RECTEST':'TEST_RECTEST';
}




2) Import the table into the database from the above dump file

impdp youruser/yourpassword@yourhost/nameofpluggabledb full=Y directory=DUMP_DIR dumpfile=saved_copy.dmp
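
A hedged variant that imports only the one table and renames it on the way in (the REMAP_TABLE target name is illustrative):

impdp youruser/yourpassword@yourhost/nameofpluggabledb directory=DUMP_DIR dumpfile=saved_copy.dmp tables=MYUSER.MYTABLE remap_table=MYUSER.MYTABLE:MYTABLE_RESTORED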






References :

RMAN Recover Table Does Not Clean up Files Created Under Dbs Folder For the Auxiliary Instance After it Completes (Doc ID 2882639.1)
RMAN Recover Table Feature in Oracle Database 12c and Higher (Doc ID 1521524.1)
Undo Datafile at Auxiliary Instance Remains after Executing Recover Table (Doc ID 2407419.1)

Monday, May 20, 2024

Oracle: Handling Distributed, Pending, In-Doubt and Remote Transactions

 

A distributed transaction modifies data in two or more databases; it contains DML statements that span multiple nodes. For a distributed transaction to be successful, all or none of the database nodes involved in the transaction must commit or roll back the whole transaction.

Note the difference between a distributed and a remote transaction: a remote transaction contains one or more DML statements that are executed on the SAME remote node.

Manual resolution of in-doubt transactions is appropriate when:
–The in-doubt transaction has locks on critical data or undo segments.
–The cause of the system, network, or software failure cannot be repaired quickly.




Commit or Rollback Pending Transaction -- UNSTUCK TRANSACTION : 


SQL> select local_tran_id,global_tran_id, state,mixed, commit# from dba_2pc_pending;

If the state of the transaction is "prepared" and there is no inconsistency, the transaction can be forced to roll back, or, if the underlying problem which caused the in-doubt transaction is resolved, it can be forced to commit as follows:

SQL> ROLLBACK FORCE '97.33.166765'  /* replace with your own transaction id */
or
SQL> COMMIT FORCE '97.33.166765'    /* replace with your own transaction id */


If the state of the transaction is “collecting” and you execute the above command, you may see an error like:

ERROR at line 1:
ORA-02058: no prepared transaction found with ID 97.33.166765








HANDLING STUCK TRANSACTIONS

Our ultimate goal is to no longer see the transaction in the X$KTUXE table, and to ensure that dictionary tables like PENDING_TRANS$ are consistent with this information.

Stuck transactions can be examined under the below conditions:



2.1. Condition 1: The DBA_2PC_PENDING view has entries for our transaction but there is no transaction in reality

The condition is that when we query dictionary views like DBA_2PC_PENDING, PENDING_TRANS$, etc. we see the transaction, but the transaction does not exist in the X$KTUXE view.

The actual transaction entry view is X$KTUXE ([K]ernel [T]ransaction [U]ndo Transa[X]tion [E]ntry), where the columns correspond to the following sections of the transaction id:

KTUXEUSN.KTUXESLT.KTUXESQN = 96.22.163456   (the concatenation of KTUXEUSN, KTUXESLT and KTUXESQN gives us the transaction number)
KTUXEUSN=96
KTUXESLT=22
KTUXESQN=163456

Therefore, condition 1 holds when DBA_2PC_PENDING has the entry but X$KTUXE does not.

SQL> SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, COMMIT# FROM DBA_2PC_PENDING;

    -- Returns: 96.22.163456 TESTDB.723a8559.96.22.163456    prepared    no    56759427464

SQL> SELECT * FROM X$KTUXE WHERE KTUXEUSN=96 AND KTUXESLT=22 AND KTUXESQN =163456;
  
  -- Returns: No Rows

Solution 1 to Condition 1: If the state of the transaction (in DBA_2PC_PENDING) is committed, rollback forced or commit forced then it can be cleaned by:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456'); 
Solution 2 to Condition 1: If the state of the transaction is prepared, we have to clean manually as follows:

SQL> DELETE FROM SYS.PENDING_TRANS$ WHERE LOCAL_TRAN_ID = '96.22.163456'; 
SQL> DELETE FROM SYS.PENDING_SESSIONS$ WHERE LOCAL_TRAN_ID ='96.22.163456' ; 
SQL> DELETE FROM SYS.PENDING_SUB_SESSIONS$ WHERE LOCAL_TRAN_ID = '96.22.163456';
SQL> COMMIT;




2.2. Condition 2: The DBA_2PC_PENDING view does NOT have entries for our transaction but there IS a transaction.

This is something like an orphan transaction that the dictionary is not aware of.
Trying to force commit or rollback this transaction may result in an error like the one below, since the dictionary is not aware:

SQL> ROLLBACK FORCE '96.22.163456'

-- ORA-02058: no prepared transaction found with ID 96.22.163456
Solution to Condition 2: What we need to do at this point is recover our transaction from being an orphan by inserting some dummy records into the dictionary tables (and so the views) and then force a rollback or commit. You do not have to change the parameters in the insert command other than the transaction id.

SQL> ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
    
SQL> INSERT INTO PENDING_TRANS$ (LOCAL_TRAN_ID, GLOBAL_TRAN_FMT, GLOBAL_ORACLE_ID, STATE, STATUS, SESSION_VECTOR, RECO_VECTOR, TYPE#, FAIL_TIME,RECO_TIME)
    VALUES
    (
         '96.22.163456', 
         299354,
         'XXXXXXX.12345.1.2.3',
         'prepared','P',
         hextoraw( '00000001' ),
         hextoraw( '00000000' ),
         0, sysdate, sysdate
    );
    
SQL> INSERT INTO PENDING_SESSIONS$
    VALUES
    (
     '96.22.163456', 
     1, hextoraw('05004F003A1500000104'),
     'C', 0, 30258592, '',
     146
    );
SQL>  COMMIT;

Now, we should be able to rollback or commit.


SQL> ROLLBACK FORCE '96.22.163456' 
or
SQL> COMMIT FORCE '96.22.163456'

Lastly, we remove the dummy entry from the dictionary:

SQL> ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
    
SQL> ALTER SYSTEM SET "_smu_debug_mode" = 4;
    
SQL> COMMIT;
    
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');
    
SQL> ALTER SYSTEM SET "_smu_debug_mode" = 0;
    
SQL> COMMIT;
Check to see whether the transaction has gone:

SQL> SELECT * FROM X$KTUXE WHERE KTUXEUSN=96 AND KTUXESLT=22 AND KTUXESQN =163456;
    -- Returns: No Rows    



2.3. Condition 3: DBA_2PC_PENDING has an entry and there is a transaction, but COMMIT or ROLLBACK HANGS!

In the situation where COMMIT FORCE or ROLLBACK FORCE hangs,
Trying to purge the transaction will give an error like:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');
    -- ERROR at line 1:
    -- ORA-06510: PL/SQL: unhandled user-defined exception
    -- ORA-06512: at "SYS.DBMS_TRANSACTION", line 94
    -- ORA-06512: at line 1


Solution to Condition 3: The solution is the combination of Cond1 and Cond2:

First, delete the dictionary entries:

SQL> DELETE FROM SYS.PENDING_TRANS$ WHERE LOCAL_TRAN_ID = '96.22.163456'; 
    
SQL> DELETE FROM SYS.PENDING_SESSIONS$ WHERE LOCAL_TRAN_ID ='96.22.163456' ; 
    
SQL> DELETE FROM SYS.PENDING_SUB_SESSIONS$ WHERE LOCAL_TRAN_ID = '96.22.163456';
    
SQL> COMMIT;
Then, insert dummy record, force commit and finally purge the transaction:

SQL> ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
    
SQL> INSERT INTO PENDING_TRANS$ (LOCAL_TRAN_ID, GLOBAL_TRAN_FMT, GLOBAL_ORACLE_ID,
STATE, STATUS, SESSION_VECTOR, RECO_VECTOR, TYPE#, FAIL_TIME,RECO_TIME)
    VALUES
    (
         '96.22.163456',
         306206,
         'XXXXXXX.12345.1.2.3',
         'prepared','P',
         hextoraw( '00000001' ),
         hextoraw( '00000000' ),
         0, sysdate, sysdate
    );
     
SQL> INSERT INTO PENDING_SESSIONS$
    VALUES
    (
         '96.22.163456',
         1, hextoraw('05004F003A1500000104'),
         'C', 0, 30258592, '',
         146
    );
SQL> COMMIT;
    
SQL> COMMIT FORCE '96.22.163456';
    
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');







Two-Phase Commit (2PC)

The two phase-commit mechanism is used to ensure data integrity in a distributed transaction. It is automatically used during all distributed transactions and coordinates either the commit or roll back of all the changes in the transaction as a single, self-contained unit.

In short, there are three phases for the 2PC:
PREPARE: The initiating node asks each of its referenced nodes to promise to perform a commit or rollback when told to do so. The preparing node flushes the redo log buffer to the online redo log. It converts locks to in-doubt transaction locks on the data blocks and passes its highest SCN value to the initiating node.
COMMIT: The initiating node commits and writes the committed SCN to its redo log. The data block locks are released.
FORGET: Pending transaction tables and related database views are cleared (DBA_2PC_PENDING / DBA_2PC_NEIGHBORS).
All the above phases take place quickly and transparently to the application where the transaction originated.

NOTE:
A crash during the PREPARE Phase results in a ROLLBACK
A crash during the COMMIT Phase results in either COMMIT or ROLLBACK





Recoverer Process (RECO)

RECO is a mandatory background process that, in a distributed database, automatically resolves failures in distributed transactions. The RECO process of a node automatically connects to the other databases involved in an in-doubt distributed transaction; it can use an existing connection or establish a new connection to the other nodes involved in the failed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.

At exponentially growing time intervals, the RECO background process of a node attempts to recover the local portion of an in-doubt distributed transaction.

init.ora parameter distributed_transactions > 0  (for RECO to start)

If init.ora open_links is set to 0, then no distributed transactions are allowed.

The RECO process is present only if the instance permits distributed transactions.

You can enable and disable RECO using the ALTER SYSTEM statement with the ENABLE/DISABLE DISTRIBUTED RECOVERY options.

You can disable distributed recovery if the database is mounted but not open.

select dbid,name,log_mode,open_mode from v$database;
select instance_number,instance_name,status from v$instance;


Disabling and Enabling RECO

You can enable and disable RECO using the ALTER SYSTEM statement with the ENABLE/DISABLE DISTRIBUTED RECOVERY options. For example, you can temporarily disable RECO to force the failure of a two-phase commit and manually resolve the in-doubt transaction.

-To disable RECO:
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;

-To enable RECO and let it automatically resolve in-doubt transactions:
ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;




Database Parameters related to distributed Transactions :

DISTRIBUTED_LOCK_TIMEOUT:
DISTRIBUTED_LOCK_TIMEOUT specifies the maximum time in seconds that the instance waits for locked resources. The default is 60 seconds.
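
An illustrative change (the value 120 is an assumption; the parameter is static, so the instance must be restarted):

SQL> ALTER SYSTEM SET distributed_lock_timeout = 120 SCOPE=SPFILE;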



COMMIT_POINT_STRENGTH:
In a two-phase commit, one site or node is the designated "commit point site". This is determined by the values of the COMMIT_POINT_STRENGTH parameter in all the database instances involved in the transaction which are adjacent to, and include, the global coordinator.

The values range from 0 to 255, and if not set, the value is determined by the software.

The COMMIT_POINT_STRENGTH parameter in nonadjacent nodes is only used to resolve who will be the recursive "commit point site" when a local coordinator is the main "commit point site".





Known Issues :

ORA-02053: transaction committed, some remote DBs may be in-doubt
The transaction has been locally committed; however, we have lost communication with one or more local coordinators.

ORA-02054: transaction in-doubt
The transaction is neither committed nor rolled back locally, and we have lost communication with the global coordinator.

ORA-02050: transaction rolled back, some remote DBs may be in-doubt
Indicates that a communication error occurred during the two-phase commit.

ORA-01591: lock held by in-doubt distributed transaction
Users/applications encountering the above error are unable to proceed with their work. In this case, Oracle automatically rolls back the user-attempted transaction and the DBA now has to manually commit or roll back the in-doubt transaction.
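
A hedged sketch of the usual response to ORA-01591 (the transaction id is illustrative; the error message itself reports the id holding the lock):

SQL> SELECT local_tran_id, state, mixed FROM dba_2pc_pending;
SQL> ROLLBACK FORCE '97.33.166765';   -- or COMMIT FORCE, after investigation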






Views  : 

DBA_2PC_PENDING and DBA_2PC_NEIGHBORS are the two main views used for checking details.



DBA_2PC_PENDING

Use this view to determine the global commit number for a particular transaction ID. It lists all in-doubt distributed transactions.
The view is empty until populated by an in-doubt transaction. After the transaction is resolved, the view is purged.


Note: When the LOCAL_TRAN_ID and GLOBAL_TRAN_ID values are the same, the node is the global coordinator of the transaction.

1) LOCAL_TRAN_ID: Local transaction identifier
2) GLOBAL_TRAN_ID: Global database identifier
3) STATE:
–Collecting: Node is currently collecting information from other database servers before it can decide whether it can prepare.
–Prepared: Node has prepared and is holding locks on resources. It may or may not have acknowledged to the local coordinator.
–Committed: Node has committed the transaction, but other nodes may or may not have done so.
–Forced commit: Administrator manually forced the commit of the pending transaction at the local node.
–Forced rollback: A pending transaction was forced to roll back at the local node.

4) MIXED: YES means part of the transaction was committed on one node and rolled back on another node.
5) TRAN_COMMENT: Transaction comment
6) HOST: Hostname
7) COMMIT#: Global commit number for the committed transaction



COL LOCAL_TRAN_ID FORMAT A13
COL GLOBAL_TRAN_ID FORMAT A30
COL STATE FORMAT A8
COL MIXED FORMAT A3
COL HOST FORMAT A10
COL COMMIT# FORMAT A10
SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT# FROM DBA_2PC_PENDING;




DBA_2PC_NEIGHBORS :

Lists all incoming and outgoing in-doubt distributed transactions. It also indicates whether the local node is the commit point site in the transaction.


1) LOCAL_TRAN_ID: Local transaction identifier
2) IN_OUT: IN for incoming transactions; OUT for outgoing transactions
3) DATABASE: For incoming transactions, the name of the client database; for outgoing transactions, the name of the database link used to access information on a remote server.
4) DBUSER_OWNER: For incoming transactions, the local account used to connect; for outgoing transactions, the owner of the database link.
5) INTERFACE: C is a commit message; N indicates either a prepared state or a read-only commit.

COL LOCAL_TRAN_ID FORMAT A13
COL IN_OUT FORMAT A6
COL DATABASE FORMAT A25
COL DBUSER_OWNER FORMAT A15
COL INTERFACE FORMAT A3
SELECT LOCAL_TRAN_ID, IN_OUT, DATABASE, DBUSER_OWNER, INTERFACE FROM DBA_2PC_NEIGHBORS;





Other views : 

SQL> SELECT * FROM GLOBAL_NAME;
SQL> select * from DBA_2PC_PENDING;
SQL> select * from DBA_2PC_NEIGHBORS;
SQL> select * from sys.pending_trans$;
SQL> select * from SYS.PENDING_SESSIONS$;
SQL> select * from SYS.PENDING_SUB_SESSIONS$;
SQL> select * from SYS.V$GLOBAL_TRANSACTION  ;

SELECT
    ktuxeusn            usn#     -- 65535 = no-undo transaction
  , ktuxeslt            slot#    -- 65535 = invalid slot#
  , ktuxesqn            seq#
  , ktuxesta            status
  , ktuxecfl            flags
  , ktuxesiz            undo_blks
  , ktuxerdbf           curfile 
  , ktuxerdbb           curblock
  , ktuxescnw * power(2, 32) + ktuxescnb cscn -- commit/prepare commit SCN
  , ktuxeuel            
  -- distributed xacts
  --, ktuxeddbf           r_rfile
  --, ktuxeddbb           r_rblock
  --, ktuxepusn           r_usn#
  --, ktuxepslt           r_slot#
  --, ktuxepsqn           r_seq#
FROM
    x$ktuxe
WHERE ktuxesta != 'INACTIVE'
ORDER BY
    ktuxeusn
  , ktuxeslt
/


SELECT
    "AREA NAME"                               -- VARCHAR2(32)
  , "NUMAPG"                                  -- NUMBER
  , "PAGESIZE" / 1024 mempage_kb                 -- NUMBER
  , ROUND("SEGMENT SIZE" / 1048576) segsize_mb -- NUMBER
  , ROUND("SIZE" / 1048576) area_size_mb            -- NUMBER
  , ROUND("REMAINING ALLOC SIZE" / 1048576) remain_mb                   -- NUMBER
  , "SHMID"                                   -- NUMBER
  , "SEGMENT DISTRIBUTED"                     -- VARCHAR2(20)
  , "AREA FLAGS"                              -- NUMBER
  , "SEGMENT DEFERRED"                        -- VARCHAR2(20)
  , "SEG_START ADDR"                          -- RAW(8)
  , "START ADDR"                              -- RAW(8)
FROM
    x$ksmssinfo
/





Before rolling back or committing, please capture the details below, because if these transactions were XA transactions, the responsibility for cleaning them up would lie with the external transaction manager.



***********************
.
SET MARKUP HTML ON SPOOL ON HEAD "<TITLE>2PC - INFO </title> -
<STYLE TYPE='TEXT/CSS'><!--BODY {background: ffffc6} --></STYLE>"
SET ECHO OFF
REM Spooling to html file
REM Insert the queries to be executed between the SPOOL commands.
.
define COLLNAME='SUP_'
define SRDCNAME='2PC'
column SRDCSPOOLNAME new_val SRDCSPOOLNAME
--select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS')||'.htm' SRDCSPOOLNAME from v$instance;
select '&&COLLNAME'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_CDB_'||upper(cdb)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS')||'.htm' SRDCSPOOLNAME from v$instance, v$database;
spool &SRDCSPOOLNAME

ALTER SESSION set NLS_DATE_FORMAT='MM/DD/YY HH24:Mi:SS';
set pagesize 1000
select sysdate from dual;

SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') "Current Container" FROM DUAL;
select con_id, name, cdb, current_scn from gv$database order by inst_id;
select * from gv$containers order by inst_id,con_id;
select * from gv$pdbs order by inst_id,con_id;
select * from gv$version;


select * from global_name;
select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from sys.pending_trans$;
select * from sys.pending_sessions$;
select * from sys.pending_sub_sessions$;

select * from v$session_wait order by SECONDS_IN_WAIT;

SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
KTUXESTA Status,
KTUXECFL Flags
FROM x$ktuxe
WHERE ktuxesta!='INACTIVE';

prompt +++ In memory transaction +++
select /*+ ORDERED */
'----------------------------------------'||'
Curent Time : '|| substr(to_char(sysdate,' HH24.MI.SS'),1,22) ||'
'||'TX start_time: '||t.KTCXBSTM||'
'||'FORMATID: '||g.K2GTIFMT ||'
'||'GTXID: '||g.K2GTITID_EXT ||'
'||'Branch: '||g.K2GTIBID ||'
Local_Tran_Id ='||substr(t.KXIDUSN||'.'||t.kXIDSLT||'.'||t.kXIDSQN,1,15)||'
'||'KTUXESTA='|| x.KTUXESTA ||'
'||'KTUXEDFL='|| x.KTUXECFL ||'
Lock_Info: ID1: ' || ((t.kXIDUSN*64*1024)+ t.kXIDSLT)
||' ID2: '|| t.kXIDSQN
XA_transaction_INFO
from x$k2gte g, x$ktcxb t, x$ktuxe x
where g.K2GTDXCB =t.ktcxbxba and
x.KTUXEUSN = t.KXIDUSN(+) and
x.KTUXESLT = t.kXIDSLT(+) and
x.KTUXESQN =t.kXIDSQN(+);

prompt +++ Timed out, prepared XA transactions +++

select global_tran_fmt, global_foreign_id, branch_id,state,
tran.local_tran_id
from sys.pending_trans$ tran, sys.pending_sessions$ sess
where tran.local_tran_id = sess.local_tran_id
and tran.state = 'prepared'
and dbms_utility.is_bit_set(tran.session_vector, sess.session_id)=1;

select sid, type, id1, id2, lmode, request from GV$lock
where type = 'TM' or type = 'TX';

select LOCAL_TRAN_ID, GLOBAL_TRAN_ID,to_char(FAIL_TIME,'dd-mm-yyyy HH24:MI:SS') FAIL_TIME,STATE, MIXED from DBA_2PC_PENDING;

select NAME, VALUE from V$PARAMETER where upper(NAME) = 'COMPATIBLE';
select NAME, VALUE from V$PARAMETER where upper(NAME) = 'DISTRIBUTED_LOCK_TIMEOUT';

select OWNER, OBJECT_NAME, OBJECT_TYPE, STATUS from DBA_OBJECTS where OBJECT_NAME like 'DBMS_XA%' order by OWNER, OBJECT_NAME, OBJECT_TYPE;

select blocking_session, sid, serial#,event, wait_class, seconds_in_wait
from v$session
where blocking_session is not NULL
order by blocking_session;

spool off
.
SET MARKUP HTML OFF
SET ECHO ON
.
***********************








References : 

Distributed Database, Transactions and Two Phase Commit (Doc ID 13229.1)

https://dbatrain.wordpress.com/wp-content/uploads/2009/01/database-links-masterclass_2.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/managing-distributed-transactions.html#GUID-AB19431E-BF9F-4181-BC57-699F8A06149D

Primary Note for Troubleshooting Oracle Managed Distributed Transactions (Doc ID 100664.1)

https://www.linkedin.com/pulse/resolve-in-doubt-transactions-handling-unstuck-mohsen-taheri/

https://oraclefiles.com/2019/03/04/resolving-in-doubt-transactions/

Saturday, May 18, 2024

Storing Oracle user password in Oracle wallet on client side


We have moved to ExaCC; using OS authentication for the application user was a challenge, so we had to arrange an alternative: the Secure External Password Store.

Thanks to Oracle Doc 340559.1, which shares the steps for using an Oracle wallet on the client side.


When clients are configured to use the secure external password store, applications can connect to a database with the following CONNECT statement syntax, without specifying database login credentials:
connect /@db_connect_string
 
where db_connect_string is a valid connect string to access the intended database. In this case, the database credentials, username and password, 
are securely stored in an Oracle wallet created for this purpose. The autologin feature of this wallet is turned on so the system does not need a password to open the wallet.

From the wallet, it gets the credentials to access the database for the user they represent.



Configuring Clients to Use the External Password Store 


1) Create a wallet on the client by using the following syntax at the command line:

$ORACLE_HOME/oracle_common/bin/mkstore -wrl <wallet_location> -create    ( you will be prompted for a wallet password )



2) Create database connection credentials in the wallet by using the following syntax at the command line:

mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>

example:

("N102" in the following example is a connect descriptor located in the tnsnames.ora.)

mkstore -wrl /home/mseibt/pstore -createCredential N102 <user> <password>
Enter password: welcome1          
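
For completeness, an illustrative tnsnames.ora entry for N102 (host and service names are assumptions):

N102 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = n102.example.com))
  )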



3) In the client sqlnet.ora file, enter the WALLET_LOCATION parameter and set it to the directory location of the wallet you created in Step 1.


WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /home/mseibt/pstore))
)


SQLNET.WALLET_OVERRIDE = TRUE
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_VERSION = 0


4) In the client sqlnet.ora file, enter the SQLNET.WALLET_OVERRIDE parameter and set it to TRUE 

SQLNET.WALLET_OVERRIDE = TRUE



5) With the external password store configured, connect as <user>:

sqlplus /@N102



--Datapump using wallet
nohup expdp /@BSA1EP directory=DUMP_DIR dumpfile=scott_%U.dmp logfile=EXP_SCOTT.log schemas=scott parallel=4 &

-- JDBC using wallet
Connection conn = DriverManager.getConnection ("jdbc:oracle:oci:/@BSA1EP");

-- sqlplus 
sqlplus /@BSA1EP




Managing External Password Store Credentials 


1) Listing the contents of the external password store: 

mkstore -wrl /home/mseibt/pstore -listCredential



2) Adding database login credentials to an existing client wallet:

mkstore -wrl /home/mseibt/pstore -createCredential N101 <user> <password>



3) Modifying database login credentials in a wallet: 

mkstore -wrl /home/mseibt/pstore -modifyCredential N102 <user> newpassword



4) Deleting database login credentials from a wallet: 

mkstore -wrl /home/mseibt/pstore -deleteCredential N101



5) Listing wallet entries: 

mkstore -wrl /home/mseibt/pstore -list



6) Listing entry values: 
The name will be something like oracle.wallet.passwordn where n is the entry number, Finally, open the entry using this:

mkstore -wrl /home/mseibt/pstore -viewEntry oracle.security.client.connect_string1    (  <name obtained>  in step  5) 



7) Modifying entry values: 

mkstore -wrl /home/mseibt/pstore -modifyEntry oracle.security.client.password1 newpass




Known Issues: 


Issues 1 : Oracle Wallet Cannot Be Created using mkstore (Doc ID 951276.1)

SOLUTION
1. Stop the Database
2. cd $ORACLE_HOME/bin
3. relink all
4. mkstore -wrl $ORACLE_HOME/admin/wallet -create



Reference : 

Using The Secure External Password Store (Doc ID 340559.1)

Wednesday, May 8, 2024

Changing max_string_size to extended for Oracle Pluggable database



Recently the application team reported "ORA-00910: specified length too long for its datatype". Upon checking, we planned to change MAX_STRING_SIZE to EXTENDED.

We can also set this at the PDB level only, and when the change is done only at the PDB level it is replicated to the standby through the redo logs.


We will be changing MAX_STRING_SIZE only at the PDB level.

In order to expand the maximum value of varchar2 (also nvarchar2 and raw) datatypes in Oracle 12c and beyond, the max_string_size parameter needs to be set to "extended".  This will change the maximum value from 4k (4096) to 32k (32767).

WARNING:  You MUST run utl32k.sql immediately after changing max_string_size=extended, else you risk invalidating the database columns.




Notes  : 

1) The COMPATIBLE initialization parameter must be set to 12.0.0.0 or higher to set MAX_STRING_SIZE = EXTENDED.

2) The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns for the views where this is required. The script does not increase the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.

3) When the 32k varchar2 feature is enabled, then any function based indexes that rely on PL/SQL functions returning a varchar2 will become unusable when the utl32k.sql script is run. Such indexes will subsequently have to be dropped and it will not be possible to re-create them. Any attempt to recreate such an index will fail with ORA-1450. The reason for this is that such functional indexes will have a "key" size of 32k and this is larger than the maximum allowed key size. Prior to enabling the 32k varchar2 feature, such indexes would have had a key size of 4000 bytes, which is within the allowed limit for an index key.
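
An illustrative pre-check for such function-based indexes before enabling the feature (filter further for your own schemas as needed):

select owner, index_name, table_name
from dba_indexes
where index_type like 'FUNCTION-BASED%';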


4) In some cases

Running the rdbms/admin/utl32k.sql script may cause the below error:

a-ORA-14415: index in partially dropped state, submit DROP INDEX
This is caused by BUG 21450985 - ORA-14415: INDEX IN PARTIALLY DROPPED STATE, SUBMIT DROP INDEX

b-ORA-30556: either functional or bitmap join index is defined on the column to be modified
This is caused by Bug 20539050 - ORA-30556 ON USING /RDBMS/ADMIN/UTL32K
Both bugs are fixed in version 12.2. For earlier versions a one-off patch can be requested by creating an SR with Oracle Support.

c-ORA-02019 WHEN RUNNING @?/RDBMS/ADMIN/UTL32K.SQL ON mview with private DB link
This is caused by BUG 19063812 - ORA-02019 WHEN RUNNING @?/RDBMS/ADMIN/UTL32K.SQL ON MV WITH PRIVATE DB LINK
It is fixed in 12.2.





Steps : 

1)  Purge the recyclebin in the PDB

2)  Take a pre-change snapshot of invalid objects in the PDB, as sketched below:
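
A minimal sketch, assuming a scratch table named invalid_pre:

create table invalid_pre as
select owner, object_name, object_type
from dba_objects where status = 'INVALID';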

3)  Shut down the PDB and restart it in upgrade mode

alter pluggable database PDB1 close immediate instances=all ;
ALTER PLUGGABLE DATABASE pdb-name OPEN UPGRADE;


4)  Change the setting of MAX_STRING_SIZE in the PDB to EXTENDED.

alter system set MAX_STRING_SIZE=EXTENDED  sid='*';
@?/rdbms/admin/utl32k.sql

 Note:
The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns for the views where this is required. 
The script does not increase the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.
Run the rdbms/admin/utlrp.sql script in the PDB to recompile invalid objects. You must be connected AS SYSDBA to run the script.



5)  Reopen the PDB in NORMAL mode

alter pluggable database PDB1 close immediate instances=all ;
ALTER PLUGGABLE DATABASE pdb-name OPEN  ;

Verify client connectivity using service .


6)  Compile invalid objects.

Generate a post-change snapshot of invalid objects and compare it with the pre-change snapshot taken in step 2.


7)  Replicating to standby:

Since we are changing at the PDB level, the change should replicate to the standby through the archive logs.
Keep the standby in mount state temporarily, then return it to read-only mode after some time.





Views : 

select NAME,VALUE,ISPDB_MODIFIABLE,CON_ID from v$system_parameter where upper(NAME)='MAX_STRING_SIZE';



Reference : 

https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/MAX_STRING_SIZE.html
Effect of Max_String_Size on Data guard (Doc ID 2010497.1)
How to Increase the Maximum Size of VARCHAR2, NVARCHAR2, and RAW Columns in 12C Database using MAX_STRING_SIZE ? (Doc ID 1570297.1)
Alter Pluggable Database Close Instances=ALL Not Working (Doc ID 2062080.1)

Sunday, March 31, 2024

Setting max size of each Oracle Pluggable Database Inside same Cdb -- For capacity management

 
We had a requirement to assign a maximum size to each PDB for capacity planning, when we had multiple PDBs inside the same CDB.

Luckily Oracle Doc ID 2166477.1 came to the rescue with the steps below.


Default of MAX_PDB_STORAGE is no limit.



Run the following SQL in the current PDB to limit the total size of all datafiles of that PDB:

ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE <MAXSIZE>);
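
For example (200G is an illustrative value; UNLIMITED removes the cap again):

ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE 200G);
ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE UNLIMITED);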


To verify the setting, run below in current PDB:

select PROPERTY_VALUE FROM database_properties WHERE property_name = 'MAX_PDB_STORAGE';


To check every PDB's setting, run below in CDB:

select PROPERTY_NAME,PROPERTY_VALUE,DESCRIPTION,CON_ID FROM cdb_properties WHERE property_name = 'MAX_PDB_STORAGE';



The storage limits for PDBs (like MAX_PDB_STORAGE and MAX_SHARED_TEMP_SIZE) are also  stored in CDB_PROPERTIES.

  SELECT name,
     total_size,
     NVL(property_value, 'UNLIMITED') AS "MAX_SIZE"
 FROM v$containers c, cdb_properties p
  WHERE p.con_id(+) = c.con_id
  AND p.property_name(+) = 'MAX_PDB_STORAGE'
  /



Note : 

The MAX_SIZE column in V$PDBS reflects the original MAXSIZE set using the “ALTER PLUGGABLE DATABASE STORAGE” statement. However, this MAX_SIZE value does not include the discount percent (_pdb_max_size_discount with default 10%) which is included in all internal calculations to get the real maxsize.

To get the real MAXSIZE including the discount percent, query ACTUAL_MAX_SIZE from X$CON 


SQL> select ACTUAL_MAX_SIZE from X$CON;
SQL> SELECT total_size, max_size, (max_size + (max_size*0.10)) AS REAL_MAX_SIZE FROM v$pdbs;


We will get the error below (ORA-65114) if we try to add space in the PDB beyond ACTUAL_MAX_SIZE:

ORA-65114: space usage in container is too high



References : 

How to set and check the max size of PLUGGABLE DATABASE STORAGE (Doc ID 2166477.1)

Saturday, March 16, 2024

TDE and TLS encryption for Oracle database in 19c


I thought of documenting this since on ExaCC we have pluggable databases and they use TDE. This article will serve as a one-stop insight into TDE wallet management.

For both TDE and TLS we create a wallet. In the case of TDE the wallet contains encryption keys, while a TLS store contains certificates.


On ExaCC there is also one more external wallet, called the Secure External Password Store (SEPS), for storing the sys password; it is managed by mkstore. I will cover SEPS in another blog post.

This blog post is dedicated to TDE and TLS.



################################################################
Encryption of Data at Rest (TDE)
################################################################

-- Encryption of the Data Files on Disk


Main Steps For TDE Include 
  • TDE DB Parameter Changes
  • Configure a Software KeyStore
  • Create Master Key
  • Create Auto-login KeyStore
  • Encryption of Tablespaces
  • Backup Keys
 
When we talk about encryption, it can be either column-level encryption or tablespace encryption. Here we are going to discuss tablespace encryption.


Wallet Types :
- United – TDE master encryption keys for the CDB and all PDBs are in the same keystore
- Isolated – each PDB has its own keystore holding its own TDE master encryption key
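To check which mode each container is running in, the KEYSTORE_MODE column of v$encryption_wallet can be queried; a quick sketch:

SQL> select con_id, wrl_type, keystore_mode, wallet_type, status from v$encryption_wallet;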


Oracle database 12c introduced a new way to manage keystores, encryption keys and secrets using the ADMINISTER KEY MANAGEMENT command. 
This replaces the ALTER SYSTEM SET ENCRYPTION KEY and ALTER SYSTEM SET ENCRYPTION WALLET commands for key and wallet administration from previous releases.
The keystore file can be placed on the file system or under an HSM. The TDE_CONFIGURATION parameter controls the type and location of the keystore.

If using Oracle 18c or later, specifying the wallet location in sqlnet.ora is officially deprecated,
and you should use the WALLET_ROOT and TDE_CONFIGURATION parameters instead.


New parameters to Set in the Database

 ENCRYPT_NEW_TABLESPACES - specifies whether to encrypt newly created user tablespaces (AES128)

 WALLET_ROOT - specifies the path to the root of a directory tree containing the wallet ❋

 TDE_CONFIGURATION - specifies the Key Store Type being used (Hardware / Software)
 _TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM ❋ - specifies the default algorithm to use for encryption


❋ The WALLET_ROOT parameter was introduced in 19c; prior to this the wallet location was specified in the sqlnet.ora file. Eg:

 ENCRYPTION_WALLET_LOCATION =
 (SOURCE =
 (METHOD = FILE)
 (METHOD_DATA =
 (DIRECTORY = /u01/app/oracle/product/12.1.0.2/db_1/network/admin/$ORACLE_SID/wallet) ) )

 ❋ In 19c this is an underscore parameter. The regular parameter without the underscore was introduced in 20c. See Doc ID 2654121.1

 Patch 30398099 must be installed to use this parameter (Patch 30398099 has been integrated into the April 2021 RU for Oracle Database  release 19c (19.11))



TDE DB Parameter Changes

1. Set the Parameters
SQL> ALTER SYSTEM SET ENCRYPT_NEW_TABLESPACES = 'ALWAYS' SCOPE = SPFILE SID = '*';
SQL> ALTER SYSTEM SET WALLET_ROOT = '+DATA/PRMYDB-RAC' SCOPE = SPFILE SID ='*';
SQL> ALTER SYSTEM SET "_tablespace_encryption_default_algorithm" = "AES256" SCOPE=SPFILE SID='*';


2. Bounce the Database

$ srvctl stop database -d prmydb-rac
$ srvctl start database -d prmydb-rac


3. Set the type of Key Store
SQL>  ALTER SYSTEM SET TDE_CONFIGURATION = "KEYSTORE_CONFIGURATION=HSM|FILE" SCOPE=BOTH SID='*';   --> choose either HSM or FILE





Configure a Software KeyStore

The CDB's keystore is used to store encryption keys for all the associated PDBs, but each PDB needs its own master encryption key. The master encryption key of a PDB must be exported before an unplug operation, so it can be imported after the subsequent plugin operation, as sketched below.
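A sketch of that export/import, run inside the PDB; the secret, file path, and password below are placeholders:

-- in the PDB, before unplug
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "<export_secret>" TO '/tmp/pdb1_keys.exp' FORCE KEYSTORE IDENTIFIED BY "<passwd>";

-- in the PDB, after plugging into the target CDB
SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "<export_secret>" FROM '/tmp/pdb1_keys.exp' FORCE KEYSTORE IDENTIFIED BY "<passwd>" WITH BACKUP;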


 SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '+DATA/PRMYDB-RAC/TDE' IDENTIFIED BY "<passwd>";


 SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY "<passwd>";     --> Open wallet 


 SQL> select wrl_type,wrl_parameter,status from gv$encryption_wallet;


-->  For a pluggable database, execute the same on the PDB
 SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE  IDENTIFIED BY "<passwd>" WITH BACKUP;    --> Set the master encryption key


 SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "<tde-passwd>" WITH BACKUP;   --> Create the Master Key for TDE Encryption on the Database


 SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '+DATA/PRMYDB-RAC/TDE' IDENTIFIED BY "<passwd>"; --> Create the KeyStore to be an Auto-Login Software KeyStore

$ srvctl stop database -d PRMYDB-RAC
$ srvctl start database -d PRMYDB-RAC

 
SQL> select wrl_type,wrl_parameter,status from gv$encryption_wallet;

--> Copy Key Store file on Standby 



orapki wallet display -wallet /u01/app/oracle/admin/${DB_UNIQUE_NAME}/wallet/tde
orapki wallet display -wallet ewallet.p12 -summary




You need to create and activate a master key in the root container and one in each of the pluggable databases. Using the CONTAINER=ALL clause does it in a single step. If the CONTAINER=ALL clause is omitted, it will only be done in the current container and will need to be done again for each PDB individually. Information about the master key is displayed using the V$ENCRYPTION_KEYS view.

Eg 

ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY myPassword WITH BACKUP CONTAINER=ALL;

-- Open
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;

-- Close
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY myPassword CONTAINER=ALL;






Enable TDE using the Isolated method for a Newly Plugged Database.


Step 1 : Configure the init.ora parameter to enable a file-based wallet.

sqlplus /nolog <<!
conn / as sysdba

-- ensure the PDB is not in restricted mode

-- switch to the target PDB
alter session set container = PDBNAME;
show con_name
show parameter tde_configuration
alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE';



Step 2 : Create the MEK ( Master Encryption Key ) using the key management service

administer key management create keystore identified by "password";
administer key management set key force keystore identified by "password" with backup;


Step 3: Create the AOW ( Auto-login Wallet )

-- identify the wallet location
col wrl_parameter new_value wallet_location
select wrl_parameter from v$encryption_wallet;

-- create the auto-login wallet
-- update the wallet location with the output of the previous command [ LN01103U is the CDB and 7ABB7DFB3F0A0F0DE0531B5CF40A0FF4 is the GUID of the PDB ]
-- eg  wallet_location '/var/opt/oracle/dbaas_acfs/LN01103U/wallet_root/7ABB7DFB3F0A0F0DE0531B5CF40A0FF4/tde'

administer key management create auto_login keystore from keystore 'wallet_location' identified by "password";



Step 4 : Enable AOW ( either close/open wallet or bounce PDB if we have downtime )

set lines 200 pages 300
col WRL_PARAMETER for a30
col wrl_type for a8
select * from gv$encryption_wallet order by inst_id, con_id;

NB: here the WALLET_TYPE output shows as PASSWORD, so we need to switch it to auto-login without taking the container down

administer key management set keystore close identified by "password";
administer key management set keystore open identified by "password";



--follow step 6 if we get downtime for this pdb

Step 5: Verify wallets are AUTOLOGIN and OPEN state

set lines 200 pages 300
col WRL_PARAMETER for a30
col wrl_type for a8
select * from gv$encryption_wallet order by inst_id, con_id;
NB: here the WALLET_TYPE output should now show as AUTOLOGIN.



Step 6: Bounce the PDB if possible & repeat step 5 for verification.

in case you get downtime:

[ BEGIN ]
alter pluggable database PDBNAME close immediate instances = all;
alter pluggable database PDBNAME open instances = all;  
[ END ]





Tablespace Encryption :

Online encryption of the tablespaces is done on the Primary only.

The Standby DB is encrypted via the Data Guard redo stream.

The parameter TABLESPACE_ENCRYPTION takes the following values (a setting sketch follows the list):
MANUAL_ENABLE (default) = allows you to encrypt tablespaces if the DB is licensed for Advanced Security
AUTO_ENABLE = all tablespaces are encrypted by default (Cloud); cannot decrypt
DECRYPT_ONLY = cannot encrypt tablespaces; prevents licence violations
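As a sketch, the parameter is changed like any other init parameter (the value shown is just an example; note TABLESPACE_ENCRYPTION only exists in later 19c Release Updates and above):

SQL> ALTER SYSTEM SET TABLESPACE_ENCRYPTION = 'AUTO_ENABLE' SCOPE=SPFILE SID='*';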


You can set the ENCRYPT_NEW_TABLESPACES database initialization parameter to automatically encrypt future tablespaces that you create.

➢ SQL> alter system set "_tablespace_encryption_default_algorithm" = 'AES256' scope = both;
➢ SQL> alter system set encrypt_new_tablespaces = ALWAYS scope = both;


--> Online Encryption :

SQL> select t.name,e.ENCRYPTIONALG,e.ENCRYPTEDTS,e.STATUS from V$ENCRYPTED_TABLESPACES e,v$tablespace t where t.ts#=e.ts#(+);
 
SQL> ALTER TABLESPACE APP_TS1 ENCRYPTION ONLINE USING 'AES256' ENCRYPT ;
SQL> ALTER TABLESPACE TOOLS ENCRYPTION ONLINE USING 'AES256' ENCRYPT ;
SQL> ALTER TABLESPACE USERS ENCRYPTION ONLINE USING 'AES256' ENCRYPT ;

SQL> select t.name,e.ENCRYPTIONALG,e.ENCRYPTEDTS,e.STATUS from V$ENCRYPTED_TABLESPACES e,v$tablespace t where t.ts#=e.ts#(+);
 
SQL> CREATE TABLESPACE TEST_NEW_ENC_TS DATAFILE SIZE 1G ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);  --> Creation of a New Tablespace (Primary)
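Online encryption runs as a long operation; as a rough sketch, its progress can be watched from another session (the exact OPNAME text varies, so there is no filter on it here):

SQL> select sid, opname, sofar, totalwork, round(sofar/totalwork*100,1) pct_done
     from v$session_longops
     where totalwork > 0 and sofar <> totalwork;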



--> Offline Tablespace Encryption on the Primary Database 
 
SQL> ALTER TABLESPACE APP_TS1 OFFLINE;
SQL> select t.name as ts_nme, d.name, d.file# from v$tablespace t, v$datafile d where t.ts# = d.ts# and t.name = 'APP_TS1' order by file#;
SQL> alter database datafile 2 encrypt;
SQL> ALTER TABLESPACE APP_TS1 ONLINE;


 For smaller tablespaces we encrypted at the tablespace level:

 SQL> ALTER TABLESPACE USERS OFFLINE;
 SQL> ALTER TABLESPACE USERS ENCRYPTION OFFLINE ENCRYPT;
 SQL> ALTER TABLESPACE USERS ONLINE;




--> Decryption of a Tablespace

SQL> ALTER TABLESPACE APP_TS1 ENCRYPTION ONLINE DECRYPT;
SQL> ALTER TABLESPACE APP_TS1 OFFLINE;
SQL> ALTER TABLESPACE APP_TS1 ENCRYPTION OFFLINE DECRYPT;
SQL> ALTER TABLESPACE APP_TS1 ONLINE;





Recovering a forgotten Oracle Database TDE Wallet Password.

Ideally there is no straight way to recover the TDE password; however, an indirect method using a wallet merge has been shared online and is described below.

Below are some of the scenarios where we need the TDE wallet password even when auto-login is configured:
To perform a REKEY operation
To generate a new master key
To export keys during PDB migration
During PDB remote cloning
To migrate a file-based wallet to OKV



Wallet Merge Steps

1. Take a backup of the current wallet files
2. Connect to the database
3. Create a brand new TDE wallet/keystore
4. Perform the merge operation
5. Perform validation using orapki/mkstore
6. Create auto-login
7. Copy the new files to the wallet_root parameter location
8. Close and open the wallet from the database level


Take Backup:
cp -rp /u01/app/oracle/admin/<DB_UNIQUE_NAME>/tde /backup/wallet_backup_01AUG2023/


Create New Wallet:
sqlplus / as sysdba
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '<NEW_LOCATION>' IDENTIFIED BY "<NEW_KEYSTORE_PASSWORD>";


Merge TDE Wallet:
ADMINISTER KEY MANAGEMENT MERGE KEYSTORE '<ORIGINAL_LOCATION_FILES>' INTO EXISTING KEYSTORE '<NEW_LOCATION>'  IDENTIFIED BY "<NEW_KEYSTORE_PASSWORD>" WITH BACKUP;


Perform Validation:
orapki wallet display -wallet <NEW_LOCATION>
Note: the above command will ask for a password; enter the new wallet password. After that it will display the master keys.



Creating Auto-Login:
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '<NEW_LOCATION>' IDENTIFIED BY "<NEW_KEYSTORE_PASSWORD>";
Copy both files to the original location of the TDE wallet files, and also copy them to the other nodes as well as the standby databases.
*** DONE ***






Views : 


set pages 300  lines 250 feedback off 
col wrl_parameter format a60 
col wrl_type format a5
col name format a8 
col status format a10 
select  inst_id , con_id , name , wrl_type , wrl_parameter , status , keystore_mode , wallet_type , wallet_order from gv$encryption_wallet natural join gv$containers order by wrl_type, inst_id , con_id , name ; 

-- Masterkey details 
set lines 300 
col name format a15 
select inst_id , name , masterkeyid  from  gv$containers natural join  gv$database_key_info ; 


 select con_id, mkloc from x$kcbdbk   ;


set lines 300
col name for a15
col wrl_type for a10
col status for a20
select p.con_id, p.name, p.open_mode, ew.wrl_type, ew.wallet_type, ew.status 
from v$pdbs p join v$encryption_wallet ew on (ew.con_id = p.con_id)
order by p.con_id;


set lines 300
col name for a10
col key_id for a60
col creation_time for a40
select p.con_id, p.name, p.open_mode, ek.key_id --, ek.creation_time, ek.KEY_USE
from v$pdbs p left outer join v$encryption_keys ek on (ek.con_id = p.con_id)
order by p.con_id;

select p.con_id, p.name, p.open_mode, ew.wrl_type, ew.wallet_type, ew.status 
from v$pdbs p join v$encryption_wallet ew on (ew.con_id = p.con_id)
where p.name = 'PDB2';


select * from v$encrypted_tablespaces ; 

select tablespace_name , encrypted from  dba_tablespaces  ; 

select file# , encrypted from  v$datafile_header ; 


set linesize 250 pagesize 250
column name format a40
column masterkeyid_base64 format a60
select  name,utl_raw.cast_to_varchar2( utl_encode.base64_encode('01'||substr(mkeyid,1,4))) || utl_raw.cast_to_varchar2( utl_encode.base64_encode(substr(mkeyid,5,length(mkeyid)))) masterkeyid_base64  FROM (select t.name, RAWTOHEX(x.mkid) mkeyid from v$tablespace t, x$kcbtek x where t.ts#=x.ts#);
select con_id, wallet_type,wrl_parameter, status from v$encryption_wallet;
select t.name,e.ENCRYPTIONALG,e.ENCRYPTEDTS,e.STATUS from V$ENCRYPTED_TABLESPACES e, v$tablespace t where t.ts#=e.ts#(+);
exit




References : 

https://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/managing-keystore-and-tde-master-encryption-key.html#GUID-0C080CA8-1812-4DCD-AC30-FE47534C7962




################################################################
Encryption of Data In-Transit (TLS1.2)
################################################################

-- Encryption of data over the Network Layer


Main Steps For TLS Include 
  • Certificate Creation
  • Signing of Certificates (CA)
  • Wallets
  • Add Certificates into a Wallet
  • Network Parameter Changes
  • Database Parameter Changes
  • Application Connection 



• Components:

Certificate Authority (CA): a trusted third party that certifies the identity of entities,
such as users, databases, administrators, clients, and servers.
Certificates: created when an entity's public key is signed by a trusted certificate
authority (CA).
Wallet: a container that stores authentication and signing credentials, including
private keys, certificates, and the trusted certificates SSL needs.
Certificate revocation lists: used to check the validity of CA-signed certificates.


Certificate Authority (CA)

• A third party trusted by both of the communicating parties (e.g. Verisign)
• Validates identities and issues/revokes certificates
• The CA uses its private key to encrypt a message
• The CA public key is well known and does not have to be authenticated each time it is accessed (browsers, wallets, etc.)
• Organizations can use an in-house CA (e.g. MS Certificate Services)

Publicly Trusted CAs
• A trusted third party (TTP) used as CA for the certificates
• Commercial
– Verisign, Digicert, GoDaddy
• Web browsers include by default the public keys of TTP CAs
• De-facto standard for websites, as certificates from non-trusted CAs are reported by default as dangerous



Signature
• One-way hash of the data (certificate) encrypted with the signer's private key – it cannot be reversed

• Receiver validates the integrity of the data:
– Receiver gets the data and the signature
– Signature is decrypted using the sender's public key, recovering the original hash
– A new hash is computed over the received data using the same algorithm
– Both the new and the recovered hashes should match if the data was not tampered with
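Outside the database, the same sign/verify flow can be illustrated with openssl (file names are placeholders):

$ openssl dgst -sha256 -sign private.pem -out data.sig data.txt          # signer: hash the data and encrypt the hash with the private key
$ openssl dgst -sha256 -verify public.pem -signature data.sig data.txt   # receiver: prints "Verified OK" if the data was not tampered with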



Wallet
• A file storing authentication and signing credentials, including private keys,
certificates, and trusted certificates SSL needs.
• Oracle server and client using SSL needs a wallet file
– configured in sqlnet.ora, listener.ora, optional in tnsnames.ora (instead of sqlnet)
– Must be auto-login
• Managed with Oracle Wallet Manager and orapki tool


Protocol  Year
SSL 1.0   never released
SSL 2.0   1995
SSL 3.0   1996
TLS 1.0   1999
TLS 1.1   2006
TLS 1.2   2008
TLS 1.3   2018


We will use an Oracle Wallet to securely store and retrieve credentials such as certificates, certificate requests, and private keys. The wallet will be created in a directory of your choice on your database server. In examples that follow, we use the placeholder <server_wallet_directory> to represent that directory name.

The use of PKI (orapki) encryption with Transparent Data Encryption is deprecated.
Use the ADMINISTER KEY MANAGEMENT SQL statement. 


We create the wallet using the Oracle Public Key Infrastructure (orapki) utility. The wallet will be created with two additional options.

The auto_login_local option allows the database to read the values of the wallet without requiring interactive password authentication. Additionally, it enforces that the wallet may only be opened by the user that created the wallet on the machine where the wallet was created. Because of this security requirement you will need to create the wallet as the same user that your database executes as.

The wallet also requires that a password be specified for it. We will use the placeholder <server_wallet_password> in the examples to represent that value.
Connections use SSL or TLS depending on the cipher suites selected. Place the ciphers in the strongest-to-weakest order in the list.



_____________________________________________
Wallet  Configuration  Steps 
___________________________________________


1) 

Set common_user_prefix and os_authent_prefix to NULL, and remote_os_authent to FALSE, in the database.
alter system set common_user_prefix='' scope=spfile;
alter system set os_authent_prefix='' scope=spfile;
alter system set remote_os_authent=FALSE scope=spfile;


2) Update server sqlnet.ora settings:

WALLET_LOCATION =
   (SOURCE =
     (METHOD = FILE)
     (METHOD_DATA = (DIRECTORY = /u01/app/oracle/pki/server_wallet)     )
  )  

SQLNET.AUTHENTICATION_SERVICES= (TCPS,IPC,BEQ) 

SSL_CLIENT_AUTHENTICATION = TRUE   ( or false ) 

SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_GCM_SHA384,SSL_RSA_WITH_AES_256_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5)
SSL_VERSION = 1.2   ( or 1.1 )

sqlnet.crypto_checksum_client = ACCEPTED    ( or  requested ) 
sqlnet.crypto_checksum_server = ACCEPTED  ( or  requested ) 
sqlnet.crypto_checksum_types_server=(SHA256, SHA1)


3) Update server listener.ora settings:
WALLET_LOCATION =( SOURCE=(METHOD=FILE) ( METHOD_DATA=(DIRECTORY= /u01/app/oracle/pki/server_wallet )))
SSL_CLIENT_AUTHENTICATION=FALSE
SSL_VERSION=1.2



4)  Create the Server Wallet and Certificate Signing Request (CSR). In this example we are using this password: Welcome1234.

mkdir /u01/app/oracle/pki/server_wallet
cd /u01/app/oracle/pki/server_wallet

orapki wallet create -wallet /u01/app/oracle/pki/server_wallet -auto_login -pwd Welcome1234

orapki wallet display -wallet /u01/app/oracle/pki/server_wallet/



5) Create a certificate request and load it into the wallet (add -self_signed -validity <days> instead if you want a self-signed certificate).

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -dn "CN=oracle12c.eem.com" -keysize 2048 -sign_alg sha256 -pwd Welcome1234

parameter    value
dn           A user-specified distinguished name. For example: "CN=ORCLCDB,O=testsecurity,C=US"
keysize      The certificate's key size; must be one of the 3 values (512|1024|2048)
validity     The number of days the certificate should be valid for
self_signed  Self-sign the certificate




6) Export the CSR into a file.

Finally we will export the server certificate from the wallet to a file to use later with the client wallet.

orapki wallet export -wallet /u01/app/oracle/pki/server_wallet -dn "CN=oracle12c.eem.com" -request devserver.txt

orapki wallet export -wallet <server_wallet_directory> -pwd <server_wallet_password> -dn "CN=ORCLCDB,O=testsecurity,C=US" -cert /tmp/oracle-db-certificate.crt

$ cat  /tmp/oracle-db-certificate.crt
 


7)  If the Certificate Authority (CA) is unable to process a CSR with MD5 algorithm then the below commands need to be run (Oracle MOS Note 1448161.1).

openssl pkcs12 -in ewallet.p12 -nodes -out nonoracle_wallet.pem
openssl req -new -key nonoracle_wallet.pem -out devserver.csr



8)  (LOAD TRUSTED CERT)   Submit the CSR to the CA for them to generate a certificate. Once the CA provides the server certificate, upload the certificate chain to the wallet.

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -trusted_cert -cert /u01/app/oracle/pki/server_wallet/ca_cert.crt -pwd Welcome1234

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -trusted_cert -cert /u01/app/oracle/pki/server_wallet/chain.crt -pwd Welcome1234

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -user_cert -cert /u01/app/oracle/pki/server_wallet/devserver.crt -pwd Welcome1234

orapki wallet display -wallet /var/opt/oracle/wallets/

orapki cert display -cert /var/opt/oracle/wallets/devserver.crt -summary



9)  Modify the endpoints in the SCAN listener and restart the listeners

[oracle@myserver ~]$ crsctl stat res -p |grep ENDPOINTS
[oracle@myserver ~]$ srvctl modify listener -p "TCP:1521/TCPS:2484"
[oracle@myserver ~]$ srvctl modify scan_listener -p "TCP:1521/TCPS:2484"
[oracle@myserver ~]$ crsctl stat res -p |grep -B20 ENDPOINTS | grep -e ENDPOINTS -e "^NAME="


New TCPS listener should be added to the list of listeners in the LOCAL_LISTENER database parameter

– Static registration can be used for single instances instead

ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)...) (ADDRESS=(PROTOCOL=TCPS)...)))' scope=both; 


$ srvctl stop scan_listener ; srvctl start scan_listener ; srvctl status scan_listener
$ srvctl stop listener ; srvctl start listener ; srvctl status listener




For Rac :

1. Update the listener.ora at $GRID_HOME/network/admin (WALLET_LOCATION parameter is the location of the server wallet. In the examples below, we are using the shared server wallet location for Oracle RAC.)

The file listener.ora must be updated to enable a port for secure communication and specify the location of the Oracle wallet.
Please note, when specifying the value for the DIRECTORY parameter make sure to use a fully qualified path to the <server_wallet_directory> location.

For now, we are going to continue to use username/password authentication, so we will disable SSL client authentication on the server by setting the SSL_CLIENT_AUTHENTICATION property to FALSE.

Next, we will add the key SECURE_PROTOCOL_LISTENER and set it to use the more secure Oracle IPC (which only allows communication with other processes on the system) rather than the TCP protocol. By setting this we are instructing the listener to only accept administration and registration requests from the IPC protocol.
Lastly, update the listener entry to enable support for TCPS traffic by changing the protocol from TCP to TCPS and the PORT to 2484.


SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
  (SOURCE =
   (METHOD = FILE)
    (METHOD_DATA =
     (DIRECTORY = /acfs/oracle/pki_wallet/server_wallet)
    )
  )


2. Update sqlnet.ora at $GRID_HOME/network/admin

The file sqlnet.ora must also be updated to enable secure communications. Like the changes we made to the listener.ora file, the SSL_CLIENT_AUTHENTICATION and WALLET_LOCATION need to be set.
Once again, when specifying the value for DIRECTORY, make sure to use a fully qualified path to the <server_wallet_directory> location you specified.
Lastly, we need to specify the property SSL_CIPHER_SUITES to define which encryption algorithms the server should use/require to encrypt network traffic.


SQLNET.ENCRYPTION_SERVER = required
The parameter ENCRYPTION_SERVER has the following options:
REQUESTED – to enable the security service if the client allows it 
REQUIRED – to enable the security service and disallow the connection if the client is not enabled for the security service 
ACCEPTED – to enable the security service if required or requested by the client 
REJECTED – to disable the security service, even if required by the client



WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /acfs/oracle/pki_wallet/server_wallet)
    )
  )
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256,AES192)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256,AES192)
SQLNET.CRYPTO_CHECKSUM_CLIENT = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_SERVER = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256,SHA1,SHA384)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA256,SHA1,SHA384)


###############  SSL  ########################
SQLNET.AUTHENTICATION_SERVICES= (TCPS, BEQ, IPC)
SSL_CLIENT_AUTHENTICATION = TRUE
SSL_CIPHER_SUITES = (SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA)
SSL_VERSION = 1.2 or 1.1


3. Update sqlnet.ora at $ORACLE_HOME/network/admin

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /acfs/oracle/pki_wallet/server_wallet)
    )
  )
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256,AES192)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256,AES192)
SQLNET.CRYPTO_CHECKSUM_CLIENT = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_SERVER = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256,SHA1,SHA384)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA256,SHA1,SHA384)


4. Restart the database(s) and all listeners so that the new settings take effect.
5. Connect via sqlplus using the TCPS tnsnames entry.
6. Toad: the username should be EXTERNAL and the password field is left blank.
7. SQL Developer: Requires version 17 of SQL Developer and perform the following steps:
a. Select Tools-> Preferences-> Database-> Advanced
b. Check – ‘Use Oracle Client’ and ‘Use OCI/Thick Driver’
c. Set Oracle Homes and TNS Names Directory
d. Create New Connection (Select the TCPS TNS Names Entry)
e. Select OS Authentication

f. Once connected, run the below queries to verify the connection.
Select user from dual;
SELECT SYS_CONTEXT ('USERENV', 'AUTHENTICATED_IDENTITY') FROM DUAL;
SELECT SYS_CONTEXT ('USERENV', 'AUTHENTICATION_METHOD') FROM DUAL; 
SELECT SYS_CONTEXT ('USERENV', 'NETWORK_PROTOCOL') FROM DUAL;
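For a TCPS connection, NETWORK_PROTOCOL should come back as tcps, and with certificate-based login AUTHENTICATION_METHOD typically shows SSL.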





 
Client Side TLS Wallet Configuration 
  

Now that we have the server set up for encrypted communication, we also need to set up encryption in the client as well. The client will also use an Oracle Wallet to store the server security certificate to encode and decode communication with the server.

As noted in the assumptions section, this article assumes you have a machine with the Oracle database client installed. We will create another Oracle Wallet (this time on the client) with the following command, again setting the wallet to enable the auto_login_local option and setting a password. As above we will use a placeholder <client_wallet_directory> in examples that follow to represent the directory name you have chosen.

orapki wallet create -wallet <client_wallet_directory> -auto_login_local -pwd <client_wallet_password>


1) Create a new auto-login wallet.

orapki wallet create -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -auto_login_local


2) Create a self-signed certificate and load it into the wallet.

orapki wallet add -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -dn "CN=%computername%" -keysize 1024 -self_signed -validity 3650


3)  Now we need to add the server security certificate (the file oracle-db-certificate.crt we generated from the database server) to the client wallet to facilitate encrypted communication. For example

orapki wallet add -wallet <client_wallet_directory> -pwd <client_wallet_password> -trusted_cert -cert oracle-db-certificate.crt

orapki wallet add -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -trusted_cert -cert c:\ol7-121.localdomain-certificate.crt

orapki wallet display -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123



4)   Update the client-side sqlnet.ora

SQLNET.AUTHENTICATION_SERVICES=(TCPS,BEQ)
NAMES.DIRECTORY_PATH= (TNSNAMES)
SSL_CLIENT_AUTHENTICATION = TRUE
SSL_VERSION = 1.1
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_GCM_SHA384,SSL_RSA_WITH_AES_256_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5)

WALLET_LOCATION = (SOURCE = (METHOD=MCS))
Or
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <client_wallet_directory>)
    )
  )


5)   Update the client tnsnames.ora file to include an entry for the TCPS connection.

First, we will need to update the tnsnames.ora file on your client to register the new TCPS listener on your database server. Replace the HOST value with the IP address or FQDN of the server hosting your database.
Update the net_service_name entry (named testdb_TCPS in this example) to enable support for TCPS traffic by changing the protocol from TCP to TCPS and the PORT to 2484.


testdb_TCPS=
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCPS)
      (HOST=oracle12c.aem.com)
      (PORT=2484)
    )
    (CONNECT_DATA=
      (SERVER=dedicated)
      (SERVICE_NAME=testdb.aem.com)
    )
  )



6)  Update the database user account to be identified by the certificate (this will be different for every user based on Distinguished Name of their certificate).

alter user ggarrison identified externally as 'CN=lastname.firstname.middlename,OU=PKI,C=US';
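To confirm the account is now set up for external (certificate) authentication, something like the following can be checked:

SQL> select username, authentication_type from dba_users where username = 'GGARRISON';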

 
To check that encryption is effectively taking place, execute the following SQL query on the database server side:
 
SQL> select network_service_banner from v$session_connect_info
 where sid in (select distinct sid from v$mystat);
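With encryption in place, the returned banner rows should include entries for the encryption and crypto-checksumming services (for example, lines mentioning AES256 or SHA256).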




7)   Exchange Certs:  Export the client cert and import it into the server  ( this is not needed for OU class certs )



Export the certificate so we can load it into the server later.

c:\>orapki wallet export -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -dn "CN=%computername%" -cert c:\%computername%-certificate.crt


Load the client certificate into the server wallet.

$ orapki wallet add -wallet "/u01/app/oracle/wallet" -pwd WalletPasswd123  -trusted_cert -cert /tmp/my-computer-certificate.crt

orapki wallet display -wallet "/u01/app/oracle/wallet" -pwd WalletPasswd123







SSL Troubleshooting Guide (Doc ID 166492.1)

• ORA-12560: TNS:protocol adapter error
• ORA-28862: SSL connection failed
– 28759, 00000, "Failed to open file"
– 28859, 00000, "SSL negotiation failure"
– ntzCreateConnection: failed with error 549
• ORA-29024:Certificate Validation Failure
• ORA-29143: Message 29143 not found
• ORA-29106: Can not import PKCS # 12 wallet
• ORA-28860: Fatal SSL error
• ORA-29263: HTTP protocol error
• ORA-28868: certificate chain check failed
• ORA-28750: unknown error
• ORA-28865: SSL connection closed
• ORA-01004: Default username feature not supported; logon denied
• ORA-28864: SSL connection closed gracefully
• ORA-01017: invalid username/password; logon denied
• alert.log: "SSL Client: Server DN doesn't contain expected SID name"
• ORA-29113: Cannot access attributes from a PKCS #12 key bag.
• ORA-29002: SSL transport detected invalid or obsolete server certificate
• ORA-29003: SSL transport detected mismatched server certificate
• ORA-28857: Unknown SSL Error




Reference : 

https://docs.oracle.com/en/cloud/paas/data-safe/admds/create-wallet-or-certificates-tls-connection.html#TASK_QZ2_XRQ_ZHB

https://2019.hroug.hr/eng/content/download/20367/433976/file/Nelson+Calero+-+SSL+certificates+in+the+Oracle+Database+without+surprises.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/18/dbseg/configuring-secure-sockets-layer-authentication.html
 
https://blogs.oracle.com/dev2dev/ssl-connection-to-oracle-db-using-jdbc,-tlsv12,-jks-or-oracle-wallets


################################################################
################################################################