Monday, May 20, 2024

Oracle: Handling Distributed, Pending, In-Doubt and Remote Transactions

 

A distributed transaction modifies data in two or more databases; it contains DML statements that span multiple nodes. For a distributed transaction to be successful, all of the database nodes involved must commit the whole transaction, or all must roll it back.

Note the difference between a distributed and a remote transaction: a remote transaction contains one or more DML statements that are all executed on the SAME remote node.
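For illustration, a minimal sketch of the two cases (the database link names remote_a and remote_b, and the tables used, are hypothetical):

-- Remote transaction: every DML statement goes to the same remote node
SQL> UPDATE emp@remote_a SET sal = sal * 1.1;
SQL> DELETE FROM bonus@remote_a WHERE ename = 'SMITH';
SQL> COMMIT;

-- Distributed transaction: DML spans the local node and a remote node,
-- so the COMMIT is coordinated with two-phase commit
SQL> UPDATE emp SET sal = sal * 1.1;
SQL> UPDATE emp@remote_b SET sal = sal * 1.1;
SQL> COMMIT;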


Manual Resolution of In-Doubt Transactions

Manually resolve an in-doubt transaction when:
–The in-doubt transaction has locks on critical data or undo segments.
–The cause of the system, network, or software failure cannot be repaired quickly.




Commit or Rollback a Pending Transaction -- UNSTUCK TRANSACTION:


SQL> select local_tran_id,global_tran_id, state,mixed, commit# from dba_2pc_pending;

If the state of the transaction is “prepared” and there is no inconsistency, the transaction can be forced to roll back; or, if the underlying problem that caused the in-doubt transaction has been resolved, it can be forced to commit as follows:

SQL> ROLLBACK FORCE '97.33.166765';  /* replace with your own transaction id */
or
SQL> COMMIT FORCE '97.33.166765';    /* replace with your own transaction id */


If the state of the transaction is “collecting” and you execute the above command, you may see an error like:

ERROR at line 1:
ORA-02058: no prepared transaction found with ID 97.33.166765





If the state of the transaction (in DBA_2PC_PENDING) is committed, forced rollback, or forced commit, it can be cleaned up with:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('97.33.166765'); /* replace with your own transaction id */





HANDLING STUCK TRANSACTIONS

Our ultimate goal is that the transaction no longer appears in the X$KTUXE table, and that dictionary tables such as PENDING_TRANS$ are consistent with this information.

Stuck transactions fall under one of the conditions below:



2.1. Condition 1: DBA_2PC_PENDING view has entries about our transaction but there is no transaction in reality

The condition is that when we query dictionary views such as DBA_2PC_PENDING, PENDING_TRANS$, etc., we see the transaction, but the transaction does not exist in the X$KTUXE view.

The actual transaction entry view is X$KTUXE ([K]ernel [T]ransaction [U]ndo Transa[X]tion [E]ntry) where the columns correspond to the following sections of the transaction id:

KTUXEUSN.KTUXESLT.KTUXESQN = 96.22.163456   (the concatenation of KTUXEUSN, KTUXESLT and KTUXESQN gives us the transaction id)
KTUXEUSN=96
KTUXESLT=22
KTUXESQN=163456
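As a sketch, the local transaction id can be assembled directly from these X$KTUXE columns (X$ tables are undocumented internals; run as SYS):

SQL> SELECT ktuxeusn || '.' || ktuxeslt || '.' || ktuxesqn local_tran_id, ktuxesta status
     FROM x$ktuxe
     WHERE ktuxesta != 'INACTIVE';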

Therefore, condition 1 holds when DBA_2PC_PENDING has the entry but X$KTUXE does not.

SQL> SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, COMMIT# FROM DBA_2PC_PENDING;

    -- Returns: 96.22.163456 TESTDB.723a8559.96.22.163456    prepared    no    56759427464

SQL> SELECT * FROM X$KTUXE WHERE KTUXEUSN=96 AND KTUXESLT=22 AND KTUXESQN =163456;
  
  -- Returns: No Rows

Solution 1 to Condition 1: If the state of the transaction (in DBA_2PC_PENDING) is committed, forced rollback, or forced commit, it can be cleaned up with:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456'); 
Solution 2 to Condition 1: If the state of the transaction is prepared, we have to clean manually as follows:

SQL> DELETE FROM SYS.PENDING_TRANS$ WHERE LOCAL_TRAN_ID = '96.22.163456'; 
SQL> DELETE FROM SYS.PENDING_SESSIONS$ WHERE LOCAL_TRAN_ID ='96.22.163456' ; 
SQL> DELETE FROM SYS.PENDING_SUB_SESSIONS$ WHERE LOCAL_TRAN_ID = '96.22.163456';
SQL> COMMIT;




2.2. Condition 2: DBA_2PC_PENDING view does NOT have an entry for our transaction but there IS a transaction.

This is essentially an orphan transaction that the dictionary is not aware of.
Trying to force commit or roll back this transaction may result in an error like the one below, since the dictionary is not aware of it:

SQL> ROLLBACK FORCE '96.22.163456'

-- ORA-02058: no prepared transaction found with ID 96.22.163456
Solution to Condition 2: What we need to do at this point is recover the transaction from being an orphan by inserting dummy records into the dictionary tables (and therefore the views), and then force a rollback or commit. You do not need to change any values in the insert statements other than the transaction id.

SQL> ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
    
SQL> INSERT INTO PENDING_TRANS$ (LOCAL_TRAN_ID, GLOBAL_TRAN_FMT, GLOBAL_ORACLE_ID, STATE, STATUS, SESSION_VECTOR, RECO_VECTOR, TYPE#, FAIL_TIME,RECO_TIME)
    VALUES
    (
         '96.22.163456', 
         299354,
         'XXXXXXX.12345.1.2.3',
         'prepared','P',
         hextoraw( '00000001' ),
         hextoraw( '00000000' ),
         0, sysdate, sysdate
    );
    
SQL> INSERT INTO PENDING_SESSIONS$
    VALUES
    (
     '96.22.163456', 
     1, hextoraw('05004F003A1500000104'),
     'C', 0, 30258592, '',
     146
    );
SQL>  COMMIT;

Now, we should be able to rollback or commit.


SQL> ROLLBACK FORCE '96.22.163456' 
or
SQL> COMMIT FORCE '96.22.163456'

Lastly, we remove the dummy entry from the dictionary:

SQL> ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
    
SQL> ALTER SYSTEM SET "_smu_debug_mode" = 4;
    
SQL> COMMIT;
    
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');
    
SQL> ALTER SYSTEM SET "_smu_debug_mode" = 0;
    
SQL> COMMIT;
Check to see whether the transaction has gone:

SQL> SELECT * FROM X$KTUXE WHERE KTUXEUSN=96 AND KTUXESLT=22 AND KTUXESQN =163456;
    -- Returns: No Rows    



2.3. Condition 3: DBA_2PC_PENDING has entry and there is a transaction but COMMIT or ROLLBACK HANGS!

In the situation where COMMIT FORCE or ROLLBACK FORCE hangs, trying to purge the transaction will give an error like:

SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');
    -- ERROR at line 1:
    -- ORA-06510: PL/SQL: unhandled user-defined exception
    -- ORA-06512: at "SYS.DBMS_TRANSACTION", line 94
    -- ORA-06512: at line 1


Solution to Condition 3: The solution is a combination of the solutions to Conditions 1 and 2:

First, delete the dictionary entries:

SQL> DELETE FROM SYS.PENDING_TRANS$ WHERE LOCAL_TRAN_ID = '96.22.163456'; 
    
SQL> DELETE FROM SYS.PENDING_SESSIONS$ WHERE LOCAL_TRAN_ID ='96.22.163456' ; 
    
SQL> DELETE FROM SYS.PENDING_SUB_SESSIONS$ WHERE LOCAL_TRAN_ID = '96.22.163456';
    
SQL> COMMIT;
Then, insert the dummy records, force the commit and finally purge the transaction:

SQL> ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
    
SQL> INSERT INTO PENDING_TRANS$ (LOCAL_TRAN_ID, GLOBAL_TRAN_FMT, GLOBAL_ORACLE_ID,
STATE, STATUS, SESSION_VECTOR, RECO_VECTOR, TYPE#, FAIL_TIME,RECO_TIME)
    VALUES
    (
         '96.22.163456',
         306206,
         'XXXXXXX.12345.1.2.3',
         'prepared','P',
         hextoraw( '00000001' ),
         hextoraw( '00000000' ),
         0, sysdate, sysdate
    );
     
SQL> INSERT INTO PENDING_SESSIONS$
    VALUES
    (
         '96.22.163456',
         1, hextoraw('05004F003A1500000104'),
         'C', 0, 30258592, '',
         146
    );
SQL> COMMIT;
    
SQL> COMMIT FORCE '96.22.163456';
    
SQL> EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('96.22.163456');







Two-Phase Commit (2PC)

The two-phase commit mechanism is used to ensure data integrity in a distributed transaction. It is automatically used in all distributed transactions, and coordinates either the commit or the rollback of all changes in the transaction as a single, self-contained unit.

In short, there are three phases to 2PC:
PREPARE: The initiating node asks each of its referenced nodes to promise to perform a commit or rollback when told to do so. Each preparing node flushes its redo log buffer to the online redo log, converts its locks to in-doubt transaction locks on the data blocks, and passes its highest SCN value to the initiating node.
COMMIT: The initiating node commits and writes the committed SCN to its redo log. The data block locks are released.
FORGET: The pending transaction tables and related database views (DBA_2PC_PENDING/DBA_2PC_NEIGHBORS) are cleared.
All of these phases take place quickly and transparently to the application in which the transaction originated.

NOTE:
A crash during the PREPARE phase results in a ROLLBACK.
A crash during the COMMIT phase results in either a COMMIT or a ROLLBACK.





Recoverer Process (RECO)

RECO is a mandatory background process that, in a distributed database, automatically resolves failures in distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction; it can use an existing connection or establish a new connection to the other nodes involved in the failed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.

At exponentially growing time intervals, the RECO background process of a node attempts to recover the local portion of an in-doubt distributed transaction.

The init.ora parameter DISTRIBUTED_TRANSACTIONS must be > 0 for RECO to start.

If the init.ora parameter OPEN_LINKS is set to 0, then no distributed transactions are allowed.

The RECO process is present only if the instance permits distributed transactions.

You can enable and disable RECO using the ALTER SYSTEM statement with the ENABLE/DISABLE DISTRIBUTED RECOVERY options.

You can disable distributed recovery if the database is mounted but not open.

select dbid,name,log_mode,open_mode from v$database;
select instance_number,instance_name,status from v$instance;


Disabling and Enabling RECO

You can enable and disable RECO using the ALTER SYSTEM statement with the ENABLE/DISABLE DISTRIBUTED RECOVERY options. For example, you can temporarily disable RECO to force the failure of a two-phase commit and manually resolve the in-doubt transaction.

-To disable RECO:
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY

-To enable RECO and let it automatically resolve indoubt transactions
ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;




Database Parameters related to distributed Transactions :

DISTRIBUTED_LOCK_TIMEOUT:
DISTRIBUTED_LOCK_TIMEOUT specifies the maximum time in seconds that the instance waits for locked resources. The default is 60 seconds.
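A sketch of checking and changing it (the 120-second value is only an illustration; the parameter is static, so a restart is required for the change to take effect):

SQL> SELECT value FROM v$parameter WHERE upper(name) = 'DISTRIBUTED_LOCK_TIMEOUT';
SQL> ALTER SYSTEM SET DISTRIBUTED_LOCK_TIMEOUT = 120 SCOPE = SPFILE SID = '*';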



COMMIT_POINT_STRENGTH:
In a two-phase commit, one site or node is designated the “Commit Point Site”. It is determined by the values of the COMMIT_POINT_STRENGTH parameter in all the database instances involved in the transaction that are adjacent to, and including, the Global Coordinator.

The values range from 0 to 255; if not set, the value is determined by the software.

The COMMIT_POINT_STRENGTH parameter in nonadjacent nodes is only used to decide who becomes the recursive “Commit Point Site” when a Local Coordinator is the main “Commit Point Site”.
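As a sketch, give the database that should act as commit point site (typically the most critical or most available one) the highest value; the numbers below are only illustrative, and the parameter is static:

-- on the database that should become the commit point site
SQL> ALTER SYSTEM SET COMMIT_POINT_STRENGTH = 200 SCOPE = SPFILE SID = '*';
-- on less critical nodes, use a lower value, e.g. 100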





Known Issues :

ORA-02053: transaction committed, some remote DBs may be in-doubt
The transaction has been locally committed, however we have lost communication with one or more local coordinators.

ORA-02054: transaction in-doubt
The transaction is neither committed nor rolled back locally, and we have lost communication with the global coordinator.

ORA-02050: transaction rolled back, some remote DBs may be in-doubt
Indicates that a communication error occurred during the two-phase commit.

ORA-01591: lock held by in-doubt distributed transaction
When this error is encountered, users/applications are unable to proceed with their work. Oracle automatically rolls back the user's attempted transaction, and the DBA then has to manually commit or roll back the in-doubt transaction.






Views:

DBA_2PC_PENDING and DBA_2PC_NEIGHBORS are the two main views used for checking details.



DBA_2PC_PENDING

Lists all in-doubt distributed transactions; the view can also be used to determine the global commit number for a particular transaction ID.
The view is empty until populated by an in-doubt transaction. After the transaction is resolved, the view is purged.


Note: When the LOCAL_TRAN_ID and GLOBAL_TRAN_ID values are the same, the node is the global coordinator of the transaction.
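Since GLOBAL_TRAN_ID typically embeds the database name in front of the transaction id (e.g. TESTDB.723a8559.96.22.163456 for local id 96.22.163456), one hedged way to sketch the check is to see whether the local id appears inside the global id:

SQL> SELECT local_tran_id, global_tran_id,
            CASE WHEN global_tran_id LIKE '%' || local_tran_id
                 THEN 'GLOBAL COORDINATOR' ELSE 'NOT COORDINATOR' END role
     FROM dba_2pc_pending;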

1) LOCAL_TRAN_ID : Local transaction identifier
2) GLOBAL_TRAN_ID : Global database identifier
3) STATE : –Collecting: Node is currently collecting information from other database servers before it can decide whether it can prepare.
–Prepared: Node has prepared and is holding locks on resources. It may or may not have acknowledged to the local coordinator.
–Committed: Node has committed the transaction, but other nodes may or may not have done so.
–Forced commit: An administrator manually forced the pending transaction to commit at the local node.
–Forced rollback: A pending transaction was forced to roll back at the local node.

4) MIXED: YES means part of the transaction was committed on one node and rolled back at another node.
5) TRAN_COMMENT: Transaction comment
6) HOST: Hostname
7) COMMIT# : Global commit number for committed transaction



COL LOCAL_TRAN_ID FORMAT A13
COL GLOBAL_TRAN_ID FORMAT A30
COL STATE FORMAT A8
COL MIXED FORMAT A3
COL HOST FORMAT A10
COL COMMIT# FORMAT A10
SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT# FROM DBA_2PC_PENDING;




DBA_2PC_NEIGHBORS :

Lists all incoming and outgoing in-doubt distributed transactions. It also indicates whether the local node is the commit point site in the transaction.


1) LOCAL_TRAN_ID: Local transaction identifier
2) IN_OUT: IN for incoming transactions; OUT for outgoing transactions
3) DATABASE: For incoming transactions, the name of the client database; for outgoing transactions, the name of the database link used to access information on the remote server.
4) DBUSER_OWNER: For incoming transactions, the local account used to connect; for outgoing transactions, the owner of the database link.
5) INTERFACE: C is a commit message; N indicates either a prepared state or a read-only commit.

COL LOCAL_TRAN_ID FORMAT A13
COL IN_OUT FORMAT A6
COL DATABASE FORMAT A25
COL DBUSER_OWNER FORMAT A15
COL INTERFACE FORMAT A3
SELECT LOCAL_TRAN_ID, IN_OUT, DATABASE, DBUSER_OWNER, INTERFACE FROM DBA_2PC_NEIGHBORS;





Other views : 

SQL> SELECT * FROM GLOBAL_NAME;
SQL> select * from DBA_2PC_PENDING;
SQL> select * from DBA_2PC_NEIGHBORS;
SQL> select * from sys.pending_trans$;
SQL> select * from SYS.PENDING_SESSIONS$;
SQL> select * from SYS.PENDING_SUB_SESSIONS$;
SQL> select * from SYS.V$GLOBAL_TRANSACTION  ;

SELECT
    ktuxeusn            usn#     -- 65535 = no-undo transaction
  , ktuxeslt            slot#    -- 65535 = invalid slot#
  , ktuxesqn            seq#
  , ktuxesta            status
  , ktuxecfl            flags
  , ktuxesiz            undo_blks
  , ktuxerdbf           curfile 
  , ktuxerdbb           curblock
  , ktuxescnw * power(2, 32) + ktuxescnb cscn -- commit/prepare commit SCN
  , ktuxeuel            
  -- distributed xacts
  --, ktuxeddbf           r_rfile
  --, ktuxeddbb           r_rblock
  --, ktuxepusn           r_usn#
  --, ktuxepslt           r_slot#
  --, ktuxepsqn           r_seq#
FROM
    x$ktuxe
WHERE ktuxesta != 'INACTIVE'
ORDER BY
    ktuxeusn
  , ktuxeslt
/


SELECT
    "AREA NAME"                               -- VARCHAR2(32)
  , "NUMAPG"                                  -- NUMBER
  , "PAGESIZE" / 1024 mempage_kb                 -- NUMBER
  , ROUND("SEGMENT SIZE" / 1048576) segsize_mb -- NUMBER
  , ROUND("SIZE" / 1048576) area_size_mb            -- NUMBER
  , ROUND("REMAINING ALLOC SIZE" / 1048576) remain_mb                   -- NUMBER
  , "SHMID"                                   -- NUMBER
  , "SEGMENT DISTRIBUTED"                     -- VARCHAR2(20)
  , "AREA FLAGS"                              -- NUMBER
  , "SEGMENT DEFERRED"                        -- VARCHAR2(20)
  , "SEG_START ADDR"                          -- RAW(8)
  , "START ADDR"                              -- RAW(8)
FROM
    x$ksmssinfo
/





Before rolling back or committing, please capture the details below. Note that if these transactions were XA transactions, the responsibility for cleaning them up would lie with the external transaction manager.



***********************
.
SET MARKUP HTML ON SPOOL ON HEAD "<TITLE>2PC - INFO </title> -
<STYLE TYPE='TEXT/CSS'><!--BODY {background: ffffc6} --></STYLE>"
SET ECHO OFF
REM Spooling to html file
REM Insert the queries to be executed between the SPOOL commands.
.
define COLLNAME='SUP_'
define SRDCNAME='2PC'
column SRDCSPOOLNAME new_val SRDCSPOOLNAME
--select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS')||'.htm' SRDCSPOOLNAME from v$instance;
select '&&COLLNAME'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_CDB_'||upper(cdb)||'_'||to_char(sysdate,'YYYYMMDD_HH24MISS')||'.htm' SRDCSPOOLNAME from v$instance, v$database;
spool &SRDCSPOOLNAME

ALTER SESSION set NLS_DATE_FORMAT='MM/DD/YY HH24:Mi:SS';
set pagesize 1000
select sysdate from dual;

SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') "Current Container" FROM DUAL;
select con_id, name, cdb, current_scn from gv$database order by inst_id;
select * from gv$containers order by inst_id,con_id;
select * from gv$pdbs order by inst_id,con_id;
select * from gv$version;


select * from global_name;
select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from sys.pending_trans$;
select * from sys.pending_sessions$;
select * from sys.pending_sub_sessions$;

select * from v$session_wait order by SECONDS_IN_WAIT;

SELECT KTUXEUSN, KTUXESLT, KTUXESQN, /* Transaction ID */
KTUXESTA Status,
KTUXECFL Flags
FROM x$ktuxe
WHERE ktuxesta!='INACTIVE';

prompt +++ In memory transaction +++
select /*+ ORDERED */
'----------------------------------------'||'
Curent Time : '|| substr(to_char(sysdate,' HH24.MI.SS'),1,22) ||'
'||'TX start_time: '||t.KTCXBSTM||'
'||'FORMATID: '||g.K2GTIFMT ||'
'||'GTXID: '||g.K2GTITID_EXT ||'
'||'Branch: '||g.K2GTIBID ||'
Local_Tran_Id ='||substr(t.KXIDUSN||'.'||t.kXIDSLT||'.'||t.kXIDSQN,1,15)||'
'||'KTUXESTA='|| x.KTUXESTA ||'
'||'KTUXEDFL='|| x.KTUXECFL ||'
Lock_Info: ID1: ' || ((t.kXIDUSN*64*1024)+ t.kXIDSLT)
||' ID2: '|| t.kXIDSQN
XA_transaction_INFO
from x$k2gte g, x$ktcxb t, x$ktuxe x
where g.K2GTDXCB =t.ktcxbxba and
x.KTUXEUSN = t.KXIDUSN(+) and
x.KTUXESLT = t.kXIDSLT(+) and
x.KTUXESQN =t.kXIDSQN(+);

prompt +++ Timed out, prepared XA transactions +++

select global_tran_fmt, global_foreign_id, branch_id,state,
tran.local_tran_id
from sys.pending_trans$ tran, sys.pending_sessions$ sess
where tran.local_tran_id = sess.local_tran_id
and tran.state = 'prepared'
and dbms_utility.is_bit_set(tran.session_vector, sess.session_id)=1;

select sid, type, id1, id2, lmode, request from GV$lock
where type = 'TM' or type = 'TX';

select LOCAL_TRAN_ID, GLOBAL_TRAN_ID,to_char(FAIL_TIME,'dd-mm-yyyy HH24:MI:SS') FAIL_TIME,STATE, MIXED from DBA_2PC_PENDING;

select NAME, VALUE from V$PARAMETER where upper(NAME) = 'COMPATIBLE';
select NAME, VALUE from V$PARAMETER where upper(NAME) = 'DISTRIBUTED_LOCK_TIMEOUT';

select OWNER, OBJECT_NAME, OBJECT_TYPE, STATUS from DBA_OBJECTS where OBJECT_NAME like 'DBMS_XA%' order by OWNER, OBJECT_NAME, OBJECT_TYPE;

select blocking_session, sid, serial#,event, wait_class, seconds_in_wait
from v$session
where blocking_session is not NULL
order by blocking_session;

spool off
.
SET MARKUP HTML OFF
SET ECHO ON
.
***********************








References : 

Distributed Database, Transactions and Two Phase Commit (Doc ID 13229.1)

https://dbatrain.wordpress.com/wp-content/uploads/2009/01/database-links-masterclass_2.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/managing-distributed-transactions.html#GUID-AB19431E-BF9F-4181-BC57-699F8A06149D

Primary Note for Troubleshooting Oracle Managed Distributed Transactions (Doc ID 100664.1)

https://www.linkedin.com/pulse/resolve-in-doubt-transactions-handling-unstuck-mohsen-taheri/

https://oraclefiles.com/2019/03/04/resolving-in-doubt-transactions/

Saturday, May 18, 2024

Storing Oracle user password in Oracle wallet on client side


We have moved to ExaCC; using OS authentication for the application user was a challenge, so we had to arrange an alternative: the Secure External Password Store.

Thanks to Oracle Doc ID 340559.1, which shares the steps for using an Oracle wallet on the client side.


When clients are configured to use the secure external password store, applications can connect to a database with the following CONNECT statement syntax, without specifying database login credentials:
connect /@db_connect_string
 
where db_connect_string is a valid connect string to access the intended database. In this case the database credentials (username and password)
are securely stored in an Oracle wallet created for this purpose. The autologin feature of this wallet is turned on, so the system does not need a password to open the wallet.

From the wallet, the client gets the credentials to access the database for the user they represent.



Configuring Clients to Use the External Password Store 


1) Create a wallet on the client by using the following syntax at the command line:

$ORACLE_HOME/oracle_common/bin/mkstore -wrl <wallet_location> -create    (you will be prompted for the wallet password)



2) Create database connection credentials in the wallet by using the following syntax at the command line:

mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username> <password>

example:

("N102" in the following example is a connect descriptor located in the tnsnames.ora.)

mkstore -wrl /home/mseibt/pstore -createCredential N102 <user> <password>
Enter password: welcome1          



3) In the client sqlnet.ora file, enter the WALLET_LOCATION parameter and set it to the directory location of the wallet you created in Step 1.


WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /home/mseibt/pstore))
)


SQLNET.WALLET_OVERRIDE = TRUE
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_VERSION = 0


4) In the client sqlnet.ora file, enter the SQLNET.WALLET_OVERRIDE parameter and set it to TRUE 

SQLNET.WALLET_OVERRIDE = TRUE



5) With the external password store configured, connect as <user>:

sqlplus /@N102



--Datapump using wallet
nohup expdp /@BSA1EP directory=DUMP_DIR dumpfile=scott_%U.dmp logfile=EXP_SCOTT.log schemas=scott parallel=4 &

-- JDBC using wallet
Connection conn = DriverManager.getConnection ("jdbc:oracle:oci:/@BSA1EP");

-- sqlplus 
sqlplus /@BSA1EP




Managing External Password Store Credentials 


1) Listing the contents of the external password store: 

mkstore -wrl /home/mseibt/pstore -listCredential



2) Adding database login credentials to an existing client wallet:

mkstore -wrl /home/mseibt/pstore -createCredential N101 <user> <password>



3) Modifying database login credentials in a wallet: 

mkstore -wrl /home/mseibt/pstore -modifyCredential N102 <user> newpassword



4) Deleting database login credentials from a wallet: 

mkstore -wrl /home/mseibt/pstore -deleteCredential N101



5) Listing wallet entries: 

mkstore -wrl /home/mseibt/pstore -list



6) Listing entry values: 
The name will be something like oracle.security.client.passwordn, where n is the entry number. Then open the entry using:

mkstore -wrl /home/mseibt/pstore -viewEntry oracle.security.client.connect_string1    (  <name obtained>  in step  5) 



7) Modifying entry values: 

mkstore -wrl /home/mseibt/pstore -modifyEntry oracle.security.client.password1 newpass




Known Issues: 


Issues 1 : Oracle Wallet Cannot Be Created using mkstore (Doc ID 951276.1)

SOLUTION
1. Stop the Database
2. cd $ORACLE_HOME/bin
3. relink all
4. mkstore -wrl $ORACLE_HOME/admin/wallet -create



Reference : 

Using The Secure External Password Store (Doc ID 340559.1)

Wednesday, May 8, 2024

Changing max_string_size to extended for Oracle Pluggable database



Recently the application team reported "ORA-00910: specified length too long for its datatype". Upon checking, we planned to change MAX_STRING_SIZE to EXTENDED.

We can set it at the PDB level only, and when the change is done only at the PDB it is replicated to the standby through redo logs.

We will be changing MAX_STRING_SIZE only at the PDB level.

In order to expand the maximum length of the VARCHAR2 (also NVARCHAR2 and RAW) datatypes in Oracle 12c and beyond, the MAX_STRING_SIZE parameter needs to be set to EXTENDED. This changes the maximum length from 4K (4096) to 32K (32767) bytes.

WARNING: You MUST run utl32k.sql immediately after changing MAX_STRING_SIZE=EXTENDED, or you risk invalidating database columns.
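Once EXTENDED is in effect in the PDB, declarations beyond 4000 bytes become legal; a quick sanity check (the table name t1 is hypothetical):

SQL> CREATE TABLE t1 (big_col VARCHAR2(32767));
-- with MAX_STRING_SIZE = STANDARD the same statement fails with ORA-00910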




Notes  : 

1) The COMPATIBLE initialization parameter must be set to 12.0.0.0 or higher to set MAX_STRING_SIZE = EXTENDED.

2) The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns for the views where this is required. The script does not increase the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.

3) When the 32k varchar2 feature is enabled, then any function based indexes that rely on PL/SQL functions returning a varchar2 will become unusable when the utl32k.sql script is run. Such indexes will subsequently have to be dropped and it will not be possible to re-create them. Any attempt to recreate such an index will fail with ORA-1450. The reason for this is that such functional indexes will have a "key" size of 32k and this is larger than the maximum allowed key size. Prior to enabling the 32k varchar2 feature, such indexes would have had a key size of 4000 bytes, which is within the allowed limit for an index key.


4) In some cases, running the rdbms/admin/utl32k.sql script may cause the errors below:

a-ORA-14415: index in partially dropped state, submit DROP INDEX
This is caused by BUG 21450985 - ORA-14415: INDEX IN PARTIALLY DROPPED STATE, SUBMIT DROP INDEX.

b-ORA-30556: either functional or bitmap join index is defined on the column to be modified
This is caused by Bug 20539050 - ORA-30556 ON USING /RDBMS/ADMIN/UTL32K
Both bugs are fixed in version 12.2. For earlier versions, a one-off patch can be requested by raising an SR with Oracle Support.

c-ORA-02019 WHEN RUNNING @?/RDBMS/ADMIN/UTL32K.SQL ON mview with private DB link
This is caused by BUG 19063812 - ORA-02019 WHEN RUNNING @?/RDBMS/ADMIN/UTL32K.SQL ON MV WITH PRIVATE DB LINK
It is fixed in 12.2.





Steps : 

1) Purge the recyclebin in the PDB and take a pre snapshot of invalid objects in the PDB.

2) Shut down the PDB and restart it in upgrade mode:

alter pluggable database PDB1 close immediate instances=all ;
ALTER PLUGGABLE DATABASE pdb-name OPEN UPGRADE;


3)  Change the setting of MAX_STRING_SIZE in the PDB to EXTENDED.

alter system set MAX_STRING_SIZE=EXTENDED  sid='*';
@?/rdbms/admin/utl32k.sql

 Note:
The utl32k.sql script increases the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns for the views where this is required. 
The script does not increase the maximum size of the VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way the SQL for those views is written.
Run the rdbms/admin/utlrp.sql script in the PDB to recompile invalid objects. You must be connected AS SYSDBA to run the script.



4) Reopen the PDB in NORMAL mode

alter pluggable database PDB1 close immediate instances=all ;
ALTER PLUGGABLE DATABASE pdb-name OPEN  ;

Verify client connectivity using the service.


5) Compile invalid objects.

Generate a post snapshot of invalid objects and compare it with the pre snapshot taken in step 1.


6) Replicating to standby:

Since we are changing at the PDB level, the changes should replicate to the standby through archive logs.
Temporarily open the standby in mount state, and then return it to read-only mode after some time.





Views : 

select NAME,VALUE,ISPDB_MODIFIABLE,CON_ID from v$system_parameter where upper(NAME)='MAX_STRING_SIZE';



Reference : 

https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/MAX_STRING_SIZE.html
Effect of Max_String_Size on Data guard (Doc ID 2010497.1)
How to Increase the Maximum Size of VARCHAR2, NVARCHAR2, and RAW Columns in 12C Database using MAX_STRING_SIZE ? (Doc ID 1570297.1)
Alter Pluggable Database Close Instances=ALL Not Working (Doc ID 2062080.1)

Sunday, March 31, 2024

Setting max size of each Oracle Pluggable Database Inside same Cdb -- For capacity management

 
We had a requirement to assign a max size to each PDB for capacity planning, when we had multiple PDBs inside the same CDB.

Luckily, Oracle Doc ID 2166477.1 came to the rescue with the steps below.


The default for MAX_PDB_STORAGE is no limit.



Run following sql in current PDB to limit the size of all datafiles of that PDB:

ALTER PLUGGABLE DATABASE STORAGE (MAXSIZE <MAXSIZE>);


To verify the setting, run below in current PDB:

select PROPERTY_VALUE FROM database_properties WHERE property_name = 'MAX_PDB_STORAGE';


To check every PDB's setting, run below in CDB:

select PROPERTY_NAME,PROPERTY_VALUE,DESCRIPTION,CON_ID FROM cdb_properties WHERE property_name = 'MAX_PDB_STORAGE';



The storage limits for PDBs (like MAX_PDB_STORAGE and MAX_SHARED_TEMP_SIZE) are also  stored in CDB_PROPERTIES.

  SELECT name,
     total_size,
     NVL(property_value, 'UNLIMITED') AS "MAX_SIZE"
 FROM v$containers c, cdb_properties p
  WHERE p.con_id(+) = c.con_id
  AND p.property_name(+) = 'MAX_PDB_STORAGE'
  /



Note : 

The MAX_SIZE column in V$PDBS reflects the original MAXSIZE set using the “ALTER PLUGGABLE DATABASE STORAGE” statement. However, this MAX_SIZE value does not include the discount percent (_pdb_max_size_discount with default 10%) which is included in all internal calculations to get the real maxsize.

To get the real MAXSIZE including the discount percent, query ACTUAL_MAX_SIZE from X$CON 


SQL> select ACTUAL_MAX_SIZE from X$CON;
SQL> SELECT total_size, max_size, (max_size + (max_size*0.10)) AS REAL_MAX_SIZE FROM v$pdbs;


We will get the error ORA-65114 below if we try to add space in the PDB beyond ACTUAL_MAX_SIZE:

ORA-65114: space usage in container is too high



References : 

How to set and check the max size of PLUGGABLE DATABASE STORAGE (Doc ID 2166477.1)

Saturday, March 16, 2024

TDE and TLS Encryption for Oracle Database in 19c


I thought of documenting this since on ExaCC we have pluggable databases and they use TDE. This article is meant as a one-stop overview of TDE wallet management.

For both TDE and TLS we create a wallet. In the case of TDE the wallet contains encryption keys, while the TLS store contains certificates.

On ExaCC there is also one more external wallet, the Secure External Password Store (SEPS), used for storing the SYS password and managed by mkstore. SEPS will be covered in another blog.

This blog is dedicated to TDE and TLS.



################################################################
Encryption of Data at Rest (TDE)
################################################################

-- Encryption of the Data Files on Disk


Main Steps For TDE Include 
  • TDE DB Parameter Changes
  • Configure a Software KeyStore
  • Create Master Key
  • Create Auto-login KeyStore
  • Encryption of Tablespaces
  • Backup Keys
 
When we talk about encryption, it could be either columnar encryption or tablespace encryption. Here we are going to discuss tablespace encryption.


Wallet types:
- United – the TDE master encryption keys for the CDB and all PDBs are in the same keystore
- Isolated – the TDE master encryption keys for the CDB and PDBs are in individual per-PDB keystores
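The keystore status and mode (united vs. isolated) can be checked from the v$encryption_wallet view (the KEYSTORE_MODE column exists in 18c and later):

SQL> SELECT wrl_type, wrl_parameter, status, wallet_type, keystore_mode
     FROM v$encryption_wallet;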


Oracle database 12c introduced a new way to manage keystores, encryption keys and secrets using the ADMINISTER KEY MANAGEMENT command. 
This replaces the ALTER SYSTEM SET ENCRYPTION KEY and ALTER SYSTEM SET ENCRYPTION WALLET commands for key and wallet administration from previous releases.
KeyStroke File can  be placed as File or Under Hsm .  TDE_CONFIGURATION parameter is used to control location of key storoke file .

If using Oracle 18c or later, the sqlnet.ora TDE parameters are officially deprecated
and you should use the WALLET_ROOT and TDE_CONFIGURATION parameters instead.


New parameters to Set in the Database

 ENCRYPT_NEW_TABLESPACES - specifies whether to encrypt newly created user tablespaces (AES128)

 WALLET_ROOT - specifies the path to the root of a directory tree containing the wallet ❋

 TDE_CONFIGURATION - specifies the keystore type being used (hardware / software)

 _TABLESPACE_ENCRYPTION_DEFAULT_ALGORITHM ❋ - specifies the default algorithm to use for encryption


❋ WALLET_ROOT parameter was introduced in 19c, prior to this the wallet location was specified in the   sqlnet.ora file. Eg:

 ENCRYPTION_WALLET_LOCATION =
 (SOURCE =
 (METHOD = FILE)
 (METHOD_DATA =
 (DIRECTORY = /u01/app/oracle/product/12.1.0.2/db_1/network/admin/$ORACLE_SID/wallet) ) )

 ❋ In 19c this is an underscore parameter. The regular parameter without the underscore was introduced in 20c. See Doc ID 2654121.1

 Patch 30398099 must be installed to use this parameter (Patch 30398099 has been integrated into the April 2021 RU for Oracle Database  release 19c (19.11))



TDE DB Parameter Changes

1. Set the Parameters
SQL> ALTER SYSTEM SET ENCRYPT_NEW_TABLESPACES = 'ALWAYS' SCOPE = SPFILE SID = '*';
SQL> ALTER SYSTEM SET WALLET_ROOT = '+DATA/PRMYDB-RAC' SCOPE = SPFILE SID ='*';
SQL> ALTER SYSTEM SET "_tablespace_encryption_default_algorithm" = "AES256" SCOPE=SPFILE SID='*';


2. Bounce the Database

$ srvctl stop database –d prmydb-rac
$ srvctl start database –d prmydb-rac


3. Set the type of Key Store
SQL>  ALTER SYSTEM SET TDE_CONFIGURATION = "KEYSTORE_CONFIGURATION=HSM|FILE" SCOPE=BOTH SID='*';





Configure a Software KeyStore

The CDB's keystore is used to store encryption keys for all the associated PDBs, but each PDB needs its own master encryption key. The master encryption key for a PDB must be exported before an unplug operation, so it can be imported after the subsequent plug-in operation.
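A hedged sketch of that export/import cycle (the file path and secret are placeholders, not from the original post):

```sql
-- Run inside the PDB before unplugging: export its master encryption key
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS
  WITH SECRET "<export_secret>"
  TO '/tmp/pdb1_keys.p12'
  FORCE KEYSTORE IDENTIFIED BY "<keystore_passwd>";

-- Run inside the PDB after plugging it into the target CDB: import the key
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS
  WITH SECRET "<export_secret>"
  FROM '/tmp/pdb1_keys.p12'
  FORCE KEYSTORE IDENTIFIED BY "<keystore_passwd>" WITH BACKUP;
```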


 SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '+DATA/PRMYDB-RAC/TDE' IDENTIFIED BY "<passwd>";


 SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN FORCE KEYSTORE IDENTIFIED BY "<passwd>";     --> Open wallet 


 SQL> select wrl_type,wrl_parameter,status from gv$encryption_wallet;


--> For a pluggable database, execute the same on the PDB
 SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE  IDENTIFIED BY "<passwd>" WITH BACKUP;    --> Set the master encryption key


 SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY "<passwd>" WITH BACKUP;  --> Create the master key for TDE encryption on the database


 SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '+DATA/PRMYDB-RAC/TDE' IDENTIFIED BY "<passwd>"; --> Create the keystore as an auto-login software keystore

$ srvctl stop database –d PRMYDB-RAC
$ srvctl start database –d PRMYDB-RAC

 
SQL> select wrl_type,wrl_parameter,status from gv$encryption_wallet;

--> Copy the keystore files to the standby



orapki wallet display -wallet /u01/app/oracle/admin/${DB_UNIQUE_NAME}/wallet/tde
orapki wallet display -wallet ewallet.p12 -summary




You need to create and activate a master key in the root container and one in each of the pluggable databases. Using the CONTAINER=ALL clause does it in a single step. If the CONTAINER=ALL clause is omitted, it will only be done in the current container and will need to be done again for each PDB individually. Information about the master key is displayed using the V$ENCRYPTION_KEYS view.

Eg 

ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY myPassword WITH BACKUP CONTAINER=ALL;

-- Open
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY myPassword CONTAINER=ALL;

-- Close
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY myPassword CONTAINER=ALL;






Enable TDE using the isolated method for a newly plugged-in database.


Step 1: Configure the init.ora parameter to enable a file-based wallet.

sqlplus /nolog <<!
conn /as sysdba

Ensure PDB is not in restricted mode

-- switch to the target PDB
alter session set container = PDBNAME;
show con_name
show parameter tde_configuration
alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE';



Step 2 Create MEK ( Master encryption key ) using key management service

administer key management create keystore identified by "password";
administer key management set key force keystore identified by "password" with backup;


Step 3: Create an auto-login wallet (AOW)

-- identify the wallet location
col wrl_parameter new_value wallet_location
select wrl_parameter from v$encryption_wallet;

-- create Auto Wallet login
--update the wallet location with output of the previous command [ LN01103U is CDB and 7ABB7DFB3F0A0F0DE0531B5CF40A0FF4 is GUID of PDB ]
--eg  wallet_location '/var/opt/oracle/dbaas_acfs/LN01103U/wallet_root/7ABB7DFB3F0A0F0DE0531B5CF40A0FF4/tde'

administer key management create auto_login keystore from keystore '&wallet_location' identified by "password";



Step 4 : Enable AOW ( either close/open wallet or bounce PDB if we have downtime )

set lines 200 pages 300
col WRL_PARAMETER for a30
col wrl_type for a8
select * from gv$encryption_wallet order by inst_id, con_id;

NB: here the WALLET_TYPE output shows PASSWORD, so we need to set it to auto-login without putting the container down:

administer key management set keystore close identified by "password";
administer key management set keystore open identified by "password";



--follow step 6 if we get downtime for this pdb

Step 5: Verify wallets are AUTOLOGIN and OPEN state

set lines 200 pages 300
col WRL_PARAMETER for a30
col wrl_type for a8
select * from gv$encryption_wallet order by inst_id, con_id;
NB: here the WALLET_TYPE output shows AUTOLOGIN.



Step 6: Bounce PDB if possible & repeat step 5 for verification .

In case you get downtime:

[ BEGIN ]
alter pluggable database PDBNAME close immediate instances = all;
alter pluggable database PDBNAME open instances = all;  
[ END ]





Tablespace Encryption : 

Online Encryption of the Tablespaces done on the Primary Only.

The Standby DB is Encrypted using the Data Guard Redo Stream.

The parameter TABLESPACE_ENCRYPTION = MANUAL_ENABLE (default)
allows you to encrypt tablespaces if the DB is licensed for Advanced Security.
AUTO_ENABLE = all tablespaces are encrypted by default (Cloud); they cannot be decrypted.
DECRYPT_ONLY = tablespaces cannot be encrypted, preventing a licence violation.


You can set the ENCRYPT_NEW_TABLESPACES database initialization parameter to automatically encrypt future tablespaces that you create.

➢ SQL> alter system set "_tablespace_encryption_default_algorithm" = 'AES256' scope = both;
➢ SQL> alter system set encrypt_new_tablespaces = ALWAYS scope = both;


--> Online Encryption :

SQL> select t.name,e.ENCRYPTIONALG,e.ENCRYPTEDTS,e.STATUS from V$ENCRYPTED_TABLESPACES e,v$tablespace t where t.ts#=e.ts#(+);
 
SQL> ALTER TABLESPACE APP_TS1 ENCRYPTION ONLINE USING 'AES256' ENCRYPT ;
SQL> ALTER TABLESPACE TOOLS ENCRYPTION ONLINE USING 'AES256' ENCRYPT ;
SQL> ALTER TABLESPACE USERS ENCRYPTION ONLINE USING 'AES256' ENCRYPT ;

SQL> select t.name,e.ENCRYPTIONALG,e.ENCRYPTEDTS,e.STATUS from V$ENCRYPTED_TABLESPACES e,v$tablespace t where t.ts#=e.ts#(+);
 
SQL> CREATE TABLESPACE TEST_NEW_ENC_TS DATAFILE SIZE 1G ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);  --> Creation of a New Tablespace (Primary)



--> Offline Tablespace Encryption on the Primary Database 
 
SQL> ALTER TABLESPACE APP_TS1 OFFLINE;
SQL> select t.name as ts_nme,d.name, d.file# from v$tablespace t, v$datafile d where t.ts# = d.ts# and t.name = 'APP_TS1' order by file#;
SQL> alter database datafile 2 encrypt;
SQL> ALTER TABLESPACE APP_TS1 ONLINE;


 For smaller tablespaces we encrypted at the Tablespace level

 SQL> ALTER TABLESPACE USERS OFFLINE;
 SQL> ALTER TABLESPACE USERS ENCRYPTION OFFLINE ENCRYPT;
 SQL> ALTER TABLESPACE USERS ONLINE;




--> Decryption of a Tablespace

SQL> ALTER TABLESPACE APP_TS1 ENCRYPTION ONLINE DECRYPT;
SQL> ALTER TABLESPACE APP_TS1 OFFLINE;
SQL> ALTER TABLESPACE APP_TS1 ENCRYPTION OFFLINE DECRYPT;
SQL> ALTER TABLESPACE APP_TS1 ONLINE;





Recovering a forgotten Oracle Database TDE wallet password.

There is no straightforward supported way to recover a TDE wallet password, but an indirect approach using a wallet merge has been shared online and is described below.

Below are some scenarios where the TDE wallet password is needed even when auto-login is configured:
To perform a REKEY operation
To generate a new master key
To export keys during a PDB migration
During PDB remote cloning
To migrate a file-based wallet to OKV



Wallet Merge Steps 

1. Take a backup of the current wallet files
2. Connect to the database
3. Create a brand new TDE wallet/keystore
4. Perform the merge operation
5. Perform validation using orapki/mkstore
6. Create the auto-login wallet
7. Copy the new files to the WALLET_ROOT parameter location
8. Close and open the wallet at the database level


Take Backup
cp -r /u01/app/oracle/admin/<DB_UNIQUE_NAME>/tde /backup/wallet_backup_01AUG2023/


Create New Wallet:
sqlplus / as sysdba
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '<NEW_LOCATION>' IDENTIFIED BY Welcome;


Merge TDE Wallet:
ADMINISTER KEY MANAGEMENT MERGE KEYSTORE '<ORIGINAL_LOCATION_FILES>' INTO EXISTING KEYSTORE '<NEW_LOCATION>'  IDENTIFIED BY "<NEW_KEYSTORE_PASSWORD>" WITH BACKUP;


Perform Validation:
orapki wallet display -wallet <NEW_LOCATION>
Note: the above command will prompt for a password; enter the new wallet password, after which it displays the master keys.



Creating Auto-Login:
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '<NEW_LOCATION>' IDENTIFIED BY Welcome;
Copy both files to original Location of tde wallet files and also copy to other nodes, as well as standby databases.
*** DONE ***






Views : 


set pages 300  lines 250 feedback off 
col wrl_parameter format a60 
col wrl_type format a5
col name format a8 
col status format a10 
select  inst_id , con_id , name , wrl_type , wrl_parameter , status , keystore_mode , wallet_type , wallet_order from gv$encryption_wallet natural join gv$containers order by wrl_type, inst_id , con_id , name ; 

-- Masterkey details 
set lines 300 
col name format a15 
select inst_id , name , masterkeyid  from  gv$containers natural join  gv$database_key_info ; 


 select con_id, mkloc from x$kcbdbk   ;


set lines 300
col name for a15
col wrl_type for a10
col status for a20
select p.con_id, p.name, p.open_mode, ew.wrl_type, ew.wallet_type, ew.status 
from v$pdbs p join v$encryption_wallet ew on (ew.con_id = p.con_id)
order by p.con_id;


set lines 300
col name for a10
col key_id for a60
col creation_time for a40
select p.con_id, p.name, p.open_mode, ek.key_id --, ek.creation_time, ek.KEY_USE
from v$pdbs p left outer join v$encryption_keys ek on (ek.con_id = p.con_id)
order by p.con_id;

select p.con_id, p.name, p.open_mode, ew.wrl_type, ew.wallet_type, ew.status 
from v$pdbs p join v$encryption_wallet ew on (ew.con_id = p.con_id)
where p.name = 'PDB2';


select * from v$encrypted_tablespaces ; 

select tablespace_name , encrypted from  dba_tablespaces  ; 

select file# , encrypted from  v$datafile_header ; 


set linesize 250 pagesize 250
column name format a40
column masterkeyid_base64 format a60
select  name,utl_raw.cast_to_varchar2( utl_encode.base64_encode('01'||substr(mkeyid,1,4))) || utl_raw.cast_to_varchar2( utl_encode.base64_encode(substr(mkeyid,5,length(mkeyid)))) masterkeyid_base64  FROM (select t.name, RAWTOHEX(x.mkid) mkeyid from v$tablespace t, x$kcbtek x where t.ts#=x.ts#);
select con_id, wallet_type,wrl_parameter, status from v$encryption_wallet;
select t.name,e.ENCRYPTIONALG,e.ENCRYPTEDTS,e.STATUS from V$ENCRYPTED_TABLESPACES e, v$tablespace t where t.ts#=e.ts#(+);
exit




References : 

https://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/managing-keystore-and-tde-master-encryption-key.html#GUID-0C080CA8-1812-4DCD-AC30-FE47534C7962




################################################################
Encryption of Data In-Transit (TLS1.2)
################################################################

-- Encryption of data over the Network Layer


Main Steps For TLS Include 
  • Certificate Creation
  • Signing of Certificates (CA)
  • Wallets
  • Add Certificates into a Wallet
  • Network Parameter Changes
  • Database Parameter Changes
  • Application Connection 



• Components:

Certificate Authority (CA): a trusted third party that certifies the identity of entities, such as users, databases, administrators, clients, and servers.
Certificates: created when an entity's public key is signed by a trusted certificate authority (CA).
Wallet: a container that stores authentication and signing credentials, including private keys, certificates, and the trusted certificates SSL needs.
Certificate revocation lists: track the validity of CA-signed certificates.


Certificate Authority (CA)

• A third party trusted by both of the communicating parties (e.g. Verisign)
• Validates identities and issues/revokes certificates
• The CA uses its private key to encrypt a message
• The CA public key is well known and does not have to be authenticated each time it is accessed (browsers, wallets, etc.)
• Organizations can use an in-house CA (e.g. MS Certificate Services)

Publicly Trusted CAs
• A trusted third party (TTP) used as CA for the certificates
• Commercial
– Verisign, Digicert, GoDaddy
• Web browsers include by default the public keys of TTP CAs
• De-facto standard for websites, as certificates from non-trusted CAs are reported by default as dangerous



Signature
• One-way hash of the data (certificate) encrypted with signer’s private key – it cannot be reversed

• Receiver validates the integrity of the data:
– Receiver gets the data and signature
– Signature is decrypted using the sender's public key, revealing the original hash
– A new hash is computed over the received data using the same algorithm
– Both hashes should match if the data was not tampered with
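The flow above can be reproduced end to end with OpenSSL; a minimal sketch (file names are arbitrary, run in a scratch directory):

```shell
# Generate a key pair to act as the signer
openssl genrsa -out signer.key 2048 2>/dev/null
openssl rsa -in signer.key -pubout -out signer.pub 2>/dev/null

# Sign: hash the data and encrypt the hash with the private key
echo "certificate payload" > data.txt
openssl dgst -sha256 -sign signer.key -out data.sig data.txt

# Verify: recompute the hash and compare it with the decrypted signature
openssl dgst -sha256 -verify signer.pub -signature data.sig data.txt

# Any tampering makes verification fail
echo "tampered" >> data.txt
openssl dgst -sha256 -verify signer.pub -signature data.sig data.txt || echo "tamper detected"
```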



Wallet
• A file storing authentication and signing credentials, including private keys,
certificates, and trusted certificates SSL needs.
• Oracle server and client using SSL needs a wallet file
– configured in sqlnet.ora, listener.ora, optional in tnsnames.ora (instead of sqlnet)
– Must be auto-login
• Managed with Oracle Wallet Manager and orapki tool


Protocol  Year
SSL 1.0   never released
SSL 2.0   1995
SSL 3.0   1996
TLS 1.0   1999
TLS 1.1   2006
TLS 1.2   2008
TLS 1.3   2018


We will use an Oracle Wallet to securely store and retrieve credentials such as certificates, certificate requests, and private keys. The wallet will be created in a directory of your choice on your database server. In examples that follow, we use the placeholder <server_wallet_directory> to represent that directory name.

The use of PKI (orapki) encryption with Transparent Data Encryption is deprecated.
Use the ADMINISTER KEY MANAGEMENT SQL statement. 


We create the wallet using the Oracle Public Key Infrastructure (orapki) utility. The wallet will be created with two additional options.

The auto_login_local option allows the database to read the values of the wallet without requiring interactive password authentication. Additionally, it enforces that the wallet may only be opened by the user that created the wallet on the machine where the wallet was created. Because of this security requirement you will need to create the wallet as the same user that your database executes as.

The wallet also requires that a password be specified for it. We will use the placeholder <server_wallet_password> in the examples to represent that value.
Connections use SSL or TLS depending on the cipher suites selected. Place the ciphers in the strongest-to-weakest order in the list.



_____________________________________________
Wallet  Configuration  Steps 
___________________________________________


1) 

Set the parameters common_user_prefix, os_authent_prefix and remote_os_authent to NULL in the database.
alter system set common_user_prefix='' scope=spfile;
alter system set os_authent_prefix='' scope=spfile;
alter system set remote_os_authent=FALSE scope=spfile;


2) Update server sqlnet.ora settings:

WALLET_LOCATION =
   (SOURCE =
     (METHOD = FILE)
     (METHOD_DATA = (DIRECTORY = /u01/app/oracle/pki/server_wallet)     )
  )  

SQLNET.AUTHENTICATION_SERVICES= (TCPS,IPC,BEQ) 

SSL_CLIENT_AUTHENTICATION = TRUE   ( or false ) 

SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_GCM_SHA384,SSL_RSA_WITH_AES_256_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5)
SSL_VERSION = 1.2 or 1.1

sqlnet.crypto_checksum_client = ACCEPTED    ( or  requested ) 
sqlnet.crypto_checksum_server = ACCEPTED  ( or  requested ) 
sqlnet.crypto_checksum_types_server=(SHA256, SHA1)


3) Update server listener.ora settings:
WALLET_LOCATION =( SOURCE=(METHOD=FILE) ( METHOD_DATA=(DIRECTORY= /u01/app/oracle/pki/server_wallet )))
SSL_CLIENT_AUTHENTICATION=FALSE
SSL_VERSION=1.2



4)  Create the Server Wallet and Certificate Signing Request (CSR). In this example we are using this password: Welcome1234.

mkdir /u01/app/oracle/pki/server_wallet
cd /u01/app/oracle/pki/server_wallet

orapki wallet create -wallet /u01/app/oracle/pki/server_wallet -auto_login -pwd Welcome1234

orapki wallet display -wallet /u01/app/oracle/pki/server_wallet/



5) Create a certificate signing request (key pair) in the wallet.

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -dn "CN=oracle12c.eem.com" -keysize 2048 -sign_alg sha256 -pwd Welcome1234

parameter: description
dn: a user-specified distinguished name, for example "CN=ORCLCDB,O=testsecurity,C=US"
keysize: the certificate key size; must be one of 512, 1024, or 2048
validity: the number of days the certificate should be valid for
self_signed: self-sign the certificate




6) Export the CSR into a file.

Finally we will export the server certificate from the wallet to a file to use later with the client wallet.

orapki wallet export -wallet /u01/app/oracle/pki/server_wallet -dn "CN=oracle12c.eem.com" -request devserver.txt

orapki wallet export -wallet <server_wallet_directory> -pwd <server_wallet_password> -dn "CN=ORCLCDB,O=testsecurity,C=US" -cert /tmp/oracle-db-certificate.crt

$ cat  /tmp/oracle-db-certificate.crt
 


7)  If the Certificate Authority (CA) is unable to process a CSR with MD5 algorithm then the below commands need to be run (Oracle MOS Note 1448161.1).

openssl pkcs12 -in ewallet.p12 -nodes -out nonoracle_wallet.pem
openssl req -new -key nonoracle_wallet.pem -out devserver.csr



8)  (LOAD TRUSTED CERT)   Submit the CSR to the CA for them to generate a certificate. Once the CA provides the server certificate, upload the certificate chain to the wallet.

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -trusted_cert -cert /u01/app/oracle/pki/server_wallet/ca_cert.crt -pwd Welcome1234

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -trusted_cert -cert /u01/app/oracle/pki/server_wallet/chain.crt -pwd Welcome1234

orapki wallet add -wallet /u01/app/oracle/pki/server_wallet -user_cert -cert /u01/app/oracle/pki/server_wallet/devserver.crt -pwd Welcome1234

orapki wallet display -wallet /var/opt/oracle/wallets/

orapki cert display -cert /var/opt/oracle/wallets/ -summary



9)  Modify endpoints  in scan listener and restart  listeners 

[oracle@myserver ~]$ crsctl stat res -p |grep ENDPOINTS
[oracle@myserver ~]$ srvctl modify listener -p "TCP:1521/TCPS:2484"
[oracle@myserver ~]$ srvctl modify scan_listener -p "TCP:1521/TCPS:2484"
[oracle@myserver ~]$ crsctl stat res -p |grep -B20 ENDPOINTS | grep -e ENDPOINTS -e "^NAME="


New TCPS listener should be added to the list of listeners in the LOCAL_LISTENER database parameter

– Static registration can be used for single instances instead

ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)...) (ADDRESS=(PROTOCOL=TCPS)...)))' scope=both; 


$ srvctl stop scan_listener ; srvctl start scan_listener ; srvctl status scan_listener
$ srvctl stop listener ; srvctl start listener ; srvctl status listener




For Rac :

1. Update the listener.ora at $GRID_HOME/network/admin (WALLET_LOCATION parameter is the location of the server wallet. In the examples below, we are using the shared server wallet location for Oracle RAC.)

The file listener.ora must be updated to enable a port for secure communication and specify the location of the Oracle wallet.
Please note, when specifying the value for the DIRECTORY parameter make sure to use a fully qualified path to the <server_wallet_directory> location.

For now, we are going to continue to use username/password authentication, so we will disable SSL client authentication on the server by setting the SSL_CLIENT_AUTHENTICATION property to FALSE.

Next, we will add the key SECURE_PROTOCOL_LISTENER and set it to use the more secure Oracle IPC (which only allows communication with other processes on the system) rather than the TCP protocol. By setting this we are instructing the listener to only accept administration and registration requests from the IPC protocol.
Lastly, update the listener entry to enable support for TCPS traffic by changing the protocol from TCP to TCPS and the PORT to 2484.


SSL_CLIENT_AUTHENTICATION = FALSE
SECURE_PROTOCOL_LISTENER=(IPC)
WALLET_LOCATION =
  (SOURCE =
   (METHOD = FILE)
    (METHOD_DATA =
     (DIRECTORY = /acfs/oracle/pki_wallet/server_wallet)
    )
  )


2. Update sqlnet.ora at $GRID_HOME/network/admin

The file sqlnet.ora must also be updated to enable secure communications. Like the changes we made to the listener.ora file, the SSL_CLIENT_AUTHENTICATION and WALLET_LOCATION need to be set.
Once again, when specifying the value for DIRECTORY, make sure to use a fully qualified path to the <server_wallet_directory> location you specified.
Lastly, we need to specify the property SSL_CIPHER_SUITES to define which encryption algorithms the server should use/require to encrypt network traffic.


SQLNET.ENCRYPTION_SERVER = required
The parameter ENCRYPTION_SERVER has the following options:
REQUESTED – to enable the security service if the client allows it 
REQUIRED – to enable the security service and disallow the connection if the client is not enabled for the security service 
ACCEPTED – to enable the security service if required or requested by the client 
REJECTED – to disable the security service, even if required by the client



WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /acfs/oracle/pki_wallet/server_wallet)
    )
  )
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256,AES192)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256,AES192)
SQLNET.CRYPTO_CHECKSUM_CLIENT = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_SERVER = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256,SHA1,SHA384)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA256,SHA1,SHA384)


###############  SSL  ########################
SQLNET.AUTHENTICATION_SERVICES= (TCPS, BEQ, IPC)
SSL_CLIENT_AUTHENTICATION = TRUE
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA)
SSL_VERSION = 1.2 or 1.1


3. Update sqlnet.ora at $ORACLE_HOME/network/admin

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /acfs/oracle/pki_wallet/server_wallet)
    )
  )
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256,AES192)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256,AES192)
SQLNET.CRYPTO_CHECKSUM_CLIENT = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_SERVER = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256,SHA1,SHA384)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA256,SHA1,SHA384)


4. Restart the database(s) and all listeners so that the new settings take effect.
5. Connect via sqlplus using the TCPS tnsnames entry.
6. Toad: the username should be EXTERNAL and the password field is left blank.
7. SQL Developer: Requires version 17 of SQL Developer and perform the following steps:
a. Select Tools-> Preferences-> Database-> Advanced
b. Check – ‘Use Oracle Client’ and ‘Use OCI/Thick Driver’
c. Set Oracle Homes and TNS Names Directory
d. Create New Connection (Select the TCPS TNS Names Entry)
e. Select OS Authentication

f. Once connected, run the below queries to verify the connection.
Select user from dual;
SELECT SYS_CONTEXT ('USERENV', 'AUTHENTICATED_IDENTITY') FROM DUAL;
SELECT SYS_CONTEXT ('USERENV', 'AUTHENTICATION_METHOD') FROM DUAL; 
SELECT SYS_CONTEXT ('USERENV', 'NETWORK_PROTOCOL') FROM DUAL;





 
Client Side TLS Wallet Configuration 
  

Now that we have the server set up for encrypted communication, we also need to set up encryption in the client as well. The client will also use an Oracle Wallet to store the server security certificate to encode and decode communication with the server.

As noted in the assumptions section, this article assumes you have a machine with the Oracle database client installed. We will create another Oracle Wallet (this time on the client) with the following command, again setting the wallet to enable the auto_login_local option and setting a password. As above we will use a placeholder <client_wallet_directory> in examples that follow to represent the directory name you have chosen.

orapki wallet create -wallet <client_wallet_directory> -auto_login_local -pwd <client_wallet_password>


1) Create a new auto-login wallet.

orapki wallet create -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -auto_login_local


2) Create a self-signed certificate and load it into the wallet.

orapki wallet add -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -dn "CN=%computername%" -keysize 1024 -self_signed -validity 3650


3)  Now we need to add the server security certificate (the file oracle-db-certificate.crt we generated from the database server) to the client wallet to facilitate encrypted communication. For example

orapki wallet add -wallet <client_wallet_directory> -pwd <client_wallet_password> -trusted_cert -cert oracle-db-certificate.crt

orapki wallet add -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -trusted_cert -cert c:\ol7-121.localdomain-certificate.crt

orapki wallet display -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123



4)   Update the client side sqlnet.ora

SQLNET.AUTHENTICATION_SERVICES=(TCPS,BEQ)
NAMES.DIRECTORY_PATH= (TNSNAMES)
SSL_CLIENT_AUTHENTICATION = TRUE
SSL_VERSION = 1.1
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_GCM_SHA384,SSL_RSA_WITH_AES_256_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_RC4_128_MD5)

WALLET_LOCATION = (SOURCE = (METHOD=MCS))
Or
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <server_wallet_directory>)
    )
  )


5)   Update the client tnsnames.ora file to include an entry for the TCPS connection.

First, we will need to update the tnsnames.ora file in your client to register the new TCPS listener on your database server. You should replace the HOST value with the IP address or FQDN of the server hosting your database.
Update the net_service_name entry (in this example named testdb_TCPS) to enable support for TCPS traffic by changing the protocol from TCP to TCPS and the port to the TCPS port configured on the listener.


testdb_TCPS=
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCPS)
      (HOST=oracle12c.aem.com)
      (PORT=1522)
    )
    (CONNECT_DATA=
      (SERVER=dedicated)
      (SERVICE_NAME=testdb.aem.com)
    )
  )



6)  Update the database user account to be identified by the certificate (this will be different for every user based on Distinguished Name of their certificate).

alter user ggarrison identified externally as 'CN=lastname.firstname.middlename,OU=PKI,C=US';

 
To check that encryption is effectively taking place, execute the following SQL query on the database server side:
 
SQL> select network_service_banner from v$session_connect_info
 where sid in (select distinct sid from v$mystat);




7)   Exchange Cert:     Export   client  cert and  import in  server   (  This  is not needed for OU class Cert ) 



Export the certificate so we can load it into the server later.

c:\>orapki wallet export -wallet "c:\app\oracle\wallet" -pwd WalletPasswd123 -dn "CN=%computername%" -cert c:\%computername%-certificate.crt


Load the client certificate into the server wallet.

$ orapki wallet add -wallet "/u01/app/oracle/wallet" -pwd WalletPasswd123  -trusted_cert -cert /tmp/my-computer-certificate.crt

orapki wallet display -wallet "/u01/app/oracle/wallet" -pwd WalletPasswd123







SSL Troubleshooting Guide (Doc ID 166492.1)

• ORA-12560: TNS:protocol adapter error
• ORA-28862: SSL connection failed
– 28759, 00000, "Failed to open file"
– 28859, 00000, "SSL negotiation failure"
– ntzCreateConnection: failed with error 549
• ORA-29024:Certificate Validation Failure
• ORA-29143: Message 29143 not found
• ORA-29106: Can not import PKCS # 12 wallet
• ORA-28860: Fatal SSL error
• ORA-29263: HTTP protocol error
• ORA-28868: certificate chain check failed
• ORA-28750: unknown error
• ORA-28865: SSL connection closed
• ORA-01004: Default username feature not supported; logon denied
• ORA-28864: SSL connection closed gracefully
• ORA-01017: invalid username/password; logon denied
• alert.log: "SSL Client: Server DN doesn't contain expected SID name"
• ORA-29113: Cannot access attributes from a PKCS #12 key bag.
• ORA-29002: SSL transport detected invalid or obsolete server certificate
• ORA-29003: SSL transport detected mismatched server certificate
• ORA-28857: Unknown SSL Error




Reference : 

https://docs.oracle.com/en/cloud/paas/data-safe/admds/create-wallet-or-certificates-tls-connection.html#TASK_QZ2_XRQ_ZHB

https://2019.hroug.hr/eng/content/download/20367/433976/file/Nelson+Calero+-+SSL+certificates+in+the+Oracle+Database+without+surprises.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/18/dbseg/configuring-secure-sockets-layer-authentication.html
 
https://blogs.oracle.com/dev2dev/ssl-connection-to-oracle-db-using-jdbc,-tlsv12,-jks-or-oracle-wallets


################################################################
################################################################
   
  


Saturday, March 2, 2024

Oracle SQL Performance Analyzer using SQL tuning sets to test an optimizer version change


During migrations we were required to compare SQL performance across different optimizer versions, and the venerable SQL Performance Analyzer (SPA) tool helped us here.

Documenting these handy steps for myself; hopefully they help others too.



The steps involved in SQL Performance Analyzer are:

1) Create the SQL Tuning Set (STS).
2) Create a task to run SQL Performance Analyzer.
3) Execute the before-change TEST EXECUTE (pre-change SQL trial).

         Make the change that needs to be tested.

4) Execute the after-change TEST EXECUTE (post-change SQL trial).
5) Compare the SQL trials.
6) Generate the compare report.




You can use the SQL Performance Analyzer to analyze the SQL performance impact of any type of  system change. Examples of common system changes include:

•Database upgrades
•Configuration changes to the operating system, hardware, or database
•Database initialization parameter changes
•Schema changes, such as adding new indexes or materialized views
•Gathering optimizer statistics
•SQL tuning actions, such as creating SQL profiles



Check  what metrics  SPA compare performance of : 

SQL> SELECT metric_name FROM v$sqlpa_metric;

METRIC_NAME   
-------------------------
PARSE_TIME               
ELAPSED_TIME             
CPU_TIME                 
USER_IO_TIME             
BUFFER_GETS              
DISK_READS               
DIRECT_WRITES            
OPTIMIZER_COST           
IO_INTERCONNECT_BYTES  
 
9 rows selected.



Report generation Options : 


DBMS_SQLPA.REPORT_ANALYSIS_TASK(
  task_name      IN VARCHAR2,
  type           IN VARCHAR2 := 'text',
  level          IN VARCHAR2 := 'typical',
  section        IN VARCHAR2 := 'summary',
  object_id      IN NUMBER   := NULL,
  top_sql        IN NUMBER   := 100,
  task_owner     IN VARCHAR2 := NULL,
  execution_name IN VARCHAR2 := NULL)
RETURN CLOB;
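
Given the signature above, the level parameter can be used to trim the report down; for example, a report restricted to regressed statements only (task name Task01 as used later in this post):

```sql
-- Show only regressed SQL, all sections, as HTML
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK(
         task_name => 'Task01',
         type      => 'HTML',
         level     => 'REGRESSED',
         section   => 'ALL')
FROM dual;
```
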


Level               Description
-----------------   --------------------------------
BASIC               Same as TYPICAL
TYPICAL (default)   Information about all statements
ALL                 Details of all SQL
IMPROVED            Only improved SQL
REGRESSED           Only regressed SQL
CHANGED             SQL with changed performance
UNCHANGED           Opposite of CHANGED
CHANGED_PLANS       Only SQL with plan changes
UNCHANGED_PLANS     Opposite of CHANGED_PLANS
ERRORS              SQL with errors only





SPA  Steps :


1)  Create an STS and load it from the cursor cache

exec DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS01');

DECLARE
  cursor1 DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cursor1 FOR SELECT VALUE(p)
  FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('sql_id = ''94rn6s4ba24wn''')) p;
 
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS01', populate_cursor => cursor1);
END;
/
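
If the statements of interest have already aged out of the cursor cache, the STS can be loaded from AWR snapshots instead; a sketch (the snapshot IDs and the schema filter are placeholders to adapt):

```sql
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  -- Pull statements captured between two AWR snapshots (IDs are placeholders)
  OPEN cur FOR SELECT VALUE(p)
  FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
               begin_snap   => 100,
               end_snap     => 110,
               basic_filter => 'parsing_schema_name = ''SCOTT''')) p;

  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS01', populate_cursor => cur);
END;
/
```
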



2)  Create a SPA task and associate the STS with it

DECLARE
  task_name VARCHAR2(64);
  sts_task  VARCHAR2(64);
BEGIN
  task_name := 'Task01';
 
  sts_task:=DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'STS01', task_name => task_name, description => 'Task for sql_id 94rn6s4ba24wn');
END;
/



3) Execute a CONVERT SQLSET trial before the change (this builds a trial from the statistics already stored in the STS, without re-executing the statements)


SQL> EXEC DBMS_SQLPA.RESET_ANALYSIS_TASK('Task01');
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'CONVERT SQLSET', execution_name => 'convert_sqlset');
 


Generate the report with a SQL statement like this:

SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_before_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off


Note that in the resulting report for this CONVERT SQLSET execution all the different plans captured in the STS are displayed, while for the compare-performance objective SPA takes only one plan per statement (the one it currently parses with the information available). In any case, comparing all plans of the same sql_id would produce very complex reports that would probably not be usable.



4) Run TEST EXECUTE trials (pre- and post-change)

SQL> EXEC DBMS_SQLPA.RESET_ANALYSIS_TASK('Task01');

SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'TEST EXECUTE', execution_name => 'before_change');

SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_before_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off

SQL> ALTER SESSION SET optimizer_features_enable='11.2.0.2';

SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'TEST EXECUTE', execution_name => 'after_change');


SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_after_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off



5)  Run the COMPARE PERFORMANCE analysis:

SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'COMPARE PERFORMANCE', execution_name => 'compare_performance');
 
PL/SQL PROCEDURE successfully completed.

Or 


BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name        => :v_task,
    execution_type   => 'compare performance', 
    execution_params => dbms_advisor.arglist(
                          'execution_name1', 
                          'before_change', 
                          'execution_name2', 
                          'after_change')
    );
END;
/
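
By default SPA compares the trials on elapsed time; the comparison metric can be chosen explicitly through execution_params (comparison_metric accepts the metrics listed earlier, e.g. BUFFER_GETS), as in this sketch:

```sql
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name        => 'Task01',
    execution_type   => 'COMPARE PERFORMANCE',
    execution_name   => 'compare_buffer_gets',
    execution_params => DBMS_ADVISOR.ARGLIST(
                          'execution_name1',   'before_change',
                          'execution_name2',   'after_change',
                          'comparison_metric', 'buffer_gets'));
END;
/
```
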



SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_compare_performance.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off



A sample compare report is published here:

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2024/02/spa-sample-compare-report.html





Useful views and queries:


select sql_id, plan_hash_value, buffer_gets, elapsed_time,
       substr(sql_text, 1, 30) text, executions
from dba_sqlset_statements
where sqlset_name = :sts_name
order by sql_id, plan_hash_value;

SQL> SET lines 200
SQL> col description FOR a30
SQL> SELECT * FROM dba_sqlset;



SELECT task_name, status
FROM dba_advisor_tasks
WHERE task_name = :tname;



select execution_name,status, execution_end
from DBA_ADVISOR_EXECUTIONS where task_name='my_sqlpa_demo_task';

SQL> SELECT execution_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
     TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, status
     FROM dba_advisor_executions
     WHERE task_name='Task01';


SQL> SELECT execution_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
     TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, advisor_name, status
     FROM dba_advisor_executions
     WHERE task_name='Task01';


SQL> SELECT last_execution,execution_type,TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
     TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end,status
     FROM dba_advisor_tasks
     WHERE task_name='Task01';


SQL> col EXECUTION_NAME FOR a15
SQL> SELECT execution_name, plan_hash_value, parse_time, elapsed_time, cpu_time,user_io_time,buffer_gets,disk_reads,direct_writes,
     physical_read_bytes,physical_write_bytes,rows_processed
     FROM dba_advisor_sqlstats
     WHERE task_name='Task01';


SQL> col PLAN FOR a140
SQL> SET pages 500
SQL> SELECT p.plan_id, RPAD('(' || p.ID || ' ' || NVL(p.parent_id,'0') || ')',8) || '|' ||
     RPAD(LPAD (' ', 2*p.DEPTH) || p.operation || ' ' || p.options,40,'.') ||
     NVL2(p.object_owner||p.object_name, '(' || p.object_owner|| '.' || p.object_name || ') ', '') ||
     'Cost:' || p.COST || ' ' || NVL2(p.bytes||p.CARDINALITY,'(' || p.bytes || ' bytes, ' || p.CARDINALITY || ' rows)','') || ' ' ||
     NVL2(p.partition_id || p.partition_start || p.partition_stop,'PId:' || p.partition_id || ' PStart:' ||
     p.partition_start || ' PStop:' || p.partition_stop,'') ||
     'io cost=' || p.io_cost || ',cpu_cost=' || p.cpu_cost AS PLAN
     FROM dba_advisor_sqlplans p
     WHERE task_name='Task01'
     ORDER BY p.plan_id, p.id, p.parent_id;



SQL> col message FOR a80
SQL> col FINDING_NAME FOR a30
SQL> col EXECUTION_NAME FOR a20
SQL> SELECT execution_name,finding_name,TYPE,impact,message FROM dba_advisor_findings WHERE task_name='Task01';




References :

SQL Performance Analyzer Example (Doc ID 455889.1)

https://docs.oracle.com/en/database/oracle/oracle-database/19/admqs/analyzing-database-changes-sql-performance.html#GUID-28869D5B-ECCD-4A89-8391-116AE4C6A7D4