Friday, December 22, 2023

Export/Import Oracle Dataguard / Dg Broker Configuration

 

Most of the time we feel the need to re-create the Data Guard broker configuration, but we are scared of losing the current configuration.

From 18c we have the option to export and import the Data Guard broker configuration, which makes life easier.


To Export 

DGMGRL>export configuration to 'dgbrokerconfigurationexport.txt';
or
DGMGRL>export configuration to '/<location>/dgbrokerconfigurationexport.txt';


If the 'TO' clause is not used, the configuration is exported to a file named 'SID_dmon_processID-of-DMON_brkmeta_serial-number.trc' in the trace dump directory.
Example: prod_dmon_1000_brkmeta_2.trc, where prod is the SID and 1000 is the DMON process id.



To Import : 

DGMGRL>import configuration from 'dgbrokerconfigurationexport.txt';
or
DGMGRL>import configuration from '/<location>/dgbrokerconfigurationexport.txt';
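
After an import it is worth re-checking that the broker shows the expected configuration; a quick sanity check from DGMGRL:

DGMGRL>show configuration verbose;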




Reference : 

How to Export/Import DG Broker Configuration (Doc ID 2951452.1)

Saturday, December 9, 2023

Change Oracle database TIME_ZONE


Oracle allows you to specify the database timezone using a time zone name, listed in the V$TIMEZONE_NAMES view, or using a UTC offset (+/-HH:MI). 

For performance reasons, Oracle recommends setting the database time zone to UTC (0:00), as no conversion of time zones will be required.

The multitenant architecture allows you to specify a different database time zone for each pluggable database, with the time zone of the container database being used as the default.


When we change the database time zone, we also need to update the clusterware and the database scheduler time zones.

On a standby we only need to make the changes on CRS.


Note 
--A PDB can have a different time zone.
--You can only change the database time zone if you have no TSLTZ columns


To locate the tables that use the data type TIMESTAMP WITH LOCAL TIME ZONE:

select t.owner, t.table_name, t.column_name, t.data_type
  from dba_tab_cols t, dba_objects o
 where t.data_type like '%WITH LOCAL TIME ZONE'
   and t.owner=o.owner
   and t.table_name = o.object_name
   and o.object_type = 'TABLE'
order by 1
/

 
Changing Time Zone at database level 


In a pluggable database environment the change can only be done at the PDB level.

alter database set TIME_ZONE='+00:00';
ALTER DATABASE SET TIME_ZONE='Europe/London';
alter pluggable database <pdb1> set time_zone = '+00:00';





Update Clusterware with  Timezone Data:

Note: change the listener environment only if there is a dedicated listener for your database; otherwise it is not needed.


During the GI installation, Oracle saves the time zone in the $CRS_HOME/crs/install/s_crsconfig_<hostname>_env.txt file, which means the TZ used by CRS does not change even if it is changed at the OS level.

Timezone can be changed for the database using srvctl:

 timedatectl status|grep zone

 srvctl setenv database -d rspaws -t   'TZ=America/Phoenix'
 srvctl setenv listener -l LISTENER -t 'TZ=America/Phoenix'

 Restart the database and the listener, then verify the settings:

 srvctl getenv database -d rspaws
 srvctl getenv listener


Changing Database  Scheduler TimeZone :



select dbms_scheduler.stime from dual;

exec DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('default_timezone','EUROPE/LONDON')

select dbms_scheduler.stime from dual;






Changing the time zone via environment variables:

We also need to check the environment variable ORA_TZFILE, as that also influences the database time zone.

eg : export ORA_TZFILE=America/Phoenix





Changing the time zone for SQL Developer and the Windows registry:


If you need to change the time zone of Oracle SQL Developer (or Oracle Data Modeler), then this is how to do it:

Go to the installation directory of Oracle SQL Developer.
Open the file located at: sqldeveloper/bin/sqldeveloper.conf.
At the end of file, add the following line: AddVMOption -Duser.timezone=GMT-4.
Restart your Oracle Sql Developer.



Verifying  Timezone in Oracle Client 

The Oracle client uses the time zone environment set at the OS level.

Use the same time zone for the database and the Oracle client. Oracle Instant Client comes with its own time zone file embedded in it.

We can check the time zone used by the Oracle client with the genezi utility under $ORACLE_HOME/bin/.

We will see Oracle errors like ORA-01805 and ORA-01804 if the client time zone file does not match the database time zone.

If we are using the full Oracle client, copying the files under $ORACLE_HOME/oracore/zoneinfo from the database server to the client home sometimes fixes ORA-01804.


ORA-01804 is also reported due to a missing libociei.dylib in the /usr/local/lib folder.

In the worst case, to fix ORA-01804, reinstall the Oracle client after fixing the time zone environment at the OS level.



On Windows, verify the following places where the client session time zone (ORA_SDTZ) can be set:

Registry key HKCU\SOFTWARE\ORACLE\KEY_{Oracle Home Name}\ORA_SDTZ

Registry key HKLM\SOFTWARE\ORACLE\KEY_{Oracle Home Name}\ORA_SDTZ

(or HKLM\SOFTWARE\Wow6432Node\ORACLE\KEY_{Oracle Home Name}\ORA_SDTZ for 32-bit clients on 64-bit Windows)

Environment variable ORA_SDTZ

Current locale settings of your machine (most likely)

Database time zone, if none of the above is found (just an assumption)







Change the time zone at session level

-- use alter session commands
ALTER SESSION SET TIME_ZONE=local;
ALTER SESSION SET TIME_ZONE=dbtimezone;
ALTER SESSION SET TIME_ZONE='Asia/Hong_Kong';
ALTER SESSION SET TIME_ZONE='+10:00';

  SELECT sessiontimezone FROM DUAL;
 
   
  col SYSTIMESTAMP for a50
  col STIME for a50
  select (select TO_CHAR(SYSDATE,'YYYYMMDD hh24:mi') from dual) sdate,
         SYSTIMESTAMP,
         (select dbms_scheduler.stime from dual) STIME,
         DBTIMEZONE,
         (SELECT TZ_OFFSET(SESSIONTIMEZONE) FROM DUAL) SESSIONTIMEZONE,
         (SELECT TZ_OFFSET('EUROPE/LONDON') FROM DUAL) "specific_time_zone"
    from dual;



Views :


SELECT dbtimezone FROM DUAL;

 select value$ from props$ where name = 'DBTIMEZONE';

select systimestamp from dual;

SELECT tz_version FROM registry$database;

COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;



set lines 240
col SYSTIMESTAMP format a40
col CURRENT_TIMESTAMP format a40
alter session set NLS_DATE_FORMAT='dd-MON-yyyy:hh24:mi:ss';
select dbtimezone,systimestamp,sysdate,current_timestamp,localtimestamp from dual;




References :

Configure Different Timezones per PDB (Doc ID 2143879.1)

ORA-1804: Failure to Initialize Timezone Information (Doc ID 365558.1)


Thursday, December 7, 2023

Upgrade DST Time Zone Version in 19C using utltz_upg_apply.sql to avoid ORA-39405 during import


A difference in DST time zone versions is a very common issue during migrations, where we see the messages below.
Luckily the fix is made very easy in 19c.


Oracle Data Pump does not support importing from a source database with TSTZ version 35 or 36 into a target database with TSTZ version 34.
OR 
ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ version 34 into a target database with TSTZ version 32.



Upgrade Part : 

For recent releases (18c, 19c) the time zone upgrade scripts are included in the target ORACLE_HOME under the rdbms/admin directory.

In case of a pluggable database we need to run the scripts in both the CDB and the PDBs (see the sketch after the script list below).


The following scripts get delivered with Oracle Database 18c onward

    $ORACLE_HOME/rdbms/admin/utltz_countstats.sql
    Script that shows how much TIMESTAMP WITH TIME ZONE data there is in a database using statistics information. No restart required. 
    
    $ORACLE_HOME/rdbms/admin/utltz_countstar.sql
    Script to approximate how much TIMESTAMP WITH TIME ZONE data there is in a database using a COUNT(*) for each table that has a TSTZ column. This script is useful when using the DBMS_DST package or the utltz_upg_check.sql and utltz_upg_apply.sql scripts.
    
    $ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
    Time zone upgrade check script
    
    $ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql
    Time zone apply script. Warning: This script will restart the database and adjust time zone data.
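
A minimal sketch of the typical run order (in a multitenant environment run it in CDB$ROOT first and then in each PDB; remember that utltz_upg_apply.sql restarts the database):

    sqlplus / as sysdba
    SQL> @?/rdbms/admin/utltz_upg_check.sql
    -- review the check output, then apply
    SQL> @?/rdbms/admin/utltz_upg_apply.sql
    SQL> SELECT version FROM v$timezone_file;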




Note: If you want to see what is happening when the scripts utltz_upg_check.sql and utltz_upg_apply.sql are being executed, run the following commands:

set PAGES 1000

-- query the V$SESSION_LONGOPS view
select TARGET, TO_CHAR(START_TIME,'HH24:MI:SS - DD-MM-YY'),
TIME_REMAINING, SOFAR, TOTALWORK, SID, SERIAL#, OPNAME
from V$SESSION_LONGOPS
where sid in
(select SID from V$SESSION where CLIENT_INFO = 'upg_tzv')
and
SOFAR < TOTALWORK
order by START_TIME;

-- query the V$SESSION and V$SQLAREA views
select S.SID, S.SERIAL#, S.SQL_ID, S.PREV_SQL_ID,
S.EVENT#, S.EVENT, S.P1TEXT, S.P1, S.P2TEXT,
S.P2, S.P3TEXT, S.P3, S.TIME_REMAINING_MICRO,
S.SEQ#, S.BLOCKING_SESSION, BS.PROGRAM "Blocking Program",
Q1.SQL_TEXT "Current SQL", Q2.SQL_TEXT "Previous SQL"
from V$SESSION S, V$SQLAREA Q1, V$SQLAREA Q2, V$SESSION BS
where S.SQL_ID = Q1.SQL_ID(+) and
S.PREV_SQL_ID = Q2.SQL_ID(+) and
S.BLOCKING_SESSION = BS.SID(+) and
S.CLIENT_INFO = 'upg_tzv';
 



I have documented the old method in the article below:

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/12/upgrade-timezone-version-on-195-target.html




Views : 

SELECT version FROM v$timezone_file; 

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

SELECT tz_version FROM registry$database;


SELECT DBMS_DST.get_latest_timezone_version FROM   dual;




Reference : 

Multitenant - CDB/PDB - Upgrading DST using scripts - 12.2 and above - ( With Example Test Case - 19.11 ) (Doc ID 2794739.1)

https://docs.oracle.com/en/database/oracle/oracle-database/18/nlspg/datetime-data-types-and-time-zone-support.html#GUID-7A1BA319-767A-43CC-A579-4DAC7063B243

Different Time Zone Version In Registry$Database And V$Timezone_file (Doc ID 1255474.1)


Thursday, November 23, 2023

Tracking Child Sqlid / Session Id executed By Plsql and Procedure using program_id


It is not easy to track and tune the child SQL statements that are executed internally as part of PL/SQL.


Unfortunately, plan attachment and the SQL Tuning Advisor do not work on the SQL id of the PL/SQL call itself, so we need to identify the child SQL statements executed internally by the PL/SQL and tune those child SQL ids.

While the execution is in progress, we can easily track the SQL that is currently executing along with its line number within the package.
The PLSQL_ENTRY_OBJECT_ID and PLSQL_ENTRY_SUBPROGRAM_ID columns in V$SESSION tell us the package and the subprogram that are currently executing.
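
As a hedged sketch (standard dictionary views; the filter is illustrative and assumes the call is still running), the entry point can be resolved to the owning package and procedure like this:

select s.sid, s.serial#,
       s.plsql_entry_object_id, s.plsql_entry_subprogram_id,
       p.owner, p.object_name package_name, p.procedure_name
  from v$session s
  join dba_procedures p
    on p.object_id     = s.plsql_entry_object_id
   and p.subprogram_id = s.plsql_entry_subprogram_id
 where s.plsql_entry_object_id is not null;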



Select owner,object_name,object_type,object_id  from dba_objects  where object_name in( 'MYPKG' );

OWNER      OBJECT_NAM  OBJECT_TYPE           OBJECT_ID
---------- ---------- -------------------- ----------
ABDUL      MYPKG      PACKAGE BODY              99507
ABDUL      MYPKG      PACKAGE                   99506




Check whether the query is triggered from any procedure:

SELECT s.sql_id, s.sql_text FROM gv$sqlarea s JOIN dba_objects o ON s.program_id = o.object_id and o.object_name = '&procedure_name';




 select sql_fulltext, program_id, program_line#,
        (select object_name
          from all_objects
         where object_id = program_id) object_name ,
       plsql_entry_object_id c1,
       plsql_entry_subprogram_id c2,
        plsql_object_id  c3,
      plsql_subprogram_id c4
   from v$session b1,
       v$sql b2
  where b1.program_id in ( '99506'  ,'99507') 
  and b1.sql_id = b2.sql_id
  and b1.sql_child_number = b2.child_number;


SQL_FULLTEXT         PROGRAM_ID PROGRAM_LINE# OBJECT_NAME          C1         C2      C3            C4
-------------------- ---------- ------------- ------------ ---------- ---------- ---------- ----------
SELECT B1.OWNER,B2.U      99507             5 MYPKG             99506          2
SERNAME FROM ALL_OBJ
ECTS B1, ALL_USERS B2




PROGRAM_LINE# represents the line no# at which this SQL is present.


  column text format a40
  select line, text
  from all_source
  where name ='MYPKG'
  and type ='PACKAGE BODY'
  and owner ='DEMO'
  and line <=10
  order by line ;

      LINE TEXT
---------- ----------------------------------------
         1 package body mypkg
         2 as
         3      procedure p1 as
         4      begin
         5              for x in (select b1.owner,b2.username
         6                      from all_objects b1, all_users b2)
         7              loop
         8                      null;
         9              end loop;
        10      end;

10 rows selected.




If we want to get only the details of the objects accessed by a package, we can use the SQL below.


select pname, tab, type, text from (
select ui.name pname, ud.table_name tab, us.type, 
       ui.st, ui.en, us.line, us.text,
       max(line) over (partition by us.name, us.type) mx_line
from   user_source us
join   user_tables ud
on     upper(us.text) like '%' || table_name || '%'
join   (select name, object_type, object_name, 
               line st, lead(line) over (partition by object_type order by line)-1 en
        from   user_identifiers
        where  type in ('FUNCTION', 'PROCEDURE')
        and    usage in ('DECLARATION', 'DEFINITION')
        and    object_type like 'PACKAGE%') ui
on     ui.object_name = us.name
and    ui.object_type = us.type
where  us.name = 'PKG'
)
where  line between st and nvl(en, mx_line);



WITH testing AS
(
  select distinct
         name,
         type,
         decode(usage, 'DECLARATION', 'body only', 'DEFINITION', 'spec and body', usage) defined_on,
         line body_line,
         object_name
  from   user_identifiers ui
  where  type = 'PROCEDURE'
  and    usage_context_id = (select usage_id
                             from   user_identifiers
                             where  object_name = ui.object_name
                             and    object_type = ui.object_type
                             and    usage_context_id = 0)
  and    object_name = 'MY_PACKAGE_NAME'
  and    object_type = 'PACKAGE BODY'
  order by line
)
SELECT * FROM testing;

Tuesday, October 10, 2023

Oracle Database Connection pooling Strategies : Database Resident Connection Pooling (DRCP)



Connection pooling is not well known in the DBA community, and hence it becomes tough to troubleshoot connection issues.


Connection pooling is generally the practice of a middle tier (application server) getting N connections to a database (say 20 connections).

These connections are stored in a pool in the middle tier, an "array" if you will. Each connection is set to "not in use".

When a user submits a web page to the application server, it runs a piece of your code. Your code says "I need to get to the database"; instead of connecting right there and then (which takes time), it just goes to this pool and says "give me a connection please". The connection pool software marks the connection as "in use" and gives it to you.



We have many connection pooling strategies for the Oracle database:

1) Database Resident Connection Pooling (DRCP, database side)
2) JDBC connection pooling using HikariCP and other 3rd-party pools
3) OCI session pooling: active session pooling and persistent pooling



In this article we will  talk about DRCP 




######################
JDBC Connection Pooling 
######################

I will document JDBC / HikariCP pooling in another blog; however, we can get some insight from the doc below.


https://docs.oracle.com/cd/B28359_01/java.111/e10788/optimize.htm#CFHDDFCI




######################
OCI connection pool details can be fetched from the links below:
######################


The OCI8 extension provides three different functions for connecting to Oracle. 
The standard connection function is oci_connect(). This creates a connection to an Oracle database and returns a resource used by subsequent database calls.

The oci_pconnect() function uses a persistent cache of connections that can be re-used across different script requests.
 This means that the connection overhead will typically only occur once per PHP process (or Apache child).


The function oci_new_connect() always creates a new connection to the Oracle server, regardless of what other connections might already exist.
High traffic web applications should avoid using oci_new_connect(), especially in the busiest sections of the application.


References :

https://docs.oracle.com/en/database/oracle/oracle-database/21/lnoci/session-and-connection-pooling.html#GUID-DFA21225-E83C-4177-A79A-B8BA29DC662C

https://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI091



######################
DRCP 
######################

DRCP was introduced in Oracle 11g. It is used for sharing connections to achieve scalability in multi-process and multi-threaded environments.

Database resident connection pooling provides a connection pool in the database server for typical Web application usage scenarios in which an application acquires a database connection, works on it for a relatively short duration, and then releases it.

Database resident connection pooling pools "dedicated" server processes.

A pooled server is the equivalent of a server foreground process and a database session combined.

Having a pool of readily available servers has the additional benefit of reducing the cost of creating and closing client connections.

This feature is useful for applications that must maintain persistent connections to the database and optimize server resources (such as memory).


DRCP pooled servers behave like dedicated servers, but they are used to share connections between multiple application processes from different hosts.


Note: Database Resident Connection Pooling consumes less memory compared to dedicated or shared server connections.


POOLED SERVER
Database server processes and sessions combination is known as a pooled server.


CONNECTION BROKER
A connection broker manages the pooled servers in the database instance. Clients connect and authenticate to the broker. The background process is called the Connection Broker process (CMON).


Clients obtaining connections out of the database resident connection pool are persistently connected to a background process, the connection broker, instead of the dedicated servers. The connection broker implements the pool functionality and performs the multiplexing of inbound connections from the clients to a pool of dedicated servers with sessions.

When a client must perform database work, the connection broker picks up a dedicated server process from the pool and assigns it to the client. Subsequently, the client is directly connected to the dedicated server until the request is served. After the server finishes processing the client request, the server goes back into the pool and the connection from the client is restored to the connection broker.




Steps follow by Client for Connection
1. The client requests a connection.
2. The broker authenticates the client, picks a pooled server, and hands the client off to that pooled server.
3. The client directly communicates with the pooled server for all its database activity.
4. The pooled server is handed back to the broker when the client releases it.



In 19.10 two new initialization parameters were added to control the number of processes that handle session authentication:

min_auth_servers       
max_auth_servers      


In 19.11 the DRCP_DEDICATED_OPT parameter was added to make the pool behave like a dedicated server (a one-to-one client/server mapping) as long as you are below the maximum number of pooled connections. This is again a trade-off between performance and resource consumption (the V$AUTHPOOL_STATS view does not exist, probably a bug):

DRCP_DEDICATED_OPT




Configure the DRCP

1. Start or enable the pool by connecting as sysdba

The following command starts the broker and registers it with the database listener. It must be started before clients begin requesting connections.


SQL> SELECT connection_pool, status, maxsize FROM dba_cpool_info;

CONNECTION_POOL                STATUS     MAXSIZE
------------------------------ ---------- ----------
SYS_DEFAULT_CONNECTION_POOL    INACTIVE   20

SQL> exec dbms_connection_pool.start_pool;
PL/SQL procedure successfully completed.

SQL> SELECT connection_pool, status, maxsize FROM dba_cpool_info;

CONNECTION_POOL                STATUS   MAXSIZE
------------------------------ -------- -------
SYS_DEFAULT_CONNECTION_POOL    ACTIVE   20





2. Disable or Stop the DRCP

execute dbms_connection_pool.stop_pool();




3. To use the pool, you need to make some connection changes at the TNS entry level or the application level.

Add the POOLED keyword to the TNS entry or the application connection string.

-- For JDBC connection modified :
host1.oracle.com:1521/orcl:POOLED

OR

-- For TNS connection
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=host1.oracle.com)
(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)
(SERVER=POOLED)))





4. Configure and ALTER DRCP with package DBMS_CONNECTION_POOL.

This step is needed if you want to change the default settings of the DRCP pool.
The following example sets the default values explicitly:

execute dbms_connection_pool.configure_pool(
pool_name => 'SYS_DEFAULT_CONNECTION_POOL',
minsize => 4,
maxsize => 40,
incrsize => 2,
session_cached_cursors => 20,
inactivity_timeout => 300,
max_think_time => 600,
max_use_session => 500000,
max_lifetime_session => 86400);

execute dbms_connection_pool.alter_param(
pool_name => 'DRPC_POOL',
param_name => 'MAX_THINK_TIME',
param_value => '1200');


Parameter meanings:

POOL_NAME: name of the pool. Default is SYS_DEFAULT_CONNECTION_POOL.
minsize: minimum number of pooled servers in the pool (default 4).
maxsize: maximum number of pooled servers in the pool (default 40).
incrsize: number of pooled servers added when a client request cannot be served and the pool has not reached its maximum size (default 2).
session_cached_cursors: SESSION_CACHED_CURSORS for all connections in the pool (default 20).
inactivity_timeout: maximum time a server may remain idle in the pool; if a server stays idle up to this limit, it is terminated (default 300 seconds).
max_think_time: maximum time of inactivity the client (for example a PHP script) is allowed after obtaining a connection from the pool (default 120 seconds).
max_use_session: maximum number of times a server can be taken and released to the pool before it is flagged for restarting (default 500000).
max_lifetime_session: time to live for a pooled server before it is restarted (default 86400 seconds).
num_cbrok: the number of connection brokers that are created to handle connections (default 1).
maxconn_cbrok: the maximum number of connections that each connection broker can handle (default 40000).





5. Monitor DRCP pooling using the following views:

select connection_pool, status, maxsize from dba_cpool_info;

select num_requests, num_hits, num_misses, num_waits from v$cpool_stats;

select cclass_name, num_requests, num_hits, num_misses from v$cpool_cc_stats;




At the Linux level, background processes are created: the connection broker handles client requests and hands out the free server processes:

ps -ef | grep ora_l00 | grep -v grep

oracle   10316     1  0 15:24 ?        00:00:00 ora_l000_orcl
oracle   13522     1  0 15:24 ?        00:00:00 ora_l001_orcl



Views : 

DBA_CPOOL_INFO
V$CPOOL_CC_INFO
V$CPOOL_CC_STATS
V$CPOOL_STATS

 set lines 200
 col username for a15
 col cclass_name for a15
 select username, cclass_name, process_id, program, machine from v$cpool_conn_info;



Reference : 

https://docs.oracle.com/en/database/oracle/oracle-database/19/jjdbc/database-resident-connection-pooling.html#GUID-D4F9DBD7-7DC6-4233-B831-933809173E39


https://docs.oracle.com/javase/8/docs/technotes/guides/net/http-keepalive.html

Thursday, October 5, 2023

Oracle Exadata Cell offloading/ smart scan Not working for sqlid

 


Recently a user reported that cell offloading was not working for a certain SQL id. To explore this further, I planned to write a handy blog with some useful information.



Below are parameters controlling offloading : 

cell_offload_compaction: Cell packet compaction strategy.
cell_offload_decryption: Enable SQL processing offload of encrypted data to cells.
cell_offload_parameters: Additional cell offload parameters.
cell_offload_plan_display: Cell offload explain plan display.
cell_offload_processing: Enable SQL processing offload to cells. 




What operations benefit from Smart Scan

Full scan of a heap table.
Fast full scan of a B-Tree or bitmap index.




What operations do not benefit from Smart Scan

Scans of IOTs or clustered tables.
Index range scans.
Access to a compressed index.
Access to a reverse key index.
Secure Enterprise Search.
CREATE INDEX with NOSORT
LONG or LOB columns
CACHE clause queries
Queries with more than 255 columns
Clustered tables






The checks below were done to verify whether cell offloading is working and to force offloading.


--> Check if cell offloading is enabled on the database

show parameter cell 




--> Check if the sql_id is eligible for cell offloading

select sql_id, sum(IO_OFFLOAD_ELIG_BYTES_TOTAL) eligible from dba_hist_sqlstat where sql_id='99jjh86790j' group by sql_id;



--> Check if  offloading is working fine at database level  for other sql 

select stat_name ,  sum(value)  from  dba_hist_sysstat  where stat_name like 'cell physical IO%'  group by stat_name ; 


select stat_name ,  sum(value)  from  dba_hist_sysstat  where stat_name  in ('cell physical IO bytes saved by storage index',
'cell physical IO interconnect bytes returned by smart scan')  group by   STAT_NAME ; 




--> Check if Cell offloading working for sql id 


column sql_text format a10
select sql_id, IO_CELL_OFFLOAD_ELIGIBLE_BYTES, IO_INTERCONNECT_BYTES, 100*(IO_CELL_OFFLOAD_ELIGIBLE_BYTES-IO_INTERCONNECT_BYTES)/IO_CELL_OFFLOAD_ELIGIBLE_BYTES "IO_SAVED_%", sql_text from v$sql where sql_id='cbhjyp1iio';


Also try generating a SQL monitoring report:


set pagesize 999
set lines 190
col sql_text format a40 trunc
col child format 99999 heading CHILD
col execs format 9,999,999
col avg_etime format 99,999.99
col avg_cpu  format 9,999,999.99
col avg_lio format 999,999,999
col avg_pio format 999,999,999
col "IO_SAVED_%" format 999.99
col avg_px format 999
col offload for a7
select sql_id, child_number child,
decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,0,
100*(IO_CELL_OFFLOAD_ELIGIBLE_BYTES-IO_INTERCONNECT_BYTES)
/decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,1,IO_CELL_OFFLOAD_ELIGIBLE_BYTES)) "IO_SAVED_%",
(elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime,
--decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,buffer_gets/decode(nvl(executions,0),0,1,executions),null) avg_lio,
--decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,disk_reads/decode(nvl(executions,0),0,1,executions),null) avg_pio,
sql_text
from v$sql s
where upper(sql_text) like upper(nvl('&sql_text',sql_text))
and sql_text not like 'BEGIN :sql_text := %'
and sql_text not like '%IO_CELL_OFFLOAD_ELIGIBLE_BYTES%'
and sql_id like nvl('&sql_id',sql_id)
order by 1, 2, 3
/



--> Ensure the database is not in a DST upgrade state.

Reference  Exadata: Database Performance Degrades when Database is in Timezone Upgrade Mode (Doc ID 1583297.1)

select name, value$ from sys.props$ where name like '%DST%';






Ways to force cell offloading (a couple of these are combined in the sketch below):

Use a parallel hint
alter session set "_serial_direct_read"=TRUE;
Add the hint /*+ OPT_PARAM('cell_offload_processing' 'true') */
alter session set "_simple_view_merging"=TRUE;
alter session set "_with_subquery"=materialize;
Make the index invisible
Use a FULL hint, e.g. /*+ FULL(emp) */
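
A minimal hedged sketch combining a couple of the options above (table name and hints are illustrative; underscore parameters should only be set after testing):

alter session set "_serial_direct_read"=TRUE;
select /*+ FULL(emp) OPT_PARAM('cell_offload_processing' 'true') */ count(*) from emp;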





Tuesday, September 26, 2023

Handling errors faced while Generating Oracle Awr report / Awr snapshots



Considering that we were facing a lot of issues related to AWR snapshots and AWR reports, I thought of documenting the known troubleshooting steps and fixes.



Known Issues : 


Issue 1) Generating an AWR report hangs on 'control file sequential read'

Solution: alter session set "_hash_join_enabled"=true, as per Doc ID 2045523.1




Issue 2) AWR snapshot hangs with enq: WF - contention

Solution 1: As per Doc ID 2695000.1, kill the MMON child processes and the MMON process for the instance that is not generating AWR reports, so that they are restarted by the database.
 ps -ef|grep ora_m0
 ps -ef|grep _mmon_




Solution 2: Gather stats on these 3 tables and check manual report generation again. If possible, also gather fixed object stats.

SQL> exec dbms_stats.gather_table_stats('SYS', 'X$KEWRSQLIDTAB');
SQL> exec dbms_stats.gather_table_stats('SYS', 'X$KEWRTSQLPLAN');
SQL> exec dbms_stats.gather_table_stats('SYS', 'X$KEHSQT');

Verify with the table last analyzed details

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name ='X$KEWRSQLIDTAB';
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KEWRTSQLPLAN';
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KEHSQT';




Solution 3: As per Doc ID 2649588.1 (MMON Process Causing Blocking And Contention - enq: WF - contention waits), the contention can be caused by growth of the AWR data, which needs to be purged.

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/mmon-process-causing-blocking-and.html  --  Doc ID 2649588.1

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/troubleshooting-missing-automatic.html

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/purging-oracle-awr-details.html

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/manually-purge-optimizer-statistics-awr.html



Obtain flush timing for all the tables. (obtained from note 1301503.1)

set pagesize 999
column name format a28
column time format a29
variable dbid number
exec select dbid into :dbid from v$database;

variable snap_id number
exec select max(snap_id) into :snap_id from wrm$_snapshot where dbid=:dbid;

select table_name_kewrtb name, end_time-begin_time time
from wrm$_snapshot_details, x$kewrtb
where snap_id = :snap_id
and dbid = :dbid
and table_id = table_id_kewrtb
order by table_id;






Data Collection for analysis  : 


1) 10046   tracing 

Try generating snapshots manually, and enable 10046 tracing in the session from which you are taking the manual snapshots:

alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set tracefile_identifier = '10046_awr';
alter session set events '10046 trace name context forever,level 12';

set timing on
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;




2) Try generating a TFA collection

$TFA_HOME/bin/tfactl diagcollect -srdc dbawrspace



3) Check the MMON trace files and the alert log files



4) AWR info report

Check which objects are the largest in the AWR by running an AWR info report:

conn / as sysdba
@?/rdbms/admin/awrinfo.sql




5) Other views

================

SQL> select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;

SQL> select * from cdb_hist_wr_control;



COLUMN NAME FORMAT a40
COLUMN VALUE FORMAT a30
set lines 200 pages 200

select ksppinm name,
ksppstvl value
from sys.x$ksppi x,
sys.x$ksppcv y
where (x.indx = y.indx)
and ksppinm = '_awr_mmon_cpuusage';


Select count(*) from WRI$_OPTSTAT_HISTHEAD_HISTORY ;



select dbms_stats.get_stats_history_retention from dual;

select dbms_stats.get_stats_history_availability from dual;

select systimestamp - min(savtime) from sys.wri$_optstat_histgrm_history;
select min(savtime) from sys.wri$_optstat_histgrm_history;

SELECT MIN(SNAP_ID), MAX(SNAP_ID), COUNT(*) FROM SYS.WRH$_SYSMETRIC_HISTORY A
WHERE NOT EXISTS
(SELECT * FROM SYS.DBA_HIST_SNAPSHOT B WHERE B.SNAP_ID = A.SNAP_ID AND A.DBID = B.DBID);

SELECT COUNT(1) Orphaned_ASH_Rows
FROM wrh$_active_session_history a
WHERE NOT EXISTS
(SELECT 1
FROM wrm$_snapshot
WHERE snap_id = a.snap_id
AND dbid = a.dbid
AND instance_number = a.instance_number
);


========================





**************************
set markup html on spool on
spool /tmp/AWRCheck25thNov.html
set echo on

select systimestamp from dual;
select instance_name,version,host_name,status from gv$instance;

alter session set container = CDB$ROOT;

show con_name

show parameter awr

select * from cdb_hist_wr_control;

select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;

alter session set container=FREXR1P;

exec dbms_workload_repository.create_snapshot();

alter session set container = CDB$ROOT;

select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;

spool off
set markup html off
**************************





References : 

Troubleshooting: Missing Automatic Workload Repository (AWR) Snapshots and Other Collection Issues (Doc ID 1301503.1)

AWR Snapshots Not Generating ORA-13516 ( Doc ID 2756259.1 )

ORA-13516 AWR Operation failed: SWRF Schema not initialized ORA-06512 SYS.DBMS_WORKLOAD_REPOSITORY ( Doc ID 459887.1 )

Problem: ORA-13516 When Trying To Make A Manual Snapshot Of Repository Database ( Doc ID 365097.1 )

AWR Snapshots Not Generating ( Doc ID 308003.1 )

ORA-13516: AWR Operation Failed: CATPROC Not Valid (Doc ID 2547174.1)

MMON Trace Shows: "*** KEWRAFC: Flush slave failed, AWR Enqueue Timeout" ( Doc ID 560204.1 )

AWR Snapshots Fail to Generate With Associated "Suspending MMON action '%s' for 82800 seconds" alert log messages and Underlying ORA-12751 Errors ( Doc ID 2043531.1 )

Monday, September 11, 2023

Oracle sysresv utility to release shared memory / semaphores -- No need for ipcs and ipcrm


After shutting down an Oracle database, shared memory segments staying behind is a known issue on Linux. Typically DBAs use ipcs and ipcrm to release the shared memory segments.


There is also the less known Linux utility pmap:
Eg :  [oracle]$ pmap `pgrep -f lgwr`



There is an Oracle utility, sysresv, that can be used to release shared memory segments and semaphores, so there is no need to use ipcs and ipcrm.

Eg :  sysresv -l instance1 

 
Usage:
------

sysresv: 
usage   : sysresv [-if] [-d <on/off>] [-l sid1 <sid2> ...]
          -i : Prompt before removing ipc resources for each sid
          -f : Remove ipc resources silently, oevrrides -i option
          -d <on/off> : List ipc resources for each sid if on
          -l sid1 <sid2> .. : apply sysresv to each sid
Default : sysresv -d on -l $ORACLE_SID
Note    : ipc resources are attempted to be deleted for a
          sid only if there is no currently running instance
          with that sid. 


   
Examples:
---------

o  Instance is not running:

   /u02/app/oracle/product/8.1.7> sysresv

   IPC Resources for ORACLE_SID "R817" :
   Shared Memory
   ID              KEY
   No shared memory segments used
   Semaphores:
   ID              KEY
   No semaphore resources used
   Oracle Instance not alive for sid "R817" 


o  Instance is running:

   /u03/app/oracle/product/8.1.6> sysresv

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory:
   ID              KEY
   16437           0xe4efa8dc
   Semaphores:
   ID              KEY
   12320802        0x09d48346
   Oracle Instance alive for sid "ORCL"


o  Attempting to remove memory and semaphores using sysresv when Oracle 
   detects an instance is running:

   /u03/app/oracle/product/8.1.6> sysresv -f

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory:
   ID              KEY
   16437           0xe4efa8dc
   Semaphores:
   ID              KEY
   12320802        0x09d48346
   Oracle Instance alive for sid "ORCL"
   SYSRESV-005: Warning
           Instance maybe alive - aborting remove for sid "ORCL" 


o  Removing IPC resources:

   [Sysresv shows memory and semaphores exist but Oracle determines the 
    instance is not alive.  Cleanup is needed.]

   /u03/app/oracle/product/8.1.6> sysresv

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory:
   ID              KEY
   16837           0xe4efa8dc
   Semaphores:
   ID              KEY
   12714018        0x09d48346
   Oracle Instance not alive for sid "ORCL" 
 

o  Removing IPC resources using sysresv:

   /u03/app/oracle/product/8.1.6> sysresv -i

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory
   ID              KEY
   No shared memory segments used
   Semaphores:
   ID              KEY
   No semaphore resources used
   Oracle Instance not alive for sid "ORCL"
   Remove ipc resources for sid "ORCL" (y/n)?y
   Done removing ipc resources for sid "ORCL"
   /u03/app/oracle/product/8.1.6


   Verify the resources were removed:

   /u03/app/oracle/product/8.1.6> sysresv

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory
   ID              KEY
   No shared memory segments used
   Semaphores:
   ID              KEY
   No semaphore resources used
   Oracle Instance not alive for sid "ORCL"  

     
o  If you need to remove memory segments, and Oracle detects the
   instance is alive through sysresv:

   % ipcrm -m <memid>

   Where <memid> is the memory id shown in the sysresv output.

   Example:
   % ipcrm -m 16437

   If you need to remove semaphores, and Oracle detects the
   instance is alive through sysresv:

   % ipcrm -s <semid>

   where <semid> is the semaphore id shown in the sysresv output.

   Example:
   % ipcrm -s 12320802



Reference :

SYSRESV Utility (Doc ID 123322.1)

Startup fail with ORA-29701 & ORA-29702 after the DB crash. (Doc ID 2160481.1)


Tuesday, August 22, 2023

Oracle Solaris Zone management


Solaris Zones is a software partitioning technology, which provides a means of virtualizing operating system services to create an isolated environment for running applications. 
This isolation prevents processes that are running in one zone from monitoring or affecting processes running in other zones

After installing Oracle Solaris 10 on a system, but before creating any zones, all processes run in the global zone. 

After you create a zone, it has processes that are associated with that zone and no other zone. 

Any process created by a process in a non-global zone is also associated with that non-global zone. Any zone which is not the global zone is called a non-global zone.




File and Directory : 

As an Oracle DBA note: the scripts that start the database as part of the cluster are placed under /etc/rc0.d/. Eg /etc/rc0.d/K10dbora


1) 

The directory /etc/zones contains the configuration files for each zone, named after the zone. Examining these files will show the configuration of the zones.

The index file in this directory also contains the status of the zones:

# cat <zonename>.xml
cat /etc/zones/myzone.xml



Viewing Global Zone name from Local Zone :


Global-Zone# cat globalzonename.sh
#!/usr/bin/bash
for i in `/usr/sbin/zoneadm list -v | /usr/bin/egrep -v "ID|global" |/usr/bin/awk ' { print $4 } '`; do echo `uname -n` > $i/root/etc/GLOBALZ; done




Commands : 

1) 
# /usr/sbin/zoneadm list -vi to show the status of all installed zones (i stands for installed)
# /usr/sbin/zoneadm list -vc to show the status of all configured zones (c stands for configured, which includes installed zones).



2) 
zonecfg -z zonename info    -->  To list the resources used by each zone: use zonecfg -z with the info option to list a specific zone's configuration.



3) 
zonename


4) 
zpool list



5) 
zonestat -z zonename -r physical-memory 2
zonestat -z zone1,zone2 5 



Friday, August 18, 2023

Oracle Database SQLTXPLAIN (SQLT) Installation and report generation

 
There are a lot of documents online on SQLT; however, I have documented the steps below for my quick reference and in case they help someone.

SQLT needs installation, which sometimes becomes a challenge; in that case the lighter-weight SQLHC can be used instead.



Installation : 

Install SQLT
Install SQLT by executing the installation script sqlt/install/sqcreate.sql connected as SYS:


# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql



During the installation you will be asked to enter values for these parameters:

Optional Connect Identifier - In some restricted-access systems you may need to specify a connect identifier like @PROD. If a connect identifier is not needed, enter nothing and just hit the "Enter" key.

SQLTXPLAIN password - Case sensitive in most systems.

SQLTXPLAIN Default Tablespace - Select from a list of available permanent tablespaces which one should be used by SQLTXPLAIN for the SQLT repository. It must have more than 50MB of free space.

SQLTXPLAIN Temporary Tablespace - Select from a list of available temporary tablespaces which one should be used by SQLTXPLAIN for volatile operations and objects.

Optional Application User - This is the user that issued the SQL statement to be analyzed. For example, if this were an EBS system specify APPS, on Siebel you would specify SIEBEL and on People Soft 
SYSADM. You won't be asked to enter the password for this user. You can add additional SQLT users after the tool is installed, by granting them role SQLT_USER_ROLE or by using provided script sqlt/install/sqguser.sql

Licensed Oracle Pack. (T, D or N) - You can specify T for Oracle Tuning, D for Oracle Diagnostic or N for none. If T or D is selected, SQLT may include licensed content within the diagnostics files it produces. Default is T. If N is selected, SQLT installs with limited functionality.



Report Generation : 

Note : after a 19c migration the SQLTXADMIN objects become invalid and SQLT has to be reinstalled, as per Doc ID 2920902.1.


== EXTRACT SQLT reports:

Starting with 12cR1, SQLT can be executed under the SYS user; to do so, run the following before starting the extraction:

Fix for ORA-06598 (INHERIT PRIVILEGES and the SQLT role):
SQL> CONN /as sysdba
SQL> GRANT INHERIT PRIVILEGES ON USER sys TO sqltxadmin;



1: To speed up SQLT extraction -- connect as the SQLTXADMIN user and run:

EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');
EXEC sqltxplain.sqlt$a.set_sess_param('sql_tuning_advisor','N');
EXEC sqltxadmin.sqlt$a.set_param('ash_reports', '0');
EXEC sqltxadmin.sqlt$a.set_param('awr_reports', '0');




-- sqltxtract.sql script gather the details from memory or else from AWR snapshots.
cd sqlt/run
sqlplus ep/ep; -- connect as application user
@sqltxtract.sql sqlid sqltpassword
@/home/oracle/sqlt/run/sqltxtract.sql jkz7v96ym42c6 SQLTXPLAIN



 


Reference : 

How to Collect Standard Diagnostic Information Using SQLT for SQL Issues (Doc ID 1683772.1)
Document 215187.1 SQLT Diagnostic Tool

Monday, August 14, 2023

Tracing Oracle Database PLSQL using DBMS_HPROF and PLSQL Profiler

 

There may be times when a normal SQL trace of a PLSQL package is not sufficient. For example, when the SQL statements seem to be running fast, but the procedure takes a long time. It may be useful to understand which calls in the PLSQL package are taking the time. With this in mind, this document provides a step by step guide to tracing PLSQL using DBMS_HPROF.

If we are writing a procedure, it is always advisable to add the calls below to the procedure to capture more details on errors:

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE holds the line number of the error.
DBMS_UTILITY.FORMAT_ERROR_STACK still holds the last error message.



One way to debug compilation issues is setting PLSQL_DEBUG = true before compilation.

EG 

alter session set plsql_warnings='enable:all';
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL','DISABLE:06015';
ALTER SESSION SET PLSQL_DEBUG = true;
ALTER PACKAGE XXXX COMPILE DEBUG;
ALTER PACKAGE XXXX COMPILE DEBUG BODY;





Tracing Plsql using Event 10938

Another way to trace PL/SQL is using the trace events 10928 and 10938.


Event 10928 - Trace PL/SQL Execution  :  This event traces PL/SQL execution
ALTER SESSION SET EVENTS '10928 trace name context forever, level 1';


Event 10938 - Dump PL/SQL Execution Statistics  : This event dumps PL/SQL execution statistics.
ALTER SESSION SET EVENTS '10938 trace name context forever, level 1';

Level    Description for event 10938
1 Trace all calls 
2 Trace calls to enabled programs only 
4 Trace all exceptions 
8 Trace exceptions in enabled program units only 
10 Tracing for levels 2 and 8
14 Tracing for levels 2, 4, and 8


There is also an option for tracing PL/SQL using dbms_trace.set_plsql_trace:

dbms_trace.set_plsql_trace(dbms_trace.TRACE_ENABLED_CALLS+dbms_trace.TRACE_LIMIT);



DBMS_HPROF.

create directory PROF_DIR as '/home/oracle/pl';
grant all on directory PROF_DIR to pltest;
grant execute on dbms_hprof to pltest;




-- as user PLTEST 

execute dbms_hprof.create_tables();

With the last call the tables DBMSHP_TRACE_DATA, DBMSHP_RUNS, DBMSHP_FUNCTION_INFO and DBMSHP_PARENT_CHILD_INFO are created. They store the information in the database schema  and can be queried afterwards. 


Create the DBMS_HPROF objects if not present.

Log on to the PLTEST schema that owns the PL/SQL package being profiled.
sqlplus <user>/<password>
SQL> @$ORACLE_HOME/rdbms/admin/dbmshptab.sql




Profiling Pl/sql 

SET SERVEROUTPUT ON;
EXEC DBMS_HPROF.START_PROFILING('PROF_DIR','P1_PROFILE_DATA.trc');

/* Within the same session run the P1 procedure */
EXEC P1;

/* Stop the profiling */
EXEC DBMS_HPROF.STOP_PROFILING;




Analyzing Raw File Generated : 

This raw profiler trace file can be analyzed in two ways:

1) Using the DBMS_HPROF.ANALYZE function.
2) Using the "plshprof" command line utility. 


Eg :

plshprof -output <out_file_name> P1_PROFILE_DATA.trc
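
As a hedged sketch of the first option (run as the profiled user; the directory and trace file names follow the example above), DBMS_HPROF.ANALYZE loads the trace into the DBMSHP_* tables and returns a run id:

SET SERVEROUTPUT ON
DECLARE
  l_runid NUMBER;
BEGIN
  l_runid := DBMS_HPROF.analyze(location    => 'PROF_DIR',
                                filename    => 'P1_PROFILE_DATA.trc',
                                run_comment => 'P1 profiling run');
  DBMS_OUTPUT.put_line('runid = ' || l_runid);
END;
/

-- then query the repository, for example:
select runid, run_timestamp, total_elapsed_time from dbmshp_runs order by runid;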





Reference : 

How to Tune PL/SQL Applications and Identify Hot Spots using DBMS_HPROF (Doc ID 763944.1)

How to resolve error - PLS-00201: identifier 'DBMS_HPROF' must be declared? (Doc ID 2867790.1)

SRDC - Providing Supporting Information for PL/SQL Runtime Issues (Doc ID 2288591.1)

https://blogs.oracle.com/coretec/post/plsql-tuning-with-plsql-hierarchical-profiler



DBMS_PROFILER

What about users who have only limited privileges in their Oracle environment and who may not have access to a directory object? The solution is the DBMS_PROFILER package, which writes the data directly into database tables. The procedure works similarly to the DBMS_HPROF procedure described above.

First the corresponding repository tables PLSQL_PROFILER_RUNS, PLSQL_PROFILER_DATA and PLSQL_PROFILER_UNITS are created in the application schema. To do this, use the proftab.sql utility from the $ORACLE_HOME/rdbms/admin directory.



Configuring the profiler is fairly easy; in the schema that owns the PL/SQL objects the proftab.sql script (located at $ORACLE_HOME/rdbms/admin) 
must be run to create the tables that the profiler needs to store run data; those tables are:


PLSQL_PROFILER_UNITS
PLSQL_PROFILER_RUNS
PLSQL_PROFILER_DATA


An example of how to use the profiler once the setup has been successfully completed follows:


set linesize 1000 trimspool on
select dbms_profiler.start_profiler('DATES PACKAGE') from dual;
select dates_pkg.minutes_elapsed(sysdate-10, sysdate) from dual;
select dbms_profiler.stop_profiler from dual;
spool dates_package_profiler_run.rpt
@?/rdbms/admin/profsum
spool off

or 

execute dbms_profiler.start_profiler;
execute testprof;
execute dbms_profiler.stop_profiler;

select runid, run_date, run_comment from plsql_profiler_runs;  
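
As a small hedged sketch against the repository tables above (TOTAL_TIME is assumed to be in nanoseconds; FETCH FIRST needs 12c or later), the hottest lines of the latest run can be listed like this:

select u.unit_owner, u.unit_name, d.line#, d.total_occur,
       round(d.total_time/1e9, 3) total_secs
  from plsql_profiler_units u
  join plsql_profiler_data  d
    on d.runid = u.runid
   and d.unit_number = u.unit_number
 where u.runid = (select max(runid) from plsql_profiler_runs)
 order by d.total_time desc
 fetch first 10 rows only;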





Reference : 

https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_profiler.htm#ADFNS02301

Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)

https://oracle-base.com/articles/11g/plsql-hierarchical-profiler-11gr1


Saturday, August 12, 2023

Oracle Rac Node failover Testing


As part of a new build we need to do node failover testing, hence I am documenting some steps known to me.

Each test below lists the objective, the action, and the expected result.

1. Objective: Test an accidental change of the IP addresses of the cluster.
   Action: Modify /etc/hosts and change the VIP addresses; try starting the cluster.
   Expectation: The cluster must not be able to boot up, with error messages related to the network IPs.

2. Objective: Test the failure of the ASM instance (ASM instance crash).
   Action: On node 1, kill the processes related to ASM.
   Expectation: The cluster must not be affected by the ASM crash on node 1.

3. Objective: Test the failover of the network card of the public IP address.
   Action: Plug out one of the cables connecting the node to the public network.
   Expectation: The ping from the client should still be able to go through.

4. Objective: Test the stability of the database by killing an Oracle server process.
   Action: Kill off a process related to the Oracle daemons from a terminal; this should crash the database processes.
   Expectation: The SQL session should not be affected and should fail over to the next node without any disruption.

5. Objective: Test the complete failover of the session in the event of a node failure (e.g. sudden shutdown).
   Action: 1. Connect from the client to the database via sqlplus. 2. Power off the unix box on which the sqlplus connection is running.
   Expectation: The SQL session should not be affected and should fail over to the next node without any disruption.

6. Objective: Test the failover of a connected Oracle user's session in the event of an instance failure.
   Action: 1. Connect from a client to node 1 in the RAC environment using sqlplus. 2. Connect to the database on node 1 (sqlplus / as sysdba) and do a shutdown.
   Expectation: The SQL connection should swing over to the other instance and still continue the select query; the sqlplus connection from the client to the db remains available.

7. Objective: Test the crash of Grid processes.
   Action: Kill the LMON process on a RAC node.
   Expectation: The processes must be restarted automatically.

8. Objective: Test the automatic shifting of the SCAN listeners.
   Action: Reboot node 1 using the OS reboot command.
   Expectation: 1. When node 1 goes down, all SCAN listeners must shift to the remaining node. 2. When node 1 comes back up, the SCAN listeners must shift back to the original node.

9. Objective: Test a storage crash.
   Action: Forcibly make the shared storage disks invisible to both nodes.
   Expectation: The cluster must go down and should not start, since RAC does not safeguard against storage failures.

10. Objective: TAF failover test.
    Action: Test SQL and session failover.
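
A couple of hedged SQL-side checks that are handy while running these tests (standard GV$ views, run from a surviving instance):

select inst_id, instance_name, host_name, status from gv$instance;

select inst_id, count(*) active_sessions
  from gv$session
 where username is not null
 group by inst_id;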

 

 



Tuesday, August 8, 2023

Oracle Database Performance Hub report using DBMS_PERF



When we have performance issues we normally have to log in to OEM to get performance details.

We can get a quick snapshot of performance using DBMS_PERF.REPORT_PERFHUB as shown below. It is an older facility, introduced in 12c, but not frequently used.


Command : 

SET LONG 10000000;
SET LONGC 10000000;
SET LINESIZE 32767;
SET PAGESIZE 0;
SET TRIMSPOOL ON;
SPOOL perfhub.html
SELECT DBMS_PERF.REPORT_PERFHUB FROM DUAL;
SPOOL OFF;





Another option used:


SET HEADING OFF LINESIZE 32767 PAGESIZE 0 TRIMSPOOL ON TRIMOUT ON LONG 9999999 VERIFY OFF LONGCHUNKSIZE 100000 FEEDBACK OFF
SET TERMOUT OFF

spool perfhub.html

SELECT DBMS_PERF.REPORT_PERFHUB (
    is_realtime         => 0           -- (  0 = dba_hist, 1 = v$ash  )
  , outer_start_time    => sysdate-1
  , outer_end_time      => sysdate
  , selected_start_time => TIMESTAMP'2023-05-15 20:15:00'
  , selected_end_time   => TIMESTAMP'2023-05-15 20:45:00'
  , type=>'ACTIVE'
)
FROM dual ;


spool off
SET TERMOUT ON HEADING ON PAGESIZE 5000 LINESIZE 999 FEEDBACK ON 



 

A historical SQL monitoring report can be extracted for a particular SQL statement using the SQL Details report.

Example: generate a SQL details report to get the SQL monitoring information for a particular SQL statement in historical mode.

SQL> set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SQL> spool sql_details_history.html
SQL> select dbms_perf.report_sql(sql_id=>'9vkyyg1xj6fgc',is_realtime=>0,type=>'active',selected_start_time=>to_date('10-SEP-18 04:00:00','dd-MON-YY hh24:mi:ss'),selected_end_time=>to_date('10-SEP-18 05:00:00','dd-MON-YY hh24:mi:ss')) from dual;
SQL> spool off


Note in 19c or higher it is better to add "outer_start_time" and  "outer_end_time" to get the desired results:

SQL> set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SQL> spool sql_details_history.html
SQL> select dbms_perf.report_perfhub( is_realtime=>0, outer_start_time=>to_date('06-SEP-2022 12:00:00','dd-MON-YYYY hh24:mi:ss'), outer_end_time=>to_date('06-SEP-2022 13:00:00','dd-MON-YYYY hh24:mi:ss'), selected_start_time=>to_date('06-SEP-2022 12:00:00','dd-MON-YYYY hh24:mi:ss'), selected_end_time=>to_date('06-SEP-2022 13:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
SQL> spool off





We can also generate a historical SQL monitoring report using dbms_auto_report.


==>  Getting Details 


SQL> select distinct COMPONENT_NAME from DBA_HIST_REPORTS;

COMPONENT_NAME
-----------------------------------------------------------
sqlmonitor
perf



COL SQL_EXEC_START for a25
col sql_exec_id form a20
col  sql_id for a20

select key1 as sql_id ,  to_char( key2) as sql_exec_id ,   
to_char( to_date (key3 , 'mm:dd:yyyy hh24:mi:ss')  , 'yyyy-mm-dd hh24:mi:ss' ) sql_exec_start , 
report_id , instance_number , session_id , session_serial# , period_start_time , period_end_time 
from dba_hist_reports where key1='&sql_id'  order by sql_exec_start ;



SELECT report_id,
       EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/@sql_id') sql_id,
       EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/@sql_exec_id') sql_exec_id,
       EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/@sql_exec_start') sql_exec_start
  FROM dba_hist_reports
 WHERE component_name = 'sqlmonitor';




SELECT /*+ NO_XML_QUERY_REWRITE */ t.report_id, x1.sql_id, x1.plan_hash, x1.sql_exec_id, x1.elapsed_time/1000000 ELAP_SEC
FROM dba_hist_reports t    
   , xmltable('/report_repository_summary/sql'    
       PASSING xmlparse(document t.report_summary)    
       COLUMNS    
         sql_id                path '@sql_id'     
       , sql_exec_start        path '@sql_exec_start'    
       , sql_exec_id           path '@sql_exec_id'      
       , status                path 'status'    
       , sql_text    path 'sql_text'
       , first_refresh_time    path 'first_refresh_time'
       , last_refresh_time     path 'last_refresh_time'
       , refresh_count         path 'refresh_count'
       , inst_id               path 'inst_id'
       , session_id            path 'session_id'
       , session_serial        path 'session_serial'
       , user_id               path 'user_id'
       , username              path 'user'
       , con_id                path 'con_id'
       , con_name              path 'con_name'
       , modul                 path 'module'
       , action                path 'action'
       , service               path 'service'
       , program               path 'program'
       , plan_hash             path 'plan_hash'
       , is_cross_instance     path 'is_cross_instance'
       , dop    path 'dop'
       , instances             path 'instances'
       , px_servers_requested  path 'px_servers_requested'
       , px_servers_allocated  path 'px_servers_allocated'
       , duration              path 'stats/stat[@name="duration"]'  
       , elapsed_time          path 'stats/stat[@name="elapsed_time"]'  
       , cpu_time              path 'stats/stat[@name="cpu_time"]'  
       , user_io_wait_time     path 'stats/stat[@name="user_io_wait_time"]'
       , application_wait_time path 'stats/stat[@name="application_wait_time"]'
       , concurrency_wait_time path 'stats/stat[@name="concurrency_wait_time"]'
       , cluster_wait_time     path 'stats/stat[@name="cluster_wait_time"]'
       , plsql_exec_time       path 'stats/stat[@name="plsql_exec_time"]'
       , other_wait_time       path 'stats/stat[@name="other_wait_time"]'
       , buffer_gets           path 'stats/stat[@name="buffer_gets"]'
       , read_reqs             path 'stats/stat[@name="read_reqs"]'
       , read_bytes            path 'stats/stat[@name="read_bytes"]'
     ) x1 
where x1.elapsed_time/1000000 > 200
and   t.COMPONENT_NAME = 'sqlmonitor'
order by 5
/



==> Report Generation 



set long 1000000
set linesize 330
set longchunksize 1000000
set pagesize 0
set trim on 
set echo off
set feedback off 
set trimspool on 

select dbms_auto_report.report_repository_detail( RID=> &report_id , type=> 'TEXT' ) as  report  from dual ; 

or 

select dbms_auto_report.report_repository_detail( RID=> &report_id , type=> 'active' ) as  report  from dual ; 






Generating Database Performance Hub Report :

@?/rdbms/admin/perfhubrpt.sql



Syntax

DBMS_PERF.REPORT_PERFHUB (
  is_realtime         IN NUMBER   DEFAULT NULL,
  outer_start_time    IN DATE     DEFAULT NULL,
  outer_end_time      IN DATE     DEFAULT NULL,
  selected_start_time IN DATE     DEFAULT NULL,
  selected_end_time   IN DATE     DEFAULT NULL,
  inst_id             IN NUMBER   DEFAULT NULL,
  dbid                IN NUMBER   DEFAULT NULL,
  monitor_list_detail IN NUMBER   DEFAULT NULL,
  workload_sql_detail IN NUMBER   DEFAULT NULL,
  addm_task_detail    IN NUMBER   DEFAULT NULL,
  report_reference    IN VARCHAR2 DEFAULT NULL,
  report_level        IN VARCHAR2 DEFAULT NULL,
  type                IN VARCHAR2 DEFAULT 'ACTIVE',
  base_path           IN VARCHAR2 DEFAULT NULL)
RETURN CLOB;



 REPORT_PERFHUB Function Parameters

Parameter - Description

is_realtime

If 1, then real-time. If NULL (default) or 0, then historical mode.

outer_start_time

Start time of outer period shown in the time selector. If NULL (default):

  • If is_realtime=0 (historical), then 24 hours before outer_end_time.

  • If is_realtime=1 (realtime mode), then 1 hour before outer_end_time.

outer_end_time

End time of outer period shown in the time selector. If NULL (default), then latest AWR snapshot.

  • If is_realtime=0 (historical), then the latest AWR snapshot

  • If is_realtime=1 (realtime mode), this is the current time (and any input is ignored)

selected_start_time

Start time period of selection. If NULL (default)

  • If is_realtime=0, then 1 hour before selected_end_time

  • If is_realtime=1, then 5 minutes before selected_end_time

selected_end_time

End time period of selection. If NULL (default)

  • If is_realtime=0, then latest AWR snapshot

  • If is_realtime=1, then current time

inst_id

Instance ID to for which to retrieve data

  • If -1, then current instance

  • If number is specified, then for that instance

  • If NULL (default), then all instances

dbid

DBID to query.

  • If NULL, then current DBID.

  • If is_realtime=1, then DBID must be the local DBID.

monitor_list_detail

Top N in SQL monitor list for which to retrieve SQL monitor details.

  • If NULL (default), then retrieves top 10

  • If 0, then retrieves no monitor list details

workload_sql_detail

Top N in Workload Top SQL list to retrieve monitor details,

  • If NULL (default), then retrieves top 10

  • If 0, then retrieves no monitor list details

addm_task_detail

Maximum N latest ADDM tasks to retrieve

  • If NULL (default), retrieves available data but no more than N

  • If 0, then retrieves no ADDM task details

report_reference

Must be NULL when used from SQL*Plus.

report_level

'typical' will get all tabs in performance hub

type

Report type:

  • 'ACTIVE' (default)

  • 'xml' returns XML

base_path

URL path for HTML resources since flex HTML requires access to external files. This is only valid for type='ACTIVE' and is typically not used. Default value will retrieve the required files from OTN.

 


Reference : 

Monitoring Database Performance Using Performance Hub Report (Doc ID 2436566.1)

https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_PERF.html#GUID-85CD8AB9-B6E0-444F-91A5-762EB92A74E9 

How To Get Historical SQL Monitor Report For SQL Statements (Doc ID 2555350.1)