Tuesday, September 26, 2023

Handling Errors Faced While Generating Oracle AWR Reports / AWR Snapshots



Since we were facing a lot of issues related to AWR snapshots and AWR reports, I thought of documenting the known troubleshooting steps and known fixes.



Known Issues:


Issue 1) Generating an AWR report hangs on 'control file sequential read'

Solution: As per Doc ID 2045523.1:

SQL> alter session set "_hash_join_enabled" = true;




Issue 2) AWR snapshot hangs with enq: WF - contention

Solution 1: As per Doc ID 2695000.1, kill the MMON child processes and the MMON process for the instance that is not generating AWR reports, so that they are restarted by the database:

 ps -ef | grep ora_m0
 ps -ef | grep _mmon_
 kill -9 <pid>     # kill the m00n slave PIDs first, then the MMON PID reported above




Solution 2: Gather stats on these three fixed tables and check manual report generation again. If possible, also gather fixed object stats.

SQL> exec dbms_stats.gather_table_stats('SYS', 'X$KEWRSQLIDTAB');
SQL> exec dbms_stats.gather_table_stats('SYS', 'X$KEWRTSQLPLAN');
SQL> exec dbms_stats.gather_table_stats('SYS', 'X$KEHSQT');
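The fixed object stats mentioned above can be gathered with the standard DBMS_STATS call (run as SYS; this can take a while on a busy instance):

SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;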

Verify with the tables' LAST_ANALYZED details:

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name ='X$KEWRSQLIDTAB';
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KEWRTSQLPLAN';
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KEHSQT';




Solution 3: As per Doc ID 2649588.1, the MMON process can cause blocking and "enq: WF - contention" waits due to growth in the volume of AWR data that needs to be purged.

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/mmon-process-causing-blocking-and.html  --  Doc ID 2649588.1

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/troubleshooting-missing-automatic.html

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/purging-oracle-awr-details.html

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/09/manually-purge-optimizer-statistics-awr.html
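As a sketch of the purge approach described in the notes above, AWR retention can be reduced (retention is specified in minutes) and old snapshots dropped manually. The snapshot IDs below are placeholders; check DBA_HIST_SNAPSHOT for real values first:

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 7*24*60);   -- keep 7 days
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 100, high_snap_id => 200);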



Obtain flush timing for all the tables (from Note 1301503.1):

set pagesize 999
column name format a28
column time format a29
variable dbid number
exec select dbid into :dbid from v$database;

variable snap_id number
exec select max(snap_id) into :snap_id from wrm$_snapshot where dbid=:dbid;

select table_name_kewrtb name, end_time-begin_time time
from wrm$_snapshot_details, x$kewrtb
where snap_id = :snap_id
and dbid = :dbid
and table_id = table_id_kewrtb
order by table_id;






Data Collection for analysis:


1) 10046 tracing

Try generating snapshots manually, with 10046 tracing enabled in the session where you run the manual snapshot:

alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set tracefile_identifier = '10046_awr';
alter session set events '10046 trace name context forever,level 12';

set timing on
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;




2) Collect a TFA diagnostic

$TFA_HOME/bin/tfactl diagcollect -srdc dbawrspace



3) Check the MMON trace files and the alert log files



4) AWR info report

Check which objects are the largest in the AWR by running an AWR info report:

conn / as sysdba
@?/rdbms/admin/awrinfo.sql




5) Other views

================

SQL> select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;

SQL> select * from cdb_hist_wr_control;



COLUMN NAME FORMAT a40
COLUMN VALUE FORMAT a30
set lines 200 pages 200

select ksppinm name,
ksppstvl value
from sys.x$ksppi x,
sys.x$ksppcv y
where (x.indx = y.indx)
and ksppinm = '_awr_mmon_cpuusage';


Select count(*) from WRI$_OPTSTAT_HISTHEAD_HISTORY ;



select dbms_stats.get_stats_history_retention from dual;

select dbms_stats.get_stats_history_availability from dual;

select systimestamp - min(savtime) from sys.wri$_optstat_histgrm_history;
select min(savtime) from sys.wri$_optstat_histgrm_history;

SELECT MIN(SNAP_ID), MAX(SNAP_ID), COUNT(*) FROM SYS.WRH$_SYSMETRIC_HISTORY A
WHERE NOT EXISTS
(SELECT * FROM SYS.DBA_HIST_SNAPSHOT B WHERE B.SNAP_ID = A.SNAP_ID AND A.DBID = B.DBID);

SELECT COUNT(1) Orphaned_ASH_Rows
FROM wrh$_active_session_history a
WHERE NOT EXISTS
(SELECT 1
FROM wrm$_snapshot
WHERE snap_id = a.snap_id
AND dbid = a.dbid
AND instance_number = a.instance_number
);


========================





**************************
set markup html on spool on
spool /tmp/AWRCheck25thNov.html
set echo on

select systimestamp from dual;
select instance_name,version,host_name,status from gv$instance;

alter session set container = CDB$ROOT;

show con_name

show parameter awr

select * from cdb_hist_wr_control;

select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;

alter session set container=FREXR1P;

exec dbms_workload_repository.create_snapshot();

alter session set container = CDB$ROOT;

select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;

spool off
set markup html off
**************************





References : 

Troubleshooting: Missing Automatic Workload Repository (AWR) Snapshots and Other Collection Issues (Doc ID 1301503.1)

AWR Snapshots Not Generating ORA-13516 ( Doc ID 2756259.1 )

ORA-13516 AWR Operation failed: SWRF Schema not initialized ORA-06512 SYS.DBMS_WORKLOAD_REPOSITORY ( Doc ID 459887.1 )

Problem: ORA-13516 When Trying To Make A Manual Snapshot Of Repository Database ( Doc ID 365097.1 )

AWR Snapshots Not Generating ( Doc ID 308003.1 )

ORA-13516: AWR Operation Failed: CATPROC Not Valid (Doc ID 2547174.1)

MMON Trace Shows: "*** KEWRAFC: Flush slave failed, AWR Enqueue Timeout" ( Doc ID 560204.1 )

AWR Snapshots Fail to Generate With Associated "Suspending MMON action '%s' for 82800 seconds" alert log messages and Underlying ORA-12751 Errors ( Doc ID 2043531.1 )

Monday, September 11, 2023

Oracle sysresv utility to release shared memory / semaphores -- No need for ipcs and ipcrm


After shutdown of an Oracle database, shared memory segments staying behind is a known issue on Linux. Traditionally, DBAs use ipcs and ipcrm to release the shared memory segments.


There is also the lesser-known Linux utility pmap, which shows the memory map of a process.
Eg :  [oracle]$ pmap `pgrep -f lgwr`



Oracle ships its own utility, sysresv, which can be used to release shared memory segments and semaphores, so there is no need to use ipcs and ipcrm.

Eg :  sysresv -l instance1 

 
Usage:
------

sysresv: 
usage   : sysresv [-if] [-d <on/off>] [-l sid1 <sid2> ...]
          -i : Prompt before removing ipc resources for each sid
          -f : Remove ipc resources silently, overrides -i option
          -d <on/off> : List ipc resources for each sid if on
          -l sid1 <sid2> .. : apply sysresv to each sid
Default : sysresv -d on -l $ORACLE_SID
Note    : ipc resources are attempted to be deleted for a
          sid only if there is no currently running instance
          with that sid. 


   
Examples:
---------

o  Instance is not running:

   /u02/app/oracle/product/8.1.7> sysresv

   IPC Resources for ORACLE_SID "R817" :
   Shared Memory
   ID              KEY
   No shared memory segments used
   Semaphores:
   ID              KEY
   No semaphore resources used
   Oracle Instance not alive for sid "R817" 


o  Instance is running:

   /u03/app/oracle/product/8.1.6> sysresv

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory:
   ID              KEY
   16437           0xe4efa8dc
   Semaphores:
   ID              KEY
   12320802        0x09d48346
   Oracle Instance alive for sid "ORCL"


o  Attempting to remove memory and semaphores using sysresv when Oracle 
   detects an instance is running:

   /u03/app/oracle/product/8.1.6> sysresv -f

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory:
   ID              KEY
   16437           0xe4efa8dc
   Semaphores:
   ID              KEY
   12320802        0x09d48346
   Oracle Instance alive for sid "ORCL"
   SYSRESV-005: Warning
           Instance maybe alive - aborting remove for sid "ORCL" 


o  Removing IPC resources:

   [Sysresv shows memory and semaphores exist but Oracle determines the 
    instance is not alive.  Cleanup is needed.]

   /u03/app/oracle/product/8.1.6> sysresv

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory:
   ID              KEY
   16837           0xe4efa8dc
   Semaphores:
   ID              KEY
   12714018        0x09d48346
   Oracle Instance not alive for sid "ORCL" 
 

o  Removing IPC resources using sysresv:

   /u03/app/oracle/product/8.1.6> sysresv -i

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory
   ID              KEY
   No shared memory segments used
   Semaphores:
   ID              KEY
   No semaphore resources used
   Oracle Instance not alive for sid "ORCL"
   Remove ipc resources for sid "ORCL" (y/n)?y
   Done removing ipc resources for sid "ORCL"
   /u03/app/oracle/product/8.1.6


   Verify the resources were removed:

   /u03/app/oracle/product/8.1.6> sysresv

   IPC Resources for ORACLE_SID "ORCL" :
   Shared Memory
   ID              KEY
   No shared memory segments used
   Semaphores:
   ID              KEY
   No semaphore resources used
   Oracle Instance not alive for sid "ORCL"  

     
o  If you need to remove memory segments, and Oracle detects the
   instance is alive through sysresv:

   % ipcrm -m <memid>

   Where <memid> is the memory id shown in the sysresv output.

   Example:
   % ipcrm -m 16437

   If you need to remove semaphores, and Oracle detects the
   instance is alive through sysresv:

   % ipcrm -s <semid>

   where <semid> is the semaphore id shown in the sysresv output.

   Example:
   % ipcrm -s 12320802



Reference :

SYSRESV Utility (Doc ID 123322.1)

Startup fail with ORA-29701 & ORA-29702 after the DB crash. (Doc ID 2160481.1)


Tuesday, August 22, 2023

Oracle Solaris Zone management


Solaris Zones is a software partitioning technology that virtualizes operating system services to create isolated environments for running applications.
This isolation prevents processes running in one zone from monitoring or affecting processes running in other zones.

After installing Oracle Solaris 10 on a system, but before creating any zones, all processes run in the global zone.

After you create a zone, it has processes that are associated with that zone and no other zone. Any zone which is not the global zone is called a non-global zone.

Any process created by a process in a non-global zone is also associated with that non-global zone.




Files and Directories:

As an Oracle DBA, note that the scripts that start the database as part of the cluster are placed under /etc/rc0.d/. Eg /etc/rc0.d/K10dbora


1) 

The directory /etc/zones contains the configuration file for each zone, named after the zone. Examining these files shows the configuration of the zones.

The index file in this directory also contains the status of the zones:

# cat /etc/zones/index
# cat /etc/zones/<zonename>.xml
Eg: cat /etc/zones/myzone.xml



Viewing Global Zone name from Local Zone :


Global-Zone# cat globalzonename.sh
#!/usr/bin/bash
for i in `/usr/sbin/zoneadm list -v | /usr/bin/egrep -v "ID|global" |/usr/bin/awk ' { print $4 } '`; do echo `uname -n` > $i/root/etc/GLOBALZ; done




Commands : 

1) 
# /usr/sbin/zoneadm list -vi    --> show status of all installed zones (i stands for installed)
# /usr/sbin/zoneadm list -vc    --> show status of all configured zones (c stands for configured, which includes installed zones)



2) 
zonecfg -z zonename info    -->  To list the resources used by each zone. Use zonecfg -z with the info option to list a specific zone configuration.



3) 
zonename


4) 
zpool list



5) 
zonestat -z zonename -r physical-memory 2
zonestat -z zone1,zone2 5 



Friday, August 18, 2023

Oracle Database SQLTXPLAIN (SQLT) Installation and report generation

 
There are a lot of documents online about SQLT; however, I have documented the steps below for my quick reference and in case it helps someone.

SQLT needs installation, which sometimes becomes a challenge; in that case the lighter version, SQLHC, can be used instead.



Installation : 

Install SQLT by executing the installation script sqlt/install/sqcreate.sql connected as SYS:


# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql



During the installation you will be asked to enter values for these parameters:

Optional Connect Identifier - In some restricted-access systems you may need to specify a connect identifier like @PROD. If a connect identifier is not needed, enter nothing and just hit the "Enter" key.

SQLTXPLAIN password - Case sensitive in most systems.

SQLTXPLAIN Default Tablespace - Select from a list of available permanent tablespaces which one should be used by SQLTXPLAIN for the SQLT repository. It must have more than 50MB of free space.

SQLTXPLAIN Temporary Tablespace - Select from a list of available temporary tablespaces which one should be used by SQLTXPLAIN for volatile operations and objects.

Optional Application User - This is the user that issued the SQL statement to be analyzed. For example, if this were an EBS system specify APPS, on Siebel you would specify SIEBEL and on People Soft 
SYSADM. You won't be asked to enter the password for this user. You can add additional SQLT users after the tool is installed, by granting them role SQLT_USER_ROLE or by using provided script sqlt/install/sqguser.sql

Licensed Oracle Pack. (T, D or N) - You can specify T for Oracle Tuning, D for Oracle Diagnostic or N for none. If T or D is selected, SQLT may include licensed content within the diagnostics files it produces. Default is T. If N is selected, SQLT installs with limited functionality.
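For example, to allow an additional user to run SQLT after installation (SCOTT here is a hypothetical user name), grant the role mentioned above:

SQL> GRANT SQLT_USER_ROLE TO scott;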



Report Generation : 

Note: after a 19c migration the SQLTXADMIN objects become invalid and SQLT has to be reinstalled, as per Doc ID 2920902.1.


== EXTRACT SQLT reports:

Starting with 12cR1, SQLT can be executed under the SYS user -- to do so, run the following before starting execution:

Fix for ORA-06598 (INHERIT ANY PRIVILEGES and the SQLT role):
SQL> CONN / AS SYSDBA
SQL> GRANT INHERIT PRIVILEGES ON USER sys TO sqltxadmin;



1: To speed up SQLT extraction -- connect as the SQLTXADMIN user and run:

EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');
EXEC sqltxplain.sqlt$a.set_sess_param('sql_tuning_advisor','N');
EXEC sqltxadmin.sqlt$a.set_param('ash_reports', '0');
EXEC sqltxadmin.sqlt$a.set_param('awr_reports', '0');




-- The sqltxtract.sql script gathers the details from memory, or else from AWR snapshots.
cd sqlt/run
sqlplus ep/ep; -- connect as application user
@sqltxtract.sql sqlid sqltpassword
@/home/oracle/sqlt/run/sqltxtract.sql jkz7v96ym42c6 SQLTXPLAIN



 


Reference : 

How to Collect Standard Diagnostic Information Using SQLT for SQL Issues (Doc ID 1683772.1)
SQLT Diagnostic Tool (Doc ID 215187.1)

Monday, August 14, 2023

Tracing Oracle Database PLSQL using DBMS_HPROF and PLSQL Profiler

 

There may be times when a normal SQL trace of a PLSQL package is not sufficient. For example, when the SQL statements seem to be running fast, but the procedure takes a long time. It may be useful to understand which calls in the PLSQL package are taking the time. With this in mind, this document provides a step by step guide to tracing PLSQL using DBMS_HPROF.



One way to debug compilation issues is to enable all PL/SQL warnings before compilation:
SQL> alter session set plsql_warnings='enable:all';
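For example, after enabling the warnings, recompile the unit and inspect USER_ERRORS (P1 here is a hypothetical procedure name):

SQL> alter procedure p1 compile;
SQL> select line, position, text from user_errors where name = 'P1';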


If we are writing a procedure, it is always advisable to add the following calls in the procedure to capture more details on errors:

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE holds the line number of the error.
DBMS_UTILITY.FORMAT_ERROR_STACK still holds the last error message.
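A minimal sketch of an exception handler using both calls (the procedure body is hypothetical):

CREATE OR REPLACE PROCEDURE p_demo IS
  l_n NUMBER;
BEGIN
  SELECT 1/0 INTO l_n FROM dual;   -- force a runtime error (ORA-01476)
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);      -- last error message
    DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);  -- line number of the error
    RAISE;
END;
/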





DBMS_HPROF setup :

create directory PROF_DIR as '/home/oracle/pl';
grant all on directory PROF_DIR to pltest;
grant execute on dbms_hprof to pltest;




-- as user PLTEST 

execute dbms_hprof.create_tables();

This call creates the tables DBMSHP_TRACE_DATA, DBMSHP_RUNS, DBMSHP_FUNCTION_INFO and DBMSHP_PARENT_CHILD_INFO. They store the profiling information in the database schema and can be queried afterwards.


Alternatively, create the DBMS_HPROF objects via the supplied script if they are not present.

Log on as the PLTEST user that owns the PL/SQL package being profiled:
sqlplus <user>/<password>
SQL> @$ORACLE_HOME/rdbms/admin/dbmshptab.sql




Profiling the PL/SQL :

SET SERVEROUTPUT ON;
EXEC DBMS_HPROF.START_PROFILING('PROF_DIR','P1_PROFILE_DATA.trc');

/* Within the same session run the P1 procedure */
EXEC P1;

/* Stop the profiling */
EXEC DBMS_HPROF.STOP_PROFILING;




Analyzing Raw File Generated : 

This raw profiler trace file can be analyzed in two ways:

1) Using the DBMS_HPROF.ANALYZE function.
2) Using the "plshprof" command line utility. 


Eg :

plshprof -output <out_file_name> P1_PROFILE_DATA.trc
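For the first option, in releases where DBMS_HPROF.ANALYZE takes a directory and file name, the function loads the trace file into the DBMSHP_* tables and returns a run id that can then be queried (directory and file names are the ones used above):

SET SERVEROUTPUT ON
DECLARE
  l_runid NUMBER;
BEGIN
  l_runid := DBMS_HPROF.ANALYZE(location => 'PROF_DIR', filename => 'P1_PROFILE_DATA.trc');
  DBMS_OUTPUT.PUT_LINE('runid = ' || l_runid);
END;
/
SELECT runid, total_elapsed_time FROM dbmshp_runs;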





Reference : 

How to Tune PL/SQL Applications and Identify Hot Spots using DBMS_HPROF (Doc ID 763944.1)

How to resolve error - PLS-00201: identifier 'DBMS_HPROF' must be declared? (Doc ID 2867790.1)

SRDC - Providing Supporting Information for PL/SQL Runtime Issues (Doc ID 2288591.1)

https://blogs.oracle.com/coretec/post/plsql-tuning-with-plsql-hierarchical-profiler



DBMS_PROFILER

What about customers who have only limited privileges in their Oracle environment and who may not have access to a logical directory? The solution is to use the package DBMS_PROFILER, which writes the data directly into database tables. The procedure works similarly to the DBMS_HPROF procedure described above.

First, the corresponding repository tables must be created in the application schema. To do this, in the schema that owns the PL/SQL objects, run the proftab.sql script (located at $ORACLE_HOME/rdbms/admin) to create the tables the profiler needs to store run data. Those tables are:

PLSQL_PROFILER_UNITS
PLSQL_PROFILER_RUNS
PLSQL_PROFILER_DATA


An example of how to use the profiler once the setup has been successfully completed follows:


set linesize 1000 trimspool on
select dbms_profiler.start_profiler('DATES PACKAGE') from dual;
select dates_pkg.minutes_elapsed(sysdate-10, sysdate) from dual;
select dbms_profiler.stop_profiler from dual;
spool dates_package_profiler_run.rpt
@?/rdbms/admin/profsum
spool off

or 

execute dbms_profiler.start_profiler;
execute testprof;
execute dbms_profiler.stop_profiler;

select runid, run_date, run_comment from plsql_profiler_runs;  





Reference : 

https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_profiler.htm#ADFNS02301

Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)

https://oracle-base.com/articles/11g/plsql-hierarchical-profiler-11gr1


Saturday, August 12, 2023

Oracle Rac Node failover Testing


As part of a new build we need to do node failover testing, hence I am documenting some steps known to me.

Objective: To test accidental change of IP addresses of the cluster
Action: Modify /etc/hosts and change the VIP addresses - try starting the cluster
Expectation: The cluster must not be able to boot up, with error messages related to the network IPs


Objective: To test the failure of the ASM instance (ASM instance crash)
Action: On node 1, kill the processes related to ASM
Expectation: The cluster must not be affected by the ASM crash on node 1


Objective: To test the failover of the network card of the public IP address
Action: Plug out one of the cables connecting the node to the public network
Expectation: The ping from the client should still be able to go through


Objective: To test the stability of the db by killing an oracle server process
Action: Kill a process related to the oracle daemons from a terminal. This should crash the database processes
Expectation: The sql session should not be affected and should fail over to the next node without any disruption


Objective: To test the complete failover of the session in the event of a node failure (e.g. sudden shutdown)
Action: 1. Connect from the client to the database via sqlplus. 2. Power off the unix box in which the sqlplus connection is running
Expectation: The sql session should not be affected and should fail over to the next node without any disruption


Objective: To test the failover session of a connected oracle user in the event of an instance failure
Action: 1. Connect from a client to node 1 in the RAC environment using sqlplus. 2. Connect to the database on node 1 (sqlplus / as sysdba) and do a shutdown
Expectation: The sql connection should be able to swing over to the other instance and still continue the select query (the sqlplus connection from client to db is still available)


Objective: To test crash of Grid processes
Action: Kill the LMON process on a rac node
Expectation: It must be restarted automatically


Objective: To test automatic shifting of scan listeners
Action: Reboot node 1 using the OS reboot command
Expectation: 1. When node 1 goes down, all scan listeners must shift to the remaining node. 2. When node 1 comes up, the scan listeners must shift back to the original node


Objective: To test a storage crash
Action: Forcibly make the scan disk invisible to both nodes
Expectation: The cluster must go down and should not start, since rac does not safeguard against storage failures


Objective: TAF failover test
Action: To test sql and session failover



Tuesday, August 8, 2023

Oracle Database Performance Hub report using DBMS_PERF



When we have performance issues, normally we have to log in to OEM to get performance details.

We can get a quick snapshot of performance using DBMS_PERF.REPORT_PERFHUB as shown below. It is an old tool, introduced in 12c, but not frequently used.


Command : 

SET LONG 10000000;
SET LONGC 10000000;
SET LINESIZE 32767;
SET PAGESIZE 0;
SET TRIMSPOOL ON;
SPOOL perfhub.html
SELECT DBMS_PERF.REPORT_PERFHUB FROM DUAL;
SPOOL OFF;





Other options used:


SET HEADING OFF LINESIZE 32767 PAGESIZE 0 TRIMSPOOL ON TRIMOUT ON LONG 9999999 VERIFY OFF LONGCHUNKSIZE 100000 FEEDBACK OFF
SET TERMOUT OFF

spool perfhub.html

SELECT DBMS_PERF.REPORT_PERFHUB (
    is_realtime         => 0           -- (  0 = dba_hist, 1 = v$ash  )
  , outer_start_time    => sysdate-1
  , outer_end_time      => sysdate
  , selected_start_time => TIMESTAMP'2023-05-15 20:15:00'
  , selected_end_time   => TIMESTAMP'2023-05-15 20:45:00'
  , type=>'ACTIVE'
)
FROM dual ;


spool off
SET TERMOUT ON HEADING ON PAGESIZE 5000 LINESIZE 999 FEEDBACK ON 



 

Historical SQL monitoring report can be extracted for particular SQL statement using SQL Details report.

Example: To generate SQL details report to get the SQL monitoring information for particular SQL statement in historical mode.

SQL> set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SQL> spool sql_details_history.html
SQL> select dbms_perf.report_sql(sql_id=>'9vkyyg1xj6fgc',is_realtime=>0,type=>'active',selected_start_time=>to_date('10-SEP-18 04:00:00','dd-MON-YY hh24:mi:ss'),selected_end_time=>to_date('10-SEP-18 05:00:00','dd-MON-YY hh24:mi:ss')) from dual;
SQL> spool off

Note in 19c or higher it is better to add "outer_start_time" and  "outer_end_time" to get the desired results:

SQL> set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SQL> spool sql_details_history.html
SQL> select dbms_perf.report_perfhub( is_realtime=>0, outer_start_time=>to_date('06-SEP-2022 12:00:00','dd-MON-YYYY hh24:mi:ss'), outer_end_time=>to_date('06-SEP-2022 13:00:00','dd-MON-YYYY hh24:mi:ss'), selected_start_time=>to_date('06-SEP-2022 12:00:00','dd-MON-YYYY hh24:mi:ss'), selected_end_time=>to_date('06-SEP-2022 13:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
SQL> spool off




Generating Database Performance Hub Report :

@?/rdbms/admin/perfhubrpt.sql



Syntax

DBMS_PERF.REPORT_PERFHUB (
  is_realtime         IN NUMBER   DEFAULT NULL,
  outer_start_time    IN DATE     DEFAULT NULL,
  outer_end_time      IN DATE     DEFAULT NULL,
  selected_start_time IN DATE     DEFAULT NULL,
  selected_end_time   IN DATE     DEFAULT NULL,
  inst_id             IN NUMBER   DEFAULT NULL,
  dbid                IN NUMBER   DEFAULT NULL,
  monitor_list_detail IN NUMBER   DEFAULT NULL,
  workload_sql_detail IN NUMBER   DEFAULT NULL,
  addm_task_detail    IN NUMBER   DEFAULT NULL,
  report_reference    IN VARCHAR2 DEFAULT NULL,
  report_level        IN VARCHAR2 DEFAULT NULL,
  type                IN VARCHAR2 DEFAULT 'ACTIVE',
  base_path           IN VARCHAR2 DEFAULT NULL)
RETURN CLOB;



REPORT_PERFHUB Function Parameters:

is_realtime

If 1, then real-time. If NULL (default) or 0, then historical mode.

outer_start_time

Start time of outer period shown in the time selector. If NULL (default):

  • If is_realtime=0 (historical), then 24 hours before outer_end_time.

  • If is_realtime=1 (realtime mode), then 1 hour before outer_end_time.

outer_end_time

End time of outer period shown in the time selector. If NULL (default), then latest AWR snapshot.

  • If is_realtime=0 (historical), then the latest AWR snapshot

  • If is_realtime=1 (realtime mode), this is the current time (and any input is ignored)

selected_start_time

Start time period of selection. If NULL (default)

  • If is_realtime=0, then 1 hour before selected_end_time

  • If is_realtime=1, then 5 minutes before selected_end_time

selected_end_time

End time period of selection. If NULL (default)

  • If is_realtime=0, then latest AWR snapshot

  • If is_realtime=1, then current time

inst_id

Instance ID for which to retrieve data

  • If -1, then current instance

  • If number is specified, then for that instance

  • If NULL (default), then all instances

dbid

DBID to query.

  • If NULL, then current DBID.

  • If is_realtime=1, then DBID must be the local DBID.

monitor_list_detail

Top N in SQL monitor list for which to retrieve SQL monitor details.

  • If NULL (default), then retrieves top 10

  • If 0, then retrieves no monitor list details

workload_sql_detail

Top N in Workload Top SQL list for which to retrieve monitor details.

  • If NULL (default), then retrieves top 10

  • If 0, then retrieves no monitor list details

addm_task_detail

Maximum N latest ADDM tasks to retrieve

  • If NULL (default), retrieves available data but no more than N

  • If 0, then retrieves no ADDM task details

report_reference

Must be NULL when used from SQL*Plus.

report_level

'typical' will get all tabs in performance hub

type

Report type:

  • 'ACTIVE' (default)

  • 'xml' returns XML

base_path

URL path for HTML resources since flex HTML requires access to external files. This is only valid for type='ACTIVE' and is typically not used. Default value will retrieve the required files from OTN.

 


Reference : 

Monitoring Database Performance Using Performance Hub Report (Doc ID 2436566.1)

https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_PERF.html#GUID-85CD8AB9-B6E0-444F-91A5-762EB92A74E9 

How To Get Historical SQL Monitor Report For SQL Statements (Doc ID 2555350.1)

Saturday, July 15, 2023

Force cleanup of temporary segments through DROP_SEGMENT event in Oracle database


We recently had a scenario where temp segments were not getting released even though there were no active sessions, and "alter tablespace temp coalesce;" was not helping.

This should be done only on Oracle Support's recommendation, and this feature surprisingly exists since Oracle version 8.

The DROP_SEGMENTS event does what SMON does in the background, i.e. drops temporary segments.

Before force-dropping temp segments, try to clean up gracefully. Sometimes TEMP can be resized after a restart of the database.



 select sum(blocks*8/1024) size_mb from v$tempseg_usage where tablespace='TEMP';


Shrink TEMP Tablespace using alter tablespace command
SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 50M;


Shrink TEMPFILE using alter tablespace command
SQL> ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/app/oracle/oradata/TEST11G/temp01.dbf' KEEP 40M;


Shrink TEMP Tablespace to the smallest possible size:
SQL> ALTER TABLESPACE temp SHRINK SPACE;



DESCRIPTION : 
Finds all the temporary segments in a tablespace which are not currently locked and drops them.


PARAMETER : the LEVEL value used in the command

level - tablespace number+1. If the value is 2147483647 then
temp segments in ALL tablespaces are dropped, otherwise, only
segments in a tablespace whose number is equal to the LEVEL
specification are dropped.


When the temporary segments are released manually, the utilization rate may not appear to decrease, but the contents of the temporary tablespace are released:


alter session set events 'immediate trace name DROP_SEGMENTS level 4';

Description: the TS# of the temp tablespace is 3, so level = TS# + 1 = 4.
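The TS# for a given temporary tablespace can be confirmed before setting the level (here for a tablespace named TEMP; use the returned TS# + 1 as the level in the DROP_SEGMENTS event):

SQL> select ts#, name from v$tablespace where name = 'TEMP';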

View temp content details:

select se.inst_id, se.username, se.sid, su.extents, su.blocks*to_number(rtrim(p.value)) as space, tablespace, segtype, sql_text
from gv$sort_usage su, gv$parameter p, gv$session se, gv$sql s
where p.name = 'db_block_size'
and su.session_addr = se.saddr
and s.hash_value = su.sqlhash
and s.address = su.sqladdr
order by se.username, se.sid;


If the query finds that most of the temp usage consists of LOB temporary segments, you can set the 60025 event:

alter system set event="60025 trace name context forever" scope=spfile;

Restart the database for the event to take effect.




References : 

EVENT: DROP_SEGMENTS - Forcing cleanup of TEMPORARY segments (Doc ID 47400.1)
Overview of Temporary Segments (Doc ID 177334.1)
Removing 'Stray' TEMPORARY Segments (Doc ID 35513.1)
SMON - Temporary Segment Cleanup and Free Space Coalescing (Doc ID 61997.1)
TEMPORARY Tablespaces : Tempfiles or Datafiles ? (Doc ID 160426.1)
Temporary Segments: What Happens When a Sort Occurs (Doc ID 102339.1)
Temporary Segments Are Not Being De-Allocated After a Sort (Doc ID 1039341.6)
How To Efficiently Drop (or Truncate) A Table With Many Extents (Doc ID 68836.1)
How To Free Temporary Segment in Temporary Tablespace Dynamically (Doc ID 132913.1)
Resize Of Temporary Tablespace with KEEP keyword not working (Doc ID 1663229.1)