Friday, August 18, 2023

Oracle Database SQLTXPLAIN (SQLT) Installation and report generation

 
There are a lot of documents about SQLT online; however, I have documented the steps below for my quick reference and in case it helps someone.

SQLT needs installation, which sometimes becomes a challenge; in that case the lighter-weight SQLHC can be used instead.



Installation : 

Install SQLT
Install SQLT by executing the installation script sqlt/install/sqcreate.sql connected as SYS:


# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql



During the installation you will be asked to enter values for these parameters:

Optional Connect Identifier - In some restricted-access systems you may need to specify a connect identifier like @PROD. If a connect identifier is not needed, enter nothing and just hit the "Enter" key.

SQLTXPLAIN password - Case sensitive in most systems.

SQLTXPLAIN Default Tablespace - Select from a list of available permanent tablespaces which one should be used by SQLTXPLAIN for the SQLT repository. It must have more than 50MB of free space.

SQLTXPLAIN Temporary Tablespace - Select from a list of available temporary tablespaces which one should be used by SQLTXPLAIN for volatile operations and objects.

Optional Application User - This is the user that issued the SQL statement to be analyzed. For example, on an EBS system specify APPS, on Siebel specify SIEBEL, and on PeopleSoft specify SYSADM. You won't be asked to enter the password for this user. You can add additional SQLT users after the tool is installed, either by granting them the role SQLT_USER_ROLE or by using the provided script sqlt/install/sqguser.sql (see the sketch after this list).

Licensed Oracle Pack. (T, D or N) - You can specify T for Oracle Tuning, D for Oracle Diagnostic or N for none. If T or D is selected, SQLT may include licensed content within the diagnostics files it produces. Default is T. If N is selected, SQLT installs with limited functionality.
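
As a minimal sketch (the user name SCOTT below is only a placeholder, not part of the tool's defaults), an additional SQLT user can be enabled after installation either by a direct role grant or with the shipped script:

SQL> CONN / AS SYSDBA
SQL> GRANT SQLT_USER_ROLE TO scott;     -- scott is a placeholder application user

-- or, using the provided script
SQL> START sqlt/install/sqguser.sql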



Report Generation : 

Note: after a 19c migration the SQLTXADMIN objects become invalid and SQLT has to be reinstalled, as per Doc ID 2920902.1.


== EXTRACT SQLT reports:

Starting with 12cR1, SQLT can be executed under the SYS user; to do so, run the following before starting the extraction:

Fix for ORA-06598 (insufficient INHERIT PRIVILEGES privilege):
SQL> CONN / AS SYSDBA
SQL> GRANT INHERIT PRIVILEGES ON USER sys TO sqltxadmin;



1. To speed up SQLT extraction, connect as the SQLTXADMIN user and run:

EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');
EXEC sqltxplain.sqlt$a.set_sess_param('sql_tuning_advisor','N');
EXEC sqltxadmin.sqlt$a.set_param('ash_reports', '0');
EXEC sqltxadmin.sqlt$a.set_param('awr_reports', '0');




-- The sqltxtract.sql script gathers the details from memory or, failing that, from AWR snapshots.
cd sqlt/run
sqlplus ep/ep; -- connect as application user
@sqltxtract.sql sqlid sqltpassword
@/home/oracle/sqlt/run/sqltxtract.sql jkz7v96ym42c6 SQLTXPLAIN



 


Reference : 

How to Collect Standard Diagnostic Information Using SQLT for SQL Issues (Doc ID 1683772.1)
SQLT Diagnostic Tool (Doc ID 215187.1)

Monday, August 14, 2023

Tracing Oracle Database PLSQL using DBMS_HPROF and PLSQL Profiler

 

There may be times when a normal SQL trace of a PLSQL package is not sufficient. For example, when the SQL statements seem to be running fast, but the procedure takes a long time. It may be useful to understand which calls in the PLSQL package are taking the time. With this in mind, this document provides a step by step guide to tracing PLSQL using DBMS_HPROF.



One way to debug compilation issues is to enable PL/SQL warnings before compiling:
SQL> alter session set plsql_warnings='enable:all';
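
As a small illustrative sketch (P_DEMO is a made-up procedure name), recompile the object after enabling warnings and then list them either with SHOW ERRORS or from the dictionary:

SQL> alter session set plsql_warnings='enable:all';
SQL> alter procedure p_demo compile;          -- p_demo is a placeholder object
SQL> show errors procedure p_demo
-- warnings are also recorded in the dictionary
SQL> select line, position, text
       from user_errors
      where name = 'P_DEMO'
        and attribute = 'WARNING';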


If we are writing a procedure, it is always advisable to add the calls below in its exception handler to capture more details about an error (see the sketch after this list):

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE holds the line number where the error was raised.
DBMS_UTILITY.FORMAT_ERROR_STACK holds the last error message.
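
A minimal sketch of such an exception handler (the procedure name and the forced error are for illustration only):

CREATE OR REPLACE PROCEDURE p_demo IS
BEGIN
  -- application logic here
  RAISE NO_DATA_FOUND;   -- forced error just to show the output
EXCEPTION
  WHEN OTHERS THEN
    -- last error message
    DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);
    -- call stack including the line number where the error was raised
    DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;
END;
/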





DBMS_HPROF :

create directory PROF_DIR as '/home/oracle/pl';
grant all on directory PROF_DIR to pltest;
grant execute on dbms_hprof to pltest;




-- as user PLTEST 

execute dbms_hprof.create_tables();

With this call the tables DBMSHP_TRACE_DATA, DBMSHP_RUNS, DBMSHP_FUNCTION_INFO and DBMSHP_PARENT_CHILD_INFO are created. They store the profiling information in the database schema and can be queried afterwards.


Create the DBMS_HPROF objects if they are not present.

Log on as the PLTEST user that owns the PLSQL package being profiled:
sqlplus <user>/<password>
SQL> @$ORACLE_HOME/rdbms/admin/dbmshptab.sql




Profiling PL/SQL :

SET SERVEROUTPUT ON;
EXEC DBMS_HPROF.START_PROFILING('PROF_DIR','P1_PROFILE_DATA.trc');

/* Within the same session run the P1 procedure */
EXEC P1;

/* Stop the profiling */
EXEC DBMS_HPROF.STOP_PROFILING;




Analyzing Raw File Generated : 

This raw profiler trace file can be analyzed in two ways:

1) Using the DBMS_HPROF.ANALYZE function.
2) Using the "plshprof" command line utility. 


Eg :

plshprof -output <out_file_name> P1_PROFILE_DATA.trc
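
For option 1, a minimal sketch of loading the same trace with DBMS_HPROF.ANALYZE and then querying the repository (run as the PLTEST user; the directory and file names follow the earlier steps):

SET SERVEROUTPUT ON
DECLARE
  l_runid NUMBER;
BEGIN
  l_runid := DBMS_HPROF.ANALYZE(location => 'PROF_DIR',
                                filename => 'P1_PROFILE_DATA.trc');
  DBMS_OUTPUT.PUT_LINE('runid = ' || l_runid);
END;
/

-- Top time consumers for that run (elapsed times are in microseconds)
SELECT owner, module, function, calls, function_elapsed_time
  FROM dbmshp_function_info
 WHERE runid = &runid
 ORDER BY function_elapsed_time DESC;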





Reference : 

How to Tune PL/SQL Applications and Identify Hot Spots using DBMS_HPROF (Doc ID 763944.1)

How to resolve error - PLS-00201: identifier 'DBMS_HPROF' must be declared? (Doc ID 2867790.1)

SRDC - Providing Supporting Information for PL/SQL Runtime Issues (Doc ID 2288591.1)

https://blogs.oracle.com/coretec/post/plsql-tuning-with-plsql-hierarchical-profiler



DBMS_PROFILER

What about users who have only limited privileges in their Oracle environment and who may not have access to a directory object? The solution is the DBMS_PROFILER package, which writes its data directly into database tables. The procedure is similar to the DBMS_HPROF workflow described above.

First the corresponding repository tables PLSQL_PROFILER_RUNS, PLSQL_PROFILER_DATA and PLSQL_PROFILER_UNITS are created in the application schema. To do this, use the proftab.sql utility from the $ORACLE_HOME/rdbms/admin directory.



Configuring the profiler is fairly easy; in the schema that owns the PL/SQL objects the proftab.sql script (located at $ORACLE_HOME/rdbms/admin) 
must be run to create the tables that the profiler needs to store run data; those tables are:


PLSQL_PROFILER_UNITS
PLSQL_PROFILER_RUNS
PLSQL_PROFILER_DATA


An example of how to use the profiler once the setup has been successfully completed follows:


set linesize 1000 trimspool on
select dbms_profiler.start_profiler('DATES PACKAGE') from dual;
select dates_pkg.minutes_elapsed(sysdate-10, sysdate) from dual;
select dbms_profiler.stop_profiler from dual;
spool dates_package_profiler_run.rpt
@?/rdbms/admin/profsum
spool off

or 

execute dbms_profiler.start_profiler;
execute testprof;
execute dbms_profiler.stop_profiler;

select runid, run_date, run_comment from plsql_profiler_runs;  
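
Once a run exists, the per-line timings can be pulled straight from the repository tables; a small sketch (runid 1 is only an example value; total_time is stored in nanoseconds):

select u.unit_owner,
       u.unit_name,
       d.line#,
       d.total_occur,
       d.total_time
  from plsql_profiler_units u
  join plsql_profiler_data  d
    on d.runid = u.runid
   and d.unit_number = u.unit_number
 where u.runid = 1
 order by d.total_time desc;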





Reference : 

https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_profiler.htm#ADFNS02301

Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)

https://oracle-base.com/articles/11g/plsql-hierarchical-profiler-11gr1


Saturday, August 12, 2023

Oracle Rac Node failover Testing


As part of a new build we need to do node failover testing, so I am documenting some of the test steps known to me.

Objective : To test accidental change of the IP addresses of the cluster
Action : Modify /etc/hosts and change the VIP addresses, then try starting the cluster
Expectation : The cluster must not be able to boot up, with error messages related to the network IPs

Objective : To test the failure of the ASM instance (ASM instance crash)
Action : On node 1, kill the processes related to ASM
Expectation : The cluster must not be affected by the ASM crash on node 1

Objective : To test the failover of the network card carrying the public IP address
Action : Unplug one of the cables connecting the node to the public network
Expectation : The ping from the client should still be able to go through

Objective : To test the stability of the database by killing an Oracle server process
Action : Kill a process related to the Oracle daemons from a terminal; this should crash the database processes
Expectation : The SQL session should not be affected and should fail over to the next node without any disruption

Objective : To test the complete failover of the session in the event of a node loss (e.g. sudden shutdown)
Action : 1. Connect from the client to the database via sqlplus. 2. Power off the unix box on which the sqlplus connection is running.
Expectation : The SQL session should not be affected and should fail over to the next node without any disruption

Objective : To test the failover of a connected Oracle user's session in the event of an instance failure
Action : 1. Connect from a client to node 1 in the RAC environment using sqlplus. 2. Connect to the database on node 1 (sqlplus / as sysdba) and do a shutdown.
Expectation : The SQL connection should be able to swing over to the other instance and still continue the select query (the sqlplus connection from the client to the database remains available)

Objective : To test a crash of Grid processes
Action : Kill the LMON process on a RAC node
Expectation : The process must be restarted automatically

Objective : To test automatic relocation of the SCAN listeners
Action : Reboot node 1 using the OS reboot command
Expectation : 1. When node 1 goes down, all SCAN listeners must shift to the remaining node. 2. When node 1 comes back up, the SCAN listeners must shift back to the original node.

Objective : To test a storage crash
Action : Forcibly make the shared disks invisible to both nodes
Expectation : The cluster must go down and should not start, since RAC does not safeguard against storage failures

Objective : TAF failover test
Action : To test SQL and session failover (see the sketch after this list)
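
A quick way to verify the TAF part of these tests is to watch the failover columns of GV$SESSION from a surviving instance; a minimal sketch (the username filter is just an example):

select inst_id, sid, username, failover_type, failover_method, failed_over
  from gv$session
 where username = 'SCOTT';   -- placeholder: filter on your test user

FAILED_OVER changes to YES once the session has relocated to another instance.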

 

 



Tuesday, August 8, 2023

Oracle Database Performance Hub report using DBMS_PERF



When we have performance issues we normally have to log in to OEM to get performance details.

We can get a quick snapshot of performance using DBMS_PERF.REPORT_PERFHUB, as shown below. It is an older tool, introduced in 12c, but not frequently used.


Command : 

SET LONG 10000000
SET LONGC 10000000
SET LINESIZE 32767
SET PAGESIZE 0
SET TRIMSPOOL ON
SPOOL perfhub.html
SELECT DBMS_PERF.REPORT_PERFHUB FROM DUAL;
SPOOL OFF;





Another option used :


SET HEADING OFF LINESIZE 32767 PAGESIZE 0 TRIMSPOOL ON TRIMOUT ON LONG 9999999 VERIFY OFF LONGCHUNKSIZE 100000 FEEDBACK OFF
SET TERMOUT OFF

spool perfhub.html

SELECT DBMS_PERF.REPORT_PERFHUB (
    is_realtime         => 0           -- (  0 = dba_hist, 1 = v$ash  )
  , outer_start_time    => sysdate-1
  , outer_end_time      => sysdate
  , selected_start_time => TIMESTAMP'2023-05-15 20:15:00'
  , selected_end_time   => TIMESTAMP'2023-05-15 20:45:00'
  , type=>'ACTIVE'
)
FROM dual ;


spool off
SET TERMOUT ON HEADING ON PAGESIZE 5000 LINESIZE 999 FEEDBACK ON 



 

A historical SQL Monitor report can be extracted for a particular SQL statement using the SQL Details report.

Example: generate a SQL Details report to get the SQL monitoring information for a particular SQL statement in historical mode.

SQL> set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SQL> spool sql_details_history.html
SQL> select dbms_perf.report_sql(sql_id=>'9vkyyg1xj6fgc',is_realtime=>0,type=>'active',selected_start_time=>to_date('10-SEP-18 04:00:00','dd-MON-YY hh24:mi:ss'),selected_end_time=>to_date('10-SEP-18 05:00:00','dd-MON-YY hh24:mi:ss')) from dual;
SQL> spool off

Note: in 19c or higher it is better to add "outer_start_time" and "outer_end_time" to get the desired results:

SQL> set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
SQL> spool sql_details_history.html
SQL> select dbms_perf.report_perfhub( is_realtime=>0, outer_start_time=>to_date('06-SEP-2022 12:00:00','dd-MON-YYYY hh24:mi:ss'), outer_end_time=>to_date('06-SEP-2022 13:00:00','dd-MON-YYYY hh24:mi:ss'), selected_start_time=>to_date('06-SEP-2022 12:00:00','dd-MON-YYYY hh24:mi:ss'), selected_end_time=>to_date('06-SEP-2022 13:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
SQL> spool off




Generating Database Performance Hub Report :

@?/rdbms/admin/perfhubrpt.sql



Syntax

DBMS_PERF.REPORT_PERFHUB (
  is_realtime         IN NUMBER   DEFAULT NULL,
  outer_start_time    IN DATE     DEFAULT NULL,
  outer_end_time      IN DATE     DEFAULT NULL,
  selected_start_time IN DATE     DEFAULT NULL,
  selected_end_time   IN DATE     DEFAULT NULL,
  inst_id             IN NUMBER   DEFAULT NULL,
  dbid                IN NUMBER   DEFAULT NULL,
  monitor_list_detail IN NUMBER   DEFAULT NULL,
  workload_sql_detail IN NUMBER   DEFAULT NULL,
  addm_task_detail    IN NUMBER   DEFAULT NULL,
  report_reference    IN VARCHAR2 DEFAULT NULL,
  report_level        IN VARCHAR2 DEFAULT NULL,
  type                IN VARCHAR2 DEFAULT 'ACTIVE',
  base_path           IN VARCHAR2 DEFAULT NULL)
RETURN CLOB;



REPORT_PERFHUB Function Parameters :

is_realtime

If 1, then real-time. If NULL (default) or 0, then historical mode.

outer_start_time

Start time of outer period shown in the time selector. If NULL (default):

  • If is_realtime=0 (historical), then 24 hours before outer_end_time.

  • If is_realtime=1 (realtime mode), then 1 hour before outer_end_time.

outer_end_time

End time of outer period shown in the time selector. If NULL (default), then latest AWR snapshot.

  • If is_realtime=0 (historical), then the latest AWR snapshot

  • If is_realtime=1 (realtime mode), this is the current time (and any input is ignored)

selected_start_time

Start time period of selection. If NULL (default)

  • If is_realtime=0, then 1 hour before selected_end_time

  • If is_realtime=1, then 5 minutes before selected_end_time

selected_end_time

End time period of selection. If NULL (default)

  • If is_realtime=0, then latest AWR snapshot

  • If is_realtime=1, then current time

inst_id

Instance ID for which to retrieve data.

  • If -1, then current instance

  • If number is specified, then for that instance

  • If NULL (default), then all instances

dbid

DBID to query.

  • If NULL, then current DBID.

  • If is_realtime=1, then DBID must be the local DBID.

monitor_list_detail

Top N in SQL monitor list for which to retrieve SQL monitor details.

  • If NULL (default), then retrieves top 10

  • If 0, then retrieves no monitor list details

workload_sql_detail

Top N in Workload Top SQL list for which to retrieve monitor details.

  • If NULL (default), then retrieves top 10

  • If 0, then retrieves no monitor list details

addm_task_detail

Maximum N latest ADDM tasks to retrieve

  • If NULL (default), retrieves available data but no more than N

  • If 0, then retrieves no ADDM task details

report_reference

Must be NULL when used from SQL*Plus.

report_level

'typical' will get all tabs in performance hub

type

Report type:

  • 'ACTIVE' (default)

  • 'xml' returns XML

base_path

URL path for HTML resources since flex HTML requires access to external files. This is only valid for type='ACTIVE' and is typically not used. Default value will retrieve the required files from OTN.
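
Putting a few of these parameters together, a minimal sketch of a historical Perf Hub report scoped to one instance (the parameter values below are only illustrative):

SET LONG 10000000 LONGCHUNKSIZE 10000000 LINESIZE 32767 PAGESIZE 0 TRIMSPOOL ON
SPOOL perfhub_inst1.html
SELECT DBMS_PERF.REPORT_PERFHUB(
         is_realtime         => 0,      -- historical (AWR) mode
         inst_id             => 1,      -- instance 1 only
         monitor_list_detail => 5,      -- top 5 SQL Monitor reports
         addm_task_detail    => 0,      -- skip ADDM task details
         type                => 'ACTIVE')
  FROM dual;
SPOOL OFF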

 


Reference : 

Monitoring Database Performance Using Performance Hub Report (Doc ID 2436566.1)

https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_PERF.html#GUID-85CD8AB9-B6E0-444F-91A5-762EB92A74E9 

How To Get Historical SQL Monitor Report For SQL Statements (Doc ID 2555350.1)

Saturday, July 15, 2023

Force cleanup of temporary segments through DROP_SEGMENT event in Oracle database


We recently had a scenario where temp segments were not getting released even though there were no active sessions, and "alter tablespace temp coalesce;" was not helping.

This should be done only on Oracle Support's recommendation, and this feature surprisingly exists since Oracle version 8.

This routine does what SMON does in the background, i.e. drops temporary segments.

Before force-dropping temp segments, try to clean up gracefully. Sometimes temp can be resized after a restart of the database.



 select sum(blocks*8/1024) size_mb from v$tempseg_usage where tablespace='TEMP';


Shrink TEMP Tablespace using alter tablespace command
SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 50M;


Shrink TEMPFILE using alter tablespace command
SQL> ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/app/oracle/oradata/TEST11G/temp01.dbf' KEEP 40M;


Shrink TEMP Tablespace to the smallest possible size:
SQL> ALTER TABLESPACE temp SHRINK SPACE;



DESCRIPTION : 
Finds all the temporary segments in a tablespace which are not currently locked and drops them.


PARAMETER : Level parameter used in command 

level - tablespace number+1. If the value is 2147483647 then
temp segments in ALL tablespaces are dropped, otherwise, only
segments in a tablespace whose number is equal to the LEVEL
specification are dropped.


When the temporary segments are released manually this way, the utilization figure may not decrease immediately, but the contents of the temporary tablespace will be released.


alter session set events 'immediate trace name DROP_SEGMENTS level 4';

Description: in this example the TS# of the temp tablespace is 3, so level = TS# + 1 = 4.
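
The TS# itself can be looked up first; a small sketch (assuming the temporary tablespace is named TEMP):

-- use TS# + 1 as the level for the DROP_SEGMENTS event
select ts#, name from v$tablespace where name = 'TEMP';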

View temp content details:

select se.inst_id, se.username, se.sid, su.extents,
       su.blocks * to_number(rtrim(p.value)) as space,
       su.tablespace, su.segtype, s.sql_text
  from gv$sort_usage su, gv$parameter p, gv$session se, gv$sql s
 where p.name = 'db_block_size'
   and su.session_addr = se.saddr
   and s.hash_value = su.sqlhash
   and s.address = su.sqladdr
 order by se.username, se.sid;


If the query shows that most of the temporary tablespace is occupied by LOB temporary segments, you can set the 60025 event:

alter system set event="60025 trace name context forever" scope=spfile;

Restart the database for this to take effect.




References : 

EVENT: DROP_SEGMENTS - Forcing cleanup of TEMPORARY segments (Doc ID 47400.1)

Overview of Temporary Segments (Doc ID 177334.1)
Removing 'Stray' TEMPORARY Segments (Doc ID 35513.1)
SMON - Temporary Segment Cleanup and Free Space Coalescing (Doc ID 61997.1)
TEMPORARY Tablespaces: Tempfiles or Datafiles? (Doc ID 160426.1)
Temporary Segments: What Happens When a Sort Occurs (Doc ID 102339.1)
Temporary Segments Are Not Being De-Allocated After a Sort (Doc ID 1039341.6)
How To Efficiently Drop (or Truncate) A Table With Many Extents (Doc ID 68836.1)
How To Free Temporary Segment in Temporary Tablespace Dynamically (Doc ID 132913.1)
Resize Of Temporary Tablespace with KEEP keyword not working (Doc ID 1663229.1)

Saturday, June 17, 2023

Debugging any Oracle errors and events using ERRORSTACK

 
Every DBA has seen the trace file that is generated each time an ORA-00600 or ORA-07445 occurs, containing every detail about the error.
Generating trace files for ORA-00600 or ORA-07445 is the default behaviour of Oracle, but the same doesn't happen for other errors.

We can use ERRORSTACK to capture the same level of detail for any other Oracle error or event. By enabling errorstack we force Oracle to record the error details in the alert log together with a detailed trace file.




ERRORSTACK Parameters:

0 – Error stack only
1 – Error stack and function call stack (if implemented)
2 – As 1 plus the ProcessState
3 – As 2 plus the context area (all cursors and current cursor highlighted)


Enable

ALTER SYSTEM SET EVENT ='<ORAError> trace name errorstack level <LevelNumber>' SCOPE=spfile;

e.g. to set errorstack for ORA-1652, the syntax will be:

alter system set events '1652 trace name errorstack level 3';  -- system level
alter session set events '1652 trace name errorstack level 3'; -- session level


Disable

ALTER SYSTEM SET EVENT='<ORAError> trace name errorstack off';
ALTER SYSTEM SET EVENT='1652 trace name errorstack off';





Display Events

-- SQLPlus> show parameter event
-- oradebug eventdump session;   OR   oradebug eventdump system  
-- DBA_ENABLED_TRACES or WRI$_TRACING_ENABLED can be used to see status also:
SELECT TRACE_TYPE, PRIMARY_ID, QUALIFIER_ID1, WAITS, BINDS FROM DBA_ENABLED_TRACES;




You can also use oradebug to take an errorstack for a particular process (say ospid 1234) as below:

sqlplus "/ as sysdba"

oradebug setospid 1234
oradebug unlimit
oradebug dump errorstack 3





References :

How to Collect Errorstacks for use in Diagnosing Performance Issues. (Doc ID 1364257.1)

Friday, June 9, 2023

Oracle picks more parallel slaves then what is defined in PARALLEL HINT

  

We had a strange scenario where the application team defined parallel 10 in a parallel hint, e.g. "SELECT /*+ parallel(10) */", however we were seeing a huge number of parallel slave sessions, around 100+.


This kind of behavior is normal when parallel_degree_policy is set to AUTO, but we had parallel_degree_policy=MANUAL and we were still seeing far more parallel slaves than defined in the hint.

parallel_degree_policy = manual
parallel_degree_limit  = CPU


On checking, it was found that when we don't add the table name in the parallel hint, it enables automatic DOP (auto DOP). We can see this in the SQL plan note:

  Note
-----
- cpu costing is off (consider enabling it)
- automatic DOP: Computed Degree of Parallelism is 124 because of degree limit


If we don't want to enable auto DOP when parallel_degree_policy is set to manual, we need to add the parallel hint on specific tables, e.g. /*+ PARALLEL(employee_table, 35) */.





Parallel Statement Queuing :

Parallel Statement Queuing was introduced in 11g Release 2 and is a key feature. It is enabled by default when we enable auto DOP by setting parallel_degree_policy to AUTO.

When parallel_degree_policy is set to MANUAL, we can still enable Parallel Statement Queuing in two ways, i.e.:

1) using parameter  "_parallel_statement_queuing"=true  
     
select a.ksppinm "Parameter",b.ksppstvl "Session Value",c.ksppstvl "Instance Value" from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c where a.indx = b.indx and a.indx = c.indx and a.ksppinm in ('_fix_control','_parallel_statement_queuing') order by 1;


2) using the hint - select /*+ PARALLEL(4) STATEMENT_QUEUING */ ...

The STATEMENT_QUEUING hint can also be deployed through a SQL patch:

begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_id    => '<SQL_ID>',   -- replace <SQL_ID> with your sql_id
    hint_text => 'OPT_PARAM(''_parallel_statement_queuing'' ''true'')',
    name      => 'SQL_Patch_parallel_statement_queuing_sql_id');
end;
/
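
If the patch needs to be backed out later, it can be dropped by name; a minimal sketch (using the same patch name as above):

begin
  dbms_sqldiag.drop_sql_patch(name => 'SQL_Patch_parallel_statement_queuing_sql_id');
end;
/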




References :

Automatic Degree of Parallelism (AutoDOP) is enabled when "PARALLEL_DEGREE_POLICY=MANUAL and PARALLEL HINT" is Used at SQL Statement (Doc ID 1542556.1)

How to Achieve Parallel Statement Queuing for an SQL When PARALLEL_DEGREE_POLICY=MANUAL (Doc ID 1902069.1)

How To Enable Parallel Query For A Function? ( Doc ID 1093773.1 )

https://blogs.oracle.com/datawarehousing/post/px-server-sets-parallelizers-dfo-trees-parallel-groups-lets-talk-about-terminology

SRDC - Data Collection for Query executing in Parallel returning error (Doc ID 2076943.1)





Other Known Issues related to parallel hint in 19c  :

19c: Duplicate PQ workers created at instance level (Doc ID 2719466.1)

BUG:32342582 - WRONG RESULTS WITH PARALLEL AFTER UPGRADE TO 19.9

Bug 19849585 : PARALLEL HINT IGNORED WHEN USING GROUP BY

Issue with parallel hint on a 19c CDB database where the hint is not honored and execution goes for serial scans; this apparently seems to be Oracle Bug 30618478.

Number of Preallocated Parallel Workers Are More Than Parallel_min_servers (Doc ID 2654231.1)

 Bug 31195441 : 18C: LONG PARSING TIME WHEN QUERY EXECUTING IN PARALLEL



Views :


col username for a12
col "QC SID" for A6
col "SID" for A6
col "QC/Slave" for A8
col "Req. DOP" for 9999
col "Actual DOP" for 9999
col "Slaveset" for A8
col "Slave INST" for A9
col "QC INST" for A6
set pages 300 lines 300
col wait_event format a30

select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(pp.SERVER_NAME,
length(pp.SERVER_NAME)-4,4) ) )"Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "SlaveSet",
to_char(s.sid) "SID",
to_char(px.inst_id) "Slave INST",
decode(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) as STATE,
case sw.state WHEN 'WAITING' THEN substr(sw.event,1,30) ELSE NULL end as wait_event ,
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
to_char(px.qcinst_id) "QC INST",
px.req_degree "Req. DOP",
px.degree "Actual DOP"
from gv$px_session px,
gv$session s ,
gv$px_process pp,
gv$session_wait sw
where px.sid=s.sid (+)
and px.serial#=s.serial#(+)
and px.inst_id = s.inst_id(+)
and px.sid = pp.sid (+)
and px.serial#=pp.serial#(+)
and sw.sid = s.sid
and sw.inst_id = s.inst_id
order by
decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID ;

set pages 300 lines 300
col wait_event format a30
select
sw.SID as RCVSID,
decode(pp.server_name,
NULL, 'A QC',
pp.server_name) as RCVR,
sw.inst_id as RCVRINST,
case sw.state WHEN 'WAITING' THEN substr(sw.event,1,30) ELSE NULL end as wait_event ,
decode(bitand(p1, 65535),
65535, 'QC',
'P'||to_char(bitand(p1, 65535),'fm000')) as SNDR,
bitand(p1, 16711680) - 65535 as SNDRINST,
decode(bitand(p1, 65535),
65535, ps.qcsid,
(select
sid
from
gv$px_process
where
server_name = 'P'||to_char(bitand(sw.p1, 65535),'fm000') and
inst_id = bitand(sw.p1, 16711680) - 65535)
) as SNDRSID,
decode(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) as STATE
from
gv$session_wait sw,
gv$px_process pp,
gv$px_session ps
where
sw.sid = pp.sid (+) and
sw.inst_id = pp.inst_id (+) and
sw.sid = ps.sid (+) and
sw.inst_id = ps.inst_id (+) and
p1text = 'sleeptime/senderid' and
bitand(p1, 268435456) = 268435456
order by
decode(ps.QCINST_ID, NULL, ps.INST_ID, ps.QCINST_ID),
ps.QCSID,
decode(ps.SERVER_GROUP, NULL, 0, ps.SERVER_GROUP),
ps.SERVER_SET,
ps.INST_ID ;



set pages 300 lines 300
col "Username" for a12
col "QC/Slave" for A8
col "Slaveset" for A8
col "Slave INST" for A9
col "QC SID" for A6
col "QC INST" for A6
col "operation_name" for A30
col "target" for A30

select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(pp.SERVER_NAME,
length(pp.SERVER_NAME)-4,4) ) )"Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "SlaveSet",
to_char(px.inst_id) "Slave INST",
substr(opname,1,30) operation_name,
substr(target,1,30) target,
sofar,
totalwork,
units,
start_time,
timestamp,
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
to_char(px.qcinst_id) "QC INST"
from gv$px_session px,
gv$px_process pp,
gv$session_longops s
where px.sid=s.sid
and px.serial#=s.serial#
and px.inst_id = s.inst_id
and px.sid = pp.sid (+)
and px.serial#=pp.serial#(+)
order by
decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
px.SERVER_SET,
px.INST_ID ;

 




select inst_id
, status
, count(1) px_servers#
from gv$px_process
group by inst_id, status
order by inst_id, status ;

SELECT DECODE(px.qcinst_id, NULL, username, ' - ' || lower(SUBSTR(pp.SERVER_NAME, LENGTH(pp.SERVER_NAME)-4,4) ) ) "Username",
 DECODE(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave",
 TO_CHAR( px.server_set) "SlaveSet",
 TO_CHAR(s.sid) "SID",
 TO_CHAR(px.inst_id) "Slave INST",
 DECODE(sw.state,'WAITING', 'WAIT', 'NOT WAIT' ) AS STATE,
  CASE sw.state
    WHEN 'WAITING'
    THEN SUBSTR(sw.event,1,30)
    ELSE NULL
  END AS wait_event,
  DECODE(px.qcinst_id, NULL ,TO_CHAR(s.sid) ,px.qcsid) "QC SID",
  TO_CHAR(px.qcinst_id) "QC INST",
  px.req_degree "Req. DOP",
  px.degree "Actual DOP"
FROM gv$px_session px,
  gv$session s ,
  gv$px_process pp,
  gv$session_wait sw
WHERE px.sid   =s.sid (+)
AND px.serial# =s.serial#(+)
AND px.inst_id = s.inst_id(+)
AND px.sid     = pp.sid (+)
AND px.serial# =pp.serial#(+)
AND sw.sid     = s.sid
AND sw.inst_id = s.inst_id
ORDER BY DECODE(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),
  px.QCSID,
  DECODE(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
  px.SERVER_SET,
  px.INST_ID,
  pp.SERVER_NAME  ;


column name format a50
column value format 999,999
SELECT NAME, VALUE
FROM GV$SYSSTAT
WHERE UPPER (NAME) LIKE '%PARALLEL OPERATIONS%'
  OR UPPER (NAME) LIKE '%PARALLELIZED%'
  OR UPPER (NAME) LIKE '%PX%'
;



Select
decode(px.qcinst_id,NULL,username,
' - '||lower(substr(pp.SERVER_NAME,
length(pp.SERVER_NAME)-4,4) ) )"Username",
decode(px.qcinst_id,NULL, 'QC', '(Slave)') "QC/Slave" ,
to_char( px.server_set) "SlaveSet",
to_char(px.inst_id) "Slave INST",
substr(opname,1,30) operation_name,
substr(target,1,30) target,
sofar,
totalwork,
units,
start_time,
timestamp,
decode(px.qcinst_id, NULL ,to_char(s.sid) ,px.qcsid) "QC SID",
to_char(px.qcinst_id) "QC INST"
from gv$px_session px,
gv$px_process pp,
gv$session_longops s
where px.sid=s.sid
and px.serial#=s.serial#
and px.inst_id = s.inst_id
and px.sid = pp.sid (+)
and px.serial#=pp.serial#(+)
order by decode(px.QCINST_ID, NULL, px.INST_ID, px.QCINST_ID),px.QCSID,
decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP), px.SERVER_SET,px.INST_ID;


select
   ps.qcsid,
   ps.sid,
   p.spid,
   ps.inst_id,
   ps.degree,
   ps.req_degree
from
   gv$px_session ps
   join
   gv$session s
      on ps.sid=s.sid
         and
         ps.inst_id=s.inst_id
   join
   gv$process p
      on p.addr=s.paddr
         and
         p.inst_id=s.inst_id 
order by
   qcsid,
   server_group desc,
   inst_id,
   sid;