Monday, May 15, 2023

Oracle High Undo UNEXPIRED utilization due to Autotune retention causing ORA-01555



This is a very old issue where undo extents pick up a high retention (more than the defined undo_retention) and undo blocks remain in the UNEXPIRED state for a long time. This happens because of _undo_autotune, which is set to TRUE by default.

_undo_autotune will try to override the undo_retention parameter. When _undo_autotune is set to TRUE (the default), Oracle tunes retention based on the size of the undo tablespace and will try to keep undo segments for a longer time than defined by the undo_retention parameter.
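To see the retention autotune is currently using, the tuned value can be queried directly (V$UNDOSTAT with its TUNED_UNDORETENTION and MAXQUERYLEN columns is a standard view):

SQL> select max(maxquerylen), max(tuned_undoretention) from v$undostat;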


SQL> select TABLESPACE_NAME,STATUS, SUM(BYTES), COUNT(*) FROM DBA_UNDO_EXTENTS GROUP BY tablespace_name,STATUS order by tablespace_name;

TABLESPACE_NAME                STATUS     SUM(BYTES)   COUNT(*)
------------------------------ --------- ------------ ----------
UNDOTBS1                       EXPIRED          65536          1
UNDOTBS1                       UNEXPIRED  10285809664       3271
UNDOTBS2                       EXPIRED      142802944          6
UNDOTBS2                       UNEXPIRED   4242735104        642




Suggested Solutions :


One of the below solutions can be applied.

1) Setting "_undo_autotune" = false
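A minimal sketch of applying this (hidden parameters should only be changed after confirming with Oracle Support):

alter system set "_undo_autotune"=false scope=both;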


2) Setting   _smu_debug_mode=33554432

With this setting, V$UNDOSTAT.TUNED_UNDORETENTION is not calculated based on a percentage of the fixed size undo tablespace. Instead it is set to the maximum of (MAXQUERYLEN secs + 300) and UNDO_RETENTION.

Set it at the CDB level and monitor the database.
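A sketch of the corresponding command (the same caveat about hidden parameters applies):

alter system set "_smu_debug_mode"=33554432 scope=both;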



3) Setting "_HIGHTHRESHOLD_UNDORETENTION"=900

Setting this parameter caps the tuned undo retention at the specified value (900 seconds here).

alter system set "_HIGHTHRESHOLD_UNDORETENTION"=900 SCOPE=spfile;




Views : 

SELECT property_name, property_value FROM database_properties WHERE property_name = 'LOCAL_UNDO_ENABLED';

select TABLESPACE_NAME,retention from dba_tablespaces where TABLESPACE_NAME like '%UNDO%';

select TABLESPACE_NAME,STATUS, SUM(BYTES), COUNT(*) FROM DBA_UNDO_EXTENTS GROUP BY tablespace_name,STATUS order by tablespace_name;

select max(maxquerylen),max(tuned_undoretention) from DBA_HIST_UNDOSTAT;


SELECT
a.ksppinm "Parameter",
decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
c.ksppstvl "Instance",
decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
decode(p.isdefault,'FALSE','F','TRUE','T') "D",
a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
AND p.name(+) = a.ksppinm
AND lower(a.ksppinm) in ('_smu_debug_mode')
ORDER BY a.ksppinm;



  select segment_name, nvl(sum(act),0) "ACT BYTES",
  nvl(sum(unexp),0) "UNEXP BYTES",
  nvl(sum(exp),0) "EXP BYTES"
  from (select segment_name, nvl(sum(bytes),0) act,00 unexp, 00 exp
  from dba_undo_extents where status='ACTIVE' group by segment_name
  union
  select segment_name, 00 act, nvl(sum(bytes),0) unexp, 00 exp
  from dba_undo_extents where status='UNEXPIRED' group by segment_name
  union
  select segment_name, 00 act, 00 unexp, nvl(sum(bytes),0) exp
  from dba_undo_extents where status='EXPIRED' group by segment_name)
  group by segment_name
  order by 1
 /





select tablespace_name,round(sum(case when status = 'UNEXPIRED' then bytes else 0 end) / 1048576,2) unexp_MB ,
round(sum(case when status = 'EXPIRED' then bytes else 0 end) / 1048576,2) exp_MB ,
round(sum(case when status = 'ACTIVE' then bytes else 0 end) / 1048576,2) act_MB
from dba_undo_extents group by tablespace_name
/

SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec FROM v$session s, v$transaction t WHERE s.taddr = t.addr ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;




References :

ORA-01555 for long running queries if "_undo_autotune"=true (Doc ID 2784427.1)

High Tuned_undoretention though Undo Datafiles are Autoextend on (Doc ID 2582183.1)

Customer Recommended High Undo Utilization On 19c PDB Database due to Automatic TUNED_UNDORETENTION (Doc ID 2710337.1)




Related Issues :

Bug 31113682 - ORA-1555 FOR LONG RUNNING QUERIES IF "_UNDO_AUTOTUNE"=TRUE

Active Undo Segment Keep Growing And Not Releasing/Reusing Space (Doc ID 2343397.1)



Dump Undo Block :

For our analysis we can try to dump one of the undo blocks to get more information.

select SEGMENT_NAME, sum(bytes)/1024/1024/1024 from DBA_UNDO_EXTENTS where status='UNEXPIRED' and segment_name in (
SELECT a.name from
v$rollname a,
v$rollstat b,
dba_rollback_segs c,
v$transaction d,
v$session e
WHERE
a.usn=b.usn AND
a.name=c.segment_name AND
a.usn=d.xidusn AND
d.addr=e.taddr ) group by SEGMENT_NAME order by 2;


alter session set tracefile_identifier='XID';
alter session set max_dump_file_size = unlimited;
alter system dump undo header '_SYSSMU31_4199861047$';

 





Tuesday, May 9, 2023

Analyzing ORA-4031 In Oracle Database 19c

 

Possible Reasons :
1) Too many child cursors 
2) In-Memory enabled 
3) Undersized SGA 
4) Pinned objects in shared pool 
5) Memory fragmentation 
6) Too much hard parsing 

 

Possible Solutions : 
1) Use bind variables 
2) Size the SGA adequately (see the sketch below) 
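When an undersized SGA or shared pool is suspected, the advice views can help pick a target size; a quick check using the standard advice view:

select shared_pool_size_for_estimate, shared_pool_size_factor, estd_lc_time_saved
from v$shared_pool_advice;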




References : 
1) https://docs.oracle.com/en/database/oracle/oracle-database/19/tgsql/improving-rwp-cursor-sharing.html#GUID-971F4652-3950-4662-82DE-713DDEED317C 

2) Troubleshooting: High Version Count Issues (Doc ID 296377.1)




#####################################
Logs to Collect 
#####################################

1) TFA 
2) Customized SQL 
3) srdc_db_ora4031.sql retrieved from Doc ID 2232371.1



Using TFA 

$TFA_HOME/bin/tfactl diagcollect -srdc ora4031




Using your customized SQL 

spool /tmp/memory_status.txt   -- name the spool file after the database when collecting from several databases

set linesize 200
col VALUE for a50
set pagesize 100
col Parameter for a50
col "Session Value" for a50
col "Instance Value" for a50
column component format a25
column Initial format 99,999,999,999
column Final format 99,999,999,999
column Started format A25

select name,value from v$system_parameter where name in ( 'memory_max_target', 'memory_target', 'sga_max_size', 'sga_target', 'shared_pool_size', 'db_cache_size', 'large_pool_size', 'java_pool_size', 'pga_aggregate_target', 'workarea_size_policy', 'streams_pool_size' ) ;

select a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value"
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx and a.ksppinm in
('__shared_pool_size','__db_cache_size','__large_pool_size','__java_pool_size','__streams_pool_size','__pga_aggregate_target','__sga_target','memory_target');


select * from v$sgastat where pool like '%shared%' order by bytes;

select NAMESPACE,GETHITRATIO,PINHITRATIO,RELOADS,INVALIDATIONS from v$librarycache;

select HS.BEGIN_INTERVAL_TIME,ss.*
from DBA_HIST_SGASTAT ss ,dba_hist_snapshot hs
where pool='shared pool' and name='free memory'
and SS.SNAP_ID=HS.SNAP_ID
and SS.INSTANCE_NUMBER=HS.INSTANCE_NUMBER
and ss.instance_number=1
--and HS.BEGIN_INTERVAL_TIME between to_date('17-09-2019 13:00:00','dd-mm-yyyy hh24:mi:ss') and to_date('17-09-2019 15:30:00','dd-mm-yyyy hh24:mi:ss')
order by ss.snap_id desc;


SELECT COMPONENT ,OPER_TYPE,INITIAL_SIZE "Initial",FINAL_SIZE "Final",to_char(start_time,'dd-mon hh24:mi:ss') Started FROM V$SGA_RESIZE_OPS;

SELECT COMPONENT ,OPER_TYPE,INITIAL_SIZE "Initial",FINAL_SIZE "Final",to_char(start_time,'dd-mon hh24:mi:ss') Started FROM V$MEMORY_RESIZE_OPS;

select * from v$pga_target_advice;

spool off;

   




Saturday, April 29, 2023

Handling Oracle Database Result Cache Corruption


The Result Cache can be used to cache query and function results in memory. The cached information is stored in a dedicated area inside the shared pool,
where it can be shared by other PL/SQL programs performing similar calculations. 

 The result cache takes its memory from the shared pool. Therefore, if you expect to increase the maximum size of the result cache, take this into consideration when sizing the shared pool.

-> If you are managing the size of the shared pool using the SGA_TARGET initialization parameter, Oracle Database allocates 0.50% of the value of the SGA_TARGET parameter to the result cache.

-> If you are managing the size of the shared pool using the SHARED_POOL_SIZE initialization parameter, then Oracle Database allocates 1% of the shared pool size to the result cache.
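As a worked example (illustrative numbers): with SGA_TARGET=20G, the default result cache allocation is 0.50% of 20 GB, i.e. roughly 100 MB; with SHARED_POOL_SIZE=4G, it is 1% of 4 GB, i.e. roughly 40 MB. The actual value can be confirmed with:

SQL> show parameter result_cache_max_size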




##############  Result Cache Parameters ##############


1) RESULT_CACHE_MODE 

The result cache is set up using the result_cache_mode initialization parameter with one of these values:

-> MANUAL : the default and recommended value. Query results can only be stored in the result cache by using a query hint or table annotation. 
-> FORCE : all results are stored in the result cache.



2)  RESULT_CACHE_MAX_SIZE 

-> Specifies the memory allocated to the server result cache. To disable the server result cache, set this parameter to 0.
-> The result cache is specific to each database instance and can be sized differently on each instance.



3)  RESULT_CACHE_MAX_RESULT 

-> Specifies the maximum amount of server result cache memory  (in percent) that can be used for a single result. 
Valid values are between 1 and 100. The default value is 5%.
You can set this parameter at the system or session level.


4) RESULT_CACHE_REMOTE_EXPIRATION 

-> Specifies the expiration time (in minutes) for a result in the server result cache that depends on remote database objects.

-> The default value is 0, which specifies that results using remote objects will not be cached.

-> If a non-zero value is set for this parameter, be aware that DML on the remote database does not invalidate the server result cache. 






##############  how to check result cache corruption ##############  

At times we come across result cache contention or corruption issues, which can easily be fixed by disabling and re-enabling the result cache. In case of corruption, "ORA-600 [qesrcro_dol2ro] / result cache corruption" can be seen in the alert log.


SQL> SELECT dbms_result_cache.status() FROM dual; 

 DBMS_RESULT_CACHE.STATUS()
 --------------------------------------------------------------------------------
 CORRUPT



--> Generate Result Cache Report 
SQL> SET SERVEROUTPUT ON 
SQL> EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT 


--> Flush the server result cache:
EXEC DBMS_RESULT_CACHE.FLUSH;



##############   Flushing Result Cache  ##############  

--Flush retaining statistics (default for both are FALSE)
begin
dbms_result_cache.flush (
   retainmem => FALSE,
   retainsta => TRUE);
end;
/

--Flush Retaining memory (default for both are FALSE)
begin
dbms_result_cache.flush (
   retainmem => TRUE,
   retainsta => FALSE);
end;
/

--Flush memory and statistics globally
begin
dbms_result_cache.flush(
   retainmem => FALSE,
   retainsta => FALSE,
   global => TRUE);
end;
/




##############   Manually Use Result Cache Using Hint   ##############  


--Use the result_cache hint because manual mode is being used for this instance
--(illustrative query against the sales table used later in this post)

select /*+ result_cache */ count(*) from sales;



------------------------------------------
| Id  | Operation                        |
------------------------------------------
|  11 |  COUNT STOPKEY                   |
|  12 |   VIEW                           |
|  13 |    RESULT CACHE                  |
------------------------------------------




The /*+ NO_RESULT_CACHE */ hint instructs the database not to cache the results in either the server or client result caches.




Enable the result cache for a table
ALTER TABLE sales RESULT_CACHE (MODE FORCE);



##############   Disable and re-enable  the result cache  ##############  

-- Disable the result cache
alter system set RESULT_CACHE_MAX_SIZE=0;

-- Re-enable it by setting the size back to its previous value (125856K in this case)
alter system set RESULT_CACHE_MAX_SIZE=125856K;




##############   Temporarily bypass the result cache  ##############  

If we don't want to disable the result cache permanently, we can temporarily bypass it as below.


--> To Bypass

begin
dbms_result_cache.bypass(true) ; 
dbms_result_cache.flush ; 
end ; 
/

select dbms_result_cache.status() from dual ;



--> To Normalize 

begin
dbms_result_cache.bypass(false) ; 
dbms_result_cache.flush ; 
end ; 
/


##############  other views ##############  


# to check which statements have results in the result cache 
select cache_id, name from v$result_cache_objects where type = 'Result' ; 

select * from GV$RESULT_CACHE_DEPENDENCY;
select * from GV$RESULT_CACHE_MEMORY;
select * from GV$RESULT_CACHE_OBJECTS;
select * from GV$RESULT_CACHE_STATISTICS;

show parameters result





set linesize 400 

--Check result cache parameters

col name for a30
col value for a30
select
   name,
   value
from
   v$parameter
where
   name like '%result%';


--Query the v$result_cache_objects to check if there is any cached object

select
   count(*)
from
   v$result_cache_objects;



col "Space Over" for a30
col "Space Unused" for a30
col "Obj_Name_Dep" for a30
select
   type "Type",
   name "Name",
   namespace "SQL|PL/SQL",
   creation_timestamp "Creation",
   space_overhead "Space Over",
   space_unused "Space Unused",
   cache_id "Obj_Name_Dep",
   invalidations "Invds"
from
   gv$result_cache_objects;  




--Check the Result Cache Setting and Statistics

select
   name "Name",
   value "Value"
from
v$result_cache_statistics;


 

--Check objects cached

select
   o.owner "Owner",
   o.object_id "ID",
   o.object_name "Name",
   r.object_no "Obj Number"
from
   dba_objects o,
   gv$result_cache_dependency r
where
   o.object_id = r.object_no;



--Checking memory blocks and their status in Result Cache Memory

select   *  from
   gv$result_cache_memory;



SQL> set linesize 150

SQL> select id,name,row_count,type,invalidations from v$result_cache_objects;



##############  Requirements for the Result Cache ##############  

 

Enabling the result cache does not guarantee that a specific result set will be included in the server or client result cache. 


In order for results to be cached, the following requirements must be met:


1. Read Consistency Requirements

-> If the current session has an active transaction referencing objects in a query, then the results from this query are not eligible for caching.


2. Query Parameter Requirements

Cached results can be reused when queries are equivalent and the parameter values are the same. Different values or different bind variable names may cause cache misses. Results are parameterized if any of the following constructs are used in the query:

-> Bind variables

-> The SQL functions DBTIMEZONE, SESSIONTIMEZONE, USERENV/SYS_CONTEXT (with constant variables), UID, and USER

-> NLS parameters



Sunday, April 2, 2023

Tracing Oracle Datapump expdp/impdp


Expdp and impdp have a built-in mechanism to trace executions. We can add the (hidden) expdp parameter "trace" to make the command write a useful trace file. Unfortunately, this parameter is hidden, meaning it does not appear in the expdp help=yes output, and we still need to figure out how to use it.
The trace parameter is set using a hexadecimal number with up to 7 digits. There is no need to add 0x at the beginning of the string; decimal numbers and binary inputs are not accepted. 
The number must be written in lowercase. We also need the privileges to run trace on the session and, obviously, the permissions to export or import.


-- Examples of component trace levels (the last 4 digits are usually 0300): 
40300 to trace Process services
80300 to trace Master Control Process (MCP)
400300 to trace Worker process(es)


In order to trace all Data Pump components, level 1FF0300 can be specified. However, for most cases full-level tracing is not required: trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP).
Combining them gives trace 480300; using trace 480300 you will be able to trace both the Master Control Process (MCP) and the Worker process(es). This usually serves the purpose.
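The trace value is a bit mask, so component levels are combined with a bitwise OR; because the component bits do not overlap and the 0300 suffix is common, this looks like simply adding the leading digits:

0x400300 (Worker) OR 0x080300 (MCP) = 0x480300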



Run a Data Pump job with full tracing:

This results in two trace files in BACKGROUND_DUMP_DEST:   
--    Master Process trace file: <SID>_dm<number>_<process_id>.trc    
--    Worker Process trace file: <SID>_dw<number>_<process_id>.trc    

And one trace file in USER_DUMP_DEST:   
--    Shadow Process trace file: <SID>_ora_<process_id>.trc    



To use full-level tracing, issue the Data Pump export as:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300
To use full-level tracing for a Data Pump import operation, issue the import as:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300



So, to troubleshoot a Data Pump export problem, issue:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300 metrics=yes
To troubleshoot a Data Pump import problem, issue:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300 metrics=yes


Data Pump tracing can also be started with a line with EVENT 39089 in the initialization parameter file. 
This method should only be used to trace the Data Pump calls in an early state, e.g. if details are needed about the DBMS_DATAPUMP.OPEN API call. 
Trace level 0x300 will trace all Data Pump client processes. Note that when the event is set in the initialization parameter file, a restart of the database is needed to enable it; alternatively, it can be set dynamically:

-- Enable event
ALTER SYSTEM SET EVENTS = '39089 trace name context forever, level 0x300' ;
-- Disable event
ALTER SYSTEM SET EVENTS = '39089 trace name context off' ;




We can also use  tfactl  to collect diagnostic information
tfactl diagcollect -srdc  dbexpdp
tfactl diagcollect -srdc  dbimpdp




References:

Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump (Doc ID 286496.1)


Saturday, March 11, 2023

Troubleshooting advanced queue issues in Oracle 19c database

 
Seeing issues related to queuing, this article documents some handy steps for troubleshooting queuing issues.



AQ Components 

The four main components of AQ are: 

1. Message - A message consists of message content, or payload, which can be specified using typed or raw data and message attributes or control information.

 2. Message Queue – Messages are stored in queues and these queues act as “postal boxes” where different applications can look for “mail” in the form of messages. Thus, when one application wants to contact certain applications for certain tasks, it can leave messages in these queues, and the receiving applications will be able to find these messages for processing. 

3. Message Interfaces – AQ supports enqueue, dequeue, and propagation operations that integrate seamlessly with existing applications by supporting popular standards. AQ messages can be created, queried, propagated and consumed using popular application programming interfaces (API) such as PL/SQL, C/C++, Java, Visual Basic (through Oracle Objects for OLE), Python, Node.js, and ODP.NET. AQ provides support for the Java Message Service 1.1 (JMS) API that allows Java applications to utilize the message queuing functionality. 

4. Message Handling – AQ supports rule-based routing of messages according to data in the message payload or attributes. Additionally, message transformations can be applied to messages to re-format data before the messages are automatically delivered to target applications or subscribers. Oracle Database 19c can also exchange AQ messages with IBM MQ and TIBCO/Rendezvous through the Oracle Messaging Gateway.



Terms To Understand :

1) Queue 
2) Queue  tables 
3)  Propagations 
4) Subscribers 





Parameter Influencing Advanced queuing :

The main database parameter is aq_tm_processes. However, the below parameters also influence Advanced Queuing:


aq_tm_processes
processes
job_queue_processes
timed_statistics
SGA_TARGET
streams_pool_size
undo_retention
_client_enable_auto_unregister
shared_pool_size 



Troubleshooting AQ : 

1) Restart the queue processes and queue monitoring (as sketched below)

- Set aq_tm_processes to 0
- Kill the existing q0 background process
- Set aq_tm_processes back to 1


ps -ef | grep -i e00  | grep -i  dbname  -- find the process and kill it
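A minimal sketch of the restart (standard parameter syntax; the background process name, e.g. ora_q00*_<SID>, varies by version):

SQL> alter system set aq_tm_processes=0;
-- kill the q0 background process at the OS level, then:
SQL> alter system set aq_tm_processes=1;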


-- Perform the below to reset the qmon process:
alter system set "_aq_stop_backgrounds"=TRUE;
alter system set "_aq_stop_backgrounds"=FALSE;




2)  Restart job  queue processes 

3) Check which jobs are being run by querying dba_jobs_running. It is possible that other jobs are starving the propagation jobs.

4) Check that the queue table sys.aq$_prop_table_instno exists in DBA_QUEUE_TABLES. The queue sys.aq$_prop_notify_queue_instno must also exist in DBA_QUEUES and must be enabled for enqueue and dequeue.

5) Turn on propagation tracing at the highest level using event 24040, level 10, as sketched below
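A sketch of enabling that event (standard event syntax; set it system-wide if propagation runs in background jobs):

alter system set events '24040 trace name context forever, level 10';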


6) Debugging information is logged to job queue trace files as propagation takes place. You can check the trace file for errors and for statements indicating that messages have been sent.


7) Generate TFA as per 1905796.1 
$TFA_HOME/bin/tfactl diagcollect -srdc dbaqmon


8)  Generate   AQ & MGW Health Check   using Configuration Script ( Doc ID 1193854.1 )

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/03/aq-mgw-health-check-and-configuration.html


9) Check the *qm0*.trc files in the trace directory.


10)  Restart  Time Managers  from root container 

EXEC dbms_aqadm.stop_time_manager;
EXEC dbms_lock.sleep(120); -- sleep is must
EXEC dbms_aqadm.start_time_manager;



11)    Restart  queue process 

BEGIN 
dbms_aqadm.stop_queue ('<schema name>.<queue name>') ; 
END ; 
/


BEGIN 
dbms_aqadm.start_queue ( queue_name => '<schema name>.<queue name>' ) ; 
END ; 
/






Views :

SELECT msg_state, MIN(enq_time) min_enq_time, MAX(deq_time) max_deq_time, count(*) FROM abdul_queue  GROUP BY msg_state;


select TOTAL_NUMBER 
from DBA_QUEUE_SCHEDULES 
where QNAME='<source_queue_name>';

select * from QUEUE_PRIVILEGES ;

SELECT name, queue_type, waiting, ready, expired
FROM dba_queues q
JOIN v$aq v ON q.qid = v.qid
WHERE q.name = 'queue_name'  ;



 select count(*), min(enq_time), max(deq_time), state, q_name from ABDUL.QUEUE_TABLE group by q_name, state;



set line 100
col Parameter for a50
col "Session Value" for a20
col "Instance Value" for a20

SELECT a.ksppinm "Parameter",b.ksppstvl "Session Value",c.ksppstvl "Instance Value" FROM x$ksppi a, x$ksppcv b, x$ksppsv c WHERE a.indx = b.indx AND a.indx = c.indx AND a.ksppinm in ( '_client_enable_auto_unregister', '_emon_send_timeout' )
/


-- to check if dequeue is  happening 
select queue_id,shard_id,priority,ENQUEUED_MSGS,dequeued_msgs, enqueued_msgs-dequeued_msgs backlog from gv$aq_sharded_subscriber_stat order by queue_id, shard_id, priority;




Check message state and destination. Find the queue table for a given queue:

select QUEUE_TABLE 
from DBA_QUEUES 
where NAME = '&queue_name';



Check for messages in the source queue with

select count (*) 
from AQ$<source_queue_table>  
where q_name = 'source_queue_name';



Check for messages in the destination queue.

select count (*) 
from AQ$<destination_queue_table>  
where q_name = 'destination_queue_name';





select * from GV$AQ  ;
select * from aq.aq$objmsgs80_qtab ;
select * from dba_queues where qid=14140;
SELECT * FROM SYS.ALL_QUEUES ;
select * from DBA_QUEUE_SCHEDULES ;



COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Queue Name' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22
COLUMN USER_COMMENT HEADING 'Comment' FORMAT A15
SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
  FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
  WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
        q.QUEUE_TABLE = t.QUEUE_TABLE AND
        q.OWNER       = t.OWNER;






DBA_QUEUE_TABLES: All Queue Tables in Database
The DBA_QUEUE_TABLES view contains information about the owner instance for a queue table. A queue table can contain multiple queues. In this case, each queue in a queue table has the same owner instance as the queue table. Its columns are the same as those in ALL_QUEUE_TABLES.

DBA_QUEUES: All Queues in Database The DBA_QUEUES view specifies operational characteristics for every queue in a database. Its columns are the same as those in ALL_QUEUES.

DBA_QUEUE_SCHEDULES: All Propagation Schedules The DBA_QUEUE_SCHEDULES view describes all the current schedules in the database for propagating messages.

QUEUE_PRIVILEGES: Queues for Which User Has Queue Privilege The QUEUE_PRIVILEGES view describes queues for which the user is the grantor, grantee, or owner. It also shows queues for which an enabled role on the queue is granted to PUBLIC.

AQ$Queue_Table_Name: Messages in Queue Table
The AQ$Queue_Table_Name view describes the queue table in which message data is stored. This view is automatically created with each queue table and should be used for querying the queue data. The dequeue history data (time, user identification and transaction identification) is only valid for single-consumer queues.




***********************
.
SET MARKUP HTML ON SPOOL ON HEAD "<TITLE> DIAG - INFO </title> -
<STYLE TYPE='TEXT/CSS'><!--BODY {background: ffffc6} --></STYLE>"
SET ECHO ON

spool AQ_Diag.html

set pagesize 120
set linesize 180
column global_name format a20
column current_scn format 99999999999999999999

alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
alter session set nls_timestamp_format='dd-mon-yyyy hh24:mi:ss';
select global_name, sysdate from global_name;
select * from gv$version;
select inst_id, instance_number, instance_name, host_name, startup_time, status from gv$instance order by inst_id;

SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') "Current Container" FROM DUAL;
select con_id, name, cdb, current_scn from gv$database order by inst_id;
select * from gv$containers order by inst_id,con_id;
select * from gv$pdbs order by inst_id,con_id;

select owner, queue_table, type, RECIPIENTS from dba_queue_tables where
queue_table='ABDUL_QUEUE';

SELECT queue_table, owner_instance, primary_instance, secondary_instance
FROM dba_queue_tables
WHERE owner = 'ABDUL' AND queue_table = 'ABDUL_QUEUE';

select count(*), state, q_name from ABDUL.ABDUL_QUEUE
group by q_name, state;

select count(*), min(enq_time), max(deq_time), state, q_name from ABDUL.ABDUL_QUEUE
group by q_name, state;


select count(*), min(enq_time), max(deq_time), state, q_name from ABDUL.ABDUL_QUEUE
where deq_time < (sysdate - 1)
group by q_name, state;

select trunc(enq_time,'HH'), q_name, state, count(*), max(deq_time) from ABDUL.ABDUL_QUEUE
group by trunc(enq_time,'HH'),q_name, state;

select * from dba_queue_subscribers where queue_table='ABDUL_QUEUE';

spool off

SET MARKUP HTML OFF
SET ECHO ON

***********************






Possible message states in Oracle Advanced Queuing

Value  Name                     Meaning
-----  -----------------------  ----------------------------------------------------------------
0      READY                    The message is ready to be processed, i.e., either the delay
                                time of the message has passed or the message did not have
                                a delay time specified.
1      WAITING (or WAIT)        The delay specified by message_properties_t.delay while
                                executing dbms_aq.enqueue has not been reached.
2      RETAINED (or PROCESSED)  The message has been successfully processed (dequeued) but
                                will remain in the queue until the retention_time specified
                                for the queue while executing dbms_aqadm.create_queue has
                                been reached.
3      EXPIRED                  The message was not successfully processed (dequeued) in
                                either 1) the time specified by message_properties_t.expiration
                                while executing dbms_aq.enqueue or 2) the maximum number of
                                dequeue attempts (max_retries) specified for the queue while
                                executing dbms_aqadm.create_queue.
4      IN MEMORY                User-enqueued buffered message.
7      SPILLED                  User-enqueued buffered message spilled to disk.
8      DEFERRED                 Buffered messages enqueued by a Streams Capture process.
9      DEFERRED SPILLED         Capture-enqueued buffered messages that have been spilled to disk.
10     BUFFERED EXPIRED         User-enqueued expired buffered messages.




Known Issues :

1) Issue 1

AQ Queue Monitor Does Not Change Message MSG_STATE from WAIT to READY on RAC  after restart / patching :



Queue table "ABDUL_QUEUE" is AQ queue table owned by user "USER01" and view "AQ$ABDUL_QUEUE" is created when creating queue table "ABDUL_QUEUE" 
and returns read only data from "ABDUL_QUEUE" with more meaningful/readable values for columns .

During restart queue process goes in hung state and primary_instance  gets de-assigned .Check if any hung message detected  in  *qm0*.trc files 


This can be fixed by altering the queue table:

execute dbms_aqadm.alter_queue_table (queue_table => 'ABDUL_QUEUE', primary_instance => 2, secondary_instance => 1);


select MSG_ID,MSG_STATE,DELAY,ENQ_TIME
  from USER01.AQ$ABDUL_QUEUE
 where MSG_STATE='WAIT'
order by ENQ_TIME;


select owner, queue_table, object_type, owner_instance, primary_instance, secondary_instance
  from dba_queue_tables where queue_table='ABDUL_QUEUE';



AQ Messages Not Changing Status from DELAY to READY Or READY to PROCESSED In RAC Databases (Doc ID 2879519.1)



2)  Issue 2 : 

Sharded Queues : Sharded AQ / RAC: messages stuck in READY when shard ownership has changed.

 select distinct MSG_PRIORITY from owner.aq_table ; 
 select * from GV$AQ_CACHED_SUBSHARDS ;
 select * from GV$AQ_REMOTE_DEQUEUE_AFFINITY;
 select * from SYS.aq$_queue_shards;
 select INST_ID,JOB_ID,QUEUE_NAME,JOB_STATE,MSGS_SENT,OWNER_INSTANCE_ID from GV$AQ_CROSS_INSTANCE_JOBS;






Purge Advanced Queue Table if auto purge is not happening 


1-) Identify how much data you will purge. If you want to purge all of it, you can simply purge without a condition. 
select owner,name,queue_table,retention
from dba_queues
where owner='OWNER_AQ'
and queue_table='QUEUE_TABLE_T'
and queue_type='NORMAL_QUEUE';


select count(1)
from OWNER_AQ.AQ$QUEUE_TABLE_T
where msg_state='PROCESSED' and enq_time<to_date('16/10/2017 17:05:30','dd/mm/yyyy hh24:mi:ss');


2.1-) Purge queue table with conditions.
DECLARE
po dbms_aqadm.aq$_purge_options_t;
BEGIN
po.block := TRUE;
DBMS_AQADM.PURGE_QUEUE_TABLE(
queue_table => 'OWNER_AQ.QUEUE_TABLE_T',
purge_condition => 'qtview.queue = ''QUEUE_TABLE_Q''
and qtview.enq_timestamp < to_date(''16/10/2017 17:05:30'',''dd/mm/yyyy hh24:mi:ss'') and qtview.msg_state = ''PROCESSED''', purge_options => po);
END;
/



2.2-) Purge queue table without conditions. With this option you reclaim space, like a truncate command on a table.
DECLARE
po dbms_aqadm.aq$_purge_options_t;
BEGIN
dbms_aqadm.purge_queue_table('Queue_Table_Name', NULL, po);
END;
/



2.3-) If your dataset is big, it can be executed in batches like the below. The condition may change.


begin
FOR i IN 1..15
LOOP

DECLARE
po dbms_aqadm.aq$_purge_options_t;
BEGIN
po.block := TRUE;
DBMS_AQADM.PURGE_QUEUE_TABLE(
queue_table => 'OWNER_AQ.QUEUE_TABLE_T',
purge_condition => 'qtview.queue = ''QUEUE_TABLE_Q''
and qtview.enq_timestamp < to_date('''||i*2||'/09/2017 05:05:30'',''dd/mm/yyyy hh24:mi:ss'') and qtview.msg_state = ''PROCESSED''',
purge_options => po);
END;
commit;
dbms_lock.sleep(20);
END LOOP;
end;
/





References : 

1) 
https://www.oracle.com/a/otn/docs/database/aq-db19c-technicalbrief.pdf

2) 
https://www.oracle.com/a/otn/docs/database/jmssq-db19c-technicalbrief.pdf

3) 
https://documentation.braintribe.com/Content/customer/oeamtc/oeamtcdocu/Troubleshooting/Oracle%20Advanced%20Queues%20(AQ).htm


4) 
https://docs.oracle.com/database/121/ADQUE/aq_intro.htm#ADQUE0100


5) 
https://docs.oracle.com/database/121/STRMS/strms_qpmon.htm#STRMS908

6) 
After import of queue: ORA-25226 dequeue failed queue not enabled for dequeue (Doc ID 1290221.1)

7) 
https://www.oracle.com/docs/tech/database/jmssq-db19c-technicalbrief.pdf

https://www.morganslibrary.org/reference/demos/aq_demo_rac.html

Monday, February 6, 2023

Copy sql profile/baseline from one sql id to another for Oracle database out of a modified SQL using coe_load scripts

 
We came across a user requirement where a SQL profile and SQL baseline from one SQL ID needed to be copied to another SQL ID.


Suppose query Q1 has SQL_ID1 and SQL_HANDLE1 with a poorly performing execution plan with hash value PHV1. With a small modification to this query, like adding a CBO hint or removing one, we obtain query Q2, which performs well and has SQL_ID2, SQL_HANDLE2 and PHV2. So what we want is to associate PHV2 with SQL_ID1.

This can be done using coe_load_sql_baseline.sql / coe_load_sql_profile.sql   provided under Metalink (Doc ID 1400903.1 ) 

coe_load_sql_baseline.sql / coe_load_sql_profile.sql are scripts provided with the SQLT tool in the "utl" folder.
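A minimal usage sketch (the scripts prompt for their inputs; the exact prompts vary between SQLT versions, so treat this as illustrative):

SQL> @coe_load_sql_profile.sql
-- when prompted, supply the original SQL_ID (SQL_ID1), the modified SQL_ID (SQL_ID2),
-- and the plan hash value of the good plan (PHV2)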

Both scripts are stored at the links below:

coe_load_sql_profile.sql :
https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/02/coeloadsqlprofilesql-as-per-doc-id.html


coe_load_sql_baseline.sql :
https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/02/coeloadsqlbaselinesql-as-per-doc-id.html



References:

Directing Plans with Baselines/Profiles Using coe_load_sql_baseline.sql / coe_load_sql_profile.sql (shipped with SQLT) (Doc ID 1400903.1)

All About the SQLT Diagnostic Tool (Doc ID 215187.1)





Saturday, February 4, 2023

Force monitor Oracle Database Operations / Sql -- Monitoring Settings to capture Sql that are not monitored

 
We all know how to check SQL monitoring for long-running SQL. But there are situations where SQL statements are not monitored.



To avoid this issue we came across the below 4 solutions :

1) /*+ MONITOR */ hint
2) sql_monitor event
3) DBMS_SQL_MONITOR.begin_operation
4) "_sqlmon_max_planlines" parameter



"_sqlmon_max_planlines" Parameter 

There is a hidden parameter "_sqlmon_max_planlines" which states that any SQL with a plan in excess of 300 lines should not be monitored. 
The solution is to change either the session or the system to allow monitoring to happen when the plan is over 300 lines.


alter system  set "_sqlmon_max_planlines"=500 scope=memory sid='*';
or
alter session set "_sqlmon_max_planlines"=500;
The negative side effect is that the monitoring will use more resources (primarily memory and CPU), which is why there are default limits on this feature. You might want to change it back when you're finished to conserve resources.

select ksppinm, ksppstvl, ksppdesc
  from sys.x$ksppi a, sys.x$ksppsv b
 where a.indx=b.indx
  and lower(ksppinm) like lower('%sqlmon%')
order by ksppinm
;



 /*+ MONITOR */ hint  and sql_monitor   event 
 
The event sql_monitor specifies a list of SQL IDs for the statements to be monitored. A SQL statement specifies the /*+ MONITOR */ hint.

For example, the following statement forces instance-level monitoring for SQL IDs 5hc07qvt8v737 and 9ht3ba3arrzt3:

ALTER SYSTEM SET EVENTS 'sql_monitor [sql: 5hc07qvt8v737|sql:9ht3ba3arrzt3] force=true'

At each step of the SQL execution plan, the database tracks statistics by performance  metrics such as elapsed time, CPU time, number of reads and writes, and I/O wait time.
These metrics are available in a graphical and interactive report called the SQL monitor  active report.
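A sketch of pulling that report for one of the SQL IDs above via the standard DBMS_SQLTUNE API (type => 'ACTIVE' produces the interactive HTML report):

SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 1000 PAGESIZE 0
SELECT DBMS_SQLTUNE.report_sql_monitor(sql_id => '5hc07qvt8v737', type => 'TEXT') FROM dual;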




sql_monitor  event 


ALTER SYSTEM SET EVENTS 'sql_monitor [sql: 6v717k15utxsf] force=true';

SET TERMOUT OFF
SELECT * FROM force_sqlmon;
SET TERMOUT ON

SELECT * FROM v$sql_monitor WHERE sql_text LIKE '%force_sqlmon%';




DBMS_SQL_MONITOR.begin_operation

We can monitor a whole operation executed by a user session:


VARIABLE l_dbop_eid NUMBER

BEGIN
  :l_dbop_eid := DBMS_SQL_MONITOR.begin_operation (
                   dbop_name       => 'db_op_1',
                   dbop_eid        => :l_dbop_eid,
                   forced_tracking => DBMS_SQL_MONITOR.force_tracking
                 );
END;
/


BEGIN
  DBMS_SQL_MONITOR.end_operation (
    dbop_name       => 'db_op_1',
    dbop_eid        => :l_dbop_eid
  );
END;
/



SET LONG 1000000
SET LONGCHUNKSIZE 1000000
SET LINESIZE 1000
SET PAGESIZE 0
SET TRIM ON
SET TRIMSPOOL ON
SET ECHO OFF
SET FEEDBACK OFF

SPOOL /host/report_sql_monitor.htm
SELECT DBMS_SQL_MONITOR.report_sql_monitor(
  dbop_name    => 'db_op_1',
  type         => 'HTML',
  report_level => 'ALL') AS report
FROM dual;
SPOOL OFF




References :

https://docs.oracle.com/database/121/ARPLS/d_sql_monitor.htm#ARPLS74785


Sunday, January 29, 2023

Oracle database 19c Dynamic sequence cache - LAST_VALUE for sequences is incorrectly calculated


Recently after migrating to 19c, the application team started complaining that sequence values were not being calculated properly.

It was later found that this was due to the dynamic sequence cache introduced in 19c.

This issue is supposed to be fixed in 19.13.


If the application calls a sequence's nextval frequently, there may be performance 
problems.

The default sequence cache is usually 20; I still recommend starting with a cache of 200 when creating a sequence, as in the example below.
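A minimal example (hypothetical sequence name):

CREATE SEQUENCE order_seq CACHE 200;
-- or, for an existing sequence:
ALTER SEQUENCE order_seq CACHE 200;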


The feature is enabled by default and requires no additional setup by a DBA or end user. This feature resizes sequence caches dynamically based on the rate of consumption of sequence numbers. As a result, sequence cache sizes can grow well beyond the cache size configured in the DDL.

If a sequence is used across nodes in a RAC setting, this can result in large gaps in sequence numbers seen across instances.

If you see large gaps in sequence numbers on 19.10 or later, and your application is sensitive to such gaps, then you can disable this feature with the following setting:

alter system set "_dynamic_sequence_cache"=FALSE;


References :

19c LAST_VALUE for sequences is incorrectly calculated when next cache of numbers required (Doc ID 2797098.1)

Sequence dynamic cache resizing feature (Doc ID 2790985.1)





Saturday, January 21, 2023

Gathering Information for Troubleshooting Oracle Physical Standby Database /Dataguard


Writing this blog to keep all scripts handy for handling Data Guard issues.



Script to Collect Data Guard Physical and Active Standby Diagnostic Information for Version 10g and above (Including RAC) ( ID 1577406.1)

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/script-to-collect-data-guard-physical_21.html


Script to Collect Data Guard Primary Site Diagnostic Information [ID 241374.1]

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/script-to-collect-dataguard-primary.html





Script to Collect Data Guard Primary Site Diagnostic Information for Version 10g and above (Including RAC). (Doc ID 1577401.1)

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/script-to-collect-data-guard-primary.html


Script to Collect Data Guard Physical Standby Diagnostic Information [ID 241438.1]

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/script-to-collect-data-guard-physical.html




SRDC - Collect Data Guard Diagnostic Information (Doc ID 2219763.1)

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/srdc-collect-data-guard-diagnostic.html






Script to Collect Log File Sync Diagnostic Information (lfsdiag.sql) (Doc ID 1064487.1)

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/script-to-collect-log-file-sync.html




Checking Log Transport Lag and Log Transport Errors : 

Have documented the SQL used in the below separate blog for ease of reference:

https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/monitoring-oracle-standby-log-transport.html



Improving Log transport : 

1) Using the multithreaded log writer (LGWR) process for SYNC log transport 

2) Increasing the number of archiver processes for ASYNC log transport, as sketched below 
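A sketch of the second option (log_archive_max_processes is a standard parameter; the value 8 is illustrative):

alter system set log_archive_max_processes=8 scope=both;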



Handling a huge archive gap or a missing archive log :

1) Applying an SCN-based incremental backup to roll the standby forward, as sketched below 
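A minimal RMAN sketch on the primary (standard BACKUP INCREMENTAL FROM SCN syntax; the SCN and format string are illustrative, the SCN normally being derived from the standby's current SCN):

RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/standby_roll_%U';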



Logs to check :

1) Prod and DR site alert logs 

2) Prod and DR site DG broker logs 

3) Prod site LGWR trace file 




DGMGRL Commands : 

Spool dgconfig.log 

DGMGRL> SHOW INSTANCE VERBOSE sales1;
DGMGRL> SHOW INSTANCE sales1;
DGMGRL> show configuration verbose;
DGMGRL> show configuration verbose "tststby";
DGMGRL> show database prod1;
DGMGRL> show database prod1dr;
DGMGRL> show database prod1 statusreport;
DGMGRL> show database prod1 inconsistentProperties;
DGMGRL> show database prod1 inconsistentlogxptProps;
DGMGRL> show database prod1 logxptstatus;
DGMGRL> show database prod1 latestlog;
DGMGRL> show fast_start failover;

DGMGRL> show database STYDB InconsistentProperties 

DGMGRL>show database STYDB InconsistentLogXptProps

 

DGMGRL>show database PRIMDB statusreport
DGMGRL>show database PRIMDB sendQentries
DGMGRL>show database STYDB recvqentries
DGMGRL>show database PRIMDB topwaitevents

DGMGRL> edit configuration set property tracelevel=support;
DGMGRL> edit database PRIMDB set property LogArchiveTrace=8191;
DGMGRL> edit database STYDB set property LogArchiveTrace=8191;

DGMGRL> edit configuration reset property tracelevel ;
DGMGRL> edit database PRIMDB reset property logarchivetrace;
DGMGRL> edit database STYDB reset property logarchivetrace;

DGMGRL> VALIDATE DATABASE 'chennai';
DGMGRL> validate database verbose standby 
DGMGRL> VALIDATE network configuration for all ; 


EVENT


To determine which resource is constraining asynchronous transport, use krsb stats which can be enabled by setting event 16421 on both the primary and standby databases:

alter session set events '16421 trace name context forever, level 3';


To disable krsb stats set event 16421 to level 1:

alter session set events '16421 trace name context forever, level 1';



Checking Dataguard Parameters :


set linesize 500 pages 0
col value for a90
col name for a50
 select name, value
from v$parameter
where name in ('db_name','db_unique_name','log_archive_config', 'log_archive_dest_1','log_archive_dest_2',
               'log_archive_dest_state_1','log_archive_dest_state_2', 'remote_login_passwordfile',
               'log_archive_format','log_archive_max_processes','fal_server','db_file_name_convert',
                     'log_file_name_convert', 'standby_file_management');




SELECT db_param.NAME,
       db_param.VALUE,
       db_db.db_unique_name,
       db_db.database_role
FROM   v$parameter db_param,
       v$database  db_db
WHERE  db_param.NAME IN ( 'db_file_name_convert',
                          'db_name',
                          'db_unique_name',
                          'fal_client',
                          'fal_server',
                          'local_listener',
                          'log_archive_config',
                          'log_archive_dest_1',
                          'log_archive_dest_2',
                          'log_archive_dest_3',
                          'log_archive_dest_state_1',
                          'log_archive_dest_state_2',
                          'log_archive_dest_state_3',
                          'log_file_name_convert',
                          'standby_archive_dest',
                          'standby_file_management',                        
                          'remote_login_passwordfile',
                          'log_archive_format'
                        )
ORDER BY db_param.NAME;



############# Dataguard redo apply rate ##########

watch -n 5 ./guard_apply_takip.sh 


#!/bin/bash
date;
hostname;
export ORACLE_HOME=/oracle/product/11.2.0
export ORACLE_SID=TESTDB

echo "*******************************************************************************************************************"
sqlplus -S / as sysdba  <<EOF
 set lines 200
 set feedback off
 set pages 0 
 col comments for a20
 col start_time for a30
 col item for a30
 col units for a15
 col last_apply_time for a30
 select to_char(R.START_TIME,'DD.MON.YYYY HH24:MI')start_time , 
        R.ITEM, R.UNITS, R.SOFAR , 
        to_char(R.TIMESTAMP,'DD.MON.YYYY HH24:MI:SS') last_apply_time
    from v\$recovery_progress R
    where start_time = (select max(start_time) from v\$recovery_progress) order by start_time, item;
 select process, status, thread#, sequence#, block#, blocks from v\$managed_standby order by 1;
EOF

echo ""
echo "*******************************************************************************************************************"
top -cbn 1 | head -n 40





Other Views : 



###############  Gather the per-log redo generation rate  ###############  

SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';
SQL> select thread#,sequence#,blocks*block_size/1024/1024 MB,(next_time-first_time)*86400 sec,(blocks*block_size/1024/1024)/((next_time-first_time)*86400) "MB/s" from v$archived_log 
where ((next_time-first_time)*86400<>0) 
and first_time between to_date('2015/01/15 08:00:00','YYYY/MM/DD HH24:MI:SS') 
and to_date('2015/01/15 11:00:00','YYYY/MM/DD HH24:MI:SS') 
and dest_id=1 order by first_time;
 


##########  To Check Lag  ################ 

col NAME format a10
 select NAME,TIME,UNIT,COUNT,LAST_TIME_UPDATED from V$STANDBY_EVENT_HISTOGRAM where
 name like '%lag' and count >0 order by LAST_TIME_UPDATED;



###############  To check the speed of log apply on the Data Guard standby  ########


  set lines 120 pages 99
 alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';
 select START_TIME, ITEM, SOFAR, UNITS from gv$recovery_progress;







 Reference : 
   

https://docs.oracle.com/en/database/oracle/oracle-database/21/haovw/tune-and-troubleshoot-oracle-data-guard.html#GUID-30CD6E1C-1CE2-4BB6-A404-896D5C06ECCE

How to Generate AWRs in Active Data Guard Standby Databases (Doc ID 2409808.1)

Thursday, December 15, 2022

Gathering Concurrent Statistics in Oracle Database -- Speeding up Statistics gather

 
We planned to share the load of stats gathering across RAC instances and came across this feature of concurrent statistics.

Please note there are a few reported issues of high CPU usage.




Don't forget to fulfil the requirements :

1) You need to have the job_queue_processes parameter set to a value different from 0:
SQL> show parameter job_queue_processes

2) You also need to deactivate parallel_adaptive_multi_user:
 ALTER SYSTEM SET parallel_adaptive_multi_user=FALSE;


3) You also need to have an active resource plan, as sketched below. The aim of activating a resource plan is to control the resources used by the concurrent statistics jobs.

SQL> show parameter resource_manager_plan
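A sketch of activating a plan if none is set (DEFAULT_PLAN ships with the database; any suitable plan works):

ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';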





Check for the stats global preference value using below query

SELECT DBMS_STATS.get_prefs('CONCURRENT') FROM dual;





Turn On CONCURRENT statistics 

exec dbms_stats.set_global_prefs('DEGREE', dbms_stats.auto_degree) ; 

BEGIN
DBMS_STATS.set_global_prefs (
pname => 'CONCURRENT',
pvalue => 'ALL');
END;
/

In 12cR1 the authorized values are MANUAL, AUTOMATIC, ALL and OFF, to control whether you want concurrent statistics for manual commands, for automatic jobs, for both, or not at all.

Instead of ALL (which works for both manual and automatic stats collection), we can also set the below values:
MANUAL – only for manual stats collection
AUTOMATIC – only for automatic stats collection
OFF – to switch concurrent statistics off


If you see that the global preference value is already set to ALL, you need to grant the below-mentioned privileges to the user performing gather stats.
These grants are not in place by default, so users will face issues when they use concurrent statistics.

SQL> GRANT CREATE JOB, MANAGE SCHEDULER, MANAGE ANY QUEUE TO testuser;




Be very careful with the concurrent statistics feature: when it is activated, most of your users will no longer be able to gather statistics, even on their own objects:

 
ERROR AT line 1:
ORA-20000: Unable TO gather STATISTICS concurrently, insufficient PRIVILEGES
ORA-06512: AT "SYS.DBMS_STATS", line 24281
ORA-06512: AT "SYS.DBMS_STATS", line 24332
ORA-06512: AT line 1




Checking for concurrent jobs :

     col COMMENTS FOR a50
     SELECT job_name, state, comments
     FROM dba_scheduler_jobs
     WHERE job_class LIKE 'CONC%';



 

 
Checking Global Preference : 



SET LINESIZE 150
COLUMN autostats_target FORMAT A20
COLUMN cascade FORMAT A25
COLUMN degree FORMAT A10
COLUMN estimate_percent FORMAT A30
COLUMN method_opt FORMAT A25
COLUMN no_invalidate FORMAT A30
COLUMN granularity FORMAT A15
COLUMN publish FORMAT A10
COLUMN incremental FORMAT A15
COLUMN stale_percent FORMAT A15
SELECT DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET') AS autostats_target,
       DBMS_STATS.GET_PREFS('CASCADE') AS cascade,
       DBMS_STATS.GET_PREFS('DEGREE') AS degree,
       DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') AS estimate_percent,
       DBMS_STATS.GET_PREFS('METHOD_OPT') AS method_opt,
       DBMS_STATS.GET_PREFS('NO_INVALIDATE') AS no_invalidate,
       DBMS_STATS.GET_PREFS('GRANULARITY') AS granularity,
       DBMS_STATS.GET_PREFS('PUBLISH') AS publish,
       DBMS_STATS.GET_PREFS('INCREMENTAL') AS incremental,
       DBMS_STATS.GET_PREFS('STALE_PERCENT') AS stale_percent ,
       DBMS_STATS.get_prefs('CONCURRENT')  AS CONCURRENT  
FROM   dual;




col spare4 format a40 head 'VALUE'
select sname,spare4
from sys.optstat_hist_control$
order by 1
/


Checking Table level preference :

select * from dba_tab_stat_prefs where table_name = '&&TABLE';
 



References : 

Oracle Concurrent (CONCURRENT) Collect Statistics (Doc ID 1555451.1) 

https://docs.oracle.com/database/121/TGSQL/tgsql_stats.htm#TGSQL428