Friday, June 9, 2023
Oracle picks more parallel slaves than what is defined in the PARALLEL hint
Monday, May 15, 2023
Oracle High Undo UNEXPIRED utilization due to Autotune retention causing ORA-01555
TABLESPACE_NAME  STATUS     BYTES         EXTENTS
UNDOTBS1         EXPIRED    65536         1
UNDOTBS1         UNEXPIRED  10285809664   3271
UNDOTBS2         EXPIRED    142802944     6
UNDOTBS2         UNEXPIRED  4242735104    642
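A breakdown like the one above can be produced with a query along these lines (a sketch; the column headings are assumptions based on the output, and dba_undo_extents/v$undostat are the standard views):

```sql
-- Undo extent usage by tablespace and status
select tablespace_name, status, sum(bytes) as bytes, count(*) as extents
  from dba_undo_extents
 group by tablespace_name, status
 order by tablespace_name, status;

-- The autotuned retention that can drive UNEXPIRED growth:
select max(tuned_undoretention) from v$undostat;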
Tuesday, May 9, 2023
Analyzing ORA-4031 In Oracle Database 19c
Saturday, April 29, 2023
Handling Oracle Database Result Cache Corruption
SQL> set linesize 150
SQL> select id,name,row_count,type,invalidations from v$result_cache_objects;
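If the cache contents look corrupted, a hedged sketch using the standard DBMS_RESULT_CACHE package (verify the remedy in your environment before running it in production):

```sql
-- Check the current status of the result cache
select dbms_result_cache.status from dual;

-- Flush the result cache, discarding all cached results
exec dbms_result_cache.flush;
```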
############## Requirements for the Result Cache ##############
Enabling the result cache does not guarantee that a specific result set will be included in the server or client result cache.
In order for results to be cached, the following requirements must be met:
1. Read Consistency Requirements
-> If the current session has an active transaction referencing objects in a query, then the results from this query are not eligible for caching.
2. Query Parameter Requirements
Cached results can be reused when the queries are equivalent and the parameter values are the same. Different parameter values or bind variable names may cause cache misses. Results are parameterized if any of the following constructs are used in the query:
-> Bind variables
-> The SQL functions DBTIMEZONE, SESSIONTIMEZONE, USERENV/SYS_CONTEXT (with constant variables), UID, and USER
-> NLS parameters
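The bind-variable case can be sketched as follows (the emp table and deptno column are hypothetical placeholders; a textually identical query with the same bind value can be served from the cache, while a different value is a miss):

```sql
variable dept number
exec :dept := 10

-- Eligible for caching, parameterized by the bind :dept
select /*+ RESULT_CACHE */ count(*) from emp where deptno = :dept;
```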
Sunday, April 2, 2023
Tracing Oracle Datapump expdp/impdp
Expdp and impdp have a built-in mechanism to trace executions. We can add the (hidden) parameter "trace" so that the command
writes a useful trace file. Unfortunately, because this parameter is hidden it does not appear in the expdp help=yes output, so we still need to figure out how to use it.
The trace parameter is set using a 7-digit hexadecimal number. There is no need to add 0x at the beginning of the string; decimal numbers and binary input are not accepted.
The number must be written in lowercase. We also need the privilege to run trace on the session and, obviously, the permissions to export or import.
-- Example of combination (last 4 digits are usually 0300):
40300 to trace Process services
80300 to trace Master Control Process (MCP)
400300 to trace Worker process(es)
In order to trace all Data Pump components, level 1FF0300 can be specified:
However, full-level tracing is not required in most cases. Trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP).
Combining them gives trace 480300, which traces both the Master Control Process (MCP) and the Worker process(es). This serves the purpose in most situations.
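The combination is simply a bitwise OR of the component hex values, which can be verified quickly in a shell:

```shell
# 400300 (Worker) OR 80300 (MCP) = 480300
printf '%X\n' $(( 0x400300 | 0x80300 ))
```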
Running a Data Pump job with full tracing produces two trace files in BACKGROUND_DUMP_DEST:
-- Master Process trace file: <SID>_dm<number>_<process_id>.trc
-- Worker Process trace file: <SID>_dw<number>_<process_id>.trc
And one trace file in USER_DUMP_DEST:
-- Shadow Process trace file: <SID>_ora_<process_id>.trc
To use full-level tracing, issue the Data Pump export as:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300
To use full-level tracing for a Data Pump import operation, issue the import as:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300
So, to diagnose most Data Pump export problems, issue:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300 metrics=yes
To diagnose most Data Pump import problems, issue:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300 metrics=yes
Data Pump tracing can also be started by setting EVENT 39089 in the initialization parameter file.
This method should only be used to trace the Data Pump calls at an early stage, e.g. if details are needed about the DBMS_DATAPUMP.OPEN API call.
Trace level 0x300 traces all Data Pump client processes. Note that when the event is set in the parameter file, a restart of the database is required for it to take effect.
-- Enable event
ALTER SYSTEM SET EVENTS = '39089 trace name context forever, level 0x300' ;
-- Disable event
ALTER SYSTEM SET EVENTS = '39089 trace name context off' ;
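As a sketch, the parameter-file variant is a single event line in the pfile (takes effect only after a restart):

```
event="39089 trace name context forever, level 0x300"
```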
We can also use tfactl to collect diagnostic information
tfactl diagcollect -srdc dbexpdp
tfactl diagcollect -srdc dbimpdp
References:
Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump (Doc ID 286496.1)
Saturday, March 11, 2023
Troubleshooting advanced queue issues in Oracle 19c database
Value | Name | Meaning |
---|---|---|
0 | READY | The message is ready to be processed, i.e., either the delay time of the message has passed or the message did not have a delay time specified |
1 | WAITING or WAIT | The delay specified by message_properties_t.delay while executing dbms_aq.enqueue has not been reached. |
2 | RETAINED OR PROCESSED | The message has been successfully processed (dequeued) but will remain in the queue until the retention_time specified for the queue while executing dbms_aqadm.create_queue has been reached. |
3 | EXPIRED | The message was not successfully processed (dequeued) in either 1) the time specified by message_properties_t.expiration while executing dbms_aq.enqueue or 2) the maximum number of dequeue attempts (max_retries) specified for the queue while executing dbms_aqadm.create_queue. |
4 | IN MEMORY | User-enqueued buffered message |
7 | SPILLED | User-enqueued buffered message spilled to disk |
8 | DEFERRED | Buffered messages enqueued by a Streams Capture process |
9 | DEFERRED SPILLED | Capture-enqueued buffered messages that have been spilled to disk |
10 | BUFFERED EXPIRED | User-enqueued expired buffered messages |
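The distribution of message states in a given queue table can be checked with a query like the following (MY_QUEUE_TABLE is a placeholder; AQ$<queue_table> is the view Oracle exposes for each queue table):

```sql
-- Count messages per state for one queue table
select msg_state, count(*) as messages
  from aq$MY_QUEUE_TABLE
 group by msg_state;
```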
Monday, February 6, 2023
Copy a SQL profile/baseline from one SQL ID to another (from a modified SQL) in Oracle Database using the coe_load scripts
Saturday, February 4, 2023
Force monitoring of Oracle Database operations/SQL -- monitoring settings to capture SQL statements that are not monitored
Sunday, January 29, 2023
Oracle database 19c Dynamic sequence cache - LAST_VALUE for sequences is incorrectly calculated
Saturday, January 21, 2023
Gathering Information for Troubleshooting Oracle Physical Standby Database /Dataguard
Writing this blog post to keep all the scripts handy for handling Data Guard issues.
Checking log transport lag and log transport errors:
The SQL used here is documented in a separate blog post for convenience:
https://abdul-hafeez-kalsekar-tuning.blogspot.com/2023/01/monitoring-oracle-standby-log-transport.html
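For a quick check without leaving the standby, a sketch using the standard v$dataguard_stats view:

```sql
-- Run on the standby: transport and apply lag as currently reported
select name, value, time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag');
```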
Improving log transport:
1) Using a multithreaded log writer (LGWR) process for SYNC log transport
2) Increasing the number of archiver processes for ASYNC log transport
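Point 2 can be sketched with the standard LOG_ARCHIVE_MAX_PROCESSES parameter (the value 8 is an arbitrary example; size it for your redo volume):

```sql
-- Increase the number of ARCn processes for ASYNC transport
alter system set log_archive_max_processes = 8 scope = both;
```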
Handling a huge archive gap or a missing archive log:
1) Applying an SCN-based incremental backup
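A sketch of the SCN-based incremental approach (the SCN and format string below are placeholders; the real SCN comes from the standby):

```
-- On the standby: find the SCN to roll forward from
SQL> select current_scn from v$database;

-- On the primary: take an incremental backup starting from that SCN
RMAN> backup incremental from scn 1234567 database format '/tmp/standby_incr_%U';
```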
Logs to check:
1) Primary and DR site alert logs
2) Primary and DR site DG broker logs
3) Primary site LGWR log file
DGMGRL commands:
Spool dgconfig.log
DGMGRL> show database STYDB InconsistentProperties
DGMGRL> show database STYDB InconsistentLogXptProps
Checking Dataguard Parameters :