Sunday, May 8, 2022

Oracle Database 19c - SQL_DIAGNOSE_AND_REPAIR



In Oracle Database 19c, a new function, SQL_DIAGNOSE_AND_REPAIR, is introduced to diagnose a given SQL statement for a given problem type. The function creates an incident, populates the incident metadata with the required information (SQL ID, SQL text, compilation environment, and so on), then creates a diagnostic task, executes it, and accepts the resulting recommendation for the SQL.



For example:

SQL> select max(col3) from tbl1 where col1=10000 and col2=10000;

MAX(COL3)
------------------------------
A10000

SQL> column sql_id new_value sql_id
SQL> select prev_sql_id as sql_id from v$session where sid = userenv('sid');


 
SQL> var incident_id number;
SQL> exec :incident_id := dbms_sqldiag.sql_diagnose_and_repair(sql_id => '&sql_id',scope=>DBMS_SQLDIAG.SCOPE_COMPREHENSIVE,time_limit=>DBMS_SQLDIAG.TIME_LIMIT_DEFAULT,problem_type=>DBMS_SQLDIAG.PROBLEM_TYPE_PERFORMANCE,auto_apply_patch=>'NO');


PL/SQL procedure successfully completed.

SQL>
SQL> select DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK(:incident_id) from dual;
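If the report CLOB appears truncated, widen the SQL*Plus display limits and re-run the query above (standard SQL*Plus settings, nothing specific to DBMS_SQLDIAG):

SQL> set long 1000000 longchunksize 1000000 pagesize 0 linesize 200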


SQL> set autotrace off





Reference : 
1)  19c New Feature - New Automatic Diagnostics and Repair Function SQL_DIAGNOSE_AND_REPAIR (Doc ID 2644400.1)

Oracle 19c: Bug 27175987 — Partition Pruning Not Work For SQL With Predicate Of User Function After Upgrade To 19c

 
As we have seen, 19c comes with its share of performance instability; recently we have been facing issues with SQL on partitioned tables.
It was observed that after the upgrade to 19c, partition pruning does not work for SQL with predicates based on a user-defined function, while partition pruning worked for the same SQL prior to 19c.

This is caused by the fix for bug 27175987 (unpublished). It is implemented as a fix control, can be seen in v$system_fix_control, and can be disabled if desired.


Fix 1) 

Re-enable partition pruning by disabling the fix for bug 27175987, setting "_fix_control" = '27175987:off':

alter session set "_fix_control" = '27175987:off';

or

Add the hint /*+ OPT_PARAM('_fix_control' '27175987:off') */ to the affected statement, as in the sketch below:
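A minimal sketch of checking whether the fix control is currently enabled and of the hint-based workaround; v$system_fix_control is a standard view, while the table and function names in the second query are purely illustrative:

-- value 1 means the fix is enabled, 0 means it is disabled
select bugno, value, description
from   v$system_fix_control
where  bugno = 27175987;

-- hint-based workaround on a hypothetical partitioned table/function
select /*+ OPT_PARAM('_fix_control' '27175987:off') */ count(*)
from   sales_part
where  region_key = get_region_key(:input);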





Fix 2 ) 

Rewrite the user-defined function as a deterministic PL/SQL function where possible, and add the DETERMINISTIC clause only if the function is truly deterministic:

SQL> CREATE or REPLACE FUNCTION <USER_FUNCTION_NAME>(ARG VARCHAR2) RETURN VARCHAR2 deterministic IS -- Add "deterministic"
BEGIN
  RETURN ARG;
END;
/



Reference : 
1) Query Performance Issues after upgrade to 19C (Doc ID 2739411.1)
2) Partition Pruning Not Work For SQL With Predicate Of User Function After Upgrade To 19c (Doc ID 2717940.1)
3) Things to Consider to Avoid SQL Performance Problems on 19c (Doc ID 2773715.1)

Oracle Database -- NEW REPORTING SUBPROGRAMS IN DBMS_STATS PACKAGE

 
Knowing when and how to gather statistics in a timely manner is critical to maintain acceptable performance on any system.

Determining what statistics gathering operations are currently executing in an environment and how changes to the statistics methodology will impact the system can be difficult and time consuming.

Reporting subprograms in DBMS_STATS package make it easier to monitor what statistics gathering activities are currently going on and what impact changes to the parameter settings of these operations will have. 

The DBMS_STATS subprograms are  REPORT_STATS_OPERATIONS, REPORT_SINGLE_STATS_OPERATION and REPORT_GATHER_*_STATS.

The report shows detailed information about the statistics gathering operations that occurred during a specified time window: when each operation occurred, its status, and the number of objects covered. It can be displayed in either text or HTML format.


COLUMN REPORT FORMAT A300

VARIABLE my_report CLOB;
BEGIN
:my_report := DBMS_STATS.REPORT_STATS_OPERATIONS (
  since => SYSDATE-7
, until => SYSDATE
, detail_level => 'TYPICAL'
, format => 'TEXT'
, auto_only => TRUE
);
END;
/

print my_report



variable mystatrep2 clob;
set long 1000000
begin
  :mystatrep2 := dbms_stats.report_stats_operations(
    since        => SYSTIMESTAMP-16,
    until        => SYSTIMESTAMP-1,
    detail_level => 'TYPICAL',
    format       => 'TEXT');
end;
/

print mystatrep2
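Once you have the operation id of interest from the list above, a single run can be drilled into with REPORT_SINGLE_STATS_OPERATION; the operation id used below is illustrative and should be taken from your own operations report:

variable myop clob;
set long 1000000
begin
  :myop := dbms_stats.report_single_stats_operation(
    opid         => 848,        -- illustrative operation id from the operations report
    detail_level => 'ALL',
    format       => 'TEXT');
end;
/

print myop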


Saturday, May 7, 2022

Exacc -- Create database manually using dbaascli

 

1) Listing Available Software Images and Versions for Database

[oracle@host1 ~]$ sudo dbaascli cswlib showImages




2)  Creating Oracle Database Home with Unified Audit Enabled 

[oracle@host1 ~]$ sudo dbaascli dbhome create --version 19000 --bp JAN2021 --oracleHomeName UnifiedAuditenabled --enableUnifiedAuditing true



3) Creating Oracle Database In the Specified Oracle Database Home

[oracle@host1 ~]$ sudo dbaascli database create --dbName DBNAME --dbUniqueName DBNAME --dbSid DBNAME --oracleHomeName UnifiedAuditenabled --nodeList host1,host2 --createAsCDB true --waitForCompletion false


The command prompts for the SYS and TDE passwords.

Use the flag --waitForCompletion false to run the job in the background.


To run prerequisites checks, use the --executePrereqs command option. This will perform only the prerequisite checks without performing the actual Oracle Database home creation.

sudo dbaascli dbhome create --version <Oracle Home version> --imageTag <image tag> --executePrereqs
Where:
--version specifies the Oracle Database version
--imageTag specifies the Image Tag of the image to be used




4) Monitor the progress of the Job

[oracle@host1 ~]$ sudo dbaascli job getStatus --jobID 09309052-7700-44d5-83e2-f8e 
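Once the creation job completes, a quick sanity check from SQL*Plus on one of the nodes confirms that the new CDB is open on all nodes (a minimal check, nothing Exadata-specific):

SQL> select inst_id, instance_name, status, version from gv$instance;
SQL> select name, cdb, open_mode from v$database;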





5) Generate a cloud registration file (creg.ini) under /var/opt/oracle/creg/ by running the following command as the root user on one of the database servers:

# dbaascli registerdb registry --action initialize --db dbname [ --udb dbuniquename ]


# dbaascli registerdb prereqs --dbname dbname [ --db_unique_name dbuniquename ]
# dbaascli registerdb begin --dbname dbname [ --db_unique_name dbuniquename ]





Sunday, May 1, 2022

Oracle Background Processes that can be restarted and How to prioritize an Oracle Database background process

 
Thought of documenting this, as at times we come across situations where we have to kill certain background processes due to high load or other issues.
This is for information purposes only; it is advisable to check with Oracle Support before taking any action.


Killable:
ARCn: Redo log archivers
CJQn: Job scheduler coordinator
Dnnn: Dispatchers
DIA0: Diagnosability process 0
DIAG: Diagnosability coordinator
FBDA: Flashback data archiver process
Jnnn: Job scheduler processes
MMNL: Manageability Monitor Lite process
MMON: Manageability Monitor Process
PING: Interconnect latency measurement
Qnnn: Queue cleanup processes
QMNC: Queue coordinator
RECO: Distributed recovery process
Snnn: Shared servers
SMCO: Space management coordinator
Wnnn: Space management processes

Instance-Critical:
ACMS: Atomic controlfile to memory server
CKPT: Checkpoint
DBRM: Resource manager process
DBWn: Database writer processes
LGWR: Redo log writer
LMDn: Global enqueue service daemons
LMON: Global enqueue service monitor
MMAN: Memory manager
PMON: Process monitor
PSPn: Process spawners
RMSn: RAC management server
RVWR: Recovery writer
SMON: System monitor process
VKTM: Virtual keeper of time process



Increasing the priority of a background process

In this test case we will increase the priority of the LGWR process to reduce 'log file sync' waits.

Increasing the LGWR priority puts the LGWR process in the round-robin (SCHED_RR) scheduling class. You can raise a process's priority using OS methods (nice, renice) or database methods; this post is about setting the priority using '_high_priority_processes', an undocumented/hidden parameter that prioritizes your database-managed processes.

Starting with 21.3.0.0.0, the LGWR process is part of the _high_priority_processes group by default, along with the VKTM and LMS* processes.
Note: As this is a hidden/undocumented parameter, I advise consulting Oracle Support before changing its value.


 select ksppstvl from x$ksppi join x$ksppcv using (indx) where ksppinm='_high_priority_processes';

alter system set "_high_priority_processes"='LMS*|VKTM|LGWR' scope=spfile;

Restart the database

srvctl stop database -d abdul1
srvctl start database -d abdul1


 select ksppstvl from x$ksppi join x$ksppcv using (indx) where ksppinm='_high_priority_processes';
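To gauge the effect of the change, compare the cumulative 'log file sync' waits before and after the restart (a simple system-wide check; an AWR comparison over comparable workload periods gives a more precise picture):

select event, total_waits, round(time_waited_micro/1e6) seconds_waited
from   v$system_event
where  event = 'log file sync';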



Tuesday, April 26, 2022

Oracle 19c Automatic SQL Plan Management Behaviors Change

 

What's New : 

Automatic SQL Plan Management is enabled by default in Oracle 19c
New SQL plan baselines will be created automatically


By default, the SPM Evolve Advisor runs daily in the maintenance window
• In 19c it can be configured to also run outside of the maintenance window
• To control it, the DBMS_SPM.CONFIGURE procedure supports a new parameter
• AUTO_SPM_EVOLVE_TASK (OFF, ON, AUTO); a configuration sketch follows this list
• In 19c AUTO is equivalent to ON
• When enabled, it runs every hour for no longer than 30 minutes
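A minimal sketch of switching the automatic task with DBMS_SPM.CONFIGURE and of checking the current setting; the parameter name is the one described above, and DBA_SQL_MANAGEMENT_CONFIG is the standard view that exposes the SPM configuration:

BEGIN
  DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'ON');   -- OFF, ON or AUTO
END;
/

SELECT parameter_name, parameter_value
FROM   dba_sql_management_config
WHERE  parameter_name = 'AUTO_SPM_EVOLVE_TASK';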



How to check :

column parameter_value format a45
column parameter_name format a25
set pages 200

SELECT PARAMETER_NAME, PARAMETER_VALUE
  FROM   DBA_ADVISOR_PARAMETERS
  WHERE  TASK_NAME = 'SYS_AUTO_SPM_EVOLVE_TASK'
    AND PARAMETER_VALUE <> 'UNUSED'
  ORDER BY 1;


SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines;


The new default AUTO for ALTERNATE_PLAN_BASELINE and ALTERNATE_PLAN_SOURCE enables Automatic SQL Plan Management in Oracle Database 19c. ALTERNATE_PLAN_SOURCE lets the advisor identify the plan sources by itself, while ALTERNATE_PLAN_BASELINE determines which SQL statements are eligible for plan capture; AUTO in this case means that any SQL statement in AWR is a potential candidate.




How to  revert to the Oracle 12.2.0.1   Behavior : 

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
      parameter => 'ALTERNATE_PLAN_BASELINE', 
      value     => 'EXISTING'); 
END; 
/

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ALTERNATE_PLAN_SOURCE', 
      value     => 'CURSOR_CACHE+AUTOMATIC_WORKLOAD_REPOSITORY'); 
END; 
/

BEGIN
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ALTERNATE_PLAN_LIMIT',
      value => 10);
END;
/






Switching to the Oracle 19c behavior

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
      parameter => 'ALTERNATE_PLAN_BASELINE', 
      value     => 'AUTO');  
END; 
/

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ALTERNATE_PLAN_SOURCE', 
      value     => 'AUTO');  
END; 
/

BEGIN
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ALTERNATE_PLAN_LIMIT',
      value => 'UNLIMITED');
END;
/




Reference : 
https://blogs.oracle.com/optimizer/post/what-is-automatic-sql-plan-management-and-why-should-you-care







Monday, April 25, 2022

Oracle AWR data dump -- Important during migration

 

 

Most of the time after migrating a database to a new server we observe performance degradation. Usually we do not have historical performance data from before the migration, which lands us in a big mess.

To overcome this, it is very important to take an AWR dump from the existing database before the migration.


Extract AWR data

To extract the AWR data from the source database, run the awrextr.sql script, which extracts AWR data for a range of snapshots into a Data Pump export file.

1. Run the following script to extract the AWR:
@?/rdbms/admin/awrextr.sql


2. The script asks you to select the DBID:
Enter value for dbid:

3. Enter the number of days of snapshots to export:
Enter value for num_days: 2

4. It lists the snapshots from the last 2 days in AWR. Choose the begin and end snapshots for the export:
Enter value for begin_snap: 76
Begin Snapshot Id specified: 76
Enter value for end_snap: 86
End Snapshot Id specified: 86

5. The script lists the directory objects present in the database. Choose the directory and the dump file name:
Enter value for directory_name: ORACLE_HOME

Enter value for file_name: awrdat_76_86.dmp

6. The dump is now generated. The AWR extract dump file will be located in the following directory/file:
/usr/tmp/awrdat_76_86.dmp



Load the AWR data into the target

Load the extracted AWR data with the awrload.sql script. It first creates a staging schema into which the snapshot data is transferred from the Data Pump file, before being moved into the AWR tables.

1. Run the awrload.sql script to start loading the data:

@?/rdbms/admin/awrload.sql

2. Specify the directory name where the export file exists:

Enter value for directory_name:

3. Enter the name of the dump file, without the .dmp extension:

Enter value for file_name:

4. Specify the name of the staging schema where the data is loaded (the default is AWR_STAGE):

Enter value for schema_name:

5. Specify the default and temporary tablespaces for the staging schema:

Enter value for default_tablespace: SYSAUX

Enter value for temporary_tablespace: TEMP

6. The data loads successfully:

AWR Load Started ..

This operation will take a few moments. The
progress of the AWR load operation can be
monitored in the following directory/file:
/usr/tmp/
AWRDAT_76_86.log



The process will then prompt for the staging schema name; the default is AWR_STAGE. If you accept the default, the script creates the AWR_STAGE user after asking for the default and temporary tablespaces. Once the AWR load completes, the script drops the AWR_STAGE user.

After the process completes, the AWR tables now have new data in them! You can query DBA_HIST_SNAPSHOT or any of the other DBA_HIST views (including DBA_HIST_ACTIVE_SESS_HISTORY). 
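Once loaded, you can confirm which DBID and snapshot range were imported, and then generate AWR reports against the source DBID; the views and the DBMS_WORKLOAD_REPOSITORY call below are standard, but the DBID and snapshot ids are illustrative:

-- which DBIDs and snapshot ranges are now present in the repository
select dbid, min(snap_id) min_snap, max(snap_id) max_snap
from   dba_hist_snapshot
group  by dbid;

-- text AWR report for the imported DBID/snapshot range (values are illustrative)
select output
from   table(dbms_workload_repository.awr_report_text(
               l_dbid     => 1234567890,
               l_inst_num => 1,
               l_bid      => 76,
               l_eid      => 86));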

Sunday, April 24, 2022

Estimating Size of Oracle Database Objects --seeing history of object/table size


Resource and performance capacity of the servers is one side of the puzzle. Equally important is sizing/estimating the database for storage and data growth. This means the database, the database objects, and the underlying storage subsystem all have to be sized for today and tomorrow.

Oracle provides a few packages and procedures that help determine the size of objects and indexes based on estimated growth. Furthermore, using the DBMS_SPACE.OBJECT_GROWTH_TREND function, a growth pattern for existing tables can be obtained.

The following query lists the growth trend for an object; the data for the trend is gathered from the Automatic Workload Repository (AWR). A sample growth trend for one table is listed following.

The OBJECT_GROWTH_TREND function returns four values:

TIMEPOINT - a timestamp value indicating the time of the recording/reporting.
SPACE_USAGE - the amount of space used by the object at the given point in time.
SPACE_ALLOCATED - the amount of space allocated to the object in the tablespace at the given point in time.
QUALITY - indicates the quality of the reported data; there are three possible values:
  INTERPOLATED - the value did not meet the criteria for GOOD. As seen in the output following, used and allocated are the same; the values do not reflect actual usage.
  GOOD - the value is based on recorded statistics; a value is marked GOOD if at least 80% of it is derived from GOOD instance values.
  PROJECTED - the value of TIME is in the future as of the time the report was produced.


In a RAC environment, the output reflects the aggregation of values recorded across all instances in the cluster.

SELECT *
FROM TABLE(dbms_space.object_growth_trend(object_owner => 'ABDUL',
 object_name => 'HISTORY', object_type => 'TABLE'));


TIMEPOINT SPACE_USAGE SPACE_ALLOC QUALITY
------------------------------- ----------- ----------- --------------------
28-MAY-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
29-MAY-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
30-MAY-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
31-MAY-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
01-JUN-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
02-JUN-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
03-JUN-14 11.12.43.052162 AM 111082969 111082969 INTERPOLATED
04-JUN-14 11.12.43.052162 AM 131877793 134217728 GOOD
05-JUN-14 11.12.43.052162 AM 132003569 134369941 PROJECTED
06-JUN-14 11.12.43.052162 AM 132129344 134522153 PROJECTED
07-JUN-14 11.12.43.052162 AM 132255119 134674366 PROJECTED
08-JUN-14 11.12.43.052162 AM 132380894 134826579 PROJECTED
09-JUN-14 11.12.43.052162 AM 132506669 134978791 PROJECTED
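For objects that do not exist yet, DBMS_SPACE.CREATE_TABLE_COST can estimate the space a new table would need from an expected row size and row count; a minimal sketch, where the tablespace name and row figures are illustrative:

set serveroutput on
declare
  l_used_bytes  number;
  l_alloc_bytes number;
begin
  dbms_space.create_table_cost(
    tablespace_name => 'USERS',       -- illustrative tablespace
    avg_row_size    => 120,           -- expected average row size in bytes
    row_count       => 5000000,       -- expected number of rows
    pct_free        => 10,
    used_bytes      => l_used_bytes,
    alloc_bytes     => l_alloc_bytes);
  dbms_output.put_line('Used MB     : '||round(l_used_bytes/1024/1024));
  dbms_output.put_line('Allocated MB: '||round(l_alloc_bytes/1024/1024));
end;
/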

Wednesday, April 20, 2022

Pga Nightmare in Oracle 19c database -- MGA / ORA-04030 / ORA-04036


Had to write this blog considering the number of PGA issues we have been facing after migrating from 12c to 19c.
This is because the MGA is now also part of the PGA. For more information on the MGA, refer to MOS note 2638904.1.


In Oracle release 12.1, the default PGA_AGGREGATE_LIMIT was the greater of the following:
1. 2 GB
2. 200% of PGA_AGGREGATE_TARGET
3. (value of the PROCESSES initialization parameter) * 3 MB
It will not exceed 120% of the physical memory size minus the total SGA size.


In Oracle release 12.2, the default PGA_AGGREGATE_LIMIT is determined as follows:

* If MEMORY_TARGET is set, then PGA_AGGREGATE_LIMIT defaults to the MEMORY_MAX_TARGET value.
* If MEMORY_TARGET is not set, then PGA_AGGREGATE_LIMIT defaults to 200% of PGA_AGGREGATE_TARGET.
* If MEMORY_TARGET is not set, and PGA_AGGREGATE_TARGET is explicitly set to 0, then the value of PGA_AGGREGATE_LIMIT is set to 90% of the physical memory size minus the total SGA size.

In all cases, the default PGA_AGGREGATE_LIMIT is at least 2 GB and at least 3 MB times the PROCESSES parameter (and at least 5 MB times the PROCESSES parameter for a RAC instance).



From 19c onwards we need to add the MGA overhead to the PGA limit using the formula below. Refer to MOS document 2808761.1 for more information.

PGA_AGGREGATE_LIMIT = (original PGA_AGGREGATE_LIMIT value) + ((maximum number of connected processes) * 4M)
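As a quick worked example (illustrative numbers only): if the original PGA_AGGREGATE_LIMIT was 8 GB and the instance can have up to 500 connected processes, the adjusted limit would be roughly 8 GB + 500 * 4 MB = 10 GB, which could then be applied with something like:

alter system set pga_aggregate_limit = 10G scope = both sid = '*';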




SQL used for analysis:


select * from dba_hist_pgastat where name ='total PGA allocated' order by snap_id;

set lines 200;
set pages 200;
column name format a25;
column pname format a12;
column "MegaBytes" format a10;
set numwidth 6;
select s.sid, s.serial#,p.pid, p.spid,p.pname, sn.name, round(ss.value/(1024 *1024))||'Mb' "MegaBytes"
from v$sesstat ss, v$statname sn, v$session s, v$process p
where s.paddr = p.addr
and sn.statistic# = ss.statistic#
and s.sid = ss.sid
and sn.name in ('session pga memory' , 'session pga memory max')
and p.pname like 'DIA%'
order by ss.value
/

col max_pga for 99999999.9
select (select value from v$pgastat where name = 'maximum PGA allocated')/1024/1024
     - (select value from v$pgastat where name = 'MGA allocated (under PGA)')/1024/1024 as max_pga
from dual;


select max_utilization from v$resource_limit where resource_name='processes';



set pages 333 lines 255
set wrap off
col "Sid/Serial" for a12
col UnixPid for a8
col OraUser for a8
col OsUser for a8
col machine for a18
col Minutes for 99990.9
col "ClientProgram" format a28
col "ServerProgram" format a32
col "Program" format a25
col "Module" format a25
col "SQL Query" format a60
col "MBmem" format 99999
col "App PID" for a8
col "SPID" for a8

BREAK ON REPORT
COMPUTE SUM LABEL "Total PGA MB" OF  "MBmem" ON REPORT
select /* Running SQL */ distinct 
       s.INST_ID "INST_ID",s.sid||','||s.serial# "Sid/Serial", s.sql_id,p.spid "SPID",s.process "App PID", s.username "DBUser",s.osuser "OSUser", s.machine,
       s.last_call_et "Time(Sec)", 
       q.sql_text "SQL Query", p.pga_alloc_mem/1024/1024 "MBmem"
from gv$session s,gv$process p, gv$sql q
where s.INST_ID=p.INST_ID and
      s.paddr=p.addr and
      s.sql_id=q.sql_id(+)
and s.username is not null
and s.sql_id is not null
and s.status <> 'INACTIVE'
and q.sql_text not like '%Running SQL%'
order by "INST_ID","Time(Sec)" desc;






List of related bugs:

Bug 32521805 : ORA-4030 IN MGA, MGA 20% OF PGA_AGGREGATE_LIMIT

BUG 30611650 - HIGH PGA USAGE FOR A QUERY USING XMLFOREST was filed for this kind of problem.

Bug 30028599 - ORA-4036: PGA memory used by the instance exceeds pga_aggregate_limit (Doc ID 30028599.8)

Database Failures and Hangs in RAC due to MGA Sizing and Allocation Issues (Doc ID 2831121.1)



Related  Metalink Document : 

Sizing the PGA in Oracle 19c - How to Account for the MGA Size (Doc ID 2808761.1)

MGA (Managed Global Area) Reference Note (Document 2638904.1)

Cannot Increase or Decrease The Value of PGA_AGGREGATE_LIMIT on 19c (Doc ID 2685564.1)

MMON SLAVE Process Consuming More PGA Memory in 19c (Doc ID 2721971.1)


Saturday, April 9, 2022

Exacc dataguard switchover and Failover using dbaascli

 


Pre-Checks 

--> Confirm the DGMGRL status is OK and there is no lag
DGMGRL> show configuration
DGMGRL> show database standby_sb2
DGMGRL> validate database  verbose standby_sb2

select database_role, switchover_status from v$database;   ( on both primary and standby )
select INST_ID , DEST_ID , max(SEQUENCE#) from gv$archived_log group by INST_ID , DEST_ID ;

ALTER DATABASE SWITCHOVER TO Standby VERIFY; 
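An additional lag check can be run from SQL*Plus on the standby using the standard Data Guard statistics view (nothing Exacc-specific):

select name, value, time_computed
from   v$dataguard_stats
where  name in ('transport lag', 'apply lag');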


sudo dbaascli dataguard status --dbname DBNAME --details yes   ( use the exact DB name as registered in CRS )
The logfile will be under /var/opt/oracle/log/dbaastools_launcher/



Below are pre-checks based on known issues we have faced:

ISSUE – Parameters missing in the .ini file
FIX – Add the parameters dg_observer=no and dg_observer_host=no to the file /var/opt/oracle/creg/<dbname>.ini

ISSUE – Password in the wallet is not in sync between PRIMARY and STANDBY
FIX – Copy the cwallet.sso with the correct password to the STANDBY node.





Switchover 

sudo dbaascli dataguard switchover --dbname DBNAME   ( use the exact DB name as registered in CRS )



Failover 

sudo dbaascli dataguard failover --dbname DBNAME   ( use the exact DB name as registered in CRS )



Post-checks 

--> Confirm the DGMGRL status is OK and there is no lag
DGMGRL> show configuration
DGMGRL> show database frkvasp1_sb2
DGMGRL> validate database  verbose standby_sb2

select database_role, switchover_status from v$database;   ( on both primary and standby )
select INST_ID , DEST_ID , max(SEQUENCE#) from gv$archived_log group by INST_ID , DEST_ID ;

sudo dbaascli dataguard status --dbname DBNAME --details yes


Reference : 

https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/dbaascli-dataguard-switchover.html

https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/administer-data-guard-configuration.html#GUID-9D5B2117-AA20-4468-A5AC-BD2E624C8844