Saturday, June 12, 2021

Oracle Datapump : LOGTIME=ALL METRICS=YES , KEEP_MASTER , and increasing parallelism for an active export/import

 
We have all been using Data Pump, but below are a few lesser-known features that can make life easier.


####################### 
METRICS=YES LOGTIME=ALL
####################### 


During a migration it is critical to know how long each step takes so that the migration time can be estimated.
The LOGTIME parameter adds a timestamp, down to the millisecond, in front of every line in the logfile, which can be helpful in diagnosing I/O issues. Combined with METRICS=YES, the log also records each task completed by the worker:

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr METRICS=YES LOGTIME=ALL

15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13      Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13      Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5      Completed 1714 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 1 39 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 2 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 3 85 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 4 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 5 25 TABLE objects in 1 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 6 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 7 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5      Completed by worker 8 111 TABLE objects in 6 seconds
15-NOV-17 11:48:44.266: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA


####################### 
KEEP_MASTER
####################### 

KEEP_MASTER prevents the Data Pump master table from being deleted after an export/import job completes.
There are cases where we want to keep the master table so it can be analysed later.
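
A minimal export that retains the master table might look like this (directory, dump file, and schema names are illustrative). The master table stays in the exporting user's schema, named after the job (for a schema-mode export, SYS_EXPORT_SCHEMA_01), and can then be queried like any other table, as the queries below show:

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_keep.dmp SCHEMAS=hr KEEP_MASTER=YES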

SQL> select owner,segment_name,segment_type,tablespace_name,(bytes/1024/1024)MB from dba_segments where segment_name='SYS_EXPORT_TABLE_01';
SQL> select OBJECT_TYPE,OBJECT_NAME,OBJECT_SCHEMA,ORIGINAL_OBJECT_SCHEMA,ORIGINAL_OBJECT_NAME,OBJECT_TABLESPACE,SIZE_ESTIMATE,OBJECT_ROW from SYS_SQL_FILE_FULL_01 where ORIGINAL_OBJECT_SCHEMA is not null;
OBJECT_TYPE            -> the object type.
OBJECT_SCHEMA          -> the schema into which the object will be imported.
ORIGINAL_OBJECT_SCHEMA -> the original object's schema name.
OBJECT_TABLESPACE      -> the tablespace into which the object will be imported.
SIZE_ESTIMATE          -> estimated size of the table in bytes.


####################### 
Increasing parallelism for an ongoing job 
####################### 

We have all been in situations where we want to increase or decrease the parallelism of a running Data Pump job. This can be done as follows.

To change the level of parallelism of a running Data Pump job:
- identify the job name: select owner_name, job_name from dba_datapump_jobs;
- attach to it: impdp attach=schema.job_name (use expdp for an export job)
- change the parallelism: parallel=n (n = number of parallel workers)
- wait a moment for the change to take effect
- confirm the change: status
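
A minimal interactive session might look like this (job and schema names are illustrative); parallel sets the new worker count, status confirms it, and continue_client returns to logging mode:

SQL> select owner_name, job_name, state from dba_datapump_jobs;

$ impdp system attach=SYSTEM.SYS_IMPORT_FULL_01
Import> parallel=8
Import> status
Import> continue_client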


####################### 
Patch 21539301 : serial index creation in impdp 
####################### 

Previously, Data Pump would create indexes one at a time, specifying the PARALLEL keyword for the CREATE INDEX statement to invoke parallel query for index creation. We used to recommend a workaround to create indexes in parallel, which involved a three-step process: import without indexes, generate a SQLFILE containing the CREATE INDEX statements, and split that file so the statements could be run from multiple sessions.
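
As a sketch, the old three-step workaround looked roughly like this (directory, dump file, and schema names are illustrative):

# 1. import everything except the indexes
impdp system DIRECTORY=dp_dir DUMPFILE=expdat.dmp SCHEMAS=hr EXCLUDE=INDEX

# 2. write the CREATE INDEX statements to a script instead of executing them
impdp system DIRECTORY=dp_dir DUMPFILE=expdat.dmp SCHEMAS=hr INCLUDE=INDEX SQLFILE=create_indexes.sql

# 3. split create_indexes.sql into several scripts and run them concurrently in separate sessions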

Through extensive performance testing we found that it is faster to create multiple indexes in parallel (using a parallel degree of 1) instead of creating a single index using parallel query processes. This is enabled by the patch for bug 18793090, superseded by patch 21539301 

We still need to check whether a similar patch is available in 19c.


####################### 
Other Useful Options : 
####################### 

Export Filtering Parameters

/* EXCLUDE & INCLUDE */
EXCLUDE=INDEX
EXCLUDE=PROCEDURE
EXCLUDE=TABLE:"LIKE 'EMP%'"
EXCLUDE=SCHEMA:"='HR'"
INCLUDE=TABLE:"IN ('EMP', 'DEPT')"
# When used on the command line, use backslashes to escape single and double quotation marks:
expdp .. schemas=SCOTT EXCLUDE=TABLE:\"=\'EMP\'\"
/* QUERY */
QUERY=OE.ORDERS: "WHERE order_id > 100000 ORDER BY order_date desc" 



Export Remapping Parameters

/* REMAP_DATA (11g) */
-- the remapping function must not issue COMMIT or ROLLBACK
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
expdp hr/passwd DIRECTORY=dp_dir DUMPFILE=remap.dmp
TABLES=hr.employees REMAP_DATA=hr.employees.last_name:hr.remap_pckg.modifychar



Export Encryption Parameters

(11g): To secure the exported dump file, the following parameters were introduced in Oracle 11g
Data Pump: ENCRYPTION, ENCRYPTION_PASSWORD and ENCRYPTION_ALGORITHM. To enable
encryption, you must specify either the ENCRYPTION or ENCRYPTION_PASSWORD parameter, or both.

ENCRYPTION = {all | data_only | encrypted_columns_only | metadata_only | none}
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }

expdp hr DUMPFILE=dp_dir:hr_enc.dmp JOB_NAME=enc ENCRYPTION=data_only
ENCRYPTION_PASSWORD=mypassword

expdp hr DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp
 ENCRYPTION=all ENCRYPTION_PASSWORD=mypassword
 ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=dual 
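
When a dump file is encrypted with a password, the same password must be supplied at import time; a minimal sketch (directory and file names are illustrative):

impdp hr DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp ENCRYPTION_PASSWORD=mypassword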




Export Estimating Parameters

ESTIMATE={BLOCKS | STATISTICS}
ESTIMATE_ONLY=y
expdp system/pswd estimate_only=y





Export Network Link Parameter

You can initiate an export job from your server and have Data Pump export data from a remote
database to dump files located on the instance from which you initiate the Data Pump export job.

Data can still be exported over a network link from a READ ONLY database.

expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp
-- more detailed steps:
-- scenario: I will take a data pump export from database ORCL
-- and dumpfile will be written to database TIGER

sqlplus sa/a@tiger
create database link orcl.net using 'ORCL';
OR
Create database link orcl.net connect to sa identified by a
using '(DESCRIPTION=(ADDRESS = (PROTOCOL = TCP)(HOST =10.4.x.x) (PORT=1521))
(connect_data=(service_name=orcl)))';

select * from dual@orcl.net;

$expdp arju/a@tiger directory=d schemas=arju dumpfile=arju_dump_from_orcl.dmp  network_link=orcl.net




Expdp:
FLASHBACK_TIME=SYSTIMESTAMP, PARALLEL=4, FILESIZE=5G, EXCLUDE=STATISTICS, DUMPFILE=TEST_%U.dmp, COMPRESSION=ALL, COMPRESSION_ALGORITHM=MEDIUM, EXCLUDE=CLUSTER, EXCLUDE=DB_LINK

Impdp:
TRANSFORM=LOB_STORAGE:SECUREFILE, REMAP_TABLESPACE=%:DATA
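
Assembled into full commands, these options might look like the following sketch (directory, schema, and file names are illustrative). Note that COMPRESSION=ALL requires the Advanced Compression option:

expdp system DIRECTORY=dp_dir DUMPFILE=TEST_%U.dmp LOGFILE=test_exp.log SCHEMAS=TEST \
    FLASHBACK_TIME=SYSTIMESTAMP PARALLEL=4 FILESIZE=5G EXCLUDE=STATISTICS \
    COMPRESSION=ALL COMPRESSION_ALGORITHM=MEDIUM METRICS=YES LOGTIME=ALL

impdp system DIRECTORY=dp_dir DUMPFILE=TEST_%U.dmp LOGFILE=test_imp.log \
    TRANSFORM=LOB_STORAGE:SECUREFILE REMAP_TABLESPACE=%:DATA PARALLEL=4 METRICS=YES LOGTIME=ALL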


####################### 
References : 
####################### 

https://docs.oracle.com/database/121/SUTIL/GUID-2E7A8123-F8D8-4C62-AE4D-8A3F1753E6D3.htm#SUTIL3851

https://docs.oracle.com/database/121/SUTIL/GUID-56B6B4EA-5A2B-45B8-9B41-8966378C2C3D.htm#SUTIL4281

Sunday, June 6, 2021

Oracle Convert Rac Active-Active - To One Node Rac Active/Passive

We recently came across a situation where the application did not support RAC active/active, so we had to convert the database to RAC active/passive.

Below are the steps to convert a RAC active/active database to RAC One Node (active/passive).



a) Check Status of RAC database
[RACDB1@Abdul-rac1 ] $srvctl status database -d RACDB
Instance RACDB1 is running on node Abdul-rac1
Instance RACDB2 is running on node Abdul-rac2

b) Shut down one of the instances before converting the database to a RAC One Node configuration.
[RACDB1@Abdul-rac1 ] $srvctl stop instance -d RACDB -i RACDB2


c) Only one instance may remain registered in the cluster before converting the database to a RAC One Node configuration.

[RACDB1@Abdul-rac1 ] $srvctl convert database -d RACDB -c RACONENODE -i RACDB1
PRCD-1214 : Administrator-managed RAC database RACDB has more than one instance

The above error means only one database instance may be registered and active when converting to RAC One Node, so remove the second instance:

[RACDB1@Abdul-rac1 ] $srvctl remove instance -d RACDB -i RACDB2
Remove instance from the database RACDB? (y/[n]) y



d) It is mandatory to have at least one service defined for the database before you convert it to RACONENODE mode; otherwise you will get an error.

[RACDB1@Abdul-rac1 ] $srvctl convert database -d RACDB -c RACONENODE -i RACDB1

PRCD-1242 : Unable to convert RAC database RACDB to RAC One Node database because the database had no service added

[RACDB1@Abdul-rac1 ] $srvctl add  service -d RACDB -s TEST -preferred RACDB1



e) Use the srvctl convert command to change the database from RAC to RACONENODE.

[RACDB1@Abdul-rac1 ] $srvctl convert database -d RACDB -c RACONENODE -i RACDB

$ srvctl -h | grep convert
Usage: srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
Usage: srvctl convert database -d <db_unique_name> -c RACONENODE [-i <instname>] [-w <timeout>]



f) Verify One Node RAC configuration.

[RACDB1@Abdul-rac1 ] $srvctl config database -d RACDB
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +ASMDATA/RACDB/spfileRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RACDB
Database instances:
Disk Groups: DATA, FRA
Mount point paths:
Services: TEST
Type: RACOneNode    <<<<<<< The database has successfully turned into 1-Node Configuration.
Online relocation timeout: 30
Instance name prefix: RACDB1
Candidate servers: Abdul-rac1
Database is administrator managed


g) If you want to relocate a RAC One Node database from one node to another, use the following syntax. The default shutdown mode for a manual relocation is SHUTDOWN TRANSACTIONAL.

[RACDB1@Abdul-rac1 ]srvctl relocate database -d RACDB -n Abdul-rac2 


srvctl relocate database -h

srvctl relocate database -db <db_unique_name> {[-node <target>] [-timeout <timeout>] [-stopoption <stop_option>] | -abort [-revert]} [-drain_timeout <timeout>] [-verbose]
-db <db_unique_name> Unique name of database to relocate
-node <target> Target node to which to relocate database
-timeout <timeout> Online relocation timeout in minutes (only for RAC One Node database)
-abort Abort failed online relocation (only for RAC One Node database)
-revert Remove target node of failed online relocation request from the candidate server list of administrator-managed RAC One Node database
-stopoption <stop_option> Override default shutdown option for running instance (only NORMAL allowed)
-drain_timeout <drain_timeout> Service drain timeout specified in seconds
-verbose Verbose output
-help Print usage




The relocation may fail with the undo error below, which is fixed by pointing each instance at its own undo tablespace:

$ srvctl relocate database -d rontest -n node3
PRCD-1222 : Online relocation of database rontest failed but database was restored to its original state
PRCD-1129 : Failed to start instance rontest_2 for database rontest
PRCR-1064 : Failed to start resource ora.rontest.db on node node3
CRS-5017: The resource action "ora.rontest.db start" encountered the following error:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use
Process ID: 1587
Session ID: 35 Serial number: 1
CRS-2674: Start of 'ora.rontest.db' on 'node3' failed



SQL> select tablespace_name from dba_data_files where tablespace_name like '%UNDO%';
TABLESPACE_NAME
------------------------------
UNDOTBS1
UNDOTBS2
So the tablespace was there, but the initialisation parameter was wrong! Let’s correct this:
SQL> alter system set undo_tablespace='UNDOTBS1' sid='rontest_1';
System altered.
SQL> alter system set undo_tablespace='UNDOTBS2' sid='rontest_2';



If we want to remove and re-add the database in the cluster:

1) Stop the database --> srvctl stop database -d dbname
2) Remove the database from the cluster --> srvctl remove database -d dbname
3) Re-add the database to the cluster using the command below:
srvctl add database -dbname database_name -dbtype RACONENODE -oraclehome /u01/19c -instance database_name -spfile asm_spfile_path -pwfile password_file_path -server server1,server2 -db database_name

srvctl add database -db db_unique_name [-eval]
     -oraclehome oracle_home [-node node_name] [-domain domain_name]
     [-spfile spfile] [-pwfile password_file_path] [-dbtype {RACONENODE | RAC |
     SINGLE} [-server server_list]
     [-instance instance_name] [-timeout timeout]]
     [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}]
     [-startoption start_options] [-stopoption stop_options] [-dbname db_name]
     [-acfspath acfs_path_list] [-policy {AUTOMATIC | MANUAL | NORESTART}]
     [-serverpool server_pool_list [-pqpool pq_pool_list]]
     [-diskgroup disk_group_list] [-verbose]





If we need to modify the instance name:

If we want to change the instance name, the cleanest way is to remove and re-add the database. Alternatively, the command below can be used to hard-code an instance name to each node. Normally, if the instance prefix given at conversion time is OBCDATA, the instance on the first node is OBCDATA_1. After a server crash the same instance name moves to the other node, i.e. the instance comes up as OBCDATA_1 on node 2. After a manual relocation, however, the instance always comes up as OBCDATA_2, because during an online relocation two instances cannot have the same name.
Using the option below forces each node to always have its own instance name, even in the case of a server failover.

srvctl modify instance -d dbname -i  instance_name  -n name 




References :

https://docs.oracle.com/en/database/oracle/oracle-database/19/rilin/installing-oracle-rac-rac-one-node-software-only.html#GUID-E865878B-C328-4368-A8F5-C7B2CD81172B

Instance Name changes during RAC One Node Failover/Switchover ( Doc ID 1540629.1 )

Naming Of RAC ONE NODE Instances Across RAC Nodes ( Doc ID 2063599.1 )



Oracle 19c AutoUpgrade utility


The Oracle Database AutoUpgrade utility is a new feature in Oracle 19c that automates the upgrade process: it identifies issues before the upgrade, performs pre-upgrade actions, deploys the upgrade and performs post-upgrade actions. You can upgrade multiple databases at the same time using a single configuration file.


The steps below were done for testing purposes only; I personally use DBUA.

AutoUpgrade Tool | AutoUpgrade has 4 modes:
-analyze -fixups -upgrade -deploy

 
Below are the High level steps:

Install Oracle 19.3.0.0 binaries
Prerequisite for Autoupgrade
Create the config file
Analyze the database
Deploy the upgrade
Post upgrade task



Environment Details:
Source Hostname: new19c
Database version: 12.2.0.1
Database Name: Abdultest
ORACLE_HOME: /u01/app/oracle/product/12.2.0/db_1/
Target Hostname: new19c 
Database version: 19.3.0.0
Database name: Abdultest
ORACLE_HOME: /u01/app/oracle/product/19.3.0/dbhome_1

Source DB Details
SQL> select name, open_mode, version, status from v$database, v$instance;
NAME      OPEN_MODE            VERSION           STATUS
--------- -------------------- ----------------- ------------
ABDULTEST  READ WRITE           12.2.0.1.0        OPEN




1. Install Oracle 19.3.0.0 binaries

I have already completed the installation of 19c (a standard step-by-step installation of Oracle 19.3.0.0 on Linux 7 64-bit).


2. Prerequisite for Autoupgrade   : Download the latest autoupgrade.jar file

The AutoUpgrade utility (autoupgrade.jar) exists by default under the $ORACLE_HOME/rdbms/admin directory from Oracle 19.3 onwards; however, Oracle strongly recommends downloading the latest AutoUpgrade version before doing the upgrade. Refer to MOS note 2485457.1.

Replace the autoupgrade.jar with the latest version downloaded

[oracle@new19c ~]$ mv $ORACLE_HOME/rdbms/admin/autoupgrade.jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar-bkp
[oracle@new19c ~]$ cp /tmp/autoupgrade.jar $ORACLE_HOME/rdbms/admin/
[oracle@new19c ~]$

[oracle@new19c ~]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -version  

build.hash e84c9c2
build.version 19.10.0
build.date 2020/10/23 10:36:46
build.max_target_version 19
build.supported_target_versions 12.2,18,19
build.type production


Java version
The Java version should be 8 or later; it is available by default in Oracle Database homes from release 12.1.0.2 onwards.

[oracle@new19c temp]$ $ORACLE_HOME/jdk/bin/java -version
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)



3. Create the config file

Create a directory to hold all upgrade config and log files.

[oracle@new19c ~]$ mkdir /u01/19c-autoupg
[oracle@new19c ~]$ cd /u01/19c-autoupg

Create the sample config file
cd /u01/19c-autoupg
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/jdk/bin

$ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -create_sample_file config

--output
Created sample configuration file /u01/19c-autoupg/sample_config.cfg


Modify the config file

Copy the sample config file and make the necessary changes as per the database environment.
cd /u01/19c-autoupg
cp sample_config.cfg abdultest_db_config.cfg
vi abdultest_db_config.cfg

This is the config file I used for upgrade:
[oracle@new19c 19c-autoupg]$ cat abdultest_db_config.cfg
global.autoupg_log_dir=/u01/19c-autoupg/upg_logs
#
# Database abdultest
#
upg1.dbname=abdultest
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0/db_1/
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=abdultest
upg1.log_dir=/u01/19c-autoupg/upg_logs/abdultest
upg1.upgrade_node=new19c
upg1.target_version=19.3
upg1.run_utlrp=yes
upg1.timezone_upg=yes




4. Analyze the database
AutoUpgrade analyze mode checks your database to see if it is ready for the upgrade. It only reads data from the database and does not perform any updates.

Execute AutoUpgrade in analyze mode with the syntax below:
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/jdk/bin

cd /u01/19c-autoupg
$ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config abdultest_db_config.cfg -mode ANALYZE


Output
[oracle@new19c 19c-autoupg]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config cdbdev_db_config.cfg -mode ANALYZE

AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be analyzed
Type 'help' to list console commands

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+----------------------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|                     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+----------------------------+
| 100| cdbdev|PRECHECKS|PREPARING|RUNNING|20/11/19 03:27|03:27:28|Loading database information|
+----+-------+---------+---------+-------+--------------+--------+----------------------------+
Total jobs 1
upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]
Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 100 for abdultest


 
We can monitor, manage and control the jobs from the AutoUpgrade console, for example:
lsj    – list the jobs
status – show the job status
tasks  – show the tasks executing
logs   – show the log folders


All Analyze logs are created under autoupg_log_dir

[oracle@new19c 100]$ cd /u01/19c-autoupg/upg_logs/abdultest/abdultest/100/prechecks/
[oracle@new19c prechecks]$ ls -l
total 536
-rwx------. 1 oracle oinstall   5051 Nov 19 03:28 abdultest_checklist.cfg
-rwx------. 1 oracle oinstall  18050 Nov 19 03:28 abdultest_checklist.json
-rwx------. 1 oracle oinstall  17101 Nov 19 03:28 abdultest_checklist.xml
-rwx------. 1 oracle oinstall  36704 Nov 19 03:28 abdultest_preupgrade.html
-rwx------. 1 oracle oinstall  17649 Nov 19 03:28  abdultest_vpreupgrade.log
-rwx------. 1 oracle oinstall 158030 Nov 19 03:28 prechecks_cdb_root.log
-rwx------. 1 oracle oinstall 140241 Nov 19 03:28 prechecks_pdbdev.log
-rwx------. 1 oracle oinstall 139243 Nov 19 03:28 prechecks_pdb_seed.log



We can review the html file (abdultest_preupgrade.html) which will list all precheck Errors, warnings and recommendations.


5. Deploy the upgrade

AutoUpgrade deploy mode performs the actual upgrade of the database, from the pre-upgrade analysis of the source database through to the post-upgrade checks.

Note: Before deploying the upgrade, you must have a backup plan in place.

Execute the autoupgrade in DEPLOY mode using the below syntax,

export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/jdk/bin

cd /u01/19c-autoupg
$ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config abdultest_db_config.cfg -mode DEPLOY

Once the upgrade process has started, monitor the logs to follow the progress of the upgrade. AutoUpgrade logs are available under:

/u01/19c-autoupg/upg_logs/abdultest/abdultest/101/dbupgrade



Output

[oracle@new19c 19c-autoupg]$ $ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config abdultest_db_config.cfg -mode DEPLOY

AutoUpgrade tool launched with default options

Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be processed
Type 'help' to list console commands
upg> tasks
+--+-------------+-------------+
|ID|         NAME|         Job#|
+--+-------------+-------------+
| 1|         main|      WAITING|
|35|     jobs_mon|      WAITING|
|36|      console|     RUNNABLE|
|37| queue_reader|      WAITING|
|38|        cmd-0|      WAITING|
|54|job_manager-0|      WAITING|
|56|   event_loop|TIMED_WAITING|
|57|   bqueue-101|      WAITING|
|61|     quickSQL|     RUNNABLE|
+--+-------------+-------------+
upg>
upg> logs
AutoUpgrade logs folder [/u01/19c-autoupg/upg_logs/cfgtoollogs/upgrade/auto]
logs folder [abdultest][/u01/19c-autoupg/upg_logs/abdultest/abdultest]
upg> lsj
+----+-------+---------+---------+--------+--------------+--------+----------------------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME| UPDATED|                     MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+----------------------------+
| 101| abdultest|PREFIXUPS|EXECUTING|FINISHED|20/11/19 03:46|03:48:44|Loading database information|
+----+-------+---------+---------+--------+--------------+--------+----------------------------+
Total jobs 1
upg> lsj
+----+-------+-----+---------+-------+--------------+--------+----------------------+
|Job#|DB_NAME|STAGE|OPERATION| STATUS|    START_TIME| UPDATED|               MESSAGE|
+----+-------+-----+---------+-------+--------------+--------+----------------------+
| 101| abdultest|DRAIN|EXECUTING|RUNNING|20/11/19 03:46|03:48:52|Shutting down database|
+----+-------+-----+---------+-------+--------------+--------+----------------------+
Total jobs 1
upg> status
---------------- Config -------------------
User configuration file    [/u01/19c-autoupg/abdultest_db_config.cfg]
General logs location      [/u01/19c-autoupg/upg_logs/cfgtoollogs/upgrade/auto]
Mode                       [DEPLOY]
DB upg fatal errors        ORA-00600,ORA-07445
DB Post upgrade abort time [60] minutes
DB upg abort time          [1440] minutes
DB restore abort time      [120] minutes
DB GRP abort time          [3] minutes
------------------------ Jobs ------------------------
Total databases in configuration file [1]
Total Non-CDB being processed         [0]
Total CDB being processed             [1]
Jobs finished successfully            [0]
Jobs finished/aborted                 [0]
Jobs in progress                      [1]
Jobs stage summary
    Job ID: 101
    DB name: abdultest
        SETUP             <1 min
        GRP               <1 min
        PREUPGRADE        <1 min
        PRECHECKS         <1 min
        PREFIXUPS         1 min
        DRAIN             <1 min
        DBUPGRADE         12 min (IN PROGRESS)
------------ Resources ----------------
Threads in use                        [32]
JVM used memory                       [115] MB
CPU in use                            [13%]
Processes in use                      [18]
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|             MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------------------+
| 101| cdbdev|DBUPGRADE|EXECUTING|RUNNING|20/11/19 03:46|04:38:42|70%Upgraded CDB$ROOT|
+----+-------+---------+---------+-------+--------------+--------+--------------------+
Total jobs 1
upg> /
+----+-------+---------+---------+-------+--------------+--------+--------------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|             MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------------------+
| 101| cdbdev|DBUPGRADE|EXECUTING|RUNNING|20/11/19 03:46|06:31:20|95%Upgraded PDB$SEED|
+----+-------+---------+---------+-------+--------------+--------+--------------------+
Total jobs 1
upg> /
+----+-------+----------+---------+-------+--------------+--------+-------------+
|Job#|DB_NAME|     STAGE|OPERATION| STATUS|    START_TIME| UPDATED|      MESSAGE|
+----+-------+----------+---------+-------+--------------+--------+-------------+
| 101| cdbdev|POSTFIXUPS|EXECUTING|RUNNING|20/11/19 03:46|06:38:37|Remaining 1/9|
+----+-------+----------+---------+-------+--------------+--------+-------------+
Total jobs 1
upg> /
+----+-------+----------+---------+-------+--------------+--------+----------------------------+
|Job#|DB_NAME|     STAGE|OPERATION| STATUS|    START_TIME| UPDATED|                     MESSAGE|
+----+-------+----------+---------+-------+--------------+--------+----------------------------+
| 101| cdbdev|POSTFIXUPS|EXECUTING|RUNNING|20/11/19 03:46|06:43:37|Loading database information|
+----+-------+----------+---------+-------+--------------+--------+----------------------------+
Total jobs 1
upg> /
+----+-------+-----------+---------+-------+--------------+--------+---------------------+
|Job#|DB_NAME|      STAGE|OPERATION| STATUS|    START_TIME| UPDATED|              MESSAGE|
+----+-------+-----------+---------+-------+--------------+--------+---------------------+
| 101| cdbdev|POSTUPGRADE|EXECUTING|RUNNING|20/11/19 03:46|06:45:16|Creating final SPFILE|
+----+-------+-----------+---------+-------+--------------+--------+---------------------+
Total jobs 1
upg> /
+----+-------+-----------+---------+-------+--------------+--------+----------+
|Job#|DB_NAME|      STAGE|OPERATION| STATUS|    START_TIME| UPDATED|   MESSAGE|
+----+-------+-----------+---------+-------+--------------+--------+----------+
| 101| cdbdev|POSTUPGRADE|EXECUTING|RUNNING|20/11/19 03:46|06:46:01|Restarting|
+----+-------+-----------+---------+-------+--------------+--------+----------+
Total jobs 1
upg> Job 101 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]
Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 101 for cdbdev



---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from cdbdev: drop restore point AUTOUPGRADE_9212_CDBDEV122010

[oracle@new19c 19c-autoupg]$

Check the upg_summary.log
Oracle Database Release 19 Post-Upgrade Status Tool    11-19-2020 06:30:0
Container Database: CDBDEV
[CON_ID: 2 => PDB$SEED]
Component                               Current         Full     Elapsed Time
Name                                    Status          Version  HH:MM:SS
Oracle Server                             VALID      19.3.0.0.0  00:34:10
JServer JAVA Virtual Machine              VALID      19.3.0.0.0  00:02:25
Oracle XDK                                VALID      19.3.0.0.0  00:01:19
Oracle Database Java Packages             VALID      19.3.0.0.0  00:00:10
OLAP Analytic Workspace                   VALID      19.3.0.0.0  00:00:55
Oracle Label Security                     VALID      19.3.0.0.0  00:00:11
Oracle Database Vault                     VALID      19.3.0.0.0  00:03:00
Oracle Text                               VALID      19.3.0.0.0  00:00:42
Oracle Workspace Manager                  VALID      19.3.0.0.0  00:00:52
Oracle Real Application Clusters     OPTION OFF      19.3.0.0.0  00:00:00
Oracle XML Database                       VALID      19.3.0.0.0  00:03:32
Oracle Multimedia                         VALID      19.3.0.0.0  00:00:46
Spatial                                   VALID      19.3.0.0.0  00:09:15
Oracle OLAP API                           VALID      19.3.0.0.0  00:00:26
Datapatch                                                        00:04:50
Final Actions                                                    00:05:11
Post Upgrade                                                     00:02:06
Post Compile                                                     00:11:29
Total Upgrade Time: 01:17:50 [CON_ID: 2 => PDB$SEED * ]
Asterisks denotes compilation time has been included during the upgrade process.
Database time zone version is 26. It is older than current release time
zone version 32. Time zone upgrade is needed using the DBMS_DST package.
Upgrade Times Sorted In Descending Order
Total Upgrade Time: 01:17:50 [CON_ID: 2 => PDB$SEED * ]
Total Upgrade Time: 01:06:31 [CON_ID: 1 => CDB$ROOT]
Total Upgrade Time: 01:06:12 [CON_ID: 3 => PDBDEV]
Grand Total Upgrade Time:    [0d:2h:42m:43s]



[oracle@new19c dbupgrade]$
The time zone file upgrade and database recompilation were already completed by the AutoUpgrade utility because the following values were set to "yes" in the config file:
upg1.run_utlrp=yes    # yes (default) runs utlrp as part of the upgrade
upg1.timezone_upg=yes # yes (default) upgrades the time zone file if needed
Check the Timezone version
SQL> SELECT VERSION FROM V$TIMEZONE_FILE;
   VERSION
----------
        32
SQL>
Check the db details
SQL> select name, open_mode, version, status from v$database, v$instance;
NAME      OPEN_MODE            VERSION           STATUS
--------- -------------------- ----------------- ------------
ABDULTEST   READ WRITE           19.0.0.0.0        OPEN
SQL>
6. Post-upgrade task
Once the upgrade is successful and all testing is done, drop the restore point.
Drop the Guaranteed restore point
SQL> select name from v$restore_point;
NAME
------------------------------
AUTOUPGRADE_9212_ABDULTEST122010
SQL>
SQL> drop restore point AUTOUPGRADE_9212_ABDULTEST122010;
Restore point dropped.
SQL>


Change the compatible parameter

Note: After the upgrade, the database has to be tested thoroughly before updating the COMPATIBLE parameter. Once the parameter is raised, the database can no longer be downgraded.
show parameter compatible
alter system set compatible='19.0.0' scope=spfile;
shutdown immediate;
startup;
show parameter compatible


--output
SQL> show parameter compatible
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      12.2.0
noncdb_compatible                    boolean     FALSE
SQL> alter system set compatible='19.0.0' scope=spfile;
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1560278096 bytes
Fixed Size                  9135184 bytes
Variable Size             973078528 bytes
Database Buffers          570425344 bytes
Redo Buffers                7639040 bytes
Database mounted.
Database opened.
SQL> 
SQL> show parameter compatible
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      19.0.0
noncdb_compatible                    boolean     FALSE
SQL>
It’s Done. Database is successfully upgraded from 12c to 19c.



Reference :
https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/using-autoupgrade-oracle-database-upgrades.html#GUID-71883C8C-7A34-4E93-8955-040CB04F2109 

Tuesday, June 1, 2021

Oracle Adaptive LGWR and Adaptive Plans -- Nightmare in 12c database

Oracle introduced adaptive features in 12c as an enhancement for automatic tuning. However, two of these features gave us nightmares: Adaptive LGWR and adaptive plans.


Adaptive LGWR  : 

Adaptive LGWR tries to determine whether it is better to use serial or parallel LGWR and switches between the two dynamically. However, this may lead to a deadlock / database hang, ORA-742 "Log read detects lost write", ORA-600 [kcrfrgv_nextlwn_scn] during instance OPEN, or ORA-600 [krr_process_read_error_2] during recovery on IBM AIX and potentially on HP-UX Itanium 64-bit, as reported in Bug 21915719 (Doc ID 21915719.8).

This can be disabled


To use only serial LGWR:

   _use_single_log_writer=TRUE


To use only parallel LGWR:

   _use_single_log_writer=FALSE

MEMO: "ADAPTIVE" is the default value; it will toggle between TRUE and FALSE depending on the workload. Adaptive means the instance starts up in parallel mode and may switch back to serial if it decides that is better.
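
As a sketch, and only after confirming with Oracle Support that this workaround applies to your version, the underscore parameter can be set in the spfile and takes effect after an instance restart:

SQL> alter system set "_use_single_log_writer"=TRUE scope=spfile sid='*';
-- restart the instance(s) for the change to take effect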



Adaptive plans:

Oracle introduced a new feature, Adaptive Query Optimization, in Oracle 12c to better estimate statistics and optimize plans. We got information from a customer that upgrading the underlying Oracle Database 11g to Oracle Database 12c brought up obvious performance issues and SQL plan flips; in that customer's system this feature was the cause of the performance problems.

This feature is enabled by default and can be turned off with a database setting.

In release 12.2, Oracle deprecated the optimizer_adaptive_features parameter and split it into two parameters: optimizer_adaptive_plans and optimizer_adaptive_statistics.


Solution
Oracle 12c 12.0/12.1
Use the following system setting to disable Adaptive Query Optimization:
ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_FEATURES = FALSE SCOPE=BOTH; 
(Default is TRUE)


Hint: The settings from Oracle 12.2 were backported to 12.1. See https://support.oracle.com/epmos/faces/DocContentDisplay?id=2187449.1 


Before changing settings, please verify whether your Oracle 12.1 database already has this backport installed.
Oracle 12c 12.2
Oracle has split the Optimizer Adaptive Features into two settings:

ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_PLANS = FALSE SCOPE=BOTH;
(Default is TRUE)

ALTER SYSTEM SET OPTIMIZER_ADAPTIVE_STATISTICS = FALSE SCOPE=BOTH;
(Default is FALSE)
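
To confirm the resulting settings, something like the following can be used:

SQL> show parameter optimizer_adaptive

SQL> select name, value, isdefault from v$parameter where name like 'optimizer_adaptive%';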




Oracle -- Importing Database Recovery Catalog

The goal is to merge two recovery catalogs, one from the 10.2 release and the other from 11g, into a single 11g release catalog schema.

This is also helpful for importing production catalog information into a DR catalog.


1. Connect to the destination recovery catalog.

$ rman

RMAN> connect catalog rman/rman@rman11


2. Issue the IMPORT CATALOG command connecting to source recovery catalog 

RMAN> import catalog rman1/rman1@rman10;

To specify which database to register:

RMAN> import catalog rman10/rman10@tenner dbid = 123456, 1234557;

RMAN> import catalog rman10/rman10@tenner db_name = testdb, mydb;

-- by default, the databases are unregistered from the source catalog:

RMAN> import catalog rman10/rman10@tenner NO UNREGISTER



3. Validate after import 

RMAN> list db_unique_name all 


Saturday, May 22, 2021

Oracle Rac -- Adding HAIP Manually : Redundant Interconnect



Oracle Grid Infrastructure from 11gR2 onwards provides RAC HAIP, which is link aggregation moved to the clusterware level. Instead of bonding the network adapters on the OS side, Grid Infrastructure is instructed to use multiple network adapters. Grid Infrastructure will still start HAIP even if the system is configured with only one private network adapter; the resource ora.cluster_interconnect.haip then shows as online.

Starting with 11gR2, Oracle supports up to four redundant interconnects that are automatically managed by the cluster for failover and load balancing.
The following procedure shows how to add a new private interconnect to an existing RAC cluster.


Before making this change, check on all nodes whether any resource is OFFLINE:
> All the CRS resources should be online.
> In one case, we had ADVM (12c) showing STABLE and we got errors.
> Check which private interfaces are already registered.

** IMPORTANT: Make sure your second/redundant private interface is on a DIFFERENT SUBNET.

List all the interfaces on the server.
Make sure that the new private interface you are going to add is correctly plumbed.
It has to use the same interface name on all other nodes.


Each NIC used in the HAIP configuration must be in its own subnet; if the same subnet is used and the NIC whose subnet appears first in the routing table fails, you can experience a node eviction.

Each NIC defined as a cluster interconnect on a given node has a static IP address (private IP) assigned to it, and each cluster interconnect NIC on a given node must be on a unique subnet. If any one of the cluster interconnect NICs is down on a node, then the subnet associated with the down NIC is considered unusable by any node of the cluster.







#####################################
Check HAIP Information  Before Adding
#####################################

[oracle@host01 bin]$ ./crsctl stat res -t -init

---------------------------------------------------------------------------------------------------- 

Name          Target  State        Server                   State details      Cluster Resources
---------------------------------------------------------------------------------------------------- 
ora.asm  1        ONLINE  ONLINE       host01                   Started,STABLE
ora.cluster_interconnect.haip   1        ONLINE  ONLINE       host01                   STABLE



oifcfg shows that only one adapter is defined for the cluster interconnect.

[oracle@host01 bin]$ ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.10.0  global  cluster_interconnect



The ifconfig command shows that network device eth2 is part of two subnets.

[oracle@host01 bin]$ ifconfig -a

eth0      Link encap:Ethernet  HWaddr 08:00:27:98:EA:FE 
         inet addr:192.168.56.71  Bcast:192.168.56.255  Mask:255.255.255.0
         inet6 addr: fe80::a00:27ff:fe98:eafe/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
         RX packets:947 errors:0 dropped:0 overruns:0 frame:0
         TX packets:818 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:100821 (98.4 KiB)  TX bytes:92406 (90.2 KiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:738f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:406939 errors:0 dropped:0 overruns:0 frame:0
          TX packets:382298 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:445270636 (424.6 MiB)  TX bytes:202801222 (193.4 MiB)
 

eth2:1    Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.225.190  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1





When Grid Infrastructure is stopped, the ifconfig command no longer shows the eth2:1 device. The gv$cluster_interconnects view shows the HAIP addresses for each instance.

select     
     inst_id,
     name,
     ip_address
  from
     gv$cluster_interconnects;


   INST_ID NAME            IP_ADDRESS
---------- --------------- ----------------
         1 eth2:1          192.168.225.190
         2 eth2:1          192.168.230.98





#####################################
Adding Secondary Private Network / HAIP 
#####################################

While HAIP is running, there is no redundancy or additional network bandwidth because only one network interface is configured. If a second network interface is available for the private network, it will need to be added to Grid Infrastructure. The device needs to be a well-configured network adapter in the operating system. The new network interface needs to have the same configuration as the current interface, i.e. both must be on the same subnet, have the same MTU size, etc.  The oifcfg command is used to set the new interface as a cluster_interconnect device.

$ oifcfg iflist
eth0  10.0.2.0        <--local router
eth1  192.168.56.0    <-- public Interface
eth2  192.168.10.0     <-- RAC cluster_interconnect
eth2  192.168.0.0     <-- RAC used  
eth3  192.168.0.0     <-- Our new device we want to add to the cluster_interconnect




oifcfg iflist -p -n   (example output only)

eth0  192.168.4.0  PRIVATE  255.255.255.0
eth1  192.168.0.128  PRIVATE  255.255.255.128
eth1  192.168.0.0  UNKNOWN  255.255.0.0
Note:
– The first column is the network adapter name.
– The second column is the subnet ID.
– The third column indicates whether it's private, public or unknown according to the RFC standard; it has NOTHING to do with whether it's used as a private or public network in Oracle Clusterware.
– The last column is the netmask.


oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>

[oracle@host01 bin]$ ./oifcfg setif -global  eth3/192.168.10.0:cluster_interconnect,asm



The device eth3 is now part of the Cluster Interconnect. The commands do not need to be repeated on all nodes as Grid Infrastructure takes care of that for us. On host02, the device is already configured.
 
[oracle@host02 bin]$ ./oifcfg getif

eth1  192.168.56.0  global  public
eth2  192.168.10.0  global  cluster_interconnect
eth3  192.168.10.0  global  cluster_interconnect

 

Grid Infrastructure needs to be restarted on all nodes.

[root@host01 bin]# ./crsctl stop crs
[root@host01 bin]# ./crsctl start crs




Once the cluster nodes are back up and running, the new interface will be part of the RAC HAIP configuration.

[root@host01 ~]# ifconfig -a

eth1      Link encap:Ethernet  HWaddr 08:00:27:98:EA:FE 
          inet addr:192.168.56.71  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe98:eafe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5215 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6593 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2469064 (2.3 MiB)  TX bytes:7087438 (6.7 MiB)

 
eth2      Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:738f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3517 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2771 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000

          RX bytes:789056 (770.5 KiB)  TX bytes:694387 (678.1 KiB)

 

eth2:1    Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.21.30  Bcast:192.168.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

 
eth3      Link encap:Ethernet  HWaddr 08:00:27:6A:8B:8A 
          inet addr:192.168.10.3  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6a:8b8a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:857 errors:0 dropped:0 overruns:0 frame:0
         TX packets:511 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:158563 (154.8 KiB)  TX bytes:64923 (63.4 KiB)

 
eth3:1    Link encap:Ethernet  HWaddr 08:00:27:6A:8B:8A 
          inet addr:192.168.170.240 Bcast:192.168.255.255 Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

 

The new interface is also found in the gv$cluster_interconnects view.

 select
     inst_id,
     name,
     ip_address
  from
     gv$cluster_interconnects;

 
   INST_ID NAME            IP_ADDRESS
---------- --------------- ----------------
         1 eth2:1          192.168.21.30
         1 eth3:1          192.168.170.240
         2 eth2:1         192.168.75.234
         2 eth3:1          192.168.188.35



#####################################
Testing  HAIP 
#####################################

Stop eth2 at OS level and verify eth3 
# ifconfig eth2 down
# ifconfig eth3
 
--> The cluster_interconnect HAIP failed over from eth2:1 to eth3:2 - eth2 is not used anymore

Re-enable eth2 again at OS level
# ifconfig eth2 up
--> Wait a few seconds and check that the failed-over cluster_interconnect address is back on eth2
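
The failover and failback can also be confirmed from the database side; the HAIP address reported for each instance should move to the surviving interface and back (a sketch, output varies by environment):

SQL> select inst_id, name, ip_address from gv$cluster_interconnects;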



#####################################
Disable haip service and haip dependencies
#####################################


Disable : 
[ora12c1:root]:/>crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=0" -init


Enable : 
[ora12c1:root]:/>crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=1" -init





#####################################
Reference
#####################################
How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)

https://www.oracle.com/technetwork/products/clusterware/overview/interconnect-vlan-06072012-1657506.pdf

https://docs.oracle.com/database/121/CWLIN/networks.htm#CIHIJAJB
ID 1481481.1

Friday, May 21, 2021

Oracle Statspack Installation / Re-Creation

In this era of 21c we may wonder why we still need Statspack. However, some non-critical databases do not have a tuning pack license, so we have to survive on Statspack.


1) Take an export backup of the existing PERFSTAT user.
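
A minimal export of the current PERFSTAT schema might look like this (directory and file names are illustrative):

expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=perfstat_bkp.dmp LOGFILE=perfstat_bkp.log SCHEMAS=PERFSTAT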

2) Drop Statspack:
@?/rdbms/admin/spdrop.sql

3) Create Statspack:
@?/rdbms/admin/spcreate.sql

4) Change the Statspack snap level to level 6:
BEGIN
  statspack.modify_statspack_parameter(i_snap_level=>6, i_modify_parameter=>'true');
END;
/

select SNAP_ID, SNAP_LEVEL from STATS$SNAPSHOT;


5) Take a manual snapshot:
exec PERFSTAT.statspack.snap;


6) Schedule the automatic Statspack snapshot job (note the job id):
@?/rdbms/admin/spauto.sql



7) Change the snap interval to 30 minutes:

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
  set lines 180
  col SCHEMA_USER for a20
  col INTERVAL for a30
 col WHAT for a30
 select JOB, SCHEMA_USER, INTERVAL, BROKEN, WHAT from dba_jobs where JOB=428;

execute dbms_job.interval(428,'sysdate+(1/48)');

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
  set lines 180
  col SCHEMA_USER for a20
  col INTERVAL for a30
 col WHAT for a30
 select JOB, SCHEMA_USER, INTERVAL, BROKEN, WHAT , to_char(next_date ,'DD-MON-YYYY:HH24:MI:SS')      "next date"  , failures   from dba_jobs where JOB=428;

select name,snap_id,to_char(snap_time,'DD-MON-YYYY:HH24:MI:SS')      "Date/Time" from stats$snapshot,v$database;


8) After 30 minutes, verify that snapshots are being taken at the 30-minute interval and at level 6, then generate a report:

@?/rdbms/admin/spreport.sql





Saturday, May 15, 2021

What's new in Oracle Database home binary cloning in 19c

There are two main features introduced in 19c with regard to home binary cloning:

1) Taking a "gold image" instead of a TAR backup from the source

2) clone.pl is deprecated


Gold Image : 

Instead of taking a traditional tar backup, Oracle now suggests creating a gold image. Below is an example; the zip file is created in the destination location.

$ORACLE_HOME/runInstaller -createGoldImage -destinationLocation /refresh/home/app/oracle/product/ -silent 
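
On the target server, the gold image is then unzipped straight into the new ORACLE_HOME and installed from there with runInstaller in software-only mode; a rough sketch (paths and the zip name are illustrative, and the full silent-install parameter list is shown in the next section):

mkdir -p /u01/app/oracle/product/19.3.0.0/dbhome_2
unzip -q /stage/db_home_gold.zip -d /u01/app/oracle/product/19.3.0.0/dbhome_2
cd /u01/app/oracle/product/19.3.0.0/dbhome_2
./runInstaller -silent -responseFile ./install/response/db_install.rsp oracle.install.option=INSTALL_DB_SWONLY ...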


clone.pl deprecated :

If we use "clone.pl" or "runInstaller -clone" in 19c, we get the exception below.


[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image.

You must use /refresh/home/app/oracle/product/19.3_clone/runInstaller script available to perform the Software Only install. For more details on image based installation, refer to help documentation.


To avoid the above, we used runInstaller as below, just as for a fresh software-only installation.


ORACLE_BASE=/u01/app/oracle
NODE2_HOSTNAME=node2
ORACLE_HOSTNAME=node1
NODE1_HOSTNAME=node1
ORA_INVENTORY=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_2
[oracle@node1 ~]$ cd $ORACLE_HOME
[oracle@node1 dbhome_2]$ ${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \
> -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=${ORA_INVENTORY} \
> SELECTED_LANGUAGES=en \
> ORACLE_HOME=${ORACLE_HOME} \
> ORACLE_BASE=${ORACLE_BASE} \
> oracle.install.db.InstallEdition=EE \
> oracle.install.db.OSDBA_GROUP=dba \
> oracle.install.db.OSOPER_GROUP=dba \
> oracle.install.db.OSBACKUPDBA_GROUP=dba \
> oracle.install.db.OSDGDBA_GROUP=dba \
> oracle.install.db.OSKMDBA_GROUP=dba \
> oracle.install.db.OSRACDBA_GROUP=dba \
> oracle.install.db.CLUSTER_NODES=${NODE1_HOSTNAME},${NODE2_HOSTNAME} \
> oracle.install.db.isRACOneInstall=false \
> oracle.install.db.rac.serverpoolCardinality=0 \
> oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
> oracle.install.db.ConfigureAsContainerDB=false \
> SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
> DECLINE_SECURITY_UPDATES=true



Reference : 

19.x:Clone.pl script is deprecated and how to clone using gold-image (Doc ID 2565006.1)





Sunday, May 2, 2021

Oracle TFA And AHF log collection / Installation

 
Trace File Analyzer Collector, also known as TFA, is a diagnostic collection utility that greatly simplifies diagnostic data collection for both Oracle Database and Oracle Clusterware/Grid Infrastructure RAC environments.

Trace File Analyzer provides a central and single interface for all diagnostic data collection and analysis.

When a problem occurs, TFA collects all the relevant data at the time of the problem and consolidates data even across multiple nodes in a clustered Oracle RAC environment.  Only the relevant diagnostic data is collected and can be packaged and uploaded to Oracle Support and this leads to faster resolution times. All the required diagnostic data is collected via a single tfactl command instead of having to individually look for the required diagnostic information across a number of database and cluster alert logs, trace files or dump files.

In addition to the core functionality of gathering, consolidating and processing diagnostic data, Trace File Analyzer comes bundled with a number of support tools which enable us to obtain a lot of other useful information like upgrade readiness, health checks for both Engineered as well as non-Engineered systems, OS performance graphs, Top SQL queries etc.

Oracle Trace File Analyzer is shipped along with Oracle Grid Infrastructure (from version 11.2.0.4). However, it is recommended to download the latest TFA version which can be accessed via the My Oracle Support Note 1513912.1 since the TFA bundled with the Oracle Grid Infrastructure does not include many of the new features, bug fixes and more importantly the Oracle Database Support Tools bundle.

Oracle releases new versions of the TFA several times a year and the most current version is Trace File Analyzer 18.1.1 which is now available for download via the MOS Note 1513912.1.


##################################################################################
TFA
##################################################################################


__________________
Install   Tfa 
__________________

Download the latest version of Oracle Trace File Analyzer with Oracle Database support tools bundle from My Oracle Support note 1513912.1

Upgrading is similar to a first-time install. As root, run the installTFA<platform> script; for example, on Linux:

./installTFA-Linux

For RAC:
Install TFA on all nodes --> sync the nodes --> start

(As root)
./installTFA<platform>

(As the oracle user)
$ ./installTFA<platform> -extractto dir -javahome jre_home


cd /u01/software
unzip TFA-LINUX_v18.2.1.zip
mkdir -p $ORACLE_HOME/tfa
./installTFA-LINUX -local -tfabase $ORACLE_HOME/tfa



After having completed this process on all nodes, let’s synchronize them. As I decided not to use SSH, I need to execute those final steps :

Launch tfactl syncnodes using sudo : 
# sudo /grid/infrastructure/home/bin/tfactl syncnodes       ( # $GIHOME/tfa/nodename/tfa_home/bin/synctfanodes.sh ) 

Login using root is disabled in sshd config. Please enable it or
Please copy these files manually to remote node and restart TFA

1. /grid/infrastructure/home/tfa/node01/tfa_home/server.jks
2. /grid/infrastructure/home/tfa/node01/tfa_home/client.jks
3. /grid/infrastructure/home/tfa/node01/tfa_home/internal/ssl.properties

These files must be owned by root and should have 600 permissions.


sudo /grid/infrastructure/home/bin/tfactl stop
sudo /grid/infrastructure/home/bin/tfactl start


__________________
Log Collection   Tfa 
__________________

 cd $ORACLE_HOME/tfa/bin/

# Gather diagnostic information about TFA itself.
./tfactl diagnosetfa -local

# Gather information about errors. You are prompted to select a specific incident.
./tfactl diagcollect -srdc ORA-00600
./tfactl diagcollect -srdc ORA-07445

# Collect data for all components for a specific time period.
./tfactl diagcollect -from "2018-06-16 13:00:00" -to "2018-06-16 13:00:00"

# Collect data for all components for the last 12 hours.
./tfactl diagcollect


Trace File Analyzer Command Examples

  • Viewing System and Cluster Summary

tfactl summary

  • To find all errors in the last one day

tfactl analyze -last 1d

  • To find all occurrences of a specific error  (in this case ORA-00600 errors)

tfactl analyze -search "ora-00600" -last 8h

  • To set the notification email to use

tfactl set notificationAddress=joeblogs@oracle.com

  • Enable or disable Automatic collections (ON by default)

tfactl set autodiagcollect=OFF

  • Adjusting the Diagnostic Data Collection Period

tfactl diagcollect -last 1h

tfactl diagcollect -from "2018-03-21"

tfactl diagcollect -from "2018-03-21" -to "2018-03-22"

  • Analyze, trim and zip all files updated in the last 12 hours, including Cluster Health Monitor and OSWatcher data, from across all nodes  the cluster

tfactl diagcollect -all -last 12h

  • Run collection from specific nodes in a RAC cluster

tfactl diagcollect -last 1d -node rac01

  • Run collection for a specific database

tfactl -diagcollect -database hrdb -last 1d

  • Uploading collections to Oracle Support

Execute tfactl setupmos to configure Oracle Trace File Analyzer with MOS user name and password followed by

tfactl diagcollect -last 1d -sr 1234567

  • Search database alert logs for the string "ORA-" from the past one day

tfactl analyze -search "ORA-" -comp db -last 1d

  • Display a summary of events collected from all alert logs and system logs from the past six hours

tfactl analyze -last 6h

  • View the summary of a TFA deployment. This will display cluster node information as well as information related to database and grid infrastructure software homes like version, patches installed, databases running etc.

tfactl summary

  • Grant access to a user

tfactl access add -user oracle

  • List users with TFA access

tfactl access lsusers

  • Run orachk

tfactl run orachk

  • Display current configuration settings

tfactl print config




Commonly Used Commands : 

tfactl set reposizeMB=50240 
tfactl print repository 
tfactl purge -older 2h
tfactl set repositorydir=/u02/repository/tfa/ 
tfactl diagcollect -srdc ora600
tfactl diagcollect -srdc ora04030 
tfactl analyze -search "/ORA- /c" -comp db -last 2d 
tfactl diagcollect -all -from "DEC/14/2021 01:00:00" -to "DEC/15/2021 03:00:00" 
tfactl diagcollect -from "DEC/14/2021 01:00:00" -to "DEC/15/2021 03:00:00"  

tfactl analyze -search "ORA-04031" -last 1d
tfactl analyze -since 1d
tfactl analyze -comp os -for "Oct/01/2020 11" -search "."
tfactl analyze -comp osw -since 6h
tfactl analyze -search "ORA-" -since 2d
tfactl analyze -comp oswslabinfo -from "Oct/01/2020 05:00:01" -to "Oct/01/2020 06:00:01"

tfactl diagcollect -srdc dbperf
tfactl diagcollect -srdc ORA-00600
tfactl managelogs -show usage 
tfactl managelogs -purge -older 30d
tfactl tail alert





tfactl summary

-- Genearate complete summary overview in html
tfactl summary -html

-- Generate patching summary:
tfactl summary -patch -html

-- Generate asm summary
tfactl summary -asm -html


__________________
TFA  Status  : 
__________________

[root@ermantest tmp]# /u01/app/oracle/product/12.1.0.2/db_1/bin/tfactl status
root@ermantest tmp]# /u01/app/oracle/product/12.1.0.2/db_1/bin/tfactl toolstatus
root@ermantest tmp]# /u01/app/oracle/product/12.1.0.2/db_1/bin/tfactl version
# sudo /grid/infrastructure/home/bin/tfactl print config
# sudo /grid/infrastructure/home/bin/tfactl syncnodes



____________________________________
TFA installed as part of 21523375 (Oct2015 CPU)
____________________________________

After the Grid PSU was applied, TFA was installed.
If you need to shut down the processes running from the grid home, TFA will need to be stopped as well (# /etc/init.d/init.tfa stop), since crsctl stop crs does not stop TFA.


__________________
Controlling TFA  cpu  usage 
__________________

tfactl setresourcelimit 
 [-tool tool_name] 
 [-resource resource_type] 
 [-value value]
To limit TFA to a maximum of 50% of a single CPU, run the following:
# tfactl setresourcelimit -value 0.5




__________________
TFA Reference : 
__________________

https://xy2401-local-doc-java.github.io/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/troubleshoot-tfa.html#GUID-169D2468-008B-4CE1-AB8E-1BA2A6233360

__________________
Troubleshooting TFA 
__________________

https://xy2401-local-doc-java.github.io/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/troubleshoot-tfa.html#GUID-AEEC5C9E-00F1-44B7-B39F-76E836AFC10F



##################################################################################
Oracle Autonomous Health Framework (Former TFA)
##################################################################################

Oracle Autonomous Health Framework is a collection of components that analyzes the diagnostic data collected, and proactively identifies issues before they affect the health of your clusters or your Oracle Real Application Clusters (Oracle RAC) databases. Oracle Autonomous Health Framework contains Oracle ORAchk, Oracle EXAchk, and Oracle Trace File Analyzer.

Install Oracle Autonomous Health Framework as root to obtain the fullest capabilities; it has reduced capabilities when installed as a non-root user. AHF can be run in two different modes, daemon or non-daemon. Both do the same thing, but daemon mode is preferred.

To install Oracle AHF we run the ahf_setup installer with the -extract parameter. We also specify the -notfasetup parameter to avoid enabling the Oracle Trace File Analyzer component of Oracle AHF.

For RAC we need to install AHF on all nodes. AHF automatically synchronizes between nodes; if the status output shows both nodes, they are synchronized and there is no need to synchronize manually as with the old TFA.

By default Oracle AHF will be installed to the /opt/oracle.ahf directory.

1) Download the latest AHF software (Doc ID 2550798.1)
2) Copy /u01/src/AHF-LINUX_v20.2.0.zip
3) unzip AHF-LINUX_v20.2.0.zip
4) Install AHF
As root:
 cd /u01/src/
 ./ahf_setup
[root@dbhost]# ./ahf_setup -extract -notfasetup 


As a non-root user:
./ahf_setup -ahf_loc $ORACLE_HOME/ahf




__________________
upgrade the AHF 
__________________


1) mkdir /tmp/AHF
2) Copy the latest AHF to /tmp/AHF and unzip it
3) Uninstall the existing TFA/AHF:
tfactl uninstall
4) Install the new version:
./ahf_setup -data_dir /u01/app/grid_base -tmp_loc /u01/app/grid_base/tmp/   (you can find the data directory with "ps -ef | grep tfa | grep HeapDumpPath")
5) Verify that AHF is installed:
/opt/oracle.ahf/tfa/bin/tfactl status
/opt/oracle.ahf/tfa/bin/tfactl toolstatus


6) syncnodes should happen automatically; the command below is just for reference:

tfactl syncnodes


Reference : 
Remove existing AHF and install latest AHF 21.4.1 as per MOS Doc ID 2832630.1




__________________
Controlling TFA  cpu  usage 
__________________

ahfctl setresourcelimit 
[-tool tool_name] 
[-resource resource_type] 
[-value value]



__________________
uninstall AHF 
__________________

[root@dbhost]# cd /opt/oracle.ahf/tfa/bin 
[root@dbhost]# ./tfactl uninstall 



__________________
Upload Files directly to Oracle Sr 
__________________

There are options to upload files directly to an Oracle SR:

$ curl -T [FILE_YOU_WANT_TO_SEND] -u [MOS_USER] https://transport.oracle.com/upload/issue/[SR_NUMBER]/

$ tfactl upload -sr [SR_NUMBER] -user [MOS_USER] [FILE_YOU_WANT_TO_SEND]




__________________
AHF Reference 
__________________

https://docs.oracle.com/en/database/oracle/oracle-database/19/atnms/troubleshoot-tfa.html#GUID-11964D53-74C9-4754-9E80-9DB22557FF4E


https://docs.oracle.com/en/database/oracle/oracle-database/18/atnms/tfa-service.html#GUID-C470800D-B690-45F2-8C38-8EC60B6BB828

https://docs.oracle.com/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/performing-custom-collections.html#GUID-E4A2492E-A123-480A-B954-57898DBCE8BE