Sunday, April 19, 2020

Oracle 12c Zero downtime Patching on Grid Infrastructure





In this article we will throw some light on patching a 12c grid and database stack using the new opatchauto utility.



Oracle has also introduced Grid Infrastructure Out-of-Place (OOP) patching using opatchauto.
Starting with 12.2, the latest version of opatchauto can perform out-of-place (OOP) patching.
This is helpful when the current Oracle homes do not have sufficient space left and the homes need to be moved to a new location.
This operation normally requires cloning the Oracle homes, patching the clones, and then switching the CRS stack and the databases, including all their services, to the newly cloned homes.
All of these operations can be performed in a single stretch using OOP patching.

However, I am not covering OOP patching in this article because it is explained very nicely and in detail in Oracle note Doc ID 2419319.1.






####################   12c  Pre  and Post Patching  ####################

1) If Data Guard is used, remember to defer log shipping before patching and re-enable it after the patch is applied (see the sketch after this list).

2) If you are using ACLs or the sticky bit on the database home, remember to take a backup of the home permissions and reapply them after the patch is completed.

3) Take a backup of the database and of the Oracle/grid homes.

4) Compare the database invalid objects before and after the patch.
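
A minimal sketch for items 1 and 4, assuming log shipping goes over log_archive_dest_2 (adjust the destination number to your environment):

-- before patching: defer log shipping on the primary
alter system set log_archive_dest_state_2=defer scope=both sid='*' ;

-- after patching: re-enable log shipping
alter system set log_archive_dest_state_2=enable scope=both sid='*' ;

-- run before and after the patch and compare the output
select owner, count(*) from dba_objects where status != 'VALID' group by owner order by owner ;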





#############    12c Zero downtime grid and  database  Patching  ########## 



Zero Downtime patching provides a process and mechanism for rolling a patch out across the cluster while allowing applications to continue servicing requests.
You can apply a ZDT patch using opatchauto, which rolls the change out one node at a time and allows a load balancer to redirect incoming traffic to the remaining nodes until the change is complete.

The opatchauto utility automates patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It works by querying the existing configuration and automating the steps required to patch each Oracle RAC database home of the same version and the GI home.
The utility must be executed by an operating system (OS) user with root privileges, and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility should not be run in parallel on the cluster nodes.
Depending on the command-line options specified, one invocation of opatchauto can patch the GI home, the Oracle RAC database homes, or both GI and Oracle RAC database homes of the same Oracle release version as the patch. You can also roll back the patch with the same selectivity.
The current node and the databases on it can remain open; opatchauto takes them down automatically while patching.
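
Before starting, it is also worth confirming that the OPatch version in both the grid and database homes meets the minimum listed in the patch README (the grid home path below is the one used in this article):

/oragridpx/app/12.1.0/grid/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version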

The steps below assume you are using a combined bundle patch of the grid PSU, the database PSU, and the DB JVM patch. If you are not using a bundle patch, you may have to apply the PSU separately to the grid and database homes.


Before running the actual patch, we can analyze it first:
/oragridpx/app/12.1.0/grid/OPatch/opatchauto apply  -analyze 

Please note that only the CRS_HOME parameter needs to be exported. Run the following as the root user:
cd /oratmp/patches/Jan2020/30463691/30464119

export CRS_HOME=/oragridpx/app/12.1.0/grid

/oragridpx/app/12.1.0/grid/OPatch/opatchauto apply     




In our case we take a full downtime, so to save time I personally start patching the 2nd node while the 1st node is still being patched. On the 2nd node we can initiate the patch with the -nonrolling option while the 1st node patch is in progress:

/oragridpx/app/12.1.0/grid/OPatch/opatchauto apply   -nonrolling 




Please note that if the database uses Java (JVM), you need downtime to apply the Java patch as shown below, as the oracle user, from any one node.

Also, if ACFS is in use, we may have to apply the ACFS patch separately, just like the Java patch.



cd /oratmp/patches/Jan2020/30463691/30502041

opatch apply

srvctl start database -d Dbname

-- from any one node, restart the database in upgrade mode for datapatch
sqlplus "/ as sysdba"
alter system set cluster_database=false scope=spfile ;

srvctl stop database -d Dbname

sqlplus "/ as sysdba"
startup upgrade

$ORACLE_HOME/OPatch/datapatch -verbose

sqlplus "/ as sysdba"
alter system set cluster_database=true scope=spfile ;
shutdown immediate

srvctl start database -d Dbname

sqlplus "/ as sysdba"
@?/rdbms/admin/utlrp.sql
select owner, count(*) from dba_objects where status != 'VALID' group by owner ;
select * from dba_registry_sqlpatch ;





-->  verify  grid services are up 

crs_stat | awk -F= '/NAME=/{n=$2}/TYPE=/{t=$2}/TARGET=/{g=$2}/STATE=/{s=$2; printf("%-45s%-15s%-10s%-30s\n", n,t,g,s)}'
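
Note that crs_stat is deprecated; on 11.2 and later the same information is available in a readable format with:

crsctl stat res -t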





#############    Resume from failed  Psu Patch  ########## 

In 12c, opatchauto keeps each patching session and its configuration in JSON files.

They are stored under $GRID_HOME/OPatch/auto/dbsessioninfo/.
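
To identify the session id to resume, simply list the most recent session files in that directory (a sketch; the exact file names vary by opatchauto version):

ls -lt /oragridpx/app/12.1.0/grid/OPatch/auto/dbsessioninfo/ | head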


Run the following as the root user (opatchauto requires root privileges, as noted above):


cd /oratmp/patches/Jan2020/30463691/30464119

export CRS_HOME=/oragridpx/app/12.1.0/grid

/oragridpx/app/12.1.0/grid/OPatch/opatchauto resume -session DZSQ





#############   How to get Clusterware state out of "Rolling Upgrade"   ########## 


At the time of patching or upgrading we sometimes see the error below. We then need to take the cluster out of rolling upgrade/patch mode.

PRVG-13410 : The state of Oracle Clusterware is not consistent to allow upgrade. [Expected = "NORMAL" ; Found = "ROLLING UPGRADE"].


Checks Done : 
 crsctl query crs releasepatch
 crsctl query crs softwarepatch
crsctl query crs activeversion -f   
crsctl query crs softwareversion  

ASMCMD [+] > showversion
ASMCMD [+] > showclusterstate
+ASM1:>SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') as "SYSTEM_STATE" FROM dual;
kfod op=patchlvl
kfod op=patches



Fix Tried : 
ASMCMD> showclusterstate
$crsctl stop rollingpatch
crsctl stop rollingupgrade
+ASM1:>ALTER SYSTEM STOP ROLLING PATCH;
+ASM1:>ALTER SYSTEM STOP ROLLING MIGRATION;
$GI_HOME/crs/install/rootcrs.pl -prepatch   
$GI_HOME/bin/clscfg -patch
$GI_HOME/crs/install/rootcrs.pl -postpatch



Known Issue : 
BUG 30140462 - CLUSTERWARE STATE IS STILL ROLLING UPGRADE AFTER PERFORMING OFFLINE DOWNGRADE

Bug 25197395 - Command 'crsctl startupgrade' hangs having many Databases (Doc ID 25197395.8)

rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched (Doc ID 2494827.1)




For some reason, patches may be listed by "opatch lsinventory" but missing from the kfod output. In that case:


-- as super user
$GRID_HOME/crs/install/rootcrs.sh -prepatch 

-- as grid owner,
$GRID_HOME/bin/patchgen commit -pi 12345678 
$GRID_HOME/bin/patchgen commit -pi 23456789 

-- as super user
$GRID_HOME/crs/install/rootcrs.sh -postpatch






#############   Rollback  Psu and   DB Java patch    ########## 


To roll back the PSU, note that only the CRS_HOME parameter needs to be exported; run the following as root:
cd /oratmp/patches/Jan2020/30463691/30464119

export CRS_HOME=/oragridpx/app/12.1.0/grid

/oragridpx/app/12.1.0/grid/OPatch/opatchauto rollback 

/oragridpx/app/12.1.0/grid/OPatch/opatch lspatches





To roll back the database Java patch: roll back the binaries on both nodes, but run the database (datapatch) part from any one node only, as the oracle user.


cd /oratmp/patches/Jan2020/30463691/30502041

opatch rollback -id 27105253

srvctl start database -d Dbname

-- from any one node, restart the database in upgrade mode for datapatch
sqlplus "/ as sysdba"
alter system set cluster_database=false scope=spfile ;

srvctl stop database -d Dbname

sqlplus "/ as sysdba"
startup upgrade

$ORACLE_HOME/OPatch/datapatch -verbose

sqlplus "/ as sysdba"
alter system set cluster_database=true scope=spfile ;
shutdown immediate

srvctl start database -d Dbname

sqlplus "/ as sysdba"
@?/rdbms/admin/utlrp.sql
select owner, count(*) from dba_objects where status != 'VALID' group by owner ;
select * from dba_registry_sqlpatch ;





For troubleshooting reference : 

1) https://docs.oracle.com/cd/E24628_01/doc.121/e39376/troubleshooting_opatchauto.htm#BJFBIDEF




#############   Oracle Patching Not using opatchauto (manual local apply)    ##########


 

Download and copy both the OCT2020 PSU and the bugfix patch, and extract them under /u01/src/.

==================================================
For Grid Infrastructure Home, as home user:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31771877
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31772784
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31773437
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31780966



For Database home, as home user:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31771877
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31772784

==================================================

Run the pre root script.

If this is a GI Home, as the root user execute:

/u01/app/19.0.0/grid/crs/install/rootcrs.sh -prepatch

==================================================

3) From the GI home, as the GI home owner:

GRID_HOME=/u01/app/19.0.0/grid
export PATH=$GRID_HOME/bin:$GRID_HOME/OPatch:$PATH:/usr/css/bin


Including the bugfix:
opatch apply -local /u01/src/32242453/32242453 -oh /u01/app/19.0.0/grid
opatch apply -local /u01/src/31750108/31773437 -oh /u01/app/19.0.0/grid
opatch apply -local /u01/src/31750108/31771877 -oh /u01/app/19.0.0/grid
opatch apply -local /u01/src/31750108/31780966  -oh /u01/app/19.0.0/grid



4) Apply on the DB home, as the DB home owner:

ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:/usr/css/bin
/u01/src/31750108/31772784/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/19.0.0/dbhome_1

opatch apply -local /u01/src/32242453/32242453 -oh /u01/app/oracle/product/19.0.0/dbhome_1
opatch apply -local /u01/src/31750108/31771877 -oh /u01/app/oracle/product/19.0.0/dbhome_1

/u01/src/31750108/31772784/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/19.0.0/dbhome_1


==================================================

Run the post root script.

If this is a GI Home, as the root user execute:

/u01/app/19.0.0/grid/crs/install/rootcrs.sh -postpatch




################################
Known Issue: CRS Home Not Stopped
################################
Follow the steps below (Doc ID 2703720.1) to fix this issue; we hit it on the cbem db1a UAT server.
 
1) As grid user , execute :
/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase $ORACLE_BASE -writeckpt -name ROOTCRS_PREPATCH -state START
2) Verify:
/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase $ORACLE_BASE -chkckpt -name ROOTCRS_PREPATCH -status
START
3) Check for all nodes of the cluster and perform the above actions where required.
4) Re-execute opatchauto apply command or 'rootcrs.sh -prepatch' as required



  " Bug 33036568 : HAIP FLIP CAUSING NODE REBOOT DURING 19.11 GI RU PATCHING ON IBM AIX", 
    ASM process is terminated by LMON process during postpatch on grid home 19.11





#############   Issues commonly faced   ########## 



1) While applying datapatch on the database we may face ORA-20001, for which the MOS document below can be referred to:

Queryable Patch Inventory - Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table (Doc ID 1602089.1)

verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table


select dbms_sqlpatch.verify_queryable_inventory from dual;

VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------

ORA-20001: Latest xml inventory is not loaded into table


alter system set "_bug27355984_xt_preproc_timeout"=1000 scope=spfile ;
alter system set "_enable_ptime_update_for_sys"=TRUE   scope=spfile ;


After patching :
alter system reset  "_bug27355984_xt_preproc_timeout"  scope=spfile ;
alter system reset  "_enable_ptime_update_for_sys"   scope=spfile ;




Similarly, if ORA-20008 is reported, we need to use the trace event below:

>> Before patching 
SQL: alter session set events '18219841 trace name context forever';


>> After patching 
alter session set events '18219841 trace name context off';





2) Session disconnect during opatchauto.

We can simply rerun opatchauto; Oracle will internally determine the missing patches and apply them.



3) After a failed opatchauto run, it sometimes happens that CRS does not start on the node where it failed. When we try to rerun opatchauto, it reports that CRS is not configured.

I personally followed the steps below:

A) Stop CRS on the 2nd node (the one we are not patching) to avoid a CRS version mismatch between nodes.

B) Run the following on the node where we are getting the error (this was part of post-patching up to 11g):

/optware/grid/11.2.0.4/crs/install/rootcrs.pl -patch

C) Rerun opatchauto on the node where it failed.




4) Oracle home space issues during and after patching.

Before patching we can run the command below to ensure previous patch backups are removed. Refer to Doc ID 550522.1 to troubleshoot space issues during patching.

$ opatch util cleanup


How To Avoid Disk Full Issues Because OPatch Backups Take Big Amount Of Disk Space. (Doc ID 550522.1)
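
To see how much space existing patch backups are consuming before cleaning up (.patch_storage is the default OPatch backup location inside each home):

du -sh $ORACLE_HOME/.patch_storage
du -sh /oragridpx/app/12.1.0/grid/.patch_storage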





#############   Old 11g  approach of applying  GI  patch  ########## 


In case the resume approach above does not help, we must be clear about the traditional way of patching the GI home. Based on the logs showing at which step the patch failed, we proceed accordingly.

The old steps below are therefore shared for backup purposes only.




APPLY PATCH TO GRID HOME => /optware/grid/11.2.0.4

************************



1. Run the pre root script. If this is a GI home, as the root user execute:

***********************************************************************


/optware/grid/11.2.0.4/crs/install/rootcrs.pl -unlock

2. As the GI home owner, apply the OCW patch
**************************************************
/optware/grid/11.2.0.4/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948348

3. As the GI home owner, apply the ACFS patch
**************************************************
/optware/grid/11.2.0.4/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948355


4. As the GI home owner, apply the DB PSU patch
*****************************************************
/optware/grid/11.2.0.4/OPatch/opatch apply -oh /u01/app/11204 -local /u02/patch/22191577/21948347


5. Run the post script. As the root user execute:
********************************************** 
/optware/grid/11.2.0.4/rdbms/install/rootadd_rdbms.sh

6. If this is a GI Home, as the root user execute:
*********************************************** 
/optware/grid/11.2.0.4/crs/install/rootcrs.pl -patch










Sunday, March 22, 2020

Oracle 12.1.0.2 to 19c Rac in place upgrade using response file





Below are the steps we followed to silently upgrade a 2-node 12.1.0.2 RAC to 19c on the same servers.

Before we start the upgrade it is necessary to have a strong rollback plan, hence we will also be highlighting the rollback plan. Please ensure proper database and binary backups are in place.

Before we start the upgrade, please note that the 19c upgrade needs roughly 32 GB of free space in the grid (OCR/voting) diskgroup. Ensure you check the free space from asmcmd (lsdg, under Usable_file_MB) rather than from sqlplus (example below). It is better to have around 40 GB free in that diskgroup before starting the upgrade.
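
For example, as the grid user (CRS is the OCR/voting diskgroup name used in the response file further down; substitute yours) and checking the Usable_file_MB column:

asmcmd lsdg CRS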

After the upgrade it is also important to verify that your backups are successful and that users upgrade their clients to 19c.

There is also the AutoUpgrade tool (Doc ID 2485457.1), which we can explore.



#################### Take  necessary backups    ####################



1) Add the newly allocated disk to the OCR diskgroup.
2) Take a tar backup of the grid home and the database home.
3) Take a backup of the ASM metadata, OCR, and OLR (see the sketch after this list).
4) Verify that a full database backup has been taken.
5) Verify that Data Guard is in sync.
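
A sketch for item 3, with example paths (run ocrconfig as root from the existing 12.1 grid home, and md_backup as the grid user):

/oragridzj/app/12.1.0/grid/bin/ocrconfig -manualbackup
/oragridzj/app/12.1.0/grid/bin/ocrconfig -local -manualbackup

asmcmd md_backup /oratmp/backup/asm_md_backup_file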

####################    Grid  upgrade    ####################

1) On both nodes, as root:

chmod o+r /etc/inittab

2) Stop Data Guard log shipping.

3) On both nodes, create the 19.3.0 and grid directories with grid user ownership.

4) Download and unzip the grid software on the 1st node only.

5) Prepare the response file on the 1st node only. Getting the response file parameters right is crucial for the upgrade to succeed.

/oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp

INVENTORY_LOCATION=/oragridzj/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/oragridzj/app/grid
oracle.install.asm.OSDBA=oinstall
oracle.install.asm.OSOPER=oinstall
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.clusterName=Clustername
oracle.install.asm.diskGroup.name=CRS
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE



6) On the 1st node only, run the pre-upgrade check from the new home as the grid user and, if a fixup script is generated, run it.

./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /oragridzj/app/12.1.0/grid/ -dest_crshome /oragridzj/app/19.3.0/grid \
  -dest_version 19.3.0.0.0 -fixup -verbose


7) On the 1st node, run the grid setup dry run from the new home. Starting with 19c we have the option to run a dry run before the actual upgrade. Note: this will copy the software to the 2nd node.

./gridSetup.sh -silent -dryRunForUpgrade \
  -responseFile /oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp


8) Run the actual upgrade from the 1st node:

./gridSetup.sh -silent \
  -responseFile /oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp



9) Modify .profile to point to the new home.

10) On both nodes, verify that the files below look fine and point to the new grid home:

/etc/init.d/init.ohasd
/etc/init.d/ohasd
/var/opt/oracle/olr.loc
.profile
/etc/inittab


11) On the 1st node only, detach the old home, and verify the inventory on both nodes:

/oragridzj/app/12.1.0/grid/oui/bin/runInstaller -detachHome -silent \
  ORACLE_HOME=/oragridzj/app/12.1.0/grid


12) Perform a reboot test on both nodes.


13) Re-enable Data Guard log shipping and ensure DG is in sync (see the sketch below).
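
A quick way to confirm shipping has resumed, assuming the Data Guard broker is configured (otherwise compare max(sequence#) between primary and standby):

dgmgrl / "show configuration"

-- or, on the standby
select max(sequence#) from v$archived_log where applied = 'YES' ;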





####################   Database  Pre upgrade Patch     ####################


Patches to apply before upgrading Oracle GI and DB to 19c or downgrading to previous release (Doc ID 2539751.1)


If you are planning to upgrade your GI or RDBMS to 19c, you must consider applying the patches listed in that note before starting the upgrade.










####################   Database  upgrade    ####################


Before we start the upgrade, as a best practice, create a guaranteed restore point (a sketch follows).
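
A minimal sketch, using an example restore point name (sufficient FRA space is assumed):

create restore point before_19c_upgrade guarantee flashback database ;

-- after the upgrade has been validated, drop it to release the space
drop restore point before_19c_upgrade ;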



1) Stop log shipping to Data Guard.

2) On both nodes:

mkdir -p /oraswzj/app/oracle/product/19.3.0/dbhome_1

Download and unzip the database software.



3) On both nodes, run run.sh with the contents below:


./runInstaller -ignorePrereq -waitforcompletion -silent \
  -responseFile /oraswzj/app/oracle/product/19.3.0/dbhome_1/install/response/db_install.rsp \
  oracle.install.option=INSTALL_DB_SWONLY \
  ORACLE_HOSTNAME=yourhostname \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION=/oragridzj/app/oraInventory \
  SELECTED_LANGUAGES=en,en_GB \
  ORACLE_BASE=/oraswzj/app/oracle \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.OSDBA_GROUP=oinstall \
  oracle.install.db.OSBACKUPDBA_GROUP=oinstall \
  oracle.install.db.OSDGDBA_GROUP=oinstall \
  oracle.install.db.OSKMDBA_GROUP=oinstall \
  oracle.install.db.OSRACDBA_GROUP=oinstall \
  SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
  DECLINE_SECURITY_UPDATES=true


As a root user, execute the following script(s):
        1. /u01/app/oracle/product/19.0.0/dbhome_1/root.sh



4) Download the pre-upgrade jar (on the 1st node only).

Download the latest "preupgrade.jar" file from MOS 884522.1.


Unzip it into the 19c home under $ORACLE_HOME/rdbms/admin:

unzip -o /u01/software/preupgrade_19_cbuild_??_lf.zip





5) Run the pre-upgrade script from the 19c home:

export ORACLE_HOME=/orazj/oraswzj/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=DBNAME

/oraswzj/app/oracle/product/19.3.0/dbhome_1/jdk/bin/java -jar /oraswzj/app/oracle/product/19.3.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT  >> dbname_precheck.log




6) Apply the pre-upgrade recommendations received from step 5 on all databases:


>> Purge the recyclebin: purge dba_recyclebin ;
>> Run utlrp.sql
>> Ensure sufficient free space in the UNDO, SYSTEM, SYSAUX, and TEMP tablespaces
>> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

@/oraswzj/app/oracle/cfgtoollogs/DBNAME/preupgrade/preupgrade_fixups.sql




7) Copy the dbs and network files from the old home to the new home (on both servers):


cp $ORACLE_HOME/dbs/*  /oraswzj/app/oracle/product/19.3.0/dbhome_1/dbs/
cp $ORACLE_HOME/network/admin/*  /oraswzj/app/oracle/product/19.3.0/dbhome_1/network/admin/



8) Enable RAC in the new 19c home on both servers:

$ cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle


9) Shut down the database from the old home and remove it from CRS.

Shut down the old database (srvctl stop database -d DBNAME).

Remove the database from the OCR using the old home (it will be added back from the new home in step 14).
From the old Oracle_Home:
% $ORACLE_HOME/bin/srvctl remove database -d db_name



10) Update .profile with the new home on both servers.


11) Upgrade the database from the new home, on the 1st node only:

sqlplus / as sysdba <<EOF

alter system set cluster_database=false scope=spfile ;
startup upgrade;
exit;
EOF
You can run the upgrade using either of the following commands. The second is just a shorthand for the first.

# Regular upgrade command.
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl  $ORACLE_HOME/rdbms/admin/catupgrd.sql

# Shorthand command.
$ORACLE_HOME/bin/dbupgrade

alter system set cluster_database=true  scope=spfile ;




12) Update the time zone file, gather stats, and run the post-upgrade fixup script generated in step 5.

--->>> Start the database in upgrade mode first

SELECT * FROM v$timezone_file;
-- Begin upgrade to the latest version.
SET SERVEROUTPUT ON
DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/

SHUTDOWN IMMEDIATE;
STARTUP;

-- Do the upgrade.
SET SERVEROUTPUT ON
DECLARE
  l_failures   PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/

-- Check new settings.
SELECT * FROM v$timezone_file;

COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

sqlplus / as sysdba <<EOF
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
 EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
exit;
EOF

# AUTOFIXUP
sqlplus / as sysdba <<EOF
@/oraswzj/app/oracle/cfgtoollogs/dbname/preupgrade/postupgrade_fixups.sql
exit;
EOF






13) Set cluster_database back to true:

alter system set cluster_database=true  scope=spfile ;




14) Add the database to CRS from the new home. If we face PRKC-1137 and PRCT-1402, copy osdbagrp from the grid home to the 19c DB home:

cd $ORACLE_HOME/bin
mv osdbagrp osdbagrp.org
cp /oragridzj/app/19.3.0/grid/bin/osdbagrp .


srvctl add database -d DBNAME -o /oraswzj/app/oracle/product/19.3.0/dbhome_1 -s '+DATA/DBNAME/PARAMETERFILE/spfile.310.951432401' -role {PRIMARY | PHYSICAL_STANDBY}

srvctl add instance -d DBNAME   -i DBNAME1  -n SERVER1
srvctl add instance -d DBNAME   -i DBNAME2  -n SERVER2




15) Perform post-upgrade checks:

Check the invalid object count and verify dba_registry (see the queries after this step).

/etc/init.d/init.ohasd
/etc/init.d/ohasd
/var/opt/oracle/olr.loc
.profile
/etc/inittab
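
Example queries for the registry and invalid-object checks:

select comp_id, version, status from dba_registry order by comp_id ;
select owner, count(*) from dba_objects where status != 'VALID' group by owner ;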



16) Detach the 12c home, from the 1st node only:

/oraswzj/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -detachHome -silent \
  ORACLE_HOME=/oraswzj/app/oracle/product/12.1.0/dbhome_1





17) Fix NetBackup for database backups:

A) Create a softlink in <19c_home>/lib pointing to the NetBackup library (check the 12c home for reference).
B) Correct the database home path in the backup script.



18) Start Data Guard log shipping and verify log sync.



19) Update monitoring and upgrade the RMAN catalog database. If the catalog upgrade fails with:


RMAN-07539: insufficient privileges to upgrade the catalog schema

Refer : Doc ID 2558120.1

SQL> grant CREATE ANY CONTEXT to rman;
SQL> grant DROP ANY CONTEXT to rman;








#################### Restart database upgrade from where it failed   ####################


1) Check the logs to find where the upgrade failed:

grep -i 'error at line' catupgrd*.log

PHASE_TIME___START 15 15-01-16 08:49:41



2) Restart the upgrade from the phase where it failed:

cd $ORACLE_HOME/bin
dbupgrade -p 15




######################## rhpserver / mgmtdb disabled after 19c upgrade  ####################


If we failed to apply the pre-upgrade patch before starting the upgrade procedure, the MGMT database will be in a disabled state after the upgrade.

In 19c the MGMT database is optional, but if you still want to fix it, the procedure below can be used to bring it online (the example output that follows is from an 18.5 environment; substitute your new GI home).



Bring mgmt online 

1)

The first thing is to bring up the MGMTDB in the 12.1 GI_HOME.

srvctl enable mgmtdb
srvctl start mgmtdb
srvctl status mgmtdb


2)

Once the MGMTDB is up and running, you need to drop the RHP service that was created during the rootupgrade process. This has to be done from the new GI_HOME (18.5 in this example).

[root@node2 ~]# env | grep ORA
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/18.5.0/grid
[root@node2 ~]# srvctl remove rhpserver -f



3)  Now that the RHP service has been removed, we need to remove the MGMTDB in 12.1.

########################################
# As root user in BOTH nodes
########################################
#Node 1
[root@node1 ~]#  export ORACLE_HOME=/u01/app/12.1.0.2/grid
[root@node1 ~]#  export PATH=$PATH:$ORACLE_HOME/bin
[root@node1 ~]# crsctl stop res ora.crf -init
[root@node1 ~]# crsctl modify res ora.crf -attr ENABLED=0 -init
#Node 2
[root@node2 ~]#  export ORACLE_HOME=/u01/app/12.1.0.2/grid
[root@node2 ~]#  export PATH=$PATH:$ORACLE_HOME/bin
[root@node2 ~]# crsctl stop res ora.crf -init
[root@node2 ~]# crsctl modify res ora.crf -attr ENABLED=0 -init
########################################
# As oracle User on Node 1
########################################
oracle@node1 : ~> export ORACLE_HOME=/u01/app/12.1.0.2/grid
oracle@node1 : ~> export PATH=$PATH:$ORACLE_HOME/bin
oracle@node1 : ~> srvctl relocate mgmtdb -node node1
oracle@node1 : ~> srvctl stop mgmtdb
oracle@node1 : ~> srvctl stop mgmtlsnr

 /u01/app/12.1.0.2/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB

oracle@node1 : ~> srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
########################################
##### Manually Removed the mgmtdb files
##### Verify that the files for MGMTDB match your environment before deleting them
########################################
ASMCMD> cd DBFS_DG/_MGMTDB/DATAFILE
ASMCMD> ls
SYSAUX.257.879563483
SYSTEM.258.879563493
UNDOTBS1.259.879563509
ASMCMD> rm system.258.879563493
ASMCMD> rm sysaux.257.879563483
ASMCMD> rm undotbs1.259.879563509
ASMCMD> cd ../PARAMETERFILE
ASMCMD> rm spfile.268.879563627
ASMCMD> cd ../TEMPFILE
ASMCMD> rm TEMP.264.879563553
ASMCMD> cd ../ONLINELOG
ASMCMD> rm group_1.261.879563549
ASMCMD> rm group_2.262.879563549
ASMCMD> rm group_3.263.879563549
ASMCMD> cd ../CONTROLFILE
ASMCMD> rm Current.260.879563547




4)
Once the MGMTDB is deleted, run mdbutil.pl (which you can grab from MOS Doc ID 2065175.1) and add the MGMTDB in the new GI_HOME.


########################################
# As oracle User on Node 1
########################################
oracle@node1 : ~> env | grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/18.5.0/grid
oracle@node1 : ~> ./mdbutil.pl --addmdb --target=+DBFS_DG


########################################
# As root user in BOTH nodes
########################################


[root@node1 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@node1 ~]# crsctl start res ora.crf -init

[root@node2 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@node2 ~]# crsctl start res ora.crf -init




 5)

Once the MGMTDB has been recreated, rerun the gridSetup.sh -executeConfigTools command; the cluster status should now be NORMAL and everything should be running as expected on the new version.



oracle@node1 : ~> env | grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/18.5.0/grid
/u01/app/18.5.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp -silent
crsctl query crs activeversion -f
crsctl check cluster -all





Reference : 



How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
12.2: How to Create GI Management Repository (Doc ID 2246123.1)
MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)







######################## Rollback Plan for Grid home   ####################


1) On the 1st node only, delete the MGMT database as the grid user:

srvctl status mgmtdb
$ORACLE_HOME/bin/dbca -silent -deleteDatabase -sourceDb  -MGMTDB


2) On both nodes, downgrade the grid home as the root user:

$ORACLE_HOME/crs/install/rootcrs.sh -downgrade


3) Remove the 19c grid home from the active cluster inventory (only from one node):

cd /sharearea/crs/grid19c/oui/bin


oracle$./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false  


4) On the first node only, update the active cluster inventory with the Oracle 12c grid home:

cd /crs/app/oracle/product/grid12c/oui/bin

oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true  



5) Start CRS from the Oracle 12c grid home on all nodes:

:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs



6) On the first node only, remove the MGMTDB service from the cluster:

srvctl remove mgmtdb -f 


7) On the first node only, create the MGMTDB container database:

/crs/app/oracle/product/grid12c/bin/dbca -silent -createDatabase -createAsContainerDatabase true \
  -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +MGMT \
  -datafileJarLocation /crs/app/oracle/product/grid12c/assistants/dbca/templates -characterset AL32UTF8 \
  -autoGeneratePasswords -skipUserTemplateCheck





8) On the first node only, configure the Management Database by running the Configuration Assistant:

$ORACLE_HOME/bin/mgmtca -local



9) On the first node only, as the grid user, run the post_gimr_ugdg.pl script from the 19c grid home:

$ $ORACLE_HOME/crs/install/post_gimr_ugdg.pl -downgrade -clusterType SC -destHome /u01/app/19.0.0/grid \
    -lowerVersion 12.2.0.1.0 -oraBase /u01/app/grid2





##########  Deconfigure 19c grid if the upgrade fails and the rollback steps are not possible  ##########




1) Deconfigure on both nodes from the 19c home:

/oragrid/app/19.3.0/grid/crs/install/rootcrs.sh -deconfig -force -verbose


2) Clean up the files below if present, on both nodes:

ls -l /oragrid/app/grid/crsdata/nodename/crsconfig/ckptGridHA_nodename.xml
ls -l /oragrid/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml

If they are still there, remove them manually with the "rm" command on all nodes.



3) If the GPnP profile differs between nodes, clean it up on all nodes as the grid user:

$ find <GRID_HOME>/gpnp/* -type f -exec rm -rf {} \;



4) Nullify all 3 OCR disks (do this only for the OCR disks; if you run it against a data disk, your data will be lost):

dd if=/dev/zero of=/dev/raw/raw13 bs=1048576 count=50




5) Run root.sh again on both nodes from the 12c home:

/oragrid/app/12.1.0/grid/root.sh



6) Add the database and instances back to the OCR using srvctl add database / srvctl add instance (example below).
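
For example (DBNAME, the instance names, and the 12c home path are placeholders):

srvctl add database -d DBNAME -o /oraswzj/app/oracle/product/12.1.0/dbhome_1
srvctl add instance -d DBNAME -i DBNAME1 -n SERVER1
srvctl add instance -d DBNAME -i DBNAME2 -n SERVER2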




######################## Rollback Plan for Database home   ####################




1) Fix the time zone if we ran the time zone upgrade script as part of the upgrade. To find which time zone file version your database is currently using, query V$TIMEZONE_FILE:


SQL> select * from  V$TIMEZONE_FILE;

For example:



If the query returns timezlrg_18.dat in the column V$TIMEZONE_FILE.FILENAME, then check if the file is present in the target Oracle Home:



Linux and UNIX

$ORACLE_HOME/oracore/zoneinfo/timezlrg_18.dat









2) On the 1st node only, downgrade from the 19c home.

Start the 1st instance only, with cluster_database set to false.

From the 19c home:
$ cd $ORACLE_HOME/bin
$ ./dbdowngrade

or

From the 19c home:
SQL> startup downgrade
SQL> @catdwgrd.sql
SQL> shutdown immediate




3) On the 1st node only, reload from the 12c home:

SQL> startup upgrade
SQL> @$ORACLE_HOME/rdbms/admin/catrelod.sql
SQL> shutdown immediate
SQL> startup
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql




4) On the 1st node only, gather stats:

EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
execute dbms_stats.gather_fixed_objects_stats;





#################### Oracle reference links   ####################

1)   Restart  Database  upgrade from failed  state  

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/restarting-upgrade-specific-phase-that-failed-using-p.html#GUID-29A9C778-7AE6-4C2C-85E6-4ADDA8466A34




2)  Downgrade  Oracle  database 

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/downgrading-oracle-standalone-cluster-12crelease2.html#GUID-C375E4DF-482B-4B14-8FF7-0332254F0547

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/downgrading-oracle-db-after-upgrade.html#GUID-0227D321-577E-47C5-A8C0-36991B8D0357




3)  Upgrade reference 

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/upgrading-oracle-database.html#GUID-CC88F262-3370-4FFA-8604-EDA9F20F56D9


https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/database-upgrade-guide.pdf

Oracle 19c - Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 19c (Doc ID 2539778.1)




4) Downgrade Grid 

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/downgrading-to-oracle-grid-infrastructure-12c-release-1-12-1.html#GUID-DD425330-02F5-49B0-9CC9-D292234C8A6F


5) To deconfigure Clusterware when the upgrade fails and rollback is not possible:

How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure (Doc ID 1377349.1)

How to Abort Grid Infrastructure Upgrade Without Taking a Cluster Outage (Doc ID 1910156.1)



6) Fix pre-upgrade issues:

Oracle 19c Preupgrade Log Warns Oracle Streams must be Removed Though no Streams is Setup (Doc ID 2666384.1)   



#################### Installing  19c client in silent mode     ####################


1) Download   19c software and unzip it 


2)  Prepare response file   /client/response/client_install.rsp


INVENTORY_LOCATION=/dmapps/product/oraInventory
ORACLE_HOME=/dmapps/product/release/oracle/oracle/product/12.1.0/client_1
ORACLE_BASE=/dmapps/product/release


3)  Run below  from install directory 

cd /software/client
$ ./runInstaller -ignoreSysPrereqs -showProgress -silent -responseFile  /software/client/response/client_install.rsp





####################  Issue faced during upgrade    ####################


1) Grid upgrade failed:
Since we were doing a silent upgrade, it failed a few times because the response file was not right. Once we drafted the right response file, it was a piece of cake.

2) MGMT database was not upgraded:
Since we missed applying the pre-upgrade patch, the MGMT database was not upgraded. We later upgraded the MGMT database manually.