Sunday, April 19, 2020

Oracle 19c 2 node Rac silent mode Installation





Below is our plan for a 2-node RAC installation. I will keep enhancing it periodically.
These steps still need to be tested and will be refined based on what we actually encounter. I am keeping this ready as a reference for the actual RAC installation.



In the case of a migration, we usually need a parallel environment built before the go-live date. If there is no budget for another environment, we planned the two options below as a backup plan:

1) Upgrade only the grid home on the existing servers and keep two database homes on the same servers, while the application continues business as usual.

2) Arrange one common test server for all applications to test application compatibility.






####################   Server preparation  ####################




I am not covering the topics below in detail; this is only a high-level list of requirements (a small verification sketch follows the list).



1) ssh configuration between both nodes 

2) Install required OS packages / rpms 

3) Arrange OCR LUNs 

4) Create grid and oracle user ids 

5) Arrange the required IPs 
     > 2 private IPs per node 
     > 1 public IP per node 
     > 1 VIP per node 
     > 3 SCAN VIPs 



6) DNS registration for the SCAN name 
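
A minimal sketch of a few of these checks, assuming the node names rac19c01/rac19c02 and the SCAN name used later in this post; adjust to your environment:

# ssh user equivalence for the grid user (run on node 1, repeat for the oracle user)
ssh-keygen -t rsa
ssh-copy-id grid@rac19c02

# spot-check a couple of required rpms (the full list is in the 19c install guide)
rpm -q bc binutils ksh libaio-devel sysstat

# confirm the SCAN name resolves to 3 addresses in DNS
nslookup rac19c-scan.novalocal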





####################   Grid  Software Installation ################ 




1)  Download and  unzip software 



As the grid user, download the Oracle Grid Infrastructure image files and extract
the files into the Grid home. For example:

$ mkdir -p /u01/app/19.0.0/grid
$ chown grid:oinstall /u01/app/19.0.0/grid
$ cd /u01/app/19.0.0/grid
$ unzip -q download_location/grid.zip



2)  Run cluvfy  to  identify gaps  

/u01/oragrid/19c/grid/runcluvfy.sh stage -pre crsinst -n rac19c01,rac19c02 -verbose | tee /tmp/cluvfy.out



3)  Prepare response file

While the other parameters are quite straightforward, I would like to throw some light on the parameter below. A full description and a sample response file are given at the end of this blog.


oracle.install.crs.config.networkInterfaceList: within this parameter, entries ending with :1 define the public interface and entries ending with :5 define the combined ASM and private interface (the full list of interface type codes is in the response file description at the end of the post).
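
For illustration, the value used in the sample below reads as follows (interface names and subnets are examples only):

# format: InterfaceName:SubnetAddress:InterfaceType  where 1=PUBLIC, 2=PRIVATE, 3=DO NOT USE, 4=ASM, 5=ASM & PRIVATE
oracle.install.crs.config.networkInterfaceList=eth0:192.168.10.0:1,eth1:192.168.30.0:5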





INVENTORY_LOCATION=/opt/oracle/oraInventory
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=sysasm
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.gpnp.scanName=rac19c-scan.novalocal
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=prod-bss
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=rac19c01:rac19c01-vip:HUB,rac19c02:rac19c02-vip:HUB
oracle.install.crs.config.networkInterfaceList=eth0:192.168.10.0:1,eth1:192.168.30.0:5
oracle.install.crs.config.useIPMI=false
oracle.install.asm.storageOption=ASM
oracle.install.asmOnNAS.configureGIMRDataDG=false
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=OCR_VOTE
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1p1,/dev/oracleasm/disks/OCR_VOTE2p1,/dev/oracleasm/disks/OCR_VOTE3p1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=false



4)  Run pre check 

/u01/app/19c/grid/gridSetup.sh -silent  -executePrereqs  -waitForCompletion  -responseFile /tmp/Silent_19cGrid.rsp
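
To review the outcome (a sketch; the exact log location depends on your INVENTORY_LOCATION setting, and exit-code behaviour should be confirmed against the installer output):

echo $?                                  # a non-zero exit status usually indicates failed checks
ls -ltr /opt/oracle/oraInventory/logs    # installer logs with the detailed check results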




5) Perform the Installation in Silent Mode using response file 

/u01/oragrid/19c/grid/gridSetup.sh -silent -ignorePrereqFailure  -waitForCompletion  -responseFile  /u01/oragrid/19c/grid/install/response/gridsetup.rsp


As a root user, execute the following script(s):
        1. /u01/app/19c/grid/root.sh 




6) Run root.sh on the local node first. After it completes successfully, you can run it in parallel on all the other nodes. Then run the configuration tools step: 



 /u01/app/19c/grid/gridSetup.sh  -executeConfigTools -responseFile /tmp/OBS_19cGrid.rsp [-silent]




7) Verify the Installation

/u01/oragrid/19c/grid/bin/cluvfy stage -post crsinst -n rac19c01,rac19c02 -verbose | tee /tmp/cluvfy.out






8) Create the DATA and FRA disk groups and mount them on all nodes

create diskgroup DATA external redundancy disk '/dev/asm-diski','/dev/asm-diskj','/dev/asm-diskk'  ATTRIBUTE 'compatible.rdbms' = '19.0', 'compatible.asm' = '19.0';

create diskgroup FRA external redundancy disk '/dev/asm-diskl','/dev/asm-diskm','/dev/asm-diskn'  ATTRIBUTE 'compatible.rdbms' = '19.0', 'compatible.asm' = '19.0';

srvctl start diskgroup -diskgroup data
srvctl start diskgroup -diskgroup fra
alter diskgroup all mount;
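
To confirm the disk groups are mounted on both nodes, a quick check (a sketch, using the disk group names above):

srvctl status diskgroup -diskgroup DATA
srvctl status diskgroup -diskgroup FRA
asmcmd lsdg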





####################  Database   installation  ############## 


1) On both nodes, create the new database home directory:

mkdir -p /oraswzj/app/oracle/product/19.3.0/dbhome_1

Then download the database software and unzip it into this directory.



2) On both nodes, create a run.sh script with the contents below and run it (silent, software-only install):


./runInstaller -ignorePrereq -waitforcompletion -silent \
  -responseFile /oraswzj/app/oracle/product/19.3.0/dbhome_1/install/response/db_install.rsp \
  oracle.install.option=INSTALL_DB_SWONLY \
  ORACLE_HOSTNAME=yourhostname \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION=/oragridzj/app/oraInventory \
  SELECTED_LANGUAGES=en,en_GB \
  ORACLE_BASE=/oraswzj/app/oracle \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.OSDBA_GROUP=oinstall \
  oracle.install.db.OSBACKUPDBA_GROUP=oinstall \
  oracle.install.db.OSDGDBA_GROUP=oinstall \
  oracle.install.db.OSKMDBA_GROUP=oinstall \
  oracle.install.db.OSRACDBA_GROUP=oinstall \
  SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
  DECLINE_SECURITY_UPDATES=true


As a root user, execute the following script(s):
        1. /u01/app/oracle/product/19.0.0/dbhome_1/root.sh




3) Enable the RAC option in the new 19c home on both servers:

$ cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle
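
To confirm the RAC option is now linked in (a sketch; when RAC is enabled the kcsm.o object is present in libknlopt.a):

ar -t $ORACLE_HOME/rdbms/lib/libknlopt.a | grep kcsm.o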




4) Create the database and add it to the OCR


dbca -silent -ignorePreReqs  -ignorePrereqFailure  -createDatabase -templateName General_Purpose.dbc -responseFile NO_VALUE \
-gdbname rac19c  -sid rac19c \
-createAsContainerDatabase TRUE \
-sysPassword lhr -systemPassword lhr -pdbAdminPassword lhr -dbsnmpPassword lhr \
-datafileDestination '+DATA' -recoveryAreaDestination '+FRA' \
-storageType ASM \
-characterset AL32UTF8 \
-sampleSchema true \
-totalMemory 1024 \
-databaseType MULTIPURPOSE \
-emConfiguration none \
-nodeinfo raclhr-19c-n1,raclhr-19c-n2



srvctl add database -d DBNAME  -o /oragrid/app/oracle/product/19.3.0/dbhome_1 -s '+DATA/DBNAME/PARAMETERFILE/spfile.310.951432401'  -role {PRIMARY | PHYSICAL_STANDBY}

srvctl add instance -d DBNAME   -i DBNAME1  -n SERVER1
srvctl add instance -d DBNAME   -i DBNAME2  -n SERVER2
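
A quick sanity check of the registration (a sketch, using the placeholder names above):

srvctl config database -d DBNAME
srvctl start database -d DBNAME
srvctl status database -d DBNAME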





5) Fix NetBackup for database backups 

A) Create a softlink in the 19c home lib directory pointing to the NetBackup library (see the example below). 
B) Correct the database home path in the backup script. 
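
For example, on Linux the softlink in A) might look like the following (the NetBackup client library path is an assumption; verify it on your server):

cd $ORACLE_HOME/lib
ln -s /usr/openv/netbackup/bin/libobk.so64 libobk.so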






####################   References  ####################





https://www.oracle.com/webfolder/technetwork/tutorials/architecture-diagrams/19/rac/pdf/rac-19c-architecture.pdf





################  Grid   Response file  description ####################


###############################################################################
## Copyright(c) Oracle Corporation 1998,2017. All rights reserved.           ##
##                                                                           ##
## Specify values for the variables listed below to customize                ##
## your installation.                                                        ##
##                                                                           ##
## Each variable is associated with a comment. The comment                   ##
## can help to populate the variables with the appropriate                   ##
## values.                                                                   ##
##                                                                           ##
## IMPORTANT NOTE: This file contains plain text passwords and               ##
## should be secured to have read permission only by oracle user             ##
## or db administrator who owns this installation.                           ##
##                                                                           ##
###############################################################################

###############################################################################
##                                                                           ##
## Instructions to fill this response file                                   ##
## To register and configure 'Grid Infrastructure for Cluster'               ##
##  - Fill out sections A,B,C,D,E,F and G                                    ##
##  - Fill out section G if OCR and voting disk should be placed on ASM      ##
##                                                                           ##
## To register and configure 'Grid Infrastructure for Standalone server'     ##
##  - Fill out sections A,B and G                                            ##
##                                                                           ##
## To register software for 'Grid Infrastructure'                            ##
##  - Fill out sections A,B and D                                            ##
##  - Provide the cluster nodes in section D when choosing CRS_SWONLY as     ##
##    installation option in section A                                       ##
##                                                                           ##
## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##
##                                                                           ##
## To add more nodes to the cluster                                          ##
##  - Fill out sections A and D                                              ##
##  - Provide the cluster nodes in section D when choosing CRS_ADDNODE as    ##
##    installation option in section A                                       ##
##                                                                           ##
###############################################################################

#------------------------------------------------------------------------------
# Do not change the following system generated value. 
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0

###############################################################################
#                                                                             #
#                          SECTION A - BASIC                                  #
#                                                                             #
###############################################################################


#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on  
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/u01/app/oraInventory

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
#   - CRS_CONFIG  : To register home and configure Grid Infrastructure for cluster
#   - HA_CONFIG   : To register home and configure Grid Infrastructure for stand alone server
#   - UPGRADE     : To register home and upgrade clusterware software of earlier release
#   - CRS_SWONLY  : To register Grid Infrastructure Software home (can be configured for cluster 
#                   or stand alone server later)
#   - HA_SWONLY   : To register Grid Infrastructure Software home (can be configured for stand 
#                   alone server later. This is only supported on Windows.)
#   - CRS_ADDNODE : To add more nodes to the cluster
#-------------------------------------------------------------------------------
oracle.install.option=CRS_CONFIG

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/u01/app/oracle

################################################################################
#                                                                              #
#                              SECTION B - GROUPS                              #
#                                                                              #
#   The following three groups need to be assigned for all GI installations.   #
#   OSDBA and OSOPER can be the same or different.  OSASM must be different    #
#   than the other two.                                                        #
#   The value to be specified for OSDBA, OSOPER and OSASM group is only for    #
#   Unix based Operating System.                                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=dba

#-------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=

#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=dba

################################################################################
#                                                                              #
#                           SECTION C - SCAN                                   #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=ol7-122-scan

#-------------------------------------------------------------------------------
# Specify an unused port number for SCAN service
#-------------------------------------------------------------------------------

oracle.install.crs.config.gpnp.scanPort=1521

################################################################################
#                                                                              #
#                           SECTION D - CLUSTER & GNS                         #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure the cluster as Extended, else
# specify 'false'
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.configureAsExtendedCluster=false


#-------------------------------------------------------------------------------
# Specify the Member Cluster Manifest file
#
# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.memberClusterManifestFile=

#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be 
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=ol7-122-cluster

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS.
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=

#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to 
# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=

#-------------------------------------------------------------------------------
# Specify the list of sites - only if configuring an Extended Cluster
#-------------------------------------------------------------------------------
oracle.install.crs.config.sites=

#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should be a comma-separated list of tuples.  Each tuple should be a
# colon-separated string that contains
# - 1 field if you have chosen CRS_SWONLY as installation option, or
# - 1 field if configuring an Application Cluster, or
# - 3 fields if configuring a Flex Cluster
# - 3 fields if adding more nodes to the configured cluster, or
# - 4 fields if configuring an Extended Cluster
# 
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
#    (Should be specified as AUTO if you have chosen 'auto configure for VIP'
#     i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to 
#    be provided only if Flex Cluster is being configured.
#    For Extended Cluster only HUB should be specified for all nodes
# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
#
# Examples
# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
# You can specify a range of nodes in the tuple using colon separated fields of format
# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=ol7-122-rac1.localdomain:ol7-122-rac1-vip.localdomain:HUB,ol7-122-rac2.localdomain:ol7-122-rac2-vip.localdomain:HUB

#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetAddress:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
#   - 1 : PUBLIC
#   - 2 : PRIVATE
#   - 3 : DO NOT USE
#   - 4 : ASM
#   - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=enp0s8:192.168.56.0:1,enp0s9:192.168.1.0:5,virbr0:192.168.122.0:3

#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup to store GIMR data.
# Specify 'true' if you would like to separate GIMR data with clusterware data, 
# else specify 'false'
# Value should be 'true' for DOMAIN cluster configurations
# Value can be true/false for STANDALONE cluster configurations.
#------------------------------------------------------------------------------
oracle.install.asm.configureGIMRDataDG=false

################################################################################
#                                                                              #
#                              SECTION E - STORAGE                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
#   - FLEX_ASM_STORAGE
#   - CLIENT_ASM_STORAGE
#
# Applicable only for MEMBERDB cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=                
################################################################################
#                                                                              #
#                               SECTION F - IPMI                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
#                                                                              #
#                                SECTION G - ASM                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# ASM Storage Type
# Allowed values are : ASM and ASM_ON_NAS
# ASM_ON_NAS applicable only if
# oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
oracle.install.asm.storageOption=ASM

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing OCR/VDSK 
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store OCR/VDSK files
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.ocrLocation=
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup on NAS to store GIMR data
# Specify 'true' if you would like to separate GIMR data with clusterware data, else
# specify 'false'
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#------------------------------------------------------------------------------
oracle.install.asmOnNAS.configureGIMRDataDG=false

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing GIMR data
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store the GIMR database
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
# and oracle.install.asmOnNAS.configureGIMRDataDG=true
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.gimrLocation=

#-------------------------------------------------------------------------------
# Password for SYS user of Oracle ASM
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=

#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=DATA

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following  
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=EXTERNAL

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=4

#-------------------------------------------------------------------------------
# Failure Groups for the disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create a ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/asm-disk3,,/dev/oracleasm/asm-disk2,,/dev/oracleasm/asm-disk4,,/dev/oracleasm/asm-disk1,

#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=/dev/oracleasm/asm-disk3,/dev/oracleasm/asm-disk2,/dev/oracleasm/asm-disk4,/dev/oracleasm/asm-disk1

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used create a ASM DiskGroup
#
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*

#-------------------------------------------------------------------------------
# Password for ASMSNMP account
# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=

#-------------------------------------------------------------------------------
# GIMR Storage data ASM DiskGroup
# Applicable only when 
# oracle.install.asm.configureGIMRDataDG=true
# Example: oracle.install.asm.GIMRDG.name=MGMT
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.name=

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following  
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.redundancy=

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.gimrDG.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.AUSize=1

#-------------------------------------------------------------------------------
# Failure Groups for the GIMR storage data ASM disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create GIMR data ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disksWithFailureGroupNames=

#-------------------------------------------------------------------------------
# List of disks to create GIMR data ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disks=

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.quorumFailureGroupNames=

#-------------------------------------------------------------------------------
# Configure AFD - ASM Filter Driver
# Applicable only for FLEX_ASM_STORAGE option
# Specify 'true' if you want to configure AFD, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.asm.configureAFD=false
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

################################################################################
#                                                                              #
#                             SECTION H - UPGRADE                              #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=false                
################################################################################
#                                                                              #
#                               MANAGEMENT OPTIONS                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the management option to use for managing Oracle Grid Infrastructure
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
# 2. NONE   -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
#-------------------------------------------------------------------------------
oracle.install.config.managementOption=NONE

#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsHost=

#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsPort=0

#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminUser=

#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminPassword=
################################################################################
#                                                                              #
#                      Root script execution configuration                     #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------------------------------
# Specify the root script execution mode.
#
#   - true  : To execute the root script automatically by using the appropriate configuration methods.
#   - false : To execute the root script manually.
#
# If this option is selected, password should be specified on the console.
#-------------------------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.executeRootScript=false

#--------------------------------------------------------------------------------------
# Specify the configuration method to be used for automatic root script execution.
#
# Following are the possible choices:
#   - ROOT
#   - SUDO
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.configMethod=
#--------------------------------------------------------------------------------------
# Specify the absolute path of the sudo program.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoPath=

#--------------------------------------------------------------------------------------
# Specify the name of the user who is in the sudoers list. 
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoUserName=
#--------------------------------------------------------------------------------------
# Specify the nodes batch map.
#
# This should be a comma separated list of node:batch pairs.
# During upgrade, you can sequence the automatic execution of root scripts
# by pooling the nodes into batches. 
# A maximum of three batches can be specified. 
# Installer will execute the root scripts on all the nodes in one batch before
# proceeding to next batch.
# Root script execution on the local node must be in Batch 1.
# Only one type of node role can be used for each batch.
# Root script execution should be done first in all HUB nodes and then, when
# existent, in all the LEAF nodes.
#
# Examples:
# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3
# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2
# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3
#
# Applicable only for UPGRADE install option. 
#--------------------------------------------------------------------------------------
oracle.install.crs.config.batchinfo=
################################################################################
#                                                                              #
#                           APPLICATION CLUSTER OPTIONS                        #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the Virtual hostname to configure virtual access for your Application
# The value to be specified for Virtual hostname is optional.
#-------------------------------------------------------------------------------
oracle.install.crs.app.applicationAddress=

Oracle 12c Zero downtime Patching on Grid Infrastructure





In this article we will throw some light on patching 12c grid and database homes using the opatchauto utility.



Oracle has also introduced Grid Infrastructure out-of-place (OOP) patching using opatchauto.
Starting with 12.2, this is a new feature of the latest version of opatchauto: you can perform out-of-place (OOP) patching.
This is helpful when the current Oracle homes do not have sufficient space left and the Oracle homes need to be moved to a new location.
This operation generally requires cloning the Oracle homes, patching them, and then switching the CRS and the databases, including all their services, to the newly cloned homes.
All of these operations can be performed in a single stretch by using OOP patching.

However, I am not covering OOP patching in this article because it is explained very nicely and in detail in Oracle Doc ID 2419319.1.






####################   12c  Pre  and Post Patching  ####################

1) If Data Guard is used, remember to defer log shipping and re-enable it after the patch is applied. 

2) If you are using ACLs or a sticky bit on the database home, remember to take a backup of the home permissions and reapply them after the patch is completed. 

3) Take a backup of the database and the oracle / grid homes. 

4) Compare database invalid objects pre and post patch (see the sketch after this list). 
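
A minimal sketch for item 4 (the staging table name is just an example):

-- before patching
create table system.invalid_before_patch as
select owner, object_name, object_type from dba_objects where status != 'VALID';

-- after patching, compare against the saved list
select owner, count(*) from dba_objects where status != 'VALID' group by owner;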





#############    12c Zero downtime grid and  database  Patching  ########## 



Zero Downtime patching provides a process and mechanism for rolling out a patch across a domain while allowing applications to continue to service requests.
You can apply a ZDT patch using OPatchAuto, which rolls out the change to one node at a time and allows a load balancer   to redirect incoming traffic to the remaining nodes until the change is complete.

The Opatch utility has automated the patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It operates by querying existing configurations and automating the steps required for patching each Oracle RAC database home of same version and the GI home.
The utility must be executed by an operating system (OS) user with root privileges, and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is in nonshared storage. The utility should not be run in parallel on the cluster nodes.
Depending on command line options specified, one invocation of opatchauto can patch the GI home, Oracle RAC database homes, or both GI and Oracle RAC database homes of the same Oracle release version as the patch. You can also roll back the patch with the same selectivity.
The current node and the DBs on it will be in the open state. The opatchauto utility will take them down automatically while patching . 

The steps below are designed assuming you are using a combo/bundle patch containing the Grid + Database PSU and the DB JVM patch. If you are not using a bundle patch, you may have to apply the PSU separately on the Grid and DB homes. 


Before running the actual patch we can analyze it first:

/oragridpx/app/12.1.0/grid/OPatch/opatchauto apply -analyze

Please note that only the CRS_HOME variable needs to be exported. Then run the following as root from the patch directory:

cd /oratmp/patches/Jan2020/30463691/30464119

export CRS_HOME=/oragridpx/app/12.1.0/grid

/oragridpx/app/12.1.0/grid/OPatch/opatchauto apply




In our case we take a full downtime, so due to shortage of time I personally patch the second node at the same time while the first node is still in progress. On the second node we can initiate the patch with the -nonrolling option while the first node's patch is ongoing:

/oragridpx/app/12.1.0/grid/OPatch/opatchauto apply   -nonrolling 




Please note that if the database uses Java, you need downtime to apply the JVM patch as below, with the oracle user, from any one node.

Also, if we are using ACFS we may have to apply the ACFS patch separately, just like the Java patch.


 cd  /oratmp/patches/Jan2020/30463691/30502041  
  
opatch apply 

srvctl start database -d Dbname 


alter system set cluster_database=false scope=spfile ;
startup upgrade 

$ORACLE_HOME/OPatch/datapatch -verbose


 sqlplus "/ as sysdba"

alter system set cluster_database=true  scope=spfile ;
srvctl start database -d Dbname 
@?/rdbms/admin/utlrp.sql
select count(*)  , owner    from dba_objects where  status !='VALID'   group by owner ; 
select * from dba_registry_sqlpatch ; 





-->  verify  grid services are up 

crs_stat | awk -F= '/NAME=/{n=$2}/TYPE=/{t=$2}/TARGET=/{g=$2}/STATE=/{s=$2; printf("%-45s%-15s%-10s%-30s\n", n,t,g,s)}'
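
Note that crs_stat is deprecated from 11.2 onwards; the supported equivalent check is simply:

crsctl stat res -t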





#############    Resume from failed  Psu Patch  ########## 

In 12c, we have the patching sessions with their configuration in JSON files.

So go to directory $grid_home/OPatch/auto/dbsessioninfo/
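
For example, to locate the failed session and its id (a sketch; the file layout can vary by OPatch version):

ls -ltr /oragridpx/app/12.1.0/grid/OPatch/auto/dbsessioninfo/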


Run the following with the grid user:


cd /oratmp/patches/Jan2020/30463691/30464119

export CRS_HOME=/oragridpx/app/12.1.0/grid

/oragridpx/app/12.1.0/grid/OPatch/opatchauto resume with session id "DZSQ" 





#############   How to get Clusterware state out of "Rolling Upgrade"   ########## 


At the time of patching or upgrade we sometimes see the error below; we then need to take the cluster out of rolling upgrade/patch mode.

PRVG-13410 : The state of Oracle Clusterware is not consistent to allow upgrade. [Expected = "NORMAL" ; Found = "ROLLING UPGRADE"].


Checks Done : 
 crsctl query crs releasepatch
 crsctl query crs softwarepatch
crsctl query crs activeversion -f   
crsctl query crs softwareversion  

ASMCMD [+] > showversion
ASMCMD [+] > showclusterstate
+ASM1:>SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') as "SYSTEM_STATE" FROM dual;
kfod op=patchlvl
kfod op=patches



Fix Tried : 
ASMCMD> showclusterstate
$crsctl stop rollingpatch
crsctl stop rollingupgrade
+ASM1:>ALTER SYSTEM STOP ROLLING PATCH;
+ASM1:>ALTER SYSTEM STOP ROLLING MIGRATION;
$GI_HOME/crs/install/rootcrs.pl -prepatch   
$GI_HOME/bin/clscfg -patch
$GI_HOME/crs/install/rootcrs.pl -postpatch



Known Issue : 
BUG 30140462 - CLUSTERWARE STATE IS STILL ROLLING UPGRADE AFTER PERFORMING OFFLINE DOWNGRADE

Bug 25197395 - Command 'crsctl startupgrade' hangs having many Databases (Doc ID 25197395.8)

rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched (Doc ID 2494827.1)




 For some reason,  patches are available from “opatch lsinventory”, but they are missing from kfod output:


-- as super user
$GRID_HOME/crs/install/rootcrs.sh -prepatch 

-- as grid owner,
$GRID_HOME/bin/patchgen commit -pi 12345678 
$GRID_HOME/bin/patchgen commit -pi 23456789 

-- as super user
$GRID_HOME/crs/install/rootcrs.sh -postpatch






#############   Rollback  Psu and   DB Java patch    ########## 


To roll back the PSU, note again that only the CRS_HOME variable needs to be exported, and run the following as root:
cd /oratmp/patches/Jan2020/30463691/30464119

export CRS_HOME=/oragridpx/app/12.1.0/grid

/oragridpx/app/12.1.0/grid/OPatch/opatchauto rollback 

/oragridpx/app/12.1.0/grid/OPatch/opatch lspatches





To roll back the database Java patch: roll back the binary on both nodes, but run the database (datapatch) part only from any one node, with the oracle user.


 cd  /oratmp/patches/Jan2020/30463691/30502041  
  
opatch rollback -id 27105253

srvctl start database -d Dbname 


alter system set cluster_database=false scope=spfile ;
startup upgrade 

$ORACLE_HOME/OPatch/datapatch -verbose


 sqlplus "/ as sysdba"

alter system set cluster_database=true  scope=spfile ;
srvctl start database -d Dbname 
@?/rdbms/admin/utlrp.sql
select count(*)  , owner    from dba_objects where  status !='VALID'   group by owner ; 
select * from dba_registry_sqlpatch ; 





For troubleshooting reference : 

1) https://docs.oracle.com/cd/E24628_01/doc.121/e39376/troubleshooting_opatchauto.htm#BJFBIDEF




#############   Oracle 12c Patching Not using opatchauto    ########## 


 

Download and copy both OCT2020PSU and bugfix PSU and extract under /u01/src/

==================================================
For Grid Infrastructure Home, as home user:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31771877
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31772784
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31773437
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31780966



For Database home, as home user:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31771877
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/src/31750108/31772784

==================================================

Run the pre root script.

If this is a GI Home, as the root user execute:

/u01/app/19.0.0/grid/crs/install/rootcrs.sh -prepatch

==================================================

3) From the GI home, as the oracle user:

GRID_HOME=/u01/app/19.0.0/grid
export PATH=$GRID_HOME/bin:$GRID_HOME/OPatch:$PATH:/usr/css/bin


Including the bugfix:
opatch apply -local /u01/src/32242453/32242453 -oh /u01/app/19.0.0/grid
opatch apply -local /u01/src/31750108/31773437 -oh /u01/app/19.0.0/grid
opatch apply -local /u01/src/31750108/31771877 -oh /u01/app/19.0.0/grid
opatch apply -local /u01/src/31750108/31780966  -oh /u01/app/19.0.0/grid



4. Apply on the DB home.

ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:/usr/css/bin
/u01/src/31750108/31772784/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/19.0.0/dbhome_1

opatch apply -local /u01/src/32242453/32242453 -oh /u01/app/oracle/product/19.0.0/dbhome_1
opatch apply -local /u01/src/31750108/31771877 -oh /u01/app/oracle/product/19.0.0/dbhome_1

/u01/src/31750108/31772784/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/19.0.0/dbhome_1


==================================================

Run the post root script.

If this is a GI Home, as the root user execute:

/u01/app/19.0.0/grid/crs/install/rootcrs.sh -postpatch
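
After the binaries are patched on all nodes and the cluster is back up, remember to load the SQL portion of the patch into each database once, from one node (a reminder sketch; the database must be open):

$ORACLE_HOME/OPatch/datapatch -verbose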




################################
Known Issue :   CRS Home Not Stopped 
################################
Check the steps below to fix that issue (Doc ID 2703720.1). This happened recently on the cbem db1a UAT server.
 
1) As grid user , execute :
/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase $ORACLE_BASE -writeckpt -name ROOTCRS_PREPATCH -state START
2) Verify:
/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase $ORACLE_BASE -chkckpt -name ROOTCRS_PREPATCH -status
START
3) Check for all nodes of the cluster and perform the above actions where required.
4) Re-execute opatchauto apply command or 'rootcrs.sh -prepatch' as required



  " Bug 33036568 : HAIP FLIP CAUSING NODE REBOOT DURING 19.11 GI RU PATCHING ON IBM AIX", 
    ASM process is terminated by LMON process during postpatch on grid home 19.11





#############   Issues commonly faced   ########## 



1) While applying datapatch on the database we may face ORA-20001, for which the MOS document below can be referred to:

Queryable Patch Inventory - Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table (Doc ID 1602089.1)

verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table


select dbms_sqlpatch.verify_queryable_inventory from dual;

VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------

ORA-20001: Latest xml inventory is not loaded into table


alter system set "_bug27355984_xt_preproc_timeout"=1000 scope=spfile ;
alter system set "_enable_ptime_update_for_sys"=TRUE   scope=spfile ;


After patching :
alter system reset  "_bug27355984_xt_preproc_timeout"  scope=spfile ;
alter system reset  "_enable_ptime_update_for_sys"   scope=spfile ;




Similarly, if ORA-20008 is reported we need to use the event below:

>> Before patching 
SQL: alter session set events '18219841 trace name context forever';


>> After patching 
alter session set events '18219841 trace name context off';





2) Session disconnects during opatchauto.

We can rerun opatchauto again; it will internally determine the missing patches and apply them.



3) After a failed opatchauto run, it sometimes happens that CRS does not start on the node where patching failed. When we try to rerun opatchauto, it reports that CRS is not configured.

I personally followed the steps below (a quick cross-node check is sketched after this list):

A) Stop CRS on the second node, where we are not patching, to avoid a CRS version mismatch between the nodes.

B) Run the following on the node where we were getting the error (this was part of post-patching up to 11g):

/optware/grid/11.2.0.4/crs/install/rootcrs.pl -patch

C) Rerun opatchauto on the node where it failed.
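
Before rerunning, a quick check to compare the patch level and stack state across nodes (run on each node):

crsctl query crs softwarepatch
crsctl query crs activeversion -f
crsctl check crs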




4) Oracle home space issues during and after patching.

Before patching we can run the command below to ensure previous patch backups are removed. Refer to Doc ID 550522.1 to troubleshoot space issues during patching.

$ opatch util cleanup


How To Avoid Disk Full Issues Because OPatch Backups Take Big Amount Of Disk Space. (Doc ID 550522.1)
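
To check how much space the OPatch backups are currently consuming (a quick sketch):

du -sh $ORACLE_HOME/.patch_storage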





#############   Old 11g  approach of applying  GI  patch  ########## 


In case the resume approach above does not help, we must have clarity on the traditional way of applying a GI patch. By checking the logs to see at which step the patch failed, we can proceed accordingly.

Hence the old steps below are shared only as a backup reference.




APPLY PATCH TO GRID HOME => /optware/grid/11.2.0.4

************************



1. Run the pre root script. If this is a GI Home, as the root user execute

***********************************************************************


/optware/grid/11.2.0.4/crs/install/rootcrs.pl -unlock

2. As the GI home owner execute, apply the OCW patch
**************************************************
/optware/grid/11.2.0.4/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948348

3. As the GI home owner execute, apply the ACFS patch
**************************************************
/optware/grid/11.2.0.4/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948355


4. As the GI home owner execute, apply the DB PSU Patch
*****************************************************
/optware/grid/11.2.0.4/OPatch/opatch apply -oh /u01/app/11204 -local /u02/patch/22191577/21948347


5. Run the post script. As the root user execute:
********************************************** 
/optware/grid/11.2.0.4/rdbms/install/rootadd_rdbms.sh

6. If this is a GI Home, as the root user execute:
*********************************************** 
/optware/grid/11.2.0.4/crs/install/rootcrs.pl -patch