Sunday, April 26, 2020

Oracle 11g Grid and Database patching -- RAC


                           Oracle 11g Grid and Database patching 


  
 ###########  Applying Patch on GI  Home    ###########


 APPLY PATCH TO GRID HOME => /optware/grid/11.2.0.4

************************

1. Run the pre-patch root script. As the root user, execute:
***********************************************************************
/optware/grid/11.2.0.4/crs/install/rootcrs.pl -unlock


2. Apply the OCW patch. As the Grid home owner, execute:
**************************************************
/optware/grid/11.2.0.4/OPatch/opatch napply -oh /optware/grid/11.2.0.4 -local /u02/patch/22191577/21948348


3. Apply the ACFS patch. As the Grid home owner, execute:
**************************************************
/optware/grid/11.2.0.4/OPatch/opatch napply -oh /optware/grid/11.2.0.4 -local /u02/patch/22191577/21948355


4. Apply the DB PSU patch. As the Grid home owner, execute:
*****************************************************
/optware/grid/11.2.0.4/OPatch/opatch apply -oh /optware/grid/11.2.0.4 -local /u02/patch/22191577/21948347


5. Run the post script. As the root user, execute:
********************************************** 
/optware/grid/11.2.0.4/rdbms/install/rootadd_rdbms.sh


6. Relock the GI home and restart the stack. As the root user, execute:
*********************************************** 
/optware/grid/11.2.0.4/crs/install/rootcrs.pl -patch
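
For repeatability, steps 1 through 6 can be collected into one per-node script. A minimal sketch only, assuming the paths and patch numbers shown above and that the Grid home owner has sudo rights for the root steps:

#!/bin/bash
# Sketch: apply the GI PSU on one node.
# Root steps run via sudo; opatch steps run as the Grid home owner.
set -e
GRID_HOME=/optware/grid/11.2.0.4
PATCH_BASE=/u02/patch/22191577

sudo $GRID_HOME/crs/install/rootcrs.pl -unlock                              # step 1
$GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local $PATCH_BASE/21948348  # step 2: OCW
$GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local $PATCH_BASE/21948355  # step 3: ACFS
$GRID_HOME/OPatch/opatch apply  -oh $GRID_HOME -local $PATCH_BASE/21948347  # step 4: DB PSU
sudo $GRID_HOME/rdbms/install/rootadd_rdbms.sh                              # step 5
sudo $GRID_HOME/crs/install/rootcrs.pl -patch                               # step 6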









 ###########  Applying Patch on Database Binary  ###########




APPLY PATCH TO DATABASE HOME => /optware/oracle/11.2.0.4/db_1
*****************************

Stop the CRS-managed resources running from the DB home. As the database home owner, execute:
==========================================================================================================================

<ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file> -n <node name>


/optware/oracle/11.2.0.4/db_1/bin/srvctl stop home -o /optware/oracle/11.2.0.4/db_1 -s /optware/oracle/tmp/status.log -n Servername
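
The status file records which resources srvctl stopped; srvctl start home later reads the same file to restart them, so it is worth a quick look before patching:

cat /optware/oracle/tmp/status.log     # resources stopped from this home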




1. Run the pre-patch script for the DB component of the patch. As the database home owner, execute:
=======================================================================================
<UNZIPPED_PATCH_LOCATION>/<OCW patch#>/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
/optware/grid/11.2.0.4/PATCH/oct2018/28689170/28429134/27735020/custom/scripts/prepatch.sh -dbhome /optware/oracle/11.2.0.4/db_1



2. Apply the OCW and DB PSU patches. As the database home owner, execute:
====================================================================
<ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/<OCW patch#>/custom/server/<OCW patch#>
/optware/oracle/11.2.0.4/db_1/OPatch/opatch napply -oh /optware/oracle/11.2.0.4/db_1 -local  /optware/grid/11.2.0.4/PATCH/oct2018/28689170/28429134/27735020/custom/server/27735020


<ORACLE_HOME>/OPatch/opatch apply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/<DB PSU patch#>
/optware/oracle/11.2.0.4/db_1/OPatch/opatch apply -oh /optware/oracle/11.2.0.4/db_1 -local /optware/grid/11.2.0.4/PATCH/oct2018/28689170/28429134/28204707




3. Apply the OJVM patch. As the database home owner, execute:
======================================================

/optware/oracle/11.2.0.4/db_1/OPatch/opatch apply -oh /optware/oracle/11.2.0.4/db_1 -local /optware/grid/11.2.0.4/PATCH/oct2018/28689170/28440700




4. Run the post-patch script for the DB component of the patch. As the database home owner, execute:
========================================================================================

<UNZIPPED_PATCH_LOCATION>/<OCW patch#>/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
/optware/grid/11.2.0.4/PATCH/oct2018/28689170/28429134/27735020/custom/scripts/postpatch.sh -dbhome /optware/oracle/11.2.0.4/db_1




5. Restart the CRS-managed resources. As the database home owner, execute:
==========================================================================
<ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file> -n <node name>
/optware/oracle/11.2.0.4/db_1/bin/srvctl start home -o /optware/oracle/11.2.0.4/db_1 -s /optware/oracle/tmp/status.log -n Servername 
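
To confirm the resources came back, srvctl status home takes the same arguments; note that it writes its own state to the file given with -s, so use a separate file (status_post.log below is just an illustrative name):

/optware/oracle/11.2.0.4/db_1/bin/srvctl status home -o /optware/oracle/11.2.0.4/db_1 -s /optware/oracle/tmp/status_post.log -n Servername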



 Check the logs under /optware/grid/11.2.0.4/cfgtoollogs/.
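
It is also worth confirming the inventory of each home before and after patching with the standard opatch check (run as the respective home owner):

/optware/grid/11.2.0.4/OPatch/opatch lsinventory -oh /optware/grid/11.2.0.4
/optware/oracle/11.2.0.4/db_1/OPatch/opatch lsinventory -oh /optware/oracle/11.2.0.4/db_1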







 ###########  Applying Patch on Database after Binary Patch  ###########



Basic check Pre & Post:
=======================

# db.dat contains one database name per line; the $ signs must be escaped
# inside the unquoted heredoc so the shell does not expand them
for i in `cat /optware/oracle/admin/mig11204/db.dat`
do
. ~oracle/.profile.$i
sqlplus "/ as sysdba" <<EOF
select name from v\$database;
set lines 300 pages 100 feed off
col comp_id for a10
col comp_name for a60
col version for a10
col status for a10
col ACTION_TIME for a40
col ACTION for a10
col namespace for a20
col comments for a30
select a.name,comp_id,comp_name,version,status from dba_registry, v\$database a;
select a.name,ACTION_TIME,action,namespace,version,comments from sys.registry\$history, v\$database a;
EOF
done






To apply the patch manually (db.dat contains one database name per line):
=========================================================================
The example below is from an earlier PSU; adjust the sqlpatch patch number to match your PSU.

for i in `cat /optware/oracle/admin/mig11204/db.dat`
do
. ~oracle/.profile.$i
sqlplus "/ as sysdba" <>patching1.log
STARTUP UPGRADE
@$ORACLE_HOME/sqlpatch/27923163/postinstall.sql
SHUTDOWN immediate
STARTUP
@$ORACLE_HOME/rdbms/admin/catbundle.sql psu apply
@$ORACLE_HOME/rdbms/admin/utlrp.sql
EOF
done



To start all databases (db.dat contains one database name per line):
======================

for i in `cat /optware/oracle/admin/mig11204/db.dat`
do
. ~oracle/.profile.$i
sqlplus "/ as sysdba" <<EOF
startup
EOF
done







 ###########  Rollback  Patch on Database ###########


Rollback  Patch 29251270: OJVM PATCH SET UPDATE 11.2.0.4.190416

cd $ORACLE_HOME/OPatch
./opatch rollback -id 29251270


On RAC, cluster_database is set to FALSE below so that the database can be opened in UPGRADE mode on a single instance while postdeinstall.sql runs.

cd $ORACLE_HOME/sqlpatch/29251270
sqlplus /nolog
CONNECT / AS SYSDBA
STARTUP
alter system set cluster_database=false scope=spfile;
SHUTDOWN
STARTUP UPGRADE
@postdeinstall.sql
alter system set cluster_database=true scope=spfile;
SHUTDOWN
STARTUP




Rollback  Patch 29141056: DATABASE PATCH SET UPDATE 11.2.0.4.190416
  
Shut down the instance on the node.

cd $ORACLE_HOME/OPatch
opatch rollback -id 29141056

  
Start all database instances running from the Oracle home.  


cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
CONNECT / AS SYSDBA
STARTUP
@catbundle_PSU_<database name>_ROLLBACK.sql
QUIT

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
CONNECT / AS SYSDBA
@utlrp.sql
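
To confirm the rollback took effect in both the binaries and the database, a quick check (a sketch; the \$ escape is needed inside a shell heredoc):

$ORACLE_HOME/OPatch/opatch lsinventory | grep 29141056    # should return nothing after rollback
sqlplus -s "/ as sysdba" <<EOF
set lines 200
select action_time, action, version, comments from sys.registry\$history order by action_time;
EOF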







 ########### Hot patching  ###########


Reference 

RDBMS Online Patching aka Hot Patching (Doc ID 761111.1)



Among the high-availability features introduced in 11g is hot (online) patching. Hot patching allows the DBA to install, enable, and disable a patch online, without disruption to Oracle services: hot patches do not require an instance shutdown, and they are installed with the traditional OPatch tool, which can also detect conflicts between hot patches.

Not all 11g patches can be installed in hot-patch mode, so first find out whether the patch supports the online-apply feature. You can use the following command to determine whether this mode is allowed.



How to check whether a patch is an online patch:

opatch query -is_online_patch <PatchLocation>

Or 

$ cd <PATCH_TOP>/10188727
$ opatch query -all online




Applying the patch:

opatch apply online -connectString <SID>:<USERNAME>:<PASSWORD>:<NODE>


For RAC you can list all of the instances:
opatch apply online -connectString <SID>:<USERNAME>:<PASSWORD>:<NODE1>,<SID2>:<USERNAME>:<PASSWORD>:<NODE2>,...



To see patches installed in the ORACLE_HOME:

$ORACLE_HOME/OPatch/opatch lsinventory -details






How are online patches rolled back?

You can roll back the patch using opatch:

opatch rollback -id <patchID> -connectString <SID>:<USERNAME>:<PASSWORD>:<NODE1>,<SID2>:<USERNAME>:<PASSWORD>:<NODE2>, ...

The USERNAME and PASSWORD are those of a user with SYSDBA privileges; they can be left blank if the OS user applying the patch has the SYSDBA privilege. The NODE is optional if the patch is being applied locally.
Rolling back with opatch does not remove the patch files; it simply disables the patch (rolls it back) and removes the patch entry from the inventory. This behavior may change in the future.








Sunday, April 19, 2020

Oracle 19c 2 node RAC silent mode Installation

    Oracle 19c 2 node RAC silent mode Installation 




Below is our plan for a two-node RAC installation, which we will keep enhancing periodically.
These steps are still to be tested and will be refined based on what we actually encounter; I am keeping this ready as a reference for the actual RAC installation.



For a migration, in most cases we need a parallel environment built before the go-live date. If there is no budget to build another environment, we planned the two options below as a backup plan:

1) Upgrade only the Grid home on the existing servers, keeping two database homes on the same servers, while the application continues running business on the old home.

2) Arrange one common test server for all applications to test application compatibility.






####################   Server preparation  ####################




I am not covering the topics below in detail, just listing the high-level requirements:



1) ssh configuration between both nodes 

2) Install required OS packages / RPMs 

3) Arrange OCR LUNs 

4) Create the grid and oracle user IDs 

5) Arrange the required IPs:
     > 2 private IPs per node 
     > 1 public IP per node 
     > 1 VIP per node 
     > 3 SCAN VIPs 



6) DNS registration for the SCAN 





####################   Grid  Software Installation ################ 




1)  Download and  unzip software 



As the grid user, download the Oracle Grid Infrastructure image files and extract
the files into the Grid home. For example:

$ mkdir -p /u01/app/19.0.0/grid
$ chown grid:oinstall /u01/app/19.0.0/grid
$ cd /u01/app/19.0.0/grid
$ unzip -q download_location/grid.zip



2)  Run cluvfy  to  identify gaps  

/u01/oragrid/19c/grid/runcluvfy.sh stage -pre crsinst -n rac19c01,rac19c02 -verbose | tee /tmp/cluvfy.out
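
The cluvfy output is long; failures stand out more if you filter the saved log:

grep -iE "failed|warning" /tmp/cluvfy.out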



3)  Prepare the response file

While the other parameters are quite straightforward, I would like to throw light on one parameter in particular. A full description and sample response file is included at the end of this post.

oracle.install.crs.config.networkInterfaceList: entries with interface type :1 define the public interface, and entries with type :5 define the combined ASM and private interface.





INVENTORY_LOCATION=/opt/oracle/oraInventory
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=sysasm
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.gpnp.scanName=rac19c-scan.novalocal
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=prod-bss
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=rac19c01:rac19c01-vip:HUB,rac19c02:rac19c02-vip:HUB
oracle.install.crs.config.networkInterfaceList=eth0:192.168.10.0:1,eth1:192.168.30.0:5
oracle.install.crs.config.useIPMI=false
oracle.install.asm.storageOption=ASM
oracle.install.asmOnNAS.configureGIMRDataDG=false
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=OCR_VOTE
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1p1,/dev/oracleasm/disks/OCR_VOTE2p1,/dev/oracleasm/disks/OCR_VOTE3p1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=false
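
Since the response file carries plaintext passwords (SYSASMPassword, monitorPassword above), restrict it to the installing user before running the installer, as the Oracle template at the end of this post also advises:

chmod 600 /tmp/Silent_19cGrid.rsp
ls -l /tmp/Silent_19cGrid.rsp     # expect -rw------- for the grid user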



4)  Run the prerequisite check 

/u01/app/19c/grid/gridSetup.sh -silent  -executePrereqs  -waitForCompletion  -responseFile /tmp/Silent_19cGrid.rsp




5) Perform the installation in silent mode using the response file 

/u01/oragrid/19c/grid/gridSetup.sh -silent -ignorePrereqFailure  -waitForCompletion  -responseFile  /u01/oragrid/19c/grid/install/response/gridsetup.rsp


As a root user, execute the following script(s):
        1. /u01/app/19c/grid/root.sh 




6) Run root.sh on the local node first. After successful completion, you can run the script in parallel on all other nodes, then execute the configuration tools:



 /u01/app/19c/grid/gridSetup.sh  -executeConfigTools -responseFile /tmp/OBS_19cGrid.rsp [-silent]




7) Verify the Installation

/u01/oragrid/19c/grid/bin/cluvfy stage -post crsinst -n rac19c01,rac19c02 -verbose | tee /tmp/cluvfy.out






Create the DATA and FRA disk groups (as the grid user, connected to the ASM instance, e.g. sqlplus / as sysasm):

create diskgroup DATA external redundancy disk '/dev/asm-diski','/dev/asm-diskj','/dev/asm-diskk'  ATTRIBUTE 'compatible.rdbms' = '19.0', 'compatible.asm' = '19.0';

create diskgroup FRA external redundancy disk '/dev/asm-diskl','/dev/asm-diskm','/dev/asm-diskn'  ATTRIBUTE 'compatible.rdbms' = '19.0', 'compatible.asm' = '19.0';

srvctl start diskgroup -diskgroup data
srvctl start diskgroup -diskgroup fra
alter diskgroup all mount;
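
A quick sanity check that both disk groups are mounted, as the grid user:

asmcmd lsdg                                  # DATA and FRA should show state MOUNTED
srvctl status diskgroup -diskgroup DATA
srvctl status diskgroup -diskgroup FRA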





####################  Database   installation  ############## 


1)    On both nodes, create the Oracle home directory, then download and unzip the database software into it:

mkdir -p /oraswzj/app/oracle/product/19.3.0/dbhome_1



2)    On both nodes, run runInstaller (software-only install) with the options below:


./runInstaller -ignorePrereq -waitforcompletion -silent \
  -responseFile /oraswzj/app/oracle/product/19.3.0/dbhome_1/install/response/db_install.rsp \
  oracle.install.option=INSTALL_DB_SWONLY \
  ORACLE_HOSTNAME=yourhostname \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION=/oragridzj/app/oraInventory \
  SELECTED_LANGUAGES=en,en_GB \
  ORACLE_BASE=/oraswzj/app/oracle \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.OSDBA_GROUP=oinstall \
  oracle.install.db.OSBACKUPDBA_GROUP=oinstall \
  oracle.install.db.OSDGDBA_GROUP=oinstall \
  oracle.install.db.OSKMDBA_GROUP=oinstall \
  oracle.install.db.OSRACDBA_GROUP=oinstall \
  SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
  DECLINE_SECURITY_UPDATES=true


As a root user, execute the following script(s):
        1. /u01/app/oracle/product/19.0.0/dbhome_1/root.sh




3) Enable RAC in the new 19c home on both servers:

$ cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle
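
You can verify that the RAC option is now linked in by checking which object sits in libknlopt.a (a common check: kcsm.o means rac_on, ksnkcs.o means rac_off):

ar -t $ORACLE_HOME/rdbms/lib/libknlopt.a | grep -E 'kcsm|ksnkcs'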




4) Create the database and add it to the OCR


dbca -silent -ignorePreReqs  -ignorePrereqFailure  -createDatabase -templateName General_Purpose.dbc -responseFile NO_VALUE \
-gdbname rac19c  -sid rac19c \
-createAsContainerDatabase TRUE \
-sysPassword lhr -systemPassword lhr -pdbAdminPassword lhr -dbsnmpPassword lhr \
-datafileDestination '+DATA' -recoveryAreaDestination '+FRA' \
-storageType ASM \
-characterset AL32UTF8 \
-sampleSchema true \
-totalMemory 1024 \
-databaseType MULTIPURPOSE \
-emConfiguration none \
-nodeinfo raclhr-19c-n1,raclhr-19c-n2



srvctl add database -d DBNAME  -o /oragrid/app/oracle/product/19.3.0/dbhome_1 -s '+DATA/DBNAME/PARAMETERFILE/spfile.310.951432401'  -role {PRIMARY | PHYSICAL_STANDBY}

srvctl add instance -d DBNAME   -i DBNAME1  -n SERVER1
srvctl add instance -d DBNAME   -i DBNAME2  -n SERVER2
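
After registration, confirm the OCR entries and start the database under Clusterware control (DBNAME as used above):

srvctl config database -d DBNAME       # shows home, spfile, role and instances
srvctl start database -d DBNAME
srvctl status database -d DBNAME       # expect both instances running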





5)  Fix NetBackup for database backups:

A)  Create a soft link in <19c home>/lib pointing to the NetBackup library.
B)  Correct the database home path in the backup script.






####################   References  ####################





https://www.oracle.com/webfolder/technetwork/tutorials/architecture-diagrams/19/rac/pdf/rac-19c-architecture.pdf





################  Grid   Response file  description ####################


###############################################################################
## Copyright(c) Oracle Corporation 1998,2017. All rights reserved.           ##
##                                                                           ##
## Specify values for the variables listed below to customize                ##
## your installation.                                                        ##
##                                                                           ##
## Each variable is associated with a comment. The comment                   ##
## can help to populate the variables with the appropriate                   ##
## values.                                                                   ##
##                                                                           ##
## IMPORTANT NOTE: This file contains plain text passwords and               ##
## should be secured to have read permission only by oracle user             ##
## or db administrator who owns this installation.                           ##
##                                                                           ##
###############################################################################

###############################################################################
##                                                                           ##
## Instructions to fill this response file                                   ##
## To register and configure 'Grid Infrastructure for Cluster'               ##
##  - Fill out sections A,B,C,D,E,F and G                                    ##
##  - Fill out section G if OCR and voting disk should be placed on ASM      ##
##                                                                           ##
## To register and configure 'Grid Infrastructure for Standalone server'     ##
##  - Fill out sections A,B and G                                            ##
##                                                                           ##
## To register software for 'Grid Infrastructure'                            ##
##  - Fill out sections A,B and D                                            ##
##  - Provide the cluster nodes in section D when choosing CRS_SWONLY as     ##
##    installation option in section A                                       ##
##                                                                           ##
## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##
##                                                                           ##
## To add more nodes to the cluster                                          ##
##  - Fill out sections A and D                                              ##
##  - Provide the cluster nodes in section D when choosing CRS_ADDNODE as    ##
##    installation option in section A                                       ##
##                                                                           ##
###############################################################################

#------------------------------------------------------------------------------
# Do not change the following system generated value. 
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0

###############################################################################
#                                                                             #
#                          SECTION A - BASIC                                  #
#                                                                             #
###############################################################################


#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on  
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/u01/app/oraInventory

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
#   - CRS_CONFIG  : To register home and configure Grid Infrastructure for cluster
#   - HA_CONFIG   : To register home and configure Grid Infrastructure for stand alone server
#   - UPGRADE     : To register home and upgrade clusterware software of earlier release
#   - CRS_SWONLY  : To register Grid Infrastructure Software home (can be configured for cluster 
#                   or stand alone server later)
#   - HA_SWONLY   : To register Grid Infrastructure Software home (can be configured for stand 
#                   alone server later. This is only supported on Windows.)
#   - CRS_ADDNODE : To add more nodes to the cluster
#-------------------------------------------------------------------------------
oracle.install.option=CRS_CONFIG

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/u01/app/oracle

################################################################################
#                                                                              #
#                              SECTION B - GROUPS                              #
#                                                                              #
#   The following three groups need to be assigned for all GI installations.   #
#   OSDBA and OSOPER can be the same or different.  OSASM must be different    #
#   than the other two.                                                        #
#   The value to be specified for OSDBA, OSOPER and OSASM group is only for    #
#   Unix based Operating System.                                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=dba

#-------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=

#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=dba

################################################################################
#                                                                              #
#                           SECTION C - SCAN                                   #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=ol7-122-scan

#-------------------------------------------------------------------------------
# Specify an unused port number for SCAN service
#-------------------------------------------------------------------------------

oracle.install.crs.config.gpnp.scanPort=1521

################################################################################
#                                                                              #
#                           SECTION D - CLUSTER & GNS                         #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure the cluster as Extended, else
# specify 'false'
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.configureAsExtendedCluster=false


#-------------------------------------------------------------------------------
# Specify the Member Cluster Manifest file
#
# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.memberClusterManifestFile=

#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be 
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=ol7-122-cluster

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS.
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=

#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to 
# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=

#-------------------------------------------------------------------------------
# Specify the list of sites - only if configuring an Extended Cluster
#-------------------------------------------------------------------------------
oracle.install.crs.config.sites=

#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should be a comma-separated list of tuples.  Each tuple should be a
# colon-separated string that contains
# - 1 field if you have chosen CRS_SWONLY as installation option, or
# - 1 field if configuring an Application Cluster, or
# - 3 fields if configuring a Flex Cluster
# - 3 fields if adding more nodes to the configured cluster, or
# - 4 fields if configuring an Extended Cluster
# 
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
#    (Should be specified as AUTO if you have chosen 'auto configure for VIP'
#     i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to 
#    be provided only if Flex Cluster is being configured.
#    For Extended Cluster only HUB should be specified for all nodes
# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
#
# Examples
# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
# You can specify a range of nodes in the tuple using colon separated fields of format
# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=ol7-122-rac1.localdomain:ol7-122-rac1-vip.localdomain:HUB,ol7-122-rac2.localdomain:ol7-122-rac2-vip.localdomain:HUB

#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetAddress:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
#   - 1 : PUBLIC
#   - 2 : PRIVATE
#   - 3 : DO NOT USE
#   - 4 : ASM
#   - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=enp0s8:192.168.56.0:1,enp0s9:192.168.1.0:5,virbr0:192.168.122.0:3

#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup to store GIMR data.
# Specify 'true' if you would like to separate GIMR data with clusterware data, 
# else specify 'false'
# Value should be 'true' for DOMAIN cluster configurations
# Value can be true/false for STANDALONE cluster configurations.
#------------------------------------------------------------------------------
oracle.install.asm.configureGIMRDataDG=false

################################################################################
#                                                                              #
#                              SECTION E - STORAGE                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
#   - FLEX_ASM_STORAGE
#   - CLIENT_ASM_STORAGE
#
# Applicable only for MEMBERDB cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=                
################################################################################
#                                                                              #
#                               SECTION F - IPMI                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
#                                                                              #
#                                SECTION G - ASM                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# ASM Storage Type
# Allowed values are : ASM and ASM_ON_NAS
# ASM_ON_NAS applicable only if
# oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
oracle.install.asm.storageOption=ASM

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing OCR/VDSK 
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store OCR/VDSK files
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.ocrLocation=
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup on NAS to store GIMR data
# Specify 'true' if you would like to separate GIMR data with clusterware data, else
# specify 'false'
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#------------------------------------------------------------------------------
oracle.install.asmOnNAS.configureGIMRDataDG=false

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing GIMR data
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store the GIMR database
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
# and oracle.install.asmOnNAS.configureGIMRDataDG=true
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.gimrLocation=

#-------------------------------------------------------------------------------
# Password for SYS user of Oracle ASM
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=

#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=DATA

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following  
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=EXTERNAL

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=4

#-------------------------------------------------------------------------------
# Failure Groups for the disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create a ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/asm-disk3,,/dev/oracleasm/asm-disk2,,/dev/oracleasm/asm-disk4,,/dev/oracleasm/asm-disk1,

#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=/dev/oracleasm/asm-disk3,/dev/oracleasm/asm-disk2,/dev/oracleasm/asm-disk4,/dev/oracleasm/asm-disk1

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used create a ASM DiskGroup
#
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*

#-------------------------------------------------------------------------------
# Password for ASMSNMP account
# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=

#-------------------------------------------------------------------------------
# GIMR Storage data ASM DiskGroup
# Applicable only when 
# oracle.install.asm.configureGIMRDataDG=true
# Example: oracle.install.asm.GIMRDG.name=MGMT
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.name=

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following  
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.redundancy=

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.gimrDG.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.AUSize=1

#-------------------------------------------------------------------------------
# Failure Groups for the GIMR storage data ASM disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create GIMR data ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disksWithFailureGroupNames=

#-------------------------------------------------------------------------------
# List of disks to create GIMR data ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disks=

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.quorumFailureGroupNames=

#-------------------------------------------------------------------------------
# Configure AFD - ASM Filter Driver
# Applicable only for FLEX_ASM_STORAGE option
# Specify 'true' if you want to configure AFD, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.asm.configureAFD=false
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

################################################################################
#                                                                              #
#                             SECTION H - UPGRADE                              #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=false                
################################################################################
#                                                                              #
#                               MANAGEMENT OPTIONS                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the management option to use for managing Oracle Grid Infrastructure
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
# 2. NONE   -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
#-------------------------------------------------------------------------------
oracle.install.config.managementOption=NONE

#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsHost=

#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsPort=0

#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminUser=

#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminPassword=
################################################################################
#                                                                              #
#                      Root script execution configuration                     #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------------------------------
# Specify the root script execution mode.
#
#   - true  : To execute the root script automatically by using the appropriate configuration methods.
#   - false : To execute the root script manually.
#
# If this option is selected, password should be specified on the console.
#-------------------------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.executeRootScript=false

#--------------------------------------------------------------------------------------
# Specify the configuration method to be used for automatic root script execution.
#
# Following are the possible choices:
#   - ROOT
#   - SUDO
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.configMethod=
#--------------------------------------------------------------------------------------
# Specify the absolute path of the sudo program.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoPath=

#--------------------------------------------------------------------------------------
# Specify the name of the user who is in the sudoers list. 
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoUserName=
#--------------------------------------------------------------------------------------
# Specify the nodes batch map.
#
# This should be a comma separated list of node:batch pairs.
# During upgrade, you can sequence the automatic execution of root scripts
# by pooling the nodes into batches. 
# A maximum of three batches can be specified. 
# Installer will execute the root scripts on all the nodes in one batch before
# proceeding to next batch.
# Root script execution on the local node must be in Batch 1.
# Only one type of node role can be used for each batch.
# Root script execution should be done first in all HUB nodes and then, when
# existent, in all the LEAF nodes.
#
# Examples:
# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3
# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2
# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3
#
# Applicable only for UPGRADE install option. 
#--------------------------------------------------------------------------------------
oracle.install.crs.config.batchinfo=
################################################################################
#                                                                              #
#                           APPLICATION CLUSTER OPTIONS                        #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the Virtual hostname to configure virtual access for your Application
# The value to be specified for Virtual hostname is optional.
#-------------------------------------------------------------------------------
oracle.install.crs.app.applicationAddress=