Sunday, March 22, 2020

Oracle 12.1.0.2 to 19c RAC in-place upgrade using a response file





Below are the steps we followed to silently upgrade a 2-node 12.1.0.2 RAC to 19c on the same servers (an in-place upgrade).

Before we start the upgrade it is essential to have a strong rollback plan, so we will also highlight the rollback steps. Please ensure proper database and binary backups are in place.

Before we start the upgrade, note that the 19c grid upgrade needs around 32 GB of free space in the grid (OCR) diskgroup. Check the free space with asmcmd lsdg under Usable_file_MB rather than from sqlplus. It is better to have 40 GB free in the grid diskgroup before starting.
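A minimal check as the grid user (the CRS diskgroup name matches the response file later in this post; substitute yours):

# Usable_file_MB accounts for redundancy; the raw FREE_MB visible
# from sqlplus (v$asm_diskgroup) can overstate what is really usable
asmcmd lsdg CRS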

After the upgrade it is also important to verify that your backups are successful and that users upgrade their clients to 19c.

There is also the AutoUpgrade tool (Doc ID 2485457.1), which is worth exploring.



#################### Take  necessary backups    ####################



1) Add newly allocated disks to the OCR diskgroup
2) Take a tar backup of the grid home and the database home
3) Take backups of the ASM metadata, OCR and OLR (see the sketch after this list)
4) Verify a full database backup has been taken
5) Verify Data Guard is in sync
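A minimal sketch for point 3 (the diskgroup name and backup file paths are assumptions):

# as the grid user: ASM metadata backup of the CRS diskgroup
asmcmd md_backup /tmp/asm_md_backup_crs -G CRS

# as root, from the 12.1 grid home: manual OCR and OLR backups
ocrconfig -manualbackup
ocrconfig -local -manualbackup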

####################    Grid  upgrade    ####################

1) On both nodes, as root:

chmod o+r /etc/inittab

2) Stop Data Guard log shipping (a sketch follows)
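A minimal sketch, assuming redo ships through log_archive_dest_2 on the primary (the destination number is an assumption):

-- on the primary, as sysdba
alter system set log_archive_dest_state_2=defer scope=both sid='*';

-- to re-enable in step 13, once the upgrade is verified
alter system set log_archive_dest_state_2=enable scope=both sid='*';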

3) On both nodes, create the 19.3.0 grid directory with grid user ownership

4) Download and unzip the grid software on the 1st node only (a sketch covering steps 3 and 4 follows)
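A sketch for steps 3 and 4 (the grid:oinstall ownership and the staged zip name are assumptions; in 19c the zip extracts directly into the grid home):

# on both nodes, as root
mkdir -p /oragridzj/app/19.3.0/grid
chown -R grid:oinstall /oragridzj/app/19.3.0

# on the 1st node only, as the grid user
cd /oragridzj/app/19.3.0/grid
unzip -q /stage/LINUX.X64_193000_grid_home.zip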

5) Prepare the response file on the 1st node only. Getting the response file parameters right is crucial for the upgrade to succeed.

/oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp

INVENTORY_LOCATION=/oragridzj/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/oragridzj/app/grid
oracle.install.asm.OSDBA=oinstall
oracle.install.asm.OSOPER=oinstall
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.clusterName=Clustername
oracle.install.asm.diskGroup.name=CRS
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE



6) On the 1st node only, as the grid user, run the pre-upgrade check from the new home and run the fixup script if one is generated.

./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /oragridzj/app/12.1.0/grid -dest_crshome /oragridzj/app/19.3.0/grid \
  -dest_version 19.3.0.0.0 -fixup -verbose


7) On the 1st node, run the gridSetup dry run from the new home. Starting with 19c we have the option to do a dry run before the actual upgrade. Note that this also copies the software to the 2nd node.

./gridSetup.sh -silent -dryRunForUpgrade \
  -responseFile /oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp


8) Run the actual upgrade from the 1st node, and run rootupgrade.sh on each node when the installer instructs you to

./gridSetup.sh -silent \
  -responseFile /oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp



9) Modify .profile (the grid user's login profile) to point to the new home
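For example (ksh/bash syntax assumed):

export ORACLE_HOME=/oragridzj/app/19.3.0/grid
export PATH=$ORACLE_HOME/bin:$PATH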

10) On both nodes, verify the files below look fine and point to the new grid home

/etc/init.d/init.ohasd
/etc/init.d/ohasd
/var/opt/oracle/olr.loc
.profile
/etc/inittab


11) On the 1st node only, detach the old home, then verify the inventory on both nodes

/oragridzj/app/12.1.0/grid/oui/bin/runInstaller -detachHome -silent \
  ORACLE_HOME=/oragridzj/app/12.1.0/grid
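One way to verify the inventory on both nodes (inventory location from the response file above):

grep 'HOME NAME' /oragridzj/app/oraInventory/ContentsXML/inventory.xml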


12) Perform a reboot test on both nodes


13) Re-enable Data Guard log shipping (see the sketch in step 2) and ensure DG is in sync





####################   Database  Pre upgrade Patch     ####################


Patches to apply before upgrading Oracle GI and DB to 19c or downgrading to previous release (Doc ID 2539751.1)


If you are planning to upgrade your GI or RDBMS to 19c, consider applying the patches listed in that note before starting the upgrade.
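A hedged sketch of applying such a patch to the existing 12.1 GI home (the patch staging location is a placeholder; always follow the patch README):

# as root, using the 12.1 grid home's opatchauto
export PATH=/oragridzj/app/12.1.0/grid/OPatch:$PATH
opatchauto apply /stage/<patch_id> -oh /oragridzj/app/12.1.0/grid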










####################   Database  upgrade    ####################


Before we start the database upgrade, as a best practice create a guaranteed restore point (sketch below)
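A minimal sketch (the restore point name is an example; sufficient FRA space is a prerequisite):

-- as sysdba
create restore point pre_19c_upgrade guarantee flashback database;
select name, guarantee_flashback_database from v$restore_point;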



1) Stop log shipping to the Data Guard standby (as in the grid section)

2) On both nodes:

mkdir -p /oraswzj/app/oracle/product/19.3.0/dbhome_1

Download and unzip the database software into it



3) On both nodes, run a script (run.sh) with the contents below


./runInstaller -ignorePrereq -waitforcompletion -silent \
  -responseFile /oraswzj/app/oracle/product/19.3.0/dbhome_1/install/response/db_install.rsp \
  oracle.install.option=INSTALL_DB_SWONLY \
  ORACLE_HOSTNAME=yourhostname \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION=/oragridzj/app/oraInventory \
  SELECTED_LANGUAGES=en,en_GB \
  ORACLE_BASE=/oraswzj/app/oracle \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.OSDBA_GROUP=oinstall \
  oracle.install.db.OSBACKUPDBA_GROUP=oinstall \
  oracle.install.db.OSDGDBA_GROUP=oinstall \
  oracle.install.db.OSKMDBA_GROUP=oinstall \
  oracle.install.db.OSRACDBA_GROUP=oinstall \
  SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
  DECLINE_SECURITY_UPDATES=true


As a root user, execute the following script(s):
        1. /u01/app/oracle/product/19.0.0/dbhome_1/root.sh



4) Download the pre-upgrade jar (1st node only)

Download the latest "preupgrade.jar" file from MOS 884522.1 and unzip it into the 19c home under $ORACLE_HOME/rdbms/admin:

unzip -o /u01/software/preupgrade_19_cbuild_??_lf.zip -d $ORACLE_HOME/rdbms/admin





5) Run the pre-upgrade script from the 19c home. Note that the environment points at the old 12.1 home (where the database is still running) while java and preupgrade.jar come from the new 19c home:

export ORACLE_HOME=/oraswzj/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=DBNAME

/oraswzj/app/oracle/product/19.3.0/dbhome_1/jdk/bin/java -jar /oraswzj/app/oracle/product/19.3.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT >> dbname_precheck.log




6) Apply the pre-upgrade recommendations from step 5 on each database. Typical items:

>> purge the recyclebin: purge dba_recyclebin;
>> run utlrp.sql to recompile invalid objects
>> ensure enough free space in the UNDO, SYSTEM, SYSAUX and TEMP tablespaces (see the query after the fixup script)
>> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

@/oraswzj/app/oracle/cfgtoollogs/DBNAME/preupgrade/preupgrade_fixups.sql
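A quick free-space check for the permanent tablespaces flagged above (UNDOTBS1 is an assumed undo tablespace name; TEMP is visible in dba_temp_free_space instead):

select tablespace_name, round(sum(bytes)/1024/1024) free_mb
from   dba_free_space
where  tablespace_name in ('SYSTEM','SYSAUX','UNDOTBS1')
group  by tablespace_name;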




7) Copy the dbs and network files from the old home to the new home (on both servers)


cp $ORACLE_HOME/dbs/*  /oraswzj/app/oracle/product/19.3.0/dbhome_1/dbs/
cp $ORACLE_HOME/network/admin/*  /oraswzj/app/oracle/product/19.3.0/dbhome_1/network/admin/



8) Enable RAC in the new 19c home, on both servers

$ cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle


9) Shut down the database from the old home and remove it from CRS

shutdown the old database: srvctl stop database -d DBNAME

remove the database from the OCR using the old home (it is added back from the new home in step 14)
From the old ORACLE_HOME:
% $ORACLE_HOME/bin/srvctl remove database -d DBNAME



10) Update the new home in .profile on both servers


11) Upgrade the database from the new home, on the 1st node only

sqlplus / as sysdba <<EOF
startup nomount;
alter system set cluster_database=false scope=spfile;
shutdown immediate;
startup upgrade;
exit;
EOF
You can run the upgrade using either of the following commands; the latter is just shorthand for the former.

# Regular upgrade command.
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl  $ORACLE_HOME/rdbms/admin/catupgrd.sql

# Shorthand command.
$ORACLE_HOME/bin/dbupgrade





12) Update the time zone, gather stats and run the post-upgrade fixup script generated in step 5

--->>> start the database in upgrade mode

SELECT * FROM v$timezone_file;
-- Begin upgrade to the latest version.
SET SERVEROUTPUT ON
DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/

SHUTDOWN IMMEDIATE;
STARTUP;

-- Do the upgrade.
SET SERVEROUTPUT ON
DECLARE
  l_failures   PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/

-- Check new settings.
SELECT * FROM v$timezone_file;

COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

sqlplus / as sysdba <<EOF
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
 EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
exit;
EOF

# AUTOFIXUP
sqlplus / as sysdba <<EOF
@/oraswzj/app/oracle/cfgtoollogs/dbname/preupgrade/postupgrade_fixups.sql
exit;
EOF






13) Set cluster_database back to true

alter system set cluster_database=true  scope=spfile ;




14) Add the database to CRS from the new home. If you hit PRKC-1137
and PRCT-1402, copy osdbagrp from the grid home to the 19c DB home:

cd $ORACLE_HOME/bin
mv osdbagrp osdbagrp.org
cp /oragridzj/app/19.3.0/grid/bin/osdbagrp .


srvctl add database -d DBNAME -o /oraswzj/app/oracle/product/19.3.0/dbhome_1 -s '+DATA/DBNAME/PARAMETERFILE/spfile.310.951432401' -role {PRIMARY | PHYSICAL_STANDBY}

srvctl add instance -d DBNAME   -i DBNAME1  -n SERVER1
srvctl add instance -d DBNAME   -i DBNAME2  -n SERVER2




15) Perform post-upgrade checks (see the queries after this list)

check the invalid object count
verify dba_registry

and re-verify that these files point to the new homes:

/etc/init.d/init.ohasd
/etc/init.d/ohasd
/var/opt/oracle/olr.loc
.profile
/etc/inittab
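For example:

-- as sysdba
select count(*) from dba_objects where status = 'INVALID';
select comp_id, version, status from dba_registry;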



16) Detach the 12c home, from the 1st node only

/oraswzj/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -detachHome -silent \
  ORACLE_HOME=/oraswzj/app/oracle/product/12.1.0/dbhome_1





17) Fix NetBackup for database backups

A) Create a softlink in 19c_home/lib pointing to the NetBackup library; check the 12c home for reference (sketch below)
B) Correct the database home path in the backup script
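A sketch of A (the NetBackup library path is an assumption; mirror whatever the 12c home links to):

ls -l /oraswzj/app/oracle/product/12.1.0/dbhome_1/lib/libobk.so
ln -s /usr/openv/netbackup/bin/libobk.so64 \
      /oraswzj/app/oracle/product/19.3.0/dbhome_1/lib/libobk.so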



18) Re-start Data Guard log shipping and verify log sync



19) Update monitoring and upgrade the RMAN catalog database (sketch below). If the catalog upgrade fails with:

RMAN-07539: insufficient privileges to upgrade the catalog schema

Refer: Doc ID 2558120.1

SQL> grant create any context to rman;
SQL> grant drop any context to rman;
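The catalog upgrade itself, using the 19c RMAN client (the connect string and the rman schema name are assumptions; RMAN asks for the command a second time to confirm):

$ rman catalog rman@catdb
RMAN> upgrade catalog;
RMAN> upgrade catalog;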








#################### Restart database upgrade from where it failed   ####################


1) Check the logs to find the phase where it failed:

grep -i 'error at line' catupgrd*.log

Sample phase marker from the log:

PHASE_TIME___START 15 15-01-16 08:49:41



2) Restart the upgrade from the phase where it failed

cd $ORACLE_HOME/bin
dbupgrade -p 15




######################## rhpserver / MGMTDB disabled after 19c upgrade  ####################


If you failed to apply the pre-upgrade patch before starting the upgrade, the MGMT database will be left disabled after the upgrade.

In 19c the MGMT database is optional, but if you still want to fix it, the procedure below brings it online (it is adapted from an 18.5 example; substitute your 19c grid home for the 18.5 paths).



Bring mgmt online 

1)

The first thing is to bring up the MGMTDB in the 12.1 GI_HOME.

srvctl enable mgmtdb
srvctl start mgmtdb
srvctl status mgmtdb


2)

Once the MGMTDB is up and running, you need to drop the RHP service that was created during the rootupgrade process. This has to be done from the new GI_HOME.

[root@node2 ~]# env | grep ORA
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/18.5.0/grid
[root@node2 ~]# srvctl remove rhpserver -f



3)  Now that the RHP service has been removed, we need to remove the MGMTDB in 12.1.

########################################
# As root user in BOTH nodes
########################################
#Node 1
[root@node1 ~]#  export ORACLE_HOME=/u01/app/12.1.0.2/grid
[root@node1 ~]#  export PATH=$PATH:$ORACLE_HOME/bin
[root@node1 ~]# crsctl stop res ora.crf -init
[root@node1 ~]# crsctl modify res ora.crf -attr ENABLED=0 -init
#Node 2
[root@node2 ~]#  export ORACLE_HOME=/u01/app/12.1.0.2/grid
[root@node2 ~]#  export PATH=$PATH:$ORACLE_HOME/bin
[root@node2 ~]# crsctl stop res ora.crf -init
[root@node2 ~]# crsctl modify res ora.crf -attr ENABLED=0 -init
########################################
# As oracle User on Node 1
########################################
oracle@node1 : ~> export ORACLE_HOME=/u01/app/12.1.0.2/grid
oracle@node1 : ~> export PATH=$PATH:$ORACLE_HOME/bin
oracle@node1 : ~> srvctl relocate mgmtdb -node node1
oracle@node1 : ~> srvctl stop mgmtdb
oracle@node1 : ~> srvctl stop mgmtlsnr

 /u01/app/12.1.0.2/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB

oracle@node1 : ~> srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
########################################
##### Manually Removed the mgmtdb files
##### Verify that the files for MGMTDB match your environment before deleting them
########################################
ASMCMD> cd DBFS_DG/_MGMTDB/DATAFILE
ASMCMD> ls
SYSAUX.257.879563483
SYSTEM.258.879563493
UNDOTBS1.259.879563509
ASMCMD> rm system.258.879563493
ASMCMD> rm sysaux.257.879563483
ASMCMD> rm undotbs1.259.879563509
ASMCMD> cd ../PARAMETERFILE
ASMCMD> rm spfile.268.879563627
ASMCMD> cd ../TEMPFILE
ASMCMD> rm TEMP.264.879563553
ASMCMD> cd ../ONLINELOG
ASMCMD> rm group_1.261.879563549
ASMCMD> rm group_2.262.879563549
ASMCMD> rm group_3.263.879563549
ASMCMD> cd ../CONTROLFILE
ASMCMD> rm Current.260.879563547




4)
Once the MGMTDB is deleted, run mdbutil.pl (which you can grab from MOS Doc 2065175.1) and add the MGMTDB in the new GI_HOME.


########################################
# As oracle User on Node 1
########################################
oracle@node1 : ~> env | grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/18.5.0/grid
oracle@node1 : ~> ./mdbutil.pl --addmdb --target=+DBFS_DG


########################################
# As root user in BOTH nodes
########################################


[root@node1 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@node1 ~]# crsctl start res ora.crf -init

[root@node2 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@node2 ~]# crsctl start res ora.crf -init




 5)

Once the MGMTDB has been recreated, rerun the gridSetup.sh -executeConfigTools command; the cluster status should then be NORMAL, with everything running as expected on the new version.



oracle@node1 : ~> env | grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/18.5.0/grid
/u01/app/18.5.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/gridresponse.rsp -silent
crsctl query crs activeversion -f
crsctl check cluster -all





Reference : 



How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
12.2: How to Create GI Management Repository (Doc ID 2246123.1)
MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)







######################## Rollback Plan for Grid home   ####################


1) On the 1st node only, delete the MGMT database as the grid user

srvctl status mgmtdb
$ORACLE_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB


2) On both nodes, downgrade grid as the root user

$ORACLE_HOME/crs/install/rootcrs.sh -downgrade


3) Remove the 19c grid home from the active cluster inventory (from one node only):

cd /sharearea/crs/grid19c/oui/bin

oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent ORACLE_HOME=/sharearea/crs/grid19c CRS=false


4) Update the active cluster inventory with the 12c grid home (from one node only):

cd /crs/app/oracle/product/grid12c/oui/bin

oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent ORACLE_HOME=/crs/app/oracle/product/grid12c CRS=true



5) Start CRS from the 12c grid home on all nodes (crsctl start crs is per-node, so run it on each node):

:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs



6) On the first node only, remove the MGMTDB service from the cluster.

srvctl remove mgmtdb -f 


7) On the first node only, create the MGMTDB container database:

/crs/app/oracle/product/grid12c/bin/dbca -silent -createDatabase -createAsContainerDatabase true \
  -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +MGMT \
  -datafileJarLocation /crs/app/oracle/product/grid12c/assistants/dbca/templates -characterset AL32UTF8 \
  -autoGeneratePasswords -skipUserTemplateCheck





8) On the first node only, configure the Management Database by running the Configuration Assistant:

$ORACLE_HOME/bin/mgmtca -local



9) On the first node only, as the grid user, run the post_gimr_ugdg.pl script from the 19c grid home:

$ $ORACLE_HOME/crs/install/post_gimr_ugdg.pl -downgrade -clusterType SC -destHome /u01/app/19.0.0/grid \
  -lowerVersion 12.2.0.1.0 -oraBase /u01/app/grid2





##########  Deconfigure 19c grid if the upgrade fails and the rollback steps are not possible  ##########




1) Deconfigure on both nodes from the 19c home

/oragrid/app/19.3.0/grid/crs/install/rootcrs.sh -deconfig -force -verbose


2) Clean up the files below, if present, on both nodes:

ls -l /oragrid/app/grid/crsdata/nodename/crsconfig/ckptGridHA_nodename.xml
ls -l /oragrid/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml

If they are still there, remove them manually with the "rm" command on all nodes



3) If the GPNP profile differs between the nodes/setup, clean it up on all nodes as the grid user

$ find <GRID_HOME>/gpnp/* -type f -exec rm -rf {} \;



4) Nullify all 3 OCR disks (do this ONLY for the OCR disks; running it against a data disk will destroy your data)

dd if=/dev/zero of=/dev/raw/raw13 bs=1048576 count=50




5) Run root.sh again on both nodes from the 12c home

/oragrid/app/12.1.0/grid/root.sh



6) Add the database and instances back to the OCR using srvctl add database / srvctl add instance




######################## Rollback Plan for Database home   ####################




1) Fix the time zone if the time zone script was run as part of the upgrade. To find which time zone file version the database is currently using, query V$TIMEZONE_FILE:

SQL> select * from v$timezone_file;

For example, if the query returns timezlrg_18.dat in the column V$TIMEZONE_FILE.FILENAME, check that the file is present in the target Oracle home:

Linux and UNIX:

$ORACLE_HOME/oracore/zoneinfo/timezlrg_18.dat









2) On the 1st node only, downgrade from the 19c database


Start the 1st instance only, with cluster_database set to false

from the 19c home:
$ cd $ORACLE_HOME/bin
$ ./dbdowngrade

or, manually from the 19c home:
SQL> startup downgrade
SQL> @catdwgrd.sql
SQL> shutdown immediate




3) On the 1st node only, reload from the 12c home

SQL> startup upgrade
SQL> @$ORACLE_HOME/rdbms/admin/catrelod.sql
SQL> shutdown immediate
SQL> startup
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql




4) On the 1st node only, gather stats:

EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;





#################### Oracle reference links   ####################

1) Restart a database upgrade from a failed state

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/restarting-upgrade-specific-phase-that-failed-using-p.html#GUID-29A9C778-7AE6-4C2C-85E6-4ADDA8466A34




2)  Downgrade  Oracle  database 

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/downgrading-oracle-standalone-cluster-12crelease2.html#GUID-C375E4DF-482B-4B14-8FF7-0332254F0547

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/downgrading-oracle-db-after-upgrade.html#GUID-0227D321-577E-47C5-A8C0-36991B8D0357




3)  Upgrade reference 

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/upgrading-oracle-database.html#GUID-CC88F262-3370-4FFA-8604-EDA9F20F56D9


https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/database-upgrade-guide.pdf

Oracle 19c - Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 19c (Doc ID 2539778.1)




4) Downgrade Grid 

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/downgrading-to-oracle-grid-infrastructure-12c-release-1-12-1.html#GUID-DD425330-02F5-49B0-9CC9-D292234C8A6F


5) To deconfigure Clusterware when the upgrade fails and rollback is not possible

How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure (Doc ID 1377349.1)

How to Abort Grid Infrastructure Upgrade Without Taking a Cluster Outage (Doc ID 1910156.1)



6) Fix pre-upgrade issues

Oracle 19c Preupgrade Log Warns Oracle Streams must be Removed Though no Streams is Setup (Doc ID 2666384.1)   



#################### Installing  19c client in silent mode     ####################


1) Download   19c software and unzip it 


2)  Prepare response file   /client/response/client_install.rsp


INVENTORY_LOCATION=/dmapps/product/oraInventory
ORACLE_HOME=/dmapps/product/release/oracle/oracle/product/19.0.0/client_1
ORACLE_BASE=/dmapps/product/release


3)  Run below  from install directory 

cd /software/client
$ ./runInstaller -ignoreSysPrereqs -showProgress -silent -responseFile  /software/client/response/client_install.rsp





####################  Issue faced during upgrade    ####################


1) Grid upgrade failed:
Since we were doing a silent upgrade, it failed a few times because the response file was not right. Once we drafted the right response file, it was a piece of cake.


2) MGMT database was not upgraded:
Since we missed applying the pre-upgrade patch, the MGMT database was not upgraded. We later upgraded it manually.






