Saturday, May 22, 2021

Oracle RAC -- Adding HAIP Manually: Redundant Interconnect



Oracle Grid Infrastructure from 11g R2 provides RAC HAIP, which is link aggregation moved to the clusterware level. Instead of bonding the network adapters on the OS side, Grid Infrastructure is instructed to use multiple network adapters. Grid Infrastructure will still start HAIP even if the system is configured with only one private network adapter; in that case the resource ora.cluster_interconnect.haip simply shows as online.

Starting with 11gR2, Oracle supports up to 4 redundant interconnects that are automatically managed by the cluster for failover and load balancing.
The following procedure shows how to add a new private interconnect to an existing RAC cluster.


Before making this change, check on all nodes whether any resource is OFFLINE.
> All the CRS resources should be online.
> In one case, we had ADVM (12c) in STABLE state and we got errors.
> Check which private interfaces are already registered (see the example commands below).
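
For example, a quick pre-check from the Grid home bin directory could look like this (a sketch; the grep is just a convenience to spot OFFLINE resources):

[oracle@host01 bin]$ ./crsctl stat res -t | grep -i offline
[oracle@host01 bin]$ ./oifcfg getif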


** IMPORTANT: Make sure your second/redundant private interconnect is on a DIFFERENT SUBNET

List all the interfaces on the server.
Make sure that the new private interface you are going to add is correctly plumbed
on all other nodes - it must use the same interface name on every node.


Each NIC used in the HAIP configuration must be in its own subnet. If the same subnet is used and the NIC whose subnet appears first in the routing table fails, you can experience a node eviction.

Each NIC defined as a cluster interconnect on a given node will have a static IP address (private IP) assigned to it, and each cluster interconnect NIC on a given node must be on a unique subnet. If any one of the cluster interconnect NICs is down on a node, then the subnet associated with the down NIC is considered unusable by any node of the cluster.
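
As a quick sanity check, the routing table and the subnet of each private NIC can be compared on every node (a sketch; interface names are the ones used in this example):

# netstat -rn | grep -E 'eth2|eth3'
# ip -4 addr show eth2
# ip -4 addr show eth3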







#####################################
Check HAIP Information  Before Adding
#####################################

[oracle@host01 bin]$ ./crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       host01                   STABLE
--------------------------------------------------------------------------------



oifcfg shows that only one adapter is defined for the Cluster Interconnect.

[oracle@host01 bin]$ ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.10.0  global  cluster_interconnect



The ifconfig command shows that network device eth2 is part of two subnets: its static private IP on eth2 plus the HAIP alias on eth2:1.

[oracle@host01 bin]$ ifconfig -a

eth0      Link encap:Ethernet  HWaddr 08:00:27:98:EA:FE 
         inet addr:192.168.56.71  Bcast:192.168.56.255  Mask:255.255.255.0
         inet6 addr: fe80::a00:27ff:fe98:eafe/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
         RX packets:947 errors:0 dropped:0 overruns:0 frame:0
         TX packets:818 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:100821 (98.4 KiB)  TX bytes:92406 (90.2 KiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:738f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:406939 errors:0 dropped:0 overruns:0 frame:0
          TX packets:382298 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:445270636 (424.6 MiB)  TX bytes:202801222 (193.4 MiB)
 

eth2:1    Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.225.190  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1





When Grid Infrastructure is stopped, the ifconfig command no longer shows the eth2:1 device. The gv$cluster_interconnects view shows the HAIP addresses used by each instance.

select     
     inst_id,
     name,
     ip_address
  from
     gv$cluster_interconnects;


   INST_ID NAME            IP_ADDRESS
---------- --------------- ----------------
         1 eth2:1          192.168.225.190
         2 eth2:1          192.168.230.98





#####################################
Adding Secondary Private Network / HAIP 
#####################################

While HAIP is running with only one network interface configured, there is no redundancy and no additional network bandwidth. If a second network interface is available for the private network, it needs to be added to Grid Infrastructure. The device must be a properly configured network adapter at the operating system level. The new interface should otherwise match the existing one (same MTU size, etc.); as noted above, each cluster interconnect NIC should sit on its own subnet. The oifcfg command is used to register the new interface as a cluster_interconnect device.
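
Before running oifcfg setif, it is worth confirming on every node that the new adapter is up and that its MTU matches the existing interconnect NIC (a sketch; eth3 and the 9000-byte MTU follow this example, the remote IP is a placeholder):

# ip link show eth3
# ping -I eth3 -M do -s 8972 <remote_private_ip>     # 8972 = 9000 minus 28 header bytes, verifies jumbo frames end to end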

$ oifcfg iflist
eth0  10.0.2.0        <-- local router
eth1  192.168.56.0    <-- public interface
eth2  192.168.10.0    <-- RAC cluster_interconnect
eth2  192.168.0.0     <-- HAIP subnet currently in use
eth3  192.168.0.0     <-- the new device we want to add to the cluster_interconnect




oifcfg iflist -p -n  (example output only)

eth0  192.168.4.0  PRIVATE  255.255.255.0
eth1  192.168.0.128  PRIVATE  255.255.255.128
eth1  192.168.0.0  UNKNOWN  255.255.0.0
Note:
– The first column is the network adapter name.
– The second column is the subnet ID.
– The third column indicates whether the subnet is private, public or unknown according to the RFC standard; it has NOTHING to do with whether it is used as a private or public network in Oracle Clusterware.
– The last column is the netmask.


oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>

[oracle@host01 bin]$ ./oifcfg setif -global  eth3/192.168.10.0:cluster_interconnect,asm



The device eth3 is now part of the Cluster Interconnect. Because the interface was registered with the -global option, the command does not need to be repeated on all nodes; Grid Infrastructure takes care of that for us. On host02, the device is already configured.
 
[oracle@host02 bin]$ ./oifcfg getif

eth1  192.168.56.0  global  public
eth2  192.168.10.0  global  cluster_interconnect
eth3  192.168.10.0  global  cluster_interconnect

 

Grid Infrastructure needs to be restarted on all nodes.

[root@host01 bin]# ./crsctl stop crs
[root@host01 bin]# ./crsctl start crs
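
The same stop/start is then repeated on the remaining nodes (host02 in this example), one node at a time so the cluster stays available:

[root@host02 bin]# ./crsctl stop crs
[root@host02 bin]# ./crsctl start crs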




Once the cluster nodes are back up and running, the new interface will be part of the RAC HAIP configuration.

[root@host01 ~]# ifconfig -a

eth1      Link encap:Ethernet  HWaddr 08:00:27:98:EA:FE 
          inet addr:192.168.56.71  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe98:eafe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5215 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6593 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2469064 (2.3 MiB)  TX bytes:7087438 (6.7 MiB)

 
eth2      Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:738f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:3517 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2771 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000

          RX bytes:789056 (770.5 KiB)  TX bytes:694387 (678.1 KiB)

 

eth2:1    Link encap:Ethernet  HWaddr 08:00:27:54:73:8F 
          inet addr:192.168.21.30  Bcast:192.168.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

 
eth3      Link encap:Ethernet  HWaddr 08:00:27:6A:8B:8A 
          inet addr:192.168.10.3  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6a:8b8a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:857 errors:0 dropped:0 overruns:0 frame:0
         TX packets:511 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:158563 (154.8 KiB)  TX bytes:64923 (63.4 KiB)

 
eth3:1    Link encap:Ethernet  HWaddr 08:00:27:6A:8B:8A 
          inet addr:192.168.170.240 Bcast:192.168.255.255 Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

 

The new interface is also found in the gv$cluster_interconnects view.

 select
     inst_id,
     name,
     ip_address
  from
     gv$cluster_interconnects;

 
   INST_ID NAME            IP_ADDRESS
---------- --------------- ----------------
         1 eth2:1          192.168.21.30
         1 eth3:1          192.168.170.240
         2 eth2:1         192.168.75.234
         2 eth3:1          192.168.188.35



#####################################
Testing  HAIP 
#####################################

Stop eth2 at the OS level and verify eth3:
# ifconfig eth2 down
# ifconfig eth3
 
--> The cluster_interconnect failed over from eth2:1 to eth3:2 - eth2 is not used anymore

Re-enable eth2 at the OS level:
# ifconfig eth2 up
--> Wait a few seconds and verify that the failed-over cluster_interconnect address moves back to eth2
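
To see where the HAIP alias currently lives during the test, the alias interfaces can be listed again (a quick check; run oifcfg from the Grid home bin directory):

# ifconfig -a | grep -A 1 "eth[0-9]:[0-9]"
# ./oifcfg iflist -p -n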



#####################################
Disable haip service and haip dependencies
#####################################


Disable : 
[ora12c1:root]:/>crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=0" -init


Enable : 
[ora12c1:root]:/>crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=1" -init
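
To confirm the current value of the attribute (run as root from the Grid home):

# crsctl stat res ora.cluster_interconnect.haip -init -f | grep ENABLED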





#####################################
Reference
#####################################
How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)

https://www.oracle.com/technetwork/products/clusterware/overview/interconnect-vlan-06072012-1657506.pdf

https://docs.oracle.com/database/121/CWLIN/networks.htm#CIHIJAJB
Doc ID 1481481.1

Friday, May 21, 2021

Oracle Statspack Installation / Re-Creation

In this era of 21c we might all wonder why we still need Statspack. However, for some non-critical databases we do not have a Tuning Pack license, so we need to survive on Statspack.


1) Take an export backup of the existing PERFSTAT user, for example:
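
(user, directory object, and file names below are illustrative)
expdp system schemas=PERFSTAT directory=DATA_PUMP_DIR dumpfile=perfstat_bkp.dmp logfile=perfstat_bkp.log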

2) Drop  statspack  
@?/rdbms/admin/spdrop.sql

3) Create Statspack
@?/rdbms/admin/spcreate.sql

4) Change the Statspack snap level to level 6
BEGIN
  statspack.modify_statspack_parameter(i_snap_level=>6, i_modify_parameter=>'true');
END;
/

select SNAP_ID, SNAP_LEVEL from STATS$SNAPSHOT;


5) Take a manual snapshot
exec PERFSTAT.statspack.snap;


6) Schedule the Statspack automatic snapshot job (note the job id; it can also be queried as shown below)
@?/rdbms/admin/spauto.sql
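
spauto.sql prints the job number it submitted; it can also be looked up afterwards, for example:

select job, schema_user, interval, what from dba_jobs where schema_user = 'PERFSTAT';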



7)  Change  snap interval to 30 min 

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
set lines 180
col SCHEMA_USER for a20
col INTERVAL for a30
col WHAT for a30
select JOB, SCHEMA_USER, INTERVAL, BROKEN, WHAT from dba_jobs where JOB=428;

execute dbms_job.interval(428,'sysdate+(1/48)');

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
set lines 180
col SCHEMA_USER for a20
col INTERVAL for a30
col WHAT for a30
select JOB, SCHEMA_USER, INTERVAL, BROKEN, WHAT, to_char(next_date,'DD-MON-YYYY:HH24:MI:SS') "next date", failures from dba_jobs where JOB=428;

select name,snap_id,to_char(snap_time,'DD-MON-YYYY:HH24:MI:SS')      "Date/Time" from stats$snapshot,v$database;


8) After 30 minutes, verify that snapshots are being taken at the 30-minute interval and at level 6

@?/rdbms/admin/spreport.sql





Saturday, May 15, 2021

What's New in Oracle Database Home Binary Cloning in 19c

There are 2 main features introduced in 19c with regard to home binary cloning:

1) Taking a "Gold Copy" instead of a TAR backup from the source

2) clone.pl is deprecated


Gold Image : 

Instead of taking a traditional tar backup, Oracle now suggests taking a gold image. Below is an example; the zip file will be created in the destination location.

$ORACLE_HOME/runInstaller -createGoldImage -destinationLocation /refresh/home/app/oracle/product/ -silent 
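
On the target server the zip is then extracted into the new Oracle home and the software-only install is run from the runInstaller shipped inside the image (a sketch; paths and the zip name are examples, the actual name is generated by -createGoldImage):

mkdir -p /u01/app/oracle/product/19.3.0.0/dbhome_2
unzip -q /refresh/home/app/oracle/product/db_home_image.zip -d /u01/app/oracle/product/19.3.0.0/dbhome_2

The runInstaller invocation shown in the next section is then executed from this extracted home.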


clone.pl deprecated : 

If we use "clone.pl" or "runInstaller -clone" from 19c onwards, we will get the exception below.


[INFO] [INS-32183] Use of clone.pl is deprecated in this release. Clone operation is equivalent to performing a Software Only installation from the image.

You must use /refresh/home/app/oracle/product/19.3_clone/runInstaller script available to perform the Software Only install. For more details on image based installation, refer to help documentation.


To avoid the above, we used runInstaller as shown below, just like a fresh software-only installation.


ORACLE_BASE=/u01/app/oracle
NODE2_HOSTNAME=node2
ORACLE_HOSTNAME=node1
NODE1_HOSTNAME=node1
ORA_INVENTORY=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_2
[oracle@node1 ~]$ cd $ORACLE_HOME
[oracle@node1 dbhome_2]$ ${ORACLE_HOME}/runInstaller -ignorePrereq -waitforcompletion -silent \
> -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
> oracle.install.option=INSTALL_DB_SWONLY \
> ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=${ORA_INVENTORY} \
> SELECTED_LANGUAGES=en \
> ORACLE_HOME=${ORACLE_HOME} \
> ORACLE_BASE=${ORACLE_BASE} \
> oracle.install.db.InstallEdition=EE \
> oracle.install.db.OSDBA_GROUP=dba \
> oracle.install.db.OSOPER_GROUP=dba \
> oracle.install.db.OSBACKUPDBA_GROUP=dba \
> oracle.install.db.OSDGDBA_GROUP=dba \
> oracle.install.db.OSKMDBA_GROUP=dba \
> oracle.install.db.OSRACDBA_GROUP=dba \
> oracle.install.db.CLUSTER_NODES=${NODE1_HOSTNAME},${NODE2_HOSTNAME} \
> oracle.install.db.isRACOneInstall=false \
> oracle.install.db.rac.serverpoolCardinality=0 \
> oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
> oracle.install.db.ConfigureAsContainerDB=false \
> SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
> DECLINE_SECURITY_UPDATES=true



Reference : 

19.x:Clone.pl script is deprecated and how to clone using gold-image (Doc ID 2565006.1)





Sunday, May 2, 2021

Oracle TFA and AHF Log Collection / Installation

 
Trace File Analyzer Collector, also known as TFA, is a diagnostic collection utility which greatly simplifies diagnostic data collection for both Oracle Database and Oracle Clusterware/Grid Infrastructure RAC environments.

Trace File Analyzer provides a central and single interface for all diagnostic data collection and analysis.

When a problem occurs, TFA collects all the relevant data at the time of the problem and consolidates data even across multiple nodes in a clustered Oracle RAC environment.  Only the relevant diagnostic data is collected and can be packaged and uploaded to Oracle Support and this leads to faster resolution times. All the required diagnostic data is collected via a single tfactl command instead of having to individually look for the required diagnostic information across a number of database and cluster alert logs, trace files or dump files.

In addition to the core functionality of gathering, consolidating and processing diagnostic data, Trace File Analyzer comes bundled with a number of support tools which enable us to obtain a lot of other useful information like upgrade readiness, health checks for both Engineered as well as non-Engineered systems, OS performance graphs, Top SQL queries etc.

Oracle Trace File Analyzer is shipped along with Oracle Grid Infrastructure (from version 11.2.0.4). However, it is recommended to download the latest TFA version which can be accessed via the My Oracle Support Note 1513912.1 since the TFA bundled with the Oracle Grid Infrastructure does not include many of the new features, bug fixes and more importantly the Oracle Database Support Tools bundle.

Oracle releases new versions of the TFA several times a year and the most current version is Trace File Analyzer 18.1.1 which is now available for download via the MOS Note 1513912.1.


##################################################################################
TFA
##################################################################################


__________________
Install   Tfa 
__________________

Download the latest version of Oracle Trace File Analyzer with Oracle Database support tools bundle from My Oracle Support note 1513912.1

Upgrading is similar to a first-time install. As root, run the installTFA<platform> script; for Linux, for example:

./installTFA-LINUX

For RAC:
Install TFA on both nodes --> sync on both nodes --> start

( As root )
./installTFA<platform>

( As the Oracle user )
$ ./installTFA<platform> -extractto dir -javahome jre_home


cd /u01/software
unzip TFA-LINUX_v18.2.1.zip
mkdir -p $ORACLE_HOME/tfa
./installTFA-LINUX -local -tfabase $ORACLE_HOME/tfa



After having completed this process on all nodes, let’s synchronize them. As I decided not to use SSH, I need to execute those final steps :

Launch tfactl syncnodes using sudo : 
# sudo /grid/infrastructure/home/bin/tfactl syncnodes       ( # $GIHOME/tfa/nodename/tfa_home/bin/synctfanodes.sh ) 

Login using root is disabled in sshd config. Please enable it or
Please copy these files manually to remote node and restart TFA

1. /grid/infrastructure/home/tfa/node01/tfa_home/server.jks
2. /grid/infrastructure/home/tfa/node01/tfa_home/client.jks
3. /grid/infrastructure/home/tfa/node01/tfa_home/internal/ssl.properties

These files must be owned by root and should have 600 permissions.
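
If enabling root SSH is not an option, a manual copy could look like this (hostnames and the Grid home path are examples taken from the message above):

# scp /grid/infrastructure/home/tfa/node01/tfa_home/server.jks node02:/grid/infrastructure/home/tfa/node02/tfa_home/
# scp /grid/infrastructure/home/tfa/node01/tfa_home/client.jks node02:/grid/infrastructure/home/tfa/node02/tfa_home/
# scp /grid/infrastructure/home/tfa/node01/tfa_home/internal/ssl.properties node02:/grid/infrastructure/home/tfa/node02/tfa_home/internal/
# ssh node02 "chown root:root /grid/infrastructure/home/tfa/node02/tfa_home/*.jks; chmod 600 /grid/infrastructure/home/tfa/node02/tfa_home/*.jks /grid/infrastructure/home/tfa/node02/tfa_home/internal/ssl.properties"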


sudo /grid/infrastructure/home/bin/tfactl stop
sudo /grid/infrastructure/home/bin/tfactl start


__________________
Log Collection   Tfa 
__________________

 cd $ORACLE_HOME/tfa/bin/

# Gather diagnostic information about TFA itself.
./tfactl diagnosetfa -local

# Gather information about errors. You are prompted to select a specific incident.
./tfactl diagcollect -srdc ORA-00600
./tfactl diagcollect -srdc ORA-07445

# Collect data for all components for a specific time period.
./tfactl diagcollect -from "2018-06-16 13:00:00" -to "2018-06-16 19:00:00"

# Collect data for all components for the last 12 hours.
./tfactl diagcollect


Trace File Analyzer Command Examples

  • Viewing System and Cluster Summary

tfactl summary

  • To find all errors in the last one day

tfactl analyze -last 1d

  • To find all occurrences of a specific error  (in this case ORA-00600 errors)

tfactl analyze -search "ora-00600" -last 8h

  • To set the notification email to use

tfactl set notificationAddress=joeblogs@oracle.com

  • Enable or disable Automatic collections (ON by default)

tfactl set autodiagcollect=OFF

  • Adjusting the Diagnostic Data Collection Period

tfactl diagcollect -last 1h

tfactl diagcollect -from "2018-03-21"

tfactl diagcollect -from "2018-03-21" -to "2018-03-22"

  • Analyze, trim and zip all files updated in the last 12 hours, including Cluster Health Monitor and OSWatcher data, from across all nodes in the cluster

tfactl diagcollect -all -last 12h

  • Run collection from specific nodes in a RAC cluster

tfactl diagcollect -last 1d -node rac01

  • Run collection for a specific database

tfactl diagcollect -database hrdb -last 1d

  • Uploading collections to Oracle Support

Execute tfactl setupmos to configure Oracle Trace File Analyzer with MOS user name and password followed by

tfactl diagcollect -last 1d -sr 1234567

  • Search  database alert logs for the string “ORA-” from the past one day

tfactl analyze -search "ORA-" -comp db -last 1d

  • Display a summary of events collected from all alert logs and system logs from the past six hours

tfactl analyze -last 6h

  • View the summary of a TFA deployment. This will display cluster node information as well as information related to database and grid infrastructure software homes like version, patches installed, databases running etc.

tfactl summary

  • Grant access to a user

tfactl access add -user oracle

  • List users with TFA access

tfactl access lsusers

  • Run orachk

tfactl run orachk

  • Display current configuration settings

tfactl print config




Commonly Used Commands : 

tfactl set reposizeMB=50240 
tfactl print repository 
tfactl purge -older 2h
tfactl set repositorydir=/u02/repository/tfa/ 
tfactl diagcollect -srdc ora600
tfactl diagcollect -srdc ora04030 
tfactl analyze -search "/ORA- /c" -comp db -last 2d 
tfactl diagcollect -all -from "DEC/14/2021 01:00:00" -to "DEC/15/2021 03:00:00"
tfactl diagcollect -from "DEC/14/2021 01:00:00" -to "DEC/15/2021 03:00:00"

tfactl analyze -search "ORA-04031" -last 1d
tfactl analyze -since 1d
tfactl analyze -comp os -for "Oct/01/2020 11" -search "."
tfactl analyze -comp osw -since 6h
tfactl analyze -search "ORA-" -since 2d
tfactl analyze -comp oswslabinfo -from "Oct/01/2020 05:00:01" -to "Oct/01/2020 06:00:01"

tfactl diagcollect -srdc dbperf
tfactl diagcollect -srdc ORA-00600
tfactl managelogs -show usage 
tfactl managelogs -purge -older 30d
tfactl tail alert





tfactl summary

-- Generate complete summary overview in html
tfactl summary -html

-- Generate patching summary:
tfactl summary -patch -html

-- Generate asm summary
tfactl summary -asm -html


__________________
TFA  Status  : 
__________________

[root@ermantest tmp]# /u01/app/oracle/product/12.1.0.2/db_1/bin/tfactl status
[root@ermantest tmp]# /u01/app/oracle/product/12.1.0.2/db_1/bin/tfactl toolstatus
[root@ermantest tmp]# /u01/app/oracle/product/12.1.0.2/db_1/bin/tfactl version
# sudo /grid/infrastructure/home/bin/tfactl print config
# sudo /grid/infrastructure/home/bin/tfactl syncnodes



____________________________________
TFA installed as part of 21523375 (Oct2015 CPU)
____________________________________

After the Grid PSU was applied, TFA was installed.
If you need to shut down processes running from the Grid home, TFA will need to be stopped as well (# /etc/init.d/init.tfa stop) since crsctl stop crs does not stop TFA.
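
For example, when Grid Infrastructure has to be brought down for maintenance (paths depend on where TFA and Grid are installed):

# /etc/init.d/init.tfa stop
# $GRID_HOME/bin/crsctl stop crs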


__________________
Controlling TFA  cpu  usage 
__________________

tfactl setresourcelimit 
 [-tool tool_name] 
 [-resource resource_type] 
 [-value value]
To limit TFA to a maximum of 50% of a single CPU, run the following:
# tfactl setresourcelimit -value 0.5




__________________
TFA Reference : 
__________________

https://xy2401-local-doc-java.github.io/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/troubleshoot-tfa.html#GUID-169D2468-008B-4CE1-AB8E-1BA2A6233360

__________________
Troubleshooting TFA 
__________________

https://xy2401-local-doc-java.github.io/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/troubleshoot-tfa.html#GUID-AEEC5C9E-00F1-44B7-B39F-76E836AFC10F



##################################################################################
Oracle Autonomous Health Framework (Former TFA)
##################################################################################

Oracle Autonomous Health Framework is a collection of components that analyzes the diagnostic data collected, and proactively identifies issues before they affect the health of your clusters or your Oracle Real Application Clusters (Oracle RAC) databases. Oracle Autonomous Health Framework contains Oracle ORAchk, Oracle EXAchk, and Oracle Trace File Analyzer.

Install Oracle Autonomous Health Framework as root to obtain the fullest capabilities. Oracle Autonomous Health Framework has reduced capabilities when you install it as a non-root user. AHF can be run in two different modes, daemon or non-daemon; both do the same thing, but daemon mode is preferred.

To install Oracle AHF we run the ahf_setup installer with the -extract parameter. We also specify the -notfasetup parameter to avoid enabling the Oracle Trace File Analyzer component of Oracle AHF.

For RAC we need to install on both nodes. AHF will automatically synchronize between the nodes. If the status shows both nodes, it means they are synchronized; there is no need to manually synchronize as with TFA.

By default Oracle AHF will be installed to the /opt/oracle.ahf directory.

1) Download latest TFA software DocID 2550798.1
2) copy /u01/src/AHF-LINUX_v20.2.0.zip
3) unzip AHF-LINUX_v20.2.0.zip
4) Install AHF

As root:
 cd /u01/src/
 ./ahf_setup
[root@dbhost]# ./ahf_setup -extract -notfasetup

As non-root:
./ahf_setup -ahf_loc $ORACLE_HOME/ahf




__________________
upgrade the AHF 
__________________


1) mkdir /tmp/AHF
2) Copy the latest AHF zip to /tmp/AHF and unzip it.
3) Uninstall the existing AHF/TFA:
tfactl uninstall
4) Install the new version:
./ahf_setup -data_dir /u01/app/grid_base -tmp_loc /u01/app/grid_base/tmp/   ( You can find the data directory with "ps -ef | grep tfa | grep HeapDumpPath" )
5) Verify TFA is installed:
/opt/oracle.ahf/tfa/bin/tfactl status
/opt/oracle.ahf/tfa/bin/tfactl toolstatus


6) Node synchronization should happen automatically; the command below is just for reference.

tfactl syncnodes


Reference : 
Remove existing AHF and install latest AHF 21.4.1 as per MOS Doc ID 2832630.1




__________________
Controlling TFA  cpu  usage 
__________________

ahfctl setresourcelimit 
[-tool tool_name] 
[-resource resource_type] 
[-value value]



__________________
uninstall AHF 
__________________

[root@dbhost]# cd /opt/oracle.ahf/tfa/bin 
[root@dbhost]# ./tfactl uninstall 



__________________
Upload Files directly to Oracle Sr 
__________________

There are options to upload files directly to an Oracle SR:

$ curl -T [FILE_YOU_WANT_TO_SEND] -u [MOS_USER]

https://transport.oracle.com/upload/issue/[SR_NUMBER]/

$ tfactl upload -sr [SR_NUMBER] -user [MOS_USER] [FILE_YOU_WANT_TO_SEND]




__________________
AHF Reference 
__________________

https://docs.oracle.com/en/database/oracle/oracle-database/19/atnms/troubleshoot-tfa.html#GUID-11964D53-74C9-4754-9E80-9DB22557FF4E


https://docs.oracle.com/en/database/oracle/oracle-database/18/atnms/tfa-service.html#GUID-C470800D-B690-45F2-8C38-8EC60B6BB828

https://docs.oracle.com/en/engineered-systems/health-diagnostics/trace-file-analyzer/tfaug/performing-custom-collections.html#GUID-E4A2492E-A123-480A-B954-57898DBCE8BE