Tuesday, March 16, 2021

Oracle RAC GPnP profile and gpnptool

 

What is the GPnP profile and why is it needed?

In Oracle 11g R2 RAC we can store the OCR and voting disk in ASM, but the clusterware needs the OCR and voting disk to start the CRSD and CSSD processes. The point is that both the OCR and voting disk are stored in ASM, which is itself a resource for the nodes, meaning the CRSD and CSSD processes need the OCR and voting files before ASM starts up. So the question arises: how does the clusterware start? We shall find the answer to this question in this same document, just wait..

To resolve this issue Oracle introduced two new node-specific files in 11g R2: the OLR and the GPnP profile.

Now if we talk about the GPnP profile: it is a new feature included in Oracle 11g R2. The GPnP profile is a small XML file located in

$GRID_HOME/gpnp/<hostname>/profiles/peer with the name profile.xml.

Each node of the cluster maintains a local copy of this profile, which is maintained by the GPnP daemon along with the mDNS daemon. The GPnP daemon ensures the synchronization of the GPnP profile across all the nodes in the cluster, and the profile is used by the clusterware to establish the correct global personality of a node. It cannot be stored in ASM as it is required prior to the start of ASM; hence it is stored locally on each node and is kept synchronized across all the nodes by GPnPd.

The local copy of the GPnP profile is stored under $GRID_HOME/gpnp/<hostname>/profiles/peer as profile.xml


How is the GPnP profile used?

When a node of an Oracle Clusterware cluster restarts, OHASD is started by platform-specific means. OHASD has access to the OLR (Oracle Local Registry) stored on the local file system, and the OLR provides the data needed to complete OHASD initialization. OHASD then brings up the GPnP daemon and the CSS daemon. The CSS daemon has access to the GPnP profile stored on the local file system; the information that the voting disk is on ASM, and where to discover it, is read from the GPnP profile, not from ASM itself.

We can even read the voting disk by using the kfed utility, even if ASM is not up.
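As a hedged illustration (the disk path is an assumption for your environment), kfed can dump an ASM disk header even while ASM is down; the voting-file extent pointers appear in the kfdhdb.vfstart/kfdhdb.vfend fields:

[grid@paw-racnode1 ~]$ kfed read /dev/oracleasm/disks/DISK1 | grep -E 'vfstart|vfend'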

In the next step, the clusterware checks whether all the nodes have the updated GPnP profile, and the nodes join the cluster based on the GPnP configuration. Whenever a node is started or added to the cluster, the clusterware software on the starting node starts a GPnP agent, which performs the following tasks.

  1. If the node is already part of the cluster, the GPnP agent reads the existing profile on that node.
  2. If the node is being added to the cluster, the GPnP agent locates an agent on another existing node using the multicast protocol (provided by mDNS) and gets the profile from that node's GPnP agent.

The voting file locations on ASM disks are accessed by CSSD through well-known pointers in the ASM disk headers, so CSSD is able to complete initialization and start or join an existing cluster.

Now OHASD starts an ASM instance, and ASM can operate with an initialized and operating CSSD.

With an ASM instance running and its diskgroup mounted, access to the clusterware's OCR is available to CRSD (CRSD needs to read the OCR to start up the various resources on the node, and to update it as the status of resources changes). Now OHASD starts CRSD with access to the OCR in an ASM diskgroup, and thus the clusterware completes initialization and brings up the other services under its control.

The ASM instance uses special code to locate the contents of the ASM SPFILE, which is stored in a diskgroup.

Next: since the OCR is also in ASM, the location of the ASM spfile must be known. The search order for the ASM SPfile is:

  • GPnP profile
  • ORACLE_HOME/dbs/spfile
  • ORACLE_HOME/dbs/init

The ASM spfile is stored in ASM, but to start ASM we need the spfile. Oracle knows the spfile location from the GPnP profile; it reads the spfile flag from the underlying disk(s) and then starts ASM.

Thus the GPnP profile stores several pieces of information. The GPnP profile information, along with the information in the OLR, is sufficient to automate several tasks and ease them for administrators, and the dependency on the OCR is gradually reduced, though not eliminated.


Who updates the GPnP profile, and when?

GPnP daemon replicates changes to the profile during

  • installation
  • system boot
  • when the system is updated using standard cluster tools

The profile is automatically updated whenever changes are made to a cluster during installation and with configuration tools like

  • oifcfg (Change network),
  • crsctl (change location of voting disk),
  • asmcmd (change ASM_DISKSTRING, spfile location) etc.
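For reference, here are hedged examples of such profile-touching commands (the interface name, subnet, diskgroup and diskstring values are assumptions for illustration):

oifcfg setif -global eth2/192.168.3.0:cluster_interconnect    # reclassify a network interface
crsctl replace votedisk +DATA                                 # move the voting files to another diskgroup
asmcmd dsset '/dev/oracleasm/disks/*'                         # change the ASM discovery string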



What information the GPnP profile contains:

The GPnP profile defines a node's metadata about:

  • Cluster Name
  • Network interfaces for public and private interconnect
  • ASM server parameter file Location and ASM Diskstring etc.
  • CSS voting disks Discovery String
  • Digital Signature Information

It contains the digital signature information of the provisioning authority because the profile is security sensitive; it might identify the storage to be used as the root partition of a machine. The profile is protected against modification by a wallet. In my case the wallet information can be found in /u01/app/11.2.0/grid/gpnp/paw-racnode1/wallets/peer or /u01/app/11.2.0/grid/gpnp/wallets/peer.

If you have to manually modify the profile, it must first be unsigned with $GRID_HOME/bin/gpnptool, modified, and then signed again with the same utility; however, there is only a very slight chance you would ever be required to do so.

Now we can use gpnptool with the get option to dump this XML file to standard output. Below is the formatted output.

[grid@paw-racnode1 peer]$ pwd

/u01/app/11.2.0/grid/gpnp/paw-racnode1/profiles/peer


gpnptool command:
================= 
gpnptool edit -p=/u01/app/12.1.0.2/grid/gpnp/<hostname>/profiles/peer/profile.xml -o=/u01/app/12.1.0.2/grid/gpnp/<hostname>/profiles/peer/profile.xml -ovr -prf_sq=2 -net1:net_use=public -net1:net_ip=10.232.71.0 -net1:net_ada=ipmp0

If we want to change the network adapter alone, use the below command.
gpnptool edit -p=/u01/app/12.1.0.2/grid/gpnp/<hostname>/profiles/peer/profile.xml -o=/u01/app/12.1.0.2/grid/gpnp/<hostname>/profiles/peer/profile.xml -ovr -prf_sq=2 -net1:net_ada=ipmp0

Sign the profile (the wallet path follows the pattern shown earlier):
gpnptool sign -p=/u01/app/12.1.0.2/grid/gpnp/<hostname>/profiles/peer/profile.xml -o=/u01/app/12.1.0.2/grid/gpnp/<hostname>/profiles/peer/profile.xml -ovr -w=file:/u01/app/12.1.0.2/grid/gpnp/<hostname>/wallets/peer

[grid@paw-racnode1 peer]$ gpnptool get

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/app/11.2.0/grid/bin/gpnptool.bin get -o-

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="5" ClusterUId="1c12005940a3efa8bf244ccd47060927" ClusterName="paw-rac-cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.75.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="10.0.0.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/oracleasm/disks" SPFile="+DATA/paw-rac-cluster/asmparameterfile/registry.253.919259819"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>HIz8dOjUIFB32YPkmXW2HMVazoY=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>L6GOD0rB03Hp+NoKVcIHb9/Rp3xznBKpUJGfixN/27Qo6IL8/4HkjSnzsbHf1IuK1SQfqV5624tygB0x9HJfVcW+k6E6cQWwAgZOzpPR3ltctD7XeikkXtt5TOWQ6boMvCKJ5mOwzGzuj4S/qDu7lWPBHM9EPzHAEn/8NOlDcDo=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>

Success.




[grid@paw-racnode1 peer]$ gpnptool getpval -asm_spf

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_spf -p=profile.xml -o-

+DATA/paw-rac-cluster/asmparameterfile/registry.253.919259819


 [grid@paw-racnode1 peer]$ gpnptool getpval -asm_dis

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_dis -p=profile.xml -o-

/dev/oracleasm/disks


[grid@paw-racnode1 peer]$ gpnptool find

 Found 2 instances of service 'gpnp'.

        mdns:service:gpnp._tcp.local.://paw-racnode2:64098/agent=gpnpd,cname=paw-rac-cluster,host=paw-racnode2,pid=6444/gpnpd h:paw-racnode2 c:paw-rac-cluster

        mdns:service:gpnp._tcp.local.://paw-racnode1:55790/agent=gpnpd,cname=paw-rac-cluster,host=paw-racnode1,pid=6677/gpnpd h:paw-racnode1 c:paw-rac-cluster


[grid@racp1vm1 ~]$ gpnptool get -o- | xmllint --format - | grep SPFile

Success.

  <orcl:ASM-Profile id="asm" DiscoveryString="" SPFile="+CRS_DG/ws-dbi-scan1/ASMPARAMETERFILE/registry.253.905527691" Mode="remote"/>



Fix a wrong entry in profile.xml and redistribute the fixed profile with gpnptool put

The GI caches the profile.xml in multiple places in addition to the profile.xml file under $GRID_HOME/gpnp/$HOSTNAME/profiles/peer/profile.xml and the OCR; the OLR ($GRID_HOME/cdata/$HOSTNAME.olr) is another location.

When gpnpd tries to come up, it looks for the "best profile" from these 3 locations (OLR, OCR, profile.xml). My understanding is that the "best profile" is the one with the highest ProfileSequence. This "best profile" then overwrites the other cached, lower-sequence copies.
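A quick, hedged way to compare the sequence gpnpd is serving against the local file (paths follow the convention used throughout this post):

$ gpnptool get -o- 2>/dev/null | grep -o 'ProfileSequence="[0-9]*"'
$ grep -o 'ProfileSequence="[0-9]*"' $GRID_HOME/gpnp/$HOSTNAME/profiles/peer/profile.xml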

Assume we have a wrong ASM SPFILE entry in our current profile.xml

[grid@grac41 peer]$ cd /u01/app/11204/grid/gpnp/grac41/profiles/peer/
Current status:
profile.xml
ProfileSequence="10"  
SPFile="$GRID_HOME/dbs/spfileCopyASM_nogo" <--- wrong 


Copy the profile to profile.bak and remove the Oracle signature
[grid@grac41 peer]$  cp profile.xml profile.bak 
[grid@grac41 peer]$  gpnptool unsign -p=profile.bak

Warning: some command line parameters were defaulted. Resulting command line: 
         /u01/app/11204/grid/bin/gpnptool.bin unsign -p=profile.bak -o-
<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" 
 xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" 
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" 
 ProfileSequence="10" ClusterUId="09a8e930abf56f66bfbdb0b7c915cfa8" ClusterName="grac4" PALocation=""><gpnp:Network-Profile>
<gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth1" Use="public"/>
<gpnp:Network id="net2" IP="192.168.2.0" Adapter="eth2" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile>
<orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*,/dev/oracleasm/disks/*" 
 SPFile="$GRID_HOME/dbs/spfileCopyASM_nogo"/></gpnp:GPnP-Profile>
Success.
--> Current ProfileSequence="10" 



Update SPFILE in profile.bak and increase ProfileSequence to 11 

[grid@grac41 peer]$ gpnptool edit -asm:asm_spf='+OCR/grac4/asmparameterfile/spfileCopyASM.ora'   -p=profile.bak -o=profile.bak -ovr  -prf_sq=11
Resulting profile written to "profile.bak".
Success.



Verify profile.bak and check whether ASM SPFILE location is valid 
ProfileSequence="11"  
SPFile="+OCR/grac4/asmparameterfile/spfileCopyASM.ora"
[grid@grac41 peer]$ asmcmd ls -l  +OCR/grac4/asmparameterfile/spfileCopyASM.ora
Type              Redund  Striped  Time             Sys  Name
                                                    N    spfileCopyASM.ora => +OCR/grac4/asmparameterfile/REGISTRY.253.842605053



Sign profile 
$  gpnptool sign -p=profile.bak -w=file:/u01/app/11204/grid/gpnp/grac41/wallets/peer -o=profile.new
Resulting profile written to "profile.new".
Success.

Redistribute profile.xml 
[grid@grac41 peer]$ gpnptool put  -p=$GRID_HOME/gpnp/grac41/profiles/peer/profile.new
Success.


Verify the GPnP profile on all nodes (check ProfileSequence and SPFile)
$  gpnptool rget
Warning: some command line parameters were defaulted. Resulting command line: 
         /u01/app/11204/grid/bin/gpnptool.bin rget -o-
Found 3 gpnp service instance(s) to rget profile from.
RGET from tcp://grac41:47588 (mdns:service:gpnp._tcp.local.://grac41:47588/agent=gpnpd,cname=grac4,host=grac41,pid=27693/gpnpd h:grac41 c:grac4):
<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" 
   xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" 
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" 
   ProfileSequence="11" ClusterUId="09a8e930abf56f66bfbdb0b7c915cfa8" ClusterName="grac4" PALocation="">
  <gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth1" Use="public"/>
  <gpnp:Network id="net2" IP="192.168.2.0" Adapter="eth2" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile>
   <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*,/dev/oracleasm/disks/*" 
   SPFile="+OCR/grac4/asmparameterfile/spfileCopyASM.ora"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>ALIJdwPKQGRB8BKcoiUsxzXw1xw=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>aV1ueSkHquo2P0MN1HnOLKRXTyQljz8MMlAo+WhRVMlrDQev2DketkJ+H00aTFUXyX9A05VVyNWD9ZsglDQqk/13joh1Qz5LjtvQ3Ei5V9FhGnUUofY4MKduT89Cbic5kql0xvUlvsapez7utaq+5ecYJDLBCrH15c/WBnsOtm8=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.
RGET from tcp://grac43:55100 (mdns:service:gpnp._tcp.local.://grac43:55100/agent=gpnpd,cname=grac4,host=grac43,pid=5193/gpnpd h:grac43 c:grac4):
<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" ...
 ProfileSequence="11" 
 .. 
 SPFile="+OCR/grac4/asmparameterfile/spfileCopyASM.ora"/>
...
RGET from tcp://grac42:37261 (mdns:service:gpnp._tcp.local.://grac42:37261/agent=gpnpd,cname=grac4,host=grac42,pid=4974/gpnpd h:grac42 c:grac4):
<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" ...
 ProfileSequence="11" 
 .. 
 SPFile="+OCR/grac4/asmparameterfile/spfileCopyASM.ora"/> ...


Thursday, March 4, 2021

Oracle AWR data dump -- very important for migration

Most of the time, after migration of a database to a new server, we observe performance degradation. Usually we don't have historical performance data from before the migration, which lands us in a big mess.

To overcome this, it is very important to take an AWR dump from the existing database before migration.


Extract AWR data

To extract the AWR data from the source database, run the awrextr.sql script, which extracts AWR data for a range of snapshots from the database into a Data Pump export file.
 
1. Run the following script to extract AWR:
@?/rdbms/admin/awrextr.sql;


2. The script asks you to select the DBID:
Enter value for dbid:

3. Enter the number of days of snapshots to consider for export:
Enter value for num_days: 2

4. It will list the snapshots from those 2 days in AWR. Choose the begin and end snapshots for the export:
Enter value for begin_snap: 76
Begin Snapshot Id specified: 76
Enter value for end_snap: 86
End Snapshot Id specified: 86

5. It lists the directory objects present in the database; choose the directory location and the dump file name:
Enter value for directory_name: ORACLE_HOME
.....
.....

Enter value for file_name:
awrdat_76_86.dmp

6. Now the dump is generated:
The AWR extract dump file will be located
| in the following directory/file:
| /usr/tmp/
| awrdat_76_86.dmp



Load the AWR data to target

The extracted AWR data is loaded with the awrload.sql script. It will first create a staging schema into which the snapshot data is transferred from the Data Pump file, before it is moved into the database.

1. Run the awrload.sql script to start loading data.

@?/rdbms/admin/awrload.sql

2. Specify the directory name where the export file exists.

Enter value for directory_name:

3. Enter the prefix of the dump file name:

Enter value for file_name:

4. Specify the name of the staging schema where the data is loaded, i.e. AWR_STAGE:

Enter value for schema_name:

5. Specify the default and temporary tablespaces.

Enter value for default_tablespace: SYSAUX
-----------
Enter value for temporary_tablespace: TEMP

6. Loading of the data is successful:

*** AWR Load Started ...

|
| This operation will take a few moments. The
| progress of the AWR load operation can be
| monitored in the following directory/file:
| /usr/tmp/
| AWRDAT_76_86.log



The process will then prompt for the staging schema name; the default is AWR_STAGE. If you accept the default, the script will create the AWR_STAGE user after asking you for the default tablespaces. Once it has completed the AWR load process, the script will drop the AWR_STAGE user.

After the process completes, the AWR tables have the new data in them! You can query DBA_HIST_SNAPSHOT or any of the other DBA_HIST views (including DBA_HIST_ACTIVE_SESS_HISTORY).
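As a hedged sanity check after the load, confirm the imported snapshots are visible under the source DBID (the DBID below is a placeholder), and note that reports against a foreign DBID are generated with awrrpti.sql rather than awrrpt.sql:

SQL> select snap_id, dbid, begin_interval_time
     from dba_hist_snapshot
     where dbid = 1234567890   -- source database DBID (placeholder)
     order by snap_id;

SQL> @?/rdbms/admin/awrrpti.sql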








Thursday, February 11, 2021

Oracle restore point without enabling flashback

 


We all might have heard of guaranteed restore points. However, very few know that we can create a guaranteed restore point without enabling flashback.

A fantastic feature called guaranteed restore point came with Oracle 10g R2. With this feature you can set a restore point which will remain until you drop it. The best part of this feature is that it uses db_recovery_file_dest to keep the before images of changed blocks, even if flashback logging is not enabled for your database. The only things you must do are set the db_recovery_file_dest parameter and create a guaranteed restore point (for the first restore point the database must be in mount mode). Before images of changed blocks are kept in flashback logs once the guaranteed restore point is created. The flashback logs of guaranteed restore points stay small because the before image of a changed block is kept only once, at its first change after the restore point.

To create the first guaranteed restore point while flashback is off, first start the database in mount state after a consistent shutdown. After mounting, create your first guaranteed restore point like below.

SQL>CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
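To confirm the restore point exists and is guaranteed (v$restore_point is available from 10g R2 onward):

SQL> select name, guarantee_flashback_database, storage_size from v$restore_point;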


Below are the high-level steps (here for a Data Guard setup); before proceeding, ensure db_recovery_file_dest_size and the retention are set:

1) Do a few log switches on the primary and make sure the primary and standby are in sync
2) Disable Data Guard and log shipping
3) Suspend archived redo log maintenance (deletion) on the primary and on the standby
4) Create a guaranteed restore point on the standby:
- create restore point BEFORE_REL guarantee flashback database;

5) Do a couple of log switches on the primary
6) Create a guaranteed restore point PRE_RELEASE on the primary
7) Continue to monitor the flashback space (see the queries below)
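For step 7, a hedged pair of queries to watch the space consumed in the recovery area:

SQL> select name, space_limit, space_used, space_reclaimable from v$recovery_file_dest;
SQL> select file_type, percent_space_used, percent_space_reclaimable from v$flash_recovery_area_usage;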


Saturday, January 23, 2021

Oracle SecureFile LOB - Speeding Up LOB import from 11g to 12c

This blog is mainly to give insight into SecureFile LOBs. We all face the issue that, even though we use multiple parallel workers for a LOB import, only 1 channel is used to import the LOB data. However, if we convert to SecureFile during the import, multiple channels are used.

In Oracle 12c it is possible to define the LOB storage method at import time, independent of the export settings. This gives more flexibility when importing data.

If the COMPATIBLE parameter is set to "12.0.0.0" or higher, the default value for the DB_SECUREFILE parameter is "PREFERRED", which means all LOBs will be created as SecureFiles, unless they are explicitly created as BasicFiles or their tablespace uses Manual Segment Space Management.

Prior to 12cR1, Parallel DML (PDML) had a number of restrictions when used against a table containing LOBs (BasicFile or SecureFile). PDML was only supported for the following operations, and only if the table containing the LOB was partitioned.


INSERT INTO ... AS SELECT ...
CREATE TABLE ... AS SELECT ...
DELETE
UPDATE
MERGE
Multitable INSERT

From Oracle Database 12c Release 1 (12.1) onward Oracle supports more parallel operations against partitioned tables containing LOBs (SecureFile and BasicFile).

INSERT
INSERT INTO ... AS SELECT ...
CREATE TABLE ... AS SELECT ...
DELETE
UPDATE
MERGE
Multitable INSERT
SQL*Loader
Import/Export


The new PREFERRED value is now the default if the COMPATIBLE init parameter is set to 12.0.0.0 or higher. PREFERRED makes LOBs be stored as SecureFiles by default, unless the BASICFILE clause is explicitly used when creating the table or the tablespace is not ASSM. The ALWAYS value still forces storage as SecureFiles; however, LOBs are stored as BasicFile if the tablespace is not ASSM, instead of raising an error. BasicFile is not banned.


SQL> show parameter db_secure
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_securefile string PREFERRED


SQL> select securefile from user_lobs where table_name='TBL_LOB_TEST';
SEC
---
YES



impdp .. TRANSFORM=LOB_STORAGE:SECUREFILE|BASICFILE|DEFAULT|NO_CHANGE

DEFAULT – no lob storage clause is set for CREATE TABLE

NO_CHANGE – use settings from dump file

BASICFILE – creates LOBs as basicfile

SECUREFILE – creates LOBs as securefile

This can also be set in PL/SQL using DBMS_DATAPUMP.METADATA_TRANSFORM, as sketched below.
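A minimal PL/SQL sketch of that route, assuming a dump file hr.dmp in the directory object DPDUMP_DIR (both names are illustrative):

DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'FULL');
  DBMS_DATAPUMP.ADD_FILE(h, filename => 'hr.dmp', directory => 'DPDUMP_DIR');
  -- force all imported LOBs to SecureFile storage
  DBMS_DATAPUMP.METADATA_TRANSFORM(h, name => 'LOB_STORAGE', value => 'SECUREFILE');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/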

Example


Imports data and sets LOBs as SECUREFILE


impdp hr/hr DIRECTORY=dpdump_dir DUMPFILE=hr.dmp TRANSFORM=LOB_STORAGE:SECUREFILE



Reference : 

DataPump Export/Import Of LOBs Are Not Executed in Parallel (Doc ID 1467662.1)



Saturday, January 16, 2021

Oracle database RMAN forceful incomplete recovery: adjust SCN and open database



The main reason most of us are not able to open a database after restore and recovery comes down to the below 2 main issues:

1) Missing required archive logs, which results in a mismatch in checkpoint# between datafiles

2) The database SCN is behind the datafile checkpoint#


If the datafile SCNs are not in sync after restore, i.e. some files are fuzzy, and you have the required archive logs, then no special fix is needed beyond applying the additional required archives.
So the first step after restore and recovery is to check that there is no fuzziness.



###############################################
###############################################

Checking Fuzzy 


Fri Jun 05 14:15:15 2015
ALTER DATABASE RECOVER CANCEL
Fri Jun 05 14:15:16 2015
Errors in file /ora_data/diag/rdbms/SNORT/SNORT/trace/SNORT_pr00_18570.trc:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/ora_data/oradata/SNORT/system01.dbf'
ORA-1547 signalled during: ALTER DATABASE RECOVER CANCEL …
ALTER DATABASE RECOVER CANCEL
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL …



The first key thing to check is that all your datafile headers are in sync. We can check this using the below SQL:


select CONTROLFILE_TYPE from v$database ;
select FUZZY,count(*) from v$datafile_header group by fuzzy ;

set lines 187
set numwidth 20
set pages 999
alter session set nls_date_format='DD-MON-YYYY:HH24:MI:SS';
select * from v$recover_file;
select min(fhrba_Seq)min_seq, max(fhrba_Seq) max_seq_required from x$kcvfh;
select distinct fuzzy from v$datafile_header;

alter session set nls_date_format='YYYY/MON/DD hh24:mi:ss';
 
select checkpoint_time,fuzzy,count(*),status
from ( select checkpoint_time,fuzzy,status
       from v$datafile_header
       union all
       select controlfile_time,'CTL',null from v$database)
group by checkpoint_time,fuzzy,status;




How to fix fuzziness and determine the archives needed:


1) Find the archive log needed to fix the fuzziness:

SQL> -- Check for MIN, and MAX SCN in Datafiles
SQL> select min(CHECKPOINT_CHANGE#), max(CHECKPOINT_CHANGE#) from v$datafile_header ;

-- Use MIN(CHECKPOINT_CHANGE#) 2446300 as found before, then use it with this query to find the
-- first SEQ# 'number' and archivelog file needed for recover to start with.
-- All SEQ# up to the online Current Redolog SEQ# must be available without any gap for successful recovery

-- MIN(CHECKPOINT_CHANGE#) 2446300


SQL> select thread#, sequence#, substr(name,1,80) from v$Archived_log
      where 2446300 between first_change# and next_change#;



2) Use the below to determine the max SCN up to which we need to recover the database:

set serveroutput on
declare
scn number(12) := 0;
scnmax number(12) := 0;
begin
for f in (select * from v$datafile) loop
scn := dbms_backup_restore.scandatafile(f.file#);
dbms_output.put_line('File ' || f.file# || ' absolute fuzzy scn = ' || scn);
if scn > scnmax then scnmax := scn; end if;
end loop;

dbms_output.put_line('Minimum PITR SCN = ' || scnmax);
end;
/

SQL> alter database recover database until change 7203942;


Absolute Fuzzy

In some cases we see that there is no fuzziness, yet the database still reports a datafile header mismatch error while opening with resetlogs.

Oracle reserves a section of the file header block of each file for just such an occurrence. This is called the Absolute Fuzzy SCN and represents the SCN required for recovery to make this a consistent file. Our bookends are then defined as the checkpoint SCN and the Absolute Fuzzy SCN. At a minimum, Oracle must recover from the checkpoint SCN through the Absolute Fuzzy SCN for consistency. If Oracle did not detect any SCNs higher than the checkpoint SCN during the backup, then the backup is considered consistent (file header status 0x0) and the Absolute Fuzzy SCN remains 0x0, obviating the need for any backup-necessary redo to be applied. As you can see, this is why Oracle waits until all data blocks in the file have been read and written before it writes the header to the backup set: it permits the proper settings for the bookends.


Follow the below to determine the archives needed for recovery:

SQL> select hxfil file#, substr(hxfnm, 1, 50) name, fhscn checkpoint_change#, fhafs Absolute_Fuzzy_SCN, max(fhafs) over () Min_PIT_SCN from x$kcvfh where fhafs!=0 ;

Note: the Min_PIT_SCN column will return the same value even across multiple rows, as we have applied the analytic MAX() OVER () function to it.

SQL> -- Query V$ARCHIVED_LOG:
SQL> ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-RR HH24:MI:SS';
SQL> SELECT THREAD#, SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG WHERE '31-AUG-11 23:20:14' BETWEEN FIRST_TIME AND NEXT_TIME;

If the above query does not return any rows, the information may have aged out of the controlfile; run the following query against v$log_history.

SQL> -- The V$LOG_HISTORY view does not have a NEXT_TIME column
SQL> ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-RR HH24:MI:SS';
SQL> select a.THREAD#, a.SEQUENCE#, a.FIRST_TIME from V$LOG_HISTORY a
where FIRST_TIME =
( SELECT MAX(b.FIRST_TIME) FROM V$LOG_HISTORY b
WHERE b.FIRST_TIME < to_date('31-AUG-11 23:20:14', 'DD-MON-RR HH24:MI:SS')
) ;


RMAN> RUN
{
SET UNTIL SEQUENCE 531 THREAD 1;
RECOVER DATABASE;
}





###############################################
###############################################

Mismatch between datafile header SCNs due to missing archives.


This is the case where, due to missing archivelogs, there is a mismatch between the checkpoint# of the datafiles, and the below errors are reported:


ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u03/oradata/tstc/dbsyst01.dbf'

Or:

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 48 needs more recovery to be consistent
ORA-01110: data file 48: '/vol06/oradata/testdb/ard01.dbf'

Or 

ORA-01595: error freeing extent (11) of rollback segment (2))
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
Fri Sep 26 10:23:22 2014

ORA-600 [4194]: the official explanation of the error is "Undo Record Number Mismatch While Adding Undo Record". When the database adds an UNDO record during REDO recovery, it finds that the number of the UNDO record does not match, i.e. there is an inconsistency.





Below is the solution adopted for the above problems, to force open the database.


To open the database with resetlogs we need to use the below approach.

1) Start the database in mount stage

2) Set the database parameters _OFFLINE_ROLLBACK_SEGMENTS and _corrupted_rollback_segments. We get the rollback segment list from the database alert log.
 
_OFFLINE_ROLLBACK_SEGMENTS=("_SYSSMU11$", "_SYSSMU12$", "_SYSSMU13$", "_SYSSMU14$", "_SYSSMU15$", "_SYSSMU16$", "_SYSSMU17$", "_SYSSMU18$")

_corrupted_rollback_segments=("_SYSSMU11$", "_SYSSMU12$", "_SYSSMU13$", "_SYSSMU14$", "_SYSSMU15$", "_SYSSMU16$", "_SYSSMU17$", "_SYSSMU18$")


3) Set the database parameters UNDO_MANAGEMENT=MANUAL and _allow_resetlogs_corruption = TRUE



4) Perform dummy recovery:
> recover database until cancel
cancel


5)  Open database with resetlogs 


6) Create new  Undo  tablespace  

CREATE UNDO TABLESPACE undo2 datafile '/u01/app/oracle/oradata/RTS_NEW/undo2_df1.dbf' size 200m autoextend on maxsize 30G;


7) Assign new  undo tablespace to database 

SQL> alter system set undo_tablespace = undo2 scope=spfile;
System altered.
SQL> alter system set undo_management=auto scope=spfile;
System altered.


8) Take Full backup of database 


###############################################
###############################################


The database SCN is behind the datafile checkpoint# and ORA-00600 [2662] is reported

To put it simply, this ORA-00600 error means that a datafile has a recorded SCN that is ahead of the database SCN. The current database SCN is shown as the 3rd argument (in this case 551715) and the datafile SCN is shown as the 5th argument (in this case 562781), hence a difference of:
562781 - 551715 = 11066
In this example that is not too large a gap, but in a real system the difference may be more significant.
Also, if multiple datafiles are ahead of the current SCN, you should expect to see multiple ORA-00600 errors.


Before 12c, one of the below options was used; from 12c, patch/event 21307096 has been introduced to advance the database SCN.

1: oradebug poke to directly modify the SCN values in memory
2: event 10015 to increase the value of the SCN
3: _minimum_giga_scn to increase the value of the SCN
4: gdb/dbx to directly modify the values in memory
5: modify the SCN values in the control file
6: modify the SCN value in the datafile headers
7: the adjust_scn event to increase the SCN
8: roll the SCN forward with multiple restarts




One solution to this problem is quite simple: roll forward the current SCN until it exceeds the datafile SCN. The database automatically generates a number of internal transactions on each startup, hence the way to roll forward the database SCN is simply to perform repeated shutdowns and startups. Depending on how big the gap is, it may be necessary to repeatedly shutdown abort and startup; the gap between the 5th and 3rd parameters of the ORA-00600 will decrease each time. Eventually the gap will reduce to zero and the database will open.
 
 
-> Alter database open resetlogs (will fail with ORA-00600: internal error code, arguments: [2662], ...)
-> Shutdown abort
-> Startup
  



Pre-12c: _minimum_giga_scn


There is also another option of setting _minimum_giga_scn, which is no longer supported from 11.2.0.2.5 onward.
 
For the past 3 days it has been failing for the same reason:
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [2662], [1477], [4214140426], [1477], [4215310734], [4228956], [], [], [], [], [], []
Error 704 happened during db open, shutting down database
USER (ospid: 3866830): terminating the instance due to error 70
 
SQL> select checkpoint_change# from v$database;
SQL> select ceil(&decimal_scn_expected/1024/1024/1024) from dual;
(enter the value from the first select when prompted)
 
Then use those results as follows:
 set parameter _minimum_giga_scn=<results from most recent query> in the init.ora file.
 
startup mount
recover database
alter database open;
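As a worked example using the ORA-600 [2662] arguments shown above: the expected SCN is 4215310734, and ceil(4215310734/1024/1024/1024) = ceil(3.93) = 4, so here you would set _minimum_giga_scn=4 in the init.ora.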
 




Post 12c : applying 21307096  event/patch 

From 12c, Oracle Support introduced patch 21307096 to advance the database SCN before opening the database with resetlogs. This is an indirect replacement for _minimum_giga_scn.

The below steps can be followed for applying the 21307096 event/patch:

 
1. Apply patch 21307096 then: 

2. The fix needs to be enabled with Event 21307096 at level SCN delta. 

The SCN delta, in million units, is in the range of values from 1 to 4095; the fix increases the SCN to:

lowest_scn + event level * 1000000
 
Example: if the lowest datafile checkpoint scn in the database is 990396 
and the highest is 992660 then SCN delta is 1; given by (992660 - 990396) / 1000000 
 
event="21307096 trace name context forever, level 1" 
 
or use this query: 
 
select decode(ceil((max(CHECKPOINT_CHANGE#) - min(CHECKPOINT_CHANGE#))/1000000),0,'Event 21307096 is not needed' 
, 'event="21307096 trace name context forever, level ' 
||ceil((max(CHECKPOINT_CHANGE#) - min(CHECKPOINT_CHANGE#))/1000000) 
||'"') "EVENT TO SET in init.ora:" 
from v$datafile_header 
where status != 'OFFLINE'; 
 
Note that the event is needed to activate this fix so please add it in the init.ora file. 

Here are some tests in 12.1.0.2 using each level for alter database open resetlogs:
  level 1 Elapsed: 00:01:02.35
  level 2 Elapsed: 00:02:16.23
  level 6 Elapsed: 00:06:08.05
  
In general: based on a 16K-per-second SCN rate, the open resetlogs time
would be at least (event level * 1000000 / 16000) seconds. Then level 1 would be at least
62+ seconds and level 4095 would be 71+ hours!


Before starting:
- Check the db is in the mounted state and the ORACLE_HOME is the right one, i.e. the one with patch 21307096 applied.
- Backup the current controlfiles as we may be overwriting them.
- Ensure all datafiles are online.
 
a. Ensure datafiles and redo log files have the correct paths. If not, rename the files appropriately (see the sketch below).
select name from v$datafile;
select member from v$logfile;
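A hedged sketch of such a rename, with assumed old/new paths (run while the database is mounted):

SQL> alter database rename file '/old/path/system01.dbf' to '/new/path/system01.dbf';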

 
b. Disable block change tracking.
alter database disable block change tracking;


c. Take a backup of "create controlfile" command -
conn / as sysdba
alter database backup controlfile to trace;
oradebug setmypid
oradebug tracefile_name
!cat <tracefile_name>


d. Startup mount the database after uncommenting the below parameters -

_corrupted_rollback_segments = 'take from alert log'
_allow_resetlogs_corruption = TRUE
event = "21307096 trace name context forever, level 1"
undo_management=MANUAL  -- Comment out undo_management=AUTO

 
e. Perform fake recovery and open resetlogs -

conn / as sysdba
recover database using backup controlfile until cancel;
CANCEL

alter database open resetlogs;
 

If the above fails to open the database then -

- Modify event 21307096 level to next level.
- startup nomount the db
- create the controlfile using the command saved earlier.
- execute step e) again.


f. Once the database is opened, add a temp file to the TEMP tablespace and create a new undo tablespace UNDOTBS1 (a sketch follows).
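A minimal sketch for step f, with assumed file paths and sizes:

SQL> alter tablespace TEMP add tempfile '/u01/oradata/db/temp01.dbf' size 2g autoextend on;
SQL> create undo tablespace UNDOTBS1 datafile '/u01/oradata/db/undotbs01.dbf' size 500m autoextend on maxsize 10g;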

g. Shut down the database.
shut immediate


h. Comment out the below parameters in init file -

_corrupted_rollback_segments =
_allow_resetlogs_corruption = TRUE
event = "21307096 trace name context forever, level 3"
undo_management=MANUAL  -- Uncomment undo_management=AUTO


Set undo_tablespace=UNDOTBS1.

i. Open the database.

 
 

###############################################
###############################################

----------------
ORA-01194 Error:
----------------
-- this error may raise during startup a cloned database
-- resolution: provide the online redo log file to recover

SQL> startup 
ORACLE instance started.
..
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
SQL> alter database open noresetlogs;
alter database open noresetlogs
*
ERROR at line 1:
ORA-01588: must use RESETLOGS option for database open
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'C:\ORACLE\ORADATA\MANAGER\SYSTEM01.DBF'
SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
ORA-00279: change 405719 generated at 06/30/2008 15:51:04 needed for thread 1
ORA-00289: suggestion : C:\ORACLE\RDBMS\ARC00019.001
ORA-00280: change 405719 for thread 1 is in sequence #19
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
C:\ORACLE\ORADATA\MANAGER\REDO03.LOG
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;
Database altered.


----------------
ORA-01152 Error:
----------------
-- resolution: provide the online redo log file to recover

ORA-00289: suggestion :
/u01/app/oracle/admin/finance/arch/finance/_0000012976.arc
ORA-00280: change 962725326 for thread 1 is in sequence #12976
ORA-00278:
logfile'/u01/app/oracle/admin/finance/arch/finance/_0000012975.arc'
no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/pase16/oradata/finance/system_01.dbf'
ORA-01112: media recovery not started
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/pase04/oradata/finance/redo01a.rdo
ORA-00279: change 962746677 generated at 07/30/2008 04:33:52 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/admin/finance/arch/finance/_0000012978.arc
ORA-00280: change 962746677 for thread 1 is in sequence #12978
ORA-00278: log file '/pase04/oradata/finance/redo01a.rdo'
no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/pase04/oradata/finance/redo02a.rdo
Log applied.
Media recovery complete. 


----------------
ORA-00376 Error:
----------------
-- reason: the datafile or tablespace might be offline
-- resolution: bring the tablespace or datafile online (a sketch follows)

ORA-00376: file 10 cannot be read at this time
ORA-01110: data file 10: '/u01/app/oracle/remorse/data_01.dbf'
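A hedged illustration of the fix, using the file number from the error above (media recovery may be required before the file can be brought online):

SQL> recover datafile 10;
SQL> alter database datafile 10 online;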


Friday, January 1, 2021

Enabling and Disabling Oracle Database Options after installation using chopt utility


Back in Oracle 9i, 10.1 and 10.2, whenever a specific component needed to be installed or deinstalled, we had to use the OUI. The OUI handled these requests to enable or disable an Oracle component.

In Oracle 11.1 we can remove/deinstall a component from an already installed Oracle Home using the OUI, but only if the home version is 11.1.0.6 (the base release). It is not possible to deinstall/remove a component from an 11.1.0.7 home, a new change in the 11.1 patchset.

For 11.1.0.7, or versions which don't support these changes via the OUI, we can enable/disable the specific component at the binary level.

Steps:

1. Shutdown all database instance(s)/service(s) running from the Oracle Database Home
2. Run the following relink command to enable the Oracle Label Security option at the binary level:

% cd $ORACLE_HOME/rdbms/lib
% make -f ins_rdbms.mk lbac_on ioracle

3. Restart the database instance(s) and ensure that the "Oracle Label Security" option is turned on, i.e. TRUE:

SQL> select * from v$option where parameter='Oracle Label Security';

PARAMETER        VALUE
----------------------- -----------
Oracle Label Security   TRUE
Note :

The following are a few of the component options that can be controlled (switched ON or OFF) at the binary level.

Component/Option              Enable      Disable

Partitioning                  part_on     part_off
Datamining                    dm_on       dm_off
OLAP                          olap_on     olap_off
Data Vault                    dv_on       dv_off
Real Application Testing      rat_on      rat_off
Oracle Label Security         lbac_on     lbac_off
Real Application Cluster      rac_on      rac_off
Oracle Database Vault         dv_on       dv_off
Automatic Storage Mgt         asm_on      asm_off
Context Management Text       ctx_on      ctx_off




Starting with Oracle 11g R2 EE, specific components can be enabled/disabled with the help of the chopt utility. The chopt tool is a command-line utility located in the ORACLE_HOME/bin directory.

The syntax for chopt tool:

chopt [ enable | disable] db_option

 

Usage:

chopt <enable|disable> <option>

options:
dm = Oracle Data Mining RDBMS Files
dv = Oracle Database Vault option
lbac = Oracle Label Security
olap = Oracle OLAP
partitioning = Oracle Partitioning
rat = Oracle Real Application Testing


Steps:

1. Shut down the database (SID=orcl) with srvctl or SQL*Plus:

srvctl stop database -d myDb -> clusterware env

SQL> shutdown immediate -> single instance

2. Stop the database service, OracleServiceSID, using the services.msc.

3. Run the following commands:

cd %ORACLE_HOME%/bin

chopt enable lbac

– to enable Oracle Label Security

4. Start the database service, OracleServiceSID, using the services.msc.

5. Start up the database:

srvctl start database -d myDb

SQL> startup

example:

bash-3.00$ chopt enable lbac

Writing to /oracle11g/u01/app/oracle/product/11.2.0/install/enable_lbac.log...
%s_unixOSDMakePath% -f /oracle11g/u01/app/oracle/product/11.2.0/rdbms/lib/ins_rdbms.mk lbac_on
%s_unixOSDMakePath% -f /oracle11g/u01/app/oracle/product/11.2.0/rdbms/lib/ins_rdbms.mk ioracle
s_unixOSDMakePath is nothing but the make command.



To verify whether an option is enabled or disabled, enter:

bash-3.00$ cd $ORACLE_HOME/rdbms/lib
bash-3.00$ /usr/ccs/bin/ar -t libknlopt.a | grep -c kzlibac.o

where kzlibac.o denotes Oracle Label Security

If the output is 0, the option is disabled.
If the output is 1, the option is enabled.

The output logs are maintained under the "$ORACLE_HOME/install/" path:

bash-3.00$ pwd
/oracle11g/u01/app/oracle/product/11.2.0/install
bash-3.00$ ls -lrt
-rwxrwxr-x   1 ora11g   dba11     259989 Dec 16 21:21 make.log
-rw-r--r--   1 ora11g   dba11        190 Dec 29 18:47 enable_lbac.log
The components installed at the time of Oracle Home installation can be found in $ORACLE_HOME/install/make.log.




If we are doing a migration, we need to verify in the source what options are enabled and what options are actually used in the database. To check which database options were actually used, check DBA_FEATURE_USAGE_STATISTICS. Oracle uses data from the same view during an audit when charging you for licenses. Download and run options_packs_usage_statistics.sql from Doc ID 1317265.1 for a detailed report. By default the feature usage view is updated about once per week. You can force the view to be updated by using the DBMS_FEATURE_USAGE_INTERNAL package.


EXEC DBMS_FEATURE_USAGE_INTERNAL.exec_db_usage_sampling(SYSDATE);


COLUMN name  FORMAT A60
COLUMN detected_usages FORMAT 999999999999
SELECT u1.name,
       u1.detected_usages,
       u1.currently_used,
       u1.version
FROM   dba_feature_usage_statistics u1
WHERE  u1.version = (SELECT MAX(u2.version)
                     FROM   dba_feature_usage_statistics u2
                     WHERE  u2.name = u1.name)
AND    u1.detected_usages > 0
AND    u1.dbid = (SELECT dbid FROM v$database)
ORDER BY name;





References :
How to Check and Enable/Disable Oracle Binary Options [ID 948061.1]
How to enable Partitioning in Oracle Database Enterprise Edition 11.2 on Windows [ID 1188238.1]
Database Options/Management Packs Usage Reporting for Oracle Databases 11gR2, 12c, 12cR2 and 18c (Doc ID 1317265.1)