Friday, September 24, 2021

Oracle PGA_AGGREGATE_LIMIT dependency on Processes parameter

The motivation for writing this blog post is a scenario we recently faced where the PGA_AGGREGATE_LIMIT value was automatically increasing after it had been set lower than the desired value.

The initialization parameter PGA_AGGREGATE_LIMIT was introduced in Oracle Database 12.1.0.1. It is used to put a hard limit on PGA memory usage. If PGA usage exceeds the defined PGA_AGGREGATE_LIMIT value, Oracle Database aborts or terminates the sessions or processes that are consuming the most PGA memory.

From Oracle 12.1.0.2, by default, the PGA_AGGREGATE_LIMIT parameter is set to the greater of 2 GB, 200% of the PGA_AGGREGATE_TARGET value, or 3 MB times the value of the PROCESSES parameter. However, it will not exceed 120% of the physical memory size minus the total SGA size.


Since Oracle Database 12.2.0.1, the default value calculation has been adjusted again as below:

> If MEMORY_TARGET is set, then PGA_AGGREGATE_LIMIT defaults to the MEMORY_MAX_TARGET value.

> If MEMORY_TARGET is not set, then PGA_AGGREGATE_LIMIT defaults to 200% of PGA_AGGREGATE_TARGET.

> If MEMORY_TARGET is not set, and PGA_AGGREGATE_TARGET is explicitly set to 0, then the value of PGA_AGGREGATE_LIMIT is set to 90% of the physical memory size minus the total SGA size.

> In all cases, the default PGA_AGGREGATE_LIMIT is at least 2 GB and at least 3 MB times the PROCESSES parameter (and at least 5 MB times the PROCESSES parameter for an Oracle RAC instance).
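The rules above can be sketched in a few lines. This is a simplified model of the 12.2 default calculation, not an Oracle API: the function name, argument names, and the bytes-based units are my own, and the real instance also caps the result against physical memory.

```python
GB, MB = 1024 ** 3, 1024 ** 2

def default_pga_aggregate_limit(pga_aggregate_target, processes,
                                memory_target=None, memory_max_target=None,
                                physical_memory=None, sga_size=0, is_rac=False):
    """Toy model of the 12.2 default PGA_AGGREGATE_LIMIT rules (sizes in bytes)."""
    if memory_target:                              # MEMORY_TARGET is set
        base = memory_max_target
    elif pga_aggregate_target > 0:
        base = 2 * pga_aggregate_target            # 200% of PGA_AGGREGATE_TARGET
    else:                                          # PGA_AGGREGATE_TARGET explicitly 0
        base = int(0.9 * (physical_memory - sga_size))
    floor = (5 if is_rac else 3) * MB * processes  # per-process floor
    return max(base, 2 * GB, floor)

# A 1 GB PGA target with 300 processes only reaches the 2 GB floor;
# raise PROCESSES to 1000 and the 3 MB-per-process term (3000 MB) wins,
# which is exactly the "limit keeps growing with PROCESSES" behaviour above.
```

With PROCESSES high enough, the 3 MB (or 5 MB for RAC) per-process term dominates, which explains why the limit appeared to increase on its own when it was set lower than this floor.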



Reference:

Limiting Process Size with Database Parameter PGA_AGGREGATE_LIMIT (Doc ID 1520324.1)

Swapping While Sum of SGA Usage And PGA_AGGREGATE_LIMIT Is Larger Than The Amount Of Physical Memory (Doc ID 2273931.1)

ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 : after rac database restore



While performing a restore of a 2-node RAC production database from a disk-based backup, by duplicating to a single-instance database, the restore itself completed successfully.
Once the restore had completed, I tried to open the database with the RESETLOGS option, but it ended with the error below, so I thought of sharing this little piece of information.


##########################
#   Errors 
##########################
RESETLOGS after incomplete recovery UNTIL CHANGE 13115861554203
ORA-38856 signalled during: alter database open resetlogs…
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled

##########################
#   Command Used  
##########################
Alter Database open ResetLogs;


##########################
#   Information  
##########################
This appears to be Bug 4355382, and it is expected while doing a RAC restore/recovery.



##########################
#  Solution 
##########################
Add the _no_recovery_through_resetlogs parameter and set it to TRUE.

I added _no_recovery_through_resetlogs=TRUE to our PFILE and brought the database up to the MOUNT stage.
I then opened the database with the RESETLOGS option and it worked.
This parameter tells Oracle not to perform any recovery during the resetlogs operation.
After opening the database, the parameter can be removed from the PFILE.

Tuesday, September 14, 2021

Oracle - Sql ID Signature , FORCE_MATCHING_SIGNATURE , EXACT_MATCHING_SIGNATURE ,Adaptive Cursor Sharing

 

In this blog we will try to get some insight into the topics below, which are all interlinked and help in understanding hard parsing:

1) How different SQL IDs having the same signature, FORCE_MATCHING_SIGNATURE and EXACT_MATCHING_SIGNATURE are related
2) What bind peeking is
3) Adaptive Cursor Sharing, is_bind_sensitive and is_bind_aware
4) Literals vs bind variables
5) The CURSOR_SHARING parameter
6) Useful SQL statements
7) What skewed data is
 


SQL performance depends on:

1) The CURSOR_SHARING parameter
2) Use of bind variables
3) Whether the data is skewed
4) The optimizer mode parameter
5) Whether histogram statistics are gathered



########################################
SQL ID signature, FORCE_MATCHING_SIGNATURE and EXACT_MATCHING_SIGNATURE 
########################################

Using bind variables in SQL statements is a must to make a transaction-processing application scalable.

To find the queries that don't use bind variables and are therefore parsed each time they are executed, we can look at the signature columns.

Starting with Oracle 10gR2, two interesting columns – FORCE_MATCHING_SIGNATURE and EXACT_MATCHING_SIGNATURE – were added to a number of views: V$SQL, V$SQLAREA, DBA_HIST_SQLSTAT and DBA_HIST_ACTIVE_SESS_HISTORY. As you can see, they are present all over the place – shared pool, ASH, AWR and Statspack – which gives us a good choice of sources to look for problematic SQL.

EXACT_MATCHING_SIGNATURE is a signature calculated on the normalized SQL text. The normalization includes the removal of white space and the uppercasing of all non-literal strings.

FORCE_MATCHING_SIGNATURE is a signature calculated as if CURSOR_SHARING were set to FORCE. It is effectively another hash value computed from the SQL statement.

As noted above, FORCE_MATCHING_SIGNATURE is calculated from the SQL text as if CURSOR_SHARING were set to FORCE – which forces SQL statements to share cursors by replacing constants with bind variables, so all statements that differ only in their constants share the same signature.

Let's have a look at the FORCE_MATCHING_SIGNATURE value for the same SQLs:
When CURSOR_SHARING=FORCE ==> FORCE_MATCHING_SIGNATURE applies
When CURSOR_SHARING=EXACT ==> EXACT_MATCHING_SIGNATURE applies
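The idea behind the two signatures can be illustrated with a toy normalizer. This is not Oracle's actual normalization or hash algorithm (the md5-based `signature` helper is just a stand-in), but it shows why two statements that differ only in literals get different exact signatures and the same force-matching signature.

```python
import hashlib
import re

def exact_matching_text(sql):
    # Normalize: collapse whitespace and uppercase everything outside quoted literals.
    parts = re.split(r"('[^']*')", sql)
    return "".join(p if p.startswith("'") else re.sub(r"\s+", " ", p.upper())
                   for p in parts).strip()

def force_matching_text(sql):
    # Additionally replace literals (quoted strings, bare numbers) with a bind.
    text = re.sub(r"'[^']*'", ":B", exact_matching_text(sql))
    return re.sub(r"\b\d+\b", ":B", text)

def signature(text):
    # Stand-in for Oracle's signature hash, for illustration only.
    return int(hashlib.md5(text.encode()).hexdigest()[:16], 16)

a = "select * from emp where id = 10"
b = "SELECT *  FROM emp WHERE id = 20"
# Different exact-matching texts, identical force-matching text:
print(exact_matching_text(a) == exact_matching_text(b))   # False
print(force_matching_text(a) == force_matching_text(b))   # True
```

This is exactly why grouping V$SQLAREA by FORCE_MATCHING_SIGNATURE (as in the queries later in this post) surfaces statements that only differ in their literals.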




########################################
Bind Peeking 
########################################

Sharing the same execution plan across different bind values is not always optimal, since different bind values can select very different data volumes. This is why Oracle introduced the bind variable peeking feature, which allows Oracle to peek at the bind variable value and generate the best possible execution plan for it. However, bind variable peeking occurs only at hard parse time, which means that as long as the query is not hard parsed again, it will share the execution plan that corresponds to the last hard-parsed bind value.

Bind variable peeking was introduced in Oracle 9i. Prior to bind variable peeking, the optimizer did not know the value being used in the statement and could end up choosing a poor execution plan. Bind variable peeking looks at the value of the bind variable the very first time the statement is executed and generates an execution plan. The cached execution plan will be optimal for the first value passed to the bind variable; subsequent executions of the statement will use the same execution plan regardless of the bind value.

Using bind variable peeking is good if the bind variable selects from a column that is not skewed. This means the initial execution plan for the statement will be optimal for all values of the bind variable. For example, a bind variable on emp.id is a good idea as the value is unique in the table; the initial execution plan will be good for all statements against id. On the other hand, using a bind variable on emp.deptid could pose problems. Say there are two departments: department 20 has 3 employees and department 21 has 10,000 employees. As the emp.deptid data is skewed, the initial execution plan of the statement may not be optimal for subsequent executions. Looking at select name from emp where deptid=:deptid, with deptid set to 20, only 3 rows are returned.

If this was the initial statement, the optimizer would create an execution plan which uses an index. If deptid is then set to 21, where 10,000 rows are returned, the optimizer will still use the initial plan and the index. Using the index in this case is not optimal, as a large percentage of the table is returned; a full table scan would be better. So you see the problem with bind variable peeking.
Oracle 11g overcomes the bind variable peeking problem with the new Adaptive Cursor Sharing feature. Due to the bind peeking problem, some developers will purposely use literal values for columns with highly skewed data.

When they use literal values, Oracle creates a separate cursor with its own execution plan for each statement, which ensures each query is executed optimally.
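The deptid scenario above can be sketched as a toy model. The row counts follow the example, the selectivity threshold and plan names are made-up simplifications, and the dict stands in for the shared pool's cursor cache: the plan is chosen only at hard parse time from the first peeked value, then reused.

```python
# Toy model of bind variable peeking. Row counts follow the example above:
# department 20 has 3 employees, department 21 has 10,000.
ROWS = {20: 3, 21: 10_000}
TABLE_ROWS = sum(ROWS.values())

plan_cache = {}   # sql text -> cached plan (stands in for the shared cursor)

def execute(sql_text, deptid):
    if sql_text not in plan_cache:                       # hard parse: peek the bind
        selectivity = ROWS[deptid] / TABLE_ROWS
        plan_cache[sql_text] = "INDEX" if selectivity < 0.05 else "FULL SCAN"
    return plan_cache[sql_text]                          # soft parse: reuse the plan

q = "select name from emp where deptid = :deptid"
print(execute(q, 20))   # peeked value 20 (3 rows): INDEX plan is cached
print(execute(q, 21))   # 10,000 rows, but the cached INDEX plan is reused
```

Run the two executions in the other order and the whole workload would be stuck with a full table scan instead; either way, one of the two bind values gets a poor plan.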




########################################
Adaptive Cursor Sharing  ,  is_bind_sensitive and is_bind_aware
########################################

To avoid this situation, Oracle introduced Adaptive Cursor Sharing in 11g, allowing Oracle to adapt itself to the bind variable values when necessary without having to wait for a hard parse of the query.

In V$SQL this is indicated by the columns is_bind_sensitive and is_bind_aware. The former indicates that a particular sql_id is a candidate for adaptive cursor sharing, whereas the latter means that Oracle acts on the information it has gathered about the cursor and alters the execution plan.

The catch is that adaptive cursor sharing can only lead to an improved plan after the SQL statement has performed badly at least once.

You can bypass the initial monitoring by supplying the BIND_AWARE hint: it instructs the database that the query is bind sensitive and that adaptive cursor sharing should be used from the very first execution onwards. A prerequisite for the hint is that the bind variables only appear in the WHERE clause and that an applicable histogram is available. The hint may improve performance, but you should be aware that it's rarely the answer in the case of generic static statements.

The NO_BIND_AWARE hint does exactly the opposite: it disables bind-aware cursor sharing.
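The lifecycle described above can be pictured with a toy class. This is a deliberately simplified model (the selectivity buckets, threshold, and plan names are invented, not the real V$SQL_CS_* machinery), but it captures the key behaviour: the cursor starts bind sensitive, must run badly once with the shared plan, and only then becomes bind aware and gets a separate child cursor per selectivity range.

```python
class AdaptiveCursor:
    """Toy model of adaptive cursor sharing: child plans keyed by selectivity bucket."""
    def __init__(self):
        self.is_bind_sensitive = True   # peeked bind on a histogrammed column
        self.is_bind_aware = False
        self.children = {}              # selectivity bucket -> child cursor's plan

    @staticmethod
    def _plan(bucket):
        return "INDEX" if bucket == "low" else "FULL SCAN"

    def execute(self, selectivity):
        bucket = "low" if selectivity < 0.05 else "high"
        if not self.children:                 # hard parse: peek the first value
            self.first_bucket = bucket
            self.children[bucket] = self._plan(bucket)
        if not self.is_bind_aware:
            if bucket != self.first_bucket:   # mismatched run "tanks" once...
                self.is_bind_aware = True     # ...then the cursor adapts
            return self.children[self.first_bucket]
        return self.children.setdefault(bucket, self._plan(bucket))

c = AdaptiveCursor()
print(c.execute(0.001))   # INDEX (peeked: low selectivity)
print(c.execute(0.60))    # still INDEX: the shared plan runs badly once
print(c.execute(0.60))    # FULL SCAN: now bind aware, new child cursor
```

This also illustrates why the BIND_AWARE hint exists: it lets you skip the one mismatched execution the monitoring otherwise requires.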



########################################
Literals  vs bind variables 
########################################

A literal means the values being compared in the SQL statement are hard coded in the statement itself.
An example of a literal value would be, select name from emp where id=10.
The use of literal values causes many unique statements to be cached, as each literal value makes the statement text different.

This uses more space in the Shared Pool. With bind variables the statement text remains the same, so only one statement is cached instead of many.

A bind variable is created and set to a particular value, which is then used by a SQL statement.
This allows the developer to assign any value to the bind variable and the SQL statement will use the new value. For example, select name from emp where id=:id. Here :id is the bind variable, and the value passed into the statement can change as the developer needs.
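The shared pool effect is easy to demonstrate with a toy count of distinct SQL texts (each distinct text standing in for a separate parent cursor):

```python
# 1000 executions with literals produce 1000 distinct statement texts,
# i.e. 1000 cursors to parse and cache; with a bind variable the text
# (and hence the cursor) is shared across all 1000 executions.
literal_texts = {f"select name from emp where id={i}" for i in range(1000)}
bind_texts = {"select name from emp where id=:id" for _ in range(1000)}
print(len(literal_texts), len(bind_texts))
```

One statement text means one hard parse and one shared-pool entry instead of a thousand, which is the scalability argument made above.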



########################################
Cursor Sharing
########################################

The CURSOR_SHARING parameter can be set to one of three values: FORCE, EXACT or SIMILAR. This parameter tells the Oracle server process how to handle statements that contain literal values. If it is set to FORCE or SIMILAR, the server process strips out the literal values in SQL statements and replaces them with system-generated bind variables, so with FORCE or SIMILAR all statements go through the bind variable peeking process. At first I thought both SIMILAR and FORCE would expose and amplify the bind peeking problem.

In my testing I determined that SIMILAR does not expose the bind peeking problem, but FORCE does.
If the parameter is set to EXACT, the Oracle server processes the query as it is and generates an execution plan based on the query text; with EXACT, literal values are maintained.

Here are the tests I performed. Notice that both SIMILAR and EXACT act the same when literal and bind variables are used.

According to MOSC Doc 11657468.8, adaptive cursor sharing can be disabled by setting hidden parameters:
 
alter system set "cursor_sharing"=exact scope=both;
alter system set "_optimizer_extended_cursor_sharing_rel"=none scope=both;
alter system set "_optimizer_extended_cursor_sharing"=none scope=both;
alter system set "_optimizer_adaptive_cursor_sharing"=false scope=both;



########################################
Skew data
########################################
Skewed columns are columns in which the data is not evenly distributed among the rows.
For example, suppose:
You have a table order_lines with 100,000,000 rows
The table has a column named customer_id
You have 1,000,000 distinct customers

Some (very large) customers can have hundreds of thousands or millions of order lines.
In the above example, the data in order_lines.customer_id is skewed. On average, you'd expect each distinct customer_id to have 100 order lines
(100 million rows divided by 1 million distinct customers). But some large customers have many, many more than 100 order lines.
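The arithmetic behind that estimate, spelled out:

```python
# Without a histogram the optimizer assumes a uniform distribution,
# so its row estimate for any customer_id is rows / distinct values.
total_rows = 100_000_000
distinct_customers = 1_000_000
uniform_estimate = total_rows // distinct_customers    # 100 rows per customer
big_customer_rows = 1_000_000                          # an actual large customer
error_factor = big_customer_rows // uniform_estimate   # estimate is off 10,000x
print(uniform_estimate, error_factor)
```

A 10,000x cardinality error is easily the difference between a nested-loop index plan and a hash-join full scan, which is exactly the problem described next.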

This hurts performance because Oracle bases its execution plan on statistics. So, statistically speaking, Oracle thinks it can access order
lines based on a non-unique index on customer_id and get only 100 records back, which it might then join to another table or whatever using 
a NESTED LOOP operation.

But, then when it actually gets 1,000,000 order lines for a particular customer, the index access and nested loop join are hideously slow. 

It would have been far better for Oracle to do a full table scan and hash join to the other table.
So, when there is skewed data, the optimal access plan depends on which particular customer you are selecting!

Oracle lets you avoid this problem by optionally gathering "histograms" on columns, so Oracle knows which values have lots of rows and 
which have only a few. That gives the Oracle optimizer the information it needs to generate the best plan in most cases.
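A minimal sketch of what a frequency histogram buys the optimizer, reusing the deptid numbers from the bind peeking section (toy values, not real DBMS_STATS output):

```python
from collections import Counter

# deptid 20 has 3 rows, deptid 21 has 10,000 - the skewed column from earlier.
column_values = [20] * 3 + [21] * 10_000
histogram = Counter(column_values)   # value -> row count, like a frequency histogram

def estimated_rows(value, n_rows=len(column_values), n_distinct=2):
    # With a histogram: the per-value count. Without: the uniform guess.
    return histogram.get(value, 0), n_rows // n_distinct

print(estimated_rows(20))   # histogram says 3, uniform guess says 5001
print(estimated_rows(21))   # histogram says 10000, uniform guess says 5001
```

With per-value counts available, the optimizer can pick an index plan for deptid 20 and a full scan for deptid 21 instead of one compromise plan for both.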




########################################
Useful sql queries 
########################################

-- Converting Sql id to Signature 
SET SERVEROUTPUT ON
DECLARE
 SQLTEXT   CLOB;
 SIG       NUMBER;
BEGIN
 SELECT SQL_TEXT
 INTO SQLTEXT
 FROM DBA_HIST_SQLTEXT
 WHERE SQL_ID = '80xyxfffu6awb';
 SIG := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE (SQLTEXT, FALSE);
 DBMS_OUTPUT.PUT_LINE ('SIGNATURE=' || SIG);
END;
/
 

-- If you want to find Bind variable for SQL_ID
col VALUE_STRING for a50  
SELECT NAME,POSITION,DATATYPE_STRING,VALUE_STRING FROM gv$sql_bind_capture WHERE sql_id='&sql_id';


-- checking Adaptive Cursor sharing 
select
   hash_value,
   sql_id,
   child_number,
   range_id,
   low,
   high,
   predicate
from
   v$sql_cs_selectivity;
 
select
   hash_value,
   sql_id,
   child_number,
   bucket_id,
   count
from
   v$sql_cs_histogram;
 
select
  sql_id,
  hash_value,
  plan_hash_value,
  is_bind_sensitive,
  is_bind_aware,
  sql_text
from
   v$sql;
 
select
   hash_value,
   sql_id,
   child_number,
   bind_set_hash_value,
   peeked,
   executions,
   rows_processed,
   buffer_gets,
   cpu_time
from
   v$sql_cs_statistics; 


  col exact_matching_signature for 99999999999999999999999999
  col sql_text for a50
  select sql_id,force_matching_signature, exact_matching_signature, SQL_TEXT from v$sqlarea where UPPER(sql_text) like '%DUMMY%' order by UPPER(sql_text);
 
 col force_matching_signature for 99999999999999999999999999
 select * from (select force_matching_signature, count(*) "Count" from v$sqlarea group by force_matching_signature order by 2 desc) where rownum<=3;
 
 select sql_fulltext from v$sql where force_matching_signature=7415896326081021278 and rownum=1;
 



########################################
References : 
########################################
Adaptive Cursor Sharing Overview [ID 740052.1] 


Tuesday, September 7, 2021

Exadata Disk replacement

 
In this blog we will concentrate on the disk replacement strategy for Exadata. However, before proceeding to disk replacement we will also try to get some insight into ASM redundancy levels in Exadata and failure groups.



When we created these ASM disk groups, we specified normal redundancy. With Exadata, external redundancy is not an option – you need to use either normal or high redundancy. With normal redundancy, each extent is mirrored to a different cell; with high redundancy, extents are mirrored via ASM to two additional cells. Specifically, extents are mirrored to partner disks in different failure groups.

On an Exadata Quarter Rack, Oracle only supports normal redundancy because there are only three failure groups, one for each cell. While this certainly seems like enough failure groups to support three extent mirrors, the challenge comes with Oracle's requirements for the OCR and voting disks.

With high redundancy ASM disk groups, Oracle requires five failure groups. Since Oracle cannot guarantee that you won't place your OCR/voting disks on this ASM disk group, it simply prevents you from creating it. This being said, the syntax above for creating a high redundancy ASM disk group is valid for the Half Rack and Full Rack configurations.

In the Exadata environment, the failure group for the disks on each cell node is the cell node itself. This way, the blocks on a disk in one cell node are also present on another cell node, so even if a cell node is turned off there will be no problem.



Before we proceed, we need to understand what a Cell Disk and a Grid Disk are.

The Exadata Storage Server contains 12 physical disks.
There is a one-to-one relationship between a physical disk and a cell disk: one physical disk corresponds to a single cell disk.
A cell disk can then be carved into many grid disks.
A grid disk corresponds to one ASM disk.
An ASM diskgroup comprises many ASM disks.
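A minimal object model of that carving, assuming one physical disk backs one cell disk and several grid disks (each exposed to ASM as one ASM disk) are carved from it; all names here are illustrative only:

```python
class PhysicalDisk:
    def __init__(self, slot):
        self.slot = slot

class CellDisk:
    def __init__(self, physical_disk, name):
        self.physical_disk = physical_disk    # exactly one physical disk (1:1)
        self.name = name
        self.grid_disks = []

    def carve(self, prefix):
        # A grid disk carved out of this cell disk; it becomes one ASM disk.
        grid_disk = f"{prefix}_{self.name}"
        self.grid_disks.append(grid_disk)
        return grid_disk

cd = CellDisk(PhysicalDisk(5), "CD_05_cell01")
asm_disks = [cd.carve(p) for p in ("DATA", "RECO", "DBFS_DG")]
print(asm_disks)
```

The naming convention mirrors what you see later in the V$ASM_DISK paths: one cell disk typically contributes one grid disk to each diskgroup (DATA, RECO, DBFS_DG).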


For disk maintenance activity, the asmdeactivationoutcome attribute of the grid disks and the DISK_REPAIR_TIME attribute of the diskgroup are important.


Below are the 2 files that make cell nodes visible to the database node (compute node):
cellinit.ora -- decides which network carries storage traffic.
cellip.ora -- list of cells; new cells can be added dynamically without a shutdown.


Below is the hierarchy in which the disks are created:
A physical disk / cell disk is allocated to a cell.
A grid disk is created on top of a cell disk.
An ASM disk is allocated using a grid disk.


Each cell comes with 12 SAS hard disks (600 GB each with High Performance, or 2 TB each with High Capacity).
Each cell also has 4 flash cards built in that are divided into 4 flash disks each, adding up to 16 flash disks per cell that deliver 384 GB of flash cache by default.


1) Physical Disks
Physical disks can be of type hard disk or of type flash disk. You cannot create or drop them. The only administrative task at this layer is to turn on the LED at the front of the cell before you replace a damaged hard disk, to be sure you pull out the right one, with a command like
CellCLI> alter physicaldisk  serviceled on



2) Luns
LUNs are the second layer of abstraction. They were introduced because the first two hard disks in every cell differ from the other 10: they contain the operating system (Oracle Enterprise Linux). About 30 GB has been carved out of each of the first 2 hard disks for that purpose. There are 2 of them for redundancy – the cell can still operate if one of the first 2 hard disks fails. If we investigate the first 2 LUNs, we see the mirrored OS partitions.
As an administrator, you do not need to do anything at the LUN layer except look at it with commands like
CellCLI> list lun



3) Celldisks
Celldisks are the third layer of abstraction. It was introduced to enable interleaving in the first place.
CellCLI> create celldisk all harddisk interleaving='normal_redundancy'
CellCLI> create celldisk all harddisk


4) Griddisks
Griddisks are the fourth layer of abstraction, and they will be the Candidate Disks to build your ASM diskgroups from.
With interleaving specified at the Celldisk layer, this is different: The Griddisks are then being created from outer and inner parts of the Harddisk, leading to equal performance of the Griddisks and also then of the later created Diskgroups.


5) ASM diskgroups

ASM diskgroups are created using the grid disks from step 4.



++++++++++++++++++++++++++++++++++++++++++++++++++++
The strategy for disk replacement depends on the scenario below
++++++++++++++++++++++++++++++++++++++++++++++++++++
1) The disk has failed completely (total failure)
2) The disk shows poor performance or a predictive failure
3) The poorly performing disk to be replaced is a flash disk



 
 
++++++++++++++++++++++++++++++++++++++++++++++++++++
Flash Disk Replacement due to poor performance in Exadata 
++++++++++++++++++++++++++++++++++++++++++++++++++++
 
To identify a poor performance flash disk, use the following command:

CellCLI> LIST PHYSICALDISK WHERE DISKTYPE=flashdisk AND STATUS='warning - poor performance' DETAIL
 
Recommended Action  
The flash disk has entered poor performance status. A white cell locator LED has been lit to help locate the affected cell. 
Please replace the flash disk.
If the flash disk is used for flash cache, then flash cache will be disabled on this disk thus reducing the effective flash cache size. 
If the flash disk is used for flash log, then flash log will be disabled on this disk thus reducing the effective flash log size.
If the flash disk is used for grid disks, then Oracle ASM rebalance will automatically restore the data redundancy.
The Sun Oracle Exadata Storage Server is equipped with four PCIe cards. Each card has four flash disks (FDOMs), for a total of 16 flash disks. 
The 4 PCIe cards are present in PCI slots 1, 2, 4, and 5. 
The PCIe cards are not hot-pluggable, so the Exadata cell must be powered down before replacing the flash disks or cards.
Hence the data center team replaced the flash disk in coordination with us (the DBAs) because the flash disk was in poor performance status.


[1] Verify the existing disk_repair_time attribute for all diskgroups

SQL> select dg.name,a.value from 
v$asm_diskgroup dg, v$asm_attribute a 
where dg.group_number=a.group_number and
a.name='disk_repair_time';


[2] The default disk_repair_time is only 3.6 hours, so it is better to increase it for the maintenance window.

 SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'DISK_REPAIR_TIME'='8.5H';





1. Shut down the cell.
The following procedure describes how to power down an Exadata cell. Run the following command to check whether there are offline disks 
on other cells that are mirrored with disks on this cell:

CellCLI > LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

If any grid disks are returned, then it is not safe to take the storage server offline because proper Oracle ASM disk group redundancy 
will not be intact. Taking the storage server offline when one or more grid disks are in this state will cause Oracle ASM to dismount 
the affected disk group, causing the databases to shut down abruptly.
Once it is safe to take the Oracle Exadata Storage Server offline, inactivate all the grid disks using the following command:

CellCLI> ALTER GRIDDISK ALL INACTIVE

The preceding command completes once all disks are inactive and offline. Depending on storage server activity, it may take several minutes to complete.
Verify that all grid disks are INACTIVE, to allow a safe storage server shutdown, by running the following command:

CellCLI> LIST GRIDDISK

If all grid disks are INACTIVE, then the storage server can be shut down without affecting database availability.


Stop the cell services using the following command:

CellCLI> ALTER CELL SHUTDOWN SERVICES ALL
Shut down the cell.

2. Replace the failed flash disk based on the PCI number and FDOM number.

3. Power up the cell. The cell services will be started automatically.

4. Bring all grid disks online using the following command:
CellCLI> ALTER GRIDDISK ALL ACTIVE



5. Verify that all grid disks have been successfully brought online using the following command, and wait until asmmodestatus has changed from SYNCING to ONLINE for all grid disks:

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus


      
6. Verify FlashCache 

CellCLI> list flashcache
orclcel05_FLASHCACHE warning - degraded

If the flash cache state is not normal after the flash disk change, drop the flash cache, flash log and flash cell disks, and recreate them in the same order.

Exadata X6 systems do not require this step.

CellCLI> drop flashcache
Flash cache orclcel05_FLASHCACHE successfully dropped
 
CellCLI> drop flashlog
Flash log orclcel05_FLASHLOG successfully dropped
 
CellCLI> drop celldisk all flashdisk force


CellCLI> create celldisk all flashdisk
 
CellCLI> create flashlog all
Flash log orclcel05_FLASHLOG successfully created
 
CellCLI> create flashcache all
Flash cache orclcel05_FLASHCACHE successfully created





Oracle ASM synchronization is only complete when all grid disks show attribute asmmodestatus=ONLINE. Before taking another storage server offline, Oracle ASM synchronization must complete on the restarted Oracle Exadata Storage Server. If synchronization is not complete, then the check performed on another storage server will fail.

The new flash disk will be automatically used by the system. If the flash disk is used for flash cache, then the effective cache size will increase. If the flash disk is used for grid disks, then the grid disks will be recreated on the new flash disk. If those grid disks were part of an Oracle ASM disk group, then they will be added back to the disk group and the data will be rebalanced on them based on the disk group redundancy and the asm_power_limit parameter.

Oracle ASM rebalance occurs when dropping or adding a disk. To check the status of the rebalance:

The rebalance operation may have already completed successfully. Check the Oracle ASM alert logs to confirm.
The rebalance operation may currently be running. Check the GV$ASM_OPERATION view to determine whether the rebalance operation is still running.

The rebalance operation may have failed. Check GV$ASM_OPERATION.ERROR to determine whether the rebalance operation failed.

Rebalance operations from multiple disk groups can be done on different Oracle ASM instances in the same cluster if the physical disk being replaced contains ASM disks from multiple disk groups. One Oracle ASM instance can run one rebalance operation at a time. If all Oracle ASM instances are busy, then rebalance operations will be queued.






++++++++++++++++++++++++++++++++++++++++++++++++++++
Replacing Failed disk in Exadata 
++++++++++++++++++++++++++++++++++++++++++++++++++++

Hard disks on the cell node are automatically dropped at the time of failure, and the corresponding disks are also dropped from the ASM disk groups. 
After the disk drop operation and after the disk has been replaced, the rebalance process starts automatically in the ASM disk groups.

 
1) 
CellCLI> List alerthistory 
CellCLI> LIST PHYSICALDISK WHERE diskType=HardDisk AND status like failed DETAIL



2) This query should return no rows, indicating the disk is no longer in the ASM diskgroup configuration. If it returns any other value, then contact the SR owner for further guidance.

SQL> select group_number,path,header_status,mount_status,mode_status,name from V$ASM_DISK where path like '%_05_abcdecel02';



3) Drop the disk for replacement. (The ALTER PHYSICALDISK command also sets the Service Action Required LED on or off.)

CellCLI> ALTER PHYSICALDISK n:n DROP FOR REPLACEMENT
CellCLI> alter physicaldisk 20:11 drop for replacement




4) If the drop in step 3 hangs, proceed as below:

CellCLI> alter griddisk <griddisk_name_from_alertlog> flush nowait;
NOTE: Wait for some time and monitor the cell's alert.log file.

Also check the following to see whether the amount of dirty data being flushed from flash cache to the specific grid disk(s) is decreasing:
CellCLI> list metriccurrent attributes name,metricvalue where name like 'FC_BY_DIRTY';
NOTE: Wait and monitor the cell's alert.log file for flush completion.
 
CellCLI> alter griddisk <griddisk_name_from_alertlog> cancel flush;




5) Replace the damaged disk

CellCLI> list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome

NOTE: If it shows "normal - dropped for replacement", then proceed to replace the disk.

alter physicaldisk 28:5 serviceled off




 
6) Adding the disk back to ASM

If the new grid disks were not automatically added back into the ASM diskgroup configuration, then locate the disks with group_number=0 and add them back manually using an "alter diskgroup <name> add disk '<path>' rebalance power 10;" command.

This is required if the replacement time crosses the ASM fast mirror resync period:

SQL> select path,header_status from v$asm_disk where group_number=0;
PATH                                               HEADER_STATU
-------------------------------------------------- ------------
o/192.168.9.10/DBFS_DG_CD_05_abcdcel02        FORMER
o/192.168.9.10/DATA_Q1_CD_05_abcdcel02        FORMER
o/192.168.9.10/RECO_Q1_CD_05_abcdcel02        FORMER

SQL> alter diskgroup dbfs_dg add disk 'o/192.168.9.10/DBFS_DG_CD_05_abcdcel02' rebalance power 10;
SQL> alter diskgroup data_q1 add disk 'o/192.168.9.10/DATA_Q1_CD_05_abcdcel02' rebalance power 10;
SQL> alter diskgroup reco_q1 add disk 'o/192.168.9.10/RECO_Q1_CD_05_abcdcel02' rebalance power 10;







++++++++++++++++++++++++++++++++++++++++++++++++++++
The 2 SQL statements below were used for checking disk statistics:
++++++++++++++++++++++++++++++++++++++++++++++++++++

COL cv_cellname       HEAD CELLNAME         FOR A20
COL cv_cellversion    HEAD CELLSRV_VERSION  FOR A20
COL cv_flashcachemode HEAD FLASH_CACHE_MODE FOR A20
PROMPT Show Exadata cell versions from V$CELL_CONFIG....
SELECT 
    disktype
  , cv_cellname
  , status
  , ROUND(SUM(physicalsize/1024/1024/1024)) total_gb
  , ROUND(AVG(physicalsize/1024/1024/1024)) avg_gb
  , COUNT(*) num_disks
  , SUM(CASE WHEN predfailStatus  = 'TRUE' THEN 1 END) predfail
  , SUM(CASE WHEN poorPerfStatus  = 'TRUE' THEN 1 END) poorperf
  , SUM(CASE WHEN wtCachingStatus = 'TRUE' THEN 1 END) wtcacheprob
  , SUM(CASE WHEN peerFailStatus  = 'TRUE' THEN 1 END) peerfail
  , SUM(CASE WHEN criticalStatus  = 'TRUE' THEN 1 END) critical
FROM (
    SELECT /*+ NO_MERGE */
        c.cellname cv_cellname
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/name/text()')                          AS VARCHAR2(20)) diskname
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/diskType/text()')                      AS VARCHAR2(20)) diskType          
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/luns/text()')                          AS VARCHAR2(20)) luns              
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/makeModel/text()')                     AS VARCHAR2(50)) makeModel         
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalFirmware/text()')              AS VARCHAR2(20)) physicalFirmware  
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalInsertTime/text()')            AS VARCHAR2(30)) physicalInsertTime
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalSerial/text()')                AS VARCHAR2(20)) physicalSerial    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalSize/text()')                  AS VARCHAR2(20)) physicalSize      
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/slotNumber/text()')                    AS VARCHAR2(30)) slotNumber        
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/status/text()')                        AS VARCHAR2(20)) status            
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/id/text()')                            AS VARCHAR2(20)) id                
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/key_500/text()')                       AS VARCHAR2(20)) key_500           
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/predfailStatus/text()')                AS VARCHAR2(20)) predfailStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/poorPerfStatus/text()')                AS VARCHAR2(20)) poorPerfStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/wtCachingStatus/text()')               AS VARCHAR2(20)) wtCachingStatus   
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/peerFailStatus/text()')                AS VARCHAR2(20)) peerFailStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/criticalStatus/text()')                AS VARCHAR2(20)) criticalStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errCmdTimeoutCount/text()')            AS VARCHAR2(20)) errCmdTimeoutCount
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errHardReadCount/text()')              AS VARCHAR2(20)) errHardReadCount  
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errHardWriteCount/text()')             AS VARCHAR2(20)) errHardWriteCount 
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errMediaCount/text()')                 AS VARCHAR2(20)) errMediaCount     
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errOtherCount/text()')                 AS VARCHAR2(20)) errOtherCount     
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errSeekCount/text()')                  AS VARCHAR2(20)) errSeekCount      
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/sectorRemapCount/text()')              AS VARCHAR2(20)) sectorRemapCount  
    FROM
        v$cell_config c
      , TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(c.confval), '/cli-output/physicaldisk'))) v   
    WHERE 
        c.conftype = 'PHYSICALDISKS'
)
GROUP BY
    cv_cellname
  , disktype
  , status
ORDER BY
    disktype
  , cv_cellname
/




COL cellname            HEAD CELLNAME       FOR A20
COL celldisk_name       HEAD CELLDISK       FOR A30
COL physdisk_name       HEAD PHYSDISK       FOR A30
COL griddisk_name       HEAD GRIDDISK       FOR A30
COL asmdisk_name        HEAD ASMDISK        FOR A30
BREAK ON asm_diskgroup SKIP 1 ON asm_disk
PROMPT Showing Exadata disk topology from V$ASM_DISK and V$CELL_CONFIG....
WITH
  pd AS (
    SELECT /*+ MATERIALIZE */
        c.cellname
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/name/text()')                          AS VARCHAR2(100)) name
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/diskType/text()')                      AS VARCHAR2(100)) diskType          
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/luns/text()')                          AS VARCHAR2(100)) luns              
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/makeModel/text()')                     AS VARCHAR2(100)) makeModel         
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalFirmware/text()')              AS VARCHAR2(100)) physicalFirmware  
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalInsertTime/text()')            AS VARCHAR2(100)) physicalInsertTime
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalSerial/text()')                AS VARCHAR2(100)) physicalSerial    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/physicalSize/text()')                  AS VARCHAR2(100)) physicalSize      
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/slotNumber/text()')                    AS VARCHAR2(100)) slotNumber        
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/status/text()')                        AS VARCHAR2(100)) status            
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/id/text()')                            AS VARCHAR2(100)) id                
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/key_500/text()')                       AS VARCHAR2(100)) key_500           
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/predfailStatus/text()')                AS VARCHAR2(100)) predfailStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/poorPerfStatus/text()')                AS VARCHAR2(100)) poorPerfStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/wtCachingStatus/text()')               AS VARCHAR2(100)) wtCachingStatus   
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/peerFailStatus/text()')                AS VARCHAR2(100)) peerFailStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/criticalStatus/text()')                AS VARCHAR2(100)) criticalStatus    
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errCmdTimeoutCount/text()')            AS VARCHAR2(100)) errCmdTimeoutCount
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errHardReadCount/text()')              AS VARCHAR2(100)) errHardReadCount  
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errHardWriteCount/text()')             AS VARCHAR2(100)) errHardWriteCount 
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errMediaCount/text()')                 AS VARCHAR2(100)) errMediaCount     
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errOtherCount/text()')                 AS VARCHAR2(100)) errOtherCount     
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/errSeekCount/text()')                  AS VARCHAR2(100)) errSeekCount      
      , CAST(EXTRACTVALUE(VALUE(v), '/physicaldisk/sectorRemapCount/text()')              AS VARCHAR2(100)) sectorRemapCount  
    FROM
        v$cell_config c
      , TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(c.confval), '/cli-output/physicaldisk'))) v  -- gv$ isn't needed, all cells should be visible in all instances
    WHERE 
        c.conftype = 'PHYSICALDISKS'
),
 cd AS (
    SELECT /*+ MATERIALIZE */
        c.cellname 
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/name/text()')                              AS VARCHAR2(100)) name
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/comment        /text()')                   AS VARCHAR2(100)) disk_comment
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/creationTime   /text()')                   AS VARCHAR2(100)) creationTime
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/deviceName     /text()')                   AS VARCHAR2(100)) deviceName
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/devicePartition/text()')                   AS VARCHAR2(100)) devicePartition
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/diskType       /text()')                   AS VARCHAR2(100)) diskType
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/errorCount     /text()')                   AS VARCHAR2(100)) errorCount
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/freeSpace      /text()')                   AS VARCHAR2(100)) freeSpace
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/id             /text()')                   AS VARCHAR2(100)) id
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/interleaving   /text()')                   AS VARCHAR2(100)) interleaving
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/lun            /text()')                   AS VARCHAR2(100)) lun
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/physicalDisk   /text()')                   AS VARCHAR2(100)) physicalDisk
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/size           /text()')                   AS VARCHAR2(100)) disk_size
      , CAST(EXTRACTVALUE(VALUE(v), '/celldisk/status         /text()')                   AS VARCHAR2(100)) status
    FROM
        v$cell_config c
      , TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(c.confval), '/cli-output/celldisk'))) v  -- gv$ isn't needed, all cells should be visible in all instances
    WHERE 
        c.conftype = 'CELLDISKS'
),
 gd AS (
    SELECT /*+ MATERIALIZE */
        c.cellname 
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/name/text()')                               AS VARCHAR2(100)) name
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/asmDiskgroupName/text()')                   AS VARCHAR2(100)) asmDiskgroupName 
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/asmDiskName     /text()')                   AS VARCHAR2(100)) asmDiskName
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/asmFailGroupName/text()')                   AS VARCHAR2(100)) asmFailGroupName
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/availableTo     /text()')                   AS VARCHAR2(100)) availableTo
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/cachingPolicy   /text()')                   AS VARCHAR2(100)) cachingPolicy
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/cellDisk        /text()')                   AS VARCHAR2(100)) cellDisk
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/comment         /text()')                   AS VARCHAR2(100)) disk_comment
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/creationTime    /text()')                   AS VARCHAR2(100)) creationTime
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/diskType        /text()')                   AS VARCHAR2(100)) diskType
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/errorCount      /text()')                   AS VARCHAR2(100)) errorCount
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/id              /text()')                   AS VARCHAR2(100)) id
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/offset          /text()')                   AS VARCHAR2(100)) offset
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/size            /text()')                   AS VARCHAR2(100)) disk_size
      , CAST(EXTRACTVALUE(VALUE(v), '/griddisk/status          /text()')                   AS VARCHAR2(100)) status
    FROM
        v$cell_config c
      , TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(c.confval), '/cli-output/griddisk'))) v  -- gv$ isn't needed, all cells should be visible in all instances
    WHERE 
        c.conftype = 'GRIDDISKS'
),
 lun AS (
    SELECT /*+ MATERIALIZE */
        c.cellname 
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/cellDisk         /text()')              AS VARCHAR2(100)) cellDisk      
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/deviceName       /text()')              AS VARCHAR2(100)) deviceName    
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/diskType         /text()')              AS VARCHAR2(100)) diskType      
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/id               /text()')              AS VARCHAR2(100)) id            
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/isSystemLun      /text()')              AS VARCHAR2(100)) isSystemLun   
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/lunAutoCreate    /text()')              AS VARCHAR2(100)) lunAutoCreate 
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/lunSize          /text()')              AS VARCHAR2(100)) lunSize       
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/physicalDrives   /text()')              AS VARCHAR2(100)) physicalDrives
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/raidLevel        /text()')              AS VARCHAR2(100)) raidLevel
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/lunWriteCacheMode/text()')              AS VARCHAR2(100)) lunWriteCacheMode
      , CAST(EXTRACTVALUE(VALUE(v), '/lun/status           /text()')              AS VARCHAR2(100)) status        
    FROM
        v$cell_config c
      , TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(c.confval), '/cli-output/lun'))) v  -- gv$ isn't needed, all cells should be visible in all instances
    WHERE 
        c.conftype = 'LUNS'
)
 , ad  AS (SELECT /*+ MATERIALIZE */ * FROM v$asm_disk)
 , adg AS (SELECT /*+ MATERIALIZE */ * FROM v$asm_diskgroup)
SELECT 
    adg.name                        asm_diskgroup
  , ad.name                         asm_disk
  , gd.name                         griddisk_name
  , cd.name                         celldisk_name
  , pd.cellname
  , SUBSTR(cd.devicepartition,1,20) cd_devicepart
  , pd.name                         physdisk_name
  , SUBSTR(pd.status,1,20)          physdisk_status
  , lun.lunWriteCacheMode
--  , SUBSTR(cd.devicename,1,20)      cd_devicename
--  , SUBSTR(lun.devicename,1,20)     lun_devicename
--    disktype
FROM
    gd
  , cd
  , pd
  , lun
  , ad
  , adg
WHERE
    ad.group_number = adg.group_number (+)
AND gd.asmdiskname = ad.name (+)
AND cd.name = gd.cellDisk (+)
AND pd.id = cd.physicalDisk (+)
AND cd.name = lun.celldisk (+)
--GROUP BY
--    cellname
--  , disktype
--  , status
ORDER BY
--    disktype
    asm_diskgroup
  , asm_disk
  , griddisk_name
  , celldisk_name
  , physdisk_name
  , cellname
/






++++++++++++++++++++++++++++++++++++++++++++++++++++
References : 
++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Exadata Diagnostic Information required for Disk Failures and some other Hardware issues (Doc ID 761868.1)

Things to Check in ASM When Replacing an ONLINE disk from Exadata Storage Cell (Doc ID 1326611.1)

Exadata ALTER PHYSICALDISK N:N DROP FOR REPLACEMENT is hung (Doc ID 2574663.1)

How to Replace a Hard Drive in an Exadata Storage Server (Predictive Failure) (Doc ID 1390836.1)

How to Replace a Hard Drive in an Exadata Storage Server (Hard Failure) (Doc ID 1386147.1)


Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1)

Flash disk poor performance status (Doc ID 1206015.1)

Steps to shut down or reboot an Exadata storage cell without affecting ASM (Doc ID 1188080.1)


Thursday, September 2, 2021

RMAN-20011: target database incarnation is not current in recovery catalog

 
After some research, we came across Note 412113.1 on http://metalink.oracle.com, which pointed towards the possibility of a non-production instance having been registered in the RMAN catalog with the same DBID as production's.

In addition, Notes 1076158.6 and 1061206.6 looked potentially relevant, but less directly so.

SQL> select db_key,DBID,name,current_incarnation from rc_database_incarnation order by 1;


So, in reality, what had happened was that a new incarnation of the DR PROD instance had been registered in the RMAN catalog.

To undo this, we reset the database to the correct incarnation in the catalog using RESET DATABASE TO INCARNATION <dbinc_key>, and then did a manual resync of the catalog using the RMAN RESYNC CATALOG command.


If backups are failing for this reason, reset PROD to its correct incarnation as follows:
1. Check the latest incarnation key for the production database in rc_database_incarnation:

   select dbid, name, dbinc_key, resetlogs_change#, resetlogs_time
   from rc_database_incarnation
   where dbid=2284119847;

   ==> Note the DBINC_KEY corresponding to the current incarnation of the PROD database.

2. Connect to the production database with RMAN:

   rman catalog <un/pw@catalog_db> target /

   RMAN> reset database to incarnation <dbinc_key>;   <<< from step 1
   or
   RMAN> reset database;
   RMAN> unregister database;
   RMAN> register database;

   RMAN> resync catalog;
   RMAN> list incarnation;   => the production DB incarnation should now be the current one





Reference : 
Target Database Incarnation Is Not Current In Recovery Catalog RMAN-20011 (Doc ID 412113.1)
RMAN restore database fails with RMAN-20011, Target Incarnation Is Not Current (Doc ID 394493.1)

Tuesday, August 31, 2021

Oracle rename Asm diskgroup with DB files


The renamedg command works in two phases: phase one generates a configuration file, and phase two uses that file to perform the actual renaming of the diskgroup.


renamedg
[phase={ one|two |both } ] dgname=diskgroup
newdgname=newdiskgroup [config=configfile]
[ asm_diskstring=discoverystring, discoverystring ... ]
[ clean={true|false} ] [ check={true|false} ]
[ confirm={true|false}] [ verbose={ true|false} ]
[ keep_voting_files={true|false}]


1) Check Current configuration for database attached to diskgroup 

srvctl config database -d <DB_NAME>
select file_name from  v$datafile ; 


2) Stop/dismount diskgroup 

srvctl stop database -d <DB_NAME>
srvctl status diskgroup -g recoc1
srvctl stop diskgroup -g recoc1


3) Rename diskgroup 

renamedg phase=both dgname=RECOC1 newdgname=RECO verbose=true keep_voting_files=true 

If the above command fails while searching for disks, include the ASM disk discovery string explicitly:

renamedg phase=both dgname=<OLD_DG_NAME> newdgname=<NEW_DG_NAME> verbose=true asm_diskstring='<discoverystring1>','<discoverystring2>'


SQL> alter diskgroup RECO mount restricted;
SQL> alter diskgroup RECO rename disks all ;

srvctl start diskgroup -g reco   
srvctl status diskgroup -g reco


4) Modify spfile and password file location for database to point to new asm diskgroup 

srvctl modify database -d <db_name> -p <spfile_path_with_new_diskgroup> -a "<diskgroup_list>"


5)  Rename datafile location  in mount stage 

select 'alter database rename file '''||name||''' to ''+NEWDG'||substr(name,instr(name,'/',1,1))||''';' from v$datafile;

select 'alter database rename file '''||member||''' to ''+NEWDG'||substr(member,instr(member,'/',1,1))||''';' from v$logfile;


The V$RECOVER_FILE view can be used to check for any datafiles that are not available.
If everything is fine, you can open the database. Note that nothing needs to be done for temp files; checking the alert log shows that they are renamed automatically.


6) Drop and recreate the block change tracking file.

7) Change the archive log location, FRA location, and snapshot controlfile location that point to this diskgroup.
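
Steps 6 and 7 above can be sketched as follows. This is only an illustrative outline; +NEWDG and the snapshot controlfile path are placeholders for your renamed diskgroup:

```sql
-- Step 6: drop and recreate the block change tracking file in the new diskgroup
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+NEWDG';

-- Step 7: repoint archive log destination and FRA to the new diskgroup
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+NEWDG' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest='+NEWDG' SCOPE=BOTH SID='*';

-- Snapshot controlfile location is changed from RMAN:
-- RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+NEWDG/snapcf_<db_name>.f';
```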


References : 

How To Rename A Diskgroup On ASM 11gR2? (Doc ID 948040.1)
How to Change Database to Use a New Diskgroup After Diskgroup Rename (Doc ID 1359625.1)

Tuesday, August 17, 2021

Oracle -- Database Resident Connection Pooling (DRCP)

DRCP (introduced in 11g) is especially designed to help architectures such as PHP with the Apache server that cannot take advantage of middle-tier connection pooling because they use multiprocess, single-threaded application servers. DRCP enables such applications to easily scale up to tens of thousands of server connections.


DRCP is controlled by the following configuration parameters:

INACTIVITY_TIMEOUT: maximum idle time for a pooled server before it is terminated.

MAX_LIFETIME_SESSION: time-to-live (TTL) duration for a pooled session.

MAX_USE_SESSION: maximum number of times a connection can be taken and released to the pool.

MAXSIZE and MINSIZE: the maximum and minimum number of pooled servers in the connection pool.

INCRSIZE: the pool increments by this number of pooled servers when pooled servers are unavailable at application request time.

MAX_THINK_TIME: maximum time of inactivity by the client after getting a server from the pool. If the client does not issue a database call after grabbing a server from the pool, the client is forced to relinquish control of the pooled server and gets an error. The freed-up server may or may not be returned to the pool.

SESSION_CACHED_CURSORS: turns on SESSION_CACHED_CURSORS for all connections in the pool. This is an existing initialization parameter.
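
As a sketch, several of these parameters can be set in a single call with DBMS_CONNECTION_POOL.CONFIGURE_POOL; the values shown here are illustrative assumptions, not recommendations:

```sql
BEGIN
  DBMS_CONNECTION_POOL.CONFIGURE_POOL(
    pool_name              => 'SYS_DEFAULT_CONNECTION_POOL',
    minsize                => 4,       -- minimum pooled servers
    maxsize                => 40,      -- maximum pooled servers
    incrsize               => 2,       -- growth increment
    session_cached_cursors => 20,
    inactivity_timeout     => 300,     -- seconds
    max_think_time         => 120,     -- seconds
    max_use_session        => 500000,
    max_lifetime_session   => 86400);  -- seconds
END;
/
```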



/* Enabling and Disabling DRCP */

conn sys as sysdba
-- the pool remains open after a DB restart
exec dbms_connection_pool.start_pool();
select connection_pool, status, maxsize from dba_cpool_info;
exec dbms_connection_pool.stop_pool();

-- specify using DRCP
-- in EZCONNECT method (.Net 11g)
myhost.company.com:1521/mydb.company.com:POOLED

-- tnsnames
mydb = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myhost.company.com)(PORT=1521))
        (CONNECT_DATA=(SERVICE_NAME=mydb.company.com)(SERVER=POOLED)))
/* Configuring DRCP */
begin
 DBMS_CONNECTION_POOL.ALTER_PARAM( PARAM_NAME =>'INACTIVITY_TIMEOUT',
PARAM_VALUE=>'3600');
end;
/
-- restore parameter values to their defaults
exec dbms_connection_pool.restore_defaults()
/* Monitor DRCP */
SELECT
STATUS,MINSIZE,MAXSIZE,INCRSIZE,SESSION_CACHED_CURSORS,INACTIVITY_TIMEOUT
FROM DBA_CPOOL_INFO;
SELECT NUM_OPEN_SERVERS, NUM_BUSY_SERVERS, NUM_REQUESTS, NUM_HITS,
       NUM_MISSES, NUM_WAITS, NUM_PURGED, HISTORIC_MAX
FROM V$CPOOL_STATS;

-- class-level stats
Select * From V$CPOOL_CC_STATS;

Sunday, August 8, 2021

Oracle -- Open Cursor & Session Cached Cursors

 

This article mainly covers the importance of SESSION_CACHED_CURSORS for performance and open cursors. Apart from SESSION_CACHED_CURSORS, other factors that control open cursors in the database are the CURSOR_SHARING parameter and how developers close cursors in PL/SQL blocks.


OPEN_CURSORS 
specifies the maximum number of open cursors (handles to private SQL areas) a
session can have at once. You can use this parameter to prevent a session from opening an excessive
number of cursors. This parameter also constrains the size of the PL/SQL cursor cache, which PL/SQL
uses to avoid having to reparse as statements are re-executed by a user. 

SESSION_CACHED_CURSORS 
lets you specify the number of session cursors to cache. Repeated
parse calls of the same SQL statement cause the session cursor for that statement to be moved into the
session cursor cache. Subsequent parse calls will find the cursor in the cache and do not need to reopen
the cursor. 

If the session cursor cache count is maxed out, session cursor cache hits is low compared to all parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS may help with latch contention and give a slight boost to performance. Note that if your application is not resubmitting the same queries for parsing repeatedly, then session cursor cache hits will be low and the session cursor cache count may be maxed out, but caching cursors by session will not help at all. For example, if your application is using a lot of unsharable SQL, raising this parameter will not help.


When a SQL statement is issued, the server process, after checking its syntax and semantics, searches the library cache for an existing cursor for the SQL statement. If a cursor does not already exist, a new cursor is created (hard parse), else existing cursor is used (soft parse). Whereas hard parsing is a resource intensive operation, soft parse,  although less expensive, also incurs some cost, as the server process has to search the library cache for previously parsed SQL, which requires the use of the library cache and shared pool latches. Latches can often become points of contention for busy OLTP systems, thereby affecting response time and scalability.

To minimize the impact on performance, session cursors of repeatedly issued statements can be stored in the session cursor cache to reduce the cost of or even eliminate soft parse. This is called Session Cursor Caching. When session cursor caching is enabled, Oracle caches the cursor of a reentrant SQL statement in the session memory (PGA / UGA). As a result, the session cursor cache now contains a pointer into the library cache where the cursor existed when it was closed. Since presence of a cursor in session cursor cache guarantees the correctness of the corresponding SQL’s syntax and semantics, these checks are bypassed when the same SQL is resubmitted. Subsequently, instead of searching for the cursor in library cache, the server process follows the pointer in the session memory and uses the cursor after having confirmed its presence and validity.

Hence, if a closed cursor is found in the session cursor cache, it is registered as a ‘session cached cursor hit’ and also as a ‘soft parse’, since a visit to the shared SQL area must be made to  confirm its presence and validity


Handling cursors from the PL/SQL side using the precompiler option RELEASE_CURSOR=YES

This is a different topic altogether, so not much information is shared on it here. It is mentioned only to point developers towards the cursor handling that can be done on the coding side.



Find out the session that is causing the error by using the following SQL statement:

select sid ,  count(*)  from v$open_cursor  group by sid order by 2  ;
select sql_id ,  count(*)  from v$open_cursor where sid=193  group by sql_id  ;

col hwm_open_cur format 99,999
col max_open_cur format 99,999
select 
   max(a.value) as hwm_open_cur, 
   p.value      as max_open_cur
from 
   v$sesstat a, 
   v$statname b, 
   v$parameter p
where 
   a.statistic# = b.statistic# 
and 
   b.name = 'opened cursors current'
and 
   p.name= 'open_cursors'
group by p.value;



select a.value, s.username, s.sid, s.serial# from v$sesstat a, v$statname b, v$session s where a.statistic# = b.statistic#  and s.sid=a.sid and b.name = 'opened cursors current' and s.username is not null;

select sid, sql_text, count(*) as "OPEN CURSORS", user_name from v$open_cursor where sid in (&SID) group by sid, sql_text, user_name;

SELECT  max(a.value) as highest_open_cur, p.value as max_open_cur FROM v$sesstat a, v$statname b, v$parameter p WHERE  a.statistic# = b.statistic#  and b.name = 'opened cursors current' and p.name= 'open_cursors' group by p.value;


-- cursor open for  each session and its address 
SELECT C.SID AS "OraSID",
 C.ADDRESS || ':' || C.HASH_VALUE AS "SQL Address",
 COUNT(C.SADDR) AS "Cursor Copies"
 FROM V$OPEN_CURSOR C
GROUP BY C.SID, C.ADDRESS || ':' || C.HASH_VALUE
HAVING COUNT(C.SADDR) > 2
ORDER BY 3 DESC


--  open cursor value and  its usage 
SELECT
 'session_cached_cursors' parameter,
 LPAD(value, 5) value,
 DECODE(value, 0, ' n/a', to_char(100 * used / value, '990') ||
'%') usage
FROM
 (SELECT
 MAX(s.value) used
 FROM
 v$statname n,
 v$sesstat s
 WHERE
 n.name = 'session cursor cache count' and
 s.statistic# = n.statistic#
 ),
 (SELECT
 value
 FROM
 v$parameter
 WHERE
 name = 'session_cached_cursors'
 )
UNION ALL
SELECT
 'open_cursors',
 LPAD(value, 5),
 to_char(100 * used / value, '990') || '%'
FROM
 (SELECT
 MAX(sum(s.value)) used
 FROM
 v$statname n,
 v$sesstat s
 WHERE
 n.name in ('opened cursors current', 'session cursor cache count') and
 s.statistic# = n.statistic#
 GROUP BY
 s.sid
 ),
 (SELECT
 value
 FROM
 v$parameter
 WHERE
 name = 'open_cursors'
 )


-- Cursor details for particular session 

SELECT b.SID, UPPER(a.NAME), b.VALUE
 FROM v$statname a, v$sesstat b, v$session c
 WHERE a.statistic# = b.statistic#
 AND c.SID = b.SID
 AND LOWER(a.NAME) LIKE '%' || LOWER('CURSOR')||'%' 
 AND b.SID=&sid
UNION
SELECT SID, 'v$open_cursor opened cursor', COUNT(*)
FROM v$open_cursor
WHERE SID=&sid
GROUP BY SID
ORDER BY SID
/

 SELECT a.value curr_cached, p.value max_cached,
s.username, s.sid, s.serial#
 FROM v$sesstat a, v$statname b, v$session s, v$parameter2 p
 WHERE a.statistic# = b.statistic# and s.sid=a.sid and a.sid=&sid
 AND p.name='session_cached_cursors'
 AND b.name = 'session cursor cache count';


-- CHECKING CACHE CURSOR FOR SESSIONS 

select s.username, s.sid, s.serial#, 'CACHED_CURSORS'
Statistic,
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ('session cursor cache count')
union
select s.username, s.sid, s.serial#, 'CURSORS_HITS',
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ( 'session cursor cache hits')
union
select s.username, s.sid, s.serial#, 'PARSES',
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ( 'parse count (total)')
union
select s.username, s.sid, s.serial#, 'HARD PARSES',
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ( 'parse count (hard)') ;

-- CHECKING DETAIL OF PARTICULAR SQL ID 

SELECT
  CHILD_NUMBER CN,
  NAME,
  VALUE,
  ISDEFAULT DEF
FROM
  V$SQL_OPTIMIZER_ENV
WHERE
  SQL_ID='5ngzsfstg8tmy'
  AND CHILD_NUMBER in (0,2,12)
ORDER BY
  NAME,
  CHILD_NUMBER;


SELECT
  CHILD_NUMBER CN,
  PARSING_SCHEMA_NAME,
  OPTIMIZER_ENV_HASH_VALUE OPTIMIZER_ENV,
  INVALIDATIONS,
  PARSE_CALLS,
  IS_OBSOLETE,
  FIRST_LOAD_TIME,
  TO_CHAR(LAST_ACTIVE_TIME,'YYYY-MM-DD/HH24:MI:SS') LAST_ACTIVE_TIME
FROM
  V$SQL
WHERE
  SQL_ID='5ngzsfstg8tmy';



The following query provides the highest count of cursors currently opened by a single session, together with the configured limit.
========================================================== 


col open_cur_configured form 9999999999
col max_open_cur form a15
select 
max(a.value) as highest_open_cur,
p.value as max_open_cur
from v$sesstat a, 
v$statname b, 
v$parameter p
where a.statistic# = b.statistic# 
and b.name = 'opened cursors current'
and p.name= 'open_cursors'
group by p.value;


Cursor usage per user:
======================

select 
sum(a.value) total_cur, 
max(a.value) max_cur, 
s.username
from v$sesstat a, 
v$statname b,
v$session s 
where a.statistic# = b.statistic# 
and s.sid=a.sid 
and b.name = 'opened cursors current' 
group by s.username
order by 2 desc;


Cursor usage per session:
=========================

select 
s.sid,
s.serial#,
s.username,
a.value cur_count
from 
v$sesstat a,
v$statname b,
v$session s
where a.statistic# = b.statistic# 
and s.sid = a.sid
and b.name = 'opened cursors current' 
order by 4 desc ;

historical open cursor usage:
============================= 

select to_char(b.begin_interval_time,'DD-MON-YY HH24:MI') begin_time,a.instance_number,a.value Open_cursor from dba_hist_sysstat a, dba_hist_snapshot b 
where a.stat_name='opened cursors current' 
and to_char(b.begin_interval_time,'DD-MON-YY HH24:MI') like '%14-OCT-19%'
and a.snap_id=b.snap_id 
and a.instance_number=b.instance_number 
order by 2,1;



To get the % usage of all available open cursors
======================================================

SELECT 'session_cached_cursors' parameter, LPAD(value, 5) value, 
DECODE(value, 0, ' n/a', to_char(100 * used / value, '990') || '%' ) usage 
FROM ( SELECT MAX(s.value) used 
FROM v$statname n, v$sesstat s 
WHERE 
n.name = 'session cursor cache count' and 
s.statistic# = n.statistic# ), 
(SELECT value 
FROM v$parameter 
WHERE 
name = 'session_cached_cursors' ) 
UNION ALL 
SELECT 'open_cursors', LPAD(value, 5), to_char(100 * used / value, '990') || '%' 
FROM 
(SELECT MAX(sum(s.value)) used 
FROM v$statname n, v$sesstat s 
WHERE 
n.name in ('opened cursors current', 'session cursor cache count') and s.statistic#=n.statistic# 
GROUP BY 
s.sid 
), 
(SELECT value 
FROM v$parameter 
WHERE 
name = 'open_cursors' ) 







Cursor leak in Oracle :

It is important to be able to diagnose which cursor is being 'leaked' (not closed), to identify which part of the application is responsible for managing the cursor.

Lists each cursor which has been opened by the session more than once, in descending order:

   SELECT COUNT(*), address
   FROM v$open_cursor
   WHERE sid = 135
   GROUP BY address HAVING COUNT(address) > 1 ORDER BY COUNT(*);



Enable tracing to diagnose ORA-01000:
ALTER SYSTEM SET EVENTS '1000 trace name errorstack level 3';



Bugs Related to Cursors :

1) BUG 30518349 - ORA-01000 SELECT DOCID FROM "DR#IX_TS0481$U" , "DR#IX_TS0481$K"...

2) Bug 23608322 - CURSOR LEAK WITH DBMS_SQL.RETURN_RESULT IN JDBC THIN DRIVER 12.1.0.2 was opened for this issue.



References   : 

https://docs.oracle.com/cd/E40329_01/admin.1112/e27149/cursor.htm#OMADM5352
https://docs.oracle.com/cd/B10501_01/appdev.920/a97269/pc_10opt.htm
https://docs.oracle.com/database/121/TGSQL/tgsql_cursor.htm#TGSQL848


Saturday, July 31, 2021

Oracle Database Migration -- Transportable Tablespace


For database migration, we come across different migration requirements, based on which we need to choose the appropriate migration technology.

We will give an overview of the different approaches available and then do a deep dive into transportable tablespaces.


Migration Scenarios : 
1) 32-bit to 64-bit operating system 
2) Cross-platform --> different endian format
3) Cloud migration 
4) Storage migration 
5) Migration with an upgrade to a higher database version.


Methods for migration : 
1) GoldenGate 
2) Export / Import 
3) Data Guard 
4) Heterogeneous Data Guard 
5) Transportable Tablespace and XTTS 
6) RMAN restore (restore and duplicate)


Considerations for choosing a migration method :  
1) Downtime 
2) Product license cost 
3) Size of database 





Transportable Tablespace 

The Transportable Tablespace method can be used for Oracle database migration and upgrade when the operating systems of the source database and the target database are different, including conversion between 32-bit and 64-bit platforms.

When using Cross Platform Transportable Tablespaces (XTTS) to migrate a database between systems that have different endian formats,
the amount of downtime required is directly proportional to the size of the data set being moved. To reduce the amount of downtime, Oracle recommends Cross Platform Incremental Backup with Oracle 12c.

In our case, we took an image-copy backup (BACKUP AS COPY) one day before the migration date and applied an incremental backup to roll the copy forward on the day of the upgrade, which reduced downtime.
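This incrementally updated image copy approach can be sketched in RMAN as below; the tag name and backup destination are illustrative assumptions, not values from our actual run:

RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'MIG_COPY' DATABASE FORMAT '/backup/%U';
RMAN> RECOVER COPY OF DATABASE WITH TAG 'MIG_COPY';

The first run creates a level 0 image copy; each subsequent run creates a level 1 incremental, and the RECOVER COPY command merges the latest incremental into the copy, so only the final small incremental needs to be applied during the downtime window.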


What is Endian?
Endianness is the byte-ordering convention for multi-byte data types in memory. In other words, it determines the byte order of the data. There are two kinds: little endian and big endian.

Little Endian
The data is stored little end first. That is, the least significant byte is stored at the lowest address.

Big Endian
The data is stored big end first. That is, the most significant byte is stored at the lowest address.

For example:

Assume that an integer is stored as 4 bytes (32 bits). Then a variable with the value 0x01234567 (hexadecimal representation) is stored as the bytes 0x01, 0x23, 0x45, 0x67 on big endian systems, while on little endian systems it is stored in reverse order (0x67, 0x45, 0x23, 0x01).
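The byte orders described above can be demonstrated with a short, language-independent sketch (shown here in Python using the standard struct module):

```python
import struct

value = 0x01234567  # a 32-bit integer

# Pack the same value with an explicit byte order and inspect the bytes.
big = struct.pack(">I", value)     # big endian: most significant byte first
little = struct.pack("<I", value)  # little endian: least significant byte first

print(big.hex())     # 01234567
print(little.hex())  # 67452301
```

This is exactly the difference RMAN CONVERT has to reconcile when datafiles move between platforms of different endian format.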

In Oracle databases, the endian format is determined by the platform on which the database runs. The endian format tells us which environments the database can be moved to: it is not possible to move a database between different endian environments with normal methods. For example, you cannot transfer a database with Data Guard from a Little Endian system to a Big Endian system.



How transportable tablespaces evolved across versions :

We can use the transportable tablespaces feature to copy or move a subset of data (a set of user tablespaces) from one Oracle database and plug it into another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology, which moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on the same OS platform and use the same database block size.

With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) that have the same endian format. Oracle Database 10g Release 1 also introduced cross platform transportable tablespaces (XTTS), an enhancement to TTS that allows data files to be moved between platforms of different endian format. If the endian formats are different, we have to use RMAN to convert the data files (e.g. Windows to Solaris, Tru64 to AIX).

From 10g Release 2 we can transport a whole database; this is called Transportable Database (TDB).

From Oracle 11g, we can transport a single partition of a tablespace between databases.


The steps below use the traditional transportable tablespace method; however, we can also use the Oracle-provided rman_xttconvert package as per Doc ID 2471245.1, which uses incremental backups to reduce downtime. A good write-up on the rman_xttconvert method using incremental backups is available here:

https://dohdatabase.com/2020/12/09/how-to-migrate-a-database-using-full-transportable-export-import-and-incremental-backups/




Limitations/Restrictions

1) The source and target database must use the same character set and national character set.
2) SYSTEM, UNDO, SYSAUX and temporary tablespaces cannot be transported.
3) If Automatic Storage Management (ASM) is used with either the source or destination database, you must use RMAN to transport/convert the tablespace.
4) Transportable tablespaces do not support materialized views/replication or function-based indexes.
5) BINARY_FLOAT and BINARY_DOUBLE datatypes (new in Oracle 10g) are not supported.

Key Note :
1) Source and target character sets must match 
2) Source and target time zone settings must match 
3) The COMPATIBLE parameter on the target must be the same as or higher than on the source 
4) Tablespace endianness can be converted with either RMAN CONVERT or DBMS_FILE_TRANSFER 
5) Tablespaces may be encrypted with Transparent Data Encryption 



Below are the steps for the manual transportable tablespace method. We would suggest using RMAN, especially for large databases where an incremental backup needs to be taken on the final day.

1)   Query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported and to determine each platform's endian format (byte ordering).

SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;


SQL> set lines 200
SQL> set pages 200
SQL> COL "Source" FORM a32
SQL> COL "Compatible Targets" FORM a40
SQL> select d.platform_name "Source", t.platform_name "Compatible Targets", endian_format
from v$transportable_platform t, v$database d where t.endian_format = (select endian_format from v$transportable_platform t, v$database d where d.platform_name = t.platform_name) 
order by "Compatible Targets"; 



2)   Prepare for export of the tablespace.

Check that the tablespaces will be self-contained:

SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;

SQL> select * from transport_set_violations;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table JIM.EMP in tablespace SALES_1 and table
JIM.DEPT in tablespace OTHER
Partitioned table JIM.SALES is partially contained in the transportable set

These violations must be resolved before SALES_1 and SALES_2 are transportable. As noted in the next step, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.


 
3)  The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:

SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;



4)  Export the metadata.

You must have been granted the EXP_FULL_DATABASE role to perform a transportable tablespace export operation. If any of the tablespaces contain XMLTypes, you must use the original exp utility instead of Data Pump.


expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the example above.

If we want a full transportable export (FULL=Y TRANSPORTABLE=ALWAYS), we can use the command below. Please note that if the source database is of a lower version, we need to set the VERSION parameter to match the target version.


$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log 



5)  Convert the datafiles (if the endian format differs) 

If the endian formats are different, then a conversion is necessary for transporting the tablespace set:
RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';


6)  Copy the datafiles and dump file to the target.

Transport both the datafiles and the export dump file of the tablespaces to a location accessible to the target database, using FTP/SCP, the DBMS_FILE_TRANSFER package, or asmcmd cp.
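As a sketch, pushing a datafile with DBMS_FILE_TRANSFER could look like the following; the directory objects SRC_DIR and DST_DIR, the file name, and the database link TARGET_LINK are illustrative assumptions:

SQL> BEGIN
       DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object      => 'SRC_DIR',
         source_file_name             => 'tbs1_01.dbf',
         destination_directory_object => 'DST_DIR',
         destination_file_name        => 'tbs1_01.dbf',
         destination_database         => 'TARGET_LINK');
     END;
     /

Note that DBMS_FILE_TRANSFER requires the directory objects to exist on both databases and the database link to be created beforehand.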



7)  Restore the files on the target.

In releases lower than 11.2.0.4 you need to follow the same steps specified above for ASM files. If the endian formats are different, you must run the RMAN convert AFTER transferring the files; the files cannot be copied directly between two ASM instances on different platforms.


This is an example of converting the endian format on the target, in case it was not done on the source:

RMAN> CONVERT DATAFILE
      '/path/tbs_31.f',
      '/path/tbs_32.f',
      '/path/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT= "/path_source/", "/path_dest/"
      PARALLELISM=5;

The same example, but here showing the destination being an +ASM diskgroup:

RMAN> CONVERT DATAFILE
      '/path/tbs_31.f',
      '/path/tbs_32.f',
      '/path/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT="/path_source/", "+diskgroup"
      PARALLELISM=5;




8)  Import Metadata on target 

impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

or 

$ impdp system@PDB2 FULL=y DIRECTORY=dp_from_source \
TRANSPORT_DATAFILES='/u02/app/oracle/oradata/ORCL/PDB2/example01.dbf', \
'/u02/app/oracle/oradata/ORCL/PDB2/fsdata01.dbf',\
'/u02/app/oracle/oradata/ORCL/PDB2/fsindex01.dbf', \
'/u02/app/oracle/oradata/ORCL/PDB2/users01.dbf'


There is no expdp step necessary when using Data Pump over a database link (NETWORK_LINK). In this case you will need the keywords FULL=Y and TRANSPORTABLE=ALWAYS as import parameters, as the export portion of Data Pump is triggered under the covers on the source side.

impdp mike/passwd@v121 \
  NETWORK_LINK=v112 \
  FULL=Y \
  TRANSPORTABLE=ALWAYS \
  VERSION=12 \
  METRICS=Y \
  EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS \
  LOGTIME=ALL \
  LOGFILE=ftex_dir:v112fullimp.log \
  TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts1.dbf' \
  TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts2.dbf'


9)  Put the tablespaces back into read/write mode on the source and target.
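On each side this is simply (using the tablespace names from the example above):

SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;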



Views : 

SQL> select platform_name, endian_format from v$transportable_platform;




References : 

1) How to Move a Database Using Transportable Tablespaces (Doc ID 1493809.1)
2) How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)
3) Reduce Transportable Tablespace Downtime using XTTS (Doc ID 1389592.1)
4) V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

   11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Note 1389592.1)

   12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Note 2005729.1)

12c - How to Perform Cross-Platform Database Transport to a Different Endian Platform with RMAN Backup Sets (Doc ID 2013271.1)


XTTS v4 - Doc ID 2471245.1
XTTS v3 - Doc ID 1389592.1
XTTS v2 - Doc ID 1389592.1
XTTS v1 - Doc ID 1389592.1
M5 - Doc ID 2999157.1


M5 is the next-generation cross-platform transportable tablespace procedure: new RMAN functionality combined with Full Transportable Export/Import (Doc ID 2999157.1).