Sunday, January 16, 2022

Flashback Oracle Rac Database with Dataguard -- Including Pluggable Container / Pdb

 


##########   Creating  Restore Point  ############

1)  Stop Redo Transport and Redo Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='defer' sid='*' ;   ( On primary )
alter database recover managed standby database cancel ;   ( On standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-OFF';
edit database Boston set state='APPLY-OFF';



2) Create Restore Point (on the standby first)

create  restore point GRP_DG GUARANTEE FLASHBACK DATABASE ;   ( On Standby ) 
create  restore point GRP_PR  GUARANTEE FLASHBACK DATABASE ;   ( On Primary ) 



3) Enable Log Transport and Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='enable' sid='*' ;   ( On primary )
alter database recover managed standby database using current logfile disconnect ;   ( On standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-ON';
edit database Boston set state='APPLY-ON';





##########  Flashback Database   ############


1)  Stop Redo Transport and Redo Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='defer' sid='*' ;   ( On primary )
alter database recover managed standby database cancel ;   ( On standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-OFF';
edit database Boston set state='APPLY-OFF';





2) Flash back the Primary Database to the restore point

srvctl stop database -d DB_NAME 
srvctl start instance -d DB_NAME -i instance_name -o mount 
flashback database to restore point GRP_PR ; 
alter database open resetlogs ; 
srvctl stop instance -d DB_NAME -i instance_name  
srvctl start database -d DB_NAME 


3) Flash back the Standby Database to the restore point

srvctl stop database -db  DB_NAME -stopoption immediate 
srvctl start instance -d DB_NAME -i instance_name -o mount 
flashback database to restore point GRP_DG ; 
srvctl stop instance -db DB_NAME -i instance_name  
srvctl start database -db DB_NAME -startoption  mount 


4) Enable Log Transport and Apply

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='enable' SID='*' ;   ( On primary )
alter database recover managed standby database using current logfile disconnect ;   ( On standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-ON';
edit database Boston set state='APPLY-ON';



5) Restart redo apply on the standby; the standby follows the primary through the RESETLOGS and there should be no lag (see the check below).
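
A minimal verification sketch, run on the standby (these are standard Data Guard views; no Broker is required):

select name, value from v$dataguard_stats where name in ('apply lag','transport lag');
select process, status, sequence# from v$managed_standby where process like 'MRP%';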




########## Drop Restore Point   ############



1)  Stop Redo Transport and Redo Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='defer' SID='*' ;   ( On primary )
alter database recover managed standby database cancel ;   ( On standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-OFF';
edit database Boston set state='APPLY-OFF';



2) Drop Restore  Point  

drop restore point GRP_PR ;   ( On Primary )
drop restore point GRP_DG ;   ( On Standby )




3) Enable Log Transport and Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='enable' SID='*' ;   ( On primary )
alter database recover managed standby database using current logfile disconnect ;   ( On standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-ON';
edit database Boston set state='APPLY-ON';





########## Flashback  Pluggable Database ############


Please ensure the standby database is placed in MOUNT state before we perform the flashback of a pluggable database. Once the flashback of the pluggable database is performed on the primary site, no further action is needed on the standby site.


You may see MRP getting terminated with a message that flashback is not on, and we may have to enable flashback temporarily on the standby (a sketch of the standby preparation is shown below).
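
A minimal standby-preparation sketch, assuming a RAC standby registered with srvctl as STBY_DB (the name is illustrative):

alter database recover managed standby database cancel ;        -- stop redo apply
srvctl stop database -db STBY_DB -stopoption immediate
srvctl start database -db STBY_DB -startoption mount
alter database flashback on ;                                   -- only if MRP complains that flashback is not enabled
alter database recover managed standby database using current logfile disconnect ;   -- resume apply once the PDB flashback on the primary is done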



There are options for creating restore points at the PDB level. If you connect to the PDB you can issue the commands as normal.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER=pdb1;

-- Normal restore point.
CREATE RESTORE POINT pdb1_before_changes;
DROP RESTORE POINT pdb1_before_changes;

-- Guaranteed restore point.
CREATE RESTORE POINT pdb1_before_changes GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT pdb1_before_changes;



Flashback : If PDB uses local undo:

ALTER PLUGGABLE DATABASE pdb1 CLOSE;
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT pdb1_before_changes;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;




What if LOCAL_UNDO is not enabled?


SQL> select * from database_properties where property_name='LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPE DESCRIPTION
----------------------- ----- ---------------------------------------------

LOCAL_UNDO_ENABLED TRUE true if local undo is enabled


SQL> SELECT C.CON_ID, P.NAME , C.TABLESPACE_NAME, C.STATUS FROM CDB_TABLESPACES C,V$PDBS P WHERE  C.TABLESPACE_NAME LIKE 'UNDO%' AND  C.CON_ID=P.CON_ID ORDER BY C.CON_ID;



Switching to Local Undo Mode

shutdown immediate;
startup upgrade;

alter database local undo on;

shutdown immediate;
startup;






In Oracle 12.2, if local undo is not enabled, we can use the AUXILIARY DESTINATION clause, which creates an auxiliary instance with the container database's SYSTEM, SYSAUX and UNDO tablespaces and then restores and recovers the PDB. However, this flashback can take a long time and will consume additional space at the auxiliary destination.

steps FOR SHARED UNDO:

SQL> alter pluggable database PDB1 close;
SQL> flashback pluggable database PDB1 to restore point STAGE1 auxiliary destination '/oradata/aux_inst';
SQL> alter pluggable database PDB1  open resetlogs;




If the PDB uses shared undo and the restore point was created when the PDB was closed (i.e. a clean restore point):

SQL> alter pluggable database PDB1 close;
SQL> flashback pluggable database PDB1 to clean restore point TEST1;
SQL> alter pluggable database PDB1 open resetlogs;



Prior to Oracle 19.9, setting the UNDO_RETENTION parameter in the root container meant that value was set for all pluggable databases also. From Oracle 19.9 onward this is not the case. Now there are two ways to set the parameter in the root container.

 
-- Just the root container.
alter system set undo_retention=3000;

-- The root container and all PDBs.
alter system set undo_retention=3000 container=all;




With Local Undo enabled, How To Set Specific Undo Tablespace For Each PDB Instance On RAC Node (Doc ID 2673826.1)





Oracle 19c New Feature Automatic Flashback of Standby Database

Please ensure the standby database is placed in MOUNT state before the flashback is performed on the standby site.


One of the new features in Oracle 19c is that when a flashback or point-in-time recovery is performed on the primary database in an Oracle Data Guard configuration, the same operation is automatically performed on the standby database as well.

Another Oracle 19c feature is that when we create a restore point on the primary database, a corresponding restore point is automatically created on the standby database.
These restore points are called Replicated Restore Points and have the restore point name suffixed with "_PRIMARY".


Flashback is performed to the restore point created earlier and we then open the database with the RESETLOGS option.

The standby database is placed in MOUNT mode and we will see that the MRP process on the standby database will start and perform the automatic flashback operation on the standby database as well.
When we see the message “Flashback Media Recovery Complete” in the standby database alert log, we can now open the standby database.
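
A minimal end-to-end sketch of this behaviour (the restore point name RP1 is illustrative):

-- On the primary
create restore point RP1 guarantee flashback database ;

-- On the standby (with redo apply running) the replicated restore point appears automatically
select name, scn from v$restore_point ;        -- shows RP1_PRIMARY

-- Flash back the primary and open it with RESETLOGS
shutdown immediate
startup mount
flashback database to restore point RP1 ;
alter database open resetlogs ;

-- Keep the standby in MOUNT; MRP performs the flashback automatically.
-- Once the alert log shows "Flashback Media Recovery Complete", restart redo apply or open the standby.
alter database open ;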



 ############ Views :  ############ 

show parameters flash

show parameter db_recovery_file_dest

select flashback_ON from v$database;

select * from v$flashback_database_log;

SELECT * FROM v$flashback_database_stat;

SELECT scn, database_incarnation#, guarantee_flashback_database, storage_size, time, name
FROM gv$restore_point;

select name from v$recovery_file_dest;

select space_limit,space_used,space_reclaimable,number_of_files from   v$recovery_file_dest;

select * from v$flash_recovery_area_usage;

SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

SELECT 
NAME, 
TO_CHAR(SPACE_LIMIT, '999,999,999,999') AS SPACE_LIMIT, 
TO_CHAR(SPACE_LIMIT - SPACE_USED + SPACE_RECLAIMABLE, 
'999,999,999,999') AS SPACE_AVAILABLE, 
ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1) 
AS PERCENT_FULL 
FROM V$RECOVERY_FILE_DEST;




########## Reference   ############

Metalink Doc   2338328.1 


Flashback of PDB failed with ORA-39867: Clean PDB Restore Point is On An Orphan Incarnation Of The PDB (Doc ID 2716855.1)

Running Script on Multiple Oracle Pdb Container Database


 
There are situations where we need to run a script or task against multiple PDBs in a container database. There are 3 options we can use.

1) Using catcon.pl 
2) Using Shell Script . 
3) Using  Pl/Sql block 



#####################################
Using catcon.pl  : 
#####################################

catcon.pl is an Oracle-provided script located under $ORACLE_HOME/rdbms/admin.

cd $ORACLE_HOME/rdbms/admin/

perl catcon.pl -d $ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED'   -l /tmp/utlrp_output -b utlrp_output utlrp.sql
perl catcon.pl -d $ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED'   -l /tmp/utlrp_output -b utlrp_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual"
perl catcon.pl -d $ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED'   -l /tmp/utlrp_output -b utlrp_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual" --x"SELECT USER FROM dual"
 

Usage: catcon  [-h, --help]
                 [-u, --usr username
                   [{/password | -w, --usr_pwd_env_var env-var-name}]]
                 [-U, --int_usr username
                   [{/password | -W, --int_usr_pwd_env_var env-var-name]]
                 [-d, --script_dir directory]
                 [-l, --log_dir directory]
                 [{-c, --incl_con | -C, --excl_con} container]
                 [-p, --catcon_instances degree-of-parallelism]
                 [-z, --ez_conn EZConnect-strings]
                 [-e, --echo]
                 [-s, --spool]
                 [-E, --error_logging
                   { ON | errorlogging-table-other-than-SPERRORLOG } ]
                 [-F, --app_con Application-Root]
                 [-V, --ignore_errors errors-to-ignore ]
                 [-I, --no_set_errlog_ident]
                 [-g, --diag]
                 [-v, --verbose]
                 [-f, --ignore_unavailable_pdbs]
                 [--fail_on_unopenable_pdbs]
                 [-r, --reverse]
                 [-R, --recover]
                 [-m, --pdb_seed_mode pdb-mode]
                 [--force_pdb_mode pdb-mode]
                 [--all_instances]
                 [--upgrade]
                 [--ezconn_to_pdb pdb-name]
                 [--sqlplus_dir directory]
                 [--dflt_app_module app-module]
                 -b, --log_file_base log-file-name-base
                 --
                 { sqlplus-script [arguments] | --x } ...
 





#####################################
PL/SQL block
#####################################

Below is a sample PL/SQL block we can use.



declare
    type names_t is table of v$pdbs.name%type;
    names names_t;
    type open_modes_t is table of v$pdbs.open_mode%type;
    open_modes open_modes_t;
    type privilege_user_t is table of dba_sys_privs.grantee%type;
    privilege_user privilege_user_t;
begin
        -- collect the PDB names and their open modes, skipping the seed
        select name, open_mode
        bulk collect
        into names, open_modes
        from v$pdbs
        where name not in ('PDB$SEED' , 'DCPDB01');
        for j in 1 .. names.count()
        loop
                if open_modes (j) <> 'MOUNTED'
                then
                        execute immediate 'alter session set container= "' || names (j) || '"';
                        -- grantees holding DBA or ANY privileges in this PDB
                        select grantee bulk collect into privilege_user from dba_sys_privs
                        where (privilege like '%ANY%' or privilege ='DBA') and grantee like 'U%'
                        group by grantee, privilege
                        order by grantee;
                        -- DBMS_OUTPUT.PUT_LINE expects a string, so print the collection element by element
                        for k in 1 .. privilege_user.count()
                        loop
                                DBMS_OUTPUT.put_line(names (j) || ' : ' || privilege_user (k));
                        end loop;
                end if;
        end loop;
end;
/

Note: DBMS_OUTPUT.put_line() expects a string argument, so the collection cannot be passed to it directly; each element (a single grantee) has to be printed individually, which is why the block above uses the inner loop.






#####################################
Using Shell Script : 
#####################################

Below is a shell script I wrote for fetching data from multiple PDBs.


#!/bin/bash
export LOG_FILE=/home/oracle/pdb.lst
export OUTPUT_FILE=/home/oracle/pdbout.log

# Build the list of open PDBs (excluding the seed)
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" << EOF >> $LOG_FILE
set sqlblanklines off
set echo off
set feedback off
set serveroutput off
set heading off
set termout off
set trimspool off
set wrap off
select name from v\$pdbs where name not in ('PDB\$SEED','DCPDB01') and open_mode <> 'MOUNTED' order by 1 ;
exit
EOF

# Fetch the character set from each PDB in the list
for pdb_name in `cat /home/oracle/pdb.lst`
do
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" << EOF >> $OUTPUT_FILE
set line 40
col name format a20
col value format a20
set sqlblanklines off
set echo off
set feedback off
set serveroutput off
set heading off
set termout off
set trimspool off
set wrap off
set colsep '|'
alter session set container= $pdb_name ;
select name , value from v\$nls_parameters , v\$pdbs where parameter='NLS_CHARACTERSET' ;
exit
EOF
done


Thursday, January 13, 2022

Oracle Multitenant / Container- Pluggable Database -- Pdb / Cdb


                     Oracle Multitenant/ Container  Database 



1) In previous releases the character set for the root container and all pluggable databases associated with it had to be the same. This could limit the movement of PDBs and make consolidation difficult where a non-standard character set was required.
In Oracle Database 12c Release 2 (12.2) a PDB can use a different character set from the CDB, provided the character set of the CDB is AL32UTF8, which is now the default character set when using the Database Configuration Assistant. (A quick per-container character set check is shown after this list.)

2) When we open the container database, the seed database is in read-only mode and the PDBs are in mounted state.

3) Background processes and memory allocation exist at the CDB level only.

4) Log switches occur only at the multitenant container database (CDB) level.

5) Instance recovery is always performed at the CDB level.

6) Patching and upgrades are done at the CDB level.

7) User creation can be done at the root level and at the PDB level. A user created in the root is a common user and is created in all PDBs, though the schema objects it owns can vary from PDB to PDB.

8) While creating a CDB, we can place PDB and seed files in different directories using the following:

The SEED FILE_NAME_CONVERT Clause
The PDB_FILE_NAME_CONVERT Initialization Parameter 

9) A PDB has its own SYSTEM, SYSAUX and TEMP tablespaces. It can also contain other user-created tablespaces.

10) For revoking a privilege from a common user:

If the current container is the root: 

 / Specify CONTAINER = CURRENT to revoke a locally granted system privilege, object privilege, or role from a common user or common role. The privilege or role is revoked from the user or role only in the root. This clause does not revoke privileges granted with CONTAINER = ALL. 

/ Specify CONTAINER = ALL to revoke a commonly granted system privilege, object privilege on a common object, or role from a common user or common role. The privilege or role is revoked from the user or role across the entire CDB. This clause can revoke only a privilege or role granted with CONTAINER = ALL from the specified common user or common role. This clause does not revoke privileges granted locally with CONTAINER = CURRENT. However, any locally granted privileges that depend on the commonly granted privilege being revoked are also revoked. 

If you omit this clause, then CONTAINER = CURRENT is the default.

11) When we restart the CDB, by default the seed will be in read-only mode and all PDBs will be in mounted state.
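
A quick per-container check of the character set, referenced in point 1 above (a sketch; CDB_PROPERTIES is queried from the root container):

column property_value format a20
select con_id, property_value
from   cdb_properties
where  property_name = 'NLS_CHARACTERSET'
order by con_id;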



How to check if the database is a Container Database / CDB:
Select  CDB from v$database ; 


Undo Management : 

There is interesting behavior in a 12.1.0.1 database when creating an undo tablespace in a PDB. With the new Multitenant architecture the undo tablespace resides at the CDB level and all PDBs share the same UNDO tablespace. When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.
In Oracle 12c Release 2 each container in an instance can use its own undo tablespace. This new undo management mechanism is called local undo mode, whilst that of previous releases is now known as shared undo mode. Local undo mode is the default mode in newly created databases, so you will probably only need to consider switching undo modes for upgraded instances
Prior to Oracle 19.9, setting the UNDO_RETENTION parameter in the root container meant that value was set for all pluggable databases also. From Oracle 19.9 onward this is not the case. Now there are two ways to set the parameter in the root container.
In addition, the UNDO_RETENTION parameter can be set separately in each PDB, provided local undo is being used.


column property_name format a30
column property_value format a30
select property_name, property_value
from   database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED             TRUE
SQL>



select con_id, tablespace_name
from   cdb_tablespaces
where  contents = 'UNDO'
order by con_id;
    CON_ID TABLESPACE_NAME
---------- ------------------------------
         1 UNDOTBS1
         3 UNDOTBS1
SQL>



conn / as sysdba
shutdown immediate;
startup upgrade;
alter database local undo off;
shutdown immediate;
startup;


conn / as sysdba
-- Just the root container.
alter system set undo_retention=3000;
-- The root container and all PDBs.
alter system set undo_retention=3000 container=all;
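
The per-PDB setting mentioned above (a sketch, assuming local undo mode and a PDB named pdb1):

alter session set container=pdb1;
alter system set undo_retention=3000;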



Tablespace and Datafile Management : 

Management of the temporary tablespace in a CDB is unchanged from that of a non-CDB database.

A PDB can either have its own temporary tablespace, or, if it is created without a temporary tablespace, it can share the temporary tablespace with the CDB.

ALTER SESSION SET CONTAINER = pdb1;
CREATE TABLESPACE dummy
  DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy01.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE users;
ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE temp;

SQL> SELECT CON_ID,TABLESPACE_NAME FROM CDB_TABLESPACES WHERE CON_ID=1;
SQL> SELECT CON_ID,FILE_NAME FROM CDB_DATA_FILES WHERE CON_ID=1;
SQL> SELECT CON_ID,FILE_NAME FROM CDB_TEMP_FILES WHERE CON_ID=1;


select con_id, tablespace_name
from   cdb_tablespaces
where  contents = 'UNDO'
order by con_id;


Connect to a PDB directly with ORACLE_PDB_SID
This is when you set the environment variable ORACLE_PDB_SID on top of ORACLE_SID.
It is done with a trigger called DBMS_SET_PDB.
This trigger most likely comes in with the April 2019 RUs for Oracle Database 19c and 18c. It does not exist in Oracle Database 12.2.0.1 or Oracle 12.1.0.2.

The ORACLE_SID defines which SID you connect to; ORACLE_PDB_SID does not work without the correct ORACLE_SID.
It is an AFTER LOGON event trigger. The trigger fires when you log on as SYS or SYSTEM.
 
SQL> select trigger_body from dba_triggers where trigger_name = 'DBMS_SET_PDB'
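
A minimal usage sketch (the CDB and PDB names are illustrative):

$ export ORACLE_SID=CDB1
$ export ORACLE_PDB_SID=PDB1
$ sqlplus / as sysdba

SQL> show con_name     -- the session lands directly in PDB1 rather than CDB$ROOT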



Saving STATE of PDB 
The PDB defaults to a closed state when the CDB first starts up. Oracle has introduced the "save state" option for the ALTER PLUGGABLE DATABASE command. This command will take the current state of the PDB and preserve that mode after the CDB is restarted.
The save state command is instance specific, so if you are in a RAC environment be sure to use the "instances=all" clause when placing the PDB into a specific state.

alter pluggable database abc save state instances=all ;
alter pluggable database all save state;
select con_name , instance_name , state   from dba_pdb_saved_states ; 
select con_name , state  from  CDB_PDB_SAVED_STATES ; 

The state is only saved and visible in the DBA_PDB_SAVED_STATES view if the container is in READ ONLY or READ WRITE mode. The ALTER PLUGGABLE DATABASE ... SAVE STATE command does not error when run against a container in MOUNTED mode, but nothing is recorded, as this is the default state after a CDB restart.



--> Open PDBs automatically and save their state


 alter pluggable database PDB1 save state;

select con_name, state from dba_pdb_saved_states;

 alter pluggable database PDB2 discard state;



As this feature is not available in 12.1.0.1 , we can create a trigger to open the PDBs automatically, with container database startup.

CREATE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/


--> Open services along with the PDB

alter pluggable database pdborcl open services=('pdborcl_srv');
alter pluggable database pdborcl open services=All;


Reference : 12c services will not start while manually restart pdb (Doc ID 2006021.1)






Removing Pdb from Restricted Mode : 

Typically a PDB will be in restricted mode due to existing violations. Please fix the violations and restart the PDB.


1) Check PDB  status.

 select INST_ID,NAME,OPEN_MODE,RESTRICTED   from gv$pdbs order by 1,2;

2) Check PDB_PLUG_IN_VIOLATIONS.

 select  status, message, action   from  pdb_plug_in_violations  where  status !='RESOLVED';


 3)Bounce all instances and pdbs.

 ALTER PLUGGABLE DATABASE ALL EXCEPT PDB001 OPEN READ WRITE;
 ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE FORCE;



User  and Role  Management :

Create User c##pwtest identified by password container=all ;
grant create session to c##pwtest container=all ;
connect c##pwtest/password
alter user c##pwtest identified by password ;
alter user c##pwtest identified by password container=all ;
select username , profile , common , con_id from cdb_users where username='C##A1';

 select username,profile,common,con_id from cdb_users where username='C##AA1';

 SELECT ROLE,ROLE_ID,COMMON,CON_ID FROM CDB_ROLES WHERE ROLE='C##ROLE1';
 
 SQL> select username, con_id from cdb_users where username='C##AA1';

 SQL> select d.username, c.con_id,p.name from dba_users d, cdb_users c , v$pdbs p where d.username='C##AA1' and d.username=c.username and p.con_id=c.con_id;

 Create Role C##ROLE1 CONTAINER=ALL ; 
Create Role Lrole container=current ; 
Grant C##ROLE1 to c##user1 ; 
GRANT SELECT ON SYS.V_$SESSION TO C##USER2 container=all ;

There are some predefined roles that can be viewed using the queries below:
SELECT ROLE, common, con_id FROM cdb_roles order by role , con_id ;
select role , common from dba_roles order by role ;
select * from session_roles ;

COL grantee format a12 
col privilege format a30 
select grantee , privilege  , common , con_id from cdb_sys_privs where grantee='C##USER1' ORDER BY 1,3 ; 


# Enabling a common user to view data of a specific PDB
COL USERNAME format a10
col default_attr format a7
col owner format a6
col object_name format a11
col all_containers format a3
column container_name format a10
col con_id format 999
set pages 100
set line 200
select username , default_attr , owner , object_name , all_containers , container_name , con_id from cdb_container_data
where username not in ('GSMADMIN_INTERNAL','APPQOSSYS','DBSNMP') ORDER BY USERNAME ;

ALTER USER C##USER2 SET CONTAINER_DATA = (CDB$ROOT , PDB2) FOR SYS.V_$SESSION CONTAINER=CURRENT ;



Starting / Shutting down   PDB

ALTER PLUGGABLE DATABASE ALL OPEN issued from the root
ALTER DATABASE OPEN issued from that PDB
ALTER PLUGGABLE DATABASE OPEN issued from that PDB
alter pluggable database PDB2 close immediate ; 
alter pluggable database PDB2 open ; 

alter pluggable database test1_tech close;
alter pluggable database test1_tech close immediate;
alter pluggable database test1_tech open;
alter pluggable database test1_tech open read only ;
alter pluggable database test1_tech open force;

ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN READ ONLY;
ALTER PLUGGABLE DATABASE OPEN FORCE;
ALTER PLUGGABLE DATABASE NOLOGGING;
ALTER PLUGGABLE DATABASE ENABLE FORCE LOGGING;
STARTUP OPEN
STARTUP OPEN READ ONLY
STARTUP RESTRICT OPEN READ ONLY


alter pluggable database all close immediate;
alter pluggable database all open;

ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE <pdb-name-clause> CLOSE [IMMEDIATE];



We can create startup trigger to start all PDB

 CREATE OR REPLACE TRIGGER open_pdbs 
  AFTER STARTUP ON DATABASE 
BEGIN 
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN'; 
END open_pdbs;
/





Backup and  recovery  commands :

1) Backup and recovery can be done at  CDB and PDB level 
 
Backup Commands

RMAN> BACKUP DATABASE ROOT;
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;
RMAN> BACKUP PLUGGABLE DATABASE pdb1  plus archivelog ;
-- It will skip archivelog backup when connected to Pdb 
RMAN> BACKUP DATABASE;
RMAN> BACKUP TABLESPACE system, sysaux, users;
RMAN> BACKUP TABLESPACE pdb1:system, pdb1:sysaux, pdb1:users, pdb2:system;

list backup of PLUGGABLE DATABASE pdb1 ; 
list backup of tablespace PDB1:users ; 




Restore Commands : 

RESTORE TABLESPACE pdb2:system  ; 


RUN {
  SHUTDOWN IMMEDIATE; # use abort if this fails
  STARTUP MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN;
}


RUN {
  SHUTDOWN IMMEDIATE; # use abort if this fails
  STARTUP MOUNT;
  RESTORE DATABASE ROOT;
  RECOVER DATABASE ROOT;
  # Consider recovering PDBs before opening.
  ALTER DATABASE OPEN;
}


RUN {
  ALTER PLUGGABLE DATABASE pdb1 CLOSE;
  SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
  RESTORE PLUGGABLE DATABASE pdb1;
  RECOVER PLUGGABLE DATABASE pdb1;
  ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}



RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1
  UNTIL TIME "TO_DATE('01-JAN-2013 15:00', 'DD-MON-YYYY HH24:MI')"
  AUXILIARY DESTINATION '/u01/aux'  
  REMAP TABLE 'TEST'.'T1':'T1_PREV';


RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1
  UNTIL SCN 5695703
  AUXILIARY DESTINATION '/u01/aux'
  DATAPUMP DESTINATION '/u01/export'
  DUMP FILE 'test_t1_prev.dmp'
  NOTABLEIMPORT;


Datapump :

It’s important to note that Data Pump doesn’t allow you to use a directory that’s owned by the root container or CDB for imports and exports. You also can’t use one that’s owned by a different PDB.

Instead, you need to create a directory under the PDB. The PDB must own the directory for you to use Data Pump export and import functionality.

Using a Data Pump export for PDBs is identical to using a Data Pump export for a non-CDB database. The only difference in using the Data Pump export utility for a PDB is that you must use a connect identifier, or Transparent Network Substrate (TNS) alias, in the export command prompt when you initiate the export. This step ensures that you’re initiating the Data Pump export for a specific PDB.
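
A minimal sketch of a PDB-level export (the directory path, TNS alias pdb1 and dump file names are illustrative):

SQL> alter session set container=pdb1;
SQL> create directory pdb1_dp_dir as '/u01/app/oracle/dp/pdb1';
SQL> grant read, write on directory pdb1_dp_dir to system;

$ expdp system@pdb1 directory=pdb1_dp_dir dumpfile=pdb1_full.dmp logfile=pdb1_full.log full=y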





Parameter check : 

select parameter , value from v$nls_parameters where parameter='NLS_CHARACTERSET';

set linesize 300 
column name  format a30 
column value format a30
select b.name , a.name , a.value$ "Value" from pdb_spfile$ a , v$pdbs b where a.pdb_uid=b.con_uid and a.name in ('cpu_count','sga_target') order by 1,2 ;


alter session set CONTAINER=PDB1; 
select value, ISPDB_MODIFIABLE from v$system_parameter where name='db_recovery_file_dest_size';





Managing Services  : 


SQL> connect /@CDB1 as sysdba
SQL> alter session set container=pdb1;
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
SQL> alter session set container=cdb$root;

select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;

select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;




srvctl add service -db CDB01 -service TEST_NEW -pdb TEST
srvctl modify service -db CDB01 -service TEST_NEW -pdb TEST
srvctl remove service -db CDB01 -service TEST_NEW

BEGIN
DBMS_SERVICE.CREATE_SERVICE(
service_name => 'TEST_NEW',
network_name => 'TEST_NEW.com');
END;
/

BEGIN
DBMS_SERVICE.DELETE_SERVICE(
service_name => 'TEST_NEW');
END;
/

SQL> select name, con_id from v$active_services order by 1;




Adding service with domain name : 
-- on primary
srvctl add service -db CDB01_fra3gh -service hr.oracle.com -pdb pdb01 -role primary
srvctl start service -db CDB01_fra3gh -service hr.oracle.com
 
lsnrctl services
...
Service "hr.oracle.com" has 1 instance(s).
...
 
-- on standby
srvctl add service -db CDB01_amsad1 -service hr.oracle.com -pdb pdb01 -role primary
srvctl start service -db CDB01_amsad1 -service hr.oracle.com
 
lsnrctl services
...
Service "hr.oracle.com" has 1 instance(s).
...




Drop Pluggable Database :

drop pluggable database pdb3_test including datafiles;
 



Rename Pluggable Database :

alter pluggable database TEST rename global_name to TEST1;
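
The rename generally requires the PDB to be open in RESTRICTED mode on a single instance; a minimal sketch (names as above):

alter pluggable database TEST close immediate;
alter pluggable database TEST open restricted;
alter session set container=TEST;
alter pluggable database rename global_name to TEST1;
alter session set container=CDB$ROOT;
alter pluggable database TEST1 close immediate;
alter pluggable database TEST1 open;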




Pdb  checks :

show con_name 
show con_id 
show pdbs

alter session set container=cdb$root; 

col con_name for a10 
col instance_name for a10 
select con_name , instance_name , state from dba_pdb_saved_states ; 

select name , open_mode from v$pdbs ; 

COLUMN NAME FORMAT A15
COLUMN RESTRICTED FORMAT A10
COLUMN OPEN_TIME FORMAT A30
SELECT NAME , OPEN_MODE , RESTRICTED , OPEN_TIME FROM V$PDBS ;

COLUMN NAME FORMAT A8
SELECT name , con_id , dbid , con_uid , guid from v$containers order by con_id ;

COLUMN PDB_NAME FORMAT A15
SELECT PDB_ID , PDB_NAME , STATUS FROM DBA_PDBS order by pdb_id ;

COLUMN DB_NAME FORMAT A10
COLUMN CON_ID FORMAT 999
COLUMN PDB_NAME FORMAT A15
COLUMN OPERATION FORMAT A16
COLUMN OP_TIMESTAMP FORMAT A10
COLUMN CLONED_FROM_PDB_NAME FORMAT A15
SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY
WHERE CON_ID > 2
ORDER BY CON_ID;

COLUMN name FORMAT A30
SELECT name, pdb
FROM   v$services
ORDER BY name;

SQL> select tablespace_name, con_id from cdb_tablespaces where con_id =1;
SQL> select file_name, con_id from cdb_data_files where con_id=1;
SQL> select file_name, con_id from cdb_temp_files where con_id =1 ;



col name for a10
col owner for a8
col object_type for a10
select t.con_id,p.name,t.owner,t.object_type,count(*) from cdb_objects t ,v$containers p
where p.con_id = t.con_id and t.status = 'INVALID' group by t.con_id,p.name,t.owner,t.object_type;



-- Size of Pluggable Database : 
select con_id, name, open_mode, total_size/1024/1024/1024 "PDB_SIZE_GB" from v$pdbs;





 
SQL> connect / as sysdba
Connected.
SQL> get show_orclpdb1_params.sql
  1  select  pdb.dbid, pdb.con_uid, sp.name, sp.value$
  2  from v$pdbs pdb, pdb_spfile$ sp
  3  where pdb.name = 'ORCLPDB1'
  4  and pdb.con_uid=sp.pdb_uid
  5* order by sp.name
SQL> /





Flashback Cdb :

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO TIME "TO_DATE('08/20/12','MM/DD/YY')";
ALTER DATABASE OPEN RESETLOGS;
ALTER PLUGGABLE DATABASE ALL OPEN;

Note :
A. Flashback must be enabled at the CDB level (see the example below).
B. To enable flashback database, the CDB must be mounted.
C. The DB_FLASHBACK_RETENTION_TARGET parameter must be set to enable flashback of the CDB.
D. A CDB can be flashed back by specifying the desired target point in time, an SCN, or a restore point.
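
A minimal sketch of enabling flashback at the CDB level, assuming the fast recovery area (db_recovery_file_dest) is already configured; the retention value is illustrative:

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER SYSTEM SET db_flashback_retention_target=1440;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;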





Creating new Pdb 

$ sqlplus sys@cdb1 as sysdba

SQL> ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY *******;

OR

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY *******
2 CREATE_FILE_DEST='/u01/app/oracle/oradata';

OR

SQL> CREATE PLUGGABLE DATABASE repotestpdb ADMIN USER pdbadm IDENTIFIED BY *******
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');

OR

SQL> ALTER SESSION SET PDB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cdb1/pdbseed/'
,'/u01/app/oracle/oradata/cdb1/pdb3/';

SQL> CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdb_adm IDENTIFIED BY Password1 ROLES=(DBA);


or 

Creating a CDB: Using SQL*Plus

1) Instance startup:

• Set ORACLE_SID=CDB1

• Set in initCDB1.ora:
• Set CONTROL_FILES to CDB control file names.
• Set DB_NAME to CDB name.
• Set ENABLE_PLUGGABLE_DATABASE to TRUE.

SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT


2) Create the database:
• CDB$ROOT container
• PDB$SEED pluggable database

SQL> CREATE DATABASE CDB1 ENABLE PLUGGABLE DATABASE SEED FILE_NAME_CONVERT ('/oracle/dbs','/oracle/seed');

 
3) Run the catcdb.sql script.





Cloning PDB 


CDB1

$ sqlplus sys@cdb1 as sysdba

SQL> ALTER PLUGGABLE DATABASE PDB1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN READ ONLY;
SQL> CREATE PLUGGABLE DATABASE PDB2 FROM PDB1
2 STORAGE UNLIMITED TEMPFILE REUSE
3 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/', '/u01/app/oracle/oradata/cdb1/pdb2');


 PDB is in LOCAL UNDO MODE - HOT CLONING

SQL> CREATE PLUGGABLE DATABASE PDB3 FROM PDB1
2 STORAGE UNLIMITED TEMPFILE REUSE
3 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/', '/u01/app/oracle/oradata/cdb1/pdb3');




Plugging In an Unplugged PDB


UNPLUG

SQL> CONNECT SYS@CDB1 AS SYSDBA
SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.xml';

OR

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.pdb';




PLUG-IN

There are multiple options to plug in: COPY, NOCOPY & MOVE (a sketch of these is shown after the plug-in examples below).

Before we plug in, we need to check for violations.

set serveroutput on
DECLARE
   compatible BOOLEAN := FALSE;
BEGIN  
   compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
        pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
   if compatible then
      DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
   else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
   end if;
END;
/

NOTE: If the script returns NO, then check the PDB_PLUG_IN_VIOLATIONS view.

SQL> col cause for a10
col name for a10
col message for a35 word_wrapped
select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name='PDB1';

Resolve all the violations reported by the above command before proceeding.





SQL> CONNECT SYS@CDB2 as sysdba
SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.xml'
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

OR

SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.pdb'
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;
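
A sketch of the COPY / NOCOPY / MOVE options mentioned earlier (paths are illustrative): COPY copies the datafiles to the new location, NOCOPY uses them where they are, and MOVE relocates them.

SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.xml' COPY
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.xml' NOCOPY TEMPFILE REUSE;

SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.xml' MOVE
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');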





Adopting a Non-CDB as a PDB

DB19C NON-CDB

sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup mount exclusive
SQL> alter database open read only;
SQL> exec dbms_pdb.describe(pdb_descr_file=>'/u01/app/oracle/oradata/db19cnoncdb/noncdb.xml');
SQL> shutdown immediate


CDB1

sqlplus / as sysdba
SQL> create pluggable database db19cpdb as clone using '/u01/app/oracle/oradata/db19cnoncdb/noncdb.xml'
2 file_name_convert=('/u01/app/oracle/oradata/db19cnoncdb','/u01/app/oracle/oradata/db19cpdb') copy;
SQL> alter pluggable database db19cpdb open;
sqlplus sys@db19cpdb as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql




PDB Snapshots  

– PDB snapshot is a point-in-time copy of a PDB

– Maximum number of snapshots MAX_PDB_SNAPSHOTS = 8 (default)
NONE : The PDB does not support snapshots.
MANUAL : The PDB supports snapshots, but they are only created when manually requested.
EVERY n HOURS : A snapshot is automatically created every "n" hours. Where "n" is between 1 and 1999.
EVERY n MINUTES : A snapshot is automatically created every "n" minutes. Where "n" is between 1 and 2999.

SQL> CONNECT / AS SYSDBA

SQL> CREATE PLUGGABLE DATABASE PDB2 ADMIN USER PDBADMIN IDENTIFIED BY ***********
FILE_NAME_CONVERT=('pdbseed','pdb2')
SNAPSHOT MODE EVERY 24 HOURS;

SQL> ALTER PLUGGABLE DATABASE PDB2 OPEN;
SQL> ALTER PLUGGABLE DATABASE PDB2 SAVE STATE;

SQL> SELECT con_id, con_name,snapshot_name,
snapshot_scn, full_snapshot_path
  FROM cdb_pdb_snapshots
 ORDER BY con_id, snapshot_scn;
CON_ID CON_NAME SNAPSHOT_NAME SNAPSHOT_SCN FULL_SNAPSHOT_PATH
---------- ---------- ------------------------------ ------------ -------------------------------------------
4 PDB2 SNAP_688979926_996491289 1764864 /u02/oradata/snap_688979926_1764864.pdb


Recovering From a PDB Snapshot

SQL> CREATE PLUGGABLE DATABASE PDB2COPY FROM pdb2 USING SNAPSHOT SNAP_688979926_996491631;

OR

SQL> CREATE PLUGGABLE DATABASE PDB2COPY FROM pdb2 USING SNAPSHOT SNAP_688979926_996491631
SNAPSHOT MODE EVERY 24 HOURS;

SQL> ALTER PLUGGABLE DATABASE DROP SNAPSHOT SNAP_688979926_996491289;






Resource Plan for PDB database 


ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'TEST_CDB_PLAN';

DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.create_cdb_plan(
    plan    => l_plan,
    comment => 'A test CDB resource plan');

  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan                  => l_plan, 
    pluggable_database    => 'pdb1', 
    shares                => 3, 
    utilization_limit     => 100,
    parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan                  => l_plan, 
    pluggable_database    => 'pdb2', 
    shares                => 3, 
    utilization_limit     => 100,
    parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/



DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_plan_directive(
    plan                      => l_plan, 
    pluggable_database        => 'pdb3', 
    new_shares                => 1, 
    new_utilization_limit     => 100,
    new_parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.delete_cdb_plan_directive(
    plan                      => l_plan, 
    pluggable_database        => 'pdb3');

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_default_directive(
    plan                      => l_plan, 
    new_shares                => 1, 
    new_utilization_limit     => 80,
    new_parallel_server_limit => 80);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_autotask_directive(
    plan                      => l_plan, 
    new_shares                => 1, 
    new_utilization_limit     => 75,
    new_parallel_server_limit => 75);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.delete_cdb_plan(plan => l_plan);
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


COLUMN plan FORMAT A30
COLUMN comments FORMAT A30
COLUMN status FORMAT A10
SET LINESIZE 100

SELECT plan_id,
       plan,
       comments,
       status,
       mandatory
FROM   dba_cdb_rsrc_plans
WHERE  plan = 'TEST_CDB_PLAN';



COLUMN plan FORMAT A30
COLUMN pluggable_database FORMAT A25
SET LINESIZE 100
SELECT plan, 
       pluggable_database, 
       shares, 
       utilization_limit AS util,
       parallel_server_limit AS parallel
FROM   dba_cdb_rsrc_plan_directives
WHERE  plan = 'TEST_CDB_PLAN'
ORDER BY pluggable_database;




Other Commands :


Alter pluggable database PDB1  default  temporary tablespace pdb1_temp1 ; 
Alter pluggable database PDB1 set  default  bigfile  tablespace ;
Alter pluggable database PDB1 STORAGE (MAXSIZE UNLIMITED) ;
ALTER PLUGGABLE DATABASE pdb1 storage unlimited ; 
Alter pluggable database pdb1  enable force logging ; 


  






References : 

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-5C339A60-2163-4ECE-B7A9-4D67D3D894FB

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-D0F40745-FC70-4BE0-85D3-3745DE3312AC

https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/pdb/pdb_basics/pdb_basics.html


How to Change Initialization Parameters at PDB Level on Standby Database (Doc ID 2903547.1)


How to Unplug and Plugin PDB (Doc ID 2728046.1)

Character Sets For CDB And PDB in 12.2 (Doc ID 2231602.1)

Moving PDB from Lower Release Update (or Patch Set Update, Bundle Patch) on Same or Higher DB Version (Doc ID 2847234.1)


How to set the Undo_tablespace in PDB in Physical Standby RAC Database. (Doc ID 2726173.1)


Saturday, November 27, 2021

Oracle Data Scrubbing Exadata and Asm

 
Oracle ASM disk scrubbing improves availability and reliability by searching for data that may be less likely to be read. Disk scrubbing checks logical data corruptions and repairs them automatically in normal and high redundancy disk groups. The scrubbing process repairs logical corruptions using the mirror disks. Disk scrubbing can be combined with disk group rebalancing to reduce I/O resources. The disk scrubbing process has minimal impact to the regular I/O in production systems.

You can perform scrubbing on a disk group, a specified disk, or a specified file of a disk group with the ALTER DISKGROUP SQL statement. The "Performing Scrubbing" section below shows various options used when running the ALTER DISKGROUP disk_group SCRUB SQL statement.



Exadata Disk Scrubbing

A subtler Exadata Maintenance job is the bi-weekly Disk Scrub. This job does not appear in the CellCLI alert history. It only appears in the $CELLTRACE/alert.log.

Disk Scrubbing is designed to periodically validate the integrity of the mirrored ASM extents and thus eliminate latent corruption. The scrubbing is supposed to run only when average I/O utilization is under 25 percent. However, it can still cause spikes in utilization and latency and adversely affect database I/O. Oracle documentation says that a 4TB high capacity hard disk can take 8-12 hours to scrub, but I have seen it run for more than 24 hours. Normally, this isn't noticeable as it runs quietly in the background.

However, if you have a high I/O workload, the additional 10-15 percent latency is noticeable.
Logs and Schedule

The $CELLTRACE/alert.log on the cell nodes reports the timing and results.
Wed Jan 11 16:00:07 2017
Begin scrubbing CellDisk:CD_11_xxxxceladm01.
Begin scrubbing CellDisk:CD_10_xxxxceladm01.

Thu Jan 12 15:12:37 2017
Finished scrubbing CellDisk:CD_10_xxxxceladm01, scrubbed blocks (1MB):3780032, found bad blocks:0
Thu Jan 12 15:42:02 2017
Finished scrubbing CellDisk:CD_11_xxxxceladm01, scrubbed blocks (1MB):3780032, found bad blocks:0
You can connect to the cell nodes and alter the Start Time and Interval at CellCLI prompt:
CellCLI> alter cell hardDiskScrubStartTime='2017-01-21T08:00:00-08:00';
CellCLI> list cell attributes name,hardDiskScrubInterval
biweekly


ASM Disk Scrubbing

ASM Disk Scrubbing performs a similar task to Exadata Disk Scrubbing. It searches the ASM blocks and repairs logical corruption using the mirror disks. The big difference is that ASM Disk scrubbing is run manually at disk group or file level and can be seen in V$ASM_OPERATION view and the alert_+ASM.log
Logs and Views
The alert_+ASM.log on the database node reports the command and duration.
Mon Feb 06 09:03:58 2017
SQL> alter diskgroup DBFS_DG scrub power low
Mon Feb 06 09:03:58 2017
NOTE: Start scrubbing diskgroup DBFS_DG
Mon Jan 06 09:03:58 2017
SUCCESS: alter diskgroup DBFS_DG scrub power low



Performing  Scrubbing :

Scrubbing A data file:  
Within a disk group, ASM disk scrubbing can be invoked to scrub an individual data file.
alter diskgroup
   data
scrub file
   '+data/orcl/datafile/example.266.806582193'
repair power high force;



Scrubbing A specific disk:  
You can direct ASM to use the mirrored data to scrub and synchronize a mirrored disk spindle.
alter diskgroup
   data
scrub disk
   data_0005
repair power high force;



Scrubbing An ASM disk group:  
Entire ASM disk groups can be done with disk scrubbing.
alter diskgroup
   data
scrub power low;

 
scrub  . . . repair:   
This option automatically repairs disk corruptions. If the "repair" keyword is not used, Oracle will only identify corruptions and not fix them:
alter diskgroup
   data
scrub disk
   data_0005
repair power low; --> reports and repairs corruptions
alter diskgroup
   data
scrub disk
   data_0005
power high force;  --> reports corruptions



scrub . . . power:  
If the "power" argument is specified with data scrubbing, you can have several levels of power:
alter diskgroup data scrub power low;
alter diskgroup data scrub power auto; --> default
alter diskgroup data scrub power high;
alter diskgroup data scrub power max;
SQL> ALTER DISKGROUP data SCRUB POWER LOW;
SQL> ALTER DISKGROUP data SCRUB FILE '+DATA/ORCL/DATAFILE/example.266.806582193' 
       REPAIR POWER HIGH FORCE;
SQL> ALTER DISKGROUP data SCRUB DISK DATA_0005 REPAIR POWER HIGH FORCE;



scrub  . . . wait:  
If the optional "wait" option is specified, the command returns after the scrubbing operation has completed. If the WAIT option is not specified, the scrubbing operation is added into the scrubbing queue and the command returns immediately.

scrub . . . force:  
If the optional "force" option is specified, the command is processed even if the system I/O load is high or if scrubbing has been disabled internally at the system level.




Seeing Scrubbing Process : 

You can monitor the scrubbing progress from V$ASM_OPERATION view.
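
A sketch of the monitoring query (run on the ASM instance; the column list is a subset of V$ASM_OPERATION):

select group_number, operation, pass, state, power, sofar, est_work, est_minutes
from   v$asm_operation;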

[grid@dbserver]$ ps -ef | grep asm_sc
grid     17902     1  0 11:27 ?        00:00:00 asm_scrb_+ASM
grid     24365     1  0 11:49 ?        00:00:01 asm_scc0_+ASM
...
SCRB - ASM disk scrubbing master process that coordinates disk scrubbing operations.
SCCn - ASM disk scrubbing slave check process that performs the checking operations. The possible processes are SCC0-SCC9.
We would see additional processes during the repair operation:
SCRn - ASM disk scrubbing slave repair process that performs repair operation. The possible processes are SCR0-SCR9.
  
SCVn - ASM disk scrubbing slave verify process that performs the verifying operations. The possible processes are SCV0-SCV9.



ZFS Data Scrubbing : 
We can also perform ZFS scrubbing. More information can be found at the link below.

https://docs.oracle.com/cd/E19120-01/open.solaris/817-2271/gbbxi/index.html




Reference :

https://docs.oracle.com/database/121/OSTMG/GUID-ECCAE4F3-CFC5-4B55-81D2-FFB6953C035C.htm#OSTMG95351

http://ora-srv.wlv.ac.uk/oracle19c_doc/ostmg/alter-diskgroups.html#GUID-6BB31112-8687-4C1E-AF14-D94FFCDA736F

Doc IDs:
2094581.1
2049911.1