Wednesday, January 19, 2022

Oracle Exadata / ExaCC Cell Server Replacement Without Downtime



We often come across the requirement to replace a cell server or shut one down for maintenance. While replacing or powering off a cell, we need to concentrate on three things in particular:

1) Checking REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB.
2) Setting the right disk_repair_time to make use of ASM Fast Mirror Resync.
3) Checking ASMDeactivationOutcome.


ASMDeactivationOutcome and disk_repair_time are well documented in MOS note 1188080.1. In this blog we will try to cover checking REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB.
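
For point (2), the current disk_repair_time can be checked per disk group from the ASM instance. A minimal sketch (standard V$ASM_ATTRIBUTE / V$ASM_DISKGROUP columns):

select dg.name diskgroup, a.name attribute, a.value
from   v$asm_diskgroup dg, v$asm_attribute a
where  dg.group_number = a.group_number
and    a.name = 'disk_repair_time';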


If a physical disk fails, i.e. the disk itself has a hardware failure, all grid disks on that physical disk are immediately DROPPED from ASM with the FORCE option. This is called Pro-Active Disk Quarantine; ASM will not wait for DISK_REPAIR_TIME to drop the disks in this case.




REQUIRED_MIRROR_FREE_MB and   USABLE_FILE_MB 

What most of us are unsure about is how much free space we need to ensure there is no outage when a cell is powered off.
Each cell server is a separate failure group.

ASM calculates USABLE_FILE_MB using the following formula (for normal redundancy):
USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2

In Exadata with ASM version 12cR1, REQUIRED_MIRROR_FREE_MB is reported as the size of the largest disk in the disk group.



TOTAL_MB :- Refers to the total capacity of the disk group in MB.
FREE_MB :- Refers to the raw free space available in the disk group in MB.

FREE_MB = (TOTAL_MB - (HOT_USED_MB + COLD_USED_MB))

REQUIRED_MIRROR_FREE_MB :- Indicates how much free space is required in an ASM disk group to restore redundancy after the failure of an ASM disk or ASM failure group. In Exadata it is the disk capacity of one failure group.

USABLE_FILE_MB :- Indicates how much space is available in an ASM disk group, taking the redundancy level of the disk group into account.

It is calculated as:

USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2  -> for Normal Redundancy
USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 3  -> for High Redundancy
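
As a quick sanity check, the same formula can be applied manually and compared against what ASM itself reports. A sketch against V$ASM_DISKGROUP (the DECODE simply picks the divisor per redundancy type):

select name, type, free_mb, required_mirror_free_mb, usable_file_mb,
       round((free_mb - required_mirror_free_mb) /
             decode(type, 'NORMAL', 2, 'HIGH', 3)) calc_usable_file_mb
from   v$asm_diskgroup;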


 
1) Run the queries below against all disk groups to check REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB. The formula used in the first query is total allocatable size / redundancy.

select sum(total_mb)/3 from v$asm_disk where lower(failgroup)='cel02.cn.db.com' and lower(name) like '%data%'; 


select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"
from v$asm_disk_stat
where GROUP_NUMBER=2
group by FAILGROUP
order by 3;


select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB
from v$asm_diskgroup_stat
where GROUP_NUMBER=2;




This is an old copy of the Oracle-provided script I had:

SET SERVEROUTPUT ON

DECLARE
   v_num_disks    NUMBER;
   v_group_number   NUMBER;
   v_max_total_mb   NUMBER;

   v_required_free_mb   NUMBER;
   v_usable_mb      NUMBER;
   v_cell_usable_mb   NUMBER;
   v_one_cell_usable_mb   NUMBER;   
   v_enuf_free      BOOLEAN := FALSE;
   v_enuf_free_cell   BOOLEAN := FALSE;
   
   v_req_mirror_free_adj_factor   NUMBER := 1.10;
   v_req_mirror_free_adj         NUMBER := 0;
   v_one_cell_req_mir_free_mb     NUMBER  := 0;
   
   v_disk_desc      VARCHAR(10) := 'SINGLE';
   v_offset      NUMBER := 50;
   
   v_db_version   VARCHAR2(8);
   v_inst_name    VARCHAR2(1);
   
BEGIN

   SELECT substr(version,1,8), substr(instance_name,1,1)    INTO v_db_version, v_inst_name    FROM v$instance;
   
   IF v_inst_name <> '+' THEN
      DBMS_OUTPUT.PUT_LINE('ERROR: THIS IS NOT AN ASM INSTANCE!  PLEASE LOG ON TO AN ASM INSTANCE AND RE-RUN THIS SCRIPT.');
      GOTO the_end;
   END IF;

    DBMS_OUTPUT.PUT_LINE('------ DISK and CELL Failure Diskgroup Space Reserve Requirements  ------');
    DBMS_OUTPUT.PUT_LINE(' This procedure determines how much space you need to survive a DISK or CELL failure. It also shows the usable space ');
    DBMS_OUTPUT.PUT_LINE(' available when reserving space for disk or cell failure.  ');
   DBMS_OUTPUT.PUT_LINE(' Please see MOS note 1551288.1 for more information.  ');
   DBMS_OUTPUT.PUT_LINE('.  .  .');      
    DBMS_OUTPUT.PUT_LINE(' Description of Derived Values:');
    DBMS_OUTPUT.PUT_LINE(' One Cell Required Mirror Free MB : Required Mirror Free MB to permit successful rebalance after losing largest CELL regardless of redundancy type');
    DBMS_OUTPUT.PUT_LINE(' Disk Required Mirror Free MB     : Space needed to rebalance after loss of single or double disk failure (for normal or high redundancy)');
    DBMS_OUTPUT.PUT_LINE(' Disk Usable File MB              : Usable space available after reserving space for disk failure and accounting for mirroring');
    DBMS_OUTPUT.PUT_LINE(' Cell Usable File MB              : Usable space available after reserving space for SINGLE cell failure and accounting for mirroring');
   DBMS_OUTPUT.PUT_LINE('.  .  .');   

   IF v_db_version = '11.2.0.3' THEN
      v_req_mirror_free_adj_factor := 1.10;
      DBMS_OUTPUT.PUT_LINE('ASM Version: 11.2.0.3');
   ELSE
      v_req_mirror_free_adj_factor := 1.5;
      DBMS_OUTPUT.PUT_LINE('ASM Version: '||v_db_version||'  - WARNING DISK FAILURE COVERAGE ESTIMATES HAVE NOT BEEN VERIFIED ON THIS VERSION!');   
   END IF;
   
   DBMS_OUTPUT.PUT_LINE('.  .  .');      
      
   FOR dg IN (SELECT name, type, group_number, total_mb, free_mb, required_mirror_free_mb FROM v$asm_diskgroup ORDER BY name) LOOP

      v_enuf_free := FALSE;
     
     v_req_mirror_free_adj := dg.required_mirror_free_mb * v_req_mirror_free_adj_factor;
      
      -- Find largest amount of space allocated to a cell   
      SELECT sum(disk_cnt), max(max_total_mb), max(sum_total_mb)*v_req_mirror_free_adj_factor
     INTO v_num_disks, v_max_total_mb, v_one_cell_req_mir_free_mb
      FROM (SELECT count(1) disk_cnt, max(total_mb) max_total_mb, sum(total_mb) sum_total_mb 
      FROM v$asm_disk 
     WHERE group_number = dg.group_number 
     GROUP BY failgroup);     
     
      -- Eighth Rack
      IF dg.type = 'NORMAL' THEN
      
         -- Eighth Rack
         IF (v_num_disks < 36) THEN
            -- Use eqn: y = 1.21344 x+ 17429.8
            v_required_free_mb :=  1.21344 * v_max_total_mb + 17429.8;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;
         
         -- Quarter Rack
         ELSIF (v_num_disks >= 36 AND v_num_disks < 84) THEN 
            -- Use eqn: y = 1.07687 x+ 19699.3
            v_required_free_mb := 1.07687 * v_max_total_mb + 19699.3;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;
         
         -- Half Rack
         ELSIF (v_num_disks >= 84 AND v_num_disks < 168) THEN 
            -- Use eqn: y = 1.02475 x+53731.3
            v_required_free_mb := 1.02475 * v_max_total_mb + 53731.3;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;

         -- Full rack is most conservative, it will be default
         ELSE
            -- Use eqn: y = 1.33333 x+83220.
            v_required_free_mb := 1.33333 * v_max_total_mb + 83220;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;      
         
         END IF;
         
         -- DISK usable file MB
         v_usable_mb := ROUND((dg.free_mb - v_required_free_mb)/2);
         v_disk_desc := 'ONE disk';
         
         -- CELL usable file MB
         v_cell_usable_mb := ROUND( (dg.free_mb - v_one_cell_req_mir_free_mb)/2 );
         v_one_cell_usable_mb := v_cell_usable_mb;
       
      ELSE
         -- HIGH redundancy
         
         -- Eighth Rack
         IF (v_num_disks <= 18) THEN
            -- Use eqn: y = 4x + 0
            v_required_free_mb :=  4.0 * v_max_total_mb;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;
         
         -- Quarter Rack
         ELSIF (v_num_disks > 18 AND v_num_disks <= 36) THEN 
            -- Use eqn: y = 3.87356 x+417692.
            v_required_free_mb := 3.87356 * v_max_total_mb + 417692;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;
         
         -- Half Rack
         ELSIF (v_num_disks > 36 AND v_num_disks <= 84) THEN 
            -- Use eqn: y = 2.02222 x+56441.6
            v_required_free_mb := 2.02222 * v_max_total_mb + 56441.6;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;

         -- Full rack is most conservative, it will be default
         ELSE
            -- Use eqn: y = 2.14077 x+54276.4
            v_required_free_mb := 2.14077 * v_max_total_mb + 54276.4;
            IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;      
         
         END IF;
      
         -- DISK usable file MB
         v_usable_mb := ROUND((dg.free_mb - v_required_free_mb)/3);      
         v_disk_desc := 'TWO disks';   
         
         -- CELL usable file MB
         v_one_cell_usable_mb := ROUND( (dg.free_mb - v_one_cell_req_mir_free_mb)/3 );
       
      END IF;
      
      DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------');
      DBMS_OUTPUT.PUT_LINE('DG Name: '||LPAD(dg.name,v_offset-9));
      DBMS_OUTPUT.PUT_LINE('DG Type: '||LPAD(dg.type,v_offset-9));
      DBMS_OUTPUT.PUT_LINE('Num Disks: '||LPAD(TO_CHAR(v_num_disks),v_offset-11));
      DBMS_OUTPUT.PUT_LINE('Disk Size MB: '||LPAD(TO_CHAR(v_max_total_mb,'999,999,999,999'),v_offset-14));  
      DBMS_OUTPUT.PUT_LINE('.  .  .');
      DBMS_OUTPUT.PUT_LINE('DG Total MB: '||LPAD(TO_CHAR(dg.total_mb,'999,999,999,999'),v_offset-13));
      DBMS_OUTPUT.PUT_LINE('DG Used MB: '||LPAD(TO_CHAR(dg.total_mb - dg.free_mb,'999,999,999,999'),v_offset-12));
      DBMS_OUTPUT.PUT_LINE('DG Free MB: '||LPAD(TO_CHAR(dg.free_mb,'999,999,999,999'),v_offset-12));
      DBMS_OUTPUT.PUT_LINE('.  .  .');     
      DBMS_OUTPUT.PUT_LINE('One Cell Required Mirror Free MB: '||LPAD(TO_CHAR(ROUND(v_one_cell_req_mir_free_mb),'999,999,999,999'),v_offset-34));
      DBMS_OUTPUT.PUT_LINE('.  .  .');          
      DBMS_OUTPUT.PUT_LINE('Disk Required Mirror Free MB: '||LPAD(TO_CHAR(ROUND(v_required_free_mb),'999,999,999,999'),v_offset-30));
      DBMS_OUTPUT.PUT_LINE('.  .  .');          
      DBMS_OUTPUT.PUT_LINE('Disk Usable File MB: '||LPAD(TO_CHAR(ROUND(v_usable_mb),'999,999,999,999'),v_offset-21));   
      DBMS_OUTPUT.PUT_LINE('Cell Usable File MB: '||LPAD(TO_CHAR(ROUND(v_one_cell_usable_mb),'999,999,999,999'),v_offset-21));   
      DBMS_OUTPUT.PUT_LINE('.  .  .');
      
      IF v_enuf_free THEN 
         DBMS_OUTPUT.PUT_LINE('Enough Free Space to Rebalance after loss of '||v_disk_desc||': PASS');
      ELSE
         DBMS_OUTPUT.PUT_LINE('Enough Free Space to Rebalance after loss of '||v_disk_desc||': FAIL');
      END IF;   

     IF dg.type = 'NORMAL' THEN
        -- Calc Free Space for Rebalance Due to Cell Failure
        IF v_req_mirror_free_adj < dg.free_mb THEN 
          DBMS_OUTPUT.PUT_LINE('Enough Free Space to Rebalance after loss of ONE cell: PASS');
        ELSE
          DBMS_OUTPUT.PUT_LINE('Enough Free Space to Rebalance after loss of ONE cell: WARNING (cell failure is very rare)');
        END IF;         
   ELSE
        -- Calc Free Space for Rebalance Due to Single Cell Failure
        IF v_one_cell_req_mir_free_mb < dg.free_mb THEN 
          DBMS_OUTPUT.PUT_LINE('Enough Free Space to Rebalance after loss of ONE cell: PASS');
        ELSE
          DBMS_OUTPUT.PUT_LINE('Enough Free Space to Rebalance after loss of ONE cell: WARNING (cell failure is very rare and high redundancy offers ample protection already)');
        END IF;
        
   END IF;
      
   END LOOP;
   
   <<the_end>>
   DBMS_OUTPUT.PUT_LINE('.  .  .');  
   DBMS_OUTPUT.PUT_LINE('Script completed.');
   
END;
/





This is the latest copy of the Oracle-provided script from Doc 1551288.1 as of today; it is posted below.

SET SERVEROUTPUT ON
SET LINES 155
SET PAGES 0
SET TRIMSPOOL ON

DECLARE
   v_space_reserve_factor NUMBER := 0.15;
   v_num_disks    NUMBER;
   v_group_number   NUMBER;
   v_max_total_mb   NUMBER;
   v_max_used_mb NUMBER;
   v_fg_count   NUMBER;

   v_required_free_mb   NUMBER;
   v_usable_mb      NUMBER;
   v_cell_usable_mb   NUMBER;
   v_one_cell_usable_mb   NUMBER;
   v_enuf_free      BOOLEAN := FALSE;
   v_enuf_free_cell   BOOLEAN := FALSE;

   v_req_mirror_free_adj_factor   NUMBER := 1.10;
   v_req_mirror_free_adj         NUMBER := 0;
   v_one_cell_req_mir_free_mb     NUMBER  := 0;

   v_disk_desc      VARCHAR(10) := 'SINGLE';
   v_offset      NUMBER := 50;

   v_db_version   VARCHAR2(8);
   v_inst_name    VARCHAR2(1);

   v_dg_pct_msg   VARCHAR2(500);
   v_cfc_fail_msg VARCHAR2(500);

BEGIN

   SELECT substr(version,1,8), substr(instance_name,1,1)    INTO v_db_version, v_inst_name    FROM v$instance;

   IF v_inst_name <> '+' THEN
      DBMS_OUTPUT.PUT_LINE('ERROR: THIS IS NOT AN ASM INSTANCE!  PLEASE LOG ON TO AN ASM INSTANCE AND RE-RUN THIS SCRIPT.');
      GOTO the_end;
   END IF;

    DBMS_OUTPUT.PUT_LINE('------ DISK and CELL Failure Diskgroup Space Reserve Requirements  ------');
    DBMS_OUTPUT.PUT_LINE(' This procedure determines how much space you need to survive a DISK or CELL failure. It also shows the usable space ');
    DBMS_OUTPUT.PUT_LINE(' available when reserving space for disk or cell failure (loss of cell is rare and not usually a concern).  ');
    DBMS_OUTPUT.PUT_LINE(' These required mirror and usable space assume space utilized to full capacity - a worst case condition.');
    DBMS_OUTPUT.PUT_LINE(' Please see MOS note 1551288.1 for more information.  ');
    DBMS_OUTPUT.PUT_LINE('.  .  .');
    DBMS_OUTPUT.PUT_LINE(' Description of Derived Values:');
    DBMS_OUTPUT.PUT_LINE(' Recommended Reserve MB           : Space needed to rebalance after loss of single or double disk failure (for normal or high redundancy)');
    DBMS_OUTPUT.PUT_LINE(' Disk Usable File MB              : Usable space available after reserving space for disk failure and accounting for mirroring');
    DBMS_OUTPUT.PUT_LINE(' PCT Util                         : Percent of Total Diskgroup Space Utilized');
    DBMS_OUTPUT.PUT_LINE(' DFC                              : Disk Failure Coverage Check (PASS = able to rebalance after loss of single disk)');
   DBMS_OUTPUT.PUT_LINE('.  .  .');

   DBMS_OUTPUT.PUT_LINE('ASM Version is '||v_db_version);


-- Set up headings
      DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------------------------------------------------------------------------');
      DBMS_OUTPUT.PUT('|          ');
      DBMS_OUTPUT.PUT('|         ');
      DBMS_OUTPUT.PUT('|     ');
      DBMS_OUTPUT.PUT('|     ');
      DBMS_OUTPUT.PUT('|            ');
      DBMS_OUTPUT.PUT('|                ');
      DBMS_OUTPUT.PUT('|                ');
      DBMS_OUTPUT.PUT('|                ');
      DBMS_OUTPUT.PUT('|Recommended     ');
      DBMS_OUTPUT.PUT('|                ');
      DBMS_OUTPUT.PUT('|       |');    
      DBMS_OUTPUT.PUT_LINE('    |');
      -- next row
      DBMS_OUTPUT.PUT('|          ');
      DBMS_OUTPUT.PUT('|DG       ');
      DBMS_OUTPUT.PUT('|Num  ');
      DBMS_OUTPUT.PUT('|Num  ');     
      DBMS_OUTPUT.PUT('|Disk Size   ');
      DBMS_OUTPUT.PUT('|DG Total        ');
      DBMS_OUTPUT.PUT('|DG Used         ');
      DBMS_OUTPUT.PUT('|DG Free         ');
      DBMS_OUTPUT.PUT('|Reserve         ');
      DBMS_OUTPUT.PUT('|Disk Usable     ');
      DBMS_OUTPUT.PUT('|PCT    |');
      DBMS_OUTPUT.PUT_LINE('    |');
      -- next row
      DBMS_OUTPUT.PUT('|DG Name   ');
      DBMS_OUTPUT.PUT('|Type     ');
      DBMS_OUTPUT.PUT('|FGs  ');
      DBMS_OUTPUT.PUT('|Disks');
      DBMS_OUTPUT.PUT('|MB          ');
      DBMS_OUTPUT.PUT('|MB              ');
      DBMS_OUTPUT.PUT('|MB              ');
      DBMS_OUTPUT.PUT('|MB              ');
      DBMS_OUTPUT.PUT('|MB              ');
      DBMS_OUTPUT.PUT('|File MB         ');
      DBMS_OUTPUT.PUT('|Util   ');
      DBMS_OUTPUT.PUT_LINE('|DFC |');
      DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------------------------------------------------------------------------');

   FOR dg IN (SELECT name, type, group_number, total_mb, free_mb, required_mirror_free_mb FROM v$asm_diskgroup ORDER BY name) LOOP

      v_enuf_free := FALSE;

      -- Find largest amount of space allocated to a cell
      SELECT sum(disk_cnt), max(max_total_mb), max(sum_used_mb), count(distinct failgroup)
     INTO v_num_disks,v_max_total_mb, v_max_used_mb, v_fg_count
      FROM (SELECT failgroup, count(1) disk_cnt, max(total_mb) max_total_mb, sum(total_mb - free_mb) sum_used_mb
      FROM v$asm_disk
     WHERE group_number = dg.group_number and failgroup_type = 'REGULAR'
     GROUP BY failgroup);

   -- Amount to reserve depends on version and number of FGs
   IF  ((v_db_version like '12.2%') or (v_db_version like '18%') or  (v_db_version like '19%')) THEN
     IF v_fg_count < 5 THEN
     v_space_reserve_factor := 0.15 ;
     v_dg_pct_msg := v_dg_pct_msg||'Diskgroup '||dg.name||' using reserve factor of 15% '||chr(10);
     ELSE 
       v_space_reserve_factor := 0.09 ;
       v_dg_pct_msg := v_dg_pct_msg||'Diskgroup '||dg.name||' using reserve factor of 9% '||chr(10);
     END IF;
   ELSIF ( (v_db_version like '12.1%' ) or (v_db_version like '11.2.0.4%') ) THEN
       v_space_reserve_factor := 0.15 ;     
       v_dg_pct_msg := v_dg_pct_msg||'Diskgroup '||dg.name||' using reserve factor of 15% '||chr(10);
   ELSE 
       v_space_reserve_factor := 0.15 ;
       v_dg_pct_msg := v_dg_pct_msg||'Diskgroup '||dg.name||' using reserve factor of 15% '||chr(10);
   END IF;

   v_required_free_mb := v_space_reserve_factor * dg.total_mb;
   IF dg.free_mb > v_required_free_mb THEN v_enuf_free := TRUE; END IF;

IF dg.type = 'NORMAL' THEN

-- DISK usable file MB
v_usable_mb := ROUND((dg.free_mb - v_required_free_mb)/2);

ELSIF dg.type = 'HIGH' THEN
-- HIGH redundancy
-- DISK usable file MB
v_usable_mb := ROUND((dg.free_mb - v_required_free_mb)/3);
 
ELSIF dg.type = 'EXTEND' THEN
-- EXTENDED redundancy for stretch clusters

-- DISK usable file MB
v_usable_mb := ROUND((dg.free_mb - v_required_free_mb)/4);

ELSE
-- We don't know this type...maybe FLEX DG - not enough info to say 
v_usable_mb := NULL;

END IF;
  
      DBMS_OUTPUT.PUT('|'||RPAD(dg.name,v_offset-40));
      DBMS_OUTPUT.PUT('|'||RPAD(nvl(dg.type,'  '),v_offset-41));
      DBMS_OUTPUT.PUT('|'||LPAD(TO_CHAR(v_fg_count),v_offset-45));
      DBMS_OUTPUT.PUT('|'||LPAD(TO_CHAR(v_num_disks),v_offset-45));
      DBMS_OUTPUT.PUT('|'||TO_CHAR(v_max_total_mb,'999,999,999'));
      DBMS_OUTPUT.PUT('|'||TO_CHAR(dg.total_mb,'999,999,999,999'));
      DBMS_OUTPUT.PUT('|'||TO_CHAR(dg.total_mb - dg.free_mb,'999,999,999,999'));
      DBMS_OUTPUT.PUT('|'||TO_CHAR(dg.free_mb,'999,999,999,999'));
      DBMS_OUTPUT.PUT('|'||TO_CHAR(ROUND(v_required_free_mb),'999,999,999,999'));
      DBMS_OUTPUT.PUT('|'||TO_CHAR(ROUND(v_usable_mb),'999,999,999,999'));

     -- Calc Disk Utilization Percentage
      IF dg.total_mb > 0 THEN
         DBMS_OUTPUT.PUT('|'||TO_CHAR((((dg.total_mb - dg.free_mb)/dg.total_mb)*100),'999.9')||CHR(37));
      ELSE
         DBMS_OUTPUT.PUT('|       ');
      END IF;

      IF v_enuf_free THEN
         DBMS_OUTPUT.PUT_LINE('|'||'PASS|');
      ELSE
         DBMS_OUTPUT.PUT_LINE('|'||'FAIL|');
      END IF;


   END LOOP;

     DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------------------------------------------------------------------------');
   <<the_end>>

   DBMS_OUTPUT.PUT_LINE(v_dg_pct_msg);

   IF v_cfc_fail_msg is not null THEN
      DBMS_OUTPUT.PUT_LINE('Cell Failure Coverage Freespace Failures Detected. Warning Message Follows.');
      DBMS_OUTPUT.PUT_LINE(v_cfc_fail_msg);
   END IF;

   DBMS_OUTPUT.PUT_LINE('.  .  .');
   DBMS_OUTPUT.PUT_LINE('Script completed.');

END;
/

WHENEVER SQLERROR EXIT FAILURE;




Dropping Disk :

Once we have done our checks, we can proceed to drop the disks from all disk groups assigned to the cell that is going for maintenance.


SQL> ALTER DISKGROUP   DATA01 drop DISKS IN FAILGROUP cell01 rebalance power 32 nowait;
SQL> ALTER DISKGROUP  RECO  drop DISKS IN FAILGROUP cell01 rebalance power 32 nowait;
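
Wait for the rebalance triggered by the drop to complete before powering the cell off. A quick way to monitor it (no rows returned means no rebalance is running):

select inst_id, group_number, operation, state, power, est_minutes
from   gv$asm_operation;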



Once the disk is repaired, we can re-add it to all disk groups.

How To Add Back An ASM Disk or Failgroup (Normal or High Redundancy) After A Transient Failure Occurred Or When The DISK_REPAIR_TIME Attribute Expired (10.1 to 12.1)? (Doc ID 946213.1)


SQL> ALTER DISKGROUP <DG_NAME> ADD FAILGROUP <FG_NAME> DISK '/dev/rdsk/c3t13xxxx' REBALANCE POWER <power number 1-11>;
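
After the disks are added back, confirm from ASM that they are online again; a sketch (the failure group name cell01 is taken from the example above):

select failgroup, mode_status, count(*) disks
from   v$asm_disk
where  lower(failgroup) = 'cell01'
group  by failgroup, mode_status;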


 



Procedure to reboot or shut down a storage cell without affecting ASM


Once the DBA has confirmed, using the script above, that the cell can be taken offline, the SA will perform the steps below.


A) Raise an OEM blackout covering ALL ASM instances and the affected storage cell prior to shutting down or rebooting the storage cell.



B) Check current ASM mode status of each grid disk

# cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome, errorCount, status

If one or more disks return asmdeactivationoutcome='No', wait a minute and repeat the step above. If all disks return asmdeactivationoutcome='Yes', proceed with the remaining steps.



C) Make all grid disks inactive in ASM

# cellcli -e alter griddisk all inactive

Note : This action could take 10 minutes or longer depending on activity. It is very important to make sure you were able to offline all the disks successfully before shutting down the cell services. Inactivating the grid disks automatically OFFLINEs the corresponding disks in the ASM instance.
Sometimes “alter griddisk all inactive” inactivates all grid disks, but some of them remain online even after waiting 30 to 60 minutes. To prevent this, specify all the grid disks by name instead of “all”, and inactivate them individually.

For example:
# cellcli -e alter griddisk SYSTEMDG_CD_02_dmorlcel11, SYSTEMDG_CD_03_dmorlcel11, SYSTEMDG_CD_04_dmorlcel11 inactive


If it does not work, you’ll have to set these disks offline from ASM, as the “oragrid” user. This needs to be done by a DBA. For example:
$ sqlplus / as sysasm
> alter diskgroup SYSTEMDG offline disk SYSTEMDG_CD_02_dmorlcel11

You probably only need to do it on one disk, and the rest will be set offline at the same time.



D)  Confirm that the griddisks are now offline

# cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome, errorCount, status
The output should show asmmodestatus=UNUSED and asmdeactivationoutcome=Yes for all griddisks once the disks are offline in ASM
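
The same can be cross-checked from the ASM instance; once the grid disks are inactive, the disks belonging to that cell should be reported as OFFLINE. A sketch:

select g.name diskgroup, d.failgroup, d.name disk, d.mode_status
from   v$asm_disk d, v$asm_diskgroup g
where  d.group_number = g.group_number
and    d.mode_status <> 'ONLINE'
order  by 1, 2, 3;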


Check griddisks
# cellcli -e list griddisk


E) If all are offline, proceed with storage cell reboot or power down.
IMPORTANT: After the cell reboot is complete, re-enable the grid disks in ASM:
# cellcli -e alter griddisk all active




To replace a flash disk due to disk failure, perform the following procedure:


Doc ID 1113023.1


Step 1.  Run the following command to stop the cell services:

CellCLI> ALTER CELL SHUTDOWN SERVICES ALL

The preceding command checks whether any disks are offline, in predictive failure status or in need of resilvering. If Oracle ASM redundancy is intact, the command takes the grid disks offline in ASM and then proceeds to stop the cell services. If an error is returned instead, it is not safe to stop the cell services because some disk group may be forced to dismount due to reduced redundancy.

If such an error is encountered, restore the Oracle ASM disk group redundancy and retry the command once all disk statuses are back to normal.


Step 2. Shut down the cell.
Step 3. Replace the failed flash disk based on the PCI number and FDOM number.

Step 4. Power up the cell. The cell services will be started automatically. As part of the cell service startup, all grid disks will also be auto-onlined in Oracle ASM.
Step 5. Verify that all grid disks have been successfully brought online using the following command:

CellCLI> LIST GRIDDISK ATTRIBUTES NAME, ASMMODESTATUS




________________________
Reference : 
________________________

Steps to shut down or reboot an Exadata storage cell without affecting ASM (Doc ID 1188080.1)
Script to Calculate New Grid Disk and Disk Group Sizes in Exadata (Doc ID 1464809.1)
Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1)

https://docs.oracle.com/cd/E80920_01/DBMMN/maintaining-exadata-storage-servers.htm#DBMMN-GUID-96881470-4D44-4AF6-8A29-33E3164CD244

Sunday, January 16, 2022

Flashback Oracle RAC Database with Data Guard -- Including Pluggable Database / PDB

 


##########   Creating  Restore Point  ############

1)  Stop Redo Transport and Redo Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='defer'  sid='*' ;   ( On Primary )
alter database recover managed standby database cancel ;   ( On Standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-OFF';
edit database Boston set state='APPLY-OFF';



2) Create Restore Point (on the Standby first)

create  restore point GRP_DG GUARANTEE FLASHBACK DATABASE ;   ( On Standby ) 
create  restore point GRP_PR  GUARANTEE FLASHBACK DATABASE ;   ( On Primary ) 
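
The restore points can be verified on both sites before proceeding:

select name, scn, time, guarantee_flashback_database, storage_size
from   v$restore_point;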



3) Enable Log Transport and Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='enable'  sid='*' ;   ( On Primary )
alter database recover managed standby database using current logfile disconnect ;   ( On Standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-ON';
edit database Boston set state='APPLY-ON';





##########  Flashback Database   ############


1)  Stop Redo Transport and Redo Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='defer'  sid='*' ;   ( On Primary )
alter database recover managed standby database cancel ;   ( On Standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-OFF';
edit database Boston set state='APPLY-OFF';





2) Flashback the Primary Database to the restore point

srvctl stop database -d DB_NAME 
srvctl start instance -d DB_NAME -i instance_name -o mount 
flashback database to restore point GRP_PR ; 
alter database open resetlogs ; 
srvctl stop instance -d DB_NAME -i instance_name  
srvctl start database -d DB_NAME 


3) Flashback the Standby Database to the restore point

srvctl stop database -db  DB_NAME -stopoption immediate 
srvctl start instance -d DB_NAME -i instance_name -o mount 
flashback database to restore point GRP_DG ; 
srvctl stop instance -db DB_NAME -i instance_name  
srvctl start database -db DB_NAME -startoption  mount 


4) Enable Log Transport and Apply

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='enable'  SID='*';   ( On Primary )
alter database recover managed standby database using current logfile disconnect ;   ( On Standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-ON';
edit database Boston set state='APPLY-ON';



5) Restart the standby database; the standby passes through the resetlogs automatically and there should be no lag.
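
The transport and apply lag can be confirmed on the standby once MRP is running again; a quick check:

select name, value, time_computed
from   v$dataguard_stats
where  name in ('transport lag', 'apply lag');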




########## Drop Restore Point   ############



1)  Stop Redo Transport and Redo Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='defer'  SID='*';   ( On Primary )
alter database recover managed standby database cancel ;   ( On Standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-OFF';
edit database Boston set state='APPLY-OFF';



2) Drop Restore  Point  

Drop  restore point GRP_PR  ;   ( On Primary ) 
DROP  restore point GRP_dg   ;   ( On Standby ) 




3) Enable Log Transport and Apply 

A) When we don't have a Broker configuration
alter system set log_archive_dest_state_n='enable'  SID='*';   ( On Primary )
alter database recover managed standby database using current logfile disconnect ;   ( On Standby )

B) When we have a Broker configuration
edit database Chicago set state='TRANSPORT-ON';
edit database Boston set state='APPLY-ON';





########## Flashback  Pluggable Database ############


Please ensure the standby database is placed in mount state before we perform a flashback of a pluggable database. Once the flashback of the pluggable database is performed on the primary site, no action is needed on the standby site.


You may see MRP getting terminated saying flashback is not on, and we may have to enable flashback temporarily on the standby.



There are options for creating restore points at the PDB level. If you connect to the PDB you can issue the commands as normal.

CONN / AS SYSDBA

ALTER SESSION SET CONTAINER=pdb1;

-- Normal restore point.
CREATE RESTORE POINT pdb1_before_changes;
DROP RESTORE POINT pdb1_before_changes;

-- Guaranteed restore point.
CREATE RESTORE POINT pdb1_before_changes GUARANTEE FLASHBACK DATABASE;
DROP RESTORE POINT pdb1_before_changes;



Flashback : If PDB uses local undo:

ALTER PLUGGABLE DATABASE pdb1 CLOSE;
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT pdb1_before_changes;
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
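
In 12.2 and later the PDB-level restore points are visible per container. A quick check (assuming the CON_ID column is populated in V$RESTORE_POINT on your version):

select con_id, name, scn, guarantee_flashback_database
from   v$restore_point
order  by con_id, scn;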




What if LOCAL_UNDO is not enabled?


SQL> select * from database_properties where property_name='LOCAL_UNDO_ENABLED';

PROPERTY_NAME           PROPERTY_VALUE   DESCRIPTION
----------------------- ---------------- ---------------------------------------------
LOCAL_UNDO_ENABLED      TRUE             true if local undo is enabled


SQL> SELECT C.CON_ID, P.NAME , C.TABLESPACE_NAME, C.STATUS FROM CDB_TABLESPACES C,V$PDBS P WHERE  C.TABLESPACE_NAME LIKE 'UNDO%' AND  C.CON_ID=P.CON_ID ORDER BY C.CON_ID;



Switching to Local Undo Mode

shutdown immediate;
startup upgrade;

alter database local undo on;

shutdown immediate;
startup;






In Oracle 12.2, if local undo is not enabled, we can use the AUXILIARY DESTINATION clause, which creates an auxiliary instance with the container database's SYSTEM, SYSAUX and UNDO tablespaces and then restores the PDB by rolling back transactions. However, this flashback will take a long time and will consume additional space in the auxiliary destination.

Steps for SHARED UNDO:

SQL> alter pluggable database PDB1 close;
SQL>flashback pluggable database PDB1  to restore point STAGE1 auxiliary destination '/oradata/aux_inst';
SQL> alter pluggable database PDB1  open resetlogs;




If the PDB uses shared undo and the restore point was created while the PDB was closed (i.e. a clean restore point):

SQL> alter pluggable database PDB1 close;
SQL> flashback pluggable database PDB1 to clean restore point TEST1;
SQL> alter pluggable database PDB1 open resetlogs;



Prior to Oracle 19.9, setting the UNDO_RETENTION parameter in the root container meant that value was set for all pluggable databases also. From Oracle 19.9 onward this is not the case. Now there are two ways to set the parameter in the root container.

 
-- Just the root container.
alter system set undo_retention=3000;

-- The root container and all PDBs.
alter system set undo_retention=3000 container=all;




With Local Undo enabled, How To Set Specific Undo Tablespace For Each PDB Instance On RAC Node (Doc ID 2673826.1)





Oracle 19c New Feature Automatic Flashback of Standby Database

Please ensure the standby database is placed in mount state before the flashback is performed, so the automatic flashback can run on the standby site.


One of the new features in Oracle 19c is that when a flashback or point-in-time recovery is performed on the primary database in an Oracle Data Guard configuration, the same operation is also performed on the standby database.

Another Oracle 19c feature is that when we create a restore point on the primary database, a corresponding restore point is automatically created on the standby database as well.
These restore points are called Replicated Restore Points and have the restore point name suffixed with "_PRIMARY".
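
On the standby, the replicated restore point should appear shortly after it is created on the primary; a sketch (the REPLICATED column of V$RESTORE_POINT exists from 19c onward):

select name, scn, replicated from v$restore_point;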


Flashback is performed to the restore point created earlier and we then open the database with the RESETLOGS option.

The standby database is placed in MOUNT mode and we will see that the MRP process on the standby database will start and perform the automatic flashback operation on the standby database as well.
When we see the message “Flashback Media Recovery Complete” in the standby database alert log, we can now open the standby database.



 ############ Views :  ############ 

show parameters flash

show parameter db_recovery_file_dest

select flashback_ON from v$database;

select * from v$flashback_database_log;

SELECT * FROM v$flashback_database_stat;

SELECT scn, database_incarnation#, guarantee_flashback_database, storage_size, time, name
FROM gv$restore_point;

select name from v$recovery_file_dest;

select space_limit,space_used,space_reclaimable,number_of_files from   v$recovery_file_dest;

select * from v$flash_recovery_area_usage;

SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

SELECT 
NAME, 
TO_CHAR(SPACE_LIMIT, '999,999,999,999') AS SPACE_LIMIT, 
TO_CHAR(SPACE_LIMIT - SPACE_USED + SPACE_RECLAIMABLE, 
'999,999,999,999') AS SPACE_AVAILABLE, 
ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1) 
AS PERCENT_FULL 
FROM V$RECOVERY_FILE_DEST;




########## Reference   ############

Metalink Doc   2338328.1 


Flashback of PDB failed with ORA-39867: Clean PDB Restore Point is On An Orphan Incarnation Of The PDB (Doc ID 2716855.1)

Running Scripts on Multiple Oracle PDBs in a Container Database


 
There are situations where we need to run a script or task on multiple PDBs in a container database. There are three options we can use:

1) Using catcon.pl
2) Using a shell script
3) Using a PL/SQL block



#####################################
Using catcon.pl  : 
#####################################

catcon.pl is an Oracle-provided script located under $ORACLE_HOME/rdbms/admin:

cd $ORACLE_HOME/rdbms/admin/

perl catcon.pl -d $ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED'   -l /tmp/utlrp_output -b utlrp_output utlrp.sql
perl catcon.pl -d $ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED'   -l /tmp/utlrp_output -b utlrp_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual"
perl catcon.pl -d $ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED'   -l /tmp/utlrp_output -b utlrp_output -- --x"SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual" --x"SELECT USER FROM dual"
 

Usage: catcon  [-h, --help]
                 [-u, --usr username
                   [{/password | -w, --usr_pwd_env_var env-var-name}]]
                 [-U, --int_usr username
                   [{/password | -W, --int_usr_pwd_env_var env-var-name]]
                 [-d, --script_dir directory]
                 [-l, --log_dir directory]
                 [{-c, --incl_con | -C, --excl_con} container]
                 [-p, --catcon_instances degree-of-parallelism]
                 [-z, --ez_conn EZConnect-strings]
                 [-e, --echo]
                 [-s, --spool]
                 [-E, --error_logging
                   { ON | errorlogging-table-other-than-SPERRORLOG } ]
                 [-F, --app_con Application-Root]
                 [-V, --ignore_errors errors-to-ignore ]
                 [-I, --no_set_errlog_ident]
                 [-g, --diag]
                 [-v, --verbose]
                 [-f, --ignore_unavailable_pdbs]
                 [--fail_on_unopenable_pdbs]
                 [-r, --reverse]
                 [-R, --recover]
                 [-m, --pdb_seed_mode pdb-mode]
                 [--force_pdb_mode pdb-mode]
                 [--all_instances]
                 [--upgrade]
                 [--ezconn_to_pdb pdb-name]
                 [--sqlplus_dir directory]
                 [--dflt_app_module app-module]
                 -b, --log_file_base log-file-name-base
                 --
                 { sqlplus-script [arguments] | --x } ...
 





#####################################
Pl sql block 
#####################################

Below is a sample PL/SQL block we can use:



set serveroutput on

declare
type names_t is table of v$pdbs.name%type;
names names_t;
type open_modes_t is table of v$pdbs.open_mode%type;
open_modes open_modes_t;
type privilege_user_t is table of dba_sys_privs.grantee%type;
privilege_user privilege_user_t;
begin
        -- collect all PDBs except the seed and any PDB we want to skip
        select name, open_mode
        bulk collect
        into names, open_modes
        from v$pdbs
        where name not in ('PDB$SEED' , 'DCPDB01');
                for j in 1 .. names.count
                loop
                -- only switch into PDBs that are actually open
                if open_modes (j) <> 'MOUNTED'
                then
                execute immediate 'alter session set container= "' || names (j) || '"';
                select grantee bulk collect into privilege_user from dba_sys_privs
                where (privilege like '%ANY%' or privilege ='DBA') and grantee like 'U%'
                group by  grantee, privilege
                order by grantee;
                -- DBMS_OUTPUT.PUT_LINE expects a string, so print the collection element by element
                for k in 1 .. privilege_user.count
                loop
                DBMS_OUTPUT.put_line(names (j) || ' : ' || privilege_user (k));
                end loop;
                end if;
        end loop;
end;
/

Note: DBMS_OUTPUT.put_line() expects a string argument, so a collection like privilege_user cannot be passed to it directly; it has to be printed element by element, e.g. privilege_user(k), as done in the inner loop above.






#####################################
Using Shell Script : 
#####################################

Below is a shell script I wrote for fetching data from multiple PDBs.


#!/bin/bash
export LOG_FILE=/home/oracle/pdb.lst 
export OUTPUT_FILE=/home/oracle/pdbout.log 
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba"  << EOF >> $LOG_FILE
set SQLBLANKLINES OFF 
set echo off 
set feedback off 
SET SERVEROUTput off 
set heading off 
set termout off 
set trimspool off 
set wrap off 
select name from v$pdbs  where name not in ('PDB\$SEED','DCPDB01') and open_mode  <> 'MOUNTED' order by 1 ;
exit 
EOF 

for pdb_name in $(cat /home/oracle/pdb.lst)
do 
$ORACLE_HOME/bin/sqlplus -s "/ as  sysdba" << EOF >> $OUTPUT_FILE
set line 40
col name format a20
col value format a20 
set SQLBLANKLINES OFF 
set echo off 
set feedback off 
SET SERVEROUTput off 
set heading off 
set termout off 
set trimspool off 
set wrap off 
set COLSEP '|'
alter session set container= $pdb_name ; 
select name , value from v\$nls_parameters , v\$pdbs where parameter='NLS_CHARACTERSET'; 
EXIT 
EOF
done 


Thursday, January 13, 2022

Oracle Multitenant / Container- Pluggable Database -- Pdb / Cdb


                     Oracle Multitenant/ Container  Database 



1) In the previous release the character set for the root container and all pluggable databases associated with it had to be the same. This could limit the movement of PDBs and make consolidation difficult where a non-standard character set was required.
In Oracle Database 12c Release 2 (12.2) a PDB can use a different character set from the CDB, provided the character set of the CDB is AL32UTF8, which is now the default character set when using the Database Configuration Assistant (a quick per-container check is shown right after this list).

2) When we open the container database, the seed database is opened in read-only mode and the PDBs are left in mount state.

3) Background processes and memory allocation exist at the CDB level only.

4) Log switches occur only at the multitenant container database (CDB) level.

5) Instance recovery is always performed at the CDB level.

6) Patching and upgrades are done at the CDB level.

7) User creation can be done at the root level and at the PDB level. A user created in the root is considered common and is created in all PDBs, though its schema objects can vary from PDB to PDB.

8) While creating a CDB we can place the PDB and seed files in different directories using the following:

The SEED FILE_NAME_CONVERT Clause
The PDB_FILE_NAME_CONVERT Initialization Parameter 

9) A PDB has its own SYSTEM, SYSAUX and TEMP tablespaces. It can also contain other user-created tablespaces.

10) For  revoking  privilege from common user 

If the current container is the root: 

 / Specify CONTAINER = CURRENT to revoke a locally granted system privilege, object privilege, or role from a common user or common role. The privilege or role is revoked from the user or role only in the root. This clause does not revoke privileges granted with CONTAINER = ALL. 

/ Specify CONTAINER = ALL to revoke a commonly granted system privilege, object privilege on a common object, or role from a common user or common role. The privilege or role is revoked from the user or role across the entire CDB. This clause can revoke only a privilege or role granted with CONTAINER = ALL from the specified common user or common role. This clause does not revoke privileges granted locally with CONTAINER = CURRENT. However, any locally granted privileges that depend on the commonly granted privilege being revoked are also revoked. 

If you omit this clause, then CONTAINER = CURRENT is the default.

11) When we restart the CDB, by default the seed will be in read-only mode and all PDBs will be in mount state.
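
For point (1), the character set of each container can be compared from the root. A sketch, assuming the CDB_PROPERTIES view is available (12c and later):

select con_id, property_value as character_set
from   cdb_properties
where  property_name = 'NLS_CHARACTERSET'
order  by con_id;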



How to check if a database is a Container Database / CDB: 
Select  CDB from v$database ; 


Undo Management : 

There is interesting behavior in 12.1.0.1 when creating an undo tablespace in a PDB. With the new multitenant architecture the undo tablespace resides at the CDB level and all PDBs share the same UNDO tablespace. When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.
In Oracle 12c Release 2 each container in an instance can use its own undo tablespace. This new undo management mechanism is called local undo mode, whilst that of previous releases is now known as shared undo mode. Local undo mode is the default mode in newly created databases, so you will probably only need to consider switching undo modes for upgraded instances.
Prior to Oracle 19.9, setting the UNDO_RETENTION parameter in the root container meant that value was set for all pluggable databases also. From Oracle 19.9 onward this is not the case. Now there are two ways to set the parameter in the root container.
In addition, the UNDO_RETENTION parameter can be set separately in each PDB, provided local undo is being used.


column property_name format a30
column property_value format a30
select property_name, property_value
from   database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED             TRUE
SQL>



select con_id, tablespace_name
from   cdb_tablespaces
where  contents = 'UNDO'
order by con_id;
    CON_ID TABLESPACE_NAME
---------- ------------------------------
         1 UNDOTBS1
         3 UNDOTBS1
SQL>



conn / as sysdba
shutdown immediate;
startup upgrade;
alter database local undo off;
shutdown immediate;
startup;


conn / as sysdba
-- Just the root container.
alter system set undo_retention=3000;
-- The root container and all PDBs.
alter system set undo_retention=3000 container=all;



Tablespace and Datafile Management : 

Management of the temporary tablespace in a CDB is unchanged from that of a non-CDB database.

A PDB can either have its own temporary tablespace, or if it is created without a temporary tablespace, it can share the temporary tablespace of the CDB.

ALTER SESSION SET CONTAINER = pdb1;
CREATE TABLESPACE dummy
  DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy01.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE users;
ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE temp;

SQL> SELECT CON_ID,TABLESPACE_NAME FROM CDB_TABLESPACES WHERE CON_ID=1;
SQL> SELECT CON_ID,FILE_NAME FROM CDB_DATA_FILES WHERE CON_ID=1;
SQL> SELECT CON_ID,FILE_NAME FROM CDB_TEMP_FILES WHERE CON_ID=1;


select con_id, tablespace_name
from   cdb_tablespaces
where  contents = 'UNDO'
order by con_id;


Connect to a PDB directly with ORACLE_PDB_SID
This is when you set the environment variable ORACLE_PDB_SID on top of ORACLE_SID.
It is done with a trigger called DBMS_SET_PDB.
This trigger most likely comes in with the April 2019 RUs for Oracle Database 19c and 18c. It does not exist in Oracle Database 12.2.0.1 or Oracle 12.1.0.2.

ORACLE_SID defines which SID you connect to; ORACLE_PDB_SID does not work without the correct ORACLE_SID.
It is an AFTER EVENT LOGON trigger that fires when you log on as SYS or SYSTEM.
 
SQL> select trigger_body from dba_triggers where trigger_name = 'DBMS_SET_PDB'
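
A minimal usage sketch (CDB1 and PDB1 are hypothetical names; as noted above, this only applies to SYS/SYSTEM logons on versions that ship the trigger):

$ export ORACLE_SID=CDB1
$ export ORACLE_PDB_SID=PDB1
$ sqlplus / as sysdba
SQL> show con_name    -- should now report PDB1 instead of CDB$ROOT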



Saving the STATE of a PDB 
A PDB defaults to a closed state when the CDB first starts up. Oracle has introduced the "save state" option for the ALTER PLUGGABLE DATABASE command. This command takes the current state of the PDB and preserves that mode after the CDB is restarted.
The save state command is instance specific, so if you are in a RAC environment be sure to use the "instances=all" clause when placing the PDB into a specific state.

Alter pluggable database abc save state instances=all  ; 
alter pluggable database all save state;
select con_name , instance_name , state   from dba_pdb_saved_states ; 
select con_name , state  from  CDB_PDB_SAVED_STATES ; 

The state is only saved and visible in the DBA_PDB_SAVED_STATES view if the container is in READ ONLY or READ WRITE mode. The ALTER PLUGGABLE DATABASE ... SAVE STATE command does not error when run against a container in MOUNTED mode, but nothing is recorded, as this is the default state after a CDB restart.



-->  Open pdbs automatically and  save its state 


 alter pluggable database PDB1 save state;

select con_name, state from dba_pdb_saved_states;

 alter pluggable database PDB2 discard state;



As this feature is not available in 12.1.0.1 , we can create a trigger to open the PDBs automatically, with container database startup.

CREATE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/


--> Open services along with the PDB

alter pluggable database pdborcl open services=('pdborcl_srv');
alter pluggable database pdborcl open services=All;


Reference : 12c services will not start while manually restart pdb (Doc ID 2006021.1)






Removing Pdb from Restricted Mode : 

Usually a PDB ends up in restricted mode due to existing violations. Fix the violations and restart the PDB.


1) Check PDB  status.

 select INST_ID,NAME,OPEN_MODE,RESTRICTED   from gv$pdbs order by 1,2;

2) Check PDB_PLUG_IN_VIOLATIONS.

 select  status, message, action   from  pdb_plug_in_violations  where  status !='RESOLVED';


 3)Bounce all instances and pdbs.

 ALTER PLUGGABLE DATABASE ALL EXCEPT PDB001 OPEN READ WRITE;
 ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE FORCE;



User  and Role  Management :

 Create User c##pwtest identified by password container=all ; 
grant create session to c##pwtest container=all ; 
connect c##pwtest/password 
alter user c##pwtest identified by password  ;
alter user c##pwtest identified by password container=all ;
select username , profile , common ,  con_id  from cdb_users where username='C##a1'; 

 select username,profile,common,con_id from cdb_users where username='C##AA1';

 SELECT ROLE,ROLE_ID,COMMON,CON_ID FROM CDB_ROLES WHERE ROLE='C##ROLE1';
 
 SQL> select username, con_id from cdb_users where username='C##AA1';

 SQL> select d.username, c.con_id,p.name from dba_users d, cdb_users c , v$pdbs p where d.username='C##AA1' and d.username=c.username and p.con_id=c.con_id;

 Create Role C##ROLE1 CONTAINER=ALL ; 
Create Role Lrole container=current ; 
Grant C##ROLE1 to c##user1 ; 
GRANT SELECT ON SYS.V_$SESSION TO C##USER2 container=all ; 

There are some predefined roles that can be viewed using the queries below:
SELECT ROLE, common, con_id FROM cdb_roles order by  role ,  con_id ; 
select  role ,  common  from  dba_roles order by role ; 
select * from session_roles ; 

COL grantee format a12 
col privilege format a30 
select grantee , privilege  , common , con_id from cdb_sys_privs where grantee='C##USER1' ORDER BY 1,3 ; 


#Enabling a Common User to view data of a specific Pdb 
COL USERNAME format a10 
col default_attr format a7 
col owner format a6 
col object_name format a11 
col all_containers format a3 
column container_name format a10 
col con_id format 999 
set pages 100 
set line 200 
select username , default_attr , owner , object_name ,  all_containers , container_name , con_id from  cdb_container_data 
where username  not in ( 'GSMADMIN_INTERNAL','APPQOSSYS','DBSNMP')  ORDER BY  USERNAME ; 

ALTER USER C##USER2 SET CONTAINER_DATA = (CDB$ROOT , PDB2) FOR V_$SESSION CONTAINER=CURRENT   ;



Starting / Shutting down   PDB

ALTER PLUGGABLE DATABASE ALL OPEN issued from the root
ALTER DATABASE OPEN issued from that PDB
ALTER PLUGGABLE DATABASE OPEN issued from that PDB
alter pluggable database PDB2 close immediate ; 
alter pluggable database PDB2 open ; 

alter pluggable database test1_tech close;
alter pluggable database test1_tech close immediate;
alter pluggable database test1_tech open;
alter pluggable database test1_tech open read only ;
alter pluggable database test1_tech open force;

ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN READ ONLY;
ALTER PLUGGABLE DATABASE OPEN FORCE;
ALTER PLUGGABLE DATABASE NOLOGGING;
ALTER PLUGGABLE DATABASE ENABLE FORCE LOGGING;
STARTUP OPEN
STARTUP OPEN READ ONLY
STARTUP RESTRICT OPEN READ ONLY


alter pluggable database all close immediate;
alter pluggable database all open;

ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE <pdb-name-clause> CLOSE [IMMEDIATE];



We can create startup trigger to start all PDB

 CREATE OR REPLACE TRIGGER open_pdbs 
  AFTER STARTUP ON DATABASE 
BEGIN 
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN'; 
END open_pdbs;
/





Backup and  recovery  commands :

1) Backup and recovery can be done at  CDB and PDB level 
 
Backup Commands

RMAN> BACKUP DATABASE ROOT;
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;
RMAN> BACKUP PLUGGABLE DATABASE pdb1  plus archivelog ;
-- It will skip archivelog backup when connected to Pdb 
RMAN> BACKUP DATABASE;
RMAN> BACKUP TABLESPACE system, sysaux, users;
RMAN> BACKUP TABLESPACE pdb1:system, pdb1:sysaux, pdb1:users, pdb2:system;

list backup of PLUGGABLE DATABASE pdb1 ; 
list backup of tablespace PDB1:users ; 




Restore Commands : 

RESTORE TABLESPACE pdb2:system  ; 


RUN {
  SHUTDOWN IMMEDIATE; # use abort if this fails
  STARTUP MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN;
}


RUN {
  SHUTDOWN IMMEDIATE; # use abort if this fails
  STARTUP MOUNT;
  RESTORE DATABASE ROOT;
  RECOVER DATABASE ROOT;
  # Consider recovering PDBs before opening.
  ALTER DATABASE OPEN;
}


RUN {
  ALTER PLUGGABLE DATABASE pdb1 CLOSE;
  SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
  RESTORE PLUGGABLE DATABASE pdb1;
  RECOVER PLUGGABLE DATABASE pdb1;
  ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}



RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1
  UNTIL TIME "TO_DATE('01-JAN-2013 15:00', 'DD-MON-YYYY HH24:MI')"
  AUXILIARY DESTINATION '/u01/aux'  
  REMAP TABLE 'TEST'.'T1':'T1_PREV';


RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1
  UNTIL SCN 5695703
  AUXILIARY DESTINATION '/u01/aux'
  DATAPUMP DESTINATION '/u01/export'
  DUMP FILE 'test_t1_prev.dmp'
  NOTABLEIMPORT;


Datapump :

It’s important to note that Data Pump doesn’t allow you to use a directory that’s owned by the root container or CDB for imports and exports. You also can’t use one that’s owned by a different PDB.

Instead, you need to create a directory under the PDB. The PDB must own the directory for you to use Data Pump export and import functionality.

Using a Data Pump export for PDBs is identical to using a Data Pump export for a non-CDB database. The only difference in using the Data Pump export utility for a PDB is that you must use a connect identifier, or Transparent Network Substrate (TNS) alias, in the export command prompt when you initiate the export. This step ensures that you’re initiating the Data Pump export for a specific PDB.
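
A minimal sketch of the setup (the directory path, the TNS alias pdb1 and the hr schema are hypothetical):

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> CREATE DIRECTORY pdb1_dp_dir AS '/u01/app/oracle/admin/cdb1/dpdump/pdb1';
SQL> GRANT READ, WRITE ON DIRECTORY pdb1_dp_dir TO system;

$ expdp system@pdb1 directory=pdb1_dp_dir dumpfile=pdb1_hr.dmp logfile=pdb1_hr.log schemas=hr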





Parameter check : 

select parameter , value from v$nls_parameters where parameter='NLS_CHARACTERSET'; 

set linesize 300 
column name  format a30 
column value format a30
select b.name ,  a.name ,  a.value$  "Value"  from pdb_spfile$ a ,  v$pdbs b  where a.pdb_uid=b.con_uid  and a.name in ('cpu_count','sga_target')    order by 1,2 ; 


alter session set CONTAINER=PDB1; 
select value, ISPDB_MODIFIABLE  from v$system_parameter where name='db_recovery_file_dest_size';





Managing Services  : 


SQL> connect /@CDB1 as sysdba
SQL> alter session set container=pdb1;
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
SQL> alter session set container=cdb$root;

select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;

select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;




srvctl add service -db CDB01 -service TEST_NEW -pdb TEST
srvctl modify service -db CDB01 -service TEST_NEW -pdb TEST
srvctl remove service -db CDB01 -service TEST_NEW

BEGIN
DBMS_SERVICE.CREATE_SERVICE(
service_name => 'TEST_NEW',
network_name => 'TEST_NEW.com');
END;
/

BEGIN
DBMS_SERVICE.DELETE_SERVICE(
service_name => 'TEST_NEW');
END;
/

SQL> select name, con_id from v$active_services order by 1;




Adding service with domain name : 
-- on primary
srvctl add service -db CDB01_fra3gh -service hr.oracle.com -pdb pdb01 -role primary
srvctl start service -db CDB01_fra3gh -service hr.oracle.com
 
lsnrctl services
...
Service "hr.oracle.com" has 1 instance(s).
...
 
-- on standby
srvctl add service -db CDB01_amsad1 -service hr.oracle.com -pdb pdb01 -role primary
srvctl start service -db CDB01_amsad1 -service hr.oracle.com
 
lsnrctl services
...
Service "hr.oracle.com" has 1 instance(s).
...




Drop Pluggable Database : 

drop pluggable database pdb3_test including datafiles;
 



Rename Pluggable Database : 

alter pluggable database TEST rename global_name to TEST1;
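
The rename requires the PDB to be open in RESTRICTED mode (and on a single instance in RAC); a sketch of the full sequence:

alter pluggable database TEST close immediate;
alter pluggable database TEST open restricted;
alter pluggable database TEST rename global_name to TEST1;
alter pluggable database TEST1 close immediate;
alter pluggable database TEST1 open;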




Pdb  checks :

show con_name 
show con_id 
show pdbs

alter session set container=cdb$root; 

col con_name for a10 
col instance_name for a10 
select con_name , instance_name , state from dba_pdb_saved_states ; 

select name , open_mode from v$pdbs ; 

COLUMN NAME FORMAT A15
COLUMN RESTRICTED FORMAT A10 
COLUMN OPEN_TIME FORMAT A130 
SELECT NAME , OPEN_MODE , RESTRICTED , OPEN_TIME FROM V$PDBS ;

COLUMN NAME FORMAT A8 
SELECT  name , con_id , dbid , con_uid ,  guid from v$containers  order  by  con_id ; 

COLUMN PDB_NAME FORMAT A15 
SELECT PDB_ID , PDB_NAME , STATUS FROM DBA_PDBS order  by pdb_id ; 

COLUMN DB_NAME FORMAT A10
COLUMN CON_ID FORMAT 999
COLUMN PDB_NAME FORMAT A15
COLUMN OPERATION FORMAT A16
COLUMN OP_TIMESTAMP FORMAT A10
COLUMN CLONED_FROM_PDB_NAME FORMAT A15
SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY
WHERE CON_ID > 2
ORDER BY CON_ID;

COLUMN name FORMAT A30
SELECT name, pdb
FROM   v$services
ORDER BY name;

SQL> select tablespace_name, con_id from cdb_tablespaces where con_id =1;
SQL> select file_name, con_id from cdb_data_files where con_id=1;
SQL> select file_name, con_id from cdb_temp_files where con_id =1 ;



col name for a10
col owner for a8
col object_type for a10
select t.con_id,p.name,t.owner,t.object_type,count(*) from cdb_objects t ,v$containers p
where p.con_id = t.con_id and t.status = 'INVALID' group by t.con_id,p.name,t.owner,t.object_type;



-- Size of Pluggable Database : 
select con_id, name, open_mode, total_size/1024/1024/1024 "PDB_SIZE_GB" from v$pdbs;

-- This limit is visible as a property of my PDB:
SQL> select property_name, property_value from database_properties where property_name ='MAX_PDB_STORAGE';



-- And the current size is visible from V$PDBS:
SQL> select con_id,total_size from v$pdbs;





 
SQL> connect / as sysdba
Connected.
SQL> get show_orclpdb1_params.sql
  1  select  pdb.dbid, pdb.con_uid, sp.name, sp.value$
  2  from v$pdbs pdb, pdb_spfile$ sp
  3  where pdb.name = 'ORCLPDB1'
  4  and pdb.con_uid=sp.pdb_uid
  5* order by sp.name
SQL> /





Flashback Cdb :

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE to TIME "TO_DATE ('08/20/12' , 'MM/DD/YY')";
ALTER DATABASE OPEN RESETLOGS;
ALTER PLUGGABLE DATABASE ALL OPEN;

Note :
A. Flashback must be enabled at the CDB level.
B. To enable flashback database, the CDB must be mounted.
C. The DB_FLASHBACK_RETENTION_TARGET parameter must be set to enable flashback of the CDB.
D. A CDB can be flashed back by specifying the desired target point in time or an SCN, but not a restore point.
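
Before relying on flashback, it helps to confirm it is enabled and to set a retention target; a minimal sketch, assuming the fast recovery area (DB_RECOVERY_FILE_DEST) is already configured and the 2880-minute value is purely illustrative:

SQL> SELECT flashback_on FROM v$database;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER SYSTEM SET db_flashback_retention_target=2880;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;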





Creating new Pdb 

$ sqlplus sys@cdb1 as sysdba

SQL> ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY *******;

OR

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY *******
2 CREATE_FILE_DEST='/u01/app/oracle/oradata';

OR

SQL> CREATE PLUGGABLE DATABASE repotestpdb ADMIN USER pdbadm IDENTIFIED BY *******
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');

OR

SQL> ALTER SESSION SET PDB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cdb1/pdbseed/'
,'/u01/app/oracle/oradata/cdb1/pdb3/';

SQL> CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdb_adm IDENTIFIED BY Password1 ROLES=(DBA);
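
Whichever variant is used above, the new PDB is left in MOUNTED state; it still has to be opened, and SAVE STATE can be used so it reopens automatically after a CDB restart. A small sketch using pdb3 from the last example:

SQL> ALTER PLUGGABLE DATABASE pdb3 OPEN;
SQL> ALTER PLUGGABLE DATABASE pdb3 SAVE STATE;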


or 

Creating a CDB: Using SQL*Plus

1) Instance startup:

• Set ORACLE_SID=CDB1

• Set in initCDB1.ora:
• Set CONTROL_FILES to CDB control file names.
• Set DB_NAME to CDB name.
• Set ENABLE_PLUGGABLE_DATABASE to TRUE.

SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT


2) Create the database:
• CDB$ROOT container
• PDB$SEED pluggable database

SQL> CREATE DATABASE CDB1 ENABLE PLUGGABLE DATABASE SEED FILE_NAME_CONVERT ('/oracle/dbs','/oracle/seed');

 
3) Run the catcdb.sql script.





Cloning PDB 


CDB1

$ sqlplus sys@cdb1 as sysdba

SQL> ALTER PLUGGABLE DATABASE PDB1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN READ ONLY;
SQL> CREATE PLUGGABLE DATABASE PDB2 FROM PDB1
2 STORAGE UNLIMITED TEMPFILE REUSE
3 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/', '/u01/app/oracle/oradata/cdb1/pdb2');


If the CDB uses LOCAL UNDO mode (and is in ARCHIVELOG mode), the source PDB can be hot cloned without opening it read only:

SQL> CREATE PLUGGABLE DATABASE PDB3 FROM PDB1
2 STORAGE UNLIMITED TEMPFILE REUSE
3 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/', '/u01/app/oracle/oradata/cdb1/pdb3');
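
Whether hot cloning is possible can be verified from the root via the LOCAL_UNDO_ENABLED property:

SQL> SELECT property_value FROM database_properties WHERE property_name = 'LOCAL_UNDO_ENABLED';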




Plugging In an Unplugged PDB


UNPLUG

SQL> CONNECT SYS@CDB1 AS SYSDBA
SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.xml';

OR

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.pdb';
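
After a successful unplug, the source PDB is typically dropped from CDB1 while keeping its datafiles on disk for the subsequent plug-in (a sketch, not part of the original flow):

SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;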




PLUG-IN

There are multiple options when plugging in: COPY, NOCOPY, and MOVE.

Before we plug in, we need to check for violations:

set serveroutput on
DECLARE
   compatible BOOLEAN := FALSE;
BEGIN  
   compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
        pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
   if compatible then
      DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
   else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
   end if;
END;
/

NOTE: If the script returns NO, then check the PDB_PLUG_IN_VIOLATIONS view:

SQL> col cause for a10
col name for a10
col message for a35 word_wrapped
select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name='PDB1';

Resolve all the violations reported by the above query before proceeding.





SQL> CONNECT SYS@CDB2 as sysdba
SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.xml'
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

OR

SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.pdb'
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;





Adopting a Non-CDB as a PDB

DB19C NON-CDB

sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup mount exclusive
SQL> alter database open read only;
SQL> exec dbms_pdb.describe(pdb_descr_file=>'/u01/app/oracle/oradata/db19cnoncdb/noncdb.xml');
SQL> shutdown immediate


CDB1

sqlplus / as sysdba
SQL> create pluggable database db19cpdb as clone using '/u01/app/oracle/oradata/db19cnoncdb/noncdb.xml'
2 file_name_convert=('/u01/app/oracle/oradata/db19cnoncdb','/u01/app/oracle/oradata/db19cpdb') copy;
SQL> alter pluggable database db19cpdb open;

sqlplus sys@db19cpdb as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql




PDB Snapshots  

– A PDB snapshot is a point-in-time copy of a PDB.

– Maximum number of snapshots: MAX_PDB_SNAPSHOTS = 8 (default).

– SNAPSHOT MODE values:
NONE : The PDB does not support snapshots.
MANUAL : The PDB supports snapshots, but they are only created when manually requested.
EVERY n HOURS : A snapshot is automatically created every "n" hours, where "n" is between 1 and 1999.
EVERY n MINUTES : A snapshot is automatically created every "n" minutes, where "n" is between 1 and 2999.
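
The snapshot mode of an existing PDB can also be changed, and a snapshot can be taken on demand; a minimal sketch run while connected to the PDB (the snapshot name here is just illustrative):

SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE MANUAL;
SQL> ALTER PLUGGABLE DATABASE SNAPSHOT my_manual_snap;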

SQL> CONNECT / AS SYSDBA

SQL> CREATE PLUGGABLE DATABASE PDB2 ADMIN USER PDBADMIN IDENTIFIED BY ***********
FILE_NAME_CONVERT=('pdbseed','pdb2')
SNAPSHOT MODE EVERY 24 HOURS;

SQL> ALTER PLUGGABLE DATABASE PDB2 OPEN;
SQL> ALTER PLUGGABLE DATABASE PDB2 SAVE STATE;

SQL> SELECT con_id, con_name,snapshot_name,
snapshot_scn, full_snapshot_path
  FROM cdb_pdb_snapshots
 ORDER BY con_id, snapshot_scn;
CON_ID CON_NAME SNAPSHOT_NAME SNAPSHOT_SCN FULL_SNAPSHOT_PATH
---------- ---------- ------------------------------ ------------ -------------------------------------------
4 PDB2 SNAP_688979926_996491289 1764864 /u02/oradata/snap_688979926_1764864.pdb


Recovering From a PDB Snapshot

SQL> CREATE PLUGGABLE DATABASE PDB2COPY FROM pdb2 USING SNAPSHOT SNAP_688979926_996491631;

OR

SQL> CREATE PLUGGABLE DATABASE PDB2COPY FROM pdb2 USING SNAPSHOT SNAP_688979926_996491631
SNAPSHOT MODE EVERY 24 HOURS;

SQL> ALTER PLUGGABLE DATABASE DROP SNAPSHOT SNAP_688979926_996491289;






Resource Plan for PDBs (CDB resource plan) : 


ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'TEST_CDB_PLAN';

DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.create_cdb_plan(
    plan    => l_plan,
    comment => 'A test CDB resource plan');

  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan                  => l_plan, 
    pluggable_database    => 'pdb1', 
    shares                => 3, 
    utilization_limit     => 100,
    parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan                  => l_plan, 
    pluggable_database    => 'pdb2', 
    shares                => 3, 
    utilization_limit     => 100,
    parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
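
A quick check of which resource plan is currently active in each container (run from CDB$ROOT):

SQL> SELECT name, con_id FROM v$rsrc_plan;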



DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_plan_directive(
    plan                      => l_plan, 
    pluggable_database        => 'pdb3', 
    new_shares                => 1, 
    new_utilization_limit     => 100,
    new_parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.delete_cdb_plan_directive(
    plan                      => l_plan, 
    pluggable_database        => 'pdb3');

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_default_directive(
    plan                      => l_plan, 
    new_shares                => 1, 
    new_utilization_limit     => 80,
    new_parallel_server_limit => 80);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_autotask_directive(
    plan                      => l_plan, 
    new_shares                => 1, 
    new_utilization_limit     => 75,
    new_parallel_server_limit => 75);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.delete_cdb_plan(plan => l_plan);
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


COLUMN plan FORMAT A30
COLUMN comments FORMAT A30
COLUMN status FORMAT A10
SET LINESIZE 100

SELECT plan_id,
       plan,
       comments,
       status,
       mandatory
FROM   dba_cdb_rsrc_plans
WHERE  plan = 'TEST_CDB_PLAN';



COLUMN plan FORMAT A30
COLUMN pluggable_database FORMAT A25
SET LINESIZE 100
SELECT plan, 
       pluggable_database, 
       shares, 
       utilization_limit AS util,
       parallel_server_limit AS parallel
FROM   dba_cdb_rsrc_plan_directives
WHERE  plan = 'TEST_CDB_PLAN'
ORDER BY pluggable_database;




Other Commands :


Alter pluggable database PDB1 default temporary tablespace pdb1_temp1 ;
Alter pluggable database PDB1 set default bigfile tablespace ;
Alter pluggable database PDB1 STORAGE (MAXSIZE UNLIMITED) ;
Alter pluggable database PDB1 STORAGE UNLIMITED ;
Alter pluggable database PDB1 enable force logging ;









References : 

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-5C339A60-2163-4ECE-B7A9-4D67D3D894FB

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-D0F40745-FC70-4BE0-85D3-3745DE3312AC

https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/pdb/pdb_basics/pdb_basics.html


How to Change Initialization Parameters at PDB Level on Standby Database (Doc ID 2903547.1)


How to Unplug and Plugin PDB (Doc ID 2728046.1)

Character Sets For CDB And PDB in 12.2 (Doc ID 2231602.1)

Moving PDB from Lower Release Update (or Patch Set Update, Bundle Patch) on Same or Higher DB Version (Doc ID 2847234.1)


How to set the Undo_tablespace in PDB in Physical Standby RAC Database. (Doc ID 2726173.1)