Thursday, January 13, 2022

Oracle Multitenant / Container and Pluggable Database -- PDB / CDB


                     Oracle Multitenant / Container Database



1) In previous releases the character set for the root container and all pluggable databases associated with it had to be the same. This could limit the movement of PDBs and make consolidation difficult where a non-standard character set was required.
In Oracle Database 12c Release 2 (12.2) a PDB can use a different character set from the CDB, provided the character set of the CDB is AL32UTF8, which is now the default character set when using the Database Configuration Assistant.

2) When we open the container database, the seed (PDB$SEED) is in read-only mode and the PDBs are in mounted state.

3) Background processes and memory allocation exist at the CDB level only.

4) Log switches occur only at the multitenant container database (CDB) level.

5) Instance recovery is always performed at the CDB level.

6) Patching and upgrades are done at the CDB level.

7) User creation can be done at the root level and at the PDB level. A user created in the root is a common user and is created in all PDBs, though its schema objects can vary across PDBs.

8) While creating a CDB we can place the PDB and seed files in different directories using:

The SEED FILE_NAME_CONVERT Clause
The PDB_FILE_NAME_CONVERT Initialization Parameter 

9) A PDB has its own SYSTEM, SYSAUX and TEMP tablespaces. It can also contain other user-created tablespaces.

10) For revoking a privilege from a common user:

If the current container is the root: 

 / Specify CONTAINER = CURRENT to revoke a locally granted system privilege, object privilege, or role from a common user or common role. The privilege or role is revoked from the user or role only in the root. This clause does not revoke privileges granted with CONTAINER = ALL. 

/ Specify CONTAINER = ALL to revoke a commonly granted system privilege, object privilege on a common object, or role from a common user or common role. The privilege or role is revoked from the user or role across the entire CDB. This clause can revoke only a privilege or role granted with CONTAINER = ALL from the specified common user or common role. This clause does not revoke privileges granted locally with CONTAINER = CURRENT. However, any locally granted privileges that depend on the commonly granted privilege being revoked are also revoked. 

If you omit this clause, then CONTAINER = CURRENT is the default.

11) When we restart the CDB, by default the seed will be in read-only mode and all PDBs will be in mounted state.



How to check if the database is a container database (CDB):
select cdb from v$database;
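
You can also check which container your current session is pointing to:

select sys_context('USERENV','CON_NAME') from dual;
select sys_context('USERENV','CON_ID') from dual;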


Undo Management : 

 Interesting behavior in 12.1.0.1 of creating an undo tablespace in a PDB: with the new multitenant architecture the undo tablespace resides at the CDB level and all PDBs share the same undo tablespace. When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.
In Oracle 12c Release 2 each container in an instance can use its own undo tablespace. This new undo management mechanism is called local undo mode, whilst that of previous releases is now known as shared undo mode. Local undo mode is the default mode in newly created databases, so you will probably only need to consider switching undo modes for upgraded instances.
Prior to Oracle 19.9, setting the UNDO_RETENTION parameter in the root container meant that value was set for all pluggable databases also. From Oracle 19.9 onward this is not the case. Now there are two ways to set the parameter in the root container.
In addition, the UNDO_RETENTION parameter can be set separately in each PDB, provided local undo is being used.


column property_name format a30
column property_value format a30
select property_name, property_value
from   database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED             TRUE
SQL>



select con_id, tablespace_name
from   cdb_tablespaces
where  contents = 'UNDO'
order by con_id;
    CON_ID TABLESPACE_NAME
---------- ------------------------------
         1 UNDOTBS1
         3 UNDOTBS1
SQL>



conn / as sysdba
shutdown immediate;
startup upgrade;
alter database local undo off;
shutdown immediate;
startup;


conn / as sysdba
-- Just the root container.
alter system set undo_retention=3000;
-- The root container and all PDBs.
alter system set undo_retention=3000 container=all;



Tablespace and Datafile Management : 

Management of the temporary tablespace in a CDB is unchanged from that of a non-CDB database.

A PDB can either have its own temporary tablespace, or if it is created without a temporary tablespace, it can share the temporary tablespace with the CDB.

ALTER SESSION SET CONTAINER = pdb1;
CREATE TABLESPACE dummy
  DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/dummy01.dbf' SIZE 1M
  AUTOEXTEND ON NEXT 1M;

ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE users;
ALTER PLUGGABLE DATABASE DEFAULT TEMPORARY TABLESPACE temp;

SQL> SELECT CON_ID,TABLESPACE_NAME FROM CDB_TABLESPACES WHERE CON_ID=1;
SQL> SELECT CON_ID,FILE_NAME FROM CDB_DATA_FILES WHERE CON_ID=1;
SQL> SELECT CON_ID,FILE_NAME FROM CDB_TEMP_FILES WHERE CON_ID=1;



Connect to a PDB directly with ORACLE_PDB_SID
This is when you set the environment variable ORACLE_PDB_SID on top of ORACLE_SID.
It is implemented with a trigger called DBMS_SET_PDB.
This trigger most likely comes in with the April 2019 RUs for Oracle Database 19c and 18c. It does not exist in Oracle Database 12.2.0.1 or Oracle 12.1.0.2.

ORACLE_SID defines which instance you connect to; ORACLE_PDB_SID does not work without the correct ORACLE_SID.
It is an AFTER EVENT LOGON trigger that fires when you log on as SYS or SYSTEM.
 
SQL> select trigger_body from dba_triggers where trigger_name = 'DBMS_SET_PDB';
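
A quick sketch of the usage (the PDB name PDB1 here is illustrative):

$ export ORACLE_SID=CDB1
$ export ORACLE_PDB_SID=PDB1
$ sqlplus / as sysdba
SQL> show con_name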



Saving STATE of PDB 
The PDB defaults to a closed state when the CDB first starts up. Oracle has introduced the "save state" option for the alter pluggable database command. This command will take the current state of the PDB and preserve that mode after the CDB is restarted.
The save state command is instance specific, so if you are in a RAC environment be sure to use the "instances=all" clause when placing the PDB into a specific state.

alter pluggable database abc save state instances=all;
alter pluggable database all save state;
select con_name, instance_name, state from dba_pdb_saved_states;
select con_name, state from cdb_pdb_saved_states;

The state is only saved and visible in the DBA_PDB_SAVED_STATES view if the container is in READ ONLY or READ WRITE mode. The ALTER PLUGGABLE DATABASE ... SAVE STATE command does not error when run against a container in MOUNTED mode, but nothing is recorded, as this is the default state after a CDB restart.



-->  Open pdbs automatically and  save its state 


 alter pluggable database PDB1 save state;

select con_name, state from dba_pdb_saved_states;

 alter pluggable database PDB2 discard state;



As this feature is not available in 12.1.0.1, we can create a trigger to open the PDBs automatically with container database startup.

CREATE TRIGGER open_all_pdbs
  AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END;
/


--> Open services along with the PDB

alter pluggable database pdborcl open services=('pdborcl_srv');
alter pluggable database pdborcl open services=All;


Reference : 12c services will not start while manually restart pdb (Doc ID 2006021.1)






Removing PDB from Restricted Mode : 

Typically a PDB will be in restricted mode due to existing violations. Fix the violations and restart the PDB.


1) Check PDB  status.

 select INST_ID,NAME,OPEN_MODE,RESTRICTED   from gv$pdbs order by 1,2;

2) Check PDB_PLUG_IN_VIOLATIONS.

 select  status, message, action   from  pdb_plug_in_violations  where  status !='RESOLVED';


 3) Bounce all instances and pdbs.

 ALTER PLUGGABLE DATABASE ALL EXCEPT PDB001 OPEN READ WRITE;
 ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE FORCE;



User  and Role  Management :

Create user c##pwtest identified by password container=all;
grant create session to c##pwtest container=all;
connect c##pwtest/password
alter user c##pwtest identified by password;
alter user c##pwtest identified by password container=all;
select username, profile, common, con_id from cdb_users where username='C##AA1';

 SELECT ROLE,ROLE_ID,COMMON,CON_ID FROM CDB_ROLES WHERE ROLE='C##ROLE1';
 
 SQL> select username, con_id from cdb_users where username='C##AA1';

 SQL> select d.username, c.con_id,p.name from dba_users d, cdb_users c , v$pdbs p where d.username='C##AA1' and d.username=c.username and p.con_id=c.con_id;

 Create Role C##ROLE1 CONTAINER=ALL ; 
Create Role Lrole container=current ; 
Grant C##ROLE1 to c##user1 ; 
GRANT SELECT ON SYS.V_$SESSION TO C##USER2 container=all;

There are some predefined roles that can be viewed using the below:
SELECT role, common, con_id FROM cdb_roles order by role, con_id;
select role, common from dba_roles order by role;
select * from session_roles ; 

COL grantee format a12 
col privilege format a30 
select grantee , privilege  , common , con_id from cdb_sys_privs where grantee='C##USER1' ORDER BY 1,3 ; 


#Enabling Common User to view data of specific Pdb 
COL username format a10
col default_attr format a7
col owner format a6
col object_name format a11
col all_containers format a3
column container_name format a10
col con_id format 999
set pages 100
set line 200
select username, default_attr, owner, object_name, all_containers, container_name, con_id from cdb_container_data
where username not in ('GDADMIN_INTERNAL','APPQOSSYS','DBSNMP') ORDER BY username;

ALTER USER C##USER2 SET CONTAINER_DATA = (CDB$ROOT, PDB2) FOR V_$SESSION CONTAINER=CURRENT;



Starting / Shutting down   PDB

ALTER PLUGGABLE DATABASE ALL OPEN issued from the root
ALTER DATABASE OPEN issued from that PDB
ALTER PLUGGABLE DATABASE OPEN issued from that PDB
alter pluggable database PDB2 close immediate ; 
alter pluggable database PDB2 open ; 

alter pluggable database test1_tech close;
alter pluggable database test1_tech close immediate;
alter pluggable database test1_tech open;
alter pluggable database test1_tech open read only ;
alter pluggable database test1_tech open force;

ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN READ ONLY;
ALTER PLUGGABLE DATABASE OPEN FORCE;
ALTER PLUGGABLE DATABASE NOLOGGING;
ALTER PLUGGABLE DATABASE ENABLE FORCE LOGGING;
STARTUP OPEN
STARTUP OPEN READ ONLY
STARTUP RESTRICT OPEN READ ONLY


alter pluggable database all close immediate;
alter pluggable database all open;

ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE <pdb-name-clause> CLOSE [IMMEDIATE];



We can create a startup trigger to open all PDBs:

 CREATE OR REPLACE TRIGGER open_pdbs 
  AFTER STARTUP ON DATABASE 
BEGIN 
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN'; 
END open_pdbs;
/





Backup and  recovery  commands :

1) Backup and recovery can be done at CDB and PDB level.
 
Backup Commands

RMAN> BACKUP DATABASE ROOT;
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;
RMAN> BACKUP PLUGGABLE DATABASE pdb1  plus archivelog ;
-- It will skip archivelog backup when connected to Pdb 
RMAN> BACKUP DATABASE;
RMAN> BACKUP TABLESPACE system, sysaux, users;
RMAN> BACKUP TABLESPACE pdb1:system, pdb1:sysaux, pdb1:users, pdb2:system;

list backup of PLUGGABLE DATABASE pdb1 ; 
list backup of tablespace PDB1:users ; 




Restore Commands : 

RESTORE TABLESPACE pdb2:system  ; 


RUN {
  SHUTDOWN IMMEDIATE; # use abort if this fails
  STARTUP MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN;
}


RUN {
  SHUTDOWN IMMEDIATE; # use abort if this fails
  STARTUP MOUNT;
  RESTORE DATABASE ROOT;
  RECOVER DATABASE ROOT;
  # Consider recovering PDBs before opening.
  ALTER DATABASE OPEN;
}


RUN {
  ALTER PLUGGABLE DATABASE pdb1 CLOSE;
  SET UNTIL TIME "TO_DATE('23-DEC-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')";
  RESTORE PLUGGABLE DATABASE pdb1;
  RECOVER PLUGGABLE DATABASE pdb1;
  ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}



RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1
  UNTIL TIME "TO_DATE('01-JAN-2013 15:00', 'DD-MON-YYYY HH24:MI')"
  AUXILIARY DESTINATION '/u01/aux'  
  REMAP TABLE 'TEST'.'T1':'T1_PREV';


RECOVER TABLE 'TEST'.'T1' OF PLUGGABLE DATABASE pdb1
  UNTIL SCN 5695703
  AUXILIARY DESTINATION '/u01/aux'
  DATAPUMP DESTINATION '/u01/export'
  DUMP FILE 'test_t1_prev.dmp'
  NOTABLEIMPORT;


Datapump :

It’s important to note that Data Pump doesn’t allow you to use a directory that’s owned by the root container or CDB for imports and exports. You also can’t use one that’s owned by a different PDB.

Instead, you need to create a directory under the PDB. The PDB must own the directory for you to use Data Pump export and import functionality.

Using a Data Pump export for PDBs is identical to using a Data Pump export for a non-CDB database. The only difference in using the Data Pump export utility for a PDB is that you must use a connect identifier, or Transparent Network Substrate (TNS) alias, in the export command prompt when you initiate the export. This step ensures that you’re initiating the Data Pump export for a specific PDB.
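
A minimal sketch of such an export, assuming a TNS alias pdb1 for the PDB and a directory object created inside that PDB (the directory path and user names are illustrative):

SQL> alter session set container=pdb1;
SQL> create directory dp_dir as '/u01/app/oracle/dp_exports';
SQL> grant read, write on directory dp_dir to scott;

$ expdp scott/password@pdb1 directory=dp_dir dumpfile=scott.dmp logfile=scott_exp.log schemas=scott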





Parameter check : 

select parameter, value from v$nls_parameters where parameter='NLS_CHARACTERSET';

set linesize 300 
column name  format a30 
column value format a30
select b.name, a.name, a.value$ "Value" from pdb_spfile$ a, v$pdbs b where a.pdb_uid=b.con_uid and a.name in ('cpu_count','sga_target') order by 1,2;


alter session set CONTAINER=PDB1; 
select value, ispdb_modifiable from v$system_parameter where name='db_recovery_file_dest_size';





Managing Services  : 


SQL> connect /@CDB1 as sysdba
SQL> alter session set container=pdb1;
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
SQL> alter session set container=cdb$root;

select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;

select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;




srvctl add service -db CDB01 -service TEST_NEW -pdb TEST
srvctl modify service -db CDB01 -service TEST_NEW -pdb TEST
srvctl remove service -db CDB01 -service TEST_NEW

BEGIN
DBMS_SERVICE.CREATE_SERVICE(
service_name => 'TEST_NEW',
network_name => 'TEST_NEW.com');
END;
/

BEGIN
DBMS_SERVICE.DELETE_SERVICE(
service_name => 'TEST_NEW');
END;
/

SQL> select name, con_id from v$active_services order by 1;




Adding service with domain name : 
-- on primary
srvctl add service -db CDB01_fra3gh -service hr.oracle.com -pdb pdb01 -role primary
srvctl start service -db CDB01_fra3gh -service hr.oracle.com
 
lsnrctl services
...
Service "hr.oracle.com" has 1 instance(s).
...
 
-- on standby
srvctl add service -db CDB01_amsad1 -service hr.oracle.com -pdb pdb01 -role primary
srvctl start service -db CDB01_amsad1 -service hr.oracle.com
 
lsnrctl services
...
Service "hr.oracle.com" has 1 instance(s).
...




Drop Pluggable Database : 

drop pluggable database pdb3_test including datafiles;
 



Rename Pluggable Database : 

alter pluggable database TEST rename global_name to TEST1;
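
Note that the rename requires the PDB to be open in restricted mode, so a typical sequence looks like this (using the same PDB name TEST):

alter pluggable database TEST close immediate;
alter pluggable database TEST open restricted;
alter pluggable database TEST rename global_name to TEST1;
alter pluggable database TEST1 close immediate;
alter pluggable database TEST1 open;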




Pdb  checks :

show con_name 
show con_id 
show pdbs

alter session set container=cdb$root; 

col con_name for a10 
col instance_name for a10 
select con_name , instance_name , state from dba_pdb_saved_states ; 

select name , open_mode from v$pdbs ; 

COLUMN NAME FORMAT A15
COLUMN RESTRICTED FORMAT A10 
COLUMN OPEN_TIME FORMAT A30 
SELECT NAME, OPEN_MODE, RESTRICTED, OPEN_TIME FROM V$PDBS;

COLUMN NAME FORMAT A8 
SELECT  name , con_id , dbid , con_uid ,  guid from v$containers  order  by  con_id ; 

COLUMN PDB_NAME FORMAT A15 
SELECT PDB_ID, PDB_NAME, STATUS FROM DBA_PDBS order by pdb_id;

COLUMN DB_NAME FORMAT A10
COLUMN CON_ID FORMAT 999
COLUMN PDB_NAME FORMAT A15
COLUMN OPERATION FORMAT A16
COLUMN OP_TIMESTAMP FORMAT A10
COLUMN CLONED_FROM_PDB_NAME FORMAT A15
SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION, OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY
WHERE CON_ID > 2
ORDER BY CON_ID;

COLUMN name FORMAT A30
SELECT name, pdb
FROM   v$services
ORDER BY name;

SQL> select tablespace_name, con_id from cdb_tablespaces where con_id =1;
SQL> select file_name, con_id from cdb_data_files where con_id=1;
SQL> select file_name, con_id from cdb_temp_files where con_id =1 ;



col name for a10
col owner for a8
col object_type for a10
select t.con_id,p.name,t.owner,t.object_type,count(*) from cdb_objects t ,v$containers p
where p.con_id = t.con_id and t.status = 'INVALID' group by t.con_id,p.name,t.owner,t.object_type;



-- Size of Pluggable Database : 
select con_id, name, open_mode, total_size/1024/1024/1024 "PDB_SIZE_GB" from v$pdbs;





 
SQL> connect / as sysdba
Connected.
SQL> get show_orclpdb1_params.sql
  1  select  pdb.dbid, pdb.con_uid, sp.name, sp.value$
  2  from v$pdbs pdb, pdb_spfile$ sp
  3  where pdb.name = 'ORCLPDB1'
  4  and pdb.con_uid=sp.pdb_uid
  5* order by sp.name
SQL> /





Flashback Cdb :

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE to TIME "TO_DATE('08/20/12','MM/DD/YY')";
ALTER DATABASE OPEN RESETLOGS;
ALTER PLUGGABLE DATABASE ALL OPEN;

Note :
A. Flashback must be enabled at the CDB level.
B. To enable flashback database, the CDB must be mounted.
C. The DB_FLASHBACK_RETENTION_TARGET parameter must be set to enable flashback of the CDB.
D. A CDB can be flashed back specifying the desired target point in time, an SCN, or a restore point.
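
A sketch of enabling flashback at the CDB level, per the notes above (the retention value of 2880 minutes, i.e. 2 days, is illustrative):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER SYSTEM SET db_flashback_retention_target=2880;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;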





Creating new Pdb 

$ sqlplus sys@cdb1 as sysdba

SQL> ALTER SYSTEM SET db_create_file_dest = '/u01/app/oracle/oradata';

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY *******;

OR

SQL> CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdb1adm IDENTIFIED BY *******
2 CREATE_FILE_DEST='/u01/app/oracle/oradata';

OR

SQL> CREATE PLUGGABLE DATABASE repotestpdb ADMIN USER pdbadm IDENTIFIED BY *******
FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdbseed/','/u01/app/oracle/oradata/cdb1/pdb2/');

OR

SQL> ALTER SESSION SET PDB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/cdb1/pdbseed/'
,'/u01/app/oracle/oradata/cdb1/pdb3/';

SQL> CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdb_adm IDENTIFIED BY Password1 ROLES=(DBA);


or 

Creating a CDB: Using SQL*Plus

1) Instance startup:

• Set ORACLE_SID=CDB1

• Set in initCDB1.ora:
• Set CONTROL_FILES to CDB control file names.
• Set DB_NAME to CDB name.
• Set ENABLE_PLUGGABLE_DATABASE to TRUE.

SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT


2) Create the database:
• CDB$ROOT container
• PDB$SEED pluggable database

SQL> CREATE DATABASE CDB1 ENABLE PLUGGABLE DATABASE SEED FILE_NAME_CONVERT ('/oracle/dbs','/oracle/seed');

 
3) Run the catcdb.sql script.





Cloning PDB 


CDB1

$ sqlplus sys@cdb1 as sysdba

SQL> ALTER PLUGGABLE DATABASE PDB1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE PDB1 OPEN READ ONLY;
SQL> CREATE PLUGGABLE DATABASE PDB2 FROM PDB1
2 STORAGE UNLIMITED TEMPFILE REUSE
3 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/', '/u01/app/oracle/oradata/cdb1/pdb2');


 When the CDB is in LOCAL UNDO MODE - HOT CLONING (the source PDB can remain open read/write):

SQL> CREATE PLUGGABLE DATABASE PDB3 FROM PDB1
2 STORAGE UNLIMITED TEMPFILE REUSE
3 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/', '/u01/app/oracle/oradata/cdb1/pdb3');




Plugging In an Unplugged PDB


UNPLUG

SQL> CONNECT SYS@CDB1 AS SYSDBA
SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.xml';

OR

SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.pdb';




PLUG-IN

There are multiple options when plugging in: COPY, NOCOPY and MOVE.
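
For example, the option is appended to the CREATE PLUGGABLE DATABASE ... USING command (the paths and names here are illustrative):

-- COPY (default): copy the unplugged datafiles to the new location, leaving the originals in place.
CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.xml'
  COPY FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

-- NOCOPY: use the unplugged datafiles in place, without copying.
CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.xml' NOCOPY TEMPFILE REUSE;

-- MOVE: move the unplugged datafiles to the new location.
CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb1/pdb1/pdb1.xml'
  MOVE FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');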

Before we plug in we need to check for violations.

set serveroutput on
DECLARE
   compatible BOOLEAN := FALSE;
BEGIN  
   compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
        pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
   if compatible then
      DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
   else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
   end if;
END;
/

NOTE: If the script returns NO, then check the PDB_PLUG_IN_VIOLATIONS view

SQL> col cause for a10
col name for a10
col message for a35 word_wrapped
select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name='PDB1';

Resolve all the violation reported by the above command before proceeding.





SQL> CONNECT SYS@CDB2 as sysdba
SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.xml'
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

OR

SQL> CREATE PLUGGABLE DATABASE pdb2 USING '/u01/app/oracle/oradata/cdb2/pdb2/pdb1.pdb'
2 FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN READ WRITE;





Adopting a Non-CDB as a PDB

DB19C NON-CDB

sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup mount exclusive
SQL> alter database open read only;
SQL> exec dbms_pdb.describe(pdb_descr_file=>'/u01/app/oracle/oradata/db19cnoncdb/noncdb.xml');
SQL> shutdown immediate


CDB1

sqlplus / as sysdba
SQL> create pluggable database db19cpdb as clone using '/u01/app/oracle/oradata/db19cnoncdb/noncdb.xml'
2 file_name_convert=('/u01/app/oracle/oradata/db19cnoncdb','/u01/app/oracle/oradata/db19cpdb') copy;
SQL> alter pluggable database db19cpdb open;
sqlplus sys@db19cpdb as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql




PDB Snapshots  

– A PDB snapshot is a point-in-time copy of a PDB.

– Maximum number of snapshots: MAX_PDB_SNAPSHOTS = 8 (default)

– Snapshot modes:
NONE : The PDB does not support snapshots.
MANUAL : The PDB supports snapshots, but they are only created when manually requested.
EVERY n HOURS : A snapshot is automatically created every "n" hours, where "n" is between 1 and 1999.
EVERY n MINUTES : A snapshot is automatically created every "n" minutes, where "n" is between 1 and 2999.
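
The snapshot mode can also be changed after creation, and a manual snapshot taken by name. A sketch, run while connected to the PDB (the snapshot name pdb2_snap1 is illustrative):

SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE EVERY 6 HOURS;
SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE MANUAL;
SQL> ALTER PLUGGABLE DATABASE SNAPSHOT pdb2_snap1;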

SQL> CONNECT / AS SYSDBA

SQL> CREATE PLUGGABLE DATABASE PDB2 ADMIN USER PDBADMIN IDENTIFIED BY ***********
FILE_NAME_CONVERT=('pdbseed','pdb2')
SNAPSHOT MODE EVERY 24 HOURS;

SQL> ALTER PLUGGABLE DATABASE PDB2 OPEN;
SQL> ALTER PLUGGABLE DATABASE PDB2 SAVE STATE;

SQL> SELECT con_id, con_name,snapshot_name,
snapshot_scn, full_snapshot_path
  FROM cdb_pdb_snapshots
 ORDER BY con_id, snapshot_scn;
CON_ID CON_NAME SNAPSHOT_NAME SNAPSHOT_SCN FULL_SNAPSHOT_PATH
---------- ---------- ------------------------------ ------------ -------------------------------------------
4 PDB2 SNAP_688979926_996491289 1764864 /u02/oradata/snap_688979926_1764864.pdb


Recovering From a PDB Snapshot

SQL> CREATE PLUGGABLE DATABASE PDB2COPY FROM pdb2 USING SNAPSHOT SNAP_688979926_996491631;

OR

SQL> CREATE PLUGGABLE DATABASE PDB2COPY FROM pdb2 USING SNAPSHOT SNAP_688979926_996491631
SNAPSHOT MODE EVERY 24 HOURS;

SQL> ALTER PLUGGABLE DATABASE DROP SNAPSHOT SNAP_688979926_996491289;






Resource Plans for PDBs 


ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'TEST_CDB_PLAN';

DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.create_cdb_plan(
    plan    => l_plan,
    comment => 'A test CDB resource plan');

  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan                  => l_plan, 
    pluggable_database    => 'pdb1', 
    shares                => 3, 
    utilization_limit     => 100,
    parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan                  => l_plan, 
    pluggable_database    => 'pdb2', 
    shares                => 3, 
    utilization_limit     => 100,
    parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/



DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_plan_directive(
    plan                      => l_plan, 
    pluggable_database        => 'pdb3', 
    new_shares                => 1, 
    new_utilization_limit     => 100,
    new_parallel_server_limit => 100);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.delete_cdb_plan_directive(
    plan                      => l_plan, 
    pluggable_database        => 'pdb3');

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_default_directive(
    plan                      => l_plan, 
    new_shares                => 1, 
    new_utilization_limit     => 80,
    new_parallel_server_limit => 80);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;

  DBMS_RESOURCE_MANAGER.update_cdb_autotask_directive(
    plan                      => l_plan, 
    new_shares                => 1, 
    new_utilization_limit     => 75,
    new_parallel_server_limit => 75);

  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


DECLARE
  l_plan VARCHAR2(30) := 'test_cdb_plan';
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.delete_cdb_plan(plan => l_plan);
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/


COLUMN plan FORMAT A30
COLUMN comments FORMAT A30
COLUMN status FORMAT A10
SET LINESIZE 100

SELECT plan_id,
       plan,
       comments,
       status,
       mandatory
FROM   dba_cdb_rsrc_plans
WHERE  plan = 'TEST_CDB_PLAN';



COLUMN plan FORMAT A30
COLUMN pluggable_database FORMAT A25
SET LINESIZE 100
SELECT plan, 
       pluggable_database, 
       shares, 
       utilization_limit AS util,
       parallel_server_limit AS parallel
FROM   dba_cdb_rsrc_plan_directives
WHERE  plan = 'TEST_CDB_PLAN'
ORDER BY pluggable_database;




Other Commands :


Alter pluggable database PDB1 default temporary tablespace pdb1_temp1;
Alter pluggable database PDB1 set default bigfile tablespace;
Alter pluggable database PDB1 STORAGE (MAXSIZE UNLIMITED);
Alter pluggable database PDB1 storage unlimited;
Alter pluggable database PDB1 enable force logging;


  






References : 

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-5C339A60-2163-4ECE-B7A9-4D67D3D894FB

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/cncpt/overview-of-the-multitenant-architecture.html#GUID-D0F40745-FC70-4BE0-85D3-3745DE3312AC

https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/pdb/pdb_basics/pdb_basics.html


How to Change Initialization Parameters at PDB Level on Standby Database (Doc ID 2903547.1)


How to Unplug and Plugin PDB (Doc ID 2728046.1)

Character Sets For CDB And PDB in 12.2 (Doc ID 2231602.1)

Moving PDB from Lower Release Update (or Patch Set Update, Bundle Patch) on Same or Higher DB Version (Doc ID 2847234.1)


How to set the Undo_tablespace in PDB in Physical Standby RAC Database. (Doc ID 2726173.1)


Saturday, November 27, 2021

Oracle Data Scrubbing Exadata and Asm

 
Oracle ASM disk scrubbing improves availability and reliability by searching for data that may be less likely to be read. Disk scrubbing checks for logical data corruption and repairs it automatically in normal and high redundancy disk groups. The scrubbing process repairs logical corruptions using the mirror disks. Disk scrubbing can be combined with disk group rebalancing to reduce I/O resources. The disk scrubbing process has minimal impact on the regular I/O in production systems.

You can perform scrubbing on a disk group, a specified disk, or a specified file of a disk group with the ALTER DISKGROUP SQL statement. For example, the following SQL statements show various options used when running the ALTER DISKGROUP disk_group SCRUB SQL statement.



Exadata Disk Scrubbing

A subtler Exadata Maintenance job is the bi-weekly Disk Scrub. This job does not appear in the CellCLI alert history. It only appears in the $CELLTRACE/alert.log.

Disk Scrubbing is designed to periodically validate the integrity of the mirrored ASM extents and thus eliminate latent corruption. The scrubbing is supposed to only run when average I/O utilization is under 25 percent. However, this can still cause spikes in utilization and latency and adversely affect database I/O. Oracle documentation says that a 4TB high capacity hard disk can take 8-12 hours to scrub, but I have seen it run more than 24 hours. Normally, this isn't noticeable as it runs quietly in the background.

However, if you have a high I/O workload, the additional 10-15 percent latency is noticeable.
Logs and Schedule

The $CELLTRACE/alert.log on the cell nodes reports the timing and results.
Wed Jan 11 16:00:07 2017
Begin scrubbing CellDisk:CD_11_xxxxceladm01.
Begin scrubbing CellDisk:CD_10_xxxxceladm01.

Thu Jan 12 15:12:37 2017
Finished scrubbing CellDisk:CD_10_xxxxceladm01, scrubbed blocks (1MB):3780032, found bad blocks:0
Thu Jan 12 15:42:02 2017
Finished scrubbing CellDisk:CD_11_xxxxceladm01, scrubbed blocks (1MB):3780032, found bad blocks:0
You can connect to the cell nodes and alter the start time and interval at the CellCLI prompt:
CellCLI> alter cell hardDiskScrubStartTime='2017-01-21T08:00:00-08:00';
CellCLI> list cell attributes name,hardDiskScrubInterval
         biweekly


ASM Disk Scrubbing

ASM Disk Scrubbing performs a similar task to Exadata Disk Scrubbing. It searches the ASM blocks and repairs logical corruption using the mirror disks. The big difference is that ASM disk scrubbing is run manually at disk group or file level and can be seen in the V$ASM_OPERATION view and the alert_+ASM.log.
Logs and Views
The alert_+ASM.log on the database node reports the command and duration.
Mon Feb 06 09:03:58 2017
SQL> alter diskgroup DBFS_DG scrub power low
Mon Feb 06 09:03:58 2017
NOTE: Start scrubbing diskgroup DBFS_DG
Mon Jan 06 09:03:58 2017
SUCCESS: alter diskgroup DBFS_DG scrub power low



Performing  Scrubbing :

Scrubbing A data file:  
Within a disk group, ASM disk scrubbing can be invoked to scrub an individual data file.
alter diskgroup
   data
scrub file
   '+data/orcl/datafile/example.266.806582193'
repair power high force;



Scrubbing A specific disk:  
You can direct ASM to use the mirrored data to scrub and synchronize a mirrored disk spindle.
alter diskgroup
   data
scrub disk
   data_0005
repair power high force;



Scrubbing An ASM disk group:  
Entire ASM disk groups can also be scrubbed.
alter diskgroup
   data
scrub power low;

 
scrub  . . . repair:   
This option automatically repairs disk corruptions. If the "repair" keyword is not used, Oracle will only identify corruptions and not fix them:
alter diskgroup
   data
scrub disk
   data_0005
repair power low; --> reports and repairs corruptions
alter diskgroup
   data
scrub disk
   data_0005
power high force;  --> reports corruptions



scrub . . . power:  
If the "power" argument is specified with data scrubbing, you can have several levels of power:
alter diskgroup data scrub power low;
alter diskgroup data scrub power auto; --> default
alter diskgroup data scrub power high;
alter diskgroup data scrub power max;
SQL> ALTER DISKGROUP data SCRUB POWER LOW;
SQL> ALTER DISKGROUP data SCRUB FILE '+DATA/ORCL/DATAFILE/example.266.806582193' 
       REPAIR POWER HIGH FORCE;
SQL> ALTER DISKGROUP data SCRUB DISK DATA_0005 REPAIR POWER HIGH FORCE;



scrub  . . . wait:  
If the optional "wait" option is specified, the command returns after the scrubbing operation has completed. If the WAIT option is not specified, the scrubbing operation is added into the scrubbing queue and the command returns immediately.

scrub . . . force:  
If the optional "force" option is specified, the command is processed even if the system I/O load is high or if scrubbing has been disabled internally at the system level.
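Putting the options together, a repair scrub that waits for completion and overrides the internal load check might look like this (a sketch of the scrub clause; verify the exact syntax against your release's ALTER DISKGROUP documentation):

```sql
ALTER DISKGROUP data SCRUB REPAIR POWER HIGH WAIT FORCE;
```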




Monitoring the Scrubbing Process:

You can monitor the scrubbing progress from V$ASM_OPERATION view.
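For example, a quick progress query against the view might look like this (column list per the V$ASM_OPERATION documentation; the scrub typically appears with OPERATION = 'SCRUB'):

```sql
-- progress of running ASM operations, including scrubbing
SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;
```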

[grid@dbserver]$ ps -ef | grep asm_sc
grid     17902     1  0 11:27 ?        00:00:00 asm_scrb_+ASM
grid     24365     1  0 11:49 ?        00:00:01 asm_scc0_+ASM
...
SCRB - ASM disk scrubbing master process that coordinates disk scrubbing operations.
SCCn - ASM disk scrubbing slave check process that performs the checking operations. The possible processes are SCC0-SCC9.
We would see additional processes during the repair operation:
SCRn - ASM disk scrubbing slave repair process that performs repair operation. The possible processes are SCR0-SCR9.
  
SCVn - ASM disk scrubbing slave verify process that performs the verifying operations. The possible processes are SCV0-SCV9.



ZFS Data Scrubbing : 
We can also perform ZFS scrubbing. More information can be found at the link below:

https://docs.oracle.com/cd/E19120-01/open.solaris/817-2271/gbbxi/index.html




Reference :

https://docs.oracle.com/database/121/OSTMG/GUID-ECCAE4F3-CFC5-4B55-81D2-FFB6953C035C.htm#OSTMG95351

http://ora-srv.wlv.ac.uk/oracle19c_doc/ostmg/alter-diskgroups.html#GUID-6BB31112-8687-4C1E-AF14-D94FFCDA736F

Doc 
2094581.1
2049911.1

Saturday, October 30, 2021

Oracle Orion and IO Calibrate -- Check IO efficiency

 

Oracle  Orion - ORacle I/O Numbers

Since Oracle 11g, the Oracle ORION package has been shipped within the Oracle Database and Grid Infrastructure binaries ($ORACLE_HOME/bin/orion) and requires no downloads and only minimal set-up, so it is really easy to get started quickly.


NOTE: orion requires you to provide a full path, e.g. $ORACLE_HOME/bin/orion; if you try to call it without providing a full path, orion will return an ORA-56727 error.


Setting-Up Orion
Now that we have prepared and presented our storage we can configure Orion. By default Orion looks for orion.lun; you can also use alternative configuration files by using the -testname switch. (Example below.)

[oracle@z-oracle orion_test]$ cat orion.lun 
/dev/mapper/orion-vol1
/dev/mapper/orion-vol2
/dev/mapper/orion-vol3
/dev/mapper/orion-vol4
/dev/mapper/orion-vol5



Orion supports five types of workload tests: simple, normal, advanced, dss and oltp. (Example output from orion -help.)

   simple   - Tests random small (8K) IOs at various loads,
              then random large (1M) IOs at various loads.
   normal   - Tests combinations of random small (8K) IOs and 
              random large (1M) IOs.
   advanced - Tests the workload specified by the user 
              using optional parameters.
   dss      - Tests with random large (1M) IOs at increasing 
              loads to determine the maximum throughput.
   oltp     - Tests with random small (8K) IOs at increasing 
              loads to determine the maximum IOPS.


For a preliminary set of data
 -run simple 

For a basic set of data
 -run normal 

To evaluate storage for an OLTP database
 -run oltp 

To evaluate storage for a data warehouse
 -run dss 

To generate combinations of 32KB and 1MB reads to random locations
 -run advanced 
 -size_small 32 -size_large 1024 -type rand 
 -matrix detailed

To generate multiple sequential 1MB write streams, simulating RAID0 striping
 -run advanced 
 -simulate RAID0 -write 100 -type seq
 -matrix col -num_small 0




# $ORACLE_HOME/bin/orion -run simple -testname simpletest

# $ORACLE_HOME/bin/orion -run advanced -testname advancedtest -matrix max -num_small 4 -num_large 4 -size_large 512

# $ORACLE_HOME/bin/orion -run oltp -testname oltp_write -write 20

./orion_linux_em64t -run advanced -testname orion1 -num_disks 1 -write 0 -simulate concat -matrix detailed
./orion_linux_em64t -run advanced -testname orion3 -num_disks 3 -write 0 -simulate concat -matrix detailed


%> . ./orion -run oltp -testname test -write 60


I can see the technical white paper (TWP) was released in 2009 and has not been updated since. 
https://support.oracle.com/epmos/main/downloadattachmentprocessor?parent=DOCUMENT&sourceId=810394.1&attachid=810394.1:LTP&clickstream=yes

Please make sure that the Orion tool only interacts with raw disks provided by the infrastructure team. If we use the write option and point Orion at an ASM disk, it will corrupt the ASM disk. 
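As a rough pre-flight check, you can look for the "ORCLDISK" marker that ASM writes near the start of a disk header before handing a device to Orion for write tests. This is only a sketch (the helper name is mine, and the absence of the marker does not by itself prove a device is safe to overwrite); always confirm with the infrastructure team:

```shell
#!/bin/sh
# Hypothetical safety check before Orion write tests (sketch only):
# ASM disk headers carry an "ORCLDISK" marker in the first block,
# so refuse any candidate device/file whose first block contains it.
is_asm_disk() {
  # read the first 4KB of the device/file and look for the ASM header marker
  dd if="$1" bs=4096 count=1 2>/dev/null | grep -q "ORCLDISK"
}

for dev in "$@"; do
  if is_asm_disk "$dev"; then
    echo "$dev: ORCLDISK marker found - do NOT use for Orion write tests"
  else
    echo "$dev: no ASM marker found"
  fi
done
```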



#################################
ORION
#################################

Oracle Orion tool:

1) For Predicting the performance of an Oracle database without having to install Oracle or create a database
2) Orion for Oracle Administrators: Oracle administrators can use Orion to evaluate and compare different storage arrays, based on the expected
workloads.
3) Unlike other I/O calibration tools, Oracle Orion is expressly designed for simulating Oracle database I/O workloads using the same
   I/O software stack as Oracle.
4)  Orion can also simulate the effect of striping performed by Oracle Automatic Storage Management
5) Orion can run tests using different I/O loads to measure performance metrics such as MBPS, IOPS, and I/O latency. Run Orion when the storage is idle.

ORION (ORacle IO Numbers) mimics the type of I/O performed by Oracle databases, 
which allows you to measure I/O performance for storage systems without actually installing Oracle.
It used to be available to download for a number of platforms from OTN, but that download is no longer available. 
Instead, it is included in the "$ORACLE_HOME/bin" directory of Grid Infrastructure (GI) and database installations.
Doing the run using the "normal" option can take a long time, so you may want to try using the "simple" option first.

Orion Command Line Samples
The following provides sample Orion commands for different types of I/O workloads:
To evaluate storage for an OLTP database:
-run oltp
To evaluate storage for a data warehouse:
-run dss
For a basic set of data:
-run normal
To understand your storage performance with read-only, small and large random I/O workload:
$ orion -run simple 
To understand your storage performance with a mixed small and large random I/O workload:
$ orion -run normal 
To generate combinations of 32KB and 1MB reads to random locations:
$ orion -run advanced -size_small 32 -size_large 1024 -type rand -matrix detailed
To generate multiple sequential 1 MB write streams, simulating 1 MB RAID-0 stripes:
$ orion -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0




Create a file to hold the LUN configuration. In this case I will call my test “test”, so my LUN configuration file must be called “test.lun”. It is assumed it is present in the current directory. The file should contain a list of the luns used in the test.
# cat test.lun
/luns/lun1
/luns/lun2
/luns/lun3
/luns/lun4
If you don’t specify a test name, the utility assumes the test is called “orion” and looks for the presence of the “orion.lun” file.
# $ORACLE_HOME/bin/orion -run normal -testname test




#################################
Calibrate I/O 
#################################


Introduced in Oracle Database 11g Release 1, the CALIBRATE_IO procedure gives an idea of the capabilities of the storage system from within Oracle. 
There are a few restrictions associated with the procedure.
The procedure must be called by a user with the SYSDBA privilege.
TIMED_STATISTICS must be set to TRUE, which is the default when STATISTICS_LEVEL is set to TYPICAL.
Datafiles must be accessed using asynchronous I/O. This is the default when ASM is used.


SELECT d.name,
       i.asynch_io
FROM   v$datafile d,
       v$iostat_file i
WHERE  d.file# = i.file_no
AND    i.filetype_name  = 'Data File';



SET SERVEROUTPUT ON
DECLARE
  l_latency  PLS_INTEGER;
  l_iops     PLS_INTEGER;
  l_mbps     PLS_INTEGER;
BEGIN
   DBMS_RESOURCE_MANAGER.calibrate_io (num_physical_disks => 1, 
                                       max_latency        => 20,
                                       max_iops           => l_iops,
                                       max_mbps           => l_mbps,
                                       actual_latency     => l_latency);
 
  DBMS_OUTPUT.put_line('Max IOPS = ' || l_iops);
  DBMS_OUTPUT.put_line('Max MBPS = ' || l_mbps);
  DBMS_OUTPUT.put_line('Latency  = ' || l_latency);
END;
/






Calibration runs can be monitored using the V$IO_CALIBRATION_STATUS view.
SET LINESIZE 100
COLUMN start_time FORMAT A20
COLUMN end_time FORMAT A20
SELECT TO_CHAR(start_time, 'DD-MON-YYYY HH24:MI:SS') AS start_time,
       TO_CHAR(end_time, 'DD-MON-YYYY HH24:MI:SS') AS end_time,
       max_iops,
       max_mbps,
       max_pmbps,
       latency,
       num_physical_disks AS disks
FROM   dba_rsrc_io_calibrate;






#################################
Using system views 
#################################



DECLARE
  run_duration NUMBER := 3600;
  capture_gap  NUMBER := 5;
  loop_count   NUMBER := run_duration/capture_gap;
  rdio NUMBER;
  wtio NUMBER;
  prev_rdio NUMBER := 0;
  prev_wtio NUMBER := 0;
  rdbt NUMBER;
  wtbt NUMBER;
  prev_rdbt NUMBER := 0;
  prev_wtbt NUMBER := 0;
BEGIN
  FOR i IN 1..loop_count LOOP
    SELECT SUM(value) INTO rdio FROM gv$sysstat
    WHERE name = 'physical read total IO requests';
    SELECT SUM(value) INTO wtio FROM gv$sysstat
    WHERE name = 'physical write total IO requests';
    SELECT SUM(value) * 0.000008 INTO rdbt FROM gv$sysstat
    WHERE name = 'physical read total bytes';
    SELECT SUM(value) * 0.000008 INTO wtbt FROM gv$sysstat
    WHERE name = 'physical write total bytes';
    IF i > 1 THEN
      INSERT INTO peak_iops_measurement (capture_timestamp,
        total_read_io, total_write_io, total_io, total_read_bytes,
        total_write_bytes, total_bytes)
      VALUES (SYSDATE,
        (rdio-prev_rdio)/capture_gap,
        (wtio-prev_wtio)/capture_gap,
        ((rdio-prev_rdio)/capture_gap)+((wtio-prev_wtio)/capture_gap),
        (rdbt-prev_rdbt)/capture_gap,
        (wtbt-prev_wtbt)/capture_gap,
        ((rdbt-prev_rdbt)/capture_gap)+((wtbt-prev_wtbt)/capture_gap));
    END IF;
    prev_rdio := rdio;
    prev_wtio := wtio;
    prev_rdbt := rdbt;
    prev_wtbt := wtbt;
    DBMS_LOCK.SLEEP(capture_gap);
  END LOOP;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
END;
/
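The block above inserts into a peak_iops_measurement table that must already exist; a minimal definition matching the insert column list (the table name and columns are taken from the block itself) could be:

```sql
CREATE TABLE peak_iops_measurement (
  capture_timestamp  DATE,
  total_read_io      NUMBER,
  total_write_io     NUMBER,
  total_io           NUMBER,
  total_read_bytes   NUMBER,
  total_write_bytes  NUMBER,
  total_bytes        NUMBER
);
```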



SELECT SUM(value) , name  from gv$sysstat  WHERE name like 'physical read%'  or name like 'physical write%' group by name  ;
 


#################################
SLOB
#################################



Saturday, October 23, 2021

Oracle Database Smon recovery -- Disable , enable and Tuning Rollback

 

Most of the time when we face performance issues, we forget to check whether a rollback is ongoing. However, we come across many situations where an ongoing rollback impacts database performance.

We can speed up or slow down the rollback depending on business hours.

If the undo tablespace is used up and rollback is slow, we can add a new undo tablespace and change the default undo tablespace.

If SMON recovery is causing performance issues, we may opt to disable SMON recovery temporarily and re-enable it after business hours. Similarly, we can speed up or slow down SMON recovery.

The message 'Waiting for SMON to disable tx recovery' will be posted in the alert log as well.

After killing the shadow process or aborting the database, the v$transaction entry is lost, so you cannot estimate how the rollback is proceeding by examining v$transaction.used_ublk.




How to check SMON recovery:


set linesize 100
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';

  select ktuxeusn, to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta
   from x$ktuxe
   where ktuxecfl = 'DEAD';
  
 
select sql_text
from v$sqlarea
where address = (select sql_address from v$session where sid in (select SID 
from v$transaction,dba_rollback_segs,v$session        
where SADDR=SES_ADDR and
      XIDUSN=SEGMENT_ID and
      flag=7811));
 
 


 set linesize 100 
  alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'; 
  select usn, state, undoblockstotal "Total", undoblocksdone "Done", undoblockstotal-undoblocksdone "ToDo", 
             decode(cputime,0,'unknown',sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400)) 
              "Estimated time to complete" 
   from v$fast_start_transactions; 



SELECT r.NAME "RB Segment Name", dba_seg.size_mb,
DECODE(TRUNC(SYSDATE - LOGON_TIME), 0, NULL, TRUNC(SYSDATE - LOGON_TIME) || ' Days' || ' + ') ||
TO_CHAR(TO_DATE(TRUNC(MOD(SYSDATE-LOGON_TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') LOGON,
v$session.SID, v$session.SERIAL#, p.SPID, v$session.process,
v$session.USERNAME, v$session.STATUS, v$session.OSUSER, v$session.MACHINE,
v$session.PROGRAM, v$session.module, action
FROM v$lock l, v$process p, v$rollname r, v$session,
(SELECT segment_name, ROUND(bytes/(1024*1024),2) size_mb FROM dba_segments
WHERE segment_type = 'TYPE2 UNDO' ORDER BY bytes DESC) dba_seg
WHERE l.SID = p.pid(+) AND
v$session.SID = l.SID AND
TRUNC (l.id1(+)/65536)=r.usn AND
l.TYPE(+) = 'TX' AND
l.lmode(+) = 6
AND r.NAME = dba_seg.segment_name
--AND v$session.username = 'SYSTEM'
--AND status = 'INACTIVE'
ORDER BY size_mb DESC;


SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';





SELECT state,
       UNDOBLOCKSDONE,
       UNDOBLOCKSTOTAL,
       UNDOBLOCKSDONE/UNDOBLOCKSTOTAL*100
FROM gv$fast_start_transactions;

ALTER SESSION
SET nls_date_format='dd-mon-yyyy hh24:mi:ss';

SELECT usn,
       state,
       undoblockstotal "Total",
       undoblocksdone "Done",
       undoblockstotal-undoblocksdone "ToDo",
       decode(cputime, 0, 'unknown', sysdate+(((undoblockstotal-undoblocksdone) / (undoblocksdone / cputime)) / 86400)) "Estimated time to complete"
FROM v$fast_start_transactions;

SELECT a.sid,
       a.username,
       b.xidusn,
       b.used_urec,
       b.used_ublk
FROM v$session a,
     v$transaction b
WHERE a.saddr=b.ses_addr
ORDER BY 5 DESC;

Disable parallel rollback / smon recovery
 
-- to check if parallel smon recovery is in progress
select * from v$fast_start_servers;  
select pid, spid from v$process where pid in ( select pid from v$fast_start_servers);
 

-- find the PID of the main SMON process (this process must not be killed)
select pid, program from v$process where program like '%SMON%';

-- disable SMON transaction rollback/recovery (setorapid takes the SMON PID found above; 10 is an example)
oradebug setorapid 10
oradebug event 10513 trace name context forever, level 2

-- kill parallel  smon processes if its exists
select 'alter system kill session '''||s.sid||','||s.serial#||''' immediate;', 'kill -9 '||spid
from v$session s, (select * from v$process
 where pid in (select pid from v$fast_start_servers)) p
 where s.paddr=p.addr;
 
-- disable parallel smon
alter system set fast_start_parallel_rollback=false;  
select * from v$fast_start_servers;
select pid from v$fast_start_servers ;
show parameter fast
 
-- enable  SMON transaction rollback/recovery
oradebug event 10513 trace name context off ; 
 


 
Speed up smon recovery
 
1)
select * from v$fast_start_servers;
If all the processes are in state RECOVERING, then you can benefit from adding more processes:
  
SQL> alter system set fast_start_parallel_rollback = high;
 
This will create as many parallel servers as 4 times the number of CPUs.
 
 
2)
Increase the parameter '_cleanup_rollback_entries' to 400. The default is 100.
This parameter cannot be changed dynamically, so the database will need to be restarted for the change to take effect. As outlined above, this is not advisable and should basically be done only when there is no alternative, and only at the suggestion of Oracle Support.
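Because the parameter is static, the change has to go into the spfile and only takes effect after a restart; a sketch (again, to be run only at Oracle Support's direction):

```sql
ALTER SYSTEM SET "_cleanup_rollback_entries" = 400 SCOPE = SPFILE;
-- restart the database for the new value to take effect
```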

 


References :

 Note 238507.1: How to Disable Parallel Transaction Recovery When Parallel Txn Recovery is Active 
Note 144332.1   Parallel Rollback may hang database, Parallel query servers get 100% cpu

Tuesday, October 19, 2021

How to Install and Configure ASM Filter Driver -- For Oracle Database


Starting with Oracle 12.1.0.2, ASMFD (ASM Filter Driver) is available to replace udev rules for binding ASM disk devices. It also has the ability to filter out invalid I/O operations.


Check that the operating system version supports ASMFD, You can use the following code :

acfsdriverstate -orahome $ORACLE_HOME supported


In Red Hat or CentOS 7.4 or above, you need to upgrade kmod to enable AFD.


If we  are planning to Upgrade   from Asmlib to Afd   we can  refer  (Doc ID 2172754.1)


 

Installation Steps : 


Step  1> Stop Crs


Step 2> Configure AFD (ASM Filter Driver)


 To check whether the operating system version supports ASMFD, you can use the following code:
 
acfsdriverstate -orahome $ORACLE_HOME supported


  asmcmd afd_configure



Step 3> Configure Disk Discovery for AFD

Modify the below mentioned files as shown

 cat /etc/afd.conf
afd_diskstring='/dev/xvd*'
afd_filtering=enable

 cat /etc/oracleafd.conf
afd_diskstring='/dev/xvd*'
afd_filtering=enable
At this point AFD is configured you can verify the status as below

  asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'wdtest05'
I would recommend to restart acfs before you proceed

 acfsload stop

 acfsload start

 lsmod | grep acfs



Or by using dsset  


[root@rac1 ~]# asmcmd dsset '/dev/asm*','AFD:*'

[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget   

parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*




Step 5> Label the Disks using AFD to be used by ASM

#asmcmd afd_label DATA01 /dev/xvhg
#asmcmd afd_label FRA01 /dev/xvhh
#asmcmd afd_label REDO01 /dev/xvhi
#asmcmd afd_lsdsk

asmcmd afd_dsget 





Checking Storage Multipath information :



[root@rac1 yum.repos.d]# ll /dev/mapper/mpath*

lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathc -> ../dm-1
lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathd -> ../dm-0



The multipath devices mpathc and mpathd are used here

[root@rac2 ~]# multipath -ll

mpathd (14f504e46494c45526147693538302d577037452d39596459) dm-1 OPNFILER,VIRTUAL-DISK    
size=30G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 34:0:0:1 sdc 8:32 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 35:0:0:1 sde 8:64 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 36:0:0:1 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 37:0:0:1 sdi 8:128 active ready running
mpathc (14f504e46494c45524f444c7844412d717a557a2d6b7a6752) dm-0 OPNFILER,VIRTUAL-DISK    
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 34:0:0:0 sdb 8:16 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 35:0:0:0 sdd 8:48 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 36:0:0:0 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 37:0:0:0 sdh 8:112 active ready running





Moving a disk from one diskgroup to another when using ASM Filter Driver and multipath: 

1) Drop disk  from asm  ( example ASM_DISKGROUP1_11  ) 
2)  afdtool -delete  ASM_DISKGROUP1_11    ( on one node by  Unix team using root ) 
3)  afdboot -scandisk  / -rescan   ( on all nodes  by  Unix team using root ) 
4)  remap ASM_DISKGROUP1_11 to  new disk name in  /etc/multipath.conf  , relabel  disk  and reload       multipath      ( on all nodes  by  Unix team using root ) 

To reload the multipath service:     service multipathd restart 

5)  Wait for disk to be visible in /dev/mapper on all nodes   
    afdtool -add /dev/mapper/ASM_DISKGROUP1_12  ( on one node by  Unix team using root ) 
6)  afdboot -scandisk /  -rescan   ( on all nodes  by  Unix team using root )   
    afdtool -getdevlist 
7) asmcmd afd_lsdsk





Known Issue 1 ) 

AFD: AFD Is Not Loaded Automatically After Node Reboot Due To Incorrect Dependencies (Doc ID 2724726.1)
Bug 31771370 - INCORRECT OHASD/AFD SERVICE DEPENDENCIES AT OS LEVEL AFTER OS UPDATE TO LINUX 7.8 AFFECTING AFD AND/OR CLUSTERWARE STARTUP 


The solution we applied was to rescan using afd_scan:

. oraenv 
+ASM4

crsctl stop crs -f 
asmcmd afd_state 
acfsload stop  # stop acfs driver stack 
afdload stop # stop AFD driver  
asmcmd afd_scan # scan the devices 
acfsload start # start acfs driver stack 
asmcmd afd_lsdsk  # list asm disks
asmcmd afd_filter -e   # enable ASM filter 
asmcmd afd_state 
crsctl start crs -wait   # start crs 

$GRID_HOME/bin/afdload start # start AFD driver  
/usr/bin/afdboot -scandisk 
$GRID_HOME/bin/crsctl stop crs -f 
$GRID_HOME/bin/crsctl start crs 

The same issue was faced after a server migration; there we also executed the command below before enabling the ASM filter:

asmcmd afd_configure 




References : 

https://docs.oracle.com/database/121/OSTMG/GUID-302F7233-B905-4D1E-A1AD-9A00EDC6A6F3.htm#OSTMG95732

ASMFD : How to Migrate ASM Diskgroups from ASMLIB to ASMFD (ASM Filter Driver) on Oracle Grid Infrastructure (RAC) (Doc ID 2172754.1)

Friday, October 15, 2021

Oracle _fix_control and DBMS_OPTIM_BUNDLE



_FIX_CONTROL is a special hidden dynamic parameter introduced in 10.2.0.2 which can be used to enable or disable certain bug fixes. This is typically used in the optimizer area to allow specific fixes to be turned on or off, as some fixes can have a significant effect on execution plans.
Setting the value parameter to 1 means the bug fix is activated.

When you upgrade the Oracle database version, you may face a problem where the CBO changes its behavior.

 
To enable:
"_fix_control"='Bugno:ON'    (OR)   "_fix_control"="Bugno:1"

To disable:
"_fix_control"='Bugno:OFF'  (OR)   "_fix_control"="Bugno:0"


Eg
alter system set "_fix_control"='27268249:0';   --> Disable 


For _FIX_CONTROL to work, several conditions must be met:

1) The patch that is referenced must have the option to use _FIX_CONTROL. _FIX_CONTROL can't be used to back out an arbitrary patch; the patch (usually an optimizer patch) has to be enabled to use the _FIX_CONTROL parameter.

2)  The patch must be installed and visible in the V$SYSTEM_FIX_CONTROL view.  To check this:
SQL>  SELECT * FROM V$SYSTEM_FIX_CONTROL;


Note: To determine which bug fixes have been altered one can select from the fixed views GV$SYSTEM_FIX_CONTROL, GV$SESSION_FIX_CONTROL or their V$ counterparts.
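For example, to check whether a particular fix is present and how it is currently set (the bug number here is the one used in the example above):

```sql
SELECT bugno, value, description
FROM   v$system_fix_control
WHERE  bugno = 27268249;
```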



Handling _fix_control    at session level  . 


SQL> alter session set "_fix_control"='4728348:OFF';
Session altered.
SQL> select * from v$session_fix_control where session_id=143 and bugno=4728348;



Handling _fix_control   using Hints 

/*+ OPT_PARAM('_fix_control' '9195582:0') */



Handling _FIX_CONTROL using DBMS_OPTIM_BUNDLE

This package was created to manage (enable/disable) optimizer fixes provided as part of PSUs/bundles. Optimizer fixes are those provided as part of a bundle which have a fix-control and can possibly cause a plan change.

This package has existed in some previous versions of the database and was most recently dropped in 19.3, which is why it was dropped from the Library. Oracle reintroduced it in 19.4, which the Library did not research, so from our perspective it is "new" again in 20c.


dbms_optim_bundle.enable_optim_fixes(
action                     IN VARCHAR2 DEFAULT 'OFF',
scope                      IN VARCHAR2 DEFAULT 'MEMORY',
current_setting_precedence IN VARCHAR2 DEFAULT 'YES');


DBMS_OPTIM_BUNDLE.ENABLE_OPTIM_FIXES(
     action                     => 'ON' | 'OFF' ,
     scope                      => 'MEMORY' | 'SPFILE' | 'BOTH' | 'INITORA' ,
     current_setting_precedence => 'YES' | 'NO' )


exec dbms_optim_bundle.enable_optim_fixes('ON','MEMORY', 'NO');
exec dbms_optim_bundle.enable_optim_fixes('ON','BOTH', 'YES');
exec dbms_optim_bundle.enable_optim_fixes;
exec dbms_optim_bundle.enable_optim_fixes('ON', 'INITORA');
execute dbms_optim_bundle.enable_optim_fixes('OFF','BOTH','NO');

set serveroutput on;
execute dbms_optim_bundle.getBugsforBundle;
exec dbms_optim_bundle.listbundleswithfcfixes;
exec dbms_optim_bundle.getBugsForBundle(170718);



From Oracle 19.12.0 the API got extended. We can have one fix set to disabled and the rest set to enabled using the same command:

SQL> exec dbms_optim_bundle.set_fix_controls('27268249:0','*', 'BOTH','YES');




References : 
How to use the _FIX_CONTROL hidden parameter (Doc ID 827984.1)
MOS Note:2147007.1 – Automatic Fix Control Persistence (FCP) for Database Proactive Bundle Patch
 


Saturday, October 2, 2021

Oracle Rac database Cloning Shell Script .

 
We recently had a requirement to write a script to clone a RAC database on the same server with a different database name. Below is the script I prepared.

In the script I am resetting the source database password on each execution. If we don't want to change the password, we can keep the same password and encrypt it in the script. The blog below describes how to encrypt the password:
https://abdul-hafeez-kalsekar.blogspot.com/2021/10/oracle-password-encrypt-for-shell.html

Reference : STEP BY STEP RMAN DUPLICATE Database From RAC ASM To RAC ASM (Doc ID 1913937.1)

##########################################################
#Script clone_auto.sh to clone database on same server   #
#Author: Abdul Hafeez Kalsekar                           #
#Date  : 29th August 2021                                #

#Usage : Need to pass five arguments to run the script
#clone_auto.sh Target_Database_Instance_Name Target_Database_Name ASM_Instance_Name Source_Database_Instance_Name Source_Database_Name
#clone_auto.sh CLONE1 CLONE +ASM1 RMAN1 RMAN

# Before running cloning we need to ensure we have a backup pfile for the new clone database
#     `echo $DATABASE_HOME`/dbs/`echo $INSTANCE_NAME`_non_rac.ora

# In case we lose the pfile for the clone database, we can create a pfile from the spfile of the source database and change the below parameters
# DB_FILE_NAME_CONVERT / LOG_FILE_NAME_CONVERT / db name / db unique name / controlfile, local_listener, huge pages, cluster_database

# Below is the sequence of the script
# 1) Unlock sys account in source
# 2) Drop existing clone database
# 3) Modify new parameters in clone spfile
# 4) Copy spfile and password file to ASM
# 5) Lock sys account in source database

# Before we start cloning, the below steps were already completed
# 1) Create diskgroup for clone database
# 2) Create parameter file for clone database for cloning
# 3) Create password file for new clone database
# 4) Create tns entry for new database
# 5) Create new listener for auxiliary instance
# 6) Added entry for clone database in oratab
# 7) Encrypt password
# 8) Do manual cloning once and add database to OCR, and
#    check srvctl config database to verify the password file and spfile point to the desired locations as mentioned in the script
#Below are common issues faced 
#RMAN-04006 ORA-01017 when connecting to auxiliary database on Standby From Primary (Doc ID 2445873.1)
#RMAN Active Duplication Fails With Ora-17629, Ora-17627 Errors (Doc ID 2119741.1)
# Below was used to add new listener and add manual entry to  listener.ora under grid home 
#srvctl add listener -l LISTENER_DBNAME   -p TCP:4191 -s
#SID_LIST_LISTENER_DBNAME =
#  (SID_LIST =
#    (SID_DESC =
#      (GLOBAL_DBNAME=DB_NAME )
#      (ORACLE_HOME =  /u01/app/oracle/product/19c/dbhome_1  )
#      (SID_NAME = Instance_name)
#    )
#  )
#Below  entry   was added to  database   home    tnsnames.ora
#Instance_name  =
#   (DESCRIPTION =
#     (ADDRESS_LIST =
#       (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.example.com)(PORT = 1521))
#    )
#     (CONNECT_DATA =
#       (SID =  Instance_name)
#     )
#   )
############################ Start of actual script  ###############################
set -e
INSTANCE_NAME=$1
DBNAME=$2
ASM_INSTANCE_NAME=$3
SOURCEINSTANCE=$4
SOURCEDATABASE=$5
export DATABASE_HOME=`cat /etc/oratab | grep -w $INSTANCE_NAME  | cut -d ":" -f 2`
export ASM_HOME=`cat /etc/oratab | grep -w $ASM_INSTANCE_NAME | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:.
export PWFILE=$ORACLE_HOME/dbs/orapw`echo $INSTANCE_NAME`_clone_donotdelete
date=$(date +'%a_%m_%d_%Y')
export LOG_FILE=/u01/app/oracle/log/clone_logs/`echo $INSTANCE_NAME`_clone_$date.log
export RMAN_LOG=/u01/app/oracle/log/clone_logs/`echo $INSTANCE_NAME`_RMAN_$date.log

export LOG_DIRECTORY='/u01/app/oracle/log/clone_logs/'
if [ -d $LOG_DIRECTORY ]
then
    echo "Directory already exists"
else
    mkdir -p  $LOG_DIRECTORY
    chmod 744 $LOG_DIRECTORY
fi

if [ -f "$LOG_FILE" ] ; then
    mv "$LOG_FILE" "$LOG_FILE".old
fi
if [ -f "$RMAN_LOG" ] ; then
    mv "$RMAN_LOG" "$RMAN_LOG".old
fi

####################
export ORACLE_SID=$SOURCEINSTANCE
export ORACLE_HOME=`cat /etc/oratab | grep -w $SOURCEINSTANCE  | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
$ORACLE_HOME/bin/sqlplus "/as sysdba"  << EOF  >> $LOG_FILE 
alter user sys IDENTIFIED BY Pass#`echo $date`  account unlock  ; 
exit 
EOF

if [ -f "$ORACLE_HOME/dbs/orapw`echo $SOURCEDATABASE`" ] ; then
    rm $ORACLE_HOME/dbs/orapw`echo $SOURCEDATABASE`
fi
orapwd file=$ORACLE_HOME/dbs/orapw`echo $SOURCEDATABASE` password=Pass#`echo $date`    >> $LOG_FILE 
####################
export ORACLE_SID=$ASM_INSTANCE_NAME
export ORACLE_HOME=`cat /etc/oratab | grep -w $ASM_INSTANCE_NAME  | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
set +e
$ASM_HOME/bin/asmcmd  ls +`echo $SOURCEDATABASE`_DATADG/`echo $SOURCEDATABASE`/PASSWORD/orapw`echo $SOURCEDATABASE`  
if [ $? -eq 0 ]
then
 $ASM_HOME/bin/asmcmd  rm +`echo $SOURCEDATABASE`_DATADG/`echo $SOURCEDATABASE`/PASSWORD/orapw`echo $SOURCEDATABASE`  
fi
$ASM_HOME/bin/asmcmd  ls +`echo $SOURCEDATABASE`_DATADG/`echo $SOURCEDATABASE`/PASSWORD/
if [ $? -ne 0 ]
then
$ASM_HOME/bin/asmcmd mkdir +`echo $SOURCEDATABASE`_DATADG/`echo $SOURCEDATABASE`/PASSWORD/
fi
$ASM_HOME/bin/asmcmd  cp   `echo $DATABASE_HOME`/dbs/orapw`echo $SOURCEDATABASE`  +`echo $SOURCEDATABASE`_DATADG/`echo $SOURCEDATABASE`/PASSWORD/orapw`echo $SOURCEDATABASE`
set -e
export ORACLE_SID=$SOURCEINSTANCE
export ORACLE_HOME=`cat /etc/oratab | grep -w $SOURCEINSTANCE  | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
if [ -f "$ORACLE_HOME/dbs/orapw`echo $SOURCEDATABASE`" ] ; then
    rm $ORACLE_HOME/dbs/orapw`echo $SOURCEDATABASE`
fi
 
####################
if [ `ps -ef | grep pmon | grep -i $INSTANCE_NAME | grep -v grep | wc -l` -eq 1 ]; then
export ORACLE_SID=$INSTANCE_NAME
export ORACLE_HOME=`cat /etc/oratab | grep -w $INSTANCE_NAME | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib/
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
$ORACLE_HOME/bin/sqlplus "/as sysdba"     << EOF >> $LOG_FILE
alter system set cluster_database=false scope=spfile sid='*' ;
exit 
EOF
$ORACLE_HOME/bin/srvctl stop database -db  $DBNAME  >> $LOG_FILE
$ORACLE_HOME/bin/sqlplus "/as sysdba"     << EOF >> $LOG_FILE
startup mount restrict ;
drop database ; 
exit 
EOF
$ORACLE_HOME/bin/srvctl remove database -db `echo $DBNAME`   << EOF  >> $LOG_FILE
y
EOF
fi 

export ORACLE_SID=$INSTANCE_NAME
export ORACLE_HOME=`cat /etc/oratab | grep -w $INSTANCE_NAME | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib/
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
if [ -f "$ORACLE_HOME/dbs/orapw`echo $INSTANCE_NAME`" ] ; then
    rm $ORACLE_HOME/dbs/orapw`echo $INSTANCE_NAME`
fi
orapwd file=$ORACLE_HOME/dbs/orapw`echo $INSTANCE_NAME` password=Pass#`echo $date`    >> $LOG_FILE 
$ORACLE_HOME/bin/sqlplus "/as sysdba"    << EOF >> $LOG_FILE
startup nomount pfile='${ORACLE_HOME}/dbs/init`echo $INSTANCE_NAME`_non_rac.ora' ;  
exit 
EOF
echo "cloning started, please refer to cloning log $RMAN_LOG" >> $LOG_FILE
if [ -f ${INSTANCE_NAME}_clone.sql  ] ; then
    rm ${INSTANCE_NAME}_clone.sql 
fi
 
echo " run " >> `echo $INSTANCE_NAME`_clone.sql 
echo " { " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate channel src1 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate channel src2 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate channel src3 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate channel src4 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate auxiliary channel aux1 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate auxiliary channel aux2 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate auxiliary channel aux3 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate auxiliary channel aux4 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate auxiliary channel aux5 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " allocate auxiliary channel aux6 type disk; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " set newname for database to '+`echo $DBNAME`_DATADG'; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " duplicate target database to '`echo $DBNAME`' from active database NOFILENAMECHECK; " >> `echo $INSTANCE_NAME`_clone.sql 
echo " } " >> `echo $INSTANCE_NAME`_clone.sql 
$ORACLE_HOME/bin/rman target  sys/Pass#`echo $date`@$SOURCEINSTANCE   auxiliary sys/Pass#`echo $date`@$INSTANCE_NAME  log=$RMAN_LOG    cmdfile=`echo $INSTANCE_NAME`_clone.sql   << EOF >> $LOG_FILE
EOF
echo "cloning completed" >> $LOG_FILE
if [ -f ${INSTANCE_NAME}_postclone_parameters.sql ] ; then
    rm ${INSTANCE_NAME}_postclone_parameters.sql
fi
echo " alter system set cluster_database=true scope=spfile sid='*';  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system reset  undo_tablespace scope=spfile sid='*' ; "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system set undo_tablespace=UNDOTBS2  scope=spfile sid='`echo $DBNAME`2'  ;  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system set undo_tablespace=UNDOTBS1  scope=spfile sid='`echo $DBNAME`1'  ;  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system reset instance_number scope=spfile sid='*' ; "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system set instance_number=1 scope=spfile sid='`echo $DBNAME`1'  ;  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system set instance_number=2 scope=spfile sid='`echo $DBNAME`2'  ;  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system reset thread scope=spfile sid='*' ;     "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system set thread=1  scope=spfile sid='`echo $DBNAME`1'  ;  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
echo " alter system set thread=2  scope=spfile sid='`echo $DBNAME`2'  ;  "      >>    `echo $INSTANCE_NAME`_postclone_parameters.sql 
$ORACLE_HOME/bin/sqlplus "/as sysdba"  << EOF >> $LOG_FILE
@`echo $INSTANCE_NAME`_postclone_parameters.sql     
shutdown immediate 
exit 
EOF
####################
export ORACLE_SID=$ASM_INSTANCE_NAME
export ORACLE_HOME=`cat /etc/oratab | grep -w $ASM_INSTANCE_NAME  | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
set +e
$ASM_HOME/bin/asmcmd  ls +`echo $DBNAME`_DATADG/`echo $DBNAME`/PASSWORD/orapw`echo $DBNAME`
if [ $? -eq 0 ]
then
 $ASM_HOME/bin/asmcmd  rm +`echo $DBNAME`_DATADG/`echo $DBNAME`/PASSWORD/orapw`echo $DBNAME`
fi
set -e


set +e 
$ASM_HOME/bin/asmcmd  ls +`echo $DBNAME`_DATADG/`echo $DBNAME`/PARAMETERFILE/spfile`echo $DBNAME`.ora
if [ $? -eq 0 ]
then
 $ASM_HOME/bin/asmcmd  rm +`echo $DBNAME`_DATADG/`echo $DBNAME`/PARAMETERFILE/spfile`echo $DBNAME`.ora
fi
set -e

set +e 
$ASM_HOME/bin/asmcmd  ls +`echo $DBNAME`_DATADG/`echo $DBNAME`/PASSWORD/
if [ $? -ne 0 ]
then
$ASM_HOME/bin/asmcmd mkdir +`echo $DBNAME`_DATADG/`echo $DBNAME`/PASSWORD/
fi
$ASM_HOME/bin/asmcmd  ls  +`echo $DBNAME`_DATADG/`echo $DBNAME`/PARAMETERFILE/
if [ $? -ne 0 ]
then
$ASM_HOME/bin/asmcmd mkdir +`echo $DBNAME`_DATADG/`echo $DBNAME`/PARAMETERFILE/
fi 
set -e
$ASM_HOME/bin/asmcmd cp `echo $DATABASE_HOME`/dbs/spfile`echo $INSTANCE_NAME`.ora  +`echo $DBNAME`_DATADG/`echo $DBNAME`/PARAMETERFILE/spfile`echo $DBNAME`.ora
$ASM_HOME/bin/asmcmd cp `echo $DATABASE_HOME`/dbs/orapw`echo $INSTANCE_NAME`   +`echo $DBNAME`_DATADG/`echo $DBNAME`/PASSWORD/orapw`echo $DBNAME`
 
if [ -f nodename.log ] ; then
    rm nodename.log
fi
$ASM_HOME/bin/olsnodes -n >> nodename.log
export ORACLE_SID=$INSTANCE_NAME
export ORACLE_HOME=`cat /etc/oratab | grep -w $INSTANCE_NAME  | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 

srvctl add database -db `echo $DBNAME` -oraclehome `echo $DATABASE_HOME`  -dbtype RAC  -spfile +`echo $DBNAME`_DATADG/`echo $DBNAME`/PARAMETERFILE/spfile`echo $DBNAME`.ora   -pwfile  +`echo $DBNAME`_DATADG/`echo $DBNAME`/PASSWORD/orapw`echo $DBNAME`   -dbname  `echo $DBNAME`   >>  $LOG_FILE 
cat nodename.log | while read LINE ;
do
    srvctl add instance -db `echo $DBNAME` -i `echo $DBNAME``echo ${LINE}|awk '{print $2}'` -n `echo ${LINE}|awk '{print $1}'`
done
srvctl start database -db `echo $DBNAME`  >>  $LOG_FILE
rm `echo $DATABASE_HOME`/dbs/spfile`echo $INSTANCE_NAME`.ora   >>  $LOG_FILE
rm `echo $DATABASE_HOME`/dbs/orapw`echo $INSTANCE_NAME`    >>  $LOG_FILE
export ORACLE_SID=$SOURCEINSTANCE
export ORACLE_HOME=`cat /etc/oratab | grep -w $SOURCEINSTANCE  | cut -d ":" -f 2`
export LD_LIBRARY_PATH=$ORACLE_HOME/lib 
export PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:$ORACLE_HOME/lib:. 
$ORACLE_HOME/bin/sqlplus "/as sysdba"  << EOF >>  $LOG_FILE
alter user sys  account lock password expire  ; 
exit 
EOF

###########  end of  script  #####################

Friday, October 1, 2021

Oracle Password Encryption for Shell Scripts -- Unix perl Command

 
We often face situations where we need to avoid keeping a plain-text password inside a shell script. Recently we faced this requirement, and below is how we stored an encoded password for a shell script. Note that perl's pack/unpack "H*" performs hexadecimal encoding, which is obfuscation rather than true encryption: anyone who can read the file can decode the password.

We need to retain a file containing the hexadecimal value. In our case it was /dbmonitoring/clone/clone.conf.


################ Encrypt Phase #################
echo ${pass} > /tmp/pass.lst
cat /tmp/pass.lst
Pass#Fri_09_24_2021
hex=`perl -e 'print unpack "H*",\`cat /tmp/pass.lst\`'`
  
echo $hex
50617373234672695f30395f32345f323032315f74617267657a
echo $hex > /dbmonitoring/clone/clone.conf
 
cat /dbmonitoring/clone/clone.conf
50617373234672695f30395f32345f323032315f74617267657a


################ Decrypt Phase #################
decrypt=`perl -e 'print pack "H*",\`cat /dbmonitoring/clone/clone.conf\`'`
echo $decrypt
Pass#Fri_09_24_2021 ▒
# ${decrypt:0:-2} trims the trailing bytes introduced by the newline that echo wrote during the encrypt phase
Password=${decrypt:0:-2}
echo $Password
Pass#Fri_09_24_2021

sqlplus sys/${Password}@CLONE1 as sysdba