Thursday, September 2, 2021

RMAN-20011: target database incarnation is not current in recovery catalog

 
After some research, we came across Note 412113.1 on http://metalink.oracle.com, which pointed towards the possibility of a non-prod instance having been registered in the RMAN catalog with the same DBID as production's.

In addition, Notes 1076158.6 and 1061206.6 also looked possibly relevant, but not convincingly so.

SQL> select db_key,DBID,name,current_incarnation from rc_database_incarnation order by 1;


So, in reality, what had happened was that a new incarnation of the DR PROD instance had been registered in the RMAN catalog.

To undo this, we reset the database incarnation in the catalog using RESET DATABASE TO INCARNATION <DBINC_KEY> and then did a manual resync of the catalog using the RMAN RESYNC CATALOG command.


If backups are failing due to this reason:

Reset PROD to previous incarnation
 1. Check the latest incarnation key for production database in rc_database_incarnation
  select DBID,NAME,DBINC_KEY,  RESETLOGS_CHANGE#, RESETLOGS_TIME   
  from rc_database_incarnation
  where dbid=2284119847;

       ==> Check the DBINC_KEY corresponding to the current incarnation of PROD database
    
2. Connect to the production database with RMAN
     rman catalog <un/pw@catalog_db> target /

     RMAN> reset database to incarnation <dbinc_key>; <<< From step 1
     Or 
     RMAN>    RESET DATABASE;
RMAN> unregister database;
   RMAN> register database;


    RMAN> resync catalog;
    RMAN> list incarnation; => Now the production DB incarnation should be the current one





Reference : 
Target Database Incarnation Is Not Current In Recovery Catalog RMAN-20011 (Doc ID 412113.1)
RMAN restore database fails with RMAN-20011, Target Incarnation Is Not Current (Doc ID 394493.1)

Tuesday, August 31, 2021

Oracle rename Asm diskgroup with DB files


There are basically two phases to the renamedg command: phase one generates a configuration file to be used by phase two, and in phase two the actual renaming of the diskgroup is done.


renamedg
[phase={ one|two |both } ] dgname=diskgroup
newdgname=newdiskgroup [config=configfile]
[ asm_diskstring=discoverystring, discoverystring ... ]
[ clean={true|false} ] [ check={true|false} ]
[ confirm={true|false}] [ verbose={ true|false} ]
[ keep_voting_files={true|false}]


1) Check the current configuration of the database attached to the diskgroup 

srvctl config database -d <DB_NAME>
select name from v$datafile; 


2) Stop/dismount diskgroup 

srvctl stop database -d <DB_NAME>
srvctl status diskgroup -g recoc1
srvctl stop diskgroup -g recoc1   


3) Rename diskgroup 

renamedg phase=both dgname=RECOC1 newdgname=RECO verbose=true keep_voting_files=true 

If the above command fails while searching for disks, then we need to include the ASM diskstring:

renamedg phase=both dgname=<OLD_DG_NAME> newdgname=<NEW_DG_NAME> verbose=true asm_diskstring='<discoverystring1>','<discoverystring2>'


SQL> alter diskgroup RECO mount restricted;
SQL> alter diskgroup RECO rename disks all ;

srvctl start diskgroup -g reco   
srvctl status diskgroup -g reco


4) Modify spfile and password file location for database to point to new asm diskgroup 

srvctl modify database -d <db_name> -p <spfile_path_with_new_diskgroup> -a "<diskgroup_list>"
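
For illustration, assuming the database is named ORCL and the renamed diskgroups are DATA and RECO (all names and paths here are hypothetical):

srvctl modify database -d ORCL -p +DATA/ORCL/spfileORCL.ora -a "DATA,RECO"
srvctl config database -d ORCL

For the password file location, newer srvctl releases also offer a -pwfile option; check "srvctl modify database -h" for your version.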


5)  Rename datafile location  in mount stage 

 select 'alter database rename file '''||name||''' to ''+NEWDG'||substr(name,instr(name,'/',1,1))||''';' from V$DATAFILE;

select 'alter database rename file '''||member||''' to ''+NEWDG'||substr(member,instr(member,'/',1,1))||''';' from V$LOGFILE;


The V$RECOVER_FILE view can be used to check for any datafiles that are not available.
If everything is fine, you can open the database. You may have noticed that I have not done anything for the temp files; checking the alert log, we can see that they are renamed automatically.
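
Before opening the database, a quick check (a minimal sketch) against V$RECOVER_FILE:

SQL> select file#, online_status, error from v$recover_file;
SQL> select name from v$datafile where file# in (select file# from v$recover_file);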


6)   Drop and recreate block change tracking file 
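
A hedged sketch, assuming the change tracking file should now be recreated in the renamed diskgroup (the diskgroup name is illustrative):

SQL> select status, filename from v$block_change_tracking;
SQL> alter database disable block change tracking;
SQL> alter database enable block change tracking using file '+DATA';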

7) Change the archive log location / FRA location / snapshot controlfile location if they are attached to this diskgroup 
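
Hedged examples of the kinds of changes step 7 refers to, assuming the FRA/archive destination and snapshot controlfile were on the renamed diskgroup (values are illustrative):

SQL>  alter system set db_recovery_file_dest='+RECO' scope=both sid='*';
SQL>  alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*';
RMAN> configure snapshot controlfile name to '+RECO/snapcf_<db_name>.f';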


References : 

How To Rename A Diskgroup On ASM 11gR2? (Doc ID 948040.1)
How to Change Database to Use a New Diskgroup After Diskgroup Rename (Doc ID 1359625.1)

Tuesday, August 17, 2021

Oracle -- Database Resident Connection Pooling (DRCP)

DRCP (introduced in 11g) is especially designed to help architectures such as PHP with the Apache server, which cannot take advantage of middle-tier connection pooling because they use multiprocess, single-threaded application servers. DRCP enables such applications to easily scale up to server connections in the tens of thousands.


DRCP is controlled by the following configuration parameters (a combined configuration example follows the list):

INACTIVITY_TIMEOUT: maximum idle time for a pooled server before it is terminated.

MAX_LIFETIME_SESSION: time-to-live (TTL) duration for a pooled session.

MAX_USE_SESSION: maximum number of times a connection can be taken and released to the pool.

MAXSIZE and MINSIZE: the maximum and minimum number of pooled servers in the connection pool.

INCRSIZE: the pool is incremented by this number of pooled servers when pooled servers are unavailable at application request time.

MAX_THINK_TIME: maximum time of inactivity by the client after getting a server from the pool. If the client does not issue a database call after grabbing a server from the pool, the client is forced to relinquish control of the pooled server and gets an error. The freed-up server may or may not be returned to the pool.

SESSION_CACHED_CURSORS: turns on SESSION_CACHED_CURSORS for all connections in the pool. This is an existing initialization parameter.
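
As a hedged sketch (not from the original post), most of these parameters can also be set in a single call with DBMS_CONNECTION_POOL.CONFIGURE_POOL; the values below are purely illustrative:

BEGIN
  DBMS_CONNECTION_POOL.CONFIGURE_POOL(
    pool_name              => 'SYS_DEFAULT_CONNECTION_POOL',
    minsize                => 4,        -- illustrative values only
    maxsize                => 40,
    incrsize               => 2,
    session_cached_cursors => 20,
    inactivity_timeout     => 300,
    max_think_time         => 600,
    max_use_session        => 500000,
    max_lifetime_session   => 86400);
END;
/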



/* Enabling and Disabling DRCP */

conn sys as sysdba
-- the pool remains open after a DB restart
exec dbms_connection_pool.start_pool();
select connection_pool, status, maxsize from dba_cpool_info;
exec dbms_connection_pool.stop_pool();

-- specify using DRCP
-- in EZCONNECT method (.Net 11g)
myhost.company.com:1521/mydb.company.com:POOLED

-- tnsnames
mydb = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myhost.company.com)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=mydb.company.com)(SERVER=POOLED))) 
/* Configuring DRCP */
begin
 DBMS_CONNECTION_POOL.ALTER_PARAM( PARAM_NAME =>'INACTIVITY_TIMEOUT',
PARAM_VALUE=>'3600');
end;
/
-- restore parameter values to their defaults
exec dbms_connection_pool.restore_defaults()
/* Monitor DRCP */
SELECT
STATUS,MINSIZE,MAXSIZE,INCRSIZE,SESSION_CACHED_CURSORS,INACTIVITY_TIMEOUT
FROM DBA_CPOOL_INFO;
SELECT NUM_OPEN_SERVERS, NUM_BUSY_SERVERS, NUM_REQUESTS, NUM_HITS,
 NUM_MISSES, NUM_WAITS, NUM_PURGED, HISTORIC_MAX
FROM V$CPOOL_STATS;

-- class-level stats
Select * From V$CPOOL_CC_STATS

Sunday, August 8, 2021

Oracle -- Open Cursor & Session Cached Cursors

 

This article mainly covers the importance of SESSION_CACHED_CURSORS for performance, and open cursors in general. Apart from SESSION_CACHED_CURSORS, the other things that control open cursors in the database are the CURSOR_SHARING parameter and how developers close cursors in their PL/SQL code.


OPEN_CURSORS 
specifies the maximum number of open cursors (handles to private SQL areas) a
session can have at once. You can use this parameter to prevent a session from opening an excessive
number of cursors. This parameter also constrains the size of the PL/SQL cursor cache, which PL/SQL
uses to avoid having to reparse as statements are re-executed by a user. 

SESSION_CACHED_CURSORS 
lets you specify the number of session cursors to cache. Repeated
parse calls of the same SQL statement cause the session cursor for that statement to be moved into the
session cursor cache. Subsequent parse calls will find the cursor in the cache and do not need to reopen
the cursor. 

If the session cursor cache count is maxed out, 'session cursor cache hits' is low compared to all parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS may help with latch contention and give a slight boost to performance. Note that if your application is not resubmitting the same queries for parsing repeatedly, then 'session cursor cache hits' will be low and the session cursor cache count may still be maxed out, but caching cursors by session will not help at all. For example, if your application is using a lot of unsharable SQL, raising this parameter will not help.
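
For reference, a hedged example of checking and adjusting both parameters (the values are purely illustrative):

SQL> show parameter open_cursors
SQL> show parameter session_cached_cursors
SQL> alter system set open_cursors=1000 scope=both sid='*';
-- session_cached_cursors is not modifiable online at system level; set it in the spfile and restart
SQL> alter system set session_cached_cursors=100 scope=spfile sid='*';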


When a SQL statement is issued, the server process, after checking its syntax and semantics, searches the library cache for an existing cursor for the SQL statement. If a cursor does not already exist, a new cursor is created (hard parse), else existing cursor is used (soft parse). Whereas hard parsing is a resource intensive operation, soft parse,  although less expensive, also incurs some cost, as the server process has to search the library cache for previously parsed SQL, which requires the use of the library cache and shared pool latches. Latches can often become points of contention for busy OLTP systems, thereby affecting response time and scalability.

To minimize the impact on performance, session cursors of repeatedly issued statements can be stored in the session cursor cache to reduce the cost of or even eliminate soft parse. This is called Session Cursor Caching. When session cursor caching is enabled, Oracle caches the cursor of a reentrant SQL statement in the session memory (PGA / UGA). As a result, the session cursor cache now contains a pointer into the library cache where the cursor existed when it was closed. Since presence of a cursor in session cursor cache guarantees the correctness of the corresponding SQL’s syntax and semantics, these checks are bypassed when the same SQL is resubmitted. Subsequently, instead of searching for the cursor in library cache, the server process follows the pointer in the session memory and uses the cursor after having confirmed its presence and validity.

Hence, if a closed cursor is found in the session cursor cache, it is registered as a 'session cached cursor hit' and also as a 'soft parse', since a visit to the shared SQL area must still be made to confirm its presence and validity.
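
As a small sketch, you can watch these statistics for your own session via V$MYSTAT (statistic names as used in the queries later in this post):

select n.name, s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name in ('parse count (total)', 'parse count (hard)',
                  'session cursor cache hits', 'session cursor cache count');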


Handling Cursors on the PL/SQL Side Using the Precompiler Option RELEASE_CURSOR=YES 

This is a different topic altogether, so I am not going into much detail here; it is mentioned only to point developers towards the cursor handling that can be done on the coding side.



Find out the session that is causing the error by using the following SQL statement:

select sid ,  count(*)  from v$open_cursor  group by sid order by 2  ;
select sql_id ,  count(*)  from v$open_cursor where sid=193  group by sql_id  ;

col hwm_open_cur format 99,999
col max_open_cur format 99,999
select 
   max(a.value) as hwm_open_cur, 
   p.value      as max_open_cur
from 
   v$sesstat a, 
   v$statname b, 
   v$parameter p
where 
   a.statistic# = b.statistic# 
and 
   b.name = 'opened cursors current'
and 
   p.name= 'open_cursors'
group by p.value;



select a.value, s.username, s.sid, s.serial# from v$sesstat a, v$statname b, v$session s where a.statistic# = b.statistic#  and s.sid=a.sid and b.name = 'opened cursors current' and s.username is not null;

select sid, sql_text, count(*) as "OPEN CURSORS", user_name from v$open_cursor where sid in (&SID) group by sid, sql_text, user_name;

SELECT  max(a.value) as highest_open_cur, p.value as max_open_cur FROM v$sesstat a, v$statname b, v$parameter p WHERE  a.statistic# = b.statistic#  and b.name = 'opened cursors current' and p.name= 'open_cursors' group by p.value;


-- cursor open for  each session and its address 
SELECT C.SID AS "OraSID",
 C.ADDRESS || ':' || C.HASH_VALUE AS "SQL Address",
 COUNT(C.SADDR) AS "Cursor Copies"
 FROM V$OPEN_CURSOR C
GROUP BY C.SID, C.ADDRESS || ':' || C.HASH_VALUE
HAVING COUNT(C.SADDR) > 2
ORDER BY 3 DESC


--  open cursor value and  its usage 
SELECT
 'session_cached_cursors' parameter,
 LPAD(value, 5) value,
 DECODE(value, 0, ' n/a', to_char(100 * used / value, '990') ||
'%') usage
FROM
 (SELECT
 MAX(s.value) used
 FROM
 v$statname n,
 v$sesstat s
 WHERE
 n.name = 'session cursor cache count' and
 s.statistic# = n.statistic#
 ),
 (SELECT
 value
 FROM
 v$parameter
 WHERE
 name = 'session_cached_cursors'
 )
UNION ALL
SELECT
 'open_cursors',
 LPAD(value, 5),
 to_char(100 * used / value, '990') || '%'
FROM
 (SELECT
 MAX(sum(s.value)) used
 FROM
 v$statname n,
 v$sesstat s
 WHERE
 n.name in ('opened cursors current', 'session cursor cache count') and
 s.statistic# = n.statistic#
 GROUP BY
 s.sid
 ),
 (SELECT
 value
 FROM
 v$parameter
 WHERE
 name = 'open_cursors'
 )


-- Cursor details for particular session 

SELECT b.SID, UPPER(a.NAME), b.VALUE
 FROM v$statname a, v$sesstat b, v$session c
 WHERE a.statistic# = b.statistic#
 AND c.SID = b.SID
 AND LOWER(a.NAME) LIKE '%' || LOWER('CURSOR')||'%' 
 AND b.SID=20
UNION
SELECT SID, 'v$open_cursor opened cursor', COUNT(*)
FROM v$open_cursor
WHERE SID=&sid
GROUP BY SID
ORDER BY SID
/

 SELECT a.value curr_cached, p.value max_cached,
s.username, s.sid, s.serial#
 FROM v$sesstat a, v$statname b, v$session s, v$parameter2 p
 WHERE a.statistic# = b.statistic# and s.sid=a.sid and a.sid=&sid
 AND p.name='session_cached_cursors'
 AND b.name = 'session cursor cache count';


-- CHECKING CACHE CURSOR FOR SESSIONS 

select s.username, s.sid, s.serial#, 'CACHED_CURSORS'
Statistic,
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ('session cursor cache count')
union
select s.username, s.sid, s.serial#, 'CURSORS_HITS',
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ( 'session cursor cache hits')
union
select s.username, s.sid, s.serial#, 'PARSES',
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ( 'parse count (total)')
union
select s.username, s.sid, s.serial#, 'HARD PARSES',
       a.value
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and s.username='HAFEEZ' 
and b.name in ( 'parse count (hard)') ;

-- CHECKING DETAIL OF PARTICULAR SQL ID 

SELECT
  CHILD_NUMBER CN,
  NAME,
  VALUE,
  ISDEFAULT DEF
FROM
  V$SQL_OPTIMIZER_ENV
WHERE
  SQL_ID='5ngzsfstg8tmy'
  AND CHILD_NUMBER in (0,2,12)
ORDER BY
  NAME,
  CHILD_NUMBER;


SELECT
  CHILD_NUMBER CN,
  PARSING_SCHEMA_NAME,
  OPTIMIZER_ENV_HASH_VALUE OPTIMIZER_ENV,
  INVALIDATIONS,
  PARSE_CALLS,
  IS_OBSOLETE,
  FIRST_LOAD_TIME,
  TO_CHAR(LAST_ACTIVE_TIME,'YYYY-MM-DD/HH24:MI:SS') LAST_ACTIVE_TIME
FROM
  V$SQL
WHERE
  SQL_ID='5ngzsfstg8tmy';



The following query provides the highest count of open cursors currently opened by any session, compared against the configured open_cursors limit. 
========================================================== 


col open_cur_configured form 9999999999
col max_open_cur form a15
select 
max(a.value) as highest_open_cur,
p.value as max_open_cur
from v$sesstat a, 
v$statname b, 
v$parameter p
where a.statistic# = b.statistic# 
and b.name = 'opened cursors current'
and p.name= 'open_cursors'
group by p.value;


Cursor usage per user:
======================

select 
sum(a.value) total_cur, 
max(a.value) max_cur, 
s.username
from v$sesstat a, 
v$statname b,
v$session s 
where a.statistic# = b.statistic# 
and s.sid=a.sid 
and b.name = 'opened cursors current' 
group by s.username
order by 2 desc;


Cursor usage per session:
=========================

select 
s.sid,
s.serial#,
s.username,
a.value cur_count
from 
v$sesstat a,
v$statname b,
v$session s
where a.statistic# = b.statistic# 
and s.sid = a.sid
and b.name = 'opened cursors current' 
order by 4 desc ;

historical open cursor usage:
============================= 

select to_char(b.begin_interval_time,'DD-MON-YY HH24:MI') begin_time,a.instance_number,a.value Open_cursor from dba_hist_sysstat a, dba_hist_snapshot b 
where a.stat_name='opened cursors current' 
and to_char(b.begin_interval_time,'DD-MON-YY HH24:MI') like '%14-OCT-19%'
and a.snap_id=b.snap_id 
and a.instance_number=b.instance_number 
order by 2,1;



To get the % usage of all available open cursors 
======================================================

SELECT 'session_cached_cursors' parameter, LPAD(value, 5) value, 
DECODE(value, 0, ' n/a', to_char(100 * used / value, '990') || '%' ) usage 
FROM ( SELECT MAX(s.value) used 
FROM v$statname n, v$sesstat s 
WHERE 
n.name = 'session cursor cache count' and 
s.statistic# = n.statistic# ), 
(SELECT value 
FROM v$parameter 
WHERE 
name = 'session_cached_cursors' ) 
UNION ALL 
SELECT 'open_cursors', LPAD(value, 5), to_char(100 * used / value, '990') || '%' 
FROM 
(SELECT MAX(sum(s.value)) used 
FROM v$statname n, v$sesstat s 
WHERE 
n.name in ('opened cursors current', 'session cursor cache count') and s.statistic#=n.statistic# 
GROUP BY 
s.sid 
), 
(SELECT value 
FROM v$parameter 
WHERE 
name = 'open_cursors' ) 







Cursor leak in Oracle :

It is important to be able to diagnose which cursor is being 'leaked' (not closed) in order to identify which part of the application is responsible for managing the cursor.

The following lists each cursor that has been opened by the session more than once, ordered by the open count:

   SELECT COUNT(*), address
   FROM v$open_cursor
   WHERE sid = 135
   GROUP BY address HAVING COUNT(address) > 1 ORDER BY COUNT(*);



Enable tracing to diagnose ORA-01000:
ALTER SYSTEM SET EVENTS '1000 trace name errorstack level 3';



Bugs Related to Cursors :

1) BUG 30518349 - ORA-01000 SELECT DOCID FROM "DR#IX_TS0481$U" , "DR#IX_TS0481$K"...

2) Bug 23608322 - CURSOR LEAK WITH DBMS_SQL.RETURN_RESULT IN JDBC THIN DRIVER 12.1.0.2 was opened for this issue.



References   : 

https://docs.oracle.com/cd/E40329_01/admin.1112/e27149/cursor.htm#OMADM5352
https://docs.oracle.com/cd/B10501_01/appdev.920/a97269/pc_10opt.htm
https://docs.oracle.com/database/121/TGSQL/tgsql_cursor.htm#TGSQL848


Saturday, July 31, 2021

Oracle Database Migration -- Transportable Tablespace


For database migrations, we come across different requirements, based on which we need to choose a migration technology.

We will give an overview of the different approaches available and do a deep dive into transportable tablespaces. 


Migration Scenarios : 
1) 32-bit to 64-bit operating system 
2) Cross platform --> different endian format
3) Cloud migration 
4) Storage migration 
5) Migration combined with an upgrade to a higher database version


Methods for migration : 
1) GoldenGate 
2) Export / Import 
3) Data Guard 
4) Heterogeneous Data Guard 
5) Transportable Tablespace and XTTS 
6) RMAN restore (restore and duplicate)


Consideration for choosing migration method :  
1) Downtime 
2) Product license cost 
3) Size of database 





Transportable Tablespace 

The Transportable Tablespace method can be used for Oracle database migration and upgrade when the operating systems of the source and target databases are different, and for conversion between 32-bit and 64-bit.

When using Cross Platform Transportable Tablespaces (XTTS) to migrate a database between systems that have different endian formats,
the amount of downtime required is directly proportional to the size of the data set being moved. To reduce the amount of downtime, Oracle recommends Cross Platform Incremental Backup with Oracle 12c.

To reduce downtime, we personally used a backup-as-copy taken one day before the migration date and applied incremental backups to bring the copy up to date on the day of the migration.
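
What we did corresponds roughly to RMAN incrementally updated image copies; a minimal sketch (the tag name is arbitrary, and the first run of the backup command creates the level 0 copy):

RMAN> backup incremental level 1 for recover of copy with tag 'MIG_COPY' database;
RMAN> recover copy of database with tag 'MIG_COPY';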


What is Endian?
Endian is the storage method of multi-byte data types in memory. In other words, it determines the byte order of the data. There are two kinds of endian, Little and Big.

Little Endian
The data is stored little end first. That is, the first byte is the least significant (smallest).

Big Endian
The data is stored big end first. That is, the first byte is the most significant (biggest).

For example:

Assume that an integer is stored as 4 bytes (32 bits); then a variable with a value of 0x01234567 (hexadecimal representation) consists of the bytes 0x01, 0x23, 0x45, 0x67. On a big endian system, this data is stored in this order, while on a little endian system it is stored in reverse order (0x67, 0x45, 0x23, 0x01).

In Oracle databases, the endian format is determined by the platform the database runs on. The endian format tells us which environments the database can be moved to; it is not possible to move a database between environments with different endian formats using normal methods. For example, you cannot use Data Guard to transfer a database from a little endian system to a big endian system.



How transportable tablespaces progressed across versions:

We can use the transportable tablespaces feature to copy/move subset of data (set of user tablespaces), from an Oracle database and plug it in to another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology that moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on same OS platforms and use the same database block size.

With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) which have the same endian format. Oracle Database 10g Release 1 introduced cross platform transportable tablespaces (XTTS), which allows data files to be moved between platforms of different endian format. XTTS is an enhancement to transportable tablespaces (TTS). If the endian formats are different, we have to use RMAN CONVERT (e.g. Windows to Solaris, Tru64 to AIX).

From 10g Release2 we can transport whole database, this is called Transportable Database (TDB).

From Oracle 11g, we can transport single partition of a tablespace between databases.


Below we use the traditional transportable tablespace method; however, we can also use the Oracle-provided rman_xttconvert package (per Doc ID 2471245.1), which uses incremental backups to reduce downtime. There is a good write-up below on the rman_xttconvert method using incremental backups:

https://dohdatabase.com/2020/12/09/how-to-migrate-a-database-using-full-transportable-export-import-and-incremental-backups/




Limitations/Restrictions

1) The source and target database must use the same character set and national character set.
2) System, undo, sysaux and temporary tablespaces cannot be transported.
3) If Automatic Storage Management (ASM) is used with either the source or destination database, you must use RMAN to transport/convert the tablespace.
4) Transportable tablespaces do not support materialized views/replication or function-based indexes.
5) BINARY_FLOAT and BINARY_DOUBLE datatypes (new in Oracle 10g) are not supported.

Key Note (verification queries follow this list):
1) Source and target character sets must match 
2) Source and target time zone file versions must match 
3) The COMPATIBLE parameter on the target must be the same as or higher than on the source 
4) Tablespace endianness can be converted with either RMAN CONVERT or DBMS_FILE_TRANSFER 
5) Tablespaces may be encrypted with Transparent Data Encryption 
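
Hedged verification queries for the points above (run on both source and target):

SQL> select parameter, value from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
SQL> select * from v$timezone_file;
SQL> show parameter compatible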



Below are the steps for the manual transportable tablespace method. We would suggest using the RMAN-based approach, especially for big databases where you need to take an incremental backup on the last day.

1)   Query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported and to determine each platform's endian format (byte ordering).

SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;


SQL> set lines 200
SQL> set pages 200
SQL> COL "Source" FORM a32
SQL> COL "Compatible Targets" FORM a40
SQL> select d.platform_name "Source", t.platform_name "Compatible Targets", endian_format
from v$transportable_platform t, v$database d where t.endian_format = (select endian_format from v$transportable_platform t, v$database d where d.platform_name = t.platform_name) 
order by "Compatible Targets"; 



2)   Prepare for export of the tablespace.

Check that the tablespace will be self contained:

SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;

SQL> select * from transport_set_violations;

VIOLATIONS
---------------------------------------------------------------------------
Constraint dept_fk between table jim.emp in tablespace sales_1 and table
jim.dept in tablespace other
Partitioned table jim.sales is partially contained in the transportable set

These violations must be resolved before sales_1 and sales_2 are transportable. As noted in the next step, one choice for bypassing the integrity constraint violation is to not export the integrity constraints.


 
3)  The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:

SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;



4)  Export the metadata.

You must have been assigned the EXP_FULL_DATABASE role to perform a transportable tablespace export operation. If any of the tablespaces contain XMLTypes, you must use exp instead of Data Pump.


expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter, as shown in the example above.

If we want to use a full transportable export (FULL=Y TRANSPORTABLE=ALWAYS), we can use the command below. Please note that if the source database is of a lower version, we need to set VERSION= to match the target version.


$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log 



5)  Copy datafile to target 

If the endian formats are different, then a conversion is necessary for transporting the tablespace set:
RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';


6) Transport both the datafiles and the export dump file of the tablespaces to a location that is accessible to the target database, using ftp/scp, the DBMS_FILE_TRANSFER package, or asmcmd cp.
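
A hedged sketch of the DBMS_FILE_TRANSFER route; the directory objects and the database link below are assumptions, not from the original post:

SQL> create directory tts_src as '/u01/stage';
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'TTS_SRC',
    source_file_name             => 'tbs1.dbf',
    destination_directory_object => 'TTS_DST',
    destination_file_name        => 'tbs1.dbf',
    destination_database         => 'TARGET_DB_LINK');
END;
/
-- or, with ASM on the source:  asmcmd cp +DATA/ORCL/DATAFILE/tbs1.dbf /tmp/stage/tbs1.dbf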



7)  Restore the files on the target.

In releases lower than 11.2.0.4 you need to follow the same steps specified above for ASM files, but if the endian formats are different then you must run RMAN CONVERT AFTER transferring the files. The files cannot be copied directly between two ASM instances on different platforms.


This is an example of the conversion when a change of endian format is needed on the target (if not already done on the source):

RMAN> CONVERT DATAFILE
      '/path/tbs_31.f',
      '/path/tbs_32.f',
      '/path/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT= "/path_source/", "/path_dest/"
      PARALLELISM=5;

The same example, but here showing the destination being an +ASM diskgroup:

RMAN> CONVERT DATAFILE
      '/path/tbs_31.f',
      '/path/tbs_32.f',
      '/path/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT="/path_source/", "+diskgroup"
      PARALLELISM=5;




8)  Import Metadata on target 

impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

or 

$ impdp system@PDB2 FULL=y DIRECTORY=dp_from_source \
TRANSPORT_DATAFILES='/u02/app/oracle/oradata/ORCL/PDB2/example01.dbf', \
'/u02/app/oracle/oradata/ORCL/PDB2/fsdata01.dbf',\
'/u02/app/oracle/oradata/ORCL/PDB2/fsindex01.dbf',\
'/u02/app/oracle/oradata/ORCL/PDB2/users01.dbf'


There is no expdp step necessary when using Data Pump over a database link (NETWORK_LINK). But in this case you will need the keywords FULL=Y and TRANSPORTABLE=ALWAYS as import parameters, as the export portion of Data Pump on the source side is triggered under the covers.

impdp mike/passwd@v121
NETWORK_LINK=v112

FULL=Y
TRANSPORTABLE=ALWAYS
VERSION=12
METRICS=Y
exclude=table_statistics,index_statistics

LOGTIME=ALL
 LOGFILE=ftex_dir:v112fullimp.log
TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts1.dbf'
TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts2.dbf'


9)  Put the tablespaces into read/write mode in source and target 
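
For example (tablespace names as in the earlier steps):

SQL> alter tablespace TBS1 read write;   -- run on source and target
SQL> alter tablespace TBS2 read write;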



Views : 

SQL> select platform_name, endian_format from v$transportable_platform;




References : 

1) How to Move a Database Using Transportable Tablespaces (Doc ID 1493809.1)
2) How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)
3) Reduce Transportable Tablespace Downtime using XTTS (Doc ID 1389592.1)
4) V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

   11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Note 1389592.1)

   12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Note 2005729.1)

12c How to Perform Cross-Platform Database Transport to a different Endian Platform with RMAN Backup Sets (Doc ID 2013271.1)


XTTS v4
Doc ID 2471245.1

XTTS v3
Doc ID 1389592.1

XTTS v2
Doc ID 1389592.1

XTTS v1
Doc ID 1389592.1

M5
Doc ID 2999157.1


M5 is the next-generation cross-platform transportable tablespace procedure: new RMAN functionality combined with Full Transportable Export/Import (Doc ID 2999157.1).

Friday, July 23, 2021

Oracle maintenance window -- Change scheduled Window time

 
As most of us are aware, the Oracle default maintenance window starts at 10pm server time on weekdays and at 6am on weekends. There are situations where this time clashes with application jobs and we need to change the default maintenance window time.


Below are the steps we followed to change the window time for all days.
 


What Happens During the Maintenance Window : 

In Oracle Database 10g, we had only the first two automatic maintenance tasks mentioned below. With Oracle 11g, we have a third task named “Automatic SQL Tuning Advisor”.


Automatic Optimizer Statistics Collection:

Collects optimizer statistics for all schema objects in the database for which there are no statistics or only stale statistics. The statistics gathered by this task are used by the SQL query optimizer to improve the performance of SQL execution.
 
Automatic Segment Advisor:
Identifies segments that have space available for reclamation, and makes recommendations on how to de-fragment those segments.
 
Automatic SQL Tuning Advisor :
Examines the performance of high-load SQL statements, and makes recommendations on how to tune those statements.

To view the task, run the below SQL

 SQL> select client_name from DBA_AUTOTASK_CLIENT;

 
CLIENT_NAME
----------------------------------------------------------------
auto optimizer stats collection
auto space advisor
sql tuning advisor
 
 SELECT client_name, status, consumer_group, window_group
FROM dba_autotask_client
ORDER BY client_name;

 SELECT client_name, status FROM dba_autotask_operation;
CLIENT_NAME                                                      STATUS  
---------------------------------------------------------------- --------
auto optimizer stats collection                                  ENABLED  
auto space advisor                                               ENABLED  
sql tuning advisor                                               ENABLED  




Check existing window  timing .

set line 400
col window_name format a20 
select window_name,window_next_time from DBA_AUTOTASK_WINDOW_CLIENTS;
select window_name, REPEAT_INTERVAL, DURATION, NEXT_START_DATE from DBA_SCHEDULER_WINDOWS; 
 


Change the maintenance window schedule as below to 2pm server time:

SQL> select window_name,window_next_time from DBA_AUTOTASK_WINDOW_CLIENTS;

WINDOW_NAME                    WINDOW_NEXT_TIME
------------------------------ --------------------------------------
MONDAY_WINDOW                  14-MAY-12 10.00.00.000000 PM EST5EDT
TUESDAY_WINDOW                 08-MAY-12 10.00.00.000000 PM EST5EDT
WEDNESDAY_WINDOW               09-MAY-12 10.00.00.000000 PM EST5EDT
THURSDAY_WINDOW                10-MAY-12 10.00.00.000000 PM EST5EDT
FRIDAY_WINDOW                  11-MAY-12 10.00.00.000000 PM EST5EDT
SATURDAY_WINDOW                12-MAY-12 06.00.00.000000 AM EST5EDT
SUNDAY_WINDOW                  13-MAY-12 06.00.00.000000 AM EST5EDT
TEST_WINDOW                    09-MAY-12 05.00.00.000000 AM EST5EDT



BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'MONDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=MON;byhour=14;byminute=0;bysecond=0');
END;
/
BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'TUESDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=TUE;byhour=14;byminute=0;bysecond=0');
END;
/

BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'WEDNESDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=WED;byhour=14;byminute=0;bysecond=0');
END;
/



BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'THURSDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=THU;byhour=14;byminute=0;bysecond=0');
END;
/


BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'FRIDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=FRI;byhour=14;byminute=0;bysecond=0');
END;
/


BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'SATURDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=SAT;byhour=14;byminute=0;bysecond=0');
END;
/


BEGIN
DBMS_SCHEDULER.set_attribute (
name => 'SUNDAY_WINDOW',
attribute => 'repeat_interval',
VALUE => 'freq=daily;byday=SUN;byhour=14;byminute=0;bysecond=0');
END;
/



Best practice is to use the DBMS_SCHEDULER.DISABLE subprogram to disable the window before making changes to it, and then re-enable the window with DBMS_SCHEDULER.ENABLE when you are finished. If you change a window while it is currently open, the change does not take effect until the next time the window opens.

BEGIN 
dbms_scheduler.disable( name => 'MONDAY_WINDOW');
dbms_scheduler.set_attribute( name => 'MONDAY_WINDOW', 
attribute => 'DURATION',
 value => numtodsinterval(3, 'hour')); 
dbms_scheduler.enable( name => 'MONDAY_WINDOW');
END;
/


See if there is a window currently open. If there is, you can try closing it to see if that helps .

select * from DBA_SCHEDULER_GLOBAL_ATTRIBUTE where attribute_name='CURRENT_OPEN_WINDOW';

exec DBMS_SCHEDULER.close_window ('WEEKNIGHT_WINDOW');



Creating the new Maintenance Window
 
To create a maintenance window, you must create a Scheduler window and then add it to the window group MAINTENANCE_WINDOW_GROUP
 
 
The DBMS_SCHEDULER PL/SQL package provides the ADD_WINDOW_GROUP_MEMBER subprogram, which adds a window to a window group
 
 BEGIN
dbms_scheduler.create_window(
    window_name     => 'TEST_WINDOW',
    duration        =>  numtodsinterval(1, 'hour'),
    resource_plan   => 'DEFAULT_MAINTENANCE_PLAN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=5;BYMINUTE=0;BYSECOND=0');
    dbms_scheduler.add_window_group_member(
    group_name  => 'MAINTENANCE_WINDOW_GROUP',
    window_list => 'TEST_WINDOW');
END;
/




Change Run Duration of Window : 

In case we want to control how long the window runs:

EXEC DBMS_SCHEDULER.SET_ATTRIBUTE('WEEKNIGHT_WINDOW', 'duration', '+000 06:00:00');

SELECT *
FROM USER_SCHEDULER_JOB_RUN_DETAILS 
ORDER BY LOG_DATE DESC  ;



How to configure (disable/enable) the Automated Maintenance Tasks
 
DBMS_AUTO_TASK_ADMIN PL/SQL package can be used to do the following.
  
To disable the task:
 
BEGIN
dbms_auto_task_admin.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
 
 
To enable it
 
BEGIN
dbms_auto_task_admin.enable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;



Reference : 

How to Change The Automated Maintenance Tasks Weekend Schedule (Doc ID 2036557.1)



Sunday, July 18, 2021

Oracle Database Services - Types and Relation with load balancing advisor

Recently I figured out how the database load balancing advisory works and what short and long services are, hence I thought of documenting it.


In this blog we will try to cover the below points:

1) How to create and manage services 
2) Services and the load balancing advisory 
3) Short and long services 
4) Run-time load balancing 





Create and Manage  services : 

srvctl add service -d PWIC -s CUHISTORY_SRVC -r PWIC1 -a PWIC2,PWIC3,PWIC4

srvctl add service -d QWIC -s CUHISTORY_SRVC -r QWIC1 -a QWIC2

srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list]

srvctl config database -d ORCL

srvctl status service -d CCIHC

srvctl modify service -d PRDBC -s BATCHRPT_SRVC -n -i PRDBC2,PRDBC3 -a PRDBC

srvctl relocate service -d PRDBC -s BATCHRPT_SRVC -i PRDBC4 -t PRDBC2

crsctl relocate resource resource_name -n node_name

srvctl start service -d QGISC -s ora.QGISC.INS_FGEA.QGISC2.srv

srvctl start service -d PRDBC -s BATCHRPT_SRVC -i PRDBC3

srvctl start service -d pmpic -s pmpic_gen

srvctl stop service -d QGISC -s ora.QGISC.INS_FGEA.QGISC2.srv

srvctl stop service -d pmpic -s pmpic_gen

 
crsctl stop resource ora.QGISC.INS_WFC.QGISC2.srv       
crsctl stop resource ora.QGISC.QGISC_INS_FGEA.QGISC2.srv
 
 
 
SELECT username, program, machine, service_name FROM gv$session;
 
select name,network_name,enabled , failover_method,failover_type ,failover_retries
,failover_delay from dba_services ;
 
select name,network_name,enabled ,goal,min_cardinality,max_cardinality  from dba_services ;
 
select inst_id, name,network_name,blocked from gv$active_services ;
 
select service_id, name , goal from v$active_services;
 
show parameter service_name
 
 
SELECT sid, serial#,
            client_identifier, service_name, action, module
       FROM V$SESSION
 
 
select inst_id,service_id,name,network_name
from gv$services
order by 1,2;
pause
select inst_id,service_id,name,network_name
from gv$active_services
order by 1,2;
 
 
col name for a16
col network_name for a16
col failover_method for a16
col failover_type for a16
col service_name for a20
col stat_name for a35
pause see all services that are defined
select service_id,name,network_name,creation_date,
failover_type,failover_method
from dba_services;
 
select service_name,stat_name,value
from v$service_stats
order by service_name
/
  
select INST_ID,VALUE from gv$parameter  where NAME='service_names';
 
alter system set service_names='CFEED_SRVC, BATCHRPT_SRVC,PRDBC.envoy.net, DBR_SRVC'
scope=memory sid='PRDBC2';
 
select inst_id, username, program , logon_time from gv$session where service_name =
'SLICER_SRVC' order by inst_id ;
 
lsnrctl services > ab.log
lsnrctl services | grep service2
 
srvctl status service -d PRDBC
srvctl config service -d PRDBC
srvctl modify service -d PRDBC -s BATCHRPT_SRVC -n -i PRDBC2,PRDBC3 -a PRDBC4
srvctl relocate service -d PRDBC -s BATCHRPT_SRVC -i PRDBC4 -t PRDBC2
srvctl start service -d PRDBC -s BATCHRPT_SRVC -i PRDBC3




Services and Load Balancing Advisor : 

Load balancing distributes work across all of the available Oracle RAC database instances. Oracle recommends that applications use connection pools with persistent connections that span the instances that offer a particular service. When using persistent connections, connections are created infrequently and exist for a long duration. Work comes into the system with high frequency, borrows these connections, and exists for a relatively short duration. The load balancing advisory provides advice about how to direct incoming work to the instances that provide the optimal quality of service for that work. This minimizes the need to relocate the work later.

By using the Load Balancing Advisory and run-time connection load balancing goals, feedback is built in to the system. Work is routed to provide the best service times globally, and routing responds gracefully to changing system conditions. In a steady state, the system approaches equilibrium with improved throughput across all of the Oracle RAC instances.

Standard architectures that can use the load balancing advisory include connection load balancing, transaction processing monitors, application servers, connection concentrators, hardware and software load balancers, job schedulers, batch schedulers, and message queuing systems. All of these applications can allocate work.

The load balancing advisory is deployed with key Oracle clients, such as a listener, the JDBC universal connection pool, OCI session pool, Oracle WebLogic Server Active GridLink for Oracle RAC, and the ODP.NET Connection Pools. Third-party applications can also subscribe to load balancing advisory events by using JDBC and Oracle RAC FAN API or by using callbacks with OCI.



The load balancing advisory has the task of advising the direction of incoming work to the RAC instances that provide optimal quality of service for that work.
To test load balancing, you can apply some load and use the following section for monitoring.

-- view load balancing gaol of a current service
-- NONE means load balancing advisory is disabled
SELECT NAME, GOAL, CLB_GOAL FROM DBA_SERVICES;
-- SERVICE TIME: response-time based, like online shopping 
execute dbms_service.modify_service (service_name => 'hrserv' -
 , goal => dbms_service.goal_service_time -
 , clb_goal => dbms_service.clb_goal_short);
-- THROUGHPUT: rate at which jobs are completed, like batch processing
execute dbms_service.modify_service (service_name => 'hrserv' -
 , goal => dbms_service.goal_throughput -
 , clb_goal => dbms_service.clb_goal_long);
-- NONE: LBA disabled
execute dbms_service.modify_service (service_name => 'hrserv' -
 , goal => dbms_service.goal_none -
 , clb_goal => dbms_service.clb_goal_long); 




Monitoring Load Balancing Advisory
V$SERVICEMETRIC : service metrics on the most 5-second and 1-minute intervals
V$SERVICEMETRIC_HISTORY : recent history of the metric values

SELECT
 service_name
 ,TO_CHAR(begin_time,'hh24:mi:ss') beg_hist
 ,TO_CHAR(end_time,'hh24:mi:ss') end_hist
 ,inst_id
 ,goodness
 ,delta
 ,flags
 ,cpupercall
 ,dbtimepercall
 ,callspersec
 ,dbtimepersec
 FROM gv$servicemetric
 WHERE service_name ='hrserv'
 ORDER BY service_name, begin_time DESC, inst_id;
SELECT SERVICE_NAME,
 to_char(BEGIN_TIME,'hh24:mi:ss') BEGIN_TIME,
 to_char(END_TIME,'hh24:mi:ss') END_TIME,
 INTSIZE_CSEC, GROUP_ID "Service Metric Group",
 CPUPERCALL "CPU time per call",
 DBTIMEPERCALL "Elapsed time per call",
 CALLSPERSEC "User Calls/s",
 DBTIMEPERSEC "DB Time/s"
from V$SERVICEMETRIC_HISTORY
order by SERVICE_NAME, BEGIN_TIME desc;
-- aggregated
SELECT
 SERVICE_NAME,
 GROUP_ID "Service Metric Group",
 round(sum(CPUPERCALL),2) "CPU time per call",
 round(sum(DBTIMEPERCALL),2) "Elapsed time per call",
 round(sum(CALLSPERSEC),2) "User Calls/s",
 round(sum(DBTIMEPERSEC),2) "DB Time/s"
from V$SERVICEMETRIC_HISTORY
group by SERVICE_NAME, GROUP_ID
order by SERVICE_NAME; 



Short and Long services : 

For each service, you can define the connection load-balancing goal that you want the listener to use. You can use a goal of either long or short for connection load balancing. These goals have the following characteristics:

Short: Connections are distributed across instances based on the amount of time that the service is used. Use the short connection load-balancing goal for applications that have connections of brief duration.

Long: Connections are distributed across instances based on the number of sessions in each instance, for each instance that supports the service. Use the long connection load-balancing goal for applications that have connections of long duration.


$ srvctl modify service -db db_unique_name -service oltpapp -clbgoal SHORT
 
$ srvctl modify service -db db_unique_name -service batchconn -clbgoal LONG

select service_name , CLB_GOAL  from all_services;






Run Time Load balancing  : 

Typically, there are two types of load balancing:
 
Connection Load Balancing (CLB)
Run-time Load Balancing (RTLB)
  
RTLB has existed since 10gR2 and can be set to either “SERVICE_TIME” or “THROUGHPUT”. CLB can be configured at the client side or at the server side; of the two, server-side load balancing is recommended and has better functionality than client-side.
 
When a service is created by default Connection Load Balancing is enabled and set to “LONG” and Run-Time Load Balancing is disabled. Irrespective of Connection Load Balancing Goal “SHORT” or “LONG” you can enable Runtime Load Balancing Goal to “SERVICE_TIME” or “THROUGHPUT”.
 
 With Connection Load Balancing goal set to LONG, do not configure Run-time Load Balancing as it is only applicable to applications where next session or job starts only after the current one ends which is not practical. This is the reason you must have read that Run-time Load Balancing must be enabled with CLB goal set to SHORT.
 
 
 
[oracle@RAC1 ~]$ srvctl modify service -d RACDB -s LBASRV1 -B SERVICE_TIME
[oracle@RAC1 ~]$ srvctl config service -d RACDB -s LBASRV1
 
[oracle@RAC1 ~]$ srvctl modify service -d RACDB -s LBASRV1 -B THROUGHPUT
[oracle@RAC1 ~]$ srvctl config service -d RACDB -s
 
  
Disabling CLB on server-side is equivalent to disabling LBA (Load Balancing Advisory). You will not find an option to do it if you search in “srvctl” help.
The Oracle documentation says that configuring GOAL to NONE will disable the Load Balancing Advisory (LBA) on the service. Let us try doing it and see what exactly happens.
 
 
I will try to use DBMS_SERVICE package to modify this service and to disable LBA.
 
SQL> EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'LBASRV1', goal => DBMS_SERVICE.GOAL_NONE);
 
 
SQL> select GOAL,CLB_GOAL from dba_services where name='LBASRV1';
 
GOAL         CLB_G
------------ -----
NONE         LONG
 
 
But as soon as we restart the service using srvctl, GOAL comes back to THROUGHPUT.
 
[oracle@RAC1 ~]$ srvctl stop service -s LBASRV1 -d RACDB
[oracle@RAC1 ~]$ srvctl start service -s LBASRV1 -d RACDB
 
 
 
SQL> select GOAL,CLB_GOAL from dba_services where name='LBASRV1';
 
GOAL             CLB_G
---------------- -----
THROUGHPUT       LONG
 
 
 select user_data from sys.sys$service_metrics_tab order by enq_time;






Manage service using DBMS_SERVICE for Non Rac 
 
exec DBMS_SERVICE.CREATE_SERVICE('service2','service2');
exec dbms_service.start_service('rdbprod.envoy.net');
exec dbms_service.stop_service('bb', dbms_service.all_instances)
 
 
begin
  dbms_service.create_service(
    service_name     => 'ABC',
    network_name     => 'src_net_name',
    failover_type    => DBMS_SERVICE.FAILOVER_TYPE_SELECT,
    failover_retries => 10,
    failover_delay   => 1);
  dbms_service.start_service('ABC', dbms_service.all_instances);
end;
/
 
 
exec DBMS_SERVICE.MODIFY_SERVICE( -
        service_name => 'o11gr1', -
         goal => DBMS_SERVICE.GOAL_THROUGHPUT, -
    failover_method => DBMS_SERVICE.FAILOVER_METHOD_BASIC, -
       failover_type => DBMS_SERVICE.FAILOVER_TYPE_SELECT, -
        failover_retries => 10, -
      failover_delay => 1, -
    clb_goal => DBMS_SERVICE.CLB_GOAL_LONG);







 
Attaching services to jobs ::
 

-- Create OLTP and BATCH job classes.
BEGIN
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'OLTP_JOB_CLASS',
    service        => 'OLTP_SERVICE');
 
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'BATCH_JOB_CLASS',
    service        => 'BATCH_SERVICE');
END;
/
 
-- Make sure the relevant users have access to the job classes.
GRANT EXECUTE ON sys.oltp_job_class TO my_user;
GRANT EXECUTE ON sys.batch_job_class TO my_user;
 
 
$ sqlplus '/ as sysdba'
SQL> select * from DBA_SCHEDULER_JOB_CLASSES;
SQL> exec DBMS_SCHEDULER.CREATE_JOB_CLASS (job_class_name => 'AUDITING_JOB_CLASS', service
=> 'AUDITING_SERVICE', comments => 'THIS IS AUDIT SERVICE JOB CLASS');
PL/SQL procedure successfully completed.
 
SQL> select * from DBA_SCHEDULER_JOB_CLASSES;   -- the entry should be available now
 
 
-- Create a job associated with a job class.
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'my_user.oltp_job_test',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;',
    job_class       => 'SYS.OLTP_JOB_CLASS',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job linked to the OLTP_JOB_CLASS.');
END;
/
 
-- Assign a job class to an existing job.
EXEC DBMS_SCHEDULER.set_attribute('MY_BATCH_JOB', 'JOB_CLASS', 'BATCH_JOB_CLASS');
 
 
 


  
Add TAF Service :
 
srvctl status service -d devdb
 
srvctl add service -d devdb -s devdb_oltp -r devdb1,devdb2 -P BASIC
 
srvctl start service -d devdb -s devdb_oltp
 
srvctl status service -d devdb
 
 
set line 130
 
col SERVICE_ID for 999
col NAME for a35
col NETWORK_NAME for a35
col FAILOVER_METHOD for a30
 
select SERVICE_ID, NAME, NETWORK_NAME, FAILOVER_METHOD FROM DBA_SERVICES;


 
Failover  Configuration : 

select failover_type, failover_method, failed_over from v$session where username='SCOTT';
 
exec dbms_service.modify_service( service_name => 'devdb_oltp' , aq_ha_notifications => true, failover_method => dbms_service.failover_method_basic, failover_type => dbms_service.failover_type_select, failover_retries =>180, failover_delay =>5);
 
 
 
exec dbms_service.modify_service( service_name => 'devdb_oltp' -
, aq_ha_notifications => true -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries =>180 -
, failover_delay =>5 -
)
;
 
 
For reference, the DBMS_SERVICE subprograms:
procedure disconnect_session( service_name in varchar2);
procedure create_service( service_name in varchar2, network_name in varchar2, goal in number, dtp in boolean, aq_ha_notifications in boolean, failover_method in varchar2, failover_type in varchar2, failover_retries in number, failover_delay in number, clb_goal in number);
procedure modify_service( service_name in varchar2, goal in number, dtp in boolean, aq_ha_notifications in boolean, failover_method in varchar2, failover_type in varchar2, failover_retries in number, failover_delay in number, clb_goal in number);
procedure delete_service( service_name in varchar2);
procedure start_service( service_name in varchar2, instance_name in varchar2);
procedure stop_service( service_name in varchar2, instance_name in varchar2);
 
 
connect system/devdb@crm
 
connect system/devdb@devdb_oltp
 
select instance_number instance#, instance_name, host_name, status from v$instance;
 
 
Remove TAF Service :
 
srvctl status service -d devdb
 
srvctl stop service -d devdb -s devdb_oltp
 
srvctl remove service -d devdb -s devdb_oltp
 
srvctl status service -d devdb
 
 
SQL > exec dbms_service.delete_service( service_name => 'devdb_oltp');
 
 
exec dbms_service.modify_service( service_name => 'CRM' , aq_ha_notifications => true, failover_method => dbms_service.failover_method_basic, failover_type => dbms_service.failover_type_select, failover_retries =>180, failover_delay =>5);
 
 



 Using  TNS with Service name : 
 

OLTP =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = OLTP_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )
 
 
BATCH =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = server01)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server02)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = server03)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = BATCH_SERVICE)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 1)
      )
    )
  )

DEVDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.hclt.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.hclt.com)(PORT = 1521))
    (LOAD_BALANCE = no)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = devdb_oltp)
    )
  )
 
 
DEVDB_OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.hclt.com)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.hclt.com)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = devdb_oltp)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )




Troubleshooting  Service Connection : 

Note for DBA TEAM: application teams use application specific RAC services to connect to the database.

Following are the tnsnames entries they use; SS indicates Shared Server and non-SS means Dedicated Server:
MARGIN-PRD.world, MARGIN-SS-PRD.world, RISKVIEW-PRD.world,RISKVIEW-SS-PRD.world,

RISKCALC-PRD.world,RISKCALC-SS-PRD.world,MULTIVAL-PRD.world,MULTIVAL-SS-PRD.world,
VARMARGIN-PRD.world,VARMARGIN-SS-PRD.world,CMA-PRD.world,CMA-SS-PRD.world.

 
If the application team complains about connection issues to the database, first ensure the archive area is not full. Then try connecting using the above tnsnames entries from the global tnsnames. If you get an error like "listener does not currently know of service requested", then follow the steps below to start the services:
 

1. check if both the instances are up and running.
2. check all the application specific RAC services are up and running as below : (This command can be run from any node)

 
xstmc021001por(oracle):MFXPRDI1:misc$ $GRID_HOME/bin/srvctl status service -d MFXPRD_xstmc021 -s "MARGIN,RISKVIEW,RISKCALC,MULTIVAL,VARMARGIN,CMA"

Service MARGIN is running on instance(s) MFXPRDI2
Service RISKVIEW is running on instance(s) MFXPRDI1

Service RISKCALC is running on instance(s) MFXPRDI1
Service MULTIVAL is running on instance(s) MFXPRDI1

Service VARMARGIN is running on instance(s) MFXPRDI1
Service CMA is running on instance(s) MFXPRDI1

 
 

3. If not, then bring up the services on the specific instance as per the above list.
   Please note the MARGIN service runs on MFXPRDI2 and the rest all run on MFXPRDI1.

e.g., to bring up one service:
$GRID_HOME/bin/srvctl start service -d MFXPRD_xstmc021 -s CMA -i MFXPRDI1

 
to bring up all the services in one command

 
$GRID_HOME/bin/srvctl start service -d MFXPRD_xstmc021 -s "CMA,MULTIVAL,RISKCALC,RISKVIEW,VARMARGIN" -i MFXPRDI1

$GRID_HOME/bin/srvctl start service -d MFXPRD_xstmc021 -s MARGIN -i MFXPRDI2
 

4. Wait for some time, then run lsnrctl services from both nodes and check that all the services are registered with the listener with shared servers (dispatcher entries in the lsnrctl services output).
 

TNS_ADMIN=$GRID_HOME/network/admin $GRID_HOME/bin/lsnrctl services listener_mfxprd
 

5. If not, then check the service_names parameter for both instances and ensure the above services are properly registered.
   If not, use the below commands to add the services to service_names:

ALTER SYSTEM SET service_names='CMA','RISKVIEW','RISKCALC','MULTIVAL','VARMARGIN','MFXPRD' SCOPE=BOTH SID='MFXPRDI1';
ALTER SYSTEM SET service_names='MARGIN','MFXPRD' SCOPE=BOTH SID='MFXPRDI2';

 



Measuring Performance by Service Using the Automatic Workload Repository

Services add a new dimension for performance tuning. With services, workloads are visible and measurable, and therefore resource consumption and wait times are attributable by application. Tuning by using "service and SQL" replaces tuning by "session and SQL" in the majority of systems where all sessions are anonymous and shared.
The AWR maintains performance statistics that include information about response time, throughput, resource consumption, and wait events for all services and work that a database performs. Oracle Database also maintains metrics, statistics, wait events, wait classes, and SQL-level traces for services. You can optionally augment these statistics by defining modules within your application to monitor certain statistics. You can also define the actions within those modules that business critical transactions should execute in response to particular statistical values.


/* Enabling statistics gathering on a Service-Module-Action combination */
-- to display currently connected services, modules and actions
select SID, USERNAME, SERVICE_NAME, MODULE, ACTION from V$SESSION
where SERVICE_NAME in ('hrserv','oeserv');
-- service name and module name are mandatory
begin
 DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(SERVICE_NAME => 'hrserv',
 MODULE_NAME=>'PAYROLL',
 ACTION_NAME => 'EXCEPTIONS PAY');
end;
/


-- gather stats for PAYROLL module and ACTION whose name is null
begin
 DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(SERVICE_NAME => 'hrserv', 
 MODULE_NAME=>'PAYROLL',
 ACTION_NAME => NULL);
end;
/

-- gather stats for PAYROLL module and All its ACTIONs
begin
 DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(SERVICE_NAME => 'hrserv',
 MODULE_NAME=>'PAYROLL',
 ACTION_NAME => '###ALL_ACTIONS');
end;
/

-- to view enabled monitorings
-- types: SERVICE, SERVICE_MODULE, SERVICE_MODULE_ACTION
select A.AGGREGATION_TYPE, A.PRIMARY_ID , A.QUALIFIER_ID1 , A.QUALIFIER_ID2
from DBA_ENABLED_AGGREGATIONS a


-- to view gathered stats
select S.AGGREGATION_TYPE, S.SERVICE_NAME, S.MODULE, S.ACTION, N.CLASS,
 decode(n.CLASS,
 '1','User','2','Redo','4','Enqueue','8','Cache','16','OS','32','RAC',
 '64','SQL','128','Debug', N.CLASS) STAT_CLASS,
 S.STAT_NAME, S.VALUE
from V$SERV_MOD_ACT_STATS s, V$STATNAME n
where S.STAT_ID = N.STAT_ID
order by N.CLASS, S.STAT_ID

-- call times and performance statistics views:
V$SERVICE_STATS
V$SERVICE_EVENTS
V$SERVICE_WAIT_CLASSES
V$SERVICEMETRIC
V$SERVICEMETRIC_HISTORY
/* To Disable Cumulative Stats */
-- stats will be removed from V$SERV_MOD_ACT_STATS
begin
 DBMS_MONITOR.SERV_MOD_ACT_STAT_DISABLE(SERVICE_NAME => 'hrserv',
 MODULE_NAME=>'PAYROLL',
 ACTION_NAME => 'EXCEPTIONS PAY');
end;
/


/* Service Quality Statistics */
-- script from Oracle documentation
-- provides service quality statistics every five seconds
SET PAGESIZE 60 COLSEP '|' NUMWIDTH 8 LINESIZE 132 VERIFY OFF FEEDBACK OFF
COLUMN service_name FORMAT A20 TRUNCATED HEADING 'Service'
COLUMN begin_time HEADING 'Begin Time' FORMAT A10
COLUMN end_time HEADING 'End Time' FORMAT A10
COLUMN instance_name HEADING 'Instance' FORMAT A10
COLUMN service_time HEADING 'Service Time|mSec/Call' FORMAT 999999999
COLUMN throughput HEADING 'Calls/sec' FORMAT 99.99
BREAK ON service_name SKIP 1
SELECT 
service_name
, TO_CHAR(begin_time, 'HH:MI:SS') begin_time
, TO_CHAR(end_time, 'HH:MI:SS') end_time
, instance_name
, elapsedpercall service_time
, callspersec throughput
FROM gv$instance i
, gv$active_services s
, gv$servicemetric m
WHERE s.inst_id = m.inst_id
 AND s.name_hash = m.service_name_hash
 AND i.inst_id = m.inst_id
 AND m.group_id = 10
 ORDER BY service_name , i.inst_id , begin_time ;


Reference : 

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/racad/workload-management-with-dynamic-database-services.html#GUID-095B67FB-3E3A-44BE-84A4-321174015A08

 https://docs.oracle.com/database/121/RACAD/GUID-559FB230-857B-4D97-B36A-F4F76C3A1D47.htm#RACAD7127