Tuesday, May 12, 2020

Oracle Database Auditing


Please note that Unified Auditing was introduced in 12c. For 12c databases and above, the views and functionality are different, as predefined audit policies were introduced.

Until Oracle 10g, auditing was disabled by default, but could be enabled by setting the static AUDIT_TRAIL parameter in the init.ora file.

From Oracle 11g, auditing is enabled by default for some system-level privileges.



Oracle stores information that is relevant to auditing in its data dictionary.
Every time a user performs an action for which auditing is enabled, the Oracle kernel checks whether an audit record should be created or updated.

Audit data can be stored in a database table (SYS.AUD$) or in an OS file in the AUDIT_FILE_DEST directory (default $ORACLE_HOME/rdbms/audit).
The AUD$ table is, by default, located in the SYSTEM tablespace. This itself can open the door to denial-of-service attacks: if the SYSTEM tablespace fills up, the database will hang.
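If AUD$ in SYSTEM is a concern, the standard audit trail can be relocated to a dedicated tablespace using DBMS_AUDIT_MGMT. A sketch, assuming a tablespace named AUDIT_TBS already exists:

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,  -- standard audit trail (AUD$)
    audit_trail_location_value => 'AUDIT_TBS');                         -- hypothetical target tablespace
END;
/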




AUDIT_TRAIL can have the following values :

AUDIT_TRAIL={NONE or FALSE| OS| DB or TRUE| DB_EXTENDED| XML |XML_EXTENDED}

The following list provides a description of each value:

NONE or FALSE -> Auditing is disabled. Default until Oracle 10g.
DB or TRUE -> Auditing is enabled, with all audit records stored in the database audit trail (AUD$). Default from Oracle 11g.
DB_EXTENDED -> Same as DB, but the SQL_BIND and SQL_TEXT columns are also populated.
XML -> Auditing is enabled, with all audit records stored as XML-format OS files.
XML_EXTENDED -> Same as XML, but the SQL_BIND and SQL_TEXT columns are also populated.
OS -> Auditing is enabled, with all audit records directed to operating system files in the directory specified by AUDIT_FILE_DEST.
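AUDIT_TRAIL is a static parameter, so changing it requires updating the spfile and restarting the instance. For example, to enable extended database auditing:

SQL> alter system set audit_trail=db,extended scope=spfile;
SQL> shutdown immediate
SQL> startup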





#########################
Unified Auditing in 12c 
#########################


If you enable the unified audit feature introduced in 12c, all audit records will be in the UNIFIED_AUDIT_TRAIL table. Logon failure records will not be in this table because the corresponding policy is not enabled by default. The ORA_LOGON_FAILURES policy must be enabled to audit failed logon attempts.


The unified audit trail, which resides in a read-only table in the AUDSYS schema in the SYSAUX tablespace, makes this information available in a uniform format in the UNIFIED_AUDIT_TRAIL data dictionary view.


When the database is writable, audit records are written to the unified audit trail. If the database is not writable, then audit records are written to new format operating system files in the POSIX file system in the $ORACLE_BASE/audit/$ORACLE_SID directory.


Bug 17466854 – Cannot set Unified Auditing in Standard  Edition (MOS)

Patch 17466854



Two new roles were introduced:

AUDIT_ADMIN
Manages audit configuration and the audit trail

AUDIT_VIEWER
Analyzes audit data
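These roles are granted like any other role; the account names SECADM and AUDIT_READER below are made up for illustration:

SQL> grant AUDIT_ADMIN to secadm;
SQL> grant AUDIT_VIEWER to audit_reader;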



Enable ORA_LOGON_FAILURES to track failed logon attempts. You can enable the ORA_LOGON_FAILURES policy as follows:

SQL> audit policy ORA_LOGON_FAILURES;



To turn off auditing for a policy, use NOAUDIT POLICY; for example:

NOAUDIT POLICY hr_emp_select BY dcollins;



Records will then be created. You can see the number of records related to this policy with a query like the one below.

  SELECT audit_type,
         unified_audit_policies,
         action_name,
         return_code,
         COUNT (*)
    FROM unified_audit_trail
   WHERE unified_audit_policies = 'ORA_LOGON_FAILURES'
GROUP BY audit_type,
         unified_audit_policies,
         action_name,
         return_code
ORDER BY COUNT (*);


LOGON success action is audited by unified audit policy ORA_LOGON_FAILURES after executing NOAUDIT & AUDIT command (Doc ID 2305054.1)


SQL> col DBUSERNAME for a10
SQL> col ACTION_NAME for a10
SQL> col UNIFIED_AUDIT_POLICIES for a20
SQL> col RETURN_CODE for 9999999
SQL> select DBUSERNAME,ACTION_NAME,UNIFIED_AUDIT_POLICIES,RETURN_CODE from unified_audit_trail where UNIFIED_AUDIT_POLICIES = 'ORA_LOGON_FAILURES';






To see all the audits captured for the HR user, type

SELECT EVENT_TIMESTAMP, CLIENT_PROGRAM_NAME, ACTION_NAME,
UNIFIED_AUDIT_POLICIES
FROM UNIFIED_AUDIT_TRAIL
WHERE DBUSERNAME = 'HR'
ORDER BY EVENT_TIMESTAMP DESC;



To see which users or roles have been enabled for auditing by policies in the database, type

SELECT * FROM AUDIT_UNIFIED_ENABLED_POLICIES;

By default, 7 audit policies are present in a 12c database.




To see what privileges are being audited by default for specific policies 

 SELECT POLICY_NAME, AUDIT_OPTION, AUDIT_OPTION_TYPE
FROM AUDIT_UNIFIED_POLICIES
order by policy_name, AUDIT_OPTION; 


DEFAULT POLICIES IN A 12C DATABASE :-

SQL> select distinct POLICY_NAME from AUDIT_UNIFIED_POLICIES;


select AUDIT_OPTION from AUDIT_UNIFIED_POLICIES where POLICY_NAME='ORA_SECURECONFIG';





Manually creating an audit policy :

 CREATE AUDIT POLICY MY_POLICY
 PRIVILEGES SELECT ANY TABLE
 ACTIONS CREATE TABLE, DROP TABLE;
AUDIT POLICY MY_POLICY BY HR;


CREATE AUDIT POLICY ORA_SELECT ACTIONS ALL ON owner.table;

CREATE AUDIT POLICY <POLICY_NAME> ACTIONS ALL ONLY TOPLEVEL;

CREATE AUDIT POLICY test_audit_policy
ACTIONS SELECT ON schema.tab1,
SELECT ON schema.tab2,
SELECT ON schema.tab1_VIEW,
SELECT ON schema.tab1_SYNONYM ;

AUDIT POLICY test_audit_policy;

AUDIT POLICY <POLICY_NAME>;

AUDIT POLICY <POLICY_NAME> BY <desired user>;    -- multiple users can be added, separated by ','
OR
AUDIT POLICY <POLICY_NAME> EXCEPT sys, system;   -- more users can be added to the EXCEPT list


NOAUDIT POLICY hr_emp_select BY sys,system  ;

NOAUDIT POLICY hr_emp_select ;




How to view audit information with Oracle 12c

After configuring and enabling auditing, you can see what audit data is being collected.

DBA_AUDIT_TRAIL shows all audit entries in the system.

DBA_AUDIT_OBJECT shows all audit entries in the system for objects.

DBA_AUDIT_STATEMENT shows audit entries for the statements GRANT, REVOKE, AUDIT, NOAUDIT, and ALTER SYSTEM.


-- find statements audited in the database:
col user_name for a12 heading "User name"
col audit_option format a30 heading "Audit Option"
set pages 1000
prompt
prompt System auditing options across the system and by user
select user_name,audit_option,success,failure from sys.dba_stmt_audit_opts
order by user_name, proxy_name, audit_option
/



-- Find privileges audited in the database:
col user_name for a12 heading "User name"
col privilege for a30 heading "Privilege"
set pages 1000
prompt
prompt System Privileges audited across system
select user_name,privilege,success,failure from dba_priv_audit_opts
order by user_name, proxy_name, privilege
/


select parameter , value from gv$option where parameter like '%Aud%'; 

select * from dba_audit_mgmt_config_params where parameter_name in ('DB AUDIT TABLESPACE') and audit_trail ='UNIFIED AUDIT TRAIL'; 


SELECT table_name , partition_type , def_tablespace_name ,  interval  from dba_part_tables where table_name ='AUD$UNIFIED'; 


SELECT * FROM DBA_SCHEDULER_JOBS WHERE job_name   like 'UNIFIED_AUDIT%'; 

select  * from   DBA_SCHEDULER_JOB_details where job_name   like 'UNIFIED_AUDIT%'; 





Cleaning sysaux 


If the ORA_LOGON_FAILURES policy is enabled, your SYSAUX tablespace can fill up in a short time, as many records will be created. You can learn more about the SYSAUX tablespace in the article named “SYSAUX Tablespace Usage”.

You can create a job to automatically remove old records. The following command creates a purge job that deletes records every 24 hours; the first deletion will take place 24 hours after you run the command.

BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
   AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, 
   AUDIT_TRAIL_PURGE_INTERVAL  => 24,
   AUDIT_TRAIL_PURGE_NAME      => 'Audit_Trail_Purge_Job',
   USE_LAST_ARCH_TIMESTAMP     => TRUE,
   CONTAINER                   => DBMS_AUDIT_MGMT.CONTAINER_CURRENT);
END;
/


The following command can be used to change the job’s schedule.

BEGIN
 DBMS_AUDIT_MGMT.SET_PURGE_JOB_INTERVAL(
  AUDIT_TRAIL_PURGE_NAME       => 'Audit_Trail_Purge_Job',
  AUDIT_TRAIL_INTERVAL_VALUE   => 48);
END;
/





Mixed Auditing  Vs  Pure Auditing 

1. Mixed auditing – enabled by default in 12c. It allows both traditional auditing and unified auditing to be used; i.e., apart from traditional auditing we can use all the features of unified auditing. Once we are comfortable with the unified concept, we can migrate the existing audit setup to unified policies and then enable pure auditing.
This serves as a good mediator for an easy and hassle-free switch to the preferred unified auditing.

2. Pure auditing – once pure auditing is enabled, we can no longer use the traditional auditing methods.



Which unified auditing mode is enabled for my database?

SQL> SELECT value FROM v$option WHERE parameter = 'Unified Auditing';

FALSE -> MIXED AUDITING
TRUE -> PURE AUDITING


How to change from MIXED to PURE auditing (relink the oracle binary):

SQL> shut immediate

[oracle@orcl:~ orcldemo] cd $ORACLE_HOME/rdbms/lib
[oracle@orcl:lib orcldemo] make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME
/usr/bin/ar d /oradb/app/oracle/product/12.2.0.1/db_1/rdbms/lib/libknlopt.a kzanang.o
/usr/bin/ar cr /oradb/app/oracle/product/12.2.0.1/db_1/rdbms/lib/libknlopt.a /oradb/app/oracle/product/12.2.0.1/db_1/rdbms/lib/kzaiang.o
chmod 755 /oradb/app/oracle/product/12.2.0.1/db_1/bin

- Linking Oracle
rm -f /oradb/app/oracle/product/12.2.0.1/db_1/rdbms/lib/oracle

SQL> startup;

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER='Unified Auditing';






SYS AUDITING:-

With mixed auditing, SYS audit records are written both at the OS level and to the unified audit trail, if an audit action policy is enabled.

FOR MIXED AUDITING:
Administrative user sessions generate SYS audit records. These records are written if the AUDIT_SYS_OPERATIONS initialization parameter is set to TRUE. This writes the records only to the traditional audit trails. However, when unified audit policies are enabled for administrative users,
these unified audit records are also written to the unified audit trail.

FOR PURE UNIFIED AUDITING:
All SYS audit records are written to the UNIFIED_AUDIT_TRAIL table only.



#########################
FGA (FINE-GRAINED AUDITING)  
#########################


Fine-grained auditing (FGA) allows us to audit users accessing data that meets certain criteria. As with standard auditing, we can audit SELECT, INSERT, UPDATE and DELETE operations. We use the DBMS_FGA package to add, remove, enable and disable FGA policies, and it is virtually impossible to bypass these policies. One nice feature is that we can attach handlers (like triggers) to the policies, which can execute procedures.

There are many options that can be applied to the DBMS_FGA package; here are some simple examples.

The FGA method was introduced in the Oracle 9i database release. However, that release supported only SELECT statements. With the 10g database release, it became possible to extend the FGA method to UPDATE, INSERT, and DELETE statements as well.

An audit condition is not required for FGA to work: if you do not specify one, the database audits all statements that access the object. DELETE statements are always audited because all columns are deleted together. An audit record is created only if a statement manipulates or selects rows that meet the FGA policy.
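To restrict auditing to rows that meet a condition, pass the optional audit_condition parameter to dbms_fga.add_policy. A sketch reusing the ABDUL.NEW table from the example below; the policy name and the condition value ('ORACLE') are made up for illustration:

begin
dbms_fga.add_policy (
object_schema=>'ABDUL',
object_name=>'NEW',
policy_name=>'FGA_AUDIT_COND',
audit_column=>'name',
audit_condition=>'name = ''ORACLE''',
enable=>true,
statement_types=>'select,update');
end;
/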


Creating the FGA policy say “FGA_AUDIT”:

begin
dbms_fga.add_policy (
object_schema=>'ABDUL',
object_name=>'NEW',
policy_name=>'FGA_AUDIT',
audit_column=>'name',
enable=>true,
statement_types=>'select,update');
end;
/




Removing FGA policy :-

begin
dbms_fga.drop_policy (
object_schema=>'ABDUL',
object_name=>'NEW',
policy_name=>'FGA_AUDIT');
end;
/


Enabling FGA policy :-

begin
dbms_fga.enable_policy (
object_schema=>'ABDUL',
object_name=>'NEW',
policy_name=>'FGA_AUDIT');
end;
/


Disabling FGA policy :-

begin
dbms_fga.disable_policy (
object_schema=>'ABDUL',
object_name=>'NEW',
policy_name=>'FGA_AUDIT');
end;
/



Query  FGA audit 

select db_user,scn,sql_text from dba_fga_audit_trail where db_user='ABDUL';




#########################
Audit Management till 11g 
#########################


Levels of auditing  

Three levels of Auditing:-
Statement level
Object level
System/privilege level


Statement level auditing :-

Auditing will be done at statement level.
Statements that can be audited are found in STMT_AUDIT_OPTION_MAP.

SQL> audit table by Abdul;

Audit succeeded.


SQL> select * from DBA_STMT_AUDIT_OPTS;

USER_NAME PROXY_NAME AUDIT_OPTION SUCCESS   FAILURE
--------- ---------- ------------ --------- ---------
ABDUL               TABLE        BY ACCESS BY ACCESS


SQL> conn ABDUL/ABDUL;
Connected.

SQL> create table new(name varchar(20));

Table created.

SQL> select USERNAME,TIMESTAMP,ACTION_NAME from dba_audit_trail where USERNAME='ABDUL';

USERNAME             TIMESTAMP ACTION_NAME
-------------------- --------- ------------
ABDUL                 12-AUG-19 CREATE TABLE




Object level auditing :-

Auditing will be done at object level.
These objects can be audited: tables, views, sequences, packages, stored procedures and stored functions.

SQL> audit insert on ABDUL.testtab;

Audit succeeded.

SQL> insert into ABDUL.testtab values('oracle');

1 row created.


SQL> select object_name,object_type,ins,sel from dba_obj_audit_opts where owner='ABDUL';

OBJECT_NAME  OBJECT_TYPE INS     SEL
------------ ----------- ------- ----
TESTTAB      TABLE       S/S     -/-


select username,timestamp,action_name,sys_privilege,priv_used from dba_audit_trail where username='ABDUL';

USERNAME TIMESTAMP ACTION_NAME SYS_PRIVILEGE PRIV_USED                      
-------- --------- ----------- ------------- ----------------- 
DBWR     12-OCT-18 SESSION_REC               INSERT ANY TABLE   






System/Privilege level auditing :-

Auditing will be done at privilege level.
All system privileges that are found in SYSTEM_PRIVILEGE_MAP can be audited.

SQL> audit create any table by ABDUL;

Audit succeeded.

SQL> select privilege,user_name from dba_priv_audit_opts where user_name='ABDUL';

PRIVILEGE                                USER_NAME
---------------------------------------- --------------------
CREATE ANY TABLE                         ABDUL


SQL> grant dba to ABDUL;

Grant succeeded.

SQL> create table testtab(name varchar(20));

Table created.

select username,timestamp,action_name,sys_privilege,priv_used from dba_audit_trail where username='ABDUL';

USERNAME TIMESTAMP ACTION_NAME SYS_PRIVILEGE PRIV_USED                       
-------- --------- ----------- ------------- -----------------
HARI     12-OCT-19 CREATE TABLE              CREATE ANY TABLE






Audit Failed Logins till 11g 

The Oracle auditing utility has this command:

audit create session whenever not successful;

To enable auditing of failed sign-on attempts:

1 - Add initialization parameters & bounce instance:

audit_trail=true

audit_file_dest='/u01/app/oracle/mysid/mydir/'

2 - Enable auditing of failed logon attempts as SYSDBA:

SQL> audit create session whenever not successful;

3 - You can now view failed login attempts in dba_audit_trail:

select OS_USERNAME,USERNAME,USERHOST,to_char(timestamp,'MM-DD-YYYY HH24:MI:SS'), returncode
from dba_audit_trail 
where returncode > 0;


select ntimestamp#, userid, userhost, spare1, comment$text from sys.aud$ where returncode=1017 order by 1;






Purge Audit records :


Purging of audit records can be done either using the traditional purge method or using the newer DBMS_AUDIT_MGMT package.



Purging using traditional Method :


1) Take an export backup of the 2019 data that will be deleted, using the below script

exp '"/ as sysdba"' FILE=cpp_audit_2019-12-31.dmp log=cpp_audit_2019-12-31.log tables=AUD\$ query=\"WHERE TO_CHAR\(NTIMESTAMP\#,\'YYYY\'\) like \'2019\'\" statistics=none


2) Enable restricted session

alter system enable restricted session;


3) Create a backup table with the required data that needs to be retained, and purge from aud$


select sum(bytes/1024/1024/1024)  from   dba_segments where segment_name like 'AUD$';    

select count(*)  from aud$ where TO_CHAR(NTIMESTAMP# , 'YYYY') like '%2020%'   ;


( Verify tablespace  has enough space before proceeding )

create table temp_aud tablespace AUDIT_EFP as select * from sys.aud$  where TO_CHAR(NTIMESTAMP# , 'YYYY') like '%2020%'; 

select count(*)  from temp_aud  ;
truncate table sys.aud$;

select count(*) from SYS.AUD$;

insert into sys.aud$ select * from temp_aud;

commit ;

select count(*) from SYS.AUD$;


alter index SYS.I_AUD1 rebuild online;


exec dbms_stats.gather_table_stats (ownname=>'SYS', tabname=>'AUD$' , estimate_percent=>10, cascade=>TRUE, degree=>5);




4) Disable restricted session

alter system disable restricted session;



5) Drop the backup table created.


drop table temp_aud;





Purging using the new feature DBMS_AUDIT_MGMT

New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information (Doc ID 731908.1)



Manually Purging 

SELECT * FROM dba_audit_mgmt_config_params;

SQL> BEGIN
       DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
         audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
         use_last_arch_timestamp => FALSE);   -- set FALSE to purge all records/files
     END;
     /

OR

SQL> BEGIN
       DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
         audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
         last_archive_time => SYSTIMESTAMP-5);
     END;
     /


Setting auto purge

COLUMN audit_trail FORMAT A20
COLUMN last_archive_ts FORMAT A40

sql>SELECT * FROM dba_audit_mgmt_last_arch_ts;

SQL> BEGIN
       DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
         audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
         use_last_arch_timestamp => TRUE);   -- set TRUE to purge only records/files older than the timestamp specified for the audit trail
     END;
     /






Generate a script to check enabled auditing in the database :
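A rough sketch of such a generator, built from DBA_STMT_AUDIT_OPTS (MOS note 287436.1, listed in the references at the end of this post, provides a complete script):

select 'AUDIT ' || audit_option ||
       decode(user_name, null, '', ' BY ' || user_name) || ';' as audit_stmt
from   sys.dba_stmt_audit_opts;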






Example for statement auditing :



AUDIT ADMINISTER KEY MANAGEMENT  BY ACCESS ;
AUDIT ALTER ANY PROCEDURE  BY ACCESS ;
AUDIT ALTER ANY SQL TRANSLATION PROFILE  BY ACCESS ;
AUDIT ALTER ANY TABLE  BY ACCESS ;
AUDIT ALTER DATABASE  BY ACCESS ;
AUDIT ALTER PROFILE  BY ACCESS ;
AUDIT ALTER SYSTEM  BY ACCESS ;
AUDIT ALTER USER  BY ACCESS ;
AUDIT AUDIT SYSTEM  BY ACCESS ;
AUDIT BECOME USER  BY ACCESS ;
AUDIT COMMENT TABLE  BY ACCESS ;
AUDIT CONTEXT  BY ACCESS ;
AUDIT CREATE ANY JOB  BY ACCESS ;
AUDIT CREATE ANY LIBRARY  BY ACCESS ;
AUDIT CREATE ANY PROCEDURE  BY ACCESS ;
AUDIT CREATE ANY SQL TRANSLATION PROFILE  BY ACCESS ;
AUDIT CREATE ANY TABLE  BY ACCESS ;
AUDIT CREATE EXTERNAL JOB  BY ACCESS ;
AUDIT CREATE LIBRARY  BY ACCESS ;
AUDIT CREATE PUBLIC DATABASE LINK  BY ACCESS ;
AUDIT CREATE SESSION  BY ACCESS ;
AUDIT CREATE SQL TRANSLATION PROFILE  BY ACCESS ;
AUDIT CREATE USER  BY ACCESS ;
AUDIT DATABASE LINK  BY ACCESS ;
AUDIT DIMENSION  BY ACCESS ;
AUDIT DIRECTORY  BY ACCESS ;
AUDIT DROP ANY PROCEDURE  BY ACCESS ;
AUDIT DROP ANY SQL TRANSLATION PROFILE  BY ACCESS ;
AUDIT DROP ANY TABLE  BY ACCESS ;
AUDIT DROP PROFILE  BY ACCESS ;
AUDIT DROP USER  BY ACCESS ;
AUDIT EXEMPT ACCESS POLICY  BY ACCESS ;
AUDIT EXEMPT REDACTION POLICY  BY ACCESS ;
AUDIT GRANT ANY OBJECT PRIVILEGE  BY ACCESS ;
AUDIT GRANT ANY PRIVILEGE  BY ACCESS ;
AUDIT GRANT ANY ROLE  BY ACCESS ;
AUDIT GRANT DIRECTORY  BY ACCESS ;
AUDIT GRANT PROCEDURE  BY ACCESS ;
AUDIT GRANT SEQUENCE  BY ACCESS ;
AUDIT GRANT TABLE  BY ACCESS ;
AUDIT GRANT TYPE  BY ACCESS ;
AUDIT INDEX  BY ACCESS ;
AUDIT LOCK TABLE  BY ACCESS ;
AUDIT LOGMINING  BY ACCESS ;
AUDIT MATERIALIZED VIEW  BY ACCESS WHENEVER NOT SUCCESSFUL  ;
AUDIT NOT EXISTS  BY ACCESS ;
AUDIT PLUGGABLE DATABASE  BY ACCESS ;
AUDIT PROCEDURE  BY ACCESS ;
AUDIT PROFILE  BY ACCESS ;
AUDIT PUBLIC DATABASE LINK  BY ACCESS ;
AUDIT PUBLIC SYNONYM  BY ACCESS ;
AUDIT PURGE DBA_RECYCLEBIN  BY ACCESS ;
AUDIT RESTRICTED SESSION  BY ACCESS ;
AUDIT ROLE  BY ACCESS ;
AUDIT ROLLBACK SEGMENT  BY ACCESS ;
AUDIT SELECT ANY DICTIONARY  BY ACCESS ;
AUDIT SELECT TABLE  BY ACCESS WHENEVER NOT SUCCESSFUL  ;
AUDIT SEQUENCE  BY ACCESS ;
AUDIT SYNONYM  BY ACCESS ;
AUDIT SYSTEM AUDIT  BY ACCESS ;
AUDIT SYSTEM GRANT  BY ACCESS ;
AUDIT TABLE  BY ACCESS ;
AUDIT TRANSLATE ANY SQL  BY ACCESS ;
AUDIT TRIGGER  BY ACCESS ;
AUDIT TYPE  BY ACCESS ;
AUDIT USER  BY ACCESS ;
AUDIT VIEW  BY ACCESS ;






Reference :


https://docs.oracle.com/cd/E25054_01/network.1111/e16543/auditing.htm







#########################
Pointing Audit to syslog 
#########################

Recently we pointed our audit trail to syslog, since we onboarded Splunk in our environment to read the audit data.

Below are the steps for pointing the audit trail to syslog, and the settings on the Splunk agent side.


References :
How To Set the AUDIT_SYSLOG_LEVEL Parameter? (Doc ID 553225.1)
How To Enable Database Syslog Auditing On Solaris (Doc ID 1450842.1)

SYSLOG Auditing Fails To Write Entries To The File Destination Set In SYSLOG.CONF (Doc ID 1314077.1)





For Syslog :


A) As root user


    1) Take a backup of /etc/syslog.conf

    2) Modify /etc/syslog.conf as under (note: use a tab, not spaces)

        local1.warning /var/log/oracle/oracleaudit.log

    3) Restart the syslog service after this file is changed

        svcadm restart svc:/system/system-log:default



B) Add the below database parameters and restart the database as the oracle user

    1) Take a backup of the database spfile


    2) Verify the below 2 parameters are set

alter system set audit_file_dest='/oradata/ora12c/product/admin/test/audit' scope=spfile;
alter system set audit_sys_operations=true scope=spfile;


    3) Update the database syslog parameters as under

alter system set audit_trail=os scope=spfile;
alter system set enable_ddl_logging=TRUE scope=spfile;
alter system set audit_syslog_level='local1.warning' scope=spfile;


exec dbms_audit_mgmt.set_audit_trail_property(audit_trail_type=>dbms_audit_mgmt.audit_trail_os,audit_trail_property=>dbms_audit_mgmt.os_file_max_size,audit_trail_property_value=>50);

exec dbms_audit_mgmt.set_audit_trail_property(audit_trail_type=>dbms_audit_mgmt.audit_trail_os,audit_trail_property=>dbms_audit_mgmt.os_file_max_age,audit_trail_property_value=>1);

shutdown immediate;
startup;



For Splunk :


A) As the splunk user

    1) Ensure the attached Splunk_TA_oracle directory is present under /opt/splunkforwarder/etc/apps/ and is owned by the splunk user

    2) The /opt/splunkforwarder/etc/apps/Splunk_TA_oracle/local/inputs.conf file must have the below format if using audit file monitoring

[monitor:///var/log/oracle/oracleaudit.log]
sourcetype = oracle:audit:text
disabled = false
index = moe
crcSalt = <SOURCE>


    3) Ensure the "useACK = true" line is not present in the file /opt/splunkforwarder/etc/apps/org_all_forwarder_outputs/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers
maxQueueSize = 7MB
#useACK = true
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
#IP of Heavy Forwarder
[tcpout:primary_indexers]
server = XX.XXX.XX.XX:XXXX



4) Restart the splunk services as the splunk user and check that the audit files are listed

/opt/splunkforwarder/bin/splunk restart 
/opt/splunkforwarder/bin/splunk list monitor 


Verification: 

Verify entries are getting written to /var/log/oracle/oracleaudit.log and get confirmation from the Splunk team.




Rollback steps : 

1) Restore the backup of /etc/syslog.conf and restart the syslog service

2) Restore the backup of the database spfile and restart the database





References : 


SCRIPT: Generate AUDIT and NOAUDIT Statements for Current Audit Settings (Doc ID 287436.1)

SRDC - How to Collect Standard Information for Database Auditing (Doc ID 1901113.1)





Tuesday, May 5, 2020

Oracle 19c New features / Whats new





I primarily started writing this blog for my 19c OCP exam 1Z0-083. However, since exam 1Z0-083 contains 18c features as well, I may add some 18c features too.


Also, exam 1Z0-083, Oracle Database Administration II ("Oracle Database Administration 2019 Certified Professional"), does not solely concentrate on new features; it is a recap of all general topics and, more importantly, it concentrates on PDB and CDB features, perhaps because from 20c onward they plan to mandate PDB/CDB.


Also, in this article I won't be covering the CDB/PDB topic; I will write another article on that.




############################################
RAC new features 
############################################



Mgmt database is optional

Starting with 19c, the Mgmt database (Grid Infrastructure Management Repository) is optional.


Automatic block corruption recovery with the CONTENT.CHECK disk group attribute

The CONTENT.CHECK disk group attribute enables or disables content checking when performing data copy operations such as replacing a disk. It can be set to TRUE or FALSE; when set to TRUE, logical content checking is enabled for all rebalance operations.
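The attribute is set per disk group; for example, on a hypothetical disk group named DATA:

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'content.check' = 'TRUE';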



The new ASMCMD mvfile command moves a file to the specified file group in the same disk group where the file is stored.

Example 
ASMCMD [+] > mvfile +data/orcl/controlfile/Current.260.684924747 --filegroup FG1
ASMCMD [+fra/orcl/archivelog/flashback] > mvfile log_7.264.684968167 --filegroup FG1



SERVICE_NAMES parameter has been deprecated




############################################
Performance Tuning New features 
############################################

Automatic Indexing 

The automatic indexing feature automates the index management tasks in an Oracle database. Automatic indexing automatically creates, rebuilds, and drops indexes in a database based on the changes in application workload, thus improving database performance. The automatically managed indexes are known as auto indexes.



1)  Configuration 

Enable automatic indexing in a database and create any new auto indexes as visible indexes, so that they can be used in SQL statements:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT'); 


Enable automatic indexing in a database, but create any new auto indexes as invisible indexes, so that they cannot be used in SQL statements:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY'); 


Disable automatic indexing in a database, so that no new auto indexes are created and the existing auto indexes are disabled:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF'); 


Set retention period for unused auto indexes to 90 days:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90'); 


Set retention period for unused non-auto indexes to 60 days:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', '60'); 


Define TBSAUTO as the tablespace to store auto indexes:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE', 'TBSAUTO'); 


Allocate 5% of the tablespace for auto indexes:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '5'); 




2)  Generating report 

Generate a report about the automatic indexing operations for the last 24 hours in plain text (default) format:

 set long 9999999 pages 50000 lines 500
 col xxxx for a300
 SELECT dbms_auto_index.Report_activity() as xxxx 
 FROM dual;


Generate a report about the automatic indexing operations for a specific time window in HTML format:

 set long 9999999 pages 50000 lines 500
 col xxxx for a300
 SELECT dbms_auto_index.Report_activity(activity_start => systimestamp-1, 
activity_end => systimestamp, 
type => 'HTML',
section => 'ALL') as xxxx  FROM dual;



Create a report about the last automatic indexing operation:

 set long 9999999 pages 50000 lines 500
 col xxxx for a300
 SELECT dbms_auto_index.Report_last_activity() as xxxx 
 FROM dual;




3) Views 

 col DESCRIPTION for a30
 col ADVISOR_NAME for a25
 col task_name for a30
 select TASK_NAME,DESCRIPTION,ADVISOR_NAME,CREATED,LAST_MODIFIED,LAST_EXECUTION,EXECUTION_START,EXECUTION_END,STATUS,PROGRESS_METRIC
 from dba_advisor_tasks;


 col TASK_NAME for a30
 col EXECUTION_NAME for a30
 set lines 200 pages 5000
 select TASK_NAME,EXECUTION_NAME,EXECUTION_START,EXECUTION_END,STATUS,REQUESTED_DOP,ACTUAL_DOP
 from dba_advisor_executions 
 where TASK_NAME in ('SYS_AUTO_INDEX_TASK','SYS_AI_SPM_EVOLVE_TASK','SYS_AI_VERIFY_TASK')
 order by EXECUTION_START;



 col PARAMETER_NAME for a40
 col PARAMETER_VALUE for a15
 col LAST_MODIFIED for a20
 col MODIFIED_BY for a15
 select * From DBA_AUTO_INDEX_CONFIG;



Real-Time Statistics  

Oracle Database 12c introduced online statistics gathering for CREATE TABLE AS SELECT statements and direct-path inserts.

Oracle Database 19c introduces real-time statistics, which extend online support to conventional DML statements. Because statistics can go stale between DBMS_STATS jobs, real-time statistics help the optimizer generate more optimal plans. Whereas bulk load operations gather all necessary statistics, real-time statistics augment rather than replace traditional statistics.

Oracle introduced new parameters:

"_optimizer_gather_stats_on_conventional_dml" and "_optimizer_use_stats_on_conventional_dml", which are true by default
"_optimizer_stats_on_conventional_dml_sample_rate", which defaults to 100%


How does real-time statistics work?

1) By default "_optimizer_gather_stats_on_conventional_dml" is true, so the feature kicks in automatically.
2) When a conventional DML operation is modifying a table, Oracle Database dynamically computes values for the most essential statistics if the above parameter is on.
3) Consider the example of a table that has a lot of inserts and whose row count is increasing. Real-time statistics keep track of the increasing row count as rows are inserted. If the optimizer performs a hard parse of a new query, it can use the real-time statistics to obtain a more accurate cost estimate.
4) DBA_TAB_COL_STATISTICS and DBA_TAB_STATISTICS have a NOTES column that tells whether real-time statistics have been used (STATS_ON_CONVENTIONAL_DML).
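The NOTES column can be checked with a query like the one below; the table name SALES is illustrative:

col table_name for a20
col notes for a30
select table_name, num_rows, notes
from   dba_tab_statistics
where  table_name = 'SALES'
and    notes = 'STATS_ON_CONVENTIONAL_DML';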




Parameters to control this feature:

 _optimizer_gather_stats_on_conventional_config
 _optimizer_gather_stats_on_conventional_dml
 _optimizer_stats_on_conventional_dml_sample_rate
 _optimizer_use_stats_on_conventional_config
 _optimizer_use_stats_on_conventional_dml







Automatic Resolution of SQL Plan Regressions 



SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.

Automatic SQL plan management resolves plan regressions without user intervention. For example, if high-load statements are performing suboptimally, then the SQL plan management evolve advisor can locate the statements automatically, and then test and accept the best plans.


The new default AUTO for ALTERNATE_PLAN_BASELINE and ALTERNATE_PLAN_SOURCE enables the Automatic SQL Plan Management in Oracle Database 19c. The attribute ALTERNATE_PLAN_SOURCE lets the advisor identify the sources for plans by itself. The ALTERNATE_PLAN_BASELINE parameter determines which SQL statements are eligible for plan capture. AUTO in this case means that any SQL statement in AWR will be a potential candidate.


column parameter_value format a45
column parameter_name format a25
set pages 200
SELECT PARAMETER_NAME, PARAMETER_VALUE
  FROM   DBA_ADVISOR_PARAMETERS
  WHERE  TASK_NAME = 'SYS_AUTO_SPM_EVOLVE_TASK'
    AND PARAMETER_VALUE <> 'UNUSED'
  ORDER BY 1;




If you’d like to restore the Oracle 12.1 behavior:

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
      parameter => 'ALTERNATE_PLAN_BASELINE', 
      value     => '');
END; 
/

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ALTERNATE_PLAN_SOURCE', 
      value     => '');
END; 
/

BEGIN
  DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
     task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
     parameter => 'ALTERNATE_PLAN_LIMIT',
     value => 10);
END;
/



SQL Quarantine

New in Oracle 19c is the concept of SQL Quarantine: if a particular SQL statement exceeds a specified resource limit (set via Oracle Resource Manager), the Resource Manager terminates the execution of that statement and “quarantines” its plan. SQL Quarantine prevents reuse of an execution plan that was terminated by the Resource Manager due to resource limits, so rogue queries are not run again on the database while the Resource Manager is active.

Broadly speaking, this means that the execution plan is placed on a “blacklist” of plans that the database will not execute.

To use SQL Quarantine you set up a quarantine configuration and define thresholds, either for a specific execution plan of a given SQL_ID or for all plans of that SQL_ID. The thresholds are similar to Resource Manager thresholds, e.g. a CPU time limit or an elapsed time limit. Use the DBMS_SQLQ package to define the quarantine configuration for an execution plan of a SQL statement.

Please note, the quarantine feature does not kill the session itself; it just flushes out the plan and quarantines that SQL and its plan for as long as the configuration remains enabled.

As per the documentation this feature is only available in the Cloud and on Engineered Systems (EE-ES), not in on-premises Standard/Enterprise Edition; however, I could still create and enable a quarantine configuration and create plans on it.




In order to create a SQL Quarantine let's first configure it.

# For SQL_ID and one of its execution plan

DECLARE
  quarantine_config VARCHAR2(30);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(
                          SQL_ID          => '&sql_id',
                          PLAN_HASH_VALUE => '&PLAN_HASH_VALUE');
END;
/

# For SQL_ID and all of its executions plans

DECLARE
  quarantine_config VARCHAR2(30);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id');
END;
/

# For SQL_TEXT Only

DECLARE
  quarantine_config VARCHAR2(30);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(SQL_TEXT => TO_CLOB('select count(*) from emp'));
END;
/

Second, add thresholds to it. Note the values are in seconds; even for CPU the parameter is CPU_TIME, not a percentage.

BEGIN
  DBMS_SQLQ.ALTER_QUARANTINE(
     QUARANTINE_NAME => '&SQL_QUARANTINE_ID',
     PARAMETER_NAME  => 'CPU_TIME',
     PARAMETER_VALUE => '20');

  DBMS_SQLQ.ALTER_QUARANTINE(
     QUARANTINE_NAME => '&SQL_QUARANTINE_ID',
     PARAMETER_NAME  => 'ELAPSED_TIME',
     PARAMETER_VALUE => '10');
END;
/

And finally, enable it

BEGIN
  DBMS_SQLQ.ALTER_QUARANTINE(
     QUARANTINE_NAME => '&SQL_QUARANTINE_ID',
     PARAMETER_NAME  => 'ENABLED',
     PARAMETER_VALUE => 'YES');
END;
/



Using the DBMS_SQLQ package subprograms we can also enable or disable a quarantine configuration, delete a quarantine configuration and if required also transfer quarantine configurations from one database to another.


SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_ca0z7uh2sqcbw',
       PARAMETER_NAME  => 'ENABLED',
       PARAMETER_VALUE => 'NO');
END;
/

PL/SQL procedure successfully completed.



To view which plans were quarantined, the V$SQL view has new columns, as below:

select sql_text, plan_hash_value, avoided_executions, sql_quarantine
from v$sql
where sql_quarantine is not null;

select sql_text, name, plan_hash_value, last_executed, enabled
from dba_sql_quarantine;


SQL> select sql_quarantine,avoided_executions
  2  from v$sql where sql_id='491fa2p6qt9h6';

SQL_QUARANTINE
--------------------------------------------------------------------------------
AVOIDED_EXECUTIONS
------------------
SQL_QUARANTINE_ca0z7uh2sqcbw
1




############################################
Dataguard  New features 
############################################


Active Data Guard DML Redirection

You can run DML operations on Active Data Guard standby databases. This enables you to run read-mostly applications, which occasionally execute DMLs, on the standby database.

DML operations on a standby can be transparently redirected to and run on the primary database. This includes DML statements that are part of PL/SQL blocks. The Active Data Guard session waits until the corresponding changes are shipped to and applied to the Active Data Guard standby. Read consistency is maintained during the DML operation and the standby database on which the DML is run can view its uncommitted changes. However, all the other standby database instances can view these changes only after the transaction is committed.


To configure automatic redirection of DML operations for all standby sessions in an Active Data Guard environment:
ADG_REDIRECT_DML=TRUE 


To configure automatic redirection of DML operations for the current session, use the following command:

ALTER SESSION ENABLE ADG_REDIRECT_DML; 




Propagate Restore Points from Primary to Standby 

With Oracle Database 19c, restore points created on the primary database are automatically replicated to the standby.

The restore points created on the standby are always normal restore points, regardless of whether the restore point on the primary is guaranteed or normal.

There are 3 prerequisites for this feature:
1. The COMPATIBLE parameter on both the primary and standby database must be 19.0.0 or greater.
2. The primary database must be open (not mounted), because restore point replication occurs through the redo stream; the MRP process on the standby must be running for replication.
3. There should not be an existing restore point on the standby database with the same name.
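A minimal sketch of the flow; the restore point name is illustrative. On the standby, the replicated copy appears as a normal restore point with the suffix _PRIMARY appended to its name:

```sql
-- On the primary (must be open):
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- On the standby, after the redo has been applied, verify the replicated
-- copy; it is a normal (non-guaranteed) restore point BEFORE_UPGRADE_PRIMARY
SELECT name, guarantee_flashback_database FROM v$restore_point;
```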



Flashback Standby when Primary is flashed back 

In previous releases, getting the standby to the same point in time as the primary required a manual procedure to flash back standby databases. A new parameter (_standby_auto_flashback) is introduced which enables the standby database to be flashed back automatically when Flashback Database is performed on the primary database.


Since Oracle 19c, a DBA can put the standby database in MOUNT mode with no managed recovery (or Read Only) and then flash back the primary database. When you restart the MRP after closing the physical standby, the recovery process automatically flashes back the standby database and continues to apply the new branch of redo.

Just restart media recovery on the standby, and the standby will automatically flash back to the restore point.




Finer granularity Supplemental Logging 


Supplemental logging was designed and implemented for Logical Standby or full database replication requirements. This adds unnecessary overhead in environments where only a subset of tables is being replicated. Fine-grained supplemental logging provides a way for partial database replication users to disable supplemental logging for uninteresting tables so that even when supplemental logging is enabled in database or schema level, there is no supplemental logging overhead for uninteresting tables.

Use of this feature can significantly reduce the overhead in terms of resource usage and redo generation in case when only some of the tables in the database require supplemental logging, such as in a Golden Gate partial replication configuration.

If both source and mining database are at redo compatibility 19 or higher and the value of enable_goldengate_replication is TRUE, then Oracle Database 19c and higher provides an advantage of reduced supplemental logging overhead for Oracle GoldenGate.



How to enable

 SQL> alter system set enable_goldengate_replication=true;

 SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

 SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr from v$database; 



Way to disable

 SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; 

 SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION; 

 SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr from v$database;





Oracle 19c-Tuning Automatic Outage Resolution With Data Guard



Data Guard maintains internal mechanisms that detect and correct issues with its redo transport and gap resolution processes. In case of network or disk I/O problems, these mechanisms prevent those processes from hanging and causing unnecessarily long gaps.

Oracle Data Guard has several processes on the primary and standby databases that handle redo transport and archiving, and these communicate with each other over the network.

In certain failure situations (network hangs, disconnects, and disk I/O issues) these processes can hang, potentially causing delays in redo transport and gap resolution. Data Guard has an internal mechanism to detect these hung processes and terminate them, allowing normal outage resolution to occur.

The following parameters allow the wait times to be tuned for a specific Data Guard configuration, based on the user's network and disk I/O behavior:


SQL> show parameter DATA_GUARD_MAX

NAME                        TYPE     VALUE
--------------------------- -------- -----
data_guard_max_io_time      integer  240
data_guard_max_longio_time  integer  240

Default: 240
Range of values: 10 to 7200

DATA_GUARD_MAX_IO_TIME

This parameter sets the maximum number of seconds that can elapse before a process is considered hung while performing a regular I/O operation in an Oracle Data Guard environment. Regular I/O operations include read, write, and status operations.

DATA_GUARD_MAX_LONG_IO_TIME

This parameter sets the maximum number of seconds that can elapse before a process is considered hung while performing a long I/O operation in an Oracle Data Guard environment. Long I/O operations include open and close operations.
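For example, on a configuration with a slow network or storage tier, the timeouts could be raised; the values below are purely illustrative:

```sql
ALTER SYSTEM SET DATA_GUARD_MAX_IO_TIME = 300;       -- regular I/O: 5 minutes
ALTER SYSTEM SET DATA_GUARD_MAX_LONG_IO_TIME = 600;  -- long I/O: 10 minutes
```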




Fast-start failover can now be configured in observe-only mode (no actual failover is performed).


############################################
Patching
############################################

 
Zero-Downtime Oracle Grid Infrastructure Patching Using Fleet Patching and Provisioning    -- rhpctl  


Fleet Patching and Provisioning (FPP), formerly known as Rapid Home Provisioning, introduces the concept of gold image copies.
FPP enhancements over the out-of-box 12c patching are "zero downtime database upgrade" and "adaptive RAC rolling OJVM patching".

Since this is a very wide topic I won't explain it in detail here; a separate article dedicated to it will follow.

However, the Oracle document below explains FPP in detail with exact commands.

https://docs.oracle.com/en/database/oracle/oracle-database/19/sprhp/deploying-and-managing-oracle-software-using-fleet-patching-and-provisioning.pdf




Change in how Version / RU / RUR is seen

From 19c Oracle has changed the way the RU/RUR releases are presented. Refer to Doc ID 2118136.2.






############################################
Upgrade and Migrate 
############################################


Dry run for  grid upgrade

The Oracle Grid Infrastructure installation wizard (gridSetup.sh) enables you to perform a dry-run mode upgrade using the -dryRunForUpgrade flag to check your system's upgrade readiness.

In dry-run upgrade mode, the installation wizard performs all of the system readiness checks that it would perform in an actual upgrade and enables you to verify whether your system is ready for upgrade before you start the upgrade. This mode does not perform an actual upgrade. It helps anticipate potential problems with the system setup and avoid upgrade failures.

On the 1st node, run the Grid setup dry run from the new home. Note: this will also copy the software to the 2nd node.

./gridSetup.sh -silent -dryRunForUpgrade \
  -responseFile /oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp




AutoUpgrade Tool  

AutoUpgrade enables you to upgrade one or many Oracle databases at the command line with a single command and a single configuration file.

AutoUpgrade will run the preupgrade tasks, perform automated fixups where needed, execute the database upgrade, and finish by taking care of post-upgrade tasks. It includes automatic retry and fallback, the possibility to schedule upgrades for future points in time, and the ability to set, change or remove initialization parameters as desired.

It saves time and money by upgrading hundreds of databases with one command and replacing bespoke high maintenance upgrade solutions.

Reference Doc ID: AutoUpgrade Tool (Doc ID 2485457.1)


Config File

 global.autoupg_log_dir=/home/oracle/DBADB122_UPG/DBADB122
 DBADB122.dbname=DBADB122
 DBADB122.start_time=NOW
 DBADB122.source_home=/u02/app/oracle/product/12c/dbhome_1
 DBADB122.target_home=/opt/oracle/product/19c/dbhome_1
 DBADB122.sid=DBADB122
 DBADB122.log_dir=/home/oracle/DBADB122_UPG/DBADB122
 DBADB122.upgrade_node=dinesh19c
 DBADB122.target_version=19.3
 DBADB122.run_utlrp=yes
 DBADB122.timezone_upg=yes


 Run PRECHECK

 $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config DBADB122_config.cfg -mode analyze


 Upgrade Database using DEPLOY


 $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config DBADB122_config.cfg -mode deploy





############################################
Other 
############################################


Oracle Network Log File Segmentation

The maximum size and number of text log files can be configured for Oracle Network components such as Oracle Net Listener, Oracle Connection Manager, and Global Service Manager. This feature allows better management of log files, particularly in Cloud environments.

Use the parameters below to define the number of log file segments and their size:

LOG_FILE_NUM_listener_name
Specifies the number of log file segments. At any point in time there can be only "n" log file segments, where "n" is LOG_FILE_NUM_listener_name.
If the log grows beyond this number, the oldest segments are deleted.

LOG_FILE_SIZE_listener_name
Specifies the size of each log file segment, in MB.



Sample listener.ora
  
 LOG_FILE_NUM_DBADB_LISTENER=5
 LOG_FILE_SIZE_DBADB_LISTENER=1



Clear flashback logs periodically for FRA size certainty 

DBAs cannot manage the flashback logs in the fast recovery area directly, other than by setting the flashback retention target or using guaranteed restore points. Nevertheless, you can manage fast recovery area space as a whole to maximize the space available for retention of flashback logs. Previously, the only quick way to purge flashback logs was to turn Flashback off and on again. Starting with Oracle Database release 19c, Oracle Database monitors flashback logs in the fast recovery area and automatically deletes flashback logs that are beyond the retention period (not waiting for an FRA space crunch). When the retention target is reduced, flashback logs that are beyond the retention period are deleted immediately.
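For instance, with a one-day retention target, flashback logs older than a day are now removed proactively. The value below is illustrative; the target is in minutes:

```sql
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440;  -- 1 day, in minutes

-- Monitor flashback log usage in the FRA
SELECT file_type, percent_space_used
FROM   v$recovery_area_usage
WHERE  file_type = 'FLASHBACK LOG';
```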



Schema-only accounts  

This feature enables you to create a schema account without authentication. This is a great feature: in previous releases a schema account is a highly privileged account that can perform "anything" on the database objects stored in the schema. To clarify, simply using the schema account you have the power to "turn auditing off", which is a serious security issue.

CREATE USER SPECIAL_SCHEMA NO AUTHENTICATION
DEFAULT TABLESPACE TS_SPECIAL_SCHEMA
TEMPORARY TABLESPACE TEMP;


To connect to the schema-only account SPECIAL_SCHEMA you need to configure a proxy setup:
SQL> ALTER USER SPECIAL_SCHEMA GRANT CONNECT THROUGH test;


SQL> select * from proxy_users where PROXY='TEST';

Connecting directly (for example using easy connect) will throw an error, because the SPECIAL_SCHEMA account lacks CREATE SESSION!

To find the list of schema-only accounts in your database:
SQL> select * from dba_users where authentication_type='NONE';


Before starting your upgrade, determine whether you want to use password authentication for default Oracle Database accounts whose passwords are in EXPIRED status and whose accounts are in LOCKED status.

During upgrades to Oracle Database 19c and later releases, default Oracle accounts that have not had their passwords reset before the upgrade (and are set to EXPIRED status), and that are also set to LOCKED status, are set to NO AUTHENTICATION after the upgrade is complete.


Because of this new feature, default accounts that are changed to schema-only accounts become unavailable for password authentication. The benefit of this feature is that administrators no longer have to periodically rotate the passwords for these Oracle Database-provided schemas. This feature also reduces the security risk of attackers using default passwords to hack into these accounts.




ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE 

The ALTER SYSTEM clause FLUSH PASSWORDFILE_METADATA_CACHE refreshes the metadata cache with the latest details of the database password file. The latest details of the database password file can be retrieved by querying the V$PASSWORDFILE_INFO view.

 If the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated database password file, you can do it simply with the following command:

SQL> alter system flush passwordfile_metadata_cache;

System altered.




Read-only-Oracle-Home


In Oracle 18c Oracle introduced a new feature called “read-only Oracle Home”. By configuring the Oracle Home directory of your RDBMS installation as read-only, Oracle places all the configuration files, like the spfile, outside the directory tree of your $ORACLE_HOME and basically moves them to “$ORACLE_BASE/dbs” and “$ORACLE_BASE/homes”. But if the software is read-only, can we place the $ORACLE_HOME directory in a separate filesystem which is mounted read-only during normal operation? This would help to avoid accidental modifications or an accidental removal of the software.


The tool for enabling a read-only-Oracle-HOME is “roohctl”.

oracle@dirac:~/ [rdbms19] echo $ORACLE_HOME
 /u00/app/oracle/product/19.0.0.0
 oracle@dirac:~/ [rdbms19] roohctl -help
 Usage:  roohctl [<flag>] [<command>]
 Following are the possible flags:
     -help
 Following are the possible commands:
     -enable  Enable Read-only Oracle Home
         [-nodeList  List of nodes in a cluster environment]
(roohctl -disable is available, too, but is not documented)

oracle@dirac:~/ [rdbms19] roohctl -enable


When it returns your $ORACLE_HOME directory, your home is not a read-only Oracle Home.

The configuration information is stored in $ORACLE_HOME/install/orabasetab




Hybrid Partitioned Tables: 

Now large portions of a table can reside in external partitions, for example in the Oracle Cloud. With this new feature, you can also easily move non-active partitions to external files, such as Oracle Data Pump files, for a cheaper storage solution. Hybrid partitioned tables support all existing external table types for external partitions: ORACLE_DATAPUMP, ORACLE_LOADER, ORACLE_HDFS, ORACLE_HIVE.
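A minimal sketch of a hybrid partitioned table, assuming a directory object SALES_DIR and a CSV file (both names are hypothetical): old data lives in an external partition, current data in an ordinary internal partition.

```sql
CREATE TABLE sales_hybrid
( sale_id   NUMBER,
  sale_year NUMBER )
  EXTERNAL PARTITION ATTRIBUTES (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY sales_dir
    ACCESS PARAMETERS ( FIELDS TERMINATED BY ',' )
  )
PARTITION BY RANGE (sale_year)
( PARTITION p_2017 VALUES LESS THAN (2018)
    EXTERNAL LOCATION ('sales_2017.csv'),
  PARTITION p_current VALUES LESS THAN (MAXVALUE) );
```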





 Using -applyRU and -applyOneOffs in Response File during silent Installation of software


Oracle 18c also introduced the concept of patching the installation media prior to installation or upgrades. The -applyRU flag allows you to specify the location of a Release Update (RU), which is applied to the new ORACLE_HOME before the installation takes place. This saves you from having to install then patch the software as separate steps. If we had a release update "psu" unzipped in the "/u01/software" directory, we might do something like this.

./runInstaller -ignorePrereq -waitforcompletion -silent \
    -applyRU /u01/software/psu \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
    oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
    UNIX_GROUP_NAME=oinstall \
    INVENTORY_LOCATION=${ORA_INVENTORY} \
    SELECTED_LANGUAGES=en,en_GB \
    ORACLE_HOME=${ORACLE_HOME} \
    ORACLE_BASE=${ORACLE_BASE} \
    oracle.install.db.InstallEdition=EE \
    oracle.install.db.OSDBA_GROUP=dba \
    oracle.install.db.OSBACKUPDBA_GROUP=dba \
    oracle.install.db.OSDGDBA_GROUP=dba \
    oracle.install.db.OSKMDBA_GROUP=dba \
    oracle.install.db.OSRACDBA_GROUP=dba \
    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
    DECLINE_SECURITY_UPDATES=true


The -applyOneOffs flag is similar, but as the name suggests it allows you to apply one-off patches. These can be combined with release updates if needed. We can also specify multiple one-off patches as a comma-separated list. For the release update "psu" with the one-off patches "oneoff" and "oneoff2" we might do the following.

./runInstaller -ignorePrereq -waitforcompletion -silent \
    -applyRU /u01/software/psu \
    -applyOneOffs /u01/software/oneoff,/u01/software/oneoff2 \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
    oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
    UNIX_GROUP_NAME=oinstall \
    INVENTORY_LOCATION=${ORA_INVENTORY} \
    SELECTED_LANGUAGES=en,en_GB \
    ORACLE_HOME=${ORACLE_HOME} \
    ORACLE_BASE=${ORACLE_BASE} \
    oracle.install.db.InstallEdition=EE \
    oracle.install.db.OSDBA_GROUP=dba \
    oracle.install.db.OSBACKUPDBA_GROUP=dba \
    oracle.install.db.OSDGDBA_GROUP=dba \
    oracle.install.db.OSKMDBA_GROUP=dba \
    oracle.install.db.OSRACDBA_GROUP=dba \
    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
    DECLINE_SECURITY_UPDATES=true



############################################
Datapump 
############################################

MAX_DATAPUMP_PARALLEL_PER_JOB

MAX_DATAPUMP_PARALLEL_PER_JOB is introduced to control the number of parallel workers that can be used for an individual Data Pump job. This is an improvement to implement resource usage policies in DataPump.

In a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to N, even if PARALLEL > N is specified for the Data Pump job, only N worker processes will be started.
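For example, to cap every Data Pump job at 8 workers regardless of the PARALLEL value the job requests (the value is illustrative):

```sql
ALTER SYSTEM SET MAX_DATAPUMP_PARALLEL_PER_JOB = 8;
```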



Oracle Data Pump Ability to Exclude ENCRYPTION Clause on Import - new transform parameter OMIT_ENCRYPTION_CLAUSE  


Use case: migrating to a database that has TDE-encrypted tablespaces, where TDE does not support encrypted columns (e.g., Oracle Cloud).

Example:

impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=OMIT_ENCRYPTION_CLAUSE:Y

Details:
- Default: N - column encryption clauses in CREATE TABLE are kept; Y = suppressed
- Valid for TABLE object types





Oracle Data Pump Test Mode for Transportable Tablespaces (TTS) 


TTS_CLOSURE_CHECK=TEST_MODE indicates that the tablespaces are not required to be in read-only mode. This option is for testing purposes only, to provide the timing requirements of a transportable tablespace export operation. The resulting export dump file is not available for use by Data Pump Import.

Now you can more easily determine how long an export takes, and discover unforeseen issues not reported by the closure check.

expdp \
 directory=DATA_PUMP_DIR \
 dumpfile=users_tts_test.dmp \
 logfile=users_tts_test.log \
 transport_tablespaces=users \
 TTS_CLOSURE_CHECK=test_mode




Oracle Data Pump Prevents Inadvertent Use of Protected Roles - new ENABLE_SECURE_ROLES parameter is available 

Specify whether to use authenticated roles for export and import:
- ENABLE_SECURE_ROLES=YES | NO, available for the expdp and impdp clients, and for the Oracle Data Pump PL/SQL API
- Default: NO - does not enable authenticated protected roles
- Beginning with release 19c you must explicitly enable authenticated roles for an export or import job
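A sketch of an export job that needs a protected role; the schema and file names are illustrative:

```
expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr ENABLE_SECURE_ROLES=YES
```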




Oracle Data Pump Loads Partitioned Table Data in One Operation - GROUP_PARTITION_TABLE_DATA, a new value for the Import DATA_OPTIONS command line parameter



GROUP_PARTITION_TABLE_DATA tells Oracle Data Pump to import the table data in all partitions of a table as one operation. The default behavior is to import each table partition as a separate operation. For instance, when this option is set and there is a possibility that rows could move to a different partition while the table is being loaded, Oracle Data Pump groups the table data into one operation. Oracle Data Pump also groups all partitions of a table as one operation for tables that are created by the import operation.
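For example (the schema, dump file, and table names are illustrative; the key point is the DATA_OPTIONS placement):

```
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=hr.sales DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
```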





Oracle Data Pump Allows Tablespaces to Stay Read-Only During TTS Import  



- Allows read-only tablespaces during a Transportable Tablespaces import
- TRANSPORTABLE=NEVER | ALWAYS | KEEP_READ_ONLY | NO_BITMAP_REBUILD
- Restores the pre-12.2 ability to have tablespace files mounted on two databases at once
- Prevents fix-up of timezone data and rebuilding of bitmaps

Example:

impdp system DIRECTORY=dpump_dir DUMPFILE=dumpfile_name TRANSPORT_DATAFILES=datafile_name TRANSPORTABLE=KEEP_READ_ONLY









############################################
Desupported Features 
############################################


Desupport of Leaf Nodes in Flex Cluster Architecture

Leaf nodes are no longer supported in the Oracle Flex Cluster architecture in Oracle Grid Infrastructure 19c.
In Oracle Grid Infrastructure 19c (19.1) and later releases, all nodes in an Oracle Flex Cluster function as hub nodes. The capabilities offered by leaf nodes in the original implementation of the Oracle Flex Cluster architecture can as easily be served by hub nodes.




Desupport of Oracle Real Application Clusters for Standard Edition 2 (SE2) Database Edition

Starting with Oracle Database 19c, Oracle Real Application Clusters (Oracle RAC) is not supported in Oracle Database Standard Edition 2 (SE2). Upgrading Oracle Database Standard Edition databases that use Oracle RAC functionality to Oracle Database 19c is not possible.




############################################
Deprecated Parameters 
############################################




CLUSTER_DATABASE_INSTANCES  

The parameter CLUSTER_DATABASE_INSTANCES specifies the number of configured Oracle Real Application Clusters (Oracle RAC) instances. Starting with Oracle Database 19c and later releases, the number of configurable Oracle RAC instances is derived automatically from the Oracle Clusterware resource definitions. There is no replacement for this parameter, because there is no longer a reason to have this parameter.




SERVICE_NAMES 

Starting with Oracle Database 19c, customer use of the SERVICE_NAMES parameter is deprecated. It can be desupported in a future release.
The use of the SERVICE_NAMES parameter is no longer actively supported. It must not be used for high availability (HA) deployments. It is not supported to use the SERVICE_NAMES parameter for any HA operations. This restriction includes FAN, load balancing, FAILOVER_TYPE, FAILOVER_RESTORE, SESSION_STATE_CONSISTENCY, and any other uses.
To manage your services, Oracle recommends that you use the SRVCTL or GDSCTL command line utilities, or the DBMS_SERVICE package.
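A sketch of the recommended replacement using DBMS_SERVICE; the service name is illustrative:

```sql
BEGIN
  -- Create and start a service explicitly instead of listing it
  -- in the SERVICE_NAMES initialization parameter
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'sales_app',
                              network_name => 'sales_app');
  DBMS_SERVICE.START_SERVICE(service_name => 'sales_app');
END;
/
```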




############################################
Reference 
############################################

https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/new-features.html#GUID-5490FE65-562B-49DC-9246-661592C630F9

https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/database-new-features-guide.pdf


https://docs.oracle.com/en/database/oracle/oracle-database/19/rilin/real-application-clusters-installation-guide-linux-and-unix.pdf