Tuesday, May 5, 2020

Oracle 19c New Features / What's New





I primarily started writing this blog for my 19c OCP exam 1Z0-083. However, since exam 1Z0-083 also covers 18c features, I may add some 18c features too.


Also, exam 1Z0-083 "Oracle Database Administration II" (Oracle Database Administration 2019 Certified Professional) does not concentrate solely on new features; it is a recap of all the general topics and, more importantly, it concentrates heavily on PDB and CDB features, perhaps because from 20c onwards Oracle plans to mandate the PDB/CDB architecture.


Also, in this article I won't be covering the CDB/PDB topic; I will write a separate article on that.




############################################
RAC new features 
############################################



Mgmt database  is optional
Starting with 19c, the Mgmt database (Grid Infrastructure Management Repository) is optional.


Automatic block corruption recovery with the CONTENT.CHECK disk group attribute

The CONTENT.CHECK disk group attribute enables or disables content checking when performing data copy operations, such as rebalancing or replacing a disk. It can be set to TRUE or FALSE; when set to TRUE, logical content checking is enabled for all rebalance operations, allowing block corruptions to be detected and repaired automatically during the copy.
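
A minimal sketch of turning the attribute on for an existing disk group (the disk group name DATA is illustrative):

 SQL> ALTER DISKGROUP data SET ATTRIBUTE 'content.check' = 'TRUE';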



The new ASMCMD mvfile command moves a file to the specified file group in the same disk group where the file is stored.

Example 
ASMCMD [+] > mvfile +data/orcl/controlfile/Current.260.684924747 --filegroup FG1
ASMCMD [+fra/orcl/archivelog/flashback] > mvfile log_7.264.684968167 --filegroup FG1



SERVICE_NAMES parameter has been deprecated




############################################
Performance Tuning New features 
############################################

Automatic Indexing 

The automatic indexing feature automates the index management tasks in an Oracle database. Automatic indexing automatically creates, rebuilds, and drops indexes in a database based on the changes in application workload, thus improving database performance. The automatically managed indexes are known as auto indexes.



1)  Configuration 

Enable automatic indexing in a database and create any new auto indexes as visible indexes, so that they can be used in SQL statements:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT'); 


Enable automatic indexing in a database, but create any new auto indexes as invisible indexes, so that they cannot be used in SQL statements:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY'); 


Disable automatic indexing in a database, so that no new auto indexes are created, and the existing auto indexes are disabled:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF'); 


Set retention period for unused auto indexes to 90 days:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90'); 


Set retention period for unused non-auto indexes to 60 days:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', '60'); 


Define tablespace TBSAUTO to store auto indexes:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE', 'TBSAUTO'); 


Allocate 5% of the tablespace for auto indexes:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '5'); 




2)  Generating report 

Generate a report of the automatic indexing operations for the last 24 hours in plain text (the default) format:

 set long 9999999 pages 50000 lines 500
 col xxxx for a300
 SELECT dbms_auto_index.Report_activity() as xxxx 
 FROM dual;


Generate a report of the automatic indexing operations for a specific time window in HTML format:

 set long 9999999 pages 50000 lines 500
 col xxxx for a300
 SELECT dbms_auto_index.Report_activity(activity_start => systimestamp-1, 
activity_end => systimestamp, 
type => 'HTML',
section => 'ALL') as xxxx  FROM dual;



Create a report about the last automatic indexing operation:

 set long 9999999 pages 50000 lines 500
 col xxxx for a300
 SELECT dbms_auto_index.Report_last_activity() as xxxx 
 FROM dual;




3) Views 

 col DESCRIPTION for a30
 col ADVISOR_NAME for a25
 col task_name for a30
 select TASK_NAME,DESCRIPTION,ADVISOR_NAME,CREATED,LAST_MODIFIED,LAST_EXECUTION,EXECUTION_START,EXECUTION_END,STATUS,PROGRESS_METRIC
 from dba_advisor_tasks;


 col TASK_NAME for a30
 col EXECUTION_NAME for a30
 set lines 200 pages 5000
 select TASK_NAME,EXECUTION_NAME,EXECUTION_START,EXECUTION_END,STATUS,REQUESTED_DOP,ACTUAL_DOP
 from dba_advisor_executions 
 where TASK_NAME in ('SYS_AUTO_INDEX_TASK','SYS_AI_SPM_EVOLVE_TASK','SYS_AI_VERIFY_TASK')
 order by EXECUTION_START;



 col PARAMETER_NAME for a40
 col PARAMETER_VALUE for a15
 col LAST_MODIFIED for a20
 col MODIFIED_BY for a15
 select * From DBA_AUTO_INDEX_CONFIG;



Real-Time Statistics  

Oracle Database 12c introduced online statistics gathering for CREATE TABLE AS SELECT statements and direct-path inserts.

Oracle Database 19c introduces real-time statistics, which extend online support to conventional DML statements. Because statistics can go stale between DBMS_STATS jobs, real-time statistics help the optimizer generate more optimal plans. Whereas bulk load operations gather all necessary statistics, real-time statistics augment, rather than replace, traditional statistics.

Oracle introduced new hidden parameters:

"_optimizer_gather_stats_on_conventional_dml" and "_optimizer_use_stats_on_conventional_dml", which are TRUE by default
"_optimizer_stats_on_conventional_dml_sample_rate", which defaults to 100%


How do real-time statistics work?

1) By default "_optimizer_gather_stats_on_conventional_dml" is TRUE, so the feature kicks in automatically.
2) When a conventional DML operation is modifying a table, Oracle Database dynamically computes values for the most essential statistics, provided the above parameter is on.
3) Consider the example of a table with heavy inserts and a growing row count. Real-time statistics keep track of the increasing row count as rows are inserted. If the optimizer performs a hard parse of a new query, it can use the real-time statistics to obtain a more accurate cost estimate.
4) DBA_TAB_COL_STATISTICS and DBA_TAB_STATISTICS have a NOTES column; the value STATS_ON_CONVENTIONAL_DML indicates that real-time statistics have been used (see the query sketch below).
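
A minimal query sketch to spot real-time statistics, assuming a table named EMP (the table name is illustrative):

 SELECT table_name, num_rows, notes
 FROM   dba_tab_statistics
 WHERE  table_name = 'EMP'
 AND    notes = 'STATS_ON_CONVENTIONAL_DML';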




Parameters to control this feature:

 _optimizer_gather_stats_on_conventional_config
 _optimizer_gather_stats_on_conventional_dml
 _optimizer_stats_on_conventional_dml_sample_rate
 _optimizer_use_stats_on_conventional_config
 _optimizer_use_stats_on_conventional_dml







Automatic Resolution of SQL Plan Regressions 



SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.

Automatic SQL plan management resolves plan regressions without user intervention. For example, if high-load statements are performing suboptimally, the SQL plan management evolve advisor can locate the statements automatically, then test and accept the best plans.


The new default AUTO for ALTERNATE_PLAN_BASELINE and ALTERNATE_PLAN_SOURCE enables automatic SQL plan management in Oracle Database 19c. The ALTERNATE_PLAN_SOURCE attribute lets the advisor identify the sources for plans by itself. The ALTERNATE_PLAN_BASELINE parameter determines which SQL statements are eligible for plan capture; AUTO here means that any SQL statement in AWR is a potential candidate.


column parameter_value format a45
column parameter_name format a25
set pages 200
SELECT PARAMETER_NAME, PARAMETER_VALUE
  FROM   DBA_ADVISOR_PARAMETERS
  WHERE  TASK_NAME = 'SYS_AUTO_SPM_EVOLVE_TASK'
    AND PARAMETER_VALUE <> 'UNUSED'
  ORDER BY 1;




If you’d like to restore the Oracle 12.1 behavior:

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK' ,
      parameter => 'ALTERNATE_PLAN_BASELINE', 
      value     => '');
END; 
/

BEGIN 
   DBMS_SPM.SET_EVOLVE_TASK_PARAMETER( 
      task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
      parameter => 'ALTERNATE_PLAN_SOURCE', 
      value     => '');
END; 
/

BEGIN
  DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(
     task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
     parameter => 'ALTERNATE_PLAN_LIMIT',
     value => 10);
END;
/



Sql  Quarantine

New in Oracle 19c is the concept of SQL Quarantine: if a particular SQL statement exceeds a specified resource limit (set via Oracle Resource Manager), the Resource Manager terminates the execution of that statement and "quarantines" its plan. SQL Quarantine prevents reuse of an execution plan that was terminated by the Resource Manager due to resource limits, so rogue queries are not run again on the database while the Resource Manager is active.

Broadly speaking, this means that the execution plan is placed on a "blacklist" of plans that the database will not execute.

SQL Quarantine requires setting up a quarantine configuration and defining thresholds, either for a specific execution plan of a given SQL_ID or for all plans of a SQL_ID. The thresholds are similar to Resource Manager thresholds, e.g. a CPU-time limit, an elapsed-time limit, and so on. Use the DBMS_SQLQ package to define the thresholds in a quarantine configuration for an execution plan of a SQL statement.

Please note, the quarantine feature does not kill the session itself; it just flushes out the plan and quarantines that SQL statement and its plan for as long as the quarantine configuration remains enabled.

Per the documentation this feature is only available in the Cloud and on Engineered Systems (EE-ES), not in on-prem Standard/Enterprise Edition; however, I could still create and enable quarantine configurations and create plans on a regular installation.




In order to create a SQL Quarantine let's first configure it.

# For SQL_ID and one of its execution plan

DECLARE
quarantine_config VARCHAR2(30);
BEGIN
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id',
PLAN_HASH_VALUE => '&PLAN_HASH_VALUE');
END;
/

# For SQL_ID and all of its executions plans

DECLARE
quarantine_config VARCHAR2(30);
BEGIN
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '&sql_id');
END;
/

# For SQL_TEXT Only

DECLARE
quarantine_config VARCHAR2(30);
BEGIN
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(SQL_TEXT => TO_CLOB('select count(*) from emp'));
END;
/

Secondly, add thresholds to it. Note that the values are in seconds; even for CPU it is CPU_TIME, not a percentage.

BEGIN
DBMS_SQLQ.ALTER_QUARANTINE(
QUARANTINE_NAME => '&SQL_QUARANTINE_ID',
PARAMETER_NAME  => 'CPU_TIME',
PARAMETER_VALUE => '20');

DBMS_SQLQ.ALTER_QUARANTINE(
QUARANTINE_NAME => '&SQL_QUARANTINE_ID',
PARAMETER_NAME  => 'ELAPSED_TIME',
PARAMETER_VALUE => '10');
END;
/

And finally, enable it

BEGIN
DBMS_SQLQ.ALTER_QUARANTINE(
QUARANTINE_NAME => '&SQL_QUARANTINE_ID',
PARAMETER_NAME  => 'ENABLED',
PARAMETER_VALUE => 'YES');
END;
/



Using the DBMS_SQLQ package subprograms we can also enable or disable a quarantine configuration, delete a quarantine configuration and if required also transfer quarantine configurations from one database to another.


SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_ca0z7uh2sqcbw',
       PARAMETER_NAME  => 'ENABLED',
       PARAMETER_VALUE => 'NO');
END;
/


PL/SQL procedure successfully completed.



To view which plans got quarantined, V$SQL has new columns, as shown below:

select sql_text, plan_hash_value, avoided_executions, sql_quarantine
from v$sql
where sql_quarantine is not null;

select sql_text, name, plan_hash_value, last_executed, enabled
from dba_sql_quarantine;


SQL> select sql_quarantine,avoided_executions
  2  from v$sql where sql_id='491fa2p6qt9h6';

SQL_QUARANTINE
--------------------------------------------------------------------------------
AVOIDED_EXECUTIONS
------------------
SQL_QUARANTINE_ca0z7uh2sqcbw
1




############################################
Data Guard new features 
############################################


Active Data Guard DML Redirection

You can run DML operations on Active Data Guard standby databases. This enables you to run read-mostly applications, which occasionally execute DML, on the standby database.

DML operations on a standby can be transparently redirected to and run on the primary database. This includes DML statements that are part of PL/SQL blocks. The Active Data Guard session waits until the corresponding changes are shipped to and applied to the Active Data Guard standby. Read consistency is maintained during the DML operation and the standby database on which the DML is run can view its uncommitted changes. However, all the other standby database instances can view these changes only after the transaction is committed.


To configure automatic redirection of DML operations for all standby sessions in an Active Data Guard environment:
ADG_REDIRECT_DML=TRUE 


To configure automatic redirection of DML operations for the current session, use the following command:

ALTER SESSION ENABLE ADG_REDIRECT_DML; 




Propagate Restore Points from Primary to Standby 

With Oracle Database 19c, restore points which are created on primary database are automatically replicated on standby.

The restore points on the standby are always normal restore points, regardless of whether the restore point on the primary is guaranteed or normal.

There are 3 prerequisites for this feature:
1. The COMPATIBLE parameter on both the primary and standby databases must be 19.0.0 or greater.
2. The primary database must be open (not mounted), because restore point replication occurs through the redo stream; the MRP process on the standby must be running for replication to happen.
3. There must not be an existing restore point with the same name on the standby database.
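
A minimal sketch, assuming a working primary/standby pair (the restore point name is illustrative; the replicated copy on the standby is created with a _PRIMARY suffix and shows REPLICATED = YES in V$RESTORE_POINT):

 -- On the primary
 SQL> CREATE RESTORE POINT before_app_release GUARANTEE FLASHBACK DATABASE;

 -- On the standby
 SQL> SELECT name, guarantee_flashback_database, replicated FROM v$restore_point;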



Flashback Standby when Primary is flashed back 

In previous releases, getting the standby to the same point in time as the primary required a manual procedure to flash back standby databases. A new parameter (_standby_auto_flashback) is introduced that enables the standby database to be flashed back automatically when Flashback Database is performed on the primary database.


Since Oracle 19c, a DBA can put the standby database in MOUNT mode with no managed recovery (or Read Only) and then flash back the primary database. When you restart the MRP after closing the physical standby, the recovery process automatically flashes back the standby database and continues to apply the new branch of redo.

Just restart media recovery on the standby and it will automatically flash back to the restore point.




Finer granularity Supplemental Logging 


Supplemental logging was designed and implemented for Logical Standby or full database replication requirements. This adds unnecessary overhead in environments where only a subset of tables is being replicated. Fine-grained supplemental logging provides a way for partial database replication users to disable supplemental logging for uninteresting tables, so that even when supplemental logging is enabled at the database or schema level, there is no supplemental logging overhead for uninteresting tables.

Use of this feature can significantly reduce the overhead in terms of resource usage and redo generation in case when only some of the tables in the database require supplemental logging, such as in a Golden Gate partial replication configuration.

If both source and mining database are at redo compatibility 19 or higher and the value of enable_goldengate_replication is TRUE, then Oracle Database 19c and higher provides an advantage of reduced supplemental logging overhead for Oracle GoldenGate.



How to enable

 SQL> alter system set enable_goldengate_replication=true;

 SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

 SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr from v$database; 



How to disable

 SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; 

 SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION; 

 SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr from v$database;





Oracle 19c-Tuning Automatic Outage Resolution With Data Guard



Data Guard maintains internal mechanisms that detect and correct issues with its redo transport and gap resolution processes. In case of network or disk I/O problems, these mechanisms prevent those processes from hanging and causing unnecessarily long gaps.

Oracle Data Guard has several processes on the Primary and Standby databases that handle redo transport and archiving which communicate with each other over the network.

In certain failure situations (network hangs, disconnects, and disk I/O issues), these processes can hang, potentially causing delays in redo transport and gap resolution. Data Guard has an internal mechanism to detect these hung processes and terminate them, allowing normal outage resolution to occur.

The following parameters allow the wait times to be tuned for a specific Data Guard configuration, based on the user's network and disk I/O behavior:


SQL> show parameter DATA_GUARD_MAX

NAME                        TYPE    VALUE
--------------------------- ------- -----
data_guard_max_io_time      integer 240
data_guard_max_longio_time  integer 240

Default: 240
Range of values: 10 to 7200

DATA_GUARD_MAX_IO_TIME

This parameter sets the maximum number of seconds that can elapse before a process is considered hung while performing a regular I/O operation in an Oracle Data Guard environment. Regular I/O operations include read, write, and status operations.

DATA_GUARD_MAX_LONG_IO_TIME

This parameter sets the maximum number of seconds that can elapse before a process is considered hung while performing a long I/O operation in an Oracle Data Guard environment. Long I/O operations include open and close operations.
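
A minimal tuning sketch (the values are illustrative; parameter names as shown by the SHOW PARAMETER output above):

 SQL> alter system set data_guard_max_io_time=60 scope=both;
 SQL> alter system set data_guard_max_longio_time=120 scope=both;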




Fast-start failover can now be configured in observe-only mode: the observer monitors the configuration and reports what it would do, but no actual failover is performed.
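
A minimal sketch of enabling observe-only mode from the broker command line:

 DGMGRL> ENABLE FAST_START FAILOVER OBSERVE ONLY;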


############################################
Patching
############################################

 
Zero-Downtime Oracle Grid Infrastructure Patching Using Fleet Patching and Provisioning    -- rhpctl  


Fleet Patching and Provisioning (FPP), formerly known as Rapid Home Provisioning, introduced the concept of creating gold image copies. FPP enhancements over the 12c out-of-the-box patching are "zero-downtime database upgrade" and "adaptive RAC rolling OJVM patching".

Since this is a very wide topic I won't explain it in detail here; look out for a separate article on it.

However, the Oracle document below explains FPP in detail, with exact commands.

https://docs.oracle.com/en/database/oracle/oracle-database/19/sprhp/deploying-and-managing-oracle-software-using-fleet-patching-and-provisioning.pdf




Change how Version / RU / RUR is seen

From 19c, Oracle has changed the way the RU / RUR releases are versioned and displayed. Refer to Doc ID 2118136.2.






############################################
Upgrade and Migrate 
############################################


Dry run for  grid upgrade

The Oracle Grid Infrastructure installation wizard (gridSetup.sh) enables you to perform a dry-run mode upgrade using the -dryRunForUpgrade flag to check your system's upgrade readiness.

In dry-run upgrade mode, the installation wizard performs all of the system readiness checks that it would perform in an actual upgrade and enables you to verify whether your system is ready for upgrade before you start the upgrade. This mode does not perform an actual upgrade. It helps anticipate potential problems with the system setup and avoid upgrade failures.

On the 1st node, run gridSetup.sh in dry-run mode from the new home. Note: this also copies the software to the 2nd node.

./gridSetup.sh -silent -dryRunForUpgrade \
-responseFile /oragridzj/app/19.3.0/grid/install/response/gridsetup.rsp




AutoUpgrade Tool  

AutoUpgrade enables you to upgrade one or many Oracle databases at the command line with a single command and a single configuration file.

AutoUpgrade will run the preupgrade tasks, perform automated fixups where needed, execute the database upgrade, and finish by taking care of post-upgrade tasks. It includes automatic retry and fallback, the possibility to schedule upgrades for future points in time, and the ability to set, change or remove initialization parameters as desired.

It saves time and money by upgrading hundreds of databases with one command, replacing bespoke, high-maintenance upgrade solutions.

Reference Doc ID: AutoUpgrade Tool (Doc ID 2485457.1)


Config File

 global.autoupg_log_dir=/home/oracle/DBADB122_UPG/DBADB122
 DBADB122.dbname=DBADB122
 DBADB122.start_time=NOW
 DBADB122.source_home=/u02/app/oracle/product/12c/dbhome_1
 DBADB122.target_home=/opt/oracle/product/19c/dbhome_1
 DBADB122.sid=DBADB122
 DBADB122.log_dir=/home/oracle/DBADB122_UPG/DBADB122
 DBADB122.upgrade_node=dinesh19c
 DBADB122.target_version=19.3
 DBADB122.run_utlrp=yes
 DBADB122.timezone_upg=yes


 Run PRECHECK

 $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config DBADB122_config.cfg -mode analyze


 Upgrade Database using DEPLOY


 $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config DBADB122_config.cfg -mode deploy





############################################
Other 
############################################


Oracle Network Log File Segmentation

The maximum size and number of text log files can be configured for Oracle Network components such as Oracle Net Listener, Oracle Connection Manager, and Global Service Manager. This feature allows better management of log files, particularly in Cloud environments.

Use the parameters below to define the number of log file segments and their size:

LOG_FILE_NUM_listener_name
To specify the number of log file segments. At any point of time there can be only “n” log file segments where “n” is LOG_FILE_NUM_listener_name.
If the log grows beyond this number, then the older segments are deleted.

LOG_FILE_SIZE_listener_name
To specify the size of each log file segment. The size is in MB.



Sample listener.ora
  
 LOG_FILE_NUM_DBADB_LISTENER=5
 LOG_FILE_SIZE_DBADB_LISTENER=1



Clear flashback logs periodically for FRA size certainty 

DBAs cannot manage the flashback logs in the fast recovery area directly, other than by setting the flashback retention target or using guaranteed restore points. Nevertheless, you can manage fast recovery area space as a whole to maximize the space available for retention of flashback logs. Previously, the only quick way to purge flashback logs was to turn Flashback off and on again. Starting with Oracle Database 19c, the database monitors flashback logs in the fast recovery area and automatically deletes any that are beyond the retention period (not waiting for an FRA space crunch). When the retention target is reduced, flashback logs beyond the retention period are deleted immediately.
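
A minimal sketch; reducing the retention target (in minutes, value illustrative) now purges out-of-retention flashback logs immediately:

 SQL> alter system set db_flashback_retention_target=1440 scope=both;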



Schema-only accounts  

This feature enables you to create a schema account without authentication. This is a great feature: in previous releases the schema account was a highly privileged account that could do "anything" with the database objects stored in the schema. To clarify, simply by using the schema account you had the power to "turn auditing off", which is a serious security issue.

CREATE USER SPECIAL_SCHEMA NO AUTHENTICATION
DEFAULT TABLESPACE TS_SPECIAL_SCHEMA
TEMPORARY TABLESPACE TEMP;


To connect to the schema-only account SPECIAL_SCHEMA you need to configure a proxy setup:
SQL> ALTER USER SPECIAL_SCHEMA GRANT CONNECT THROUGH test;

SQL> select * from proxy_users where PROXY='TEST';

Connecting directly (for example via easy connect) will throw an error, because the SPECIAL_SCHEMA account lacks the CREATE SESSION privilege.

To find the list of schema-only accounts in your database:
SQL> select * from dba_users where authentication_type='NONE';


Before starting your upgrade, determine whether you want to use password authentication for the default Oracle Database accounts whose passwords are in EXPIRED status and whose accounts are in LOCKED status.

During upgrades to Oracle Database 19c and later releases, default Oracle accounts that have not had their passwords reset before upgrade (and are set to EXPIRED status), and that are also set to LOCKED status, are set to NO AUTHENTICATION after the upgrade is complete.

Because of this new feature, default accounts that are changed to schema-only accounts become unavailable for password authentication. The benefit is that administrators no longer have to periodically rotate the passwords for these Oracle-supplied schemas. This feature also reduces the security risk of attackers using default passwords to hack into these accounts.
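
A minimal sketch of switching an existing account to schema-only and back (the user name is illustrative):

 SQL> ALTER USER app_owner NO AUTHENTICATION;
 SQL> ALTER USER app_owner IDENTIFIED BY "SomeStrongPassword1";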




ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE 

The ALTER SYSTEM clause FLUSH PASSWORDFILE_METADATA_CACHE refreshes the metadata cache with the latest details of the database password file. The latest details of the database password file can be retrieved by querying the V$PASSWORDFILE_INFO view.

 If the database password file name or location has been changed and the metadata cache needs to be refreshed with the details of the updated database password file, you can do it simply with the following command:

SQL> alter system flush passwordfile_metadata_cache;

System altered.




Read-only-Oracle-Home


In Oracle 18c, Oracle introduced a new feature called "read-only Oracle Home". By configuring the Oracle Home directory of your RDBMS installation as "read-only", Oracle places all the configuration files, like the spfile, outside the directory tree of your $ORACLE_HOME, basically moving them to "$ORACLE_BASE/dbs" and "$ORACLE_BASE/homes". And if the software is "read-only", we can place the $ORACLE_HOME directory in a separate filesystem which is mounted "read-only" during normal operation. This helps avoid accidental modification or removal of the software.


The tool for enabling a read-only-Oracle-HOME is “roohctl”.

oracle@dirac:~/ [rdbms19] echo $ORACLE_HOME
/u00/app/oracle/product/19.0.0.0
oracle@dirac:~/ [rdbms19] roohctl -help
Usage:  roohctl [<flag>] [<command>]
Following are the possible flags:
    -help
Following are the possible commands:
    -enable  Enable Read-only Oracle Home
        [-nodeList  List of nodes in a cluster environment]
(roohctl -disable is available too, but is not documented)

oracle@dirac:~/ [rdbms19] roohctl -enable


You can check with the orabasehome utility: when it still returns your $ORACLE_HOME directory, your home is not a read-only Oracle Home.

The configuration information is stored in $ORACLE_HOME/install/orabasetab
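
A quick check sketch (the output path is illustrative); with a read-only home enabled, orabasehome points under $ORACLE_BASE/homes rather than $ORACLE_HOME:

 oracle@dirac:~/ [rdbms19] $ORACLE_HOME/bin/orabasehome
 /u00/app/oracle/homes/OraDB19Home1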




Hybrid Partitioned Tables: 

Now large portions of a table can reside in external partitions, for example in the Oracle Cloud. With this new feature, you can also easily move non-active partitions to external files, such as Oracle Data Pump files, for a cheaper storage solution. Hybrid partitioned tables support all existing external table types for external partitions: ORACLE_DATAPUMP, ORACLE_LOADER, ORACLE_HDFS, ORACLE_HIVE.
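
A minimal DDL sketch, assuming a directory object SALES_DIR and a file sales_2018.csv already exist (all names are illustrative):

 CREATE TABLE sales_hybrid (
   sale_id   NUMBER,
   sale_date DATE,
   amount    NUMBER
 )
 EXTERNAL PARTITION ATTRIBUTES (
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY sales_dir
   ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
   REJECT LIMIT UNLIMITED
 )
 PARTITION BY RANGE (sale_date)
 (
   -- cold data served from an external CSV file
   PARTITION p_2018 VALUES LESS THAN (DATE '2019-01-01')
     EXTERNAL LOCATION ('sales_2018.csv'),
   -- hot data kept in a regular internal partition
   PARTITION p_2019 VALUES LESS THAN (DATE '2020-01-01')
 );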





 Using -applyRU and -applyOneOffs in Response File during silent Installation of software


Oracle 18c also introduced the concept of patching the installation media prior to installation or upgrades. The -applyRU flag allows you to specify the location of a Release Update (RU), which is applied to the new ORACLE_HOME before the installation takes place. This saves you from having to install then patch the software as separate steps. If we had a release update "psu" unzipped in the "/u01/software" directory, we might do something like this.

./runInstaller -ignorePrereq -waitforcompletion -silent \
    -applyRU /u01/software/psu \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
    oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
    UNIX_GROUP_NAME=oinstall \
    INVENTORY_LOCATION=${ORA_INVENTORY} \
    SELECTED_LANGUAGES=en,en_GB \
    ORACLE_HOME=${ORACLE_HOME} \
    ORACLE_BASE=${ORACLE_BASE} \
    oracle.install.db.InstallEdition=EE \
    oracle.install.db.OSDBA_GROUP=dba \
    oracle.install.db.OSBACKUPDBA_GROUP=dba \
    oracle.install.db.OSDGDBA_GROUP=dba \
    oracle.install.db.OSKMDBA_GROUP=dba \
    oracle.install.db.OSRACDBA_GROUP=dba \
    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
    DECLINE_SECURITY_UPDATES=true


The -applyOneOffs flag is similar but, as the name suggests, allows you to apply one-off patches. These can be combined with release updates if needed. We can also specify multiple one-off patches as a comma-separated list. For the release update "psu" with the one-off patches "oneoff" and "oneoff2" we might do the following.

./runInstaller -ignorePrereq -waitforcompletion -silent \
    -applyRU /u01/software/psu \
    -applyOneOffs /u01/software/oneoff,/u01/software/oneoff2 \
    -responseFile ${ORACLE_HOME}/install/response/db_install.rsp \
    oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
    UNIX_GROUP_NAME=oinstall \
    INVENTORY_LOCATION=${ORA_INVENTORY} \
    SELECTED_LANGUAGES=en,en_GB \
    ORACLE_HOME=${ORACLE_HOME} \
    ORACLE_BASE=${ORACLE_BASE} \
    oracle.install.db.InstallEdition=EE \
    oracle.install.db.OSDBA_GROUP=dba \
    oracle.install.db.OSBACKUPDBA_GROUP=dba \
    oracle.install.db.OSDGDBA_GROUP=dba \
    oracle.install.db.OSKMDBA_GROUP=dba \
    oracle.install.db.OSRACDBA_GROUP=dba \
    SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
    DECLINE_SECURITY_UPDATES=true



############################################
Datapump 
############################################

MAX_DATAPUMP_PARALLEL_PER_JOB

The MAX_DATAPUMP_PARALLEL_PER_JOB initialization parameter is introduced to control the number of parallel workers that can be used for an individual Data Pump job. It is an improvement for implementing resource usage policies in Data Pump.

In a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to N, even if PARALLEL > N is specified for a Data Pump job, only N worker processes will be started.
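
A minimal sketch, capping every Data Pump job in this database at four workers (the value is illustrative):

 SQL> alter system set max_datapump_parallel_per_job=4 scope=both;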



Oracle Data Pump Ability to Exclude ENCRYPTION Clause on Import - new transform parameter OMIT_ENCRYPTION_CLAUSE  


Use this when migrating to a database having TDE encrypted tablespaces; TDE tablespace encryption does not support encrypted columns (e.g., Oracle Cloud).

Example:

impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=OMIT_ENCRYPTION_CLAUSE:Y

Details:
- Default: N (column encryption clauses in CREATE TABLE are kept); Y = suppressed
- Valid for TABLE object types





Oracle Data Pump Test Mode for Transportable Tablespaces (TTS) 


TTS_CLOSURE_CHECK=TEST_MODE indicates that the tablespaces are not required to be in Read-Only mode. This option is for testing purposes only to provide timing requirements of a transportable tablespace export operation. The resulting export dump file is not available for use by Data Pump Import.

Now you can more easily determine how long an export takes, and discover unforeseen issues not reported by the closure check.

expdp \
 directory=DATA_PUMP_DIR \
 dumpfile=users_tts_test.dmp \
 logfile=users_tts_test.log \
 transport_tablespaces=users \
 tts_closure_check=test_mode




Oracle Data Pump Prevents Inadvertent Use of Protected Roles - new ENABLE_SECURE_ROLES parameter is available 

Specifies whether to use authenticated roles for export and import.

- ENABLE_SECURE_ROLES=YES | NO
- Available for the expdp and impdp clients, and for the Oracle Data Pump PL/SQL API
- Default: NO (does not enable authenticated protected roles)
- Beginning with release 19c you must explicitly enable authenticated roles for an export or import job
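
A minimal usage sketch (directory, dump file, and schema names are illustrative):

 expdp system directory=DATA_PUMP_DIR dumpfile=hr.dmp schemas=hr ENABLE_SECURE_ROLES=YES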




Oracle Data Pump Loads Partitioned Table Data in One Operation - GROUP_PARTITION_TABLE_DATA, a new value for the Import DATA_OPTIONS command-line parameter



GROUP_PARTITION_TABLE_DATA tells Oracle Data Pump to import the table data in all partitions of a table as one operation. The default behavior is to import each table partition as a separate operation. Import chooses the default; for instance, when there is a possibility that a row could move to a different partition as part of loading the table during import, Oracle Data Pump groups the table data into one operation. Data Pump also groups all partitions of a table into one operation for tables that are created by the import.
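
A minimal usage sketch (directory, dump file, and schema names are illustrative):

 impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA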





Oracle Data Pump Allows Tablespaces to Stay Read-Only During TTS Import  



Allows read-only tablespaces during a Transportable Tablespaces import:

- TRANSPORTABLE=NEVER | ALWAYS | KEEP_READ_ONLY | NO_BITMAP_REBUILD
- Restores the pre-12.2 ability to have tablespace files mounted on two databases at once
- KEEP_READ_ONLY prevents the fix-up of timezone data and the rebuilding of bitmaps

Example:

impdp system DIRECTORY=dpump_dir DUMPFILE=dumpfile_name \
 TRANSPORT_DATAFILES=datafile_name \
 TRANSPORTABLE=KEEP_READ_ONLY










############################################
Deprecated / Desupported Features 
############################################


De-support of Leaf Nodes in Flex Cluster Architecture

Leaf nodes are no longer supported in the Oracle Flex Cluster architecture in Oracle Grid Infrastructure 19c.
In Oracle Grid Infrastructure 19c (19.1) and later releases, all nodes in an Oracle Flex Cluster function as hub nodes. The capabilities offered by leaf nodes in the original implementation of the Oracle Flex Cluster architecture can as easily be served by hub nodes.



Desupport of Oracle Real Application Clusters for Standard Edition 2 (SE2) Database Edition

Starting with Oracle Database 19c, Oracle Real Application Clusters (Oracle RAC) is not supported in Oracle Database Standard Edition 2 (SE2). Upgrading Oracle Database Standard Edition databases that use Oracle RAC functionality from earlier releases to Oracle Database 19c is not possible.




############################################
Deprecated Parameters 
############################################




CLUSTER_DATABASE_INSTANCES  

The CLUSTER_DATABASE_INSTANCES parameter specifies the number of configured Oracle Real Application Clusters (Oracle RAC) instances. Starting with Oracle Database 19c and later releases, the number of configurable Oracle RAC instances is derived automatically from the Oracle Clusterware resource definitions. There is no replacement for this parameter, because there is no longer a reason to have it.




SERVICE_NAMES 

Starting with Oracle Database 19c, customer use of the SERVICE_NAMES parameter is deprecated. It can be desupported in a future release.
The use of the SERVICE_NAMES parameter is no longer actively supported. It must not be used for high availability (HA) deployments, and it is not supported to use the SERVICE_NAMES parameter for any HA operations. This restriction includes FAN, load balancing, FAILOVER_TYPE, FAILOVER_RESTORE, SESSION_STATE_CONSISTENCY, and any other uses.
To manage your services, Oracle recommends that you use the SRVCTL or GDSCTL command-line utilities, or the DBMS_SERVICE package.




############################################
Reference 
############################################

https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/new-features.html#GUID-5490FE65-562B-49DC-9246-661592C630F9

https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/database-new-features-guide.pdf


https://docs.oracle.com/en/database/oracle/oracle-database/19/rilin/real-application-clusters-installation-guide-linux-and-unix.pdf

