RMAN -- Cookbook
########################################################################
########################################################################
Enable & Disable Block Change Tracking
With BCT active, RMAN keeps track of only the changed blocks:
SQL> alter database enable block change tracking using file '/rman_bkups/change.log';
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+ASM_QGISC_DATA_DG1';
Disabling block change tracking:
SQL> alter database disable block change tracking;
SQL> select filename, status, bytes
  2  from v$block_change_tracking;

STATUS     FILENAME                      BYTES
---------- ----------------------------- -----------
ENABLED    /dba/backup/01_mf_yzmrr7.chg   10,000,000
Block change tracking records the blocks modified since the last backup and stores this log in the block change tracking file. During backups, RMAN uses this file to identify the specific (changed) blocks that must be backed up.
Benefits: BCT improves RMAN backup performance because RMAN is not required to scan whole datafiles to detect changed blocks. It reads only the changed blocks listed in the block change tracking file and backs up those blocks.
Whenever data blocks change, the Change
Tracking Writer (CTWR) background process tracks the changed
blocks in a private area of memory.
When a commit is issued against the data block, the block change tracking information is copied to a shared area in the Large Pool called the CTWR buffer.
During the checkpoint, the CTWR process
writes the information from the CTWR RAM buffer to the block
change-tracking file.
To see the status of BCT:
SQL>select status from
v$block_change_tracking;
To Enable BCT
SQL>alter database enable block change
tracking;
OR
SQL> alter database enable block change tracking USING FILE '/u01/app/oracle/bct/rman_bct.f' reuse;
The "USING FILE os_file_name" syntax lets you define the location of the change tracking file on the OS; alternatively, you can omit this clause if OMF is enabled.
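As a sketch of the OMF variant (the destination directory below is illustrative):

```sql
-- With OMF enabled, the tracking file is created under DB_CREATE_FILE_DEST
SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=both;
SQL> alter database enable block change tracking;
```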
To disable BCT
SQL>alter database disable block change tracking;
To Monitor BCT
SQL>select filename,status, bytes from
v$block_change_tracking;
To view the size of the CTWR area in the SGA
SQL>select * from v$sgastat where name
like 'CTWR%';
########################################################################
Parallel backup of same datafile – 11g
########################################################################
backup as COMPRESSED BACKUPSET section size 500M tag "tdybkp" database;
########################################################################
Checking datafile recovery status (fuzzy datafiles)
########################################################################
select CONTROLFILE_TYPE from v$database ;
select FUZZY,count(*) from
v$datafile_header group by fuzzy ;
set lines 187
set numwidth 20
set pages 999
alter session set
nls_date_format='DD-MON-YYYY:HH24:MI:SS';
select * from v$recover_file;
select min(fhrba_seq) min_seq, max(fhrba_seq) max_seq_required from x$kcvfh;
select distinct fuzzy from v$datafile_header;
alter session set
nls_date_format='YYYY/MON/DD hh24:mi:ss';
select
checkpoint_time,fuzzy,count(*),status
from ( select checkpoint_time,fuzzy,status
from v$datafile_header
union all
select controlfile_time,'CTL',null from v$database)
group by checkpoint_time,fuzzy,status;
########################################################################
Read-only tablespace restore
########################################################################
1) Take at least one backup of the tablespace after it has been made read-only; thereafter we can use the SKIP READONLY option to exclude these tablespaces from the daily or weekly database backups.
2) While doing a restore we need to use the CHECK READONLY keywords; otherwise, by default, the read-only tablespaces will not be restored, and recovery will also bypass these tablespaces. Subsequent attempts to open the database will fail.
RESTORE DATABASE CHECK READONLY;
and
RECOVER DATABASE CHECK READONLY;
3) The BACKUP OPTIMIZATION feature of
Oracle RMAN identifies files not changed since the last backup as files for
which the backup can be "optimized" -- i.e. *skipped*
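A sketch combining the two points above (the tag name is illustrative; these are standard RMAN commands):

```sql
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> BACKUP DATABASE SKIP READONLY TAG 'daily_bkp';
-- BACKUP ... FORCE overrides optimization and backs the files up anyway
RMAN> BACKUP DATABASE FORCE;
```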
########################################################################
RMAN New Features
########################################################################
RMAN 9i new features:
1) CONFIGURE option for persistent configuration
2) block media recovery
3) "PLUS ARCHIVELOG" to take backup of archivelogs
4) "NOT BACKED UP [SINCE TIME ...]" -- to back up only files that have not already been backed up
5) "recover database allow 2 corruption" -- to allow a certain number of corruptions during recovery
6) trial recovery
7) REPORT OBSOLETE / RECOVERY WINDOW OF 7 DAYS -- report obsolete and delete obsolete
8) crosscheck backup
9) show command
10) retention period / recovery window
11) controlfile autobackup
12) RMAN now supports multiple block sizes for tablespaces
13) SKIP TABLESPACE option
RMAN 10g new features:
1) SCN-level backup -- to back up changes after a certain SCN
2) fast incremental backup -- enabled by the block change tracking file; see V$BLOCK_CHANGE_TRACKING
alter database enable block change tracking using file 'C:/changetracking/chg01.dbf';
3) drop database including backups; -- dropping database through rman
4) automatic channel failover in case one channel fails; errors are recorded in V$RMAN_OUTPUT
5) BACKUP DURATION 2:00 TABLESPACE users; -- the BACKUP ... DURATION option replaces RATE and READRATE from previous RMAN versions
6) backup duration 2:00 partial tablespace users filesperset 1;
   BACKUP ... DURATION ... PARTIAL option
7) BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;
   BACKUP ... DURATION ... MINIMIZE LOAD option
8) backup duration 2:00 minimize time tablespace users;
   BACKUP ... DURATION ... MINIMIZE TIME option
9) BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
   CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
   -- new compression option
10) backup copy of database;
11) CATALOG command enhancements for backup pieces (e.g. CATALOG BACKUPPIECE)
12) configure an archived log deletion policy to ensure archived logs are deleted only after they have been applied on the standby
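For feature 12 above, the deletion policy can be set as follows (a sketch using standard RMAN syntax):

```sql
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
-- revert to the default policy with:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY CLEAR;
```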
RMAN 11g new features:
1) Data Recovery Advisor (LIST FAILURE, ADVISE FAILURE, REPAIR FAILURE PREVIEW, REPAIR FAILURE)
2) validate database; -- checks the database for physical corruption
3) multiple channels backing up a single datafile:
In 10g each datafile is backed up by only one channel. In Oracle Database 11g RMAN, multiple channels can back up one datafile in parallel by breaking the datafile into chunks known as "sections."
4) In Oracle Database 11g, RMAN bypasses backing up committed undo data that is not required for recovery.
5) Improved Block Media Recovery
Performance if flashback logs are present
6) improved compression options:
CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
RMAN now supports the ZLIB binary compression algorithm as part of the Oracle Advanced Compression option. The ZLIB algorithm is optimized for CPU efficiency, but produces larger backup files than the BZIP2 algorithm available previously, which is optimized for compression ratio.
7) Network-Enabled Database Duplication
Without Backups -- online clone .
Once the DUPLICATE command is initiated, RMAN automatically performs the following steps:
Copies the spfile to the destination server.
Starts the auxiliary instance with the spfile.
Copies the relevant database files and archived redo logs over the network to the destination server.
Recovers the database.
Opens the database with the RESETLOGS option.
8) virtual private catalog : -->
recovery catalog enhancement
9) IMPORT CATALOG --> to import a previous-version catalog into 11g
10) SECTION SIZE parameter to control backup set size -- different backup sets can use different section sizes
11) Improved Scripting with RMAN
Substitution Variables:
CONNECT TARGET /
BACKUP DATABASE TAG '&1';
BACKUP ARCHIVELOG ALL TAG '&2';
EXIT;
$ rman @'/tmp/backup.cmd' USING DB_20070108
ARCH_20070108
12) DUPLICATE database enhancements:
- active database duplication: files are copied over the network, so no pre-existing backups are needed
- backup-based duplication without a connection to the target database
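The active-duplication path above can be sketched as follows (the instance and connect names are illustrative):

```sql
-- connect to both the source (target) and the auxiliary instance
$ rman TARGET sys@prod AUXILIARY sys@dup

RMAN> DUPLICATE TARGET DATABASE TO dup
      FROM ACTIVE DATABASE
      SPFILE
      NOFILENAMECHECK;
```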
########################################################################
TSPITR – recovering a dropped tablespace (11gR2)
########################################################################
One of the good new features in 11g Release 2 is that it enables
us to recover from a case of a dropped tablespace. TSPITR (tablespace point in
time recovery) has been around in earlier releases, but we could not recover a
dropped tablespace.
What 11gR2 does is that it performs a fully automated RMAN-managed TSPITR. It creates and starts the auxiliary instance and restores just the datafiles it requires – SYSTEM, SYSAUX, UNDO and the files pertaining to the dropped tablespace – in this case datafiles 1, 2, 3 and 7 – in the location which we specify as the 'Auxiliary Destination'. It will first perform a recovery of the tablespace on the auxiliary instance and then use Data Pump and Transportable Tablespace technology to extract and import the tablespace metadata into the original source database.
TSPITR has a number of restrictions, a quick summary of which follows:
The database has to be open and the tablespace accessible for TSPITR.
1. You can't restore tablespaces with objects owned by SYS. Any tablespace with replicated master tables cannot be recovered with TSPITR.
2. Tablespaces with snapshot logs are not supported.
3. You can't restore tablespaces that contain rollback segments.
4. If an object within the tablespace to be recovered has one of the following types, then TSPITR is not supported:
   1. VARRAY
   2. Nested tables
   3. External files
5. The current physical structure of the undo and rollback segments in the target database must be unchanged between the point you wish to recover to and the current point.
6. Once you have completed TSPITR on a given tablespace, all previous backups of that tablespace are no longer usable for future TSPITR recoveries of that tablespace. So you should back up your tablespace after TSPITR in case you need to run another TSPITR.
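Per restriction 6, take a fresh backup of the recovered tablespace immediately after TSPITR (a sketch; the tablespace name is illustrative):

```sql
RMAN> BACKUP TABLESPACE test TAG 'post_tspitr';
```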
Automated Tablespace Point-in-Time Recovery:
Make sure that the objects in that tablespace are self-contained.
Let's say we have a tablespace called "TEST". It contains table "TEST_TABLE", but the indexes on that table reside in another tablespace called "TEST_IDX". In this case, if you want to recover the "TEST" tablespace to some point in time, then you must also recover the "TEST_IDX" tablespace to the same point in time (or you will lose the indexes).
To identify dependent tablespaces, run the following:
SQL> select
obj1_owner
,obj1_name
,obj1_type
,ts2_name
,reason
from sys.ts_pitr_check
where ( ts1_name in (‘TEST’)
and ts2_name not in (‘TEST’) )
or ( ts1_name not in (‘TEST’)
and ts2_name in (‘TEST’) );
Identify which objects will be dropped after recovering the tablespace. Run the following:
SQL> select *
from ts_pitr_objects_to_be_dropped
where tablespace_name='TEST';
Clean up a previously failed TSPITR:
SQL> exec dbms_backup_restore.manageauxinstance('instance_name', 1);
Check tablespace recoverability:
SQL> begin
  2    dbms_tts.transport_set_check('TBSPIT', TRUE, TRUE);
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;
The activities that happen upon kicking off the TSPITR RMAN command are as listed below:
* Create the auxiliary instance – pvqe – with db_create_file_dest set to the location we provide in the RMAN script.
* Reboot the target instance – RPROD.
* Run TRANSPORT_SET_CHECK on the recovery-set tablespaces (here TBSPIT) to check dependencies. (We manually check this before starting TSPITR lest it fail at this point.)
* Restore the controlfile and mount the clone database in the auxiliary location.
* Restore SYSTEM, SYSAUX and UNDO datafiles at the auxiliary destination.
* Restore the TBSPIT datafile at the target database's (original) destination (RPROD) itself.
* Switch SYSTEM, SYSAUX & UNDO datafiles to the clone copy.
* Bring the datafiles of these clone tablespaces online.
* Recover SYSTEM, SYSAUX, UNDO & TBSPIT clone tablespaces.
* Open the clone database with RESETLOGS.
* Make the TBSPIT tablespace read-only.
* Create a directory for Data Pump at the same auxdest location.
* Do a transportable tablespace (metadata) Data Pump export.
* Shut down the clone database.
* Drop tablespace TBSPIT including contents, keeping the datafiles (for transport import).
* Do the metadata Data Pump import.
* Make tablespace TBSPIT READ WRITE.
* Take tablespace TBSPIT offline.
* Clean up the clone instance – remove restored datafiles, online logs and controlfiles.
Using RMAN, back up the database and archivelogs:
rman target /
backup database plus archivelog;
8) Get the latest log sequence number:
SQL> select sequence#, thread# from v$log;

 SEQUENCE#    THREAD#
---------- ----------
       720          1
       718          1
       719          1
9) Create a small parameter file:
cat /u01/dba/temp/auto_tsptir_check/initEEeE.ora
db_name=TESTDB1
db_unique_name=EEeE_tspitr_TESTDB1
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=20
db_create_file_dest=/u01/dba/temp/auto_tsptir_check
log_archive_dest_1='location=/u01/dba/temp/auto_tsptir_check'
**Note: Automated TSPITR creates its own parameter file, but the SGA_TARGET in it is only 200M, which will cause the whole procedure to fail due to insufficient memory. That is the reason we created our own parameter file and will pass it in during the automated TSPITR.
RMAN> set auxiliary INSTANCE parameter file to '/u01/dba/temp/auto_tsptir_check/initEEeE.ora';

executing command: SET auxiliary parameter file
using target database control file instead of recovery catalog
10) Perform automated tablespace point-in-time recovery:
RMAN> RECOVER TABLESPACE ts_auto UNTIL LOGSEQ 720 THREAD 1 AUXILIARY DESTINATION '/u01/dba/temp/auto_tsptir_check';
RMAN> recover tablespace "TEST","TEST_IDX" until scn 2091595 auxiliary destination 'D:\oracle';
RMAN> recover tablespace "TEST" until time "to_date('23-SEP-2005 10:00:00','DD-MON-YYYY HH24:MI:SS')" auxiliary destination 'D:\Backup\temp';
**AUXILIARY DESTINATION is where RMAN stores all the database-related files for the auxiliary instance.
11) Change the status of the tablespace to online:
SQL> alter tablespace ts_auto online;
########################################################################
Incomplete recovery
########################################################################
Until a log sequence number:
We need to determine the log sequence we need to recover until:
select sequence#, first_change#, to_char(first_time,'HH24:MI:SS') from v$log order by 3;
RMAN> run {
2> set until sequence=14;    >>> add one to the last sequence we have to apply (UNTIL SEQUENCE is exclusive)
3> restore database;
4> recover database;
5> }
RMAN:
set until time '2007-12-10 11:29:00';   (note: this is based on the NLS_DATE_FORMAT)
set until sequence 10305 thread 1;      (note: recovery stops one sequence before the one specified)
set until scn 67543;
RMAN> RECOVER DATABASE UNTIL LOGSEQ=...
RMAN> recover database until sequence 162280;
RMAN> recover database until SCN 34527024;
RMAN> recover database until time '10-11-08 06:31:15';
SQL*Plus:
recover database until cancel;   (type CANCEL when you want to stop the recovery)
recover database until change 309121;
recover database until time '2007-12-10 11:29:00';
SCN:
Look at the current SCN number and note it:
SQL> SELECT GROUP#, SEQUENCE#, STATUS, ARCHIVED, FIRST_CHANGE#
  2  FROM V$LOG;

    GROUP#  SEQUENCE# STATUS           ARC FIRST_CHANGE#
---------- ---------- ---------------- --- -------------
         1          2 ACTIVE           YES        689996
         2          3 CURRENT          NO         689998
         3          1 ACTIVE           YES        678956
Time:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YY HH24:MI:SS';
SQL> select NAME, SEQUENCE#, THREAD#, FIRST_TIME, FIRST_CHANGE#, NEXT_TIME, NEXT_CHANGE# from v$archived_log where SEQUENCE# > 166;
########################################################################
Restore preview
########################################################################
restore datafile '/usr05/oradata/WMST/WMST/test01.dbf' preview;
restore database preview;
restore tablespace users preview;
# Preview
RESTORE DATABASE PREVIEW;
RESTORE TABLESPACE users PREVIEW;
# Preview Summary
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE TABLESPACE users PREVIEW SUMMARY;
########################################################################
11g compressed backupset enhancements
########################################################################
-- uses the new ZLIB algorithm instead of the old BZIP2 algorithm; ZLIB is about 40% faster
-- CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
CONFIGURE DEVICE TYPE DISK PARALLELISM 6
BACKUP TYPE TO COMPRESSED BACKUPSET;
RUN {
  BACKUP AS COMPRESSED BACKUPSET CHECK LOGICAL FULL DATABASE
    INCLUDE CURRENT CONTROLFILE FOR STANDBY
    PLUS ARCHIVELOG;
}
11g advanced compression:
Oracle 11g Release 2 introduced compression algorithm levels which can be used for compressing table data, Data Pump exports and RMAN backups as well.
The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved.
To use this option, we can run the
following RMAN commands
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP
TYPE TO COMPRESSED BACKUPSET;
followed by ..
RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';
Tests carried out on a small 1.5 GB database returned compression ratios of about 4.7 in the best case. However, it should be noted that use of LOW, MEDIUM and HIGH requires the Advanced Compression license. The backupset sizes and backup durations are shown below.
Compression level 'HIGH':   backupset size 226.18M, time 00:02:21
Compression level 'MEDIUM': backupset size 293.80M, time 00:00:30
Compression level 'LOW':    backupset size 352.59M, time 00:00:20
Compression level 'BASIC':  backupset size 276.55M, time 00:00:50
To summarise we can conclude:
LOW – corresponds to LZO (11gR2) – smallest
compression ratio, fastest
MEDIUM – corresponds to ZLIB (11gR1) – good
compression ratio, slower than LOW
HIGH – corresponds to unmodified BZIP2
(11gR2) – highest compression ratio, slowest
BASIC (which is free) – corresponds to
BZIP2 (10g style compression) – compression ratio in the range of MEDIUM, but
slower
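To check which algorithm is currently configured, or to revert to the default (these are standard RMAN commands):

```sql
RMAN> SHOW COMPRESSION ALGORITHM;
RMAN> CONFIGURE COMPRESSION ALGORITHM CLEAR;  -- reverts to BASIC, the default
```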