Wednesday, January 27, 2016

RMAN -- Cookbook









########################################################################
########################################################################


Enable & Disable Block Change Tracking
When BCT is active, RMAN keeps track of only the changed blocks:

SQL>alter database enable block change tracking using file '/rman_bkups/change.log';
SQL>ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+ASM_QGISC_DATA_DG1'

Disabling Block change tracking:

SQL>alter database disable block change tracking;


SQL> select filename, status, bytes
  2  from   v$block_change_tracking;

FILENAME                           STATUS            BYTES
----------------------------       --------    -----------
/dba/backup/01_mf_yzmrr7.chg       ENABLED      10,000,000




Block change tracking records the blocks modified since the last backup and stores this log in the
block change tracking file. During backups, RMAN uses this file to identify the specific
(changed) blocks that must be backed up.

Benefits: BCT improves RMAN backup performance because RMAN is no longer required to scan whole
datafiles to detect changed blocks.
It reads only the changed blocks identified through the block change tracking file and backs up those blocks.

Whenever data blocks change, the Change Tracking Writer (CTWR) background process tracks the changed
blocks in a private area of memory.

When a commit is issued against the data block, the block change tracking information is copied
to a shared area in Large Pool called the CTWR buffer.
During the checkpoint, the CTWR process writes the information from the CTWR RAM buffer to the block
change-tracking file.
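With BCT in place, a level 1 incremental backup reads only the tracked blocks. A minimal sketch (the commands and column names below are the standard 10g/11g ones; the query confirms the tracking file was actually used):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- baseline backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- reads only blocks flagged in the BCT file

-- USED_CHANGE_TRACKING = 'YES' and BLOCKS_READ much smaller than DATAFILE_BLOCKS
-- confirm that RMAN skipped the full datafile scan
SQL> select file#, used_change_tracking, blocks_read, datafile_blocks
     from   v$backup_datafile
     where  incremental_level > 0;
```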

To see the status of BCT
SQL>select status from v$block_change_tracking;

To Enable BCT
SQL>alter database enable block change tracking;

OR

SQL>alter database enable block change tracking USING FILE '/u01/app/oracle/bct/rman_bct.f' reuse;

The "USING FILE os_file_name" syntax allows you to define the location of the change tracking file on the OS, or you can omit this clause by enabling OMF.

TO disable BCT
SQL>alter database disable block change tracking;

To Monitor BCT
SQL>select filename,status, bytes from v$block_change_tracking;

To view the size of the CTWR

SQL>select * from v$sgastat where name like 'CTWR%';



########################################################################
Parallel backup of same datafile – 11g  ########################################################################

backup as COMPRESSED BACKUPSET section size 500M tag "tdybkp" database;
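Multisection backup sets are flagged in the control file; a quick way to verify that a datafile was actually split into 500M sections (assuming an 11g or later target, where V$BACKUP_SET carries the MULTI_SECTION column):

```sql
SQL> select recid, backup_type, multi_section
     from   v$backup_set
     where  multi_section = 'YES';
```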





########################################################################
########################################################################


select CONTROLFILE_TYPE from v$database ;
select FUZZY,count(*) from v$datafile_header group by fuzzy ;


set lines 187
set numwidth 20
set pages 999
alter session set nls_date_format='DD-MON-YYYY:HH24:MI:SS';
select * from v$recover_file;
select min(fhrba_Seq) min_seq, max(fhrba_Seq) max_seq_required from x$kcvfh;
select distinct fuzzy from v$datafile_header;



alter session set nls_date_format='YYYY/MON/DD hh24:mi:ss';

select checkpoint_time,fuzzy,count(*),status
from ( select checkpoint_time,fuzzy,status
       from v$datafile_header
       union all
       select controlfile_time,'CTL',null from v$database)
group by checkpoint_time,fuzzy,status;





########################################################################
read only tablespace restore  ########################################################################



1)  take at least one backup of the tablespace after it has been made read only; thereafter we can use the SKIP READONLY option to exclude these tablespaces from the daily or weekly database backups.
2)  while doing a restore we need to use the CHECK READONLY keywords, otherwise by default the read-only tablespaces will not be restored and hence the recovery will also bypass these tablespaces. Subsequent attempts to open the database will fail.

RESTORE DATABASE CHECK READONLY;
and
RECOVER DATABASE CHECK READONLY;


3) The BACKUP OPTIMIZATION feature of Oracle RMAN identifies files not changed since the last backup as files for which the backup can be "optimized" -- i.e. *skipped*
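Putting points 1 and 3 together, a hedged sketch of the backup side:

```sql
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> BACKUP DATABASE;                 -- captures the read-only tablespace at least once
RMAN> BACKUP DATABASE SKIP READONLY;   -- subsequent daily/weekly backups skip it
```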






########################################################################
Rman New features ########################################################################



rman 9i new features:
1) configure option for persistent configuration
2) block media recovery
3) "PLUS ARCHIVELOG" to take backup of archivelogs
4) "NOT BACKED UP [SINCE TIME….]" -- to back up only what has not yet been backed up
5) "recover database allow 2 corruption" -- to allow a certain number of corruptions during recovery
6) trial recovery
7) REPORT OBSOLETE RECOVERY WINDOW OF 7 DAYS  -- report obsolete and delete obsolete
8) crosscheck backup
9) show command
10) retention period / recovery window
11) controlfile autobackup
12) RMAN now supports multiple block sizes for tablespaces
13) skip of tablespace




10g rman new features:
1) scn level backup -- to back up changes after a certain scn
2) Fast incremental backup -- enabling the block change tracking file. V$BLOCK_CHANGE_TRACKING
   alter database enable block change tracking using file 'C:/changetracking/chg01.dbf';
3) drop database including backups;  -- dropping a database through rman
4) Automatic channel failover in case one channel fails; errors recorded in V$RMAN_OUTPUT
5) BACKUP DURATION 2:00 TABLESPACE users; -- "BACKUP ... DURATION" option;
   replaces RATE and READRATE from previous versions of RMAN.
6) backup duration 2:00 partial tablespace users filesperset 1;
   BACKUP ... DURATION ... PARTIAL option
7) BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;
   BACKUP ... DURATION ... MINIMIZE LOAD option
8) backup duration 2:00 minimize time tablespace users;
   BACKUP ... DURATION ... MINIMIZE TIME option
9) BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
   CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
   new compression option

10) backup copy of database;
11) catalog backupset
12) configure archivelog deletion policy to ensure archives have been applied on the standby




11g rman new features:
1) recovery advisor (list failure, advise failure, repair failure preview, repair failure)
2) validate database; -- checks the database for physical corruption
3) multiple channels backing up a single datafile:
   In 10g each datafile is backed up by only one channel. In Oracle Database 11g RMAN,
   multiple channels can back up one datafile in parallel by breaking the datafile
   into chunks known as "sections."
4) In Oracle Database 11g, RMAN bypasses backing up committed undo data that is
   not required for recovery.
5) Improved block media recovery performance if flashback logs are present
6) improved compression option:
   CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
   CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
   RMAN now supports the ZLIB binary compression algorithm as part of the Oracle Advanced Compression
   option. The ZLIB algorithm is optimized for CPU efficiency, but produces larger backup files
   than the BZIP2 algorithm available previously, which is optimized for compression.
7) Network-enabled database duplication without backups -- online clone.
   Once the DUPLICATE command is initiated, RMAN automatically performs the following steps:
   Copies the spfile to the destination server.
   Starts the auxiliary instance with the spfile.
   Copies the relevant database files and archived redo logs over the network to the destination server.
   Recovers the database.
   Opens the database with the RESETLOGS option.
8) virtual private catalog --> recovery catalog enhancement
9) import catalog --> to import a previous-version catalog into 11g
10) SECTION SIZE parameter to control backup piece size -- the same datafile split into sections of the given size
11) Improved scripting with RMAN substitution variables:
    CONNECT TARGET /
    BACKUP DATABASE TAG '&1';
    BACKUP ARCHIVELOG ALL TAG '&2';
    EXIT;
    $ rman @'/tmp/backup.cmd' USING DB_20070108 ARCH_20070108
12) duplicate database enhancements:
    - active database duplication -- no backup files need to be transferred
    - if backup files are transferred, no need to connect to the target database
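The Data Recovery Advisor from item 1 is typically driven as a four-step session:

```sql
RMAN> LIST FAILURE;              -- show detected failures
RMAN> ADVISE FAILURE;            -- get repair options
RMAN> REPAIR FAILURE PREVIEW;    -- show the repair script without running it
RMAN> REPAIR FAILURE;            -- execute the repair
```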






########################################################################
########################################################################

One of the good new features in 11g Release 2 is that it enables us to recover from a case of a dropped tablespace. TSPITR (tablespace point-in-time recovery) has been around in earlier releases, but we could not recover a dropped tablespace.
What 11gR2 does is perform a fully automated, RMAN-managed TSPITR. It creates and starts the auxiliary instance and restores just the datafiles it requires – SYSTEM, SYSAUX, UNDO and the files pertaining to the dropped tablespace – in this case datafiles 1, 2, 3 and 7 – in the location we specify as the 'Auxiliary Destination'. It first performs a recovery of the tablespace on the auxiliary instance and then uses Data Pump and Transportable Tablespace technology to extract and import the tablespace metadata into the original source database.


TSPITR has a number of restrictions, a quick summary of which follows:

The database has to be open and the tablespace accessible for TSPITR.

1.  You can’t restore tablespaces with objects owned by SYS. Any tablespace with replicated master tables cannot be recovered with TSPITR.
2. Tablespaces with snapshot logs are not supported.
3. You can’t restore tablespaces that contain rollback segments.
4. If an object within the tablespace to be recovered has one of the following types, then TSPITR is not supported:
    1. VARRAY 
    2. Nested tables  
    3. External files
5. The current physical structure of the undo and rollback segments in the target database 
must be unchanged between the point you wish to recover to and the current point.
6. Once you have completed TSPITR on a given tablespace, all previous backups of that tablespace are no longer usable for future TSPITR recoveries of that tablespace. So you should backup your tablespace after TSPITR in case you need to run another TSPITR.



Automated tablespace point-in-time recovery:
Make sure that objects in that tablespace are self-contained.
Let's say we have a tablespace called "TEST". It contains table "TEST_TABLE", but the indexes on that table reside in another tablespace called "TEST_IDX". In this case, if you want to recover the "TEST" tablespace to some point in time, then you must also recover the "TEST_IDX" tablespace to the same point in time (or you will lose the indexes).
To identify dependent tablespaces, run the following:
SQL> select obj1_owner
       ,obj1_name
       ,obj1_type
       ,ts2_name
       ,reason
from sys.ts_pitr_check
where ( ts1_name in ('TEST')
and ts2_name not in ('TEST') )
or ( ts1_name not in ('TEST')
and ts2_name in ('TEST') );

Identify what objects will be dropped after recovering the tablespace.
Run the following:
SQL> select *
from ts_pitr_objects_to_be_dropped
where tablespace_name='TEST';

Clean up a previously failed TSPITR
SQL> exec dbms_backup_restore.manageauxinstance ('instance_name',1);


Check Tablespace Recoverability
SQL> begin
2 dbms_tts.transport_set_check('TBSPIT',TRUE,TRUE);
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> select * from transport_set_violations;



The activities that happen upon kicking off the TSPITR RMAN command are listed below:

*Creates the auxiliary instance – pvqe – with db_create_file_dest set to the location we provide in the rman script.
*Reboots the target instance - RPROD
*Runs TRANSPORT_SET_CHECK on the recovery-set tablespaces (here TBSPIT) to check dependencies. (We manually check this before starting TSPITR lest it fail at this point.)
*Restores the controlfile and mounts the clone database in the auxiliary location.
*Restores SYSTEM, SYSAUX and UNDO datafiles to the auxiliary destination.
*Restores the TBSPIT datafile at the target database's (original) destination (RPROD) itself.
*Switches SYSTEM, SYSAUX & UNDO datafiles to the clone copies.
*Brings the datafiles of these clone tablespaces online.
*Recovers the SYSTEM, SYSAUX, UNDO & TBSPIT clone tablespaces.
*Opens the clone database with resetlogs.
*Makes the TBSPIT tablespace read-only.
*Creates a directory for Data Pump at the same auxdest location.
*Performs a transportable tablespace (metadata) Data Pump export.
*Shuts down the clone database.
*Drops tablespace TBSPIT including contents, keeping the datafiles (for the transport import).
*Performs the metadata Data Pump import.
*Makes tablespace TBSPIT READ WRITE.
*Takes tablespace TBSPIT offline.
*Cleans up the clone instance – removes restored datafiles, online logs and controlfiles.


Using RMAN, back up the database and archivelogs
rman target /
backup database plus archivelog;
9) Get the latest log sequence number
SQL> select sequence#,thread# from  v$log;
SEQUENCE#    THREAD#
---------- ----------
       720          1
       718          1
       719          1
10) Create a small parameter file
cat /u01/dba/temp/auto_tsptir_check/initEEeE.ora
db_name=TESTDB1
db_unique_name=EEeE_tspitr_TESTDB1
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=20
db_create_file_dest=/u01/dba/temp/auto_tsptir_check
log_archive_dest_1='location=/u01/dba/temp/auto_tsptir_check'
**Note: Automated TSPITR creates its own parameter file, but the SGA_TARGET in it is only 200M, which can cause the whole procedure to fail due to insufficient memory. That is why we created our own parameter file and pass it in during automated TSPITR.
RMAN> set auxiliary INSTANCE parameter file to "/u01/dba/temp/auto_tsptir_check/initEEeE.ora";
executing command: SET auxiliary parameter file
using target database control file instead of recovery catalog
11) Perform automated tablespace point-in-time recovery.
RMAN> RECOVER TABLESPACE ts_auto UNTIL LOGSEQ 720 THREAD 1 AUXILIARY DESTINATION '/u01/dba/temp/auto_tsptir_check';
RMAN> recover tablespace "TEST","TEST_IDX" until scn 2091595 auxiliary destination 'D:\oracle';
RMAN> recover tablespace "TEST" until time "to_date('23-SEP-2005 10:00:00','DD-MON-YYYY HH24:MI:SS')" auxiliary destination 'D:\Backup\temp';


**AUXILIARY DESTINATION is where rman stores all the database-related files for the auxiliary instance.
12) Change the status of the tablespace to online
alter tablespace ts_auto online;





########################################################################
Incomplete recovery ########################################################################


until a log sequence number:

We need to determine the log sequence we need to recover until :
select sequence#,first_change#, to_char(first_time,'HH24:MI:SS') from v$log order by 3 ;


RMAN> run {
2> set until sequence=14;  >>> specify one more than the last log sequence we want applied
3> restore database;
4> recover database;
5> }


Rman :
set until time '2007-12-10 11:29:00'; (note: this is based on the NLS_DATE_FORMAT)
set until sequence 10305 thread 1; (note: recovery stops one sequence before the one specified)
set until scn 67543;

RMAN> recover database until sequence 162280;
RMAN> recover database until SCN 34527024;
RMAN> recover database until time '10-11-08 06:31:15';


Sql plus :
recover database until cancel (type CANCEL at the prompt to stop the recovery)
recover database until change 309121
recover database until time '2007-12-10 11:29:00'



SCN:

LOOK AT THE CURRENT SCN NUMBER AND NOTE IT

SQL> SELECT GROUP# ,SEQUENCE# ,STATUS , ARCHIVED , FIRST_CHANGE#
  2  FROM V$LOG;

    GROUP#  SEQUENCE# STATUS           ARC FIRST_CHANGE#
---------- ---------- ---------------- --- -------------
         1          2 ACTIVE           YES        689996
         2          3 CURRENT          NO         689998
         3          1 ACTIVE           YES        678956



Time :

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YY HH24:MI:SS';
SQL> select NAME, SEQUENCE#, THREAD#, FIRST_TIME, FIRST_CHANGE#, NEXT_TIME, NEXT_CHANGE# from v$archived_log where SEQUENCE# > 166;






########################################################################
Restore preview   ########################################################################


restore datafile '/usr05/oradata/WMST/WMST/test01.dbf' preview ;
restore database preview;
restore tablespace users preview;

# Preview
RESTORE DATABASE PREVIEW;
RESTORE TABLESPACE users PREVIEW;

# Preview Summary
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE TABLESPACE users PREVIEW SUMMARY;






########################################################################
########################################################################


11g compressed backupset enhancement:
-- uses the new ZLIB algorithm instead of the old BZIP2 algorithm; ZLIB is about 40% faster.
-- CONFIGURE COMPRESSION ALGORITHM 'ZLIB';

CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO COMPRESSED BACKUPSET;



RUN {
      BACKUP AS COMPRESSED BACKUPSET CHECK LOGICAL FULL DATABASE
      PLUS ARCHIVELOG
      INCLUDE CURRENT CONTROLFILE FOR STANDBY ;
}




11g advance compression :

Oracle 11g Release 2 introduced compression levels which can be used for compressing table data, Data Pump exports and RMAN backups as well.

The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved.

To use this option, we can run the following RMAN commands

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

followed by ..

RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW';
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';

Tests carried out on a small 1.5 GB database returned compression ratios of about 4.7 at best case. However, it should be noted that use of LOW, MEDIUM and HIGH requires the Advanced Compression license. The backupset size and backup durations are shown below.

Level      Backupset size   Time
HIGH       226.18M          00:02:21
MEDIUM     293.80M          00:00:30
LOW        352.59M          00:00:20
BASIC      276.55M          00:00:50

To summarise we can conclude:
LOW – corresponds to LZO (11gR2) – smallest compression ratio, fastest
MEDIUM – corresponds to ZLIB (11gR1) – good compression ratio, slower than LOW
HIGH – corresponds to unmodified BZIP2 (11gR2) – highest compression ratio, slowest
BASIC (which is free) – corresponds to BZIP2 (10g style compression) – compression ratio in the range of MEDIUM, but slower
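To check what is currently configured, and the ratio actually achieved by past jobs (column names per the 11g dynamic views):

```sql
RMAN> SHOW COMPRESSION ALGORITHM;

-- Ratio achieved by recent backup jobs
SQL> select session_recid, input_bytes, output_bytes,
            round(input_bytes/output_bytes, 2) as ratio
     from   v$rman_backup_job_details
     order  by start_time desc;
```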


Tuesday, January 26, 2016

ACFS -- Creating New Oracle Acfs filesystem

 
                                                          

Considering we are moving to ACFS on ODA X5 using 12c, I have tried to consolidate some information on ACFS that might help on a daily basis.


Starting with Oracle Grid Infrastructure 12c (12.1), Oracle Cloud File System (ASM Cluster File System in a cluster environment) supports database files (database version 11.2.0.4 and up) in addition to general-purpose files, so entire Oracle databases can be stored inside Oracle Cloud FS and can leverage advanced data services such as snapshots, tagging and auditing, although replication and encryption are not supported in conjunction with datafiles.

The ability to perform snapshots of ACFS file system is a very powerful feature. An Oracle ACFS Snapshot is a read-write or read-only, space efficient, point-in-time copy of a file system which is immediately available for use after it is created and is always online while the file system is mounted.

The snapshot copy is initially sparse as it merely references the storage allocation information maintained by the source file system. A snapshot utilizes a Copy-On-Write (COW) technology and maintains point in time view of the file system. Whenever an extent is modified by the user, the current extent is copied to the snapshot before modifying the source file extent. When a recovery is required, only the changed blocks are replaced by their “before images” stored in the snapshot.

For a database having its files stored on Oracle Cloud file system, Oracle ACFS Snapshots may serve as point-in-time backups of the database which can be used for online recovery of database files.

Moreover, with Oracle Database 12c, Cloud FS supports Snaps-of-Snaps feature so that snapshots of existing snapshot of the same ACFS file system may be created. Any combination of read-only and read-write snapshots is supported i.e. a read-write snapshot can be based on an existing read-only snapshot, and a read-only snapshot can be based on an existing read-write snapshot. Each ACFS file system can support a total of 63 snapshots, including Snaps-of-Snaps.
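A hedged sketch of a snap-of-snap (the mount point and snapshot names below are placeholders):

```shell
# read-only base snapshot
acfsutil snap create -r base_snap /mnt/acfs
# 12c snap-of-snap: a read-write snapshot based on the read-only one
acfsutil snap create -w -p base_snap rw_child /mnt/acfs
# both appear under /mnt/acfs/.ACFS/snaps/
acfsutil snap info /mnt/acfs
```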

The funny bit is that /u02/app/oracle/oradata/datastore contains snapshots… This is visible in the data file location, where you find an .ACFS/snaps component. I believe the naming convention is to use .ACFS (invisible) followed by the snapshot name. You can view the snapshot details using acfsutil.

From ODA release 12.1.2 onwards 11.2.0.4 and 12.1.x databases will be created on ACFS by default. 


The ACFS/ADVM product consists of several processes in the ASM instance, three separate kernel device drivers and several command and utility programs. The three kernel drivers that make up the ACFS products are:

- Oracle ACFS – ASM Cluster File system
- Oracle ADVM – ASM Cluster Dynamic Volume Manager
- Oracle OKS – Oracle Kernel Services



Diagnostic information related to ACFS/ADVM is located in several areas including:

System configuration (e.g. OS version, platform type, etc.)
ASM alert log and process trace files
CSS trace files
Operating system log/event files
CRS resource status
ACFS/ADVM specific log files
Operating system crash dumps
ACFS Replication trace and log file






1. Create ASM disk group
2. Change the compatible parameter of disk groups to 11.2.x.x
3. Create ASM dynamic volume(s)
4. mkfs & mount the file system
5. Extend ACFS on other nodes in cluster

select name, state,total_mb, free_mb,usable_file_mb  from v$asm_diskgroup
select compatibility, database_compatibility from v$asm_diskgroup where name = 'DATADG';
alter diskgroup datadg set attribute 'compatible.asm' = '11.2.0.2';
alter diskgroup datadg set attribute 'compatible.rdbms' = '11.2.0.2';
select compatibility, database_compatibility from v$asm_diskgroup where name = 'DATADG';
alter diskgroup datadg add volume datavol1 size 500M;
select volume_name, volume_device from  v$asm_volume;

SQL> !ls -al /dev/asm/datavol*

ASMCMD [+] > volinfo -a

# mkfs -t acfs -b 8k /dev/asm/datavol1-316
# mkfs -t acfs -b 4k /dev/asm/datavol2-316

# mkdir -p /data /data2

# mount -t acfs /dev/asm/datavol1-316 /data
# mount -t acfs /dev/asm/datavol2-316 /data2






# df -h
# mount  | grep asm
# chown ora11gr2:dba /data /data2
ASMCMD [+] > volinfo -a





SQL> alter diskgroup datadg mount
ASMCMD [+] > volinfo -a
ASMCMD [+] > volenable -G DATADG DATAVOL1
ASMCMD [+] > volenable -G DATADG DATAVOL2
# mount -t acfs /dev/asm/datavol1-316 /data
# mount -t acfs /dev/asm/datavol2-316 /data2
# chown ora11gr2:dba /data /data2




# uname -n
# cd /data
# ls -al / > test.txt
# ls -al
# uname -n
# cd /data
# ls -al





############### to remove ACFS entry from registry ######################

Fire:-
/sbin/acfsutil info fs
**it should return :-
acfsutil info fs: ACFS-03036: no mounted ACFS file systems

Use the below command for removing entries from ACFS registry.

/sbin/acfsutil registry -d /dev/ofsctl

And check with the below command whether the entries were removed from the ACFS registry.

/sbin/acfsutil registry -l





Create a Cloud file system

1.     Create mountpoint
[root@host01 ~]# mkdir -p /mnt/acfs


2.     Modify the DATA diskgroup to ensure that all the new ADVM features in release 12.1 are enabled.
ASMCMD>setattr -G DATA compatible.advm 12.1.0.0.0


3.     Create a volume VOL1 in the DATA diskgroup having a size of 5G.
ASMCMD>volcreate -G DATA -s 5g --column 1 VOL1


4.     Examine the new volume and take note of the volume device associated with it.
ASMCMD> volinfo -G DATA VOL1
Diskgroup Name: DATA
        Volume Name: VOL1
        Volume Device: /dev/asm/vol1-190
        State: ENABLED
        Size (MB): 5120
        Resize Unit (MB): 8
        Redundancy: MIRROR
        Stripe Columns: 1
        Stripe Width (K): 8192
        Usage:
        Mountpath:


Before proceeding with mkfs, verify ADVM status:

crsctl stat res ora.DATAC1.COMMON.advm -t



5.     Make a cloud file system on the newly-created volume VOL1 using the volume device identified in step 4.

[root@host01 ~]# mkfs -t acfs /dev/asm/vol1-190
mkfs.acfs: version = 12.1.0.2.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/vol1-190
mkfs.acfs: volume size = 5368709120 ( 5.00 GB )
mkfs.acfs: Format complete.


6.     Mount the ACFS on the mount point created earlier.
[root@host01 ~]# mount -t acfs /dev/asm/vol1-190 /mnt/acfs


7.     Create a new cloud file system resource using the volume device identified above and the mount point created earlier.
[root@host01 ~]# srvctl add filesystem -m /mnt/acfs -d /dev/asm/vol1-190  -u oracle  -fstype ACFS  -autostart always 


8.     Start the new cloud file system.
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol1-190


9.     Confirm that the new file system is mounted.

[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-190

ACFS file system /mnt/acfs is mounted on nodes host01

[root@host01 ~]# mount | grep acfs

/dev/asm/vol1-190 on /mnt/acfs type acfs (rw)


10.  Modify the access privileges for the new cloud file system to enable access by any user.
# chmod 777 /mnt/acfs


11.  Modify the newly created cloud FS to enable full control by members of OS group dba which corresponds to SYSDBA privilege on database instance.
[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'" -unsupported


Troubleshooting  Acfs Issues : 

Main Components :

oracleacfs (oracleacfs.ko): manages all ACFS filesystem operations.
oracleadvm (oracleadvm.ko): ADVM module providing the dynamic volume device interface to the filesystem.
oracleoks (oracleoks.ko): provides memory management, lock and cluster synchronization.

--> To check Corruption 
/sbin/acfsutil info fs

/sbin/acfsutil info fs -o iscorrupt /acfs/backup 


--> To check modules loaded 
 /sbin/lsmod | grep oracle


-->  To manually load modules 
/bin/acfsload start


-->  Try Start / Stop  crs and reinstall  the ACFS driver
/u01/GRID/11.2.0.4/bin/crsctl stop crs
/u01/GRID/11.2.0.4/bin/acfsroot install
/u01/GRID/11.2.0.4/bin/crsctl start crs

--> Restart acfs filesytem 
/bin/mount -t acfs /dev/asm/vol_cloudfs-390 /cloudfs



Running fsck
Fsck attempts to validate all of a file system's metadata. Part of this process includes validation of metadata block free lists. The free block itself has to be checked to ensure that it is the correct type of metadata block, and the free list has to be checked to ensure that there are no missing entries and no loops in the list. Since files can be deleted in any order, these free blocks can be located anywhere on the volume. Processing and validating these lists can produce lots of random I/O due to the random nature of file deletion. If fsck takes too long, it is OK to interrupt it given it is running in check mode.


fsck checks and repairs an existing Oracle ACFS. This command can only be run on a dismounted file system. root privileges are required to run fsck, and the Oracle ACFS driver must be loaded for fsck to work.

IMPORTANT: By default, fsck only checks for and reports any errors. In check mode fsck can be cancelled if it is taking a long time. The -a flag must be specified to instruct fsck to repair errors in the file system. If check mode completed in a reasonable amount of time, and if it reported problems, run fsck in repair mode. In repair mode fsck cannot be interrupted without risk of leaving the file system in a worse state (loss of data depending on the nature of the corruption).


#  /usr/sbin/umountall -F acfs
/sbin/fsck -a -y -t acfs /dev/asm/[VOLUME_NAME]
# /usr/sbin/mount -v acfs /dev/asm/[VOLUME_NAME] [MOUNT_POINT]




Create a read-only snapshot of the cloud File System

[root@host01 .ACFS]# acfsutil snap create ro_dbsnap /mnt/acfs
acfsutil snap create: Snapshot operation is complete.


1.     Verify that the snapshot created above exists and is a read-only snapshot.
[root@host01 .ACFS]# acfsutil snap info /mnt/acfs
snapshot name: ro_dbsnap
snapshot location: /mnt/acfs/.ACFS/snaps/ro_dbsnap
RO snapshot or RW snapshot: RO
parent name: /mnt/acfs
snapshot creation time: Tue Jun 16 11:14:20 2015
number of snapshots: 1
snapshot space usage: 25313280 ( 24.14 MB )






Oracle ACFS snapshot

Oracle ACFS snapshot is an online, read-only or read-write, point in time copy of an Oracle ACFS file system.
The snapshot copy is space-efficient and uses Copy-On-Write functionality. Oracle ACFS snapshots are immediately available
for use after they are created. The snapshots are created in the .ACFS/snaps/ directory of the file system. They are always online
 while the file system is mounted. Oracle ACFS read-write snapshots enable fast creation of a snapshot image that can be both
 read and written without impacting the state of the Oracle ACFS file system

oracrs +ASM1 > /sbin/acfsutil info fs -h

oracrs +ASM1 > /sbin/acfsutil info fs /oracle/dbdump -s


CREATE ACFS SNAPSHOTS
TO create a snapshot in read-write or read-only mode
oracrs +ASM1> /sbin/acfsutil snap create SNAP_PREUPGRADE /oracle/dbadmin
acfsutil snap create: Snapshot operation is complete.



VIEW ACFS SNAPSHOTS
To see the snapshots for a particular ACFS mount point, the below command works from 11.2.0.3 onwards
$ /sbin/acfsutil snap info /oracle/dbdump


DELETE ACFS SNAPSHOTS
oracrs +ASM1 > /sbin/acfsutil snap delete SNAP_PREUPGRADE /oracle/dbadmin
acfsutil snap delete: Snapshot operation is complete.


To identify the file systems which are allocated through ACFS
 col FS_NAME format a15
 select * from V$ASM_FILESYSTEM;


To know about the mount point and volumes allocated in the ACFS
 col FS_NAME format a15
 col VOL_DEVICE format a15
 col VOL_LABEL format a15    
 select * from v$asm_acfsvolumes;


To understand more about the current volume statistics for ACFS volumes
col VOLUME_NAME format a15
select * from V$ASM_VOLUME_STAT;


col VOLUME_NAME format a15
col USAGE format a15
 col MOUNTPATH  format a15
 select VOLUME_NAME,SIZE_MB,STRIPE_COLUMNS,USAGE,MOUNTPATH from v$asm_volume;


 SELECT SUBSTR(fs_name,1,34) FILESYSTEM,SUBSTR(snap_name,1,28) SNAPSHOT, CREATE_TIME TIME FROM V$ASM_ACFSSNAPSHOTS;


Inside the ASMCMD tool, we can manage volumes through the below commands
volcreate
voldelete
voldisable
volenable
volinfo
volresize
volset
volstat


Other  commands :
[oracle@server1 ~]$ srvctl config database -d ron     ( To see  database acfs  information )
[grid@server1 ~]$ acfsutil snap info RON /u02/app/oracle/oradata/datastore/
[root@oracle-one ~]# acfsutil info fs
[root@oracle-one snaps]# acfsutil snap delete acfsvol0_bak /ginf/grid/acfssystem/acfsvol0


ASMCMD> volcreate -G data -s 10G volume1
ASMCMD> volinfo --all

Once the volume is created it needs to be formatted using ACFS. This can be done in this way:
[oracle@server5 ~]$  mkfs -t acfs /dev/asm/volume1-162

acfsutil registry -a /dev/asm/volume1-162 /u01/oradata


SELECT volume_name, volume_device FROM V$ASM_VOLUME  WHERE volume_name ='VOLUME1';
 select volume_name,size_mb,state,volume_device from v$asm_volume;

col vol_label for a20
 col fs_name for a40
select * from v$asm_acfsvolumes;








PDB duplication using the storage snapshot feature.

# Ensure that the source PDB – TOOLS – is in read-only mode before the clone.
This is required when creating a PDB from an existing PDB with the CREATE PLUGGABLE DATABASE ... FROM statement.


SQL> alter pluggable database tools close;

Pluggable database altered.

SQL> alter pluggable database tools open read only;

Pluggable database altered.



# Create the snapshot based clone (note the SNAPSHOT COPY syntax below):
The CREATE PLUGGABLE DATABASE DDL is shown below. The command requests a copy of TOOLS to be created in the same database, with the new name TOOLSCOPY. SNAPSHOT COPY specifies that the duplication should be done using storage snapshots. The storage snapshot function is provided by the underlying ACFS drivers.



create pluggable database TOOLSCOPY from TOOLS
 file_name_convert=('/u01/app/grid/acfsmounts/data_vol1/oradata/CDB2/TOOLS',
                    '/u01/app/grid/acfsmounts/data_vol1/oradata/CDB2/TOOLSCOPY') SNAPSHOT COPY
/


SQL> @create_snap.sql;
Pluggable database created.



# Open the new db in read-write mode at least once to integrate it with the CDB.

SQL> alter pluggable database toolscopy open read write;



Check the status of the cloned database to make sure it is integrated properly into the CDB.

SQL> select pdb_name,status from cdb_pdbs;







Reference : 

1) What diagnostic information to collect for ADVM/ACFS related issues (Doc ID 885363.1)

2) Create a file system with ACFS hang at mkfs -t acfs (Doc ID 2331497.1)

3) https://access.redhat.com/solutions/727333

4) http://www.asktheway.org/official-documents/oracle/E11882_01/server.112/e16102/asmfs_util001.htm

5) https://www.appservgrid.com/documentation111/docs/rdbms18c/ostmg/acfs-advanced-topics.html#GUID-67360198-D34F-4FBE-98A0-FAB1458FAABB