Tuesday, May 6, 2014

Exadata exam 1z0-027

                  

Exadata -- For Exadata Database Machine Admins -- Filtered Information



This is actually someone else's post; I have copied it since it covers exactly what is required for the exam.

This post is about information concerning Exadata administrators. You will find information gathered along the way to becoming certified on Exadata.
The information is provided item by item, and I think this post will be useful for Exadata admins even if they already have field experience.
The post is unstructured; that is, the information is not grouped under subtitles, but it will present the whole picture of Exadata administration once you have read all of it.
The information provided here is like snips from the body of Exadata knowledge.
In order to understand this document, you should already know the basic concepts of Exadata
(storage indexes, smart scan, etc.).

Let's start;

You can make indexes invisible to increase the chance of using storage indexes.
You can load your data sorted on the commonly filtered columns to maximize the benefit of storage indexes.
Bind variables can be used with storage indexes.
Oracle Database QoS Management can offer recommendations for CPU bottlenecks. QoS Management cannot provide recommendations for Global Cache resource bottlenecks, and it cannot resolve I/O resource bottlenecks either.
For media-based backups, it is recommended to allocate an equivalent number of channels and instances per tape drive. For this type of backup, the network cables between Exadata and the media servers should be connected through the Exadata database nodes.
Bonding on Exadata can be used both for reliability and for load balancing, but when I look at a production Exadata machine (an X3), I see that the bonding is configured for reliability:

cat /proc/net/bonding/bondeth0
Ethernet Channel Bonding Driver: v3.2.3 (December 6, 2007)
Bonding Mode: fault-tolerance (active-backup)
....
BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100" 


active-backup or 1 -> Sets an active-backup policy for fault tolerance.
Transmissions are received and sent out via the first available bonded slave interface.
Another bonded slave interface is only used if the active bonded slave interface fails.
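A quick way to confirm the mode in effect is to parse that bonding status file. A small runnable sketch (it works on a saved sample here, since /proc/net/bonding/bondeth0 exists only on the machine itself):

```shell
# Sketch: extract the bonding mode from a bonding status file.
# On a real compute node you would point this at /proc/net/bonding/bondeth0.
cat > /tmp/bondeth0.sample <<'EOF'
Ethernet Channel Bonding Driver: v3.2.3 (December 6, 2007)
Bonding Mode: fault-tolerance (active-backup)
EOF

awk -F': ' '/^Bonding Mode/ {print $2}' /tmp/bondeth0.sample
# prints: fault-tolerance (active-backup)
```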


To guarantee proper cooling for an Exadata machine, perforated floor tiles should be placed
at the front, because the air flow is from front to back.

Creating multiple grid disks on a single cell disk in Exadata gives us multiple storage pools with different performance characteristics, and these pools can be assigned to different databases.

Here is some general information about the disk layout in Exadata:
Physical disk -> LUN -> Cell disk (like a filesystem on the LUN) -> Grid disk (like a partition)
Lastly, grid disks are served to the ASM diskgroups.
Note that we can create flash-based ASM diskgroups, too. They may share space
with the Flash Cache on the flash disks.
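As a sketch, that layout is carved out with CellCLI commands like the following (prefixes and the 400G size are illustrative; run as celladmin on a storage cell):

```
CellCLI> CREATE CELLDISK ALL HARDDISK                 -- cell disks on top of the LUNs
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=DATA, SIZE=400G
CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=RECO    -- takes the remaining space
CellCLI> LIST GRIDDISK ATTRIBUTES name, cellDisk, size
```

The DATA and RECO grid disks are then visible to ASM as candidate disks for the corresponding diskgroups.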
Things to consider for Exadata migrations:
For the Transportable Database method -> the source database should be 10.2.0.5 and little endian.
For the Data Pump method -> 10.2.0.5 is good, but it is time consuming.
For the Data Guard physical standby method -> the source platform must be Linux, Solaris x86 or Windows, and the source must be on 11.2.
For the ASM rebalance method -> an 11.2 Linux x86-64 database that uses ASM with a 4 MB AU.
For the Transportable Tablespaces method -> a big endian source >= 10.1, or a little endian source >= 10.1 and < 11.2, is needed.
For the Logical standby method -> the source does not need to be on 11g. Logical standby is not supported for HP-UX to Linux migrations.

Also, here is some useful information about the methods:

Data Guard Physical Standby:
-------------------------------------------------
Source platform must be Linux, Solaris x86 or Windows (see Document 413484.1) and
source on 11.2
Source database in ARCHIVELOG mode
NOLOGGING operations to the database are not permitted, i.e. FORCE LOGGING is
 turned on at the database level

Transportable Database
-------------------------------------------------

Source system must be 10gR2 (10.2.0.5) and little endian
Service level requirements are stricter than the downtime required by Oracle Data Pump would allow

Transportable Tablespaces
-------------------------------------------------
Any source platform
EBS Release 12 with source database RDBMS release 10gR2 (10.2.0.5) or higher
EBS 11i with source database RDBMS release 11.2
Service levels cannot tolerate a significant outage.  Using transportable tablespaces instead
 of Oracle Data Pump export/import can significantly reduce the outage time for large (> 300 GB)
EBS databases.  Thorough testing will determine the precise outage time.
For a point of reference, tests on an Oracle Exadata Database Machine quarter rack with the
 Rapid Install Vision database (about 300 GB) took about 12 hours.  This time should remain
 about the same regardless of the amount of data in the database.  This is because the metadata
creation takes the longest time in the migration process and accounts for the bulk of time.

Oracle Data Pump
-------------------------------------------------
Any source platform
Source database is RDBMS release 10.2 or higher
To implement Oracle Exadata Database Machine best practices on the target
Service levels can tolerate a significant outage.
For a point of reference, tests on an Oracle Exadata Database Machine quarter rack with the
Rapid Install Vision database (about 300 GB) took about 24 hours (export - 7:42; import - 16:42)
 using Network Storage and no dump file copy (i.e. the export dump storage was mounted on the
source and the target).
Timings will vary depending on your system configuration and increase as the amount of data
increases.

In Exadata, Oracle Enterprise Manager agents must be deployed to the compute nodes, and the Oracle
Exadata plug-in is deployed with the agent. Plug-ins allow you to monitor the key
components of the Exadata machine. There are several plug-ins for Grid Control and Cloud
Control (such as Avocent MergePoint Unity switch, Cisco switch, Oracle ILOM, InfiniBand switch, PDU). Note that a trap forwarder is required to catch Cisco switch and KVM traps, due to
a port mismatch.
The agent communicates with the Storage Server and InfiniBand switch targets directly. The Oracle
Exadata plug-in also monitors the other Database Machine components. The Oracle Enterprise Manager
12c agent collects data and communicates with the remote Enterprise Manager repository.

In Exadata X3, we have 512 MB flash logs on the storage servers.
On the compute nodes, we have RAID 5 arrays;

Virtual Drives
Virtual drive : Target Id 0 ,VD name DBSYS
Size : 556.929 GB 
State : Optimal
RAID Level : 5

Exadata -> action plan to replace a flash disk when grid disks are created on it.
Ref: Replacing FlashCards or FDOMs when Griddisks are created on FlashDisks
 (Doc ID 1545103.1)

If the flash card needs to be replaced:
Drop the disks used in ASM for the flash disk.
Drop the flash cache / flash log and delete the cell disks of type FlashDisk.
Shut down the cell.
Replace the card.
Create the flash disks.
Now create your grid disks back on disk type FlashDisk.
Add the disks back to the diskgroup. (This is optional: if you used 'force' when dropping the disks from ASM, then Exadata auto management should automatically add these disks back into ASM.)
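The action plan above roughly translates into the following commands (a sketch based on my reading of the note; the disk and diskgroup names are illustrative, and the exact order and sizes depend on your layout):

```
-- In ASM (on a compute node), drop the grid disks of the failing card:
SQL> ALTER DISKGROUP FLASHDG DROP DISK FD_00_CEL01 FORCE;

-- On the affected cell:
CellCLI> DROP FLASHCACHE
CellCLI> DROP FLASHLOG
CellCLI> DROP CELLDISK ALL FLASHDISK

-- Shut the cell down, replace the card, then recreate everything:
CellCLI> CREATE CELLDISK ALL FLASHDISK
CellCLI> CREATE FLASHLOG ALL
CellCLI> CREATE FLASHCACHE ALL
CellCLI> CREATE GRIDDISK ALL FLASHDISK PREFIX=FLASH
```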

Note that we have different failure types in Exadata. For example, a disk which is in predictive failure state must be replaced immediately. ASM will drop this kind of disk automatically from the associated disk group and start a rebalance operation.
The same applies to a flash disk: if the flash disk is in predictive failure, then
if the flash disk is used for flash cache, flash cache will be disabled on this disk, reducing the effective flash cache size. If the flash disk is used for flash log, flash log will be disabled on this disk, reducing the effective flash log size. If the flash disk is used for grid disks, Oracle ASM rebalance will automatically restore the data redundancy.

If you want to change a memory DIMM or perform a similar activity, you just need to shut down the affected cell. The databases and the other cell servers will not be affected by this operation.
What you have to do is:
First ensure the asmDeactivationOutcome attribute of the grid disks allows deactivation.
Then inactivate all the grid disks on that cell and shut down the cell using shutdown -h now.
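In CellCLI, the safety check and deactivation look like this (a sketch; run on the cell being serviced):

```
CellCLI> LIST GRIDDISK ATTRIBUTES name, asmModeStatus, asmDeactivationOutcome
         -- proceed only if every grid disk reports asmDeactivationOutcome: Yes
CellCLI> ALTER GRIDDISK ALL INACTIVE
# then, as root on the cell: shutdown -h now
# after maintenance, bring the grid disks back:
CellCLI> ALTER GRIDDISK ALL ACTIVE
```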

The quarterly battery learn cycle is a normal maintenance activity; it is used for charging and calibrating the batteries. When this kind of activity is happening, you can end up with a performance degradation in terms of I/O, as this event may require the related device to be put into write-through mode. Keep that in mind and, if you can or need to, set the device back to write-back mode afterwards.

Configure and use Smart Flash Log if you need better LGWR performance: LGWR then writes
redo data to both flash and disk in parallel, and considers the write done as soon as whichever
of these completes first.
Smart Flash Log is a feature that came with the 11.2.2.2.4 cell software. It is not used for reading;
it is used like a circular buffer for redo writes. Smart Flash Log can enhance the performance
of an OLTP database. By default it occupies 512 MB per cell
(32 MB per flash disk * 16 flash disks) in the flash cards, so Smart Flash Log reduces
the size of the Flash Cache accordingly.
Note that Smart Flash Log can be enabled or disabled per database using IORM if needed.
Also, LGWR writes and control file I/O are automatically given high priority in IORM,
while DBWR I/O is managed automatically at normal priority.
The flash log, if needed, can be dropped using the DROP FLASHLOG command through the
CellCLI utility on the storage servers.
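For reference, the flash log can be inspected and managed from CellCLI like this (a sketch; the 512M size is the default mentioned above):

```
CellCLI> LIST FLASHLOG DETAIL              -- effectiveSize shows 512M by default
CellCLI> DROP FLASHLOG                     -- frees the space back for flash cache use
CellCLI> CREATE FLASHLOG ALL SIZE=512M     -- recreate with the default size
```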

Consider using CTAS (CREATE TABLE AS SELECT) for bulk data loading from external tables into Exadata; CTAS automatically uses direct path load. INSERT /*+ APPEND */ can also be used for this kind of data loading, as the APPEND hint makes Oracle use direct path loading for insert operations, too.

cellip.ora is the configuration file that tells the ASM instances which cells are available to them. It is located on every compute node (where ASM resides), and its contents look like the following:

cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.10.10"
cell="192.168.10.11"
cell="192.168.10.12" 

This file should also be used if you want to add an expansion rack to the storage grid.
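Adding expansion-rack cells then comes down to appending their addresses to cellip.ora on every compute node. A runnable sketch against a scratch copy (the real file is /etc/oracle/cell/network-config/cellip.ora, and the addresses here are illustrative):

```shell
# Sketch: append expansion-rack cells to a scratch copy of cellip.ora.
# Real file: /etc/oracle/cell/network-config/cellip.ora (edit on every compute node).
f=/tmp/cellip.ora
cat > "$f" <<'EOF'
cell="192.168.10.10"
cell="192.168.10.11"
cell="192.168.10.12"
EOF

# Illustrative addresses for the expansion rack's storage cells
for ip in 192.168.10.13 192.168.10.14; do
  printf 'cell="%s"\n' "$ip" >> "$f"
done

grep -c '^cell=' "$f"
# prints: 5
```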
In the default configuration, the DATA and RECO diskgroups are built on top of non-interleaved disks.
For detailed information about interleaving, please see my following post: http://ermanarslan.blogspot.com.tr/2013/12/exadata-zbr-and-interleaving.html
Also, in the default configuration, we don't have any free space on the flash disks;

CellCLI> list celldisk where name='FD_15_cel01' detail
name: FD_15_cel01
comment:
creationTime: 2012-01-10T10:13:06+00:00
deviceName: /dev/sdy
devicePartition: /dev/sdy
diskType: FlashDisk
errorCount: 0
freeSpace: 0 
id: 8ddbd2c8-8446-4735-8948-d8aea5744b35
interleaving: none
lun: 5_3
size: 22.875G
status: normal

Note that you can use InfiniBand to connect an Exadata to an Exalogic. You can also use InfiniBand to connect an Exadata to an Oracle ZFS Storage ZS3, as the ZS3 has native InfiniBand connectivity. Alternatively, a Sun ZFS Storage 7420 appliance can be connected to Exadata directly over InfiniBand.
In addition, you can connect any media servers which have InfiniBand cards to Exadata via InfiniBand, and then connect those media servers to tape libraries to maximize the tape backup throughput.
In terms of tape backups, the Oracle docs suggest a disk-to-disk-to-tape (D2D2T) strategy, which allows keeping old backups on tape while retaining new/fresh backups on disk to achieve fast recovery times. You can consider the following Oracle slide as a good Exadata tape backup scenario;


Uncommitted transactions and migrated rows can cause cell single block physical reads even if you
are doing a full table scan. Note that single block reads and multiblock reads may benefit from the
Flash Cache.
Also note that smart scan cannot be used against index-organized tables or clustered tables.

When you need to apply a bundle patch to the Oracle homes in Exadata, you can use the oplan utility.
oplan generates instructions for applying patches, as well as instructions for rollback, for all the nodes in the cluster. Note that oplan does not support Data Guard.
oplan is supported since release 11.2.0.2. It basically eases the patching process, because
without it you need to read the README files and extract your instructions yourself.
It is used as follows;
as Oracle software owner,(Grid or RDBMS) execute oplan;
$ORACLE_HOME/oplan/oplan generateApplySteps <bundle patch location>
It will create patch instructions for you in HTML and text formats:
$ORACLE_HOME/cfgtoollogs/oplan/<TimeStamp>/InstallInstructions.html
$ORACLE_HOME/cfgtoollogs/oplan/<TimeStamp>/InstallInstructions.txt
Then, choose the apply strategy according to your needs and follow the patching instructions to
 apply the patch to the target.
That's it.
If you want to roll back the patch, execute the following (replacing the bundle patch location):
$ORACLE_HOME/oplan/oplan generateRollbackSteps <bundle patch location>
Again, choose the rollback strategy according to your needs and follow the instructions
to roll back the patch from the target.

It is mandatory to know which components/tools run on which servers in Exadata.
Here is the list:

DCLI -> storage cells and compute nodes; executes CellCLI commands on multiple storage servers
ASM -> compute nodes; the ASM instance, basically
RDBMS -> compute nodes; the database software
MS -> storage cells; provides a Java interface to the CellCLI command line interface, as well as an interface for Enterprise Manager plug-ins
RS -> storage cells; a set of processes responsible for managing and restarting other processes
CellCLI -> storage cells; used to run storage commands
Cellsrv -> storage cells; receives and unpacks iDB messages transmitted over the InfiniBand interconnect and examines the metadata contained in the messages
Diskmon -> compute nodes; in Exadata, diskmon is responsible for: handling storage cell failures and I/O fencing; monitoring the Exadata server state on all storage cells in the cluster (heartbeat); broadcasting intra-database IORM (I/O Resource Manager) plans from databases to storage cells; monitoring the control messages from database and ASM instances to storage cells; and communicating with the other diskmons in the cluster.

The following strategy should be used for applying patches in Exadata:
Review the patch README file (know what you are doing)
Run the Exachk utility before the patch application (check the system, know its current state)
Automate the patch application process (automate it to be fast and to minimize problems)
Apply the patch
Run the Exachk utility again, after the patch application
Verify the patch (does it fix the problem or supply the related functionality?)
Check the performance of the system (is there any abnormal performance decrease?)
Test the failback procedure (as you may need to fail back in production, who knows)


Multiple grid disks can be created on a single cell disk, as you know. When creating multiple
grid disks on a single disk, you can end up with disks that have different performance characteristics. In order to have a more balanced disk layout, you can use the interleaving options, or the
Intelligent Data Placement technology, which is based on ASM.

The internal InfiniBand network used in Exadata transmits iDB messages between the compute nodes
and the storage servers, as well as RAC interconnect traffic between the compute nodes
in the cluster.

DBFS or an NFS filesystem can be used as a staging area for loading data through external tables.
If choosing NFS to load data, the NFS share should be mounted on the preferred compute node.
DBFS can be created on DBFS_DG, as well as on a standard filesystem.
It can enhance performance if you need to bulk load data into your production system
residing on Exadata: by using DBFS created on ASM, you get parallelization in the
storage layer, which enhances I/O performance.

The diagget.sh script can be used to gather diagnostic information, including software logs, trace files
and OS information.

The Exadata storage server has alerts defined on it by default. These alerts are based on predefined metrics.
We can also define new metric thresholds, and they will persist across cell node reboots.

If you have an 11.1.0.2 database (little endian) and want to migrate it to Exadata with minimum downtime,
you can upgrade it to 11.2 and use a Data Guard physical standby to minimize downtime, or alternatively
you can use GoldenGate. Of course you can use Data Pump as well, but with Data Pump the
downtime will be significant.

With the non-interleaved disk configuration in Exadata (which is the default), the first grid disks
created using the CREATE GRIDDISK command will have the best performance, because they are placed on the outer tracks of the disks.
So a diskgroup based on the first-created grid disks will have better performance than the
other diskgroups on the same Exadata machine.

If you want to do some administrative work on the Exadata storage servers, like dropping a cell disk,
you can use the celladmin OS account on the storage servers. You can
use the cellcli utility (on each cell) or the dcli utility (from one cell, to administer all of the cells) to execute your commands.
DCLI is a Python script. It is used to execute commands on the cells remotely without logging in to them.
(On first use, you distribute ssh keys with the following -> dcli -k -g mycells.)
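Typical dcli usage looks like this (a sketch; the group file name mycells is illustrative and simply lists the cell hostnames one per line):

```
# one-time ssh key distribution to all cells in the group file
dcli -k -g mycells

# run a CellCLI command on every cell in the group
dcli -g mycells -l celladmin "cellcli -e list cell attributes name, status"
```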

Note that we have a firewall (iptables) configured with Oracle-supplied rules on the cell servers.

In OLTP systems:
The Exadata write-back flash cache can enhance performance, as it also caches database block writes.
The flash log can also be useful for enhancing the performance of OLTP systems, as a fast response time for the log writer is crucial in OLTP.
For big OLTP systems, the flash cache and flash log can enhance performance, and the high-capacity disks in Exadata can meet the storage size needs.

Note that IDP, iDB and the IORM manager can only be seen in an Exadata environment.

IORM is used for managing the Storage IO resources.

Here is a diagram for the description of the architecture of IORM. (reference: Centroid)


So we can manage our I/O resources based on categories, databases and consumer groups.
There is a hierarchy, as you see in the picture; the hierarchy is used to distribute I/O.
IORM should be used if you have a lot of databases running on the Exadata machine.
IORM is a friend of consolidation projects, in my opinion.

If you have configured ASR Manager in your environment, note that the database nodes use SNMP to send the notifications to ASR Manager, and these notifications are forwarded via SNMP from ASR to Enterprise Manager. In addition, faults are transferred to Oracle securely using HTTPS.

The fault telemetry sent from ASR manager is in the below format;

Telemetry Data:

System-id: system serial number
Host-id: hostname of the system sending the message
Message-time: time the message was generated
Product-name: model name of the server


As known, Exachk is the utility for validating an Exadata environment. We check the system using this
utility from time to time (before patches, after patches, etc.). It can also be scheduled to run regularly:
to schedule Exachk, we can create a cron job or create a job in Enterprise Manager.
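A cron-based schedule might look like this (a sketch only: the exachk install path and the silent-mode flag vary between exachk versions, so check exachk -h on your system first):

```
# Illustrative root crontab entry: run exachk every Sunday at 02:00
0 2 * * 0 cd /opt/oracle.SupportTools/exachk && ./exachk -s > /tmp/exachk_cron.log 2>&1
```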

Compression on Exadata can only be done on the compute nodes. Decompression, on the other hand,
can be done on the compute nodes or on the storage cells; it is done on the storage cells if the associated operation is based on smart scan. The same rule applies to encryption and decryption, too.

Here is a general information about compression types:
BASIC compression, introduced in Oracle 8 already and only recommended for Data Warehouse
OLTP compression, introduced in Oracle 11 and recommended for OLTP Databases as well
QUERY LOW compression (Exadata only), recommended for Data Warehouse with Load Time as a critical factor  --> HCC
QUERY HIGH compression (Exadata only), recommended for Data Warehouse with focus on Space Saving --> HCC
ARCHIVE LOW compression (Exadata only), recommended for Archival Data with Load Time as a critical factor --> HCC
ARCHIVE HIGH compression (Exadata only), recommended for Archival Data with maximum Space Saving --> HCC

If your index creation takes a long time on Exadata, consider the following:
Cell single block physical reads can impact the performance of an index creation. Cell single block physical reads are like db file sequential reads on a traditional system.
Migrated and chained rows can cause cell single block read events on an Exadata machine. Also, uncommitted rows encountered during a query are handled according to read consistency; to supply that consistency, the database nodes may require additional blocks, which are sent by the cell servers to the database nodes. This activity can cause cell single block physical reads as well. If you have a lot of blocks in one of these conditions, your index creation can take a long time.

When migrating to Exadata, you should consider the database type (OLTP or warehouse), the size of the source database, the version of the source database, and the endian format of the source operating system. By analyzing these inputs you can choose an optimal migration method and strategy.

In Exadata Enterprise Manager monitoring, the communication flow from the ILOMs of the storage servers to Enterprise Manager goes through the storage servers' MS processes. The ILOM sends data to the MS process using SNMP, and the MS process sends the data to Enterprise Manager using SNMP; the data is triggered and transferred through SNMP traps.
Based on preset thresholds defined in the ILOM, we can monitor the motherboard, memory, power and network cards of the database nodes using Enterprise Manager, and we can see the faults and alerts produced for these hardware components from Enterprise Manager.

Oracle Auto Service Request (ASR) is a secure, scalable, customer-installable software feature of warranty and Oracle Support Services that provides automatic case generation when common hardware component faults occur. ASR Manager can be installed on an external Oracle Linux or Oracle Solaris server; you can also use one of the Exadata db nodes to install ASR Manager (not preferred).
ASR Manager communicates with Oracle using HTTPS.

In Exadata, some database work can be offloaded to the storage servers, as you know. Besides operations like full table scans, single row functions and simple comparison operators, some joins can be offloaded to storage. Column filtering, predicate filtering and virtual column filtering can be offloaded to the Exadata storage servers as well.
If you want to see the functions that can benefit from smart scan, you can query v$sqlfn_metadata (its OFFLOADABLE column marks the offloadable ones):
select name from v$sqlfn_metadata where offloadable = 'YES';
The output will be like;

>
<
>=
<=
=
!=
OPTTIS
OPTTUN
OPTTMI
OPTTAD
OPTTSU
OPTTMU
OPTTDI
OPTTNG
AVG
SUM
COUNT
MIN
MAX
OPTDESC
TO_NUMBER
TO_CHAR
NVL
CHARTOROWID
ROWIDTOCHAR
OPTTLK
OPTTNK
CONCAT
SUBSTR
LENGTH
INSTR
LOWER
UPPER
ASCII
CHR
SOUNDEX
ROUND
TRUNC
..
.....
.....

Operations like full table scans and index fast full scans executed in parallel always generate smart scans. For a full table scan you will see the cell smart table scan wait event, and for an index fast full scan you will see cell smart index scan. Index fast full scans can be executed through smart scan because Oracle just reads the index blocks as they exist on storage; a bulk read is performed, and that lets Oracle use smart scan even though it is an index operation. Smart scan is performed during direct path read operations; parallel queries use direct path reads, and that is why they make use of smart scans.
So, to gather it all together: in order to have smart scans, we need to execute queries in parallel, we need direct path reads into the process memory, and we need the cell.smart_scan_capable attribute set to TRUE for our ASM diskgroups.

We have Sun servers in Exadata; the compute nodes and storage servers are actually Sun servers, and they have ILOM cards on them. These ILOM cards can be used to administer the servers remotely. For example, ILOM can be used to power on a database server or open a remote console to a storage server.

Note that if you want to check the status of all ports on the InfiniBand switch, you can use Enterprise Manager or the ibqueryerrors.pl script located on the InfiniBand switch.

To properly shut down the Exadata, we first need to stop the database and grid services; we may use the crsctl stop cluster -all command for this. Then we need to shut down the Exadata storage servers, and then the database servers. Lastly, we power off the network switches and cut the power using the power switches on the PDUs. Why shut down the storage servers first? (This is still a mystery to me.)
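As a sketch, the sequence described above looks like this (the cell group file name is illustrative, and the ordering follows the text):

```
# stop databases and grid services (from one database server)
crsctl stop cluster -all

# shut down the storage servers (from a server with ssh access to them)
dcli -g cell_group -l root "shutdown -h now"

# then shut down each database server
shutdown -h now

# finally power off the InfiniBand/Ethernet switches and the PDUs
```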

After Exadata migrations, analyses are made in order to drop indexes. This is done to increase the chance of using smart scans for our queries, but care must be taken while dropping those indexes. For example, in an OLTP system we need fast response times for single block reads, so dropping an index may result in a negative performance impact for some of our OLTP queries. So it is better to drop an index only after analyzing the queries of the corresponding application. You can use invisible indexes to see the difference, and check execution plans to see whether Oracle prefers a smart scan over an index access for a query. Based on your analysis, make the decision to drop the unnecessary indexes.

As you know, Exadata contains compute nodes and storage nodes. For the storage nodes, you don't have the choice of using a different OS than Linux: the Exadata storage servers are Linux, and will continue to be Linux (Oracle Linux ships with the UEK kernel).
On the other hand, you can choose Solaris 11 rather than Linux for the compute nodes; the operating system is selectable at install time.

Note that in Exadata we have different networks: the management network, the public (client) network and the InfiniBand network.

Following is a picture representing those networks; (it is for X4 actually, but it is useful)



ssh, for example, works over the management network, as it is a utility for managing the corresponding servers. If you want to change the network interface that sshd listens on, you can use the sshd configuration file to do that.

The following inputs can be used for configuring the Exadata machine at install time:

Customer name
Application
Region
Timezone
Compute node OS
Database Machine prefix
Admin network -> start IP address for pool, pool size, ending IP address for pool, subnet mask, gateway
Admin network -> database server admin name, storage server admin name, ILOM name
Client network -> start IP address for pool, pool size, ending IP address for pool, subnet mask, gateway, adapter speed (1GbE/10GbE Base-T or 10GbE SFP+ optical)
InfiniBand network -> start IP address for pool, pool size, ending IP address for pool, subnet mask, compute node private name
Backup / Data Guard Ethernet network
OS configuration -> domain, DNS, NTP, Grid/ASM home OS user, ASM dba group, ASM home oper group, ASM home admin group, RDBMS home OS user, RDBMS dba group, oinstall group, RDBMS home oper group, base location for Grid and RDBMS --> you can set the user id and group id of all users and groups
Home and database -> inventory location, grid home, db home location, software install, db name, DATA/RECO diskgroup names and redundancy (you can't change their sizes), block size, type DW or OLTP
Cell alerting -> enable, email address
ASR configuration
OCM configuration
Grid Control agent

In order to actually have the Exadata Storage Servers send notifications via email (or alternately SNMP) each of the servers has to be configured with the appropriate settings. This is done using the ALTER CELL command in CELLCLI.

ALTER CELL smtpServer='mailserver', -
smtpFromAddr='exacel@blabla.com', -
smtpPwd='email_password', -
smtpToAddr='erm@blabla.com', -
notificationPolicy='critical,warning,clear', -
notificationMethod='mail'
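Once configured, the mail settings can be verified straight from CellCLI; this sends a test message to the configured recipient:

```
CellCLI> ALTER CELL VALIDATE MAIL
```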

Alerts may be stateful or stateless.
If an alert is based on a threshold, it gets cleared automatically when the metric no longer violates the threshold.
For example, a filesystem alert looks as follows:

CellCLI> list alerthistory detail
name: 4_1
alertMessage: "File system "/" is 82% full, which is above the 80% threshold.
Accelerated space reclamation has started.
This alert will be cleared when file system "/" becomes less than 75% full."

Alerts can also be fired at the Critical, Warning and Informational levels.

We have storage indexes located in the physical memory of the storage servers. Storage indexes are maintained automatically by cellsrv, and they are not persistent across reboots (as they are in memory). Cellsrv builds these indexes based on the filter columns of the offloaded queries.
In a storage index, Oracle keeps the range (minimum and maximum) of column values per storage region. Using storage indexes, Oracle can easily find where to look in the storage for a given column value, and skip the regions that cannot contain it.
A maximum of 8 columns per table are indexed per storage region, but different storage regions can have different columns indexed for the same table.

The storage servers are very sensitive environments in Exadata, so Oracle does not support a lot of activities on them. A few things are allowed: for example, we can change the root password of these servers, or set up ssh equivalence for the cellmonitor user (http://docs.oracle.com/cd/E11857_01/install.111/e12651/pisag.htm#CIHJGEHI).
But if, for example, we want to upgrade the storage server software, we need to use the patchmgr utility; patchmgr is the tool Exadata database administrators use to apply (or roll back) an update to the Oracle Exadata storage cells.
Another restriction: the Oracle Exadata Storage Server software and its operating system cannot be modified, and customers cannot install any additional software or agents on the Exadata storage servers.

Lastly, I will mention placing multiple Exadata machines in a system room / data center.
If you have multiple Exadata Database Machines, you need to place them side by side while ensuring the exhaust air of one rack does not enter the air inlet of another. If you have multiple clusters running on several Exadata machines, you can place the racks that are part of a common cluster together, side by side.

Thursday, April 24, 2014

Oracle Sql Profile to Fix Sql Plan issue


In a past post you can find how to check different plans for the same SQL ID. Now we can step forward and see
how to force a particular SQL plan hash value. Basically, until 10g this was done using SQL profiles and SQL outlines; from 11g onward it can be done using SQL baselines. I am yet to explore outlines, but I will post here the steps to force a SQL plan using a SQL profile.

There are multiple options to do so:

1) Manually create it using the dbms_sqltune.import_sql_profile PL/SQL procedure (easiest for me)
2) Manually create it using the outline hints
3) Manually create the SQL profile by adding hints to sqlprof_attr
4) Manually create it using the Oracle coe script (you can also get the script from SQLT)
5) Create the SQL profile in Grid Control




There are certain cases where the plan hash value is null:

-- when the sql is executed inside a procedure/plsql
-- when we fetch data over a dblink
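
As a quick, hedged sketch (the view and columns below are standard, but whether such statements report a zero hash varies by version and situation), these cursors can be spotted with:

-- Sketch: list cursors whose plan hash value was not computed (reported as 0),
-- e.g. remote-mapped SQL over a dblink; adjust the filter for your version.
select sql_id, plan_hash_value, executions, parsing_schema_name
from   v$sql
where  plan_hash_value = 0;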





Manually create using the dbms_sqltune.import_sql_profile PL/SQL API

If the good plan is no longer in memory, use dba_hist_sql_plan (AWR variant below).



declare
pln_sql_id varchar2(20) := 'sqlid';        -- SQL_ID owning the good plan
pln_plan_hash_value number := goodplan;    -- plan hash value of the good plan
orig_sql_id varchar2(20) := 'sqlid';       -- SQL_ID to attach the profile to
new_prof_name varchar2(30) := 'SP_sqlid';
ar_profile_hints sys.sqlprof_attr;
cl_sql_text clob;
begin
-- collect the outline hints of the good plan from memory
select extractvalue(value(d), '/hint') as outline_hints
bulk collect into ar_profile_hints
from xmltable('/*/outline_data/hint'
passing (select xmltype(other_xml) as xmlval from v$sql_plan
where sql_id = pln_sql_id and plan_hash_value = pln_plan_hash_value
and other_xml is not null)) d;
-- text of the statement the profile will be attached to
select sql_fulltext into cl_sql_text
from v$sqlarea where sql_id = orig_sql_id;
dbms_sqltune.import_sql_profile(
sql_text => cl_sql_text,
profile => ar_profile_hints,
name => new_prof_name,
force_match => true);
end;
/

  

declare
pln_sql_id varchar2(20) := 'sqlid';        -- SQL_ID owning the good plan
pln_plan_hash_value number := goodplan;    -- plan hash value of the good plan
orig_sql_id varchar2(20) := 'sqlid';       -- SQL_ID to attach the profile to
new_prof_name varchar2(30) := 'SP_sqlid';
ar_profile_hints sys.sqlprof_attr;
cl_sql_text clob;
begin
-- collect the outline hints of the good plan from AWR
select extractvalue(value(d), '/hint') as outline_hints
bulk collect into ar_profile_hints
from xmltable('/*/outline_data/hint'
passing (select xmltype(other_xml) as xmlval from dba_hist_sql_plan
where sql_id = pln_sql_id and plan_hash_value = pln_plan_hash_value
and other_xml is not null)) d;
select sql_text into cl_sql_text
from dba_hist_sqltext where sql_id = orig_sql_id;
dbms_sqltune.import_sql_profile(
sql_text => cl_sql_text,
profile => ar_profile_hints,
name => new_prof_name,
force_match => true);
end;
/
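
After either variant it is worth confirming the profile exists and is enabled. dba_sql_profiles is the standard dictionary view; the LIKE pattern below assumes the SP_ naming used above:

-- Check the newly created profile; status should be ENABLED
select name, status, force_matching, created
from   dba_sql_profiles
where  name like 'SP_%';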





Manually create using outline hint:



select * from table(dbms_xplan.display_cursor(sql_id => '7cm8cz0k1y0zc', cursor_child_no => 0, format => 'OUTLINE'));



Add the outline hints below, retrieved from the good sql above, into the bad sql:

Outline Data
------------- 

  /*+
     
 BEGIN_OUTLINE_DATA
      USE_HASH_AGGREGATION(@"SEL$639F1A6F")
      FULL(@"SEL$639F1A6F" "T_ORDER_OR"@"SEL$2")
      USE_NL(@"SEL$1" "INNER_VIEW"@"SEL$1")
      LEADING(@"SEL$1" "T_SUPPLIER_SU"@"SEL$1" "INNER_VIEW"@"SEL$1")
      NO_ACCESS(@"SEL$1" "INNER_VIEW"@"SEL$1")
      FULL(@"SEL$1" "T_SUPPLIER_SU"@"SEL$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      OUTLINE_LEAF(@"SEL$1")
      PUSH_PRED(@"SEL$1" "INNER_VIEW"@"SEL$1" 1)
      OUTLINE_LEAF(@"SEL$639F1A6F")
      ALL_ROWS
      DB_VERSION('11.2.0.3')
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */






The optimizer hints/context retrieved above are used together with the sqlprof_attr attribute.
  
DECLARE
     l_sql               clob;
     BEGIN
     l_sql := q'!select su_pk,su_name,su_comment,inner_view.maxamount from
                 t_supplier_su,
                 ( select max(or_totalamount) maxamount,su_fk from t_order_or group by su_fk ) inner_view
                 where t_supplier_su.su_pk = inner_view.su_fk(+) and t_supplier_su.su_name is not null!';
      dbms_sqltune.import_sql_profile( sql_text => l_sql, 
                                     name => 'SQLPROFILE_01',
                                     profile => sqlprof_attr
(q'!USE_HASH_AGGREGATION(@"SEL$639F1A6F")!',
             q'!FULL(@"SEL$639F1A6F" "T_ORDER_OR"@"SEL$2")!',
             q'!USE_NL(@"SEL$1" "INNER_VIEW"@"SEL$1")!',
             q'!LEADING(@"SEL$1" "T_SUPPLIER_SU"@"SEL$1" "INNER_VIEW"@"SEL$1")!',
             q'!NO_ACCESS(@"SEL$1" "INNER_VIEW"@"SEL$1")!',
             q'!FULL(@"SEL$1" "T_SUPPLIER_SU"@"SEL$1")!',
             q'!OUTLINE(@"SEL$1")!',
             q'!OUTLINE(@"SEL$2")!',
             q'!OUTLINE_LEAF(@"SEL$1")!',
             q'!PUSH_PRED(@"SEL$1" "INNER_VIEW"@"SEL$1" 1)!',
             q'!OUTLINE_LEAF(@"SEL$639F1A6F")!',
             q'!ALL_ROWS!',
             q'!DB_VERSION('11.2.0.3')!',
             q'!OPTIMIZER_FEATURES_ENABLE('11.2.0.3')!',
             q'!IGNORE_OPTIM_EMBEDDED_HINTS!'),
             force_match => true );
     end;
     /




Manually create a sql profile adding hints to sqlprof_attr


By querying v$sql we find the sql_id, child_number and plan_hash_value, and also check that there isn't already a sql_profile attached to the SQL.
SQL> select sql_id, child_number, sql_profile, plan_hash_value, sql_text from v$sql where sql_id IN ('63cg18v928540', '0tjtg6yqqbbxk');



Display the execution plan of the 2nd SQL with the outline option:
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor('0tjtg6yqqbbxk', 0, 'outline'));

Outline Data
------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "E"@"SEL$1")
END_OUTLINE_DATA
*/



From the generated outline, use the hint to add to the first SQL to change its execution plan, i.e. to a full table scan.
define SQL_ID = '63cg18v928540';

DECLARE
clsql_text CLOB;
BEGIN
SELECT sql_fulltext INTO clsql_text FROM V$sqlarea where sql_id = '&SQL_ID';
dbms_sqltune.import_sql_profile(sql_text => clsql_text,
profile=> sqlprof_attr('FULL(@SEL$1 E@SEL$1)'),
name=>'PROFILE_&SQL_ID',
force_match=>true);
end;
/




Manually create using the Oracle COE script (you can also get the script from SQLT)


Make a sql file using the script below, e.g. profile.sql. When you run profile.sql it will ask for 2 parameters,
and it will automatically create a sql profile to use the good plan. It is not my own; it is extracted from SQLT. You then need to run the coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql script generated by the COE script below.

sqlid :
plan hash value : --> pass the hash value of the good plan

Sometimes the COE profile script might raise ORA-06502, as documented in MOS note 2043600.1, which asks us to get the latest sql profile script from Document 215187.1. If we still face the issue with the new script, we need to check whether SQLT is already installed in the database; if yes, the same issue can be caused by a version mismatch between the installed SQLT and the COE sql profile script used, and we may have to re-install SQLT or use the COE sql profile script from the same binary that was used to install SQLT.
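
As a quick usage sketch (the SQL_ID and plan hash value are the ones from the script's own EXAMPLE section, not real values from this system):

-- Step 1: generate the profile script for a given SQL_ID and good plan hash value
SQL> START coe_xfr_sql_profile.sql gnjy0mn4y9pbm 2055843663;
-- Step 2: run the generated script to actually create the SQL Profile
SQL> START coe_xfr_sql_profile_gnjy0mn4y9pbm_2055843663.sql;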



Encouraging CBO to Pickup a Better Execution Plan Using the COE XFR SQL Profile Script (Doc ID 1955195.1)


SPO coe_xfr_sql_profile.log;
SET DEF ON TERM OFF ECHO ON FEED OFF VER OFF HEA ON LIN 2000 PAGES 100 LONG 8000000 LONGC 800000 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 NUMF "" SQLP SQL>;
SET SERVEROUT ON SIZE UNL;
REM
REM $Header: 215187.1 coe_xfr_sql_profile.sql 11.4.5.5 2013/03/01 carlos.sierra $
REM
REM Copyright (c) 2000-2013, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM   carlos.sierra@oracle.com
REM
REM SCRIPT
REM   coe_xfr_sql_profile.sql
REM
REM DESCRIPTION
REM   This script generates another that contains the commands to
REM   create a manual custom SQL Profile out of a known plan from
REM   memory or AWR. The manual custom profile can be implemented
REM   into the same SOURCE system where the plan was retrieved,
REM   or into another similar TARGET system that has same schema
REM   objects referenced by the SQL that generated the known plan.
REM
REM PRE-REQUISITES
REM   1. Oracle Tuning Pack license.
REM
REM PARAMETERS
REM   1. SQL_ID (required)
REM   2. Plan Hash Value for which a manual custom SQL Profile is
REM      needed (required). A list of known plans is presented.
REM      You may choose from list provided or enter a valid phv
REM      from a version of the SQL modified with Hints.
REM
REM EXECUTION
REM   1. Connect into SQL*Plus as user with access to data dictionary.
REM      Do not use SYS.
REM   2. Execute script coe_xfr_sql_profile.sql passing SQL_ID and
REM      plan hash value (parameters can be passed inline or until
REM      requested).
REM
REM EXAMPLE
REM   # sqlplus system
REM   SQL> START coe_xfr_sql_profile.sql [SQL_ID] [PLAN_HASH_VALUE];
REM   SQL> START coe_xfr_sql_profile.sql gnjy0mn4y9pbm 2055843663;
REM   SQL> START coe_xfr_sql_profile.sql gnjy0mn4y9pbm;
REM   SQL> START coe_xfr_sql_profile.sql;
REM
REM NOTES
REM   1. For possible errors see coe_xfr_sql_profile.log
REM   2. If SQLT is installed in SOURCE, you can use instead:
REM      sqlt/utl/sqltprofile.sql
REM   3. Be aware that using DBMS_SQLTUNE requires a license for
REM      Oracle Tuning Pack.
REM   4. Use a DBA user but not SYS.
REM   5. If you get "ORA-06532: Subscript outside of limit, ORA-06512: at line 1"
REM      Then you may consider this change (only in a test and disposable system):
REM      create or replace TYPE sys.sqlprof_attr AS VARRAY(5000) of VARCHAR2(500);
REM
SET TERM ON ECHO OFF;
PRO
PRO Parameter 1:
PRO SQL_ID (required)
PRO
DEF sql_id = '&1';
PRO
WITH
p AS (
SELECT plan_hash_value
  FROM gv$sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL
 UNION
SELECT plan_hash_value
  FROM dba_hist_sql_plan
 WHERE sql_id = TRIM('&&sql_id.')
   AND other_xml IS NOT NULL ),
m AS (
SELECT plan_hash_value,
       SUM(elapsed_time)/SUM(executions) avg_et_secs
  FROM gv$sql
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions > 0
 GROUP BY
       plan_hash_value ),
a AS (
SELECT plan_hash_value,
       SUM(elapsed_time_total)/SUM(executions_total) avg_et_secs
  FROM dba_hist_sqlstat
 WHERE sql_id = TRIM('&&sql_id.')
   AND executions_total > 0
 GROUP BY
       plan_hash_value )
SELECT p.plan_hash_value,
       ROUND(NVL(m.avg_et_secs, a.avg_et_secs)/1e6, 3) avg_et_secs
  FROM p, m, a
 WHERE p.plan_hash_value = m.plan_hash_value(+)
   AND p.plan_hash_value = a.plan_hash_value(+)
 ORDER BY
       avg_et_secs NULLS LAST;
PRO
PRO Parameter 2:
PRO PLAN_HASH_VALUE (required)
PRO
DEF plan_hash_value = '&2';
PRO
PRO Values passed to coe_xfr_sql_profile:
PRO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PRO SQL_ID         : "&&sql_id."
PRO PLAN_HASH_VALUE: "&&plan_hash_value."
PRO
SET TERM OFF ECHO ON;
WHENEVER SQLERROR EXIT SQL.SQLCODE;

-- trim parameters
COL sql_id NEW_V sql_id FOR A30;
COL plan_hash_value NEW_V plan_hash_value FOR A30;
SELECT TRIM('&&sql_id.') sql_id, TRIM('&&plan_hash_value.') plan_hash_value FROM DUAL;

VAR sql_text CLOB;
VAR sql_text2 CLOB;
VAR other_xml CLOB;
EXEC :sql_text := NULL;
EXEC :sql_text2 := NULL;
EXEC :other_xml := NULL;

-- get sql_text from memory
DECLARE
  l_sql_text VARCHAR2(32767);
BEGIN -- 10g see bug 5017909
  FOR i IN (SELECT DISTINCT piece, sql_text
              FROM gv$sqltext_with_newlines
             WHERE sql_id = TRIM('&&sql_id.')
             ORDER BY 1, 2)
  LOOP
    IF :sql_text IS NULL THEN
      DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
      DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
    END IF;
    -- removes NUL characters
    l_sql_text := REPLACE(i.sql_text, CHR(00), ' ');
    -- adds a NUL character at the end of each line
    DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text) + 1, l_sql_text||CHR(00));
  END LOOP;
  -- if found in memory then sql_text is not null
  IF :sql_text IS NOT NULL THEN
    DBMS_LOB.CLOSE(:sql_text);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting sql_text from memory: '||SQLERRM);
    :sql_text := NULL;
END;
/

SELECT :sql_text FROM DUAL;

-- get sql_text from awr
DECLARE
  l_sql_text VARCHAR2(32767);
  l_clob_size NUMBER;
  l_offset NUMBER;
BEGIN
  IF :sql_text IS NULL OR NVL(DBMS_LOB.GETLENGTH(:sql_text), 0) = 0 THEN
    SELECT sql_text
      INTO :sql_text2
      FROM dba_hist_sqltext
     WHERE sql_id = TRIM('&&sql_id.')
       AND sql_text IS NOT NULL
       AND ROWNUM = 1;
  END IF;
  -- if found in awr then sql_text2 is not null
  IF :sql_text2 IS NOT NULL THEN
    l_clob_size := NVL(DBMS_LOB.GETLENGTH(:sql_text2), 0);
    l_offset := 1;
    DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
    DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
    -- store in clob as 64 character pieces plus a NUL character at the end of each piece
    WHILE l_offset < l_clob_size
    LOOP
      IF l_clob_size - l_offset > 64 THEN
        l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, 64, l_offset), CHR(00), ' ');
      ELSE -- last piece
        l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, l_clob_size - l_offset + 1, l_offset), CHR(00), ' ');
      END IF;
      DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text) + 1, l_sql_text||CHR(00));
      l_offset := l_offset + 64;
    END LOOP;
    DBMS_LOB.CLOSE(:sql_text);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting sql_text from awr: '||SQLERRM);
    :sql_text := NULL;
END;
/

SELECT :sql_text2 FROM DUAL;
SELECT :sql_text FROM DUAL;

-- validate sql_text
SET TERM ON;
BEGIN
  IF :sql_text IS NULL THEN
    RAISE_APPLICATION_ERROR(-20100, 'SQL_TEXT for SQL_ID &&sql_id. was not found in memory (gv$sqltext_with_newlines) or AWR (dba_hist_sqltext).');
  END IF;
END;
/
SET TERM OFF;

-- get other_xml from memory
BEGIN
  FOR i IN (SELECT other_xml
              FROM gv$sql_plan
             WHERE sql_id = TRIM('&&sql_id.')
               AND plan_hash_value = TO_NUMBER(TRIM('&&plan_hash_value.'))
               AND other_xml IS NOT NULL
             ORDER BY
                   child_number, id)
  LOOP
    :other_xml := i.other_xml;
    EXIT; -- 1st
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting other_xml from memory: '||SQLERRM);
    :other_xml := NULL;
END;
/

-- get other_xml from awr
BEGIN
  IF :other_xml IS NULL OR NVL(DBMS_LOB.GETLENGTH(:other_xml), 0) = 0 THEN
    FOR i IN (SELECT other_xml
                FROM dba_hist_sql_plan
               WHERE sql_id = TRIM('&&sql_id.')
                 AND plan_hash_value = TO_NUMBER(TRIM('&&plan_hash_value.'))
                 AND other_xml IS NOT NULL
               ORDER BY
                     id)
    LOOP
      :other_xml := i.other_xml;
      EXIT; -- 1st
    END LOOP;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting other_xml from awr: '||SQLERRM);
    :other_xml := NULL;
END;
/

-- get other_xml from memory from modified SQL
BEGIN
  IF :other_xml IS NULL OR NVL(DBMS_LOB.GETLENGTH(:other_xml), 0) = 0 THEN
    FOR i IN (SELECT other_xml
                FROM gv$sql_plan
               WHERE plan_hash_value = TO_NUMBER(TRIM('&&plan_hash_value.'))
                 AND other_xml IS NOT NULL
               ORDER BY
                     child_number, id)
    LOOP
      :other_xml := i.other_xml;
      EXIT; -- 1st
    END LOOP;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting other_xml from memory: '||SQLERRM);
    :other_xml := NULL;
END;
/

-- get other_xml from awr from modified SQL
BEGIN
  IF :other_xml IS NULL OR NVL(DBMS_LOB.GETLENGTH(:other_xml), 0) = 0 THEN
    FOR i IN (SELECT other_xml
                FROM dba_hist_sql_plan
               WHERE plan_hash_value = TO_NUMBER(TRIM('&&plan_hash_value.'))
                 AND other_xml IS NOT NULL
               ORDER BY
                     id)
    LOOP
      :other_xml := i.other_xml;
      EXIT; -- 1st
    END LOOP;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting other_xml from awr: '||SQLERRM);
    :other_xml := NULL;
END;
/

SELECT :other_xml FROM DUAL;

-- validate other_xml
SET TERM ON;
BEGIN
  IF :other_xml IS NULL THEN
    RAISE_APPLICATION_ERROR(-20101, 'PLAN for SQL_ID &&sql_id. and PHV &&plan_hash_value. was not found in memory (gv$sql_plan) or AWR (dba_hist_sql_plan).');
  END IF;
END;
/
SET TERM OFF;

-- generates script that creates sql profile in target system:
SET ECHO OFF;
PRO coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql.
SET FEED OFF LIN 666 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 FOR WOR;
SET SERVEROUT ON SIZE UNL FOR WOR;
SPO OFF;
SPO coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql;
DECLARE
  l_pos NUMBER;
  l_clob_size NUMBER;
  l_offset NUMBER;
  l_sql_text VARCHAR2(32767);
  l_len NUMBER;
  l_hint VARCHAR2(32767);
BEGIN
  DBMS_OUTPUT.PUT_LINE('SPO coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..log;');
  DBMS_OUTPUT.PUT_LINE('SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM $Header: 215187.1 coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql 11.4.4.4 '||TO_CHAR(SYSDATE, 'YYYY/MM/DD')||' carlos.sierra $');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM Copyright (c) 2000-2012, Oracle Corporation. All rights reserved.');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM AUTHOR');
  DBMS_OUTPUT.PUT_LINE('REM   carlos.sierra@oracle.com');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM SCRIPT');
  DBMS_OUTPUT.PUT_LINE('REM   coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM DESCRIPTION');
  DBMS_OUTPUT.PUT_LINE('REM   This script is generated by coe_xfr_sql_profile.sql');
  DBMS_OUTPUT.PUT_LINE('REM   It contains the SQL*Plus commands to create a custom');
  DBMS_OUTPUT.PUT_LINE('REM   SQL Profile for SQL_ID &&sql_id. based on plan hash');
  DBMS_OUTPUT.PUT_LINE('REM   value &&plan_hash_value..');
  DBMS_OUTPUT.PUT_LINE('REM   The custom SQL Profile to be created by this script');
  DBMS_OUTPUT.PUT_LINE('REM   will affect plans for SQL commands with signature');
  DBMS_OUTPUT.PUT_LINE('REM   matching the one for SQL Text below.');
  DBMS_OUTPUT.PUT_LINE('REM   Review SQL Text and adjust accordingly.');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM PARAMETERS');
  DBMS_OUTPUT.PUT_LINE('REM   None.');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM EXAMPLE');
  DBMS_OUTPUT.PUT_LINE('REM   SQL> START coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql;');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('REM NOTES');
  DBMS_OUTPUT.PUT_LINE('REM   1. Should be run as SYSTEM or SYSDBA.');
  DBMS_OUTPUT.PUT_LINE('REM   2. User must have CREATE ANY SQL PROFILE privilege.');
  DBMS_OUTPUT.PUT_LINE('REM   3. SOURCE and TARGET systems can be the same or similar.');
  DBMS_OUTPUT.PUT_LINE('REM   4. To drop this custom SQL Profile after it has been created:');
  DBMS_OUTPUT.PUT_LINE('REM      EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE(''coe_&&sql_id._&&plan_hash_value.'');');
  DBMS_OUTPUT.PUT_LINE('REM   5. Be aware that using DBMS_SQLTUNE requires a license');
  DBMS_OUTPUT.PUT_LINE('REM      for the Oracle Tuning Pack.');
  DBMS_OUTPUT.PUT_LINE('REM   6. If you modified a SQL putting Hints in order to produce a desired');
  DBMS_OUTPUT.PUT_LINE('REM      Plan, you can remove the artifical Hints from SQL Text pieces below.');
  DBMS_OUTPUT.PUT_LINE('REM      By doing so you can create a custom SQL Profile for the original');
  DBMS_OUTPUT.PUT_LINE('REM      SQL but with the Plan captured from the modified SQL (with Hints).');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('WHENEVER SQLERROR EXIT SQL.SQLCODE;');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('VAR signature NUMBER;');
  DBMS_OUTPUT.PUT_LINE('VAR signaturef NUMBER;');
  DBMS_OUTPUT.PUT_LINE('REM');
  DBMS_OUTPUT.PUT_LINE('DECLARE');
  DBMS_OUTPUT.PUT_LINE('sql_txt CLOB;');
  DBMS_OUTPUT.PUT_LINE('h       SYS.SQLPROF_ATTR;');
  DBMS_OUTPUT.PUT_LINE('PROCEDURE wa (p_line IN VARCHAR2) IS');
  DBMS_OUTPUT.PUT_LINE('BEGIN');
  DBMS_OUTPUT.PUT_LINE('DBMS_LOB.WRITEAPPEND(sql_txt, LENGTH(p_line), p_line);');
  DBMS_OUTPUT.PUT_LINE('END wa;');
  DBMS_OUTPUT.PUT_LINE('BEGIN');
  DBMS_OUTPUT.PUT_LINE('DBMS_LOB.CREATETEMPORARY(sql_txt, TRUE);');
  DBMS_OUTPUT.PUT_LINE('DBMS_LOB.OPEN(sql_txt, DBMS_LOB.LOB_READWRITE);');
  DBMS_OUTPUT.PUT_LINE('-- SQL Text pieces below do not have to be of same length.');
  DBMS_OUTPUT.PUT_LINE('-- So if you edit SQL Text (i.e. removing temporary Hints),');
  DBMS_OUTPUT.PUT_LINE('-- there is no need to edit or re-align unmodified pieces.');
  l_clob_size := NVL(DBMS_LOB.GETLENGTH(:sql_text), 0);
  l_offset := 1;
  WHILE l_offset < l_clob_size
  LOOP
    l_pos := DBMS_LOB.INSTR(:sql_text, CHR(00), l_offset);
    IF l_pos > 0 THEN
      l_len := l_pos - l_offset;
    ELSE -- last piece
      l_len := l_clob_size - l_pos + 1;
    END IF;
    l_sql_text := DBMS_LOB.SUBSTR(:sql_text, l_len, l_offset);
    /* cannot do such 3 replacement since a line could end with a comment using "--"
    l_sql_text := REPLACE(l_sql_text, CHR(10), ' '); -- replace LF with SP
    l_sql_text := REPLACE(l_sql_text, CHR(13), ' '); -- replace CR with SP
    l_sql_text := REPLACE(l_sql_text, CHR(09), ' '); -- replace TAB with SP
    */
    l_offset := l_offset + l_len + 1;
    IF l_len > 0 THEN
      IF INSTR(l_sql_text, '''[') + INSTR(l_sql_text, ']''') = 0 THEN
        l_sql_text := '['||l_sql_text||']';
      ELSIF INSTR(l_sql_text, '''{') + INSTR(l_sql_text, '}''') = 0 THEN
        l_sql_text := '{'||l_sql_text||'}';
      ELSIF INSTR(l_sql_text, '''<') + INSTR(l_sql_text, '>''') = 0 THEN
        l_sql_text := '<'||l_sql_text||'>';
      ELSIF INSTR(l_sql_text, '''(') + INSTR(l_sql_text, ')''') = 0 THEN
        l_sql_text := '('||l_sql_text||')';
      ELSIF INSTR(l_sql_text, '''"') + INSTR(l_sql_text, '"''') = 0 THEN
        l_sql_text := '"'||l_sql_text||'"';
      ELSIF INSTR(l_sql_text, '''|') + INSTR(l_sql_text, '|''') = 0 THEN
        l_sql_text := '|'||l_sql_text||'|';
      ELSIF INSTR(l_sql_text, '''~') + INSTR(l_sql_text, '~''') = 0 THEN
        l_sql_text := '~'||l_sql_text||'~';
      ELSIF INSTR(l_sql_text, '''^') + INSTR(l_sql_text, '^''') = 0 THEN
        l_sql_text := '^'||l_sql_text||'^';
      ELSIF INSTR(l_sql_text, '''@') + INSTR(l_sql_text, '@''') = 0 THEN
        l_sql_text := '@'||l_sql_text||'@';
      ELSIF INSTR(l_sql_text, '''#') + INSTR(l_sql_text, '#''') = 0 THEN
        l_sql_text := '#'||l_sql_text||'#';
      ELSIF INSTR(l_sql_text, '''%') + INSTR(l_sql_text, '%''') = 0 THEN
        l_sql_text := '%'||l_sql_text||'%';
      ELSIF INSTR(l_sql_text, '''$') + INSTR(l_sql_text, '$''') = 0 THEN
        l_sql_text := '$'||l_sql_text||'$';
      ELSE
        l_sql_text := CHR(96)||l_sql_text||CHR(96);
      END IF;
      DBMS_OUTPUT.PUT_LINE('wa(q'''||l_sql_text||''');');
    END IF;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('DBMS_LOB.CLOSE(sql_txt);');
  DBMS_OUTPUT.PUT_LINE('h := SYS.SQLPROF_ATTR(');
  DBMS_OUTPUT.PUT_LINE('q''[BEGIN_OUTLINE_DATA]'',');
  FOR i IN (SELECT /*+ opt_param('parallel_execution_enabled', 'false') */
                   SUBSTR(EXTRACTVALUE(VALUE(d), '/hint'), 1, 4000) hint
              FROM TABLE(XMLSEQUENCE(EXTRACT(XMLTYPE(:other_xml), '/*/outline_data/hint'))) d)
  LOOP
    l_hint := i.hint;
    WHILE NVL(LENGTH(l_hint), 0) > 0
    LOOP
      IF LENGTH(l_hint) <= 500 THEN
        DBMS_OUTPUT.PUT_LINE('q''['||l_hint||']'',');
        l_hint := NULL;
      ELSE
        l_pos := INSTR(SUBSTR(l_hint, 1, 500), ' ', -1);
        DBMS_OUTPUT.PUT_LINE('q''['||SUBSTR(l_hint, 1, l_pos)||']'',');
        l_hint := '   '||SUBSTR(l_hint, l_pos);
      END IF;
    END LOOP;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('q''[END_OUTLINE_DATA]'');');
  DBMS_OUTPUT.PUT_LINE(':signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);');
  DBMS_OUTPUT.PUT_LINE(':signaturef := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt, TRUE);');
  DBMS_OUTPUT.PUT_LINE('DBMS_SQLTUNE.IMPORT_SQL_PROFILE (');
  DBMS_OUTPUT.PUT_LINE('sql_text    => sql_txt,');
  DBMS_OUTPUT.PUT_LINE('profile     => h,');
  DBMS_OUTPUT.PUT_LINE('name        => ''coe_&&sql_id._&&plan_hash_value.'',');
  DBMS_OUTPUT.PUT_LINE('description => ''coe &&sql_id. &&plan_hash_value. ''||:signature||'' ''||:signaturef||'''',');
  DBMS_OUTPUT.PUT_LINE('category    => ''DEFAULT'',');
  DBMS_OUTPUT.PUT_LINE('validate    => TRUE,');
  DBMS_OUTPUT.PUT_LINE('replace     => TRUE,');
  DBMS_OUTPUT.PUT_LINE('force_match => TRUE  /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );');
  DBMS_OUTPUT.PUT_LINE('DBMS_LOB.FREETEMPORARY(sql_txt);');
  DBMS_OUTPUT.PUT_LINE('END;');
  DBMS_OUTPUT.PUT_LINE('/');
  DBMS_OUTPUT.PUT_LINE('WHENEVER SQLERROR CONTINUE');
  DBMS_OUTPUT.PUT_LINE('SET ECHO OFF;');
  DBMS_OUTPUT.PUT_LINE('PRINT signature');
  DBMS_OUTPUT.PUT_LINE('PRINT signaturef');
  DBMS_OUTPUT.PUT_LINE('PRO');
  DBMS_OUTPUT.PUT_LINE('PRO ... manual custom SQL Profile has been created');
  DBMS_OUTPUT.PUT_LINE('PRO');
  DBMS_OUTPUT.PUT_LINE('SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";');
  DBMS_OUTPUT.PUT_LINE('SPO OFF;');
  DBMS_OUTPUT.PUT_LINE('PRO');
  DBMS_OUTPUT.PUT_LINE('PRO COE_XFR_SQL_PROFILE_&&sql_id._&&plan_hash_value. completed');
END;
/
SPO OFF;
SET DEF ON TERM ON ECHO OFF FEED 6 VER ON HEA ON LIN 80 PAGES 14 LONG 80 LONGC 80 TRIMS OFF TI OFF TIMI OFF SERVEROUT OFF NUMF "" SQLP SQL>;
SET SERVEROUT OFF;
PRO
PRO Execute coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql
PRO on TARGET system in order to create a custom SQL Profile
PRO with plan &&plan_hash_value linked to adjusted sql_text.
PRO
UNDEFINE 1 2 sql_id plan_hash_value
CL COL
PRO
PRO COE_XFR_SQL_PROFILE completed.

                            





Create SQL PROFILE in Grid Control:

1. Login into Grid Control and select the target database
2. Click ‘Performance’ tab to go to Performance page
3. In the Performance page, click ‘Top Activity’
4. Select a problem statement by clicking the ‘SQL ID’
5. Click ‘Schedule SQL Tuning Advisor’
6. In order to get a SQL PROFILE recommendation, ‘Comprehensive’ has to be checked. It takes a while for the tuning process to run.
7. Check the ‘Benefit’ percentage and check ‘Compare Explain Plans’ by clicking the glasses icon. If the SQL PROFILE improves the performance, click ‘Implement’.
8. After implementation, query dba_sql_profiles to check the information:
(select name, SQL_TEXT, LAST_MODIFIED, DESCRIPTION, STATUS from dba_sql_profiles order by 1;)

9. Change SQL PROFILE
a. To disable a SQL profile:
exec dbms_sqltune.alter_sql_profile('sqlprofile_name', 'STATUS', 'DISABLED');

b. To add description to a SQL profile:
exec DBMS_SQLTUNE.alter_sql_profile('sqlprofile_name','DESCRIPTION','this is a test sql profile');

10. To delete SQL Profile:
exec dbms_sqltune.drop_sql_profile('SYS_SQLPROF_0132f8432cbc0000');







Purge a SQL ID from the shared pool:

Once we are done with pinning the good plan, we need to purge the sql from the shared pool so the new plan is picked up on the next parse.



-- select 'exec dbms_shared_pool.purge('''||ADDRESS||','||HASH_VALUE||''',''C'')' from V$SQLAREA where SQL_ID='&sqlid';


-- SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';

ADDRESS HASH_VALUE
---------------- ----------
000000085FD77CF0  808321886

SQL> exec DBMS_SHARED_POOL.PURGE ('000000085FD77CF0, 808321886', 'C');

PL/SQL procedure successfully completed.

SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';

no rows selected





References :

https://docs.oracle.com/database/121/TGSQL/tgsql_profiles.htm#TGSQL610