Tuesday, May 6, 2014

Exadata exam 1z0-027

                  



Exadata -- For Exadata Database Machine Admins -- Filtered Information



This is actually someone else's post. I have copied it here since it covers exactly what is required for the exam.

This post is based on information relevant to Exadata administrators. You will find information gathered along the way to becoming certified on Exadata.
The information is provided item by item, and I think this post will be useful for Exadata admins even if they already have field experience.
This post is unstructured; I mean the information in this document is not grouped under subtitles, but it will present the whole picture of Exadata administration once you have read all of it.
The information provided here is like snips from the body of Exadata knowledge.
In order to understand this document, you should already know the basic concepts of Exadata
(storage indexes, Smart Scan, and so on).

Let's start:

You can make indexes invisible to increase the chance of using storage indexes.
You can load your data sorted on the commonly filtered columns to maximize the benefit of storage indexes.
Bind variables can be used with storage indexes.
Oracle Database QoS Management can offer recommendations for CPU bottlenecks. QoS Management cannot provide recommendations for Global Cache resource bottlenecks, and it cannot resolve I/O resource bottlenecks either.
For media-based backups, it is recommended to allocate an equivalent number of channels and instances per tape drive. For this type of backup, the network cables between Exadata and the media servers should be connected through the Exadata database nodes.
Bonding on Exadata can be used both for reliability and for load balancing, but when I look at a production Exadata machine (an X3), I see that the bonding is configured for reliability:

cat /proc/net/bonding/bondeth0
Ethernet Channel Bonding Driver: v3.2.3 (December 6, 2007)
Bonding Mode: fault-tolerance (active-backup)
....
BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100" 


active-backup or 1 -> Sets an active-backup policy for fault tolerance.
Transmissions are received and sent out via the first available bonded slave interface.
Another bonded slave interface is only used if the active bonded slave interface fails.


To guarantee proper cooling for an Exadata machine, perforated floor tiles should be placed at the front, because the air flow is from front to back.

Creating multiple grid disks on a single cell disk in Exadata gives us multiple storage pools with different performance characteristics, and these pools can be assigned to different databases.

Here is some general information about the disk layout in Exadata:
Physical disk -> LUN -> Cell disk (it's like a filesystem on the LUN) -> Grid disk (it's like a partition)
Lastly, grid disks are served to the ASM diskgroups.
Note that we can create flash-based ASM diskgroups, too. They may share space with the Flash Cache on the flash disks.
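The layers above can be inspected from CellCLI on a storage cell; a minimal sketch (the attribute list in the last command is just one useful selection):

```text
CellCLI> list physicaldisk          -- the physical disks in the cell
CellCLI> list lun                   -- LUNs built on the physical disks
CellCLI> list celldisk              -- cell disks created on the LUNs
CellCLI> list griddisk attributes name, cellDisk, size
```

Comparing the output of these four commands side by side makes the physical disk -> LUN -> cell disk -> grid disk chain easy to follow.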
Things to consider for Exadata migrations:
For the Transportable Database method -> the source database should be 10.2.0.5 and little-endian.
For the Data Pump method -> 10.2.0.5 is fine, but it is time consuming.
For the Data Guard physical standby method -> the source platform must be Linux, Solaris x86, or Windows, and the source must be on 11.2.
For the ASM rebalance method -> an 11.2 Linux x86-64 database that uses ASM with a 4 MB AU.
For the Transportable Tablespaces method -> a big-endian source >= 10.1, or a little-endian source >= 10.1 and < 11.2, is needed.
For the logical standby method -> the source does not need to be on 11g. Logical standby is not supported for HP-UX to Linux migrations.

Also, here is some useful information about the methods:

Physical Standby:
-------------------------------------------------
Source platform must be Linux, Solaris x86 or Windows (see Document 413484.1) and
source on 11.2
Source database in ARCHIVELOG mode
NOLOGGING operations to the database are not permitted, i.e. FORCE LOGGING is
 turned on at the database level

Transportable Database
-------------------------------------------------

Source system must be 10gR2 (10.2.0.5) and little endian
Use when service level requirements cannot tolerate the downtime required by Oracle Data Pump

Transportable Tablespaces
-------------------------------------------------
Any source platform
EBS Release 12 with source database RDBMS release 10gR2 (10.2.0.5) or higher
EBS 11i with source database RDBMS release 11.2
Service levels cannot tolerate a significant outage.  Using transportable tablespaces instead
 of Oracle Data Pump export/import can significantly reduce the outage time for large (> 300 GB)
EBS databases.  Thorough testing will determine the precise outage time.
For a point of reference, tests on an Oracle Exadata Database Machine quarter rack with the
 Rapid Install Vision database (about 300 GB) took about 12 hours.  This time should remain
 about the same regardless of the amount of data in the database.  This is because the metadata
creation takes the longest time in the migration process and accounts for the bulk of time.

Oracle Data Pump
-------------------------------------------------
Any source platform
Source database is RDBMS release 10.2 or higher
To implement Oracle Exadata Database Machine best practices on the target
Service levels can tolerate a significant outage.
For a point of reference, tests on an Oracle Exadata Database Machine quarter rack with the
Rapid Install Vision database (about 300 GB) took about 24 hours (export - 7:42; import - 16:42)
 using Network Storage and no dump file copy (i.e. the export dump storage was mounted on the
source and the target).
Timings will vary depending on your system configuration and increase as the amount of data
increases.

In Exadata, Oracle Enterprise Manager agents must be deployed to the compute nodes. The Oracle Exadata Plug-in is deployed with the agent. Plug-ins allow you to monitor the key components of the Exadata machine; there are several plug-ins for Grid Control and Cloud Control (such as Avocent MergePoint Unity switch, Cisco switch, Oracle ILOM, InfiniBand switch, and PDU). Note that a trap forwarder is required to catch Cisco switch and KVM traps, due to a port mismatch.
The agent communicates with the Storage Server and InfiniBand switch targets directly. The Oracle Exadata Plug-in also monitors the other Database Machine components. The Oracle Enterprise Manager 12c agent collects data and communicates with the remote Enterprise Manager repository.

In Exadata X3, we have 512 MB flash logs on the storage servers.
On the compute nodes, we have RAID 5 arrays;

Virtual Drives
Virtual drive : Target Id 0 ,VD name DBSYS
Size : 556.929 GB 
State : Optimal
RAID Level : 5

Exadata -> action plan to replace a flash disk when grid disks are created on it.
Ref: Replacing FlashCards or FDOM's when Griddisks are created on FlashDisk's (Doc ID 1545103.1)

If the flash card needs to be replaced, drop the ASM disks that reside on the flash disk
Drop the flash cache / flash log and delete the cell disks of type FlashDisk
Shut down the cell
Replace the card
Create the flash disks
Now create your grid disks back on disk type FlashDisk
Add the disks back to the diskgroup (this is optional: if you used 'force' when dropping the disks from ASM, then Exadata auto management should automatically add these disks back into ASM)
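The cell-side part of the action plan above can be sketched in CellCLI as follows. This is a sketch, not the full MOS procedure: the ASM drop/add steps happen on the compute nodes, and the FLASH grid disk prefix is illustrative. Note that when grid disks share the flash disks, the flash cache should be created with an explicit size rather than taking all remaining space:

```text
CellCLI> drop flashcache
CellCLI> drop flashlog
CellCLI> drop celldisk all flashdisk
-- shut down the cell, replace the flash card, restart the cell
CellCLI> create celldisk all flashdisk
CellCLI> create flashlog all
CellCLI> create flashcache all size=<desired size>
CellCLI> create griddisk all flashdisk prefix=FLASH
```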

Note that we have different failure types in Exadata. For example, a disk that is in predictive failure state must be replaced immediately. ASM will drop this kind of disk automatically from the associated diskgroup and start a rebalance operation.
The same goes for a flash disk: if the flash disk is in predictive failure, then ->
If the flash disk is used for flash cache, the flash cache will be disabled on this disk, reducing the effective flash cache size. If the flash disk is used for flash log, the flash log will be disabled on this disk, reducing the effective flash log size. If the flash disk is used for grid disks, Oracle ASM rebalance will automatically restore the data redundancy.

If you want to change a memory DIMM or carry out a similar activity, you just need to shut down the affected cell. The databases and the other cell servers will not be affected by this operation.
What you have to do is:
First check the asmDeactivationOutcome attribute of the grid disks.
Then inactivate all the grid disks on that cell, and shut down the cell using shutdown -h now.
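A minimal sketch of that sequence in CellCLI, run on the affected cell:

```text
CellCLI> list griddisk attributes name, asmdeactivationoutcome, asmmodestatus
-- proceed only if asmdeactivationoutcome reports 'Yes' for every grid disk
CellCLI> alter griddisk all inactive
-- then, as root on the cell: shutdown -h now
-- after the maintenance, bring the grid disks back and wait for ONLINE status:
CellCLI> alter griddisk all active
CellCLI> list griddisk attributes name, asmmodestatus
```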

A quarterly battery learn cycle is a normal maintenance activity. It is used for conditioning (discharging and recharging) the batteries. When this kind of activity is happening, you can end up with performance degradation in terms of I/O, as the event may require the related device to be put into write-through mode. Keep that in mind, and if you can or need to, set the device back to write-back mode afterwards.

Configure and use Smart Flash Log if you need better LGWR performance: LGWR writes redo data both to flash and to disk in parallel, and considers the write done as soon as whichever of the two completes first.
Smart Flash Log is a feature that comes with the 11.2.2.2.4 cell software. It is not for reading; it is used like a circular buffer for redo writes. Smart Flash Log can enhance the performance of an OLTP database. By default it occupies 512 MB per cell (32 MB per flash disk * 16 flash disks) in the flash cards, so Smart Flash Log reduces the size of the Flash Cache somewhat.
Note that Smart Flash Log can be enabled or disabled per database using IORM if needed.
Also, LGWR writes and controlfile I/O are automatically given high priority in IORM, while DBWR I/O is automatically managed at normal priority.
The flash log, if needed, can be dropped using the drop flashlog command through the CellCLI utility residing on the storage servers.
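A sketch of inspecting, dropping, and recreating the flash log from CellCLI (the explicit size in the last variant is illustrative):

```text
CellCLI> list flashlog detail
CellCLI> drop flashlog
CellCLI> create flashlog all
-- or with an explicit total size on the cell:
CellCLI> create flashlog all size=512M
```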

Consider using CTAS for bulk data loading from external tables into Exadata. CTAS automatically uses a direct path load. INSERT /*+ APPEND */ can also be used for this kind of data loading; with the APPEND hint, Oracle uses direct path loading for insert operations, too.
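A minimal SQL sketch of both approaches, assuming a hypothetical external table ext_sales and target table sales_hist:

```sql
-- CTAS: direct path load, optionally in parallel
CREATE TABLE sales_hist PARALLEL 8 NOLOGGING
AS SELECT * FROM ext_sales;

-- direct path insert into an existing table
INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM ext_sales;
COMMIT;
```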

cellip.ora is the configuration file to use if you want to control which cells are accessed by the ASM instances. This file is located where ASM resides (the compute nodes), and it basically tells ASM which cells are available. cellip.ora exists on every compute node, and its contents look like the following:

cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.10.10"
cell="192.168.10.11"
cell="192.168.10.12" 

This file should also be edited if you want to add an expansion rack to the storage grid.
In the default configuration, the DATA and RECO diskgroups are built on top of non-interleaving grid disks. For detailed information about interleaving, please see my following post: http://ermanarslan.blogspot.com.tr/2013/12/exadata-zbr-and-interleaving.html
Also, in the default configuration, we don't have any free space on the flash disks;

CellCLI> list celldisk where name='FD_15_cel01' detail
name: FD_15_cel01
comment:
creationTime: 2012-01-10T10:13:06+00:00
deviceName: /dev/sdy
devicePartition: /dev/sdy
diskType: FlashDisk
errorCount: 0
freeSpace: 0 
id: 8ddbd2c8-8446-4735-8948-d8aea5744b35
interleaving: none
lun: 5_3
size: 22.875G
status: normal

Note that you can use InfiniBand to connect an Exadata to an Exalogic. You can also use InfiniBand to connect an Exadata to an Oracle ZFS Storage ZS3, as the ZS3 has native InfiniBand connectivity.
Alternatively, a Sun ZFS Storage 7420 appliance can be connected to Exadata directly over InfiniBand.
In addition, you can connect any media servers that have InfiniBand cards to Exadata via InfiniBand, and then connect those media servers to tape libraries to maximize the tape backup throughput.
In terms of tape backups, the Oracle docs suggest a disk-to-disk-to-tape (D2D2T) strategy, which allows keeping old backups on tape while retaining new, fresh backups on disk to achieve fast recovery times. You can consider the following Oracle slide as a good Exadata tape backup scenario;


Uncommitted transactions and migrated rows can cause cell single block physical reads even if you are doing a full table scan. Note that single block reads and multiblock reads may benefit from the Flash Cache.
Also note that Smart Scan cannot be performed against index-organized tables or clustered tables.

When you need to apply a bundle patch to the Oracle homes in Exadata, you will want to use the oplan utility. oplan generates instructions for applying patches, as well as instructions for rollback, for all the nodes in the cluster. Note that oplan does not support Data Guard.
oplan is supported since release 11.2.0.2. It basically eases the patching process, because without it you need to read the README files and extract your instructions yourself.
It is used as follows:
As the Oracle software owner (Grid or RDBMS), execute oplan:
$ORACLE_HOME/oplan/oplan generateApplySteps <bundle patch location>
It will create patch instructions for you in HTML and text formats:
$ORACLE_HOME/cfgtoollogs/oplan/<TimeStamp>/InstallInstructions.html
$ORACLE_HOME/cfgtoollogs/oplan/<TimeStamp>/InstallInstructions.txt
Then choose the apply strategy according to your needs and follow the patching instructions to apply the patch to the target.
That's it.
If you want to roll back the patch, execute the following (replacing the bundle patch location):
$ORACLE_HOME/oplan/oplan generateRollbackSteps <bundle patch location>
Again, choose the rollback strategy according to your needs and follow the instructions to roll back the patch from the target.

It is mandatory to know which components/tools run on which servers in Exadata. Here is the list:

DCLI -> storage cells and compute nodes; executes CellCLI commands on multiple storage servers
ASM -> compute nodes; it is the ASM instance, basically
RDBMS -> compute nodes; it is the database software
MS -> storage cells; provides a Java interface to the CellCLI command-line interface, as well as an interface for Enterprise Manager plug-ins
RS -> storage cells; RS is a set of processes responsible for managing and restarting other processes
CellCLI -> storage cells; used to run storage commands
Cellsrv -> storage cells; it receives and unpacks iDB messages transmitted over the InfiniBand interconnect and examines the metadata contained in the messages
Diskmon -> compute nodes; in Exadata, diskmon is responsible for: handling storage cell failures and I/O fencing; monitoring the Exadata Server state on all storage cells in the cluster (heartbeat); broadcasting intra-database IORM (I/O Resource Manager) plans from databases to storage cells; monitoring the control messages from database and ASM instances to storage cells; and communicating with the other diskmons in the cluster.

The following strategy should be used for applying patches in Exadata:
Review the patch README file (know what you are doing)
Run the Exachk utility before the patch application (check the system, know its current state)
Automate the patch application process (automate it to be fast and to minimize problems)
Apply the patch
Run the Exachk utility again after the patch application
Verify the patch (does it fix the problem, or does it supply the related functionality?)
Check the performance of the system (is there any abnormal performance decrease?)
Test the failback procedure (as you may need to fail back in production, who knows)


Multiple grid disks can be created on a single cell disk, as you know. While creating multiple grid disks on a single disk, you can end up with disks that have different performance characteristics. In order to have a more balanced disk layout, you can use the interleaving options or the Intelligent Data Placement technology, which is based on ASM.

The internal InfiniBand network used in Exadata transmits iDB messages between the compute nodes and the storage servers, as well as RAC interconnect traffic between the compute nodes in the cluster.

DBFS or an NFS filesystem can be used as a staging area for loading data from external tables. If choosing NFS to load data, the NFS share should be mounted on the preferred compute node.
DBFS can be created on DBFS_DG, as well as on a standard filesystem. It can enhance performance if you need to bulk load data into your production system residing on Exadata: by using DBFS created on ASM, you get parallelization in the storage layer, which enhances I/O performance.

The diagget.sh script can be used to gather diagnostic information, including software logs, trace files, and OS information.

An Exadata storage server has alerts defined on it by default. These alerts are based on predefined metrics. We can also define new metric thresholds, and they will persist across cell node reboots.

If you have an 11.1.0.2 (little-endian) database and want to migrate it to Exadata with minimum downtime, you can upgrade it to 11.2 and use a Data Guard physical standby to minimize downtime, or alternatively you can use GoldenGate for this. Of course you can use Data Pump as well, but with Data Pump the downtime will be significant.

With the non-interleaving disk configuration in Exadata (which is the default), we can say that the first grid disks created using the CREATE GRIDDISK ... command will have the best performance, as they occupy the fastest, outermost tracks of the disks. So a diskgroup based on the first-created grid disks will have better performance than another diskgroup on the same Exadata machine.

If you want to do some administrative work on the Exadata storage servers, such as dropping a cell disk, you can use the celladmin OS account on the storage servers. You can use the cellcli utility (on each cell) or the dcli utility (from one cell, to administer all of the cells) to execute your commands.
DCLI is a Python script. It is used to execute commands on the cells remotely without logging in to them. (On first use, you distribute SSH keys with: dcli -k -g mycells.)
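A minimal dcli sketch, assuming a file named mycells that lists the cell hostnames, one per line:

```text
# one-time SSH key distribution; prompts for the celladmin password
dcli -k -g mycells -l celladmin

# then run a CellCLI command on every cell in the group
dcli -g mycells -l celladmin "cellcli -e list cell attributes name, status"
```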

Note that we have a firewall (iptables) configured with Oracle-supplied rules on the cell servers.

In OLTP systems, the Exadata Write-Back Flash Cache can enhance performance, as it also caches database block writes.
The flash log can also be useful for enhancing the performance of OLTP systems, as a fast response time for the log writer is crucial in OLTP.
For big OLTP systems, the flash cache and flash log can enhance performance, and the High Capacity disks in Exadata can meet the storage size needs.

Note that IDP, iDB, and the IORM manager can only be seen in an Exadata environment.

IORM is used for managing the Storage IO resources.

Here is a diagram for the description of the architecture of IORM. (reference: Centroid)


So we can manage our I/O resources based on categories, databases, and consumer groups. There is a hierarchy, as you see in the picture; this hierarchy is used to distribute I/O.
IORM should be used on Exadata if you have a lot of databases running on the machine. IORM is a friend of consolidation projects, in my opinion.
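A minimal inter-database IORM plan sketch in CellCLI (the database names prod and test, and the 70/30 split, are illustrative):

```text
CellCLI> alter iormplan dbplan=((name=prod, level=1, allocation=70), -
                                (name=test, level=1, allocation=30))
CellCLI> alter iormplan active
CellCLI> list iormplan detail
```

The plan is set per cell, so in practice you would push the same command to all cells with dcli.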

If you have configured ASR Manager in your environment, note that notifications are transferred to ASR Manager using SNMP, and these notifications are forwarded from ASR Manager to Enterprise Manager using SNMP as well. In addition, faults are transferred to Oracle securely using HTTPS.

The fault telemetry sent from ASR manager is in the below format;

Telemetry data:

System-id: system serial number
Host-id: hostname of the system sending the message
Message-time: time the message was generated
Product-name: model name of the server


As is known, Exachk is the utility used to validate an Exadata environment. We check the system using this utility from time to time (before patches, after patches, and so on). It can also be scheduled to run regularly: to schedule Exachk, we can create a cron job, or we can create a job in Enterprise Manager.

Compression on Exadata can only be done on the compute nodes. Decompression, on the other hand, can be done on the compute nodes or on the storage cells; it is done on the storage cells if the associated operation is based on Smart Scan. The same rule applies to encryption and decryption, too.

Here is a general information about compression types:
BASIC compression, introduced in Oracle 8 already and only recommended for Data Warehouse
OLTP compression, introduced in Oracle 11 and recommended for OLTP Databases as well
QUERY LOW compression (Exadata only), recommended for Data Warehouse with Load Time as a critical factor  --> HCC
QUERY HIGH compression (Exadata only), recommended for Data Warehouse with focus on Space Saving --> HCC
ARCHIVE LOW compression (Exadata only), recommended for Archival Data with Load Time as a critical factor --> HCC
ARCHIVE HIGH compression (Exadata only), recommended for Archival Data with maximum Space Saving --> HCC
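The compression types above map directly to table DDL; a minimal sketch (table names are illustrative):

```sql
-- Exadata-only HCC, warehouse-style, favoring query performance:
CREATE TABLE sales_dw COMPRESS FOR QUERY HIGH
AS SELECT * FROM sales;

-- archival data, maximum space saving:
ALTER TABLE sales_2010 MOVE COMPRESS FOR ARCHIVE HIGH;
```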

If your index creation takes a long time on Exadata, consider the following information:
Cell single block physical reads can impact the performance of an index creation activity. Cell single block physical reads are like db file sequential reads on a traditional system.
Migrated and chained rows can cause cell single block read events on an Exadata machine. Also, uncommitted rows encountered during a query are handled through read consistency: to supply that consistency, the database nodes may require additional blocks, which are sent by the cell servers to the database nodes. This activity can cause cell single block physical reads as well. If you have a lot of blocks in one of these conditions, your index creation can take a long time.

When migrating to Exadata, you should consider the database type (OLTP or warehouse), the size of the source database, the version of the source database, and the endian format of the source operating system. By analyzing these inputs, you can choose an optimal migration method and strategy.

In Exadata Enterprise Manager monitoring, the communication flow from the ILOM of a storage server to Enterprise Manager goes through the storage server's MS process. ILOM sends data to the MS process using SNMP, and the MS process sends the data on to Enterprise Manager using SNMP; the data is triggered and transferred through SNMP traps.
Based on the preset thresholds defined in ILOM, we can monitor the motherboard, memory, power, and network cards of the database nodes using Enterprise Manager, and we can see the faults and alerts produced for these hardware components there.

Oracle Auto Service Request (ASR) is a secure, scalable, customer-installable software feature of warranty and Oracle Support Services that provides automatic case generation when common hardware component faults occur. ASR Manager can be installed on an external Oracle Linux or Oracle Solaris server. You can also use one of the Exadata database nodes for installing ASR Manager (not preferred).
ASR Manager communicates with Oracle using HTTPS.

In Exadata, some database work can be offloaded to the storage servers, as you know. Besides operations like full table scans, single-row functions, and simple comparison operators, some joins can be offloaded to storage. Column filtering, predicate filtering, and virtual column filtering can be offloaded to the Exadata storage servers as well.
If you want to see all the functions that can benefit from Smart Scan, you can use the following SQL:
select name from v$sqlfn_metadata where offloadable = 'YES';
The output will be like;

>
<
>=
<=
=
!=
OPTTIS
OPTTUN
OPTTMI
OPTTAD
OPTTSU
OPTTMU
OPTTDI
OPTTNG
AVG
SUM
COUNT
MIN
MAX
OPTDESC
TO_NUMBER
TO_CHAR
NVL
CHARTOROWID
ROWIDTOCHAR
OPTTLK
OPTTNK
CONCAT
SUBSTR
LENGTH
INSTR
LOWER
UPPER
ASCII
CHR
SOUNDEX
ROUND
TRUNC
..
.....
.....

Operations like full table scans and fast full index scans executed in parallel always generate Smart Scans. For a full table scan you will see the cell smart table scan wait event, and for a fast full index scan you will see cell smart index scan. Fast full index scan operations can be executed through Smart Scan because in a fast full index scan Oracle just reads the index blocks as they exist on storage; a bulk read is performed, which lets Oracle use Smart Scan even though it is an index operation. Smart Scan is performed during direct path read operations; parallel queries use direct path reads, which is why they make use of Smart Scans.
So, to put it all together: in order to have Smart Scans, we need to execute queries in parallel (or otherwise trigger direct path reads into process memory), and we need the cell.smart_scan_capable parameter set to TRUE for our ASM diskgroups.
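A sketch for the diskgroup prerequisite and for verifying the result (the diskgroup name is illustrative; the attribute is TRUE by default on Exadata):

```sql
-- from the ASM instance:
ALTER DISKGROUP data SET ATTRIBUTE 'cell.smart_scan_capable' = 'TRUE';

-- from the database session, after running the query,
-- check the offload-related statistics:
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');
```

Nonzero values for both statistics indicate that the query was offloaded and returned reduced result sets over the interconnect.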

We have Sun servers in Exadata; the compute nodes and storage servers are actually Sun servers. They have ILOM cards on them, and these ILOM cards can be used to administer the servers remotely. For example, ILOM can be used to power on a database server or to open a remote console to a storage server.

Note that if you want to check the status of all ports on the InfiniBand switch, you can use Enterprise Manager or the ibqueryerrors.pl script located on the InfiniBand switch.

To properly shut down the Exadata, we first need to stop the database and grid services; we may use the crsctl stop cluster -all command for this. Then we shut down the database servers, then the Exadata storage servers, and lastly we power off the network switches and cut the power using the power switches on the PDUs. Shutting down the database servers before the storage servers makes sense: the cells should not disappear out from under software still running on the compute nodes.

After Exadata migrations, analyses are made to drop indexes. This is done to increase the chance of using Smart Scans for our queries, but care must be taken while dropping those indexes. For example, in an OLTP system we need fast response times for single block reads, so dropping an index may have a negative performance impact on some of our OLTP queries. It is better to drop an index only after analyzing the queries of the corresponding application. You can use invisible indexes to see the difference, and you can check execution plans to see whether Oracle prefers a Smart Scan over an index access for a query. According to your analysis, make the decision to drop the unnecessary indexes.
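A minimal sketch of that test cycle (the index name is illustrative):

```sql
-- hide the index from the optimizer without dropping it
ALTER INDEX sales_cust_ix INVISIBLE;

-- optionally let only your test session still see invisible indexes
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- run the representative workload, compare plans and timings, then either:
DROP INDEX sales_cust_ix;            -- Smart Scan wins
-- or:
ALTER INDEX sales_cust_ix VISIBLE;   -- the index is still needed
```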

As you know, Exadata contains compute nodes and storage nodes. For the storage nodes, you don't have the choice of an OS other than Linux; the Exadata storage servers run Linux and will continue to run Linux. The Oracle Linux servers come with UEK kernels.
On the other hand, you can choose Solaris 11 rather than Linux for the compute nodes. The operating system is selectable at install time.

Note that in Exadata we have different networks: the management network, the public (client) network, and the InfiniBand network.

Following is a picture representing those networks; (it is for X4 actually, but it is useful)



SSH, for example, works over the management network, as it is a utility for managing the corresponding servers. If you want to change the network that sshd listens on, you can use the SSH configuration file to do that.

The following inputs can be used for configuring the Exadata machine at install time:

Customer name
Application
Region
Timezone
Compute node OS
Database Machine prefix
Admin network -> start IP address for pool, pool size, ending IP address for pool, subnet mask, gateway
Admin network -> database server admin name, storage server admin name, ILOM name
Client network -> start IP address for pool, pool size, ending IP address for pool, subnet mask, gateway, adapter speed (1GbE/10GbE Base-T or 10GbE SFP+ optical)
InfiniBand network -> start IP address for pool, pool size, ending IP address for pool, subnet mask, compute node private name
Backup / Data Guard Ethernet network
OS configuration -> domain, DNS, NTP, Grid/ASM home OS user, ASM DBA group, ASM home OPER group, ASM home admin group, RDBMS home OS user, RDBMS DBA group, oinstall group, RDBMS home OPER group, base location for Grid and RDBMS --> you can set the user IDs and group IDs of all users and groups
Home and database -> inventory location, Grid home, DB home location, software install, DB name, DATA/RECO diskgroup names and redundancy (you can't change their sizes), block size, type DW or OLTP
Cell alerting -> enable, email address
ASR configuration
OCM configuration
Grid Control agent

In order to actually have the Exadata Storage Servers send notifications via email (or alternately SNMP) each of the servers has to be configured with the appropriate settings. This is done using the ALTER CELL command in CELLCLI.

ALTER CELL smtpServer='mailserver', -
smtpFromAddr='exacel@blabla.com', -
smtpPwd='email_password', -
smtpToAddr='erm@blabla.com', -
notificationPolicy='critical,warning,clear', -
notificationMethod='mail'
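After setting these attributes, the configuration can be verified and a test notification sent from CellCLI:

```text
CellCLI> alter cell validate mail
CellCLI> list cell attributes smtpServer, smtpToAddr, notificationPolicy, notificationMethod
```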

Alerts may be stateful or stateless. If an alert is based on a threshold, it gets cleared automatically when the metric no longer violates the threshold.
For example, a filesystem alert looks as follows:

CellCLI> list alerthistory detail
name: 4_1
alertMessage: "File system "/" is 82% full, which is above the 80% threshold.
Accelerated space reclamation has started.
This alert will be cleared when file system "/" becomes less than 75% full."

Also, alerts can be fired with Critical, Warning, or Informational severity.

We have storage indexes located in the physical memory of the storage servers. Storage indexes are maintained automatically by cellsrv, and they are not persistent across reboots (as they live in memory). Cellsrv builds these indexes based on the filter columns of the offloaded queries.
In a storage index, Oracle keeps column value ranges per storage region. By using storage indexes, Oracle can easily decide which regions of storage cannot contain a given column value, and skip reading them.
A maximum of 8 columns per table are indexed per storage region, but different storage regions can have different columns indexed for the same table.
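The benefit of storage indexes can be observed through a cumulative statistic; a minimal check from the database:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name = 'cell physical IO bytes saved by storage index';
```

A growing value means the cells are skipping storage regions (and therefore physical I/O) thanks to the storage indexes.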

The storage servers are very sensitive environments in Exadata, so Oracle does not support many activities on them. We can, for example, change the root password of these servers or set up SSH equivalence for the cellmonitor user (http://docs.oracle.com/cd/E11857_01/install.111/e12651/pisag.htm#CIHJGEHI).
But if, for example, we want to upgrade the storage server software, we need to use the patchmgr utility; patchmgr is the tool Exadata database administrators use to apply (or roll back) an update to an Oracle Exadata storage cell.
Another restriction: the Oracle Exadata Storage Server Software and the operating systems cannot be modified, and customers cannot install any additional software or agents on the Exadata storage servers.

Lastly, I will mention placing multiple Exadata machines in a system room / data center.
If you have multiple Exadata Database Machines, you need to place them side by side while ensuring the exhaust air of one rack does not enter the air inlet of another. If you have multiple clusters running on several Exadata machines, you can place the racks that are part of a common cluster together, side by side.