Wednesday, December 2, 2020

All You Need to Know to Support an Oracle Database on AWS RDS

 

After moving to the public cloud (AWS), most DBAs have doubts about what changes after the move and how to manage databases on AWS.

Since I am preparing for an AWS exam and gaining more insight, I thought of documenting the key things a DBA needs to know to manage databases on AWS.

On AWS RDS, Data Guard-like functionality can be achieved using Multi-AZ deployments and read replicas. Oracle RAC is not supported on RDS; to run RAC you must self-manage it on EC2, for example starting from an Amazon Machine Image (AMI).


#################################################
#################################################

Creating Your First RDS DB Instance Running the Oracle Database Engine

To make the process of setting up your database instances easier, AWS offers an Easy Create option. 

This option enables you to create an instance with default settings and configurations for faster deployment. 


1. In the AWS Management Console, select the RDS console.
2. Select your desired AWS Region from the upper right corner.
3. Select Databases and click Create database.

4. Specify your database options as follows:

  • Creation method: Standard Create
  • Engine options: Oracle
  • Edition: Select the edition you wish to use. 
         Keep in mind that edition availability is restricted by region.
  • Version: Select your desired version.
5. Define how you plan to use your database instance.

If you want to use the instance in production, you also need to define configurations for failover,
provisioned IOPS storage, and deletion protection.

6. Enter your master password. By default, this password is automatically generated.

If you would prefer to set this password yourself, you can do the following:

Click Credential Settings in your Settings section. 

Uncheck Auto-generate a password.

Enter and confirm your desired password.

7. Configure your remaining settings. You can find more about these settings in the documentation.
8. Once your settings are configured, click Create database.
9. If you left the auto-generated password option checked, this is the only time you will be able to see that
    password, so make sure you note it. You can view this information via View credential details on your Databases page.
10. After your instance is created, you can assign it a name and view all relevant information in the console.
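The console choices above map onto parameters of the CreateDBInstance API. As a rough sketch (all identifiers, the engine version string, and sizes below are hypothetical examples, not a definitive call):

```python
# Illustrative parameter set for the CreateDBInstance API.
# Every value here is an example; actual available versions and classes vary.
create_params = {
    "DBInstanceIdentifier": "testaws",                # instance name
    "Engine": "oracle-ee",                            # edition chosen in step 4
    "EngineVersion": "19.0.0.0.ru-2020-10.rur-2020-10.r1",  # example RU version
    "DBInstanceClass": "db.m5.large",                 # instance size
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me-soon",           # or let RDS auto-generate (step 6)
    "AllocatedStorage": 100,                          # GiB
    "MultiAZ": True,                                  # production failover (step 5)
    "DeletionProtection": True,                       # production safeguard (step 5)
}

# With boto3, this dict would be passed as:
#   boto3.client("rds").create_db_instance(**create_params)
```

The same keys translate directly to `aws rds create-db-instance` CLI flags (for example `--multi-az`).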



#################################################
#################################################

Cross-Region Read Replicas for Amazon RDS




Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances.

They are an option for read scaling.

Cross-region read replicas can yield tremendous performance benefits for read-only applications with users in multiple regions. This feature enables organizations to scale geographically, as AWS has its services in many regions of the world. Amazon RDS allows up to five in-region and cross-region replicas per source, created with a single API call or a couple of clicks in the AWS Management Console.

Common use cases for cross-region read replicas are cross-region disaster recovery, global scale-out, and migrating an existing database to a new region. Let’s examine the benefits:

Disaster recovery: With the read replica and the master in different regions, you can promote the replica to be the new master if a region is completely disrupted due to unavoidable circumstances. This helps keep the business operational during emergencies.

Scale out globally: If your application has a user base that is spread out all over the planet, you can use cross-region read replicas to serve read queries from an AWS region that is close to the user.

Region migration: Cross-region read replicas make it easy for you to migrate your application from one AWS region to another. Simply create the replica, ensure that it is current, promote it to be a master database instance, and point your application to it.

The cross-region replicas are very easy to create. You simply select the desired region (and optional availability zone) in the AWS Management Console from the Instance Actions menu.


Before we proceed to create a read replica, we need to create a VPC and subnet group as per requirement.

Benefits of Read Replicas
  • Read Replicas help in decreasing load on the primary DB by serving read-only traffic.
  • You can create Read Replicas within an AZ, Cross-AZ, or Cross-Region.
  • Read Replica can be manually promoted as a standalone database instance.
  • Read Replicas support Multi-AZ deployments.
  • You can use Read Replicas to take logical backups, if you want to store the backups externally to RDS.
  • You can have Read Replicas of Read Replicas.
  • Read Replica helps to maintain a copy of databases in a different region for disaster recovery.
  • You can have up to five Read Replicas per master, each with own DNS endpoint. Unlike a Multi-AZ standby replica, you can connect to each Read Replica and use them for read scaling.
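A cross-region replica corresponds to the CreateDBInstanceReadReplica API, invoked against the destination region with the source instance's ARN. A hedged sketch (instance names, account ID, and regions are placeholders):

```python
# Illustrative parameters for CreateDBInstanceReadReplica.
# All identifiers below are examples, not real resources.
replica_params = {
    "DBInstanceIdentifier": "testaws-replica-eu",   # name of the new replica
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:123456789012:db:testaws",  # full ARN for cross-region
    "SourceRegion": "us-east-1",                    # where the source instance lives
}

# boto3 sketch, run against the *destination* region:
#   boto3.client("rds", region_name="eu-west-1") \
#        .create_db_instance_read_replica(**replica_params)
```

Promoting the replica later (the disaster-recovery and migration use cases above) would use the separate PromoteReadReplica API.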


Reference :  
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create

#################################################
#################################################


Creating an AWS console user in IAM for console access

To access the console, we need to create users for the team using IAM.

Reference : https://docs.aws.amazon.com/polly/latest/dg/setting-up.html#setting-up-iam


1)   Visit the IAM management console

console.aws.amazon.com/iam/home#users

You'll be prompted to log in with your own Amazon credentials first.

IAM is Amazon's access management system; in it you can create users with access to as much or as little of your AWS account as you like.


2)   Click Create New Users

Enter a username which makes sense, like
Firstname.Lastname
or
FirstnameL
Click Create for the user. Don't bother generating access keys for this new user; they can generate their own later on.


3) Give the new user Administrator Access
You've created the new user, here called "test.jim". Let's give them Administrator Access.

Select the user from the list of users on the screen.
Click the "Permissions" tab that appears in the pane below the users list.
Click the "Attach User Policy" button in that "Permissions" tab.


4) Select Administrator Access

























5) Apply the policy

Leave the suggested permissions at their defaults, and click "Apply Policy".


6) Give your teammate a password

Click on the "Security Credentials" tab, next to the "Permissions" tab you were using earlier.


7) Copy the password to your teammate

Share it over the phone, on a piece of paper on their desk, or in an instant message. They should change their password soon after you give it to them.


8) Provide instructions to your teammate for logging in
Your teammate will need some instructions for logging into your management console.

The login URL for your AWS account is located on your dashboard:

1. Click the Dashboard link to get to your dashboard.
2. Write down the sign-in URL for your AWS console.


9)  Customize the Sign In URL
You can customize the URL by giving it a name that is familiar, like your company name.
Afterwards, the link will look something like this: https://yourcompany.signin.aws.amazon.com/console

10) Tell your employee the username + password + sign in URL
Your employee will need the Username, password and sign in URL that you created in order to log in. They cannot log in on the normal Amazon website, they must use
the special sign in url that you give them.

When they visit the link, they will see your account's dedicated sign-in page.

Congratulations, you've just added an administrator to your Amazon AWS account!
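The console walkthrough above boils down to three IAM API calls. A hedged sketch (the user name, password, and flow are placeholders, mirroring the "test.jim" example):

```python
# The console steps correspond roughly to these IAM API calls.
new_user = "test.jim"                                          # example user
admin_policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"  # AWS managed policy

# boto3 equivalent of steps 2-6 (not executed here; requires credentials):
#   iam = boto3.client("iam")
#   iam.create_user(UserName=new_user)                             # step 2
#   iam.attach_user_policy(UserName=new_user,
#                          PolicyArn=admin_policy_arn)             # steps 3-5
#   iam.create_login_profile(UserName=new_user,
#                            Password="TempPass123!",              # placeholder
#                            PasswordResetRequired=True)           # forces a change at login
```

`PasswordResetRequired=True` enforces the "change your password once inside" advice automatically.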


Bonus email or instant message template to send your teammate - just change the bits in bold

-----

Hi Test Jim,

I've given you Administrator Access to the company Amazon AWS account.

Username: test.jim
Password: ********
Sign In URL: https://whit.signin.aws.amazon.com/console

Please bookmark and use the Sign In URL to access our company AWS console in future.
Also please change your password to something more secure once inside.

You are a huge asset to our team.

Love,

Jervis

----
#################################################
#################################################


Changing database parameters using parameter groups

Because the RDS master user is not granted SYSDBA, you cannot simply run ALTER SYSTEM to change parameters on AWS RDS. Changing database parameters is done through parameter groups.

To modify a DB parameter group

  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the navigation pane, choose Parameter groups.

  3. In the list, choose the parameter group that you want to modify.

  4. For Parameter group actions, choose Edit.

  5. Change the values of the parameters that you want to modify. You can scroll through the parameters using the arrow keys at the top right of the dialog box.

    You can't change values in a default parameter group.

  6. Choose Save changes.
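The same edit maps to the ModifyDBParameterGroup API. A hedged sketch (group name, parameter, and value are illustrative; only non-default groups can be modified):

```python
# Illustrative ModifyDBParameterGroup request; names/values are examples.
modify_params = {
    "DBParameterGroupName": "custom-oracle-ee-19",   # must be a non-default group
    "Parameters": [
        {
            "ParameterName": "open_cursors",
            "ParameterValue": "500",
            "ApplyMethod": "immediate",   # use "pending-reboot" for static parameters
        }
    ],
}
#   boto3.client("rds").modify_db_parameter_group(**modify_params)
```

Dynamic parameters can use `"immediate"`; static ones only take effect after a reboot, hence `"pending-reboot"`.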


For more detail, refer to the AWS documentation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

#################################################
#################################################

Connecting to an AWS RDS Oracle database using TNS:


1)  Get the endpoint details for the database from the AWS console.

  • Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  • In the upper-right corner of the console, choose the AWS Region of your DB instance.
  • Choose Databases to display a list of your DB instances.
  • Choose the Oracle DB instance name to display the instance details.
  • On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both the endpoint and the port number to connect to the DB instance.

2)  Check the security group and add rules to the VPC security group. If the default security group was assigned when the DB instance was created, its firewall prevents connections.
Refer :   https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html

3)  Add a TNS entry (using the endpoint) to connect:

testaws =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testaws.cjhtbujgykil.us-east-1.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testaws)
    )
  )
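With the endpoint in hand, an equivalent EZConnect string can also be used without editing tnsnames.ora. A small helper, reusing the sample endpoint above (the host and service name are the example values, not real resources):

```python
def ezconnect(host: str, port: int, service_name: str) -> str:
    """Build an EZConnect string equivalent to the tnsnames.ora entry above."""
    return f"{host}:{port}/{service_name}"

# Example endpoint from the TNS entry above
dsn = ezconnect("testaws.cjhtbujgykil.us-east-1.rds.amazonaws.com", 1521, "testaws")
# Usable with e.g. SQL*Plus:  sqlplus admin@<dsn>
```

The same string works as the `dsn` argument of Python drivers such as python-oracledb, should you connect from a script rather than SQL*Plus.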

For you to connect to your DB instance, it must be associated with a security group that contains the IP addresses and network configuration that you use to access the DB instance. You might have associated your DB instance with an appropriate security group when you created it. If you assigned a default, unconfigured security group when you created the DB instance, the DB instance firewall prevents connections.

If you need to create a new security group to enable access, the type of security group that you create depends on which Amazon EC2 platform your DB instance is on. To determine your platform, see Determining whether you are using the EC2-VPC or EC2-Classic platform. In general, if your DB instance is on the EC2-Classic platform, you create a DB security group; if your DB instance is on the VPC platform, you create a VPC security group. For information about creating a new security group, see Controlling access with security groups.

After you create the new security group, you modify your DB instance to associate it with the security group. For more information, see Modifying an Amazon RDS DB instance.

You can enhance security by using SSL to encrypt connections to your DB instance. For more information, see Oracle Secure Sockets Layer.


#################################################
#################################################

Ensure the database has high availability

Ensure that RDS instances use Multi-AZ deployment configurations for high availability and automatic failover support, fully managed by AWS.

Enabling this option is simple and goes a long way toward preventing an affected Availability Zone from leaving your database without service.



#################################################
#################################################

Multi-Availability Zone (Multi-AZ) RDS Deployments




When Multi-AZ is enabled for the MySQL, MariaDB, Oracle, and PostgreSQL engines, AWS automatically provisions and maintains a synchronous standby database replica on dedicated hardware in a separate data center (known as an Availability Zone, or AZ). When deploying SQL Server in a Multi-AZ infrastructure, the SQL Server engine can be configured to use SQL Server native mirroring for synchronous logical replication. AWS RDS automatically switches from the primary instance to the available standby replica in the event of a failure, such as an Availability Zone outage, an internal hardware or network outage, or a software failure; or in case of planned interruptions, such as software patching or changing the RDS instance type.

SSDs are not only durable in terms of read and write throughput, but also cost-effective, high-performance storage devices for database workloads. Amazon Aurora and RDS use an SSD-backed virtual storage layer for their database services. Amazon Aurora automatically replicates the storage volume six ways across three Availability Zones, making it extremely durable and fault-tolerant.

Multi-AZ increases durability and availability of the databases and is most suitable for production deployments. If an Availability Zone failure or DB instance failure occurs, the availability impact is limited to the time automatic failover takes to complete, which is typically under one minute for Amazon Aurora and one to two minutes for other database engines. When operations such as DB instance scaling or system upgrades such as OS patching are initiated for Multi-AZ deployments, it is applied on the standby prior to an automatic failover. As a result, the availability impact is limited only to the time required for automatic failover to complete.


Benefits of Multi-AZ deployment:

  • Replication to the standby replica is synchronous, which makes it highly durable.
  • The endpoint of the DB instance remains the same after a failover, so the application can resume database operations without manual intervention.
  • If a failure occurs, the availability impact is limited to the time that automatic failover takes to complete. This helps achieve increased availability.
  • It reduces the impact of maintenance. RDS performs maintenance on the standby first, promotes the standby to primary master, and then performs maintenance on the old master which is now a standby replica.
  • To prevent any negative impact of the backup process on performance, Amazon RDS creates a backup from the standby replica.
  • When a problem is detected on the primary instance, it will automatically failover to the standby in the following conditions: 1) The primary DB instance fails. 2) An Availability Zone outage. 3) The DB instance server type is changed. 4) The operating system of DB instance is undergoing software patching. 5) Manual failover of DB instance was initiated using reboot with failover.


Note:
1) Cross-region Multi-AZ is not yet supported.
2) Multi-AZ deployments are not a read-scaling solution; you cannot use the standby replica to serve read traffic. The standby exists only for failover.
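The "reboot with failover" manual failover from condition (5) above maps to the RebootDBInstance API. A hedged sketch (the instance identifier is an example):

```python
# Manually triggering a Multi-AZ failover via RebootDBInstance.
# The identifier is a placeholder.
reboot_params = {
    "DBInstanceIdentifier": "testaws",
    "ForceFailover": True,   # promote the standby instead of rebooting in place
}
#   boto3.client("rds").reboot_db_instance(**reboot_params)
```

This is a common way to rehearse the failover path and measure the real downtime before relying on it in production.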

#################################################
#################################################


Snapshot backup and restore of the database

Ensure that RDS database instances have automated backups enabled for point-in-time recovery. To back up your database instances, RDS automatically takes a full daily snapshot of your data, along with transaction logs, during the specified backup window, and keeps the backups for a limited period (known as the retention period) defined by the instance owner.
RDS creates a storage volume snapshot of the DB instance, backing up the entire DB instance and not just individual databases.

RDS provides two different methods, automated and manual, for backing up your DB instances.

Automated Backups:
  • Backups of the DB instance are automatically created and retained
  • Automated backups are enabled by default for a new DB instance.
  • Automated backups occur during a daily user-configurable period of time, known as the preferred backup window.
    • If a preferred backup window is not specified when a DB instance is created, RDS assigns a default 30-minute backup window, selected at random from an 8-hour block of time per region.
    • Changes to the backup window take effect immediately.
    • The backup window cannot overlap with the weekly maintenance window for the DB instance.
  • Backups created during the backup window are retained for a user-configurable number of days, known as the backup retention period.
    • If the backup retention period is not set, RDS defaults the retention period to one day if the instance is created using the RDS API or the AWS CLI, or seven days if created using the AWS Console.
    • The backup retention period can be modified; valid values are 0 (no backup retention) to a maximum of 35 days.
  • Manual snapshot limits (50 per region) do not apply to automated backups
  • If the backup requires more time than allotted to the backup window, the backup will continue to completion.
  • An immediate outage occurs if the backup retention period is changed
    • from 0 to a non-zero value, as the first backup occurs immediately, or
    • from a non-zero value to 0, as this turns off automatic backups and deletes all existing automated backups for the instance.
  • RDS uses the periodic data backups in conjunction with the transaction logs to enable restoration of the DB Instance to any second during the retention period, up to the LatestRestorableTime (typically up to the last few minutes).
  • During the backup window,
    • for Single AZ instance, storage I/O may be briefly suspended while the backup process initializes (typically under a few seconds) and a brief period of elevated latency might be experienced.
    • for Multi-AZ DB deployments, there is No I/O suspension since the backup is taken from the standby instance
  • First backup is a full backup, while the others are incremental.
  • Automated DB backups are deleted when
    • the retention period expires
    • the automated DB backups for a DB instance is disabled
    • the DB instance is deleted
  • When a DB instance is deleted,
    • a final DB snapshot can be created upon deletion; which can be used to restore the deleted DB instance at a later date.
    • RDS retains the final user-created DB snapshot along with all other manually created DB snapshots
    • all automated backups are deleted and cannot be recovered

Point-In-Time Recovery

  • In addition to the daily automated backup, RDS archives database change logs. This enables recovery of the database to any point in time during the backup retention period, up to the last five minutes of database usage.
  • Disabling automated backups also disables point-in-time recovery
  • RDS stores multiple copies of your data, but for Single-AZ DB instances these copies are stored in a single availability zone.
  • If for any reason a Single-AZ DB instance becomes unusable, point-in-time recovery can be used to launch a new DB instance with the latest restorable data.

DB Snapshots (User Initiated – Manual)

  • DB snapshots are manual, user-initiated backups that enable you to back up a DB instance to a known state and restore to that specific state at any time.
  • RDS keeps all manual DB snapshots until explicitly deleted.
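The backup window and retention rules above map to two ModifyDBInstance parameters. A small sketch that also encodes the 0-35 day rule (the identifier, window, and retention value are examples):

```python
# Hedged sketch of configuring automated backups via ModifyDBInstance.
def valid_retention(days: int) -> bool:
    """Per the rules above: 0 disables automated backups; the maximum is 35 days."""
    return 0 <= days <= 35

backup_params = {
    "DBInstanceIdentifier": "testaws",          # example instance
    "BackupRetentionPeriod": 14,                # days; must satisfy valid_retention
    "PreferredBackupWindow": "03:00-03:30",     # UTC; must not overlap maintenance window
}
assert valid_retention(backup_params["BackupRetentionPeriod"])
#   boto3.client("rds").modify_db_instance(**backup_params)
```

Remember that changing retention from 0 to non-zero (or back) causes an immediate outage, as noted above.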


Restore a DB instance from a DB snapshot

 
1) Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2) In the navigation pane, choose Snapshots.
3) Choose the DB snapshot that you want to restore from.
4) For Actions, choose Restore Snapshot.
    Note: This creates a new instance with the existing configuration of the instance from which the snapshot was taken.
5) On the Restore DB Instance page, for DB Instance Identifier, enter the name for your restored DB instance.
6) Choose Restore DB Instance.
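Step 4's "Restore Snapshot" action corresponds to the RestoreDBInstanceFromDBSnapshot API; note it always creates a new instance. A hedged sketch (both identifiers are examples; automated snapshot names carry an `rds:` prefix):

```python
# Illustrative RestoreDBInstanceFromDBSnapshot request; identifiers are examples.
restore_params = {
    "DBInstanceIdentifier": "testaws-restored",              # name entered in step 5
    "DBSnapshotIdentifier": "rds:testaws-2020-12-02-03-06",  # snapshot chosen in step 3
}
#   boto3.client("rds").restore_db_instance_from_db_snapshot(**restore_params)
```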



Reference : 
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html


#################################################
#################################################

Applying an Oracle PSU patch on AWS RDS


A minor version upgrade applies an Oracle Database Patch Set Update (PSU) or Release Update (RU) within a major version.

An Amazon RDS for Oracle DB instance is scheduled to be upgraded automatically during its next maintenance window when it meets the following conditions:

  • The DB instance has the Auto minor version upgrade option enabled.

  • The DB instance is not running the latest minor DB engine version.

1. Log in to the AWS Management Console.

2. Navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.

3. In the navigation panel, under RDS Dashboard, click Instances.

4. Select the RDS instance that you want to examine.

5. Click the Instance Actions button on the dashboard top menu and select See Details.

6. Under the Maintenance Details section, check the Auto Minor Version Upgrade status.

If the current status is set to No, the feature is not enabled and released minor engine upgrades will not be applied to the selected RDS instance.

7. Repeat steps 4-6 for each RDS instance provisioned in the current region. Change the AWS region from the navigation bar to repeat the process for other regions.
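Checking and enabling the flag can also be scripted instead of clicked through. A hedged sketch (the instance identifier is an example):

```python
# Enabling Auto Minor Version Upgrade via ModifyDBInstance; identifier is a placeholder.
upgrade_params = {
    "DBInstanceIdentifier": "testaws",
    "AutoMinorVersionUpgrade": True,   # opt in to automatic PSU/RU minor upgrades
}

# boto3 sketch:
#   rds = boto3.client("rds")
#   check:  rds.describe_db_instances(DBInstanceIdentifier="testaws")
#           -> DBInstances[0]["AutoMinorVersionUpgrade"]
#   enable: rds.modify_db_instance(**upgrade_params)
```

The upgrade itself still happens only during the instance's maintenance window, per the conditions above.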


Reference : 

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html



#################################################
#################################################

Amazon RDS Resources Encryption

Compliance is one of the key concerns for DBAs whose data is on the cloud. When dealing with production databases that hold sensitive and critical data, it is highly recommended to implement encryption in order to protect the data from unauthorized access. With RDS encryption enabled, the data stored on the instance underlying storage, the automated backups, read replicas, and snapshots all become encrypted. The RDS encryption keys implement the AES-256 algorithm and are entirely managed and protected by the AWS key management infrastructure through AWS Key Management Service (AWS KMS).

Options for encryption of data at rest provided by AWS are as listed below:
  • RDS for Oracle Database – AWS-managed keys for Oracle Enterprise Edition (EE)
  • RDS for SQL Server – AWS-managed keys for SQL Server Enterprise Edition (EE)
  • RDS for MySQL – Customer-managed keys using AWS Key Management Service (KMS)
  • RDS for PostgreSQL – Customer-managed keys using AWS Key Management Service (KMS)
  • RDS for Oracle Database – Customer-managed keys for Oracle Enterprise Edition using AWS CloudHSM

For all of the above-listed database engines and key management options, encryption (AES-256) and decryption are applied both automatically and transparently to RDS storage and database snapshots. You do not need to make any changes to your code or operating model in order to benefit from this important data protection feature. AWS CloudHSM is a service that helps meet stringent compliance requirements for cryptographic operations and storage of encryption keys by using a single tenant Hardware Security Module (HSM) appliances within the AWS cloud. To learn more, read the documentation on Encrypting RDS Resources


#################################################
#################################################

Encrypt your database

Ensure that RDS database instances are encrypted to fulfill compliance requirements for data-at-rest encryption. RDS data encryption and decryption are handled transparently and don't require any additional action from you or your application; you just need to select the master key that is going to encrypt the instance.

To enable data encryption for an existing RDS instance, you need to re-create it (back up and restore) with the encryption flag enabled.



Enable RDS instance encryption in Edit view.
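The back-up-and-restore path sketched as API calls: snapshot the unencrypted instance, copy the snapshot with a KMS key (encryption happens on the copy), then restore from the encrypted copy. All identifiers and the KMS alias below are hypothetical:

```python
# The "re-create with encryption" path as an ordered list of (API, params) pairs.
# Every identifier and the KMS alias is a placeholder.
steps = [
    ("create_db_snapshot",
     {"DBInstanceIdentifier": "testaws",
      "DBSnapshotIdentifier": "testaws-pre-enc"}),
    ("copy_db_snapshot",
     {"SourceDBSnapshotIdentifier": "testaws-pre-enc",
      "TargetDBSnapshotIdentifier": "testaws-encrypted",
      "KmsKeyId": "alias/my-rds-key"}),            # the copy becomes encrypted
    ("restore_db_instance_from_db_snapshot",
     {"DBInstanceIdentifier": "testaws-enc",
      "DBSnapshotIdentifier": "testaws-encrypted"}),
]
# boto3 sketch:
#   rds = boto3.client("rds")
#   for api, params in steps:
#       getattr(rds, api)(**params)   # plus waiting between steps in real use
```

In practice you would wait for each step to complete (snapshot `available`, etc.) before running the next, then repoint the application and retire the old instance.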


#################################################
#################################################

Monitoring free storage space

Go to the RDS console and select the region your database is in. Click the Show Monitoring button and pick your database instance. There will be a graph that shows Free Storage Space.

This is documented in the AWS RDS documentation.
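Rather than eyeballing the graph, you can alarm on the same FreeStorageSpace metric in CloudWatch. A hedged sketch (alarm name, instance identifier, and threshold are examples):

```python
# Illustrative PutMetricAlarm request on the RDS FreeStorageSpace metric.
# Names and threshold are examples.
alarm_params = {
    "AlarmName": "testaws-low-free-storage",
    "Namespace": "AWS/RDS",
    "MetricName": "FreeStorageSpace",
    "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": "testaws"}],
    "Statistic": "Average",
    "Period": 300,                      # evaluate 5-minute averages
    "EvaluationPeriods": 1,
    "Threshold": 5 * 1024 ** 3,         # alert below 5 GiB (metric is in bytes)
    "ComparisonOperator": "LessThanThreshold",
}
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

Add an SNS topic via `AlarmActions` if you want the alarm to page someone.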

 






#################################################
#################################################

Increase storage for a DB instance :


  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
  2. In the navigation pane, choose Databases.
  3. Choose the DB instance that you want to modify.
  4. Choose Modify.
  5. Enter a new value for Allocated storage. It must be greater than the current value.
  6. Choose Continue to move to the next screen.
  7. Choose Apply immediately in the Scheduling of modifications section to apply the storage changes to the DB instance immediately. Or choose Apply during the next scheduled maintenance window to apply the changes during the next maintenance window.
  8. When the settings are as you want them, choose Modify DB instance.

Note

When you increase the allocated storage, it must be by at least 10 percent. If you try to increase the value by less than 10 percent, you get an error.


#################################################
#################################################

AWS Database Migration Service (DMS)

AWS DMS can help DBAs plan and migrate on-premises relational data stored in Oracle, SQL Server, MySQL, MariaDB, or PostgreSQL databases. DMS can help in migrating to the cloud with virtually no downtime, so you can take advantage of the scale, operational efficiency, and the multitude of data storage options available. DMS works by setting up and then managing a replication instance on AWS. This instance unloads data from the source database and loads it into the destination database, and can be used for a one-time migration followed by ongoing replication to support a migration that entails minimal downtime. DBAs can set up their first migration with just a couple of clicks, starting from the DMS console.



The console provides a step-by-step wizard to set up the migration.



What each step means: DMS requires at least a source, a target, and a replication instance.  

Source database: An on-premises or EC2 instance-based database.

Target database: The AWS RDS database to which the source database is to be migrated.

Replication instance: Processes the migration tasks.

Replication task: Once configuration of the source and target databases using connection details like DB name, hostname, port, username, and password is completed, the replication tasks are created. A task contains one or more table mappings that define what data is being moved from the source to the target. If a table does not exist in the target, it can be created automatically. The AWS Database Migration Service offers many options. For example, you can choose to migrate specific tables. You can also create several different types of replication tasks and activate them at different times. The step-by-step DMS documentation can be very useful before starting the process.
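The table mappings a replication task contains are JSON selection rules. A minimal example of the kind of rule that includes every table in one schema (the schema name "HR" is a placeholder; "%" is the DMS wildcard):

```python
import json

# A minimal DMS table-mapping document: one selection rule that includes
# every table in the HR schema. Schema name is an example.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr-schema",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
```

This JSON is what you would paste into the task's table-mappings field (or pass as `TableMappings` when creating the task via the API).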


#################################################
#################################################


Amazon RDS Oracle doesn't support the following Oracle Database features:

  • Automatic Storage Management (ASM)
  • Database Vault
  • Flashback Database
  • Multitenant
  • Oracle Enterprise Manager Cloud Control Management Repository
  • Real Application Clusters (Oracle RAC)
  • Real Application Testing
  • Unified Auditing, Pure Mode
  • Workspace Manager (WMSYS) schema


#################################################
#################################################

Amazon RDS Oracle supports the following Oracle Database features:

  • Advanced Compression
  • Application Express (APEX)
  • Automatic Memory Management
  • Automatic Undo Management
  • Automatic Workload Repository (AWR)
  • Active Data Guard with Maximum Performance in the same AWS Region or across AWS Regions
  • Continuous Query Notification (version 12.1.0.2.v7 and later)
  • Data Redaction
  • Database Change Notification (version 11.2.0.4.v11 and later 11g versions)
  • Database In-Memory (version 12.1 and later)
  • Distributed Queries and Transactions
  • Edition-Based Redefinition
  • Enterprise Manager Database Control (11g) and EM Express (12c)
  • Fine-Grained Auditing
  • Flashback Table, Flashback Query, Flashback Transaction Query
  • Import/export (legacy and Data Pump) and SQL*Loader
  • Java Virtual Machine (JVM)
  • Label Security (version 12.1 and later)
  • Locator
  • Materialized Views
  • Multimedia
  • Network encryption
  • Partitioning
  • Spatial and Graph
  • Star Query Optimization
  • Streams and Advanced Queuing
  • Summary Management – Materialized View Query Rewrite
  • Text (File and URL data store types are not supported)
  • Total Recall
  • Transparent Data Encryption (TDE)
  • Unified Auditing, Mixed Mode (version 12.1 and later)
  • XML DB (without the XML DB Protocol Server)
  • Virtual Private Database


#################################################
#################################################

Monitoring Using Amazon CloudWatch

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon RDS or any database system. DBAs need to put a plan in place to oversee databases on RDS in order to monitor network throughput, I/O for read, write, or metadata operations, client connections, and keep track of historical performance KPIs to use as a baseline for comparison with current performance. Additionally, Amazon RDS provides multiple ways to manage automation of many DBA activities and has many different capabilities to do detailed performance analysis of the overall system. Let’s look at some of the monitoring techniques that Amazon RDS provides.

Using Amazon CloudWatch, DBAs can automate most monitoring tasks. CloudWatch collects and processes raw data from Amazon RDS into readable, near real-time metrics. These statistics are recorded for a period of two weeks, so DBAs can maintain historical performance statistics for their databases. Using the AWS Console, DBAs can find the metrics related to RDS; many RDS metrics produced by the database engines can be monitored.



You can also use CloudWatch alarms on single metrics to take actions such as sending a notification to a concerned person, or to trigger autoscaling policies, which in turn may automatically handle an increase in load by adding additional resources.

Enhanced Monitoring

Advanced RDS DBAs may want more insight into the inner workings of the database service. Once the RDS Enhanced Monitoring feature is enabled for a database instance, DBAs gain access to more than 50 additional CPU, memory, file system, and disk I/O metrics. DBAs can enable the feature on a per-instance basis and can choose the granularity (all the way down to one second).
#################################################
#################################################

Database deletion protection

Deletion protection is now enabled by default when you select the "production" option for database instances created through the AWS Console. You can also turn deletion protection on or off for an existing database instance or cluster with a few clicks in the AWS Console or the AWS Command Line Interface. Deletion protection is enforced in the AWS Console, the CLI, and the API.
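Turning deletion protection on (or off) for an existing instance is a single ModifyDBInstance parameter. A hedged sketch (the identifier is an example):

```python
# Enabling deletion protection on an existing instance; identifier is a placeholder.
protect_params = {
    "DBInstanceIdentifier": "testaws",
    "DeletionProtection": True,
}
#   boto3.client("rds").modify_db_instance(**protect_params)
# CLI equivalent:
#   aws rds modify-db-instance --db-instance-identifier testaws --deletion-protection
```

While the flag is set, any DeleteDBInstance call fails until protection is explicitly turned off again.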




#################################################
#################################################

Make sure the database is not public

Check for any public RDS database instances provisioned in your AWS account and restrict unauthorized access to minimize security risks. To restrict access to a publicly accessible RDS database instance, disable the instance's Publicly Accessible flag and update the VPC security group associated with the instance.
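A hedged CLI sketch of both steps (the instance identifier `mydb` is an assumption):

```shell
# Disable the Publicly Accessible flag on the instance.
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --no-publicly-accessible \
  --apply-immediately

# List the VPC security groups attached to the instance, so their
# inbound rules can be reviewed for anything open to 0.0.0.0/0.
aws rds describe-db-instances \
  --db-instance-identifier mydb \
  --query 'DBInstances[0].VpcSecurityGroups'
```

Removing the flag alone is not enough if a security group still allows the database port from the internet, which is why both steps are needed.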




#################################################
#################################################

Using S3 buckets along with RDS 


Reference : https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html

This feature allows RDS Oracle customers to easily, efficiently and securely transfer data between their RDS Oracle DB Instances and Amazon S3. 

Previously, data transfers from an RDS Oracle DB Instance required an additional DB Instance and a networking link between the two DB Instances. With Amazon S3 Integration, you can now use the RDS Oracle DB instance to transfer files to or from Amazon S3 using new RDS Oracle procedures.
With Amazon S3 Integration, you can perform data ingress with Oracle Data Pump to migrate workloads into your RDS Oracle DB Instance. After exporting your data from your source instance, you upload your Data Pump files to your Amazon S3 bucket. You can then copy the files from your S3 bucket to the RDS Oracle host and finally use the DBMS_DATAPUMP package to import your data into the target RDS Oracle DB Instance.  
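The final import step of that flow can be sketched with the standard DBMS_DATAPUMP package; this assumes the dump file EXPORT_MYSCHEMA.dmp has already been copied from S3 into DATA_PUMP_DIR and that the target schema is MYSCHEMA (both names are assumptions):

```sql
DECLARE
  v_hdnl NUMBER;
BEGIN
  -- Open a schema-mode import job.
  v_hdnl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => NULL);
  -- Dump file previously downloaded from S3 into DATA_PUMP_DIR.
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'EXPORT_MYSCHEMA.dmp',
    directory => 'DATA_PUMP_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  -- Log file for the import, readable later with rds_file_util.read_text_file.
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'import_myschema.log',
    directory => 'DATA_PUMP_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Restrict the job to the one schema being migrated.
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl, 'SCHEMA_EXPR', 'IN (''MYSCHEMA'')');
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
```

The job runs asynchronously; progress can be followed in the log file or in DBA_DATAPUMP_JOBS.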

To work with Amazon RDS for Oracle integration with Amazon S3, the Amazon RDS DB instance must have access to an Amazon S3 bucket. For this, you create an AWS Identity and Access Management (IAM) policy and an IAM role. The Amazon VPC used by your DB instance doesn't need to provide access to the Amazon S3 endpoints.
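Once the IAM policy and role exist, attaching the role to the instance is one CLI call; a sketch in which the account ID, role name, and instance identifier are all assumptions:

```shell
# Associate the IAM role with the instance for the S3_INTEGRATION feature.
aws rds add-role-to-db-instance \
  --db-instance-identifier mydb \
  --feature-name S3_INTEGRATION \
  --role-arn arn:aws:iam::123456789012:role/rds-s3-integration-role
```

The instance must also use an option group that includes the S3_INTEGRATION option before the rdsadmin_s3_tasks procedures below will work.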

Note
To add a role to a DB instance, the status of the DB instance must be available.

Please refer to the AWS documentation below for creating the IAM policy for S3 access:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html



Below are the steps to transfer files to and from S3.

-- Upload a file to an S3 bucket:
SQL> SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
    p_bucket_name    => '<bucket_name>',     -- bucket you want to upload to
    p_prefix         => '<file_name>',       -- file name you want to upload
    p_s3_prefix      => '',                  -- folder (prefix) inside the bucket; '' for the root
    p_directory_name => 'DATA_PUMP_DIR')     -- database directory to upload from
     AS TASK_ID FROM DUAL;


-- Download all files in an S3 bucket:
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    => 'my-bucket',        -- bucket you want to download from
      p_directory_name => 'DATA_PUMP_DIR')    -- database directory to download to
   AS TASK_ID FROM DUAL; 


-- Download all files under a named folder in an S3 bucket: 
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    => 'my-bucket',        -- bucket you want to download from
      p_s3_prefix      => 'export_files/',    -- all files under this folder are downloaded (don't forget the trailing slash)
      p_directory_name => 'DATA_PUMP_DIR')    -- database directory to download to
   AS TASK_ID FROM DUAL;


-- Download one named file under a folder from an S3 bucket:
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    => 'my-bucket',                                    -- bucket you want to download from
      p_s3_prefix      => 'prod_db/export_files/EXPORT_STG_04-03-19.dmp', -- full key of the file
      p_directory_name => 'DATA_PUMP_DIR')                                -- database directory to download to
    AS TASK_ID FROM DUAL;


-- Check the download progress, using the task ID returned by the commands above (e.g. 1580377405757-6339):
SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1580377405757-6339.log'));
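The task log's file name in BDUMP is derived from the task ID as dbtask-&lt;task_id&gt;.log. A small hypothetical Python helper (not part of the original post) that builds the query shown above from a task ID:

```python
def task_log_query(task_id: str) -> str:
    """Build the SQL used to read an S3-transfer task log from BDUMP."""
    # RDS writes one log per transfer task, named dbtask-<task_id>.log.
    return (
        "SELECT text FROM table("
        f"rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-{task_id}.log'))"
    )

print(task_log_query("1580377405757-6339"))
```
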


#################################################
#################################################

Amazon RDS License : 

You have the option to include the cost of the Oracle Database license in the hourly price of the Amazon RDS service if you use the License Included service model. In this case, you do not need to purchase Oracle licenses separately; the Oracle Database software has been licensed by AWS. License Included per-hour pricing includes software, underlying hardware resources, and Amazon RDS management capabilities. This service model optimizes license costs, and gives you flexibility when scaling your Amazon RDS instances up or down. You can take advantage of hourly pricing with no upfront fees or long-term commitments. In addition, you can purchase Amazon RDS Reserved Instances under one-year or three-year reservation terms. With Reserved Instances, you can make a low, one-time payment up front for each database instance, and then pay a significantly discounted hourly usage rate.

Note: The hourly license for the License Included model in Amazon RDS is available only for Oracle Standard Edition One and Standard Edition Two. For other editions of Oracle Database on Amazon RDS and any edition of Oracle Database on Amazon EC2, you need to use your own license (that is, acquire a license from Oracle), as discussed in the following section.


#################################################
#################################################

Handling RMAN operations

  
-- Full RMAN database backup on RDS:
SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_database_full(
        p_owner               => 'SYS',
        p_directory_name      => 'BKP_DIR',
        p_level               => 0,    -- 0 for full, 1 for incremental
     -- p_parallel            => 4,    -- leave commented out on Standard Edition
        p_section_size_mb     => 10,
        p_rman_to_dbms_output => TRUE);
END;
/
 
-- Back up all archivelogs:
SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_archivelog_all(
        p_owner               => 'SYS',
        p_directory_name      => 'BKP_DIR',
     -- p_parallel            => 6,    -- leave commented out on Standard Edition
        p_rman_to_dbms_output => TRUE);
END;
/
 
-- Back up archivelogs in a date range:
SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.backup_archivelog_date(
        p_owner               => 'SYS',
        p_directory_name      => 'BKP_DIR',
        p_from_date           => '01/15/2020 00:00:00',
        p_to_date             => '01/16/2020 00:00:00',
     -- p_parallel            => 4,    -- leave commented out on Standard Edition
        p_rman_to_dbms_output => TRUE);
END;
/
 
Note: to back up by SCN or sequence instead, use backup_archivelog_scn or backup_archivelog_sequence and replace p_from_date with p_from_scn or p_from_sequence, and p_to_date with p_to_scn or p_to_sequence.
 
-- Show running RMAN backups (manual backups started by you; RDS itself uses the hot-backup method to back up the DB):
SQL> SELECT to_char(start_time,'DD-MON-YY HH24:MI') START_TIME, to_char(end_time,'DD-MON-YY HH24:MI') END_TIME, time_taken_display, status,
input_type, output_device_type, input_bytes_display, output_bytes_display, output_bytes_per_sec_display, COMPRESSION_RATIO COMPRESS_RATIO
FROM v$rman_backup_job_details
WHERE status like 'RUNNING%';
 
-- Show currently running hot backups:
SQL> SELECT t.name AS "TB_NAME", d.file# AS "DF#", d.name AS "DF_NAME", b.status
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
 
-- Validate the database for physical/logical corruption on RDS:
SQL> BEGIN
    rdsadmin.rdsadmin_rman_util.validate_database(
        p_validation_type     => 'PHYSICAL+LOGICAL',
     -- p_parallel            => 2,    -- leave commented out on Standard Edition
        p_section_size_mb     => 10,
        p_rman_to_dbms_output => TRUE);
END;
/
 
-- Enable BLOCK CHANGE TRACKING on RDS (avoid enabling it on 11g: a known bug can prevent you from restoring the database consistently later):
SQL> SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;
SQL> EXEC rdsadmin.rdsadmin_rman_util.enable_block_change_tracking;
 
-- Disable BLOCK CHANGE TRACKING on RDS:
SQL> EXEC rdsadmin.rdsadmin_rman_util.disable_block_change_tracking;
 
-- Crosscheck and delete expired archivelogs (those that no longer exist on disk):
SQL> EXEC rdsadmin.rdsadmin_rman_util.crosscheck_archivelog(p_delete_expired => TRUE, p_rman_to_dbms_output => TRUE);


#################################################
#################################################

Skipping corrupted blocks on RDS

If a table or an index has corrupted blocks, any query that touches those blocks will keep failing with ORA-01578.
 
From the error message, find the corrupted object name and its owner:
SQL> select relative_fno, owner, segment_name, segment_type
from dba_extents
where file_id = <DATAFILE_NUMBER_IN_THE_ERROR_MESSAGE_HERE>
and <CORRUPTED_BLOCK_NUMBER_IN_THE_ERROR_MESSAGE_HERE> between block_id and block_id + blocks - 1;
 
Next, follow these steps to skip the corrupted blocks on the underlying object and allow queries against it to succeed:
 
-- Create (and purge, if they already exist) the REPAIR tables:
exec rdsadmin.rdsadmin_dbms_repair.create_repair_table;
exec rdsadmin.rdsadmin_dbms_repair.create_orphan_keys_table;                      
exec rdsadmin.rdsadmin_dbms_repair.purge_repair_table;
exec rdsadmin.rdsadmin_dbms_repair.purge_orphan_keys_table;
 
-- Check for corrupted blocks on the object and populate them in the REPAIR tables:
set serveroutput on
declare v_num_corrupt int;
begin
  v_num_corrupt := 0;
  rdsadmin.rdsadmin_dbms_repair.check_object (
    schema_name   => '&corrupted_Object_Owner',
    object_name   => '&corrupted_object_name',
    corrupt_count => v_num_corrupt
  );
  dbms_output.put_line('number corrupt: '||to_char(v_num_corrupt));
end;
/
 
col corrupt_description format a30
col repair_description format a30
select object_name, block_id, corrupt_type, marked_corrupt, corrupt_description, repair_description from sys.repair_table;
 
select skip_corrupt from dba_tables where owner = upper(‘&corrupted_Object_Owner’) and table_name = upper(‘&corrupted_object_name’);                                                         
 

-- Enable corruption skipping on the corrupted object:
begin
  rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
    schema_name => ‘&corrupted_Object_Owner’,
    object_name => ‘&corrupted_object_name’,
    object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
    flags => rdsadmin.rdsadmin_dbms_repair.skip_flag);
end;
/
 
select skip_corrupt from dba_tables where owner =  upper(‘&corrupted_Object_Owner’) and table_name = upper(‘&corrupted_object_name’);       
 
 
-- Disable corruption skipping on the corrupted object:
 
begin
  rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
    schema_name => ‘&corrupted_Object_Owner’,
    object_name => ‘&corrupted_object_name’,
    object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
    flags => rdsadmin.rdsadmin_dbms_repair.noskip_flag);
end;
/
 
select skip_corrupt from dba_tables where owner =  upper(‘&corrupted_Object_Owner’) and table_name = upper(‘&corrupted_object_name’);                    
 
-- Finally, drop the REPAIR tables:
exec rdsadmin.rdsadmin_dbms_repair.drop_repair_table;
exec rdsadmin.rdsadmin_dbms_repair.drop_orphan_keys_table;                
 


#################################################
#################################################

Using the rdsadmin utility for daily operations : 

After moving to AWS, day-to-day tasks are performed through the rdsadmin packages. Although some tasks still work the traditional way, it is recommended to perform them all through rdsadmin.


Killing Session : 

begin
    rdsadmin.rdsadmin_util.kill(
        sid    => &sid,
        serial => &serial,
        method => 'IMMEDIATE');
end;
/


Flush database pools  : 

exec rdsadmin.rdsadmin_util.flush_shared_pool;
exec rdsadmin.rdsadmin_util.flush_buffer_cache;


Perform rman operations  : 

BEGIN

 rdsadmin.rdsadmin_rman_util.validate_database(
 p_validation_type => 'PHYSICAL+LOGICAL',
 p_parallel => 4,
 p_section_size_mb => 10,
 p_rman_to_dbms_output => FALSE);
END;
/


Giving and revoking grants  : 

exec rdsadmin.rdsadmin_master_util.grant_drop_any_directory;

begin
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name  => 'V_$SESSION',
        p_grantee   => 'PYTHIAN',
        p_privilege => 'SELECT');
end;
/
 
begin
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name     => 'V_$SESSION',
        p_grantee      => 'PYTHIAN',
        p_privilege    => 'SELECT',
        p_grant_option => true);
end;
/




begin
    rdsadmin.rdsadmin_util.revoke_sys_object(
        p_obj_name  => 'V_$SESSION',
        p_revokee   => 'PYTHIAN',
        p_privilege => 'SELECT');
end;
/


Create password verification functions : 

begin
    rdsadmin.rdsadmin_password_verify.create_verify_function(
        p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION',
        p_min_length           => 12,
        p_min_uppercase        => 2,
        p_min_digits           => 1,
        p_min_special          => 1,
        p_disallow_at_sign     => true);
end;
/
 
col text format a150
select TEXT  from DBA_SOURCE
where OWNER = 'SYS' and NAME = 'CUSTOM_PASSWORD_FUNCTION' order by LINE;

 
Changing database archivelog retention: 

begin
    rdsadmin.rdsadmin_util.set_configuration(
        name  => 'archivelog retention hours',
        value => '24');
end;
/
 
commit
/

set serveroutput on
exec rdsadmin.rdsadmin_util.show_configuration


Listing trace files in bdump

EXEC rdsadmin.manage_tracefiles.refresh_tracefile_listing;
SELECT * FROM rdsadmin.tracefile_listing;
 or
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'));


Checking database listener log and alert log : 

set markup html on spool on
SPOOL listener_log.HTML
set echo on
select ORIGINATING_TIMESTAMP, MESSAGE_TEXT from listenerlog where ORIGINATING_TIMESTAMP >= TO_DATE('2018-05-31 12:01:56', 'yyyy-mm-dd hh24:mi:ss')
AND ORIGINATING_TIMESTAMP < TO_DATE('2018-06-01 12:01:56', 'yyyy-mm-dd hh24:mi:ss') order by originating_timestamp;
spool off
set markup html off spool off


set markup html on spool on

SPOOL alert_log.HTML
set echo on
select ORIGINATING_TIMESTAMP, MESSAGE_TEXT from alertlog where upper(message_text) like '%ORA-60%' and ORIGINATING_TIMESTAMP >= TO_DATE('2018-05-31 12:01:56', 'yyyy-mm-dd hh24:mi:ss')
AND ORIGINATING_TIMESTAMP < TO_DATE('2018-06-01 12:01:56', 'yyyy-mm-dd hh24:mi:ss') order by originating_timestamp;
spool off
set markup html off spool off

select message_text from ALERTLOG where rownum<=10 order by indx;

or

SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','alert_TECHDATABASKET.log'));

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','alert_MYDB.log')) where rownum<=10;


Validate database: 

BEGIN

    rdsadmin.rdsadmin_rman_util.validate_database(
        p_validation_type     => 'PHYSICAL+LOGICAL',
        p_parallel            => 4, 
        p_section_size_mb     => 10,
        p_rman_to_dbms_output => FALSE);
END;
/
 
exec rdsadmin.rdsadmin_rman_util.validate_database
/



Adding / removing supplementary logging

 exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');

exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'ALL');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'DROP');



Adding / dropping logfile

exec rdsadmin.rdsadmin_util.add_logfile(p_size => '2G');
/
 
exec rdsadmin.rdsadmin_util.drop_logfile(grp => 1);
/


Creating directory

 exec rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'MyCustomDirectory');


Disconnect session

 begin
    rdsadmin.rdsadmin_util.disconnect(
        sid    => &sid,
        serial => &serial);
end;



Enable / Disable distributed recovery

exec rdsadmin.rdsadmin_util.enable_distr_recovery
/
 exec rdsadmin.rdsadmin_util.disable_distr_recovery
/


Enable / disable restricted session

exec rdsadmin.rdsadmin_util.restricted_session(p_enable => true)
/
 exec rdsadmin.rdsadmin_util.restricted_session(p_enable => false)
/
 


Compile invalid objects

SET LINES 180
SET ECHO OFF
SET HEA OFF
SET FEEDBACK OFF
 
SPOOL /tmp/fixinvalid.sql
select distinct('exec SYS.UTL_RECOMP.RECOMP_SERIAL('''||owner||''');') from dba_objects WHERE status = 'INVALID'
/
SPOOL OFF
 
SET FEEDBACK ON
SET ECHO ON
@/tmp/fixinvalid.sql



Listing files in database directory

 select * from table(rdsadmin.rds_file_util.listdir(p_directory => '&MyOracleDatabaseDirectoryName'));



Enable / Disable force logging

 
exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);
/
 
exec rdsadmin.rdsadmin_util.force_logging(p_enable => false);
/


Switch logfile and perform checkpoint

 exec rdsadmin.rdsadmin_util.switch_logfile;
/
 
exec rdsadmin.rdsadmin_util.checkpoint;
/


Changing default  tablespace of database

 exec rdsadmin.rdsadmin_util.alter_default_tablespace(tablespace_name => 'USERS');
/
 
exec rdsadmin.rdsadmin_util.alter_default_temp_tablespace(tablespace_name => 'TEMP')
/


Read an import log under DATA_PUMP_DIR directory

 SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_schema.LOG'));  

 


Taking hanganalyze and systemstate

-- Traditional: oradebug hanganalyze 3; on RDS use:
EXEC rdsadmin.manage_tracefiles.hanganalyze;
 
-- Traditional: oradebug dump systemstate 266; on RDS use:
EXEC rdsadmin.manage_tracefiles.dump_systemstate;


#################################################
#################################################


References : 

1) https://aws.amazon.com/blogs/database/part-1-role-of-the-dba-when-moving-to-amazon-rds-responsibilities/
2) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.html
3) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html
4) https://rdsora.workshop.aws/
5) https://www.youtube.com/watch?v=vpSWZx4-M-M&feature=emb_logo
6) https://www.sqlshack.com/migrating-your-on-premises-sql-databases-to-aws-rds-sql-server-using-aws-dms/
7) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html


#################################################
#################################################

Running Production Oracle Databases on Amazon RDS for Oracle - AWS Virtual Workshop :