Tuesday, August 16, 2022

AWR in Oracle Multitenant / Pluggable Database (PDB/CDB) - AWR_PDB_AUTOFLUSH_ENABLED, AWR_SNAPSHOT_TIME_OFFSET, AWR_PDB_MAX_PARALLEL_SLAVES


While exploring how to enable AWR snapshots at the PDB level, we came across the three parameters below that control AWR generation at the pluggable database level.

AWR_PDB_AUTOFLUSH_ENABLED
AWR_SNAPSHOT_TIME_OFFSET
AWR_PDB_MAX_PARALLEL_SLAVES




AWR_PDB_AUTOFLUSH_ENABLED

1. AWR Snapshots and reports can be created only at the container database (CDB) level in 
   Oracle 12c R1 (12.1.0.1.0 / 12.1.0.2.0)
2. AWR Snapshots and reports can be created at the container database (CDB) level as well as pluggable database (PDB) level 
   in Oracle 12c R2 (12.2.0.1.0)
3. By default, AWR Snapshots and reports can be generated only at the container database (CDB) level


The default value of AWR_PDB_AUTOFLUSH_ENABLED is false. Thus, by default, automatic AWR snapshots are disabled for all the PDBs in a CDB.

When you change the value of AWR_PDB_AUTOFLUSH_ENABLED in the CDB root, the new value takes effect in all the PDBs in the CDB.

You can also change the value of AWR_PDB_AUTOFLUSH_ENABLED in any of the individual PDBs in a CDB, and the value that is set for each individual PDB will be honored. This enables you to enable or disable automatic AWR snapshots for individual PDBs.



For a specific PDB:

alter session set container=PDBtest;
alter system set awr_pdb_autoflush_enabled=true;



For all PDBs and the CDB (set in CDB$ROOT):

alter session set container=CDB$ROOT;
alter system set awr_pdb_autoflush_enabled=true;



The snapshot interval and retention period can be set at the PDB level or at CDB$ROOT level; the script has to be run separately for each PDB and for the CDB. For a retention of 64800 minutes (45 days) and a 60-minute interval, the script is:
 
alter session set container=PDB1;  
execute dbms_workload_repository.modify_snapshot_settings(interval => 60, retention=>64800);  
 


SQL> set lines 100
SQL> select * from cdb_hist_wr_control;

 SQL> SELECT con_id, to_char(end_interval_time, 'HH24:MI:SS') AS snap_time
  2  FROM cdb_hist_snapshot
  3  WHERE end_interval_time > to_timestamp('2020-10-02 11:45','YYYY-MM-DD HH24:MI')
  4  ORDER BY snap_time, con_id;


select * from awr_pdb_snapshot ; 
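Once the above is enabled, a quick sanity check (a small sketch, assuming a PDB named PDB1) is to take a manual snapshot inside the PDB and confirm it shows up in the PDB-local AWR views:

alter session set container=PDB1;
exec dbms_workload_repository.create_snapshot;
select snap_id, con_id, end_interval_time from awr_pdb_snapshot order by snap_id;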






AWR_SNAPSHOT_TIME_OFFSET

You might have observed spikes at the top of every hour, when all the AWR snapshots are taken at once. To avoid this, we still want snapshots taken on an hourly basis, but offset by a few minutes for each database. AWR_SNAPSHOT_TIME_OFFSET controls that offset.


The parameter is specified in seconds. Normally, you set it to a value less than 3600. If you set the special value 1000000 (1,000,000), you get an automatic mode, in which the offset is based on the database name.

The automatic mode is an effective way of getting a reasonable distribution of offset times when you have a very large number of instances running on the same node.
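For example, to stagger the snapshot a fixed five minutes past the hour instead of using automatic mode, a sketch would be (any value below 3600 seconds works the same way):

alter session set container=CDB$ROOT;
alter system set awr_snapshot_time_offset=300 scope=both;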



SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT



SQL> show parameter AWR_SNAPSHOT_TIME_OFFSET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
awr_snapshot_time_offset             integer     0

SQL> alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;





AWR_PDB_MAX_PARALLEL_SLAVES

From version 18c onward, with the dynamic initialization parameter AWR_PDB_MAX_PARALLEL_SLAVES, you can specify the maximum number of background processes that the database engine can concurrently use to take automatic PDB-level snapshots. Valid values go from 1 to 30; the default is 10. Even though this initialization parameter doesn’t affect the automatic snapshots in the root container, it can only be set in the root container. The following examples illustrate the behaviour with three PDBs and an interval of 15 minutes:

AWR_PDB_MAX_PARALLEL_SLAVES = 1: only one PDB-level snapshot is taken at the same time as the snapshot in the root container

AWR_PDB_MAX_PARALLEL_SLAVES = 10: all three PDB-level snapshots are taken at the same time as the snapshot in the root container
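A minimal sketch of raising the limit in the root container (the parameter is dynamic, so scope=both works; the value 20 is just illustrative):

alter session set container=CDB$ROOT;
alter system set awr_pdb_max_parallel_slaves=20 scope=both;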



Known issue :

If _cursor_stats_enabled is set, SQL statistics are not generated at the PDB level and the PDB-level AWR report will be missing the SQL details sections.



 

References : 

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/AWR_PDB_AUTOFLUSH_ENABLED.html#GUID-08FA21BC-8FB1-4C51-BEEA-139C734D17A7

https://it.inf.unideb.hu/oracle/refrn/AWR_SNAPSHOT_TIME_OFFSET.html#GUID-90CD8379-DCB2-4681-BB90-0A32C6029C4E

Sunday, August 7, 2022

Troubleshooting a datapatch issue in Oracle Pluggable database using -verbose -debug

 

We had a situation where datapatch was not applied to a PDB. Oracle suggested applying datapatch in verbose/debug mode, and the issue was captured in the log:


./datapatch -verbose -debug



We can apply datapatch to selected PDBs using:

./datapatch -verbose -pdbs PDB1,PDB2



It is always advisable to check PDB plug-in violations after applying datapatch in a pluggable environment:

select name,cause,message from pdb_plug_in_violations;
select name,cause,type,action from pdb_plug_in_violations where status <> 'RESOLVED';



Below are other options 

$ORACLE_HOME/OPatch/datapatch -rollback all -force --pdb pdb1 
$ORACLE_HOME/OPatch/datapatch -apply 7777 -force -bundle_series DBRU --pdb pdb1 



SQL used to verify:

SET LINESIZE 500
SET PAGESIZE 1000
SET SERVEROUT ON
SET LONG 2000000
COLUMN action_time FORMAT A20
COLUMN action FORMAT A10
COLUMN status FORMAT A10
COLUMN description FORMAT A40
COLUMN source_version FORMAT A10
COLUMN target_version FORMAT A10
alter session set "_exclude_seed_cdb_view"=FALSE;
 select CON_ID,
        TO_CHAR(action_time, 'YYYY-MM-DD') AS action_time,
        PATCH_ID,
        PATCH_TYPE,
        ACTION,
        DESCRIPTION,
        SOURCE_VERSION,
        TARGET_VERSION
   from CDB_REGISTRY_SQLPATCH
  order by CON_ID, action_time, patch_id;


select patch_id, patch_uid, target_version, status, description, action_time
from dba_registry_sqlpatch
where action = 'APPLY';  


select * from OPATCH_XML_INV;




Reference: 

After running datapatch, PDB plugin or cloned db returns violations shown in PDB_PLUG_IN_VIOLATION (Doc ID 1635482.1)

Datapatch User Guide (Doc ID 2680521.1)


Friday, August 5, 2022

Oracle Cloud ExaCC Basics - Key Words / Terms of OCI

 

Before we start supporting OCI, we need to understand the basic terminology of OCI.


bare metal host

Oracle Cloud Infrastructure provides you control of the physical host (“bare metal”) machine. Bare metal compute instances run directly on bare metal servers without a hypervisor. When you provision a bare metal compute instance, you maintain sole control of the physical CPU, memory, and network interface card (NIC). You can configure and utilize the full capabilities of each physical machine as if it were hardware running in your own data center. You do not share the physical machine with any other tenants.


regions and availability domains

Oracle Cloud Infrastructure is physically hosted in regions and availability domains. A region is a localized geographic area, and an availability domain is one or more data centers located within a region. A region is composed of one or more availability domains. Oracle Cloud Infrastructure resources are either region-specific, such as a virtual cloud network, or availability domain-specific, such as a compute instance.

Availability domains are isolated from each other, fault tolerant, and very unlikely to fail simultaneously or be impacted by the failure of another availability domain. When you configure your cloud services, use multiple availability domains to ensure high availability and to protect against resource failure. Be aware that some resources must be created within the same availability domain, such as an instance and the storage volume attached to it.
For more details see Regions and Availability Domains.


realm

A realm is a logical collection of regions. Realms are isolated from each other and do not share any data. Your tenancy exists in a single realm and has access to the regions that belong to that realm. Oracle Cloud Infrastructure currently offers a realm for commercial regions and two realms for government cloud regions: FedRAMP authorized and IL5 authorized.


Console

The simple and intuitive web-based user interface you can use to access and manage Oracle Cloud Infrastructure.



tenancy

When you sign up for Oracle Cloud Infrastructure, Oracle creates a tenancy for your company, which is a secure and isolated partition within Oracle Cloud Infrastructure where you can create, organize, and administer your cloud resources.


compartments

Compartments allow you to organize and control access to your cloud resources. A compartment is a collection of related resources (such as instances, virtual cloud networks, block volumes) that can be accessed only by certain groups that have been given permission by an administrator. A compartment should be thought of as a logical group and not a physical container. When you begin working with resources in the Console, the compartment acts as a filter for what you are viewing.

When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy, which is the root compartment that holds all your cloud resources. You then create additional compartments within the tenancy (root compartment) and corresponding policies to control access to the resources in each compartment. When you create a cloud resource such as an instance, block volume, or cloud network, you must specify to which compartment you want the resource to belong.

Ultimately, the goal is to ensure that each person has access to only the resources they need.



security zones

A security zone is associated with a compartment. When you create and update cloud resources in a security zone, Oracle Cloud Infrastructure validates these operations against security zone policies. If any policy is violated, then the operation is denied. Security zones let you be confident that your resources comply with Oracle security principles.


virtual cloud network (VCN)

A virtual cloud network is a virtual version of a traditional network—including subnets, route tables, and gateways—on which your instances run. A cloud network resides within a single region but includes all the region's availability domains. Each subnet you define in the cloud network can either be in a single availability domain or span all the availability domains in the region (recommended). You need to set up at least one cloud network before you can launch instances. You can configure the cloud network with an optional internet gateway to handle public traffic, and an optional IPSec connection or FastConnect to securely extend your on-premises network.


instance

An instance is a compute host running in the cloud. An Oracle Cloud Infrastructure compute instance allows you to utilize hosted physical hardware, as opposed to the traditional software-based virtual machines, ensuring a high level of security and performance.


image

The image is a template of a virtual hard drive that defines the operating system and other software for an instance, for example, Oracle Linux. When you launch an instance, you can define its characteristics by choosing its image. Oracle provides a set of platform images you can use. You can also save an image from an instance that you have already configured to use as a template to launch more instances with the same software and customizations.


shape

In Compute, the shape specifies the number of CPUs and amount of memory allocated to the instance. Oracle Cloud Infrastructure offers shapes to fit various computing requirements. See the list of compute shapes.

In Load Balancing, the shape determines the load balancer's total pre-provisioned maximum capacity (bandwidth) for ingress plus egress traffic. Available shapes include 100 Mbps, 400 Mbps, and 8000 Mbps.


key pair

A key pair is an authentication mechanism used by Oracle Cloud Infrastructure. A key pair consists of a private key file and a public key file. You upload your public key to Oracle Cloud Infrastructure. You keep the private key securely on your computer. The private key is private to you, like a password.
Key pairs can be generated according to different specifications. Oracle Cloud Infrastructure uses two types of key pairs for specific purposes:

Instance SSH Key pair: This key pair is used to establish secure shell (SSH) connection to an instance. When you provision an instance, you provide the public key, which is saved to the instance's authorized key file. To log on to the instance, you provide your private key, which is verified with the public key.
API signing key pair: This key pair is in PEM format and is used to authenticate you when submitting API requests. Only users who will be accessing Oracle Cloud Infrastructure via the API need this key 
pair.

 



block volume

A block volume is a virtual disk that provides persistent block storage space for Oracle Cloud Infrastructure instances. Use a block volume just as you would a physical hard drive on your computer, for example, to store data and applications. You can detach a volume from one instance and attach it to another instance without loss of data.


Object Storage

Object Storage is a storage architecture that allows you to store and manage data as objects. Data files can be of any type and up to 10 TiB in size. Once you upload data to Object Storage, it can be accessed from anywhere. Use Object Storage when you want to store a very large amount of data that does not change very frequently. Some typical use cases for Object Storage include data backup, file sharing, and storing unstructured data like logs and sensor-generated data.


bucket

A bucket is a logical container used by Object Storage for storing your data and files. A bucket can contain an unlimited number of objects.


Oracle Cloud Identifier (OCID)

Every Oracle Cloud Infrastructure resource has an Oracle-assigned unique ID called an Oracle Cloud Identifier (OCID). This ID is included as part of the resource's information in both the Console and API.




References:

https://docs.oracle.com/en-us/iaas/Content/GSG/Concepts/concepts.htm#:~:text=you%20are%20viewing.-,When%20you%20sign%20up%20for%20Oracle%20Cloud%20Infrastructure%2C%20Oracle%20creates,the%20resources%20in%20each%20compartment



Monday, July 25, 2022

Oracle Dataguard Snapshot Standby Testing

 
This is something every DBA will know, but I am still documenting it as a handy set of steps.



Automated 
*********

To snapshot standby -

alter database recover managed standby database cancel;
alter database convert to snapshot standby;


SQL> select CURRENT_SCN, SWITCHOVER_STATUS, DATABASE_ROLE, open_mode from v$database;

CURRENT_SCN SWITCHOVER_STATUS DATABASE_ROLE   OPEN_MODE
----------- -------------------- ---------------- --------------------
  0 NOT ALLOWED SNAPSHOT STANDBY MOUNTED



To physical standby -

alter database close;

select CURRENT_SCN, SWITCHOVER_STATUS, DATABASE_ROLE, open_mode from v$database;

CURRENT_SCN SWITCHOVER_STATUS DATABASE_ROLE   OPEN_MODE
----------- -------------------- ---------------- --------------------
  0 NOT ALLOWED SNAPSHOT STANDBY MOUNTED

alter database convert to physical standby;

shut immediate
startup mount
alter database recover managed standby database using current logfile disconnect from session;

SQL> select CURRENT_SCN, SWITCHOVER_STATUS, DATABASE_ROLE, open_mode from v$database;

CURRENT_SCN SWITCHOVER_STATUS DATABASE_ROLE   OPEN_MODE
----------- -------------------- ---------------- --------------------
     378629 NOT ALLOWED PHYSICAL STANDBY MOUNTED






MANUAL
******

PRIMARY
=======
-- Archive the current log and defer the log_archive_dest_2
alter system archive log current;
alter system set log_archive_dest_state_2=DEFER;


STANDBY
=======
-- Activating the standby

-- Stop managed recovery, create a guaranteed restore point and activate the standby. Ensure db_recovery_file_dest is set.
alter database recover managed standby database cancel;
alter system set log_archive_dest_state_2=DEFER;
create restore point before_testing guarantee flashback database;
alter database activate physical standby database;
alter database open;
select CURRENT_SCN, SWITCHOVER_STATUS, DATABASE_ROLE, open_mode from v$database;
select CONTROLFILE_TYPE from v$database; 

-- Converting back to standby
startup mount force
flashback database to restore point before_testing;
alter database convert to physical standby;
startup mount force
drop restore point before_testing;
alter database recover managed standby database using current logfile disconnect from session;




Reference -
How To Open Physical Standby For Read Write Testing and Flashback (Doc ID 805438.1)

 




Monday, July 18, 2022

Oracle database final_blocking_session when we have multiple blocking sessions


When there are multiple blocking sessions in the database, it is confusing to find the actual (final) blocker. Oracle has simplified this with the FINAL_BLOCKING_SESSION column.


final_blocking_instance: This column of gv$session is the instance identifier of the final blocking session. It is valid only when FINAL_BLOCKING_SESSION_STATUS = 'VALID'.

final_blocking_session: This column of gv$session is the session identifier of the final blocking session. It is valid only when FINAL_BLOCKING_SESSION_STATUS = 'VALID'.



exec dbms_output.put_line('========== Session Details ==========');
set feedback on
set lines 300
set pages 1000
column TERMINAL format a30
column USERNAME format a15
column OSUSER format a20
column MACHINE format a20
column PROGRAM format a30
select inst_id,sid,FINAL_BLOCKING_SESSION ,final_blocking_instance ,serial#,username,osuser,machine,terminal,program,status,to_char(logon_time, 'dd-mon-yy hh24:mi:ss') , event, blocking_session , sql_id 
from gv$session 
where FINAL_BLOCKING_SESSION_STATUS='VALID'   or blocking_session is not NULL
order by LOGON_TIME,machine,program;


set pages 1000
set lines 120
set heading off
column w_proc format a50 tru
column instance format a20 tru
column inst format a28 tru
column wait_event format a50 tru
column p1 format a16 tru
column p2 format a16 tru
column p3 format a15 tru
column Seconds format a50 tru
column sincelw format a50 tru
column blocker_proc format a50 tru
column fblocker_proc format a50 tru
column waiters format a50 tru
column chain_signature format a100 wra
column blocker_chain format a100 wra
SELECT * 
FROM (SELECT 'Current Process: '||osid W_PROC, 'SID '||i.instance_name INSTANCE, 
 'INST #: '||instance INST,'Blocking Process: '||decode(blocker_osid,null,'<none>',blocker_osid)|| 
 ' from Instance '||blocker_instance BLOCKER_PROC,
 'Number of waiters: '||num_waiters waiters,
 'Final Blocking Process: '||decode(p.spid,null,'<none>',
 p.spid)||' from Instance '||s.final_blocking_instance FBLOCKER_PROC, 
 'Program: '||p.program image,
 'Wait Event: ' ||wait_event_text wait_event, 'P1: '||wc.p1 p1, 'P2: '||wc.p2 p2, 'P3: '||wc.p3 p3,
 'Seconds in Wait: '||in_wait_secs Seconds, 'Seconds Since Last Wait: '||time_since_last_wait_secs sincelw,
 'Wait Chain: '||chain_id ||': '||chain_signature chain_signature,'Blocking Wait Chain: '||decode(blocker_chain_id,null,
 '<none>',blocker_chain_id) blocker_chain
FROM v$wait_chains wc,
 gv$session s,
 gv$session bs,
 gv$instance i,
 gv$process p
WHERE wc.instance = i.instance_number (+)
 AND (wc.instance = s.inst_id (+) and wc.sid = s.sid (+)
 and wc.sess_serial# = s.serial# (+))
 AND (s.inst_id = bs.inst_id (+) and s.final_blocking_session = bs.sid (+))
 AND (bs.inst_id = p.inst_id (+) and bs.paddr = p.addr (+))
 AND ( num_waiters > 0
 OR ( blocker_osid IS NOT NULL
 AND in_wait_secs > 10 ) )
ORDER BY chain_id,
 num_waiters DESC)
WHERE ROWNUM < 101;

Wednesday, July 6, 2022

Oracle Database - Query opatch inventory using SQL interface

 
Listing an alternative way to check opatch details, using sqlplus.




==> SQL to check output similar to opatch lsinventory

with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  select x.*
    from a,
         xmltable('InventoryInstance/patches/*'
            passing a.patch_output
            columns
               patch_id number path 'patchID',
               patch_uid number path 'uniquePatchID',
               description varchar2(80) path 'patchDescription',
               applied_date varchar2(30) path 'appliedDate',
               sql_patch varchar2(8) path 'sqlPatch',
               rollbackable varchar2(8) path 'rollbackable'
         ) x;



 select bugno, value, optimizer_feature_enable, is_default from v$system_fix_control  ;




==> SQL to check detailed output, similar to opatch lsinventory -detail

with a as (select dbms_qopatch.get_opatch_bugs patch_output from dual)
  select x.*
    from a,
         xmltable('bugInfo/bugs/*'
            passing a.patch_output
            columns
               bug_id number path '@id',
               description varchar2(160) path 'description'
         ) x;



==> Checking the precise list of applied SQL patches from dba_registry_sqlpatch

select patch_id, patch_uid, target_version, status, description, action_time
from dba_registry_sqlpatch
where action = 'APPLY';  

Saturday, July 2, 2022

Oracle - Checking Hung Sessions in a RAC Database -- RAC Hang Manager

 
For RAC, checking hung sessions is simplified by the RAC Hang Manager. For non-RAC databases, I personally use v$sess_io or try enabling session tracing.


Since 12.1.0.1, Hang Manager can also detect hangs between the database and ASM. A deadlock (closed chain) can only be broken by letting some of the involved sessions complete their work or by terminating them; a hang (open chain) has one or more root blockers at its head.

In an Oracle database, a hang is the waiting state a process enters when it cannot obtain a requested resource; it is lifted only once the resource is obtained. Hang Manager (HM) implements the management of hangs, including monitoring, analysis, recording and resolution.

A wait chain is made up of blocking processes and waiting processes. One or more root blocking processes sit at the head of the chain and block all the other processes. If the root blocker is busy doing work, the presence of the chain may be normal; if it is idle, the chain is probably not normal, and the way to break it is to terminate the root blocking process. HM proactively discovers wait chains in the database and analyzes them; if a hang is found to really affect performance, HM decides whether to resolve it based on the circumstances, and even when it does not resolve it directly, it records the corresponding diagnostic information and continues monitoring.



V$HANG_INFO: contains details of the hangs found by HM.
V$HANG_SESSION_INFO: contains the session information related to each hang.
V$HANG_STATISTICS: contains statistics related to hangs.
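A quick sketch of pulling the current contents of these views (column lists vary by version, so select * is used here):

select * from v$hang_info;
select * from v$hang_session_info;
select * from v$hang_statistics;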



The work of HM is composed of seven phases:

Phase 1 (Collection): The DIA0 process of each instance collects hang analyze information on a regular basis.

Phase 2 (Discovery): The DIA0 process of each instance analyzes the collected hang analyze information, locates the sessions where a hang is present, and sends the results to the DIA0 process on the master node.

Phase 3 (Drawing): The DIA0 process of the master node combines the information received from each instance's DIA0 process and draws the wait chain.

Phase 4 (Analysis): The master node's DIA0 process analyzes the drawn wait chain and determines whether a hang is indeed present.

Phase 5 (Validation): The master node's DIA0 process executes phases 1-4 again, compares the new analysis with the result of phase 4, and verifies that the hang is really happening.

Phase 6 (Positioning): Based on the validated result, the master node's DIA0 process locates the root blocking process of the wait chain.

Phase 7 (Resolution): The master node's DIA0 process determines whether the hang can be resolved, based on the value of the parameter _hang_resolution_scope.



Trace files of the DIA0 process

Main trace file (<SID>_dia0_<PID>.trc): records the details of the DIA0 process work, including discovering, analyzing, and handling hangs.

History trace file (<SID>_dia0_<PID>_n.trc): because the DIA0 trace file grows continuously as the database runs, the DIA0 process periodically rolls its log information into history trace files, where n is a positive integer that increases over time.

Incident file: if HM resolves a hang by terminating a process, an ORA-32701 error is first recorded in the alert log, and thanks to ADR the DIA0 process also produces an incident file that records the details of the problem.






Parameters of HM (the full list of related hidden parameters can be produced with the sketch below):

_hang_detection_enabled: determines whether HM is enabled in the database; the default value is TRUE.

_hang_detection_interval: specifies the interval at which HM collects hang analyze information; the default value is 32 seconds.

_hang_verification_interval: specifies the interval at which HM validates a hang; the default value is 46 seconds.

_hang_resolution_scope: specifies how far HM may go when resolving a hang. The default value is PROCESS, and the allowed values are:
OFF: HM only continues to monitor the hang and does nothing to fix it.
PROCESS: HM may resolve the hang by terminating the root blocking process, unless it is an important background process whose termination would crash the instance.
INSTANCE: HM may resolve the hang by terminating the instance.
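The 'Related parameters' listing below can be produced with the classic X$ query (a sketch; it must be run as SYS, and hidden parameters should only be changed under Oracle Support guidance):

select i.ksppinm name, v.ksppstvl value, v.ksppstdf isdefault
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '\_hang%' escape '\'
 order by i.ksppinm;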





Related parameters:

NAME                                               VALUE                          ISDEFAULT ISMOD      ISADJ
-------------------------------------------------- ------------------------------ --------- ---------- -----
_hang_analysis_num_call_stacks                     3                              TRUE      FALSE      FALSE
_hang_base_file_count                              5                              TRUE      FALSE      FALSE
_hang_base_file_space_limit                        10000000                       TRUE      FALSE      FALSE
_hang_bool_spare1                                  TRUE                           TRUE      FALSE      FALSE
_hang_delay_resolution_for_libcache                TRUE                           TRUE      FALSE      FALSE
_hang_detection_enabled                            TRUE                           TRUE      FALSE      FALSE
_hang_detection_interval                           32                             TRUE      FALSE      FALSE
_hang_hang_analyze_output_hang_chains              TRUE                           TRUE      FALSE      FALSE
_hang_hiload_promoted_ignored_hang_count           2                              TRUE      FALSE      FALSE
_hang_hiprior_session_attribute_list                                              TRUE      FALSE      FALSE
_hang_ignored_hang_count                           1                              TRUE      FALSE      FALSE
_hang_ignored_hangs_interval                       300                            TRUE      FALSE      FALSE
_hang_int_spare2                                   FALSE                          TRUE      FALSE      FALSE
_hang_log_verified_hangs_to_alert                  FALSE                          TRUE      FALSE      FALSE
_hang_long_wait_time_threshold                     0                              TRUE      FALSE      FALSE
_hang_lws_file_count                               5                              TRUE      FALSE      FALSE
_hang_lws_file_space_limit                         10000000                       TRUE      FALSE      FALSE
_hang_monitor_archiving_related_hang_interval      300                            TRUE      FALSE      FALSE
_hang_msg_checksum_enabled                         TRUE                           TRUE      FALSE      FALSE
_hang_resolution_allow_archiving_issue_termination TRUE                           TRUE      FALSE      FALSE
_hang_resolution_confidence_promotion              FALSE                          TRUE      FALSE      FALSE
_hang_resolution_global_hang_confidence_promotion  FALSE                          TRUE      FALSE      FALSE
_hang_resolution_policy                            HIGH                           TRUE      FALSE      FALSE
_hang_resolution_promote_process_termination       FALSE                          TRUE      FALSE      FALSE
_hang_resolution_scope                             PROCESS                        TRUE      FALSE      FALSE
_hang_short_stacks_output_enabled                  TRUE                           TRUE      FALSE      FALSE
_hang_signature_list_match_output_frequency        10                             TRUE      FALSE      FALSE
_hang_statistics_collection_interval               15                             TRUE      FALSE      FALSE
_hang_statistics_collection_ma_alpha               30                             TRUE      FALSE      FALSE
_hang_statistics_high_io_percentage_threshold      15                             TRUE      FALSE      FALSE
_hang_verification_interval                        46                             TRUE      FALSE      FALSE


Saturday, June 18, 2022

Oracle Database Parameters Influencing the Optimizer -- v$sys_optimizer_env, v$sql_optimizer_env and v$ses_optimizer_env

 

We always have the question of which database parameters influence optimizer behavior.
Oracle has the views v$sys_optimizer_env, v$sql_optimizer_env and v$ses_optimizer_env, which reflect the database parameters that influence optimizer behavior at the system, cursor and session level respectively.

They are especially useful when comparing the performance of two databases, or pre- and post-migration performance.



Optionally, we can enable optimizer trace event 10053.

How to Obtain Tracing of Optimizer Computations (EVENT 10053) (Doc ID 225598.1)
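A sketch of enabling and disabling the 10053 trace for the current session (the trace is written to the session trace file, and the statement must be hard parsed to produce output):

alter session set events '10053 trace name context forever, level 1';
-- run the SQL to be analysed here
alter session set events '10053 trace name context off';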



SQL> select name, isdefault from v$ses_optimizer_env
  2  where sid = 265 
  3  order by isdefault, name;

NAME                                     ISD
---------------------------------------- ---
_pga_max_size                            NO
active_instance_count                    YES
bitmap_merge_area_size                   YES
cpu_count                                YES
cursor_sharing                           YES
hash_area_size                           YES
is_recur_flags                           YES
optimizer_capture_sql_plan_baselines     YES
optimizer_dynamic_sampling               YES
optimizer_features_enable                YES
optimizer_index_caching                  YES
optimizer_index_cost_adj                 YES
optimizer_mode                           YES
optimizer_secure_view_merging            YES
optimizer_use_invisible_indexes          YES
optimizer_use_pending_statistics         YES
optimizer_use_sql_plan_baselines         YES
parallel_ddl_mode                        YES
parallel_degree                          YES
parallel_dml_mode                        YES
parallel_execution_enabled               YES
parallel_query_default_dop               YES
parallel_query_mode                      YES
parallel_threads_per_cpu                 YES
pga_aggregate_target                     YES
query_rewrite_enabled                    YES
query_rewrite_integrity                  YES
result_cache_mode                        YES
skip_unusable_indexes                    YES
sort_area_retained_size                  YES
sort_area_size                           YES
star_transformation_enabled              YES
statistics_level                         YES
transaction_isolation_level              YES
workarea_size_policy                     YES






select
        child_number, name, value
from    v$sql_optimizer_env
where
    sql_id = 'b6pkmrqrgxh2d'
order by
        child_number,
        name
;   

CHILD_NUMBER NAME                                     VALUE
------------ ---------------------------------------- -------------------------
             _db_file_optimizer_read_count            16
             active_instance_count                    1
             bitmap_merge_area_size                   1048576
             cpu_count                                2
             cursor_sharing                           exact
             hash_area_size                           131072
             optimizer_dynamic_sampling               2
             optimizer_features_enable                10.2.0.3
             optimizer_index_caching                  0
             optimizer_index_cost_adj                 100
             optimizer_mode                           first_rows_1
             optimizer_secure_view_merging            true
             parallel_ddl_mode                        enabled
             parallel_dml_mode                        disabled
             parallel_execution_enabled               true
             parallel_query_mode                      enabled
             parallel_threads_per_cpu                 2
             pga_aggregate_target                     204800 KB
             query_rewrite_enabled                    true
             query_rewrite_integrity                  enforced
             skip_unusable_indexes                    true
             sort_area_retained_size                  0
             sort_area_size                           65536
             sqlstat_enabled                          true
             star_transformation_enabled              false
             statistics_level                         typical
             workarea_size_policy                     auto 
             _db_file_optimizer_read_count            16
             _hash_join_enabled                       false
             active_instance_count                    1
             bitmap_merge_area_size                   1048576
             cpu_count                                2
             cursor_sharing                           exact
             hash_area_size                           131072
             optimizer_dynamic_sampling               2
             optimizer_features_enable                10.2.0.3
             optimizer_index_caching                  0
             optimizer_index_cost_adj                 100
             optimizer_mode                           first_rows_1
             optimizer_secure_view_merging            true
             parallel_ddl_mode                        enabled
             parallel_dml_mode                        disabled
             parallel_execution_enabled               true
             parallel_query_mode                      enabled
             parallel_threads_per_cpu                 2
             pga_aggregate_target                     204800 KB
             query_rewrite_enabled                    true
             query_rewrite_integrity                  enforced
             skip_unusable_indexes                    true
             sort_area_retained_size                  0
             sort_area_size                           65536
             sqlstat_enabled                          true
             star_transformation_enabled              false
             statistics_level                         typical
             workarea_size_policy                     auto 

Tuesday, June 14, 2022

Checking Ongoing Oracle RMAN Backup and Restore Progress

 

During a restore of a big database we normally need to monitor progress. Posting below the scripts I use to check progress.




SELECT sid, serial#, context, sofar, totalwork,
 round(sofar/totalwork*100,2) "% Complete"
 FROM v$session_longops
 WHERE opname LIKE 'RMAN%'
 AND opname NOT LIKE '%aggregate%'
 AND totalwork != 0
 AND sofar != totalwork;



select device_type "Device", type, filename, to_char(open_time, 'mm/dd/yyyy hh24:mi:ss') open,
 to_char(close_time,'mm/dd/yyyy hh24:mi:ss') close,elapsed_time ET, effective_bytes_per_second EPS
 from v$backup_async_io;



TTITLE LEFT '% Completed. Aggregate is the overall progress:'
SET LINE 132
SELECT opname, round(sofar/totalwork*100) "% Complete"
  FROM gv$session_longops
 WHERE opname LIKE 'RMAN%'
   AND totalwork != 0
   AND sofar <> totalwork
 ORDER BY 1;



TTITLE LEFT 'Channels waiting:'
COL client_info FORMAT A15 TRUNC
COL event FORMAT A20 TRUNC
COL state FORMAT A7
COL wait FORMAT 999.90 HEAD "Min waiting"
SELECT s.sid, p.spid, s.client_info, status, event, state, seconds_in_wait/60 wait
  FROM gv$process p, gv$session s
 WHERE p.addr = s.paddr
   AND client_info LIKE 'rman%';
TTITLE LEFT 'Files currently being written to:'
COL filename FORMAT a50
SELECT filename, bytes, io_count
  FROM v$backup_async_io
 WHERE status='IN PROGRESS'
/
TTITLE OFF
SET HEAD OFF


SELECT 'Throughput: '||
       ROUND(SUM(v.value/1024/1024),1) || ' Meg so far @ ' ||
       ROUND(SUM(v.value     /1024/1024)/NVL((SELECT MIN(elapsed_seconds)
            FROM v$session_longops
            WHERE opname          LIKE 'RMAN: aggregate input'
              AND sofar           != TOTALWORK
              AND elapsed_seconds IS NOT NULL
       ),SUM(v.value     /1024/1024)),2) || ' Meg/sec'
 FROM gv$sesstat v, v$statname n, gv$session s
WHERE v.statistic# = n.statistic#
  AND n.name       = 'physical write total bytes'
  AND v.sid        = s.sid
  AND v.inst_id    = s.inst_id
  AND s.program LIKE 'rman@%'
GROUP BY n.name
/
SET HEAD ON





##### To check Restore Speed 

TTITLE OFF
SET HEAD OFF
SELECT 'Throughput: '||
       ROUND(SUM(v.value/1024/1024/1024),1) || ' Gig so far @ ' ||
       ROUND(SUM(v.value     /1024/1024)/NVL((SELECT MIN(elapsed_seconds)
            FROM v$session_longops
            WHERE opname          LIKE 'RMAN: aggregate input'
              AND sofar           != TOTALWORK
              AND elapsed_seconds IS NOT NULL
       ),SUM(v.value     /1024/1024)),2) || ' Meg/sec'
 FROM gv$sesstat v, v$statname n, gv$session s
WHERE v.statistic# = n.statistic#
  AND n.name       = 'physical write total bytes'
  AND v.sid        = s.sid
  AND v.inst_id    = s.inst_id
  AND s.program LIKE 'rman@%'
GROUP BY n.name
/



Tracing RMAN:

To analyze any issues, we can run RMAN in debug mode:

$ rman target / catalog rman/rman debug trace trace.log







Speeding Up RMAN:


RMAN Multiplexing

RMAN uses two different types of buffers for I/O: disk and tape. RMAN multiplexing determines how RMAN allocates disk buffers.

RMAN multiplexing is the number of files in a backup read simultaneously and then written to the same backup piece. The degree of multiplexing depends on the FILESPERSET parameter of the BACKUP command as well as the MAXOPENFILES parameter of the CONFIGURE CHANNEL or ALLOCATE CHANNEL command.

Note: RMAN multiplexing is set at the channel level.
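A small sketch of the two knobs (the values are illustrative only):

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 4;
RMAN> BACKUP DATABASE FILESPERSET 4;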




Allocating Tape Buffers

• From the SGA (large pool) when BACKUP_TAPE_IO_SLAVES is TRUE
• From the PGA when BACKUP_TAPE_IO_SLAVES is FALSE


RMAN allocates the tape buffers in the System Global Area (SGA) or the Program Global Area (PGA), depending on whether I/O slaves are used.

If the BACKUP_TAPE_IO_SLAVES initialization parameter is set to TRUE, RMAN allocates tape buffers from the shared pool, or from the large pool if the LARGE_POOL_SIZE initialization parameter is set.

If you set the parameter to FALSE, RMAN allocates the buffers from the PGA.

If you use I/O slaves, set the LARGE_POOL_SIZE initialization parameter to set aside SGA memory dedicated to holding these large memory allocations. By doing this, the RMAN I/O buffers do not compete with the library cache for shared pool memory.


Oracle recommends that you set the BACKUP_TAPE_IO_SLAVES initialization parameter to TRUE.

In most circumstances, this will provide the best performance of backups to tape. Also, this setting is required in order to perform duplexed backups. Duplexed backups are covered in the lesson titled "Using RMAN to Create Backups."
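A sketch of the related initialization parameters (sizes are illustrative; BACKUP_TAPE_IO_SLAVES is set in the spfile here so it takes effect at the next startup):

alter system set backup_tape_io_slaves=TRUE scope=spfile;
alter system set large_pool_size=256M scope=both;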




Comparing Synchronous and Asynchronous I/O

When RMAN reads or writes data, the I/O is either synchronous or asynchronous. When the I/O is synchronous, a server process can perform only one task at a time. When it is asynchronous, a server process can begin an I/O and then perform other tasks while waiting for the I/O to complete. It can also begin multiple I/O operations before waiting for the first to complete. You can set initialization parameters that determine the type of I/O.

If you set BACKUP_TAPE_IO_SLAVES to TRUE, the tape I/O is asynchronous. Otherwise, the I/O is synchronous.




Tuning the  BACKUP  Command

The MAXPIECESIZE parameter specifies the maximum size of each backup piece created on the channel.

The FILESPERSET parameter specifies the maximum number of files to place in a backup set. If you allocate only one channel, then you can use this parameter to make RMAN create multiple backup sets. For example, if you have 50 input data files and two channels, you can set FILESPERSET=5 to create 10 backup sets. This strategy can prevent you from splitting a backup set among multiple tapes.

The MAXOPENFILES parameter setting depends on your disk subsystem characteristics. If you use ASM, then set it to 1 or 2. Otherwise, if your data is not striped, then you may want to set this higher. To gain performance, increase either the number of files per backup set, or this parameter. If you are not using ASM or striping of any kind, then try increasing MAXOPENFILES.



Channel Tuning 

If you configure multiple channels for an SBT device, then you can specifically spread data files across those channels.

Here is an example:

RUN
{
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
  ALLOCATE CHANNEL c2 DEVICE TYPE sbt;
  ALLOCATE CHANNEL c3 DEVICE TYPE sbt;
  BACKUP (DATAFILE 1,2,5 CHANNEL c1) (DATAFILE 4,6 CHANNEL c2) (DATAFILE 3,7,8 CHANNEL c3);
  BACKUP DATABASE NOT BACKED UP;
}




Monitor rman restore throughput:

set linesize 126
column Pct_Complete format 99.99
column client_info format a25
column sid format 999
column MB_PER_S format 999.99
select s.client_info,
l.sid,
l.serial#,
l.sofar,
l.totalwork,
round (l.sofar / l.totalwork*100,2) "Pct_Complete",
aio.MB_PER_S,
aio.LONG_WAIT_PCT
from v$session_longops l,
v$session s,
(select sid,
serial,
100* sum (long_waits) / sum (io_count) as "LONG_WAIT_PCT",
sum (effective_bytes_per_second)/1024/1024 as "MB_PER_S"
from v$backup_async_io
group by sid, serial) aio
where aio.sid = s.sid
and aio.serial = s.serial#
and l.opname like 'RMAN%'
and l.opname not like '%aggregate%'
and l.totalwork != 0
and l.sofar <> l.totalwork
and s.sid = l.sid
and s.serial# = l.serial#
order by 1;




########## Shell script to check backup or restore status #############

After setting the database environment, run the shell script below.




#!/bin/bash
#Set the environment
_env(){
RESTORE_PROGRESS=/tmp/restore_progress.log
touch dfsize
export SIZELOG=dfsize
}
#RMAN restore progress monitor in percentage from database
_restore_pct(){
while sleep 0.5;do
date_is=$(date "+%F-%H-%M-%S")
#ela_s=$(date +%s)
#echo "============================================================"+
#echo "         ----->$ORACLE_SID<-----                                |"|tr 'a-z' 'A-Z';echo "    Restore progress ($date_is)                  |"
#echo "============================================================"+
$ORACLE_HOME/bin/sqlplus -S "/ as sysdba" << EOF > dbsz.txt
set feedback off
set lines 200
set pages 1000
set termout off
col INPUT_BYTES/1024/1024 format 9999999
col OUTPUT_BYTES/1024/1024 format 9999999
col OBJECT_TYPE format a10
set serveroutput off
spool dbsz.out
variable s_num number;
BEGIN
  select sum((datafile_blocks)*8/1024) into :s_num from v\$BACKUP_DATAFILE;
  dbms_output.put_line(:s_num);
END;
/
set feedback on
select INPUT_BYTES/1024/1024 as inp_byte,OUTPUT_BYTES/1024/1024 as out_byte,OBJECT_TYPE,100*(MBYTES_PROCESSED/:s_num) as pctdone from v\$rman_status where status like '%RUNNING%';
spool off
EOF
 
#Realtime monitoring of RMAN restore which shows date and percentage of completion
pct="$(cat dbsz.txt|grep -v 'row'|tail -3|grep -v '^$'|awk '{print $5}')"
clear;
echo "$date_is|Current restore progress for $ORACLE_SID:[$pct]"
#cat $SIZELOG|grep -v 'PL'
#cat /dev/null > $SIZELOG
done
}
#ela_e=$(date +%s)
#echo "elapsed_time: $($ela_e - $ela_s)
_env
_restore_pct

Sunday, June 5, 2022

Edb PostgreSQL Basics -- For Oracle Dba

 
As I am advancing my skill set and adding Postgres to my profile, I thought of writing this blog while preparing for the EDB Postgres exam.

Unfortunately I have mixed up PostgreSQL and EDB PostgreSQL content here, but the fundamentals remain the same.





Server Processes, Memory Structures, Physical Files:











################################################
Server Process 
################################################
 
postmaster ( pmon ) -->

background processes --> bgwriter (background writer), logging collector, stats collector, checkpointer, archiver, autovacuum, wal writer, logical replication launcher, dbms_aq launcher
files --> data files, wal segments, archived wal, error/log files, global files


postmaster ( pmon  )  --> 
Master database control process
Responsible for startup and shutdown
Handling of connection requests
Spawning of other necessary backend processes



 
Background Processes:

Bgwriter: background writer, writes dirty blocks to disk
Wal writer: flushes write-ahead logs to disk
Checkpointer: performs checkpoints based on config parameters
Autovacuum launcher: starts autovacuum workers as needed
Autovacuum worker: recovers free space for reuse
Logging collector: routes log messages to syslog, eventlog or log files
Stats collector: collects usage statistics by relation and block
Archiver: archives write-ahead log files
Logical replication launcher: starts the logical replication apply processes
Dbms_aq launcher: provides the queuing functionality of Advanced Server
 
Autovacuum
Dedicated backend for providing vacuum services
Essentially, a garbage collect of data files
Covered later in maintenance section

Writer
Background writer
Writing of pages from memory cache to disk (does not flush)

WAL writer
Responsible for maintaining transaction log (journal)
Only used for asynchronous commits

Checkpointer
A process that performs checkpoints
Flushing of all dirty buffers to disk storage

Archiver
WAL files saved to a specified location for backup
Can also be used to achieve replication

Logger
Responsible for writing information logs
Errors, warnings, slow running queries, etc.
Not used if writing to syslog

Stats collector
Support for collection and reporting of information about server activity
Can count accesses to tables and indexes
Information about vacuum and action analysis

bgworkers
Support for external extensions
Logical replication
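These background processes can also be seen from SQL; since PostgreSQL 10 the backend type is exposed in pg_stat_activity (a minimal sketch):

SELECT pid, backend_type FROM pg_stat_activity ORDER BY backend_type;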




################################################
Memory Structure
################################################

Shared Memory :
1) shared buffers
2) wal buffers
3) process array ( Per-Session Memory ) 
 

Shared buffers 
  • Shared buffers are the primary cache component for the server, storing disk blocks from database files.
  • All data sets accessed from the disk are placed in shared buffers, which allow subsequent reads to be memory reads.
  • All writes are performed in shared buffers, creating "dirty" pages. The bgwriter and checkpoint processes will write this out to disk.
  • Shared buffers contain free buffers, which are never used or freed after using, and dirty buffers, which result from DML.

Wal buffers
  • Stores intermediate write-ahead log records.
  • Written on commit by wal_writer process (or when full).


Per-Session Memory 

work_mem
  • Used for per-backend sort/hash memory.
  • Used during certain types of JOINs and for ORDER BY operations.
  • Set globally, but can be modified per session.

maintenance_work_mem
  • Used for certain types of maintenance operations (vacuum, index creation, re-index).
  • Allocated per session that uses it, such as multiple autovacuum workers




Effective_cache_size (integer)

The effective_cache_size parameter estimates how much memory is available for disk caching by the operating system and within the database itself. The PostgreSQL query planner decides whether it’s fixed in RAM or not. Index scans are most likely to be used against higher values; otherwise, sequential scans will be used if the value is low. Recommendations are to set Effective_cache_size at 50% of the machine’s total RAM.
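A sketch of checking and adjusting these memory parameters (values are illustrative; shared_buffers needs a restart, while the others can simply be reloaded):

SHOW shared_buffers;
SHOW effective_cache_size;
ALTER SYSTEM SET effective_cache_size = '4GB';
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();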




################################################
Physical Storage 
################################################


PGDATA directory
  • All on-disk components of a database instance are stored in a data directory, which contains multiple subdirectories.
  • Some subdirectories can be moved by configuration and others by symlink (Unix/Linux only).
  • Very few files are user readable or modifiable.
  • Commonly referred to as PGDATA.











Datafiles : 
  • The actual data is stored in files, typically in the PGDATA directory, with one operating system file per page.
  • Although PostgreSQL provides some visibility into which files correspond to which tables, you normally don't deal with these files directly.
  • Also referred to as relations or base files.
  • Stored in subdirectories of the base directory.
  • Contain table or index data  
           Tables have a set of segments.
            Indexes have a set of segments.
  • Stored on disk as 1 GB segments:
             A 5 GB table will be five 1 GB files.
             Segments are automatically added, no intervention required.
  • Filled with data in 8 KB blocks (on disk) or pages (in-memory):
        Blocks and pages are identical, with the name changing based on where data currently resides.                When in-memory, they are referred to as pages.
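To see which on-disk file backs a given relation, pg_relation_filepath() can be used (a sketch, assuming a table named t1 already exists):

SELECT pg_relation_filepath('t1');   -- path relative to PGDATA
SELECT pg_relation_size('t1');       -- size of the main fork in bytes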



WAL :
  • One transaction may involve writing to several different tables, and thus writing many pages to disk. Rather than doing this, PostgreSQL uses a write-ahead log (WAL), which is a sequential file where transactions are stored. If the server goes down, it can restore the state of its data by replaying these transactions.
  • Only one page needs to be written to the WAL for each transaction, which improves performance. Eventually, the transactions get executed in the background, and data pages are saved by utility processes.
  • This is the transaction journal.
  • Also known as: WAL, xlog, pg_xlog, transaction log, journal, REDO.
  • These files contain a persistent record of commits.
  • Success is not returned to the client until this is safely persisted to disk.
  • During crash recovery, this journal can be used to reconstruct all transactions.
  • These files are written by the WAL writer process.


# Perform a switch of WAL files
select pg_switch_wal() ; 





Archived Logs 
  • After WAL entries get executed, the WAL segment eventually gets archived. These archival logs can be used to replay the transactions on a different server, and keep remote servers almost in sync.

To Clean Up Archive after Backups : 

pg_archivecleanup -n <PG_WAL_PATH> <WAL_FILE> | wc -l

wal_keep_segments tells it how many files to keep in the pg_xlog directory, not the archive directory.

It is kind of weird to use both wal_keep_segments and archive_command together, at least for relatively simple setups. If you just want a streaming replica and don't need the ability to do point-in-time recovery, you could turn off archiving altogether.

If you really want to keep using an archive but have it cleaned up when the files are no longer needed, you can use the archive_cleanup_command setting. But now that we have streaming replication and replication slots, that is almost obsolete.


Below related parameters in config files 

# To enable read-only queries on a standby server, wal_level must be set to
# "hot_standby". But you can choose "archive" if you never connect to the
# server in standby mode.
wal_level = hot_standby

min_wal_size=

# Set the maximum number of concurrent connections from the standby servers.
max_wal_senders = 5

# To prevent the primary server from removing the WAL segments required for
# the standby server before shipping them, set the minimum number of segments
# retained in the pg_xlog directory. At least wal_keep_segments should be
# larger than the number of segments generated between the beginning of
# online-backup and the startup of streaming replication. If you enable WAL
# archiving to an archive directory accessible from the standby, this may
# not be necessary.
wal_keep_segments = 32

wal_compressed 


# Enable WAL archiving on the primary to an archive directory accessible from
# the standby. If wal_keep_segments is a high enough number to retain the WAL
# segments required for the standby server, this is not necessary.
archive_mode    = on
archive_command = 'cp %p /path_to/archive/%f'








Stored file examples

PG_VERSION
Version string of the database instance

postmaster.opts
What options were used to start the instance

postmaster.pid
Process ID of the instance

server.crt
Server certificate for SSL

server.key
Server private key

root.crt
Trusted certificate authorities




################################################
PostgreSQL configuration files
################################################

PostgreSQL uses several text-based configuration files. While you can change where you store these files when compiling PostgreSQL using symbolic links on your file system, these configuration files are usually stored in the same folder. This folder is always called PGDATA, even though the actual folder may change.


● PG_VERSION
○ Version String of the Database Cluster
● pg_hba.conf
○ Host-Based access control file (built-in firewall)
● pg_ident.conf
○ ident-based access file for OS User to DB User Mapping
● postgresql.conf
○ Primary Configuration File for the Database
● postmaster.opts
○ Contains the options used to start the PostgreSQL Instance
● postmaster.pid
○ The Parent Process ID or the Postmaster Process ID



pg_hba.conf
  • The pg_hba.conf file contains information about who can connect to local databases, through which mechanism (local connection or over the network) from which addresses, and using which authentication method.



pg_ident.conf
  • The pg_ident.conf file maps operating system users to PostgreSQL users.




postgresql.conf
  • postgresql.conf contains most of the configuration parameters for PostgreSQL.
  • postgresql.auto.conf overrides values in postgresql.conf, and allows you to easily track parameters that are different for your local installation.
  • The ALTER SYSTEM command writes to this file if you persist its changes.




• postgresql.auto.conf
○ PostgreSQL gives Oracle like compatibility to modify parameters using "ALTER SYSTEM"
○ Any parameter modified using ALTER SYSTEM is written to this file for persistence
○ This is last configuration file read by PostgreSQL, when started. Empty by default
○ Always located in the data directory 
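A sketch of how ALTER SYSTEM persists into postgresql.auto.conf and how to undo it (the parameter chosen is just an example):

ALTER SYSTEM SET maintenance_work_mem = '512MB';   -- written to postgresql.auto.conf
SELECT pg_reload_conf();
ALTER SYSTEM RESET maintenance_work_mem;           -- removes the entry again
SELECT pg_reload_conf();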




################################################
Software Installation and  Database / User  Creation 
################################################






PostgreSQL Cluster

● After Initializing your PostgreSQL using initdb (similar to mysqld --initialize) and starting it, you can
  create multiple databases in it
● A group of databases running on one Server & One Port - Is called a Cluster in PostgreSQL
● PostgreSQL Cluster may be referred to as a PostgreSQL Instance as well
● A PostgreSQL Cluster or an Instance:
  ○ Serves only one TCP/IP Port
  ○ Has a Dedicated Data Directory
  ○ Contains 3 default databases: postgres, template0 and template1
● When you add a Slave(aka Standby) to your PostgreSQL Cluster(Master), it may be referred to as a
  PostgreSQL High Availability Cluster or a PostgreSQL Replication Cluster
● PostgreSQL Cluster that can accept Writes and ships WALs to Slave(Standby), is called a Master
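A quick way to confirm the three default databases in a freshly initialized cluster (a sketch, run from psql):

SELECT datname FROM pg_database ORDER BY 1;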



Software Installation

1) Installation options: wizard, yum, rpm
2) During installation, the OS user enterprisedb is created
3) The default installation location on Windows is "C:\program files\edb\as11" and on Linux it is "/usr/edb/as11"
 
 
1) PGDG Repository : PostgreSQL Global Development Group maintains YUM and APT repository 

For YUM
https://yum.postgresql.org

For APT
https://apt.postgresql.org/pub/repos/apt/

Step 1:
Choose the appropriate rpm that adds pgdg repo to your server
# yum install https://yum.postgresql.org/11/redhat/rhel-7.5-x86_64/pgdg-centos11-11-2.noarch.rpm

Step 2:
Install PostgreSQL using the following step
# yum install postgresql11 postgresql11-contrib postgresql11-libs postgresql11-server


#Optionally Initialize The Database and enable Automatic startup 
/usr/pgsql-11/bin/postgresql-11-setup initdb 
systemctl enable postgresql-11 
systemctl start postgresql-11 
systemctl status postgresql-11 
 



Edb Postgres Server Installation 

-- Download the edb-repo repository configuration file from yum.enterprisedb.com 
-- The username and password for yum access can be obtained from EnterpriseDB 
-- Log in as the root user, add the user enterprisedb using adduser or useradd, and set its password 
-- Install EPEL using yum 
# yum install epel-release 
-- Install the downloaded repository configuration file 
# rpm -ivh edb-repo-latest.noarch.rpm 
-- Update the username and password in the /etc/yum.repos.d/edb.repo file and change the enabled parameter to 1 
   for the following repositories : 
   edbas11 , enterprisedb-dependencies , enterprisedb-tools  
-- Install the server packages using yum 
# yum install edb-as11-server 
-- Configure the packaged installation using the service configuration file 
#  /usr/lib/systemd/system/edb-as11.service 
-- Create a database cluster and start it using services 
# /usr/edb/as11/bin/edb-as-11-setup initdb 
# systemctl start edb-as-11 





#Connecting to the default EDB instance 
edb-psql -p 5434 edb enterprisedb 

edb-psql :
-- edb-psql is the name of the EnterpriseDB psql executable 
-- EnterpriseDB's edb-psql provides a terminal-based front-end to Advanced Server 
-- it lets you type in queries interactively 

-d, --dbname=, $PGDATABASE
-h, --host=, $PGHOST
-p, --port=, $PGPORT
-U, --username=, $PGUSER


EDB*Plus  : 
--  EDB*Plus is a command-line user interface to EDB Postgres Advanced Server 
--  EDB*Plus accepts SQL commands 
--  EDB*Plus commands are compatible with Oracle SQL*Plus 

Eg : 

 DB_USER=enterprisedb
 DB_PASSWORD=abc123
 DB_SCHEMA="127.0.0.1:5432/edb"
 edbplus -S $DB_USER/$DB_PASSWORD@$DB_SCHEMA 2>>EDB_ERROR_FILE









Database Creation   : 

Initialize Your First PostgreSQL Cluster :

  initdb is used to Initialize a PostgreSQL cluster
$ echo "PATH=/usr/pgsql-11/bin:$PATH">>~/.bash_profile
$ source .bash_profile
$ echo $PGDATA
/var/lib/pgsql/11/data
$initdb --version
initdb (PostgreSQL) 11.0
$ initdb 
or 
$ initdb -D /usr/local/pgsql/data



pg_ctl -D "Path" start 
pg_ctl  status 
pg_ctl stop 





 
/usr/edb/as11/bin/edb-as-11-setup initdb


● Base Directory
  ○ Contains Subdirectories for every Database you create
  ○ Every Database Sub-Directory contains files for every Relation/Object created in the Database

● Datafiles
  ○ Datafiles are the files for Relations in the base directory
  ○ Base Directory contains Relations
  ○ Relations stored on Disk as 1GB segments
  ○ Each 1GB Datafile is made up of several 8KB Pages that are allocated as needed
  ○ Segments are added automatically as the relation grows, unlike datafiles in Oracle
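A quick way to see this on disk (the database and table names are the ones used in the later examples; the actual OIDs and filenode numbers will differ):

$ psql -d percona -c "select pg_relation_filepath('scott.employee')"
# returns something like base/<database_oid>/<filenode>
$ ls -l $PGDATA/base/<database_oid>/<filenode>*
# a relation larger than 1GB will also show <filenode>.1, <filenode>.2, ... segment files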



Create a database  ( default  tablespace is pg_default ) 
 $ psql -c "CREATE DATABASE sample "
create database sample  owner user1  tablespace table1 


psql -U postgres 
show data_directory 
SHOW PORT 
\q



Get the datid for the database and see if it exists in the base directory
 $ psql -c "select datid, datname from pg_stat_database where datname = 'percona'"


Create a schema named: scott
 $ psql -d percona -c "CREATE SCHEMA scott"

Create a table named: employee in scott schema
 $ psql -d percona -c "CREATE TABLE scott.employee(id int PRIMARY KEY, name varchar(20))"

Locate the file created for the table: scott.employee in the base directory
 $ psql -d percona -c "select pg_relation_filepath('scott.employee')"


Check the size of the table in the OS and value of parameter: block_size
 $ psql -c "show block_size"

INSERT a record in the table and see the size difference
 $ psql -d percona -c "INSERT INTO scott.employee VALUES (1, 'frankfurt')"

INSERT more records and check the size difference
 $ psql -d percona -c "INSERT INTO scott.employee VALUES (generate_series(2,1000), 'junk')"


Connect to service :  
psql -p 5444 -d edb -U enterprisedb


Check Size of Database : 
select datname , oid from pg_database ; 
select pg_database_size('<oid or name>') ;




 Tablespaces  :

In PostgreSQL, a tablespace is a directory.

Each user-defined tablespace has a symbolic link under $PGDATA/pg_tblspc

○ Can be used to move Tables and Indexes to different disks/locations
○ Helps distribute I/O

● Steps to create tablespace in PostgreSQL
● Step 1: Create a directory for the tablespace
▪ $ mkdir -p /tmp/tblspc_1
$ chown postgres:postgres /tmp/tblspc_1
$ chmod 700 /tmp/tblspc_1

● Step 2: Create tablespace using the new directory
▪ $ psql -c "CREATE TABLESPACE tblspc_1 LOCATION '/tmp/tblspc_1'"

● Step 3: Create a table in the new table-space
▪ $ psql -d percona -c "CREATE TABLE scott.foo (id int) TABLESPACE tblspc_1"



SELECT spcname , pg_tablespace_location(oid)  from pg_tablespace  ;
\db+

# to set tablespace 
set default_tablespace= space1 ; 


-- the pg_global tablespace corresponds to the PGDATA/global directory 
-- pg_global is used to store cluster-wide tables and the shared system catalog 
-- the pg_default tablespace corresponds to the PGDATA/base directory 
-- pg_default is used to store databases and relations 
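To see the symbolic link created for the user-defined tablespace from the example above:

$ ls -l $PGDATA/pg_tblspc
# lrwxrwxrwx ... <tablespace_oid> -> /tmp/tblspc_1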





Enable Archiving 

$ psql
ALTER SYSTEM SET archive_mode TO 'ON';
ALTER SYSTEM SET archive_command TO 'cp %p /var/lib/pgsql/archive/%f';
 $ pg_ctl -D $PGDATA restart -mf

Switch a WAL and see if the WAL is safely archived…
$ psql -c "select pg_switch_wal()"


If archiving has been enabled and the archive_command fails,
● the WAL segment for which archiving failed will not be removed from pg_wal or pg_xlog
● an empty wal_file_name.ready file remains in the archive_status directory
● the background archiver process keeps retrying the failed WAL segment until it succeeds
● there is a chance that the pg_wal directory fills up, leaving the database unable to accept any more
connections
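A quick sanity check on archiving is the pg_stat_archiver view (available from PostgreSQL 9.4) together with the archive_status directory:

$ psql -c "select archived_count, last_archived_wal, failed_count, last_failed_wal from pg_stat_archiver"
$ ls $PGDATA/pg_wal/archive_status/     # .ready files are waiting to be archived, .done files are done (pg_xlog/archive_status before version 10)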



Users and Roles : 

-- public is the default schema. By default, all users have CREATE and USAGE rights on the public schema 
-- pg_catalog contains the system tables 
-- Users and roles are essentially the same in PostgreSQL (a user is a role with the LOGIN attribute) 
-- postgres is the default superuser 
-- Users are global to the cluster, while schemas belong to a specific database 



● A read_only Role that only has SELECT, USAGE privileges on Schema: scott (database: percona)

○ CREATE ROLE scott_read_only;
GRANT SELECT ON ALL TABLES IN SCHEMA scott TO scott_read_only;
GRANT USAGE ON SCHEMA scott TO scott_read_only;


● A read_write Role that only has SELECT, INSERT, UPDATE, DELETE privileges on Schema: scott (database: percona)

○ CREATE ROLE scott_read_write;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA scott TO
scott_read_write;
GRANT USAGE ON SCHEMA scott TO scott_read_write;

● Create a User and assign either read_only or read_write role

○ CREATE USER pguser WITH LOGIN ENCRYPTED PASSWORD 'pg123pass';
GRANT scott_read_only to pguser;
ALTER USER pguser WITH CONNECTION LIMIT 20;



-- To change  User password 
 ALTER USER postgres PASSWORD '<new-password>'; 
or
ALTER USER username WITH ENCRYPTED PASSWORD 'password';


-- Log in to database db1 as the postgres user (default schema: public) 
psql -p 5432 db1 postgres





############################################
Handling Slow Queries
####################################### 


1) Enable query logging with log_min_duration_statement (and logging_collector) 


ALTER DATABASE pgbench SET log_min_duration_statement=0;

or 

It is also possible to enable this globally by adding the parameter to the PostgreSQL configuration and then reloading the config:

log_min_duration_statement = 0

SELECT pg_reload_conf();


This enables logging of all queries across all of the databases in your PostgreSQL instance. If you do not see any logs, you may want to enable logging_collector = on as well. The logs will also include all of the traffic to the PostgreSQL system tables, making them noisier. For our purposes let's stick to database-level logging.




2)  Checking Explain Plan : 

EXPLAIN SELECT abalance FROM pgbench_accounts WHERE aid = 3344333;
EXPLAIN ANALYZE SELECT abalance FROM pgbench_accounts WHERE aid = 3344333;


3)  Pg_stat_statements

pg_stat_statements is an extension that collects execution statistics for the statements executed by the server.

If you do not have pg_stat_statements enabled, you can do it in a standard way. Either via configuration file and
shared_preload_libraries = 'pg_stat_statements'

Or you can enable it via PostgreSQL command line:
CREATE EXTENSION pg_stat_statements;
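Once the extension is created, a typical way to find the most expensive statements is to query the pg_stat_statements view (the timing column is total_time up to PostgreSQL 12 and total_exec_time from 13 onward):

SELECT substr(query, 1, 60) AS query,
       calls,
       round(total_time::numeric, 2)           AS total_ms,
       round((total_time / calls)::numeric, 2) AS avg_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;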




############################################
Startup and Shutdown 
####################################### 

It is not possible to stop an individual database. You can disallow connections via ALTER DATABASE, or (selectively) disallow new connections via pg_hba.conf, as sketched below. 
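A minimal sketch of "stopping" a single database this way (the database name db1 is an example): disallow new connections and then terminate the existing ones.

ALTER DATABASE db1 WITH ALLOW_CONNECTIONS false;   -- PostgreSQL 9.5 and above
ALTER DATABASE db1 CONNECTION LIMIT 0;             -- alternative for older versions
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'db1' AND pid <> pg_backend_pid();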



$ initdb
Success. 

You can now start the database server using:

postgres -D /u/pg/data
or
pg_ctl -D /u/pg/data -l logfile start




Restart Edb service :    systemctl { start | stop | restart } edb-as-11


Shutdown Modes in PostgreSQL

● -ms (Smart Mode - the default before PostgreSQL 9.5; from 9.5 onward the default is fast)
○ Waits for all connections to exit and does not allow new transactions
○ Committed transactions applied to Disk through a CHECKPOINT before shutdown
○ May take more time on busy systems
$ pg_ctl -D $PGDATA stop -ms

● -mf (Fast Mode - Recommended on Busy Systems)
○ Closes/Kills all the open transactions and does not allow new transactions. SIGTERM is sent to server processes
  to exit promptly
○ Committed transactions applied to Disk through a CHECKPOINT before shutdown
○ Recommended on Busy Systems
$ pg_ctl -D $PGDATA stop -mf

● -mi (Immediate Mode - Forced and Abnormal Shutdown during Emergencies)
  ○ SIGQUIT is sent to all the processes to exit immediately, without properly shutting down
  ○ Requires Crash Recovery after Instance Start
  ○ Recommended in Emergencies
$ pg_ctl -D $PGDATA stop -mi
or 
$ pg_ctl -D $PGDATA stop -m immediate 



Check Status of  Postgres cluster : 
$ pg_ctl   status -D $PGDATA  





Starting and Stopping a PostgreSQL

PostgreSQL can be stopped and started from command line using pg_ctl

  ○ Starting PostgreSQL
▪ pg_ctl -D $PGDATA start
  ○ Stopping PostgreSQL
▪ pg_ctl -D $PGDATA stop







################################################
View/Modify Parameters in postgresql.conf
################################################



• Use show to view a value set to a parameter
  $ psql -c "show work_mem"

• To see all the settings, use show all
  $ psql -c "show all"

• Modifying a parameter value by manually editing the postgresql.conf file
  $ vi $PGDATA/postgresql.conf

• Use ALTER SYSTEM to modify a parameter
 $ psql -c "ALTER SYSTEM SET archive_mode TO ON"
 $ pg_ctl -D $PGDATA restart -mf


• Use reload using the following syntax to get the changes into effect for parameters not needing RESTART
 $ psql -c "select pg_reload_conf()"
 Or
 $ pg_ctl -D $PGDATA reload



pg_controldata 



postgresql.conf   File 
-- Holds Parameters used by a  cluster 
-- Parameters are case sensitive 
-- Normally  stored in a data directory 
-- initdb installs  default copy 
-- some parameters only take effect after a server restart ( pg_ctl restart ) 
-- # used for comments 
-- one parameter per line 


Ways to set server parameters 
-- parameters can be changed for the current session using the SET command 
-- some parameters can be changed at the user level using "alter user" 
-- some parameters can be changed at the database level using "alter database" 
-- the "SHOW" command displays the current settings 
-- the pg_settings and pg_file_settings catalog views list settings information 
eg  :  select name , setting   from pg_settings ;


The ALTER SYSTEM command writes settings to postgresql.auto.conf 




################################################
Setting Environmental Variables :
################################################

PATH :   should point to the PostgreSQL bin directory

PGDATA : should point to the default data cluster directory

PGPORT : should point to the port on which the database cluster is running

PGUSER:  specifies the default database user name

PGDATABASE:  specifies the default database


SEARCH_PATH  : 
PostgreSQL has a useful feature: schemas. You can have one database with multiple schemas, which is a good way to separate the data of different applications. Each application can use its own schema, and they can also share some schemas between them.

You can set the default search_path at the database level:
ALTER DATABASE <database_name> SET search_path TO schema1,schema2;

Or at the user or role level:
ALTER ROLE <role_name> SET search_path TO schema1,schema2;
ALTER ROLE <role_name> IN DATABASE <db_name> SET search_path TO schema1,schema2;

SHOW search_path;
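The search_path can also be changed just for the current session (schema names are placeholders):

SET search_path TO schema1, public;
SHOW search_path;
SELECT current_schema();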




################################################
MVCC: Multi-Version Concurrency Control
################################################


● Data consistency
● Prevents viewing inconsistent data
● Readers and Writers do not block each other
● No Rollback segments for UNDO
● UNDO management is within tables
● A tuple contains the minimum and maximum transaction ids that are permitted to see it
● Just like SELECT statements executing WHERE
xmin <= txid_current() AND (xmax = 0 OR txid_current() < xmax)
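You can inspect these hidden columns on any table, for example on the scott.employee table created earlier (the transaction IDs you see will differ):

SELECT txid_current();
SELECT xmin, xmax, ctid, id, name FROM scott.employee LIMIT 3;
-- xmax = 0 means no transaction has yet deleted or updated this row version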



● PostgreSQL
○ Maintains UNDO within a table through versions - old and new row versions
○ Transaction IDs are used to identify which version a query can see
○ A background process deletes old row versions explicitly
○ No additional writes to a separate UNDO storage in the event of writes
○ Row locks are stored on the tuple itself; there is no separate lock table




############################################
VACUUM / AUTOVACUUM
############################################


● Live Tuples: Tuples that are inserted, up-to-date, and can be read or modified
● Dead Tuples: Tuples that have been changed (updated/deleted) and are unavailable to any future
transactions
● Continuous transactions may leave behind a number of dead rows; this space can be re-used by
future transactions
● VACUUM in PostgreSQL cleans up the dead tuples and records the space in the free space map
● The transaction ID (xmax) of the deleting transaction must be older than the oldest transaction still active in
the PostgreSQL server for vacuum to remove that tuple ( i.e. xmax < oldest_active_txid )
● If the xmax of a tuple is 100, xact_committed = true, and the oldest transaction ID still active is 99,
then vacuum cannot remove that tuple.
● Autovacuum in PostgreSQL automatically runs VACUUM on tables as a background process
● Autovacuum is also responsible for running ANALYZE, which updates the statistics of a table.

● To start autovacuum, you must have the parameter autovacuum set to ON
● Background Process : Stats Collector tracks the usage and activity information
● PostgreSQL identifies the tables needing vacuum or analyze depending on certain parameters
● Parameters needed to enable autovacuum in PostgreSQL are:
  autovacuum = on # (ON by default)
  track_counts = on # (ON by default)
● An automatic vacuum or analyze runs on a table depending on a certain mathematical equations


Autovacuum VACUUM
○Autovacuum VACUUM threshold for a table =
autovacuum_vacuum_scale_factor * number of tuples + autovacuum_vacuum_threshold
○If the actual number of dead tuples in a table exceeds this effective threshold, due to updates and
deletes, that table becomes a candidate for autovacuum vacuum


Autovacuum ANALYZE
○Autovacuum ANALYZE threshold for a table =
autovacuum_analyze_scale_factor * number of tuples + autovacuum_analyze_threshold
○Any table with a total number of inserts/deletes/updates exceeding this threshold since last
analyze is eligible for an autovacuum analyze 

●autovacuum_vacuum_scale_factor or autovacuum_analyze_scale_factor: Fraction of the table
records that will be added to the formula. For example, a value of 0.2 equals to 20% of the table records
●autovacuum_vacuum_threshold or autovacuum_analyze_threshold: Minimum number of obsolete
records or dml’s needed to trigger an autovacuum
●Let’s consider a table: foo.bar with 1000 records and the following autovacuum parameters
 autovacuum_vacuum_scale_factor = 0.2
 autovacuum_vacuum_threshold = 50
 autovacuum_analyze_scale_factor = 0.1
 autovacuum_analyze_threshold = 50
●Table : foo.bar becomes a candidate for autovacuum VACUUM when,
Total number of Obsolete records = (0.2 * 1000) + 50 = 250
●Table : foo.bar becomes a candidate for autovacuum ANALYZE when,
Total number of Inserts/Deletes/Updates = (0.1 * 1000) + 50 = 150
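To check how close a table is to these thresholds, the statistics view pg_stat_user_tables shows live/dead tuple counts and the last (auto)vacuum/analyze times:

SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'bar';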



Manual Vacuuming : 
VACUUM table_name ;        -- reclaims dead-tuple space for reuse without an exclusive lock
VACUUM FULL table_name ;   -- rewrites the table to shrink it, but takes an ACCESS EXCLUSIVE lock



Autovacuum Parameters 

select name , setting  from pg_settings where  name like 'autovacuum%'; 


vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples 
analyze threshold = analyze base threshold + analyze scale factor * number of tuples 




############################################
Reindexing 
############################################


-- REINDEX rebuilds an index using the data stored in the index's table 
-- This avoids index bloat 
-- \h reindex  shows all REINDEX options 
-- REINDEX SYSTEM rebuilds the indexes on the system catalogs of the current database 
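Typical invocations, using the objects from the earlier examples (the primary key index name follows the default <table>_pkey convention):

REINDEX INDEX scott.employee_pkey;   -- rebuild a single index
REINDEX TABLE scott.employee;        -- rebuild all indexes on a table
REINDEX DATABASE percona;            -- rebuild all indexes in the current database
REINDEX SYSTEM percona;              -- rebuild only the system catalog indexes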




############################################
Backup and Restore 
####################################### 


● PostgreSQL provides native backup tools for both Logical and Physical backups.
● Backups similar to mysqldump and Xtrabackup are automatically included with Community PostgreSQL
● Backups like RMAN in Oracle may be achieved using Open Source tools like pgBackRest and pgBarman



○ Logical Backups

■ pg_dump (Both Custom(Compressed and non human-readable) and Plain Backups)
■ pg_restore (To restore the custom backups taken using pg_dump)
■ pg_dumpall (To backup Globals - Users and Roles)
■ Logical Backups cannot be used to setup Replication and perform a PITR
■ You cannot apply WAL’s after restoring a Backup taken using pg_dump



pg_dump options 
-a : Data only, no data definitions 
-s : Data definitions (schema) only 
-n <schema>  : Dump only the specified schema 
-t <table>   : Dump only the specified table 
-f <file name> : Dump data to the specified file 
-Fp : Dump in plain text ( default ) 
-Ft : Dump in tar format  
-Fc : Dump in custom compressed format 


Eg 
pg_dump -U postgres -d nano > /tmp/backup     ( backup will be created in text format ) 



Let’s use pgbench to create some sample tables
$ pgbench -i percona (Initialize)
$ pgbench -T 10 -c 10 -j 2 percona (load some data)

● Use pg_dump to backup the DDL (schema-only) of database: percona
$ pg_dump -s percona -f /tmp/percona_ddl.sql

● Use pg_dump to backup a table (with data) using custom and plain text format
$ pg_dump -Fc -t public.pgbench_history -d percona -f /tmp/pgbench_history
$ pg_dump -t public.pgbench_branches -d percona -f /tmp/pgbench_branches


● Create another database and restore both the tables using pg_restore and psql
$ psql -c "CREATE DATABASE testdb"
$ pg_restore -t pgbench_history -d testdb /tmp/pgbench_history
$ psql -d testdb -f /tmp/pgbench_branches


pg_dumpall
○ Can dump all the databases of a cluster into a script file
○ Use psql to restore the backup taken using pg_dumpall
○ Can be used to dump global objects such as ROLES and TABLESPACES


● To dump only Globals using pg_dumpall, use the following syntax
$ pg_dumpall -g > /tmp/globals.sql

● To dump all databases (or entire Cluster), use the following syntax
$ pg_dumpall > /tmp/globals.sql
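Dumps taken with pg_dumpall are plain SQL scripts, so they are restored with psql (using the globals file from the example above):

$ psql -U postgres -f /tmp/globals.sql postgres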



We can also combine pg_dump with standard Unix tools, 
eg :   pg_dump dbname  | gzip  > filename.gz 

pg_dump -U postgres -d nano | gzip > /tmp/backup.gz     ( backup will be created as gzip-compressed text ) 

pg_dump -U postgres -d nano | split -b 1k - /tmp/backup     ( backup will be split into 1k chunks ) 





○ Physical Backups

■ pg_basebackup : File System Level & Online Backup, similar to Xtrabackup for MySQL
■ Useful to build Replication and perform PITR
■ This Backup can only use one process and cannot run in parallel
■ Explore Open Source Backup tools like : pgBackRest, pgBarman and WAL-e for more features like Xtrabackup


Command line options for pg_basebackup
$ pg_basebackup --help
-D --> Target Location of Backup
-cfast --> Issues a fast checkpoint to start the backup earlier
-Ft --> Tar format. Use -Fp for plain
-v --> Print the Backup statistics/progress.
-U --> A User who has Replication Privilege.
-W --> forcefully ask for password of replication User above. (Not mandatory)
-z --> Compresses the Backup
-R --> Creates a recovery.conf file that can be used to setup replication
-P --> Shows the progress of the backup
-l --> Creates a backup_label file

 pg_basebackup -U postgres -p 5432 -h 127.0.0.1 -D /tmp/backup_11052018 -Ft -z -Xs -P -R -l backup_label





Taking Consistent Backup  using pg_basebackup 


1) Enable WAL archiving for point-in-time recovery

wal_level = archive        # use 'replica' on PostgreSQL 9.6 and above
archive_mode = on


psql -c "select * from pg_settings where name = 'wal_level'"
psql -c "show wal_level"
psql -c "select name, setting, sourcefile, sourceline from pg_settings where name = 'wal_level'"
psql -c 'show config_file;'
grep wal_level /data/pgsql/9.5/data/postgresql.conf



2) Take the base backup; pg_basebackup issues the checkpoint (pg_start_backup) internally and copies the contents of the data directory


 pg_basebackup -U psql -h 10.0.0.110 -x -D /pgbase/
 # -x includes the required WAL files; it was removed in PostgreSQL 10, where -X stream (the default) or -X fetch is used instead




The following script takes a base backup of the PostgreSQL cluster on a daily basis and removes backup files once they reach 7 days of age.

#Backup Dir
#Make sure this is a dedicated mount point to PostgreSQL Backups
#Don't put a trailing / in the path
#Backup Details
backup_dir=/backup
export PGPASSWORD="Nijam@1234"
export PGPORT="5432"
echo -e "\n\nBackup Status: $(date +"%d-%m-%y")" >> $backup_dir/Status.log
echo -e "-----------------------" >> $backup_dir/Status.log
echo -e "\nStart Time: $(date)\n" >> $backup_dir/Status.log
/usr/lib/postgresql/9.6/bin/pg_basebackup -U psql  -w -D $backup_dir/PostgreSQL_Base_Backup_$(date +"%d-%m-%y") -l "`date`" -P -F tar -z -R &>> $backup_dir/Status.log
echo -e "\nEnd Time: $(date)" >> $backup_dir/Status.log

#Auto Deletion for Backups
#Value 7 for retention_duration will keep 8 days backups

#retention_duration=7
#find $backup_dir/PostgreSQL_Base_Backup* -type d -mtime +$retention_duration -exec rm -rv {} \;





The following documents are relevant to restoring a base backup : 

https://dbsguru.com/restore-backup-using-pg_basebackup-postgresql/

https://www.percona.com/blog/2018/12/21/backup-restore-postgresql-cluster-multiple-tablespaces-using-pg_basebackup/


https://docs.google.com/document/d/1J8feIwMfXmKqOd4oXREFCnO9c7cwFoOT/edit?usp=sharing&ouid=108567388003714488593&rtpof=true&sd=true





####################################### 
High Availability in PostgreSQL
####################################### 

● Streaming Replication for PostgreSQL 9.x and above
○ WAL Segments are streamed to Standby/Slave and replayed on Slave
○ Not a Statement/Row/Mixed Replication like MySQL
○ This can be referred to as a byte-by-byte or Storage Level Replication
○ Slaves are always Open for Read-Only SQLs but not Writes
○ You cannot have different Schema or Data in a Master and a Slave in Streaming Replication
○ Allows Cascading Replication
○ Supports both Synchronous and Asynchronous Replication
○ Supports a Delayed Standby for faster PITR

● Logical Replication and Logical Decoding for PostgreSQL 10 and above
○ Allows for Replication of selected Tables using Publisher and Subscriber Model
○ Similar to binlog_do_db in MySQL, but no DDL Changes are replicated
○ Subscribers are also open for Writes automatically
○ Used in Data Warehouse environments that store data fetched from multiple OLTP databases for reporting, etc




Streaming Replication: 


● Step 1: Create a user in Master with REPLICATION ROLE
CREATE USER replicator
WITH REPLICATION
ENCRYPTED PASSWORD 'replicator';

● Step 2: Parameters you should know while setting up SR
archive_mode: Must be set to ON to enable Archiving of WALs
wal_level: Must be set to "hot_standby" until 9.5 and "replica" in the later versions.
max_wal_senders: Must be set to 3 if you are starting with 1 Slave. For every Slave, you may add 2 wal senders.
wal_keep_segments: Number of WALs always retained in pg_xlog (Until PostgreSQL 9.6) or pg_wal (From
PostgreSQL 10)
archive_command: This parameter takes a shell command. It can be a simple copy command to copy the WAL
segments to another location or a Script that has the logic to archive the WALs to S3 or a remote Backup Server.
hot_standby: Must be set to ON on Standby/Replica and has no effect on the Master. However, when you setup
your Replication, parameters set on Master are automatically copied. This parameter is important to enable READS
on Slave. Else, you cannot run your SELECTS on Slave. 

● Step 3: Set the parameters that are not set already
ALTER SYSTEM SET wal_keep_segments TO '50';
select pg_reload_conf();

● Step 4: Add an entry to pg_hba.conf of Master to allow Replication connections from Slave
 Default location of pg_hba.conf is the Data Directory
$ vi pg_hba.conf
Add the following line between >>>>> and <<<<<< to the end of the pg_hba.conf file
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
host replication replicator 192.168.0.28/32 md5
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Replace the IP address(192.168.0.28) with your Slave IP address

● Step 5: Give a SIGHUP or RELOAD
$ pg_ctl -D $PGDATA reload


● Step 6: Use pg_basebackup to backup of your Master data directory to the Slave data directory
$ pg_basebackup -U replicator -p 5432 -D /tmp/slave -Fp -Xs -P -R
● Step 7: Change the port number of your slave if you are creating the replication in the same server
for demo
$ echo "port = 5433" >> /tmp/slave/postgresql.auto.conf
● Step 8: Start your Slave
$ pg_ctl -D /tmp/slave start
● Step 9: Check the replication status from Master using the view : pg_stat_replication
select * from pg_stat_replication;
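On PostgreSQL 10 and above you can also estimate the replay lag in bytes from the Master (older versions use the *_location columns and pg_current_xlog_location() instead):

SELECT client_addr, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;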




################################################
Applying Minor Version  Patch 
################################################

pg_upgrade has been around for a very long time as a tool for upgrading major versions of PostgreSQL. It is not required for minor version upgrades, which means that going from, say, 11.9 to 11.13 does not need pg_upgrade; updating the binaries and restarting the cluster is enough.

Follow the below steps to perform a minor version upgrade:


1) Check the current PostgreSQL version

select version();


2) Wait for all replicas to catch up 


3) Take the backup of PostgreSQL cluster: Use pg_dumpall for logical backup and pg_basebackup for physical backup.

pg_dumpall -U postgres -W  -f /u01/backup/dumpall.sql
ls -l /u01/backup/dumpall.sql
 


4) Stop the PostgreSQL cluster

/usr/pgsql-9.5/bin/pg_ctl -D /var/lib/pgsql/9.5/data stop

 systemctl  stop  edb-as-9



There are  3 types of shutdown 
1) Smart  
2) Fast ( Default ) 
3) Immediate 




 
5) Run yum update to update new binaries


[root@test /]#yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm 
[root@test /]#yum install postgresql12-server 



Install the below packages on linux server.

[On RHEL/CentOS]
# yum install gcc*
# yum install zlib-devel*
# yum install readline-devel*

[On Debian/Ubuntu]
# apt install gcc*
# apt install zlib1g-dev*
# apt install libreadline6-dev*

[On SUSE Linux ]
# zypper in gcc*
# zypper in zlib*
# zypper in readline-devel*


Install the new version of postgres server as root user.

yum check-update <package_name>
yum update <package_name>
yum update edb*

 
[root@test-machine01 ~]# yum check-update postgresql13*
[root@test-machine01 ~]# yum list  postgresql13*
[root@test-machine01 ~]# yum update  postgresql13*




6)  Start PostgreSQL Cluster and check Version

systemctl  start edb-as-9

 

 


################################################
Killing Session 
################################################


Note down the PID of the session that you want to kill 

select * from pg_stat_activity;

SELECT pid, state, backend_start, substr(query, 0, 100) q
FROM pg_stat_activity
WHERE backend_type = 'client backend'
ORDER BY backend_start;





 Kill The Session 

select pg_terminate_backend(pid) 
from pg_stat_activity
where pid = '18765';

SELECT pg_cancel_backend(pid);    -- cancels only the current query, leaving the session connected
 


################################################
key  views 
################################################


PG_TABLES
PG_ROLES 
PG_USER
pg_views 
role_table_grants 
PG_TABLESPACE 
pg_stat_activity 
pg_locks 
pg_config 




################################################
#Finding  Blocking Session and Killing  It 
################################################


1) Find  Blocking PID 

select pid, 
       usename, 
       pg_blocking_pids(pid) as blocked_by, 
       query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;




2) Killing Blocking pid 

SELECT 'SELECT pg_terminate_backend(' || pid || ');'
  FROM pg_stat_activity
  WHERE pid = ANY (pg_blocking_pids(613));




################################################
# Checking Long Running Sessions 
################################################


SELECT 'SELECT pg_terminate_backend(' || pid || ');'
FROM pg_stat_activity
WHERE backend_type = 'client backend'
      AND pid != pg_backend_pid()
      AND backend_start < NOW() - '10 seconds'::interval;
Adjust '10 seconds' as appropriate.
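The query above keys on backend_start (session age); to look for long-running statements instead, filter on query_start, for example:

SELECT pid, now() - query_start AS duration, state, substr(query, 0, 100) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND pid != pg_backend_pid()
  AND now() - query_start > interval '5 minutes'
ORDER BY duration DESC;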





################################################
Monitoring Commands 
################################################

# To View Control information For a database Cluster 
pg_controldata -D datadir 
pg_controldata -D datadir -V



# psql Meta-Commands 
\d     ( object information ) 
\dt+  ( size of  table ) 
\dn    ( schema  information ) 
\dn+  ( schema with permission ) 
\l       (  to see databases  )  


\conninfo    (  current connection information ) 


select current_database(); 
select  current_schema  ; 


Show commands : 
show  data_directory 
show work_mem 


# To See Database Parameters 
select name, setting, setting::int * 16 || 'MB' AS setting_in_mb
from pg_settings 
where name in ('min_wal_size', 'max_wal_size');


#Checking Statistics Details 
select relname , reltuples   from  pg_class where  relname='testanalyze'; 


#Check Size of  Database / Tables 
edb-psql dbname username
SELECT pg_size_pretty( pg_database_size('dbname') );
SELECT pg_size_pretty( pg_total_relation_size('tablename') );


#Check File Path for  Table 
select pg_relation_filepath('sales') ; 


select * from pg_database;
select current_database(); 
select current_user;
SELECT version();
select * from pg_tablespace;





################################################
Controlling output of query : 
################################################



 save all query results and output to a file
-o filename
\o filename
\out filename

 output all query results and output to a bash command
\o | command
\out | command


run previous query repeatedly 
\watch