As I am advancing my skill set and adding Postgres to my profile, I thought of writing this blog while preparing for the EDB Postgres exam.
Unfortunately I have mixed up PostgreSQL and EDB Postgres material in places, but the fundamentals remain the same.
Server Process , Memory Structure , Physical files :
################################################
Server Process
background processes --> bgwriter (background writer), logging collector, stats collector,
checkpointer, archiver, autovacuum, WAL writer, logical replication launcher, dbms_aq launcher
files --> data files, WAL segments, archived WAL, error/log files, global files
postmaster ( pmon ) -->
Master database control process
Responsible for startup and shutdown
Handling of connection requests
Spawning of other necessary backend processes
Background Processes :
Bgwriter : background writer, writes dirty blocks to disk
WAL writer : flushes write-ahead log records to disk
Checkpointer : performs checkpoints based on configuration parameters
Autovacuum launcher : starts autovacuum workers as needed
Autovacuum worker : recovers free space for reuse
Logging collector : routes log messages to syslog, eventlog, or log files
Stats collector : collects usage statistics by relation and block
Archiver : archives write-ahead log files
Logical replication launcher : starts the apply processes for logical replication
DBMS_AQ launcher : collects information for the queuing functionality of Advanced Server
Autovacuum
Dedicated backend for providing vacuum services
Essentially, a garbage collect of data files
Covered later in maintenance section
Writer
Background writer
Writing of pages from memory cache to disk (does not flush)
WAL writer
Responsible for maintaining transaction log (journal)
Only used for asynchronous commits
Checkpointer
A process that performs checkpoints
Flushing of all dirty buffers to disk storage
Archiver
WAL files saved to a specified location for backup
Can also be used to achieve replication
Logger
Responsible for writing information logs
Errors, warnings, slow running queries, etc.
Not used if writing to syslog
Stats collector
Support for collection and reporting of information about server activity
Can count accesses to tables and indexes
Information about vacuum and analyze actions
bgworkers
Support for external extensions
Logical replication
Memory Structure
1) shared buffers
2) WAL buffers
3) process array ( per-session memory )
- Shared buffers are the primary cache component for the server, storing disk blocks from database files.
- All data sets accessed from the disk are placed in shared buffers, which allow subsequent reads to be memory reads.
- All writes are performed in shared buffers, creating "dirty" pages. The bgwriter and checkpoint processes will write this out to disk.
- Shared buffers contain free buffers (never used, or freed after use) and dirty buffers (pages modified by DML and not yet written out).
Wal buffers
- Stores intermediate write-ahead log records.
- Written on commit by wal_writer process (or when full).
Per-Session Memory
work_mem
- Used for per-backend sort/hash memory.
- Used during certain types of JOINs and for ORDER BY operations.
- Set globally, but can be modified per session.
maintenance_work_mem
- Used for certain types of maintenance operations (vacuum, index creation, re-index).
- Allocated per session that uses it, such as multiple autovacuum workers
effective_cache_size (integer)
The effective_cache_size parameter is the planner's estimate of how much memory is available for disk caching, both by the operating system and within the database itself. It does not allocate memory; the query planner only uses it to judge how cheap index access is likely to be. Higher values make index scans more likely to be used; with low values the planner tends to fall back to sequential scans. A common recommendation is to set effective_cache_size to about 50% of the machine's total RAM.
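As a quick illustration (the values below are only placeholders, not tuning advice for your hardware), these memory parameters can be inspected with SHOW and changed with ALTER SYSTEM; shared_buffers needs a restart, while work_mem and effective_cache_size take effect after a reload:
$ psql -c "show shared_buffers"
$ psql -c "show work_mem"
$ psql -c "ALTER SYSTEM SET effective_cache_size = '4GB'"
$ psql -c "ALTER SYSTEM SET work_mem = '64MB'"
$ psql -c "select pg_reload_conf()"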
################################################
Physical Storage
################################################
PGDATA directory
- All on-disk components of a database instance are stored in a data directory, which contains multiple subdirectories.
- Some subdirectories can be moved by configuration and others by symlink (Unix/Linux only).
- Very few files are user readable or modifiable.
- Commonly referred to as PGDATA.
Datafiles :
- The actual data is stored in files, typically under the PGDATA directory, with one or more operating system files per relation (table or index).
- Although PostgreSQL provides some visibility into which files correspond to which tables, you normally don't deal with these files directly.
- Also referred to as relations or base files.
- Stored in subdirectories of the base directory.
- Contain table or index data; indexes have their own set of segments.
- Stored on disk as 1 GB segments; segments are added automatically, no intervention required.
- Filled with data in 8 KB blocks (on disk), which are called pages once loaded in memory.
WAL :
- One transaction may involve writing to several different tables, and thus writing many pages to disk. Rather than doing this, PostgreSQL uses a write-ahead log (WAL), which is a sequential file where transactions are stored. If the server goes down, it can restore the state of its data by replaying these transactions.
- At commit time, only the WAL record needs to be flushed to disk, which improves performance. The transactions are eventually applied to the data pages in the background by utility processes.
- This is the transaction journal.
- Also known as: WAL, xlog, pg_xlog, transaction log, journal, REDO.
- These files contain a persistent record of commits.
- Success is not returned to the client until this is safely persisted to disk.
- During crash recovery, this journal can be used to reconstruct all transactions.
- These files are written by the WAL writer process.
# Perform a switch of WAL files
select pg_switch_wal() ;
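As a hedged example of verifying the switch (PostgreSQL 10+ function names; 9.x uses pg_current_xlog_location() and pg_xlogfile_name() instead), compare the current WAL position and file name before and after:
select pg_current_wal_lsn(), pg_walfile_name(pg_current_wal_lsn());
select pg_switch_wal();
select pg_current_wal_lsn(), pg_walfile_name(pg_current_wal_lsn());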
Archived Logs
- After WAL entries get executed, the WAL segment eventually gets archived. These archival logs can be used to replay the transactions on a different server, and keep remote servers almost in sync.
To Clean Up Archive after Backups :
pg_archivecleanup -n <PG_WAL_PATH> <WAL_FILE> | wc -l
wal_keep_segments tells it how many files to keep in the pg_xlog directory, not the archive directory.
It is kind of weird to use both wal_keep_segments and archive_command together, at least for relatively simple setups. If you just want a streaming replica and don't need the ability to do point-in-time recovery, you could turn off archiving altogether.
If you really want to keep using an archive but have it cleaned up when the files are no longer needed, you can use the archive_cleanup_command setting. But now that we have streaming replication and replication slots, that is almost obsolete.
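For illustration, the archive_cleanup_command mentioned above is typically pointed at pg_archivecleanup on a standby (the archive path below is only a placeholder):
archive_cleanup_command = 'pg_archivecleanup /path_to/archive %r'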
Related parameters in the configuration file (postgresql.conf):
# To enable read-only queries on a standby server, wal_level must be set to
# "hot_standby". But you can choose "archive" if you never connect to the
# server in standby mode.
wal_level = hot_standby
min_wal_size=
# Set the maximum number of concurrent connections from the standby servers.
max_wal_senders = 5
# To prevent the primary server from removing the WAL segments required for
# the standby server before shipping them, set the minimum number of segments
# retained in the pg_xlog directory. At least wal_keep_segments should be
# larger than the number of segments generated between the beginning of
# online-backup and the startup of streaming replication. If you enable WAL
# archiving to an archive directory accessible from the standby, this may
# not be necessary.
wal_keep_segments = 32
wal_compression
# Enable WAL archiving on the primary to an archive directory accessible from
# the standby. If wal_keep_segments is a high enough number to retain the WAL
# segments required for the standby server, this is not necessary.
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'
Stored file examples
PG_VERSION
Version string of the database instance
postmaster.opts
What options were used to start the instance
postmaster.pid
Process ID of the instance
server.crt
Server certificate for SSL
server.key
Server private key
root.crt
Trusted certificate authorities
PostgreSQL configuration files
PostgreSQL uses several text-based configuration files. Although their locations can be changed (for example via symbolic links on your file system or at compile time), these configuration files are usually stored together in the data directory. This directory is referred to as PGDATA, even though its actual path may vary.
● PG_VERSION
○ Version String of the Database Cluster
● pg_hba.conf
○ Host-Based access control file (built-in firewall)
● pg_ident.conf
○ ident-based access file for OS User to DB User Mapping
● postgresql.conf
○ Primary Configuration File for the Database
● postmaster.opts
○ Contains the options used to start the PostgreSQL Instance
● postmaster.pid
○ The Parent Process ID or the Postmaster Process ID
pg_hba.conf
- The pg_hba.conf file contains information about who can connect to local databases, through which mechanism (local connection or over the network) from which addresses, and using which authentication method.
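As an illustrative sketch (addresses and users are placeholders), pg_hba.conf entries follow the format TYPE DATABASE USER ADDRESS METHOD:
# TYPE  DATABASE        USER            ADDRESS             METHOD
local   all             all                                 peer
host    all             all             127.0.0.1/32        md5
host    replication     replicator      192.168.0.28/32     md5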
pg_ident.conf
- The pg_ident.conf file maps operating system users to PostgreSQL users.
postgresql.conf
- postgresql.conf contains most of the configuration parameters for PostgreSQL.
- postgresql.auto.conf overrides values in postgresql.conf, and makes it easy to track the parameters that differ from the defaults in your local installation.
- The ALTER SYSTEM command writes its changes to this file so they persist across restarts.
• postgresql.auto.conf
○ PostgreSQL provides Oracle-like behaviour for modifying parameters using "ALTER SYSTEM"
○ Any parameter modified using ALTER SYSTEM is written to this file for persistence
○ This is the last configuration file read by PostgreSQL at startup. Empty by default
○ Always located in the data directory
################################################
Software Installation and Database / User Creation
################################################
PostgreSQL Cluster
● After Initializing your PostgreSQL using initdb (similar to mysqld --initialize) and starting it, you can
create multiple databases in it
● A group of databases running on one Server & One Port - Is called a Cluster in PostgreSQL
● PostgreSQL Cluster may be referred to as a PostgreSQL Instance as well
● A PostgreSQL Cluster or an Instance:
○ Serves only one TCP/IP Port
○ Has a Dedicated Data Directory
○ Contains 3 default databases: postgres, template0 and template1
● When you add a Slave(aka Standby) to your PostgreSQL Cluster(Master), it may be referred to as a
PostgreSQL High Availability Cluster or a PostgreSQL Replication Cluster
● PostgreSQL Cluster that can accept Writes and ships WALs to Slave(Standby), is called a Master
Software Installation
1) Installation options: wizard, yum, rpm
2) During installation the OS user enterprisedb is created
3) The default installation location on Windows is "C:\Program Files\edb\as11" and on Linux it is "/usr/edb/as11"
PGDG Repository : the PostgreSQL Global Development Group maintains YUM and APT repositories
For YUM
https://yum.postgresql.org
For APT
https://apt.postgresql.org/pub/repos/apt/
Step 1:
Choose the appropriate rpm that adds pgdg repo to your server
# yum install https://yum.postgresql.org/11/redhat/rhel-7.5-x86_64/pgdg-centos11-11-2.noarch.rpm
Step 2:
Install PostgreSQL using the following step
# yum install postgresql11 postgresql11-contrib postgresql11-libs postgresql11-server
#Optionally Initialize The Database and enable Automatic startup
/usr/pgsql-11/bin/postgresql-11-setup initdb
systemctl enable postgresql-11
systemctl start postgresql-11
systemctl status postgresql-11
Edb Postgres Server Installation
-- Download the edb-repo repository configuration file from yum.enterprisedb.com
-- The username and password for yum access can be obtained from EnterpriseDB
-- Login as root user and add user enterprisedb using adduser or useradd and set its password
-- install EPEL using yum
# yum install epel-release
-- Install the downloaded repository configuration files
# rpm -ivh edb-repo-latest.noarch.rpm
-- Update username and password in /etc/yum.repos.d/edb.repo file and change enabled parameter to 1
for following repositories :
edbas11 , enterprisedb-dependencies , enterprisedb-tools
-- Install the server package using yum
#yum install edb-as11-server
-- The packaged installation can be configured through the service configuration file
# /usr/lib/systemd/system/edb-as-11.service
-- Create a database cluster and start cluster using services
# /usr/edb/as11/bin/edb-as-11-setup initdb
# systemctl start edb-as-11
#Connecting to default Edb Instance
edb-psql -p 5444 edb enterprisedb
edb-psql :
-- edb-psql is the name of the EnterpriseDB psql executable
-- EnterpriseDB's edb-psql provides a terminal-based front end to Advanced Server
-- it enables you to type in queries interactively
-d, --dbname=, $PGDATABASE
-h, --host=, $PGHOST
-p, --port=, $PGPORT
-U, --username=, $PGUSER
EDB*Plus :
-- EDB*Plus is a command line user interface to EDB Postgres Advanced Server
-- EDB*Plus accepts SQL commands
-- EDB*Plus commands are compatible with Oracle SQL*Plus
Eg :
DB_USER=enterprisedb
DB_PASSWORD=abc123
DB_SCHEMA="127.0.0.1:5432/edb"
edbplus -S $DB_USER/$DB_PASSWORD@$DB_SCHEMA 2>>EDB_ERROR_FILE
Database Creation :
Initialize Your First PostgreSQL Cluster :
initdb is used to Initialize a PostgreSQL cluster
$ echo "PATH=/usr/pgsql-11/bin:$PATH">>~/.bash_profile
$ source ~/.bash_profile
$ echo $PGDATA
/var/lib/pgsql/11/data
$initdb --version
initdb (PostgreSQL) 11.0
$ initdb
or
$ initdb -D /usr/local/pgsql/data
pg_ctl -D "Path" start
pg_ctl status
pg_ctl stop
/usr/edb/as11/bin/edb-as-11-setup initdb
● Base Directory
○ Contains Subdirectories for every Database you create
○ Every Database Sub-Directory contains files for every Relation/Object created in the Database
● Datafiles
○ Datafiles are the files for Relations in the base directory
○ Base Directory contains Relations
○ Relations stored on Disk as 1GB segments
○ Each 1GB Datafile is made up of several 8KB Pages that are allocated as needed
○ Segments are automatically added unlike Oracle
Create a database ( default tablespace is pg_default )
$ psql -c "CREATE DATABASE sample "
create database sample owner user1 tablespace table1
psql -U postgres
show data_directory
SHOW PORT
\q
Get the datid for a database (the examples below use a database named percona) and see if it exists in the base directory
$ psql -c "select datid, datname from pg_stat_database where datname = 'percona'"
Create a schema named: scott
$ psql -d percona -c "CREATE SCHEMA scott"
Create a table named: employee in scott schema
$ psql -d percona -c "CREATE TABLE scott.employee(id int PRIMARY KEY, name varchar(20))"
Locate the file created for the table: scott.employee in the base directory
$ psql -d percona -c "select pg_relation_filepath('scott.employee')"
Check the size of the table in the OS and value of parameter: block_size
$ psql -c "show block_size"
INSERT a record in the table and see the size difference
$ psql -d percona -c "INSERT INTO scott.employee VALUES (1, 'frankfurt')"
INSERT more records and check the size difference
$ psql -d percona -c "INSERT INTO scott.employee VALUES (generate_series(2,1000), 'junk')"
Connect to service :
psql -p 5444 -d edb -U enterprisedb
Check Size of Database :
$ select datname , oid from pg_database ;
$ select pg_database_size ('oid or name ') ;
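A handy variant (just an example query built on the same function) lists every database with a human-readable size:
select datname, pg_size_pretty(pg_database_size(datname)) as size
from pg_database
order by pg_database_size(datname) desc;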
Tablespaces :
In Postgres, a tablespace is a directory on disk.
Each user-defined tablespace has a symbolic link under $PGDATA/pg_tblspc.
○ Can be used to move Table and Indexes to different disks/locations
○ Helps distributing IO
● Steps to create tablespace in PostgreSQL
● Step 1: Create a directory for the tablespace
▪ $ mkdir -p /tmp/tblspc_1
$ chown postgres:postgres /tmp/tblspc_1
$ chmod 700 /tmp/tblspc_1
● Step 2: Create tablespace using the new directory
▪ $ psql -c "CREATE TABLESPACE tblspc_1 LOCATION '/tmp/tblspc_1'"
● Step 3: Create a table in the new table-space
▪ $ psql -d percona -c "CREATE TABLE scott.foo (id int) TABLESPACE tblspc_1"
SELECT spcname , pg_tablespace_location(oid) from pg_tablespace ;
\db+
# to set tablespace
set default_tablespace= space1 ;
-- pg_global tablespace corresponds to the PGDATA/global directory
-- pg_global is used to store cluster-wide tables and shared system catalog
-- pg_default tablespace corresponds to the PGDATA/base directory
-- pg_default is used to store databases and relations
Enable Archiving
$ psql
ALTER SYSTEM SET archive_mode TO 'ON';
ALTER SYSTEM SET archive_command TO 'cp %p /var/lib/pgsql/archive/%f';
$ pg_ctl -D $PGDATA restart -mf
Switch a WAL and see if the WAL is safely archived…
$ psql -c "select pg_switch_wal()"
If archiving has been enabled and the archive_command failed,
● the WAL segment for which the archiving failed will not be removed from pg_wal or pg_xlog
● an empty wal_file_name.ready file is generated in the archive_status directory
● the background process archiver attempts to archive the failed WAL segment until it succeeds
● there is a chance that the pg_wal directory gets filled up and the database no longer accepts connections
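To check whether archiving is keeping up, the pg_stat_archiver view (available since PostgreSQL 9.4) shows counts plus the last archived and last failed WAL files, for example:
select archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal, last_failed_time
from pg_stat_archiver;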
Users and Roles :
-- public is the default schema. By default all users have CREATE and USAGE rights on the public schema
-- pg_catalog contains the system tables
-- Users and roles are essentially the same in Postgres; a user is simply a role with the LOGIN attribute
-- postgres is the superuser
-- Users are global to the cluster, while schemas belong to a specific database
● A read_only Role that only has SELECT, USAGE privileges on Schema: scott
○ CREATE ROLE scott_read_only;
GRANT SELECT ON ALL TABLES IN SCHEMA scott TO scott_read_only;
GRANT USAGE ON SCHEMA scott TO scott_read_only;
● A read_write Role that only has SELECT, INSERT, UPDATE, DELETE privileges on Schema: scott
○ CREATE ROLE scott_read_write;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA scott TO
scott_read_write;
GRANT USAGE ON SCHEMA scott TO scott_read_write;
● Create a User and assign either read_only or read_write role
○ CREATE USER pguser WITH LOGIN ENCRYPTED PASSWORD 'pg123pass';
GRANT scott_read_only to pguser;
ALTER USER pguser WITH CONNECTION LIMIT 20;
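One caveat: the grants above only cover tables that already exist in the schema. As a sketch (assuming the same scott schema and roles), ALTER DEFAULT PRIVILEGES makes future tables covered automatically; note it only applies to objects created by the role running the command unless FOR ROLE is specified:
ALTER DEFAULT PRIVILEGES IN SCHEMA scott GRANT SELECT ON TABLES TO scott_read_only;
ALTER DEFAULT PRIVILEGES IN SCHEMA scott GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO scott_read_write;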
-- To change User password
ALTER USER postgres PASSWORD '<new-password>';
or
ALTER USER username WITH ENCRYPTED PASSWORD 'password';
-- Login to db1 database from postgres user into public schema
psql -p 5432 db1 postgres
############################################
Handling Slow Queries
#######################################
1) Enable slow query logging (log_min_duration_statement)
ALTER DATABASE pgbench SET log_min_duration_statement=0;
or
It is also possible to enable this globally by adding to PostgreSQL configuration and then reload config:
log_min_duration_statement = 0
SELECT pg_reload_conf();
This enables logging of all queries across all of the databases in your PostgreSQL instance. If you do not see any logs, you may want to enable logging_collector = on as well. Global logging also includes all of the traffic coming to the PostgreSQL system tables, which makes the log noisier, so for our purposes let's stick to database-level logging.
2) Checking Explain Plan :
EXPLAIN SELECT abalance FROM pgbench_accounts WHERE aid = 3344333;
EXPLAIN ANALYZE SELECT abalance FROM pgbench_accounts WHERE aid = 3344333;
3) Pg_stat_statements
pg_stat_statements is an extension that collects execution statistics for the different kinds of queries the server runs.
If you do not have pg_stat_statements enabled, you can enable it in the standard way, either via the configuration file:
shared_preload_libraries = 'pg_stat_statements'
Or you can enable it via PostgreSQL command line:
CREATE EXTENSION pg_stat_statements;
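Once the extension is in place, a typical example query for finding the most expensive statements looks like this (the timing column is total_time on PostgreSQL 12 and older, total_exec_time from PostgreSQL 13):
select substr(query, 1, 60) as query, calls,
       round(total_exec_time::numeric, 2) as total_ms, rows
from pg_stat_statements
order by total_exec_time desc
limit 10;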
############################################
Startup and Shutdown
#######################################
It is not possible to stop an individual database. You can disallow connections via ALTER DATABASE, or (selectively) disallow new connections via pg_hba.conf.
$ initdb
Success.
You can now start the database server using:
postgres -D /u/pg/data
or
pg_ctl -D /u/pg/data -l logfile start
Restart Edb service : systemctl { start | stop | restart } edb-as-11
Shutdown Modes in PostgreSQL
● -ms (Smart Mode - the default up to PostgreSQL 9.4)
○ Waits for all connections to exit and does not allow new transactions
○ Committed transactions applied to Disk through a CHECKPOINT before shutdown
○ May take more time on busy systems
$ pg_ctl -D $PGDATA stop -ms
● -mf (Fast Mode - the default from PostgreSQL 9.5, recommended on busy systems)
○ Closes/Kills all the open transactions and does not allow new transactions. SIGTERM is sent to server processes
to exit promptly
○ Committed transactions applied to Disk through a CHECKPOINT before shutdown
○ Recommended on Busy Systems
$ pg_ctl -D $PGDATA stop -mf
● -mi (Immediate Mode - Forced and Abnormal Shutdown during Emergencies)
○ SIGQUIT is sent to all the processes to exit immediately, without properly shutting down
○ Requires Crash Recovery after Instance Start
○ Recommended in Emergencies
$ pg_ctl -D $PGDATA stop -mi
or
$ pg_ctl -D $PGDATA stop -m immediate
Check Status of Postgres cluster :
$ pg_ctl status -D $PGDATA
Starting and Stopping a PostgreSQL
PostgreSQL can be stopped and started from command line using pg_ctl
○ Starting PostgreSQL
▪ pg_ctl -D $PGDATA start
○ Stopping PostgreSQL
▪ pg_ctl -D $PGDATA stop
################################################
View/Modify Parameters in postgresql.conf
################################################
• Use show to view a value set to a parameter
$ psql -c "show work_mem"
• To see all the settings, use show all
$ psql -c "show all"
• Modifying a parameter value by manually editing the postgresql.conf file
$ vi $PGDATA/postgresql.conf
• Use ALTER SYSTEM to modify a parameter
$ psql -c "ALTER SYSTEM SET archive_mode TO ON"
$ pg_ctl -D $PGDATA restart -mf
• Use reload with the following syntax to bring changes into effect for parameters that do not need a RESTART
$ psql -c "select pg_reload_conf()"
Or
$ pg_ctl -D $PGDATA reload
pg_controldata
postgresql.conf File
-- Holds Parameters used by a cluster
-- Parameter names are case-insensitive; string values can be case sensitive
-- Normally stored in a data directory
-- initdb installs default copy
-- some parameters only take effect on server restart ( pg_ctl restart )
-- # used for comments
-- one parameter per line
Ways to set server parameters
-- some parameters can be changed for the current session using the SET command
-- some parameters can be changed at the user level using "ALTER USER"
-- some parameters can be changed at the database level using "ALTER DATABASE"
-- the "SHOW" command shows the current settings
-- the pg_settings and pg_file_settings catalog views list settings information
eg : select name , setting from pg_settings
The ALTER SYSTEM command writes settings to postgresql.auto.conf
################################################
Setting Environmental Variables :
################################################
PGDATA : should point to the default data cluster directory
PGPORT : should point to the port on which the database cluster is running
PGUSER : specifies the default database user name
PGDATABASE : specifies the default database
SEARCH_PATH :
PostgreSQL has a great feature: schemas. You can have one database with multiple schemas, which is a really nice solution for separating the data of different applications. Each application can use its own schema, and they can also share some schemas between them.
You can set the default search_path at the database level:
ALTER DATABASE <database_name> SET search_path TO schema1,schema2;
Or at the user or role level:
ALTER ROLE <role_name> SET search_path TO schema1,schema2;
ALTER ROLE <role_name> IN DATABASE <db_name> SET search_path TO schema1,schema2;
SHOW search_path;
################################################
MVCC: Multi-Version Concurrency Control
################################################
● Data consistency
● Prevents viewing inconsistent data
● Readers and Writers do not block each other
● No Rollback segments for UNDO
● UNDO management is within tables
● Each tuple carries the minimum and maximum transaction IDs (xmin/xmax) that are permitted to see it
● A SELECT effectively behaves as if it executed WHERE
xmin <= txid_current() AND (xmax = 0 OR txid_current() < xmax)
(see the example after this list)
● PostgreSQL
○ Maintains UNDO within the table itself through versions - old and new row versions
○ Transaction IDs are used to identify which version a query can use
○ A background process deletes old row versions explicitly
○ No additional writes to a separate UNDO storage in the event of writes
○ Row locks are stored on the tuple itself; there is no separate lock table
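The xmin/xmax values are exposed as hidden system columns on every table, so row versions can be observed directly. A small example, assuming the scott.employee table created earlier in these notes:
select xmin, xmax, id, name from scott.employee limit 3;
update scott.employee set name = 'frankfurt_new' where id = 1;
select xmin, xmax, id, name from scott.employee where id = 1;   -- the new row version carries a new xmin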
############################################
VACUUM / AUTOVACUUM
############################################
● Live Tuples : Tuples that are inserted or up to date, and can be read or modified
● Dead Tuples : Tuples that have been changed (updated/deleted) and are unavailable to any future transactions
● Continuous transactions may leave behind a number of dead rows; this space can instead be re-used by future transactions
● VACUUM in PostgreSQL cleans up the dead tuples and records the reclaimed space in the free space map
●Transaction ID (xmax) of the deleting transaction must be older than the oldest transaction still active in
PostgreSQL Server for vacuum to delete that tuple ( i.e. xmax < oldest_active_txid )
●If xmax of a tuple is 100 and xact_committed = true and the oldest transaction id that is still active is 99,
then vacuum cannot delete that tuple.
● Autovacuum in PostgreSQL automatically runs VACUUM on tables as a background process
●Autovacuum is also responsible to run ANALYZE that updates the statistics of a Table.
● To start autovacuum, you must have the parameter autovacuum set to ON
● Background Process : Stats Collector tracks the usage and activity information
● PostgreSQL identifies the tables needing vacuum or analyze depending on certain parameters
● Parameters needed to enable autovacuum in PostgreSQL are:
autovacuum = on # (ON by default)
track_counts = on # (ON by default)
● An automatic vacuum or analyze runs on a table depending on certain thresholds, computed as shown below
Autovacuum VACUUM
○Autovacuum VACUUM threshold for a table =
autovacuum_vacuum_scale_factor * number of tuples + autovacuum_vacuum_threshold
○If the actual number of dead tuples in a table exceeds this effective threshold, due to updates and
deletes, that table becomes a candidate for autovacuum vacuum
Autovacuum ANALYZE
○Autovacuum ANALYZE threshold for a table =
autovacuum_analyze_scale_factor * number of tuples + autovacuum_analyze_threshold
○Any table with a total number of inserts/deletes/updates exceeding this threshold since last
analyze is eligible for an autovacuum analyze
●autovacuum_vacuum_scale_factor or autovacuum_analyze_scale_factor: Fraction of the table
records that will be added to the formula. For example, a value of 0.2 equals to 20% of the table records
●autovacuum_vacuum_threshold or autovacuum_analyze_threshold: Minimum number of obsolete
records or dml’s needed to trigger an autovacuum
●Let’s consider a table: foo.bar with 1000 records and the following autovacuum parameters
autovacuum_vacuum_scale_factor = 0.2
autovacuum_vacuum_threshold = 50
autovacuum_analyze_scale_factor = 0.1
autovacuum_analyze_threshold = 50
●Table : foo.bar becomes a candidate for autovacuum VACUUM when,
Total number of Obsolete records = (0.2 * 1000) + 50 = 250
●Table : foo.bar becomes a candidate for autovacuum ANALYZE when,
Total number of Inserts/Deletes/Updates = (0.1 * 1000) + 50 = 150
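To see which tables are approaching these thresholds, the pg_stat_user_tables view (fed by the stats collector) reports live/dead tuple counts and the last autovacuum/analyze times; an example query:
select relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
from pg_stat_user_tables
order by n_dead_tup desc
limit 10;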
Manual Vacuuming :
VACUUM table_name ;
VACUUM FULL table_name ;   ( VACUUM FULL rewrites the whole table and takes an exclusive lock, so use it sparingly )
Autovacuum Parameters
select name , setting from pg_settings where name like 'autovacuum%';
vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples
analyze threshold = analyze base threshold + analyze scale factor * number of tuples
############################################
Reindexing
############################################
-- REINDEX rebuilds an index using the data stored in the index's table
-- This avoids index bloat
-- \h REINDEX will show all reindex options
-- REINDEX SYSTEM rebuilds all indexes on the system catalogs of the current database
############################################
Backup and Restore
#######################################
● PostgreSQL provides native backup tools for both Logical and Physical backups.
● Backups similar to mysqldump and Xtrabackup are automatically included with Community PostgreSQL
● Backups like RMAN in Oracle may be achieved using Open Source tools like pgBackRest and pgBarman
○ Logical Backups
■ pg_dump (Both Custom(Compressed and non human-readable) and Plain Backups)
■ pg_restore (To restore the custom backups taken using pg_dump)
■ pg_dumpall (To backup Globals - Users and Roles)
■ Logical Backups cannot be used to setup Replication and perform a PITR
■ You cannot apply WAL’s after restoring a Backup taken using pg_dump
pg_dump options
-a : Data only . No data definition
-s : Data Definition only
-n (schema) : Dump specified schema
-t ( table) : Dump Specified Table only
-f ( file name ) : Dump data to specified file
-Fp : Dump in plain text ( default )
-Ft : Dump in tar format
-Fc : Dump in compressed format
Eg
pg_dump -U postgres -d nano > /tmp/backup ( backup will be created in text format )
Let’s use pgbench to create some sample tables
$ pgbench -i percona (Initialize)
$ pgbench -T 10 -c 10 -j 2 percona (load some data)
● Use pg_dump to backup the DDL (schema-only) of database: percona
$ pg_dump -s percona -f /tmp/percona_ddl.sql
● Use pg_dump to backup a table (with data) using custom and plain text format
$ pg_dump -Fc -t public.pgbench_history -d percona -f /tmp/pgbench_history
$ pg_dump -t public.pgbench_branches -d percona -f /tmp/pgbench_branches
● Create another database and restore both tables using pg_restore and psql
$ psql -c "CREATE DATABASE testdb"
$ pg_restore -t pgbench_history -d testdb /tmp/pgbench_history
$ psql -d testdb -f /tmp/pgbench_branches
pg_dumpall
○ Can dump all the databases of a cluster into a script file
○ Use psql to restore the backup taken using pg_dumpall
○ Can be used to dump global objects such as ROLES and TABLESPACES
● To dump only Globals using pg_dumpall, use the following syntax
$ pg_dumpall -g > /tmp/globals.sql
● To dump all databases (or entire Cluster), use the following syntax
$ pg_dumpall > /tmp/globals.sql
We can also combine pg_dump with Unix tools:
eg : pg_dump dbname | gzip > filename.gz
pg_dump -U postgres -d nano | gzip > /tmp/backup.gz ( backup will be created as a gzipped text dump )
pg_dump -U postgres -d nano | split -b 1k - /tmp/backup ( backup will be created as split files /tmp/backupaa, /tmp/backupab, ... )
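To restore these compressed or split dumps, the same Unix tools can be chained back into psql (a sketch; replace nano with your target database):
gunzip -c /tmp/backup.gz | psql nano
cat /tmp/backup* | psql nano ( reassembles the split dump and restores it )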
○ Physical Backups
■ pg_basebackup : File System Level & Online Backup, similar to Xtrabackup for MySQL
■ Useful to build Replication and perform PITR
■ This Backup can only use one process and cannot run in parallel
■ Explore Open Source Backup tools like : pgBackRest, pgBarman and WAL-e for more features like Xtrabackup
Command line options for pg_basebackup
$ pg_basebackup --help
-D --> Target Location of Backup
-c fast --> Issues a fast checkpoint to start the backup earlier
-Ft --> Tar format. Use -Fp for plain
-v --> Print the Backup statistics/progress.
-U --> A User who has Replication Privilege.
-W --> forcefully ask for password of replication User above. (Not mandatory)
-z --> Compresses the Backup
-R --> Creates a recovery.conf file that can be used to setup replication
-P --> Shows the progress of the backup
-l --> Creates a backup_label file
pg_basebackup -U postgres -p 5432 -h 127.0.0.1 -D /tmp/backup_11052018 -Ft -z -Xs -P -R -l backup_label
Taking Consistent Backup using pg_basebackup
1) Enable WAL archiving for point-in-time recovery
wal_level = archive
archive_mode = on
psql -c "select * from pg_settings where name = 'wal_level';"
psql -c "show wal_level;"
psql -c "select name, setting, sourcefile, sourceline from pg_settings where name = 'wal_level';"
psql -c "show config_file;"
grep wal_level /data/pgsql/9.5/data/postgresql.conf
2) Take the base backup (pg_basebackup issues a checkpoint via pg_start_backup internally and copies the contents of the data directory)
pg_basebackup -U psql -h 10.0.0.110 -x -D /pgbase/
The script below takes a backup of the PostgreSQL cluster on a daily basis and removes backup files once they reach 7 days of age.
#Backup Dir
#Make sure this is a dedicated mount point to PostgreSQL Backups
#Don't put a trailing / in the path
#Backup Details
backup_dir=/backup
export PGPASSWORD="Nijam@1234"
export PGPORT="5432"
echo -e "\n\nBackup Status: $(date +"%d-%m-%y")" >> $backup_dir/Status.log
echo -e "-----------------------" >> $backup_dir/Status.log
echo -e "\nStart Time: $(date)\n" >> $backup_dir/Status.log
/usr/lib/postgresql/9.6/bin/pg_basebackup -U psql -w -D $backup_dir/PostgreSQL_Base_Backup_$(date +"%d-%m-%y") -l "`date`" -P -F tar -z -R &>> $backup_dir/Status.log
echo -e "\nEnd Time: $(date)" >> $backup_dir/Status.log
#Auto Deletion for Backups
#Value 7 for retention_duration will keep 8 days backups
#retention_duration=7
#find $backup_dir/PostgreSQL_Base_Backup* -type d -mtime +$retention_duration -exec rm -rv {} \;
The documents below are relevant to restoring a base backup:
https://dbsguru.com/restore-backup-using-pg_basebackup-postgresql/
https://www.percona.com/blog/2018/12/21/backup-restore-postgresql-cluster-multiple-tablespaces-using-pg_basebackup/
https://docs.google.com/document/d/1J8feIwMfXmKqOd4oXREFCnO9c7cwFoOT/edit?usp=sharing&ouid=108567388003714488593&rtpof=true&sd=true
#######################################
High Availability in PostgreSQL
#######################################
● Streaming Replication for PostgreSQL 9.x and above
○ WAL Segments are streamed to Standby/Slave and replayed on Slave
○ Not a Statement/Row/Mixed Replication like MySQL
○ This can be referred to as a byte-by-byte or Storage Level Replication
○ Slaves are always Open for Read-Only SQLs but not Writes
○ You cannot have different Schema or Data in a Master and a Slave in Streaming Replication
○ Allows Cascading Replication
○ Supports both Synchronous and Asynchronous Replication
○ Supports a Delayed Standby for faster PITR
● Logical Replication and Logical Decoding for PostgreSQL 10 and above
○ Allows for Replication of selected Tables using Publisher and Subscriber Model
○ Similar to binlog_do_db in MySQL, but no DDL Changes are replicated
○ Subscribers are also open for Writes automatically
○ Used in Data Warehouse environments that stores Data fetched from multiple OLTP Databases for Reporting, etc
Streaming Replication:
● Step 1: Create a user in Master with REPLICATION ROLE
CREATE USER replicator
WITH REPLICATION
ENCRYPTED PASSWORD 'replicator';
● Step 2: Parameters you should know while setting up SR
archive_mode: Must be set to ON to enable Archiving of WALs
wal_level: Must be set to "hot_standby" up to 9.5 and "replica" in later versions.
max_wal_senders: Must be set to 3 if you are starting with 1 Slave. For every Slave, you may add 2 wal senders.
wal_keep_segments: Number of WALs always retained in pg_xlog (Until PostgreSQL 9.6) or pg_wal (From
PostgreSQL 10)
archive_command: This parameter takes a shell command. It can be a simple copy command to copy the WAL
segments to another location or a Script that has the logic to archive the WALs to S3 or a remote Backup Server.
hot_standby: Must be set to ON on Standby/Replica and has no effect on the Master. However, when you setup
your Replication, parameters set on Master are automatically copied. This parameter is important to enable READS
on Slave. Else, you cannot run your SELECTS on Slave.
● Step 3: Set the parameters that are not set already
ALTER SYSTEM SET wal_keep_segments TO '50';
select pg_reload_conf();
● Step 4: Add an entry to pg_hba.conf of Master to allow Replication connections from Slave
Default location of pg_hba.conf is the Data Directory
$ vi pg_hba.conf
Add the following line between >>>>> and <<<<<< to the end of the pg_hba.conf file
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
host replication replicator 192.168.0.28/32 md5
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Replace the IP address(192.168.0.28) with your Slave IP address
● Step 5: Give a SIGHUP or RELOAD
$ pg_ctl -D $PGDATA reload
● Step 6: Use pg_basebackup to backup of your Master data directory to the Slave data directory
$ pg_basebackup -U replicator -p 5432 -D /tmp/slave -Fp -Xs -P -R
● Step 7: Change the port number of your slave if you are creating the replication in the same server
for demo
$ echo "port = 5433" >> /tmp/slave/postgresql.auto.conf
● Step 8: Start your Slave
$ pg_ctl -D /tmp/slave start
● Step 9: Check the replication status from Master using the view : pg_stat_replication
select * from pg_stat_replication;
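Beyond eyeballing pg_stat_replication, a hedged example of measuring how far a slave lags (column and function names shown are for PostgreSQL 10+; 9.x uses replay_location and pg_xlog_location_diff instead):
select client_addr, state, sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as replay_lag_bytes
from pg_stat_replication;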
################################################
Applying Minor Version Patch
################################################
pg_upgrade has been around for a very long time as a tool for upgrading between major versions of PostgreSQL. It is not required for minor version upgrades, so moving from 11.9 to 11.13, for example, only needs the new binaries and a restart of the cluster.
Follow the below steps to perform a minor version upgrade:
1) Check the current PostgreSQL version
select version();
2) Wait for all replicas to catch up
3) Take the backup of PostgreSQL cluster: Use pg_dumpall for logical backup and pg_basebackup for physical backup.
pg_dumpall -U postgres -W -f /u01/backup/dumpall.sql
ls -l /u01/backup/dumpall.sql
4) Stop the PostgreSQL cluster
/usr/pgsql-9.5/bin/pg_ctl -D /var/lib/pgsql/9.5/data stop
systemctl stop edb-as-9
There are 3 types of shutdown
1) Smart
2) Fast ( Default )
3) Immediate
5) Run yum update to update new binaries
[root@test /]#yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[root@test /]#yum install postgresql12-server
Install the below packages on linux server.
[On RHEL/CentOS]
# yum install gcc*
# yum install zlib-devel*
# yum install readline-devel*
[On Debian/Ubuntu]
# apt install gcc*
# apt install zlib1g-dev*
# apt install libreadline6-dev*
[On SUSE Linux ]
# zypper in gcc*
# zypper in zlib1g-dev*
# zypper in libreadline6-dev*
# zypper in zlib*
Install the new version of postgres server as root user.
yum check-update <package_name>
yum update <package_name>
yum update edb*
[root@test-machine01 ~]# yum check-update postgresql13*
[root@test-machine01 ~]# yum list postgresql13*
[root@test-machine01 ~]# yum update postgresql13*
6) Start PostgreSQL Cluster and check Version
systemctl start edb-as-9
################################################
Killing Session
################################################
Note down the PID of the session that you want to kill
select * from pg_stat_activity;
SELECT pid, state, backend_start, substr(query, 0, 100) q
FROM pg_stat_activity
WHERE backend_type = 'client backend'
ORDER BY backend_start;
Kill The Session
select pg_terminate_backend(pid)
from pg_stat_activity
where pid = '18765';
-- pg_cancel_backend(pid) cancels only the running query and leaves the session connected
SELECT pg_cancel_backend(18765);
################################################
key views
################################################
PG_TABLES
PG_ROLES
PG_USER
pg_views
role_table_grants
PG_TABLESPACE
pg_stat_activity
pg_locks
pg_config
################################################
#Finding Blocking Session and Killing It
################################################
1) Find Blocking PID
select pid,
usename,
pg_blocking_pids(pid) as blocked_by,
query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
2) Kill the blocking PID (generate the terminate statements, replacing 613 with the blocking PID found above)
SELECT 'SELECT pg_terminate_backend(' || pid || ');'
FROM pg_stat_activity
WHERE pid = ANY (pg_blocking_pids(613));
################################################
# Checking for long-running sessions
################################################
SELECT 'SELECT pg_terminate_backend(' || pid || ');'
FROM pg_stat_activity
WHERE backend_type = 'client backend'
AND pid != pg_backend_pid()
AND backend_start < NOW() - '10 seconds'::interval;
Adjust '10 seconds' as appropriate.
################################################
Monitoring Commands
################################################
# To View Control information For a database Cluster
pg_controldata -D datadir
pg_controldata -D datadir -V
# psql \d (describe) commands
\d ( object information )
\dt+ ( size of table )
\dn ( schema information )
\dn+ ( schema with permission )
\l ( to see databases )
\conninfo ( current connection information )
select current_database();
select current_schema ;
Show commands :
show data_directory
show work_mem
# To see the WAL sizing parameters in MB (on PostgreSQL 9.5/9.6 the setting is stored in 16MB segment units, hence the * 16)
select name, setting, setting::int * 16 || 'MB' AS setting_in_mb
from pg_settings
where name in ('min_wal_size', 'max_wal_size');
#Checking Statistics Details
select relname , reltuples from pg_class where relname='testanalyze';
#Check Size of Database / Tables
edb-psql dbname username
SELECT pg_size_pretty( pg_database_size('dbname') );
SELECT pg_size_pretty( pg_total_relation_size('tablename') );
#Check File Path for Table
select pg_relation_filepath('sales') ;
select * from pg_database;
select current_database();
select current_user;
SELECT version();
select * from pg_tablespace;
################################################
Controlling output of query :
################################################
save all query results and output to a file
-o filename
\o filename
\out filename
output all query results and output to a bash command
\o | command
\out | command
re-run the previous query repeatedly at a fixed interval
\watch [seconds]