wiki:HandsOnPostgreSQL

The best place to look things up is the official PostgreSQL documentation (http://www.postgresql.org/docs/).

  1. Download PostgreSQL >= 8.3.0
  2. Untar the distribution
  3. cd postgresql-8.x.x
  4. ./configure --prefix=/opt/pgsql
  5. make
  6. su -c "make install"
  7. initdb -D /bigdisk/data (as the postgres user, see the sketch below)
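
The last step has to run as a dedicated postgres system user that owns the data directory. A minimal sketch, assuming the user name postgres and the directory from above (adjust to your setup):

# as root: create the system user and the data directory
useradd postgres
mkdir -p /bigdisk/data
chown postgres /bigdisk/data
chmod 700 /bigdisk/data

# initdb refuses to run as root, so switch to the postgres user
su - postgres -c "/opt/pgsql/bin/initdb -D /bigdisk/data"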

The directory /bigdisk/data contains the pg_xlog directory, where postgres stores its transaction logs (WAL). It is good practice to move this directory to another disk and set a symbolic link:

mv /bigdisk/data/pg_xlog /another/disk/pg_xlog
ln -s /another/disk/pg_xlog /bigdisk/data/pg_xlog

In addition, you need to tweak the profile of the postgres user:

PATH=$PATH:/opt/pgsql/bin:/opt/pgsql/sbin
LD_LIBRARY_PATH=/opt/pgsql/lib:$LD_LIBRARY_PATH
export PATH LD_LIBRARY_PATH

Enable TCP/IP (e.g. JDBC) connections:

in /bigdisk/data/postgresql.conf:

listen_addresses = '*'          # what IP address(es) to listen on; 
port = 5432                     # port number to bind
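
Remote clients additionally need a matching entry in /bigdisk/data/pg_hba.conf; the subnet and auth method below are only an example, replace them with the network of your clients:

host    all         all         192.168.1.0/24        md5

Reload the configuration afterwards (pg_ctl ... reload, as shown in the replication section below) or restart the server.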

Some tweaking:

Enable autovacuum in /bigdisk/data/postgresql.conf:

track_activities = on                   # replaces stats_command_string from pre-8.3 releases
track_counts = on                       # replaces stats_row_level; required by autovacuum

autovacuum = on
autovacuum_naptime = 600
autovacuum_vacuum_threshold = 1000
autovacuum_analyze_threshold = 500
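
A quick way to double-check that these values are active after a configuration reload (just a sanity check, not part of the original recipe):

psql -d postgres -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"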

Memory (I have a dedicated Linux server with 16GB of RAM):

in /bigdisk/data/postgresql.conf:

shared_buffers = 512MB                  # min 128kB or max_connections*16kB
temp_buffers = 128MB                    # min 800kB
work_mem = 128MB                        # min 64kB
maintenance_work_mem = 512MB            # min 1MB
max_stack_depth = 8MB                   # min 100kB
max_fsm_pages = 153600                  # min max_fsm_relations*16, 6 bytes each
random_page_cost = 2.0                  # relative to seq_page_cost (1.0)
effective_cache_size = 8GB

in /etc/sysctl.conf:

kernel.shmmax = 578715648
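
kernel.shmmax must be at least as large as the shared memory segment PostgreSQL requests at startup (dominated by shared_buffers; the ~552 MB above leaves headroom over the 512 MB configured). A short sketch of applying and verifying the limit on a standard Linux box:

# load the new value from /etc/sysctl.conf without rebooting
sysctl -p

# verify what the kernel reports
cat /proc/sys/kernel/shmmax
ipcs -l | grep "max seg size"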

Now you are ready to start your database:

pg_ctl -D /bigdisk/data -l /bigdisk/db.log start

Backup and Recovery
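
The same tools used for migration below also cover routine backups. A minimal sketch of a full-cluster dump and restore (paths are examples only):

# dump all databases, roles and tablespaces of the cluster
pg_dumpall > /bigdisk/backup/all.sql

# restore into a freshly initialized cluster
psql -f /bigdisk/backup/all.sql postgres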

Migration between releases

The least downtime can be achieved by installing the new server in a different directory and running both the old and the new servers in parallel, on different ports. Then you can use something like:

pg_dumpall -p 5432 | psql -d postgres -p 6543
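
This assumes the new server is already installed and running on port 6543; one way to bring it up there (prefix, data directory and port are assumptions):

# initialize and start the new installation on an alternate port
/opt/pgsql-new/bin/initdb -D /bigdisk/newdata
/opt/pgsql-new/bin/pg_ctl -D /bigdisk/newdata -o "-p 6543" -l /bigdisk/newdb.log start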

If you cannot or do not want to run two servers in parallel, you can do the backup step before installing the new version, bring down the server, move the old version out of the way, install the new version, start the new server, and restore the data. For example:

pg_dumpall > backup
pg_ctl stop
mv /opt/pgsql /opt/pgsql.old
cd ~/postgresql-new
gmake install
initdb -D /opt/pgsql/data
postgres -D /opt/pgsql/data
psql -f backup postgres

Replication with Slony-I

The following basic steps have to be done:

  1. Master and slave hosts have to be able to talk to each other. Add the following lines to ${PGDATA}/pg_hba.conf:

on master:

host    all         all         <slave ip>/32          trust

on slave:

host    all         all         <master ip>/32          trust

and activate changes (on master and slave):

pg_ctl -D /bigdisk/data -l /bigdisk/db.log reload
  2. copy the database schema
  3. create the replication set
  4. subscribe to the replication set
  5. start the replication processes

To make life easier, the following scripts can be used:

# slony_replica.conf
# 
#

# list of databases to be replicated
db="pnfsdb"

# prefix of the current replication set; useful when one host is a replica for many masters
db_prefix="h1repro"

# source database host
master=h1repro2.desy.de

# replica database host
slave=dcache6.desy.de

tables="t_admin t_data00 t_data02 t_data03 t_data04 t_data1 t_data96 t_data97 t_data98 t_data99 t_scratch t_user"
replica_user=postgres

#
# create_schema.sh
# 

. ./slony_replica.conf


createdb -h ${slave} ${db}

createlang -d ${db} -h ${master} plpgsql
createlang -d ${db} -h ${slave} plpgsql

pg_dump -h ${master} -s ${db} | psql -h ${slave} ${db}

#
# create_set.sh
#

. ./slony_replica.conf

cluster_id=1

echo "cluster name = ${db}_rep;"
echo "node 1 admin conninfo = 'dbname=${db} host=${master} port=5432 user=${replica_user}';" 
echo "node 2 admin conninfo = 'dbname=${db} host=${slave}  port=5432 user=${replica_user}';"

echo "init cluster ( id = ${cluster_id}, comment = '${db} DB replication cluster' );"
echo "create set ( id = ${cluster_id}, origin = 1, comment = '${db} DB replication set' );"

tid=1
for table_name in ${tables}
do
        echo "set add table ( set id = ${cluster_id}, origin = 1, id = ${tid}, full qualified name = 'public.${table_name}', comment = 'Table ${table_name}' );"
        tid=`expr ${tid} + 1`
done

node_id=2
echo "store node ( id = ${node_id}, comment = 'Node ${node_id}, ${slave}' );"
echo "store path ( server = 1, client = ${node_id}, conninfo = 'dbname=${db} host=${master} user=${replica_user}');"
echo "store path ( server = 2, client = 1, conninfo = 'dbname=${db} host=${slave} user=${replica_user}');"
echo "store listen ( origin = 1, provider = 1, receiver = ${node_id} );"
echo "store listen ( origin = ${node_id}, provider = ${node_id}, receiver = 1 );"

echo ""
echo ""

#
# subscribe_set.sh
#

. ./slony_replica.conf

cluster_id=1

echo "cluster name = ${db}_rep;"

echo "node 1 admin conninfo = 'dbname=${db} host=${master} port=5432 user=${replica_user}';"   
echo "node 2 admin conninfo = 'dbname=${db} host=${slave} port=5432 user=${replica_user}';"   

echo "subscribe set ( id = ${cluster_id} , provider = 1, receiver =  2, forward = no);"

echo ""
echo ""

#
# start_slon.sh
#

. ./slony_replica.conf

slon -d1  ${db}_rep "dbname=${db} user=postgres" > /databases/pgsql/logs/pnfsdb_replica.log 2>&1 &

and go!

./create_schema.sh
./create_set.sh | slonik
./subscribe_set.sh | slonik
./start_slon.sh # on master and slave
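
Once the slon daemons are running on both nodes, replication progress can be checked on the master via Slony-I's status view; the schema name _pnfsdb_rep follows from the cluster name ${db}_rep defined above (a sanity check, not part of the original scripts):

psql -d pnfsdb -c "SELECT * FROM _pnfsdb_rep.sl_status;"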

