
dCache storage cluster layout

dCache is a high-throughput storage service, typically installed as a cluster for better aggregate data input and output.

A minimal dCache cluster can run on a single server, but if this does not scale to your needs the cluster should be expanded. If you are setting up a small storage cluster that will later expand, it may be wise to arrange the services in a structure that will scale most easily in the long term. Once the cluster contains three or more computers, the cluster manager should consider reorganizing the system to decompose the central services across three separate nodes.

Admin nodes

With three identical servers, we suggest splitting the centralized services across the three hosts based upon their load and their importance:

  • admin
  • pnfs
  • srm

These servers should be the most powerful nodes in your storage cluster.

This translates into the following site-info.def for a site whose services are spread across these three nodes.

# admin remains the same
DCACHE_ADMIN=ce-goegrid.$MY_DOMAIN

# Pools remain the same 
DCACHE_POOLS="$DCACHE_ADMIN:/disk0 $DCACHE_ADMIN:/disk1 \
$DCACHE_ADMIN:/disk2 $DCACHE_ADMIN:/disk3 \
se3-goegrid.$MY_DOMAIN:/disk0 se3-goegrid.$MY_DOMAIN:/disk1 \
se3-goegrid.$MY_DOMAIN:/disk2 se3-goegrid.$MY_DOMAIN:/disk3"

DCACHE_PNFS_SERVER="se2-goegrid.$MY_DOMAIN"
DCACHE_DOOR_SRM="se3-goegrid.$MY_DOMAIN"

# The gsiftp door goes on the srm node so that, to end users, the
# installation looks like a single node and is simpler to understand.

DCACHE_DOOR_GSIFTP="se3-goegrid.$MY_DOMAIN"
DCACHE_DOOR_GSIDCAP="se3-goegrid.$MY_DOMAIN"

# Leave the following protocols disabled unless experiments ask
# for them to be switched on.

DCACHE_DOOR_DCAP="off"
DCACHE_DOOR_XROOTD="off"

Please note that DCACHE_DOOR_LDAP selects the host on which the information system is enabled, if it is not to be installed on the admin node:

DCACHE_DOOR_LDAP="se-goegrid.$MY_DOMAIN"

Pool Nodes

Pool nodes should run their storage pools on XFS on Linux or ZFS on Solaris. Pool nodes can be added to a dCache storage cluster dynamically; no changes should be needed on the administrative and central service nodes when a pool node is added. Once a pool node is part of a storage cluster, however, it must be decommissioned correctly before it is removed.
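As a rough sketch, preparing one XFS pool partition on a Linux pool node might look like the following, assuming a dedicated block device /dev/sdb1 and the /disk0 mount point used in the example above (the device name is illustrative):

# Create an XFS filesystem on the pool partition (example device name)
mkfs.xfs /dev/sdb1
mkdir -p /disk0
mount /dev/sdb1 /disk0
# Make the mount persistent across reboots
echo "/dev/sdb1 /disk0 xfs defaults 0 0" >> /etc/fstab

The pool is then announced by adding host:/disk0 to DCACHE_POOLS in site-info.def and rerunning YAIM on the new pool node.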

Moving nodes and roles across your dCache cluster

SRM doors cannot be moved, because their host name is stored with every file entry in the catalogs on the grid, which do not separate the host name from the file path.

Admin nodes (where the Location Manager service runs) can be moved, but the configuration of all dCache nodes then needs updating.

PNFS nodes can be moved, but again the configuration of all dCache nodes needs updating, and the PNFS database needs to be backed up and moved to the new host.
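A minimal sketch of backing up and moving the PNFS databases, assuming they are held in PostgreSQL under the postgres user and that new-pnfs-host is the destination (host name and dump path are illustrative):

# On the old pnfs host, with PNFS stopped: dump all PostgreSQL databases
su - postgres -c "pg_dumpall > /var/tmp/pnfs-databases.sql"
# Copy the dump to the new host
scp /var/tmp/pnfs-databases.sql new-pnfs-host:/var/tmp/
# On the new host: restore into a freshly initialized PostgreSQL
su - postgres -c "psql -f /var/tmp/pnfs-databases.sql postgres"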

Installing Nodes

All three central dCache server nodes should be regarded as admin nodes from the perspective of YAIM for dCache; its meta packages will bring in the PNFS and PostgreSQL databases. PNFS itself is only needed on the pnfs server.

yum install desy-SE_dcache_admin_postgres

On the information node (your extra SL4 node), install:

yum install desy-SE_dcache_info

Important post-installation tasks

The following tasks are essential after dCache is set up; a rough sketch of them follows the list.

  • Backing up the PNFS Database.
  • Backing up the SRM database.
  • Checking the information system.
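As a rough sketch of these tasks, assuming both databases are held in PostgreSQL and that the SRM database is named dcache (database name, backup paths, and the LDAP port are assumptions to adapt to your setup):

# On the pnfs node: dump the PNFS databases to a dated backup file
su - postgres -c "pg_dumpall > /var/backups/pnfs-$(date +%F).sql"
# On the srm node: dump the SRM database
su - postgres -c "pg_dump dcache > /var/backups/srm-$(date +%F).sql"
# Query the LDAP door to check that the information system answers
ldapsearch -x -H ldap://se-goegrid.gwdg.de:2170 -b o=grid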