Last modified on 02/21/08 18:51:53

Multiple ReplicaManager instance HOW-TO

This page describes how to set up a configuration in which more than one replicaManager is enabled in dCache.

This makes it possible to replicate files in several pool groups, which is useful when you want both replication and quotas for different storage areas (each with its own quota). Some typical use cases can be found in this talk: http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=2&confId=21405

To do this, first follow the usual procedure for "linking" a storage area to a specific group of pools.

For example, you can follow these instructions:

#####################
+ Map the cms storage areas to the pool groups:

http://trac.dcache.org/trac.cgi/wiki/manuals/poolSetup1

  • On PNFS:
 # cd /pnfs/ba.infn.it/data/cms/store/user/donvito/
 # echo "StoreName   cms" >".(tag)(OSMTemplate)"   
 # echo user1 > ".(tag)(sGroup)" 

 # cd /pnfs/ba.infn.it/data/cms/store/important_files/ 
 # echo "StoreName   cms" >".(tag)(OSMTemplate)"       
 # echo important > ".(tag)(sGroup)" 
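On real PNFS the ".(tag)(...)" names are magic files interpreted by the namespace; on an ordinary filesystem the same commands simply create regular files, so you can dry-run the tagging steps in a scratch directory and check the tag contents before touching PNFS. A minimal sketch (the scratch paths are illustrative):

```shell
#!/bin/sh
# Dry-run of the PNFS tagging commands above in a scratch directory.
# On real PNFS these ".(tag)(...)" names are interpreted by the
# namespace; here they are ordinary files, so contents can be checked.
scratch=$(mktemp -d)
mkdir -p "$scratch/store/user/donvito" "$scratch/store/important_files"

cd "$scratch/store/user/donvito"
echo "StoreName   cms" > ".(tag)(OSMTemplate)"
echo user1 > ".(tag)(sGroup)"

cd "$scratch/store/important_files"
echo "StoreName   cms" > ".(tag)(OSMTemplate)"
echo important > ".(tag)(sGroup)"

# Verify the tags before repeating the steps on the PNFS mount.
cat "$scratch/store/user/donvito/.(tag)(sGroup)"      # user1
cat "$scratch/store/important_files/.(tag)(sGroup)"   # important
```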

Log in as administrator (i.e. in the dCache Admin Interface, see chap. 3.4 of the dCache Book) and add the pools to the two pool groups:

ssh -c blowfish -p 22223 -l admin admin.node.localdomain
(local) admin> cd PoolManager  
(PoolManager) admin> psu create pgroup user1
(PoolManager) admin> psu create unit -store cms:user1@osm
(PoolManager) admin> psu create ugroup user1
(PoolManager) admin> psu addto ugroup user1 cms:user1@osm
(PoolManager) admin> psu create link user1-link world-net user1
(PoolManager) admin> psu add link user1-link user1
(PoolManager) admin> psu set link user1-link -readpref=10 -writepref=10 -cachepref=10  
(PoolManager) admin> psu addto pgroup user1 pooluser_1   
(PoolManager) admin> psu addto pgroup user1 pooluser1_1  
(PoolManager) admin>  
(PoolManager) admin> psu create pgroup important
(PoolManager) admin> psu create unit -store cms:important@osm
(PoolManager) admin> psu create ugroup important
(PoolManager) admin> psu addto ugroup important cms:important@osm
(PoolManager) admin> psu create link important-link world-net important
(PoolManager) admin> psu add link important-link important
(PoolManager) admin> psu set link important-link -readpref=10 -writepref=10 -cachepref=10 
(PoolManager) admin> psu addto pgroup important poolresilient_1 
(PoolManager) admin> psu addto pgroup important poolresilient1_1
(PoolManager) admin> save 
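Since the same psu command sequence is typed twice (once per group), it can help to generate it with a small helper and review it before pasting it into the admin interface. The helper below is hypothetical (not part of dCache); it just parameterizes the session above by group name and pool list, and assumes the cms store naming used in this example:

```shell
#!/bin/sh
# Hypothetical helper: print the psu command sequence for a new pool
# group, so it can be reviewed and then pasted (or piped) into the
# admin interface, e.g.:
#   ./psu_cmds.sh | ssh -c blowfish -p 22223 -l admin admin.node.localdomain
psu_cmds() {
    group=$1; shift
    echo "cd PoolManager"
    echo "psu create pgroup $group"
    echo "psu create unit -store cms:$group@osm"
    echo "psu create ugroup $group"
    echo "psu addto ugroup $group cms:$group@osm"
    echo "psu create link $group-link world-net $group"
    echo "psu add link $group-link $group"
    echo "psu set link $group-link -readpref=10 -writepref=10 -cachepref=10"
    for pool in "$@"; do
        echo "psu addto pgroup $group $pool"
    done
    echo "save"
}

psu_cmds important poolresilient_1 poolresilient1_1
```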

###########################################

After this you should check that everything works as expected: files written to each directory should end up on the corresponding pools.

To test this you can:

# dccp /tmp/test_file /pnfs/ba.infn.it/data/cms/store/user/donvito/  

# cat /opt/d-cache/billing/YYYY/MM/DD/billing-YYYY-MM-DD

and

# dccp /tmp/test_file /pnfs/ba.infn.it/data/cms/store/important_files/

# cat /opt/d-cache/billing/YYYY/MM/DD/billing-YYYY-MM-DD
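Instead of reading the whole billing file by eye, you can grep it for the pool names of each group. The sketch below runs against a fake billing file with an illustrative line layout (the real format differs in detail, but the pool name appears on each transfer line, which is all the grep relies on):

```shell
#!/bin/sh
# Sketch: confirm that transfers into each directory landed on pools
# from the expected group, by grepping the billing file for the pool
# name. Shown here on a fake billing file; the line layout is
# illustrative, only the pool names matter for the grep.
billing=$(mktemp)
cat > "$billing" <<'EOF'
01.22 18:51:53 [pooluser_1] transfer /pnfs/.../donvito/test_file ok
01.22 18:52:10 [poolresilient_1] transfer /pnfs/.../important_files/test_file ok
EOF

# Files written under .../donvito should land on "pooluser*" pools:
grep donvito "$billing" | grep -q pooluser && echo "donvito -> pooluser OK"
# Files under .../important_files should land on "poolresilient*" pools:
grep important_files "$billing" | grep -q poolresilient && echo "important -> poolresilient OK"
```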

If this works, you can go on to configure the two replica manager cells.

Configure the first replica manager as usual, following the standard documentation: put the name of one of the two pool groups for which you want to enable replication into dCacheSetup, then run the install.sh script as usual.

To enable replication for the other pool group as well, do the following
(this procedure was tested on 1.8.0-X with X>12, but it should also work on 1.7):

1) copy "/opt/d-cache/jobs/replica" to "/opt/d-cache/jobs/replica1", and "replica.lib.sh" to "replica1.lib.sh" in the same directory.

2) copy "/opt/d-cache/config/replica.batch" to "/opt/d-cache/config/replica1.batch"

3) create a new DB, called for example "replicas1", by running (as the postgres user, not in a single semicolon-separated line, since createdb must run inside the postgres shell):

 # su - postgres
 $ createdb replicas1
 $ psql replicas1 < /opt/d-cache/etc/psql_install_replicas.sql

4) modify /opt/d-cache/config/replica1.batch replacing "replicas" with "replicas1"

5) change the pool group used in the "replica1.batch" file (set it to the second group, "important" in this example)
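Steps 1, 2, 4 and 5 are plain file copies and text substitutions, so they can be sketched in shell. The sketch below runs on dummy files in a scratch directory (replace $DCACHE with /opt/d-cache on a real node); the two-line batch file content and the sed patterns are stand-ins, assuming "replicas" and the pool-group name appear literally in the batch file:

```shell
#!/bin/sh
# Sketch of steps 1, 2, 4 and 5 on dummy files in a scratch directory.
# On a real node set DCACHE=/opt/d-cache; the batch-file content below
# is a stand-in for the real replica.batch.
DCACHE=$(mktemp -d)
mkdir -p "$DCACHE/jobs" "$DCACHE/config"
printf 'dbname=replicas\ngroup=user1\n' > "$DCACHE/config/replica.batch"
: > "$DCACHE/jobs/replica"
: > "$DCACHE/jobs/replica.lib.sh"

# 1) copy the job scripts
cp "$DCACHE/jobs/replica"        "$DCACHE/jobs/replica1"
cp "$DCACHE/jobs/replica.lib.sh" "$DCACHE/jobs/replica1.lib.sh"

# 2) copy the batch file
cp "$DCACHE/config/replica.batch" "$DCACHE/config/replica1.batch"

# 4) point the new batch file at the "replicas1" DB
sed -i 's/replicas/replicas1/' "$DCACHE/config/replica1.batch"

# 5) switch the pool group to the second group
sed -i 's/user1/important/' "$DCACHE/config/replica1.batch"

cat "$DCACHE/config/replica1.batch"
```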

6) add these three lines to dcache.batch:


set env -c broadcastSetupReplicaManager4 "register diskCacheV111.vehicles.PoolStatusChangedMessage replica1Manager"
set env -c broadcastSetupReplicaManager5 "register diskCacheV111.vehicles.PnfsModifyCacheLocationMessage replica1Manager"
set env -c broadcastSetupReplicaManager6 "register diskCacheV111.vehicles.PoolRemoveFilesMessage replica1Manager"


after this line:


set env -c broadcastSetupReplicaManager3 "register diskCacheV111.vehicles.PoolRemoveFilesMessage replicaManager"


and these lines:


set env -c broadcastSetupReplicaManager4 ""
set env -c broadcastSetupReplicaManager5 ""
set env -c broadcastSetupReplicaManager6 ""


after this line:


set env -c broadcastSetupReplicaManager3 ""


and these lines:


${broadcastSetupReplicaManager4}
${broadcastSetupReplicaManager5}
${broadcastSetupReplicaManager6}


after this line:


${broadcastSetupReplicaManager3}


7) start the dcache-core services as usual, then start the second replicaManager with:

/opt/d-cache/jobs/replica1 -domain=replica1Domain -logfile=/var/log/replica1.log start

At this point you will see that files in the "important" pools are replicated using only the pools in that group, and the same happens to files in the "user1" pools.

If you need more replicaManager processes for additional pool groups, repeat this procedure, changing the name of the cell, the DB and the job file each time.
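The renaming scheme generalizes mechanically. A small helper (hypothetical, for illustration only) lists the per-instance names; N=1 reproduces the replica1/replicas1/replica1Manager/replica1Domain names used on this page:

```shell
#!/bin/sh
# Hypothetical helper: list the names that a further replica manager
# instance N would use, following the renaming scheme of this page.
gen_names() {
    n=$1
    echo "job:    /opt/d-cache/jobs/replica$n"
    echo "batch:  /opt/d-cache/config/replica$n.batch"
    echo "db:     replicas$n"
    echo "cell:   replica${n}Manager"
    echo "domain: replica${n}Domain"
}

gen_names 1   # the names used on this page
gen_names 2   # a third instance follows the same pattern
```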

Last modified by GiacintoDonvito @ Tue Jan 22 2008