Last modified on 07/13/07 09:38:07

VO Configuration for PPS at IN2P3-CC

This is a proposal for configuring dCache-SRM v2.2 to meet ATLAS and LHCb requirements. The configuration has not yet been tested or validated; this page will be updated as bugs are found. It was set up with help and advice from the dCache team.

PoolManager configuration

Basically, you have to create linkGroups, which allow the SrmSpaceManager to match pool groups with the space management configuration, in /opt/d-cache/config/PoolManager:

...
psu create linkGroup linkgroup-disk
psu set linkGroup attribute linkgroup-disk HSM=None
psu set linkGroup attribute linkgroup-disk VO=/lhcb/lcgprod
psu set linkGroup attribute linkgroup-disk /lhcb/lcgprodRole=*
psu set linkGroup attribute linkgroup-disk VO=/dteam
psu set linkGroup attribute linkgroup-disk /dteamRole=production
psu set linkGroup attribute linkgroup-disk /dteamRole=lcgadmin
psu set linkGroup attribute linkgroup-disk VO=lcgdteam
psu set linkGroup attribute linkgroup-disk lcgdteamRole=*
psu set linkGroup attribute linkgroup-disk VO=/atlas
psu set linkGroup attribute linkgroup-disk /atlasRole=production
psu set linkGroup attribute linkgroup-disk VO=/atlas/soft-valid
psu set linkGroup attribute linkgroup-disk /atlas/soft-validRole=production
psu addto linkGroup linkgroup-disk ...
...

In our configuration, all disk pools are shared among all VOs; you may instead define dedicated linkGroups per VO.
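If you go the dedicated route, an ATLAS-only linkGroup could look like the following sketch (the names linkgroup-atlas and atlas-link are hypothetical; adapt them to your pool setup):

psu create linkGroup linkgroup-atlas
psu set linkGroup attribute linkgroup-atlas HSM=None
psu set linkGroup attribute linkgroup-atlas VO=/atlas
psu set linkGroup attribute linkgroup-atlas /atlasRole=production
psu addto linkGroup linkgroup-atlas atlas-link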

gPlazma configuration

To activate the gPlazma cell on the HEAD NODE:

  • modify the file /opt/d-cache/etc/node_config
    gPlazmaService=yes
    
  • configure it to use the local module in /opt/d-cache/config/dCacheSetup
    useGPlazmaAuthorizationModule=true
    useGPlazmaAuthorizationCell=false
    

All other nodes should then be configured to contact the gPlazma cell:

  • in /opt/d-cache/config/dCacheSetup
    useGPlazmaAuthorizationModule=false
    useGPlazmaAuthorizationCell=true
    
  • and in /opt/d-cache/etc/node_config
    gPlazmaService=no
    

On the HEAD NODE, gPlazma was configured to use the gplazmalite-vorole-mapping and grid-mapfile plugins. Modify /opt/d-cache/etc/dcachesrm-gplazma.policy:

...
# Switches
saml-vo-mapping="OFF"
kpwd="OFF"
grid-mapfile="ON"
gplazmalite-vorole-mapping="ON"

# Priorities
saml-vo-mapping-priority="1"
kpwd-priority="3"
grid-mapfile-priority="4"
gplazmalite-vorole-mapping-priority="2"
...
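Each enabled plugin also needs to know where its input file lives. In the same policy file this is typically declared as below; the key names may differ between dCache versions, so check the template shipped with your release:

# gplazmalite-vorole-mapping plugin
gridVoRoleMapPath="/etc/grid-security/grid-vorolemap"
gridVoRoleStorageAuthzPath="/etc/grid-security/storage-authzdb"

# grid-mapfile plugin
gridMapFilePath="/etc/grid-security/grid-mapfile"
storageAuthzPath="/etc/grid-security/storage-authzdb"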

VOMS role mapping is then configured in /etc/grid-security/grid-vorolemap:

"*" "/dteam/Role=lcgadmin" lcgdteam
"*" "/ops" lcgdteam
"*" "/atlas" atlagrid
"*" "/lhcb" lhcbgrid

You may change the usernames to match your local configuration (e.g. atlas001, ...).
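The first field of each grid-vorolemap entry is a DN pattern, so besides the "*" wildcard you can map an individual certificate to its own account. For example (the DN and username below are made up for illustration):

"/C=FR/O=CNRS/OU=CC-IN2P3/CN=Some User" "/atlas/Role=production" atlassgm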

VO authorization is described in /etc/grid-security/storage-authzdb:

version 2.1
authorize lcgdteam read-write 3915 239 / / /
authorize aligrid read-write 3495 145 / / /
authorize atlagrid read-write 3327 124 / / /
authorize cmsgrid read-write 3033 119 / / /
authorize lhcbgrid read-write 3437 155 / / /

You may change the usernames and paths to match your local configuration. The paths should probably be restricted to something like:

version 2.1
authorize lcgdteam read-write 3915 239 / /pnfs/in2p3.fr/data/dteam /pnfs/in2p3.fr/data/dteam
authorize aligrid read-write 3495 145 / /pnfs/in2p3.fr/data/alice /pnfs/in2p3.fr/data/alice
authorize atlagrid read-write 3327 124 / /pnfs/in2p3.fr/data/atlas /pnfs/in2p3.fr/data/atlas
authorize cmsgrid read-write 3033 119 / /pnfs/in2p3.fr/data/cms /pnfs/in2p3.fr/data/cms
authorize lhcbgrid read-write 3437 155 / /pnfs/in2p3.fr/data/lhcb /pnfs/in2p3.fr/data/lhcb

Publication in PPS BDII

At the moment, the information is published statically; the LDIF file is derived from Maarten's example.
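As a rough sketch, a static GLUE 1.2 storage-area entry for ATLAS might look like the fragment below. The SE host ccsrm.in2p3.fr and the mds-vo-name branch are placeholders, and the exact attribute set depends on the GLUE schema version your PPS BDII expects:

dn: GlueSALocalID=atlas,GlueSEUniqueID=ccsrm.in2p3.fr,mds-vo-name=local,o=grid
objectClass: GlueSATop
objectClass: GlueSA
objectClass: GlueSAAccessControlBase
objectClass: GlueKey
objectClass: GlueSchemaVersion
GlueSALocalID: atlas
GlueSARoot: atlas:/pnfs/in2p3.fr/data/atlas
GlueSAAccessControlBaseRule: atlas
GlueChunkKey: GlueSEUniqueID=ccsrm.in2p3.fr
GlueSchemaVersionMajor: 1
GlueSchemaVersionMinor: 2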