Last modified on 10/13/10 18:06:38

[part of a series of meetings]


Antje, Jan, Paul, Owen, Karsten, Christian; Gerd; Thomas; Dmitry




Up to two minutes (uninterrupted) per person where they can answer two questions:

  • What I did last week (since the last meeting),
  • What I plan to do in the next week.

No questions until we get through everyone :)

Christian: documentation discussions; websites with dCache documentation, sshd implementation.

Karsten: learning dCache, looking into Argus.

Owen: documentation; EMI (new ETICS), clarifying PEB documentation. Away Thursday and Friday; working on virtualisation of worker nodes.

Paul: EMI work, reviewing code, fixing ReviewBoard, etc.

Jan: Three days off; teaching the new guys things.

Antje: Vacation last week. Working on the installation of dCache, updating the book.

Dmitry: Updated CDF from dCache 1.7 to 1.9.5. Have been looking at the RT tickets. Answering tickets.

Gerd: Vacation (very nice). Reading emails. Last 1.5 days looking into user mapping and group mapping specifically for NDGF; this is for NDGF's use-case only. This is not intended for general usage, but will be a starting point for discussion/design of future gPlazma plugins.

Thomas: Netty-based HTTP mover is up for review. Some smaller changes: setting of the Origin principal, etc. Next week: presenting dCache at CHEP (in Taipei).

Plans for patch-releases

Should we make a new patch release?

Xrootd problem

There is a problem with the xrootd hang-up handler; this is RT ticket #5859.

The problem was introduced by an attempt to fix an earlier performance regression (an earlier patch had resulted in excessive resource consumption).

Both the problem and its fix are understood. Thomas will put a patch on ReviewBoard. Once this is in, we can release 1.9.5-23.

It's unclear which other supported branches will need to be patched as some branches haven't been released with the (broken) patch that causes this problem.

Trunk activity

Progress with new features...

eclipse support

Christian has improved the eclipse ant target; now down to a single error (which is currently inexplicable). Paul said he'll try to help out.

Issues from yesterday's Tier-1 meeting

Outstanding RT Tickets

[This is an auto-generated item. Don't add items here directly]

Ticket about RFC-proxies

Thomas wanted to discuss how to approach this ticket.

The fix for parsing RFC-compliant proxies is in JGlobus v1.8.0. We currently use JGlobus v1.7.1 in all our supported branches (and Trunk).

Our version of JGlobus 1.7.1 includes a number of patches. Porting these patches to JGlobus v1.8.0 shouldn't be too hard; however, should we upgrade the supported branches to JGlobus 1.8.0 ?

We may be able to backport the RFC-compliance patch to 1.7.1; Thomas thinks the patch was to adjust some error-checking, so the patch may be relatively easy to get into 1.7.1.

Globus do not support 1.7(.1) any longer and (we think) would be unlikely to release a new version of JGlobus with support for RFC-proxies.

There was general agreement to move Trunk to JGlobus 1.8.0. For the supported branches, the question is whether to use 1.7.1 with the RFC-proxy fixes or to switch to 1.8.0.

NDGF could try deploying either 1.8.0 or a patched version of 1.7.1 to check whether these versions work (in a production environment) and whether they fix the problem.

There was some discussion of how important this is. Owen indicated that, although it may not be a problem now, it is an impending problem that will definitely hit us "soon". Thomas added that some software products generate RFC-proxies by default.

Gerd+Thomas: try to patch 1.7.1 (prob. for supported branches) and 1.8.0 (prob. only for trunk).

Thomas can generate RFC-compliant proxies through his Nordunet membership, so he can test the problem.

Gerd mentioned that the patches for JGlobus are stored in modules/external/cog
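The minutes only name the location of the patches (modules/external/cog); as a sketch of the mechanism involved when re-applying such patches after a version bump, the following uses stand-in file names (Source.java and fix.patch are invented for illustration, not the actual patch set):

```shell
# Sketch only: demonstrate applying a stored patch to an upstream source file.
# The modules/external/cog path comes from the minutes; the files below are
# invented stand-ins.
work=$(mktemp -d)
cd "$work"

# Stand-in for an upstream JGlobus source file:
printf 'version=1.7.1\n' > Source.java

# Stand-in for a patch as kept under modules/external/cog:
cat > fix.patch <<'EOF'
--- Source.java
+++ Source.java
@@ -1 +1 @@
-version=1.7.1
+version=1.7.1-patched
EOF

# Re-applying the stored patch; after a JGlobus upgrade, any patch that no
# longer applies cleanly would need porting by hand.
patch -p0 < fix.patch
cat Source.java
```

A failed hunk here is exactly the porting work Gerd and Thomas anticipate for the 1.7.1 to 1.8.0 move.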

Install directories

Paul asked: can we update our documentation so we don't give /pnfs/<vo name> as our example directory structure in Chimera? The use of pnfs is anachronistic and including the site name serves no purpose.

Suggested alternatives are:




Some sites may be dedicated to a single VO. These sites could use a structure like:


We want to keep at least one directory since Chimera has a /admin directory where data shouldn't be stored.

Gerd explained that, at NDGF, the exported path is always the part that starts VO-name (e.g., /atlas). The namespace has a directory structure like: /pnfs/, but this is hidden from the end-users.
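As a purely illustrative sketch of the kind of layout under discussion (the directory names below are assumptions, not dCache defaults), a multi-VO namespace might look like this:

```shell
# Illustrative only: a possible Chimera-style namespace layout.
# /data and the VO names are assumptions; /admin is the Chimera directory
# mentioned in the minutes where data shouldn't be stored.
root=$(mktemp -d)                             # stand-in for the namespace root
mkdir -p "$root/admin"                        # no data stored here
mkdir -p "$root/data/atlas" "$root/data/cms"  # one top-level directory per VO
ls "$root/data"
```

In the NDGF style described by Gerd, the exported path would then start at the VO directory (e.g. /atlas), hiding the rest of the structure from end-users.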

Owen was concerned that this should not be changed in dCacheConfigure: a site may not have set a value and, if the default changes, dCacheConfigure may recreate the directories elsewhere, "losing" (or hiding) stored data.


Paul mentioned that today was a GDB meeting. Of particular note was:

  • a talk by Jens Jensen on SRM getTurl performance (his data) in which dCache was mentioned favourably, due to Gerd's sync/async patch.
  • a talk by Patrick describing the NFS v4.1 demonstrator.

Debconf in dCache

dCache has a lot of modules: Chimera, SRM, the various doors (gridftp, etc.), so it is very modular, at least in terms of deployment.

Owen wonders which component is responsible for configuring certain "shared" responsibilities. For example, is the site's namespace structure (/pnfs/) something configured in Chimera or in dCache?

The Chimera package is actually just a configuration package, so this responsibility would naturally fit there.

Thomas: I don't feel all of dCache configuration should be covered by debconf.

Owen: I feel it should.

Gerd: it might be interesting to see how far we can get .. but our time isn't for free.

Thomas: If you have a highly distributed deployment, site-admins will want to configure things themselves or use dCacheConfig.

dCacheConfigure may remove the need for a separate debconf package; in fact, dCacheConfigure is designed to be embedded within something like debconf.

Owen and Thomas to continue talking about this via mailing list.


Tigran, Patrick and Thomas are attending CHEP.

Review of RB requests


Same time, next week.