Last modified on 11/30/10 17:22:48

dCache Tier I meeting November 30, 2010

[part of a series of meetings]

Present

dCache.org(Tigran, Owen, Tanja, Paul), PIC(Gerard), GridKa(Doris)

Agenda

(see box on the other side)

Site reports

FZK

Doris reported that things are fine at FZK; she had no issues to report.

There was a question, though. The logging in gPlazma is currently rather excessive, resulting in the log file growing too large overnight.

Doris tried adjusting the Log4j appender, but this didn't seem to work. Doris found that "set log level WARNING" gave no reduction in the volume; increasing the threshold to ERROR resulted in apparently no output.

Tigran explained that ERROR level should be fine.
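As a hedged illustration only (the appender name, file path and layout below are assumptions, not Doris's actual configuration), raising the threshold of a Log4j file appender in the domain's log4j properties would look roughly like this:

 # hypothetical log4j.properties fragment; appender name and paths are invented
 log4j.appender.gplazmaFile=org.apache.log4j.DailyRollingFileAppender
 log4j.appender.gplazmaFile.File=/var/log/gPlazmaDomain.log
 log4j.appender.gplazmaFile.Threshold=ERROR
 log4j.appender.gplazmaFile.layout=org.apache.log4j.PatternLayout
 log4j.appender.gplazmaFile.layout.ConversionPattern=%d{dd.MM HH:mm:ss} %-5p %c{1} - %m%n

Threshold is a standard Log4j appender setting; with an ERROR threshold only genuine errors should appear, which is why messages logged at the wrong level (see below) are the real problem.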

Doris explained that, for some authz requests, gPlazma initially claims that the user's credentials are rejected, only for it to accept the credential and the transfer to succeed. This is clearly nonsense.

Paul said that this is likely due to gPlazma's iterative behaviour, but that she was right: this is a bug.

Doris will send an email to support@… so we can identify which lines in the output are being logged at the wrong level.

This has been done; see RT-5974

http://rt.dcache.org/Ticket/Display.html?id=5974

PIC

Gerard reported that everything is OK at PIC.

He had two questions:

gPlazma and lsc files

Gerard asked whether gPlazma supports the new ".lsc" file format. He later supplied additional information from an EGI broadcast (quoted below).

A new version of lcg-vomscerts has been released for gLite 3.1.
[...]
The rpm is available in the gLite 3.1 yum repositories of:

* glite-FTS_oracle
* glite-WMS

Please note that the repository of the following services IS NOT going to be
updated with this version of lcg-vomscerts:

* glite-UI
* glite-VOBOX
* glite-WN
* lcg-CE

The mentioned node types no longer need lcg-vomscerts when they have been
configured correctly. i.e. when for each supported VO a directory
/etc/grid-security/vomsdir/<vo> contains a "*.lsc" file for each of its VOMS
servers.

YAIM handles that automatically, see the "VO_<vo-name>_VOMSES" and
"VO_<vo-name>_VOMS_CA_DN" variables in the YAIM documentation:
https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables

To update just the lcg-vomscerts rpm:

 yum update lcg-vomscerts

For other node types, or sites that do not use yum or YAIM, the rpm is
available here:

http://etics-repository.cern.ch/repository/download/registered/org.glite/lcg-vomscerts/6.2.0/noarch/lcg-vomscerts-6.2.0-1.noarch.rpm

No one from the dCache team was sure about this.
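For context, an ".lsc" file is a short text file listing the subject DN of a VOMS server's host certificate followed by the DN of the CA that issued it. A hedged example (the DNs are illustrative, not taken from any site's configuration) for a file such as /etc/grid-security/vomsdir/atlas/voms.cern.ch.lsc:

 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch
 /DC=ch/DC=cern/CN=CERN Trusted Certification Authority

When such files are present, middleware can validate the VOMS attribute-certificate signature against the server's actual certificate chain instead of relying on a copy of the server certificate shipped in lcg-vomscerts.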

Gerard submitted a ticket about this issue. See RT-5973

http://rt.dcache.org/Ticket/Display.html?id=5973

Limiting size used by restored files

Gerard asked if it was possible to limit a VO's ability to restore files from tape. The concern is that a VO might restore a colossal amount of data and pin it with a long lifetime, say six months. If all pools are HSM-attached, this would block anyone from writing into dCache. The worry is that someone (or something) might do this by accident.

One idea would be to restore into a space reservation; once that reservation is exhausted, no further restores would be allowed. dCache doesn't support this, and implementing such support is not currently planned.

However, restoring into a space token isn't essential; the important part is limiting the damage that a VO can do to itself.

Another potential solution is to declare only a subset of the pools HSM-attached; however, this artificially limits the capacity available for restoring from tape, so it is undesirable.

Doris explained how they solve the problem at FZK: they restrict tape restores to a limited set of dedicated pools and use the hopping manager to move restored files onward.
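As a very rough sketch only (pool, group and link names are invented, the default "any-store" and "world-net" unit groups are assumed to exist, and this is not claimed to be FZK's actual setup), confining tape restores to a dedicated pool group in PoolManager.conf could look something like this; the hopping-manager step that moves files on to read pools is not shown:

 psu create pool stage_pool_01
 psu create pool stage_pool_02
 psu create pgroup stage-pools
 psu addto pgroup stage-pools stage_pool_01
 psu addto pgroup stage-pools stage_pool_02
 psu create link stage-link any-store world-net
 psu set link stage-link -readpref=0 -writepref=0 -cachepref=20 -p2ppref=-1
 psu add link stage-link stage-pools

The high cache preference makes the stage-pools group the target for restores from tape, while the zero read/write preferences keep ordinary client traffic on the other links.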

This issue has been raised as RT ticket 4580.

http://rt.dcache.org/Ticket/Display.html?id=4580

Support tickets for discussion

[Items are added here automagically]

RT 5906: Atlas transfer problem

DTNM

Same time, next week.