
dCache Tier I meeting March 23, 2010

[part of a series of meetings]

Present

dCache.org (Gerd, Tigran, Owen, Paul), SARA (Onno), BNL (Pedro), NDGF (Gerd), PIC (Gerard), GridKa (Doris)

Agenda

(see box on the right-hand side of page)

Site reports

SARA

Onno reported that things are currently fine.

There are, however, a few open support tickets.

First, there is a support ticket about file locality, as reported by SRM ls. The issue is that, after creating a separate link-group and link for xrootd, SRM ls reported the files in that link-group with locality UNAVAILABLE. This is a problem because FTS uses this value to determine whether a file is really there; if not, it fails any transfer involving the UNAVAILABLE file.

The work-around for this issue is to define a protocol unit "*/*" and include that unit in the link's selection.
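
As a sketch, the relevant PoolManager.conf fragment might look like the following; the unit-group, link and pool-group names (any-protocol, xrootd-link, xrootd-pools) are illustrative, world-net is assumed to be an existing network unit group, and none of this is necessarily SARA's actual configuration:

    # wildcard protocol unit: matches any access protocol
    psu create unit -protocol */*

    # put the wildcard unit in a unit group...
    psu create ugroup any-protocol
    psu addto ugroup any-protocol */*

    # ...and use that group in the link's selection, so the link
    # matches regardless of which protocol the client uses
    psu create link xrootd-link any-protocol world-net
    psu set link xrootd-link -readpref=10 -writepref=10 -cachepref=10
    psu add link xrootd-link xrootd-pools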

The issue arises because of a deficiency in the SRM specification. In dCache, the locality of a file depends on which protocol the user chooses to access the file; however, this information is not relayed as part of the SRM ls request. Since dCache does not know which protocol the user will use, it must decide on the user's behalf. The current behaviour is that a file is reported as UNAVAILABLE if it is inaccessible via at least one protocol; however, we may be able to change this behaviour so that a file is reported as UNAVAILABLE only if it is inaccessible via every protocol.
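
As an illustration of the difference between the two rules (a sketch only, not dCache's actual code):

    import java.util.Set;

    enum FileLocality { ONLINE, UNAVAILABLE }

    class LocalityPolicy {
        // Current rule: a file is UNAVAILABLE unless every configured
        // protocol can access it.
        static FileLocality current(Set<String> allProtocols, Set<String> accessibleBy) {
            return accessibleBy.containsAll(allProtocols)
                    ? FileLocality.ONLINE : FileLocality.UNAVAILABLE;
        }

        // Proposed rule: a file is UNAVAILABLE only if no protocol at
        // all can access it.
        static FileLocality proposed(Set<String> accessibleBy) {
            return accessibleBy.isEmpty()
                    ? FileLocality.UNAVAILABLE : FileLocality.ONLINE;
        }
    }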

The second issue is one that Ron reported: SARA are unable to publish SRM reservations for ATLAS. Paul had requested further information, which Ron has now provided; Paul will investigate with this extra information.

NDGF

Gerd reported that NDGF updated to dCache v1.9.7 last week. This gives 1.9.7 almost a week of running at a Tier-1 centre in a production environment.

GridKa

Doris had nothing to report: everything is fine.

PIC

Gerard reported that PIC are still waiting for the SRM database recovery procedure. The issue is that an unscheduled PostgreSQL update resulted in database corruption. PIC were forced to roll back to a backup copy; however, there is a window of time between when the backup was taken and when it was restored. Files transferred in that window are recorded in dCache but not in the Space Manager database, and PIC need some means of reconciling this discrepancy.
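
A sketch of how such a reconciliation query might look, assuming the standard Chimera and Space Manager schemas (the table and column names, t_inodes/ictime and srmspacefile/pnfsid, are assumptions here, and :backup_time stands for the time the backup was taken):

    -- files newer than the backup that the restored Space Manager
    -- database does not know about
    SELECT i.ipnfsid
      FROM t_inodes i
     WHERE i.ictime > :backup_time
       AND NOT EXISTS (SELECT 1
                         FROM srmspacefile f
                        WHERE f.pnfsid = i.ipnfsid);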

Gerard also reported an issue with their test instance. This appears to be due to the necessary database not having been created in PostgreSQL. Gerard is to investigate further.

Gerard also asked about the SRM publishing local protocols: can this be filtered? The issue is that Gerard is seeing SAM tests attempting to use site-local protocols for file transfers; these show up in the SRM Watch utility.

Paul said that he believes the problem here is incorrect tests, rather than dCache misbehaving. Neither SRM nor GLUE provides a mechanism for saying that support for a protocol is site-local, so testing all advertised protocols, whether advertised within SRM (srmGetTransferProtocols) or in GLUE (GlueSEAccessProtocol), will inevitably risk attempting transfers that cannot succeed.
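
For example, the full set of protocols a test would see advertised in GLUE can be listed from the BDII with a standard LDAP query (the hostname is a placeholder):

    ldapsearch -x -LLL -H ldap://bdii.example.org:2170 \
        -b o=grid '(objectClass=GlueSEAccessProtocol)' \
        GlueSEAccessProtocolType GlueSEAccessProtocolVersion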

Gerard asked Onno about SARA's configuration of xrootd protocol support: is it possible to allow access to a subtree of dCache's namespace? Onno replied that Ron had configured this and he would send Gerard the details.
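
Without prejudging Ron's actual setup, dCache's xrootd door can be restricted to a subtree of the namespace; in later dCache releases this is done with properties along these lines (the paths are illustrative, and the exact property names for the version in use should be checked against its documentation):

    # configuration of the xrootd door
    xrootd.authz.read-paths=/pnfs/example.org/data/atlas
    xrootd.authz.write-paths=/pnfs/example.org/data/atlas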

BNL

Pedro asked whether there had been any progress with the GridFTP copy-wait problem. Gerd said he had not had time to work on it, but it is on his TODO list.

Pedro reported that BNL have seen a similar source-locality problem. After a short discussion, it seemed this was not the same issue as Onno had reported. It may have been due to pools becoming disabled, but there was no clear evidence that this was the cause.

Doris' questions

Pool sizes

Doris asked everyone what their biggest pools were.

Gerd said that, for NDGF, the maximum storage exposed by a single node is 200 TB, spread over multiple pools. The biggest single pool is 34 TB (not on the 200 TB machine).

Pedro reported that, at BNL, they have some pools of 16 TB. The issue with larger pools is the start-up time: if a pool goes off-line, a 10 TB pool can take some 15 minutes to come back up again; so, for them, 10-16 TB is the "sweet spot".

There was some discussion about why it takes some time for pools to come back up again. Gerd mentioned that the issue, in some cases, is the underlying filesystem: GPFS pools are slow due to dCache's process of scanning the metadata (file size) of the data files in the pool. ZFS, which BNL uses, may suffer from a similar problem.
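
To picture the cost: the start-up inventory amounts to one file-size look-up per replica, so a pool holding millions of files issues millions of metadata operations against the filesystem. A rough sketch of that access pattern (illustrative only, not dCache code):

    import java.io.File;

    public class PoolStartupScan {
        public static void main(String[] args) {
            // e.g. /pool/data, the directory holding the pool's replicas
            File[] replicas = new File(args[0]).listFiles();
            long files = 0, bytes = 0;
            if (replicas != null) {
                for (File f : replicas) {
                    if (f.isFile()) {
                        files++;
                        bytes += f.length();  // one stat()-like call per file
                    }
                }
            }
            System.out.printf("%d files, %d bytes%n", files, bytes);
        }
    }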

Gerard said that, at PIC, they have some pools with 60 TB of storage, while the newer Thumper boxes have 33 TB pools. The new hardware has a single 135 TB pool per machine. Whenever possible, they run a single pool per machine; if multiple pools run on one machine, each has its own domain.

Doris asked Gerard if the pools remain responsive to the admin commands. Gerard: yes.

Gerard also mentioned that their 33 TB pools, which hold a lot of small files, need some 4 GiB of memory to start up. Gerd asked how many files are on these pools; roughly 2 million. Gerard reported that PIC have some 16 GiB of memory on a Thumper and no more than 5 pools per node, so this works out OK.
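
For reference, in 1.9-series dCache the Java heap available to a node's domains was set through the JVM options in the dCacheSetup file; a sketch giving a 4 GiB heap (option values illustrative):

    # dCacheSetup: JVM options applied to the domains on this node
    java_options="-server -Xmx4096m -XX:MaxDirectMemorySize=1024m"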

ACLs

Doris also asked whether people were using ACLs in production.

The DESY people were not sure whether ACLs are in production; it is certainly anticipated that this will happen in the near future, if it has not happened already.

Support tickets for discussion

[Items are added here automagically]

DTNM

Same time, next week.