Last modified on 01/17/17 15:15:51

dCache Tier I meeting MONTH DATE, 2013

[part of a series of meetings]

Present: IN2P3(), Sara(), Triumf(), BNL(), NDGF(), PIC(), KIT(), Fermi(), CERN()


(see box on the other side)

Site reports


Everything is OK at PIC; nothing to report.

Marc found the CHEP presentations very interesting and will likely (re-)use some slides this afternoon.

Marc is also talking with Pepe about how to support the ?? VO; he will come back with more concrete questions soon.

Marc also reported that last week's Enstore problem (discovered on their test instance) was fixed with the latest update.


Ulf reported that NDGF had an interesting weekend, with plenty of errors. There were several FTS transfers from ATLAS where dCache complained that it received too much data. The file should have been some 200 MiB, but dCache received about 4x that amount. The problem turned out to be a broken sending storage endpoint.

There have been issues with pools connecting to the head nodes, which seems to be related to ZK. Gerd said he has found two bugs: one trivial and one more complicated. Both seem to be triggered by a restart of zookeeper; Gerd has more details.

Ulf was also experimenting with an IPv6-only test box. This was configured to use dCache's internal ZK cell. However, with IPv6-only, the ZK cell does not start up in a timely fashion, resulting in problems with cells communicating with each other. Ulf is deploying a dedicated ZK cluster as a work-around.
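For reference, a dedicated ZooKeeper cluster is configured independently of dCache. A minimal sketch of a three-node ensemble's zoo.cfg (hostnames and paths are placeholders, not NDGF's actual setup):

```ini
# Minimal three-node ZooKeeper ensemble (zoo.cfg).
# Hostnames are placeholders; each node also needs a "myid" file
# in dataDir containing its own server number (1, 2 or 3).
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.org:2888:3888
server.2=zk2.example.org:2888:3888
server.3=zk3.example.org:2888:3888
```

dCache would then be pointed at this external ensemble instead of its embedded ZK cell (in recent dCache releases this is done via the ZooKeeper connection property in dcache.conf).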

This work will be reported at the WLCG workshop (which is mid-June).


Xavier reported that things are running very well, no issues in the past week.

Xavier had several questions:

Zookeeper in virtual machine

Does anyone present have experience running ZK nodes on virtual machines?

NDGF: fully virtual (3 machines, each on different hardware)

PIC: yes, but only a test cluster (1 physical machine, 2 virtual machines)

Zookeeper cluster for multiple dCache instances

Is it possible to run multiple dCache instances using the same ZK cluster?

The motivation is that a single ZK cluster for all instances would allow KIT to deploy ZK on dedicated hardware (rather than reusing the dCache hardware), with better decoupling from dCache.

No, while this is theoretically possible, dCache currently assumes it is the only user of the ZK cluster.
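For context, stock ZooKeeper clients can share one ensemble by appending a "chroot" path to the connect string, which scopes all of that client's znodes under the given path. A hypothetical example (hostnames and path are placeholders; dCache does not currently support configuring such a chroot):

```ini
# One ensemble, two isolated namespaces -- each dCache instance would
# use a different chroot suffix after the host list:
zk1.example.org:2181,zk2.example.org:2181,zk3.example.org:2181/dcache-instance-a
zk1.example.org:2181,zk2.example.org:2181,zk3.example.org:2181/dcache-instance-b
```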

Duplicate pools

Xavier asked whether ZK allows dCache pools to share the same underlying storage.

Paul replied: no, but ZK is a technology that should make this easier to achieve.

SQL query


Paul promised to remind Tigran.

Support tickets for discussion

[Items are added here automagically]


Same time, next week.