Last modified on 02/02/11 18:48:04

[part of a series of meetings]





Up to two minutes (uninterrupted) per person where they can answer two questions:

  • What I did last week (since the last meeting),
  • What I plan to do in the next week.

No questions until we get through everyone :)

Karsten: security-related classes and methods to collect what we need for the EMI AuthN people. Continued working on gPlazma-2 work.

Tigran: nothing useful .. stuff.

Paul: GLUE 2 info-provider, and getting trunk so I'm happy with it.

Antje: working on the book; discovering problems with Chimera-CLI in 1.9.11 & trunk.

Tanja: tickets, NFS, patch about returning layouts; working on understanding striping.

Christian: bug-listing / RT work to keep EMI happy. Building 1.9.11 in ETICS and configuring a test-bed machine.

Gerd: everything! forensic analysis of Mainz problem. Lots of refactoring of pool-manager; and more.

Dmitry: tickets [...]

Plans for patch-releases

Should we make a new patch release?

Merged tons of stuff into 1.9.11 and 1.9.10. Start testing and get a release out, probably by the end of the week.

Trunk activity

Progress with new features...

We can now run gPlazma-2 in trunk.

Created a very small pool-manager: no selection unit, no [...]; a very light-weight item. This can be run centrally or inside the door.

Managed to run dCache without central pool-manager; able to read / write files; this is with xrootd / FTP / NFS ... but without space-manager and without httpd running.

There are lots of components that send commands to pool-manager; e.g., string commands.

You cannot query a component except via commands like "ps aux poolgroup".

Need to discover what information the pool-manager /may/ provide, rather than hard-coding it.

Perhaps pool-manager should provide beans that allow httpd to discover what information is available.

Maybe we need to clean up the interface first --- move away from sending ASCII commands.

Pin-manager doesn't need the cost; pin-manager periodically queries the pool-manager for a copy of the PoolMonitorV2. This would allow it to choose a pool without talking to pool-manager, cutting down on latency (one fewer message round-trip).
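The caching idea above can be sketched as follows. This is a hypothetical illustration only: the class and function names (`CachedPoolMonitor`, `fetch_snapshot`, the cost dictionary) are invented for the sketch and are not the actual dCache pin-manager or PoolMonitorV2 API.

```python
import time

class CachedPoolMonitor:
    """Illustrative sketch: keep a periodically refreshed snapshot of
    pool-manager state so that pool selection needs no message
    round-trip to the pool-manager on each request."""

    def __init__(self, fetch_snapshot, max_age=30.0):
        self._fetch = fetch_snapshot   # callable returning {pool-name: cost}
        self._max_age = max_age        # seconds before the snapshot is stale
        self._snapshot = None
        self._fetched_at = 0.0

    def _refresh_if_stale(self):
        # Only this refresh talks to the pool-manager.
        if self._snapshot is None or time.monotonic() - self._fetched_at > self._max_age:
            self._snapshot = self._fetch()
            self._fetched_at = time.monotonic()

    def select_pool(self, candidate_pools):
        # Pick the cheapest known pool from the local snapshot.
        self._refresh_if_stale()
        known = [p for p in candidate_pools if p in self._snapshot]
        return min(known, key=lambda p: self._snapshot[p]) if known else None
```

The trade-off is staleness: within `max_age` seconds the pin-manager may select a pool whose cost has changed, which is acceptable here because the pin-manager does not need accurate cost.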


For the location-based approach: this is to allow clients to do read striping. Tanja is looking into this for NFS. For this, we need to clean up the mover infrastructure in the pools.

The old way: always a door that initiates a transfer.

The new way: have a protocol-engine that [...]

Have to verify the file-handle: the pool needs to know where the originating door is, to know if the user is allowed to read the file.

Tigran found some issues with the xrootd mover.

New CMS software release breaks compatibility with our software.

This is a production release.

Try to get into CMS QA process.

Golden release support

What we're going for is 1.9.5.

The detector and accelerator people decided to continue the current LHC run until end-of-2012.

We said we would support 1.9.5 to "1st year or end of first run".

Encourage people to upgrade to 1.9.12 by providing more features.

We continue supporting 1.9.5, but not back-porting new features. If people want the new features then they must upgrade to 1.9.12.

Can we move 1.9.12 release dates? Not really, it's tied to release of EMI-1.

Shall we have different policy about back-porting code for 1.9.12? No, we should have the same strict rules as with 1.9.5.

OSG is making a survey of dCache sites: which sites want to upgrade? Only one site found that wants to upgrade.

How about adding SRM request storage .. scalability .. less memory usage.

The problem was that these kinds of arguments caused a delay in getting a release out.

Dmitry believes the "SRM scalability" changes will be done in time: ready in a month from now.

We can double-check the progress in a couple of weeks.

gPlazma2 mapping interface

Mapping and reverse-mapping.

After long discussion we have an interface.

Gerd's not happy with the interface; two sets of plugins that operate differently.

Are we happy with the interface as-is, or do we change it?

Gerd: no reason why we can't change it. Had concrete ideas in Copenhagen.

He'd like to get rid of map/rmap: want one operation.

Map(principal) --> {principal, principal, ..}.
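The proposed single operation can be sketched like this. The representation is illustrative only: principals are modelled as `(type, name)` tuples and the mapping table is a plain dictionary, which is not the actual gPlazma2 plugin API.

```python
# Illustrative: principals as (type, name) tuples; a single lookup table
# covers both forward mapping (DN -> uid/gid) and reverse mapping
# (uid -> DN), replacing the separate map/rmap operations.
MAPPINGS = {
    ("dn", "/DC=org/CN=alice"): {("uid", "1000"), ("gid", "100")},
    ("uid", "1000"): {("dn", "/DC=org/CN=alice")},
}

def map_principal(principal):
    """One operation: map(principal) -> {principal, ...}.
    Returns the (possibly empty) set of corresponding principals."""
    return MAPPINGS.get(principal, set())
```

Returning a set rather than a single principal matches the discussion below: a reverse mapping may yield many principals (e.g. many DNs sharing one uid), and the caller decides how to handle that.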

How do we rmap a principal to the same kind of principal?

Returning a set because, in the webdav interface (HTML rendering), it could be that the uid corresponds to something else.

If you do setacl on the gid, ..

What are the use-cases for reverse mapping?

HTML to show file ownership as principal(s)

If you do the reverse-mapping you could get 20 Kerberos principals. Just choose one.

This is a problem with the forward mapping.

Forward mapping: allow the user to authorise based on DN. If this DN gets a UID that is shared with many other DNs, then we cannot honour the end-user's DN-based ACLs.

DN --> group mapping should be OK.

Maybe we should simply not allow setting DNs by ACLs for legacy gPlazma.

For existing dCache sites, we [...]

For legacy plugins we can allow login but reject any mapping (currently map and reverse-map).

Plugin itself decides that it cannot support map (/rmap).

Sudo use-case doesn't matter for map/rmap.

Door interface to login-service:

A single "map" method that takes a principal and returns a (possibly empty) set of principals that may be authorised concurrently.

Database schema management

Postponed until Dmitry is available.

Issues from yesterday's Tier-1 meeting

Doris, Gerard.

Issues from EMI

The only required platform in EMI-1 is SL5 64-bit; for EMI-2 (April 2012) the required platforms are SL5 64-bit and SL6 64-bit, plus some Debian (?).

dCache RPM not appearing on the website. This was because there was no consideration of packaging in EMI-0. This [...]

EMT mailing list on packaging.

Building issue

Can't build dCache on SL-5.

Committed the Batik fix into 1.9.11, so should be available in 1.9.11 branch's HEAD.

Fixed, but Maven doesn't work: the HOME environment variable is set to the correct value, but Maven tries to use / as home. Could be due to a chroot?

Outstanding RT Tickets

[This is an auto-generated item. Don't add items here directly]

Mainz problem

This is ticket [...]

Tar archive with lots of files. The one saying "sweep". Patrick thinks this is the "sweeper ls" output.

From the output it appears that very many of the files are cache-only.

Lots of cached files; ran out of [...]

They say they ran "migration move" without any options.

Gerd to ask them the relevant questions tomorrow.

Gerard / PIC use-case

If we start in read-only mode then we don't touch the data-files.

Second: we open the db read-only, or simply don't open the db.

Start without inventory.

Two read-only modes: don't-accept-new-data and don't-touch-the-filesystem.
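The distinction between the two modes can be made concrete with a small sketch. The mode names and helper functions are invented for illustration; they are not actual dCache configuration options.

```python
from enum import Enum

class ReadOnlyMode(Enum):
    """Illustrative names for the two read-only modes discussed."""
    NO_NEW_DATA = "reject writes, but filesystem and metadata db stay open"
    NO_FILESYSTEM_TOUCH = "no inventory, metadata db not opened at all"

def may_write(mode):
    # Any read-only mode rejects new data; None means normal operation.
    return mode is None

def may_open_metadata_db(mode):
    # Only the stricter mode avoids opening the db / touching the filesystem.
    return mode is not ReadOnlyMode.NO_FILESYSTEM_TOUCH
```

The point of the split is that the PIC use-case wants the stricter mode: start without inventory and without opening the db, so the data-files and metadata are guaranteed untouched.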

Review of RB requests



Same time, next week.