Last modified on 02/16/11 18:13:16

[part of a series of meetings]


Present: Tanja, Christian, Antje, Paul, Tigran, Karsten, Patrick, Gerd




Up to two minutes (uninterrupted) per person where they can answer two questions:

  • What I did last week (since the last meeting),
  • What I plan to do in the next week.

No questions until we get through everyone :)

Karsten: work on VORolePlugin

Patrick: meetings

Gerd: pool-manager refactoring, helping Mattias get the system right, compiling a list of things that still need to be fixed for 1.9.12

Tanja: tickets; working on a patch to allow multiple movers for one transfer; working with Sven on scripts.

Christian: a total of five EMI meetings; still trying to find out about the test-bed. Still fighting with ETICS (reported problems via GGUS .. no useful answers so far). Verification and test plan for EMI.

Antje: still working on the book; fighting with gsi-dcap. The next chapter will be gPlazma.

Paul: work on GLUE 2.0 info-provider, tickets, ..

Tigran: VM image for Göttingen; merging and testing, today released 1.9.11-2. The dcap patch to make it work with space-manager didn't make it in.

Plans for patch-releases

Should we make a new patch release?

1.9.11 was just done. After the meeting, do the 1.9.5 and then the 1.9.10 releases.

Perhaps we do 1.9.10 first .. 1.9.10-4 isn't recommended at the moment (although it's running at DESY and the University of Michigan).

Trunk activity

Progress with new features...

What is missing for functional 1.9.12


Paul is focusing on getting that done.


We have a plan: Karsten is working on integrating the old behaviour into the new one. Username/password support is still TODO.

Things still needed:

srmCopy doesn't understand errors (pools were not generating errors)

The SRM synchronous reply is causing some problems.

srm doesn't cancel pin requests if the file is released during initial pinning; the pin request will be retried internally and ..

info publishes authorisation information incorrectly

login broker publishing its information into info

CDC problems

Static thread-pools in the new pool, resulting in logging being attributed to the wrong cell.
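The misattribution can be illustrated with a minimal sketch, assuming the CDC behaves like a thread-local diagnostic context (the class, method, and cell names below are hypothetical, not dCache code): a static pool's threads keep whatever context was last installed on them, so a task submitted on behalf of one cell logs under another cell's name unless the submitter's context is captured and re-installed.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CdcSketch {
    // Hypothetical stand-in for the CDC: a thread-local cell name.
    static final ThreadLocal<String> CELL = ThreadLocal.withInitial(() -> "unknown");

    // One shared static pool, as in the new pool code (daemon threads
    // so the JVM can exit without an explicit shutdown).
    static final ExecutorService POOL = Executors.newFixedThreadPool(1, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    /** Broken: the pool thread keeps the last context installed on it. */
    static String demoBroken() {
        try {
            POOL.submit(() -> CELL.set("pool-A")).get(); // thread now tagged "pool-A"
            CELL.set("pool-B");                          // submitter is cell "pool-B"
            return POOL.submit(CELL::get).get();         // but the task sees "pool-A"
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    /** Fix: capture the submitter's context and re-install it inside the task. */
    static String demoFixed() {
        try {
            CELL.set("pool-B");
            String captured = CELL.get();
            return POOL.submit(() -> { CELL.set(captured); return CELL.get(); }).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demoBroken()); // prints "pool-A", not "pool-B"
        System.out.println(demoFixed());  // prints "pool-B"
    }
}
```

The same capture-and-restore pattern applies to any thread-local context handed across a shared executor.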

Requests getting stuck in pool-manager when the request-container limit is reached: the FSM gets stuck.

Cells (like pin-manager) with bounded message queue length: if enqueuing the reply fails then the timeout isn't honoured.
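A minimal sketch of the queueing problem, using plain java.util.concurrent rather than the cells framework (the class and message names are illustrative): a non-blocking enqueue onto a full bounded queue fails immediately, and unless that failure is surfaced the requester only notices when its own timeout expires; a timed offer honours the deadline and reports the failure explicitly.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedQueueSketch {
    /**
     * Try to deliver a reply within the timeout; on failure the caller
     * learns about it instead of the message being silently dropped.
     */
    static boolean deliverOrFail(BlockingQueue<String> queue, String msg, long timeoutMs) {
        try {
            return queue.offer(msg, timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        // A bounded "cell message queue" of capacity 1 (illustrative only).
        BlockingQueue<String> replies = new ArrayBlockingQueue<>(1);
        replies.offer("earlier-reply"); // queue is now full

        // Plain offer() fails immediately; if that failure isn't reported,
        // the requester only finds out when its own timeout expires.
        System.out.println(replies.offer("pin-reply"));              // false
        // Bounded wait, then an explicit failure the caller can act on.
        System.out.println(deliverOrFail(replies, "pin-reply", 50)); // false
    }
}
```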

Also, we've no idea how close the Fermi team is to getting the scalable SRM into ReviewBoard.

Gerd is away Wednesday to Friday next week. Tigran is away all next week, back for a week, then away for two weeks.

Taking things out …

There is a patch to:

  • change pin-manager so it is hidden,
  • remove liquibase support in the old pin-manager.

Gerd is happy to fix bugs in the new pin-manager in 1.9.13 and back-port those fixes into 1.9.12.

Changing the pool-manager to be compatible with the old pin-manager?

For 1.9.12 the pool-manager must be able to talk to the old and new pin-manager.

Florida is asking for sites that will volunteer to deploy new EMI releases.

When can we say NDGF is running EMI-1? Beginning of March.

One thing Gerd would like to remove is liquibase for the old pin-manager. This was a bad decision, given how development went on the new pin-manager.

Gerd and Tigran to discuss this off-line.

All the changes to transfer-manager are mainly cleaning up code. The largest patch adds HTTP support.

Do we commit transfer-manager updates? Only if there's time.

TODO: we need to talk to Dmitry to understand what his plans are for getting the scalable/redundant SRM into 1.9.12.

No blind commits: everything goes through RB.

Issues from yesterday's Tier-1 meeting

TRIUMF and NDGF dialled in; neither had anything to report.

Issues from EMI

2nd March is code-freeze date for EMI.

We currently can't build 1.9.11. With 1.9.5 there isn't this problem, probably because we weren't using Maven then.

Christian to send Patrick a link to the GGUS ticket.

EMI says we MUST provide src RPMs.

This and that

The xrootd presentation for the Göttingen workshop will be given by the sys-admin at Wuppertal.

Dmitry found that xrootd, under low load, would do worse than NFS. The problem is due to the client.

The number of round trips was significantly higher when using xrootd compared to NFS. This increases latency even when there is sufficient network bandwidth.
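The latency effect is simple arithmetic, sketched below with made-up numbers (the round-trip counts and RTT are hypothetical, not Dmitry's measurements): each extra protocol round trip costs one RTT, regardless of how much bandwidth is available.

```java
public class RoundTripCost {
    /** Latency spent on protocol round trips alone, ignoring transfer time. */
    static double roundTripCostMs(int roundTrips, double rttMs) {
        return roundTrips * rttMs;
    }

    public static void main(String[] args) {
        double rttMs = 0.2; // hypothetical LAN round-trip time in ms
        // Illustrative counts only: suppose opening and reading a file
        // takes 4 round trips with one client but 12 with another.
        System.out.println(roundTripCostMs(4, rttMs));  // cheaper client
        System.out.println(roundTripCostMs(12, rttMs)); // chattier client
    }
}
```

With a 0.2 ms RTT, tripling the round trips triples this fixed cost per file, which dominates for small reads where transfer time is negligible.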

If there's still time, it would make sense to run our "standard ATLAS job" against a SLAC xrootd for comparison.

"Should I run dcap or xrootd?" is a hard question to answer as it highly depends on the client. Suck it and see.

"Should they use SLAC or dCache xrootd?" That depends on whether we can show comparable performance. Tigran was running both versions on a machine at DESY; the difference between the two was really small.

The test Dima extracted from HammerCloud used only the one file. When is he back? In a few weeks, but Yves has full access.

One close to the venue and two in the city centre .. but the ones in the centre are more expensive and take longer to arrive.

Gerd is giving a presentation on the migration module.

Outstanding RT Tickets

[This is an auto-generated item. Don't add items here directly]

dccp doesn't have a man page

Review of RB requests


Same time, next week.