wiki:manuals/Release_and_Test
Last modified on 06/05/08 15:46:00

Release and testing of dCache

This page describes the process of releasing dCache without using the automated process. This is done in four stages:

  • Tagging a release,
  • Building,
  • Testing,
  • Releasing.

These four processes are all discussed below and may be split into separate documents over time.

Tagging a release for testing

I tend to do this on wien.desy.de and install the dcache-build package with SL3:

apt-get install dcache-build 

For SL4, add the file

#cat /etc/yum.repos.d/dcacheBuild.repo
[dcacheBuild]
name=dcacheBuild
baseurl=http://cvs.dcache.org/repository/yum/sl4.4.0/$basearch/RPMS.tools/
enabled=1

and type

#yum install dcache-build 

This automatically brings in all the dCache build dependencies and also provides a busyb script in /etc/dCacheBuild/busyb.xml.

For details on checking out dCache, please follow the link.

A release is requested for one branch, e.g. 1.7.0, 1.7.1, or 1.8.0. The first stage is to check out the head of this branch into a fresh directory:

cvs co -r production-1-7-0 -d head_production-1-7-0 Build

Once this is completed, enter the directory:

cd head_production-1-7-0

Check out all the modules, either using the shortcut ant target specified in /etc/dCacheBuild/busyb.xml or each module separately as specified in the README in the build directory:

ant checkout.all -Dcvs.tag=X-Y-Z

Now edit the rpm.properties file to set a new release number. The release numbers for both dcache.client and dcache.server should be prefixed with build_$NEWRELEASENUMBER. Then commit your changes:

cvs commit -m "build release numbers updated"
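The rpm.properties edit above can be scripted. This is a hypothetical sketch: the property keys `dcache.server.release` and `dcache.client.release` are assumptions, so check the real key names in your rpm.properties before using it.

```shell
# Hypothetical sketch of bumping the release numbers in rpm.properties.
# The keys "dcache.server.release" and "dcache.client.release" are
# assumptions -- verify them against the actual file first.
bump_release() {
    file="$1"
    release="$2"
    for key in dcache.server.release dcache.client.release; do
        # Rewrite each key's value to the new build_$NEWRELEASENUMBER form.
        sed -i "s/^${key}=.*/${key}=build_${release}/" "$file"
    done
}
```

Call it as `bump_release rpm.properties $NEWRELEASENUMBER` before committing.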

Once this is completed, tagging can start:

cvs tag build_$NEWRELEASENUMBER .
cvs tag build_$NEWRELEASENUMBER modules/*

Once this is done, you should leave the head directory and return to the base directory.

Building

Set up the location of JAVA_HOME:

export JAVA_HOME=/usr/java/jdk1.5.0_12/

You can now check out your tagged release.

cvs co -r build_$NEWRELEASENUMBER -d Build.$NEWRELEASENUMBER Build

For example:

cvs co -r build_beta_1_8_0_12_0 -d Build.build_beta_1_8_0_12_0 Build

Now enter this directory:

cd Build.$NEWRELEASENUMBER

and check out the modules:

ant checkout.all -Dcvs.tag=build_X-Y-Z

Once this is done, you can complete the build process by following all the targets specified in /etc/dCacheBuild/busyb.xml:

ant cleanall
ant cells.bin
ant cells-protocols.bin
ant cells-log4j.bin
ant dcache-srm.bin
ant infoProvider.bin
ant xrootd-tokenauthz.bin
ant javatunnel.bin
ant srmclient.bin
ant server.rpm
ant server.tgz
rm -rf modules/external/globus/globus-4.0.3 ; ant client32.rpm
ant client32.tgz
rm -rf modules/external/globus/globus-4.0.3 ; ant client64.rpm

You should now have client and server RPMs. If this is not the case, the build has probably failed for one of a few reasons; most likely someone has committed something that does not build on this branch.

If you are building the dcap client and it fails, please look at this log file:

build/dcap/dcap.log 

Provided you have RPMs, you should then install them on another virtual machine. You should check the Xen host log file in /root/logbook.txt.

xen-image-manager -b $YOURTESTNODE -r $TARGETOS

Note: As CERN has not yet released yum or apt targets suitable for installing and building releases other than SL3 and SL4 i386, x64 releases cannot be explained within the scope of this document.

To test the release, you should now install dCache using the standard YAIM method up to the configure step:

/opt/glite/yaim/scripts/install /root/site-info.def 
 glite-SE-dcache-admin-postgres

At this stage you should remove the dCache RPMs and install your new test RPMs:

rpm -e --nodeps --noscripts dcache-server
rpm -e --nodeps --noscripts dcache-client

and now install your newly built RPMs:

rpm -i dcache-server-build_X.X.X.noarch.rpm
rpm -i dcache-client-build_X.X.X.i586.rpm
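The RPM file names follow the pattern shown above, so they can be derived from the build number. These helper functions are hypothetical; the noarch/i586 architectures simply match the examples above.

```shell
# Hypothetical helpers deriving the expected RPM file names from a build
# number, matching the naming pattern of the examples above.
server_rpm() { printf 'dcache-server-build_%s.noarch.rpm\n' "$1"; }
client_rpm() { printf 'dcache-client-build_%s.i586.rpm\n' "$1"; }

# Sketch of usage:
# rpm -i "$(server_rpm $NEWRELEASENUMBER)" "$(client_rpm $NEWRELEASENUMBER)"
```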

You can now proceed with configuring the dCache installation, following the standard YAIM configure step:

/opt/glite/yaim/scripts/configure /root/site-info.def 
 glite-SE-dcache-admin-postgres

You should now have a fully installed dCache.

Testing

Testing this release is now important. The dCache functional test suite is installed on the hosts clinton.desy.de and limerick.desy.de. On these hosts you can run the dCache test suite using the command

dCacheTestSuite.py  -s 'security=gsi locality=client release=1.7.0' 
    -d desy.de -v dteam -T $YOURTESTHOST -q 0

At the end of this process you should check the return code with

echo $?
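The return-code check can be scripted. This is a sketch that assumes the test suite follows the usual convention of exiting 0 on success; the `check_result` helper is hypothetical.

```shell
# Hypothetical gate on the test suite's exit code; 0 is assumed to mean
# success, anything else stops the release here.
check_result() {
    if [ "$1" -eq 0 ]; then
        echo "tests passed - safe to tag as production"
    else
        echo "tests FAILED (exit code $1) - do not tag" >&2
        return 1
    fi
}

# Sketch of usage:
# dCacheTestSuite.py ... ; check_result $?
```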

If this value is 0, you should then return to your build host and tag the release as production.

Tagging a release for Release

This is done as follows

cd Build.$NEWRELEASENUMBER

and then tag this release as production

cvs tag production_$NEWRELEASENUMBER .
cvs tag production_$NEWRELEASENUMBER modules/*

Return to the base build directory and check out this release again:

cvs co -r production_$NEWRELEASENUMBER -d production_$NEWRELEASENUMBER Build

Check out the modules, either using the process specified in the README or the ant shortcut target:

ant checkout.all -Dcvs.tag=production-X-Y-Z

Once this is done, you can complete the build process by following all the targets specified in /etc/dCacheBuild/busyb.xml.

You should now have client and server RPMs. If this is not the case, the build has probably failed due to your own typos or mistakes, as only the CVS tag has changed; the new tag changes only some internal dCache variables, which alter what the administrator sees on the internal dCache web pages.

Learning SVN

Checking out head

svn co svn+ssh://omsynge@svn.dcache.org/data/svn/dCache/trunk

Copying head to a branch.

svn copy svn+ssh://omsynge@svn.dcache.org/data/svn/dCache/trunk svn+ssh://omsynge@svn.dcache.org/data/svn/dCache/branches/owen/build-1-8-0-4

Checking out that branch

svn co svn+ssh://omsynge@svn.dcache.org/data/svn/dCache/branches/owen/build-1-8-0-4

Removing the "rc" from client rpm names

ant -Dbuild.minor.number=4 dcap32.rpm

Re-building old release

ant -Dbuild.minor.number=12p6 -Drelease.name=production-1-8-0-12p6 server.rpm server.tgz
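The rebuild invocation above can be composed from the minor number and release name. This wrapper is hypothetical; it only prints the command so it can be inspected before running.

```shell
# Hypothetical helper composing the rebuild command from the two ant
# properties shown above; it prints the command rather than running it.
rebuild_cmd() {
    printf 'ant -Dbuild.minor.number=%s -Drelease.name=%s server.rpm server.tgz\n' "$1" "$2"
}

# Sketch of usage:
# eval "$(rebuild_cmd 12p6 production-1-8-0-12p6)"
```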

Yum repo for CTB

According to Louis, the yum repo is http://grid-deployment.web.cern.ch/grid-deployment/glite/cert/3.1/internal/sl4/i386/RPMS.cert-updates/ BUT the ctb-vomscert is not yet in.