Welcome to the inaugural edition of the occasional VDT newsletter!
VDT 1.1.13 is a major new release. For the last year, the VDT has had two distinct versions (US and European), and 1.1.13 unifies them into a single release. The European release had a slightly different version of Globus and was distributed as RPMs, while the US version was distributed as a Pacman installation. Beginning with 1.1.13, we will have a single version of Globus for everyone, to ensure compatibility. A subset of the VDT (Globus, GPT, GSI OpenSSH, KX509, and MyProxy) will be released as RPMs for anyone to use.
In addition, VDT 1.1.13 now contains Globus 2.4.3 plus a number of patches to fix bugs. All of these patches have been submitted to Globus, and we expect that many of them, if not all of them, will be in a future Globus release.
This version of the VDT is packaged differently from previous versions. Instead of having three separate installations (server, client, and SDK), there is now just a single monolithic installation. Why did we make this change? For much of the software, it turned out to be very hard to divide it into server and client portions. Condor can be used as a batch system or a personal grid submission point. The Globus gatekeeper is normally used as a server, but users can use it as a personal gatekeeper for testing. The examples go on and on. The server and client installations of the VDT overlapped more and more as time went on. In addition, many VDT customers simply installed both the server and the client everywhere.
Of course, not everyone wants a monolithic release. VDT 1.1.13 allows users to install individual components of the VDT. For instance, you can install just Globus, just Condor, or just Chimera, or any combination you like. So what does it mean to have installed the VDT? The vdt-version command now tells users what subset of the VDT has been installed.
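A component install and version check might look something like the session below. This is only an illustrative sketch: the package names and the form of the cache reference are hypothetical, so consult the VDT installation documentation for the real ones.

```
# Hypothetical Pacman session -- package names are illustrative,
# not necessarily what the VDT cache actually calls them.
pacman -get VDT:Globus      # install only the Globus components
pacman -get VDT:Condor      # add Condor to the same installation
vdt-version                 # report which subset of the VDT is installed
```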
In the past, the VDT has been hard to upgrade. While we still recommend that users install this VDT release in a new, empty directory, there is now a tool that copies your configuration from an old VDT release to the new one, thereby preserving your old configuration. This should significantly simplify upgrading the VDT.
VDT 1.1.13 is undergoing internal testing, and should be released within two weeks, if not sooner.
Adding all these new features to the VDT took more time than anticipated, and there is more to be done. In the near future, we expect to release VDT 1.1.14, which will fix problems that people find in VDT 1.1.13 and will upgrade several software packages.
Once VDT 1.1.14 is released, it will be tested more extensively than our usual releases. After fixing any problems we find, we will release this fixed version as VDT 1.2.0. This will be considered the stable release series of the VDT. (See our release policy for more information about stable and development releases.)
The 1.2.x series (1.2.0, 1.2.1, etc.) will only have minor upgrades and bug fixes, because it is intended as a stable series that people can use in production. In parallel with the release of VDT 1.2.0, we will work on the next development series of the VDT, 1.3.x. This development series will contain the OGSA-Globus components. Initially it will contain Globus 3.2, which will be followed by Globus 4.0. There is not yet much software that works with the OGSA components, so the non-OGSA components of Globus (the Globus 2.4.x series) will continue to be supported and maintained. We expect that the VDT 1.3.x series will allow VDT users to become familiar with the new grid services.
Our hope is that VDT 1.2.0 and 1.3.0 will both be released in the spring of 2004.
Most of the VDT components are built using machinery (software and computers) developed and deployed by the NSF Middleware Initiative (NMI). Here is how the software is built:
NMI builds several components for VDT 1.1.13: Globus, Condor, MyProxy, KX509, GSI OpenSSH, PyGlobus, and UberFTP. NMI checks the software out of the appropriate CVS repositories, then patches the software. Currently, only Globus is patched. Each of these software packages uses GPT, so GPT source bundles are then created. These are built and tested on the NMI build pool, then given to the VDT.
The NMI build pool is a cluster of nearly forty computers, and it uses Condor to distribute the builds. There are a wide variety of computer architectures and operating systems in the build pool. When a VDT build is started, we currently tell Condor to run the same build on three platforms: RedHat 7, RedHat 9, and RedHat 7 with the gcc 3 compiler. (This last platform is a requirement from LCG.) After the build, NMI automatically deploys the software and does basic verification testing to ensure that the software works. Once NMI completes these steps, the VDT team imports the software into the VDT cache.
There is very close collaboration between the VDT and NMI groups. In fact, one person, Parag Mhashilkar, spends 50% of his time on the VDT and 50% on NMI, which helps keep the two efforts closely coordinated.
The VDT team builds a few software packages such as the fault tolerant shell and expat. Currently we are working with the NMI team to use the NMI machinery to build this software for us. As we expand the number of architectures that we support, this will save us a lot of time and will eliminate errors.
Some software in the VDT is built by the contributors that wrote the software. The Virtual Data System (VDS), Netlogger, and Monalisa are three such systems. We trust these contributors to give us good builds of their software.
When all of the builds have been assembled, the VDT team packages them together. All software goes into a Pacman cache, along with scripts to configure the software during the installation. A subset of the software is turned into RPMs and put onto the VDT web site for download. (We expect this subset to grow, and hopefully it will eventually encompass the entire set of VDT software.)
At this point, the VDT team begins rigorous testing. We have a nightly test that verifies the installation went smoothly (nothing failed, all files look good, etc.) and also exercises the functionality of the software (we can submit jobs, transfer files, etc.). After a couple of days of good tests, the VDT Testing Group is brought in to do more testing. This is a group that installs the VDT on a wide variety of sites and does both local testing and testing between sites. After about a week of testing, the VDT is released to everyone.
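As a rough sketch of the two kinds of checks our nightly test performs, consider the script below. It is not the real VDT test suite; the directory layout and file names are made up for illustration, and the "installation" is simulated so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch of an install-verification check in the spirit of the VDT nightly
# tests: (1) did the expected files appear, and (2) does a basic command run?
# The installed tree here is simulated so the example can run anywhere.
VDT_LOCATION=$(mktemp -d)

# Simulate a tiny "installation" containing one executable.
mkdir -p "$VDT_LOCATION/globus/bin"
printf '#!/bin/sh\necho ok\n' > "$VDT_LOCATION/globus/bin/grid-proxy-info"
chmod +x "$VDT_LOCATION/globus/bin/grid-proxy-info"

status=0

# Installation check: every expected file must exist and be executable.
for f in globus/bin/grid-proxy-info; do
    if [ ! -x "$VDT_LOCATION/$f" ]; then
        echo "MISSING: $f"
        status=1
    fi
done

# Functionality check: a basic command must actually run and succeed.
"$VDT_LOCATION/globus/bin/grid-proxy-info" >/dev/null || status=1

if [ "$status" -eq 0 ]; then
    echo "nightly checks passed"
else
    echo "nightly checks FAILED"
fi
```

The real nightly test checks many more files and runs real operations (job submission, file transfer), but the structure is the same: verify the install, then verify the functionality.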
Carey Kireyev joined the VDT team at the University of Wisconsin-Madison in December of 2002. Though he came from a Microsoft Windows background, he learned Unix quicker than a cheetah runs. He has written many of the Perl and Bourne shell scripts that make the VDT installation experience as pleasant as it is. Carey created most of the VDT certification tests and set up the nightly test suite that is used by the VDT team. Occasionally, we let him out of the VDT cage and allow him to flex his C++ muscles to aid the development of Condor and Condor-G.
Since he is originally from Minsk in Belarus, he doesn't mind the cold winters in Wisconsin, which is good since we don't want him to suddenly move away to a warmer climate.