Burt Holzman asked about VOMS not installing from the VDT 1.3.9 cache. He was installing the VOMS-Client package on a Scientific Linux machine. Tim noted that Jerry (Gieraltowski?) at Argonne reported a similar problem, but that he (Tim) hadn't been able to reproduce it. Dan Yocum said that he had successfully installed VOMS on his FC4 machine.
After some more poking around, it seemed that the problem affected only x86-64 machines. Then, after the meeting, Tim checked the VDT's notes and found that VOMS is not installed on RHEL3/x86-64 (and lookalike) machines, because in VDT testing, voms-proxy-init would fail to work against the server after the first successful request. The problem was logged in LCG's Savannah; there is no known workaround at this time, although installing the 32-bit binaries in compatibility mode might work, that approach is untested.
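The failure mode described in the VDT notes would look something like the following (a sketch only; 'myvo' is a placeholder VO name, and a reachable, configured VOMS server is assumed):

```shell
# Sketch of the failure seen in VDT testing on RHEL3/x86-64.
# 'myvo' is a placeholder; a configured VOMS server is assumed.
voms-proxy-init -voms myvo   # first request: succeeds
voms-proxy-init -voms myvo   # subsequent requests: fail on this platform
```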
Dan Yocum asked about progress on a Mac OS X distribution of at least the compute-node packages (specifically, the Globus gatekeeper, PRIMA, and Condor). In general, Mac OS X work has received little attention recently, owing to the demands placed on the VDT team by the rapid succession of the VDT 1.3.7, 1.3.8, and 1.3.9 releases. Moreover, the team hopes to have VDT 1.3.10 ready by the end of January, so Mac OS X support may slip further.
Leigh Grundhoefer introduced Kristy Kallback-Rose, who will be working on the midwest tier-2 cluster. Kristy noted a problem with running configure_monalisa under the 'daemon' account, which has no home directory: the configure script would hang partway through, and manual attempts to run MonALISA failed while waiting for SSH keys to be created in the user's home directory. Leigh suggested simply using a dedicated 'monalisa' account and ensuring that a home directory exists for it.
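Leigh's suggestion might look like the following (a sketch, assuming root access; the account name comes from the discussion, and the exact configure invocation depends on the local VDT installation):

```shell
# Create a dedicated 'monalisa' account with a real home directory,
# so that MonALISA's SSH key generation has somewhere to write.
useradd -m monalisa          # -m creates /home/monalisa
# Then re-run configure_monalisa as that user, e.g.:
su - monalisa
```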
Leigh wondered whether it was possible to support the 'old' VOMS Admin URLs, as seen in VDT 1.3.6. Those URLs started with something like https://<machine-address>:8443/edg-voms-admin/. Tim promised to check whether an alias had been set up for the old-style URLs in VDT 1.3.9a and to get back to Leigh with a solution.
Iwona Sakrejda had some questions about setting up some machines to submit jobs using Condor-G. There was some confusion around the term 'master', which in other batch systems refers to the central control machine. In Condor, every machine runs condor_master, a process that oversees the other Condor daemons running on that machine. However, to have a Condor pool, there needs to be at least one machine, called the 'central manager', which runs the negotiator and collector daemons (under its condor_master instance). A submit machine does not need its own negotiator or collector daemons, but it still needs a condor_master running the scheduler daemon (condor_schedd).
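The distinction can be illustrated with the DAEMON_LIST setting in each machine's Condor configuration (a sketch based on the roles described above; exact values depend on the local setup):

```
# condor_config.local on the central manager:
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR

# condor_config.local on a submit machine:
DAEMON_LIST = MASTER, SCHEDD
```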
Iwona wanted to know whether the osg-client package contained enough of Condor to set up the submit machines, and the answer was yes.
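For reference, a Condor-G submission from such a machine uses Condor's grid universe; below is a sketch of a submit file (the gatekeeper hostname, jobmanager name, and executable are placeholders, not values from the meeting):

```
# Condor-G submit file sketch; hostnames and paths are placeholders.
universe      = grid
grid_resource = gt2 gatekeeper.example.org/jobmanager-fork
executable    = /bin/hostname
output        = job.out
error         = job.err
log           = job.log
queue
```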
Vikram was hoping to follow up with Alain (on paternity leave) about building PRIMA using the NMI build and test framework. When building against 32-bit Globus builds, everything worked. However, as soon as a single 64-bit platform was added to the NMI build command file, the build failed right away. Tim suggested looking at the platforms for which Globus was built and making sure that every 64-bit platform requested for the PRIMA build had a corresponding Globus build in the given NMI runid. Further discussion was planned offline.