We plan to release updates for VDT 1.3.9, 1.3.10, and 1.3.11 as soon as possible. The first two (1.3.9 and 1.3.10) will come sooner because they have been more widely deployed than 1.3.11. Questions were raised about whether the security advisory affects GRAM's delegated proxies; Alain is investigating.
Alan asked for advice on umasks, but no one knew enough to give good advice. We want a umask that is restrictive when creating proxies but more permissive when creating a client VDT installation.
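To illustrate the trade-off Alan raised, here is a minimal sketch; the specific umask values below are common conventions, not decisions from the meeting:

```shell
# Restrictive umask for proxy creation: new files come out mode 600,
# readable only by their owner.
rm -f /tmp/demo-proxy /tmp/demo-install-file
umask 077
touch /tmp/demo-proxy
ls -l /tmp/demo-proxy         # -rw-------

# More permissive umask for a shared client installation: new files
# come out mode 644, so other accounts can read the installed software.
umask 022
touch /tmp/demo-install-file
ls -l /tmp/demo-install-file  # -rw-r--r--
```

The catch, as discussed, is that one process may need both behaviors at different times, which is why no one had a single recommended value.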
Alain briefly discussed the note from last week and asked for comments; there were few. Alain said he is leaning towards installing new versions of the VDT (not small updates) in a separate directory. The new installation would copy as much configuration as possible from the existing one, and it could be made to run in "test" mode: on separate ports, with separate init scripts, and the like, so it can run in parallel with the old VDT. We would then have commands to "turn off" the old VDT and switch the new one to the standard ports. This would allow for very smooth transitions.
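The workflow Alain described can be sketched roughly as follows; all paths, file names, and port numbers here are hypothetical illustrations of the idea, not actual VDT tooling:

```shell
# Hypothetical sketch of the side-by-side upgrade (illustrative layout;
# the real VDT directory structure will differ).
OLD=/tmp/vdt-old          # stands in for the existing installation
NEW=/tmp/vdt-new          # the new version, in a separate directory

mkdir -p "$OLD/etc"
echo "gsiftp_port=2811" > "$OLD/etc/ports.conf"   # standard port

# Step 1: copy as much configuration as possible from the old install.
mkdir -p "$NEW"
cp -r "$OLD/etc" "$NEW/"

# Step 2: run the new install in "test" mode on a separate port so it
# can run in parallel with the old one (2120 is arbitrary).
sed 's/2811/2120/' "$NEW/etc/ports.conf" > "$NEW/etc/ports.conf.tmp"
mv "$NEW/etc/ports.conf.tmp" "$NEW/etc/ports.conf"

# Step 3: after testing, turn off the old VDT (command elided) and
# switch the new installation back to the standard ports.
sed 's/2120/2811/' "$NEW/etc/ports.conf" > "$NEW/etc/ports.conf.tmp"
mv "$NEW/etc/ports.conf.tmp" "$NEW/etc/ports.conf"
```

The appeal of this design is that the cut-over is a configuration flip rather than an in-place upgrade, so the old installation stays intact as a fallback.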
There are some tricky cases here that still need to be worked through.
Leigh likes this idea a lot. Other people didn't have much comment, but perhaps they just all agree. We'll discuss it more at the OSG consortium meeting next week.
Tim Freeman talked about Globus's Virtual Workspaces, which is likely to be used in Open Science Grid for the Edge Services Framework. Alain quizzed him to learn the basics so that we can start figuring out how to integrate this into the VDT.
It is a GT4 service, and it does not need to be on the same computer as the virtual machine. Instead, it talks to the nodes running the hypervisor, using remote Xen control; this could be done with XML-RPC, but for now they just use ssh. On the remote side it talks to a command-line program they wrote that does image management and some sort of validation. In a vague sense, it resembles GRAM. The service is named Virtual Workspaces. There is also a service called Dynamic Accounts, but we are not planning to use it.
We can download it now, but the current release does not incorporate remote resource management and must instead be co-located with the hypervisor. In a week or two (after the OSG consortium meeting), they can give us a feature-incomplete preview to learn from. It relies on the Java core and some common database packages. We can build it from source or use their GAR files. There is also a Python component, residing on the remote resources, which depends on Python 2.3 or later and Xen 3.0 or 2.0; this is the bit that talks to the hypervisor and controls it.
We can do simple testing on a single computer without Xen, because they have a mode in which it pretends that all of the Xen control commands succeed.
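This "pretend it succeeded" mode is the standard dry-run pattern; as a hedged sketch (this is the general idea, not their actual code, and the function name is made up):

```shell
# Hypothetical dry-run wrapper: in test mode, every hypervisor control
# command "succeeds" without Xen being installed at all.
run_xen_command() {
    if [ "$DRY_RUN" = "yes" ]; then
        echo "dry-run: would run: $*"
        return 0              # report success without touching Xen
    fi
    "$@"                      # really run the hypervisor command
}

# On a machine without Xen, enable dry-run mode; the rest of the
# service logic can then be exercised end to end.
DRY_RUN=yes
run_xen_command xm create image.cfg && echo "xen step ok"
```

This is why a single ordinary workstation is enough for basic integration testing of the service.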
It can optionally use RFT for image staging, and RFT can optionally be on a different node. Users will need a computer that has a lot of disk space for the image. My guess is that using RFT is a good idea, even if it is optional. They plan to eventually add SRM support. OSG might host images at SDSC and Fermilab via SRM, and they are contacting VOs to get advice on what is needed.
It can be accessed via a command-line client, which is complete if a bit clunky. They have sample code if people want to write their own Java client. There is also a group at Intel making a GUI.
It is mostly under the Globus/Apache license, except for some plug-ins that are under another BSD-like open-source license. By default it uses a grid-mapfile, though a different one than GRAM's. It can use the same authorization callout that GRAM uses, so OSG should be able to use PRIMA just fine.
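For reference, a grid-mapfile maps certificate subject DNs to local accounts, one entry per line. A minimal illustrative example (the DN and account are made up):

```shell
# Write an example grid-mapfile entry: a quoted certificate subject DN
# followed by the local Unix account it maps to.
cat > /tmp/example-grid-mapfile <<'EOF'
"/DC=org/DC=example/OU=People/CN=Jane Doe 12345" jdoe
EOF
cat /tmp/example-grid-mapfile
```

Since Virtual Workspaces keeps its own mapfile, sites would maintain this separately from the GRAM one unless they wire in the shared authorization callout instead.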
Alan asked about performance: Abishek says that his performance concerns with Xen 2.0 were addressed in Xen 3.0.
Alan asked about the VOMRS security advisory. We don't yet have VOMRS in the VDT, so we're not doing anything with it.
Michael Samidi talked about issues with TclGlobus working in a 64-bit environment. There are apparently segfaults when passing data from Tcl to C code. TclGlobus 1.4.0 will solve the problems, but it will take a bit longer to solve than they expected. They'll give it to us when it's ready, and it will include a license.
Someone (I didn't catch his name) reported that when he submits jobs, he gets files in .globus/jobs that are never deleted. Alain asked for more details, which the submitter will send to vdt-support. Alain thinks it is a gatekeeper/job-manager issue, but it is a bit hard to tell because the submitter and gatekeeper are on the same computer.
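Until the root cause is found, a hedged workaround sketch (the path is the one reported in the meeting; verify it on your system, and check that no jobs still need the files, before deleting anything):

```shell
# List per-job state directories older than 7 days; the commented-out
# second line removes them. Both the path and the 7-day threshold are
# illustrative choices, not a recommendation from the meeting.
JOBDIR="$HOME/.globus/jobs"
[ -d "$JOBDIR" ] && find "$JOBDIR" -mindepth 1 -maxdepth 1 -mtime +7 -print || true
# [ -d "$JOBDIR" ] && find "$JOBDIR" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} +
```

This only hides the symptom; the vdt-support ticket should still determine why the gatekeeper/job manager is not cleaning up after itself.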