The service management subsystem will support the use cases listed below. In these use cases, the words enable and disable have very specific meanings:
The service management subsystem will contain two scripts:
For an existing service, it is possible to include just a --name and --enable/--disable flag to set the desired state of the service.
One and only one of --on and --off must be specified.
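The mutual-exclusion requirement above could be checked with a small argument-parsing helper. This is only an illustrative sketch: the '--on'/'--off' flag names come from the requirement above, but the function name and error behavior are assumptions, not part of the design.

```shell
# Hypothetical sketch: accept exactly one of --on / --off.
# Echoes the requested state on success; returns non-zero if zero or
# both flags were given.
parse_state() {
    state=""
    for arg in "$@"; do
        case "$arg" in
            --on)
                [ -n "$state" ] && return 1   # both flags given: error
                state=enabled ;;
            --off)
                [ -n "$state" ] && return 1
                state=disabled ;;
        esac
    done
    [ -n "$state" ] || return 1               # neither flag given: error
    echo "$state"
}
```

A caller would invoke it as, e.g., `parse_state --on`, which prints `enabled`.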
The service management state file will be located at
There will be one line per service/type combination, where the service is simply the name of the service and the type is a string identifying the method of running the service.
The format of a line is
NAME <tab> TYPE <tab> ENABLE <tab> DETAILS

where the components are defined as follows:
DETAILS fields are as follows:
|cron|time spec (crontab format), command and arguments|
|inetd|port, protocol (tcp or udp), command and arguments|
|init|path to init script|
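For concreteness, a state file following the format above might contain lines like the following. Every service name, path, port, and time spec here is purely illustrative (none come from the design itself), and columns are separated by literal tab characters:

```
fetch-crl   cron    enabled    0 */6 * * * /opt/vdt/fetch-crl/sbin/fetch-crl
gsiftp      inetd   enabled    2811 tcp /opt/vdt/globus/sbin/in.ftpd
mysql       init    disabled   /opt/vdt/post-install/mysql
```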
This script is used to install services on a machine when the VDT is deployed. It can install tasks into inetd/xinetd, init.d, and crontab, as well as modify the 'services' definition file. It has mechanisms to figure out whether the system is running inetd or xinetd. When the script is called to install a task into a subsystem, it first removes any trace of the task from the target subsystem; then, if the install flag is enabled, it adds the task's information back in as appropriate.
The script does not keep a record of what items it has installed or uninstalled. I propose that we keep a logfile of what is installed by the system and whether it is currently enabled or not. This will basically serve as a record of the items that this VDT installation is responsible for; vdt-install-service should be able to remove and re-add the appropriate entries in the subsystems based strictly on the contents of the file. We will also need to provide a way for other scripts to easily list all the packages recorded in this file.
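The "list all the packages" helper could be a thin wrapper over the tab-separated state file. A minimal sketch, assuming the NAME-first layout described above (the function name and state-file argument are illustrative):

```shell
# List each service name recorded in the state file, once, sorted.
# Assumes the NAME <tab> TYPE <tab> ENABLE <tab> DETAILS layout; the
# state file path is passed in rather than hard-coded.
list_services() {
    statefile="$1"
    # NAME is the first tab-separated field; cut's default delimiter is tab.
    cut -f1 "$statefile" | sort -u
}
```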
We also do not currently have any mechanism for dictating the boot order of our services (this only matters for init.d scripts). From what I can tell, all our services are installed to start and stop at the same init level (99) and at all run levels; I can only assume that scripts are executed in alphabetical order within the run level. I am not sure if it is possible to change this to give us finer-grained control. One possible solution would be to make our own 'vdt-init' wrapper script that would in turn call the appropriate init scripts, allowing us to control the boot order.
If we wanted to go this route, we would need to provide a switch to this script that allows one to specify the order level at which the service needs to be started. The default would place the service in the last 'bucket' of services to be started.
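The 'vdt-init' wrapper idea could look something like the sketch below. The order-file format (an order level and an init-script path per line, tab-separated) is entirely an assumption made for illustration; only the goal of starting scripts in a controlled order comes from the proposal above.

```shell
# Hypothetical 'vdt-init' start path: run init scripts sorted by a
# numeric order level. The order file format is an assumption:
#   LEVEL <tab> PATH-TO-INIT-SCRIPT
vdt_init_start() {
    orderfile="$1"
    # Sort by the numeric order level, then handle each script in turn.
    sort -n "$orderfile" | while IFS="$(printf '\t')" read -r level script; do
        echo "starting (level $level): $script"
        # "$script" start   # the real call, commented out in this sketch
    done
}
```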
To make it clearer what kind of installation (cron/init/inetd) each package's configure script performs, we might also want to come up with a nice wrapper class or simple hashtable that can be declared at the top of each configure script. This would make things more uniform and easier to follow than embedding the calls to vdt-install-service deep inside the code.
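One minimal way to realize this in a shell-style configure script is a few well-known variables declared up front, which the script later feeds to vdt-install-service. The variable names and example values below are illustrative only, not an agreed convention:

```shell
# Declared at the top of a package's configure script so the
# installation type is visible at a glance. All values are examples.
SERVICE_NAME="mypackage"        # hypothetical package name
SERVICE_TYPE="inetd"            # one of: cron, inetd, init
SERVICE_DETAILS="2811 tcp /opt/vdt/mypackage/sbin/daemon"
```

Later in the script, a single call would pass these to vdt-install-service instead of scattering subsystem-specific logic through the code.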
I decided that the name 'vdt-enable' was more indicative of what we are trying to accomplish. By default, the script will install a package into the appropriate subsystem (as defined in its configure script) and also start up all the services that it knows about. The latter functionality can be disabled by passing a '--nostartup' flag.
This script basically invokes vdt-install-service for each installed package that needs to be run by a subsystem. It then attempts to start the package's daemons by invoking their init scripts.
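The two-phase flow just described, including the proposed '--nostartup' behavior, can be sketched as follows. The package list, the echoed command line, and the function name are placeholders; vdt-install-service's real arguments are defined by its own script.

```shell
# Rough sketch of the vdt-enable flow: install each package's service,
# then optionally start its daemons. Commands are echoed rather than
# executed, since the real invocations depend on each package.
PACKAGES="apache mysql"   # hypothetical list of installed packages

vdt_enable() {
    nostartup="$1"   # pass "yes" to mimic the proposed --nostartup flag
    for pkg in $PACKAGES; do
        echo "vdt-install-service --name $pkg --install"
        if [ "$nostartup" != "yes" ]; then
            echo "starting daemons for $pkg"
        fi
    done
}
```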
It is not clear to me whether this script becomes the new 'frontend' for users to start and stop daemons in the VDT. That is, instead of calling the 'post-install/daemon' scripts directly, users could pass a flag to start a particular daemon ('vdt-enable apache').
The question that really matters is whether we want to let a package's daemon execute on a system that we know is not configured as it should be. It is not clear to me at this time whether we want more rigid control over our system-wide environment.
This script is used to essentially disable a VDT installation from running. There are two parts of this operation: (1) we must shutdown all services that we have started, (2) we must disable these services from starting again when the system is rebooted. This is basically a reversal of the operations done in vdt-enable.
When vdt-disable is called, it reads the list of packages that are installed and first calls the init scripts to stop the daemons. Next, vdt-disable calls 'vdt-install-service --remove' in the reverse of the order in which things were originally installed. The logfile is then marked to show that the package has been disabled.
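The stop-then-remove-in-reverse sequence can be sketched over the proposed state file. Only the NAME-first, tab-separated layout comes from the design above; the function name, echoed commands, and `sed` reverse trick are illustrative choices (the `sed` one-liner reverses lines portably, since `tac` is not universal).

```shell
# Sketch of vdt-disable's ordering: stop every daemon first, then
# remove the services in the reverse of their installation order.
vdt_disable() {
    statefile="$1"
    # 1. Stop each daemon, in the order the services were installed.
    cut -f1 "$statefile" | while read -r name; do
        echo "stopping $name"
    done
    # 2. Remove each service, last-installed first.
    sed -n '1!G;h;$p' "$statefile" | cut -f1 | while read -r name; do
        echo "vdt-install-service --name $name --remove"
    done
}
```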
These examples assume that the VDT is installed as root. I believe that non-root installations would be similar, except that the VDT installation scripts would be unable to add all the information needed for packages to certain subsystems (init/inetd).
For now, the user will simply run the VOMS init script manually to start the new VO. When the VDT is restarted, it will start all of the VOs contained in VOMS, as expected. That is, we will not keep information about each separate VO. If users don't like this behavior, we can change it later.
Yes, that's what we're saying. When does it matter to have installation and starting as distinct events?
vdt-install-service --service <NAME> --port <PORT> --inetd ...

it looks like it will do both parts. However, we do not seem to take advantage of this functionality in any of the configure scripts. Is there a reason for this? Could we combine these two service types into one? Should we?