##############################################################################################
#			VDT dCache Package                                                   # 
##############################################################################################
The package can be downloaded from http://vdt.cs.wisc.edu/software/dcache/server

This release of VDT-dCache enables a user to install dCache 1.9.5-* (either a fresh install 
or an upgrade from a previous 1.9.x release).

Important namespace-related settings that should exist in your site configuration file
(an example snippet follows this list):
 1. If doing a fresh install
	SETUP_CHIMERA="yes" and CHIMERA_EXISTS="no"
 2. If doing an upgrade to dCache 1.9.5-* with PNFS based FS
	 SETUP_CHIMERA="no" and CHIMERA_EXISTS="no" and DCACHE_PNFS="HOSTNAME-OF-PNFS-NODE"
    Once installation is done, then migrate to Chimera using MIGRATION document.
 3. If doing an upgrade to dCache 1.9.5-* with Chimera based FS
	SETUP_CHIMERA="no" and CHIMERA_EXISTS="yes"
 4. Since PNFS has now been replaced with Chimera, the OSG Storage group no longer provides 
    support for PNFS.
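
As an illustration, a fresh install would carry lines like these in siteinfo.conf (a minimal
sketch; DCACHE_CHIMERA names the Chimera server node, and the hostname here is a placeholder):
	SETUP_CHIMERA="yes"
	CHIMERA_EXISTS="no"
	DCACHE_CHIMERA="chimera.example.edu"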

************ If doing an upgrade, refer to the UPGRADE document for the procedure *************

##############################################################################################
#               	PRE-REQUISITES                                                       #
##############################################################################################
Make sure:
o if you are installing gratia probes, "pyOpenSSL" is installed
o if you are installing on an SL5 node, the "compat-readline43", "libxslt" and "openssl" 
  libraries are installed
o all nodes on which dCache services will run, including the pool nodes have valid host 
  certificates
o NFS is not already running, else the Chimera install will fail
o directory /etc/grid-security/certificates exists on all dCache nodes, else authentication will 
  fail. Also, the contents of this directory - namely, the certificates and CRLs - have to be 
  updated regularly. There may be several ways to do this, but one that is known to work is 
  installing the OSG WN package. Here is a sample install:
  - wget http://physics.bu.edu/pacman/sample_cache/tarballs/pacman-3.29.tar.gz
  - tar -zxvf pacman-3.29.tar.gz
  - cd pacman-3.29
  - source setup.sh
  - mkdir /usr/local/vdt-2.0.0
  - cd /usr/local/vdt-2.0.0
  - pacman -get http://software.grid.iu.edu/osg-1.2:wn-client
  - . setup.sh
  - vdt-ca-manage setupCA --location root --url osg
  - vdt-control --enable fetch-crl vdt-rotate-logs vdt-update-certs
  - vdt-control --on
    Upon successful installation, you will see entries in crontab
    file such as:
    - /usr/local/vdt-2.0.0/fetch-crl/share/doc/fetch-crl-2.6.6/fetch-crl.cron
    - /usr/local/vdt-2.0.0/vdt/sbin/vdt-update-certs-wrapper --vdt-install /usr/local/vdt-2.0.0 
      --called-from-cron
o "rpcinfo" is in PATH environment variable
o Postgres is 8.3 or higher (Refer to POSTGRES-UPGRADE document)
o Postgres and Chimera are running (if they are already installed on a node). If not, you need to 
  start them before running the installation script:
   /etc/init.d/postgresql start
   /etc/init.d/chimera-nfs-run.sh start
  (a quick pre-flight check sketch follows this list)
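
A minimal pre-flight check covering a few of the items above (a sketch - the NFS init script
and the grid-security paths are assumptions; adapt them to your nodes):
   #!/bin/sh
   # Rough prerequisite check for a dCache node; prints warnings only, changes nothing.
   command -v rpcinfo >/dev/null 2>&1 || echo "WARNING: rpcinfo not in PATH"
   [ -d /etc/grid-security/certificates ] || echo "WARNING: CA certificate directory missing"
   [ -f /etc/grid-security/hostcert.pem ] || echo "WARNING: host certificate missing"
   # Chimera setup fails if an NFS server is already running on this node
   /etc/init.d/nfs status 2>/dev/null | grep -q running && echo "WARNING: NFS is running"
   # dCache/Chimera want Postgres 8.3 or higher
   psql --version 2>/dev/null | grep -q "8\.[3-9]\|9\." || echo "WARNING: Postgres < 8.3 or psql missing"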
##############################################################################################
#                       INSTALLATION                                                         #
##############################################################################################
The log and error files for the installation process are
"{HOSTNAME}-vdt-install.log" and "{HOSTNAME}-vdt-install.err".

On the admin node:
 o Download the package
 o Untar the package
 o Setup the configuration for your SE
   Edit {VDT-DCACHE-INSTALL-HOME}/install/siteinfo.conf according to your site specifications.
   Note: 
   1. Due to significant changes made in the site configuration file, we STRONGLY RECOMMEND 
   editing the file provided in the package instead of copying over the old 
   siteinfo.conf that you may have. 
   2. If doing a fresh dCache install, make sure that in file siteinfo.conf - 
      - SETUP_CHIMERA="yes"
      - other CHIMERA related settings are set properly
      - DCACHE_PNFS is not present
   3. If doing a dCache upgrade with the current namespace as PNFS,
      make sure that in file siteinfo.conf -  
      - SETUP_CHIMERA="no"
      - CHIMERA_EXISTS="no"
      - add the line 
	DCACHE_PNFS="HOSTNAME-OF-PNFS-NODE"
	This line is only needed this one last time; after the migration to Chimera is
	complete, make sure you REMOVE it.
   4. If doing a dCache upgrade with the current namespace as Chimera,
      make sure that in file siteinfo.conf -
      - SETUP_CHIMERA="no"
      - CHIMERA_EXISTS="yes"

 o (Optional) Edit the "coretext.conf" file
	
On remaining nodes:
 o Download the package
 o Untar the package
 o cd into the {VDT-DCACHE-INSTALL-HOME}/install directory
 o Copy over the siteinfo.conf file from the admin node above. You may have to edit this file if 
   the Java location is different on these nodes
 o (Optional) Edit the "coretext.conf" file
	
First on the Chimera node and then on remaining dCache nodes:
 o (Optional) Dryrun
	./install.py --dryrun
 o (Optional) Help
	./install.py --help
 o Actual Install
	./install.py
 o Do post-install configuration (Refer to POST-INSTALL CONFIGURATION)
 o Start dCache Services
   On Chimera node:
    - Postgres should be running at this point. If not, run
                > /etc/init.d/postgresql start
    - Chimera server should be running at this point. If not, run
                > /etc/init.d/chimera-nfs-run.sh start
   On Admin node:
    - Postgres should be running at this point. If not, run
                > /etc/init.d/postgresql start
    - Start dCache core services
                > /etc/init.d/dcache start
		(this will also bring up the pools, if it is also a pool node)		
    - If enabled, start gratia transfer probe service
                > /etc/init.d/gratia-dcache-transfer start
   On Chimera node:
    - Start the Chimera domain
                > /etc/init.d/dcache start
                (this starts up the pnfsManager, which by default runs 
		on the Chimera node)
   On SRM node:
   - Postgres should be running at this point. If not, run
                > /etc/init.d/postgresql start

   On SRM, Gsiftp,Gsidcap nodes:
    - Start dCache core services
                > /etc/init.d/dcache start
   On Pool nodes:
    - Start up the pools
                > /etc/init.d/pool start
As a simple first-level check, visit "http://[ADMIN_NODE]:2288".
It should list all the services that are running and the node each runs on.
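If the node has no browser handy, a quick probe from the shell works too (a sketch; the
hostname is a placeholder for your admin node):
                > curl -sf http://admin.example.edu:2288/ >/dev/null && echo "web interface is up"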
##############################################################################################
#                 POST-INSTALL CONFIGURATION                                                 #
##############################################################################################
After you have run the installation script successfully, you need to make sure 

1. the directory structure is set up correctly on the namespace node
  The path /pnfs/$YOUR-DOMAIN/data should exist at this point. Create any subdirectories that you 
  may want now. 
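  For example (run on the namespace node, with /pnfs mounted; the domain, VO directory name and
  ownership are placeholders):
        > mkdir /pnfs/example.edu/data/myvo
        > chown myvouser:myvogroup /pnfs/example.edu/data/myvo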

2. the pool configuration file (PoolManager.conf) is set up correctly. An upgrade will currently 
   back up your old file and put a new default one in its place. Move the old one back to its 
   original location and make any changes in it that you want.
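   A sketch of the swap (the backup's actual name is whatever the installer chose - check the
   install log - so the .bak name below is only an assumption):
        > cd /opt/d-cache/config
        > cp PoolManager.conf PoolManager.conf.default   # keep the new default for reference
        > cp PoolManager.conf.bak PoolManager.conf       # restore your customized copy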

3. Authorization is setup correctly
For configuration, you should refer to the main dCache website (http://www.dcache.org/manuals/Book/
config/cf-gplazma.shtml), but here is a minimal checklist:
 o customize /opt/d-cache/etc/dcachesrm-gplazma.policy file
 o depending on how gPlazma is configured, it will use various files. Make sure these files 
   exist and have correct information
 o Adjust your configuration according to the following:
   ****Important changes in gPlazma configuration****
  gPlazma now persists authorizations in a database. Therefore, postgres must be installed on 
  the node which is running gPlazma. No special configuration of postgres is needed by gPlazma, 
  nor will it interfere with other dCache uses of the database.

  gPlazma no longer supports the convention of setting a literal Role=NULL and/or Capability=NULL 
  when no role or capability exist in a user's proxy. This will affect sites that are using 
  /etc/grid-security/grid-vorolemap for authorization and are currently using the convention. 
  All instances of Role=NULL and Capability=NULL should be removed from 
  /etc/grid-security/grid-vorolemap. For example, if a site is currently specifying 
  fully-qualified attribute names (groups and roles) in a form such as
  "/DC=org/DC=doegrids/OU=People/CN=Ted Hesselroth 898521" "/cms/Role=cmsprod/Capability=NULL" cmsprod
  "/DC=org/DC=doegrids/OU=People/CN=Ted Hesselroth 898521" "/cms/Role=NULL/Capability=NULL" uscms
  such lines should be changed to
  "/DC=org/DC=doegrids/OU=People/CN=Ted Hesselroth 898521" "/cms/Role=cmsprod" cmsprod
  "/DC=org/DC=doegrids/OU=People/CN=Ted Hesselroth 898521" "/cms" uscms

  gPlazma now includes a new authorization plugin, to support the XACML authorization schema.
  Using XACML with SOAP messaging allows gPlazma to acquire authorization mappings from any 
  service which supports the obligation profile for grid interoperability
  (http://cd-docdb.fnal.gov/cgi-bin/ShowDocument?docid=2952). Servers presently supporting XACML 
  mapping are the latest releases of GUMS and SCAS. Using the new plugin is optional, and previous 
  configuration files are still compatible with gPlazma. If the installation is an upgrade it will 
  change /opt/d-cache/config/gPlazma.batch. It is normally not necessary to change this file, but 
  if you have customized the previous copy, transfer your changes to the new batch file.
  If you wish to use the new plugin, add a line for xacml-vo-mapping in
  /opt/d-cache/etc/dcachesrm-gplazma.policy, or rename the file
  /opt/d-cache/etc/dcachesrm-gplazma.policy.rpmnew (created by the upgrade process) to
  dcachesrm-gplazma.policy and edit it. The following configuration has xacml mapping turned on 
  and all other plugins turned off.

  # Switches
  xacml-vo-mapping="ON"
  saml-vo-mapping="OFF"
  kpwd="OFF"
  grid-mapfile="OFF"
  gplazmalite-vorole-mapping="OFF"
	
  You will also need to set the endpoint for the xacml plugin by changing the line 
  for XACMLmappingServiceUrl.
  # XACML-based grid VO role mapping
  XACMLmappingServiceUrl="https://gums.oursite.edu:8443/gums/services/GUMSXACMLAuthorizationServicePort"
  # Time in seconds to cache the mapping in memory
  #xacml-vo-mapping-cache-lifetime="0"
  # SAML-based grid VO role mapping
  mappingServiceUrl="https://gums.oursite.edu:8443/gums/services/GUMSAuthorizationServicePort"
  # Time in seconds to cache the mapping in memory
  #saml-vo-mapping-cache-lifetime="0"
  In production it is advisable to enable caching by setting the lifetime to 60 or 120 (seconds).

  ****SRM Space Manager Link Group Authorization****
  In the link group authorization file, a leading slash before the group name is now required, 
  except when a user name is being used instead of a true group name. For example, if previously 
  authorizing the role cmsprod of the group cms with
  cms/Role=cmsprod
  the line should be changed to
  /cms/Role=cmsprod
  If a user has no group, the user name as mapped by gPlazma is used in place of the group name. 
  In this case no slash is used before the user name. For example, if a user with no group is 
  mapped to the user name cms999, then the line in the LinkGroup authorization file authorizing 
  the user to make a space reservation is
  cms999
  Note that no Role=* need be appended to the user name. Use of Role=* such as
  /dteam/Role=*
  is no longer needed - the wildcard is now assumed if no role is specified, e.g.,
  /dteam 
  authorizes dteam group members with any or no role to make space reservations.
	
  ****Default Access Latency and Retention Policy****
  The system wide default access latency and retention policy is now defined by the PnfsManager. 
  Default AccessLatency and RetentionPolicy defined by the variables SpaceManagerDefaultAccessLatency 
  and SpaceManagerDefaultRetentionPolicy in dCacheSetup will now need to be specified in dCacheSetup 
  on the PnfsManager node. The ones defined on the SRM node will have no effect.
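  For instance, dCacheSetup on the PnfsManager node would carry lines like these (the values
  shown are one common choice, not a recommendation):
  SpaceManagerDefaultAccessLatency=ONLINE
  SpaceManagerDefaultRetentionPolicy=REPLICA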
##############################################################################################
#                      DIRECTORY STRUCTURE                                                   #
##############################################################################################
RPMS    - This directory contains rpms for:
  dCache, Postgres, Chimera, srmwatch, the Gratia storage and transfer probes, the JDK, PNFS dump,
  dCache Core Operations and dCache Chronicle (OSG Operations Toolkit)
install - This directory contains:
 o installation scripts
   install.py -  invokes other installation scripts
   validate.py - validates the site configuration file
   install_java.sh - installs java
   install_jython.sh - installs jython
   install_dcache.sh - installs dcache
   setup_chimera.sh - sets up chimera
   install_postgres.sh - installs postgres
   install_srmwatch.sh - installs srmwatch
   install_initd_scripts.sh - installs dcache/pool init.d scripts
   install_gratia_probes.sh - installs gratia storage/transfer probes on admin node
   install_dcache_core_operations.sh - installs dcache core operations rpm
   install_dcache_chronicle.sh - install dcache chronicle rpm
   rpm_unpack.sh - unpacks the rpms
 o configuration files
   siteinfo.conf - site configuration file that you need to edit
   coretext.conf - file that you normally do not need to change, unless you want to customize 
                   your site rather than use the default settings given in this file.
 o INSTALL  - this file
 o POSTGRES-UPGRADE - file which contains instructions on how to upgrade Postgres software
 o MIGRATION - file which contains PNFS to Chimera Migration instructions
 o config  - This directory contains LinkGroupAuthorization.conf, PoolManager.conf and 
	     storage-authzdb. The first two serve as default configuration files. The last one
             is an example authorization file.
             WARNING: If you are doing an upgrade, your previous PoolManager.conf will be 
	     replaced. A backup of the previous PoolManager.conf is saved for reference when 
	     customizing PoolManager.conf.
 o utils    - This directory contains the file "config_file". This is the YAIM configuration 
              and setup file 
 o init.d  - This directory contains the pool and dcache init.d scripts
            (the startup script for chimera is provided by dcache itself)
tools - This directory contains:
   companion2chimera.sql - script to transfer companion data into the Chimera database 
##############################################################################################
#                       SITE CONFIGURATION                                                   #
##############################################################################################
"siteinfo.conf" file, defines which service will run on each dcache node. The definition for 
services which may run on more than one node may be a space-separated list of fully-qualified 
hostnames. If a service needs further information, such as pool size and location, they may 
appear in colon-separated fields after the hostname.
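For illustration, a pool definition might look like this (the variable name and the order of
the colon-separated fields here are hypothetical - the comments in siteinfo.conf give the real
syntax):
DCACHE_POOLS="pool1.example.edu:100:/scratch/pool1 pool2.example.edu:200:/scratch/pool2"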
##############################################################################################
#                       INSTALLATION SCRIPTS                                                 #
##############################################################################################
+   Java   +
 Installation script will install the JDK which exists under the RPMS directory of package
+  Jython  +
 The jython installation script runs only if the node is an admin node and you have chosen to 
 install the dcache core operations rpm. Jython will be skipped if either of the conditions 
 below is true:
 - the dcache core operations rpm has already been installed, OR
 - the directory given by the value of JYTHON_LOCATION in siteinfo.conf already exists.
 In either case, the script assumes that jython has already been installed. The jython 
 installation process is interactive, requiring input from you. 
 IMPORTANT: Standard install of Jython does not provide all packages. When asked during the dialog, 
  choose 'all/everything' option. Also, when asked about the "target directory", enter the same 
  value as that of parameter JYTHON_LOCATION in your siteinfo.conf file. 
+ Postgres +
 Installation script will install Postgres only if
  - this is a Chimera server node OR any of SRM, Billing, Space Manager, Pin Manager or Replica 
    Manager services are running on this node and
  - it is not installed already 	
 Note: dCache/Chimera strongly recommends using Postgres 8.3 or higher
+ Chimera  +
 Setup script will setup Chimera only if
 - you have chosen "yes" to SETUP_CHIMERA in siteinfo.conf
 - this is the Chimera server node (defined by the value of variable DCACHE_CHIMERA in 
   siteinfo.conf) and
 - NFS server is not running on this node 
 Note: If a previous attempt to set it up failed partway through, you will have to clean up and
 then re-run the install script
+  dCache  +
 Installation script will unpack the dcache-server rpm if necessary and modify the dCache 
 configuration files
 - Services will be turned on and pools will be created according to the information in 
   siteinfo.conf
 - Files etc/node_config.delta and etc/dCacheSetup.delta are created, which show exactly what 
   lines in etc/node_config or config/dCacheSetup will be set. These files can be used for custom 
   settings as well (see CUSTOMIZATION section).
+  gratia  +
 Installation script will unpack the gratia probes on admin node and will modify the 
 configuration files. It will also check if indexes have been created in the billing database
 - Please note that tables in the billing database get created when dCache services are first 
   started. Once the tables have been created, you need to create four indexes as follows 
   (a shell sketch for running these appears after this list) - 
	 log in to the billing database and run
		 > create index dates_bi on billinginfo(datestamp);
		 > create index initiator on billinginfo(initiator);
		 > create unique index transaction on doorinfo(transaction);
		 > create index dates_di on doorinfo(datestamp);
 - gratia transfer probe will need to be started by running "service gratia-dcache-transfer start" 
 - gratia storage probe is a cronjob that is run by default every hour. The cron file 
   gratia-probe-dcache-storage.cron is located in /etc/cron.d 
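 One way to run the four index statements from the shell, as mentioned above (a sketch; the
 database name "billing" and the Postgres account are assumptions - use whatever your billing
 cell is configured with):
	 > psql -U postgres billing -c "create index dates_bi on billinginfo(datestamp);"
	 > psql -U postgres billing -c "create index initiator on billinginfo(initiator);"
	 > psql -U postgres billing -c "create unique index transaction on doorinfo(transaction);"
	 > psql -U postgres billing -c "create index dates_di on doorinfo(datestamp);"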
 
##############################################################################################
#                       CUSTOMIZATION                                                        #
##############################################################################################
etc/node_config.delta and etc/dCacheSetup.delta can be used to change any other lines in 
etc/node_config or config/dCacheSetup. To change or add any other lines in these files,
add the line or lines to the corresponding *.delta file (creating the file if necessary) 
before running install.py.
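For example, to adjust the JVM options dCache starts with, you could add a line like this to
etc/dCacheSetup.delta before installing (java_options appears in stock dCacheSetup files of
this generation, but verify the name against your own copy):
	java_options="-server -Xmx1024m"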
##############################################################################################
#                       CLEANUP                                                              #
##############################################################################################
DISCLAIMER: Below are the commands I run to do a complete dCache-related cleanup when my 
single-node installation attempt does not succeed.
Follow the instructions in this section at your own risk. There may be some additional cleanup 
that you have to do. Below is what works for me :)

NOTE: Modify the names of the rpms and the various locations sensibly. If yours is a multi-node 
install, then you need to clean up only what could have been installed on that dCache node.

> Stop dCache
[root@fgt0x4 install]# /etc/init.d/dcache stop
Stopping dCache services... 

> Unmount PNFS/Chimera
[root@fgt0x4 install]# umount /pnfs
umount: /pnfs: not mounted

> Stop Chimera
[root@fgt0x4 install]# /etc/init.d/chimera-nfs-run.sh stop
Shutting down Chimera-NFSv3 interface

> Stop PostgreSQL
[root@fgt0x4 install]# /etc/init.d/postgresql stop
Stopping postgresql service:                               [  OK  ]

> Check that no 'dCache and Postgres related' processes are running
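One way to look (a rough sketch; the patterns may also match unrelated processes):
[root@fgt0x4 install]# ps -ef | egrep -i 'dcache|postgres|java' | grep -v egrep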

> Edit files
[root@fgt0x4 install]# vi /etc/exports (Remove the 2 lines - 
	/ localhost(rw)
	/pnfs)
[root@fgt0x4 install]# vi /etc/fstab (Remove any PNFS related entries)
[root@fgt0x4 install]# vi /etc/mtab (Remove any PNFS related entries)

> Remove any chkconfig entries
[root@fgt0x4 install]# chkconfig --del chimera-nfs-run.sh; chkconfig --del postgres;
chkconfig --del pnfs;chkconfig --del dcache
error reading information on service postgres: No such file or directory
error reading information on service pnfs: No such file or directory

> Remove startup scripts
[root@fgt0x4 install]# rm -rf /etc/init.d/chimera-nfs-run.sh 
[root@fgt0x4 install]# rm -rf /etc/init.d/dcache
[root@fgt0x4 install]# rm -rf /etc/init.d/postgres
[root@fgt0x4 install]# rm -rf /etc/init.d/pool
[root@fgt0x4 install]# rm -rf /etc/init.d/gratia-dcache-transfer

> Remove Gratia storage probe cron
[root@fgt0x4 install]# rm -rf /etc/cron.d/gratia-probe-dcache-storage.cron

> Remove registration
[root@fgt0x4 install]# rpcinfo -d nfs 2
rpcinfo: Could not delete registration for prog nfs version 2
[root@fgt0x4 install]# rpcinfo -d nfs 3

> Stop NFS
[root@fgt0x4 install]# /etc/init.d/nfs stop
Shutting down NFS mountd:                                  [FAILED]
Shutting down NFS daemon:                                  [FAILED]
Shutting down NFS quotas:                                  [FAILED]
Shutting down NFS services:                                [FAILED]

> Remove Postgres RPMS
This will erase all postgresql rpms and delete any existing postgres databases and installations!
Make a backup of your databases and anything else you think you might need later.

[root@fgt0x4 install]# rpm -e --noscripts --nodeps postgresql-8.3.7-1PGDG.rhel5.i386 
postgresql-libs-8.3.7-1PGDG.rhel5.i386 postgresql-devel-8.3.7-1PGDG.rhel5.i386 
postgresql-server-8.3.7-1PGDG.rhel5.i386

> Remove dCache RPM
[root@fgt0x4 install]# rpm -e --noscripts --nodeps dcache-server-1.9.5-19.noarch

> Remove any Gratia probe, srmwatch, pnfsDump, dCache core operations and dCache chronicle RPMS
if they got installed in the process
[root@fgt0x4 ~]# rpm -e --nodeps gratia-probe-dCache-storage-itb-1.06.15d-1.noarch 
gratia-probe-extra-libs-1.06.15d-1.noarch gratia-probe-common-1.06.15d-1.noarch 
gratia-probe-dCache-transfer-itb-1.06.15d-1.noarch gratia-probe-extra-libs-arch-spec-1.06.15d-1.i386 
gratia-probe-services-1.06.15d-1.noarch
[root@fgt0x4 ~]#

> Remove directories
[root@fgt0x4 install]# rm -rf /opt/d-cache
[root@fgt0x4 install]# rm -rf /var/lib/pgsql
[root@fgt0x4 install]# rm -rf /opt/pgsql (if this was created by the failed install)
[root@fgt0x4 install]# rm -rf /pnfs
[root@fgt0x4 install]# rm -rf /opt/jython (or wherever you chose to install jython)
[root@fgt0x4 install]# rm -rf /pnfs-tmp-mount

> Clean up the pool directories if 'ls' against them shows that an attempt was made
to initialize the pools. If the parent pool directory is empty, then you don't need 
to delete it
[root@fgt0x4 install]# ls /scratch/pool1/
[root@fgt0x4 install]#
so, no need to delete /scratch/pool1

> Do a 'df' to confirm that there are no hangs (if there are, a system reboot should
fix them, provided you cleaned up as described above)

> Do a 'ps -ef' to check for any stale dcache related processes and kill them if you find any.

> Now you are ready to try the install again
##############################################################################################
#                       SUPPORT                                                              #
##############################################################################################
For more detailed information about dCache or Chimera, refer to the dCache Book at 
http://www.dcache.org/manuals/Book/
For any issues/questions with/about the VDT-dCache package, please send email to 
osg-storage@opensciencegrid.org