gLite Template Customization
Site customization to QWG templates is done through a small set of templates that define variables used as input by QWG templates. This doesn't cover basic OS configuration, which is described in the page about the template framework.
All site parameters related to QWG middleware are supposed to be declared in template pro_lcg2_config_site.tpl. To start a new site, import the site parameter template example. The list of all available variables, with their description and default value, can be consulted in template source:templates/trunk/grid/glite-3.1/defaults/glite.tpl. This template is a critical part of the standard templates and should not be modified or duplicated.
Note : information in this page may document features or configuration options not present in the current release. Such information relates to changes and improvements that will be available in the next release and are already present in the current development branch. If you urgently require these features, use the content of this branch.
Documentation in this page is based on QWG templates for gLite 3.1. Everything mentioned here also applies to QWG templates for gLite 3.0, except when explicitly stated.
Machine types
QWG templates provide a template per machine type (CE, SE, RB, ...). They are located in the machine-types directory and are intended to be generic templates. No modification should be needed.
To configure a specific machine with gLite middleware, you just need to include the appropriate machine type template into the machine profile, after specifying a template containing the specific configuration for this particular machine with the variable xxx_CONFIG_SITE (look in the template for the exact name of the variable).
Here is an example for configuring a Torque-based CE :

object template profile_grid10;

# Define specific configuration for a GRIF CE to be added to
# standard configuration
variable CE_CONFIG_SITE = "pro_ce_torque_grif";

# Configure as a CE (Torque) + Site's BDII
include machine-types/ce;

#
# software repositories (should be last)
#
include repository_common;
In this example, CE_CONFIG_SITE specifies the name of a template defining the Torque configuration.
All the machine types share a common basic configuration, described in template machine-types/base.tpl. This template allows adding site specific configuration to this common base (e.g. configuration of a monitoring agent). This is done by defining the variable GLITE_BASE_CONFIG_SITE to a template containing the site specific configuration to be added to the common configuration (at the end of the common configuration). This variable can be defined, for example, in template pro_site_cluster_info.tpl.
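For example, assuming a site template named site/base_config (a hypothetical name) containing the additional configuration :

variable GLITE_BASE_CONFIG_SITE = 'site/base_config';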
The following sections describe specific variables that can be used with each machine type. The machine type template to include is specified at the beginning of the section as Base template. In addition, to get more details, you can look at examples.
VO Configuration
The list of VOs to configure on a specific node is defined in the variable VOS. Generally, a site-wide default value is defined in pro_lcg2_config_site.tpl (defined with the operator ?=). This value can be overridden on a specific machine by defining the VOS variable in the machine profile, before including the machine type profile.
An example of VOS definition is :

variable VOS ?= list('alice', 'atlas', 'biomed', 'calice', 'cms', 'cppm', 'dteam', 'dzero', 'egeode', 'lhcb', 'ops', 'planck');
Note : dteam and ops are mandatory VOs.
For each VO listed in VOS, there must be a template defining the VO parameters in vo/params. The template name must be the same as the VO name used in VOS. If the VO to be added has no template defining its parameters, refer to the next section about adding a new VO.
Site Specific Defaults for VO Parameters
It is possible to define site specific defaults for VOs that override the standard defaults. This must be done by defining the variable VOS_SITE_PARAMS as a nlist. This nlist can contain one entry per VO plus an entry DEFAULT. Entry DEFAULT is used to define parameters that apply to all VOs; the other entries apply only to one specific VO. The entry key is the VO name (except for DEFAULT), as used in the VOS variable.
Each entry value must be the name of a structure template or a nlist defining any of these properties :
- create_home : create home directories for VO accounts. Default defined by variable CREATE_HOME.
- create_keys : create SSH keys for VO accounts. Default defined by variable CREATE_KEYS.
- unlock_accounts : a regexp defining host names where the VO accounts must be unlocked.
- pool_digits : default number of digits to use when creating pool accounts.
- pool_offset : offset from the VO base uid for the first pool account.
- pool_start : index of the first pool account to create for a VO.
- pool_size : number of pool accounts to create by default for a VO.
- sw_mgr_role : description of the VO software manager role. Avoid changing the default.
- Location of standard services. See below.
For example, to define a site specific RB for VO Alice, create a template vo/site/alice.tpl in your site directory like :

structure template vo/site/alice;

'rb_hosts' = 'myrb.example.org';

and add the following entry to VOS_SITE_PARAMS in your pro_lcg2_config_site.tpl :

variable VOS_SITE_PARAMS = nlist('alice', 'vo/site/alice');

Alternatively, you can define these parameters directly in VOS_SITE_PARAMS :

variable VOS_SITE_PARAMS = nlist('alice', nlist('rb_hosts', 'myrb.example.org'));
Adding a New VO
Adding a new VO involves creating a template defining the VO parameters. This template name must be the name you use to refer to the VO in the rest of the configuration, but it is not required to be the real VO name (it can be an alias used in the configuration). This template must be located in directory vo/params, in one of your cluster or site specific hierarchies of templates, or in the gLite templates.
Note : if you create a template for a new VO, be sure to commit it to the QWG repository if you have write access to it, or to send it to the QWG developers. There is normally no reason for a VO definition not to be generally available.
To create a template describing a new VO, the easiest approach is to copy the template of an already configured VO. The main variables supported in this template are :
- name : VO official name. No default.
- account_prefix : prefix to use when creating accounts for the VO. Generally the first 3 letters of the VO name. No default.
- voms_servers : a nlist describing the VOMS server used by the VO, if any. If the VO has several (redundant) VOMS servers, this property can be a list of nlists. For each VOMS server, supported properties are :
  - name : name of the VOMS server. This is a name used internally by templates. By default, the template defining the VOMS server certificate has the same name. No default.
  - host : VOMS server host name. No default.
  - port : VOMS server port associated with this VO. No default.
  - cert : template name, in vo/certs, defining the VOMS server certificate. If not specified, defaults to the VOMS server name.
- voms_mappings (replaces deprecated voms_roles) : list of VOMS groups/roles supported by the VO. This property is optional. This is a nlist with one entry per mapping (mapped account). The supported properties for each entry are :
  - description : description of the mapping. This property is informational, except for the VO software manager, where it must be SW manager (with this exact casing).
  - pattern (replaces deprecated name) : VO group/role combinations mapped to this account. This can be a string or a list of strings (if several group/role combinations are mapped to the same account). Each value can be either a role name (without /ROLE=) or a group/role combination in the standard format /group1/group2/.../ROLE=rolename. Note that the ROLE keyword is required to be upper case, that there may be several groups but only one role, and that if both are present the role must come last. Look at the LHCb VO parameters for an example.
  - suffix : suffix to append to account_prefix to build the account name associated with this role.
- base_uid : first uid to use for the VO.
- create_home : create home directories for VO accounts. Default defined by variable CREATE_HOME.
- create_keys : create SSH keys for VO accounts. Default defined by variable CREATE_KEYS.
- gid : GID associated with VO accounts. Default : first pool account UID.
- pool_size : number of pool accounts to create for the VO. Default : 200.
- pool_digits : number of digits to use for pool accounts. Must be large enough to handle pool_size. Default : 3.
- pool_offset : offset from the VO base uid for the first pool account.
- Location of standard services. See below.
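As an illustration, a minimal sketch of such a parameter template for a hypothetical VO vo.example.org (account prefix, uid and VOMS server are made up for the example) :

structure template vo/params/vo.example.org;

'name' = 'vo.example.org';
'account_prefix' = 'exa';
'base_uid' = 42000;
'voms_servers' = nlist('name', 'voms.example.org',
                       'host', 'voms.example.org',
                       'port', 20000);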
In addition to this template, you need another template defining the public key of the VOMS server used by the VO. By default this template has the name of the VOMS server. It can be explicitly defined with the cert property of a VOMS server entry. If the new VO uses an already configured VOMS server, there is no need to add the certificate.
Default Services for a VO
The location of standard services to use with a specific VO can be defined either in the VO parameters or in the site specific parameters for a VO. Services that can be configured are :
- proxy : name of the proxy server used by the VO. No default, optional.
- rb_hosts : LCG RB host name to use by default. Service ports will be set to default values. Can be a list or a single value.
- wms_hosts : gLite WMS host name to use by default. Service ports will be set to default values. Can be a list or a single value.
- catalog : catalog type used by the VO. Optional. Must be defined only for VOs still using RLS (value must be rls or RLS).
In addition to the variables above, it is possible to use the following variables if you need more control over service location or endpoints :
- nshosts : name:port of the RB used by the VO (Network Server). No default.
- lbhosts : name:port of the RB used by the VO (Logging and Bookkeeping). No default.
- wms_nshosts : name:port of the WMS used by the VO (Network Server). Can be a list or a single value. No default.
- wms_lbhosts : name:port of the WMS used by the VO (Logging and Bookkeeping). Can be a list or a single value. No default.
- wms_proxies : endpoint URI of the WMProxy used by the VO. Can be a list or a single value. No default.
VO Specific Areas
There are a couple of variables available to customize VO specific areas (software area, VO accounts home directories...) :
- VO_SW_AREAS : a nlist with one entry per VO (key is the VO name as used in the VOS variable). The value is the directory to use for the VO software area. Be sure to list this directory or its parent in WN_SHARED_AREAS if you want to use a shared filesystem for this area (this is highly recommended). Directories listed in this variable will be created with the appropriate permissions (0755 for the VO group).
- VO_HOMES : a nlist with one entry per VO (key is the VO name as used in the VOS variable). The value is a directory prefix to use when creating home directories for accounts. A suffix is added to this name : the VO role suffix for role accounts, or the account number for pool accounts. By default, VO accounts are created in /home.
- VO_SWMGR_HOMES : a nlist with one entry per VO (key is the VO name as used in the VOS variable). The value is the directory to use as the home directory of the VO software manager. If there is no entry for a VO, VO_HOMES is used. The main purpose of this variable is to define the software manager's home directory as the VO software area. This can be achieved easily by assigning VO_SW_AREAS to this variable.
- CREATE_HOME : this variable controls the creation of VO accounts home directories. It accepts 3 values : true, false and undef. undef is a conditional true : home directories are not created if they reside on an NFS shared file system (listed in WN_SHARED_AREAS) whose server is not the current machine.
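A minimal sketch combining these variables (the paths and the VO name atlas are illustrative) :

variable VO_SW_AREAS = nlist('atlas', '/opt/exp_soft/atlas');  # list it (or its parent) in WN_SHARED_AREAS
variable VO_HOMES = nlist('atlas', '/home/atlas');
variable VO_SWMGR_HOMES = VO_SW_AREAS;  # software manager home = VO software area
variable CREATE_HOME = undef;           # conditional creation, as described above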
Tuning VO configuration on a specific node
Each machine type template defines the VO configuration (pool accounts, gridmap file/dir...) appropriate to the machine type. If you want to change this configuration on a specific node, you can use the following variables :
- NODE_VO_ACCOUNTS (boolean) : VO accounts must be created for each VO initialized. Default : true.
- NODE_VO_GRIDMAPDIR_CONFIG (boolean) : gridmapdir entries must be initialized for pool accounts. Default : false.
- NODE_VO_WLCONFIG (boolean) : initialize the workload management environment for each VO. Normally enabled only on resource brokers. Default : false.
- NODE_VO_CREATEHOME (boolean) : create home directories for pool accounts. Default : true.
In addition, you can execute actions specific to the local machine by defining the following variable (mainly used to define a VO list specific to a node by assigning a non default value to the VOS variable) :
- NODE_VO_CONFIG (string) : site specific template that must be included before actually doing the VO initialization. Allows site specific modifications to the default VO configuration. Default : none.
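For instance, in a machine profile, before including the machine type template (the template name site/mynode_vo_config is a hypothetical example) :

variable NODE_VO_ACCOUNTS = false;                  # don't create VO accounts on this node
variable NODE_VO_CONFIG = 'site/mynode_vo_config';  # included before VO initialization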
Note : before modifying the default VO configuration for a specific machine, be sure what you want to do is valid. Misconfiguring VOs can have dramatic effects on service availability.
Mapping of VOMS groups/roles into grid-mapfile
The grid-mapfile is used as a source of mapping information between user DNs and Unix accounts when this cannot be obtained from VOMS.
The previous default behaviour for describing user mappings in the grid-mapfile was to map users with a specific role to the account corresponding to this role. Unfortunately, the result is unpredictable if a user has several roles in the VO. The default in QWG templates, starting with release gLite-3.0.2-12, is to always map users to normal users in the grid-mapfile. To obtain a mapping based on a specific role, users have to get a proxy with the required VOMS extensions using voms-proxy-init --voms.
To revert to the previous behaviour, you need to define the variable VO_GRIDMAPFILE_MAP_VOMS_ROLES to true in your machine profile or in one of your site specific templates.
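That is :

variable VO_GRIDMAPFILE_MAP_VOMS_ROLES = true;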
Allocation of Service Accounts
Some services allow a specific account to be defined to run the service. In this case, there is one template for each of these accounts in common/users. The name of the template generally matches the user account created or, when the template is empty, the name of the service.
A site can redefine account names or characteristics (uid, home directory...). To do this, you should not edit the standard templates directly, as your changes would be lost with the next version of the templates (or you would have to redo them by hand). You should create a users directory somewhere in your site or cluster hierarchy (e.g. under the site directory, not directly at the same level, else it will not work without adjusting cluster.build.properties) and put your customized version of the template there.
Note : don't change the name of the template, even if you change the name of the account used (else you'll need to modify the standard templates needing this user).
Accepted CAs
There is one template defining all the accepted CAs. A new one is generally produced each time there is a new release of the list of CAs officially accepted by EGEE. If you need to adjust it, create a site or cluster specific copy of common/security/cas.tpl in a directory common/security.
If you need to update this template, refer to the standard procedure for doing so.
Shared File Systems
It is recommended to use a shared file system mounted (at least) on the CE and WNs for VO software areas. It is also sometimes convenient to use a shared file system for VO pool accounts (this is more or less a requirement to run MPI jobs). Currently, QWG templates support the use of NFS and non-NFS shared file systems. Configuration is done with the following variables :
- WN_SHARED_AREAS : a nlist with one entry per file system shared between worker nodes and the CE (key is the escaped file system mount point). If the filesystem is served by NFS and managed by Quattor on client and/or server, the value for each entry is the name of the NFS server, optionally followed by the path on the NFS server if different from the path on the worker node. Otherwise (NFS filesystem not managed by Quattor, or a non-NFS filesystem like AFS, LUSTRE, GPFS...), the value must be undef.
- NFS_AUTOFS : when true, use autofs to mount NFS file systems on NFS clients. This is the recommended setting, as it is the only one avoiding complex inter-dependencies in startup order. For backward compatibility, the default value is false.
Note : variable WN_NFS_AREAS has been deprecated and replaced by WN_SHARED_AREAS. If the latter is not defined, WN_NFS_AREAS is used if defined.
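As an illustration (the server names and paths are made up, and the 'server:path' syntax for a different server-side path is an assumption to be checked against the gLite defaults template) :

variable WN_SHARED_AREAS = nlist(
  escape('/home'), 'nfs.example.org',                           # NFS, same path on the server
  escape('/opt/exp_soft'), 'nfs.example.org:/export/exp_soft',  # NFS, different path on the server (assumed syntax)
  escape('/grid/shared'), undef                                 # shared filesystem not managed by Quattor
);
variable NFS_AUTOFS = true;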
NFS file systems listed in this variable with a defined value are mounted on the CE and WNs. The NFS server for these file systems can be any machine type and is not required to be managed by Quattor (but in this case, you probably need to force CREATE_HOME to true on one machine). If it is managed by Quattor, all required actions are done automatically.
Specifying NFS options
There are two variables defining the mount options used with NFS file systems :
- NFS_DEFAULT_MOUNT_OPTIONS : mount options used by default, if none are explicitly defined for a filesystem.
- NFS_MOUNT_OPTS : mount options used for a specific file system. This variable is a nlist with one entry per file system : the key must be the escaped path of the mount point.
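For example (the option strings are illustrative) :

variable NFS_DEFAULT_MOUNT_OPTIONS = 'rw,hard,intr';
variable NFS_MOUNT_OPTS = nlist(escape('/home'), 'rw,hard,intr,noatime');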
Defining NFS exports
On each NFS server, or in cluster or site parameters, NFS exports can be defined using a set of variables. By default, only the CE and worker nodes are given access to the NFS server.
Note : the following variables don't configure filesystem mounting. For this, see Configuring shared filesystems.
Variables available to customize the NFS export ACL are :
- NFS_CE_HOSTS : list of CE hosts requiring access to the NFS server (default is CE_HOST)
- NFS_WN_HOSTS : list of WN hosts requiring access to the NFS server (default is WN_HOSTS)
- NFS_LOCAL_CLIENTS : list of other local hosts requiring access to the NFS server
These variables can be a string, a list or a nlist. A string value is interpreted as a list with one element. When specified as a list or string, the values must be regexps matching the names of the nodes that must be given access to the NFS server; the access right is then the value of variable NFS_DEFAULT_RIGHTS. When specified as a nlist, the key must be an escaped regexp and the value the access rights.
Note : when possible, it is recommended to replace the default value of NFS_WN_HOSTS by one or several regexps matching WN names, as shown in the sketch below.
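A minimal sketch (the regexps and the access-right string are assumptions; check the expected format against the gLite defaults template) :

variable NFS_WN_HOSTS = list('wn.*\.example\.org');                    # all WNs via a regexp
variable NFS_LOCAL_CLIENTS = nlist(escape('ui\.example\.org'), 'rw');  # explicit rights per regexp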
NFS Server
Base template : machine-types/nfs.
With QWG templates, it is possible to configure a machine as a dedicated NFS server whose configuration (file systems and accounts) is shared with the grid machines.
LCG CE Configuration
Base template : machine-types/ce.
QWG templates handle the configuration of the LCG CE and the selected batch system (LRMS). To select the LRMS you want to use, define the variable CE_BATCH_NAME. There is no default. If you want to use Torque/MAUI, the recommended value is torque2.
The value of CE_BATCH_NAME must match a directory in the common directory of the gLite3 templates.
Note : as of gLite 3.0.2, the supported LRMS are Torque v1 (torque1) and Torque v2 (torque2), both with the MAUI scheduler.
Previous versions of QWG templates used to require the definition of CE_BATCH_SYS. This is deprecated : this variable is now computed from CE_BATCH_NAME.
PBS/Torque
PBS/Torque related templates support the following variables :
- CE_HOST : name of the PBS/Torque master.
- CE_LOCAL_QUEUES : a list of Torque queues to define that will not be available for grid usage (accessible only with standard Torque commands). This list has a format very similar to CE_QUEUES, except that the key containing the queue name is called names instead of vos and that its value is not used.
- CE_PRIV_HOST : alternate name of the PBS/Torque server. Used in configurations where the WNs are on a private network and the PBS/Torque master has 2 network names/addresses.
- CE_QUEUES : a nlist defining, for each queue, the list of VOs allowed to access the queue and optionally the specific attributes of the queue. The access list for a queue is defined under the vos key, the attributes under the attlist key. The value for each key is a nlist where the key is the queue name. For the access list, the value is a list of VOs allowed to access the queue. For queue attributes, the value is a nlist where the key is a Torque attribute and the value the attribute value. See the sketch after this list, and look at the example for more information on how to define one queue for each supported VO.
- TORQUE_SUBMIT_FILTER : this variable allows redefining the script used as a Torque submit filter. A default filter is provided in the standard templates.
- TORQUE_TMPDIR : normally defined to refer to the working area created by Torque for each job, on a local filesystem. Define as null if you don't want the job current directory to be redefined to this directory.
- WN_ATTRS : this variable is a nlist with one entry per worker node (key is the node fullname). Each value is a nlist consisting of a set of PBS/Torque attributes to set on the node. Values are any key=value supported by the qmgr set server command. One useful value is state=offline to cause a specific node to drain, or state=online to re-enable the node. Just removing state=offline is not enough to re-enable the node. One specific entry in WN_ATTRS is DEFAULT : this entry is applied to any node that doesn't have a specific entry. If you want to avoid having to re-enable a node explicitly, you can define the DEFAULT entry with the state=free argument. For instance, you could define :

variable WN_ATTRS ?= nlist(
  "DEFAULT", nlist("state", "free"),
  "mynode.mydomain.com", nlist("state", "offline")
);

- WN_CPUS_DEF : default number of CPUs per worker node.
- WN_CPUS : a nlist with one entry per worker node (key is the node fullname) having a number of CPUs different from the default.
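The CE_QUEUES sketch referenced above (the queue names, VOs and attribute value are illustrative; resources_max.walltime is a standard Torque queue attribute) :

variable CE_QUEUES = nlist(
  'vos',     nlist('dteam', list('dteam', 'ops'),  # queue 'dteam' open to dteam and ops
                   'atlas', list('atlas')),        # queue 'atlas' restricted to atlas
  'attlist', nlist('atlas', nlist('resources_max.walltime', '72:00:00'))
);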
For more details about all of these variables, their format and their default values, look at the template defining default values for gLite related variables.
MAUI
MAUI related templates support the following variables :
- MAUI_CFG : the content of this variable must contain the full content of the maui.cfg file. Look at the pro_lcg2_config_site_maui.tpl example for how to build this variable from other configuration elements.
- MAUI_WN_PART_DEF : default node partition to use with worker nodes.
- MAUI_WN_PART : a nlist with one entry per worker node (key is the node fullname). The value is the name of the MAUI partition in which to place the specific worker node.
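For instance (the partition names are made up) :

variable MAUI_WN_PART_DEF = 'general';
variable MAUI_WN_PART = nlist('bigmem01.example.org', 'bigmem');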
RSH and SSH Configuration
By default, Quattor doesn't configure any RSH or SSH trust relationship between the CE and WNs if home directories are on a shared filesystem declared in variable WN_SHARED_AREAS. Otherwise, it configures SSH with host-based authentication. By default, RSH is always configured with an empty hosts.equiv file.
If this doesn't fit your needs, you can explicitly control the RSH and SSH configuration with the following variables :
- CE_USE_SSH : if undef (default), the configuration is based on the use of a shared filesystem for home directories. Otherwise, it explicitly sets whether to configure SSH host-based authentication (true) or not (false).
- SSH_HOSTBASED_AUTH_LOCAL : when this variable is true and CE_USE_SSH is false, configure SSH host-based authentication on each WN restricted to the current WN (ability to ssh without entering a password only to the current WN). This is sometimes required by specific software.
- RSH_HOSTS_EQUIV : if true, /etc/hosts.equiv is created with an entry for the CE and each WN. If false, an empty /etc/hosts.equiv is created. If undef, nothing is done. Default is undef.
CE Status
CE related templates use the variable CE_STATUS to control the CE state. Supported values are :
- Production : this is the normal state. The CE receives and processes jobs.
- Draining : the CE doesn't accept new jobs but continues to execute queued jobs (as long as there are WNs available to execute them).
- Closed : the CE doesn't accept new jobs and queued jobs are not executed. Only running jobs can complete.
- Queuing : the CE accepts new jobs but will not execute them.
CE_STATUS indicates the desired status of the CE. All the necessary actions are taken to set the CE to the requested status. The default status (if the variable is not specified) is Production. This variable can be used in conjunction with WN_ATTRS to drain queues and/or nodes.
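For example, to stop accepting new jobs while letting queued jobs complete :

variable CE_STATUS = 'Draining';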
Restarting LRMS Client
It is possible to force a restart of the LRMS (batch system) client on all WNs by defining the variable LRMS_CLIENT_RESTART. This variable, if present, must be a nlist with one entry per WN to restart (key is the WN name), or 'DEFAULT' for all WNs without a specific entry. When the value is changed (or first defined), this triggers a LRMS client restart. The value itself is not relevant, but it is advisable to use a timestamp for better tracking of forced restarts.
For example, to force a restart on all WNs, you can add the following definition :

variable LRMS_CLIENT_RESTART = nlist('DEFAULT', '2007-03-24:18:33');

A good place to define this variable is template pro_site_cluster_info in the cluster site directory.
Note : this feature is currently implemented only for the Torque v2 client.
Run-Time Environment
gLite 3.0 templates introduce a new way to define GlueHostApplicationSoftwareRunTimeEnvironment. Previously, it was necessary to define a list of all tags in the site configuration template. As most of these tags are standard tags attached to a release of the middleware, there is now a default list of tags defined in the default configuration site template, defaults/site.tpl. To supplement this list with tags specific to the site (e.g. LCG_SC3), define the variable CE_RUNTIMEENV_SITE instead of CE_RUNTIMEENV :

variable CE_RUNTIMEENV_SITE = list("LCG_SC3");

This change is backward compatible : if CE_RUNTIMEENV is defined in the site configuration template, this value is used.
Working Area on Torque WNs
By default, QWG templates configure the Torque client on WNs to set the environment variable TMPDIR, and the location of stdin, stdout and stderr, to a directory local to the worker node (/var/spool/pbs/tmpdir), and to define the environment variable EDG_WL_SCRATCH as TMPDIR (except for jobs requiring several WNs, e.g. MPI). This configuration is particularly well adapted to shared home directories, but works well with non-shared home directories too.
The main requirement is to size /var appropriately on the WNs, as jobs sometimes require a large scratch area. On the other hand, /home is not required to be very large, as it should not store very large files for a long period. It is strongly recommended to use shared home directories, served through NFS or another distributed file system, as this optimizes /home usage and allows the local disk space on WNs to be dedicated to /var.
If your configuration cannot be set up as recommended, or if your current configuration has a large space in /home and a limited space in /var, you can define the following property in your WN profiles before including machine-types/wn :

variable TORQUE_TMPDIR = "/home/pbs/tmpdir";
SE Configuration
Base templates :
- DPM : machine-types/se_dpm.
- dCache : machine-types/se_dCache.
Note : this section covers the generic SE configuration, not a specific implementation.
List of site SEs
The list of SEs available at your site must be defined in the variable SE_HOSTS. This variable is a nlist with one entry per local SE. The key is the SE host name and the value is a nlist defining the SE parameters.
Supported parameters for each SE are :
- type : SE implementation. Must be SE_Classic, SE_dCache or SE_DPM. This parameter is required and has no default. Note that SE Classic is deprecated.
- accessPoint : root path of any VO specific area on the SE. This parameter is required with Classic SE and dCache. It is optional with DPM, where it defaults to /dpm/dom.ain.name/homes.
- arch : used to define GlueSEArchitecture for the SE. This parameter is optional and defaults to multidisk, which should be appropriate for standard configurations.
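A minimal sketch for a site with a single DPM SE (the host name is made up) :

variable SE_HOSTS = nlist(
  'dpm.example.org', nlist('type', 'SE_DPM',
                           'accessPoint', '/dpm/example.org/homes')  # optional for DPM
);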
For more details, look at the examples and the comments in the gLite defaults.
Note : the format of SE_HOSTS changed in the gLite-3.0.2-11 release of QWG templates. Look at the release notes to learn how to migrate from the previous format.
CE Close SEs
Variable CE_CLOSE_SE_LIST defines the SEs that must be registered in the BDII as close SEs for the current CE. It can be either a value used for every VO, or a nlist with a default value (key DEFAULT) and one entry per VO with a different close SE (key is the VO name). Each value must be a string if there is only one close SE, or a list of SEs.
CE_CLOSE_SE_LIST defaults to the deprecated SE_HOST_DEFAULT if defined, else to all the SEs defined in the SE_HOSTS variable.
It is valid to have no close SE defined. To remove the default definition, you need to do :
variable CE_CLOSE_SE_LIST = nlist('DEFAULT', undef);
It is valid for the close SE to be outside your site but this is probably not recommended for standard configurations.
Default SE
Variable CE_DEFAULT_SE defines the default SE for the site. It can be either a SE name or a nlist with a default entry (key DEFAULT) and one entry per VO with a different default SE (key is the VO name).
If not explicitly defined, it defaults to the first SE in the CE_CLOSE_SE_LIST entries. The default SE can be outside your site (probably not recommended for standard configurations).
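For example (the host names and the per-VO override are illustrative) :

variable CE_DEFAULT_SE = nlist('DEFAULT', 'dpm.example.org',
                               'atlas', 'se2.example.org');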
DPM Configuration
DPM related standard templates require a site template describing the service's site configuration. The variable DPM_CONFIG_SITE must contain the name of this template. This template defines the whole DPM configuration, including all disk servers used, and is used to configure all the machines that are part of the DPM configuration.
On the DPM head node, the variable SEDPM_SRM_SERVER must be defined to true. This variable is false by default (DPM disk servers).
If you want to use the Oracle version of the DPM server, define the following variable in your machine profile :

variable DPM_SERVER_MYSQL = false;
DPM site parameters
There is no default template provided for the DPM configuration. To build your own template, you can look at template pro_se_dpm_config.tpl in the examples provided with QWG templates.
Starting with QWG templates release gLite-3.0.2-9, there is no default password value provided for the account used by DPM daemons nor for the DB accounts used to access the DPM database. You MUST provide one in your site configuration. If you forget to do it, you'll get a not very explicit panc error :

[pan-compile] *** wrong argument: operator + operand 1: not a property: element

If you want to use a specific VO list on your DPM server and you have several nodes in your DPM configuration (DPM head node + disk servers), you need to write a template defining the VOS variable (with a non default value) and define the variable NODE_VO_CONFIG to this template.
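A sketch of such a template (the name site/dpm/vo_config is a hypothetical example) :

template site/dpm/vo_config;
variable VOS = list('dteam', 'ops', 'atlas');

with, in the profile of every node of the DPM configuration :

variable NODE_VO_CONFIG = 'site/dpm/vo_config';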
Using non standard port numbers
It is possible to use non standard port numbers for the DPM daemons dpm, dpns and all SRM daemons. To do this, you only need to define the XXX_PORT variable corresponding to the service. Look at the gLite default parameters to find the exact name of the variable.
Using a non standard account name for dpmmgr
If you want to use an account name different from dpmmgr to run the DPM daemons, you need to define the variable DPM_DAEMON_USER in your site configuration template and provide a template creating this account, based on users/dpmmgr.tpl.
Script to publish dynamic information into BDII
It is possible to define and configure a site specific version of the GIP plugin used to publish DPM dynamic information into the BDII (space used/free per VO). This is achieved by :
- writing a site or cluster specific template providing the script itself and optionally its name and arguments;
- defining the variable GIP_SCRIPT_DPM_DYNAMIC_CONFIG to the name of this template.
The template must define the following variables :
- GIP_SCRIPT_DPM_DYNAMIC : this variable is mandatory and must contain the plugin code (generally a shell or Perl script).
- GIP_SCRIPT_DPM_DYNAMIC_NAME (optional) : name of the plugin. Defaults to libexec/lcg-info-dynamic-dpm-alternate in the LCG installation directory (generally /opt/lcg).
- GIP_SCRIPT_DPM_DYNAMIC_ARGS (optional) : script arguments. Default : var/gip/ldif/static-file-SE.ldif in the LCG installation directory (generally /opt/lcg).
Note : QWG templates for gLite 3.0 provide a modified version of the original GIP plugin for DPM, working with DPM versions 1.5.10 to 1.6.3 included (the standard plugin provided with these versions doesn't work properly with VO dedicated pools). To use it, define variable GIP_SCRIPT_DPM_DYNAMIC_CONFIG to glite/se_dpm/server/info_dynamic_voms.
LFC Configuration
LFC related standard templates require a site template describing the service's site configuration. The variable LFC_CONFIG_SITE must contain the name of this template.
If you want to use the Oracle version of the LFC server, define the following variable in your machine profile :

variable LFC_SERVER_MYSQL = false;
LFC templates allow a LFC server to act as a central LFC server (registered in the BDII) for some VOs and as a local LFC server for the others. There are 2 variables controlling what is registered in the BDII :
- LFC_CENTRAL_VOS : list of VOs for which the LFC server must be registered in the BDII as a central server. Default is an empty list.
- LFC_LOCAL_VOS : list of all VOs for which the server must be registered in the BDII as a local server. Defaults to all supported VOs (VOS variable). If a VO is in both lists, it is removed from LFC_LOCAL_VOS. If you don't want this server to be registered as a local server for any VO, even if configured on this node (present in the VOS list), you must define this variable as an empty list :

variable LFC_LOCAL_VOS = list();

VOs listed in both lists must be present in the VOS variable. These 2 variables have no impact on the GSI (security) configuration and don't control access to the server. If you want the VOS variable (controlling access to the server) to match the list of VOs supported by the LFC server (either as central or local catalogues), you can add the following definition to your LFC server profile :

variable VOS = merge(LFC_CENTRAL_VOS, LFC_LOCAL_VOS);
LFC site parameters
Base template : machine-types/lfc.
Normally, the only things really required in this site specific template are the password for the LFC user (by default lfc) and for the DB accounts. Look at the standard LFC configuration template, templates/trunk/glite-3.0.0/glite/lfc/config, for the syntax.
Starting with QWG templates release gLite-3.0.2-9, there is no default password value provided for the account used by LFC daemons nor for the DB accounts used to access the LFC database. You MUST provide one in your site configuration. If you forget to do it, you'll get a not very explicit panc error :

[pan-compile] *** wrong argument: operator + operand 1: not a property: element
LFC Alias
It is possible to configure a LFC server to register itself in the BDII using a DNS alias rather than the host name. To achieve this, define in your site parameters the variable LFC_HOSTS (replacement for the former LFC_HOST), which must be a nlist where keys are LFC server names and values are nlists accepting the following parameter :
- alias : DNS alias to use to register this LFC server in the BDII
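For example (the host name and alias are illustrative) :

variable LFC_HOSTS = nlist(
  'lfc1.example.org', nlist('alias', 'lfc.example.org')
);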
Using non standard port numbers
It is possible to use non standard port numbers for the LFC daemons. To do this, you only need to define the XXX_PORT variable corresponding to the service. Look at the gLite default parameters to find the exact name of the variable.
Using a non standard account name for lfcmgr
If you want to use an account name different from lfcmgr to run the LFC daemons, you need to define the variable DPM_USER in your site configuration template and provide a template creating this account, based on users/lfcmgr.tpl.
LCG RB Configuration
Base template : machine-types/rb.
After the initial installation of the RB, it is necessary to manually initialize the MySQL database used by the RB, using the MySQL script provided by YAIM, and then rerun the NCM components so that Quattor completes the configuration, using the command :

ncm-ncd --configure --all
BDII
Base template : machine-types/bdii.
QWG templates support the configuration of all types of BDII :
- Top-level BDII (default type) : uses a central location to get its data (all top-level BDIIs use the same source). This central location contains information about all sites registered in the GOC DB. Use of FCR (Freedom of Choice) is enabled by default.
- Site BDII : BDII in charge of collecting information about site resources. Supports the concept of subsite BDII (a hierarchy of BDIIs collecting site information).
- Resource BDII : used as a replacement of Globus MDS to publish resource information into the BDII.
When configuring a BDII on a machine, the following variables can be used (in the machine profile or in a site specific template) to tune the configuration :
- BDII_TYPE : can be resource, site or top. top is the default, except if the deprecated variable SITE_BDII is true.
- BDII_SUBSITE : name of the BDII subsite. Ignored on any BDII type except site. Must be empty for the main site BDII (default) or defined to the subsite name if this is a subsite BDII.
- BDII_SUBSITE_ONLY (gLite 3.1 only) : if false, allows running both a subsite and a site BDII on the same machine. Default : true.
- BDII_USE_FCR : set to false to disable the use of FCR (Freedom of Choice) on a top-level BDII, or to true to force its use on other BDII types.
- BDII_FCR_URL : use a non standard source for FCR.
Starting with QWG templates gLite-3.0.2-13, all machine types publishing information into the BDII (almost all, except WN, UI and disk servers) use a BDII configured as a resource BDII for this purpose. In addition, all these machine types can be configured as a site/subsite BDII by defining the appropriate variables in the node profile (BDII_TYPE='site' and, if applicable, BDII_SUBSITE). This combined BDII configuration is the default on a LCG CE : define BDII_TYPE='resource' in the CE profile to change it.
Note : the combined BDII is the default on the LCG CE for backward compatibility, but it is highly recommended to run the site BDII on another machine type.
Configuring BDII URLs on a site BDII
A site BDII aggregates information published by several other BDIIs, typically resource BDIIs or subsite BDIIs. The list of resources to aggregate is specified by the variable BDII_URLS. This variable is typically defined in the site parameters, pro_lcg2_config_site.tpl, and is ignored on all nodes except a site (or combined) BDII.
Variable BDII_URLS is a nlist of URLs corresponding to the Globus MDS or resource BDII URLs to aggregate on the site BDII. The key is an arbitrary name (like CE, DPM1...) and the value is the URL. See the site configuration example and the sketch below.
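A minimal sketch (host names are made up; port 2170 and mds-vo-name=resource are the usual resource BDII defaults, to be checked against your actual configuration) :

variable BDII_URLS = nlist(
  'CE',   'ldap://ce.example.org:2170/mds-vo-name=resource,o=grid',
  'DPM1', 'ldap://dpm.example.org:2170/mds-vo-name=resource,o=grid'
);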
For sites using an internal hierarchy of site and subsite BDIIs, it is possible to use BDII_URLS for the subsite BDIIs and BDII_URLS_SITE for the site BDII. This allows both to coexist in the same site parameter template (typically pro_lcg2_config_site.tpl).
BDII_URLS can contain an entry for a combined BDII. When configuring the BDII on this server, this entry is transparently removed. This allows moving the site BDII server to another machine already running a resource BDII without editing BDII_URLS.
Note : the mds-vo-name on a combined BDII is the site or subsite name (not resource), even for local services.
Restriction : each BDII in the BDII hierarchy must use a different mds-vo-name. Thus it is not possible to use the site BDII mds-vo-name in BDII_URLS, or this will be considered a loop and the entry will be ignored.
Configuring a subsite BDII
It is possible to run a hierarchy of site BDIIs. This is particularly useful for a site made of several autonomous entities, as it allows each subsite to export a unique access point to the subsite resources. Each subsite manages the actual configuration of its subsite BDII, and all the subsites are then aggregated by the site BDII. GRIF is an example of such a configuration.
A subsite BDII is a site BDII where the variable BDII_SUBSITE has been defined to a non empty value. This value is appended to the site name to form the mds-vo-name of the subsite.
Defining Top-level BDII
It is necessary to define the top-level BDII used by the site. This is done with the variable TOP_BDII_HOST. This variable replaces the deprecated BDII_HOST. It has no default.
Note : it is good practice to use a DNS alias as the top-level BDII name. This allows changing the actual top-level BDII without editing the configuration.
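For example (using a DNS alias, as recommended above; the name is illustrative) :

variable TOP_BDII_HOST = 'top-bdii.example.org';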
MPI Support
To activate MPI support on the CE and WNs, you need to define the variable ENABLE_MPI to true in your site parameters (normally pro_lcg2_config_site.tpl). It is disabled by default.
A default set of RPMs for various flavours of MPI (MPICH, MPICH2, OPENMPI, LAM) will be installed. If you would like to install a custom version of a particular MPI implementation, you can do so by defining the following variables:
- MPI_<flavour>_VERSION : Version of the package (e.g. MPI_MPICH_VERSION = "1.0.4")
- MPI_<flavour>_RELEASE : Release number of the package (e.g. MPI_MPICH_RELEASE = "1.sl3.cl.1")
- MPI_<flavour>_EXTRAVERSION : Patch number of the package (if needed e.g. MPI_MPICH_EXTRAVERSION="p1")
By using variables, we ensure that the version published is consistent with the installed RPMs. N.B. this feature is available in QWG revisions >= 2493.
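Putting it together in the site parameters (version strings taken from the examples above) :

variable ENABLE_MPI = true;
variable MPI_MPICH_VERSION = "1.0.4";
variable MPI_MPICH_RELEASE = "1.sl3.cl.1";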
FTS Client
On machine types supporting it (e.g. UI, VOBOX, WN), you can configure a FTS client. Normally, to configure the FTS client you only need to define the variable FTS_SERVER_HOST to the name of your preferred FTS server (normally your closest T1).
To accommodate specific needs, there are 2 other variables whose default values should be appropriate :
- FTS_SERVER_PORT : port number used by the FTS server. Default : 8443.
- FTS_SERVER_TRANSFER_SERVICE_PATH : root path of the transfer service on the FTS server. This is used to build the leftmost part of URLs related to FTS services. Default : /glite-data-transfer-fts.
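For example (the server name is illustrative) :

variable FTS_SERVER_HOST = 'fts.example.org';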
MyProxy Server
Base template : machine-types/px.
VOBOX
Base template : machine-types/vobox.
UI
Base template : machine-types/ui.
RPMs Repositories
repository/config/glite.tpl describes the RPM repositories used to locate the RPMs required by gLite templates. QWG templates require 5 RPM repositories plus an optional one. The names given here are the default ones.
- glite_repos_prefix : gLite base RPMs shipped with gLite.
- glite_repos_prefix_externals : RPMs required by gLite and shipped with it, but developed and maintained outside gLite.
- glite_repos_prefix_updates : official updates to gLite base RPMs, as provided by gLite releases.
- glite_repos_prefix_unofficial (optional) : unofficial updates to gLite base RPMs used at the site. Normally empty.
- mpi : RPMs related to MPI.
- ca : CA RPMs as distributed by the Grid PMA.
glite_repos_prefix can be customized without editing the standard template, by defining the REPOSITORY_GLITE_PREFIX variable. If not explicitly defined, it defaults to glite_3_0_0 for gLite 3.0 and glite_3_1 for gLite 3.1.
All required repositories must have an associated template, whose name is the same as the repository, in site or cluster specific templates. The optional repository is ignored if its associated template is not present. Each template describes the content of its repository. When using SCDB, this template is maintained with the command ant update.rep.templates.
Note : it is not required to use this structure, and you can edit this template to match your local conventions, if different. When upgrading QWG templates, be sure to re-apply your changes to this template.
A template version of these repositories is distributed as part of the examples (templates/trunk/sites/example/repository). They can be used to compile the examples, but for the deployment of a real configuration you need to build your own version of these templates. You can build an initial version by downloading the RPMs from the URL mentioned at the top of the template examples, with wget or src/utils/misc/rpmUpdates.pl, then updating the URL at the top of the template examples.