gLite Template Customization
Despite the change in template layout, the LCG2 template customization documentation is still valid for gLite3. Refer to that description, except for the components for which an explicit description is provided here.
Allocation of Service Accounts
Some services allow a specific account to be defined to run the service. In this case, there is one template for each of these accounts in common/users. The name of the template generally matches the user account created or, when the template is empty, the name of the service.
A site can redefine account names or characteristics (uid, home directory...). To do this, do not edit the standard templates directly: your changes would be lost with the next version of the templates (or you would have to redo them by hand). Instead, create a users directory somewhere in your site or cluster hierarchy (e.g. under the site directory; putting it directly at the same level will not work without adjusting cluster.build.properties) and put your customized version of the template there.
Note: don't change the name of the template, even if you change the name of the account used (otherwise you will need to modify the standard templates that rely on this user).
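For instance, a hypothetical site hierarchy could look like the following (the names are illustrative only; the exact layout depends on your cluster.build.properties):
mysite/
  site/
    users/                # site-specific copies of templates from common/users
      <template_name>.tpl # customized copy, same template name as the standard one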
Accepted CAs
There is one template defining all the accepted CAs. A new one is generally produced each time a new release of the list of CAs officially accepted by EGEE is published. If you need to adjust it, create a site- or cluster-specific copy of common/security/cas.tpl in a directory common/security.
If you need to update this template, refer to the standard procedure for doing so.
LCG CE Configuration
QWG templates handle the configuration of the LCG CE and of the selected batch system (LRMS). To select the LRMS you want to use, you have to define the variable CE_BATCH_NAME. There is no default. The value of CE_BATCH_NAME must match a directory in the common directory of the gLite3 templates.
Note: as of gLite 3.0.2, the supported LRMS are Torque v1 (torque1) and Torque v2 (torque2), both with the MAUI scheduler.
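For example, to select Torque v2 with the MAUI scheduler, the site configuration could contain:
variable CE_BATCH_NAME = "torque2";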
Previous versions of QWG templates used to require the definition of CE_BATCH_SYS. This is deprecated: this variable is now computed from CE_BATCH_NAME.
PBS/Torque
PBS/Torque related templates support the following variables (see the sketch after this list):
CE_QUEUES: a nlist with one entry per queue (the key is the queue name). For each queue, the value is itself a nlist. One mandatory key is attr, which defines the queue parameters (qmgr set queue options). Another optional key is vos, used to explicitly define the VOs that have access to the queue (by default, only the VO with the same name as the queue has access). Look at the pro_lcg2_config_site.tpl example to see how to define one queue for each supported VO.
WN_NFS_AREAS: a nlist with one entry per file system that must be NFS-mounted on worker nodes (the key is the escaped file system mount point). The value of each entry is the name of the NFS server, optionally followed by the path on the NFS server if it differs from the path on the worker node.
WN_ATTRS: a nlist with one entry per worker node (the key is the escaped node full name). Each value is a set of PBS/Torque attributes to set on the node; valid values are any key=value supported by the qmgr set server command. One useful value is status=offline, to drain a specific node, or status=online to re-enable it (just removing status=offline is not enough to re-enable the node). One specific entry in WN_ATTRS is DEFAULT: this entry is applied to any node that doesn't have a specific entry.
WN_CPUS_DEF: default number of CPUs per worker node.
WN_CPUS: a nlist with one entry per worker node (the key is the node full name) whose number of CPUs differs from the default.
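A minimal sketch of these variables, with hypothetical queue, host and attribute names; it assumes attributes are given as nlists of name/value pairs, so check the standard templates and the site examples for the exact representation expected by your release:
# Illustrative values only: adapt queue names, hosts and attributes to your site.
variable CE_QUEUES = nlist(
    "atlas", nlist(
        "attr", nlist("max_running", "100"),   # qmgr 'set queue' options
        "vos", list("atlas")                   # VOs allowed to use this queue
    )
);
variable WN_NFS_AREAS = nlist(
    escape("/home"), "nfs.example.org"         # mount /home from this NFS server
);
variable WN_ATTRS = nlist(
    "DEFAULT", nlist("status", "online"),      # applied to nodes without a specific entry
    escape("wn01.example.org"), nlist("status", "offline")  # drain this node
);
variable WN_CPUS_DEF = 2;
variable WN_CPUS = nlist(escape("wn01.example.org"), 4);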
MAUI
MAUI related templates support the following variables (see the sketch after this list):
MAUI_CFG: the content of this variable must be the full content of the maui.cfg file. Look at the pro_lcg2_config_site_maui.tpl example to see how to build this variable from other configuration elements.
MAUI_WN_PART_DEF: default node partition to use for worker nodes.
MAUI_WN_PART: a nlist with one entry per worker node (the key is the node full name). The value is the name of the MAUI partition in which to place that worker node.
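A minimal sketch for the partition variables, with hypothetical partition and host names (the node key is escaped here, as for WN_ATTRS; MAUI_CFG itself is best built as in the pro_lcg2_config_site_maui.tpl example):
variable MAUI_WN_PART_DEF = "lcgpro";              # illustrative default partition
variable MAUI_WN_PART = nlist(
    escape("wn01.example.org"), "bigmem"           # put this node in another partition
);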
CE Status
CE related templates use the variable CE_STATUS to control the CE state. Supported values are:
Production: this is the normal state. The CE receives and processes jobs.
Draining: the CE doesn't accept new jobs but continues to execute queued jobs (as long as there are WNs available to execute them).
Closed: the CE doesn't accept new jobs and queued jobs are not executed. Only running jobs can complete.
Queuing: the CE accepts new jobs but will not execute them.
CE_STATUS indicates the desired status of the CE. All the necessary actions are taken to put the CE into the requested status. The default status (if the variable is not specified) is Production. This variable can be used in conjunction with WN_ATTRS to drain queues and/or nodes, as in the sketch below.
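For example, a sketch (with a hypothetical host name) that drains the CE and takes one worker node offline at the same time:
variable CE_STATUS = "Draining";                   # stop accepting new jobs
variable WN_ATTRS = nlist(
    escape("wn01.example.org"), nlist("status", "offline")  # drain this node as well
);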
Run-Time Environment
gLite 3.0 templates introduce a new way to define GlueHostApplicationSoftwareRunTimeEnvironment. Previously it was necessary to define the list of all tags in the site configuration template. As most of these tags are standard tags attached to a release of the middleware, there is now a default list of tags defined in the default site configuration template, defaults/site.tpl. To supplement this list with tags specific to the site (e.g. LCG_SC3), define the variable CE_RUNTIMEENV_SITE instead of CE_RUNTIMEENV:
variable CE_RUNTIMEENV_SITE = list("LCG_SC3");
This change is backward compatible: if CE_RUNTIMEENV is defined in the site configuration template, its value will be used.
DPM Configuration
DPM related standard templates require a site template describing the site configuration of the service. The variable DPM_CONFIG_SITE must contain the name of this template. This template defines the whole DPM configuration, including all the disk servers used, and is used to configure every machine that is part of the DPM configuration.
There is no default template provided for the DPM configuration. To build your own template, you can look at the template pro_se_dpm_config.tpl in the examples provided with QWG templates.
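For example, assuming your site template is called site/pro_se_dpm_config (an illustrative name), the profiles of the machines in the DPM configuration would define:
variable DPM_CONFIG_SITE = "site/pro_se_dpm_config";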
If you want to use the Oracle version of the DPM server, define the following variable in your machine profile:
variable DPM_SERVER_MYSQL = false;
As of DPM 1.5.10, the script used to publish dynamic information about DPM into the BDII (space used/free per VO) has not been updated to interact properly with VOMS mapping. As a result, VO-specific pools are not counted in the published values. QWG templates provide a fixed version of the script, which can be installed by adding the following line to the DPM head node profile:
include glite/se_dpm/server/info_dynamic_voms;
To work properly, this script requires /opt/lcg/etc/DPMCONFIG (or whatever file you defined for the DPNS database connection information) to be world-readable. This can be achieved by adding the following line to your DPM configuration in your site specific template:
"/software/components/dpmlfc/options/dpm/db/configmode" = "644";
LFC Configuration
LFC related standard templates require a site template describing the site configuration of the service. The variable LFC_CONFIG_SITE must contain the name of this template.
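For example, assuming a site template called site/pro_lfc_config (an illustrative name):
variable LFC_CONFIG_SITE = "site/pro_lfc_config";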
Normally, the only things really required in this site specific template are the passwords for the LFC user (by default lfc) and for the MySQL administrator (by default root). There is no default value provided for these passwords. Look at the standard LFC configuration template, templates/trunk/glite-3.0.0/glite/lfc/config, for the syntax.
If you want to use the Oracle version of the LFC server, define the following variable in your machine profile:
variable LFC_SERVER_MYSQL = false;
LFC templates allow a LFC server to act as a central LFC server (registered in the BDII) for some VOs and as a local LFC server for the others. There are 2 variables controlling what is registered in the BDII:
LFC_CENTRAL_VOS: list of VOs for which the LFC server must be registered in the BDII as a central server. Default is an empty list.
LFC_LOCAL_VOS: list of all VOs for which the server must be registered in the BDII as a local server. Defaults to all supported VOs (the VOS variable). If a VO is present in both lists, it is removed from LFC_LOCAL_VOS. If you don't want this server to be registered as a local server for any VO, even for VOs configured on this node (present in the VOS list), you must define this variable as an empty list: variable LFC_LOCAL_VOS = list();
VOs listed in these two lists must be present in the VOS variable. These 2 variables have no impact on GSI (security) configuration and don't control access to the server.
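A minimal sketch, with a hypothetical VO name, that makes the server a central catalogue for one VO without registering it as a local catalogue for any other VO:
variable LFC_CENTRAL_VOS = list("vo.example.org");
variable LFC_LOCAL_VOS = list();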