QWG Templates and Site Customization

This page describes the structure of the template framework that QWG templates are part of, and how to integrate site customizations into this framework.

The QWG template framework has been designed to achieve the following goals:

  • Provide generic templates for software installation and configuration, with local site parameters separated from the provided templates.
  • Preserve site customizations across software upgrades (OS or middleware).
  • Allow several sites to share a configuration database, reducing the overall management load while keeping the required flexibility.

Template Hierarchies

QWG LCG2 templates rely on a basic structure built from template hierarchies. Each template hierarchy is dedicated to one specific aspect of the configuration, e.g. the OS, the LCG middleware, site specific parameters...

There is no imposed naming of template hierarchies, except for the clusters hierarchy, which must contain the cluster definitions. clusters is the only hierarchy searched for machine profiles.

The suggested layout is:

  • os: this hierarchy is used for OS related templates (e.g. the RPMs associated with each feature group). Generally it is made of one sub-hierarchy per OS version/architecture (e.g. sl308-386, sl440-x86_64). Most of the templates in this hierarchy are generated from the OS distribution and should not be edited.
  • grid: this hierarchy is used for templates related to EGEE/LCG middleware installation and configuration. Generally it is made of one sub-hierarchy per middleware version (e.g. glite-3.0.0, glite-3.1). This hierarchy typically contains templates provided by QWG LCG. Most of these templates are configurable through variable definitions and should require no editing.
  • standard: this hierarchy is used for other kinds of standard templates provided by various products, e.g. Quattor core templates, pan standard templates, Lemon templates... Generally it contains one directory tree per product. The templates in this hierarchy should not be edited.
  • sites: this hierarchy is used for templates that are not standard (site specific templates or site customized versions of standard templates) but are (potentially) common to several clusters. It generally contains one sub-hierarchy per site. The site concept is explained in more detail later but is not required to be linked to a physical location. Look at a site example.
  • clusters: this hierarchy is used for cluster specific templates. There should be one sub-hierarchy per cluster. A cluster defines a group of machines sharing some common configuration. One specific requirement for a cluster is that it must contain a profiles directory containing the machine profiles (i.e. the object templates used to define machine configurations). It is valid for a cluster to have an empty profiles directory. Look at a cluster example.
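
The layout above can be made concrete with a minimal machine profile sketch, as it could appear under cfg/clusters/mycluster/profiles (all names here are hypothetical, and the templates a real profile includes depend on the machine type):

```pan
# Hypothetical machine profile: cfg/clusters/mycluster/profiles/grid001.tpl
object template grid001;

# Site specific hardware description for this machine
include pro_hardware_grid001;

# ...further includes describing the machine function (OS, middleware, services)...
```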

Other pages describe in more detail the layout and customization of OS templates and gLite templates.

Clusters and Template Hierarchies

Each cluster is associated with one middleware version and one or several sites by defining the appropriate include path, used by the pan compiler to locate templates. This include path is defined in a properties file at the top of each cluster hierarchy (cfg/clusters/cluster-name).

This file must contain one line defining the property cluster.pan.includes as a space separated list of hierarchies. Each hierarchy is interpreted as a file pattern relative to the cfg directory (or whatever has been specified for the cfg property). Each entry specifies a directory to which the template name (including its relative position in the directory tree) is appended to locate the template file.

The include path is processed in the order specified. If a template exists in several hierarchies, the first one found according to the include path order is used.

Note: legacy template names, as defined in the template statement at the beginning of the template, don't include the relative position of the template in its template hierarchy (called the namespace). To include such templates when they are spread over several sub-directories of the template hierarchy, you can use all the sub-directories instead of listing them explicitly, by appending the pattern /**/* to the end of the directory name. This entry must be added after the normal entry. With such legacy templates, if a template exists in several directories of a hierarchy, the inclusion order is unspecified.
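
As an illustration (the hierarchy name sites/mysite is an assumption), such an entry is added right after the normal one:

```properties
cluster.pan.includes=sites/mysite sites/mysite/**/*
```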

Look at an example. As reflected by this example, hierarchies of standard templates must be included after site specific templates (a cluster may belong to several sites), in the following order: grid, os and standard.

Note: the clusters/cluster-name hierarchy is implicitly added as the first entry in the include path. It should not be added explicitly to the file.

Cluster parameters

For every cluster, it is possible to customize its configuration in the template pro_site_cluster_info.tpl. There must be one such template per cluster. Look at the example provided for more information about the minimum set of variables to define.
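
As a sketch (apart from PKG_REPOSITORY_CONFIG, which is used later in this page, the content shown is only a placeholder), such a template could look like:

```pan
# Cluster parameter template: pro_site_cluster_info.tpl (one per cluster)
template pro_site_cluster_info;

# Template containing the RPM repository configuration for this cluster
variable PKG_REPOSITORY_CONFIG ?= 'repository/config';
```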

One piece of required information is the default root password for nodes in the cluster (it can also be customized on a per node basis, in the node profile). This must be the password hash, as returned by the command:

openssl passwd -1 my_preferred_password
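
For example (the password below is only a placeholder), the command prints an MD5-crypt hash starting with $1$; this full string is the value to put in the cluster configuration:

```shell
# Generate the MD5-crypt hash of the default root password.
# "my_preferred_password" is a placeholder: use your own value.
hash=$(openssl passwd -1 my_preferred_password)
echo "$hash"
```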

Selecting OS and Kernel version

Refer to the section about configuring the OS in the documentation about OS template customization.

RPM repositories

Each machine profile must contain the list of RPM repositories (the directories RPMs are downloaded from) used in the profile. This is done by including, at the end of the profile, a cluster or site specific template, generally named repository/config.tpl. This must be done last in the machine profile and you should avoid doing it twice, as this is a time consuming operation.

It is recommended to refer to this template through the variable PKG_REPOSITORY_CONFIG, and to define this variable in the cluster specific pro_site_cluster_info.tpl. Using this method, you should add at the end of each machine profile an include statement referencing this variable.

Look at machine profile examples for more information.

Site Concept

The layout of QWG templates is based on a clear separation between standard templates, those part of QWG releases, and site specific templates. Site specific templates generally contain:

  • Site parameters used as input by standard templates to produce the final configuration meeting site requirements.
  • Templates providing site specific features.

When using SCDB, machine profiles are organized in clusters. A cluster in SCDB is not related to a computing cluster but to an arbitrary grouping of machines. All clusters in SCDB must reside in the cfg/clusters template hierarchy. There are many reasons why a site may want to have separate SCDB clusters, for example:

  • Different groups of machines: grid servers, internal servers, desktops...
  • Different versions of grid middleware: currently, all the machines in a cluster must run the same middleware version (see gLite templates layout). When transitioning from one version to another, the site needs to create a new cluster.
  • Production and testbed machines
  • Clusters of machines spread over different geographical locations
  • Splitting a large number of machines into smaller units, for example based on the alphabetical order of machine names

When a site runs several clusters, it is generally convenient not to duplicate the information shared by several clusters. This is done through the concept of sites. A site in SCDB is not necessarily related to a geographical grouping of machines, and a cluster can belong to several sites. It is up to each site to define how it uses SCDB sites. By convention, all the SCDB sites are defined in cfg/sites, but nothing prevents a site from using another name or several different template hierarchies.

A cluster is associated with one or several sites by its cluster specific properties file. If it is associated with several sites, the first one takes precedence over the following ones when a template exists in several sites. This ability to associate a cluster with several sites is a key SCDB feature that can be used for several use cases.

Below is a description of several common use cases. They are presented as distinct use cases for clarity, but nothing prevents mixing them.

Managing several geographical sites from one SCDB

In this use case, one SCDB is used to manage different geographical sites, sharing some configuration information between them. In the following example, the geographical site is called mycity and all sites belong to one grid site called gridsite. This is easily achieved by defining the following SCDB sites:

  • mycity: defines parameters corresponding to the geographical location (like network related parameters) that are common to all kinds of clusters (grid machines, non grid servers, desktops...).
  • gridsite: on the other hand, defines parameters that are common to all grid machines, whatever geographical location they belong to.

In the cluster properties file, you include mycity before gridsite, with something like:

cluster.pan.includes=sites/mycity sites/mycity/**/* sites/gridsite sites/gridsite/**/* ...

Note: look at Doc/TemplateCustom for more information on the format.

If you want to add a cluster at another geographical location (e.g. mycity2) belonging to the same grid site, you will just need to adapt this cluster with something like:

cluster.pan.includes=sites/mycity2 sites/mycity2/**/* sites/gridsite sites/gridsite/**/* ...

If you want to add a non grid cluster at location mycity, you will tune this cluster with something like:

cluster.pan.includes=sites/mycity sites/mycity/**/* ...

Note: sites/ in the examples must be replaced by whatever name you chose, if you decided to use another one.

Supporting production and test clusters

This use case is about a site running production and test grid systems. They share most of their grid parameters but some of them are different. This is easily achieved by defining three SCDB sites (in this example, the grid site is called gridsite):

  • gridsite-prod: parameters specific to grid production systems
  • gridsite-test: parameters specific to grid test systems
  • gridsite: parameters common to all grid systems at the site

In the production cluster, the include path will be defined as:

cluster.pan.includes=sites/gridsite-prod sites/gridsite-prod/**/* sites/gridsite sites/gridsite/**/* ...

In the test cluster, the include path will be defined as:

cluster.pan.includes=sites/gridsite-test sites/gridsite-test/**/* sites/gridsite sites/gridsite/**/* ...

Note: sites/ in the examples must be replaced by whatever name you chose, if you decided to use another one.

Hardware Templates

The hardware configuration of machines is described, as part of the machine profiles, through a set of templates. There is one template for each kind of hardware subsystem (CPU, memory, NIC...). They are located in standard/hardware, with one subdirectory for each kind of hardware subsystem. There is also a legacy subdirectory with templates in the old (non namespaced) format.

The recommendation is to create one template per machine that includes the appropriate templates to describe the machine hardware. The suggested location for these site specific templates is hardware/machine in the site directory. Look at the examples for more information.
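
A sketch of such a per machine hardware template (every template name below is illustrative, not an actual QWG template name):

```pan
# Hypothetical site template: sites/mysite/hardware/machine/example-box.tpl
template hardware/machine/example-box;

# Assemble the hardware description from per-subsystem templates
include hardware/cpu/example_cpu;
include hardware/ram/example_1gb;
include hardware/nic/example_nic;
```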

Note: if you have existing clusters that use only the legacy, non namespaced, version of the hardware templates, you need to update the include path in all your clusters and add standard/hardware/legacy after standard in the cluster.pan.includes value.

Template Compilation Tool

The recommended method to process all the templates and build machine profiles is to use the ant tool, a Java based equivalent of make, provided with SCDB (it can also be used without SCDB). ant brings the advantage of platform independence, allowing Quattor management tasks to be done on any platform (Unix, Windows, MacOS).

Look at How to Use SCDB documentation for more information on how to use this tool.

Last modified on Jul 4, 2007, 11:40:55 AM