Notes about Installation and Configuration of a Torque server (no cream) - EMI-2 - SL6 x86_64

  • These notes are provided by site admins on a best-effort basis as a contribution to the IGI communities and MUST NOT be considered a substitute for the official IGI documentation.
  • This document is addressed to site administrators responsible for middleware installation and configuration.
  • The goal of this page is to provide some hints and examples on how to install and configure an EMI torque server based on EMI-2 middleware.


  1. About IGI - Italian Grid infrastructure
  2. About IGI Release
  3. EMI-2 Release
  4. Yaim Guide
  5. TOBE CHANGED - site-info.def yaim variables
  6. TOBE CHANGED - site-BDII yaim variables
  7. Troubleshooting Guide for Operational Errors on EGI Sites
  8. Grid Administration FAQs page

Service installation

O.S. and Repos

  • Start from a fresh installation of Scientific Linux 6.x (x86_64).
# cat /etc/redhat-release 
Scientific Linux release 6.2 (Carbon)

  • Install the additional repositories: EPEL, Certification Authority, EMI-2

# yum install yum-priorities yum-protectbase epel-release
# rpm -ivh

# cd /etc/yum.repos.d/
# wget
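As a quick sanity check (not part of the original notes), you can verify that yum actually sees the new repositories; the exact repository names depend on the files you have just installed:

# yum repolist enabled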

  • Be sure that SELinux is disabled (or permissive). Details on how to disable SELinux are here:

# getenforce 
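If getenforce reports "Enforcing", a common way to switch to permissive mode is the following (generic SL6 commands, not taken from the original notes); the first command takes effect immediately but is lost at reboot, the second one makes the setting persistent:

# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config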

yum install

# yum clean all
Loaded plugins: downloadonly, kernel-module, priorities, protect-packages, protectbase, security, verify, versionlock
Cleaning up Everything

# yum install ca-policy-egi-core
# yum install emi-torque-server emi-torque-utils 
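Optionally (this check is not in the original notes), verify that the metapackages actually pulled in the Torque packages:

# rpm -qa | grep -i torque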

Service configuration

You have to copy the configuration files to another path, for example /root, and set them properly (see later):
# cp -vr /opt/glite/yaim/examples/siteinfo .


Create the directory siteinfo/vo.d and fill it with a file for each supported VO. You can download them from HERE.
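Each vo.d file is just a fragment of yaim variables scoped to a single VO. As a purely illustrative sketch (the VOMS host, port and DN below are placeholders, not real endpoints), a file such as siteinfo/vo.d/dteam could look like:

# mkdir -p siteinfo/vo.d
# cat siteinfo/vo.d/dteam
SW_DIR=$VO_SW_DIR/dteam
DEFAULT_SE=$SE_HOST
VOMS_SERVERS="'vomss://voms.example.org:8443/voms/dteam?/dteam'"
VOMSES="'dteam voms.example.org 15004 /DC=org/DC=example/OU=Host/CN=voms.example.org dteam'"
VOMS_CA_DN="'/DC=org/DC=example/CN=Example CA'"

The real files for the supported VOs should of course use the official VOMS endpoints.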

users and groups

You can download them from HERE.
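For reference, yaim expects users.conf and groups.conf in a colon-separated format; the UIDs, GIDs and pool account names below are placeholders, shown only to illustrate the layout:

# head -2 siteinfo/users.conf
50001:dteam001:5000:dteam:dteam::
50002:dteam002:5000:dteam:dteam::

# head -2 siteinfo/groups.conf
"/dteam/ROLE=lcgadmin":::sgm:
"/dteam"::::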


KISS: Keep it simple, stupid! For your convenience there is an explanation of each yaim variable. For more details look HERE.
# cat siteinfo/site-info.def

VOS=" dteam infngrid ops gridit"
QUEUES="cert prod"
CERT_GROUP_ENABLE="dteam infngrid ops /dteam/ROLE=lcgadmin /dteam/ROLE=production /ops/ROLE=lcgadmin /ops/ROLE=pilot /infngrid/ROLE=SoftwareManager /infngrid/ROLE=pilot"
PROD_GROUP_ENABLE=" gridit / /gridit/ROLE=SoftwareManager /"
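The snippet above covers only the VO and queue related variables. A Torque server configuration typically also needs the batch server and CE related variables set; the hostnames and paths below are placeholders, not values from this site:

BATCH_SERVER="torque.example.org"
CE_HOST="cream-ce.example.org"
SITE_NAME="EXAMPLE-SITE"
USERS_CONF=/root/siteinfo/users.conf
GROUPS_CONF=/root/siteinfo/groups.conf
WN_LIST=/root/siteinfo/wn-list.conf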



WN list

List the WNs in this file, one host per line, for example:
# less /root/siteinfo/wn-list.conf
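Hypothetical content, one worker node FQDN per line:

wn-01.example.org
wn-02.example.org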

munge configuration

  • generate a key by launching /usr/sbin/create-munge-key
# ls -ltr /etc/munge/
total 4
-r-------- 1 munge munge 1024 Jan 13 14:32 munge.key

  • Copy the key /etc/munge/munge.key to every host of your cluster and adjust its ownership and permissions (a copy example is sketched at the end of this section):
# chown munge:munge /etc/munge/munge.key

  • Start the munge daemon on each node:
# service munge start
Starting MUNGE:                                            [  OK  ]

# chkconfig munge on
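One possible way to push the key to a worker node and fix its ownership and permissions (the hostname below is a placeholder):

# scp -p /etc/munge/munge.key root@wn-01.example.org:/etc/munge/
# ssh root@wn-01.example.org 'chown munge:munge /etc/munge/munge.key; chmod 400 /etc/munge/munge.key'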

tomcat and ldap users

It is necessary to create the tomcat and ldap users on the torque server, otherwise the computing elements will fail to connect to the server.

When those users don't exist on the server, on the CE you will see errors like the following:

2012-04-24 15:37:29 lcg-info-dynamic-scheduler: LRMS backend command returned nonzero exit status
2012-04-24 15:37:29 lcg-info-dynamic-scheduler: Exiting without output, GIP will use static values
Can not obtain pbs version from host

while on the torque server you will see:

04/24/2012 14:00:46;0080;PBS_Server;Req;req_reject;Reject reply code=15021(Invalid credential), aux=0, type=StatusJob, from
04/24/2012 14:01:02;0080;PBS_Server;Req;req_reject;Reject reply code=15021(Invalid credential), aux=0, type=StatusJob, from

The solution is to add the tomcat and ldap users/groups to the torque host and restart pbs_server, as they exist only on the CreamCE host.

# echo 'tomcat:x:91:91:Tomcat:/usr/share/tomcat5:/bin/sh' >> /etc/passwd
# echo 'ldap:x:55:55:LDAP User:/var/lib/ldap:/bin/false' >> /etc/passwd
# echo 'tomcat:x:91:' >> /etc/group
# echo 'ldap:x:55:' >> /etc/group
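To verify the accounts and apply the change (generic commands, not from the original notes):

# id tomcat
# id ldap
# service pbs_server restart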

yaim check

Verify that you have set all the yaim variables by launching:
# chmod -R 600 siteinfo/

# /opt/glite/yaim/bin/yaim -v -s siteinfo/site-info.def -n TORQUE_server -n TORQUE_utils
   INFO: YAIM terminated succesfully.

yaim config

# /opt/glite/yaim/bin/yaim -c -s siteinfo/site-info.def -n TORQUE_server -n TORQUE_utils
   INFO: YAIM terminated succesfully.

Service Checks

TORQUE checks

  • check the pbs settings:
# qmgr -c 'p s'
  • check the WNs state
# pbsnodes -a
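A further quick check (not in the original notes) is to query the status of the pbs_server itself:

# qstat -B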

maui settings

  • In order to reserve a job slot for test jobs, you need to apply some settings in the maui configuration (/var/spool/maui/maui.cfg). Suppose you have enabled the test VOs (ops, dteam and infngrid) on the "cert" queue and that you have 8 job slots available. Add the following lines to the maui.cfg file (a possible reservation is sketched below):
CLASSCFG[prod] QDEF=normal
After the modification restart maui.
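A minimal sketch of such a reservation, assuming maui's standing reservation (SRCFG) syntax; the reservation name and values are chosen here only for illustration and are not taken from the original notes:

SRCFG[test]  PERIOD=INFINITY
SRCFG[test]  CLASSLIST=cert
SRCFG[test]  TASKCOUNT=1
SRCFG[test]  RESOURCES=PROCS:1

This would keep one of the 8 job slots available only to jobs submitted to the "cert" queue.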

  • In order to avoid yaim overwriting this file during host reconfiguration, set the corresponding variable in your site-info.def (the first time you launch the yaim script it has to be set to "yes"); a hint on the variable follows below.
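In many yaim based Torque setups the variable that controls this behaviour is CONFIG_MAUI; this is an assumption based on common configurations, not something stated in the original notes:

CONFIG_MAUI="no"    # assumption: set to "yes" only for the first yaim run, then back to "no"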


Date         Comment       By
2012-05-24   First draft   Paolo Veronesi

-- PaoloVeronesi - 2012-05-24
