Known issues

Open known issues

Known problems in CREAM software or in other software modules affecting a CREAM based CE (the list refers to known problems affecting the latest release of the software released in EMI)

Null pointer exception from Tomcat6 HTTP processor

Due to a [https://bugzilla.redhat.com/show_bug.cgi?id=1128396|bug] in tomcat6 (up to version 6.0.24.78), the container fails to parse chunked transfer encoding and raises the following exception:
Aug 01, 2014 12:34:56 AM org.apache.coyote.http11.Http11Processor process
SEVERE: Error finishing request
java.lang.NullPointerException
        at org.apache.coyote.http11.filters.ChunkedInputFilter.parseHeader(ChunkedInputFilter.java:420)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.parseEndChunk(ChunkedInputFilter.java:409)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.doRead(ChunkedInputFilter.java:155)
        at org.apache.coyote.http11.filters.ChunkedInputFilter.end(ChunkedInputFilter.java:208)
        at org.apache.coyote.http11.InternalInputBuffer.endRequest(InternalInputBuffer.java:336)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:886)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:745)
There's no workaround at the moment.

Missing infoprovider wrapper or corrupted configuration file in EMI-3

Due to a bug in rpm scriptlets of the packages:
  • dynsched-generic
  • lcg-info-dynamic-scheduler-pbs
  • info-dynamic-scheduler-slurm
an update of those infoproviders causes the wrapper /var/lib/bdii/gip/plugin/glite-info-dynamic-scheduler-wrapper to be deleted or the file /etc/lrms/scheduler.conf to be corrupted.

The workaround consists of running YAIM every time one of the above packages is updated.
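For example, using the same configuration command used for the full CREAM CE configuration (the yaim path and site-info.def location below are assumptions; adapt them to your site):

# re-run YAIM after updating one of the packages above (paths are site specific)
/opt/glite/yaim/bin/yaim -c -s /root/site-info.def -n creamCE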

SEVERE: A web application created a ThreadLocal ... but failed to remove it when the web application was stopped

This error is reported by tomcat6 only when the server container is being shut down. Although the message suggests a memory leak, the issue does not occur while the server is running.

Problem in configuring CREAM (ONLY if the YAIM tool isn't used)

If CREAM isn't configured using the YAIM tool, the following query MUST be executed on the cream database:

use creamdb; ALTER TABLE db_info MODIFY creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP; commit;

This is because there is an unwanted auto-update of the field "creationTime" in the cream database, which causes a problem when submitting jobs via WMS.
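A possible way to apply the statement above from the shell, assuming the MySQL server runs on the CE itself and you have its root credentials:

# apply the fix to the cream database (prompts for the MySQL root password)
mysql -u root -p creamdb -e "ALTER TABLE db_info MODIFY creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;"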

When submitting jobs via the WMS, you could obtain the following wrong message:

=============== glite-wms-job-status Success =============

BOOKKEEPING INFORMATION:

Status info for the Job : ...

Current Status: Aborted <-----------------

Status Reason: CREAM'S database has been scratched and all its jobs have been lost <-----------------

Destination: ...

Submitted: ...

Parent Job: ...

==================================================================

N.B.: the CREAM database isn't actually scratched, but it appears so from the WMS point of view because of the above problem.

Authorization error from Argus in EMI-2

The Argus server does not authorize users when it is under heavy load. For a workaround see Problem with Argus 1.5 (EMI-2) and CREAM.

tomcat5 fails to start unless the supplementary repository is enabled in RHEL 5.9

In RHEL 5.9, installing tomcat5 (tomcat5-5.5.23-0jpp.37.el5) pulls in java-1.7.0-ibm-devel (1:1.7.0.3.0-1jpp.2.el5) if the supplementary-5 repository is enabled. However, without the supplementary repo set up, tomcat5 pulls in java-1.4.2-gcj-compat-devel instead. In this case starting tomcat5 fails, as seen in /var/log/tomcat5/catalina.out:
Using CATALINA_BASE:   /usr/share/tomcat5
Using CATALINA_HOME:   /usr/share/tomcat5
Using CATALINA_TMPDIR: /usr/share/tomcat5/temp
Using JRE_HOME:
WARNING: error instantiating 'org.apache.juli.ClassLoaderLogManager' referenced by java.util.logging.manager, class not found
java.lang.ClassNotFoundException: org.apache.juli.ClassLoaderLogManager not found
   <<No stacktrace available>>
WARNING: error instantiating '1catalina.org.apache.juli.FileHandler,' referenced by handlers, class not found
java.lang.ClassNotFoundException: 1catalina.org.apache.juli.FileHandler,
   <<No stacktrace available>>
Exception during runtime initialization
java.lang.ExceptionInInitializerError
   <<No stacktrace available>>
Caused by: java.lang.NullPointerException
   <<No stacktrace available>>

Version-Release number of selected component (if applicable): tomcat5-5.5.23-0jpp.37.el5.

How reproducible: always.

Steps to reproduce:

  • install RHEL 5.9
  • install tomcat5
  • service tomcat5 start

Actual results: tomcat5 fails to start normally

Expected results: tomcat5 starts normally

Additional info: Installing java-1.6.0-openjdk-devel allows tomcat5 to start without any error
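In practice, on an affected host:

# install a JDK that works with tomcat5, then start the service
yum install java-1.6.0-openjdk-devel
service tomcat5 start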

Problem transferring the sandbox files between the (EMI-3) CREAM CE and the WN on SLURM

This issue prevents the WN from transferring the sandbox files back to the CREAM node; it happens only if the file system is not shared (see the YAIM configuration for SLURM) and CREAM is configured with the following YAIM variables:

SANDBOX_TRANSFER_METHOD_BETWEEN_CE_WN=LRMS
CE_BATCH_SYS=slurm
JOB_MANAGER=slurm
The workaround is to overwrite the file /usr/libexec/slurm_submit.sh with the fixed one available for download at https://github.com/prelz/BLAH/blob/master/src/scripts/slurm_submit.sh; it is not necessary to restart tomcat or reconfigure CREAM to apply the patch.
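A minimal sketch of applying the workaround, assuming the corresponding raw-file URL on GitHub:

# back up the original script, then replace it with the fixed version
cp /usr/libexec/slurm_submit.sh /usr/libexec/slurm_submit.sh.orig
curl -o /usr/libexec/slurm_submit.sh https://raw.githubusercontent.com/prelz/BLAH/master/src/scripts/slurm_submit.sh
chmod 755 /usr/libexec/slurm_submit.sh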

The jobSuspend and jobResume operations fail on a SLURM-enabled CREAM CE (EMI-3)

Since the hold and resume operations have not yet been implemented in BLAH, CREAM reports the failure of such operations to the user with an error message like "slurm hold command failed (stdout:hold not supported-)"

Jobs submitted to a SLURM-enabled CREAM CE (EMI-3) fail with the message “failure reason = 127”

If the jobs fail with “failure reason = 127”, restart the sshd daemon on all nodes in order to apply the changes to the ssh configuration:
# /etc/init.d/sshd restart
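To run the restart on every node in one go, a loop like the following can be used (the node names are hypothetical and assume root ssh access to the nodes):

# restart sshd on all worker nodes
for wn in wn01 wn02 wn03; do
    ssh root@$wn '/etc/init.d/sshd restart'
done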

Problem at first configuration with EMI-2 CREAM with GE

The first yaim configuration for an EMI-2 CREAM CE using GE as batch system fails with:

/etc/lrms/scheduler.conf: No such file or directory 

The problem disappears when yaim is run again

Problem if there are "extra" characters in the beginning of the tomcat key file

Because of an issue in trustmanager, there can be problems if there is something before the -----BEGIN ...----- line in /etc/grid-security/tomcat-key.pem

The workaround is simply to remove these characters
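For example (a sketch; make a backup first):

# inspect the first line of the key file
head -1 /etc/grid-security/tomcat-key.pem
# keep only the lines from the "-----BEGIN" header onwards
cp /etc/grid-security/tomcat-key.pem /etc/grid-security/tomcat-key.pem.bak
sed -i -n '/-----BEGIN/,$p' /etc/grid-security/tomcat-key.pem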

Problem suspending not running job for LSF with old blparser

When the CREAM CE is configured to use the old blparser, there might be problems suspending jobs when LSF is used.

In this case the blparser can crash. It will then be automatically restarted by the blparser_master process, but for a few minutes submissions won't work.

Issue with conflicting BUpdaterSGE instances

The gLite service on the CREAM CE starts the following services in this exact order: tomcat5, glite-lb-locallogger and glite-ce-blahparser.

The default behaviour of tomcat5 is to start the BUpdaterSGE daemon in case it thinks it is not running. The problem is that at start-up time BUpdaterSGE is also started by glite-ce-blahparser afterwards. This gives rise to two running instances of the BUpdaterSGE daemon and to a race condition while monitoring running jobs. Jobs may end up being cancelled because of the conflicting BUpdaterSGE instances.

The workaround is to change the order in which the different services are started:

# diff /root/gLite.orig /etc/init.d/gLite
36c36
<       start)  SERVICE_LIST=`cat $GLITE_STARTUP_FILE`
---
>       start)  SERVICE_LIST=`cat $GLITE_STARTUP_FILE | sort`
44c44
<         stop) SERVICE_LIST=`cat $GLITE_STARTUP_FILE | sort`
---
>         stop) SERVICE_LIST=`cat $GLITE_STARTUP_FILE | sort -r`

This will be fixed in EMI 2.

CREAM jobs are cancelled with status reason=3 in a GE system

When a job is submitted by BLAH to a GE system, the BLAH job registry is updated via the sge_submit.sh script with status "1", and all subsequent statuses are updated in the BLAH job registry by the BUpdaterSGE daemon.

BUpdaterSGE is the daemon that decides what status a given job is in by examining the output of a "qstat" command. There is a tricky situation when a job disappears: was it cancelled or did it finish? To tell the difference, BUpdaterSGE uses "qacct" to query the accounting log. If there is information about the job in the accounting log, it finished; otherwise it was cancelled. There are two queries to the accounting log using "qacct -j", with a difference of one minute between the two. If both queries return an error, the job is assumed to be cancelled.

If you are seeing systematic cancelled jobs in glite-cream-ce.log like

02 Dec 2012 12:43:01,247 org.glite.ce.creamapi.jobmanagement.cmdexecutor.AbstractJobExecutor - JOB CREAM172977114 STATUS CHANGED: REALLY-RUNNING => CANCELLED [description=Cancelled by CE admin] [failureReason=reason=3] [localUser=XXX] [workerNode=XXX] [delegationId=1354371579.274301]

this most probably means that the "qstat" and "qacct" commands cannot be successfully executed by tomcat. This can happen for several reasons:

  • The BUpdaterSGE daemon does not inherit the correct GE environment variables
  • Tomcat user is not allowed to query the GE system
  • The accounting file is not shared in the CreamCE or not updated on the fly

The BUpdaterSGE daemon does not inherit the correct GE environment variables

If the environment of a BUpdaterSGE process does not include the GE environment variables, the GE client commands (qstat, qconf) cannot be executed by BUpdaterSGE. A consequence of this are qacct segfault messages in syslog or dmesg.

As a consequence, BUpdaterSGE will assume that jobs have been cancelled (because it receives no information from qstat or qacct). You can check the environment of the BUpdaterSGE process using the following commands, searching for the GE environment variables (SGE_EXECD, SGE_QMASTER, SGE_ROOT, SGE_CLUSTER_NAME, SGE_CELL):

# ps xuawww | grep -i sge
tomcat    7423  0.6  0.5  37184 21328 ?        S    Nov23 103:56 /usr/bin/BUpdaterSGE
root     30622  0.0  0.0  61180   804 pts/0    R+   13:41   0:00 grep -i sge

# (cat /proc/7423/environ; echo) | tr '\000' '\n'

This can happen if the BUpdaterSGE daemon is restarted by a user other than root (for example, tomcat starts the daemon at boot time and restarts it if the daemon is dead) without sourcing the proper environment. The workaround is to force the environment to be loaded in /etc/init.d/gLite and /etc/init.d/glite-ce-blahparser. This can be done simply by adding a line like the one below at the beginning of the previous scripts, where the GE environment is properly defined (SGE_EXECD, SGE_QMASTER, SGE_ROOT, SGE_CLUSTER_NAME, SGE_CELL):

 . /etc/profile.d/sge.sh

Tomcat user is not allowed to query the GE system

Some GE systems use certificates to encrypt the communication between the GE client and server. For the CREAM CE, tomcat must be able to query your system (the BUpdaterSGE daemon runs under user tomcat). If this is not the case, you will most probably get the following error when trying to run "qstat" as user tomcat:

su - tomcat
sh-3.2$ qstat -u '*'
error: commlib error: can't set CA chain file (/var/sgeCA/sge_qmaster/GridKa/userkeys/tomcat/cert.pem)
error: commlib error: ssl error ([ID=33558530] in module "system library": "No such file or directory")
error: unable to send message to qmaster using port 15020 on host "xxxxxxxx": can't set CA chain file

The accounting file is not shared in the CreamCE or not updated on the fly

BUpdaterSGE needs to consult the GE accounting file to determine how a given job ended. Therefore, the GE accounting file must be shared between the GE SERVER / QMASTER and the CREAM CE.

Moreover, to guarantee that the accounting file is updated on the fly, the GE configuration should be tuned (using qconf -mconf) in order to add under reporting_params the following definitions: accounting=true accounting_flush_time=00:00:00
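For example, as a GE manager:

# show the current reporting parameters
qconf -sconf | grep reporting_params
# "qconf -mconf" opens the global configuration in an editor;
# make sure the reporting_params line contains:
#   accounting=true accounting_flush_time=00:00:00
qconf -mconf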

pbs_server create hangs on first time run

On a fresh installation of torque-server, running pbs_server for the first time via /etc/init.d/pbs_server start hangs. The command to create a server database file (/etc/init.d/pbs_server create) also hangs. Issue tracked at https://bugzilla.redhat.com/show_bug.cgi?id=744138.

Problems with suspend when the old blparser is used

When the old blparser is used, there are problems with the suspend command.

Relevant bug: https://savannah.cern.ch/bugs/?90085

Problems running single yaim functions in EMI-2

When configuring an EMI-2 CREAM-CE with yaim, there might be problems if a single function is run (the problem is that the TOMCAT_VERSION variable is not defined).

The problem involves the following functions:

  • config_cream_ce
  • config_cream_cemon
  • config_cream_emies
  • config_cream_gliteservices
  • config_cream_stop

There are no problems if the whole CREAM-CE is configured instead (i.e. yaim -c -s site-info.def -n creamCE ...).

Problems re-enabling CEMon and/or EMI-ES

When configuring an EMI-2 CREAM-CE, CEMon and EMI-ES are not deployed by default, unless the relevant yaim variables USE_CEMON and USE_EMIES are set to true.

There are problems if CEMon is disabled (i.e. with USE_CEMON not set or set to false) and then a reconfiguration is done re-enabling it (i.e. setting USE_CEMON to true). The workaround is to reinstall the glite-ce-monitor rpm before reconfiguring.

There is the same issue for EMI-ES.
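A sketch of the workaround for CEMon (the EMI-ES case is analogous, using the corresponding rpm; the yaim path below is an assumption):

# reinstall the rpm, then reconfigure the node
yum reinstall glite-ce-monitor
/opt/glite/yaim/bin/yaim -c -s site-info.def -n creamCE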

Execution of DAG jobs

Execution of DAG jobs on the CREAM based CE through the gLite WMS is not implemented yet.

qsub crashes

With some Torque versions qsub was observed crashing, with glibc detecting a double free or corruption. Although this is a problem to be addressed in Torque, adding:

export MALLOC_CHECK_=0

to /etc/blah.config should help.
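For example:

# append the workaround to the BLAH configuration
echo 'export MALLOC_CHECK_=0' >> /etc/blah.config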

CREAM CE not Torque master: communication errors when the maui server and client are not of the same builds.

Bug #61698: when the CREAM CE is not a Torque server, there could be communication errors when the maui (and probably torque) server and client are NOT of the same builds.

A common scenario/example when this can happen:

  • The maui server is a 32bit binary deployed on a 32bit LCG-CE
  • The 64bit maui client is deployed on a 64bit CREAM-CE

From the CREAM-CE node perform:

[root@cream-ce]# diagnose -g

If you see:

ERROR:    lost connection to server
ERROR:    cannot request service (status)

you are affected by the problem.

A possible workaround is the following:

On the LCG-CE create a cron file to dump the diagnose -g output to a file:

[root@lcg-ce]# cat <<EOF>> /etc/cron.d/diagnose-for-cream
*/5 * * * * root  /usr/bin/diagnose -g > /export/dir/to/cream-ce/diagnose.out
EOF

The interval defined in the /etc/cron.d/diagnose-for-cream file has to be chosen by the site administrators; the one above is just an example.

Then export over NFS the directory where the file is located:

[root@lcg-ce]# cat /etc/exports
/export/dir/to/cream-ce            cream-ce(rw,map_identity,no_root_squash,sync)

On the CREAM-CE include/mount the remote directory to a local one:

[root@cream-ce]# cat /etc/fstab | grep diagnose
lcg-ce:/export/dir/to/cream-ce                /import/dir/to/cream-ce         nfs    defaults,bg        0 0

Then feed the lcg-info-dynamic-scheduler with the diagnose output file:

[root@cream-ce]# cat /opt/glite/etc/lcg-info-dynamic-scheduler.conf | grep vomaxjobs-maui
vo_max_jobs_cmd: /opt/lcg/libexec/vomaxjobs-maui -h lcg-ce -infile /import/dir/to/cream-ce/diagnose.out

Special characters in CREAM_DB_USER and CREAM_DB_PASSWORD

Don't use special characters in the CREAM_DB_USER and CREAM_DB_PASSWORD yaim variables

Problems with OS language different than US English

Problems have been reported when jobs are submitted through the WMS to a CREAM CE deployed on a machine installed with a non-English language. This is because of the different representations of decimal numbers. The workaround in this case is to uncomment the line:

LANG=en_US

in $CATALINA_HOME/conf/tomcat5.conf and then restart tomcat
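A minimal sketch, assuming the line is present but commented out with a leading '#':

# uncomment the LANG setting and restart tomcat
sed -i 's/^#[[:space:]]*LANG=en_US/LANG=en_US/' $CATALINA_HOME/conf/tomcat5.conf
service tomcat5 restart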

Old known issues

Known problems in CREAM software or in other software modules affecting a CREAM based CE that have already been fixed (i.e. they do not affect the latest release of the software released in EMI)

Configuration error: cannot get instance of CommandManager: org/glite/lb/LBException

This issue occurs in one of the following cases:

  • when upgrading the CREAM CE from 1.15 to 1.16.
  • when upgrading the package glite-lb-client-java to version 2.0.6-1 or later

The workaround consists of creating a symbolic link in the web application deployment directory:

ln -sf /usr/lib/java/glite-lb-client-java.jar /var/lib/tomcat*/webapps/ce-cream/WEB-INF/lib

Fix provided in EMI-3 update 21

Error from TORQUE infoprovider: Exception: Cannot find user for xxxxx.xxxxxxxx

This error can be reported by the low-level infoprovider of TORQUE when the CREAM service and the TORQUE server are deployed on different hosts and a set of "local accounts", not belonging to any pool account, is defined on the TORQUE server. The error message is misleading: the infoprovider is unable to retrieve the group of the local account, not the user. The workaround consists of creating on the CREAM host the same set of users and groups defined on the TORQUE server.

Fix provided in EMI-3 update 14

NoClassDefFoundError from Argus updating from EMI-2 to EMI-3

When updating from EMI-2 to EMI-3, the support for Argus requires CAnL libraries which are incompatible with the CREAM service in EMI-3. It is necessary to force the service to use the old Argus libs in this way:
ln -sf /usr/share/java/argus-pep-api-java-compat.jar $CATALINA_HOME/webapps/ce-cream/WEB-INF/lib/argus-pep-api-java.jar
ln -sf /usr/share/java/argus-pep-common-compat.jar $CATALINA_HOME/webapps/ce-cream/WEB-INF/lib/argus-pep-common.jar

Fix provided in EMI-3 update 6

Wrong time format for MaxWallClockTime

With EMI-2 update 9 an issue occurs concerning the time format of GlueCEPolicyMaxWallClockTime and GlueCEPolicyMaxObtainableWallClockTime on a TORQUE based installation: the attributes are published in hours. There is no workaround at the moment; the only way is to change the script /usr/libexec/info-dynamic-pbs according to:
360c360
<            $maxWall = int(&convertHhMmSs($1)/60);
---
>            $maxWall = int(&convertHhMmSs($1));
363c363
<       $defaultWall = int(&convertHhMmSs($1)/60);
---
>       $defaultWall = int(&convertHhMmSs($1));

Relevant bug: https://savannah.cern.ch/bugs/?101076

Fix provided in EMI-3 update 6

Problem switching off the JobSubmissionManager (i.e. JOB_SUBMISSION_MANAGER_ENABLE false in /etc/glite-ce-cream/cream-config.xml)

Switching off the JobSubmissionManager (in /etc/glite-ce-cream/cream-config.xml) makes the CREAM service unavailable to the users. In particular the CREAM UI reports the following error message:

"Received NULL fault; the error is due to another cause: FaultString=[CREAM service not available: configuration failed!] - FaultCode=[SOAP-ENV:Server] - FaultSubCode=[SOAP-ENV:Server]"

Fix provided with EMI-2 update 9

Problem when submitting jobs via WMS (direct submission to the CREAM CE isn't affected by this problem)

There is an unwanted auto-update of the field "creationTime" in the cream database. This happens, for example, when tomcat is restarted (yaim stops and starts the tomcat service).

When submitting jobs via the WMS, you could obtain the following wrong message:

=================== glite-wms-job-status Success =================

BOOKKEEPING INFORMATION:

Status info for the Job : ...

Current Status: Aborted <-----------------

Status Reason: CREAM'S database has been scratched and all its jobs have been lost <-----------------

Destination: ...

Submitted: ...

Parent Job: ...

======================================================================

N.B.: the CREAM database isn't actually scratched, but it appears so from the WMS point of view because of the above problem.

The problem is solved by applying the following workaround on the cream database:


use creamdb; ALTER TABLE db_info MODIFY creationTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP; commit;

Fix provided with EMI-2 update 7

EMI-2 CREAM CE delegates bad proxy to WN

The delegated and limited proxy on the CE contains a corrupted "X509v3 Key Usage" field; this issue can be reproduced by executing the following command:
openssl x509 -noout -text -in /var/glite/cream_sandbox/[vo]/[dn_fqan_mappeduser]/proxy/[delegationid_internalid]
A temporary workaround is to change the proxy-limiting command in the script /usr/bin/glite-cream-copyProxyToSandboxDir.sh; the modified version of the script is available here

Fix provided with EMI-2 update 7

Problem with cancelled jobs notification

In BLAH 1.18.0 (EMI-2), for the notification of cancelled jobs to work correctly, it is necessary to add the following line in /etc/blah.config:

bupdater_use_bhist_for_killed="yes"

Fix released with EMI-1 Update 17 and EMI-2 Update 1

Problem with generic dynamic scheduler with SGE

The yaim plugin for SGE configures the GIP for publishing information, but when used out of the box the following error is shown in the BDII log:

Traceback (most recent call last):
File "/usr/libexec/lcg-info-dynamic-scheduler", line 256, in ?
import lrms
ImportError: No module named lrms

The workaround is to define PYTHONPATH in /var/lib/bdii/gip/plugin/glite-info-dynamic-scheduler-wrapper:

$ cat /var/lib/bdii/gip/plugin/glite-info-dynamic-scheduler-wrapper
#!/bin/sh
#/opt/lcg/libexec/lcg-info-dynamic-scheduler -c /opt/glite/etc/lcg-info-dynamic-scheduler.conf
export PYTHONPATH=/usr/lib/python:$PYTHONPATH
/usr/libexec/lcg-info-dynamic-scheduler -c /etc/lcg-info-dynamic-scheduler.conf

Relevant ticket: https://ggus.eu/ws/ticket_info.php?ticket=76961

Fix provided with EMI-2 update 3

Error parsing GLUE2PolicyRule

All the "GLUE2PolicyRule" attributes defined in the file /var/lib/bdii/gip/ldif/ComputingShare.ldif MUST BE in the form "VO:nameofthevo" (the VO prefix is mandatory). Other strings, even the empty one, are not correctly parsed by the script lcg-info-dynamic-scheduler, and the following error is reported in the BDII log:

vogrp = tmpl[2].strip()
IndexError: list index out of range

In that case the wrong attributes MUST BE removed

Fix provided with EMI-2 update 3

GlueCEStateWaitingJobs: 444444 and WallTime workaround

If the queues publish:

GlueCEStateWaitingJobs: 444444
and in the log /var/log/bdii/bdii-update.log you notice errors like the following:
Traceback (most recent call last):
  File "/usr/libexec/lcg-info-dynamic-scheduler", line 435, in ?
    wrt = qwt * nwait
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
probably the queues have no "resources_default.walltime" parameter configured. In that case define it for each queue by launching, for example:
# qmgr -c "set queue prod resources_default.walltime = 01:00:00"
# qmgr -c "set queue cert resources_default.walltime = 01:00:00"
# qmgr -c "set queue cloudtf resources_default.walltime = 01:00:00"

Relevant ticket: https://ggus.eu/tech/ticket_show.php?ticket=83229

Fix provided with EMI-2 update 3

Issue with the setting of the maximum number of accepted FTP connections

The maximum number of gridftp connections is now automatically set in /etc/grid-security/gridftp.conf.

It should also be added manually in the file /etc/gridftp.conf, i.e. the line:

connections_max 150

should be added.
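For example:

# set the same limit in /etc/gridftp.conf
echo 'connections_max 150' >> /etc/gridftp.conf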

Relevant ticket: https://ggus.eu/tech/ticket_show.php?ticket=78902

Because CREAM is able to protect itself when the load, memory usage, etc. are too high, disabling submissions through the limiter script

/usr/bin/glite_cream_load_monitor

whose thresholds for these system values and CREAM specific parameters are defined in a configuration file, the file

/etc/glite-ce-cream-utils/glite_cream_load_monitor.conf

must be modified as well.

Memory leak in bupdater for PBS and LSF

Version 1.16.3 of BLAH is affected by a quite critical memory leak in the bupdater component for LSF and PBS. Because of that, the memory usage of the bupdater process keeps increasing until it crashes or is killed by the OOM killer. It is then automatically restarted by blah.

SGE is not affected by this problem

For LSF and PBS, the workaround is to configure the blparser using the old method (see: http://wiki.italiangrid.it/twiki/bin/view/CREAM/SystemAdministratorGuideForEMI1#1_2_4_Choose_the_BLAH_BLparser_d) or to restart the bupdater from time to time.

Relevant bug: https://savannah.cern.ch/bugs/index.php?89859

Fix provided with EMI-1 Update 12

Problems starting the bupdater and bnotifier with (S)GE

With SGE, there is a problem when starting the bupdater and bnotifier:

Starting BNotifier: /usr/bin/BNotifier: sge_helperpath not defined. Exiting
[FAILED]
Starting BUpdaterSGE: /usr/bin/BUpdaterSGE: sge_helperpath not defined. Exiting
[FAILED]

The workaround is to uncomment the variable:

 sge_helperpath=/usr/bin/sge_helper 

in the blah.config file.

See https://savannah.cern.ch/bugs/index.php?88974 and https://ggus.eu/ws/ticket_info.php?ticket=76067

Fix provided with EMI-1 Update 12

No dynamic info published for one VOview

For one VOView the lcg-info-dynamic-scheduler doesn't publish information, and therefore the values defined in the static ldif file are used.

As found by Jan Astalos (thanks!), this is because of a missing blank line at the end of /var/lib/bdii/gip/ldif/static-file-CE.ldif created by YAIM.

Waiting for the fix, the workaround is simply to run:

echo >> /var/lib/bdii/gip/ldif/static-file-CE.ldif 

after having configured via yaim

Relevant bug: http://savannah.cern.ch/bugs/?86191

Fix provided with CREAM CE 1.13.3 (see http://savannah.cern.ch/task/?24022) released with EMI-1 Update 10

Problems if Torque is not configured to suppress mails

Torque should be configured to suppress all mails (mail_domain=never). Otherwise the bupdater process of the blparser will keep dying.
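The setting can be applied, for example, with qmgr on the Torque server:

# suppress all Torque mails
qmgr -c "set server mail_domain = never"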

Relevant bug: https://savannah.cern.ch/bugs/index.php?86238

Fix provided with BLAH 1.16.3 (see http://savannah.cern.ch/task/?22845) released with EMI-1 Update 10

Memory issues with new BLAH Blparser

If the new Blparser is used, there can be issues if the blah registry becomes very large. The submission process can get slower and there can be problems with memory usage.

Waiting for the fix, there are two possible workarounds:

  • Reduce the number of multiple instances of blahpd (the default value is 50). This means changing the value cream_concurrency_level in cream-config.xml. To apply the change, you will then need to restart tomcat. This should help address the issue, but it also means fewer parallel instances interacting with the batch system (and so a possible reduction of the throughput of submissions to the batch system).
  • Reduce the value of purge_interval in blah.config. This value is expressed in seconds. A job is removed from the BLAH registry (and is therefore no longer managed by BLAH, and therefore by CREAM) purge_interval seconds after its submission. To apply the change, you will then need to restart the blparser (/etc/init.d/glite-ce-blahparser restart)

Relevant bug: https://savannah.cern.ch/bugs/index.php?75854

Fix provided with BLAH 1.16.3 (see http://savannah.cern.ch/task/?22845) released with EMI-1 Update 10

Problems with Torque 2.5.7-1

There is a problem with the latest torque version available in the EPEL repository (2.5.7-1).

At start the following error is reported:

[root@cream-38 ~]# /etc/init.d/pbs_server start
/var/torque/server_priv/serverdb
Starting TORQUE Server: PBS_Server: LOG_ERROR::No such file or directory (2)
in pbs_init, unable to stat checkpoint directory /var/torque/checkpoint/,
errno 2 (No such file or directory)
PBS_Server: LOG_ERROR::PBS_Server, pbsd_init failed
                                                           [FAILED]

Problem addressed with Torque v. 2.5.7-2

Problems affecting users with certificates signed by the GermanGrid

Because of a bug in trustmanager, users with certificates signed by the GermanGrid CA can't submit jobs to CREAM. The error message is something like:

Failed to create a delegation id for job https://grid-lb0.desy.de:9000/ADkeOt6tc0Rfi8oP-pzUrQ: reason is Client 'O=GermanGrid,OU=DESY,CN=Alexander Fomenko' is not issuer of proxy 'O=GermanGrid,OU=DESY,CN=Alexander Fomenko,CN=proxy,CN=proxy'.

Problems with SubCAs when Argus is used as authorization system

There are problems with sub-CAs (e.g. CERN-TCA, UKeScienceCA) when the CREAM CE is configured to use Argus.

CREAM doesn't transfer the output files remotely if SANDBOX_TRANSFER_METHOD="LRMS"

The related bug is https://savannah.cern.ch/bugs/index.php?95480; the error occurs only if the transfer method selected is "LRMS" and the name of the URL is lexicographically greater than "gsiftp://localhost". No workaround is available.

FAILURE_REASON="Cannot enqueue the command id=-1: Data truncation: Data too long for column 'commandGroupId' at row 1 (rollback performed)"

It is a bug reintroduced in the EMI-2 CREAM CE: https://savannah.cern.ch/bugs/index.php?95593

A workaround is to modify a database table by executing the following query:

use creamdb; ALTER TABLE command_queue MODIFY commandGroupId varchar(255) NULL;

-- LisaZangrando - 2012-10-26
