condor_hold [-debug] [-reason reasonstring] [-subcode number] [-pool centralmanagerhostname[:portnumber] -name scheddname] [-addr "a.b.c.d:port"] cluster... cluster.process... user... -constraint expression ...
condor_hold [-debug] [-reason reasonstring] [-subcode number] [-pool centralmanagerhostname[:portnumber] -name scheddname] [-addr "a.b.c.d:port"] -all
condor_hold places jobs from the HTCondor job queue in the hold state. If the -name option is specified, the named condor_schedd is targeted for processing. Otherwise, the local condor_schedd is targeted. The jobs to be held are identified by one or more job identifiers, as described below. For any given job, only the owner of the job or one of the queue super users (defined by the QUEUE_SUPER_USERS macro) can place the job on hold.
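The job identifiers from the synopsis can take several forms. A sketch, using hypothetical cluster numbers and user name:

% condor_hold 23           Hold every job in cluster 23
% condor_hold 23.1         Hold only process 1 of cluster 23
% condor_hold mary         Hold all jobs owned by user mary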
A job in the hold state remains in the job queue, but the job will not run until released with condor_release.
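The hold/release cycle can be sketched as follows, assuming a hypothetical cluster id of 1234 (condor_q -hold lists held jobs along with their hold reasons):

% condor_hold 1234         Jobs in cluster 1234 enter the Held state
% condor_q -hold 1234      Verify the jobs are held, and see why
% condor_release 1234      Jobs become eligible to run again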
A currently running job that is placed in the hold state by condor_hold is sent a hard kill signal. For a standard universe job, this means that the job is removed from the machine without allowing a checkpoint to be produced first.
To place on hold all jobs (of the user that issued the condor_hold command) that are not currently running (a JobStatus value of 2 denotes a running job):

% condor_hold -constraint "JobStatus!=2"
Supplying multiple job identifiers within the same command causes the union of all jobs that match any (or several) of the identifiers to be placed in the hold state. Therefore, the command

% condor_hold Mary -constraint "JobStatus!=2"

places all of Mary's queued jobs into the hold state, and the constraint holds all queued jobs not currently running. It also sends a hard kill signal to any of Mary's jobs that are currently running. Note that the jobs matched by the constraint will also be Mary's jobs, if it is Mary who issues this example condor_hold command.
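The -reason and -subcode options from the synopsis record why a job was held; the given string is stored in the job's HoldReason attribute, where condor_q -hold can display it. A sketch, with a hypothetical job identifier and reason:

% condor_hold -reason "Waiting for corrected input data" -subcode 4 17.0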
condor_hold exits with a status value of 0 (zero) upon success, and with the value 1 (one) upon failure.
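The exit status makes condor_hold easy to check from a script. A minimal sketch, assuming a hypothetical job identifier of 99.0:

% condor_hold 99.0 || echo "condor_hold failed with status $?"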