Calculation Job

Configuration Dialog Basic

The Calculation Job node runs distributed calculation jobs from within a DPS flow.

Configuration

Signals

Configuration Dialog Signals

Here the input/output handling of the calculation can be configured.

For more details see: Signal Input and Signal Output.

Calculation

Configuration Dialog Calculation

Here the calculation type can be specified and the model uploaded.

Calculation type
Ebsilon

Select this type if the calculation is to run an EBSILON® Professional model. For details and the Extended Type, see Ebsilon Calculation Type.

Python

Select this type if you want the calculation to run a Python or IronPython model. For details and the Extended Type, see Python Calculation Type.
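
To give a first impression, the following is a purely hypothetical sketch of a Python calculation model that maps input signals to output signals. The entry-point name and the signal-passing convention shown here are assumptions for illustration only; the actual interface is defined in the Python Calculation Type documentation.

    def calculate(inputs):
        # Hypothetical entry point: the real function name and the way
        # signals are passed are defined by the Python Calculation Type,
        # not by this sketch.
        # Example: derive one output signal from two input signals.
        thermal_power = inputs["mass_flow"] * inputs["specific_enthalpy"]
        return {"thermal_power": thermal_power}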

Options

Configuration Dialog Options

General Options

Pass-through input data

If this option is enabled, all signals of the incoming data group are copied to the output data group. If an output signal has the same name as an input signal, the output signal takes precedence and the input signal is discarded. The input signal is discarded even if the calculation job does not actually contain the specified output signal or the output signal was deleted manually.

Default: enabled
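
The precedence rule can be summarized with a small Python sketch (a hypothetical helper for illustration, not part of the node's API):

    def merge_pass_through(input_signals, output_signals, configured_outputs):
        # Sketch of the pass-through rule. Inputs whose name matches a
        # configured output signal are discarded, even if the job produced
        # no value for that output signal.
        merged = {name: value for name, value in input_signals.items()
                  if name not in configured_outputs}
        merged.update(output_signals)  # produced output signals win
        return merged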

Delete processed job

If this option is enabled, the distributed calculation job will be deleted after successful processing.

Default: enabled

Discard calculations with missing input values

If this option is enabled, no calculation is performed when input values are missing; the incoming request is discarded instead.

Default: disabled
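
Conceptually, the option acts as a guard in front of the calculation. The following Python sketch assumes that missing input values arrive as None:

    def should_calculate(input_signals, discard_on_missing):
        # With the option enabled, any missing input value (modelled here
        # as None) causes the calculation request to be discarded.
        if discard_on_missing and any(v is None for v in input_signals.values()):
            return False
        return True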

Handle received jobs with execution outcome warning as error

If this option is enabled, jobs with the outcome “warning” will be treated as if they had the outcome “error”.

Default: disabled

Include metadata signals on out port

If this option is enabled, the metadata signals are written to the data group for the out port. See: Metadata Signals.

Default: disabled

Include metadata signals on error port

If this option is enabled, the metadata signals are written to the data group for the error port. See: Metadata Signals.

Default: enabled

Create a debug archive for failed task items

If this feature is enabled, a debug archive is created for each failed task item. The debug archive can be accessed from the distributed calculation user interface.

Default: disabled

Enqueue job in the specified execution queue (optional)

The execution queue of the distributed calculation service to use. If no queue is specified, the default setting is used.

Set a calculation time limit for jobs (e.g. 5m or 00:05:00) (optional)

The maximum time within which a calculation must complete (see Timespan). If the calculation has not started or finished within this time, it is aborted.
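
Both notations in the example denote the same five-minute limit. The following Python sketch illustrates how such values relate; it is not the service's actual parser, and the accepted formats are defined in the Timespan section:

    from datetime import timedelta

    def parse_time_limit(text):
        # Illustrative only: normalize "5m"-style shorthand or "hh:mm:ss".
        if ":" in text:  # e.g. "00:05:00"
            hours, minutes, seconds = (int(part) for part in text.split(":"))
            return timedelta(hours=hours, minutes=minutes, seconds=seconds)
        units = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}
        return timedelta(**{units[text[-1]]: int(text[:-1])})  # e.g. "5m"

    assert parse_time_limit("5m") == parse_time_limit("00:05:00")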

Advanced Options

Job creation mode

Possible values: SingleTask, AutoGrouped

Default: AutoGrouped

Job parallel create limit

The maximum number of pending jobs created on the distributed calculation service. When the limit is reached, the node waits until a job is completed. Use 0 to disable the limit.

Default: 0

Discard enqueued datagroup messages when job creation limit is reached

If this option is enabled, the node discards incoming calculation requests when the limit is reached (see Job parallel create limit).

Default: disabled
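
Taken together with the job parallel create limit, the behaviour can be summarized by the following Python sketch (hypothetical names; the node performs this handling internally):

    def on_incoming_datagroup(pending_jobs, limit, discard_when_full):
        # Sketch of the throttling behaviour. limit == 0 disables the limit.
        if limit == 0 or pending_jobs < limit:
            return "create job"
        if discard_when_full:
            return "discard incoming datagroup message"
        return "wait until a pending job completes"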

Clear all stored jobs after worker reboot

If this option is enabled, all calculation jobs that have not yet been received will be removed when the worker restarts.

Default: disabled

Clear all enqueued datagroup messages before start

If this option is enabled, all datagroups that are pending at the in port will be removed on start.

Default: disabled

Delete stored input datagroup of job

If this option is enabled, the stored input datagroup of a job will be deleted.

Default: enabled

Connection

Configuration Dialog Connection
Calculation Connection

Select the distributed calculation connection to be used for this node (see Global Configuration).

Attention

Manually deleting calculation jobs that have not yet been received by the Calculation Job node can lead to unexpected behavior in the processing pipeline. For example, the original data group ID is stored in the calculation job and is lost on deletion, so the Flow Hook Receive node will never receive the expected output message.
