schedulix offers a comprehensive feature set that enables you
to meet the requirements of your IT process automation in an efficient and elegant way.
User-defined exit state model
Complex workflows with branches and loops can be realised using batch hierarchies,
dependencies and triggers, based on freely definable Exit State objects and the way they are interpreted.
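As a loose illustration (not schedulix's actual API), a user-defined exit state model can be thought of as a mapping from raw process exit codes to named states, which dependencies and triggers then interpret. The state names and code ranges below are assumptions made for the example:

```python
# Hypothetical sketch of a user-defined exit state model; the names and
# ranges below are illustrative, not schedulix's built-in definitions.
EXIT_STATE_MODEL = [
    # (lowest exit code, highest exit code, exit state name)
    (0, 0, "SUCCESS"),
    (1, 4, "WARNING"),    # job finished, but downstream logic may branch
    (5, 255, "FAILURE"),
]

def map_exit_code(exit_code: int) -> str:
    """Translate a raw process exit code into a named exit state."""
    for low, high, state in EXIT_STATE_MODEL:
        if low <= exit_code <= high:
            return state
    raise ValueError(f"exit code {exit_code} not covered by the model")
```

Because the model is user-defined, the same exit code can mean different things in different workflows.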
Job and batch dependencies
You can make sure that individual steps of a workflow are performed correctly by defining Exit State dependencies.
In addition to the required exit state, dependencies can be specified more precisely by defining a condition.
Branches into alternative sub-workflows can be implemented using dependencies that have the exit state as a condition.
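The branching idea can be sketched as follows: a dependency only becomes fulfilled for specific predecessor exit states, so alternative sub-workflows attached to different states of the same predecessor form a branch. All names here are illustrative:

```python
# Illustrative sketch: a dependency is fulfilled when the predecessor has
# reached one of the required exit states; attaching alternative
# sub-workflows to different states of the same predecessor yields a branch.
def dependency_fulfilled(predecessor_state: str, required_states: set[str]) -> bool:
    return predecessor_state in required_states

# Branch: exactly one of the two sub-workflows becomes runnable.
load_state = "WARNING"
run_cleanup = dependency_fulfilled(load_state, {"WARNING", "FAILURE"})
run_report  = dependency_fulfilled(load_state, {"SUCCESS"})
```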
Hierarchical workflow modelling
Among other benefits, hierarchical definitions for work processes facilitate the modelling of dependencies,
allow sub-processes to be reused and make monitoring and operations more transparent.
The additional Milestone object type makes it easier to model complex workflows.
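One way to picture the hierarchy (a sketch, not schedulix's actual semantics) is that a batch exposes a single aggregate exit state derived from its children, so parent-level dependencies only ever see one state. The precedence order below is an assumption for the example:

```python
# Illustrative sketch: a batch's exit state is derived from its children,
# so dependencies on the batch see one aggregate state. The precedence
# order (FAILURE worst, SUCCESS best) is an assumption for this example.
PRECEDENCE = {"FAILURE": 2, "WARNING": 1, "SUCCESS": 0}

def batch_state(child_states: list[str]) -> str:
    """The 'worst' child state determines the batch's state."""
    return max(child_states, key=lambda s: PRECEDENCE[s])
```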
Job and batch parameters
Both static and dynamic parameters can be set for submitted batches and jobs.
Job result variable
Jobs can set any result variables via the API which can then be easily visualised in the Monitoring module.
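The result-variable mechanism can be sketched like this, with a plain dictionary standing in for the scheduling server's repository (the function names are illustrative, not the schedulix API):

```python
# Illustrative sketch of result variables: a job reports key figures,
# and a monitoring view reads them back. A dict stands in for the
# scheduling server's repository here.
RESULT_STORE: dict[int, dict[str, str]] = {}

def set_result_variable(job_id: int, name: str, value: str) -> None:
    RESULT_STORE.setdefault(job_id, {})[name] = value

def monitor_view(job_id: int) -> dict[str, str]:
    return dict(RESULT_STORE.get(job_id, {}))

# A running job reports how many rows it loaded.
set_result_variable(4711, "ROWS_LOADED", "125000")
```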
Dynamic job and batch submits
(Sub-)workflows can be dynamically submitted, or parallelised, by jobs using the Dynamic Submit function.
When batches are parallelised with the Dynamic Submit function, local dependencies between the parts of the
submitted batch instances are correctly assigned. This greatly simplifies the processing of pipelines.
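A rough sketch of what parallelising a pipeline means: one sub-workflow instance is submitted per input item, and each instance's later steps depend only on the earlier steps of the *same* instance (a local dependency), so instances run independently. The step names and function are illustrative:

```python
# Illustrative sketch of parallelising a pipeline with dynamic submits:
# one sub-workflow instance per input file; the 'load' step of each
# instance depends only on the 'extract' step of the same instance,
# so the instances do not serialise against one another.
def dynamic_submit(sub_workflow: list[str], inputs: list[str]) -> list[list[tuple[str, str]]]:
    instances = []
    for item in inputs:
        # each instance keeps its own copy of the step chain
        instances.append([(step, item) for step in sub_workflow])
    return instances

instances = dynamic_submit(["extract", "load"], ["a.csv", "b.csv"])
```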
Job and batch triggers
Dynamic submits for batches and jobs can be automated using exit state-dependent triggers.
This allows notifications and other automated reactions to workflow events to be easily implemented.
In addition to the exit state and trigger type, events can also be specified more precisely by defining
a condition. Asynchronous triggers enable events to be triggered during runtime.
This also allows for reactions to runtime timeouts.
Automatic reruns of sub-workflows can be implemented by using triggers.
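The trigger mechanism described above can be sketched as follows: when a job reaches a matching exit state and an optional condition holds, a reaction batch is submitted automatically (for example a notification or a rerun). All names and the trigger tuple layout are assumptions for the example:

```python
# Illustrative sketch of exit-state triggers. Each trigger is
# (required exit state, condition, reaction workflow to submit).
def fire_triggers(job_state, triggers, submit):
    for required_state, condition, reaction in triggers:
        if job_state == required_state and condition():
            submit(reaction)

submitted = []
triggers = [
    ("FAILURE", lambda: True, "NOTIFY_ON_CALL"),   # notify on any failure
    ("SUCCESS", lambda: False, "NEVER_FIRES"),     # condition suppresses it
]
fire_triggers("FAILURE", triggers, submitted.append)
```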
Pending jobs
So-called 'pending' jobs can be defined to swap out sub-workflows to external systems without
overloading the system with placeholder processes.
Folders
Job, Batch and Milestone workflow objects can be neatly organised in a folder structure.
All jobs below a folder can be centrally configured by defining parameters at folder level.
Requirements for static resources can be configured to be inherited by all jobs below a folder by
defining folder environments.
This allows jobs to be assigned to different runtime environments (development, test, production, etc.)
dependent upon a higher-level folder.
Resources can also be globally instanced at folder level as well as in the workflow environment,
making them available to all the jobs below this folder.
Job and batch resources
Instancing resources at batch or job level allows a workflow load generated by hierarchically
subordinate jobs to be locally controlled.
Static resources can be used to define where a job is to be run.
If the requested resources are available in multiple environments, the jobs are automatically distributed
by the schedulix Scheduling System.
System resources can define the quantity of available units of a resource in a runtime environment.
A job's resource requirement can state the quantity it needs, ensuring that the load on a resource stays bounded.
The job priority can be used to define which jobs are to take priority over other jobs when there is a lack of resources.
Jobs can be prevented from 'starving' by individually configured 'priority aging', which
automatically raises their priority over time.
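Priority aging can be sketched as a simple formula. The convention assumed here for the example (a lower number means higher priority, and the effective priority improves by one level per configured aging interval while a job waits) is an assumption, not necessarily schedulix's exact scheme:

```python
# Illustrative sketch of priority aging. Assumption for this example:
# a lower number means higher priority; while a job waits, its
# effective priority improves by one level per aging interval,
# down to a configurable floor.
def effective_priority(base: int, minutes_waiting: int,
                       aging_minutes: int, floor: int = 0) -> int:
    boost = minutes_waiting // aging_minutes
    return max(floor, base - boost)
```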
The interplay of static and system resources allows jobs to be automatically distributed over
different runtime environments dependent upon which resources are currently available.
Synchronising resources
Synchronising resources can be requested with different lock modes (no lock, shared, exclusive, etc.)
and allocated to synchronise independently started workflows.
Synchronising resources can be bound to a workflow across multiple jobs with sticky allocations
to protect critical areas between two or more separately started workflows.
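The lock modes can be sketched with a compatibility check. The exact rules below (any number of shared holders may coexist, exclusive excludes all other lock holders, "no lock" never conflicts) are an assumption made for the illustration:

```python
# Illustrative sketch of lock-mode compatibility for a synchronising
# resource. The compatibility rules here are assumptions for the example.
def compatible(requested: str, held: list[str]) -> bool:
    if requested == "NO_LOCK":
        return True                                  # never conflicts
    if requested == "SHARED":
        return "EXCLUSIVE" not in held               # shared holders coexist
    if requested == "EXCLUSIVE":
        return all(h == "NO_LOCK" for h in held)     # excludes other lockers
    raise ValueError(f"unknown lock mode {requested!r}")
```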
A state model can be assigned to synchronising resources and the resource requirement
can be defined dependent upon the state.
Automatic state changes can be defined dependent upon a job's exit state.
Resource requirements can define a minimum or maximum time interval within which the resource
must have been assigned a new state. This allows freshness and queue conditions to be easily implemented.
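A freshness condition of this kind can be sketched as a single check: the requirement is only fulfilled if the resource entered the required state no longer than a maximum age ago. The function and its parameters are illustrative:

```python
# Illustrative sketch of a freshness condition on a resource state:
# fulfilled only if the resource is in the required state and entered
# that state no more than max_age_minutes ago.
def state_requirement_fulfilled(state: str, set_minutes_ago: int,
                                required_state: str, max_age_minutes: int) -> bool:
    return state == required_state and set_minutes_ago <= max_age_minutes
```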
A reaction to the changing states of synchronising resources can be triggered with an
automatic submit of a batch or job. The activation of the trigger after a state transition can be
specified more precisely with an additional condition.
Resource parameters allow jobs to be configured dependent upon the allocated resource.
Resource parameters of exclusively allocated resources can be written via the API.
This allows resources to be used to store meta data.
Security
Authentication routines for job servers, users and jobs using IDs and passwords provide
effective control over access to the system.
Time scheduling
The schedulix Time Scheduling module allows workflows to be run automatically at defined times based
on complex time conditions. This usually obviates the need for handwritten calendars, although they
can still be used whenever required.
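As a minimal illustration of replacing a handwritten calendar with a time condition, a rule such as "daily at 02:30" can be evaluated on demand instead of being enumerated in advance (this sketch is generic Python, not the Time Scheduling module's syntax):

```python
# Illustrative sketch: compute the next run time for a "daily at HH:MM"
# rule on demand, instead of maintaining a handwritten calendar.
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int, minute: int) -> datetime:
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)   # today's slot already passed
    return candidate
```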
Web front end
The schedulix web front end allows standard browsers to be used for modelling, monitoring and
operating in intranets and on the internet.
This obviates the need to install client software on the workstations.
API
The full API of the schedulix Scheduling System allows the system to be completely controlled from
the command line or from programs (Java, Python, Perl, etc.).
Repository
The schedulix Scheduling System stores all the information about modelled workflows and the runtime
data in an RDBMS repository.
All the information in the system can be accessed via the SCI (Standard Catalog Interface)
whenever required using SQL.
The secure network communication of the schedulix components via SSL/TLS also fulfils more stringent
security requirements.