schedulix offers a comprehensive feature set that enables you to meet all of your
IT process automation requirements in an efficient and elegant way.
User-defined exit state model
Complex workflows with branches and loops can be realised using batch hierarchies,
dependencies and triggers, based on freely definable Exit State objects and rules for how they are interpreted.
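As an illustration only (this is not the schedulix API, and the state names and code ranges are hypothetical), an exit state model can be thought of as a mapping from a job's numeric exit code to a named Exit State that the scheduler then interprets:

```python
# Hypothetical sketch of a user-defined exit state model: a job's numeric
# exit code is mapped to a named Exit State, which dependencies and
# triggers can then react to. Names and ranges are made-up examples.
EXIT_STATE_MODEL = [
    (range(0, 1), "SUCCESS"),    # exit code 0
    (range(1, 4), "WARNING"),    # exit codes 1-3 finish with a warning
    (range(4, 256), "FAILURE"),  # everything else counts as a failure
]

def map_exit_state(exit_code: int) -> str:
    for codes, state in EXIT_STATE_MODEL:
        if exit_code in codes:
            return state
    return "ERROR"  # fallback for codes outside the model

print(map_exit_state(0), map_exit_state(2), map_exit_state(7))
```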
Job and batch dependencies
You can ensure that the individual steps of a workflow are performed correctly by defining Exit State dependencies.
Branches into alternative sub-workflows can be implemented using dependencies that take the exit state as a condition.
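A minimal sketch of that idea (illustrative only, with made-up job and state names): a step becomes runnable once every predecessor has finished in one of the exit states its dependency accepts, so alternative branches simply accept different states of the same predecessor:

```python
# Illustrative dependency check, not schedulix syntax.
def is_runnable(dependencies, finished):
    """dependencies: {predecessor: set of accepted exit states};
    finished: {job: exit state} for jobs that have completed."""
    return all(finished.get(pred) in accepted
               for pred, accepted in dependencies.items())

finished = {"EXTRACT": "SUCCESS"}
# The regular branch requires SUCCESS, the error branch requires FAILURE:
print(is_runnable({"EXTRACT": {"SUCCESS"}}, finished))  # regular branch runs
print(is_runnable({"EXTRACT": {"FAILURE"}}, finished))  # error branch does not
```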
Hierarchical workflow modelling
Among other benefits, hierarchical definitions of work processes facilitate the modelling of dependencies,
allow sub-processes to be reused, and make monitoring and operations more transparent.
Job and batch parameters
Both static and dynamic parameters can be set for batches and jobs at submit time.
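Conceptually (with hypothetical parameter names, not schedulix syntax), a static parameter carries a default fixed in the definition, while a dynamic value supplied at submit time overrides it:

```python
# Hypothetical parameter resolution: static defaults from the job
# definition, overridden by dynamic values supplied at submit time.
STATIC_DEFAULTS = {"TARGET_DB": "dwh", "CHUNK_SIZE": "1000"}

def resolve_parameters(submit_values):
    # Submit-time values win over the statically defined defaults.
    return {**STATIC_DEFAULTS, **submit_values}

params = resolve_parameters({"CHUNK_SIZE": "50"})
print(params)  # CHUNK_SIZE overridden at submit time, TARGET_DB kept
```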
Job result variable
Jobs can set any result variables via the API which can then be easily visualised in the Monitoring module.
Dynamic submits
(Sub-)workflows can be dynamically submitted or parallelised by jobs using the Dynamic Submit function.
When batches are parallelised this way, local dependencies between the parts of the submitted batch
instances are correctly assigned, which greatly simplifies the implementation of processing pipelines.
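The effect can be sketched as follows (illustrative only, with made-up step names): one child instance of the sub-workflow is submitted per input item, and the dependency chain between steps stays local to each instance, so the instances form independent pipelines:

```python
# Illustrative parallelisation: one sub-workflow instance per input item,
# with dependencies kept local to each instance. Names are hypothetical.
STEPS = ["EXTRACT", "TRANSFORM", "LOAD"]

def dynamic_submit(items):
    instances = []
    for item in items:
        # Within one instance, each step depends on its local predecessor.
        deps = {s: ([STEPS[j - 1]] if j else []) for j, s in enumerate(STEPS)}
        instances.append({"item": item, "steps": deps})
    return instances

batch = dynamic_submit(["file_a", "file_b"])
print(len(batch), batch[0]["steps"]["LOAD"])  # 2 ['TRANSFORM']
```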
Job and batch triggers
Dynamic submits for batches and jobs can be automated using exit state-dependent triggers.
This allows notifications and other automated reactions to workflow events to be easily implemented.
Automatic reruns of sub-workflows can be implemented by using triggers.
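A trigger table can be pictured like this (hypothetical names and data model, not schedulix syntax): when a job reaches one of the listed exit states, the associated batches are submitted automatically, for example a notification and a rerun on failure:

```python
# Hypothetical trigger table: when a job reaches one of the listed exit
# states, the named batch is submitted automatically.
TRIGGERS = [
    {"job": "LOAD", "on": {"FAILURE"}, "submit": "NOTIFY_OPERATORS"},
    {"job": "LOAD", "on": {"FAILURE"}, "submit": "RERUN_LOAD"},
]

def fire_triggers(job, exit_state):
    # Return the batches to submit in reaction to this workflow event.
    return [t["submit"] for t in TRIGGERS
            if t["job"] == job and exit_state in t["on"]]

print(fire_triggers("LOAD", "FAILURE"))  # ['NOTIFY_OPERATORS', 'RERUN_LOAD']
print(fire_triggers("LOAD", "SUCCESS"))  # []
```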
Pending jobs
So-called 'pending' jobs can be defined to hand sub-workflows over to external systems without
overloading the system with placeholder processes.
Folders
Job, Batch and Milestone workflow objects can be neatly organised in a folder structure.
All jobs below a folder can be centrally configured by defining parameters at folder level.
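Folder-level configuration amounts to a hierarchical lookup, which can be sketched as follows (folder paths and parameter names are made up): a parameter is resolved by walking from the job's folder upward, and the nearest definition wins:

```python
# Illustrative folder-level parameters: resolution walks from the job's
# folder up to the root; the nearest definition wins. Names are made up.
FOLDER_PARAMS = {
    "/": {"RETRY_COUNT": "1"},
    "/DWH": {"TARGET_DB": "dwh", "RETRY_COUNT": "3"},
}

def lookup(param, folder):
    path = folder
    while True:
        if param in FOLDER_PARAMS.get(path, {}):
            return FOLDER_PARAMS[path][param]
        if path == "/":
            return None  # not defined anywhere on the path
        path = path.rsplit("/", 1)[0] or "/"

print(lookup("RETRY_COUNT", "/DWH/LOAD"), lookup("RETRY_COUNT", "/"))  # 3 1
```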
Static resources
Static resources can be used to define where a job is to be run.
If the requested resources are available in multiple environments, the jobs are automatically distributed
by the schedulix Scheduling System.
System resources
A quantity of available units of a resource can be defined for runtime environments using system resources.
A quantity can be stated in a job's resource requirement to ensure that the load on a resource is restricted.
The job priority can be used to define which jobs take precedence over others when resources are scarce.
The interplay of static and system resources allows jobs to be automatically distributed across
different runtime environments, depending on which resources are currently available.
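A sketch of that matching (hypothetical data model, not the schedulix implementation): a job requests static resources, which must simply be present, and quantities of system resources, which must have enough free units; the scheduler then picks a fitting environment:

```python
# Illustrative environment selection: static resources must be present,
# system resources must have enough free units. All names are made up.
ENVS = {
    "server_a": {"static": {"oracle_client"}, "units": {"cpu_slots": 2}},
    "server_b": {"static": set(), "units": {"cpu_slots": 8}},
}

def pick_environment(required_static, required_units):
    # When units are scarce, jobs would be served in priority order;
    # here we simply return the first environment that fits.
    for name, env in ENVS.items():
        if required_static <= env["static"] and all(
                env["units"].get(r, 0) >= n for r, n in required_units.items()):
            return name
    return None  # job stays queued until resources free up

print(pick_environment({"oracle_client"}, {"cpu_slots": 1}))  # server_a
print(pick_environment(set(), {"cpu_slots": 4}))              # server_b
```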
Synchronising resources
Synchronising resources can be requested with different lock modes (no lock, shared, exclusive, etc.)
and allocated to synchronise independently started workflows.
Synchronising resources can be bound to a workflow across multiple jobs using 'sticky' allocations
to protect critical sections spanning two or more separately started workflows.
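The lock modes behave like classic reader/writer locks, which can be sketched as follows (a simplified model, using "N" for no lock, "S" for shared and "X" for exclusive; the real mode set is richer):

```python
# Simplified lock-mode compatibility for a synchronising resource:
# any number of shared ("S") holders may coexist, exclusive ("X")
# requires sole ownership, and "N" (no lock) never conflicts.
def compatible(requested, held):
    if requested == "N":
        return True
    if requested == "S":
        return all(m in ("N", "S") for m in held)
    if requested == "X":
        return all(m == "N" for m in held)
    return False  # unknown mode: refuse the allocation

print(compatible("S", ["S", "S"]))  # shared readers coexist
print(compatible("X", ["S"]))       # exclusive must wait
```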
A state model can be assigned to synchronising resources and the resource requirement
can be defined dependent upon the state.
Automatic state changes can be defined dependent upon a job's exit state.
Resource requirements can define a minimum or maximum time interval within which the resource
must have been assigned its current state. This allows freshness and queueing conditions to be easily implemented.
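A freshness condition of this kind can be sketched as follows (an illustrative model with made-up field names): the requirement is met only if the resource is in the expected state and that state was set no longer ago than the allowed maximum:

```python
# Sketch of a state-dependent requirement with a freshness condition:
# the resource must be in the expected state, and the state must have
# been set at most `max_age` seconds ago. Field names are hypothetical.
import time

def requirement_met(resource, state, max_age, now=None):
    now = time.time() if now is None else now
    return (resource["state"] == state
            and (now - resource["state_since"]) <= max_age)

res = {"state": "LOADED", "state_since": 1_000}
print(requirement_met(res, "LOADED", max_age=600, now=1_300))  # fresh enough
print(requirement_met(res, "LOADED", max_age=600, now=2_000))  # too old
```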
Resource parameters allow jobs to be configured dependent upon the allocated resource.
Authentication routines for job servers, users and jobs using IDs and passwords are effective
methods of controlling access to the system.
The schedulix Time Scheduling module allows workflows to be automatically run at defined times based
on complex time conditions. This usually obviates the need for handwritten calendars, although they
can be used whenever required.
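As a flavour of what such a time condition replaces (this is not schedulix's scheduling syntax), here is the kind of "every day at a fixed time" rule that would otherwise end up in a handwritten calendar:

```python
# Illustrative time condition: compute the next run after a given moment
# for a "daily at HH:MM" rule.
from datetime import datetime, timedelta

def next_run(after: datetime, hour: int, minute: int) -> datetime:
    candidate = after.replace(hour=hour, minute=minute,
                              second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate

print(next_run(datetime(2024, 5, 1, 23, 30), hour=2, minute=0))
# next 02:00 falls on the following day
```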
The schedulix web front end allows standard browsers to be used for modelling, monitoring and
operating in intranets and on the internet.
This obviates the need to run client software on the workstations.
The full API of the schedulix Scheduling System allows the system to be completely controlled from
the command line or from programs (Java, Python, Perl, etc.).
The schedulix Scheduling System stores all the information about modelled workflows and the runtime
data in an RDBMS repository.
All the information in the system can be accessed via the SCI (Standard Catalog Interface)
whenever required using SQL.