Since version 6.0 jBPM ships with a component called the jBPM executor that is responsible for carrying out background (asynchronous) tasks. It gained wider adoption with the 6.2 release, and even more so with the upcoming 6.3, where a number of enhancements build on that component:
- async continuation
- async throw signals
- async start process instance
By default the jBPM executor uses a polling mechanism backed by a database that stores the jobs to be executed. There are several reasons for using that mechanism (a short sketch of defining and scheduling a job follows the list):
- supported in any runtime environment (application server, servlet container, standalone)
- decouples requesting a job from executing it
- provides a configurable retry mechanism for failed jobs
- provides a search API to look through available jobs
- allows jobs to be scheduled for later instead of executed immediately
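To make this a bit more concrete, below is a minimal sketch of a custom job and how it is requested through the executor API (package names as in 6.2 and newer, org.kie.api.executor). The command name and parameters are made up for illustration, and how you obtain the ExecutorService instance depends on your environment (CDI injection, the jbpm-executor ExecutorServiceFactory, etc.):

```java
import java.util.Date;

import org.kie.api.executor.Command;
import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutionResults;
import org.kie.api.executor.ExecutorService;

// A minimal background job: the executor instantiates the command by class name,
// invokes execute() in the background and stores whatever results it returns.
public class CleanupLogCommand implements Command {

    @Override
    public ExecutionResults execute(CommandContext ctx) throws Exception {
        String olderThan = (String) ctx.getData("olderThan"); // parameter passed at schedule time
        // ... the actual clean up work would go here ...
        ExecutionResults results = new ExecutionResults();
        results.setData("deleted", 0);
        return results;
    }

    // Requesting the job is decoupled from executing it: scheduleRequest only persists
    // a job entry, the executor picks it up later (on a poll or via a JMS trigger).
    public static Long schedule(ExecutorService executorService) {
        CommandContext ctx = new CommandContext();
        ctx.setData("olderThan", "30d");

        // run as soon as possible
        Long jobId = executorService.scheduleRequest(CleanupLogCommand.class.getName(), ctx);

        // or schedule it for a later date instead of immediate execution
        // executorService.scheduleRequest(CleanupLogCommand.class.getName(),
        //         new Date(System.currentTimeMillis() + 60_000), ctx);
        return jobId;
    }
}
```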
The following diagram illustrates the sequence of events in the default (polling based) mechanism of the jBPM executor (credit for the diagram goes to Chris Shumaker).
The executor runs in a sort of event loop manner: one or more threads constantly (at defined intervals) poll the database to see if there are any jobs to be executed. If there are, a job is picked up and delegated for execution. The delegation differs between runtime environments:
- in environments that support EJB, the job is delegated to an EJB asynchronous method for execution
- in environments that do not support EJB, the job is executed in the same thread that polls the database
This in turn drives the configuration options that look pretty much like this:
- in an EJB environment a single thread is usually enough, since it only triggers the poll and hands jobs over rather than executing them itself; keep the number of threads to a minimum and use the interval to fine-tune how quickly async jobs are processed
- in a non-EJB environment the number of threads should be increased to improve processing power, as each thread actually does the work
In both cases users must take into account the actual execution needs: more threads and more frequent polls mean a higher load on the underlying database (regardless of whether there are jobs to execute or not). Keep that in mind when fine tuning the executor settings.
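For reference, these settings are typically provided as system properties before the executor starts. The sketch below assumes the property names used by jBPM 6.x, so do verify them against the documentation of the version you run:

```java
public class ExecutorSettings {

    // Assumed jBPM 6.x property names - check the documentation of your exact version.
    public static void apply() {
        System.setProperty("org.kie.executor.pool.size", "1");      // executor threads; 1 is usually enough in EJB environments
        System.setProperty("org.kie.executor.interval", "3");       // how often to poll the database
        System.setProperty("org.kie.executor.timeunit", "SECONDS"); // unit of the interval above
        System.setProperty("org.kie.executor.retry.count", "3");    // default number of retries for failed jobs
        // System.setProperty("org.kie.executor.disabled", "true"); // switches the executor off completely
    }
}
```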
So while this fits a certain set of use cases, it does not scale well for systems that require high throughput in a distributed environment. A huge number of jobs that must be executed as soon as possible requires a more robust solution to cope with the load in reasonable time, and without putting too heavy a load on the underlying database.
This led us to an enhancement that allows much faster (immediate, compared to polling) execution, while still providing the same capabilities as polling:
- jobs are searchable
- jobs can be retried
- jobs can be scheduled
The solution chosen for this is based on a JMS destination that receives triggers to perform the operations. That eliminates the need to poll for available jobs, as the JMS provider invokes the executor to process the job. Even better, the JMS message carries only the job request id, so the executor fetches the job from the database by id - the most efficient retrieval method, instead of running a query by date.
JMS also brings clustering support and allows fine tuning of JMS receiver sessions to improve concurrency - all in the standard JEE way.
The executor discovers JMS support and, if available, will use it (on all supported application servers); otherwise it falls back to the default polling mechanism.
NOTE: JMS is only supported for immediate job requests, not scheduled ones.
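Conceptually the trigger is nothing more than a message carrying the job id, sent to the executor queue. The sketch below is not the actual jBPM internals (queue name and wiring are assumptions); it only illustrates how little the message has to carry:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Illustration only: jBPM sends an equivalent trigger internally when a job is scheduled.
public class ExecutorTriggerSketch {

    public void trigger(ConnectionFactory factory, Queue executorQueue, Long requestId) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // the message body is just the job request id - nothing else needs to travel over JMS
            TextMessage message = session.createTextMessage(String.valueOf(requestId));
            session.createProducer(executorQueue).send(message);
            // the executor's receiver reads the id and loads the complete job from the database
        } finally {
            connection.close();
        }
    }
}
```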
The polling mechanism is still there, as its responsibilities remain significant:
- deals with retries
- deals with scheduled jobs
However, the need for high-throughput polling is gone. This means that users relying on JMS should consider increasing the polling interval, for example to every minute instead of every 3 seconds. That reduces the load on the database while still providing a very performant execution environment.
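As a concrete example, in the ExecutorSettings sketch above that would mean changing the two interval-related properties (again, assumed property names):

```java
// with JMS handling immediate jobs, the poller mainly serves retries and scheduled jobs
System.setProperty("org.kie.executor.interval", "1");
System.setProperty("org.kie.executor.timeunit", "MINUTES");
```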
The next article will illustrate the performance improvements of the JMS-based executor compared with the default polling-based one. Stay tuned, and comments, as usual, are more than welcome.