2019/10/22

Introducing jBPM's Human Task prediction API

In this post, we’ll introduce a new jBPM API which allows predictive models to be trained with Human Task (HT) data, and allows HTs to incorporate model predictions as outputs and even be completed without user interaction.

This API lets you add Machine Learning capabilities to your jBPM project, for instance by using models trained with historical task data to recommend the most likely outputs. It also gives developers the flexibility to implement a “prediction-only” service (which only suggests outputs) or to complete tasks automatically whenever the prediction’s confidence meets a user-defined confidence threshold.
This API exposes the HT handling to a prediction service.
A prediction service is simply any third-party class which implements the org.kie.internal.task.api.prediction.PredictionService interface.





This interface consists of three methods (sketched in code after the list):

  • getIdentifier() - a method which returns a unique (String) identifier for your prediction service
  • predict(Task task, Map<String, Object> inputData) - a method that takes the task information and the task's inputs, as a map, from which the model's inputs will be derived. The method returns a PredictionOutcome instance, which we will look at in closer detail later on
  • train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) - this method, similarly to predict, takes task info and the task's inputs, but now we also need to provide the task's outputs, as a map, for training 
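
A sketch of the interface, based on the descriptions above (the package of Task and the void return type of train are assumptions, so check the actual kie-internal sources):

import java.util.Map;
import org.kie.api.task.model.Task;

public interface PredictionService {

    // unique identifier used to select this service via org.jbpm.task.prediction.service
    String getIdentifier();

    // builds a PredictionOutcome from the task information and the task's inputs
    PredictionOutcome predict(Task task, Map<String, Object> inputData);

    // receives the task's inputs and outputs so the underlying model can be trained
    void train(Task task, Map<String, Object> inputData, Map<String, Object> outputData);
}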

The PredictionOutcome class consists of:

  • A Map<String, Object> outcome containing the prediction outputs, each entry represents an output attribute’s name and value. This map can be empty, which corresponds to the model not providing any prediction.
  • A confidence value. The meaning of this field is left to the developer (e.g. it could represent a probability between 0.0 and 1.0). Its relevance is related to the confidenceThreshold below.
  • A confidenceThreshold - this value represents the confidence cutoff after which an action can be taken by the HT item handler.
As an example, let's assume our confidence represents a prediction probability between 0.0 and 1.0. If the confidenceThreshold is 0.7, then for confidence > 0.7 the HT outputs would be set to the outcome and the task automatically closed. If the confidence <= 0.7, the HT would set the prediction outcome as suggested values, but the task would not be closed and would still need human interaction. If the outcome is empty, the HT life cycle would proceed as if no prediction was made.
By defining a confidence threshold which is always higher than the confidence, developers can create a “prediction-only” service, which will assign predicted outputs to the task, but never complete it.
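
As an illustration, a predict implementation could build its PredictionOutcome along these lines (a minimal sketch, which assumes PredictionOutcome has a constructor taking the confidence, the threshold and the outcome map, plus a no-arg constructor for the "no prediction" case; the helper name is hypothetical):

public PredictionOutcome predict(Task task, Map<String, Object> inputData) {
    if (!modelIsReady()) {                      // hypothetical guard: model not trained yet
        return new PredictionOutcome();         // empty outcome: normal HT life cycle applies
    }
    Map<String, Object> outcome = new HashMap<>();
    outcome.put("approved", Boolean.TRUE);      // predicted output attribute(s)
    double confidence = 0.91;                   // e.g. a class probability from the model
    double confidenceThreshold = 0.7;           // above this, the HT is completed automatically
    return new PredictionOutcome(confidence, confidenceThreshold, outcome);
}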
 


The initial step is, as described above, the predict step. In the scenario where the prediction's confidence is above the threshold, the task is automatically completed. If the confidence is not above the threshold, however, then when the task is eventually completed by a user, both the inputs and the outputs will be used to further train the model by calling the prediction service's train method.


Example project


An example project is available here. This project consists of a single Human Task, which can be inspected using Business Central. The task is generic and simple enough to demonstrate the workings of jBPM's prediction API.




For the purposes of the demonstration, this task will be used to model a simple purchasing system where the purchase of a laptop of a certain brand is requested and must eventually be manually approved. The task's inputs are:


  • item - a String with the brand's name
  • price - a Float representing the laptop's price
  • ActorId - a String representing the user requesting the purchase


The task provides as outputs:


  • approved - a Boolean specifying whether the purchase was approved or not


This repository contains two example prediction service implementations as Maven modules, as well as a REST client that populates the project with tasks to provide training data for the predictive model.

Start by downloading, or alternatively cloning, the repository:

$ git clone git@github.com:ruivieira/jbpm-recommendation-demo.git
 
For this demo, two random forest-based services will be used: one built with the SMILE library and another backed by a Predictive Model Markup Language (PMML) model. The services, located respectively in services/jbpm-recommendation-smile-random-forest and services/jbpm-recommendation-pmml-random-forest, can be built with (using SMILE as an example):

$ cd services/jbpm-recommendation-smile-random-forest
$ mvn clean install

The resulting JAR files can then be included in Business Central's kie-server.war, located in the standalone/deployments directory of your jBPM server installation. To do this, simply create a WEB-INF/lib directory, copy the compiled JARs into it and run

$ zip -r kie-server.war WEB-INF


The PMML-based service expects to find the PMML model in META-INF, so after copying the PMML file from jbpm-recommendation-pmml-random-forest/src/main/resources/models/random_forest.pmml into META-INF, it should also be added to the WAR by using

$ zip -r kie-server.war META-INF

jBPM will look up the prediction service whose identifier is specified by the Java system property org.jbpm.task.prediction.service. Since, in our demo, the SMILE-based random forest service has the identifier SMILERandomForest, we can set this value when starting Business Central, for instance as:

$ ./standalone.sh -Dorg.jbpm.task.prediction.service=SMILERandomForest


For the purposes of this post we will illustrate the steps using the SMILE-based service. The PMML-based service can be used instead by starting Business Central with the property set as

$ ./standalone.sh -Dorg.jbpm.task.prediction.service=PMMLRandomForest


Once Business Central has completed its startup, you can go to http://localhost:8080/business-central/ and log in using the default admin credentials wbadmin/wbadmin. After choosing the default workspace (or creating your own), select "Import project" and use the project's git URL:

https://github.com/ruivieira/jbpm-recommendation-demo-project.git


The repository also contains a REST client (under client) which allows Human Tasks to be added in batch, in order to have sufficient data points to train the model and obtain meaningful predictions.

NOTE: Before running the REST client, make sure that Business Central is running and the demo project is deployed and also running.

The class org.jbpm.recommendation.demo.RESTClient performs this task and can be executed from the client directory with:

$ mvn exec:java -Dexec.mainClass="org.jbpm.recommendation.demo.RESTClient"


The prices for Lenovo and Apple laptops are drawn from Normal distributions with respective means of 1500 and 2500 (pictured below). Although the prediction service is not aware of the deterministic rules we've used to set the task outcome, it will train the model based on the data it receives. The tasks' completion will adhere to the following logic (sketched in code after the list):

  • The purchase of a laptop of brand Lenovo requested by user John or Mary will be approved if the price is around $1500
  • The purchase of a laptop of brand Apple requested by user John or Mary will be approved if the price is around $2500
  • The purchase of a laptop of brand Lenovo requested by user John or Mary will be rejected if the price is around $2500 
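
One way the client could encode these rules is sketched below (purely illustrative; the $2000 cutoff is just a midpoint between the two price distributions, and RESTClient's actual logic may differ):

// Hypothetical approval rule applied when completing the generated tasks.
static boolean isApproved(String item, double price) {
    if ("Lenovo".equals(item)) {
        return price < 2000.0;   // approved around $1500, rejected around $2500
    }
    if ("Apple".equals(item)) {
        return price >= 2000.0;  // approved around $2500
    }
    return false;
}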



The client will then simulate the creation and completion of human tasks, during which the model will be trained.

SMILE-based service


As we've seen, by creating and completing a batch of tasks (as above) we are simultaneously training the predictive model. The service implementation is based on a random forest, a popular ensemble learning method.

When running the RESTClient, 1200 tasks will be created and completed to allow for a reasonably sized training dataset. The prediction service initially has a confidence threshold of 1.0, and after a sufficiently large number of observations (arbitrarily chosen as 1200) have been used for training, the confidence threshold drops to 0.75. This is simply to demonstrate the two possible actions, i.e. predicting without completing the task and completing it automatically. It also allows us to avoid any cold start problems.
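
A service can implement this warm-up behaviour with a simple observation counter, along the lines of the following sketch (field names and update logic are illustrative, not the actual SMILE service code):

// Hypothetical warm-up handling: keep the threshold at 1.0 (never auto-complete)
// until enough observations have been seen, then drop it to 0.75.
private static final int MINIMUM_OBSERVATIONS = 1200;
private int observations = 0;

private double currentConfidenceThreshold() {
    return observations < MINIMUM_OBSERVATIONS ? 1.0 : 0.75;
}

public void train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) {
    observations++;
    // add the (inputData, outputData) pair to the training set and update the random forest
}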

After the model has been trained with the tasks from RESTClient, we will now create a new Human Task.

If we create a HT requesting the purchase of an "Apple" laptop from "John" with the price $2500, we should expect it to be approved.

In fact, when claiming the task, we can see that the prediction service recommends the purchase to be approved with a "confidence" of 91%.




If we now create a task requesting a "Lenovo" laptop from "Mary" with the price $1437, we would expect it to be approved. We can see that this is the case: the form is filled in by the prediction service with an approved status and a "confidence" of 86.5%.



We can also see, as expected, what happens when "John" tries to order a "Lenovo" for $2700. The prediction service fills the form as "not approved" with a "confidence" of 71%.



At this point, the confidence threshold is still 1.0, and as such the task was not closed automatically.

The minimum number of data points was purposely chosen so that after running the REST client and completing a single task, the service will drop the confidence threshold to 0.75.

If we complete one of the above tasks manually, the next task we create will be automatically completed if the confidence is above 0.75. For instance, when creating a task we are pretty sure will be approved (e.g. John purchasing a Lenovo for $1500), we can verify that the task is automatically completed.

PMML-based service


The second example implementation is the PMML-based prediction service. PMML is a predictive model interchange standard, which allows for a wide variety of models to be reused in different platforms and programming languages.

The service included in this demo consists of a pre-trained model (trained with a dataset similar to the one generated by RESTClient) which is executed by a PMML engine. For this demo, the engine used was jpmml-evaluator, the de facto reference implementation of the PMML specification.

There are two main differences when comparing this service to the SMILE-based one:

  • The model doesn't need a training phase: it has already been trained and serialised into the PMML format. This means that we can start using predictions from jBPM straight away.
  • The train API method is a no-op in this case. This means that whenever the service's train method is called, no actual training takes place (only the predict method is needed for a "read-only" model), as we can see from the figure below and the sketch that follows.
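
In code, this read-only behaviour boils down to something like the following sketch (class and helper names are illustrative, not the actual module's source):

public class PMMLRandomForestService implements PredictionService {

    public String getIdentifier() {
        return "PMMLRandomForest";
    }

    public PredictionOutcome predict(Task task, Map<String, Object> inputData) {
        // evaluate the pre-trained PMML model (e.g. via jpmml-evaluator) and wrap the
        // predicted label and its probability in a PredictionOutcome; details omitted
        return evaluatePMMLModel(inputData);    // hypothetical helper around the PMML engine
    }

    public void train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) {
        // no-op: the model is pre-trained and serialised as PMML, nothing to learn at runtime
    }
}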



You can verify that the Business Central workflow is the same as with the SMILE service, although in this case no training is necessary.


The above instructions on how to set up the demo project are also available in the following video (details are in the subtitles):





In conclusion, in this post we’ve shown how to use a new API which allows for predictive models to suggest outputs and complete Human Tasks.

We’ve also shown a project which can use different prediction service backends simply by registering them with jBPM without any changes to the project.


Why not create your own jBPM prediction service using your favourite Machine Learning framework, today?