Automatiko 0.4.0 has just been released. It comes with quite a few new features, among them:
- user task forms, provided by the user task management addon
- user task notifications, provided by the user task email addon
A common scenario when working with workflows is handling data objects and their changes. In most situations a workflow instance will only keep the last value of a data object, so use cases like comparing what was just sent to the instance with what was already there require duplicated data object definitions. This is not the best approach, as it clutters the workflow definition with details that are less important from the business goal perspective.
With Automatiko (since version 0.3.0) there is an alternative solution to this problem: version data objects by annotating them with the data object tag called versioned.
So what happens when you make a data object versioned?
The Automatiko engine will record every change to the variable as a new version. These versions can then be accessed like any other variable, but require an additional suffix on the variable name.
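To make the idea concrete, here is a minimal sketch of what the engine conceptually does for a versioned data object. This is an illustration only, not the Automatiko API: every assignment is recorded as a new version instead of overwriting the previous value, so older values stay available for comparison.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual illustration only -- not the Automatiko API.
// A data object tagged as "versioned" keeps every value ever assigned.
public class VersionedVariable<T> {

    private final List<T> versions = new ArrayList<>();

    // setting the variable appends a new version
    public void set(T value) {
        versions.add(value);
    }

    // the plain variable name still resolves to the latest value
    public T get() {
        return versions.isEmpty() ? null : versions.get(versions.size() - 1);
    }

    // older versions stay accessible, e.g. to compare the incoming value
    // with what was already stored in the instance
    public T getVersion(int number) {
        return versions.get(number);
    }

    public static void main(String[] args) {
        VersionedVariable<String> order = new VersionedVariable<>();
        order.set("draft");
        order.set("approved");
        System.out.println(order.get());          // prints "approved"
        System.out.println(order.getVersion(0));  // prints "draft"
    }
}
```

This is exactly the duplication the versioned tag removes: instead of modeling a second data object just to keep the previous value, the engine keeps the history for you.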
Here is more of a developer view on the recently published article on the Automatiko blog about building Kubernetes Operators with workflows. It shows that this approach brings significant value to the overall visibility of the operator logic and makes it really approachable for non-Kubernetes gurus.
I'd like to take it a bit further and show how efficient it can be thanks to the internals of Automatiko. Automatiko takes advantage of Quarkus, which provides the runtime mechanics, so to speak. Quarkus comes with an outstanding feature called dev mode. Everyone who has heard about Quarkus has most likely heard about dev mode and live reload. But there is more to it!
Remote dev mode is a sibling of dev mode, but it allows you to live reload an application remotely. It is a perfect fit for in-container development, or even better, development in a Kubernetes cluster. This brings us to an unbelievably efficient developer experience when building Kubernetes operators: you can build them directly inside the Kubernetes cluster. No need to rebuild the image, no need to redeploy the container, and so on... it just works like a charm.
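For reference, enabling Quarkus remote dev mode boils down to a few application properties; the URL and password below are placeholders you would replace with your own values:

```properties
# src/main/resources/application.properties
# Package the application as a mutable jar so it can be live-reloaded remotely
quarkus.package.type=mutable-jar
# Shared secret between your local machine and the remote instance (placeholder)
quarkus.live-reload.password=changeit
# Address of the application running inside the cluster (placeholder)
quarkus.live-reload.url=http://my-operator.my-namespace.svc:8080
```

The remote instance is started with the environment variable QUARKUS_LAUNCH_DEVMODE=true, and locally you run `mvn quarkus:remote-dev` so that every local change is pushed into the running container.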
Have a look at this video illustrating how easily you can work on operator logic that runs inside the Kubernetes cluster. Make modifications to the logic and try them out almost instantly (assuming your Kubernetes cluster and deployed container have enough resources to make it efficient ;)).
The video showed a number of features
As a follow-up to part 1 of the Automatiko IoT and MQTT series, I'd like to take you further into the exploration of workflows and IoT with MQTT.
This time we look at the details of how to take advantage of some MQTT features (e.g. wildcard topics), collect sensor data into a bucket (or, to keep it simple, a list) and then assign user tasks based on the amount of data collected instead of for every event.
In addition to that, we look into Automatiko features that make using workflows for IoT much easier...
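The wildcard topics mentioned above are what let a single workflow subscription receive data from many sensors. The following self-contained sketch (not Automatiko or MQTT client code) shows how the standard MQTT wildcards behave: `+` matches exactly one topic level, while `#` matches any number of remaining levels.

```java
// Minimal illustration of MQTT topic-filter matching semantics.
public class TopicFilter {

    public static boolean matches(String filter, String topic) {
        String[] f = filter.split("/");
        String[] t = topic.split("/");
        int i = 0;
        for (; i < f.length; i++) {
            if (f[i].equals("#")) {
                return true; // '#' swallows all remaining topic levels
            }
            if (i >= t.length) {
                return false; // topic has fewer levels than the filter
            }
            if (!f[i].equals("+") && !f[i].equals(t[i])) {
                return false; // literal level mismatch ('+' matches any one level)
            }
        }
        return i == t.length; // both filter and topic fully consumed
    }

    public static void main(String[] args) {
        // one subscription, many sensors
        System.out.println(matches("home/+/temperature", "home/kitchen/temperature")); // true
        System.out.println(matches("home/#", "home/kitchen/humidity"));                // true
        System.out.println(matches("home/+/temperature", "home/kitchen/humidity"));    // false
    }
}
```

A workflow message event subscribed to `home/+/temperature` would therefore collect readings from every room into the same bucket, which is what makes the "act after N readings" pattern practical.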
Around a month ago the first release of the Automatiko project was published. It aims at providing an easy to use yet powerful toolkit to build services and functions based on workflows and decisions. You can read more about the Automatiko project on the website or blog.
This blog post is a very basic introduction that attempts to address the first level of entry when starting with Automatiko.
There are other ways to run Mosquitto, so visit its website if Docker is not an option.
A prediction service must implement the org.kie.internal.task.api.prediction.PredictionService interface, which consists of:
- getIdentifier() - a method which returns a unique (String) identifier for your prediction service
- predict(Task task, Map<String, Object> inputData) - a method that takes task information and the task's inputs, from which we will derive the model's inputs, as a map. The method returns a PredictionOutcome instance, which we will look at in closer detail later on
- train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) - this method, similarly to predict, takes task info and the task's inputs, but now we also provide the task's outputs, as a map, for training

A PredictionOutcome consists of:
- Map<String, Object> outcome - the prediction outputs, where each entry represents an output attribute's name and value. This map can be empty, which corresponds to the model not providing any prediction.
- confidence - a value whose meaning is left to the developer (e.g. it could represent a probability between 0.0 and 1.0). Its relevance is related to the confidenceThreshold below.
- confidenceThreshold - this value represents the confidence cutoff after which an action can be taken by the Human Task (HT) item handler.

For example, if confidenceThreshold is 0.7, then for confidence > 0.7 the HT outputs would be set to the outcome and the task automatically closed. If confidence <= 0.7, the HT would set the prediction outcome as suggested values, but the task would not be closed and would still need human interaction. If the outcome is empty, the HT life cycle would proceed as if no prediction was made during the predict step.
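The contract above can be sketched as follows. Note that this is a simplified stand-in, not the real jBPM types (the actual interface lives in org.kie.internal.task.api.prediction, and Task is reduced here to a plain String so the example is self-contained); the handle method encodes the threshold rules just described.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the prediction-service contract; names mirror the
// jBPM interface but the types are simplified stand-ins.
public class PredictionSketch {

    static class PredictionOutcome {
        final Map<String, Object> outcome;   // empty map = no prediction made
        final double confidence;             // meaning is developer-defined
        final double confidenceThreshold;    // cutoff for automatic completion

        PredictionOutcome(Map<String, Object> outcome, double confidence, double threshold) {
            this.outcome = outcome;
            this.confidence = confidence;
            this.confidenceThreshold = threshold;
        }
    }

    interface PredictionService {
        String getIdentifier();
        PredictionOutcome predict(String task, Map<String, Object> inputData);
        void train(String task, Map<String, Object> inputData, Map<String, Object> outputData);
    }

    // What the HT item handler does with an outcome, per the rules above
    static String handle(PredictionOutcome p) {
        if (p.outcome.isEmpty()) {
            return "no prediction: normal life cycle";
        }
        if (p.confidence > p.confidenceThreshold) {
            return "auto-complete task with predicted outputs";
        }
        return "suggest outputs, wait for human";
    }

    public static void main(String[] args) {
        Map<String, Object> out = new HashMap<>();
        out.put("approved", Boolean.TRUE);
        System.out.println(handle(new PredictionOutcome(out, 0.9, 0.7)));
        System.out.println(handle(new PredictionOutcome(out, 0.5, 0.7)));
        System.out.println(handle(new PredictionOutcome(Collections.emptyMap(), 0.9, 0.7)));
    }
}
```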
In the scenario where the prediction's confidence is above the threshold, the task is automatically completed. If the confidence is not above the threshold, however, when the task is eventually completed both the inputs and the outputs will then be used to further train the model by calling the prediction service's train method.

The demo process uses the following variables:
- item - a String with the brand's name
- price - a Float representing the laptop's price
- ActorId - a String representing the user requesting the purchase
- approved - a Boolean specifying whether the purchase was approved or not

Start by cloning the demo repository:
$ git clone git@github.com:ruivieira/jbpm-recommendation-demo.git
For this demo, two random forest-based services, one using the SMILE library and another as a Predictive Model Markup Language (PMML) model, will be used. The services, located respectively in services/jbpm-recommendation-smile-random-forest and services/jbpm-recommendation-pmml-random-forest, can be built with (using SMILE as an example):
$ cd services/jbpm-recommendation-smile-random-forest
$ mvn clean install
The compiled services need to be added to kie-server.war, located in the standalone/deployments directory of your jBPM server installation. To do this, simply create a WEB-INF/lib directory, copy the compiled JARs into it and run:
$ zip -r kie-server.war WEB-INF
The PMML-based service expects the model to be available in META-INF, so after copying the PMML file in jbpm-recommendation-pmml-random-forest/src/main/resources/models/random_forest.pmml into META-INF, it should also be included in the WAR by using:
$ zip -r kie-server.war META-INF
The prediction service to use is selected via the system property org.jbpm.task.prediction.service. Since in our demo the random forest service has the identifier SMILERandomForest, we can set this value when starting Business Central, for instance as:
$ ./standalone.sh -Dorg.jbpm.task.prediction.service=SMILERandomForest
To use the PMML-based service instead:
$ ./standalone.sh -Dorg.jbpm.task.prediction.service=PMMLRandomForest
Log in to Business Central with the credentials wbadmin/wbadmin. After choosing the default workspace (or creating your own), select "Import project" and use the project Git URL:
https://github.com/ruivieira/jbpm-recommendation-demo-project.git
The demo also includes a client which allows adding Human Tasks in batch, in order to have sufficient data points to train the model so that we can get meaningful predictions. org.jbpm.recommendation.demo.RESTClient performs this task and can be executed from the client directory with:
$ mvn exec:java -Dexec.mainClass="org.jbpm.recommendation.demo.RESTClient"
When running RESTClient, 1200 tasks will be created and completed to allow for a reasonably sized training dataset. The prediction service initially has a confidence threshold of 1.0, and after a sufficiently large number of observations (arbitrarily chosen as 1200) has been used for training, the confidence threshold drops to 0.75. This is simply to demonstrate the two possible actions, i.e. predicting without completing the task and completing it automatically. It also allows us to avoid any cold start problems.

After training the model with RESTClient, we will now create a new Human Task.

The PMML-based service, by contrast, uses a pre-trained model (it is not trained via RESTClient) which is executed by a PMML engine. For this demo, the engine used was jpmml-evaluator, the de facto reference implementation of the PMML specification. The train API method is a no-op in this case: whenever the service's train method is called, it will not be used for training in this example (only the predict method is needed for a "read-only" model), as we can see from the figure below.
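The SMILE service's training policy described above can be sketched in a few lines; the class and method names here are mine, not the demo's actual code. The threshold starts at 1.0 (so no prediction can ever auto-complete a task) and drops to 0.75 only once 1200 observations have been used for training, which is what avoids cold-start predictions.

```java
// Hypothetical sketch of the demo's confidence-threshold policy.
public class ThresholdPolicy {

    private static final int MIN_OBSERVATIONS = 1200;
    private int observations = 0;

    // called once per completed task used for training
    public void observe() {
        observations++;
    }

    public double confidenceThreshold() {
        // 1.0 means no confidence can exceed it, so no auto-completion
        return observations >= MIN_OBSERVATIONS ? 0.75 : 1.0;
    }

    public static void main(String[] args) {
        ThresholdPolicy policy = new ThresholdPolicy();
        System.out.println(policy.confidenceThreshold()); // prints 1.0
        for (int i = 0; i < 1200; i++) {
            policy.observe();
        }
        System.out.println(policy.confidenceThreshold()); // prints 0.75
    }
}
```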