2015/09/11

Unified KIE Execution Server - Part 3

Part 3 of the Unified KIE Execution Server series deals with the so-called managed vs. unmanaged setup of the environment. In version 6.2, users went through the Rules Deployments perspective to create and manage KIE Execution Server instances.
That approach required the execution server to be configured and up and running: a sort of online-only registration that did not work if the KIE Server instance was down.

In version 6.3, this has been enhanced to allow complete configuration of KIE Execution Servers inside the workbench even if no actual instances are running yet. So let's first talk about managed and unmanaged instances...

Managed KIE Execution Server 

A managed instance is one that requires a controller to be available in order to start up properly. The controller is a component responsible for keeping the configuration in a centralized way. That does not mean there must be only a single controller in the environment; managed KIE Execution Servers are capable of dealing with multiple controllers.

NOTE: It's important to mention that even though there can be multiple controllers, they should be kept in sync so that, regardless of which one is contacted by a KIE Server instance, it will provide the same set of configuration.

The controller is only needed when the KIE Execution Server starts, as this is the time when it needs to download the configuration before it can be properly started. If no controller can be reached at startup, the KIE Execution Server will keep trying to connect until the connection is successfully established. That means no containers will be deployed to it, even when local storage with a configuration is available. The reason for this is to ensure consistency: if the KIE Execution Server was down and the configuration has changed, it must connect to the controller and fetch the up-to-date configuration before it can run.

Configuration has been mentioned several times, but what is it? The configuration is a set of information:

  • containers to be deployed and started
  • configuration items - currently a placeholder for further enhancements that will allow remote configuration of KIE Execution Server components (timers, persistence, etc.)

The controller is the component responsible for the overall management of KIE Execution Servers. It provides a REST api that is divided into two parts:

  • the controller itself, exposed for interaction with KIE Execution Server instances
  • administration, which allows KIE Execution Servers to be managed remotely:
    • add/remove servers
    • add/remove containers to/from the servers
    • start/stop containers on servers
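
For illustration, the administration part can be driven with plain HTTP calls. The snippet below is only a sketch: the controller root path (/kie-wb/rest/controller) is the one used later in this post, but the management sub-path, payload format and credentials are assumptions, so verify them against the REST documentation of your version before relying on them.

# list the KIE Execution Server definitions known to the controller
# (the management sub-path is an assumption - check your version's REST docs)
curl -u 'kieserver:kieserver1!' -H "Accept: application/json" \
  http://localhost:8080/kie-wb/rest/controller/management/servers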

The controller deals only with the KIE Execution Server configuration (or definition, to put it differently). It does not handle any runtime components of KIE Execution Server instances; they are always considered remote to the controller. The controller is responsible for persisting the configuration so that it survives restarts of the controller itself. When multiple controllers are configured, it should also manage synchronization to keep the definitions up to date on all controller instances.

By default the controller is shipped with the KIE workbench (jBPM console) and provides a fully featured management interface (both REST api and UI). It uses the underlying Git repository as a persistent store, so when the Git repositories are clustered (using Apache Zookeeper and Apache Helix) controller synchronization is covered as well.

The diagram above illustrates a single-controller (workbench) setup with multiple KIE Execution Server instances managed by it. The following diagram illustrates a clustered setup where multiple controller instances are synchronized over Zookeeper.


In the above diagram we can see that KIE Execution Server instances are capable of connecting to all controllers, but they will connect to only one. Each instance will attempt to connect to a controller until it can reach one; once a connection is established with one of the controllers, it will skip the others.
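
To take advantage of multiple controllers, the KIE Execution Server is given the controller URLs through the org.kie.server.controller system property as a comma separated list. A minimal sketch of such a startup, with hypothetical controller host names:

# the server tries the listed controllers in turn and sticks with the first one it can reach
./standalone.sh --server-config=standalone-full.xml \
  -Dorg.kie.server.id=first-kie-server \
  -Dorg.kie.server.controller=http://controller1:8080/kie-wb/rest/controller,http://controller2:8080/kie-wb/rest/controller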

Working with managed servers

There are two approaches that users can take when working with managed KIE Server instances:

Configuration first
With this approach, the user starts with the controller (either the UI or the REST api) and creates and configures KIE Execution Server definitions. A definition is composed of:
    • identification of the server (id and name, plus optionally a version for improved readability)
    • containers 

Register first
Let the KIE Execution Server instance auto-register with the controller and then configure which containers should run on it. This simply skips the registration step done in the first approach; the definition is populated with the server id, name and version directly upon auto registration (or, to put it simply, on connect).

In general there is no big difference, and which approach is taken is pretty much a matter of personal preference. The outcome of both is the same.

Unmanaged KIE Execution Server

An unmanaged KIE Execution Server, in turn, is just a standalone instance and thus must be configured individually using the REST/JMS api of the KIE Execution Server itself. The configuration is persisted into a file that is considered internal server state. It's updated upon the following operations:
  • deploy container
  • undeploy container
  • start container
  • stop container
Note that the KIE Execution Server will start only the containers that are marked as started. Even if the KIE Execution Server is restarted, upon boot it will only make available those containers that were in the started state before the server was shut down.
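
As an example of that individual configuration, the two calls below sketch how a container could be deployed to and removed from an unmanaged instance over its REST api. The container id 'hr', the credentials and the host/port are assumptions (adjust them to your setup, for instance if the server runs with a port offset); the GAV is the org.jbpm:HR:1.0 artifact used later in this post.

# deploy (create) a container on the unmanaged KIE Execution Server
curl -u 'kieserver:kieserver1!' -X PUT -H "Content-Type: application/json" \
  -d '{"container-id":"hr","release-id":{"group-id":"org.jbpm","artifact-id":"HR","version":"1.0"}}' \
  http://localhost:8080/kie-server/services/rest/server/containers/hr

# undeploy (dispose) the same container
curl -u 'kieserver:kieserver1!' -X DELETE \
  http://localhost:8080/kie-server/services/rest/server/containers/hr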


In most cases the KIE Execution Server should be run in managed mode, as that provides lots of benefits in terms of control and configuration. More benefits will become apparent when discussing clustering and scalability of KIE Execution Servers, where managed mode will show its true power :)

Let's run in managed mode

So that's about it in theory; let's try to run the KIE Execution Server in managed mode to see how it can be operated.

For that we need one Wildfly instance that will host the controller (the KIE Workbench) and another one that will host the KIE Execution Server. The second one we already have, based on part 1 of this blog series.
NOTE: You can run both the KIE workbench and the KIE Execution Server on the same application server instance, but that won't show the improved manageability as they will always be up or down together.

So let's start with installing the workbench on Wildfly. Similar to what we had to do for the KIE Execution Server, we start by creating user(s):
  • kieserver (with password kieserver1!) that will be used for communication between the KIE Server and the controller; that user must be a member of the following roles:
    • kie-server
    • rest-all
  • either add the following roles to the kieserver user or create another user that will be used to log on to the KIE workbench to manage KIE Execution Servers:
    • admin
    • rest-all
To do so, use the Wildfly utility script add-user, located in WILDFLY_HOME/bin, to add application users (for details on how to do that, see part 1 of this blog series).
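
For example, with the add-user script (the second user name and password below are just placeholders in case you prefer a separate workbench account):

# from WILDFLY_HOME/bin - application realm user for KIE Server <-> controller communication
./add-user.sh -a -u kieserver -p kieserver1! -g kie-server,rest-all

# optional: a separate user for logging on to the workbench (name/password are placeholders)
./add-user.sh -a -u workbench -p workbench1! -g admin,rest-all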

Once we have the users created, let's deploy the application. Download the KIE workbench for Wildfly 8 and copy the war file into WILDFLY_HOME/standalone/deployments.

NOTE: similar to the KIE Server, personally I remove the version number and classifier from the war file name and make it as simple as 'kie-wb.war'; that makes the context path short and thus easier to type.

And now we are ready to launch the KIE workbench. To do so, go to WILDFLY_HOME/bin and start it with the following command:

./standalone.sh --server-config=standalone-full.xml

Wait for the server to finish booting and then go to http://localhost:8080/kie-wb (assuming the default port and the 'kie-wb' context path mentioned above).

Log on with the user you created (e.g. kieserver) and go to the Deployments --> Rules Deployments perspective. See the following screencast (no audio) that showcases the capabilities described in this article. It starts with the configuration first approach and shows the following:
  • create KIE Execution Server definition in the controller
    • specifying the identifier (first-kie-server) and name
  • create new container in the KIE Execution Server definition (org.jbpm:HR:1.0)
  • configure the KIE Execution Server to be managed by specifying the URL of the controller via system properties (a complete startup command is sketched after this list):
    • -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
    • -Dorg.kie.server.id=first-kie-server (it is extremely important that this id matches the one created in the first step in the KIE Workbench)
  • start the KIE server and observe the controller's log to see the notification that the KIE server has connected to it
  • start the container in the controller and observe it being automatically started on the KIE Execution Server instance
  • shut down the KIE Execution Server and observe the logs and UI with the updated status of the KIE server being disconnected
  • illustrate various management options in the controller and their effect on the KIE Execution Server instance.
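
Putting those properties together, the KIE Execution Server from part 1 could be started in managed mode roughly like this. The port offset, the kie-server context path and the resulting org.kie.server.location URL are assumptions based on that setup; adjust them to your installation.

# second Wildfly instance hosting the KIE Execution Server
# (the offset keeps it from clashing with the workbench instance on port 8080)
./standalone.sh --server-config=standalone-full.xml \
  -Djboss.socket.binding.port-offset=100 \
  -Dorg.kie.server.id=first-kie-server \
  -Dorg.kie.server.location=http://localhost:8180/kie-server/services/rest/server \
  -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller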


This screencast concludes the third part of the Unified KIE Execution Server series. With this in mind we move on to more advanced cases, where we show integration with non-Java clients and clustering. More will come soon...