This blog post kicks off a series of articles about the KIE Execution Server and the capabilities it provides in version 6.3. Here is a short overview of what you can expect:
- Introduction to KIE Execution Server and installation notes
- Use of KIE Server Client to interact with KIE Execution Server
- KIE Execution Server managed vs unmanaged
- KIE Execution Server with non-Java clients
- KIE Execution Server clustering/scalability
These are just starting points; more articles will most likely follow depending on interest... so let's start with the first and foremost topic - the introduction and installation.
KIE Execution Server introduction
In version 6.2 the KIE Execution Server was released, targeting Drools users and providing an out-of-the-box execution environment accessible via REST and JMS interfaces. It was designed as a standalone, lightweight component that can be deployed to either application servers or web containers (with the obvious limitation that there is no JMS on web containers).
As it proved to be a valid option as a standalone component that can be easily deployed and scaled, version 6.3 introduces a so-called unified KIE Execution Server that brings more capabilities to end users:
- BRM capability - what was already available in 6.2, providing rules execution
- BPM capability - brings jBPM into the picture:
  - process execution
  - task execution
  - asynchronous jobs execution
All of these are provided in a unified way and exposed via REST and JMS interfaces. On top of that, a KIE Server Client is delivered that makes using this server from a Java environment very easy.
The unification means that, from the end user's point of view, you will not have to switch between different servers to take advantage of rule or process execution; the same client can be used to perform both, and so on. Unified terminology is used as well so as not to confuse users, so here are the most important terms:
- server - the actual instance of the execution server
- container - the execution representation of a kjar/KieContainer, which can be composed of various assets (rules, processes, data model, etc.) - there can be multiple containers on a single server
- process - a business process definition available in a given container - there can be many per container
- task - a user task definition available in a given container - there can be many per container
- job - an asynchronous job that is or was scheduled in the execution server
- query - a predefined set of queries to retrieve data from the execution server
NOTE: A very important point to take into account is that all operations that modify data, such as:
- insert fact
- fire rules
- start process
- complete task
must always be invoked via the container, to guarantee that all configuration is properly set up - the class loader for custom data, handlers and listeners being registered in time, etc.
Access to read-only data via queries, on the other hand, is simplified and expects only the minimum set of data needed to find the details. E.g. get process instance requires only the process instance id; with that the server can find it and will return all the details required to perform operations on it, including the container id (the same goes for tasks, etc.).
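To make the distinction concrete, here is a minimal curl sketch using endpoints that appear later in this post. The hr container, the hiring process and the process instance id 1 are purely illustrative values, and the per-instance query path reflects my reading of the 6.3 query API, so verify it against the KIE Server documentation (authentication is omitted for brevity):
# modifying operation - always goes through the container it belongs to
curl -X POST http://localhost:8230/kie-server/services/rest/server/containers/hr/processes/hiring/instances
# read-only access - only the process instance id is needed
curl http://localhost:8230/kie-server/services/rest/server/queries/processes/instances/1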
Installation
Let's start with standalone mode running on WildFly 8.1.0.Final (8.1.0 is used as it was tested with both the KIE Server and the KIE Workbench, so it's better to stick to just one version of the application server at the beginning :))
So we start by downloading the WildFly distribution and unzipping it to the desired location - referred to as WILDFLY_HOME. Then we move on to configuration:
- create a user in the application realm
  - name: kieserver
  - password: kieserver1!
  - roles: kie-server
NOTE: these are the defaults and can be changed, but if you decide to change them you'll need to provide the changed values via system properties upon server startup. So, for the sake of simplicity, let's start with the defaults.
To add the user you can use the add-user.sh (or add-user.bat on Windows) script that comes with the WildFly distribution. Just go to WILDFLY_HOME/bin and invoke the add-user script:
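If you prefer to skip the interactive questions, the add-user script also accepts command line options; a non-interactive invocation along these lines should do the job (double-check the options against your WildFly version):
# -a = application realm, -u = user name, -p = password, -g = group/role
./add-user.sh -a -u kieserver -p 'kieserver1!' -g kie-server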
- next, download the EE7 version of the KIE Execution Server 6.3.0 from here
- copy the downloaded war file to WILDFLY_HOME/standalone/deployments
- personally, I usually rename the war file so that it does not include the version and classifier, as the file name is used as the context path of the deployed application, which makes all URLs much longer
- so, optionally, you can rename the war file to a short version like kie-server.war
We are almost ready to start; the last thing is to prepare the set of system properties that we will use to start our server with a fully featured environment:
- first of all, we must start the WildFly server with the full profile, which activates JMS support
  - --server-config=standalone-full.xml
- optionally, though useful when we have many WildFly instances running on the same machine, let's specify a port offset for the WildFly server
  - -Djboss.socket.binding.port-offset=150
- next we give the KIE server instance an identifier - it's optional, as one will be generated if not given, though it will be less human readable, so let's give it a name
  - -Dorg.kie.server.id=first-kie-server
- let's specify the URL at which our KIE server will be accessible - this is important when running in managed mode (see part 3 of this series), but it's good practice to always set it
  - -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server
With that we are ready to launch our KIE server in standalone mode; use this command from WILDFLY_HOME/bin:
./standalone.sh \
  --server-config=standalone-full.xml \
  -Djboss.socket.binding.port-offset=150 \
  -Dorg.kie.server.id=first-kie-server \
  -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server
Once the application server (and the application) starts, you should be able to issue a simple GET request to the server using the org.kie.server.location URL to get information about the running server:
When opening this page you will be prompted for a user name and password; use the one you created at the beginning of the installation process - kieserver with password kieserver1!
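If you prefer the command line over a browser, the same check can be sketched with curl using the credentials created earlier (the password is quoted because of the ! character):
curl -u 'kieserver:kieserver1!' http://localhost:8230/kie-server/services/rest/server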
So we have the KIE server up and running with the following capabilities:
- KieServer - this is always present, as it provides the deployment operations needed to deploy/undeploy containers on the KIE server instance
- BRM - rules execution
- BPM - process, tasks and jobs execution
The version of the KIE server is also available (in this case it is 6.4.0-SNAPSHOT, as I'm already running the latest master version - though at the time of writing, 6.3.0 is exactly the same).
The unified KIE server is built on top of extensions (aka capabilities), and they can be turned on or off via system properties if you do not need some of them:
- -Dorg.drools.server.ext.disabled=true - to disable BRM extension
- -Dorg.jbpm.server.ext.disabled=true - to disable BPM extension
When disabling the BPM extension you will see a lot fewer things being bootstrapped upon server start - no persistence is involved. So let's disable the BPM capability: simply shut down the server and start it with the following command:
./standalone.sh \
  --server-config=standalone-full.xml \
  -Djboss.socket.binding.port-offset=150 \
  -Dorg.kie.server.id=first-kie-server \
  -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server \
  -Dorg.jbpm.server.ext.disabled=true
Watch the server startup logs and then issue the same URL request as previously to see the server info response:
As you can see, there are no BPM capabilities any more, which means any attempt to contact any of the REST/JMS APIs that belong to BPM will fail.
Let's get back to the fully featured KIE Execution Server, deploy a container to it, and run a simple process to verify that it works.
To do so, I'll use a REST client in Firefox that allows executing any HTTP method against a given endpoint. So we start with creating/deploying a container to the running KIE Execution Server.
Endpoint:
- http://localhost:8230/kie-server/services/rest/server/containers/hr where hr is the name of the container
Method:
- PUT
Request body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kie-container container-id="hr">
<release-id>
<group-id>org.jbpm</group-id>
<artifact-id>HR</artifact-id>
<version>1.0</version>
</release-id>
</kie-container>
This is one of the standard example projects that comes with every version of jBPM, and it's part of the jbpm-playground repository. Make sure it was built at least once and is available in a Maven repository that your server has access to, or in your local Maven repo (usually at ~/.m2/repository).
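If you don't have a REST client at hand, the same deployment request can be sketched with curl - assuming the request body above is saved to a file, here hypothetically named hr-container.xml:
curl -u 'kieserver:kieserver1!' -X PUT \
  -H "Content-Type: application/xml" \
  -d @hr-container.xml \
  http://localhost:8230/kie-server/services/rest/server/containers/hr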
When the request finishes successfully you should see the following response being returned:
That tells us we have a single container deployed and it is in status STARTED - meaning it's ready to accept and process requests. So let's see if it actually is ready...
First, let's see what processes we have available there.
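In the original walkthrough this is done via the REST client; as a rough command line sketch, the available process definitions can be listed with something like the call below - note that the /queries/processes/definitions path is how I recall the 6.3 query API, so verify it against the KIE Server REST documentation for your version:
curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server/queries/processes/definitions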
That tells us we have some processes available for execution, so let's create an instance of the hiring process with some process variables.
Endpoint:
- http://localhost:8230/kie-server/services/rest/server/containers/hr/processes/hiring/instances
where hr is the name of the container and hiring is the process id
Method:
- POST
Request body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<map-type>
<entries>
<entry>
<key>age</key>
<value xsi:type="xs:int" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">25</value>
</entry>
<entry>
<key>name</key>
<value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">john</value>
</entry>
</entries>
</map-type>
So let's issue the start process request...
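From the command line, the equivalent call can be sketched with curl - assuming the variables payload above is saved to a file, here hypothetically named hiring-vars.xml:
curl -u 'kieserver:kieserver1!' -X POST \
  -H "Content-Type: application/xml" \
  -d @hiring-vars.xml \
  http://localhost:8230/kie-server/services/rest/server/containers/hr/processes/hiring/instances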
And examine the response...
As we can see, we have successfully created a process instance of the hiring process, and the returned process instance id is 1.
As a last verification step, let's list the active process instances available on our KIE server instance.
Endpoint:
- http://localhost:8230/kie-server/services/rest/server/queries/processes/instances
Method:
- GET
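The same query can be sketched with curl:
curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server/queries/processes/instances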