Thursday, September 10, 2015

Unified KIE Execution Server - Part 1

This blog post kicks off a series of articles about the KIE Execution Server and the capabilities provided in version 6.3. Here is a short description of what you can expect:

  1. Introduction to KIE Execution Server and installation notes
  2. Use of KIE Server Client to interact with KIE Execution Server
  3. KIE Execution Server managed vs unmanaged
  4. KIE Execution Server with non java clients
  5. KIE Execution Server clustering/scalability
These are just starting points, as more articles will most likely follow depending on interest... so let's start with the first and foremost: the introduction and installation.

KIE Execution Server introduction


In version 6.2 the KIE Execution Server was released, targeting Drools users and providing an out-of-the-box execution environment accessible via REST and JMS interfaces. It was designed as a standalone, lightweight component that can be deployed to either application servers or web containers (with the obvious limitation that web containers provide no JMS).

As it proved to be a valid option as a standalone component that can be easily deployed and scaled, version 6.3 introduces the so-called unified KIE Execution Server, which brings more capabilities to end users:

  • BRM capability - this is what was already in 6.2, providing rules execution
  • BPM capability that brings jBPM into the picture
    • process execution
    • task execution
    • asynchronous jobs execution
All of these are provided in a unified way and exposed via REST and JMS interfaces. On top of that, a KIE Server Client is delivered that makes using this server from a Java environment very easy.
The unification means that from the end user's point of view you will not have to switch between different servers to take advantage of rule or process execution; the same client can be used to perform both, and so on. Unified terminology is used as well, so as not to confuse users, and here are the most important terms:
  • server - the actual instance of the execution server
  • container - the execution representation of a kjar/KieContainer that can be composed of various assets (rules, processes, data model, etc.); there can be multiple containers on a single server
  • process - a business process definition available in a given container; there can be many per container
  • task - a user task definition available in a given container; there can be many per container
  • job - an asynchronous job that is/was scheduled in the execution server
  • query - a predefined set of queries to retrieve data from the execution server

NOTE: A very important thing to take into account is that all operations that modify data, such as:
  • insert fact
  • fire rules
  • start process
  • complete task
must always be referenced via a container, to guarantee that all configuration is properly set up: the class loader for custom data, handlers, listeners registered in time, etc.
Access to read-only data such as queries, on the other hand, is simplified and expects only the minimum set of data needed to find the details. E.g. get process instance requires only the process instance id; by that id the server will be able to find it and will return all the details required to perform operations on it, including the container id (the same goes for tasks, etc.).
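To make the distinction concrete, here is a sketch contrasting the two styles of calls, using the hr container, the hiring process, the credentials and the port offset that are configured later in this post (vars.xml stands for a request body with process variables, as shown in the walkthrough below):

# state-changing call - always addressed through the container that owns the process
curl -u 'kieserver:kieserver1!' -X POST -H "Content-Type: application/xml" -d @vars.xml \
  http://localhost:8230/kie-server/services/rest/server/containers/hr/processes/hiring/instances

# read-only query - no container id in the path, the server resolves those details itself
curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server/queries/processes/instances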

Installation

Let's start with standalone mode running on WildFly 8.1.0.Final (8.1.0 is used as it was tested with both KIE Server and KIE Workbench, so it's better to stick to just one version of the application server at the beginning :))

So we start by downloading the WildFly distribution and unzipping it to the desired location, referred to below as WILDFLY_HOME. Here we start with the configuration:
  • create user in application realm 
    • name: kieserver 
    • password: kieserver1!
    • roles: kie-server
NOTE: these are the defaults; they can be changed, but if you decide to change them you'll need to provide the changed values via system properties upon server startup. So for the sake of simplicity let's start with the defaults.
To add the user you can use the add-user.sh script (or add-user.bat on Windows) that comes with the WildFly distribution. Just go to WILDFLY_HOME/bin and invoke the add-user script:
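A minimal sketch of a non-interactive invocation, assuming the default credentials above (the -a switch targets the application realm, -g assigns the group/role, and the password is quoted because of the ! character):

cd WILDFLY_HOME/bin
./add-user.sh -a -u kieserver -p 'kieserver1!' -g kie-server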
  • next, download the EE7 version of KIE Execution Server 6.3.0 from here
  • copy the downloaded war file to WILDFLY_HOME/standalone/deployments
    • personally I usually rename the war file so it does not include the version and classifier, as the file name is used as the context path of the deployed application, making all URLs much longer
    • so optionally you can rename the war file to a short version like kie-server.war
We are almost ready to start; the last thing is to prepare the set of system properties that we will use to start our server with a fully featured environment:
  • first of all, we must start the WildFly server with the full profile, which activates JMS support
    • --server-config=standalone-full.xml
  • optionally, though useful when we have many WildFly instances running on the same machine, let's specify a port offset for the WildFly server
    • -Djboss.socket.binding.port-offset=150
  • next we give the KIE Server instance an identifier - it's optional, as one will be generated if not given, but it will be less human readable, so let's give it a name
    • -Dorg.kie.server.id=first-kie-server
  • let's specify the URL location at which our KIE Server will be accessible - this is important when running in managed mode (see part 3 of this series), but it's good practice to always set it
    • -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server
With that we are ready to launch our KIE Server in standalone mode; use this command from WILDFLY_HOME/bin:

./standalone.sh \
  --server-config=standalone-full.xml \
  -Djboss.socket.binding.port-offset=150 \
  -Dorg.kie.server.id=first-kie-server \
  -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server

Once the application server (and the application) starts, you should be able to issue a simple GET request to the server using the org.kie.server.location URL to get information about the running server.
When opening this page you will be prompted for a user name and password; use the user you created at the beginning of the installation process: kieserver with password kieserver1!
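The same check can be done from the command line with curl, for example (a sketch assuming the defaults used above):

# returns the server info (id, version, capabilities)
curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server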

So we have a KIE Server up and running with the following capabilities:
  • KieServer - this one is always present, as it provides the deployment operations needed to deploy/undeploy containers on the KIE Server instance
  • BRM - rules execution
  • BPM - process, task and job execution
The version of the KIE Server is also reported (in this case it is 6.4.0-SNAPSHOT, as I'm already running the latest master version, though at the time of writing 6.3.0 is exactly the same).

The unified KIE Server is built on top of extensions (aka capabilities), and they can be turned on or off via system properties if you do not need some of them:
  • -Dorg.drools.server.ext.disabled=true - to disable BRM extension
  • -Dorg.jbpm.server.ext.disabled=true - to disable BPM extension
When disabling the BPM extension you will see a lot fewer things being bootstrapped upon server start, as no persistence is involved. So let's disable the BPM capability: simply shut down the server and start it with the following command:
./standalone.sh \
  --server-config=standalone-full.xml \
  -Djboss.socket.binding.port-offset=150 \
  -Dorg.kie.server.id=first-kie-server \
  -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server \
  -Dorg.jbpm.server.ext.disabled=true

Watch the server startup logs and then issue the same URL request as previously to see the server info response.
As you can see, there are no BPM capabilities any more, which means any attempt to contact any of the REST/JMS APIs that belong to BPM will fail.

Let's get back to the fully featured KIE Execution Server, deploy a container to it and run a simple process to verify that it works.
To do so, I'll use a REST client in Firefox that allows executing any HTTP method against a given endpoint. So we start with creating/deploying a container on the running KIE Execution Server.

Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/containers/hr
  • where hr is the name of the container
Method:
  • PUT
Request body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kie-container container-id="hr">
    <release-id>
        <group-id>org.jbpm</group-id>
        <artifact-id>HR</artifact-id>
        <version>1.0</version>
    </release-id>
</kie-container>

This is one of the standard example projects that comes with every version of jBPM; it's part of the jbpm-playground repository. Make sure it was built at least once and is available in a Maven repository that your server has access to, or in your local Maven repo (usually at ~/.m2/repository).
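If you prefer the command line over a browser REST client, the same request can be sent with curl, for example (a sketch assuming the XML above is saved as hr-container.xml):

curl -u 'kieserver:kieserver1!' -X PUT \
  -H "Content-Type: application/xml" \
  -d @hr-container.xml \
  http://localhost:8230/kie-server/services/rest/server/containers/hr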


When the request finishes successfully you should see the following response being returned:


That tells us we have a single container deployed and it is in status STARTED, meaning it is ready to accept and process requests. So let's see if it actually is ready...
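As a side note, the deployed container can also be inspected at any time with a simple GET request against its endpoint (a sketch; the response contains the container status and the resolved release id):

curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server/containers/hr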

First, let's see what processes we have available there.
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/processes/definitions
Method:
  • GET
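With curl, this query could look like (a sketch; the endpoint returns process definitions across all deployed containers):

curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server/queries/processes/definitions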

When successfully executed you should find a single process available, with process id hiring, inside container id hr.


That tells us we have some processes to execute, so let's create an instance of the hiring process with some process variables.

Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/containers/hr/processes/hiring/instances
  • where hr is the name of the container and hiring is the process id
Method:
  • POST
Request body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<map-type>
    <entries>
        <entry>
            <key>age</key>
            <value xsi:type="xs:int" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">25</value>
        </entry>
        <entry>
            <key>name</key>
            <value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">john</value>
        </entry>
    </entries>
</map-type>
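If you are using curl instead of the browser REST client, the equivalent call would be (a sketch assuming the map XML above is saved as hiring-vars.xml):

curl -u 'kieserver:kieserver1!' -X POST \
  -H "Content-Type: application/xml" \
  -d @hiring-vars.xml \
  http://localhost:8230/kie-server/services/rest/server/containers/hr/processes/hiring/instances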

So let's issue the start process request...

And examine the response...


As we can see, we have successfully created a process instance of the hiring process, and the returned process instance id is 1.

As a last verification step, let's list the active process instances available on our KIE Server instance.
Endpoint:
  • http://localhost:8230/kie-server/services/rest/server/queries/processes/instances
Method:
  • GET
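With curl (a sketch; by default only active instances, i.e. status 1, are returned):

curl -u 'kieserver:kieserver1!' \
  http://localhost:8230/kie-server/services/rest/server/queries/processes/instances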




So that's all for the first article, introducing the unified KIE Execution Server and its first steps: installation and verification that it actually works. Stay tuned for more coming... a lot more :)

30 comments:

  1. Hi Maciej,

    I am looking forward to part 5:

    5. KIE Execution Server clustering/scalability

    Steven

  2. Hi Maciej,
    I'm very new to kie server. I would like to know how to create a container in kie server. Thank you.

    Replies
    1. this exact post describes how to do it. Though before you can deploy and execute your process or rules you need to design them in workbench - see this to get a feeling on where to start: http://mswiderski.blogspot.com/2013/11/jbpm-6-first-steps.html

  3. Hi Maciej,

    I tried this on "kie 6.2.0.Final-redhat-4" Following your instructiona, everything started failing from http://localhost:8230/kie-server/services/rest/server/queries/processes/definitions. You mentioned KIE version 6.3 version above. Is that the version that is supported?

    Replies
    1. make sure you have BPMS and not BRMS version as processes are only available in BPMS

  4. Is there any update on Kie Execution Server clustering/scalability

    Replies
    1. not yet, didn't have time to put an article about it. It's still on my todo list and I'll try to do it as soon as I have some spare time

  5. This comment has been removed by the author.

  6. This comment has been removed by the author.

  7. This comment has been removed by the author.

  8. This comment has been removed by the author.

  9. Hi Maciej, your tutorial is very good. I have a question: I want to fire a rule with a ruleflow-group on the KIE Execution Server, but it does not fire a rule that is in the group called "Step1". My example code in the request body is


    <batch-execution>
    <insert out-identifier="Grupo">
    <demos.grupo.GrupoDTO>
    <id>1</id>
    </demos.grupo.GrupoDTO>
    </insert>
    <fire-all-rules out-identifier="Step1"/>
    </batch-execution>


    I tried to activate my ruleflow-group "Step1" in this way.

    And my rule drl is

    package demos.grupo;
    rule "Step_1"
    ruleflow-group "Step1"
    dialect "mvel"
    no-loop true
    when
    $grupo : GrupoDTO( getId()==1);
    then
    $grupo.setDescripcion("Es el numero 1");
    update($grupo);

    end



    And in my test scenario in BRMS it works OK, but with KIE it does not.

    So, How can I activate a ruleflow-group in kie server body request XML?

    Please any suggestions, I would be very grateful :)

    Replies
    1. hi Duice.. I have the same problem too. I think kie server doesn't support calling a ruleflow-group in a rule.

    2. it is supported, but via a rule flow (a process); see this test case for details: https://github.com/droolsjbpm/droolsjbpm-integration/blob/master/kie-server-parent/kie-server-tests/kie-server-integ-tests-drools/src/test/java/org/kie/server/integrationtests/drools/RuleFlowIntegrationTest.java

  10. Hi Maciej
    thanks for the very useful documents.
    when I try "http://localhost:8080/kie-server/services/rest/server/queries/processes/definitions" in the REST client I get a blank page.
    all previous steps are successful but this step goes wrong.
    I already use kie-wb 6.4. this works for me in the 6.3 version! what is my problem?
    could you write a post describing 6.4?

  11. Hi Maciej, we migrated all our rules to Kie server v6.3 on Tomcat 7. We observed Kie server taking ~1000-1600ms for each GET request. What are the options I need to consider to improve performance of the kie server? We are looking for something around ~100ms response time.

    Thanks
    Ravi

  12. Hi Maciej

    does kie-server return a different response when I query process instances with and without status? I am not getting all the process ids when I do
    kie-server/services/rest/server/queries/processes/instances?status=1&status=2&status=3

    whereas with just kie-server/services/rest/server/queries/processes/instances I only see instances from one container and not the other

    Replies
    1. I am using wildfly 8.2.1 with kie 6.4.0.Final

    2. the difference is that process instances will be returned with status active (1) for the second URL you described. So if there are no active process instances in the other container they won't be returned.

  13. This comment has been removed by a blog administrator.

  14. Hi Maciej, how are you? I'm using the kie-server runtime with jBPM 6.4 in unmanaged mode. The way we deploy containers is through the REST API. I want to know if it does some magic behind the scenes with every deploy, or just adds a new container node to the XML file definition of my kie-server template. If I touch this XML (with extreme caution!), do I get the same effect (compared to the REST API)?

    Thanks!

    Replies
    1. the difference is that the REST API has a direct effect, while modifying that file requires a server restart. Moreover, if you change that file manually and do not restart the server, the next REST API call regarding deployments will override your manual changes. So it would be best to shut down the server, make the modifications and start the server.

    2. hi Maciej - I think I'm following up on the same message you posted earlier https://groups.google.com/forum/#!topic/jbpm-usage/-1dTL7SLNhw

      But when I modify my container with a new config-item inside my {kieserver-id}.xml in JBoss, the next time I restart those changes go away.

      alternatively - when I use the above method via REST it works, but again the entire container is wiped out when I restart the server.

  15. Hi Dear,

    i Like Your Blog Very Much..I see Daily Your Blog ,is A Very Usefull For me.

    You Can see also my services..Legal Process Server & Process Service In Orange County


    Legal process server & process service in Orange County. Process server express provides legal document service, court filing services at affordable prices.


    Visit Now - https://www.processserverexpress.net/

  16. Hi,

    I have a setup which has one Controller Server (WB) and a Kie-server.
    I have set up user task notification on the Controller server and the process works fine when triggered from the Controller, but when the same project is deployed to the remote kie server using a container I neither get an email nor receive any error in the logs.
    On the kie server I have added the same email configuration as on the Controller server in standalone-full.xml, but I don't receive the email. I have also added the userinfo callback information via system properties.

    Replies
    1. difficult to say without seeing all the details of your kie server config. If you could outline that, it would help to troubleshoot the problem.

  17. Hi Maciej
    I have followed the tutorial below from your kie-execution server series and it really helped me to solve my issues:
    http://mswiderski.blogspot.in/2015/09/unified-kie-execution-server-part-1.html
    but I want to know how we can get a response from the business process when we pass parameters using the REST client.
    I want data object values as output in the response.

    Replies
    1. there is no direct way to do so; the start process call will only return the process instance id. But you can then use another call with that id to get the process instance variables, as long as the process is still active. An alternative is to use a human task and in that way present the outcome of the process work.

  18. May I ask where these url endpoints are documented, like "queries/processes/definitions"?

  19. This comment has been removed by a blog administrator.