Friday, September 11, 2015

Unified KIE Execution Server - Part 3

Part 3 of the Unified KIE Execution Server series deals with the so-called managed vs. unmanaged setup of the environment. In version 6.2, users went through the Rules Deployments perspective to create and manage KIE Execution Server instances.
That approach required the execution server to be configured and up and running - a sort of online-only registration that did not work if the KIE Server instance was down.

In version 6.3, this has been enhanced to allow complete configuration of KIE Execution Servers inside the workbench even if no actual instances are configured yet. So let's first talk about managed and unmanaged instances.

Managed KIE Execution Server 

A managed instance is one that requires a controller to be available in order to start up properly. The controller is a component responsible for keeping the configuration in a centralized way. That does not mean there must be only a single controller in the environment; managed KIE Execution Servers are capable of dealing with multiple controllers.

NOTE: It's important to mention that even though there can be multiple controllers, they should be kept in sync so that, regardless of which one is contacted by a KIE Server instance, it provides the same set of configuration.

The controller is only needed when the KIE Execution Server starts, as that is when it downloads the configuration it needs before it can properly start. Once started, the KIE Execution Server will keep trying to connect to a controller until a connection is successfully established. Until then, no containers will be deployed to it, even when local storage with a configuration is available. The reason for this is to ensure consistency: if the KIE Execution Server was down and the configuration changed in the meantime, it must connect to the controller and fetch the up-to-date configuration before it can run.

Configuration has been mentioned several times, but what is it? The configuration is a set of information:

  • containers to be deployed and started
  • configuration items - currently a placeholder for future enhancements that will allow remote configuration of KIE Execution Server components (timers, persistence, etc.)
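As a conceptual sketch of the two items above (these are not the actual KIE classes, just an illustrative model of what a server definition holds), a configuration could be pictured like this:

```java
import java.util.List;
import java.util.Map;

// Conceptual sketch only -- not the real KIE API. It models the two pieces
// of information the controller keeps per server definition.
public class ServerDefinitionSketch {

    // A container to be deployed and started, identified by a GAV release id.
    record Container(String containerId, String groupId, String artifactId, String version) {}

    // The server definition: containers plus free-form configuration items
    // (the placeholder for future settings such as timers or persistence).
    record ServerDefinition(String serverId, String name,
                            List<Container> containers,
                            Map<String, String> configItems) {}

    public static void main(String[] args) {
        ServerDefinition def = new ServerDefinition(
                "first-kie-server", "First KIE Server",
                List.of(new Container("hr", "org.jbpm", "HR", "1.0")),
                Map.of());
        System.out.println(def.containers().get(0).containerId()); // prints "hr"
    }
}
```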

The controller is the component responsible for the overall management of KIE Execution Servers. It provides a REST API that is divided into two parts:

  • the controller itself, exposed for KIE Execution Server instances to interact with
  • administration, which allows remote management of KIE Execution Servers:
    • add/remove servers
    • add/remove containers to/from the servers
    • start/stop containers on servers
The controller deals only with the KIE Execution Server configuration - or definition, to put it differently. It does not handle any runtime components of KIE Execution Server instances; they are always remote to the controller. The controller is responsible for persisting the configuration so that it survives restarts of the controller itself. When multiple controllers are configured, it should also manage synchronization to keep the definitions up to date on all controller instances.

By default, the controller ships with the KIE workbench (jBPM console) and provides a fully featured management interface (both REST API and UI). It uses the underlying Git repository as its persistent store, so when the Git repositories are clustered (using Apache ZooKeeper and Apache Helix) controller synchronization is covered as well.

The above diagram illustrates a single-controller (workbench) setup with multiple KIE Execution Server instances managed by it. The following diagram illustrates a clustered setup with multiple controller instances synchronized over ZooKeeper.

In the above diagram we can see that KIE Execution Server instances are capable of connecting to all controllers, but they will connect to only one. Each instance attempts to connect to a controller for as long as it can reach one; once a connection is established with one of the controllers, the others are skipped.
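That connection strategy - try each configured controller, stick with the first one that responds, and keep retrying until one is reachable - can be sketched as follows. This is an illustration, not the actual KIE Server code; the `isReachable` check stands in for the real HTTP handshake:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the managed server's connection strategy: walk the configured
// controller list in order and return the first one that answers; repeat
// the whole list up to maxRounds times before giving up.
public class ControllerFailoverSketch {

    static String connect(List<String> controllers, Predicate<String> isReachable,
                          int maxRounds) {
        for (int round = 0; round < maxRounds; round++) {
            for (String url : controllers) {
                if (isReachable.test(url)) {
                    return url; // connected: the remaining controllers are skipped
                }
            }
            // a real server would sleep here before the next retry round
        }
        return null; // no controller reachable: no containers are deployed yet
    }

    public static void main(String[] args) {
        List<String> controllers = List.of(
                "http://controller-1:8080/kie-wb/rest/controller",
                "http://controller-2:8080/kie-wb/rest/controller");
        // pretend only the second controller is up
        String connected = connect(controllers, url -> url.contains("controller-2"), 3);
        System.out.println(connected);
    }
}
```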

Working with managed servers

There are two approaches that users can take when working with managed KIE Server instances:

Configuration first
With this approach, the user starts in the controller (either UI or REST API) and creates and configures KIE Execution Server definitions. A definition is composed of:
    • identification of the server (id and name + optionally version for improved readability)
    • containers 

Register first
Let the KIE Execution Server instance auto-register with the controller and then configure which containers should run on it. This simply skips the registration step of the first approach; the definition is populated with the server id, name and version directly upon auto-registration (or, to put it simply, on connect).

In general there is no big difference, and which approach is taken is pretty much a matter of personal preference; the outcome of both is the same.

Unmanaged KIE Execution Server

An unmanaged KIE Execution Server is, in turn, just a standalone instance and thus must be configured individually using the REST/JMS API of the KIE Execution Server itself. The configuration is persisted into a file that is considered internal server state. It is updated upon the following operations:
  • deploy container
  • undeploy container
  • start container
  • stop container
Note that the KIE Execution Server will start only the containers that are marked as started. Upon restart, the server makes available only those containers that were in the started state before it was shut down.
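That boot behaviour can be sketched as a simple filter over the persisted state. This is a conceptual model only; the real server state file is managed internally by the server:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the boot behaviour described above: the state file lists every
// deployed container with its status, but on restart only containers that
// were STARTED before shutdown are activated again.
public class BootRestoreSketch {

    enum Status { STARTED, STOPPED }

    record ContainerState(String id, Status status) {}

    static List<String> containersToActivate(List<ContainerState> stateFile) {
        return stateFile.stream()
                .filter(c -> c.status() == Status.STARTED)
                .map(ContainerState::id)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ContainerState> state = List.of(
                new ContainerState("hr", Status.STARTED),
                new ContainerState("evaluation", Status.STOPPED));
        // only "hr" comes back up after a restart
        System.out.println(containersToActivate(state));
    }
}
```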

In most cases the KIE Execution Server should be run in managed mode, as that provides a lot of benefits in terms of control and configuration. Even more benefits will become apparent when discussing clustering and scalability of KIE Execution Servers, where managed mode shows its true power :)

Let's run in managed mode

So much for the theory - let's try to run the KIE Execution Server in managed mode to see how it is operated.

For that we need one Wildfly instance that will host the controller - the KIE workbench - and another one that will host the KIE Execution Server. The second we already have, based on part 1 of this blog series.
NOTE: You can run both the KIE workbench and the KIE Execution Server on the same application server instance, but that won't show the improved manageability, as they will always be up or down together.

So let's start with installing workbench on Wildfly. Similar to what we had to do for KIE Execution server we start with creating user(s):
  • kieserver (with password kieserver1!) that will be used for communication between the KIE Server and the controller; that user must be a member of the following roles:
    • kie-server
    • rest-all
  • either add the following roles to the kieserver user or create another user that will be used to log on to the KIE workbench to manage KIE Execution Servers:
    • admin
    • rest-all
To do so, use the Wildfly add-user utility script located in WILDFLY_HOME/bin and add application users (for details on how to do that, see part 1 of this blog series).

Once we have the users created, let's deploy the application. Download the KIE workbench for Wildfly 8 and copy the war file into WILDFLY_HOME/standalone/deployments.

NOTE: similar to the KIE Server, I personally remove the version number and classifier from the war file name and shorten it to 'kie-wb.war', which makes the context path short and thus easier to type.

And now we are ready to launch the KIE workbench. To do so, go to WILDFLY_HOME/bin and start it with the following command:

./standalone.sh --server-config=standalone-full.xml

Wait for the server to finish booting and then go to http://localhost:8080/kie-wb (assuming the war file was renamed as described above).

Log on with the user you created (e.g. kieserver) and go to the Deployments --> Rules Deployments perspective. See the following screencast (no audio) that showcases the capabilities described in this article. It starts with the configure-first approach and shows the following:
  • create a KIE Execution Server definition in the controller
    • specify the identifier (first-kie-server) and name
  • create a new container in the KIE Execution Server definition (org.jbpm:HR:1.0)
  • configure the KIE Execution Server to be managed by specifying the URL of the controller via system properties:
    • -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
    • -Dorg.kie.server.id=first-kie-server (it is extremely important that this id matches the one created in the first step in the KIE workbench)
  • start the KIE server and observe the controller's log to see the notification that the KIE server has connected
  • start the container in the controller and observe it being automatically started on the KIE Execution Server instance
  • shut down the KIE Execution Server and observe the logs and the UI showing the updated (disconnected) status of the KIE server
  • illustrate various management options in the controller and their effect on the KIE Execution Server instance
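Putting those steps together, the managed KIE Execution Server launch from WILDFLY_HOME/bin would look like the command below. The host, port and context path are taken from the setup above; depending on your setup from part 1, additional system properties may be needed:

```shell
./standalone.sh --server-config=standalone-full.xml \
  -Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller \
  -Dorg.kie.server.id=first-kie-server
```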

This screencast concludes the third part of the Unified KIE Execution Server series. With this in mind we'll move on to more advanced cases, where we show integration with non-Java clients and clustering. More to come soon...

17 comments:

  1. Thanks a lot. You are the first one to explain how to run the KIE server in managed mode!

  2. 18:36:20,467 WARN [org.kie.scanner.MavenRepository] (default task-56) Unable to resolve artifact: org:jbpm:pom:1.0
    18:36:20,467 ERROR [org.kie.server.services.impl.KieServerImpl] (default task-56) Error creating container 'Bell' for module 'org:jbpm:1.0': java.lang.RuntimeException: Cannot find KieModule: org:jbpm:1.0

    This is what I receive on my execution server when it is connected to the wb

  3. 19:45:08,767 ERROR [io.undertow.request] (default task-2) UT005023: Exception handling request to /kie-server/services/rest/server/containers/Bell: org.jboss.resteasy.spi.UnhandledException: java.lang.IllegalArgumentException: ConversationId not valid - missing releaseId
    19:45:08,830 WARN [org.kie.scanner.MavenRepository] (EJB default - 1) Unable to resolve artifact: Bell:incidentresponse:1.0
    19:45:18,842 WARN [org.kie.scanner.MavenRepository] (EJB default - 1) Unable to resolve artifact: Bell:incidentresponse:pom:1.0
    19:45:18,843 ERROR [org.kie.server.services.impl.KieServerImpl] (EJB default - 1) Error creating container 'Bell' for module 'Bell:incidentresponse:1.0': java.lang.RuntimeException: Cannot find KieModule: Bell:incidentresponse:1.0

    1. Hi Anand, do you have kie-wb and kie-server installed in different servers?

    2. Hi, sorry for the intrusion, but I have the same problem... I have drools-wb and kie-server on different servers, I configured the maven repository in settings.xml, and creating the container on drools-wb works fine, but in the Wildfly (kie-server) log the error "ConversationId not valid - missing releaseId" appears. Can you help me? Maybe I'm doing something wrong...

  4. Hi Maciej,

    Many thanks for this article, it helped me to understand how to setup some kie-servers in a production environment.

    I've installed a business-central as controller and two different kie-servers in different servers. Deploying and creating a container works fine for me, and performing calls against 1st kie-server through Kie Remote API works fine too. However, when I try to perform calls against 2nd kie-server, traces are being showed in first kie-server machine. Is it ok? Why?

    Kind regards.

    1. It does not make sense, as kie servers work independently. Make sure you access the correct URL

      Note that the kie remote API is the workbench remote interface, while for the kie server it's the kie server client API

    2. Yes, sorry, I'm using kie-server-client API. Anyway, I'm changing the endpoint between calls but traces are only displayed in kie-server #1 log. However, if I shutdown kie-server #1, it works OK and traces are displayed in kie-server #2 log, maybe I've wrongly configured something...?

    3. You need to use a different client per server, as the URL is only configured when the kie server client is created

    4. When I perform calls to kie-server #2, I create a new session in the following way:

      configuration = KieServicesFactory.newRestConfiguration(urlKieServer_2, user, password);
      kieServicesClient = KieServicesFactory.newKieServicesClient(configuration);
      processServicesClient = kieServicesClient.getServicesClient(ProcessServicesClient.class);

      Is it correct?

    5. Hi again Maciej,

      I've seen that the only valid case to reproduce this is using a quartz timer, since it's stored in the database and we guess that it's executed on whichever server is running. Could you confirm this point?

      Many thanks.

    6. If quartz is configured to be clustered, then the job can be executed on any of the nodes quartz is running on, but on one and only one node.

  5. Hi,
    We are testing a production environment consisting on three different servers:

    -Business-Central (server #1)
    -Kie-server #1 (server #2)
    -Kie-server #2 (server #3)

    We have a load balancer for kie-servers (F5 big-ip). So we have three different IP's for performing rest calls in order to instance processes and manage tasks:

    -F5 IP (highly necessary for production environment in order to have high availability)
    -Kie-server server#1 IP
    -Kie-server server#2 IP

    We've performed some tests, and using F5 load balancer doesn't work OK:

    >> F5 test
    We have instantiated a process through the kie-server-services API using the F5 IP for the REST configuration. When we reach the first human task, we try to get the task input content through that API, but all values are always null (first screenshot attached). However, if we try to get the task input content through an HTTP request in a browser, we are able to see all the task input content correctly (second screenshot attached).

    >> Server #1 IP
    We did the same test and it worked fine in both kie-server-services and explorer calls. We always get task input content correctly.

    >> Server #2 IP
    Same as Server #1 test.

    We guess there is some kind of bug in kie-server-services API since we can retrieve that task input content through explorer http calls... This is the code we are using to retrieve tasks content:

    String urlKieServer_balanced = "http://jbpm.mycompany.es:8080/kie-server/services/rest/server";

    String containerId = "process_salud_contratacion";

    KieServicesConfiguration configuration;
    KieServicesClient kieServicesClient;
    UserTaskServicesClient userTaskServicesClient;

    System.setProperty("org.kie.server.bypass.auth.user", "true");

    // Create the REST configuration: endpoint, user, password, marshalling strategy...
    configuration = KieServicesFactory.newRestConfiguration(urlKieServer_balanced, "KieServer", "Test123");

    Set<Class<?>> clases = new HashSet<Class<?>>();


    kieServicesClient = KieServicesFactory.newKieServicesClient(configuration);
    userTaskServicesClient = kieServicesClient.getServicesClient(UserTaskServicesClient.class);

    Map<String, Object> entrada = userTaskServicesClient.getTaskInputContentByTaskId(containerId, 3L);

    Do you know if there is any problem in that API with load balancers?


    1. There must be something wrong on the load balancer that changes the content of the response. I don't know the F5 load balancer at all, so I can't say anything about possible causes. You could check whether the request is forwarded properly to the kie servers, what response is returned from them, and what is received and then forwarded to the client.

      I am not aware of any kie server client issues with load balancers, as the only thing it configures is to follow redirects.

  6. Hi Maciej, I have drools-wb and kie-server on different servers. I configured the maven repository in settings.xml, and creating the container on drools-wb works fine - even the ".jar" is copied into the local repository of kie-server - but in the Wildfly (kie-server) log the error "ConversationId not valid - missing releaseId" appears, and I can't consume the service of the newly created container either, for example: (http://ipserver:8080/kie-server/services/rest/server/containers/g2). Can you help me?

    1. Never seen this before - do you have a stack trace? Are you setting the ConversationId header in any way?