Clustering in jBPM v6

Clustering in jBPM v5 was not an easy task; there were several known issues that had to be resolved on the client side (the project implementing a solution with jBPM), to name a few:

  • session management - when to load/dispose knowledge session
  • timer management - required to keep knowledge session active to fire timers
This is no longer the case in version 6, where several improvements made it into the code base. For example, a new module responsible for complete session management was introduced: the jBPM runtime manager. More on the runtime manager in the next post; this one focuses on what a clustered solution might look like. First of all, let's start with the important pieces a jBPM environment consists of:

  1. asset repository - VFS based repository backed by Git - this is where all the assets are stored during the authoring phase
  2. jbpm server - JBoss AS7 with deployed jbpm console (BPM focused web application) or kie-wb (fully featured web application that combines the BPM and BRM worlds)
  3. database - backend where all the state data is kept (process instances, ksessions, history log, etc)

Repository clustering

The asset repository is a Git backed virtual file system (VFS) that keeps all the assets (process definitions, rules, data models, forms, etc) in a reliable and efficient way. Anyone who has worked with Git understands perfectly how good it is for source management - and what else are assets if not source code?
Since it is a file system, it resides on the same machine as the server that uses it, which means it must be kept in sync between all servers of a cluster. For that, jBPM makes use of two well-known open source projects, Apache ZooKeeper and Apache Helix:

ZooKeeper is responsible for gluing all the parts together, while Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, resources).

These two components are utilized by the runtime environment that jBPM v6 is based on:
  • kie-commons - provides the VFS implementation and clustering
  • UberFire framework - provides the backbone of the web applications

So let's take a look at what we need to do to set up a cluster for our VFS:

Get the software

  • download Apache ZooKeeper (note that 3.3.4 and 3.3.5 are currently the only tested versions, so make sure you get the correct one)
  • download Apache Helix (note that the tested version was 0.6.1)

Install and configure

  • unzip Apache ZooKeeper into the desired location (from now on we refer to it as zookeeper_home)
  • go to zookeeper_home/conf and make a copy of zoo_sample.cfg named zoo.cfg
  • edit zoo.cfg and adjust the settings if needed; these two are important in most cases:
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

  • unzip Apache Helix into the desired location (from now on we refer to it as helix_home)

Setup cluster

Now that we have all the software available locally, the next step is to configure the cluster itself. We begin by starting the ZooKeeper server, which will be the master of the cluster configuration:
  • go to zookeeper_home/bin
  • execute following command to start zookeeper server:
sudo ./zkServer.sh start
  • the ZooKeeper server should now be started; if it fails to start, make sure that the data directory defined in the zoo.cfg file exists and is accessible
  • all ZooKeeper activities can be viewed in zookeeper_home/bin/zookeeper.out
Next, the cluster itself needs to be defined; Apache Helix provides utility scripts for this that can be found in helix_home/bin.

  • go to helix_home/bin
  • create cluster
./helix-admin.sh --zkSvr localhost:2181 --addCluster jbpm-cluster
  • add nodes to the cluster 
node one:
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeOne:12345
node two:
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeTwo:12346
add as many nodes as you will have jBPM server cluster members (in most cases the number of application servers in the cluster)
NOTE: nodeOne:12345 is the unique identifier of the node that will be referenced later on when configuring application servers; although it looks like a host and port number, it is only used to uniquely identify a logical node.
  • add resources to the cluster
./helix-admin.sh --zkSvr localhost:2181 --addResource jbpm-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
  • rebalance cluster to initialize it
./helix-admin.sh --zkSvr localhost:2181 --rebalance jbpm-cluster vfs-repo 2

  • start the Helix controller to manage the cluster
./run-helix-controller.sh --zkSvr localhost:2181 --cluster jbpm-cluster 2>&1 > /tmp/controller.log &
The values given above are just examples and can be changed according to your needs:
cluster name: jbpm-cluster
node name: nodeOne:12345, nodeTwo:12346
resource name: vfs-repo
zkSvr value must match Zookeeper server that is used.
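When the cluster has more members, the addNode step can be scripted; here is a minimal sketch (the node name pattern and base port are assumptions) that only prints the commands to run:

```shell
# generate the helix-admin addNode commands for N cluster members
# (jbpm-cluster, base port 12345 and the node name pattern are assumptions)
N=3
BASE_PORT=12345
for i in $(seq 1 "$N"); do
  echo "./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster node${i}:$((BASE_PORT + i - 1))"
done
```

Remember that after adding nodes the rebalance step still needs to be run with the matching replica count.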

Prepare database

Before we start with the application server configuration, the database needs to be prepared; for this example we use a PostgreSQL database. The jBPM server will create all required tables itself by default, so there is not much work required, but a few simple tasks must be done before starting the server configuration.

Create database user and database

First of all PostgreSQL needs to be installed; next, a user that will own the jbpm schema needs to be created on the database. In this example we use:
user name: jbpm
password: jbpm

Once the user is ready, the database can be created; again, for this example jbpm is chosen as the database name.
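For reference, the equivalent SQL might look like this (a sketch; run it as a PostgreSQL superuser, and pick a stronger password for anything beyond a local test):

```sql
-- create the jbpm user and database used throughout this guide
CREATE USER jbpm WITH PASSWORD 'jbpm';
CREATE DATABASE jbpm OWNER jbpm;
```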

NOTE: this information (username, password, database name) will be used later on in the application server configuration.

Create Quartz tables

Lastly the Quartz-related tables must be created; the best way to do so is to use the database scripts provided with the Quartz distribution (jBPM uses Quartz 1.8.5). The DB scripts are usually located under QUARTZ_HOME/docs/dbTables.
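A sketch of loading that DDL with psql; the QUARTZ_HOME default and the jbpm/jbpm credentials are assumptions, and the command is only printed here - run it against your database:

```shell
# print the psql command that loads the Quartz PostgreSQL DDL
# (QUARTZ_HOME default and the jbpm/jbpm credentials are assumptions)
QUARTZ_HOME=${QUARTZ_HOME:-/opt/quartz-1.8.5}
echo "psql -U jbpm -d jbpm -f $QUARTZ_HOME/docs/dbTables/tables_postgres.sql"
```

Verify the exact script name against your Quartz distribution before running it.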

Create quartz definition file

The Quartz configuration used by the jBPM server needs to accommodate the needs of the environment. As this guide shows only a basic setup, it obviously will not cover all needs, but it allows for further improvements.

Here is a sample configuration used in this setup (a clustered JDBC job store; the datasource JNDI names must match the datasources defined later on the application server):
# Configure Main Scheduler Properties  

org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

# Configure ThreadPool  

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5

# Configure JobStore  

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.dataSource = managedDS
org.quartz.jobStore.nonManagedTXDataSource = notManagedDS
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.clusterCheckinInterval = 20000

# Configure Datasources  

org.quartz.dataSource.managedDS.jndiURL = jboss/datasources/psjbpmDS
org.quartz.dataSource.notManagedDS.jndiURL = jboss/datasources/quartzNotManagedDS

Configure JBoss AS 7 domain

1. Create JDBC driver module - for this example PostgreSQL
a) go to the JBOSS_HOME/modules directory (on EAP: JBOSS_HOME/modules/system/layers/base)
b) create the module folder org/postgresql/main
c) copy the PostgreSQL driver jar into the module folder (org/postgresql/main) and name it postgresql-jdbc.jar
d) create a module.xml file inside the module folder (org/postgresql/main) with the following content:
         <module xmlns="urn:jboss:module:1.0" name="org.postgresql">
           <resources>
             <resource-root path="postgresql-jdbc.jar"/>
           </resources>
           <dependencies>
             <module name="javax.api"/>
             <module name="javax.transaction.api"/>
           </dependencies>
         </module>

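The directory layout from steps a)-c) can be sketched as follows (the JBOSS_HOME default here is an assumption; on EAP the modules root differs as noted above):

```shell
# lay out the PostgreSQL module folder and show where the jar and module.xml go
JBOSS_HOME=${JBOSS_HOME:-/tmp/jboss-as-7}   # assumption: adjust to your install
MODULE_DIR="$JBOSS_HOME/modules/org/postgresql/main"
mkdir -p "$MODULE_DIR"
echo "copy the driver jar to: $MODULE_DIR/postgresql-jdbc.jar"
echo "create module.xml in:   $MODULE_DIR/module.xml"
```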
2. Configure data sources for jbpm server
a) go to JBOSS_HOME/domain/configuration
b) edit the domain.xml file
for simplicity's sake we use the default domain configuration, which uses profile "full" that defines two server nodes as part of main-server-group
c) locate the profile "full" inside the domain.xml file and add the new data sources
main data source used by jbpm (the connection url, user name and password must match your PostgreSQL setup):
   <datasource jndi-name="java:jboss/datasources/psjbpmDS"
               pool-name="postgresDS" enabled="true" use-java-context="true">
      <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
      <driver>postgres</driver>
      <security><user-name>jbpm</user-name><password>jbpm</password></security>
   </datasource>
additional data source for quartz (non managed pool):
   <datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
               pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
      <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
      <driver>postgres</driver>
      <security><user-name>jbpm</user-name><password>jbpm</password></security>
   </datasource>
and the driver used by the data sources:
   <driver name="postgres" module="org.postgresql"/>
3. Configure security domain
     a) go to JBOSS_HOME/domain/configuration
     b) edit the domain.xml file
for simplicity's sake we use the default domain configuration, which uses profile "full" that defines two server nodes as part of main-server-group
     c) locate the profile "full" inside the domain.xml file and add a new security domain for jbpm-console (or kie-wb) - this is just a copy of the "other" security domain defined there by default
<security-domain name="jbpm-console-ng" cache-type="default">
   <authentication>
      <login-module code="Remoting" flag="optional">
         <module-option name="password-stacking" value="useFirstPass"/>
      </login-module>
      <login-module code="RealmDirect" flag="required">
         <module-option name="password-stacking" value="useFirstPass"/>
      </login-module>
   </authentication>
</security-domain>
for the kie-wb application, simply replace jbpm-console-ng with kie-ide as the name of the security domain.
4. Configure server nodes

    a) go to JBOSS_HOME/domain/configuration
    b) edit host.xml file
    c) locate the servers that belong to "main-server-group" in the host.xml file and add the following system properties:

  • org.uberfire.nio.git.dir = /home/jbpm/node[N]/repo
    location where the VFS asset repository will be stored for node[N]
  • org.quartz.properties = /jbpm/quartz-definition.properties
    absolute file path to the quartz definition properties
  • jboss.node.name = nodeOne
    unique node name within the cluster (nodeOne, nodeTwo, etc)
  • org.uberfire.cluster.id = jbpm-cluster
    name of the helix cluster
  • org.uberfire.cluster.zk = localhost:2181
    location of the zookeeper server
  • org.uberfire.cluster.local.id = nodeOne_12345
    unique id of the helix cluster node; note that ':' is replaced with '_'
  • org.uberfire.cluster.vfs.lock = vfs-repo
    name of the resource defined on the helix cluster
  • org.uberfire.nio.git.daemon.port = 9418
    port used by the GIT repo to accept client connections; must be unique for each cluster member
  • org.uberfire.nio.git.ssh.port = 8001
    port used by the GIT repo to accept client connections (over ssh); must be unique for each cluster member
  • org.uberfire.nio.git.daemon.host = localhost
    host used by the GIT repo to accept client connections; if cluster members run on different machines this property must be set to the actual host name instead of localhost, otherwise synchronization won't work
  • org.uberfire.nio.git.ssh.host = localhost
    host used by the GIT repo to accept client connections (over ssh); if cluster members run on different machines this property must be set to the actual host name instead of localhost, otherwise synchronization won't work
  • org.uberfire.metadata.index.dir = /home/jbpm/node[N]/index
    location where the index for search will be created (maintained by Apache Lucene)
  • org.uberfire.cluster.autostart = false
    delays VFS clustering until the application is fully initialized, to avoid conflicts when all cluster members create local clones
examples for the two nodes:
  • nodeOne
  <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
  <property name="org.quartz.properties" value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
  <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
  <property name="org.uberfire.cluster.id" value="jbpm-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
  <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
  <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>
  • nodeTwo
  <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
  <property name="org.quartz.properties" value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
  <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
  <property name="org.uberfire.cluster.id" value="jbpm-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
  <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
  <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>

NOTE: since this example runs on a single machine, the host properties for the ssh and git daemons are omitted.

Since repository synchronization is done between git servers, make sure that the git daemons are active (and properly configured - host name and port) on every cluster member.
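To verify that every member's git daemon is reachable, git ls-remote can be used; the host names, ports and the repository name below are assumptions, and the commands are only printed here:

```shell
# print connectivity checks for both cluster members' git daemons
# (host names, ports and repository name are assumptions for this setup)
for TARGET in nodeOne:9418 nodeTwo:9419; do
  echo "git ls-remote git://${TARGET}/jbpm-playground.git"
done
```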
5. Create user(s) and assign it to proper roles on application server

Add application users
In the previous step a security domain was created so that jbpm console (or kie-wb) users can be authenticated when logging in. Now it's time to add some users so you can log in to the application once it's deployed. To do so:
 a) go to JBOSS_HOME/bin
 b) execute ./add-user.sh script and follow the instructions on the screen
  - use the Application realm, not Management
  - when asked for roles, make sure you assign at least:
    for jbpm-console: jbpm-console-user
    for kie-wb: kie-user
add as many users as you need; the same goes for roles, but the ones listed above are required to be authorized to use the web application.

Add management (application server) user
To be able to manage the application server as a domain, we need to add an administrator user; this is similar to adding application users, but the realm needs to be Management:
 a) go to JBOSS_HOME/bin
 b) execute ./add-user.sh script and follow the instructions on the screen
  - use the Management realm, not Application

The application server should now be ready to use, so let's start the domain:
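The domain is started with the standard AS7 script (the JBOSS_HOME default here is an assumption; the command is only printed):

```shell
# print the command that starts the AS7 domain
# (JBOSS_HOME default is an assumption; adjust to your install)
JBOSS_HOME=${JBOSS_HOME:-/tmp/jboss-as-7}
echo "$JBOSS_HOME/bin/domain.sh"   # starts the host controller and the servers of main-server-group
```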


after a few seconds (the servers are still empty) you should be able to access both server nodes, as well as the administration console:
administration console: http://localhost:9990/console

the port offset for a given server is configurable in host.xml.
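The per-node unique ports used in this example follow simple base + index arithmetic; a sketch below (the bases 9418 and 8001 come from the property table above, the indexing scheme is an assumption):

```shell
# derive unique git daemon and ssh ports for each cluster member
GIT_BASE=9418
SSH_BASE=8001
for INDEX in 0 1; do
  echo "node$((INDEX + 1)): git-daemon=$((GIT_BASE + INDEX)) ssh=$((SSH_BASE + INDEX))"
done
```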

Deploy application - jBPM console (or kie-wb)

Now it's time to prepare and deploy the application, either jbpm-console or kie-wb. By default both applications come with a predefined persistence unit that uses the ExampleDS from AS7 backed by an H2 database, so this configuration needs to be altered to use the PostgreSQL database instead.

Required changes in persistence.xml

  • change the jta-data-source name to match the one defined on the application server
  • change the hibernate dialect to PostgreSQL
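The two changes might look like this inside persistence.xml (the datasource name matches the one defined on the application server earlier; the exact dialect class depends on your Hibernate version, so treat it as an assumption):

```xml
<jta-data-source>java:jboss/datasources/psjbpmDS</jta-data-source>
...
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
```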

Application build from source

If the application is built from source, you need to edit the persistence.xml file that is located under:
then rebuild the jbpm-distribution-wars module to prepare the deployable package, which is named:

Deployable package downloaded

If you downloaded the deployable package (which is already a war file), you need to extract it and change the persistence.xml located under:
once the file is edited and contains the correct values to work with the PostgreSQL database, the application needs to be repackaged:
NOTE: before repackaging make sure that the previous war file is not in the same directory, otherwise it will be packaged into the new war too.

jar -cfm jbpm-console-ng.war META-INF/MANIFEST.MF *

IMPORTANT: make sure that you include the same manifest file that was in the original war file, as it contains valuable entries.

To deploy the application, log in as the management user to the administration console of the domain and add a new deployment using the Runtime view of the console. Once the deployment is added to the domain, assign it to the right server group - in this example we used main-server-group. By default this enables the deployment on all servers within that group - meaning it gets deployed on all of them. This will take a while, and after a successful deployment you should be able to access jbpm-console (or kie-wb) on the individual server locations:

the context root (jbpm-console-ng) depends on the name of the war file that was deployed, so if the filename is jbpm-console-ng-jboss7.war then the context root will be jbpm-console-ng-jboss7. The same rule applies to a kie-wb deployment.

And that's it - you should have a fully operational jBPM cluster environment!

Obviously, in normal scenarios you would want to hide the complexity of the different application urls from end users (for example by putting a load balancer in front of them), but I explicitly left that out of this example to show the proper behavior of independent cluster nodes.

The next post will go into details on how the different components play together smoothly in a cluster, to name a few:
  • failover - in case a cluster node goes down
  • timer management - how timers fire in a cluster environment
  • session management - automatic reactivation of sessions on demand
  • etc
As we are still in development mode, please share your thoughts on what you would like to see in cluster support for jBPM; your input is most appreciated!

There was a change in the naming of system properties since this article was written, so those who already configured this for 6.0.0.Final will need to adjust the names of the following system properties:

  • org.kie.nio.git.dir -> org.uberfire.nio.git.dir
  • org.kie.nio.git.daemon.port -> org.uberfire.nio.git.daemon.port
  • org.kie.kieora.index.dir -> org.uberfire.metadata.index.dir
  • org.uberfire.cluster.autostart - new parameter
The table above already contains the proper values for 6.0.0.Final.


  1. Hi,

    I am trying to achieve clustering for jBPM6. So far, I have been unsuccessful, in getting this to work. I have couple questions and doubts:

    1. Can we configure clusters using the h2 database itself? Is it mandatory to MySQL or PostgreSQL?

    2. Node here is referred to two jbpm containers or two servers hosting jbpm.

    3. I was unable to find the Quartz_Home. Where do I find it?


  2. you can configure the cluster with H2, but make sure that H2 runs in server mode so all cluster members can access the same database.
    Depending on the perspective - as this article mainly covers jbpm clustering - the node is actually the jbpm app that is part of the cluster.
    quartz home is the location where you extract the quartz distribution; you need to download it first.

  3. Thank you for clarifications:

    If I understand correctly, you are telling that we need to have 2 - jbpm installer on the same machine. Node one refers to one of the jbpm installer and node two to the other jbpm installer.

    Am I correct?

  4. yes, that can be one of the options to run it. The recommended would be to use JBoss AS domain instead of standalone servers (that are used in jbpm installer)

  5. Hi,

    I'm looking at how to cluster jBPM. In a production environment, I don't think I really need the git repository, since that will be managed in my development and test environments. All I really care about in production is my maven artifacts. In that scenario, is there any performance advantage to clustering the asset repository? Or am I better of just standing up a single maven repository that all the jBPM nodes use?


    1. I'd go for central maven repo outside of workbench and configure workbench to use it as primary repo. Then all artifacts would end up there so all cluster members have equal access to them.

      Alternatively you can still use workbench maven repo and then use shared drive (NFS, SAN) that will host the artifacts so all cluster members can access them.

  6. By any chance do we have clustering set-up documented for Tomcat ?

  7. In my production, Most of my process are timer based, I would like to know that these process which will invoked in one node, would it also invoke in the other node as well ? I can try to see what happens but in my installation I haven't configured the quartz at all. is it using in memory Quartz configuration ?

    1. if you haven't set up quartz at all then timers might fire on all cluster members as they are not synchronized. By default the timer is backed by an in-memory thread pool scheduler service that is always initialized when the ksession is loaded. So if you use timers extensively I strongly recommend using quartz with a db job store.

    2. Thanks for responding back promptly. I am close to establishing the cluster let see how it goes. - thanks again

    3. Dear Maciej,
      I was able to deploy the jbpm in cluster following the above steps, but unfortunate none of the processes are working which are timer based. I have not configured so far the Helix and ZooKeeper as my kjar is not going to change and will not need syncing. any Idea what could be the issue. No errors in log . - Thanks

    4. Hi Maciej ,

      I was able to create a Jboss cluster and was able to deploy kie-workbench and also was able to deploy the kjar of my bpm flows. I used quartz to configure the timers based flow , using jdbc store , the process flows are visible in process definition but they are not triggering at all. more details can be found https://developer.jboss.org/message/944964#944964 my deployment versions are Jboss7.1.1.Final and Jboss 6.2.0.Final with oracle database.

      When I do the same deployment in standalone , there i am not even able to deploy the kjar itself it fails .

      Thanks in advance

      Sanjay Gautam

  8. Hi Maciej do you think the latest versions of zookeeper and helix should be fine enough ? as the blog when it was written it is long back in 2013. I cannot find that version on the zookeeper site

    1. for Benefit of the other user I tried with the Latest Helix (0.6.5) and Zookeeper (Release 3.4.6(stable) it works fine.

  9. Did the "Next post will go into details on how different components play smoothly in cluster" get written?

    1. unfortunately not. Though it's changed a bit especially around runtime clustering - there is no need to use zookeeper anymore to have synchronization across all cluster members as they are pulled from db by each node in the cluster.

      Feel free to drop a note at what you're interested in so I can try to provide more details ad hoc.

    2. I'm looking more at the execution server or embedding jBPM in my own server (don't think I need the workbench frontend).

      The bullet points you mentioned are what I'm researching:
      * failover - in case cluster node goes down
      * timer management - how does timer fire in cluster environment (quartz integration solves this?)
      * session management - auto reactivation of session on demand (human/timer/other?)

    3. if you consider execution time only then it's much easier.
      " failover - in case cluster node goes down"
      if a node fails, the in-flight transactions are rolled back and you will have to restart the operation on another node; there is no automatic failover of running operations. You do, however, keep a consistent state thanks to transaction handling

      * timer management - how does timer fire in cluster environment (quartz integration solves this?)
      depending on application server, quartz can be used as best (and most known) solution. But since it starts background threads not managed by app server it might not be seen as good thing on servers such as WebSphere. Then you have to use ejb timer based timer service.

      * session management - auto reactivation of session on demand (human/timer/other?)
      that's already taken care of by RuntimeManager so no need to worry. Depending on what runtime strategy you use might slightly differ but should properly recreate session - talking about kie session not http session

    4. Thanks for the info. Can I ask a couple of follow-ups?

      "if node fails the in-flight transactions are rolled back and you will have to restart it on another node"
      Do you mean JPA transactions? What is it that I'd need to restart on another node? (process instance?)

      "that's already taken care of by RuntimeManager so no need to worry."
      Sounds great! So any calls to the (say) the Task Service will automatically recreate the ksession? And if using Quartz, timers expiring will also do this. Any other cases I should be thinking about?

      I'm assuming I'll use something like sticky sessions so that client apps connect to the jbpm cluster node that is running their processes - is that way it's typically setup?

    5. "if node fails the in-flight transactions are rolled back and you will have to restart it on another node"
      Do you mean JPA transactions? What is it that I'd need to restart on another node? (process instance?)
      It's a JTA transaction that includes the JPA operations. What needs to be restarted is the operation that was running when the server crashed. So if you complete a task and then the server crashes, you would have to complete that task again.

      "that's already taken care of by RuntimeManager so no need to worry."
      Sounds great! So any calls to the (say) the Task Service will automatically recreate the ksession?
      And if using Quartz, timers expiring will also do this. Any other cases I should be thinking about?
      correct, whenever there is a need for accessing ksession runtime manager will ensure it's there and properly configured. Look at jBPM services if you consider embedded or KIE Server when you prefer to interact with process engine over remote interfaces

    6. Thanks, I'll look at jBPM Services. Is this post http://mswiderski.blogspot.co.uk/2014/11/cross-framework-services-in-jbpm-62.html the best place to start?

    7. yup, that's good to start with as it provides introduction to the concept

  10. Hi Maciej,

    I have some bpm processes those are started using a timer event.

    I am trying to use ejb-timers by adding the jbpm-services-ejb-timer.jar to the Spring-based web application. My app is deployed in a 2-node cluster in WebSphere.

    I have used a Scheduler in clustered scope, pointing to an XA-Datasource and a work manager. All 3 are cluster-scoped. This scheduler is connected by each nodes under ejb timer service settings.

    Ideally this should be sufficient. But I can see timers are running in both nodes and the behavior looks like they are non-persistent, even though I can see the tasks are added to the database table during deployment of the app. These tasks eventually are removed from the table and after few hours, no tasks remain in the table. But the timers still run repeatedly.

    Is there any special configuration needed to make these persistent?

    Please help on this.


  11. Hi Maciej,

    I found the problem for timers. We are using Deployment Synchronizer as we are using clustered nodes. But the problem is, deployment sync uses an Unmanaged Thread pool to run and thats why it cannot find the EJBScheduler from WebSphere, and thus defaults to ThreadPoolSchedulerService and ignores the ejbTimer completely.

    I have extended the DeploymentSync to use WebSphere's timer manager to make it work. Now timers run fine.

    Do you see any other place in the api which can cause issues due to unmanaged threads? I will try to search the api to check for that as well.

    Please suggest.


    1. there is out of the box component that extends deployment synchronizer that uses EJB timers and it comes with ejb services or cdi services. So you could use that too, otherwise it's fine with your extension as it pretty much does the same thing.

      timers and executor uses background threads and thus might be using unmanaged thread pool depending on configuration

  12. Hi Maciej,

    First of all i would like to say its an amazing blog :)

    Secondly i had a query. I have 2 JBPM(6.1.0 GA) applications deployed on 2 different JBOSS(EAP 6.1) instances. I have used the same datasource(Oracle DB) for both of the JBPM instances. I am getting some conflict errors after i did that. My aim is to have the same process instances among the 2 JBPM instances. I have not created the Quartz tables as mentioned in the above document. Is that compulsory? Can you please advice on this?


    1. you need to provide more details about the error otherwise no way to help

  13. Hi Maciej

    Is it possible to know in which cluster are we?


    1. the only way that I can think of is to rely on system properties that indicate what cluster you are in. Other than that I don't think so.

    2. Thanks Maciej, it worked!!

  14. Hi, nice blog! Put a lot of details here. Thanks! And my question is can I cluster kie servers in this way? And btw, how do I know it works? Can I just access two different human tasks that belongs to the same process running on seperated kie servers? In other words, the same process has been runned on twe kie servers, could I just access any human task on any kie server in this cluster? Actually I've done all of these steps with three machines, one kie workbench, two kie servers. But I've no idea it works or not.

    1. yes, all what you described will work as expected as long as all kie servers share that same db. There is another clustering of kie server article as well that you can find here http://mswiderski.blogspot.se/2016/04/kie-server-clustering-and-scalability.html

  15. hi please why id get this error :
    y:] - Exception causing close of session 0x15ab31d345e0000 due to java.io.IOException: Une connex
    y:] - Closed socket connection for client /0:0:0:0:0:0:0:1:51646 which had sessionid 0x15ab31d34
    eeperServer@358] - Expiring session 0x15ab31d345e0000, timeout of 30000ms exceeded
    cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x15ab31d345e0000
    y:] - Accepted socket connection from /0:0:0:0:0:0:0:1:51649
    y:] - Client attempting to establish new session at /0:0:0:0:0:0:0:1:51649
    perServer@673] - Established session 0x15ab31d345e0001 with negotiated timeout 30000 for client /0:0:0:0:0:0:0:1:51649
    cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15ab31d345e0001 type:create cxid:0
    when id run this command $ ./helix-admin.sh --zkSvr localhost:2181 --addCluster kie-cluster

  16. i want to use two servers jboss in the same windows

  17. version are zookeeper-3.4.9 and helix-core-0.6.7-pkg

  18. Maciel,
    First of all thanks for the tutorial.
    By any chance do we have clustering set-up documented for Tomcat?
    I am trying to make it work but if I make a change in any asset, a rule for example, the node I am connected have the change and the other node (I am working with 2 nodes) get advised by the Helix Listener, but the repository stays at the same state, meaning that my changes were not sync between the nodes.

    Thanks for all support.


    1. not really, there was no effort made to make this clustering work on tomcat unfortunately.

    2. No problem.
      I was able to make it work following the documentation.
      Thanks for replying me back.


    3. Hi Leandro,
      I am also trying to setup a clustering on tomcat server for Workbench in tomcat server. If it is working for you could you please share the documentation with me as well. It will be really helpful.

    4. Hi Leandro,
      I'm trying to configure a workbench cluster in tomcat server, would you please share the solution that you did??
      Thanks Lila

  19. Hi,
    I am trying to build a jbpm6.5 cluster with mysql as db. I m using

    I am creating setup by following steps

    ./bin/helix-admin.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --addCluster jbpm-domain-cluster

    /bin/helix-admin.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --addNode jbpm-domain-cluster server-one:12345

    ./bin/helix-admin.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --addNode jbpm-domain-cluster server-two:12346

    ./bin/helix-admin.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --addResource jbpm-domain-cluster vfs-repo-domain 1 LeaderStandby AUTO_REBALANCE

    ./bin/run-helix-controller.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --cluster jbpm-domain-cluster 2>&1 > ./controller.log &
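    After the registration steps above, the cluster state can be sanity-checked against ZooKeeper with helix-admin's list commands before starting the application servers. A sketch, assuming the ZooKeeper ensemble is running and that these subcommands are available in your Helix 0.6.x build:

    ```
    # list all clusters known to this ZooKeeper ensemble
    ./bin/helix-admin.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --listClusters

    # show the nodes and resources registered for the jbpm cluster
    ./bin/helix-admin.sh --zkSvr localhost:2181,localhost:2182,localhost:2183 --listClusterInfo jbpm-domain-cluster
    ```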

    Start node one:
    ./bin/standalone.sh -c standalone-full-ha.xml -b

    Start node two:
    ./bin/standalone.sh -Djboss.socket.binding.port-offset=100 -c standalone-full.xml -b

    I am using the node configuration listed in the reply below.

    Node one starts properly, but the 2nd node gets stuck while starting and the jbpm-console never gets deployed. The following is the last log output printed on the 2nd node:

    17:23:28,759 INFO [org.apache.helix.manager.zk.ZKHelixManager] (MSC service thread 1-1) init handler: /jbpm-domain-cluster/INSTANCES/server-two_12346/MESSAGES, org.apache.helix.messaging.handling.HelixTaskExecutor@89be1f8
    17:23:28,760 INFO [org.apache.helix.manager.zk.CallbackHandler] (ZkClient-EventThread-349-localhost:2181,localhost:2182,localhost:2183) 349 START:INVOKE /jbpm-domain-cluster/INSTANCES/server-two_12346/MESSAGES listener:org.apache.helix.messaging.handling.HelixTaskExecutor
    17:23:28,760 INFO [org.apache.helix.manager.zk.CallbackHandler] (ZkClient-EventThread-349-localhost:2181,localhost:2182,localhost:2183) server-two_12346 subscribes child-change. path: /jbpm-domain-cluster/INSTANCES/server-two_12346/MESSAGES, listener: org.apache.helix.messaging.handling.HelixTaskExecutor@89be1f8
    17:23:28,761 INFO [org.apache.helix.messaging.handling.HelixTaskExecutor] (ZkClient-EventThread-349-localhost:2181,localhost:2182,localhost:2183) No Messages to process
    17:23:28,761 INFO [org.apache.helix.manager.zk.CallbackHandler] (ZkClient-EventThread-349-localhost:2181,localhost:2182,localhost:2183) 349 END:INVOKE /jbpm-domain-cluster/INSTANCES/server-two_12346/MESSAGES listener:org.apache.helix.messaging.handling.HelixTaskExecutor Took: 1ms

    and after this, a timeout exception. Please let me know if I am missing something. What am I doing wrong?

    1. node-one properties
      <property name="org.kie.demo" value="false"/>
      <property name="org.kie.example" value="false"/>
      <property name="org.kie.server.persistence.ds" value="java:jboss/datasources/jbpmDS"/>
      <property name="org.kie.server.persistence.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect"/>
      <property name="jboss.node.name" value="server-one"/>
      <property name="org.uberfire.nio.git.dir" value="/home/rupesh/jbpm6.5Cluster/clustering/server-one"/>
      <property name="org.uberfire.cluster.id" value="jbpm-domain-cluster"/>
      <property name="org.uberfire.cluster.zk" value="localhost:2181,localhost:2182,localhost:2183"/>
      <property name="org.uberfire.cluster.local.id" value="server-one_12345"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo-domain"/>
      <property name="org.uberfire.nio.git.daemon.host" value="localhost"/>
      <property name="org.uberfire.nio.git.daemon.port" value="9418"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8003"/>
      <property name="org.uberfire.nio.git.ssh.host" value=""/>
      <property name="org.uberfire.metadata.index.dir" value="/home/rupesh/jbpm6.5Cluster/clustering/server-one"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/home/rupesh/jbpm6.5Cluster/clustering/server-one"/>
      <property name="org.quartz.properties" value="/home/rupesh/jbpm6.5Cluster/clustering/quartz/quartz-definition-mysql.properties"/>

      node-two properties

      <property name="org.kie.demo" value="false"/>
      <property name="org.kie.example" value="false"/>
      <property name="jboss.node.name" value="server-two"/>
      <property name="org.kie.server.persistence.ds" value="java:jboss/datasources/jbpmDS"/>
      <property name="org.kie.server.persistence.dialect" value="org.hibernate.dialect.MySQLInnoDBDialect"/>
      <property name="org.uberfire.nio.git.dir" value="/home/rupesh/jbpm6.5Cluster/clustering/server-two"/>
      <property name="org.uberfire.cluster.id" value="jbpm-domain-cluster"/>
      <property name="org.uberfire.cluster.zk" value="localhost:2181,localhost:2182,localhost:2183"/>
      <property name="org.uberfire.cluster.local.id" value="server-two_12346"/>
      <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo-domain"/>
      <property name="org.uberfire.nio.git.daemon.host" value="localhost"/>
      <property name="org.uberfire.nio.git.daemon.port" value="9419"/>
      <property name="org.uberfire.nio.git.ssh.port" value="8004"/>
      <property name="org.uberfire.nio.git.ssh.host" value=""/>
      <property name="org.uberfire.metadata.index.dir" value="/home/rupesh/jbpm6.5Cluster/clustering/server-two"/>
      <property name="org.uberfire.nio.git.ssh.cert.dir" value="/home/rupesh/jbpm6.5Cluster/clustering/server-two"/>
      <property name="org.quartz.properties" value="/home/rupesh/jbpm6.5Cluster/clustering/quartz/quartz-definition-mysql.properties"/>
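      The quartz-definition-mysql.properties file referenced above is not shown. For a timer-safe cluster it would typically contain a clustered JDBC job store configuration along the lines of the sketch below; the property names come from the Quartz scheduler documentation, while the datasource names (managedDS, notManagedDS) and their JNDI bindings are assumptions that must match your own datasource definitions:

      ```
      # minimal clustered Quartz sketch for a JDBC job store (MySQL)
      org.quartz.scheduler.instanceName = jBPMClusteredScheduler
      org.quartz.scheduler.instanceId = AUTO
      org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
      org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
      org.quartz.jobStore.tablePrefix = QRTZ_
      org.quartz.jobStore.isClustered = true
      org.quartz.jobStore.clusterCheckinInterval = 20000
      # assumed datasource names / JNDI bindings - adjust to your setup
      org.quartz.jobStore.dataSource = managedDS
      org.quartz.jobStore.nonManagedTXDataSource = notManagedDS
      org.quartz.dataSource.managedDS.jndiURL = jboss/datasources/jbpmDS
      org.quartz.dataSource.notManagedDS.jndiURL = jboss/datasources/quartzNotManagedDS
      ```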

  20. Hello, I am trying to set up clustering in Tomcat but the git repos are not syncing. I followed all the steps in the blog. Has anyone been able to set up clustering on Tomcat successfully?

  21. I'm trying to cluster a bunch of KIE-Drools-Wb 7.11.Final instances, but no luck as yet. I read your comment somewhere that ZooKeeper and Helix support has been removed in the 7.x series. Is that true? And if so, how can we achieve clustering, and hence sync of the niogit repos?