Now that we have all the software available locally, the next step is to configure the cluster itself. We start by starting the Zookeeper server, which will act as the master of the cluster configuration:
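A minimal sketch, assuming a default Zookeeper installation where the sample configuration shipped with Zookeeper is sufficient for this setup:
- go to zookeeper_home/conf and create the configuration from the shipped sample
cp zoo_sample.cfg zoo.cfg
- go to zookeeper_home/bin and start the server
./zkServer.sh start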
Once Zookeeper is running, the cluster is configured with the utility scripts that Apache Helix provides in helix_home/bin.
- go to helix_home/bin
- create cluster
./helix-admin.sh --zkSvr localhost:2181 --addCluster jbpm-cluster
- add nodes to the cluster
node 1
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeOne:12345
node 2
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster nodeTwo:12346
Add as many nodes as you will have jBPM server cluster members (in most cases, the number of application servers in the cluster).
NOTE: nodeOne:12345 is the unique identifier of the node and will be referenced later on when configuring the application servers. Although it looks like a host and port number, it is used to uniquely identify the logical node.
- add resources to the cluster
./helix-admin.sh --zkSvr localhost:2181
--addResource jbpm-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE
- rebalance cluster to initialize it
./helix-admin.sh --zkSvr localhost:2181 --rebalance jbpm-cluster vfs-repo 2
- start the Helix controller to manage the cluster
./run-helix-controller.sh --zkSvr localhost:2181
--cluster jbpm-cluster 2>&1 > /tmp/controller.log &
The values given above are just examples and can be changed according to your needs:
cluster name: jbpm-cluster
node names: nodeOne:12345, nodeTwo:12346
resource name: vfs-repo
The zkSvr value must match the Zookeeper server that is used.
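Before moving on, the cluster definition can be verified with the same admin script. This is only a sketch; the list flags below exist in recent Helix releases, so consult ./helix-admin.sh --help if your version differs:
./helix-admin.sh --zkSvr localhost:2181 --listClusters
./helix-admin.sh --zkSvr localhost:2181 --listClusterInfo jbpm-cluster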
Prepare database
Before we start with the application server configuration, the database needs to be prepared; for this example we use a PostgreSQL database. The jBPM server will create all required tables itself by default, so there is not much work required here, but a few simple tasks must be done before starting the server configuration.
Create database user and database
First of all, PostgreSQL needs to be installed. Next, a user that will own the jbpm schema needs to be created on the database; in this example we use:
user name: jbpm
password: jbpm
Once the user is ready, the database can be created; again, jbpm is chosen as the database name for this example.
NOTE: this information (user name, password, database name) will be used later on in the application server configuration.
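A minimal sketch, assuming the PostgreSQL command line client is available and the default postgres superuser can be used to run the statements:
psql -U postgres -c "CREATE USER jbpm WITH PASSWORD 'jbpm';"
psql -U postgres -c "CREATE DATABASE jbpm OWNER jbpm;"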
Create Quartz tables
Lastly, the Quartz-related tables must be created. The best way to do so is to use the database scripts provided with the Quartz distribution (jBPM uses Quartz 1.8.5); the DB scripts are usually located under QUARTZ_HOME/docs/dbTables.
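For example, assuming the PostgreSQL script in that directory is named tables_postgres.sql (the exact file name can vary between Quartz versions), the tables can be created with:
psql -U jbpm -d jbpm -f QUARTZ_HOME/docs/dbTables/tables_postgres.sql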
Create Quartz definition file
The Quartz configuration used by the jBPM server needs to accommodate the needs of the environment. As this guide shows only a basic setup, it obviously will not cover all needs, but it allows for further improvements.
Here is a sample configuration used in this setup:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=managedDS
org.quartz.jobStore.nonManagedTXDataSource=notManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 20000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.managedDS.jndiURL=jboss/datasources/psjbpmDS
org.quartz.dataSource.notManagedDS.jndiURL=jboss/datasources/quartzNotManagedDS
Configure JBoss AS 7 domain
1. Create JDBC driver module - for this example PostgreSQL
a) go to JBOSS_HOME/modules directory (on EAP JBOSS_HOME/modules/system/layers/base)
b) create module folder org/postgresql/main
c) copy the PostgreSQL driver jar into the module folder (org/postgresql/main) and name it postgresql-jdbc.jar (see the shell sketch after the listing below)
d) create a module.xml file inside the module folder (org/postgresql/main) with the following content:
<module xmlns="urn:jboss:module:1.0" name="org.postgresql">
    <resources>
        <resource-root path="postgresql-jdbc.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
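For reference, steps b) and c) can be sketched as shell commands (the source jar path and version are placeholders for your actual PostgreSQL driver download):
mkdir -p JBOSS_HOME/modules/org/postgresql/main
cp /path/to/postgresql-<version>.jar JBOSS_HOME/modules/org/postgresql/main/postgresql-jdbc.jar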
2. Configure data sources for the jBPM server
a) go to JBOSS_HOME/domain/configuration
b) edit domain.xml file
for simplicity's sake we use the default domain configuration, which uses the "full" profile and defines two server nodes as part of main-server-group
c) locate the profile "full" inside the domain.xml file and add the new data sources
the main data source used by jBPM:
<datasource jndi-name="java:jboss/datasources/psjbpmDS"
pool-name="postgresDS" enabled="true" use-java-context="true">
<connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
<driver>postgres</driver>
<security>
<user-name>jbpm</user-name>
<password>jbpm</password>
</security>
</datasource>
an additional data source for Quartz (non-managed pool):
<datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
<connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
<driver>postgres</driver>
<security>
<user-name>jbpm</user-name>
<password>jbpm</password>
</security>
</datasource>
define the driver used by the data sources:
<driver name="postgres" module="org.postgresql">
<xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
</driver>
3. Configure security domain
a) go to JBOSS_HOME/domain/configuration
b) edit domain.xml file
for simplicity's sake we use the default domain configuration, which uses the "full" profile and defines two server nodes as part of main-server-group
c) locate the profile "full" inside the domain.xml file and add a new security domain for jbpm-console (or kie-wb) - this is just a copy of the "other" security domain defined there by default
<security-domain name="jbpm-console-ng" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
    </authentication>
</security-domain>
For the kie-wb application, simply replace jbpm-console-ng with kie-ide as the name of the security domain.
4. Configure server nodes
a) go to JBOSS_HOME/domain/configuration
b) edit host.xml file
c) locate the servers that belong to "main-server-group" in the host.xml file and add the following system properties:
property name | property value | comments |
org.uberfire.nio.git.dir | /home/jbpm/node[N]/repo | location where the VFS asset repository will be stored for node[N] |
org.quartz.properties | /jbpm/quartz-definition.properties | absolute file path to the Quartz definition properties |
jboss.node.name | nodeOne | unique node name within the cluster (nodeOne, nodeTwo, etc.) |
org.uberfire.cluster.id | jbpm-cluster | name of the Helix cluster |
org.uberfire.cluster.zk | localhost:2181 | location of the Zookeeper server |
org.uberfire.cluster.local.id | nodeOne_12345 | unique id of the Helix cluster node; note that ':' is replaced with '_' |
org.uberfire.cluster.vfs.lock | vfs-repo | name of the resource defined on the Helix cluster |
org.uberfire.nio.git.daemon.port | 9418 | port used by the GIT repo to accept client connections; must be unique for each cluster member |
org.uberfire.nio.git.ssh.port | 8001 | port used by the GIT repo to accept client connections (over ssh); must be unique for each cluster member |
org.uberfire.nio.git.daemon.host | localhost | host used by the GIT repo to accept client connections; when cluster members run on different machines this property must be set to the actual host name instead of localhost, otherwise synchronization won't work |
org.uberfire.nio.git.ssh.host | localhost | host used by the GIT repo to accept client connections (over ssh); when cluster members run on different machines this property must be set to the actual host name instead of localhost, otherwise synchronization won't work |
org.uberfire.metadata.index.dir | /home/jbpm/node[N]/index | location where the index for search will be created (maintained by Apache Lucene) |
org.uberfire.cluster.autostart | false | delays VFS clustering until the application is fully initialized, to avoid conflicts when all cluster members create local clones |
examples for the two nodes:
<system-properties>
    <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
    <property name="org.quartz.properties" value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    <property name="jboss.node.name" value="nodeOne" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="jbpm-cluster" boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeOne_12345" boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
    <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
    <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
    <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>
</system-properties>
<system-properties>
    <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
    <property name="org.quartz.properties" value="/tmp/jbpm/quartz/quartz-db-postgres.properties" boot-time="false"/>
    <property name="jboss.node.name" value="nodeTwo" boot-time="false"/>
    <property name="org.uberfire.cluster.id" value="jbpm-cluster" boot-time="false"/>
    <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
    <property name="org.uberfire.cluster.local.id" value="nodeTwo_12346" boot-time="false"/>
    <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
    <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
    <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
    <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>
</system-properties>
NOTE: since this example runs on a single machine, the host properties for the ssh and git daemons are omitted.
Since repository synchronization is done between the git servers, make sure that the GIT daemons are active (and properly configured with host name and port) on every cluster member.
5. Create user(s) and assign them proper roles on the application server
Add application users
In the previous step a security domain was created so that jbpm-console (or kie-wb) users can be authenticated when logging on. Now it's time to add some users so that they can log on to the application once it is deployed. To do so:
a) go to JBOSS_HOME/bin
b) execute the ./add-user.sh script and follow the instructions on the screen
- use the Application realm, not Management
- when asked for roles, make sure you assign at least:
for jbpm-console: jbpm-console-user
for kie-wb: kie-user
Add as many users as you need; the same goes for roles, but those listed above are required to be authorized to use the web application.
Add a management (application server) user
To be able to manage the application server as a domain, we need to add an administrator user. This is similar to adding the application users, but the realm needs to be Management:
a) go to JBOSS_HOME/bin
b) execute the ./add-user.sh script and follow the instructions on the screen
- use the Management realm, not Application
The application server should now be ready to use, so let's start the domain:
JBOSS_HOME/bin/domain.sh
after a few seconds (the servers are still empty) you should be able to access both server nodes at the following locations (assuming the default domain configuration):
http://localhost:8080
http://localhost:8230
The port offset is configurable in host.xml for a given server; in the default configuration the second server uses a port offset of 150, hence port 8230.
Deploy application - jBPM console (or kie-wb)
Now it's time to prepare and deploy the application, either jbpm-console or kie-wb. By default both applications come with a predefined persistence configuration that uses the ExampleDS from AS7 and an H2 database, so this configuration needs to be altered to use the PostgreSQL database instead.
Required changes in persistence.xml
- change the jta-data-source name to match the one defined on the application server
java:jboss/datasources/psjbpmDS
- change the hibernate dialect to the PostgreSQL one
org.hibernate.dialect.PostgreSQLDialect
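As a sketch, the relevant fragments of persistence.xml would then look as follows (all surrounding elements of the persistence unit are omitted):
<jta-data-source>java:jboss/datasources/psjbpmDS</jta-data-source>
...
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>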
Application built from source
If the application is built from source, you need to edit the persistence.xml file located under:
jbpm-console-ng/jbpm-console-ng-distribution-wars/src/main/jbossas7/WEB-INF/classes/META-INF/
Next, rebuild the jbpm-console-ng-distribution-wars module to prepare the deployable package, which is named:
jbpm-console-ng-jboss-as7.0.war
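Once the war is built, one way to deploy it to the running domain is via the JBoss CLI; a sketch assuming the AS 7 default management controller address and server group (adjust the war path to your build output):
JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9999
deploy /path/to/jbpm-console-ng-jboss-as7.0.war --server-groups=main-server-group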