This section applies specifically to the Zephyr Enterprise Data Center product. The information below helps system administrators configure a new Zephyr Enterprise Data Center instance in a Windows environment.


This section provides step-by-step instructions for configuring the application nodes on Windows.

Add the first node to your Load Balancer (Nginx)


Zephyr Data Center relies on a load balancer to balance traffic between the nodes.
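
For reference, a minimal sketch of an Nginx configuration that fronts the first Zephyr node (the IP address matches the example node addresses used later in this guide; the port, server name, and proxy settings are illustrative assumptions only):

Code Block
languagebash
# minimal illustrative Nginx reverse-proxy configuration for a single Zephyr node
upstream tomcat {
    ip_hash;
    server 192.168.1.2;
}

server {
    listen 80;

    location / {
        proxy_pass http://tomcat;
        # Atmosphere uses WebSockets/long polling, so pass upgrade headers through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}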

If Nginx is used as the load balancer, an additional step must be performed on the Zephyr instance: edit the file <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/web.xml and update the AtmosphereServlet configuration to include an additional parameter, org.atmosphere.cpr.AtmosphereInterceptor, with the details below -

Code Block
languagexml
<init-param>
   <param-name>org.atmosphere.cpr.AtmosphereInterceptor</param-name>
   <param-value>org.atmosphere.interceptor.NginxInterceptor</param-value>
</init-param>

Sample configuration - 

Code Block
languagexml
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
 http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
 version="3.0">
 ...
<!-- new Atmosphere configuration -->
 <servlet>
        <servlet-name>AtmosphereServlet</servlet-name>
        <servlet-class>org.atmosphere.spring.bean.AtmosphereSpringServlet</servlet-class>
        ...
      <init-param>
         <param-name>org.atmosphere.cpr.AtmosphereInterceptor</param-name>
         <param-value>org.atmosphere.interceptor.NginxInterceptor</param-value>
      </init-param>
        ...
    </servlet>

   <servlet-mapping>
      <servlet-name>GenericAttachmentServlet</servlet-name>
      <url-pattern>/upload/document/genericattachment</url-pattern>
   </servlet-mapping>
   ...
 <distributable/>
</web-app>

After adding the node to the load balancer, restart the Zephyr instance and ensure that basic functionality is working by navigating to the instance, logging in, and noting any broken links or malfunctioning functionality.

Be sure to check that the base server URL is configured properly (to the load balancer public URL).

After you install Zephyr Data Center or add a new node to your environment, use the health checklist to check that your instance is configured and operating correctly.

Add a new Zephyr node to the cluster


Step 1:
Install Zephyr on the second node with the following options set - zSkipZephyrSeverStart, zSkipStartupData, zSkipDropData - so that the data already present in the database is not reset. Zephyr recommends that your configuration deviate as little as possible from the first installation to ease the burden of documentation and deployment (e.g. installation paths, users, file permissions, etc.).


    • Sample command for Windows - 

      Code Block
      languagebash
      zephyr_5.0_*.exe -VzSkipZephyrSeverStart=true -VzSkipStartupData=true -VzSkipDropData=true


Step 2:
Ensure that the new host can access the shared data directory (e.g. ensure that you can read the contents of the shared zephyrdata directory and have write access to it)
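
For example, a quick way to confirm read and write access from the new node (Z:\zephyrdata is only a hypothetical mount point; substitute the actual location of your shared zephyrdata directory):

Code Block
languagebash
:: list the shared directory to confirm read access (Z:\zephyrdata is a hypothetical mount point)
dir Z:\zephyrdata
:: create and then remove a scratch file to confirm write access
echo write-test > Z:\zephyrdata\writecheck.tmp
del Z:\zephyrdata\writecheck.tmp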

Step 3: 
Stop the Zephyr instance on all other nodes of the cluster.

Step 4:
Update the file <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/classes/cluster.properties with the node details of the new instance and increase the count by one.

Sample configuration - 

Code Block
languagebash
cluster.node.count=2

cluster.node.2.key=node2
cluster.node.2.url=192.168.1.3


Step 5:
Copy the file <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/classes/cluster.properties from the first node into the <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/classes folder of the new node.
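
For example (the paths below are only placeholders; substitute the actual <ZEPHYR_HOME> locations and whatever share or transfer mechanism you use between the nodes):

Code Block
languagebash
:: copy cluster.properties from the first node (reached here via a hypothetical administrative share) to the new node
copy "\\node1\c$\Zephyr\tomcat\webapps\flex\WEB-INF\classes\cluster.properties" "C:\Zephyr\tomcat\webapps\flex\WEB-INF\classes\"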

Step 6:
Alter the <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/classes/cluster.properties file on the new node to reference the new cluster.key. All keys must be unique among nodes.
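
A minimal sketch of the change on the new node, assuming the node's own key is set through a cluster.key property as this step suggests (verify the exact property name in your cluster.properties file):

Code Block
languagebash
# the new (second) node references its own unique key
cluster.key=node2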

Step 7:
Remove the folder <ZEPHYR_HOME>/zephyrdata and instead link it to the mount point for the shared zephyrdata directory.

Sample command on Linux - 

Code Block
languagebash
ln -sf /data/zephyr/zephyrdata <ZEPHYR_HOME>/zephyrdata
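
On a Windows node (the focus of this guide), a directory link can be created with mklink instead; the paths below are hypothetical placeholders for your actual <ZEPHYR_HOME> and shared-directory locations:

Code Block
languagebash
:: run from an elevated command prompt; C:\Zephyr and \\fileserver\zephyr\zephyrdata are hypothetical paths
mklink /D "C:\Zephyr\zephyrdata" "\\fileserver\zephyr\zephyrdata"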


Step 8:
Edit the <ZEPHYR_HOME>/tomcat/conf/web.xml and include a <distributable /> entry

Sample configuration - 

Code Block
languagexml
<web-app>
	...
<welcome-file-list>
<welcome-file>index.html</welcome-file>
<welcome-file>index.htm</welcome-file>
<welcome-file>index.jsp</welcome-file>
</welcome-file-list>

<distributable/>
</web-app>


Step 9:
Edit the <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/web.xml and include a <distributable /> entry in it as well.

Sample Configuration - 

Code Block
languagexml
<web-app>
...
<!-- Uncomment the following if jdbc.properties secured property is true-->
 <!--security-constraint>
 <web-resource-collection>
 <web-resource-name>Zephyr</web-resource-name>
 <url-pattern>/*</url-pattern>
 </web-resource-collection>
 <user-data-constraint>
 <transport-guarantee>CONFIDENTIAL</transport-guarantee>
 </user-data-constraint>
 </security-constraint-->
 <distributable/>
</web-app>


Step 10:
Enable clustering in Tomcat for session replication by updating <ZEPHYR_HOME>/tomcat/conf/server.xml to include the cluster section shown below.

Sample Configuration - 

Code Block
languagexml
...
...
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
 channelSendOptions="8">

<Manager className="org.apache.catalina.ha.session.DeltaManager"
 expireSessionsOnShutdown="false"
 notifyListenersOnReplication="true"/>

<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
 address="239.255.42.99"
 port="54327"
 frequency="500"
 dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
 address="auto"
 port="4000"
 autoBind="100"
 selectorTimeout="5000"
 maxThreads="6"/>

<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>

<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
 filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
 tempDir="/tmp/war-temp/"
 deployDir="/tmp/war-deploy/"
 watchDir="/tmp/war-listen/"
 watchEnabled="false"/>

<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
...
...


Step 11:
Enable Test step history data clustering by following the steps below:

  1. Connect to the dversion database using your favorite database client
  2. Run the command select * from JACKRABBIT_JOURNAL_GLOBAL_REVISION and make note of the value for REVISION_ID that is fetched.
  3. Insert a row into the JACKRABBIT_JOURNAL_LOCAL_REVISIONS table for the new node, with the REVISION_ID fetched in the previous step and the JOURNAL_ID set to the cluster id (cluster.key) from the cluster.properties file - sample statement: insert into JACKRABBIT_JOURNAL_LOCAL_REVISIONS (JOURNAL_ID,REVISION_ID) values ('node2',9); (steps 2 and 3 are combined in the SQL sketch below).
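
Putting steps 2 and 3 together, a minimal SQL sketch ('node2' and 9 are the example values from above; use the cluster.key and REVISION_ID from your own environment):

Code Block
languagesql
-- step 2: read the current global revision and note the REVISION_ID value
select * from JACKRABBIT_JOURNAL_GLOBAL_REVISION;

-- step 3: register the new node with that revision ('node2' = the node's cluster.key, 9 = the REVISION_ID noted above)
insert into JACKRABBIT_JOURNAL_LOCAL_REVISIONS (JOURNAL_ID, REVISION_ID) values ('node2', 9);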

Step 12:
Edit the jdbc.properties file for Elasticsearch by updating

Sync the configurations of this new node with other nodes in the cluster


Step 1:
Edit the <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/template/hazelcast.xml.tmpl file on all existing nodes of the cluster. Update the tcp-ip section to include the IP details of all nodes of the cluster with port 5701, and update the interfaces section with the IP interface of the current system.

Sample Configuration - 

Code Block
languagexml
<network>
   ...
<join>
      ...
<tcp-ip enabled="true">
<member>192.168.1.2:5701</member>
<member>192.168.1.3:5701</member>
</tcp-ip>
      ...
</join>
<interfaces enabled="true">
<interface>192.168.1.3</interface>
</interfaces>
   ...
</network>


Step 2:
Make changes similar to Step 1 to the file <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/classes/hazelcast-hibernate.xml on all existing nodes of the cluster as well.

Step 3:
Update the file <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/classes/cluster.properties on all existing nodes of the cluster with the node details of the new instance and increase the node count by one.

Sample Configuration - 

Code Block
languagebash
cluster.node.count=2

cluster.node.2.key=node2
cluster.node.2.url=192.168.1.3


Connect this new node to the load balancer


If Nginx is used as the load balancer, an additional step must be performed on the Zephyr instance: edit the file <ZEPHYR_HOME>/tomcat/webapps/flex/WEB-INF/web.xml and update the AtmosphereServlet configuration to include an additional parameter, org.atmosphere.cpr.AtmosphereInterceptor, with the details below -

Code Block
languagexml
<init-param>
   <param-name>org.atmosphere.cpr.AtmosphereInterceptor</param-name>
   <param-value>org.atmosphere.interceptor.NginxInterceptor</param-value>
</init-param>

Sample Configuration - 

Code Block
languagexml
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
 version="3.0">
...
<!-- new Atmosphere configuration -->
 <servlet>
<servlet-name>AtmosphereServlet</servlet-name>
<servlet-class>org.atmosphere.spring.bean.AtmosphereSpringServlet</servlet-class>
...
<init-param>
<param-name>org.atmosphere.cpr.AtmosphereInterceptor</param-name>
<param-value>org.atmosphere.interceptor.NginxInterceptor</param-value>
</init-param>
...
</servlet>

<servlet-mapping>
<servlet-name>GenericAttachmentServlet</servlet-name>
<url-pattern>/upload/document/genericattachment</url-pattern>
</servlet-mapping>
...
 <distributable/>
</web-app>


Also update the Nginx configuration to include the new node for load balancing, then reload the Nginx configuration or restart the Nginx server - 

Code Block
languagebash
upstream tomcat {
 	ip_hash;
 	server 192.168.1.2;
	#new instance
	server 192.168.1.3;
 }
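
For example, after editing the configuration you can validate it and reload Nginx without a full restart:

Code Block
languagebash
# validate the updated configuration, then reload the running Nginx processes
nginx -t && nginx -s reload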

Verify that the new node is in the cluster and receiving requests: check the logs on each node to ensure that both are receiving traffic, and confirm that updates made on one node are visible on the other.

Repeat the steps above (adding the node to the cluster, syncing its configuration, and connecting it to the load balancer) for each new node added to the cluster.

Security

Ensure that only permitted cluster nodes are allowed to connect to a JIRA Data Center instance's Ehcache RMI port (port 40001 by default) through the use of a firewall and/or network segregation. Not restricting access to the Ehcache RMI port could result in the compromise of a JIRA Data Center instance.
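
As an illustration, such a restriction could be put in place on a Windows node with a scoped firewall rule; the IP addresses below are the example node addresses used earlier in this guide and must be replaced with your actual cluster node IPs:

Code Block
languagebash
:: allow only the other cluster nodes to reach the Ehcache RMI port
:: (with the Windows Firewall default inbound action set to block, all other hosts are rejected)
netsh advfirewall firewall add rule name="Ehcache RMI - cluster nodes only" dir=in action=allow protocol=TCP localport=40001 remoteip=192.168.1.2,192.168.1.3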

Cluster.properties file parameters

You can set the following parameters in the cluster.properties file:

Parameter | Required | Description/value

jira.node.id | Yes | This unique ID must match the username and the BalancerMember entry in the Apache config.

jira.shared.home | Yes | The location of the shared home directory for all JIRA nodes.

ehcache.peer.discovery | No | Describes how nodes find each other:
  • default - JIRA will automatically discover nodes. Recommended.
  • automatic - Uses Ehcache's multicast discovery. This is the historical default method used by Ehcache, but it can be problematic for customers to configure and is no longer recommended by Atlassian for use with JIRA clustering.

ehcache.listener.hostName | No | The hostname of the current node for cache communication. JIRA Data Center will resolve this internally if the parameter isn't set. If you have problems resolving the hostname, you can set this parameter.

ehcache.listener.port | No | The port the node listens on for cache communication (default = 40001). If multiple nodes are on the same host, or this port is not available, you might need to set this manually.

ehcache.listener.socketTimeoutMillis | No | By default this is set to the Ehcache default.

If you set ehcache.peer.discovery = automatic then you need to set the following parameters:

  • ehcache.multicast.address

  • ehcache.multicast.port

  • ehcache.multicast.timeToLive

  • ehcache.multicast.hostName

Refer to the Ehcache documentation for more information on these parameters.
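
For reference, a minimal cluster.properties sketch using the parameters described above (all values are illustrative placeholders only):

Code Block
languagebash
# required
jira.node.id = node1
jira.shared.home = /data/jira/sharedhome

# optional Ehcache settings (example values)
ehcache.peer.discovery = default
ehcache.listener.port = 40001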

