
About

Zephyr Data Center is a solution for organizing testing across a cluster of machines. You deploy Zephyr on multiple nodes, set up an external database, and create a shared folder – all of this provides high availability and performance at scale for the users working with Zephyr. If a node fails for some reason, requests are redirected to the other nodes.

The deployment components include the following: 

  • Load Balancer – balances the load across the cluster nodes.

  • Elasticsearch – a search engine deployed on the server node as a microservice or as a separate process in the virtual private network.

  • Shared database – a database used by all the nodes. Zephyr supports MySQL, Microsoft SQL Server, and Oracle Database.

  • Shared directory – a shared folder where attachments are stored. In a cluster environment, this folder must be shared with write permissions. Network-attached storage (NAS) and similar devices are supported.

We recommend placing this clustered deployment behind a firewall.
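This page does not prescribe a specific load balancer product. As an illustration only, a minimal nginx configuration for two nodes might look like this (the IP addresses and port are assumptions – replace them with your own node addresses and Zephyr port):

upstream zephyr_cluster {
    # Cluster nodes (example addresses)
    server 192.168.0.101:80;
    server 192.168.0.102:80;
}

server {
    listen 80;

    location / {
        # nginx forwards each request to a live node and skips failed ones
        proxy_pass http://zephyr_cluster;
        proxy_set_header Host $host;
    }
}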

Requirements

Installation procedure

To install Zephyr Enterprise on cluster nodes, you need to perform the following:

1. Install a database

You can install one of the following database management systems on any computer where Zephyr is not installed: MySQL, Microsoft SQL Server, or Oracle Database.

You must have a connector JAR file for your MySQL, Microsoft SQL Server, or Oracle database.
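For example, for MySQL you would download the MySQL Connector/J JAR and make it available to Zephyr's Tomcat on each node. The destination folder below is an assumption based on a default Linux installation – check where your installation keeps its libraries:

cp mysql-connector-java-<version>.jar /opt/zephyr/tomcat/lib/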

2. Install Elasticsearch

Install Elasticsearch 5.5.0 (a search engine) on any computer where Zephyr is not installed.
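To verify that Elasticsearch is running and reachable, you can query its HTTP endpoint from any node (9200 is the default port; replace <elasticsearch-host> with your server's address):

curl http://<elasticsearch-host>:9200

The response is a small JSON document that includes the version number; it should report 5.5.0.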

3. Set up a shared directory

 On Windows

Share a folder by using NAS (network-attached storage) or a similar device:

1. Create a shared folder with read/write access in NAS.

2. On all the nodes, open the Program Files\Zephyr\tomcat\webapps\flex\WEB-INF\classes\jdbc.properties file, find the line

ZEPHYR_DATA = C:/Program Files/Zephyr/zephyrdata

and replace the C:/Program Files/Zephyr part with the network path of the NAS shared folder. For example:

ZEPHYR_DATA = //192.168.11.141/zephyrdata

 On Linux

Install the NFS (Network File System) server and client on CentOS 7. To do that:

On the server side:

1. Install the required NFS packages by running the following command:

 yum install nfs-utils

2. Create the zephyrdata directory and allow access to it:

mkdir /home/zephyrdata
chmod -R 777 /home/zephyrdata

If you installed Zephyr as a non-root user, install the NFS server and client as a root user and create the zephyrdata directory as a non-root user.

3. Enable the following services so that they start on boot, and then start them:

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

4. Open exports for editing –

sudo gedit /etc/exports

– and type the following:

/home/zephyrdata 192.168.0.101(rw,sync,no_root_squash,no_all_squash)
/home/zephyrdata 192.168.0.102(rw,sync,no_root_squash,no_all_squash)

192.168.0.101 and 192.168.0.102 are the IP addresses of the clients.

5. Restart the NFS service by running the following command:

systemctl restart nfs-server
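You can verify that the directory is exported by running the following command (showmount is part of nfs-utils):

showmount -e localhost

The output should list /home/zephyrdata together with the allowed client addresses.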

6. Allow the NFS service in the public zone of the CentOS 7 firewall:

firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload

The NFS server is ready to work.

On the client side:

1. Install the required NFS packages by running the following command:

yum install nfs-utils

2. Enable the following services so that they start on boot, and then start them:

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

3. Create a mount point and mount the NFS share on the client machine by running the commands below:

mkdir -p /home/node1/zephyrdata
mount -t nfs 192.168.0.100:/home/zephyrdata /home/node1/zephyrdata

192.168.0.100 is the IP address of the server.
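To make the mount persist across reboots, you can also add it to /etc/fstab – this is standard NFS client practice rather than a Zephyr requirement:

192.168.0.100:/home/zephyrdata  /home/node1/zephyrdata  nfs  defaults  0 0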

4. On all the nodes, change the ZEPHYR_DATA path to the mounted path in the /opt/zephyr/tomcat/webapps/flex/WEB-INF/classes/jdbc.properties file.

For example:

ZEPHYR_DATA = /home/node1/zephyrdata

5. You are now connected to the NFS share. You can verify the mount by running the following command:

df -kh
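The NFS share should appear in the output with the server address as its source. To check it specifically, you can run:

mount | grep zephyrdata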

4. Install Zephyr Enterprise on your nodes

When installing Zephyr on your nodes, keep in mind the following:

  • You can install Zephyr on any drive other than drive C.

  • Before installing Zephyr, you need to make sure the ITCC and Dversion databases are not installed on the node.

  • Use the same Zephyr version and build on all the nodes.

  • Zephyr installed on all the nodes must have the same license.

Installation steps:

Install Zephyr Enterprise on the nodes. The installation steps on the first node differ from the steps you perform on the other nodes.

1. To install Zephyr Enterprise on the first node:

 On Windows

Follow the installation steps for Windows.

During the installation, select the Data Center deployment option.

 On Linux

Follow the installation steps for Linux.

During the installation, choose Data Center deployment as the deployment type.

2. After you install Zephyr, stop the node.

3. Install Zephyr Enterprise on another node. The way you do this depends on your operating system:

 On Windows

Open Command Prompt as an Admin and run the following command:

zephyr_6.6_xxx_setup_iRev_xxx.exe -VzSkipStartupData=true

 On Linux

Open the terminal and run the following command:

sh zephyr_6.6_xxx_setup_iRev_xxx.sh -VzSkipStartupData=true

4. Once the installation is complete, stop the node.

You can use any number of nodes. If you want to add more nodes, repeat steps 3-4.

5. After you install Zephyr Enterprise on all the nodes, you need to modify the following files:

 Zephyr\tomcat\conf\server.xml

In the file, find the following line:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

and replace it with the following:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.22"
                    port="45564"
                    frequency="500"
                    dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="5000"
                  maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/"
              deployDir="/tmp/war-deploy/"
              watchDir="/tmp/war-listen/"
              watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

 zephyr/tomcat/webapps/flex/WEB-INF/template/hazelcast.xml.tmpl

1. Find the line –

<member>127.0.0.1:5701</member>

– and add the IPs of all the nodes instead of 127.0.0.1. For example:

<member>172.17.18.141:5701</member>
<member>172.17.18.157:5701</member>
<member>172.17.18.223:5701</member>
<member>172.17.18.201:5701</member>

2. Find the line –

<interface>127.0.0.1</interface>

– and replace the last octet of the IP address with an asterisk so that it matches your nodes' subnet. For example:

<interface>172.17.18.*</interface>

 zephyr/tomcat/webapps/flex/WEB-INF/classes/hazelcast-hibernate.xml

1. In this file, find the line –

<member>127.0.0.1:5702</member>

– and add the IP addresses of all the nodes instead of 127.0.0.1. For example:

<member>172.17.18.141:5702</member>
<member>172.17.18.157:5702</member>
<member>172.17.18.223:5702</member>
<member>172.17.18.201:5702</member>

2. Find the line –

<interface>127.0.0.1</interface>

– and replace the last octet of the IP address with an asterisk so that it matches your nodes' subnet. For example:

<interface>172.17.18.*</interface>

 zephyr/tomcat/webapps/flex/html5/index.html

Find the line –

<input type="hidden" id="notificationConType" value="websocket" />

– and replace websocket with sse:

<input type="hidden" id="notificationConType" value="sse" />

 zephyr/tomcat/conf/web.xml

In the file, add the <distributable /> tag inside the web-app element, for example right before the closing </web-app> tag:
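The result might look like this (the existing contents of the file are abbreviated):

<web-app>
    <!-- ... existing contents of the file ... -->
    <distributable />
</web-app>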

6. On all the nodes, copy the file cluster.properties.tmpl from the zephyr/tomcat/webapps/flex/WEB-INF/template folder to the zephyr/tomcat/webapps/flex/WEB-INF/classes folder and rename it to cluster.properties, as shown in the example below.
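On Linux, for example (the installation folder /opt/zephyr is an assumption – use your actual path):

cd /opt/zephyr/tomcat/webapps/flex/WEB-INF
cp template/cluster.properties.tmpl classes/cluster.properties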

7. Make the following changes in the cluster.properties file:

  • Remove the following lines:

#unique identifier for the cluster node
cluster.aws.enable=false
HAZELCAST_PASSWORD=huser
HAZELCAST_USERNAME=hpass

  • Update the following information:

cluster.key=node1 (this should be a unique name of the node)
cluster.node.1.key=node1
cluster.node.1.url=172.17.18.141 (the IP address of node 1)
cluster.node.count=4
cluster.node.2.key=node2
cluster.node.2.url=172.17.18.157 (the IP address of node 2)
cluster.node.3.key=node3
cluster.node.3.url=172.17.18.223 (the IP address of node 3)
cluster.node.4.key=node4
cluster.node.4.url=172.17.18.201 (the IP address of node 4)

Example (the cluster.properties file on node 2):

cluster.key=node2 (this should be a unique name of the node)
cluster.node.1.key=node1
cluster.node.1.url=172.17.18.141 (the IP address of node 1)
cluster.node.count=4
cluster.node.2.key=node2
cluster.node.2.url=172.17.18.157 (the IP address of node 2)
cluster.node.3.key=node3
cluster.node.3.url=172.17.18.223 (the IP address of node 3)
cluster.node.4.key=node4
cluster.node.4.url=172.17.18.201 (the IP address of node 4)

The cluster.key value must be unique in each node's file.

8. Start the first node where you installed Zephyr. After Zephyr launches, the HAZELCAST_USERNAME and HAZELCAST_PASSWORD values will be generated in the zephyr/tomcat/webapps/flex/WEB-INF/classes/cluster.properties file.

9. Copy the HAZELCAST_USERNAME and HAZELCAST_PASSWORD values and paste them to the zephyr/tomcat/webapps/flex/WEB-INF/classes/cluster.properties file on the other nodes.
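For example, on Linux you can print the generated values on the first node by running the command below from the Zephyr installation folder, and then paste them into the same file on each of the other nodes:

grep HAZELCAST_ tomcat/webapps/flex/WEB-INF/classes/cluster.properties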

Now you can start the remaining nodes and begin using the cluster.

See Also

Upgrading Zephyr Enterprise

Support and Troubleshooting
