Starting October 11, 2024 (Zephyr Enterprise 8.2), the Zephyr Enterprise documentation moved from Atlassian to a dedicated, standalone Zephyr Enterprise documentation site. Please see: https://support.smartbear.com/zephyr-enterprise/docs/en/zephyr-enterprise/zephyr-installation-and-upgrade-guides/zephyr-on-premise-production-installation/set-up-zephyr-data-center-cluster.html
About
Zephyr Data Center is a Zephyr Enterprise solution with horizontal scalability support for high availability, scalability, and performance. It involves deploying Zephyr Enterprise on multiple nodes, which are stateless engines where all the business logic resides, along with an external database and a shared folder as shown below:
...
Load Balancer - balances the load across the cluster nodes.
Elasticsearch – a search engine deployed on the server node as a micro service or as a separate process in the virtual private network.
Shared database – a shared database used by the nodes. Zephyr supports MySQL, MS SQL Server, and Oracle Database.
Shared directory – a shared folder where attachments are stored. In a cluster environment, this folder must be shared with write permissions. Network-attached storage (NAS) and similar devices are supported.
RabbitMQ 3.12.10 – a robust messaging and streaming broker that is easy to deploy in cloud environments and on-premises setups. Installing RabbitMQ is currently optional. Java 17 is required for RabbitMQ 3.12.10.
ZE-Services – Installing ZE-Services is currently optional. Java 17 is required for ZE-Services. The services are:
ZE-Webhook Service – accepts incoming events from Jira and enqueues them to the message broker for further processing.
ZE-Consumer Service – picks up the enqueued events from the message broker and updates Zephyr Enterprise with the incoming data.
ZE-AuditService – acts as the incoming endpoint for the audit logs generated during webhook event processing and enqueues them to the message broker.
ZE-AuditProcessor – reads the enqueued audit logs from the message broker and inserts them into Zephyr Enterprise.
Note:
It is recommended to place this clustered deployment behind a firewall.
RabbitMQ and ZE-Services should be installed on a separate server because they require Java 17.
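Since both optional components depend on Java 17, it can help to verify the Java version before installing them. A minimal sketch of such a check (the parsing function is illustrative, not part of Zephyr; feed it the first line of `java -version 2>&1` from the target server):

```shell
# Sketch: check a "java -version" banner for Java 17 before installing
# RabbitMQ 3.12.10 or ZE-Services. Run `java -version 2>&1 | head -n 1`
# on the server and pass the resulting string to this function.
is_java17() {
  case "$1" in
    *'version "17'*) echo "Java 17 found" ;;
    *) echo "Java 17 required" ;;
  esac
}

is_java17 'openjdk version "17.0.9" 2023-10-17'
```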
Recommended Configurations
...
Configuration 1: Up to 100 concurrent users
Component | Number of Nodes | vCPU | System Memory | Application Memory | Storage
---|---|---|---|---|---
Application | 1 (single node) | 4 | 16 GB | 8 GB | 350 GB*
Elasticsearch | | | | |
Database | | | | |
* Depending on the database size.
Configuration 2: 100 - 500 concurrent users
Component | Number of Nodes | vCPU | System Memory | Application Memory | Storage
---|---|---|---|---|---
Application | 2 | 4 | 16 GB | Min: 8 GB | 350 GB
Elasticsearch | 1 | 4 | 16 GB | 8 GB | 350 GB
Database | 1 | 8 | 16 GB | 16 GB | 350 GB*
Load Balancer | A load balancer to distribute load among the application nodes | | | |
Shared directory | Shared among the application nodes to store attachment files for Requirements, Test Cases, and Executions | | | |
* Depending on the database size.
Configuration 3: 500 - 1500 concurrent users
Recommended OS: Linux
Component | Number of Nodes | vCPU | System Memory | Application Memory | Storage
---|---|---|---|---|---
Application | 4 | Min: 4 | 16 GB | Min: 8 GB | 350 GB
Elasticsearch | 3 | 4 | 16 GB | 8 GB | 350 GB
Database | 1 | Min: 16 | 32 GB | 16 GB | 350 GB*
Load Balancer | A load balancer to distribute load among the application nodes | | | |
Shared directory | Shared among the application nodes to store attachment files for Requirements, Test Cases, and Executions | | | |
* Depending on the database size.
Requirements
The same Java version must be installed on all the nodes.
The same time zone must be set on all the nodes and the current time must be synchronized (use the Network Time Protocol daemon (ntpd) or a similar service).
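Both requirements above can be spot-checked from a shell on each node; a minimal sketch (comparing the values across nodes, for example over SSH, is not shown):

```shell
# Sketch: per-node checks for the time-zone and clock-sync requirements.
# Run on every node and compare the outputs.
date +%Z      # time zone abbreviation - must be identical on all nodes
date -u +%s   # UTC epoch seconds - should agree across nodes within NTP tolerance
```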
...
To install Zephyr Enterprise on cluster nodes, you need to perform the following steps:
1.
...
MySQL (make sure the ITCC and Dversion databases are not installed)
Note: You must have a connector JAR file for your MySQL, Microsoft SQL Server, or Oracle database.
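Staging the connector JAR can be sketched as below. The install path `./zephyr-demo` and the driver file name are stand-ins, and `tomcat/lib` is a common Tomcat location rather than something this guide specifies; confirm the correct directory against your Zephyr installation:

```shell
# Sketch: place the JDBC connector JAR where Tomcat can load it.
# ZEPHYR_HOME and the JAR name are stand-ins for this illustration.
ZEPHYR_HOME=./zephyr-demo
mkdir -p "$ZEPHYR_HOME/tomcat/lib"
touch mysql-connector-j-8.0.33.jar            # stand-in for the downloaded driver
cp mysql-connector-j-8.0.33.jar "$ZEPHYR_HOME/tomcat/lib/"
ls "$ZEPHYR_HOME/tomcat/lib"
```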
2. Install Elasticsearch
Install Elasticsearch 6.8.1 (a search engine) on a separate node where Zephyr is not installed, and configure it for your cluster environment.
3. Install Zephyr Enterprise
...
on your
...
Install Zephyr on any drive other than drive C: to avoid permission issues, as administrator permissions may be required to work with this drive.
...
Use the same Zephyr version and build on all the nodes.
...
nodes
...
Installation steps:
Install Zephyr Enterprise on nodes. The installation steps on the first node differ from the steps you perform on the other nodes.
1. To install Zephyr Enterprise on the first node:
On Windows:
Follow the installation steps for Windows described in the Installing Zephyr Server section.
On Linux:
Follow the installation steps for Linux described in the Install Zephyr section.
2. After you install Zephyr, stop the node.
3. Install Zephyr Enterprise on another node. The way you do this depends on your operating system:
...
On Linux:
Open the terminal and run the following command:
...
On Windows:
Open Command Prompt as an administrator and run the following command:
zephyr_6.6_xxx_setup_iRev_xxx.exe -VzSkipStartupData=true
4.
...
...
Note: Write down the IP address of each node – you will need them in step 7.
5. After you install Zephyr Enterprise on all the nodes, you need to modify the following files on each node:
In the file, find the following line:
and replace it with the following:
1. Find the line –
– and add the IP addresses of all the nodes instead of 127.0.0.1. For example:
2. Find the line –
– and replace it with an IP address that has an asterisk instead of the last component. For example:
1. In this file, find the line –
– and add the IP addresses of all the nodes instead of 127.0.0.1. For example:
2. Find the line –
– and replace it with an IP address that has an asterisk instead of the last component. For example:
Find the line –
– and replace websocket with sse:
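The websocket-to-sse replacement can be scripted with sed. In this sketch the file name and property line are stand-ins, since the original does not show which file is being edited:

```shell
# Sketch: switch the transport value from websocket to sse in place.
# The file and its contents are stand-ins for this illustration.
f=./transport.properties
echo 'transport=websocket' > "$f"   # stand-in line matching the pattern described above
sed -i 's/websocket/sse/' "$f"
cat "$f"
```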
In the file, before the web-app tag, add the <distributable /> tag:
...
7. Make the following changes in the cluster.properties file:
Remove the following lines:
#unique identifier for the cluster node
cluster.aws.enable=false
HAZELCAST_PASSWORD=huser
HAZELCAST_USERNAME=hpass
Update the following information:
cluster.key=node1 (this should be the unique name of the node)
cluster.node.1.key=node1
cluster.node.1.url=172.17.18.141 (the IP address of node 1)
cluster.node.count=4
cluster.node.2.key=node2
cluster.node.2.url=172.17.18.157 (the IP address of node 2)
cluster.node.3.key=node3
cluster.node.3.url=172.17.18.223 (the IP address of node 3)
cluster.node.4.key=node4
cluster.node.4.url=172.17.18.201 (the IP address of node 4)
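The per-node entries above follow a simple repeating pattern; the following sketch generates them for the example IP addresses (it only prints the lines, it does not write the file):

```shell
# Sketch: emit cluster.node.* entries for the example 4-node cluster.
# The IP addresses are the example values used in this guide.
emit_cluster_props() {
  echo "cluster.node.count=$#"
  i=1
  for ip in "$@"; do
    echo "cluster.node.$i.key=node$i"
    echo "cluster.node.$i.url=$ip"
    i=$((i + 1))
  done
}

emit_cluster_props 172.17.18.141 172.17.18.157 172.17.18.223 172.17.18.201
```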
...
Example:
cluster.key=node2 (this should be the unique name of the node)
cluster.node.1.key=node1
cluster.node.1.url=172.17.18.141 (the IP address of node 1)
cluster.node.count=4
cluster.node.2.key=node2
cluster.node.2.url=172.17.18.157 (the IP address of node 2)
cluster.node.3.key=node3
cluster.node.3.url=172.17.18.223 (the IP address of node 3)
cluster.node.4.key=node4
cluster.node.4.url=172.17.18.201 (the IP address of node 4)
8. Start the first node where you installed Zephyr by using the hostname or the IP address of the machine. Zephyr will start on it. After Zephyr is launched, the HAZELCAST_USERNAME and HAZELCAST_PASSWORD values will be generated in the zephyr/tomcat/webapps/flex/WEB-INF/classes/cluster.properties file.
9. Copy the HAZELCAST_USERNAME and HAZELCAST_PASSWORD values and paste them to the zephyr/tomcat/webapps/flex/WEB-INF/classes/cluster.properties file on the other nodes.
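Extracting the two generated values on the first node can be sketched as below. A stand-in file with placeholder values replaces the real zephyr/tomcat/webapps/flex/WEB-INF/classes/cluster.properties here; the real values are generated when Zephyr starts on node 1:

```shell
# Sketch: pull the generated Hazelcast credentials from node 1's file so they
# can be copied to the other nodes. The file below is a stand-in.
props=./cluster.properties
printf 'cluster.key=node1\nHAZELCAST_USERNAME=generated-user\nHAZELCAST_PASSWORD=generated-pass\n' > "$props"
grep -E '^HAZELCAST_(USERNAME|PASSWORD)=' "$props"
```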
...
Set up
...
On Windows
Share a folder by using NAS (network-attached storage) or a similar device:
...
2. On all the nodes, open the Program Files\Zephyr\tomcat\webapps\flex\WEB-INF\classes\jdbc.properties file, find the line
ZEPHYR_DATA = C:/Program Files/Zephyr/zephyrdata
and replace the path with the path to the shared folder. For example:
ZEPHYR_DATA=//192.168.11.141/zephyrdata
...
On Linux
Install the NFS (Network File System) server and client on CentOS 7. To do that:
On the server side:
1. Install the required NFS packages by running the following command:
yum install nfs-utils
2. Create the zephyrdata directory and allow access to it:
mkdir /home/zephyrdata
chmod -R 777 /home/zephyrdata
...
3. Enable the following services at boot and start them:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
4. Open /etc/exports for editing –
sudo gedit /etc/exports
– and type the following:
...
Now you can use your nodes.
See Also
...
...
Note: 192.168.0.101 and 192.168.0.102 are the IP addresses of the clients.
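The export entries themselves are not shown above. For the two client IP addresses in the note, a typical /etc/exports pair might look like the following; the mount options (rw, sync, no_root_squash) are assumptions, not from the original, so adjust them for your environment:

```
/home/zephyrdata 192.168.0.101(rw,sync,no_root_squash)
/home/zephyrdata 192.168.0.102(rw,sync,no_root_squash)
```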
5. Start the NFS service by running the following command:
systemctl restart nfs-server
6. Allow the NFS service in the CentOS 7 firewall-cmd public zone and reload the firewall:
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload
The NFS server is ready to work.
On the client side:
1. Install the required NFS packages by running the following command:
yum install nfs-utils
2. Enable the following services at boot and start them:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
3. Mount the NFS share on the client machine by running the command below:
mount -t nfs 192.168.0.100:/home/zephyrdata /home/node1/zephyrdata
Note: 192.168.0.100 is the IP address of the server.
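The mount from step 3 does not survive a reboot on its own. Persisting it via fstab is not covered by the original, so the following is an addition: a sketch written against a scratch file, where on a real client you would append the same line to /etc/fstab:

```shell
# Sketch: an fstab entry that re-mounts the NFS share at boot.
# Written to a scratch file here; append the same line to /etc/fstab
# on the real client (paths match the mount command in step 3).
fstab=./fstab.example
echo '192.168.0.100:/home/zephyrdata /home/node1/zephyrdata nfs defaults 0 0' >> "$fstab"
cat "$fstab"
```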
4. Change the ZEPHYR_DATA path to the mounted path for all the nodes in the
/opt/Zephyr/tomcat/webapps/flex/WEB-INF/classes/jdbc.properties file.
For example:
ZEPHYR_DATA = /home/node1/zephyrdata
5. You are now connected to the NFS share. You can verify it by running the following command:
df -kh