Open-E JovianDSS Storage and RAID Calculator

This calculator helps you find the exact Open-E JovianDSS license required for your storage setup, based on your individual specification. The Open-E JovianDSS Storage and RAID Calculator is a tool that enables setting up configurations that are proven to be secure and efficient and that are recommended by Open-E.

Please note that despite Open-E JovianDSS's flexibility, some unverified configurations might result in low efficiency and thus lower data security. To ensure your data security, adjust your configuration to match the type of data you store.

You are viewing the calculation in read-only mode. To make changes, use the button and activate the tool.

Activate edit mode
Select system type / architecture

Single node

Simple architecture with one server

Drives can be SAS or SATA. When using JBODs, SAS JBODs and SAS cables are required.

Shared storage HA Cluster

Single storage shared between the nodes

Common storage (internal, JBODs, JBOFs, etc.); both nodes are directly connected to all storage devices in the cluster at the same time. When using JBODs, SAS JBODs and SAS cables are required.

Non-shared storage HA Cluster

Each node has its own storage.

Each node has direct access only to its own storage devices. Nodes communicate with each other to access their storage counterparts. When using JBODs, SAS JBODs and SAS cables are required.

Only "2-way Mirror" and "4-way Mirror" redundancy types are allowed for "Non-shared storage HA clusters". To change the system type to the "Non-shared storage HA Cluster", ensure that all pools have a suitable redundancy type and make the calculation again.

Too many pools to change the system type

Too many pools exist in the configuration. The maximum number of supported pools in an HA Cluster is 3. To change the system type to "Non-shared storage HA Cluster", delete the redundant pools.


Pool-0


Calculation parameters

Please fill out the fields below.

based on required storage capacity

based on number of disks and data groups

TiB
TB

Help me choose

Usable data storage capacity: 0 TiB

Total disks in data groups: 0 disks

Single disk capacity: 0 TB

Redundancy level: RAID-0

Number of data groups: 0

Disks in data group: 0

Detailed calculations
Pool storage characteristics

Usable data storage capacity: 0 TiB

Total disks in data groups: 0 disks

Number of data groups: 0 groups

Disks in data group: 0 disks

Disk groups layout:

Data disk

Mirror or parity disk

Detailed storage calculations

What each value means

Total storage capacity: 0.00 TiB

(0 disks x 0TB = 0.00TB = 0.00TiB)

Storage capacity after RAID is applied: 0.00 TiB

(0 disks x 0TB = 0.00TB = 0.00TiB)

Usable data storage capacity: 0.00 TiB

(0.00TiB x 0.9 = 0.00TiB)

Expected annual pool reliability

The expected portion of a year during which the pool works reliably, given the configuration and the probability of disk failure.

00.0%

Not recommended! Storage solutions with expected annual reliability below 99.0% should not be considered for any production deployment.

Pool reliability parameters

Mean Time To Recovery (MTTR) in days: 0

Mean Time Between Failures (MTBF) in hours: 0

Change parameters

Pool performance index rating

Read performance index rating:

(scale: 1 to 10)

0.0 x single disk

Write performance index rating:

(scale: 1 to 10)

0.0 x single disk

Space efficiency

The ratio shows the percentage of the total pool capacity that is reserved for data.

00%

Pool capacity reserved for data

0.00TiB x 100% / 0.00TiB = 0%

Non-data groups

Non-data groups do not affect the size of the licensed storage capacity, regardless of their number and capacity.

Write log disks: 0x 0GB

Read cache disks: 0x 0GB

Special devices disks: 0x 0TB

Deduplication disks: 0x 0TB

Spare disks: 0x 0TB

Total number of non-data disks: 0

Edit non-data groups

Select data group redundancy level

Please choose redundancy type.

NO REDUNDANCY

1 disk per group

There are no parity disks. The total capacity equals the capacity of all disks. If one disk fails, all data is lost.

Suitability for mission critical solutions:

NO REDUNDANCY

This redundancy level is not allowed for selected system architecture.

A group consists of only a SINGLE disk. Within the pool, this configuration behaves like a regular RAID-0.

The "No redundancy" configuration DOES NOT accept any disk failures. This configuration should not be used for mission critical applications at all!

It is also recommended not to exceed 8 SINGLE disks in the pool, because a single disk failure results in the destruction of the whole pool. The chances of suffering a disk failure increase with the number of disks in the pool.

Important
The pool performance with a SINGLE drive in each group is the highest possible and increases with the number of groups (disks) in the pool. For mission critical applications, it is recommended to use RAID-Z2, RAID-Z3, or a 3-way mirror instead of "No redundancy".

This configuration can be used with Hardware RAID volumes, where redundancy is preserved at the hardware level.

2-WAY MIRROR

2 disks per group

1 out of 2 disks is a mirror disk. 1 disk in the data group may fail. The total capacity equals the capacity of 1 disk.

Suitability for mission critical solutions:

2-WAY MIRROR

The chances of suffering multiple disk failures increase with the number of MIRRORs in the Pool.

The 2-WAY MIRROR accepts only a single disk failure per MIRROR group.
MIRRORs can be used for mission critical applications, but it is recommended not to exceed 12 MIRRORs in the Pool and to avoid HDDs bigger than 4TB (recommended up to 12*2=24 disks for mission critical applications and 24*2=48 disks for non-mission critical applications in the pool).

Note, the pool performance increases with the number of MIRRORs in the pool. For mission critical applications using disks bigger than 4TB or more than 12 groups, it is recommended to use 3-way MIRRORs, RAID-Z2, or RAID-Z3.

3-WAY MIRROR

3 disks per group

2 out of 3 disks are mirror disks. 2 disks in the data group may fail. The total capacity equals the capacity of 1 disk.

Suitability for mission critical solutions:

3-WAY MIRROR

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of MIRRORs in the Pool.

The 3-WAY MIRROR accepts up to two disk failures per 3-WAY MIRROR group.
3-WAY MIRRORs can be used for mission critical applications, but it is recommended not to exceed 16 MIRRORs in the Pool and to avoid HDDs bigger than 10TB (recommended up to 16*3=48 disks for mission critical applications and 24*3=72 disks for non-mission critical applications in the pool).

Note, the pool performance increases with the number of MIRRORs in the pool. For mission critical applications using disks bigger than 10TB, it is recommended to use RAID-Z3.

4-WAY MIRROR

4 disks per group

3 out of 4 disks are mirror disks. 3 disks in the data group may fail. The total capacity equals the capacity of 1 disk.

Suitability for mission critical solutions:

4-WAY MIRROR

The chances of suffering multiple disk failures increase with the number of MIRRORs in the Pool.

The 4-WAY MIRROR accepts up to three disk failures per MIRROR group.
The 4-WAY MIRROR is recommended for a METRO Cluster that can be used for mission-critical applications.

It is also recommended not to exceed 24 4-WAY MIRROR groups in the Pool, because a single group failure results in the destruction of the whole Pool (recommended up to 24*4=96 disks for mission-critical applications in the pool). HDDs bigger than 16TB should be avoided.

Note, the pool performance increases with the number of MIRRORs in the pool.

RAID-Z1

3-8 disks in a group

1 disk in a data group may fail. The total capacity equals the sum of all disks minus the capacity of 1 disk.

Suitability for mission critical solutions:

RAID-Z1

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z1 group.

The RAID-Z1 accepts only a single disk failure per RAID-Z1 group.
RAID-Z1 can be used for NON-mission critical applications; it is recommended not to exceed 8 disks per group and to avoid HDDs bigger than 4TB.

It is also recommended not to exceed 8 RAID-Z1 groups in the Pool, because a single group failure results in the destruction of the whole Pool (recommended up to 8*8=64 disks for non-mission critical applications in the pool).

Note, the pool performance of 2 * RAID-Z1 with 4 disks each is double that of a single RAID-Z1 with 8 disks. For mission critical applications, it is recommended to use RAID-Z2, RAID-Z3, or 3-way mirrors instead of RAID-Z1.

RAID-Z2

4-24 disks in a group

2 disks in a data group may fail. The total capacity equals the sum of all disks minus the capacity of 2 disks.

Suitability for mission critical solutions:

RAID-Z2

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z2 group.

The RAID-Z2 accepts up to two disk failures per RAID-Z2 group.
RAID-Z2 can be used for mission critical applications.

It is recommended not to exceed 12 disks per group for mission critical and 24 disks for NON-mission critical applications. It is also recommended not to exceed 16 RAID-Z2 groups in the Pool, because a single group failure results in the destruction of the whole Pool (recommended up to 16*12=192 disks for mission critical applications and 16*24=384 disks for non-mission critical applications in the pool). HDDs bigger than 16TB should be avoided.

Note, the pool performance of 2 * RAID-Z2 with 6 disks each is double that of a single RAID-Z2 with 12 disks. If tolerance of 3 disk failures per RAID group is required, it is recommended to use RAID-Z3.

RAID-Z3

5-48 disks in a group

3 disks in a data group may fail. The total capacity equals the sum of all disks minus the capacity of 3 disks.

Suitability for mission critical solutions:

RAID-Z3

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z3 group.

The RAID-Z3 accepts up to three disk failures per RAID-Z3 group.
RAID-Z3 can be used for mission critical applications.

It is recommended not to exceed 24 disks per group for mission critical and 48 disks for NON-mission critical applications. It is also recommended not to exceed 24 RAID-Z3 groups in the Pool, because a single group failure results in the destruction of the whole Pool (recommended up to 24*24=576 disks for mission critical applications and 24*48=1152 disks for non-mission critical applications in the pool). HDDs bigger than 16TB should be avoided.

Note, the pool performance of 2 * RAID-Z3 with 12 disks each is double that of a single RAID-Z3 with 24 disks.

Legend: Data disk / Mirror or parity disk

Learn more about data redundancy in ZFS

The redundancy level sets the number of parity disks in a data group. This number specifies how many disks may fail without the data group losing operation. Higher parity levels require more computation from the system, which increases redundancy at the cost of performance.

RAID-Z is a data/parity distribution scheme like RAID-5, but it uses dynamic stripe width: every block is its own RAID stripe, regardless of block size, so every RAID-Z write is a full-stripe write. RAID-Z is also faster than traditional RAID-5 because it does not need to perform the usual read-modify-write sequence.
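The per-group capacity and fault tolerance of the redundancy types above can be summarized in a few lines. The following sketch is illustrative only (the helper name and table are ours, not part of any Open-E API); the group-size limits come from the descriptions on this page.

```python
# Per-group properties of each redundancy type described above.
# (Illustrative helper, not the calculator's internals.)

REDUNDANCY = {
    # type: (min disks per group, max disks per group, disks that may fail)
    "no-redundancy": (1, 1, 0),
    "2-way mirror":  (2, 2, 1),
    "3-way mirror":  (3, 3, 2),
    "4-way mirror":  (4, 4, 3),
    "raid-z1":       (3, 8, 1),
    "raid-z2":       (4, 24, 2),
    "raid-z3":       (5, 48, 3),
}

def data_disks_per_group(rtype, disks_in_group):
    """Capacity-bearing disks in one group of the given redundancy type."""
    lo, hi, tolerated = REDUNDANCY[rtype]
    if not lo <= disks_in_group <= hi:
        raise ValueError(f"{rtype} requires {lo}-{hi} disks per group")
    if rtype.endswith("mirror"):
        return 1                       # mirrors keep one disk's capacity
    return disks_in_group - tolerated  # RAID-Z: total minus parity disks
```

For example, a 6-disk RAID-Z2 group stores data on 4 disks, while any mirror group, regardless of width, stores data on the equivalent of 1 disk.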

Pool storage characteristics - what each value means

To understand the presented values, please note that storage hardware uses the base-10 system, so disk capacity is shown in TB (10¹² bytes). Software uses the base-2 system, so storage capacity is shown in TiB (2⁴⁰ bytes). For more information refer to our blog article.

Total storage capacity is the sum of the capacity of all disks before redundancy is applied and converted to units used by operating systems (TB → TiB).

Storage capacity after RAID or disk mirroring is applied is the sum of the capacity of all disks after redundancy is applied and converted to units used by operating systems (TB → TiB). This capacity excludes parity disks.

Usable data storage capacity is the functional storage space for user data after redundancy has been applied and about 10% of the pool space has been reserved. To ensure the pool works correctly and efficiently, roughly 10% of its capacity must be reserved. However, this value can change depending on:

  • number of disks,
  • disks capacity,
  • number of groups,
  • redundancy type.

For this reason, the calculated values are approximate.
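The capacity chain above (TB → TiB conversion, parity deduction, pool reserve) can be sketched as follows. This is a minimal sketch assuming a flat 10% reserve; as noted above, the calculator's actual reserve varies with the configuration, and the function name and values are illustrative.

```python
# Illustrative capacity chain: total -> after RAID -> usable.
# (Hypothetical helper; assumes a flat 10% pool reserve.)

TB_PER_TIB = 10**12 / 2**40  # ~0.9095: converts TB (base 10) to TiB (base 2)

def pool_capacities(disks, disk_tb, parity_per_group, disks_per_group):
    """Return (total, after_raid, usable) in TiB for one pool."""
    groups = disks // disks_per_group
    total_tib = disks * disk_tb * TB_PER_TIB
    data_disks = disks - groups * parity_per_group   # parity disks excluded
    after_raid_tib = data_disks * disk_tb * TB_PER_TIB
    usable_tib = after_raid_tib * 0.9                # ~10% pool reserve
    return total_tib, after_raid_tib, usable_tib

# Example: 2 x RAID-Z2 groups of 6 disks, 4 TB each (12 disks total)
total, after_raid, usable = pool_capacities(12, 4.0, 2, 6)
```

With these example values the chain yields roughly 43.7 TiB total, 29.1 TiB after RAID-Z2, and 26.2 TiB usable, illustrating why the usable figure is noticeably smaller than the sum of the raw TB capacities.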

Expected annual pool reliability parameters
General information

The calculation is based on the pool configuration and pool reliability parameters.

Pool configuration parameters used to make the calculation:

  • Number of data groups
  • Number of disks in a data group
  • Number of parity disks

Pool reliability parameters:

  • Mean Time To Recovery (MTTR)
  • Mean Time Between Failures (MTBF) or Annual Failure Rate (AFR)

You can use the default pool reliability parameters or enter your own based on data provided by the disk vendor. More precise data makes the result more accurate.
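The calculation itself is not published on this page, but a common textbook-style approximation combines the same parameters listed above: the chance a group is lost is roughly the chance of a first disk failure times the chance that more disks than the group tolerates fail before the rebuild finishes. The sketch below uses that approximation with assumed inputs; the calculator's exact model may differ.

```python
import math

def annual_pool_reliability(groups, disks_per_group, parity,
                            mttr_days, mtbf_hours):
    """Rough textbook-style estimate of the probability that no data group
    loses more disks than it can tolerate within one year.
    (Illustrative approximation, not the calculator's exact model.)"""
    year_h = 8766.0                     # hours in a year (365.25 days)
    mttr_h = mttr_days * 24.0
    lam = 1.0 / mtbf_hours              # per-disk failure rate per hour
    # Probability that one given disk fails during a rebuild window:
    p_rebuild = 1.0 - math.exp(-lam * mttr_h)
    # Expected first failures per group per year:
    first_failures = disks_per_group * lam * year_h
    # Group is lost if at least `parity` of the remaining disks also fail
    # before the rebuild completes (binomial tail over remaining disks):
    remaining = disks_per_group - 1
    p_extra = sum(math.comb(remaining, k)
                  * p_rebuild**k * (1 - p_rebuild)**(remaining - k)
                  for k in range(parity, remaining + 1))
    p_group_loss = min(1.0, first_failures * p_extra)
    return (1.0 - p_group_loss) ** groups

# Example: 2 x RAID-Z2 (6 disks each), MTTR 3 days, MTBF 1,000,000 hours
```

Even this rough model shows the behavior the calculator reports: reliability drops as MTTR grows (longer rebuilds) and as MTBF shrinks (less reliable disks).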

Pool reliability parameters description

Mean Time To Recovery (MTTR)

MTTR is the average time it takes to recover the pool after a disk failure (including the RAID rebuild).

Mean Time Between Failures (MTBF)

MTBF is the predicted elapsed time between disk failures during normal system operation.

Annual Failure Rate (AFR)

Annual Failure Rate represents the same information as MTBF, but expressed as a percentage: the probability of a disk failing during a single year. Which of the two values is available for a specific disk depends on the disk vendor.

Change parameters

Mean Time To Recovery

Mean Time To Recovery (MTTR) is the average time it takes to recover the pool after a disk failure (including the RAID rebuild).

days

Disks reliability

Mean Time Between Failures (MTBF) is the predicted elapsed time between disk failures during normal system operation.

Annual Failure Rate (AFR) is the same value expressed as the probability of a disk failing during a single year.

MTBF

hours

AFR

%
Pool performance index rating

Read performance index rating:

(scale: 1 to 10)

7.4 x single disk

Write performance index rating:

(scale: 1 to 10)

4.9 x single disk

The pool performance index rating shows the performance index of a given pool compared to a single disk’s performance without taking into account the write cache and read cache.

Example: A performance index of 4.9 is equal to the aggregated performance of 4.9 single disks.

The pool performance index is important when it comes to sustained I/O, where the cache fills up quickly, as that causes the overall system performance to depend on the pool performance itself. The performance index rating is affected by:

  • the number of groups in the pool - the main factor
  • the number of disk drives in the group
  • the redundancy type.

In some cases, the performance index rating for SSDs may be lower than that of HDDs. However, this doesn’t mean that SSD storage performance will be slower than HDD storage performance. A single SSD works faster than a single HDD, so storage solutions composed of SSDs will be faster despite the lower performance index rating.

Note! These results are theoretical and may not reflect the final system performance, which depends on more factors than the data groups’ performance. The final pool performance and storage performance are also dependent on the devices used (e.g., cache or RAM), pool configuration, and various other factors. For example, there is a strong correlation between a ZFS zvol volblocksize and performance. As a general rule, the smaller the zvol volblocksize, the more IOPS that can be achieved; the larger the zvol volblocksize, the greater the throughput (MB/s).
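The scaling rules listed above (group count as the main factor, plus disks per group and redundancy type) can be illustrated with a deliberately simplified model. The function below is our assumption for illustration only, not the calculator's formula: it assumes reads scale with the capacity-bearing disks (mirrors can read from every copy) while writes scale mainly with the number of groups.

```python
# Rough illustrative model of the performance index behavior described
# above (assumed scaling rules; NOT the calculator's actual formula).

def performance_index(groups, disks_per_group, parity, mirror=False):
    """Return (read_index, write_index) relative to a single disk."""
    if mirror:
        # Mirrors can read from every copy but must write to all of them,
        # so reads scale with all disks and writes with groups only.
        read = groups * disks_per_group
    else:
        # RAID-Z reads stream across the data disks of each group;
        # writes are roughly one full-stripe operation per group at a time.
        read = groups * (disks_per_group - parity)
    write = groups
    return float(read), float(write)
```

Under this model, doubling the group count at the same total disk count roughly doubles the write index, which matches the earlier note that 2 x RAID-Z1 with 4 disks each performs about twice as fast as a single RAID-Z1 with 8 disks.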

Add non-data groups

Write log disks

A storage area that temporarily holds synchronous writes until they are written to the pool. Dedicated write log disks keep it on separate media from the data, typically on a fast device such as an SSD.

GB

Read cache disks

Provide an additional layer of caching between main memory and disk. For read-heavy workloads, cache devices allow much of the work to be served from low-latency media.

Note! Not recommended for pools consisting of SSDs, as cache devices do not noticeably improve their read speed.

GB

Special devices disks

Devices dedicated to storing metadata, indirect blocks of user data, and, by default, any deduplication tables. Optionally, special devices can also store small file blocks with sizes defined by the user. By storing these blocks on faster devices, such as SSDs, users can improve access times and reduce the fragmentation of their pools. Deduplication tables can be excluded from the special group and stored in a deduplication group specifically dedicated to that purpose.

GB

Deduplication disks

Devices dedicated only to storing the deduplication tables to improve the system's performance and scalability.

GB

Spare disks

Spare disks are drives kept on active standby for use when a disk drive fails. Spare disks can be used for any data group. They should have a capacity equal to or greater than that of the disks in the data groups.

Add pool

The maximum number of supported pools in HA Clusters is 3.

The maximum number of supported pools in HA Cluster

The information below is based on the release notes for the latest version of Open-E JovianDSS.

For Shared and Non-shared storage HA Clusters, the maximum number of supported pools in Open-E JovianDSS is 3. With more than 3 pools, an unexpectedly long failover time might occur. The failover procedure moves pools in sequence; if all pools are active on a single node and failover needs to move all 3 pools, it may take longer than 60 seconds, the default iSCSI timeout in Hyper-V Clusters. Under heavy load, switching cluster resources may also take too long in some environments.

The maximum number of supported pools has been reached

The maximum number of supported pools in HA Clusters is 3. Since three pools already exist in the configuration, you can't clone the selected pool.

System summary

System type

Shared storage HA Cluster

Total disks in data groups

0 disks

Total deduplication disks

0 disks

Pools in the system

0 pools

Total write log / read cache disks

0 disks

Total spare disks

0 disks

Total usable data storage capacity

0.00TiB

Total special devices disks

0 disks


Pools comparison

Usable capacity | Redundancy level | Disks in data groups | Pool reliability | Space efficiency | Read performance | Write performance
Calculation name

Please fill out the fields below.

You can download your calculation as a PDF file or save it in your Portal account. Use the field below to enter a unique name. The maximum length of the name field is 50 characters.