Best method to connect 24 SSDs for software RAID?

Hello,

I have been using 3108 RAID controllers with Supermicro servers for a long time, and they have worked fine.

However, with the increase in SSD performance, I have long suspected that software RAID will probably perform better, and that filesystems such as ZFS bring additional unique advantages.
So I am finally considering making the leap from hardware RAID to software RAID, especially since, as far as I know, NVMe drives don't even have conventional hardware RAID support.

In the past, when connecting many drives, I noticed I had to set up each single drive as its own RAID 0 volume through the RAID card, which I think would decrease performance.

So I have two questions about the best method to connect 24 SSDs for software RAID:

1.
What is the standard way of connecting 24 SATA SSDs to a storage server so that each drive can be used individually and/or set up as software RAID?
From what I know, a RAID card such as the 3108 is usually needed to connect that many drives to the mainboard.

2.
How do 24 NVMe drives work in storage servers that support them? Does the mainboard have shared PCIe lanes connected to the NVMe drives?
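
For context on question 2, here is my rough understanding of the lane math (a back-of-envelope Python sketch; the CPU lane counts are approximate and platform-dependent):

# PCIe lane budget for 24 NVMe drives at x4 each.
drives, lanes_per_drive = 24, 4
needed = drives * lanes_per_drive
print(f"lanes needed for full bandwidth: {needed}")   # 96

# Approximate CPU lane counts, for comparison:
for cpu, lanes in [("typical desktop CPU", 24),
                   ("single Xeon Scalable", 48),
                   ("single AMD EPYC", 128)]:
    verdict = ("native x4 per drive possible" if lanes >= needed
               else "needs PCIe switches or lane sharing")
    print(f"{cpu}: {lanes} lanes -> {verdict}")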

Thank you.


Similar Content

Home Server Question

So I've run into a little issue with a build I'm working on, and I have a few questions. I'm going to be working with a Proxmox cluster running NFS storage for Nextcloud instances. Because it's a personal home server, I'll be using the TUF Gaming B550-Plus mobo with the AMD Ryzen 5 3600 processor.

I'm going to install Proxmox on an NVMe SSD and also add 6x 4TB SAS drives for the storage.

Now my question: the mobo has onboard RAID, but after reading a few guides I see a lot of them saying not to use RAID arrays and to use the NFS settings in Proxmox to combine the storage instead. Are there any other options available for storage with Proxmox?

Would this be possible without a RAID card? If not, which RAID card do I need? Would onboard RAID support 4TB SAS drives? I've been scouring Google for the last couple of days but haven't come up with anything significant that would help me finish off the build.
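
For what it's worth, usable capacity for 6x 4TB drives under the common software layouts works out as follows (back-of-envelope decimal TB; ZFS/mdadm metadata and filesystem overhead will shave a bit off):

# Rough usable-capacity comparison for 6 x 4 TB drives under
# common software RAID / ZFS layouts (ignores filesystem overhead).
drives, size_tb = 6, 4.0

layouts = {
    "striped mirrors (RAID 10 / ZFS mirror pairs)": (drives / 2) * size_tb,
    "RAIDZ1 / RAID 5 (one drive of parity)": (drives - 1) * size_tb,
    "RAIDZ2 / RAID 6 (two drives of parity)": (drives - 2) * size_tb,
}
for name, usable in layouts.items():
    print(f"{name}: {usable:.0f} TB usable")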

Any advice would be much appreciated.

Thanks.

Replacing a Hard Drive with RAID 1 (Software RAID)

This is my first post on the Web Hosting Talk forum, and I need your help. My server consists of 2 hard drives (2x 1TB SATA). Last week my server slowed down, and the data center noticed that I have an issue with my hard drive. They said, "We are showing drive SDB is failing and will need to be replaced," and asked me to take a backup and replace the hard drive immediately.

I have 2 hard drives and I have installed CentOS 7 with software RAID 1. So now my question is: once we replace the failed drive, do I need to restore the backup, or will it sync through the RAID without any manual backup restoration?
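
In case it helps: with mdadm RAID 1, once the replacement disk is partitioned and re-added to the array, md rebuilds the mirror on its own. One way to watch the rebuild is to poll /proc/mdstat (a minimal Python sketch, assuming a Linux host with md arrays):

import re
import time

def resync_progress():
    """Return (kind, percent) pairs for any resync/recovery in /proc/mdstat."""
    with open("/proc/mdstat") as f:
        text = f.read()
    return re.findall(r"(resync|recovery)\s*=\s*([\d.]+%)", text)

while True:
    progress = resync_progress()
    if not progress:
        print("no rebuild in progress - arrays are in sync")
        break
    for kind, pct in progress:
        print(f"{kind}: {pct}")
    time.sleep(30)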

SSD and RAID speed

Hi,

With your experience,

1. For hosting sites (such as WordPress), will NVMe SSDs be much quicker than SATA SSDs?

2. Will SSDs behind a hardware RAID card be much quicker than mdadm software RAID?
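
On question 1, the interface ceilings alone give a rough sense of the gap (illustrative sequential figures; WordPress-style workloads are mostly small random I/O, where NVMe's lower latency matters even more):

# Approximate interface ceilings for sequential throughput.
interfaces = {
    "SATA III SSD": 550,          # MB/s, limited by the SATA bus
    "NVMe PCIe 3.0 x4 SSD": 3500,
    "NVMe PCIe 4.0 x4 SSD": 7000,
}
sata = interfaces["SATA III SSD"]
for name, mb_s in interfaces.items():
    print(f"{name}: ~{mb_s} MB/s (~{mb_s / sata:.1f}x SATA)")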

Does server virtualization (Virtuozzo) impact server performance?

I'm currently designing a new server architecture in collaboration with a server tech. We'll need 4 dedicated servers, and he has recommended setting up server virtualization via Virtuozzo; his reasoning is that it will make it easy to migrate to better hardware when needed. This seems like a good idea, but some devs are telling me that it will degrade the performance of the boxes. Is this true? His plan is to install the OS on 2x SATA drives (RAID 1) and then place the virtualization layer on 2x SSDs (RAID 1).
I would like a second opinion on this.
Does virtualization degrade performance, and if it does, by how much? Is it really as convenient for migration as claimed? Anything else to consider?

Thanks!

How much I/O should I get from these options?

So I'm looking to buy something like the Infra-3 server from OVH (https://www.ovhcloud.com/es/bare-metal/infra/infra-3/). They offer a 4x 960GB SATA SSD hard RAID and also a 3x 3.84TB NVMe SSD soft RAID. And although I want more space, I think they sell the 4x 960GB option so people can gain more speed? How much more speed could I gain from that configuration?

They also sell a 3x 6TB SATA HDD hard RAID. I want to know how much I/O speed I should get in each case.
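
For scale, a back-of-envelope estimate (assumed typical per-drive speeds; the actual RAID level, controller, and workload will land well below these ideal ceilings, and random I/O is a different story entirely):

# Ideal sequential read if every member of a stripe contributes fully.
def ideal_striped_read(n_drives, per_drive_mb_s):
    return n_drives * per_drive_mb_s

offers = [
    ("4x 960GB SATA SSD", 4, 550),    # assumed ~550 MB/s per SATA SSD
    ("3x 3.84TB NVMe SSD", 3, 3000),  # assumed ~3000 MB/s per NVMe SSD
    ("3x 6TB SATA HDD", 3, 200),      # assumed ~200 MB/s per HDD
]
for name, n, speed in offers:
    print(f"{name}: up to ~{ideal_striped_read(n, speed)} MB/s sequential read")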

Currently I have an Infra-2 (https://www.ovhcloud.com/es/bare-metal/infra/infra-2/) with 2x 960GB NVMe SSD soft RAID + 2x 6TB SATA HDD soft RAID. How can I test the I/O speed of those drives?
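
If you just want a ballpark number without installing a dedicated benchmark, timing a synced sequential write from Python gives a rough figure (the path below is a placeholder for a mount point on the array you want to measure; caching and RAID write-back can still skew results):

import os
import time

path = "/mnt/array/io_test.bin"        # hypothetical mount point on the array
block = os.urandom(4 * 1024 * 1024)    # 4 MiB of incompressible data
blocks = 256                           # 256 x 4 MiB = 1 GiB total

start = time.time()
with open(path, "wb") as f:
    for _ in range(blocks):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())               # make sure the data actually hit the disks
elapsed = time.time() - start
os.remove(path)

mb = blocks * len(block) / 1e6
print(f"wrote {mb:.0f} MB in {elapsed:.2f} s ({mb / elapsed:.0f} MB/s)")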

And if I start a new server, what kind of partitioning do you recommend to improve I/O?

Hardware or mdadm RAID 10?

Hi,

I have 8 SATA3 SSDs for shared hosting, and I want to run CloudLinux and cPanel on them.

Comparing a hardware RAID card with mdadm RAID 10, which one do you recommend?

I did some searching:

1. Some people say a RAID card will be quicker, but CPUs are fast now. Does it really make a difference?

2. Some people say a RAID card has a battery, while mdadm depends only on server power. When the server's power fails, can mdadm's RAID data break? Is that real?

How to configure RAID on a server?

I apologize if I opened the thread in the wrong category. I have struggled for 4 days to configure the server. It has 4 disks of 4TB each, but I cannot use all of that capacity.
I chose RAID type: software, RAID level 5, and "number of disks" = 4, but it shows me only 10TB. Can anyone please help me?
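
For anyone hitting the same thing: RAID 5 spends one disk's worth of space on parity, and installers usually report binary TiB while labelling it "TB". Together that accounts for the missing capacity:

# RAID 5 usable space for 4 x 4 TB disks, in decimal TB and binary TiB
# (many installers report TiB but label it "TB").
disks, disk_tb = 4, 4.0

usable_tb = (disks - 1) * disk_tb        # one disk's worth goes to parity
usable_tib = usable_tb * 1e12 / 2**40    # decimal TB -> binary TiB

print(f"usable: {usable_tb:.0f} TB decimal = {usable_tib:.2f} TiB")
# -> usable: 12 TB decimal = 10.91 TiB, i.e. the "10TB" the installer shows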

How to configure a server for best performance?

Hello, I want to configure an HP ProLiant DL360p G8 that supports 4x 3.5" bays.
I'm thinking of installing one SSD and 3 HDDs: a 1TB SSD + 3x 10TB.
But what about the RAID setup?
With RAID 0, all 4 disks are combined, so would the SSD have any effect? I want to install the OS on the SSD so that it is faster, but does that work with RAID 0?
With another RAID level I cannot use the full capacity of the 10TB drives, is that right?

Is it generally possible to configure the 1TB SSD + 3x 10TB so that the OS runs on the SSD and I also have 30TB of space?
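
For reference, a RAID 0 stripe is limited by its smallest member, so mixing the 1TB SSD into the array would waste most of the 10TB drives; keeping the SSD as a standalone OS disk is what yields the full 30TB (rough decimal-TB math):

# A RAID 0 stripe's capacity is bounded by its smallest member.
drives_tb = [1, 10, 10, 10]              # 1 TB SSD + 3 x 10 TB HDD
raid0_all = len(drives_tb) * min(drives_tb)
print(f"RAID 0 across all four: {raid0_all} TB usable (most capacity lost)")

# Keeping the SSD separate for the OS and striping only the HDDs:
hdds = drives_tb[1:]
print(f"SSD alone for the OS + RAID 0 of HDDs: {sum(hdds)} TB of data space")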

Thank you

Patriot Burst SSDs: any experience with them in servers with hardware RAID?

I've been using Crucial MX500s and they work fine, but I need to swap them out about every year to stay safe on NAND wear on the hosting server. I was going to go with higher-endurance drives to get 2-3+ years, but I'm not seeing any available. I have used Patriot SSDs in the past, and in my personal experience on home-user computers they seem to fail (with the drive still readable for recovery) more often than other drives. Checking the specs, I noticed the NAND write endurance is over double that of the Crucial. So I was just wondering if anyone has tried the Patriot Burst on their servers and what experience they have had. I am considering trying a couple.
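
For comparison's sake, the replacement interval falls straight out of the rated endurance and the daily write load (the TBW figures and write rate below are illustrative placeholders, not quoted from either vendor's datasheet):

# Drive lifetime at a given write rate, from its rated endurance (TBW).
# The numbers are illustrative - check the actual spec sheets.
def years_of_life(tbw, writes_tb_per_day):
    return tbw / writes_tb_per_day / 365

daily_writes_tb = 1.0                    # assumed hosting-server write load
for name, tbw in [("drive rated 360 TBW", 360),
                  ("drive rated 800 TBW", 800)]:
    years = years_of_life(tbw, daily_writes_tb)
    print(f"{name}: ~{years:.1f} years at {daily_writes_tb} TB/day")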

Thanks in advance for any and all comments.

Paul

Looking to get the same deal we have now at another DC (30 Gbps and many NVMe SSD drives)

We currently have:

3 servers in EU

specs for each:
CPU: 2x Xeon Silver 4114
RAM: 120 GB
10 Gbps unmetered dedicated (per server, 30 Gbps total)
22x 2TB PCIe NVMe SSD - RAID 0

Current cost: $4,500 per month for all 3.

We are looking for the exact same setup at another DC in Western Europe; DE, NL, or UK would be ideal.
We can compromise on CPU/RAM if it helps reach this price, as it's less relevant for our usage.

Thanks.