Which server will be faster?

Hi,

I need a dedicated server and I found the following configs, which are at the same price:

E3-1245v5, 32GB RAM, SoftRAID 2x4TB SATA
E3-1245v5, 32GB RAM, SoftRAID 2x450GB NVMe

I need storage space, but at the same time I cannot compromise much on speed.
I know the NVMe will be faster, but the question is: how much slower will the SATA drives be compared to the NVMe?

Will they be very slow or only somewhat slower? Please clarify.
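
For a rough sense of scale, a back-of-envelope comparison with typical ballpark figures looks like the following (the numbers are illustrative assumptions, not benchmarks of these exact drives):

```python
# Illustrative ballpark figures only (assumed, not measured on these drives).
sata_hdd = {"seq_mb_s": 180, "rand_4k_iops": 150}       # typical 4TB 7200 rpm SATA disk
nvme_ssd = {"seq_mb_s": 2500, "rand_4k_iops": 200_000}  # typical datacenter NVMe SSD

print(f"Sequential: NVMe ~{nvme_ssd['seq_mb_s'] / sata_hdd['seq_mb_s']:.0f}x faster")
print(f"Random 4K:  NVMe ~{nvme_ssd['rand_4k_iops'] / sata_hdd['rand_4k_iops']:.0f}x faster")
# Roughly 14x for sequential transfers and over 1000x for random 4K I/O with
# these assumed numbers, so how "slow" the SATA option feels depends mostly on
# whether the workload is sequential (backups, media) or random (databases).
```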

Thanks.


Similar Content



How much I/O should I get from these options?

So I'm looking to buy something like the Infra-3 server from OVH (https://www.ovhcloud.com/es/bare-metal/infra/infra-3/). They offer a 4x960GB SSD SATA hardware RAID, and also a 3x3.84TB SSD NVMe software RAID. Although I want more space, I think they sell the 4x960 option so people can gain more speed? How much more speed could I gain from that configuration?

They also sell a 3x6TB HDD SATA hardware RAID. I want to know how much I/O speed I should get in each case.

Currently I have an Infra-2 (https://www.ovhcloud.com/es/bare-metal/infra/infra-2/) with 2x960GB SSD NVMe software RAID + 2x6TB HDD SATA software RAID. How can I test the I/O speed of those drives?
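
The simplest thing I could come up with myself is a sequential-write sketch like the one below (the target path is an assumption; I assume a purpose-built tool such as fio or ioping is what people actually recommend, especially for the random-I/O numbers that differ most between HDD and NVMe):

```python
import os
import time

# Minimal sequential-write test: write 1 GiB in 1 MiB chunks and fsync, so the
# page cache does not hide the real disk speed. Point PATH at the mount to test.
PATH = "/mnt/test/io_test.bin"   # assumed path -- change to the filesystem under test
CHUNK = b"\0" * (1024 * 1024)
TOTAL_MB = 1024

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.time() - start
os.remove(PATH)

print(f"Sequential write: {TOTAL_MB / elapsed:.0f} MB/s")
```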

And if I start with a new server, what kind of partitioning do you recommend to improve I/O?

Looking to get the same deal we have now at another DC (30 Gbps and many NVMe SSD drives)

We currently have:

3 servers in EU

specs for each:
CPU: 2x Xeon Silver 4114
RAM: 120 GB
Network: 10 Gbps unmetered dedicated (per server, 30 Gbps total)
Storage: 22x 2TB PCIe NVMe SSD - RAID 0

Current cost: $4,500 per month for all 3.

We are looking for the exact same setup at another DC in Western Europe; DE, NL, or UK would be ideal.
We can compromise on CPU/RAM if it helps reach this price, as they are less relevant for our usage.

Thanks.

Looking for a replacement for a WebNX storage server and servers with iGPU

WebNX has announced their servers won't be back for weeks. We don't have weeks.

Looking for these on the US West Coast.

500TB storage server(s), NVMe drives for cache, 1 Gbps (10 Gbps for a month if possible) unmetered incoming, 40 Gbps private networking (aggregate if spread across multiple storage servers)
15x servers with iGPU, 32GB RAM, 1TB SSD/NVMe, 1 Gbps private network, 1 Gbps public with at least 40TB outgoing.

We require free private networking for all servers.
Looking for something close to WebNX's pricing.

<<snipped>>

Thanks.

Looking for some suggestions regarding CPU selection

I am planning to upgrade my current, very old server (dual E5-2430 Xeons, 12 cores/24 threads, with SATA SSDs), which has been doing duty as a shared hosting server for around 400 websites. I have noticed slow website loading times. 95% of the sites use PHP/MySQL (WordPress).

Between the time I got the server and now, many new CPUs have come out, and it looks like the Xeon E3/E5 lines have been discontinued. Dual Scalable Silver/Gold Xeon servers are out of my budget, so the only alternative is an E-2288G or a W-1290P (more interested in this one) Xeon with NVMe drives. PassMark scores for these CPUs are much higher than my current server's, but moving to these new servers will mean a downgrade in the number of cores.

Looking for some personal experiences with the E-2288G/W-1290P CPUs, so that I can be sure I am not actually getting a downgrade by changing servers.

I don't need more than 64GB of RAM, and 2x2TB NVMe SSDs are fine.

Ceph Build - EPYC NVMe Power Consumption

Hi All -

Wanted to see people's experiences with Ceph builds (3/5 nodes) using AMD EPYC 2 with NVMe drives. One of my questions is actual power consumption; I'm trying to understand what our electricity costs will be like for the year (a rough back-of-envelope sketch follows the parts list below). This is what I am looking at for our potential config, with 3 nodes:

1x Supermicro A+ Server 1114S-WN10RT
1x AMD EPYC 7302P, 3.00GHz, 16C/32T, Socket SP3, tray
4x 32GB SK hynix DDR4-3200 CL22 (2Gx4) ECC reg. DR
1x MON: 256GB Samsung SSD PM981a NVMe, M.2 (PCIe)
10x OSD: 7.68TB Samsung SSD PM1733, 2.5 inch, U.2 PCIe 4.0 x4, NVMe
1x Mellanox MCX414A-BCAT ConnectX-4 EN NIC
- 40/56GbE dual-port QSFP+ PCIe3.0 x8
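
For the electricity question, this is the kind of back-of-envelope estimate I had in mind (every wattage below is an assumption, not a measured figure for these exact parts):

```python
# Rough per-node power/cost estimate for the Ceph config above (all values assumed).
cpu_w      = 155        # EPYC 7302P TDP; real draw varies with load
ram_w      = 4 * 5      # ~5 W per 32 GB RDIMM (assumed)
osd_w      = 10 * 20    # U.2 NVMe SSDs often draw ~15-25 W under load (assumed 20 W each)
boot_w     = 8          # M.2 MON drive (assumed)
nic_w      = 20         # ConnectX-4 class NIC (assumed)
overhead_w = 80         # fans, PSU losses, motherboard (assumed)

node_w = cpu_w + ram_w + osd_w + boot_w + nic_w + overhead_w
nodes = 3
eur_per_kwh = 0.25      # substitute the local electricity rate

yearly_kwh = node_w * nodes / 1000 * 24 * 365
print(f"~{node_w} W per node, ~{node_w * nodes} W for the 3-node cluster")
print(f"~{yearly_kwh:,.0f} kWh/year, ~{yearly_kwh * eur_per_kwh:,.0f} EUR/year at {eur_per_kwh} EUR/kWh")
```

Actual numbers from anyone running similar EPYC NVMe nodes would obviously beat these guesses.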

Looking for a dedicated server with Intel iGPU

I am looking for an unmanaged dedicated server for Plex which can do occasional HW transcoding.
Processor: Intel Skylake or newer with iGPU
Disk: 512GB - 1TB NVMe
RAM: 16GB+
Network: 1 Gbps port with over 10TB of transfer - unlimited (preferred)
Location: US (preferred) or good peering to the US
Currently looking at the Hetzner EX42-NVMe, so anything in that price range works for me.

In need of a self-managed storage dedicated server (~100 TB+) in Europe, Russia, or the CIS for image hosting

Hello everyone. I have been a long-time lurker but never registered. So, first I want to say a big thank you to this wholesome community for always giving great advice to everyone out there. You guys rock!

I am trying to find an alternative to Hetzner's SX133, which is ridiculously well priced. How do they even do that? Well done, Hetzner. As of 24/03/2021, the SX133's specs are:
Intel® Xeon® W-2145, 8 cores, 16 threads,
128 GB DDR4 ECC,
2 x 960 GB NVMe SSD,
10 x 16 TB (SATA 6 Gb/s, 7200 rpm) - a whopping 160 TB of storage,
1 Gbit/s connection,
Unlimited traffic,
159.00 (without TAX).

I need this for an image hosting website, where users sometimes upload adult content. Sadly, Hetzner's TOS (point 6.2) does not allow that, and they are entitled to block access to the account of any customer who violates it. Which is fine - it's their rules.


My budget is 180 EUR (without TAX), but I don't even need specs as fancy as Hetzner provides. I am completely okay settling for something like:
4-core, 8-thread CPU,
32 GB DDR3 ECC,
2 x (or even 1 x) 240 GB SATA SSD,
The most important factor is storage - the more the better; the minimum would be ~100 TB (SATA 6 Gb/s, 7200 rpm),
1 Gbit/s connection.

The website's traffic last month was 500 TB, but since 90% is cached by the CDN, I would still need about 50 TB. Obviously, unlimited traffic would be great. It seems very few hosting companies actually offer dedicated servers for storage purposes. I have already checked OVH, Leaseweb, OneProvider, Flaunt7, and some local hosting companies, but it seems no one comes close to what Hetzner offers. It's like their offer is from the future, and hopefully in 1-2 years others will get there as well.

It would be nice if the server were located somewhere close to the west or middle of Russia.


P.S. As another option, maybe I could rent more than one dedicated server so that their combined storage capacity would be ~100 TB, and go with Ceph instead of RAID6, but I still can't exceed the 180 EUR budget. Any ideas are greatly appreciated.

Best method to connect 24 SSDs for software RAID?

Hello,

I have been using 3108 RAID controllers with Supermicro servers for a long time.
They work fine.

However, with the increase in SSD performance, I have always been aware that software RAID will probably give better performance, and that some filesystems such as ZFS have additional unique advantages.
So I am finally considering making the leap from hardware RAID to software RAID, especially since, as far as I know, NVMe drives don't even have hardware RAID support.

In the past, when connecting many drives, I noticed I had to set up each single drive as a RAID 0 through the RAID card, which I think would decrease performance.

So I have two questions regarding the best method to connect 24 SSDs for software RAID:

1.
What is the standard way of connecting 24 SATA SSDs to a storage server so that each drive can be used individually and/or set up as software RAID?
From what I know, a RAID card such as the 3108 is usually needed to connect that many drives to the mainboard.

2.
How do 24 NVMe drives work in storage servers that support them? Does the mainboard have shared PCIe lanes connected to the NVMe drives?
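
As a follow-up on question 2: from what I've read, dense NVMe backplanes either wire each bay straight to CPU PCIe lanes, split lanes via bifurcation, or sit behind a PCIe switch. On a Linux box that already has NVMe drives, a small sketch like this (assuming the standard sysfs layout) shows what link each drive actually negotiated:

```python
import glob
import os

def pcie_attr(pci_dev: str, attr: str) -> str:
    """Read a PCIe sysfs attribute, returning 'unknown' if it is missing."""
    try:
        with open(os.path.join(pci_dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return "unknown"

# Print the negotiated PCIe link width/speed for every NVMe controller.
# Assumes the standard Linux sysfs layout (/sys/class/nvme/nvmeN -> PCI device).
for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    dev = os.path.realpath(os.path.join(ctrl, "device"))
    print(f"{os.path.basename(ctrl)}: x{pcie_attr(dev, 'current_link_width')} "
          f"@ {pcie_attr(dev, 'current_link_speed')} "
          f"(max x{pcie_attr(dev, 'max_link_width')} @ {pcie_attr(dev, 'max_link_speed')})")
```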

Thank you.

IOFLOOD - 3 Month Review


I signed up with IOFLOOD for a Dual E5-2680v2 dedicated server about 3 months ago and wanted to give an update.

When signing up, there is a 24-72 hour setup time. They were able to set me up earlier as I was a WebNX refugee. (If setup time is important to you, just take note.)

Let's talk speed: the speed test shows 600 Mbps download and 400 Mbps upload.

Server response time for webpages is 100 ms or less in the tests done.

Sometimes their email response time is a little slower than the Liquid Webs of the world (10-minute response time or less), but we have had little reason to contact support.

Most important of all is uptime. We have had 100% uptime since signing up. We did have downtime recently for a RAM upgrade (which is to be expected).

So, I would give them an A+.
I have upgraded some features of my server, but the current specs and price are below. I added additional RAM over the base setup and some additional IPs. I couldn't be happier with the service, and the price is quite reasonable.

Server specs and price
Dual E5-2680v2, 100TB/mo bandwidth on 1 Gbps, 256GB RAM, 2x 1.6TB NVMe SSD, unmanaged, 2x /29 IPs, $191.50.

10 Gbps Server Settings and Nginx Config

Hello All,

I recently upgraded a dedicated server I have with Hivelocity in their Tampa 2 datacenter to 10 Gbps networking, and I'm having a hard time getting anything above 1.5 Gbps out of the server. I have been searching and trying different settings with Nginx for 2 days with no luck improving the situation, and support has not been very helpful with this one. We ran an iperf test a couple of times and they only got 750 Mbps to their speedtest server, which I thought was odd.
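
For context on why a single stream can stall well below the port speed, the bandwidth-delay-product math looks roughly like this (the RTT and buffer sizes are assumed example values, not measurements from this server):

```python
# A single TCP stream is capped at roughly window_size / RTT regardless of port speed.
def max_gbps(window_bytes: float, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e9

for window_mb in (4, 16, 64):                    # assumed window sizes
    rtt_ms = 40                                  # assumed client RTT
    print(f"{window_mb:>3} MB window @ {rtt_ms} ms RTT -> "
          f"~{max_gbps(window_mb * 2**20, rtt_ms):.1f} Gbps per stream")
# With default-ish receive buffers of a few MB and tens of ms of RTT, each stream
# lands around 1 Gbps, which is why raising net.core.rmem_max/wmem_max and
# net.ipv4.tcp_rmem/tcp_wmem is a common first step when tuning for 10 Gbps.
```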

Does anyone with experience in 10 Gig networking have any tips I could try for Nginx or server OS optimizations? The server mostly serves a 300 MB static file (downloaded in 5 threads at once) and needs to handle downloads from it of about 3000 Mbps (Verizon Ultra Wideband). The server is a six-core Xeon with 32GB of RAM and a 512GB NVMe SSD.

Any help is greatly appreciated.