Watch out, the Chia miners are coming

Warning to all dedicated hosts!

A huge share of our sales leads are now from Chia miners.

They use massive amounts of CPU (and therefore power) to plot very large hard drives.

They always want an SSD (it improves plotting speed). They will burn out your SSDs. Every 1 GB of HDD space "plotted" requires roughly 15 GB of writes to your SSDs.

When Chia crashes, this "demand" will disappear, leaving you with burnt out SSDs, large hard drives you don't need, and large power commitments without a customer to pay for them.

If you entertain these sales at all, make sure to use SSDs with high endurance (rated for 3 drive writes per day or higher), and make sure to charge a setup fee.
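As a rough back-of-envelope on how fast plotting chews through an SSD, here is a small sketch. Every number in it is an assumption to plug your own values into (plot rate, temp SSD size, DWPD rating), not a measurement:

```python
# Rough sketch: estimate how quickly Chia plotting exhausts an SSD's rated
# endurance. All values below are illustrative assumptions.

plot_size_gb = 101.4                       # a standard k=32 Chia plot
writes_per_plot_gb = 15 * plot_size_gb     # ~15 GB of SSD writes per 1 GB plotted
plots_per_day = 10                         # assumed plotting rate on this SSD

ssd_capacity_gb = 960                      # assumed temp SSD capacity
dwpd_rating = 3                            # drive writes per day the vendor rates it for
warranty_years = 5

rated_endurance_gb = ssd_capacity_gb * dwpd_rating * 365 * warranty_years
daily_writes_gb = plots_per_day * writes_per_plot_gb

print(f"Rated endurance: {rated_endurance_gb / 1000:.0f} TB written (TBW)")
print(f"Plotting writes: {daily_writes_gb / 1000:.1f} TB/day")
print(f"Drive hits its rated endurance in roughly {rated_endurance_gb / daily_writes_gb:.0f} days")
```

With these assumed numbers even a 3 DWPD drive is used up in under a year of continuous plotting, which is why the setup fee matters.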

Right now, a storage server that might be worth $1000/mo can bring in $1000/day in Chia. This is making people go bonkers.

Be careful out there!


Similar Content



Dedicated server for Chia plotting

Hello guys,
I am seeking a dedicated server for Chia mining.

16 or 32 GB RAM

3 or 4 SSD drives in RAID

500 GB SATA drive

16-core CPU

Windows or ESXi

I need to test for the first 24 hours without payment.


Thank you.

Best method to connect 24 SSDs for software RAID?

Hello,

I have been using 3108 RAID controllers with supermicro servers for a long time.
It works fine.

However, with the increase in SSD performance, I have long suspected that software RAID would probably perform better, and filesystems such as ZFS offer additional unique advantages.
So I am finally considering making the leap from hardware RAID to software RAID, especially since, from what I know, NVMe drives don't even have hardware RAID support in most cases.

In the past, when connecting many drives, I noticed I had to set up each single drive as its own RAID 0 through the RAID card, which I think decreases performance.

So I have two questions regarding the best method to connect 24 SSDs for software RAID:

1.
What is the standard way of connecting 24 SATA SSDs to a storage server and using each drive individually and/or setting them up as software RAID?
From what I know, a RAID card such as the 3108 is usually needed to connect that many drives to the mainboard.

2.
How do 24 NVMe drives work in storage servers that support them? Does the mainboard have shared PCIe lanes connected to the NVMe drives?
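As a rough back-of-envelope on the lane question (the lane counts below are assumptions, not the figures for any specific board), a single CPU rarely has enough PCIe lanes to give 24 NVMe drives a full x4 link each, which is why such backplanes typically use PCIe switches, a second socket, or narrower links:

```python
# Back-of-envelope sketch with assumed numbers: can one CPU feed 24 NVMe
# drives with dedicated x4 links, or does the chassis need PCIe switches?

drives = 24
lanes_per_drive = 4           # U.2/M.2 NVMe drives normally use a x4 link
cpu_lanes_for_storage = 64    # assumed lanes left for storage on a single socket

lanes_needed = drives * lanes_per_drive
print(f"Lanes needed at x4 per drive: {lanes_needed}")
print(f"Lanes assumed available:      {cpu_lanes_for_storage}")

if lanes_needed > cpu_lanes_for_storage:
    per_drive = cpu_lanes_for_storage // drives
    print(f"Short on lanes: without a PCIe switch or a second socket, "
          f"each drive gets roughly x{per_drive}.")
```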

Thank you.

Patriot Burst SSDs: any experience with them on servers in hardware RAID?

I have been using Crucial MX500s and they work fine, but I need to change them out about every year just to stay safe on NAND wear on the hosting server. I was going to go with a higher-endurance drive to get 2-3+ years, but I am not seeing any available. I have used Patriot SSDs in the past and noticed that, in my personal experience on home user computers, they seem to fail with the drive still readable for recovery more often than other brands. Checking the specs, I notice the NAND write endurance is over double that of the Crucial. So I was just wondering if anyone has tried the Patriot Burst on their servers and what experience they have had. I am considering trying a couple.
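For anyone weighing the endurance numbers, a quick way to project drive life is to compare the vendor's TBW rating against your own write totals from SMART. The figures below are placeholders, not measurements; plug in the numbers from your drives:

```python
# Rough sketch (plug in your own numbers): project how long a drive lasts at
# its current write rate, given the vendor's TBW rating.

rated_tbw = 180          # placeholder: vendor-quoted TBW for the drive, in TB
tb_written = 120         # placeholder: total host writes so far (from SMART), in TB
drive_age_days = 330     # placeholder: how long the drive has been in service

tb_per_day = tb_written / drive_age_days
days_left = (rated_tbw - tb_written) / tb_per_day

print(f"Average write rate: {tb_per_day:.2f} TB/day")
print(f"Projected days until the rated TBW is reached: {days_left:.0f}")
```

Running this with the ratings of the two candidate drives shows whether the higher TBW actually buys the extra 2-3 years.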

Thanks in advance for any and all comments.

Paul

Replacing a Hard Drive with RAID1 (Software RAID)

This is my first post on the Web Hosting Talk forum. I need your help. My server has 2 hard drives (2 x 1TB SATA). Last week my server slowed down, and the data center noticed that I have an issue with my hard drive. They said, "We are showing drive SDB is failing and will need to be replaced," and asked me to take a backup and replace the hard drive immediately.

I have 2 hard drives and have installed CentOS 7 with software RAID 1. So my question is: do I need to restore the backup once we replace the hard drive, or will the RAID sync on its own without any manual backup restoration?
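For what it's worth, with Linux software RAID (mdadm) the array rebuilds itself from the surviving disk once the replacement drive is partitioned and added back to the array, so no backup restore should be needed as long as the good drive stays healthy. A minimal sketch to watch the rebuild, assuming a Linux host:

```python
# Minimal sketch: watch an mdadm software-RAID rebuild by polling /proc/mdstat.
# Run it after the replacement disk has been partitioned and added back to the
# array (e.g. with `mdadm --add`).

import time

def rebuild_in_progress() -> bool:
    """Print /proc/mdstat and report whether a resync/recovery is running."""
    with open("/proc/mdstat") as f:
        status = f.read()
    print(status)
    return "recovery" in status or "resync" in status

if __name__ == "__main__":
    while rebuild_in_progress():
        time.sleep(30)   # poll every 30 seconds until the rebuild finishes
    print("No rebuild in progress - the array is either clean or still degraded.")
```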

Looking for servers in three separate US locations, same provider preferred

Hello,
We're looking for three unmanaged servers, each in a different datacenter within the US - preferably two coasts and central US. Outside of the US is not an option. We will expand this to 3 at each location eventually.

Minimum specs:

24 Core Xeon (2.0 GHz or faster)
128GB DDR4
200GB SSDs in RAID1
3.84TB SSD
100TB on 10Gbps port
IPMI is a must (We must be able to power off and on at will)
Clean IPs /29 on each host
While this is unmanaged, we want 24/7 US based support for hardware replacement
Budget is ~$250 USD/Mo for each host.
Use case is for deployment of a large web app. Datacenter redundancy is essential for uninterrupted access to at least one host at all times.

Note that we were using Purevoltage, which is an exceptional company and provided excellent service to us. However, at this time (soon to change), they do not have IPMI access or the ability for us to manage powering off/on the hosts via their KVM. We are doing multiple reloads of various virtualization options to find what we want to go forward with, which we discovered cannot be handled efficiently through ticketing.

Thank you for any recommendations.

How much I/O should I get from these options?

So I'm looking to buy something like the Infra-3 server from OVH (https://www.ovhcloud.com/es/bare-metal/infra/infra-3/). They offer a 4 x 960GB SSD SATA hardware RAID option, and also a 3 x 3.84TB SSD NVMe soft RAID option. Although I want more space, I think they sell the 4 x 960 so people can gain more speed? How much more speed could I gain from that configuration?

They also sell a 3 x 6TB HDD SATA hardware RAID option. I want to know how much I/O speed I should get in each case.

Currently I have an Infra-2 (https://www.ovhcloud.com/es/bare-metal/infra/infra-2/) with 2 x 960GB SSD NVMe soft RAID + 2 x 6TB HDD SATA soft RAID. How can I test the I/O speed of those drives?
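fio (or even dd) is the usual tool for measuring this, but as a rough sanity check here is a minimal Python sketch that times a large sequential write and read. The file path and size are assumptions, and the read figure may be inflated by the page cache if the test file fits in RAM:

```python
# Rough sanity-check sketch: time a large sequential write and read.
# Point TEST_FILE at a path on the drive you want to measure.

import os
import time

TEST_FILE = "/tmp/io_test.bin"   # assumed path; change to the target mount point
BLOCK = 1024 * 1024              # 1 MiB per write
TOTAL_MB = 2048                  # 2 GiB test file; ideally larger than free RAM

buf = os.urandom(BLOCK)

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())         # force the data onto the disk, not just the cache
write_mbps = TOTAL_MB / (time.time() - start)

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
read_mbps = TOTAL_MB / (time.time() - start)

os.remove(TEST_FILE)
print(f"Sequential write ~{write_mbps:.0f} MB/s, sequential read ~{read_mbps:.0f} MB/s")
```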

And if I start a new server, what kind of partitioning do you recommend to improve I/O?

Welp, Guess I need a Managed Dedicated Server Provider...who do you recommend?

This current admin company is driving me insane! They are literally making my server worse...

I have lost so much money in production time from not being able to develop websites!

This server is for basic business sites.

So I am moving this server from dedicated self-managed (with 3rd party admin) and now need a fully managed server.

This is just to host multiple websites, so we very rarely have issues on the server.

I need cpanel/whm

I am used to having multiple processors with multiple cores, but they are quite old, so maybe newer, more powerful processors can beat them?

It's an old server but has been a workhorse for development with no real issues.

I'd like the equivalent or better. Ideally under $200/month ($150 would be great!)

-- is this even possible?

Who do you recommend? Any Black Friday/Cyber Monday specials?



--------current server specs---
Intel 2x L5630
Dedicated Server
Operating System: cPanel/WHM (CentOS 7 x64)
Bandwidth: 20TB on 1Gbps Port
Service Title: Intel 2x L5630
Service Options: Service Plan: Intel 2x Xeon L5630 Westmere 4-Core Dell Node
Operating System: Linux- CentOS 64-bit with cPanel/WHM 64-bit
Hard Drive 1: 500GB HDD
Hard Drive 2: None (+$0.00)
Hard Drive 3: None (+$0.00)
Hard Drive 4: None (+$0.00)
Hard Drive 5: None (+$0.00)
Hard Drive 6: None (+$0.00)
Raid Card- LSI 9260-8i 6G w/ 512MB Cache: Included / No Raid (+$0.00)
Power Supply: Dual Power (+$0.00)
RAM: 24GB (+$0.00)
Bandwidth: 20.0TB on 1Gbps Port (+$0.00)
IPv4 Addresses: 5 Usable (/29) (+$0.00)

[London, England] Need a dedicated server (or two)

Looking for 32 cores / 64 threads (or more), 256GB ECC RAM, two 2TB NVMe SSDs. I'll handle all server administration. Access via KVM over IP (or similar) would be great. Not looking for the cheapest. I'd rather pay for better support as there might be times where I need a little helping hand. I'll also need a block of IPv4 and IPv6 addresses and this will be hosting virtual machines.

Looking for some suggestions regarding CPU selection

I am planning to upgrade my current, very old server (dual E5-2430 Xeons, 12 cores/24 threads, with SATA SSDs), which has been doing duty as a shared hosting server for around 400 websites. I have noticed slow website loading times. 95% of the sites use PHP/MySQL (WordPress).

Between the time I got the server and now, so many new CPUs have come out, and it looks like the Xeon E3/E5 lines have been discontinued. Dual Scalable Silver/Gold Xeon servers are out of my budget, so the only alternative is an E-2288G or a W-1290P (I'm more interested in this one) Xeon with NVMe drives. Passmark scores for these CPUs are much higher than my current one's, but moving to these new servers will mean a downgrade in the number of cores.
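One way to sanity-check the core-count trade-off is to line up per-core and aggregate scores side by side. The sketch below uses placeholder numbers only; substitute the actual Passmark figures from cpubenchmark.net before drawing conclusions:

```python
# Comparison sketch: single-thread score matters most for individual PHP/WordPress
# page loads, the multi-thread score for total capacity.
# All scores below are placeholders - replace them with real Passmark figures.

cpus = {
    # name: (cores, multi_thread_score, single_thread_score)
    "2x Xeon E5-2430": (12, 10000, 1300),
    "Xeon E-2288G":    (8,  16000, 2700),
    "Xeon W-1290P":    (10, 19000, 2900),
}

for name, (cores, multi, single) in cpus.items():
    print(f"{name:16s} cores={cores:2d}  multi={multi:6d}  "
          f"single-thread={single:5d}  per-core={multi / cores:6.0f}")
```

If the per-core and single-thread numbers of the new CPU are well ahead of the old one, fewer cores is not necessarily a downgrade for this workload.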

I'm looking for some personal experiences with the E-2288G/W-1290P CPUs, so that I am not actually getting a downgrade by changing servers.

I don't need more than 64GB of RAM and a 2x2TB NVMe SSD is fine.

Hetzner review

I've used Hetzner on and off for the past 10 years or so. I'm currently part of a few different projects that are using Hetzner, with in total around 30 active servers at the moment, however in total across the years, I must have had around 50-60 servers with Hetzner.

Sales: 6/10
I've had limited contact with the sales team; simple questions are answered quickly, and there is live chat for these types of questions as well. Sales responses take anywhere from a couple of hours to a full day. The only real issue I've encountered with sales or billing is the inability to remove the "flexi pack" charge. This was a surcharge for customising the server's hardware that was later abolished, yet clients with existing servers were still being charged the fee. When I requested this charge be removed from some of our servers, sales said it would only be removed after we had held the servers for 2 years. So the solution? Cancel the older servers and order new ones. This was a hassle to overcome some pointless inflexibility, although we ended up with newer/better hardware at a cheaper price.

Setup: 8/10
Setup times are usually within an hour or so. Customised servers typically take a couple of days. I've encountered one mistake, where incorrect hardware was installed because they misread our setup notes; this was corrected within an hour of being notified.

Support: 7/10
Responses typically take 10-30 minutes; I believe the longest I've had to wait has been an hour. Support responses have been a little curt at times, but this could be due to language differences, and these responses are in the minority. Failed hard drive replacements take around 30 minutes after being notified. A major downside is the lack of any client-side ticketing system, with everything having to be dealt with via email.

Network: 8/10
All servers have a 1Gbps dedicated port by default. Sometimes there can be issues maxing the port, but it's rare to see anything below 800Mbps when I've tested, although this hasn't been extensive testing. Stability has been good; there was a hiccup earlier in the year which resulted in some packet loss, although this didn't affect us very much.

Control Panel: 5/10
As I said, the lack of a proper ticketing system is a major issue which I'm surprised hasn't been rectified yet. Other than that, their panel is pretty feature-rich, with extras like alerts/notifications, decent bandwidth graphs, power management, etc.

Pricing/Value: 11/10
Hetzner's value for money is completely insane. If budget is your primary concern and you need an unmanaged provider in Europe, I don't think you can do any better.