Best hard disks for Microserver


essayer

9,067 posts

194 months

Monday 12th June 2017
IMO, RAID 5 is a bad idea nowadays - the probability of a second drive failing while rebuilding after a failure is too high, especially if the drives all came from the same batch.

RAID 6, 10 or ZFS for me. Or depending on the use case, larger drives and raid 1.

And of course: "RAID is not backup"
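
As a rough illustration of the difference between those levels, a minimal Python sketch of usable capacity and fault tolerance for n identical drives (idealised figures, ignoring filesystem overhead):

```python
# Rough comparison of the RAID levels mentioned above for n identical drives.
# Figures are idealised: decimal TB as printed on the box, no overheads.

def raid_summary(n_drives, drive_tb):
    """Usable capacity (TB) and how many drive failures each level survives."""
    raw = n_drives * drive_tb
    return {
        "RAID 0":  (raw,                        0),
        "RAID 1":  (drive_tb,                   n_drives - 1),  # all drives mirror one
        "RAID 5":  ((n_drives - 1) * drive_tb,  1),             # one drive's worth of parity
        "RAID 6":  ((n_drives - 2) * drive_tb,  2),             # two drives' worth of parity
        "RAID 10": (raw / 2,                    1),             # guaranteed worst case: one per mirrored pair
    }

for level, (usable, survives) in raid_summary(4, 4.0).items():
    print(f"{level:8s} usable {usable:>4.0f} TB, survives {survives} failure(s)")
```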


HantsRat

Original Poster:

2,369 posts

108 months

Monday 12th June 2017
Think I'll just get 2 x 2TB in RAID 1. That should do the trick.

Angrybiker

557 posts

90 months

Monday 12th June 2017
I have a home-built NAS (credit to my brother) in a compact little Lian-Li box, running Linux and BTRFS. 5 x WD 4TB drives on RAID 5.

BTRFS somehow seems to give 18TB of usable space rather than the 16 you might expect. Not sure why this is, but yes, I am sure it's set up properly.

A lot cheaper than a ready-made NAS with planned obsolescence. (I have two legacy 5-bay QNAP NASes, each filled with 1TB drives on RAID 5. I like the active failure monitoring in the QNAP interface for peace of mind, but I hate that they cost over 500 notes each and don't support larger-capacity drives, and I'm sorry but I'm not giving QNAP another chunk of change every time I want more capacity. I can slap five 12TB drives into my current box, no problem.)

(I also hate the way you have to run a QNAP 'find my device' tool to actually locate the damn things on your network, whereas with mine I just plug it in, map the drive, and job done.)
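
On the 18 vs 16 question above, one guess (speculation on my part, not confirmed): some btrfs tools report the raw pool size in TiB, while the "16 you might expect" is the RAID 5 usable figure in decimal TB. The arithmetic as a quick Python sketch:

```python
# Speculative explanation of the "18TB" figure above: 5 x 4TB of raw space
# expressed in TiB is close to 18, whereas RAID 5 usable space is 16TB (decimal).

TB = 1000**4    # decimal terabyte, as printed on the drive box
TiB = 1024**4   # binary tebibyte, as many tools report

drives, size_tb = 5, 4
raw_bytes = drives * size_tb * TB
usable_bytes = (drives - 1) * size_tb * TB   # RAID 5 loses one drive to parity

print(f"raw:    {raw_bytes / TiB:.1f} TiB ({raw_bytes / TB:.0f} TB)")       # ~18.2 TiB
print(f"usable: {usable_bytes / TiB:.1f} TiB ({usable_bytes / TB:.0f} TB)") # ~14.6 TiB
```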

alfaben

166 posts

155 months

Monday 12th June 2017
HantsRat said:
I'm pretty sure the HP Microserver doesn't support RAID 5.
Indeed - all generations only support RAID 0 and 1 with the onboard controller. You'd need a dedicated array card to get anything better.

hyphen

26,262 posts

90 months

Monday 12th June 2017
Murph7355 said:
HantsRat said:
Is it worth getting SSDs?
...
Not in my opinion. I'm not convinced they're great for the hammering a NAS can give to disks. Sounds counterintuitive with SSDs having no moving parts etc., but NAS units can/do run warm very easily and that is not the friend of SSDs. They also didn't use to have great longevity with either excessive read or write cycles (can't recall which - think it was read). I'll admit that my recent experience of SSDs is relatively limited though.

They're also expensive byte for byte.
SSDs will be quieter, so if the server is not going to be hidden away then it may be worth it for the noise.

Murph7355

37,714 posts

256 months

Monday 12th June 2017
essayer said:
IMO, RAID 5 is a bad idea nowadays - the probability of second drive failure while rebuilding after a failure is too high, especially if the drives all came from the same batch....
Surely the probability of drive failure is no different to how it is at any other time? MTBF rates are just that - "means".

Re the same batch, I tend to hold a spare at home bought at a very different time, so I'd be unlucky to get two from the same batch.

IME (home devices for the last 11 years, and professionally much longer) I can't recall ever having a HDD suffer a catastrophic failure. There are tell-tales that give plenty of notice. Which I guess is just asking for bother smile

essayer said:
...Or depending on the use case, larger drives and raid 1.
Within reason I'd always go for lower capacity but more of them from a resilience point of view.

essayer said:
...
And of course: "RAID is not backup"
100%

hyphen said:
SSDs will be quieter, so if the server is not going to be hidden away then it may be worth it for the noise.
True. But 6-7x the cost buys you a lot of soundproofing smile

Gary C

12,433 posts

179 months

Monday 12th June 2017
Data centre spec drives will give you more reliability (not up to date, but RE04s used to be a good drive), but Reds seem to be the sweet spot for home RAID arrays.

That said, mine is running on a cobbled-together mix of old drives from my last PC build, and it has been operating continuously for 3 years now without a hitch.

essayer

9,067 posts

194 months

Monday 12th June 2017
Murph7355 said:
Surely the probability of drive failure is no different to how it is at any other time? MTBF rates are just that - "means"
The problem is that when a drive fails and you insert the new one, the array has to rebuild; with 4TB drives you're looking at all the drives being hammered for 10hrs+. That's a stress test which might be too much for the remaining three drives.
I can definitely recommend FreeNAS with ZFS on the HP, provided you put enough memory in!
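
The back-of-envelope numbers behind that 10hrs+ figure, as a sketch (the throughput is an assumption, not a measurement):

```python
# Rebuilding means reading/writing every sector of the replacement 4TB drive,
# limited by sustained sequential throughput. Assumed figure, not a benchmark.

drive_bytes = 4 * 1000**4          # 4TB drive
sustained_mb_s = 110               # assumed average sequential rate in MB/s

rebuild_hours = drive_bytes / (sustained_mb_s * 1e6) / 3600
print(f"~{rebuild_hours:.1f} hours at {sustained_mb_s} MB/s")   # ~10.1 hours
```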

alock

4,227 posts

211 months

Monday 12th June 2017
Murph7355 said:
essayer said:
IMO, RAID 5 is a bad idea nowadays - the probability of second drive failure while rebuilding after a failure is too high, especially if the drives all came from the same batch....
Surely the probability of drive failure is no different to how it is at any other time? MTBF rates are just that - "means".
A large RAID 5 still makes sense for redundancy, i.e. you need to keep the service running during the day and then maybe restore from a backup out of hours.

RAID 5 does not make sense for protecting data in any way. Statistically a large rebuild will fail. You are lucky if it doesn't.
http://www.smbitjournal.com/2012/05/when-no-redund...
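
For anyone who doesn't want to follow the link, the statistical argument looks roughly like this - a Python sketch using the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read (illustrative figures; real-world rates are debated further down the thread):

```python
# Probability of hitting at least one URE while reading every surviving drive
# end-to-end during a RAID 5 rebuild. Spec-sheet URE rate, purely illustrative.

def p_ure_during_rebuild(surviving_drives, drive_tb, ure_per_bit=1e-14):
    bits_read = surviving_drives * drive_tb * 1000**4 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

# e.g. a 4 x 4TB RAID 5 that has lost one drive: the three remaining 4TB
# drives must be read in full to rebuild.
print(f"{p_ure_during_rebuild(3, 4.0):.0%}")   # roughly 62%
```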

If a disk fails in a RAID 5, you are far better off running an incremental backup to update your last full backup. This will apply the least load on the remaining disks. If you cannot do this then a full backup is still a better option than a re-build.

Personally I would also avoid a RAID 1 on a home NAS as well. A hot-backup disk (not directly exposed to the network) with a daily or weekly synchronization covers you for the more common home scenarios such as accidental deletions and encrypting malware.

mikef

4,872 posts

251 months

Monday 12th June 2017
HantsRat said:
Is it worth getting SSDs?
Nope (I've been there). Unless you are buying very expensive enterprise SSDs they aren't designed for 24x7x365 operation (I had two fail in a week, of a model that had worked fine as a laptop system disk), and unless you are running 40Gb/100Gb Ethernet (highly unlikely) the SSDs won't be any faster than HDDs, as the Ethernet connection will be the bottleneck. They will rebuild a RAID array fairly quickly though.
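
The back-of-envelope version of the bottleneck argument (rough illustrative figures, not benchmarks of any particular drive):

```python
# Wire speed of the network link vs. sustained throughput of a single drive.
# All figures are rough and ignore protocol overhead.

links = {"1GbE": 1e9, "10GbE": 10e9, "40GbE": 40e9}           # bits/sec
drives = {"HDD (sequential)": 180e6, "SATA SSD": 550e6}       # bytes/sec

for link, bits in links.items():
    wire = bits / 8
    for drive, speed in drives.items():
        limiter = "network" if wire < speed else "drive"
        print(f"{link:5s} + {drive:17s}: limited by the {limiter}")
```

On gigabit Ethernet both come out network-limited, which is the point; it takes a much faster link before the drive itself becomes the limit.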

mikef

4,872 posts

251 months

Monday 12th June 2017
hyphen said:
SSDs will be quieter, so if the server is not going to be hidden away then it may be worth it for the noise.
Strangely enough, I found the opposite. Under load, in a Synology DS414slim (4x 2.5" disk) NAS unit, the SSDs were running a couple of degrees warmer than the WD Reds and the fans were slightly louder.

mikef

4,872 posts

251 months

Monday 12th June 2017
essayer said:
the probability of second drive failure ... is too high, especially if the drives all came from the same batch.
Great insight - that hadn't occurred to me. As consumers building a NAS we are likely to order a handful of drives from eBuyer or Scan, and they are indeed likely to come from the same batch.

Murph7355

37,714 posts

256 months

Monday 12th June 2017
alock said:
A large RAID 5 still makes sense for redundancy, i.e. you need to keep the service running during the day and then maybe restore from a backup out of hours.

RAID 5 does not make sense for protecting data in any way. Statistically a large rebuild will fail. You are lucky if it doesn't.
http://www.smbitjournal.com/2012/05/when-no-redund...

If a disk fails in a RAID 5, you are far better off running an incremental backup to update your last full backup. This will apply the least load on the remaining disks. If you cannot do this then a full backup is still a better option than a re-build.

Personally I would also avoid a RAID 1 on a home NAS as well. A hot-backup disk (not directly exposed to the network) with a daily or weekly synchronization covers you for the more common home scenarios such as accidental deletions and encrypting malware.
Hmmmm. It's getting late and I've had a few glasses of vino but something's not right in the logic used in that article.

First of all, by the logic used I would definitely have had a terminal failure at home by now, let alone witnessed a shedload of others in a professional context over the last 25 years. I haven't. I've had cause to rebuild my home arrays four, possibly five times in the time I've been using NAS units at home and never once had issues. With the sort of luck his stats imply, I should do the lottery tomorrow smile

Equally, I don't recall ever having a catastrophic "URE" either, in day-to-day use or during a rebuild.

My home NAS units only have four drives apiece, and critical data is mirrored between them. I don't know of any places using dozens of disks in a single RAID 5 array, and I'd agree that wouldn't make much sense!

I'll have a better read later.

I agree 100% that a RAID array is no replacement for a backup regime. But I'm not really sold on the thrust of the arguments in the article smile

ging84

8,897 posts

146 months

Tuesday 13th June 2017
You won't have had a catastrophic URE because the so-called unrecoverable read error is very much recoverable.
What it is, ultimately, is a bad sector, and although in a RAID a bad sector on a single disk can be handled with zero risk of data loss, most of us get by on machines with no redundancy without ever suffering an issue. This is because we don't just write raw data to disk; we use a file system, which has built-in error handling to deal with this very common occurrence.
A URE occurring while a RAID has lost redundancy does not make the array useless. It results in a bad sector, which in most cases means a bad block that can be reliably repaired by the file system's repair tools.

Angrybiker

557 posts

90 months

Tuesday 13th June 2017
To all those saying RAID is not a backup: yeah, but what cost-effective backup solution would you realistically employ for 15TB of data?

alock

4,227 posts

211 months

Tuesday 13th June 2017
Angrybiker said:
To all those saying RAID is not a backup: yeah, but what cost-effective backup solution would you realistically employ for 15TB of data?
Why do you have 15TB of data? What level of importance does it have to you? How would you feel if you lost it all? For a home user, ending up with 15TB of data is a choice. Many of us have chosen not to go down the path of having a huge media server because keeping it backed up is hard.

Pragmatically, a single solution is rarely the correct answer. I treat my documents and family photos very differently to my music and movies. I have about 60GB of important stuff that is protected using 7 different backups across 4 different locations. This data could never be recreated if lost, e.g. pictures of my kids growing up.

I have about 3TB of less important stuff such as music and movies. I've deliberately kept this below the single-disk limit so I can justify only having a single external disk backup. All this data could be rebuilt with money, e.g. re-buying the movies. Now that 8TB disks are more affordable, I might replace my 4TB disks and allow myself more media storage.

budgie smuggler

5,384 posts

159 months

Tuesday 13th June 2017
Angrybiker said:
To all those saying RAID is not a backup: yeah, but what cost-effective backup solution would you realistically employ for 15TB of data?
Yes, it's a nuisance backing up. I use CrashPlan and have 6TB uploaded.



mikef said:
HantsRat said:
Is it worth getting SSDs?
unless you are running 40Gb/100Gb Ethernet (highly unlikely) the SSDs won't be any faster than HDDs, as the Ethernet connection will be the bottleneck. They will rebuild a RAID array fairly quickly though
The difference is that a mirrored and striped array of SSDs will saturate a gigabit Ethernet link easily under all conditions. We have an array of WD Reds at work with a 32GB ARC cache (ZFS) plus a 128GB L2ARC, and they still don't get anywhere near saturating it with a truly random workload - 20MB/s absolute tops.
Now whether you need that performance for your workload is another matter. smile
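
If anyone wants to see what their ARC is actually doing, a minimal sketch (assuming ZFS on Linux; the arcstats path and field names are the standard ones there and may differ on other platforms):

```python
# Compute the ARC hit rate since boot from the kernel's arcstats counters.

def arc_hit_rate(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:        # first two lines are headers
            parts = line.split()
            if len(parts) == 3:
                name, _kind, value = parts
                stats[name] = int(value)
    hits, misses = stats["hits"], stats["misses"]
    return hits / (hits + misses) if (hits + misses) else 0.0

print(f"ARC hit rate since boot: {arc_hit_rate():.1%}")
```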


Murph7355 said:
Surely the probability of drive failure is no different to how it is at any other time? MTBF rates are just that - "means".

Ref same batch, I tend to hold a spare at home bought at a very different time. So would be unlucky to get two in the same batch.

IME (home devices for the last 11yrs and professionally much longer) I don't think I can recall having a HDD that experienced catastrophic failure. There are tell tales that give plenty of notice. Which I guess is just asking for bother smile
It's not the risk of complete failure of the disk (although that is also an issue), it's the risk of failed reads during the rebuild.

Also, your failure record is great, but pure luck. We had only one or two failures in three years, then this year I must have replaced about 20 disks. All different brands etc. - just bad luck. We monitor SMART on every device and only once or twice did we get any clues before they fell over.
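
For the curious, the SMART polling is nothing fancy - a minimal sketch assuming smartmontools is installed and the script has enough privilege to query the drives (device names are just examples):

```python
# Poll each drive's overall SMART health via smartctl and flag anything that
# doesn't report PASSED ("PASSED" is the wording used for ATA drives).

import subprocess

def smart_healthy(device):
    out = subprocess.run(["smartctl", "-H", device],
                         capture_output=True, text=True)
    return "PASSED" in out.stdout

for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]:
    print(f"{dev}: {'OK' if smart_healthy(dev) else 'CHECK ME'}")
```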




essayer

9,067 posts

194 months

Tuesday 13th June 2017
Angrybiker said:
To all those saying RAID is not a backup: yeah, but what cost-effective backup solution would you realistically employ for 15TB of data?
It depends on:
- Budget
- Connectivity
- Consequences of failure (business impact)
- How much of that 15TB really requires backing up
- Whether versioning / incrementals can be done

You could set up another drive array/NAS in a datacentre and back up to that using something like CrashPlan.

For home use, chances are the truly irreplaceable stuff (photos etc.) will be in the GBs. Easy to sync with cloud backup / USB drive etc.
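
As an illustration of the USB-drive option, a minimal sketch that copies only new or changed files (paths are examples; a proper backup tool with versioning is still the better answer):

```python
# One-way sync: copy files that are missing from the backup or newer than the
# backed-up copy. No deletions, no versioning - just the bare idea.

import shutil
from pathlib import Path

def sync(src, dst):
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)     # copy2 preserves timestamps

sync(Path("/home/me/photos"), Path("/media/usb-backup/photos"))
```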


mikef

4,872 posts

251 months

Tuesday 13th June 2017
If it's a 15TB production database, that's one thing

If it's mainly write-once media (video, audio) that you periodically add to, then for the bulk of it you just need a copy (or better, two) at offsite locations

External SATA dock, a few 4TB disks, zip up directories with a password, burn them off, take them over to your mum's, and then just back up the essential/current stuff to the cloud. Repeat when you have another 4TB of new material.
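
A sketch of how you might plan which top-level directories go onto which 4TB disk (path and disk size are examples):

```python
# Walk the library and group top-level directories into batches that each fit
# on one backup disk, in alphabetical order.

from pathlib import Path

DISK_BYTES = 4 * 1000**4   # one 4TB backup disk

def dir_size(path):
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

def plan_batches(library):
    batches, current, used = [], [], 0
    for d in sorted(p for p in library.iterdir() if p.is_dir()):
        size = dir_size(d)
        if current and used + size > DISK_BYTES:
            batches.append(current)
            current, used = [], 0
        current.append(d.name)
        used += size
    if current:
        batches.append(current)
    return batches

for i, batch in enumerate(plan_batches(Path("/srv/media")), 1):
    print(f"disk {i}: {', '.join(batch)}")
```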

HantsRat

Original Poster:

2,369 posts

108 months

Tuesday 13th June 2017
What about memory? I want to add another 4GB stick to the server. Can anyone provide a link to one that's compatible? I've read it needs to be ECC unbuffered (non-registered)?