Best hard disks for Microserver


HantsRat

Original Poster:

Sunday 11th June 2017
Can anyone recommend which hard disks to get for the HP Microserver? I don't really understand the colour code scheme on some of the SATA drives.

I'll probably fill all 4 drive bays so maybe 4x 1TB drives?

Murph7355

Sunday 11th June 2017
I've been using Western Digital Reds for the last few years.

Prior to that I had Seagate Barracudas, but they started to fail too often. The WDs have a good reputation for longevity and are proving OK so far.

TonyRPH

Sunday 11th June 2017
Another recommendation for WD Reds here.

HantsRat

Original Poster:

Sunday 11th June 2017
Are Reds fine for a RAID array and in an always-on environment?

mikef

Sunday 11th June 2017
Yes, they're designed for always-on operation.

I run two NASes with Reds, and can recommend them for noise, heat and reliability. I'd think twice about disk size though: 4x 1TB disks in RAID 10 gives less than 2 TB of usable space.
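For reference, the usable capacity of the common RAID levels can be sketched like this (a rough Python calculation, ignoring filesystem and metadata overhead):

```python
def usable_tb(n_disks, disk_tb, level):
    """Rough usable capacity for common RAID levels (no filesystem overhead)."""
    if level == "raid0":
        return n_disks * disk_tb          # striping only, no redundancy
    if level == "raid1":
        return disk_tb                    # full mirror of one disk
    if level == "raid10":
        return n_disks * disk_tb / 2      # striped mirrors, even disk count
    if level == "raid5":
        return (n_disks - 1) * disk_tb    # one disk's worth of parity
    if level == "raid6":
        return (n_disks - 2) * disk_tb    # two disks' worth of parity
    raise ValueError(level)

for level in ("raid10", "raid5", "raid6"):
    print(level, usable_tb(4, 1, level))  # 2.0, 3 and 2 TB respectively
```

So with four 1TB drives in the Microserver, RAID 5 keeps the most space while still surviving a single disk failure.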

HantsRat

Original Poster:

Sunday 11th June 2017
Is it worth getting SSDs?

The OS will be on an SD card, so all 4 drive bays are pure storage.

I might just get 2x 2TB drives in RAID 1 instead.

RobDickinson

Sunday 11th June 2017
What are you going to be storing, and how are you going to be accessing it?

HantsRat

Original Poster:

Sunday 11th June 2017
RobDickinson said:
What are you going to be storing, and how are you going to be accessing it?
It will be storing VM files for various machines, accessed from ESXi.

caelite

Sunday 11th June 2017
I've heard fantastic things about WD Reds.

Personally I use HGST Ultrastar 7K3000s; I've had 4 of them in my server for the last 5 years and they've caused me no grief. They also consistently come out on top in Backblaze's less-than-scientific yearly drive stats.

b0rk

Monday 12th June 2017
SSDs won't deliver a meaningful performance benefit in that application.

The non-hot-plug disks HPE sells for Midline array applications are rebranded HGST Deskstar NAS drives.

PotatoSalad

Monday 12th June 2017
Murph7355 said:
I've been using Western Digital Reds for the last few years.

Prior to that I had Seagate Barracudas, but they started to fail too often. The WDs have a good reputation for longevity and are proving OK so far.
Same here. I have a few QNAP NAS boxes at work and they all came with 3TB Barracudas, bought within 20 months of each other, so I can't blame a faulty batch. I had at least one fail each month until I replaced them all with WD Reds; absolutely no issues across over 30 drives. Highly recommended.

Slushbox

Monday 12th June 2017
The WD Reds have a few extra features over WD Greens: TLER support (useful in RAID arrays), a listed MTBF of 1,000,000 hours, plus a 3-year warranty. They also have a longer head-parking delay and 3D Active Balance (vibration damping).
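As an aside, a 1,000,000-hour MTBF sounds immortal, but it translates to a bit under 1% annualised failure rate per drive. A quick sketch using the usual exponential failure model (my own back-of-envelope maths, not the manufacturer's):

```python
import math

mtbf_hours = 1_000_000      # the quoted MTBF figure
hours_per_year = 8766       # average year, including leap years

# Exponential failure model: annualised failure rate (AFR)
afr = -math.expm1(-hours_per_year / mtbf_hours)
print(f"AFR ~ {afr:.2%}")   # about 0.87% per drive-year
```

So in a 4-drive array you'd expect very roughly one failure every 30-ish drive-years, which matches the "replace on warning, keep backups" advice further down the thread.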

However, we have Greens in our NAS boxes with no issues (yet). We also have a 500GB Seagate Barracuda that has been running in a Dell Precision workstation since 2010. It has no SMART errors or other problems, so it's become a curiosity, like those old light bulbs that never burn out.

The question of RAID 1 versus no RAID is harder to answer. RAID 1 doubles your storage cost (halves the total capacity), but most NAS boxes will re-sync a RAID 1 pair if a single drive is replaced.

So, Reds are good. SSDs tend not to show much improvement, as the transfer speeds from many NAS boxes are fairly low, even over wired Gigabit Ethernet (10 MB/s on some of them).
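To put that in context, wired Gigabit Ethernet caps out well below what even a single spinning disk can stream, so the network is often the ceiling before the drives are. A rough calculation (the 90% payload efficiency is an assumption, not a measured figure):

```python
# Theoretical ceiling for wired Gigabit Ethernet, before protocol overhead.
line_rate_mbps = 1000                 # 1 Gbit/s
raw_mb_per_s = line_rate_mbps / 8     # 125 MB/s of raw line rate
overhead = 0.90                       # rough payload efficiency (assumption)

print(f"~{raw_mb_per_s * overhead:.0f} MB/s practical ceiling")
```

A modern 3.5" drive can sustain sequential reads in the same ballpark, so an SSD's extra throughput is largely wasted over a single gigabit link.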

If it's a critical NAS, then adding an Uninterruptible Power Supply (UPS) feed is a cheap way to keep it happy.



Murph7355

Monday 12th June 2017
mikef said:
Yes, they're designed for always-on operation.

I run two NASes with Reds, and can recommend them for noise, heat and reliability. I'd think twice about disk size though: 4x 1TB disks in RAID 10 gives less than 2 TB of usable space.
A good reason not to use RAID 10. Life was much simpler when it only went up to 5.

In all seriousness, with a 3+ disk array I still see little to no reason to use anything other than RAID 5 (and I do myself). RAID 1 has merits occasionally, but 5 gives a good balance of performance, reliability and capacity IMO.
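For anyone unfamiliar with why RAID 5 only costs one disk of capacity: the parity block is a simple XOR across the stripe, so any single lost disk can be rebuilt from the survivors. A toy illustration in Python:

```python
from functools import reduce

# Three data blocks striped across three disks, XOR parity on a fourth.
data = [b"\x10\x20", b"\x0f\xf0", b"\xaa\x55"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data))

# Simulate losing disk 1: XOR the survivors with the parity to rebuild it.
survivors = [data[0], data[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data[1]  # the lost block is recovered exactly
```

The same XOR trick works for any number of data disks, which is why an n-disk RAID 5 array gives n-1 disks of usable space.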

Disks don't fail anywhere near as often as they used to, and there are plenty of warning signs that a decent array will report back on. I just make sure I immediately replace any disk that starts throwing warnings (and quite possibly have some reasonably serviceable disks sitting in a drawer waiting to be destroyed as a result).

This does raise one more challenge though: if you store a lot of data on a NAS, you need to think about what you will do if the unit itself (rather than the disks) fails, and how you will back it up (RAID is NOT a replacement for a good backup/recovery solution).

Once you start admitting you need RAID, you need to get a second unit of equivalent size and back the whole array up to it, IMO. Preferably in a different location, and preferably with "fire breaks" applied to try to prevent issues with malware etc. (Call it manual RAID 10.)

HantsRat said:
Is it worth getting SSDs?
...
Not in my opinion. I'm not convinced they're great for the hammering a NAS can give to disks. It sounds counter-intuitive with SSDs having no moving parts etc., but NAS units can and do run warm very easily, and heat is not the friend of SSDs. They also didn't used to have great longevity under excessive read or write cycles (can't recall which). I'll admit that my recent experience of SSDs is relatively limited though.

They're also expensive byte for byte.
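For what it's worth, flash wear comes from program/erase (write) cycles rather than reads, and drives advertise this as a TBW (terabytes written) rating. A back-of-envelope endurance check, using entirely hypothetical numbers for both the rating and the workload:

```python
# Back-of-envelope SSD endurance estimate. Both figures below are
# made-up examples, not any particular drive's specification.
tbw_rating = 600          # TB of writes the warranty covers (hypothetical)
daily_writes_gb = 50      # assumed NAS write load per day (hypothetical)

years = (tbw_rating * 1000) / daily_writes_gb / 365
print(f"~{years:.0f} years to exhaust the write rating")  # ~33 years
```

Under a typical home-NAS write load, write endurance usually isn't the limiting factor; price per byte is the bigger objection, as noted above.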


HantsRat

Original Poster:

Monday 12th June 2017
I'm pretty sure the HP Microserver doesn't support RAID 5.

SwissJonese

Monday 12th June 2017
TonyRPH said:
Another recommendation for WD Reds here.
^^^ Yep, another recommendation; I put these in my home server and they've been running 24/7 for the last year with no problems.

TonyRPH

Monday 12th June 2017
HantsRat said:
I'm pretty sure the HP Microserver doesn't support RAID 5.
It probably only supports mirroring or striping, most basic systems do.

For RAID 5 and beyond, you'd need a proper third party RAID controller.

Also, if you plan to use it with Linux, the built-in RAID likely won't work either, as it's usually 'host RAID', which means the O/S and driver do all the work.

Linux will simply see it as JBOD.


caelite

Monday 12th June 2017
I use RAID 5. IMO it is great with drives up to 2TB; with larger, more modern drives I would go for RAID 6. Less usable space, but it minimises the chance of a second drive failure whilst rebuilding the array, which is becoming a big problem for RAID 5 in industry.
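The rebuild worry can be put in rough numbers. Assuming the 1-in-10^14-bits unrecoverable read error (URE) rate commonly quoted for consumer drives, the chance of hitting at least one URE while reading every surviving disk in full during a rebuild grows quickly with disk size (a simplified model; real drives and real workloads vary):

```python
import math

def rebuild_ure_risk(n_disks, disk_tb, ure_per_bit=1e-14):
    """P(at least one URE) while reading the (n-1) surviving disks in full."""
    bits_read = (n_disks - 1) * disk_tb * 1e12 * 8
    # 1 - (1 - p)^bits, computed stably for tiny p and huge exponents
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

print(f"4x 2TB RAID 5 rebuild: {rebuild_ure_risk(4, 2):.0%} URE risk")  # ~38%
print(f"4x 8TB RAID 5 rebuild: {rebuild_ure_risk(4, 8):.0%} URE risk")  # ~85%
```

This is exactly why RAID 6's second parity disk matters for big drives: a single URE during a degraded RAID 5 rebuild can sink the whole array, whereas RAID 6 can still recover.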

JulianHJ

Monday 12th June 2017
HantsRat said:
I'm pretty sure the HP Microserver doesn't support RAID 5.
Unraid might be worth a look. I run it on my Microserver and I'm very happy with it.

TonyRPH

Monday 12th June 2017
If you want proper RAID, just buy an HP P410 RAID controller off eBay.

You can pick them up for around £25; you just need one with a low-profile bracket IIRC.

Also, IIRC, the Microserver backplane SATA connectors will plug straight into the P410.

The P410 also has good O/S support, including VMware ESXi.


smirkoff2000

Monday 12th June 2017
Not sure which OS you're looking at, but I currently use Rockstor on an HP Microserver (N40L) with WD Reds, albeit as RAID 1.

Rockstor uses the BTRFS file system which, from what I understand, has serious issues of potential data loss with RAID 5/6 at the moment (there is meant to be an upcoming fix in kernel 4.12). You are supposed to be able to convert to a different RAID type on BTRFS though, which is my plan if the fix actually happens.