SSDs - State of the Product?

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
I'm not sure what you meant by "when not using it." The drive must stay in the case/enclosure to connect to the wires and USB port.
For example, I'm using a 2.5" USB-C to SATA enclosure for the 4TB drives. If I get the 7.68TB U.3 or SAS 2.5" drive, is there one for that? I'm not seeing any TLC SATA. Or maybe I should go with NVMe again and hope to find a better enclosure that doesn't degrade performance too much. https://www.amazon.com/SABRENT-Internal-Extreme-Performance-SB-RKT4P-8TB/dp/B09WZK8YMY
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,897
Location
I am omnipresent
Website
s-laker.org
All the U.2 bridges I see are for connecting a bare U.2 drive to USB; none seem to offer an enclosure, although a 2.5" form factor is somewhat better protected than an M.2 drive. MLC drives of substantial capacity are available. All sorts of options should open up once enterprise drives that can handle multiple full writes per day are on the table.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
Got it. That idea is nixed. So the options are the M.2 8TB or SATA 7.68TB, neither of which are great.
I have a Sabrent NVMe enclosure for my M.2 drives, but it is slow and inconsistent despite not overheating. I've read many negative reviews of USB-C enclosures, and some are reportedly burning up the SSDs. :(
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
I'm seeing 64°C on the external 4TB SanDisk Pro AF55 during sustained writes. The drive appears as a WD_BLACK SN850XE 4000GB in the disk info. That is strange, since the SN850XE does not exist as a model. I'm not sure how reliable that info is, because the S/N does not match the product label.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
Meanwhile, in my main computer the boot SSD is misrepresented by Windows. What could possibly cause that? Is Device Manager not reading the model info from the drive, but taking it from historical data?
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,897
Location
I am omnipresent
Website
s-laker.org
You might want to check WD's diagnostic software to see what's up with it. It's not impossible that Device Manager could see it as a generic drive or drive name, if the driver or firmware ID matches something else it knows about.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
CrystalDiskInfo correctly identifies the SSD as a 970 Pro, but Windows sees it as a 970 EVO Plus. Very strange. WD's software does not see the drive at all.
I know it is the Pro because the capacity is 512GB, not 500GB. But why is it wrong in Windows and in other software that just reads from Windows? I've also noticed that generic software doesn't read the NVMe SMART data correctly for either WD or Samsung drives. I assume their real goal is spyware.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,389
Location
Eglin AFB Area
I personally can't think of a single use-case (relevant to me or our clients) where something that big (and expensive!) could be of use. None of our clients have storage needs that intensive, we don't, and I personally certainly don't. I'm agonizing over finding a group of 3 WD Red Pros/IronWolves with CMR and ERC at 8TB without spending 600 dollars for the privilege.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,897
Location
I am omnipresent
Website
s-laker.org
I'm agonizing over finding a group of 3 WD Red Pros/IronWolves with CMR and ERC at 8TB without spending 600 dollars for the privilege.

I mean, don't go hastily buying WD drives. But you should be able to find HGST He8s with a 5-year warranty for under $100. Used datacenter drives can be an absolute bargain, though.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,389
Location
Eglin AFB Area
Don't those used enterprise drives have huge amounts of power-on hours? Granted, I usually look to power-on count for a sign of wear over hours, but I thought these came with so many hours that it was concerning. I'd really rather not spend the money on the drives only to have them turn up dead a month or so later, but then again, if I can get a matched trio for a RAID5 that might not matter as much.

Where are you seeing them with a 5-year warranty? The max I'm seeing is 1 or 2 years. I also don't have a SAS controller, just plain SATA, since I'm repurposing desktop hardware for the task instead of using a proper server. Are helium drives even CMR? I would think the shingles would make the software RAID kick them out, since the latency would be too high.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,897
Location
I am omnipresent
Website
s-laker.org
Don't those used enterprise drives have huge amounts of power-on hours?

Not always. There are vendors willing to warranty them for five years from the date of purchase, if that's a concern. I've noticed that a couple of resellers are offering variable-length warranties for the same model of drive, which suggests they're aware of that issue as well.

I get datacenter drives from the ops at my colo. Sometimes I get drives that have less than 200 power on hours.

I've decided that 8TB drives are the largest I'm willing to use in arrays, though. Even RAID6 tends to break down at around 36TB, so there's not much point in doing anything but mirroring or copying data elsewhere.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,567
Location
USA
Why only 8TB in RAID6? What part breaks down? Maybe it's slightly different but I'm running 6x20TB in raidz2 which is similar to RAID 6 and it works great.
 

ddrueding

Fixture
Joined
Feb 4, 2002
Messages
19,353
Location
Monterey, CA
Merc probably knows things I don't, but the concern as I understand it is the array's ability to rebuild in a reasonable amount of time after a failure, and the likelihood of another failure before that rebuild has completed.

This is why I've been using fast drives if I need a larger array, but all the important arrays I run are now at least RAID10 if not RAID15.
 

sedrosken

Florida Man
Joined
Nov 20, 2013
Messages
1,389
Location
Eglin AFB Area
I don't have anything that's so monumentally important here at home that anything more than a RAID5 is really needed -- I make regular backups to cold storage and more beyond that for the stuff that's actually important.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,567
Location
USA
I definitely get the concern with rebuild times. I'd expect a drive rebuild might take:

Drive: Ultrastar DC HC650 (20TB)
Data to rebuild: 50% capacity (10TB/drive)
HDD speed: ~280MB/sec
Rebuild time: 595 minutes (~10 hours)

I'd need to lose 2 more drives within around 10 hours (or double it to 20 hours to be conservative) before actual data loss. I find that fine for my use case, given I'll have backups if needed.
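The estimate above is straightforward to reproduce. A minimal sketch in Python using the numbers from this post (the function name is mine, for illustration):

```python
# Rebuild-time estimate: bytes to reconstruct divided by sustained rebuild rate.
# Assumed inputs from the post: 20TB drive at 50% fill, ~280 MB/s sequential.
def rebuild_minutes(capacity_tb: float, fill_fraction: float, mb_per_sec: float) -> float:
    bytes_to_read = capacity_tb * 1e12 * fill_fraction   # decimal TB -> bytes
    seconds = bytes_to_read / (mb_per_sec * 1e6)         # MB/s -> bytes/s
    return seconds / 60

print(round(rebuild_minutes(20, 0.5, 280)))  # 595 minutes (~10 hours)
```

In practice a rebuild competes with foreground I/O and rarely sustains the drive's peak rate, so this is a best-case floor rather than a prediction.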
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
SSDs will rebuild so much faster. The reasons for HDDs are mostly cost and TBW. UBER for SSDs is 2-3 orders of magnitude better. It's really too bad that the higher-capacity SSDs are in the enterprise sector and not so amenable to a desktop NAS or computer.

I have no issues with my 8x18TB hard drives in the NAS with Z2. The HDD noise is obnoxious compared to how quiet other components are.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
20,897
Location
I am omnipresent
Website
s-laker.org
Why only 8TB in RAID6? What part breaks down? Maybe it's slightly different but I'm running 6x20TB in raidz2 which is similar to RAID 6 and it works great.

Are you snapshotting in addition to running dual parity or just relying on dual parity? I'm assuming you are, but that eats into the capacity of a pool depending on how much your data changes.

Plain old RAID6 or RAID6-like arrays run into the same statistical certainty of a read error at roughly 12TB per parity drive that RAID5 does. It gets uglier as the arrays get bigger. You can add snapshotting with RAIDZ2 if you have capacity for it, but there's a sanity check in terms of snapshot storage and having spares on hand. I think things are better on the SSD side, but it's not like I have actual dozens of SSDs.
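That "statistical certainty" can be made concrete. A back-of-envelope sketch, assuming the common consumer-drive rating of 1 unrecoverable error per 1e14 bits read (an assumed spec-sheet figure, not a measurement):

```python
import math

# Expected unrecoverable read errors (UREs) when reading N decimal terabytes
# at a given UBER expressed as errors per bit read.
def expected_ures(terabytes: float, uber: float) -> float:
    bits_read = terabytes * 1e12 * 8
    return bits_read * uber

# Probability of hitting at least one URE in that read (Poisson approximation).
def p_at_least_one_ure(terabytes: float, uber: float) -> float:
    return 1 - math.exp(-expected_ures(terabytes, uber))

print(expected_ures(12, 1e-14))       # ~0.96 expected errors per full 12TB read
print(p_at_least_one_ure(12, 1e-14))  # ~0.62
```

So reading ~12TB during a rebuild at that rating expects roughly one URE, which is exactly where a degraded single-parity (or second-failure RAID6) rebuild starts losing data.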

My solution for the time being is "small" ~22ish TB RAID6 volumes of either 6 or 8TB drives, mixed between Windows (Storage Spaces) and Linux (ZFS) hosts, with SnapRAID on the Windows side to handle snapshotting. I also have a total of seven 16TB drives and five 18TB drives that I'm just using as two-disk mirror sets with a snapshot drive each. SnapRAID is a bit fiddly and needs some scripting to do what I want from it, and I'm not in love with it. Updating the snapshots once a day is fine for me right now. It just adds something handy that Windows didn't have before, and it is well behaved IMO.

Don't get me wrong: If I need a giant array for some stupid reason, I'm willing to make one temporarily. I just don't want data to live there long term.

Right now I have about 170TB of data I care about. A lot of it has been migrated to the mirrored drives, and I've been able to pull almost all my shitty SMR drives, which is good news. I haven't lost a substantial amount of data in a couple of decades, but at the same time I don't have warm fuzzies about where we are right now with high-capacity mechanical drives.
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,567
Location
USA
I only keep a single snapshot at a time on my main NAS and then zfs send each snap to my other NAS as a form of incremental-forever backup. Both run dual parity in their vdevs. The snapshots are more a function of the filesystem than anything specific to limiting drive size for UREs. I don't mind that the convenience of snapshots comes at the expense of space with COW filesystems.

I see your point: consumer drives rated at 1 error in 1.0e+14 bits get you to the ~12TB/parity figure, but these 20TB drives are rated 1 in 1.0e+15, which brings it to around 114TB. Like anything, it's about having good backups, which is how I diversify my data anyway. Running a single pool with fewer, larger drives means less noise, heat, and parts/complexity, so I prefer that over lots of pools or vdevs/volumes with smaller drives.
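Both thresholds fall out of the same formula: the capacity at which one full read expects about one URE is 1/UBER bits. A quick sketch (my arithmetic, not from either vendor's datasheet):

```python
# Capacity (decimal TB) at which one full sequential read expects ~1
# unrecoverable error, given UBER as errors per bit read.
def ure_threshold_tb(uber: float) -> float:
    bits_per_error = 1 / uber
    return bits_per_error / 8 / 1e12  # bits -> bytes -> decimal TB

print(ure_threshold_tb(1e-14))  # 12.5 TB: the ~12TB/parity figure
print(ure_threshold_tb(1e-15))  # 125 TB decimal, which is ~114 TiB
```

Note the ~114TB figure matches the binary (TiB) reading of the 1e15 rating; in decimal terabytes it comes out to 125.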

Going with 6x20 allows me to expand nicely in the future if I need to add more space with another 6x20 vdev to the pool.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
15,781
Location
USA
I updated the firmware of one 980 Pro so far, and there is also one offsite. They are small sizes. I did not see any point in upgrading my two original (v1) 2TB 970 EVO Plus SSDs, which perform similarly and don't have any issues. The later (v2) 970 EVO Plus is not so great at unbuffered writes, so I avoided it. The last ~30TB or so of NVMe drives I ordered are all WD. Given the 990 Pro issues as well, Samsung has lost the plot. Hynix has the best SSDs per the benchmarks, but like Samsung, none are 4TB in the M.2 2280 form factor.
 