this post was submitted on 10 Jun 2024
16 points (94.4% liked)

datahoarder

Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread

Are renewed drives worth considering, or only worth it at certain price points?

top 9 comments
[–] krolden@lemmy.ml 11 points 6 months ago

Depends on how much you value your data and how much redundancy you have. I bought a 20TB “manufacturer certified” drive from SPD the other day and it tests fine, but I’m not going to put valuable data on it. Maybe if this drive outlives my shucked Easystores I’ll buy more. But for now my main RAID array is new drives only, all thoroughly tested before installing.

[–] jet@hackertalks.com 7 points 6 months ago* (last edited 6 months ago)

What is a renewed drive? Do they have a datasheet with MTBF defined?

Spinning disks, or consumable flash?

What is the use case? RAID 5? Ceph? JBOD?

What is your human capital cost of monitoring and replacing bad disks?

Let's say you have a data lake with Ceph or something: it costs you $2-5 a month to monitor all your disks for errors, predictive failure, slow IO debugging, etc. The human cost of identifying a bad disk, pulling it, replacing it, then destroying it is something like 15-30 minutes. The cost of destroying a drive is $5-50 (depending on your vendor, onsite destruction, etc.).

The higher predicted failure rate of used drives has to be weighed against those fixed and human costs. If the drive only lasts 70% as long as a new drive, the math is fairly easy.

If the drive gets progressively slower (e.g. older SSDs), then the actual cost of the used drive becomes harder to model (you need a metric for service responsiveness, etc.).

  • If it's a hobby project and you're throwing drives into a self-healing system, then take any near-free disks you can get and just watch your power bill.

  • If you make money from this, or the downside of losing data is bad, then model the higher failure rate into your cost model (a rough sketch follows below).
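
Here's a minimal sketch of that kind of model. The monitoring, labor, and destruction figures echo the ranges above; the purchase prices, labor rate, capacities, and lifetimes are made-up placeholders, so substitute your own:

```python
# Minimal sketch of a "used vs new" drive cost model.
# Monitoring/labor/destruction figures echo the ranges mentioned above;
# purchase prices, labor rate, and lifetimes are illustrative placeholders.

def cost_per_tb_year(purchase_price, capacity_tb, life_years,
                     monitor_per_month, replace_hours, labor_rate,
                     destruction_cost):
    """Total cost of owning one drive, spread over its usable TB-years."""
    monitoring = monitor_per_month * 12 * life_years
    end_of_life = replace_hours * labor_rate + destruction_cost
    total = purchase_price + monitoring + end_of_life
    return total / (capacity_tb * life_years)

# Hypothetical 20TB drives: new at $300 for ~5 years, used at $180 for
# ~70% of that life (3.5 years), both monitored at ~$3/month.
new = cost_per_tb_year(300, 20, 5.0, 3, 0.5, 30, 20)
used = cost_per_tb_year(180, 20, 3.5, 3, 0.5, 30, 20)
print(f"new:  ${new:.2f} per TB-year")
print(f"used: ${used:.2f} per TB-year")
```

With these placeholder numbers the used drive still comes out slightly ahead per TB-year; shrink the purchase-price gap or shorten its life further and it stops winning, which is exactly what the model is for.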

[–] MangoPenguin@lemmy.blahaj.zone 3 points 6 months ago* (last edited 6 months ago)

I'm running several used ("renewed") enterprise SAS HDDs and enterprise SATA SSDs. They've been solid so far.

The HDDs came with about 30k hours each, which is not bad at all, and the SSDs only had around 100 TB written against their 6.2 PB rating.
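
For a rough sense of scale (simple arithmetic, assuming the drives were powered on 24/7):

```python
hours = 30_000
years = hours / (24 * 365)         # ~3.4 years of continuous power-on time
tb_written, tb_rated = 100, 6_200  # 100 TB written against a 6.2 PB rating
wear = tb_written / tb_rated       # ~1.6% of rated write endurance
print(f"{years:.1f} years powered on, {wear:.1%} of rated writes used")
```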

I'm not sure I would go used with standard consumer HDDs; they typically don't last as long and are likely abused a lot more in a desktop PC than in a datacenter server.

As always, have proper backups in place; all drives fail eventually, no matter where you buy them.

[–] MalReynolds 3 points 6 months ago

RAID is your friend. If you can't afford to lose one, you might have a bad time (applies to all drives anyway). Manufacturer refurbs are your best bet.

[–] Nogami@lemmy.world 3 points 6 months ago

I've been using renewed (refurbished) 8TB SAS drives off eBay for $50-60 each. Not a single failure in over a year across the dozen or so drives I'm running right now. I'm running unRAID with a combination of unRAID's native array drives (for media and "disposable" stuff) in a dual-parity config, and ZFS (with snapshots replicated to a live backup on a secondary server) for important personal stuff (also backed up off-site a few times a year).

Even if something were to perish, I have enough spares to just chuck one in and let it resilver without worrying at all. I'm content with this as a homelabber, since I'm not supplying a critical service for a business or anything like that.
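
For anyone curious what that snapshot replication can look like in practice, here's a minimal sketch; the dataset name, backup host, and snapshot labels are made up, and it assumes SSH key auth plus an initial full send already done:

```python
#!/usr/bin/env python3
"""Sketch of an incremental ZFS snapshot replication step.
All names here (tank/personal, backup-server, snapshot labels) are
placeholders -- substitute your own pool, dataset, and host."""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/personal"   # hypothetical dataset holding the important stuff
BACKUP = "backup-server"    # hypothetical secondary server
PREV = "auto-prev"          # last snapshot that already exists on the backup
NEW = "auto-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M")

# Take a new snapshot locally.
subprocess.run(["zfs", "snapshot", f"{DATASET}@{NEW}"], check=True)

# Send everything since the previous snapshot and receive it on the
# secondary server (-F rolls the target back if it has drifted).
send = subprocess.Popen(
    ["zfs", "send", "-i", f"{DATASET}@{PREV}", f"{DATASET}@{NEW}"],
    stdout=subprocess.PIPE)
subprocess.run(["ssh", BACKUP, "zfs", "recv", "-F", DATASET],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```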

[–] RegalPotoo@lemmy.world 2 points 6 months ago

I've not heard any out-and-out horror stories, but I've got no first hand experience.

I'm planning on picking up 3x manufacturer-recertified 18TB drives from SPD when money allows, but for now I'm running 6x ancient (minimum 4 years old) 3TB WD Reds in RAID 6. I keep a close eye on SMART stats and can pick up a replacement within a day if something starts to look iffy. My plan is to treat the 18TBs the same: hard drives are consumables, they wear out over time, and you have to be ready to replace them when they do.
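
If you want to automate that "close eye on SMART stats", here's a minimal sketch using smartmontools' JSON output; it assumes smartctl 7.0+ and ATA/SATA drives, and the device names are placeholders:

```python
#!/usr/bin/env python3
"""Minimal SMART-watch sketch: flags drives with reallocated/pending sectors
or a failed overall health check. Needs smartmontools 7+ (for -j JSON output)
and root; the device names below are placeholders."""
import json
import subprocess

# Attributes that most often signal a dying spinning disk.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def check(device: str) -> None:
    out = subprocess.run(["smartctl", "-a", "-j", device],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    passed = data.get("smart_status", {}).get("passed", False)
    print(f"{device}: overall health {'PASSED' if passed else 'FAILED'}")
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["name"] in WATCH and attr["raw"]["value"] > 0:
            print(f"  warning: {attr['name']} = {attr['raw']['value']}")

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):  # swap in your own drives
        check(dev)
```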

[–] Shdwdrgn@mander.xyz 1 points 6 months ago

Sounds like the term used by Amazon? I picked up eight 18TB "renewed" drives that have been in constant use for over a year now under a ZFS filesystem. Not a single error yet and the pool is about half full. At the time I bought them, they were about $100 cheaper (each) than brand new drives so that saved me quite a bit of money, but they were also a fairly new line of drives so there couldn't have been much previous use on them anyway.

[–] punkcoder@lemmy.world 0 points 6 months ago

I purchased 5 renewed drives from Amazon. Ten months in, 3 have had to be replaced because of escalating bad sectors, and all three failed outside the refurbishment guarantee… one by only a week. Save your money and go with new drives.