This post was submitted on 13 Sep 2024
154 points (98.7% liked)


Hard drives from the last 20 years are now slowly dying.

all 33 comments
[–] unexposedhazard@discuss.tchncs.de 42 points 1 month ago (2 children)

Isn't tape the standard preservation medium? The longevity of different storage solutions was tested ages ago to avoid exactly this kind of loss.

[–] BrikoX@lemmy.zip 32 points 1 month ago* (last edited 1 month ago) (1 children)

It is. Magnetic tape is still king.

[–] JoMiran@lemmy.ml 11 points 1 month ago (2 children)

I have a crate of old hard drives going back to the late nineties. Am I the only person who migrates the data to new drives regularly? At this point it is a yearly tradition for me to pick up larger drives during the Black Friday / Cyber Monday sales. Why rely on old 4 TB drives when you can move everything to fresh 14 TB drives?

[–] BrikoX@lemmy.zip 4 points 1 month ago (1 children)

A NAS is another option instead of relying on a random assortment of drives.

But it's most cost-effective to use cold storage like Backblaze if you don't need to access that data and just want to archive it.

[–] JoMiran@lemmy.ml 7 points 1 month ago (1 children)

What I meant by drives is a NAS. I buy the drives on sale, spin up a new array, migrate the data, and redirect the mount point.
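
A minimal sketch of that workflow, assuming an rsync-based copy; the mount points and the fstab step are hypothetical placeholders, not anything from the comment:

```python
#!/usr/bin/env python3
"""Sketch of a yearly array migration: copy everything from the old
array to the new one, then redirect the mount point."""

import subprocess
import sys

OLD_MOUNT = "/mnt/array-old"  # hypothetical: the aging 4 TB array
NEW_MOUNT = "/mnt/array-new"  # hypothetical: the fresh 14 TB array

# rsync -aHAX preserves permissions, hard links, ACLs, and xattrs;
# the trailing slash copies the directory's contents, not the directory.
result = subprocess.run(
    ["rsync", "-aHAX", "--info=progress2", f"{OLD_MOUNT}/", f"{NEW_MOUNT}/"]
)
if result.returncode != 0:
    sys.exit(f"rsync failed with exit code {result.returncode}")

# After a clean copy, update /etc/fstab (or the NAS config) so the old
# mount point resolves to the new array, then remount.
print("Copy complete; update fstab and remount to redirect the mount point.")
```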

I used to cold store until I realized that unless I have access to the data, it might as well not exist. Now I keep everything live, even backups going back to 1997.

The only data I have "lost" is copies of my old warez CDs from Eastern Europe, because I have no idea where I stashed them, and a pack of Zip disks, because I have no functioning Zip drive.

[–] BrikoX@lemmy.zip 3 points 1 month ago

Phew, I was imagining a closet of drives. NAS is great.

Cold storage is always controversial since you are storing your data on someone else's hardware, but it is by far the most cost-effective option. In some places, a single month's electricity cost can match years of cold storage fees.

Using both is of course recommended, as cold storage acts as another backup vector in case your own storage ever suffers a catastrophic failure due to fire or flooding. The 3-2-1 rule and all. But cost is always a factor in whether people follow best practices.
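
As a back-of-the-envelope illustration of that cost claim, here is a small sketch; every number in it is an assumption chosen for illustration, not a quote from any provider:

```python
# Rough comparison of running a NAS 24/7 vs. paying for cold storage.
# All figures below are assumptions for illustration only.

NAS_WATTS = 60            # assumed average draw of a small NAS
PRICE_PER_KWH = 0.40      # assumed electricity price (EUR/kWh, high-cost region)
COLD_PER_TB_MONTH = 1.00  # assumed archive-tier price (EUR per TB per month)
ARCHIVE_TB = 1            # assumed archive size

nas_kwh_month = NAS_WATTS / 1000 * 24 * 30          # ~43.2 kWh
nas_cost_month = nas_kwh_month * PRICE_PER_KWH      # ~17.28 EUR
cold_cost_month = COLD_PER_TB_MONTH * ARCHIVE_TB    # ~1.00 EUR

print(f"NAS electricity: {nas_cost_month:.2f} EUR/month")
print(f"Cold storage:    {cold_cost_month:.2f} EUR/month")
print(f"One month of NAS power buys ~{nas_cost_month / cold_cost_month:.0f} "
      f"months of cold storage")
```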

[–] abcd@feddit.org 3 points 1 month ago

I'm the opposite: I migrated two 4 TB drives from my first NAS into the current one. The drives are going strong and nearing ten years (!) of run time. Two of the eight drives in this server have died since 2017; both were newer ones. I'm not going to replace a single disk before it dies. Best value for money, in my opinion.

But I can afford this "risk": my server has two-disk redundancy, has a local USB backup, and is mirrored to two remote servers in different locations, each with local backups as well.

[–] Akareth@lemmy.world 6 points 1 month ago (1 children)

One reason why I love btrfs is the ability to add (and remove) arbitrarily sized drives to the disk array while maintaining multiple redundant copies of my files.
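
For reference, growing such a pool looks roughly like this; a minimal sketch assuming a btrfs RAID1 array, with the device and mount paths as hypothetical placeholders (run as root):

```python
#!/usr/bin/env python3
"""Sketch of adding an arbitrarily sized drive to a btrfs RAID1 pool
and rebalancing so every file keeps two redundant copies."""

import subprocess

POOL = "/mnt/pool"       # hypothetical btrfs mount point
NEW_DEVICE = "/dev/sdX"  # hypothetical new drive; any size works

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Attach the new device to the mounted filesystem.
run(["btrfs", "device", "add", NEW_DEVICE, POOL])

# Rebalance so data and metadata spread across all devices while
# keeping the RAID1 profile (two copies of everything).
run(["btrfs", "balance", "start", "-dconvert=raid1", "-mconvert=raid1", POOL])

# Removal is the mirror image: `btrfs device remove /dev/sdX /mnt/pool`
# migrates that drive's data to the remaining devices before detaching it.
```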

[–] MentallyExhausted@reddthat.com 2 points 1 month ago

unRAID can do this with both XFS and btrfs.

[–] Know_not_Scotty_does@lemmy.world 6 points 1 month ago (1 children)

Drive failure in the '00s was really common. I lost two or three separate drives from different manufacturers over the course of a couple of years. Newer drives are better, but even with modern NAS setups I planned on losing at least one drive per year on a four-drive NAS, even fresh out of the box.

Always keep data you care about in at least three places and on at least two different mediums, with one preferably offsite. I like to have one drive in use, one backup that syncs daily, and one that I keep unplugged in cold storage. Then I swap the sync drive and the cold-storage drive every so often.
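
A small sketch of checking that the daily-sync copy still matches the live drive, which is the part of this scheme that catches a slowly dying disk; the paths are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Sketch: verify a backup drive against the live copy by comparing
SHA-256 hashes, flagging missing or silently corrupted files."""

import hashlib
from pathlib import Path

LIVE = Path("/mnt/live")      # hypothetical in-use drive
BACKUP = Path("/mnt/backup")  # hypothetical daily-sync drive

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

problems = 0
for src in LIVE.rglob("*"):
    if not src.is_file():
        continue
    dst = BACKUP / src.relative_to(LIVE)
    if not dst.is_file():
        print(f"missing in backup: {dst}")
        problems += 1
    elif sha256(src) != sha256(dst):
        print(f"hash mismatch: {src}")
        problems += 1

print(f"{problems} problem(s) found")
```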

[–] Rhaedas@fedia.io 4 points 1 month ago

So far, over a span of maybe 40 years of computing, I've only lost two HDDs. I lost a number of 5.25-inch floppies back then, but that was typical. With both drives I was able to pull most of the data off onto a new drive, so yay for mechanical drives; with an SSD you're left with either a miracle or a hunt for experts to retrieve something. I'm no power user, so perhaps that's part of the reason, but ever since we got into the giga and tera range of storage, my first thought is always... wow, that's a lot to lose at one time.

[–] possiblylinux127@lemmy.zip 5 points 1 month ago

30-year-old hard drives failing?

Color me shocked