this post was submitted on 30 Mar 2024
90 points (96.9% liked)

Selfhosted


I have a server running Debian with 24 TB of storage. I would ideally like to back up all of it, though much of it is torrents, so only the ones with low seeders really need to be backed up. I know about the 3-2-1 rule, but it sounds like it would be expensive. What do you do for backups? Also, if anyone uses tape drives for backups, I'm kind of curious about that, potentially for offsite backups in a safe deposit box or something.

TLDR: title.

Edit: You have mentioned borg and rsync, and while borg looks good, I want to go with rsync since it seems to be more actively maintained. I would also like to have my backups encrypted, but rsync doesn't seem to have that built in. Does anyone know what to do for encrypted backups?
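For what it's worth, rsync really has no encryption of its own, so one common pattern is to encrypt before rsync ever sees the data. A minimal sketch (the gpg-over-tarball approach, paths and passphrase are purely illustrative, not something the thread prescribes):

```shell
#!/bin/sh
set -e
# Sketch: rsync itself stays plain, so encrypt before anything leaves
# the machine. Scratch dirs and the passphrase are illustrative only.
SRC=$(mktemp -d); OUT=$(mktemp -d)
echo "private note" > "$SRC/diary.txt"

# Pack the source dir and encrypt it symmetrically with gpg.
tar -C "$SRC" -czf - . |
    gpg --batch --yes --pinentry-mode loopback \
        --passphrase "correct horse" --symmetric \
        -o "$OUT/backup.tar.gz.gpg"

# The encrypted blob is what you would then rsync offsite, e.g.:
# rsync -av "$OUT/" user@offsite:/backups/

# Round-trip check: decrypt and list the contents.
gpg --batch --pinentry-mode loopback --passphrase "correct horse" \
    --decrypt "$OUT/backup.tar.gz.gpg" | tar -tzf -
```

If you'd rather keep per-file syncing instead of one big archive, gocryptfs in reverse mode can expose an encrypted view of a plaintext directory that plain rsync can then mirror.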

[–] pe1uca@lemmy.pe1uca.dev 36 points 7 months ago (1 children)

Well, I'm just starting with serious backups myself. AFAIK you only need to back up the data which you can't replicate.

Low-seeded torrents are just hard to get again, but not impossible. Personal photos, your notes, and any other files generated by you are the ones which need backups.

[–] taladar@sh.itjust.works 23 points 7 months ago (1 children)

Ideally you want to back up everything that you didn't explicitly exclude, since otherwise there is always something you forgot.

[–] pe1uca@lemmy.pe1uca.dev 8 points 7 months ago

Well, I have my personal data in a specific folder, everything there is backed up.
General media is in another one, which isn't included.

[–] Deckweiss@lemmy.world 16 points 7 months ago* (last edited 7 months ago) (1 children)

The software borgbackup does some insane compression and deduplication.

It's most effective if you back up multiple machines into the same repo tbh (my 3 Linux computers with ~600 GB used each get deduplicated down to a single ~350 GB backup, because most of the files are the same programs and data over and over again).

But it might do a decent enough job in your case.

So one of the solutions might be getting a NAS and setting up borgbackup.

You could also get a second one and put it in your parents or best friends home for an offsite backup.

That way you don't have to buy as large of a drive capacity, and you'll only have fixed costs (+ electricity) instead of ongoing costs for rented server storage.

I guess that would be about $400 per such device, if you get a used office PC and buy new drives for it.


Tape seems to be about half the price per TB, but then you need a special reader/writer for it, which is usually connected via SAS and FUCKING EXPENSIVE (over $4000 as far as I can see).

It only outscales HDDs in price after like ~600 TB.

[–] taladar@sh.itjust.works 3 points 7 months ago (2 children)

How do you handle the cache invalidation issue with Borg when backing up multiple systems to one repo? For me if I access a Borg repository from multiple computers (and write from each) it has to rebuild the cache each time which can take a long time.

[–] Deckweiss@lemmy.world 3 points 7 months ago* (last edited 7 months ago) (1 children)

I separate them by archive name prefix and never had the issue you describe.

Edit: it seems I just never noticed it, but the docs suggest you're right. Now I am confused myself lol.

https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-backup-from-multiple-servers-into-a-single-repository

[–] zeluko@kbin.social 2 points 7 months ago

A big reason why I switched to Kopia; borg just doesn't cut it anymore...

[–] solrize@lemmy.world 10 points 7 months ago* (last edited 7 months ago) (1 children)

I've been using Borg and a Hetzner Storage Box. There are some small VPS hosts that actually beat Hetzner's pricing, but I have been happy with Hetzner so am staying there for now. With 24 TB of data you could also look at Hetzner's SX64 dedicated server. It has a 6-core Ryzen CPU and 4x 16 TB HDDs for 81 euro/month. You could set it up as RAID 10, which would give you around 29 TiB of usable storage, and then you also have a fairly beefy processor that you can use for transcoding and stuff like that. You don't want to seed from it, since Hetzner is strict about any complaints they might get.
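A quick sanity check on that usable-space figure (RAID 10 keeps half of the 4x 16 TB raw capacity, and the drives' decimal terabytes shrink when expressed in binary TiB):

```shell
# RAID 10 halves raw capacity; convert decimal TB (10^12) to TiB (2^40).
awk 'BEGIN { printf "%.1f TiB usable\n", (4 * 16e12 / 2) / 2^40 }'
# → 29.1 TiB usable
```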

Tape drives are too expensive unless you have 100s of TB of data, I think. Hard drives are too unreliable. If you leave one in a closet for a few years, there's a good chance it won't spin back up.

[–] dan@upvote.au 3 points 7 months ago* (last edited 7 months ago) (1 children)

for 81 euro/month.

You can probably find something cheaper from their auction servers.

I've got a storage VPS with HostHatch for my backups. It's one of their Black Friday deals from a few years ago - 10TB storage for $10/month. Not sure they'll offer that pricing again, but they did have something similar for around double the price during sales last year (still a good deal!)

Tape drives are too expensive unless you have 100s of TB of data, I think

The drives are expensive, and some manufacturers have expensive proprietary software, but the tapes themselves are cheaper per TB than hard drives, and they usually have a 20 or 30 year life guarantee. People seem to think tape is old technology, but modern tapes can fit 18 TB uncompressed (they claim 45 TB compressed, but idk).

The default tier of AWS glacier uses tape, which is why data retrieval takes a few hours from when you submit the request to when you can actually download the data, and costs a lot.

[–] mea_rah@lemmy.world 2 points 7 months ago

The default tier of AWS glacier uses tape, which is why data retrieval takes a few hours from when you submit the request to when you can actually download the data, and costs a lot.

AFAIK Glacier is unlikely to be tape-based. A bunch of offline drives is a more realistic scenario. But generally it's not public knowledge, unless you've found some trustworthy source for the tape theory?

[–] ramble81@lemm.ee 8 points 7 months ago* (last edited 7 months ago) (1 children)

I have my BD/DVD/CD collection backed up to S3 Glacier. It's incredibly cheap, offsite, and they worry about the infrastructure. The hard drive and infrastructure space you'd need to back up nearly that amount will cost you about the same, give or take. Yes, it'll cost a bit in the event of a catastrophic restore, but if something happens at the house, at least I have an offsite backup.

[–] dan@upvote.au 3 points 7 months ago (1 children)

How much does Glacier cost you? Last time I checked, some hosts had warm storage for around the same price, at least during Black Friday or New Year sales.

[–] TedZanzibar@feddit.uk 7 points 7 months ago

Short answer: figure out how much of that is actually irreplaceable and then find a friend or friends who'd be willing to set aside some of their storage space for your backups in exchange for you doing the same.

Tailscale makes the networking logistics incredibly simple and then you can do the actual backups however you see fit.

[–] taladar@sh.itjust.works 6 points 7 months ago (2 children)

I have just been using Borg with a Hetzner Storage Box as the target. That has the advantage of being off-site and not using up a lot of space, since it deduplicates. It also encrypts the backup. The initial backup might take a while at 24 TB though, depending on your connection.

[–] sturlabragason@lemmy.world 5 points 7 months ago (1 children)

Shit I’ve never heard of Hetzner but their pricing makes de-Googling all my decades of family photos a viable option! Thanks!

[–] blackstrat@lemmy.fwgx.uk 2 points 7 months ago

...until they change their prices. Always make sure you have a local copy and a way out.

[–] ponchow8NC@lemmynsfw.com 2 points 7 months ago (4 children)

Damn, never heard of them; looks great. Is there any catch, or is it a small company that might go out of business in a few years? I still haven't had to back up more than 4 TB, but once I do get up to those numbers they might be the best option compared to the offsite hard drives I've been doing.

[–] buedi@kbin.social 5 points 7 months ago (1 children)

As mentioned already, Hetzner is a very big hoster in Germany. I have been a customer for nearly 15 years now, and in all that time they raised the price only once for the package I use (and I think it was only recently, in 2023 or so, when it went from 4.90€ to 5.39€). Their Storage Box also seems to be not only one of the cheapest out there, but as far as I remember you don't have to pay for traffic if you want to restore your data, like you do with other hosters. They also have good service, were responsive when I opened a ticket in the past, and I can't remember ever having problems with the service I use (web hosting package).

[–] 7Sea_Sailor@lemmy.dbzer0.com 2 points 7 months ago* (last edited 7 months ago)

Can confirm that there are zero ingress or egress fees, since this is not an S3 object storage server but a simple FTP server that also has a borg & restic module, so it simply doesn't fall into the egress/ingress cost model.

[–] dan@upvote.au 4 points 7 months ago

is it like a small company that might go out of business in a few years?

Hetzner is one of the largest hosting companies in the world.

[–] taladar@sh.itjust.works 3 points 7 months ago

They are anything but small. They are probably one of the biggest German hosting companies out there.

[–] hperrin@lemmy.world 5 points 7 months ago (2 children)

I have a machine at my parents’ house that has a single 20TB drive in it. I’ll log in once in a while and initiate an rsync to bring that up to current with my RAID at home. The specific reason I do it manually is in case there’s a ransomware attack. I won’t copy bad data. That’s also the reason I start it from the backup machine. The main machine doesn’t connect, the backup machine does, so ransomware wouldn’t cross that virtual boundary.

[–] ancoraunamoka@lemmy.dbzer0.com 5 points 7 months ago (14 children)

I am a simple man, so I use rsync.

I set up a mergerfs drive pool of about 60 TiB and rsync to it weekly.

Rsync seems daunting at first, but then you realize how powerful and, most importantly, how reliable it is.

It's important that you try to restore your backups from time to time.

One of the main reasons why I avoid software such as Kopia or Borg or Restic or whatever is in fashion:

  • they go unmaintained
  • they are not simple: many of my friends struggled to restore backups because you are not dealing with files anymore, but with encrypted or compressed blobs
  • rsync has an easy mental model and extremely good defaults
[–] lemmyvore@feddit.nl 5 points 7 months ago (3 children)

As long as you understand that simply syncing files does not protect against accidental or malicious data loss the way incremental backups do.

I also hope you're not using --delete, because I've heard plenty of horror stories about the source dir becoming unmounted and rsync happily erasing everything on the target.

I used to use rsync for years, thinking just like you that having plain old files beats having them in fancy obscure formats. I'm switching to Borg nowadays btw, but that's my choice; you've got to make yours.

rsync can work incrementally; it just takes a bit more fiddling. Here's what I did. First of all, no automatic --delete. I did run it every once in a while, but only manually. The sync setup was:

  • Nightly sync source into nightly dir.
  • Weekly sync nightly dir into weekly dir.
  • Monthly tarball the weekly dir into monthly dir.

It's not bad but limited in certain ways, and of course you need lots of space for backups — or you have to pick and choose what you backup.

Borg can't really get around the space for backups requirement, but it's always incremental and between compression and deduplication can save you a ton of space.

Borg also has built-in backup checking and recovery parity which rsync doesn't, you'd have to figure out your own manual solution like par2 checksums (and those take up space too).

[–] bandwidthcrisis@lemmy.world 2 points 7 months ago (1 children)

Re needing lots of space: you can use --link-dest to make a new directory with hard links to unchanged files in a previous backup, so you end up with deduplicated incremental backups. Borg handles all that transparently; with rsync you need to carefully plan relative target directory paths to get it to work correctly.

[–] lemmyvore@feddit.nl 2 points 7 months ago

Yeah Borg will see the duplicate chunks even if you move files around.

[–] mea_rah@lemmy.world 3 points 7 months ago

FWIW, the restic repository format already has two independent implementations, restic (in Go) and rustic (in Rust), so the chances of both going unmaintained are hopefully pretty low.

[–] narc0tic_bird@lemm.ee 4 points 7 months ago

I back up my /home folder on my PC to my NAS using restic (I used to use borg, but restic is more flexible). I back up somewhat important data to an external SSD on a weekly basis and very important data to cloud storage on a nightly basis. I don't back up my *arr media at all (unless you count the automated snapshots on my NAS), as it's not really important to me and can simply be redownloaded in most cases.

So I don't and wouldn't apply the 3-2-1 rule to all data, as it's simply too expensive for the amount of data I have, and it'd take months to upload over my non-fiber internet connection. But you should definitely apply it to data that's important to you.

[–] douglasg14b@lemmy.world 3 points 7 months ago* (last edited 7 months ago) (1 children)

I might be crazy, but I have a 20TB WD Red Pro in a padded, waterproof, locking case that I take a full backup on and then drive over to a family member's 30m away once a month or so.

It's a full encrypted backup of all my important stuff in a relatively different geographic location.

All of my VM data backs up hourly to my NAS as well. Which then gets backed up onto the large drive monthly.

Monthly granularity isn't that great, to be fair, but it's better than nothing. I should probably back up the more important, rapidly changing stuff online daily.

[–] dan@upvote.au 4 points 7 months ago (1 children)

30m away

30 minutes, 30 miles, or 30 metres?

[–] douglasg14b@lemmy.world 2 points 7 months ago (1 children)

Yes.

I'm sure one can reasonably infer that I do not mean 30 meters.

Conveniently at highway speeds 30 minutes and 30 miles away are essentially equal.

I'll try and use appropriate notation next time

[–] dan@upvote.au 2 points 7 months ago

I was just joking :)

30 minutes can vary a lot depending on traffic. If there's traffic, it can take me 30-40 minutes to get home from work even though it's only 11 miles away and ~15 mins with no traffic.

[–] sepi@piefed.social 3 points 7 months ago

I put the prndl in r and just goose it

[–] capital@lemmy.world 3 points 7 months ago* (last edited 7 months ago)

My use case is basically the same as yours.

I do restic to Wasabi.

I've been on restic for a few years now and have never had an issue. I started out using Google Drive for the backend, but that was through my college, which went away eventually, so I swapped over to Wasabi, but I'm considering B2.

It's actively maintained and encrypted.

There are a handful of backends it supports, but it can be extended by writing to an rclone backend.

[–] ErwinLottemann@feddit.de 3 points 7 months ago (1 children)

To your edit: rsync is a tool to copy/move files; borg is a backup utility. There are scripts that use rsync to create proper backups, but if you want to go by 'more actively maintained' you should look into how those scripts are maintained, not rsync itself.
On the other hand, borg is actively maintained; there were even releases in the last two days, one stable and one beta. It also fulfills your 'encrypted backup' requirement and has versioned backups built in.
tl;dr: comparing borg and rsync is comparing apples and oranges
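To make the distinction concrete, a minimal Borg session showing the two things the OP asked about, encryption and versioned archives (Borg 1.x flags; a local scratch repo stands in for a real ssh:// target, and the passphrase is an example):

```shell
#!/bin/sh
set -e
# Local scratch repo instead of an ssh:// target; passphrase is an
# example. Skips cleanly if borg isn't installed.
command -v borg >/dev/null 2>&1 || { echo "borg not installed, skipping"; exit 0; }

export BORG_PASSPHRASE='example-passphrase'
REPO=$(mktemp -d)/repo
SRC=$(mktemp -d); echo "hi" > "$SRC/file.txt"

borg init --encryption=repokey "$REPO"      # encrypted repository
borg create "$REPO::snapshot-{now}" "$SRC"  # one versioned archive
borg list --short "$REPO"                   # shows the archive name
```

Run again later, `borg create` adds another deduplicated archive instead of overwriting the first, which is the versioning rsync alone doesn't give you.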

[–] cybersandwich@lemmy.world 3 points 7 months ago* (last edited 7 months ago) (4 children)

I don't have nearly that much worth backing up (5 TB, and realistically only 2 TB is probably critical), but I have a Synology NAS (12 TB RAID 1) and a TrueNAS box (ZFS striped/mirrored) that I back my stuff up to (and they back up to each other).

Then I have a Raspberry Pi with a USB drive (8 TB) at my parents' house 4 hours away that my Synology backs up to (over Tailscale).

Oh, and I have a USB HDD (8 TB) that I plug in, back my Synology NAS up to, and throw in my fireproof safe. But that's a manual backup I do once every quarter or 6 months, if I remember. That's a very, very last-resort backup.

My offsite is at my parents.

And no, I have not tested it because I don't know how I'm actually supposed to do that.

[–] lorentz@feddit.it 2 points 7 months ago (1 children)

I use rclone, which is essentially rsync for cloud services. It supports encryption out of the box.
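For reference, rclone's encryption is a "crypt" remote layered over an ordinary remote in the config file; something like this sketch (host, remote names, and paths are made up, and passwords must go through `rclone obscure` rather than being stored raw):

```ini
# Illustrative ~/.config/rclone/rclone.conf fragment
[offsite]
type = sftp
host = backup.example.com
user = backup

[offsite-crypt]
type = crypt
remote = offsite:backups
password = <output of `rclone obscure ...`>
```

Backing up to `offsite-crypt:` then encrypts file contents (and optionally names) transparently.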

[–] bandwidthcrisis@lemmy.world 2 points 7 months ago

I like the versatility of rclone.

It can copy to a cloud service directly.

I can chain an encryption process to that, so it encrypts then backs up.

I can then mount the encrypted remote files so that I can easily get to them locally (e.g. I could run diff or md5 on select files as naturally as if they were local).

And it supports the rsync --backup options, so it can move locally deleted files elsewhere on the backup instead of deleting them there. I can set up a dir structure such as Oldfiles/20240301, Oldfiles/20240308, etc. that preserves deletions.

[–] CameronDev@programming.dev 2 points 7 months ago (1 children)

It depends on the value of the data. Can you afford to replace them? Is there anything priceless on there (family photos etc)? Will the time to replace them be worth it?

If it's not super critical, RAID might be good enough, as long as you have some redundancy. Otherwise, categorize your data into critical/non-critical and back up the critical stuff first?

[–] taladar@sh.itjust.works 5 points 7 months ago (2 children)

RAID is not backup. Many failure sources, from theft to electrical issues to water or fire, can affect multiple RAID drives equally, not to mention silent data corruption or accidental deletions.

[–] tal@lemmy.today 3 points 7 months ago (1 children)

Yeah...I've never totally lost my main storage and had to recover from backups. But on a number of occasions, I have been able to recover something that was inadvertently wiped. RAID doesn't provide that.

Also, depending upon the structure of your backup system, if someone compromises your system, they may not be able to compromise your backups.

If you need continuous uptime in the event of a drive failure, RAID is an entirely reasonable thing to have. It's just...not a replacement for backups.

[–] rambos@lemm.ee 2 points 7 months ago

I use Kopia to back up all personal data (Nextcloud, Immich, configs, etc.) daily to another disk in the same server and also to Backblaze B2. It's not proper 3-2-1, but it feels good enough. I don't back up downloadable content because it's expensive.

[–] Shimitar@feddit.it 2 points 7 months ago (2 children)

Anything I can download again doesn't get backed up, but it sits on a RAID 1. I'm OK with losing it due to carelessness, but not due to a broken disk. I try to be careful when messing with it and that's enough; I can always download it again.

Anything like photos, notes, personal files and such gets backed up via restic to a disk mounted on the other side of the house. Offsite backup I've been thinking about, but haven't really gotten to it yet. Been lucky all this time.

Of 10 TB of stuff, the totality of my backed-up data amounts to 700 GB. Since 90% of it is photos, the backup size is about 700 GB too. The part of that 700 GB that actually changes (text files, documents...) is negligible. The photos never change; at most the collection grows a bit over time.

[–] sloppy_diffuser@sh.itjust.works 2 points 7 months ago

Important stuff (about 150 GB) is synced to all my machines and a Backblaze B2 bucket.

I have a rented seedbox for those low-seeder torrents.

The stuff I can download again is only on a mirrored LVM pool with an lvmcache. I don't have any redundancy for my monerod data, which is on an NVMe drive.

I'm moving towards an immutable OS with 30 days of snapshots. While not the main reason, it does push one to practicing better sync habits.
