this post was submitted on 13 Jun 2023
78 points (98.8% liked)

Selfhosted

40018 readers
736 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago

I see many posts asking about what other lemmings are hosting, but I'm curious about your backups.

I'm using duplicity myself, but I'm considering switching to borgbackup when 2.0 is stable. I've had some problems with duplicity: mainly, the initial sync took incredibly long, and once a few directories got corrupted (they could no longer be decrypted by gpg).

I run a daily incremental backup and send the encrypted diffs to a cloud storage box. I also use SyncThing to share some files between my phone and other devices, so those get picked up by duplicity on those devices.
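For anyone unfamiliar with the incremental model that duplicity (and borg) build on, GNU tar's `--listed-incremental` option demonstrates the idea in miniature: after a full archive, each subsequent run captures only what changed since the snapshot file was last updated. This is a throwaway sketch with made-up paths, not the actual duplicity setup; it needs GNU tar.

```shell
# Throwaway demo: a full archive, then an incremental one that only
# contains files changed since the snapshot file was last updated.
rm -rf /tmp/incdemo && mkdir -p /tmp/incdemo/src /tmp/incdemo/backup
echo one > /tmp/incdemo/src/a.txt

# Full backup: records file state in state.snar
tar --create --file=/tmp/incdemo/backup/full.tar \
    --listed-incremental=/tmp/incdemo/backup/state.snar \
    -C /tmp/incdemo/src .

# Change something, then take an incremental backup
echo two > /tmp/incdemo/src/b.txt
tar --create --file=/tmp/incdemo/backup/inc1.tar \
    --listed-incremental=/tmp/incdemo/backup/state.snar \
    -C /tmp/incdemo/src .

# The incremental archive holds only the new file (plus directory metadata)
tar --list --file=/tmp/incdemo/backup/inc1.tar
```

Duplicity layers encryption and remote transport on top of the same pattern, which is also why a long chain of increments eventually needs a fresh full backup.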

top 50 comments
[–] dead@keylog.zip 16 points 1 year ago

What's my what lmao?

[–] OddFed@feddit.de 12 points 1 year ago
[–] idunnololz@lemmy.world 8 points 1 year ago

Are cyanide tablets a backup strategy?

[–] davad@lemmy.world 6 points 1 year ago (1 children)

Restic using resticprofile for scheduling and configuring it. I do frequent backups to my NAS and have a second schedule that pushes to Backblaze B2.

[–] fbartels@lemmy.one 5 points 1 year ago

Another +1 for restic. To simplify the backups, however, I am using https://autorestic.vercel.app/, which is triggered from systemd timers for automated backups.
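For reference, a systemd timer for this kind of setup might look like the following. The unit names, binary path, and schedule are illustrative, not taken from the autorestic docs:

```ini
# /etc/systemd/system/autorestic-backup.service (illustrative name)
[Unit]
Description=Run autorestic backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/autorestic backup --all

# /etc/systemd/system/autorestic-backup.timer
[Unit]
Description=Nightly autorestic backup

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now autorestic-backup.timer`; `Persistent=true` catches up on runs missed while the machine was off.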

[–] Elbullazul@lem.elbullazul.com 5 points 1 year ago* (last edited 1 year ago) (1 children)

I run a restic backup to a local backup server that syncs most of the data (except the movie collection because it's too big). I also keep compressed config/db backups on the live server.

I eventually want to add a cloud platform to the mix, but for now this setup works fine

[–] tgxn@lemmy.tgxn.net 3 points 1 year ago (2 children)

Restic is great! I run it in a container using mazzolino/restic image hooked up to Backblaze for all my important stuff!

[–] Bdking158@kbin.social 5 points 1 year ago

Can anyone ELI5 or link a decent reference? I'm pretty new to self-hosting, and now that I've finally got most of my services running the way I want, I live in constant fear of my system crashing.

[–] ipkpjersi@lemmy.one 5 points 1 year ago* (last edited 1 year ago) (1 children)

I usually write my own scripts with rsync for backups, since I already have my OS installs pretty much automated with scripts as well.

[–] linearchaos@lemmy.world 5 points 1 year ago

Irreplaceable media: NAS -> Backblaze, and NAS -> JBOD via Duplicacy for versioning.

Large ISOs that can be downloaded again: NAS -> JBOD and/or NAS -> offline disks.

Stuff that's critical leaves the house, stuff that would just cost me a hell of a lot of personal time to rebuild just gets a copy or two.

[–] thatsnothowyoudoit@lemmy.ca 4 points 1 year ago* (last edited 1 year ago)

Large/important volumes on SAN-> B2.

Desktop Macs -> Time Machine on SAN & Backblaze (for a few)

Borgbackup is great and what we used for all our servers when they were pets. It's a great tool, very easy to script and use.

[–] knaak@lemmy.world 4 points 1 year ago

I have a Raspberry Pi with an external drive and scripts that rsync each morning. Then I have S3 Glacier Deep Archive backups for off-site.
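That morning rsync plus a periodic Deep Archive upload could be expressed as a crontab on the Pi. The paths, bucket name, and schedule below are made up for illustration:

```
# m h dom mon dow  command   (illustrative crontab on the Pi)
30 5 * * *  rsync -a --delete server:/srv/data/ /mnt/external/data/
0  6 * * 0  aws s3 sync /mnt/external/data/ s3://my-backup-bucket/data/ --storage-class DEEP_ARCHIVE
```

The `--storage-class DEEP_ARCHIVE` flag on `aws s3 sync` puts the objects straight into Glacier Deep Archive, the cheapest (and slowest-to-restore) S3 tier.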

[–] 0xpr03@feddit.de 4 points 1 year ago* (last edited 1 year ago)

Daily offsite backups to a backup server via restic (plus a self-written wrapper for multiple targets). Restic can also work with almost anything else (SFTP, S3 APIs, etc.). It's kind of a modern duplicity/borg: fully encrypted and incremental.

[–] rambos@lemmy.ml 4 points 1 year ago (1 children)

Am I the only one using Kopia? :)

I'm quite new to selfhosting and backups. I went for Duplicati first and it worked fine, but I heard bad stories about it, so now I use Kopia for daily backups to another drive and also to B2. Duplicati is still doing daily backups, but only a few important folders to Google Drive.

I've heard only good stories about Kopia, yet no one here has mentioned it.

[–] manned_meatball@lemmy.ml 3 points 1 year ago

there are dozens of us, dozens!

[–] Showroom7561@lemmy.ca 4 points 1 year ago (4 children)

All devices backup to my NAS either in realtime or at short intervals throughout the day. I use recycling bins for easy restores for accidentally deleted files.

My NAS is set up on a RAID for drive redundancy (Synology RAID) and does regular backups to the cloud for active files.

Once a day I do a Hyper Backup to an external HDD.

Once a month I backup to an external drive that lives offsite.

Backups to these external HDDs have versioning, so I can restore files from multiple months ago, if needed.

The biggest challenge is that as my NAS grows, it costs significantly more to expand my backup space. Cloud storage and new external drives aren't cheap. If I had an easy way to keep a separate NAS offsite, that would considerably reduce ongoing costs.

[–] KitchenNo2246@lemmy.world 4 points 1 year ago

I use borgbackup + zabbix for monitoring.

At home, I have all my files get backed up to rsync.net since the price is lower for borg repos.

At work, I have a dedicated backup server running borgbackup that pulls backups from my servers and stores them locally as well as uploading them to rsync.net. The local backup means restoring is faster, unless of course that dies.

[–] OutrageousUmpire@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

I realized at one point that the amount of data that is truly irreplaceable to me amounts to only ~500GB. So I back this important data up to my NAS, then from there back it up to Backblaze. I also create M-Discs: two sets, one for home and one I keep at a friend's place. Then, because "why not" and I already had them sitting around, I also back up to two SD cards and keep them on site and off site.

I also back up my other data like tv/movies/music/etc, but the sheer volume of data leaves me one option, that being a couple of USB hard drives I back up to from my NAS.

[–] lupec@lemmy.world 2 points 1 year ago

It's still a WIP, but that's pretty much where I'm at as well. I was going crazy trying to figure out which multi-terabyte service to use, when in reality the actually irreplaceable stuff falls well under a single TB of data lol. Might go with Backblaze as well.

[–] gerowen@lemmy.world 4 points 1 year ago (1 children)

I have an external hard drive that I keep in the car. I bring it in once a month and sync it with the server. The data partition is encrypted so that even if it were to get stolen, the data itself is safe.

[–] bernard@lemmy.film 4 points 1 year ago (1 children)

I have a similar 3-2-1 strategy without using someone else's server or needing to traverse the internet. I keep my drive in the pool shed, since if my house were to blow up or get robbed, the shed would probably be fine.

[–] DawnOfRiku@lemmy.world 4 points 1 year ago

Personal files: Syncthing between all devices and a TrueNAS Scale NAS. TrueNAS does snapshots 4 times a day, with a retention policy of 30 days. From there, a nightly sync to Backblaze B2 happens, also with a 30 day retention policy. Occasional manual backups to external drives too.

Homelab/Servers: Proxmox VM and LXC container exports nightly to TrueNAS, with a retention policy of 7 days. A separate weekly export happens to a separate TrueNAS share, which gets synced to B2 weekly with a retention policy of 30 days. Also has occasional external drive backups.

[–] dan@upvote.au 4 points 1 year ago

I use Borgbackup 1.2.x. It works really well. Significantly faster than Duplicity. Borg uses block-level deduplication instead of doing incremental backups, meaning the backup won't grow indefinitely like with duplicity (this is why you have to periodically do a full backup with Duplicity). The Borg server has an "append-only" mode meaning the client can only add data to the backup and not remove it - this is useful because if an attacker were to gain access to the client, they can't delete all your backups. This is a common issue with other backup systems - the client has full access to the backup, so there's nothing stopping an attacker from erasing the client system plus all its backups.

For storing the backups, I have two storage VPSes - One with HostHatch in Los Angeles ($10/month for 10TB space) and one with Servarica in Montreal Canada (3.5TB space for $84/year).

Each system being backed up performs the backup twice - Once to each VPS. Borgbackup recommends this approach over only performing one backup then rsyncing it to a different server. The idea is that if one backup gets corrupted (or deleted by an attacker, etc), the other one should still be OK as it's entirely separate.
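A sketch of the append-only arrangement described above. The hostnames, paths, and SSH key are placeholders; the flags are standard borg 1.x:

```
# Server side: force the client's SSH key into an append-only borg serve,
# so a compromised client can add archives but never delete them.
# (one line in ~/.ssh/authorized_keys on the server; key shortened)
command="borg serve --append-only --restrict-to-path /srv/borg/client1",restrict ssh-ed25519 AAAA... client1

# Client side: two fully independent repositories, one per storage VPS
borg init --encryption=repokey-blake2 ssh://borg@vps1.example.com/srv/borg/client1
borg init --encryption=repokey-blake2 ssh://borg@vps2.example.com/srv/borg/client1

# Back up to each in turn; {hostname} and {now} are borg placeholders
borg create --stats ssh://borg@vps1.example.com/srv/borg/client1::{hostname}-{now} /etc /home
borg create --stats ssh://borg@vps2.example.com/srv/borg/client1::{hostname}-{now} /etc /home
```

Running `borg create` twice, once per repository, is the "two independent backups" approach: each repo has its own deduplication state, so corruption or deletion in one cannot propagate to the other.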

I just wanted to say that I appreciate the input from everyone here. I really need to work on my backup solution and this will be helpful.

[–] dyslexicjedi@lemmy.world 3 points 1 year ago (5 children)

Main NAS backs up to Secondary NAS (onsite, 10G link). Secondary NAS backs up to offsite (Hetzner server) weekly. Only important data, not Linux ISOs etc.

[–] tomhellier@lemmy.ml 3 points 1 year ago

Cross my fingers 🤞

[–] skimdankish2@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

I use....

  • Timeshift -> local backup onto my RAID array
  • borgbackup -> BorgBase online backup
  • GlusterFS -> experimenting with replicating certain apps across two Raspberry Pis
[–] mikehunt@lemmy.world 3 points 1 year ago

Backup everything locally in Proxmox on separate storage, another copy to a local NAS, and a third one to Backblaze's cloud storage.

[–] hxhz@lemmy.world 3 points 1 year ago

I use a BackupPC instance hosted on an off-site server with a 1TB drive. It connects through SSH to all my VMs and backs up /home and any other folders I may need. It handles full and incremental backups, deduplication, and compression.

My critical files and folders are synced from my NAS to my desktop using Syncthing. From there I use Backblaze to do a full desktop backup nightly.

My NAS is in RAID 5, but that's technically not a backup.

[–] Borgzilla@lemmy.ca 3 points 1 year ago

I back up my home folder to an encrypted drive once a week using rsync, then I create a tarball, encrypt it, and upload it to Proton Drive just in case.
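The tarball-then-encrypt step can be sketched as a `tar | gpg` pipeline. This is a self-contained demo in /tmp with a throwaway passphrase, not the commenter's actual script, and the upload step is left out; it decrypts and lists the archive at the end to prove the round trip works:

```shell
# Self-contained sketch: tar up a folder, encrypt it symmetrically with
# gpg, then decrypt and list it to verify the round trip.
rm -rf /tmp/encdemo && mkdir -p /tmp/encdemo/home
echo "important" > /tmp/encdemo/home/notes.txt

tar -czf - -C /tmp/encdemo home \
  | gpg --batch --yes --symmetric --cipher-algo AES256 \
        --pinentry-mode loopback --passphrase demo-only \
        -o /tmp/encdemo/backup.tar.gz.gpg

# Restore check: decrypt and list the archive contents
gpg --batch --quiet --decrypt --pinentry-mode loopback --passphrase demo-only \
    /tmp/encdemo/backup.tar.gz.gpg | tar -tzf -
```

In a real script you would read the passphrase from a root-only file (`--passphrase-file`) rather than putting it on the command line, where it is visible in the process list.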

[–] banana1@lemmy.ca 3 points 1 year ago (2 children)

Personally I do:

  • Daily snapshots of my data + daily restic backup on-site on a different machine
  • Daily VM/container snapshots locally and on a different machine, keeping at least 2 monthly, 2 weekly and 2 daily backups
  • Weekly incremental data backup to an immutable B2 bucket, with a new bucket every month and 6-month immutability (so data can't be changed/erased for 6 months)
  • Weekly incremental data backup on another off-site machine
  • Monthly (but I should start doing it weekly) backup of important data (mainly documents and photos) to removable media that I keep offline in a fireproof safe

Maybe it's overkill, maybe it's not enough; I'll only know when something fails and I'm screwed, ahah.

As a note, everybody should test/check their backups frequently. I once had an issue after changing an IP address and figured out half my backups were not working 6 months later...

[–] Amius@yiffit.net 3 points 1 year ago

Holy crap. Duplicity is what I've been missing my entire life. Thank you for this.

[–] local_taxi_fix@lemmy.world 3 points 1 year ago

For PCs: daily incremental backups to local storage, daily syncs to my main unRAID server, and weekly off-site copies to a Raspberry Pi with a large external HDD running at a family member's place. The unRAID server itself has its config backed up to the unRAID servers, and all the local Docker stores also go to the off-site Pi. The most important stuff (pictures, recovery phrases, etc.) is further backed up in Google Drive.

[–] conrad82@lemmy.world 3 points 1 year ago

I use Syncthing to sync files between my phone, PC and server.

The server runs Proxmox, with a Proxmox Backup Server in a VM. A Raspberry Pi pulls the backups to a USB SSD and also rclones them to Backblaze.

Syncthing is nice. I don't back up my PC, as that is handled by the server. Reinstalling the PC requires almost no preparation; just set up Syncthing again.

[–] dimspace@lemmy.world 3 points 1 year ago

All Nextcloud data gets mirrored with rsync to a second drive, so it's in three places: the original source and twice on the server.

Databases are backed up nightly by Webmin to the second drive.

Then installations, databases, etc. are sent to Backblaze storage with Duplicati.

[–] vivia@sh.itjust.works 2 points 1 year ago (2 children)

For my server I use duplicity, with a daily incremental backup and sending the encrypted diffs away. I researched a few more options some time ago but nothing really fit my use case, but I'm also not super happy with duplicity. Thanks for suggesting borgbackup.

For my personal data I have a Nextcloud on an RPi4 at my parents' place, which also syncs with a laptop I've left there. For offline and off-site storage, I use the good old strategy of bringing over an external hard drive, rsyncing to it, and bringing it back.

[–] kat@feddit.nl 3 points 1 year ago

No problem! I also see Restic a lot in this thread, so I'll probably try both at some point

[–] tyfi@lemmy.world 2 points 1 year ago

I feel the exact same. I've been using Duplicacy for a couple of years; it works, but I don't totally love it.

When I researched Borg, Restic, and others, there were issues holding me back for each. Many are CLI-driven, which I don't mind for most tools. But when shit hits the fan and I need to restore, I really want a UI to make it simple (and to easily browse file directories).

[–] craftymansamcf@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

For smaller backups (<10GB each) I run a three-phase approach:

  • rsync to a local folder /srv/backup/
  • rsync that to a remote nas
  • rclone that to a b2 bucket

These scripts run via cron, and I log the results to a file using the --log-file option for rsync/rclone so I can do spot checks.

This way I have access to the data locally if the network is down, remotely on a different networked machine for any other device that can browse it, and finally in an offsite cloud backup.

Doing this setup manually through rsync/rclone has been important for building the domain knowledge to think about the overall process: scheduling multiple backups at different times overnight so as not to overload the drive and network, ensuring versioning is kept for files that might require it, and making sure I don't use too many API calls for B2.

For large media backups (>200GB) I only use the rclone script, and I set it to run for 3 hours every night after all the more important backups are finished. It's not important that it finishes ASAP; a steady drip of changes up to B2 matters more.
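A nightly, time-boxed rclone run like that can be expressed as a single crontab entry; rclone's `--max-duration` flag stops the transfer cleanly after the window. The path, bucket name, and times below are made up:

```
# crontab sketch: nightly media sync to B2, capped at 3 hours
0 1 * * *  rclone copy /srv/media b2:media-backup --max-duration 3h --log-file /var/log/rclone-media.log
```

Because rclone only uploads files that differ from the remote, each capped run picks up where the previous night left off.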

My next step is to figure out a process for emailing the backup logs every so often, or to look into a full application with better error-catching capabilities.

For any service/process backed up this way, I try to document a spot-testing process and confirm it works every 6 months:

  • For my important documents, I add an entry to my KeePass db, run the backup, navigate to the cloud service, download the new version of the db, and confirm the recently added entry is present.
  • For an application, I run through a restore process and confirm certain config or data is present in the newly deployed app. This also forces me to keep a fast restore script I can follow for any app.
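The KeePass spot check above boils down to "add a known entry, back up, restore, verify". A checksum comparison captures the same idea; this is a self-contained sketch in /tmp where a local copy stands in for the backup-and-download round trip:

```shell
# Spot-check sketch: add a canary entry, then verify the "restored"
# copy matches the original by comparing checksums.
rm -rf /tmp/spotcheck && mkdir -p /tmp/spotcheck
echo "new keepass entry" >> /tmp/spotcheck/vault.kdbx      # the canary entry
sha256sum /tmp/spotcheck/vault.kdbx | awk '{print $1}' > /tmp/spotcheck/before.sha

# Stand-in for "run the backup, then download it from the cloud service"
cp /tmp/spotcheck/vault.kdbx /tmp/spotcheck/restored.kdbx

sha256sum /tmp/spotcheck/restored.kdbx | awk '{print $1}' > /tmp/spotcheck/after.sha
cmp -s /tmp/spotcheck/before.sha /tmp/spotcheck/after.sha && echo "restore verified"
```

In the real workflow the `cp` step would be replaced by the actual backup run and a fresh download from the cloud provider; the checksum files give you a paper trail for each 6-month check.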
[–] cwiggs@lemmy.world 2 points 1 year ago

My important data is backed up via Synology DSM Hyper backup to:

  • Local external HDD attached via USB.
  • Remote to Backblaze (costs about $1/month for ~100GB of data)

I also have Proxmox Backup Server back up all the VMs/CTs every few hours to the same external HDD used above; these backups aren't crucial, however, they would just be helpful for rebuilding if something went down.

[–] kabouterke@lemmy.world 2 points 1 year ago

In short: crontab, rsync, a local and a remote raspberry pi and cryptfs on usb-sticks.

[–] xionzui@lemmy.world 2 points 1 year ago

I use backupninja for the scheduling and management of all the processes. The actual backups are done by rsync, rdiff, borg, and the b2 tool from backblaze depending on the type and destination of the data. I back up everything to a second internal drive, an external drive, and a backblaze bucket for the most critical stuff. Backupninja manages multiple snapshots within the borg repository, and rdiff lets me only copy new data for the large directories.
