Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

For Example

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.


founded 1 year ago

My son has a bag which he takes with him to Kindergarten every day. I'd like to throw in something like an Apple AirTag to be able to see where the bag is, but I have a couple of requirements:

  • No subscription
  • Should work in South Korea (AirTag does not work here)
  • Sometimes it's me who brings him to Kindergarten and I have an Android phone; sometimes it's his mom with an iPhone
  • It should be somehow connectable to HomeAssistant as a device tracker to see where the bag is (or at least if it's at home or not)

Any ideas what would work?


Hi there!

Since my last post, the LemmyWebhook package has gained quite a few new capabilities, so I've decided it's time for another one.


Quick intro to the package: it adds webhook support to Lemmy, meaning you can get notified of events and react to them automatically, instead of having to poll for everything, often with multiple HTTP requests. Everything is done efficiently: it avoids hitting your database as much as possible, and when it does, it only uses primary-key queries. You can also (optionally) make it available to other users, who can then run their bots on your instance with only the permissions you allow them; if you only grant them access to post events, they don't also get access to new-user events.


So, what's new?

  • When you listen for an update event, you get the previous version of the data in addition to the current one, meaning you can directly compare what has changed
  • A new function for getting the parent comment ID has been added; with it you can, for example, detect if someone is replying to your bot
  • You can now listen for community subscribe/unsubscribe events

As usual, let me know what you think, feel free to offer suggestions or ask questions.
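
The update-event diff lends itself to a tiny handler. Here is a minimal sketch in Python of what a consumer of such an event might do; note that the payload field names `previous` and `current` are assumptions for illustration, not the package's documented schema:

```python
def summarize_update(event: dict) -> dict:
    """Return only the fields that changed between the previous and
    current versions of the object carried by an update event.
    The 'previous'/'current' keys are assumed for illustration."""
    previous = event.get("previous", {})
    current = event.get("current", {})
    changed = {}
    for key in current:
        if previous.get(key) != current[key]:
            changed[key] = {"old": previous.get(key), "new": current[key]}
    return changed

# Example: a post whose title was edited
event = {
    "previous": {"id": 42, "title": "Old title", "nsfw": False},
    "current": {"id": 42, "title": "New title", "nsfw": False},
}
print(summarize_update(event))
# -> {'title': {'old': 'Old title', 'new': 'New title'}}
```

With the diff in hand, a bot can react only when a field it cares about actually changed, instead of re-fetching and comparing objects itself.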


cross-posted from: https://infosec.pub/post/8864206

I bought a Silicondust HD Homerun back before they put their website on Cloudflare. I love the design of having a tuner with a cat5 port, so the tuner can work with laptops and is not dependent on being installed into a PC.

But now that Silicondust is behind Cloudflare, I will no longer buy their products. I do not patronize Cloudflare patrons.

I would love to have a satellite tuner in a separate external box that:

  • tunes into free-to-air content
  • has a cat5 connection
  • is MythTV compatible

Any hardware suggestions other than #Silicondust?

submitted 9 months ago* (last edited 9 months ago) by Darkassassin07@lemmy.ca to c/main@selfhosted.forum
 
 

-post won't delete, so redacted instead-


I installed Paperless and really like it. I successfully got the consumption folder and email fetching working, but I have one small concern. All documents from the consumption folder and email are immediately processed into the storage path as soon as Paperless imports them, with all "matching" tags and document types applied. Many of these documents I haven't REVIEWED or SAVED yet, but they're already moved to the folder.

Is there a way to put documents from the consumption folder and emails into a folder, or maybe a queue, before I manually review and save them?


What I'm trying to do:

I've recently set up a home media server (Jellyfin, Radarr, Sonarr, etc) and would like to be able to give external access to the Jellyfin server to a few family members. Additionally, I'd like to establish an internally and externally accessible dashboard (probably using Homepage) that facilitates access to various services (e.g. Sonarr, Radarr, qBittorrent), as well as Frigate's dashboard, and allows access to a separate Home Assistant box's dashboard.

Ideal set up:

The dashboard would be accessible through https://dashboard.lemtrees.com/. Individual services would be accessible directly through https://<service>.lemtrees.com/ (e.g. https://sonarr.lemtrees.com/). Access to this dashboard should be safe and secure, and accessible from anywhere (i.e. not just my phone or a pre-approved device) if possible. Access to this dashboard would facilitate access to the Home Assistant box's HA dashboard.

The external Jellyfin access needs to be rather simple, so ideally I could tell my family members to just install the Jellyfin app and point them to https://jellyfin.lemtrees.com/. It is my understanding that this traffic should not go through Cloudflare in order to not violate their TOS.

Current set up:

  • Domain
    • I have a domain name I wish to use (not actually lemtrees.com) through Namecheap.
  • Internal network config
    • Outside -> Comcast router (in Bridge Mode) -> Google Home wi-fi router
      • Wi-fi devices (e.g. phones)
      • 8-port Netgear switch (Ethernet devices)
        • "Media Server" PC
        • "Home Assistant" Intel NUC PC
        • Personal PC
        • Various device gateways (e.g. Philips Hue, Lutron Caseta)
      • (Note: The Google Home app is used to establish DHCP IP reservations / static IPs)
  • "Home Assistant" Intel NUC PC
    • Home Assistant OS (handles home automation)
    • PiHole (currently used to resolve "mediaserver" as the correct IP address internally)
    • Updates a DuckDNS entry (which isn't presently used)
    • (Note: Home Assistant dashboard is not presently accessible externally but I would like it to be)
  • "Media Server" PC
    • Runs Debian
    • Hosts media (one SSD for the OS/etc, multiple HDDs for media storage)
    • Runs Jellyfin server
    • Runs the *arrs, like Sonarr and Radarr
    • Runs NordVPN
    • Runs qBittorrent (network access bound to NordVPN)
    • (Note: Presently do NOT have Docker installed but will)
    • Frigate NVR
      • (Note: Not yet installed/configured, will get set up on Docker)
      • Will provide Wyze cam access and recordings
      • Will stream one Wyze cam to Twitch

What am I after?

Please recommend how to get from where I am to my "ideal set up". I've been reading and frankly just feel a bit overwhelmed. Lots of people want to make things complex just for the challenge of setting it up, but I do that kind of thing all day at work; here I just want the easiest-to-set-up-and-maintain solution available.

Everything I read seems to have some reason why it won't work, but I may be misunderstanding some of them and especially how they work together. Cosmos Server seems to require that all of my apps be in Docker containers (which isn't the case), Tailscale seems to require that I set up a VPN for whoever wants to use it (not an option for family or for getting to my dashboard from a work PC), Authentik might work for the dashboard (but not all of the apps support SSO) but not for a Jellyfin server, etc. I'm still wrapping my head around setting up a reverse proxy, a VPN tunnel or Cloudflare or something (or just somehow using my NordVPN connection?), not needing to forward ports, etc.

I would greatly appreciate any assistance in wrapping my head around a straightforward way to get my "ideal set up" working.
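
For the subdomain routing itself, the usual building block is a single reverse proxy that routes by hostname to internal ports. A minimal nginx sketch under assumed values (Jellyfin's default HTTP port is 8096; the certificate paths are Let's Encrypt-style placeholders, and each further service would get its own server block with a different server_name and proxy_pass target):

```nginx
# /etc/nginx/conf.d/jellyfin.conf -- hostnames, ports and cert paths are placeholders
server {
    listen 443 ssl;
    server_name jellyfin.lemtrees.com;

    ssl_certificate     /etc/letsencrypt/live/lemtrees.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lemtrees.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;   # Jellyfin's default HTTP port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```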


Hi everyone, as the title says I just moved houses and ISPs and now cannot access my server's services through nginx. I checked that they are up and running, as I have Tailscale set up on my phone and can access the services through that. When I go into Nginx I see that I can "set up" an HTTP reverse proxy (when I click on it, it brings up a blank page), but when I go to set up SSL I get an internal server error. I've also double-checked that my domain is pointing to the right IP address and all required ports are forwarded. Any thoughts as to why this would be occurring? I initially thought it was a config issue, but after removing the nginx container and stack via Portainer and redeploying, I still get the exact same issues. Hopefully someone can help.

Edit 1: I've also tried different IPs (Docker & Tailscale internal IPs) when setting up the reverse proxy host in nginx, to no avail. The Tailscale internal IP was working flawlessly before the move.


Hey!

I am currently using YunoHost on my HP EliteDesk 600 G3, but I want to switch to a Docker-based system; that makes it a lot more flexible. The most important apps are Vaultwarden and Nextcloud. I don't have a lot of data, so 2TB is mostly enough (but it would be nice if I could extend that).
Disks: 2x 2TB SSD and 1x 1TB SSD

I am using a Synology as the backup for my data (sending a backup every night via restic).

So my question is: what OS do I use for this?
Had a look at:
- OMV: nice, open source, but the Docker handling since the new update is... not so nice.
- Unraid: tested it, very easy to handle, but it feels "too powerful"; also, I am only using SSDs, and I've read you should use HDDs for the array.
- Debian + Portainer: the two options above are powerful and, I think, more for "save a lot of data" systems. Debian + Portainer sounds like a minimalistic solution for what I want, but I don't know if I'd have to configure a lot and end up with a lot of ongoing work. I am not very experienced with this (I know how to use Docker, but I am not a pro).
- Something different?

Thanks for your help!
(sorry for bad english ^^)


Hello, I'm doing my darndest to learn docker but I'm a bit lost in the sauce for understanding how to best structure the setup for backing up and portability.

My current approach is that I keep my docker compose yaml file in git (just using one for the time being), and all of my container configs live in another directory outside of this git repo. I understand that if I were to move systems or my system were to fail, I'd need these docker config folders to set up my containers on a new system.

As for backing the config folders up, I plan to relocate them to a shared folder on my NAS. Is this the right way to do this or is there a better way to approach this?
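
One common convention is to keep the compose file and the bind-mounted config directories under a single parent directory, so one backup job (local or to the NAS share) covers both. A sketch, with the service, image, and paths as illustrative assumptions:

```yaml
# ~/docker/docker-compose.yml -- all container state lives under ./configs,
# so backing up ~/docker as a whole captures the compose file and the configs
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - ./configs/sonarr:/config   # bind mount next to the compose file
    restart: unless-stopped
```

Relative bind mounts like this also make the stack portable: copy the directory to a new machine, run `docker compose up -d`, and the containers find their configs.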

Hopefully the lingo is correct, still learning. Thanks a ton!


I'm in the process of selecting a web-based SSH app to add all my SSH servers in one place. I've tried Apache Guacamole and it's been working fine.

I'm also trying sshwifty, but the thing is, sshwifty doesn't have a login interface before accessing the data, so it's not ideal on its own. I've made an install anyway and am asking whether it's good enough for my current setup.

I actually don't have Authelia or Authentik to put it behind a 2FA app, and I don't plan to install one soon. BUT: I installed sshwifty on an OCI VM that has the public IP 123.123.123.123, and I only allowed port 8182 for that IP address (I added 123.123.123.123/32 to the security list), so no one can access this app except localhost. Then I installed a Cloudflare tunnel on this VM, activated OTP by email, and allowed only my email.

So my question is, is this secure enough?


Hi all,

I very briefly kicked the tires on Headscale, and whilst it certainly seemed very impressive, I did have a few concerns.

Primarily, that non-admin users don't seem to need to consent to having config changes applied to their devices. Whilst it's assumed admins are trustworthy (I'd like to think so!), it just struck me as not the way I'd want something to function when it comes to direct access between devices, routes etc. It also doesn't seem like it logs and tells users when something has changed, so shenanigans could occur, and the user would be unaware of it, especially if it got put back to its prior state of config.

Also seems to lack a self-service aspect to it, where if a user got a new device or had to reinstall their OS and had no backups then they'd need to ask me to be added back to the mesh. Ideally, a user would be able to add their own devices to their own group and allow interoperability between their own devices, but selectively open up access to specific devices to others not owned by them without me needing to configure it for them.

Ideally, I'm looking for something that's equally performant, available on plenty of different OS, allows users to understand and consent to config changes, and also manage their own devices.

Our primary usage scenario is working remotely together via a few bits of software that don't have WAN features or servers and only allow real-time collaboration via LAN.

There's every chance I'm completely wrong about all the above too!
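
On the self-service/grouping part: Headscale does support Tailscale-style ACL policy files, which can at least express the "users reach their own devices, plus selected shares" model, though the consent and audit concerns above still stand. A sketch (the syntax follows Tailscale's ACL format, which Headscale largely mirrors; verify field names against the Headscale documentation before relying on it):

```json
{
  "groups": {
    "group:alice": ["alice"],
    "group:bob": ["bob"]
  },
  "acls": [
    {"action": "accept", "src": ["alice"], "dst": ["alice:*"]},
    {"action": "accept", "src": ["bob"], "dst": ["bob:*"]},
    {"action": "accept", "src": ["bob"], "dst": ["alice:445"]}
  ]
}
```

This still leaves device enrollment in the admin's hands, so it only partially addresses the "user adds their own replacement device" scenario.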


Hey there!

I've been trying to set up Firefly III and its data importer on my server. They are running on Docker Compose, and I installed them according to the instructions found here.

Now, while Firefly itself is running fine, its data import tool doesn't really want to work. I can connect to it just fine; however, after I put in my client ID, the next step refuses to connect. Firefly III is running externally on port 90 (internal port 8080) and the importer is on port 91 (internal 8080).

Here are the logs for the data importer container:

2023-12-04T00:01:14.522957414Z [2023-12-04 01:01:14] local.INFO: The following configuration information was found:
2023-12-04T00:01:14.523048918Z [2023-12-04 01:01:14] local.INFO: Personal Access Token: "" (limited to 25 chars if present)
2023-12-04T00:01:14.523135810Z [2023-12-04 01:01:14] local.INFO: Client ID            : "0"
2023-12-04T00:01:14.523232694Z [2023-12-04 01:01:14] local.INFO: Base URL             : "http://app:8080"
2023-12-04T00:01:14.523318507Z [2023-12-04 01:01:14] local.INFO: Vanity URL           : "http://192.168.1.48"
2023-12-04T00:01:14.532544424Z 192.168.1.36 - - [04/Dec/2023:01:01:14 +0100] "GET /token HTTP/1.1" 200 2711 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
2023-12-04T00:01:19.304786075Z [2023-12-04 01:01:19] local.DEBUG: Now at App\Http\Controllers\TokenController::submitClientId
2023-12-04T00:01:19.336493333Z [2023-12-04 01:01:19] local.DEBUG: Submitted data:  {"client_id":"9"}
2023-12-04T00:01:19.336602087Z [2023-12-04 01:01:19] local.DEBUG: [a] Base URL is "http://app:8080"
2023-12-04T00:01:19.336718720Z [2023-12-04 01:01:19] local.DEBUG: [b] Vanity URL is now "http://app:8080"
2023-12-04T00:01:19.336810283Z [2023-12-04 01:01:19] local.DEBUG: [c] Vanity URL is now "http://192.168.1.48"
2023-12-04T00:01:19.336911337Z [2023-12-04 01:01:19] local.DEBUG: Now in App\Http\Controllers\TokenController::redirectForPermission(request, "http://app:8080", "http://192.168.1.48", 9)
2023-12-04T00:01:19.337260478Z [2023-12-04 01:01:19] local.DEBUG: Query parameters are {"client_id":9,"redirect_uri":"http://192.168.1.48:91/callback","response_type":"code","scope":"","state":"XZOM8xz6f49vORmQiLcUbfPGErlW3RLRAVezTREf","code_challenge":"gVqC9BPzNaApEO_DOdt-uJaNTAFpQxjsx32yTFhNjGk","code_challenge_method":"S256"}
2023-12-04T00:01:19.337349471Z [2023-12-04 01:01:19] local.DEBUG: Now redirecting to "http://192.168.1.48/oauth/authorize?" (params omitted)
2023-12-04T00:01:19.339413140Z 192.168.1.36 - - [04/Dec/2023:01:01:19 +0100] "POST /token/client_id HTTP/1.1" 302 2799 "http://192.168.1.48:91/token" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"

I saw something in the docs regarding refused to connect errors, but didn't find anything relating to my issue. Anyone know what might be going on here?

Thanks!
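
For what it's worth, the logs above show the vanity URL being rewritten to http://192.168.1.48 with no port, while Firefly itself is published on port 90, so the final OAuth redirect may be landing on a port nothing is listening on. A sketch of the importer-side environment variables (FIREFLY_III_URL and VANITY_URL are the importer's documented settings; the values here just restate the post's setup with the external port added):

```
# data importer container environment (sketch)
FIREFLY_III_URL=http://app:8080       # internal container-to-container URL
VANITY_URL=http://192.168.1.48:90     # external URL, including the published port
```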


I've been running a mail server for two months now. I worked for more than six months on a beautiful UI like SendGrid's and Mailgun's, and I plan to start a transactional email service.

I bought a range and rented another /24 range because I didn't want to have a bad neighbour on the subnet. I even got my own ASN because jerks like UCEProtect often put big ISPs on a blacklist at the ASN level.

Of course, I have got a decent experience with this. I wrote my own SMTP server, email routing, and other things such as bounce and suppression handling. In a sense, everything is fine. RDNS, DKIM, DMARC, and SPF.
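
For reference, the DNS side of that checklist boils down to three TXT records plus reverse DNS. A zone-file sketch, with the domain, selector, IP range, and key all placeholders:

```
; SPF, DKIM and DMARC sketch -- all values are placeholders
example.com.                  IN TXT "v=spf1 ip4:203.0.113.0/24 -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key-base64>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```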

I know that IP needs to warm up, so that's where I started. I paid for a few services to help me warm up, and it took me about two months to do so. Okay so far. The email was delivered 100% of the time to Gmail, but not at all to Yahoo and Outlook. The delivery rate to these two companies started to get better around last week, though. Some IP addresses started getting a 100% delivery rate.

Then, I started testing my service on one of my websites. Of course, 100% transactional emails with account confirmation links ONLY. It was working great: nearly 2,000 emails, 3,000+ opens, and about 2,500 clicks daily on average.

I've also subscribed to Glock Apps and MXToolbox to measure my email deliverability and monitor IPs.

Just today, I received an email reporting that half of my active IP addresses and my sending/tracking domains had been blacklisted by Spamhaus. They categorize it under "spam domain", but I looked through my server logs (yes, everything is logged) and found no evidence of spam; only transactional and warmup emails were sent. I opened a ticket, but Spamhaus refuses to unblock my IP addresses and domains.

I spent 6 months and $20,000+ working on this, only to be butchered by Spamhaus. I want to kill myself. How can Spamhaus be the police, judge and the executioner?


Hey all,

Just purchased a little Lenovo M720q and an external 8TB HDD to start off.
Wanted to use Proxmox so I can play around with VMs in the future.

Here is my plan:

  • Install Proxmox on the PC storage
  • Install tailscale on the host so I can have remote access
  • Create a VM and use docker-compose for Plex, *arrs, etc.
  • Create a VM for my download clients, and put this entire VM behind a VPN
  • Create a VM/LXC and mount the external drive to it so I can set up an NFS/SMB share or something (still researching this), so the other VMs/containers will all have access to it

I am not sure which file systems to use for the Proxmox host and the external storage. I've been seeing people recommend ext4, NTFS, and ZFS.
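
For the NFS piece of that plan, the server side is roughly a one-line export plus a matching client mount. A sketch, with the mount points, server IP, and subnet all placeholder assumptions:

```
# /etc/exports on the storage VM/LXC (subnet is a placeholder)
/mnt/external  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab entry on each client VM (server IP is a placeholder)
192.168.1.50:/mnt/external  /mnt/media  nfs  defaults,_netdev  0  0
```

After editing /etc/exports, `exportfs -ra` reloads the export table without a restart.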


Hello everyone, I'm looking for a selhosted alternative to https://cryptapi.io
I've found that list here https://github.com/alexk111/awesome-bitcoin-payment-processors and " Keagate" is almost what I'm looking for, but missing some coins and looks like dead... Are there some good alternatives or some which support "API-Mode" with callbacks?


Hello, I want to build a small server and self-host a couple of services. However, I'm not really sure if what I'm trying to do will be possible and/or simple enough for a beginner.

I want to host Nextcloud, Vaultwarden, and Jellyfin/Plex to start. However, I want to make Nextcloud and Vaultwarden both available outside my network with a domain (e.g. nc.mydomain.com and vw.mydomain.com). Is this possible to do? To my understanding, both services have the same IP (since they're both on the same machine), so I'm not sure how I would configure it on my domain provider's end. Also, does opening up these services put my others at a higher security risk?
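
It is possible: multiple hostnames can point at the same IP, and a reverse proxy on that machine then routes by hostname. On the DNS side it is just two records, sketched here with a placeholder address:

```
; both names resolve to the same public IP (placeholder);
; a reverse proxy on that IP routes by the Host header / SNI
nc.mydomain.com.  IN A  203.0.113.10
vw.mydomain.com.  IN A  203.0.113.10
```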


I'd like a docker image that gives me current system statistics like btop does on a web page in my docker environment. Is there something that isn't grafana/prometheus where I have to set up all my data and queries? I'm working towards learning those, but life has slowed that down. I'd like to see CPU/Disk/Network usage and system temp. Open to suggestions.


I am new to self hosting so please bear with me:

I just found out my wife is paying Deluxe hosting (https://www.deluxehosting.com/) $100 a month for a website. There is nothing spectacular about the site and there are grammatical errors that someone should have caught. I feel like the cost of the site is a bit ridiculous and want to run a thought by you all before I try to implement it:

My thought was to host a WordPress site on one of Linode's shared-CPU plans. I was looking at the $5-per-month plan plus the $2 backup add-on. As long as my wife owns the domain name of her site, this should work, right? Is there something better you can think of? Are there additional costs to this setup that I am missing?

Thanks!


I am not a native English speaker and using google translation. If I offend you, I apologize. 😃

The biggest problem is disk usage. Portainer and other panels don't let me choose the storage location, and Docker especially has high storage requirements. I am an SLC/MLC believer, and I always worry about the read/write wear caused by Docker, because its data folder lives on the system partition.

Another misery is that many applications' setup scripts pull many images without saying so in the introduction. If I want to undo the installation, it is hard to tell which images are foreign.

The following is my personal ranking of technology stacks by how much I prefer to install them. Difficulty of deployment is the key judgment, and web access is the second factor:

  1. TypeScript/Node/JavaScript —— Powerful package management. No compilation required. Easy to customize, embed, and reuse in personal projects.
  2. Go —— One file and one command.
  3. PHP —— No compilation required. Easy to customize. Fast to code. Abundant LNMP tooling.
  4. Java —— Few files. Easy to run, especially with the Spring Boot framework. But more and more projects prefer front-end/back-end separation.
  5. Python —— No compilation required. Easy to customize. But the version problem is unbearable, package management is centralized, and huge packages get downloaded onto the system disk (the root folder), especially for AI apps.
  6. C# —— If lucky enough, it is possible to run on Linux. But many C# apps are built for the Windows desktop.
  7. C/C++/Ruby —— Invasive to the system, hard to back up and deploy.

All of the projects above I manage with supervisor and an LNMP panel, and I decide their location among my 3 SSDs and 6 HDDs.

Docker and Podman might be the trend, but in my humble opinion, in most scenarios there is no need for them. Native deployment can handle more and is actually not too hard, except for something complex like Calibre and Jellyfin. I am also looking for alternatives to Calibre and Jellyfin.
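
On the disk-location complaint specifically: Docker's storage location can in fact be moved off the system partition with the documented `data-root` option in the daemon configuration (the path below is a placeholder):

```json
{
  "data-root": "/mnt/hdd1/docker"
}
```

This goes in /etc/docker/daemon.json, followed by a restart of the Docker daemon; existing images and volumes must be copied to the new path first.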



I have TrueNAS scale virtualized in Proxmox. I created a smb share that I can access on my windows 10 computer by entering the ip address of my TrueNAS and the following folder:

\\10.XX.XX.XXX\prox-share

I can add files to this share on TrueNAS from my Windows 10 machine. So, I know the share is working. On the share, I have created the folder MEDIA. Inside MEDIA, I placed the folders named movies and shows.

When I go into Jellyfin and click on add media library, I select content type: movies.

Then enter the display name of "Movies".

Then I go to folders and I cannot for the life of me figure out how to get this to work. At the bottom where it says "Shared network folder:" I enter 10.XX.XX.XXX/prox-share. Then for folder, if I put "/media/movies", I get the message "The path could not be found. Please ensure the path is valid and try again ". When I just enter /media (instead of /media/movies) in the folder name, it will accept that, but none of my movies are showing up in my media.

Any idea on what I am doing wrong? Thanks!
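
One thing worth checking: Jellyfin's "Folder" field generally expects a path that exists locally on the machine running Jellyfin, not an SMB address; the "Shared network folder" field is only an optional hint for clients. The usual route is to mount the share on the Jellyfin host first, then point the library at the mount point. A sketch of the mount (the credentials file and mount point are placeholder assumptions; the XX parts are kept from the post):

```
# /etc/fstab on the Jellyfin host (if it runs on Linux)
//10.XX.XX.XXX/prox-share  /mnt/prox-share  cifs  credentials=/etc/samba/creds,uid=jellyfin,iocharset=utf8  0  0
```

With that in place, the library folder would be /mnt/prox-share/MEDIA/movies rather than a network path.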


Just to start off, I have basic knowledge when it comes to networking and DNS setup.

I had PiHole installed for over a year; ad blocking was working fine, but there was unexplained lag/slowness across my devices.

My internet is not bad, 350mbps 5G home (no other options available in my area).

For example:

-Videos on X (Twitter) and TikTok would take around 3 to 5 seconds to load and start playing. When switching to mobile carrier data it is loading instantly.

-GitHub pulls frequently failed even though the domain is whitelisted.

Recently I decided to change from PiHole to AdGuard Home. It's been over a week now and the internet is much, much faster: the examples mentioned above are not happening anymore, and overall browsing is also faster.

I don't know what was causing the issue with PiHole but I thought I would share this experience in case someone else is having similar issues.

I would also be very interested to know any logical explanation to this experience.

Edit: Hosting is on a physical server running Proxmox, not a Raspberry Pi.


I read a year or two ago that people were buying used hard drives in bulk, stress testing them, and seeing something like a 40% failure rate, keeping the 60% that passed; basically buying 10 hard drives to get six. Does anyone have information on this?
