this post was submitted on 07 Jul 2023
12 points (87.5% liked)

Selfhosted


Three days ago I set up fail2ban. Nothing fancy, just reading the logs of my Docker containers where it applied.
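If you want to see what fail2ban is actually doing, fail2ban-client can list the jails and their bans; a minimal sketch (jail names vary per setup, sshd is just an example):

```
# List the active jails
sudo fail2ban-client status

# Show the ban count and currently banned IPs for one jail
sudo fail2ban-client status sshd
```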

Then two days ago my server crashed out of nowhere, with nothing in the f2b logs (I thought I had banned the entire internet by mistake). Running an nmap scan just told me ports 80 and 443 were open (a few more should have been, for Plex).
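For reference, that kind of scan from another machine on the LAN looks something like this (a sketch; the IP is a placeholder, and 32400 is Plex's default port):

```
# Scan the common ports
nmap 192.168.1.50

# Explicitly check the ports that should be open
nmap -p 80,443,32400 192.168.1.50
```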

The same thing happened yesterday, and I pulled the cable just in case I was being hacked (I'm paranoid, but not too much) and looked into it. Usually I SSH into the server from my local network, but I couldn't this time, so I hooked up a screen, and it was quickly flooded with systemd failures and ext4 errors.

I reformatted the disk a few months ago and ran a SMART test; it told me the disk was fine, no errors detected. It is a chonky 2 TB disk and I have at most 150 GB used (movies, music, backups waiting to be transferred on a daily basis to other servers/media, Docker containers).

Where should I look? I know how to work with Linux, but when tracking down a problem like this I'm lost beyond systemctl status/restart.
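A few starting points for this kind of triage (a sketch; adjust the filters to taste):

```
# List units that systemd considers failed
systemctl --failed

# Kernel messages, filtered for disk and filesystem errors
dmesg -T | grep -iE 'ext4|i/o error|ata'

# Errors and worse from the previous boot's journal
journalctl -b -1 -p err
```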

top 11 comments
[–] SheeEttin@lemmy.world 7 points 1 year ago (1 children)

fsck and reboot, and make sure your backups work
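For an ext4 filesystem that means checking it while it is unmounted, e.g. from a live USB; a minimal sketch (/dev/sda1 is a placeholder for the actual partition):

```
# -f forces a check even if the filesystem is marked clean,
# -y answers yes to all repair prompts
sudo fsck.ext4 -f -y /dev/sda1
```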

[–] SuperFola@programming.dev 1 points 1 year ago

I turned it off after taking the pic. Will try, thanks

[–] NeoLikesLemmy@lemmy.fmhy.ml 3 points 1 year ago* (last edited 1 year ago)

Shut down and check all the wires and plugs. So often it's the simple things.

If it's not hardware, then it's the filesystem: either some disk (or quota) is full, or not mounted, or a filesystem is damaged.
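Quick checks for those cases (a sketch; the mount point is a placeholder):

```
# Free space and free inodes on every mounted filesystem
df -h
df -i

# Confirm the data disk is mounted where you expect
findmnt /mnt/data

# Recent kernel messages often name the failing device
dmesg -T | tail -n 50
```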

[–] bigredgiraffe@lemmy.world 3 points 1 year ago (1 children)

Did you run a short or long SMART test? How old is the drive? In my experience those errors are usually the beginnings of a failing drive, unless you are doing mass I/O (like TB/s) and it can't update the filesystem metadata fast enough for some reason.

[–] SuperFola@programming.dev 2 points 1 year ago (1 children)

A short SMART test; the drive is 5 years old at most. The most I'm doing on the drive is MB/s... so it should be fine.

I'll try to get a new drive, probably an SSD. Could it be related to the drive controller? I have a hardware RAID drive aggregator card with a single disk on it (I had 3 in the past; the 2 oldest ones died at the same time, making scratching noises; they were 10+ years old, but the short SMART test said everything was fine, and I haven't tried a long one, as smartctl -a tells me it would take about 5-10 for all disks).
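For what it's worth, disks behind a RAID controller sometimes need a device-type hint before smartctl can reach them; a sketch (the -d value depends on the actual controller):

```
# Identify the storage controller
lspci | grep -iE 'raid|sas|sata'

# Query a disk through a SAT-capable controller
sudo smartctl -a -d sat /dev/sda
```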

[–] bigredgiraffe@lemmy.world 2 points 1 year ago

Yeah, the long test does a surface inspection and checks all of the sectors, so it will take a while. You can run it in the background though, and it might default to that.
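Roughly, with smartctl (a sketch; /dev/sda is a placeholder):

```
# Start the extended self-test; it runs inside the drive's firmware,
# so the system stays usable while it works
sudo smartctl -t long /dev/sda

# Check progress and, once done, the result log
sudo smartctl -l selftest /dev/sda
```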

Do back up important data though; if the drive is in fact failing, the extra I/O might tip it over the edge. Can't be too careful.

If it were me, I would probably run fsck and reboot the first time in case it was a fluke, and then investigate the drive if it happens repeatedly.

If you are worried about its age, SMART will also tell you the power-on hours of the drive; that's the age that matters (well, and TB written sometimes). Each manufacturer has different mean-time-between-failure ratings depending on the type of drive as well, and you can also check Backblaze's published drive stats.
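Reading the age out is one call (a sketch, same placeholder device):

```
# Power_On_Hours is SMART attribute 9
sudo smartctl -A /dev/sda | grep -i power_on
```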

Hope that helps!

[–] LachlanUnchained@lemmyunchained.net 1 points 1 year ago (1 children)
[–] SuperFola@programming.dev 1 points 1 year ago (2 children)

It boots "fine", but I'm now pretty sure it will crash the same way again. The disk is only 5 years old; I hope it's not a hardware problem.

5 years. Geez, haha. Not a bad innings.

[–] ABeeinSpace@lemmy.world 1 points 1 year ago

5 years old is pretty old for a hard drive

[–] erre@feddit.win 1 points 1 year ago

Maybe an issue with RAM? It could be loose, dusty, or going bad.
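To rule RAM out without opening the case first, memtester can exercise a chunk of memory from the running system (a sketch; a full memtest86+ pass from a boot disk is more thorough):

```
# Lock and test 1 GiB of RAM for one pass
sudo memtester 1024M 1
```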
