maf

joined 1 year ago
[–] maf@szmer.info 2 points 9 months ago (2 children)

You can try the "Advanced Search" below the predefined search presets now. It requires some understanding of JavaScript but should be fairly flexible. Unfortunately I haven't figured out a JavaScript-free interface :/

[–] maf@szmer.info 2 points 9 months ago

Unfortunately not. Somebody elsewhere also asked me about finding the best fighters. Like you say, it would be cool to be able to search for any trait combination. I think that would be more general and also easier to understand. Requesting specific Pals or using custom scoring functions would also be nice.

I'm still thinking about how to do this. Maybe the user could write a snippet of JavaScript and the C++ side could call it to score the Pals... It would be pretty flexible but also rather difficult to use... I'm traveling, so I can only theorize right now. If you have any ideas, let me know. I'll probably have some time to code tomorrow.
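To sketch what I mean (purely hypothetical - the Pal fields and function names here are made up, and a real version would embed a JS engine such as QuickJS to run the user's snippet), the C++ side could boil down to sorting by a user-provided scoring callback:

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Hypothetical Pal record; field names are invented for illustration.
struct Pal {
  std::string name;
  int attack = 0;
  std::vector<std::string> passives;
};

// The user-written JS snippet would be compiled by an embedded engine
// into a callable; here a std::function stands in for it.
using ScoreFn = std::function<double(const Pal&)>;

std::vector<Pal> RankPals(std::vector<Pal> pals, const ScoreFn& score) {
  std::sort(pals.begin(), pals.end(),
            [&](const Pal& a, const Pal& b) { return score(a) > score(b); });
  return pals;
}

int main() {
  std::vector<Pal> pals = {{"Lamball", 70, {"Brave"}}, {"Anubis", 130, {}}};
  // Stand-in for a user-written scorer: attack plus a bonus per passive.
  ScoreFn score = [](const Pal& p) {
    return p.attack + 10.0 * p.passives.size();
  };
  for (const Pal& p : RankPals(pals, score)) std::printf("%s\n", p.name.c_str());
}
```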

[–] maf@szmer.info 9 points 9 months ago (1 children)

Update: it was a bit of work but should be working now :) Drag the Level.sav file from %LOCALAPPDATA%\Pal\Saved\SaveGames\ onto the page and it should load all of your Pals automatically. Thanks for the tip!

[–] maf@szmer.info 5 points 9 months ago

I just looked at https://github.com/cheahjs/palworld-save-tools and I guess it should be possible :) Palworld uses a format called gvas (part of Unreal Engine) which seems to be a zlib-compressed sequence of key/value pairs. When I get some time to play again I'll probably look into this. Entering this data through a website is pretty annoying! 😅
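For anyone curious, the decompression side shouldn't be too bad in C++ either. A minimal sketch using zlib's streaming API - note that `kHeaderSize` is a placeholder guess, the real .sav header layout would have to be taken from palworld-save-tools:

```cpp
#include <zlib.h>

#include <cstddef>
#include <cstdint>
#include <vector>

constexpr size_t kHeaderSize = 12;  // placeholder - not verified!

// Inflates the zlib payload of a save file into a growing buffer.
std::vector<uint8_t> InflateSave(const std::vector<uint8_t>& file) {
  z_stream zs{};
  inflateInit(&zs);
  zs.next_in = const_cast<Bytef*>(file.data() + kHeaderSize);
  zs.avail_in = static_cast<uInt>(file.size() - kHeaderSize);
  std::vector<uint8_t> out;
  uint8_t chunk[16384];
  int ret = Z_OK;
  while (ret == Z_OK) {  // stops on Z_STREAM_END or any error
    zs.next_out = chunk;
    zs.avail_out = sizeof(chunk);
    ret = inflate(&zs, Z_NO_FLUSH);
    out.insert(out.end(), chunk, chunk + sizeof(chunk) - zs.avail_out);
  }
  inflateEnd(&zs);
  return out;  // gvas key/value data, ready for parsing
}
```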

 

I've been interested in finding the best Pal breeding strategies, which is pretty difficult because of gender limitations, passive skill inheritance & all sorts of special cases in the Pal breeding system. So I wrote this small utility to help me find the strongest Pals that I can breed & plan the optimal breeding path.

If you're into breeding Pals you might find this useful :)

[–] maf@szmer.info 13 points 11 months ago (12 children)

This restriction is meant to protect high-definition content from being ripped by pirates. Open systems don't offer the same DRM guarantees as locked ones.

[–] maf@szmer.info 1 points 1 year ago

Oh, yeah that would make sense. I think that would solve the whole security aspect :)

[–] maf@szmer.info 1 points 1 year ago (2 children)

Yeah. LetsEncrypt usually verifies that the client asking for a certificate owns the domain by sending an HTTP-based challenge. Gatekeeper could pass it by intercepting traffic on port 80. But any LAN device could also pass it by asking for port 80 to be temporarily forwarded. This means that LetsEncrypt TLS certificates are not worth much in a LAN environment. A malicious IoT device could convince other LAN hosts that it owns the router's IP by sending spoofed ARP announcements. Whenever a LAN device tried to visit the Gatekeeper web UI, it would actually visit a fake web UI hosted by the malicious IoT device. The IoT device could then sniff the administrator password and perform privileged actions in the real web UI.
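For the curious, here is roughly what such a spoofed announcement looks like on the wire (layout only - actually sending one would additionally require a raw socket; all multi-byte fields are big-endian):

```cpp
#include <cstdint>

// Wire layout of an Ethernet frame carrying a spoofed ARP reply. A
// malicious device puts the router's IP in sender_ip and its own MAC
// in sender_mac; hosts that cache the pair start sending router-bound
// traffic to the attacker.
#pragma pack(push, 1)
struct ArpAnnouncement {
  uint8_t  dst_mac[6];     // ff:ff:ff:ff:ff:ff (broadcast)
  uint8_t  src_mac[6];     // attacker's MAC
  uint16_t ethertype;      // 0x0806 = ARP
  uint16_t htype;          // 1 = Ethernet
  uint16_t ptype;          // 0x0800 = IPv4
  uint8_t  hlen;           // 6 (MAC length)
  uint8_t  plen;           // 4 (IPv4 length)
  uint16_t oper;           // 2 = reply
  uint8_t  sender_mac[6];  // attacker's MAC again
  uint8_t  sender_ip[4];   // spoofed: the router's IP
  uint8_t  target_mac[6];
  uint8_t  target_ip[4];
};
#pragma pack(pop)
static_assert(sizeof(ArpAnnouncement) == 42, "no padding expected");
```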

[–] maf@szmer.info 1 points 1 year ago (4 children)

Thank you for the feedback! I have to admit I wasn't aware of how important port forwarding is. Stepping back, I guess I'll need a better way of gauging how important specific features are to people. I'll have to think about this a little bit more...

Your question about security is something that I think about a lot. I don't think of LAN & internet as significantly different in terms of security. I also worry about potentially malicious LAN devices attempting to exploit local DNS, DHCP or the web UI. I've professionally worked on anti-malware and I've seen malware preloaded on new phones by factory workers & resellers, suspiciously exploitable flaws in stock firmware (which I guess were backdoors with plausible deniability), and fake monetization SDKs that were actually botnets (so application developers were unknowingly attaching bots to their apps). There is also the problem of somebody gaining physical access to your LAN (for example by connecting a prepared device to an ethernet port for a couple of seconds). While those things may seem far-fetched and commercial routers ignore them, I'd like to do something better here.

In terms of preventing C++ footguns, I'm relying on compiler flags (-fstack-protector, -D_FORTIFY_SOURCE=2), safe abstractions (for example std::unique_ptr, std::span, std::array...), readability (single-threaded, avoiding advanced primitives or external libraries) & patience (I think that time pressure is the biggest source of bugs).
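As an illustration (not actual Gatekeeper code), this is the kind of bounds-checked parsing that std::span enables - every access is validated against a known length instead of trusting pointer arithmetic:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <span>

// Read a big-endian 32-bit field out of a received packet. Returns
// nullopt instead of reading out of bounds on truncated input.
std::optional<uint32_t> ReadU32BE(std::span<const uint8_t> buf, size_t off) {
  if (off > buf.size() || buf.size() - off < 4) return std::nullopt;
  return (uint32_t{buf[off]} << 24) | (uint32_t{buf[off + 1]} << 16) |
         (uint32_t{buf[off + 2]} << 8) | uint32_t{buf[off + 3]};
}
// Built with e.g.: g++ -std=c++20 -O2 -fstack-protector -D_FORTIFY_SOURCE=2
```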

In terms of protocol-level security, so far I've been able to secure the update path (so that MITM attackers can't inject malicious code). The web UI is a big problem for me because to do any privileged operations I'll have to authenticate the user first. Firstly, I'm not exactly sure how to even do that. A password seems like the best option but I'm still trying to think of something better. There is this new WebAuthn thing which I'll have to look into. The second issue with the web UI is that I need to protect the authentication channel. This means that the local web UI will need TLS, which in turn means that I'll have to somehow obtain a TLS cert. Self-signed certs produce nasty security warnings. Obtaining one from LetsEncrypt seems easy - assuming the router has a public IP (which may not always be the case). But even if I obtain a LetsEncrypt cert, any LAN device can do the same thing, so TLS can still be MITM-ed. It would be really great if web browsers could "just establish an encrypted channel" and not show any security warnings along the way...

[–] maf@szmer.info 2 points 1 year ago* (last edited 1 year ago)

So you’re not remapping the source ports to be unique? There’s no mechanism to avoid collisions when multiple clients use the same source port?

Regarding port collisions: in Gatekeeper there are both a Symmetric NAT & a Full Cone NAT, used in tandem. I didn't mention the former before. The Symmetric NAT takes precedence over the Full Cone NAT when a connection has already been established (we've observed the remote host and have a record of which LAN IP it's talking to). You're 100% correct that without the Symmetric NAT there would be port collisions and computers in the LAN would fight over ports. I actually started out with just the Full Cone NAT (where collisions can happen) and used it on my network for a couple of weeks. It seemed to work in my home environment but I was a little worried about potential flakiness, so I eventually implemented the Symmetric NAT as a backup mechanism.
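A toy version of that lookup order (Gatekeeper's real data structures surely look different - this is just to illustrate the precedence):

```cpp
#include <compare>
#include <cstdint>
#include <map>
#include <optional>
#include <utility>

struct Endpoint {
  uint32_t ip = 0;
  uint16_t port = 0;
  auto operator<=>(const Endpoint&) const = default;
};

struct NatTables {
  // Symmetric: keyed by (remote endpoint, external port) -> LAN endpoint.
  std::map<std::pair<Endpoint, uint16_t>, Endpoint> symmetric;
  // Full Cone: keyed by external port alone -> LAN endpoint.
  std::map<uint16_t, Endpoint> full_cone;

  // For an inbound packet, an established flow wins; otherwise whichever
  // LAN host opened this external port gets the traffic.
  std::optional<Endpoint> Lookup(Endpoint remote, uint16_t ext_port) const {
    if (auto it = symmetric.find({remote, ext_port}); it != symmetric.end())
      return it->second;
    if (auto it = full_cone.find(ext_port); it != full_cone.end())
      return it->second;
    return std::nullopt;  // no mapping: drop or handle locally
  }
};
```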

Full Cone NAT implies that you have to remember the mapping (potentially indefinitely—if you ever reassign a given external IP:port combination to a different internal IP or port after it’s been used you’re not implementing Full Cone NAT) (...)

Ah, I also recall something like that! What you're saying about NAT assignments being permanent & requiring multiple IPs to avoid collisions - I think there was a course at my university, or some Cisco course, that taught that... I haven't been able to find any online sources today that would confirm those definitions, but I do remember something along the lines of what you're describing. I have no idea what happened to those terms. Maybe the "permanent assignments" don't make much sense in wireless networks, where WiFi devices can appear and disappear at any time?

Edit: I found it - the proper term for this was "Static NAT" (as opposed to "Dynamic NAT" where the redirections expire).

(...) but not that the internal and external ports need to be identical.

Right. Port preservation is not a strictly necessary part of Full Cone NAT. It's a nice feature though. I guess the technical classification would be "Full Cone NAT with port preservation".
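In code, port preservation is basically just a preference in the allocator - keep the LAN source port when it's free, otherwise pick another one. An illustrative sketch:

```cpp
#include <cstdint>
#include <optional>
#include <unordered_set>

// Prefer reusing the LAN source port as the external port; fall back
// to scanning the ephemeral range when it's taken.
std::optional<uint16_t> AllocateExternalPort(
    uint16_t lan_port, std::unordered_set<uint16_t>& in_use) {
  if (in_use.insert(lan_port).second) return lan_port;  // preserved
  for (uint32_t p = 49152; p <= 65535; ++p)             // ephemeral range
    if (in_use.insert(static_cast<uint16_t>(p)).second)
      return static_cast<uint16_t>(p);
  return std::nullopt;  // all ports exhausted
}
```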

(If you do have sufficient external IPs the Linux kernel can do Full Cone NAT by translating only the IP addresses and not the ports, via SNAT/DNAT prefix mapping. The part it lacks, for very practical reasons, is support for attempting to create permanent unique mappings from a larger number of unconstrained internal IP:port combinations to a smaller number of external ones.)

This is very cool indeed. I didn't know that. Thanks!

[–] maf@szmer.info 3 points 1 year ago

I understand that your goal is to learn something new.

In my opinion, ambitious, goal-oriented projects may either backfire or turn you into a legend. There will be many issues along the way and while they are all ultimately solvable, the difficulty may kill your motivation. Alternatively, if you manage to power through, then after some period of learning (potentially years) while keeping your fixation on a specific problem, you might emerge as a domain expert. Either way it's a risky bet.

If I might leave some advice for newcomers, it would be to learn how to perform some simple tasks & to focus on creating projects that you're confident can be built from things you already know. Over time you'll expand the repertoire of tasks you can perform, and therefore be able to build increasingly advanced projects.

[–] maf@szmer.info 28 points 1 year ago* (last edited 1 year ago) (2 children)

In terms of types of users I agree with what you're saying, but I also think there are some shades of gray in between. There are people who love to tinker and would manually configure every service on their router, compiling everything from scratch, reading manuals, understanding how things work (they'll probably choose dnsmasq, systemd-networkd or Grafana over Gatekeeper). In my experience this approach is pretty exciting for the first couple of years & then gradually becomes more and more troublesome. I think Gatekeeper's target audience is people who would like to take ownership of their network (and have some theoretical understanding) but don't want to dive fully down the rabbit hole and configure everything manually.

In terms of problem solved: I agree that Gatekeeper solves a similar problem. I think it's different from those projects because it tightly integrates all of the home gateway functions. While this goes against the Unix philosophy, I think it creates some advantages:

  1. Possibility of cross-cutting features.
  2. Better performance (lower disk usage, lower RAM usage, lower CPU load).
  3. Seamless integration.

Functions of home routers are conventionally spread out over many components (kernel & a bunch of independently developed userspace tools) which talk to each other. Whenever we want to create a cross-cutting feature (for example live traffic graphs) we must coordinate work between many components. We need to create kernel APIs to notify userspace apps about new traffic, create userspace apps to maintain a record of this traffic & a web interface to display it. It's difficult organizationally. In a monolith, where all code is in one place, such cross-cutting features can be developed with less friction.
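A toy example of what this buys (not Gatekeeper's actual code): in a single process, "live traffic graphs" can be a plain in-memory ring buffer touched by both the packet loop and the web UI handler - no new kernel API, no log files, no IPC:

```cpp
#include <array>
#include <cstdint>
#include <ctime>

// Per-second byte counters for the last 5 minutes. The web UI reads
// the same memory the packet loop writes. (Slot clearing on wrap-around
// is omitted for brevity; no atomics needed in a single-threaded design.)
struct TrafficRing {
  static constexpr int kSeconds = 300;
  std::array<uint64_t, kSeconds> bytes{};

  void OnPacket(std::time_t now, uint64_t size) {  // called by packet loop
    bytes[now % kSeconds] += size;
  }
  uint64_t BytesAt(std::time_t second) const {     // called by web UI handler
    return bytes[second % kSeconds];
  }
};
```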

From the performance point of view, the conventional approach is also less efficient. The tools must talk to each other, quite often through files (logs & databases). This wears down SSDs & causes CPU load that could otherwise be avoided. A tightly integrated monolith needs to write files only periodically (if ever) - because all data can be exchanged through RAM.

From the complexity standpoint, the conventional approach is also not great, because each of the tools needs to know how to talk to the others. This is usually arranged by the administrator, configuring every service according to its manual. When everything is built together as a monolith, things can "just work" and no configuration is necessary.

Edit: Please don't be offended by my verbosity. From your question I can see that you know this stuff already, but I'm also answering for the fresh "selfhosted" audience :)

[–] maf@szmer.info 3 points 1 year ago

There are a few proofs against the existence of God: the ineffectiveness of prayer, the impossibility of miracles under controlled conditions, and the biological nature of human cognition, which precludes life after death.

 

So I've been running self-hosted email using Mailu for a couple of months (after migrating out of Google Workspace). Today it turned out that although my server seems to be capable of sending and receiving emails, it also seems to be used by spammers. I stumbled upon this accidentally while looking through logs. It seems to have been going on the whole time (the first "unknown" access happened just a couple of hours after I set everything up).

While browsing the logs there were just so many crazy things happening. The incoming connections were coming through some kind of proxy built into Mailu, so I couldn't even figure out what their source IP was. I have no idea why they could send emails without authorization - the server was not a relay. Every spammy email also got the maximum spam score - which is great - but not very useful, since the SMTP agent ignored it and proceeded to send the email out anyway. Debugging was difficult because every service was running in a different container and they were all hooked up in a way that involved (in addition to the already mentioned proxy) bridges, virtual ethernet interfaces and a jungle of iptables-based NAT that was actually nft under the hood. Nothing in this architecture was documented anywhere - no network diagrams or anything - everything had to be inferred from the netfilter rulesets.

For some reason "docker compose" left some configuration mess behind during the "down" step and I couldn't "docker compose up" afterwards, which meant that every configuration change required a full OS reboot to apply. Finally, the server kept retrying to send the spammy emails for hours, so even after (hypothetically) fixing all the configuration issues, it would still have been impossible to tell whether they really were fixed, because the spammy emails submitted before the fix were already in the retry loop.

I have worked on obfuscation technologies and I'm honestly impressed by the state of email servers. I have temporarily moved back to Google Workspace but I'm still on the lookout for alternatives.

Do you know of any email server that could be described as simple? Ideally a single binary with sane defaults, similar to what dnsmasq is for DNS+DHCP?

 

I'd like to share the project that I've worked on for the past couple of weeks. I started it after finding out how professional routers (specifically Unifi) are managed and thinking that there should be similar open-source software for home networks.

In the near future I'd like to support automatic updates, interface auto-configuration, port redirection, UPnP, ad blocking and other functions that make home networks more transparent and easier to control.
