Doom was officially ported to Linux in 1994, and a modified version of Linux Doom was made source-available in 1997, then open-source (GPLv2) in 1999. It was one of the first high-quality open-source games. Those versions don't run on current Linux distros, but they enabled modern source ports such as PrBoom+ and Chocolate Doom, which are available in nearly every distro's repository.
The website claims that sponsors have no direct influence on the project ("board seats are not for sale"). The reality is that no project of sufficient scale to fully implement web standards can survive without a significant amount of funding.
These are the mixes of the Federation DJ Enterprise. His five-hour mission: to spin strange new records, to seek out new sounds and new labels, to boldly crate-dig where no DJ has dug before. (disco Alexander Courage theme plays)
It's very new. Previously the system would just drop to a console with a message saying "Kernel panic: not syncing: [reason]" and a whole bunch of debug info.
But still, on a well-maintained system that pretty much never happens, mainly because Linux is significantly more resilient to faults in device drivers than Windows.
I'm against a megathread. That would be too busy and I think there will be more than enough to discuss about each episode.
For entirely selfish reasons, I'd like individual discussion threads for each episode, posted one or two a day, since that's the pace I expect to watch it at (optimistically).
Though, I think the best option for everyone might be five-episode blocks. That would let both bingewatchers and slower viewers enjoy the conversation without spamming the feed, and it would match up well enough with the "parts" the show would have been split into if it had aired on Nickelodeon that both broad and individual-episode discussions would make sense.
Yeah, 50% (ram / 2) seems about right.
The major tradeoff with zRAM is that programs are more likely to crash from running out of memory, but in exchange the system runs faster when memory is low and freezes are less likely. You can think of it as taking the pressure that traditional swap puts on your disk and moving it onto the (much faster) CPU. There is some impact on CPU usage, but not enough to cause noticeable slowdown; in my experience running Linux, the CPU is almost never the reason something is slow, and it's only going to be under significant pressure if you're running a 3D game in software rendering, compiling a large program, or doing some other CPU-bound task.
I wouldn't recommend making the switch unless you often encounter system freezes or slowness while running tasks that use a lot of RAM (like web browsing on certain sites, or gaming), but it will improve things in that case.
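If you do decide to try it, and your distro packages systemd's zram-generator (Fedora ships it by default), the whole setup is one small config file. A minimal sketch; the values are just the commonly suggested defaults:

```
# /etc/systemd/zram-generator.conf
[zram0]
# "ram / 2" matches the 50% rule of thumb above
zram-size = ram / 2
compression-algorithm = zstd
```

After a reboot (or a `systemctl daemon-reload` followed by starting `systemd-zram-setup@zram0.service`), `zramctl` should show the device.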
You can install an antivirus, but you really don't need to. Malware for Linux is rare, and malware that targets desktop Linux users is extremely rare (to the point that it's a newsworthy story every time it does appear). Most distros have ClamAV and its graphical frontend ClamTk in their repos, but ClamAV is primarily used on servers to catch Windows malware before it reaches its intended target. Some Windows malware can still be harmful if run with Wine/Proton, but unless you're downloading and running a lot of Windows software from unofficial sources (which you shouldn't have any reason to do), that won't be a risk.
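If you want to run a scan anyway, ClamAV is easy to use from the command line; a quick sketch, with ~/Downloads standing in for whatever directory you want to check:

```
sudo apt install clamav       # or your distro's equivalent
sudo freshclam                # update signatures (on some distros a service already does this)
clamscan -r --infected ~/Downloads   # recursive scan, print only infected files
```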
I'm using an AMD Ryzen iGPU on Wayland. Support already existed in Stable, but the kernel and Mesa versions there were buggy for my particular GPU, so I switched to Testing rather than mix in newer packages and make a FrankenDebian.
It's not systemd's fault, though systemd is what most often implements offline updates. The arguments for and against them have nothing to do with systemd.
A lot of Linux distros, and graphical package managers like Discover and GNOME Software, are moving in that direction. The argument is that updating a running system can disrupt the software being updated, in the worst case the package manager itself (which can brick the system if that happens in the middle of a critical update), and that updates don't take effect until the affected program restarts anyway (or the whole system, in the case of critical components like the kernel). Fedora Magazine explains the reasoning here: https://fedoramagazine.org/offline-updates-and-fedora-35/
In my personal experience, though, I have never had an issue enabling automatic online updates on Debian Stable, and I've had computers stay online for several months without any noticeable problems beyond Firefox needing a restart. The risk is there, but it's pretty minor.
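For anyone who wants the same setup, automatic online updates on Debian are handled by the unattended-upgrades package; a minimal sketch of the usual steps:

```
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# The second command writes /etc/apt/apt.conf.d/20auto-upgrades, enabling
# daily package-list refreshes and upgrades. By default only security
# updates are applied; edit /etc/apt/apt.conf.d/50unattended-upgrades
# if you want to widen that.
```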
Depends on your desktop environment. Look for an "autostart" or "startup applications" setting. If you're on KDE, this could also be caused by "Restore previous session" under Settings -> Startup and Shutdown -> Desktop Session.
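If the GUI doesn't reveal the culprit, the underlying XDG autostart entries are plain .desktop files you can inspect or delete; a quick sketch:

```
ls ~/.config/autostart/ /etc/xdg/autostart/
# Each .desktop file in these directories is launched at login. Delete the
# user-level ones you don't want, or copy a system-wide entry into
# ~/.config/autostart/ and add "Hidden=true" to disable it for your user.
```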
Bit confused about what you're looking for. If you're just SSHing or VNCing into devices on the same local network, then you can simply use their local IP addresses, which you can find with a command like `ip addr` and which will rarely change, or their hostnames if your network is configured properly. There are several GUIs that can remember connection info for you, so you'll likely only need to look it up once. It's also quite easy to scan the local network for SSH servers if you have nmap (`nmap -p22 <your IP address range, e.g. 192.168.0.1/24>`).

If you need to connect to a device on your home network from a different network, any VPN software can achieve that. I'm not aware of any remote desktop solution that doesn't require a network connection, but your network doesn't necessarily need to be connected to the Internet. Are you looking for a GUI that combines all those things?
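And if you'd rather not use a GUI for that, OpenSSH itself can remember connection details in ~/.ssh/config; a sketch with made-up names and addresses:

```
# ~/.ssh/config  (the host alias, address, and user are made up)
Host pi
    HostName 192.168.0.42
    User alice
# after this, "ssh pi" connects with those details, and scp/sftp
# and most SSH-based GUIs understand the alias too
```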