stardreamer

joined 1 year ago

I would argue that this is something that should be taught in every undergraduate Operating Systems course. But if posting it here benefits teens, self-taught hobbyists, and old-timers getting back into the field, so be it.

Someday all that waterfront property will become waterbottom property.

[–] stardreamer@lemmy.blahaj.zone 3 points 1 year ago* (last edited 1 year ago)

Some people play games to turn their brains off. Other people play them to solve a different type of problem than they do at work. I personally love optimizing, automating, and min-maxing numbers while doing the least amount of work possible. It's relatively low-complexity (compared to the bs I put up with daily), low-stakes, and much easier to show someone else.

Also shout-out to CDDA and FFT for having some of the worst learning curves out there along with DF. Paradox games get an honorable mention for their wiki.

[–] stardreamer@lemmy.blahaj.zone 2 points 1 year ago* (last edited 1 year ago)

It's a Chinese character. Pronounced jiǒng.

It was really only used as an emoticon in Asia during the 2010s, though.

[–] stardreamer@lemmy.blahaj.zone 39 points 1 year ago* (last edited 1 year ago) (1 children)

The argument is that processing data physically "near" where it is stored (known as near-data processing, or NDP, in contrast to traditional architectures, where data is stored off-chip) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

Personally, I'd say traditional computer architecture isn't going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS master's level of understanding of the architecture to write a program in the P4 language (which doesn't allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it's worthless if most programmers on the job market can't work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it's just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it. This way, your standard 9-to-5 programmer can still work like they used to, and leave the fancy performance optimization stuff to a few experts.
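To make the "libraries detect and offload" idea concrete, here's a toy Python sketch of that dispatch pattern. Everything here is made up for illustration (`dma_available`, `dma_copy`, etc. are hypothetical names, not a real API); real versions of this live inside vendor libraries, not application code:

```python
# Hypothetical sketch: the caller always uses copy_buffer(); the
# library probes for an accelerator at runtime and silently falls
# back to a plain CPU copy when none is present.

def dma_available():
    # Imaginary probe. A real library would enumerate platform
    # devices (e.g. a DMA/data-mover engine on the DIMM).
    return False

def dma_copy(src):
    # Placeholder for the offloaded path on a data-mover engine.
    raise NotImplementedError("no DMA engine on this machine")

def cpu_copy(src):
    # Plain CPU fallback.
    return bytearray(src)

def copy_buffer(src):
    """Copy src, offloading to a DMA engine if one exists."""
    if dma_available():
        return dma_copy(src)
    return cpu_copy(src)

result = copy_buffer(b"hello accelerator")
```

The point is that the application-facing call never changes; only the backend selected underneath it does.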

Also, if the router blocks ICMP for some reason, you can always manually send an ARP request and check the response latency.

[–] stardreamer@lemmy.blahaj.zone 19 points 1 year ago* (last edited 1 year ago)

So let me get this straight: you want other people to maintain, for free, a project that you yourself think is a hassle, while also expecting the same level of professionalism as a 9-to-5 job?

And that's fine. Plenty of authors are great at writing the journey and terrible at writing endings. And from what we've gotten so far at least he now knows what not to do when writing an ending.

I'm saying that network traffic is exploding exponentially. Sure, right now 20GbE is enough, but in two years? Four? It's not the throughput per device that's increasing, it's the number of networked devices. For a family (or several college students) that's into stuff like this, it's possible they've already reached the peak capacity of 10GbE. I do agree that it's way too expensive, though.

That being said, I'm personally very happy running all of my stuff off of 1GbE. But then again, I don't like IoT devices (despite working in an adjacently related field), nor do I torrent.

[–] stardreamer@lemmy.blahaj.zone 4 points 1 year ago* (last edited 1 year ago) (4 children)

20 Gig is nowhere near what current cloud data centers are using. Most existing infra has at least 100Gbps NICs; state of the art right now is 800Gbps. Your 20Gbps enterprise server might be enough for bare-metal AD, but once you add µs-tail-latency network storage and all the other fancy stuff, you'll need way more than that. Doubly so for HPC, ML, and other data-heavy workloads. Existing links can already see multiple terabits of aggregate throughput, so it wouldn't be surprising if someone set up a bunch of HD cameras, streaming, torrenting, etc. at their house generating traffic 24/7, just because they thought it was a fun thing to do.

For the gateway switch power draw, I can think of an off-the-shelf software switching solution at 75W, and that's for 100Gbps. A 20Gbps ASIC switch would be a lot less power-hungry than that. If you're willing to go experimental, here's a theoretical 400Gbps SmartNIC design that runs at 7W; all you need to do is write a basic L3 switching program with NAT and it should all work.
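For a sense of what "a basic L3 switching program with NAT" actually has to track, here's a toy Python sketch of source-NAT bookkeeping. On a P4 target or SmartNIC the same lookups become hardware match-action tables; the address, class name, and port range below are all invented for illustration:

```python
# Toy source-NAT table: outbound flows get a unique public port,
# inbound packets are mapped back to the original host and port.

PUBLIC_IP = "203.0.113.1"  # made-up public address (TEST-NET-3)

class Nat:
    def __init__(self, first_port=40000):
        self.next_port = first_port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_outbound(self, src_ip, src_port):
        """Rewrite an outgoing flow's source to PUBLIC_IP:mapped_port."""
        key = (src_ip, src_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out[key]

    def translate_inbound(self, public_port):
        """Map a reply arriving on a public port back to the LAN host."""
        return self.back.get(public_port)

nat = Nat()
pub = nat.translate_outbound("10.0.0.5", 12345)
lan = nat.translate_inbound(pub[1])
```

The P4 restrictions mentioned above (no loops, no recursion) are exactly why this shape works there: every packet is one or two fixed table lookups, nothing iterative.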

[–] stardreamer@lemmy.blahaj.zone 6 points 1 year ago* (last edited 1 year ago) (7 children)

Because this wouldn't be targeted at a single device/connection. This is for a household of 5+ people streaming 4K, running servers, and keeping cloud-connected (yada yada) IoT devices running simultaneously.

It's the hobbyist tier. It's like asking someone "why do you ever need more than one cast iron pan" when they're into cast iron pan collecting.
