this post was submitted on 15 Nov 2024

Futurology

[–] RedstoneValley@sh.itjust.works 1 points 6 minutes ago

Can someone point me to technical/learning resources about NPUs? So far all I have seen is superficial marketing talk and ads. And on top of that, everything existing in the AI/ML sector still seems to require beefy server hardware. So is there any real point to NPUs at all?

[–] MyOpinion@lemm.ee 1 points 31 minutes ago

I will block the AI on any computer I use.

[–] wizardbeard@lemmy.dbzer0.com 5 points 2 hours ago

I expect a decent share of these sales are just people who don't care, replacing their PCs.

I'd be shocked if any statistically significant share of these purchases were driven by the "ai features".

[–] hendrik@palaver.p3x.de 7 points 3 hours ago

What kind of AI workloads are these NPUs good at? It can't be most generative AI like LLMs, since those are mainly limited by memory bandwidth, and at that point it doesn't really matter whether you have an NPU, GPU or CPU... You first need lots of fast RAM and a wide interface to it.
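That memory-bound claim can be sanity-checked with back-of-the-envelope math: during token-by-token generation every weight is read once per token, so throughput tops out at roughly bandwidth divided by model size. A rough sketch (all bandwidth and model figures are illustrative, not measured):

```python
# Back-of-the-envelope: generation speed for a memory-bandwidth-bound LLM.
# Every parameter is read once per generated token, so the ceiling is
# roughly bandwidth / model_size. All numbers below are illustrative.

def tokens_per_second(bandwidth_gb_s: float, params_billion: float,
                      bytes_per_param: float) -> float:
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# 7B-parameter model, 4-bit quantized (~0.5 bytes per parameter):
for name, bw in [("dual-channel DDR5", 80),
                 ("wide SoC memory bus", 400),
                 ("discrete GPU", 1000)]:
    print(f"{name}: ~{tokens_per_second(bw, 7, 0.5):.0f} tokens/s")
```

The point of the sketch: whatever does the multiplication (CPU, GPU, or NPU), the tokens-per-second ceiling moves only when the memory bandwidth does.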

[–] MudMan@fedia.io 12 points 3 hours ago (1 children)

This is such a hilarious bit of branding nonsense. There is no such thing as "AI PCs".

I mean, I technically own one, in that the branding says I do and it has a Copilot button, but... well, that's definitely not why I purchased it, and I don't think I've used an "AI feature" on it. I'm not even opinionated against them; I have run local LLMs on my other computers. It's just not a good application for the device I own that is specifically branded for "AI".

The stupidity of it is that my "AI device" is an ARM device, and I absolutely love the things ARM Windows does that are actually useful. I pulled up my old x64 device that I used before I got this and man, the speed of Windows Hello, how much better it handles video streams, the efficiency... I'd never go back for a portable device at this point.

But the marketing says it's "AI", so once people start telling each other that "AI PCs" are bad and new AMD and Intel "AI" CPUs are released it's anybody's guess how the actually useful newer Windows ARM devices will fare.

I'm still hoping that the somewhat irrational anger towards "AI" stuff subsides so we can start talking about real features now, because man, this has been a frustrating generation to parse for portable Windows devices, and we still have Android, iOS and Mac devices coming down the pipe with similar branding nonsense.

[–] jeeva@lemmy.world 3 points 2 hours ago (1 children)

I think this is referring to machines that come with APUs that have enough tensor cores (or whatever the equivalent is) to provide a certain baseline of "AI" operations per second, so Microsoft et al. can rely on that for Recall etc. without users finding it slow.

Amusingly, the first generation of these didn't really have enough, so they're truly unwanted.

[–] MudMan@fedia.io 0 points 33 minutes ago

Yeah, my problem is that this was made to coincide with the Snapdragon Windows PCs, which are really good at a bunch of stuff and specifically not good at NPU performance, so the result of the "AI" branding ends up being really disappointing.

We could talk about all the other growing pains and the ways those devices were covered, but the obsessive focus on "AI" certainly didn't help, as demonstrated by the bizarre reporting linked in the OP.

[–] dax@feddit.org 5 points 3 hours ago* (last edited 3 hours ago)

It isn't that surprising... they are shipping AI chips, but there is almost no software that makes use of them yet. Unless people require them for a certain use case, why bother?

Reminds me a bit of "5G ready" phone contracts, just meaningless marketing. Don't need to chase every hype.

[–] SomeGuy69@lemmy.world 4 points 3 hours ago

Marketing ruining it for themselves. Good job.

[–] Piatro@programming.dev 17 points 4 hours ago

Just saying "built on AI" or whatever isn't a convincing sales pitch. What can I actually do with AI that will improve my day-to-day life? Not a single advert or pitch has told me a single use case that applies to what anyone would use a personal computer for. And they're too risky to buy for employees in a work environment unless you can afford to be the guinea pig for this unproven line of hardware: I know a ThinkPad will last 10 years, but I have no idea how long a Copilot PC will last, or how often I'd need to replace the battery or RAM or anything else. I'm aware of tech, I know what these laptops are, but as far as I can see the market for them just does not exist, and I don't understand why anyone would think otherwise.

[–] AceFuzzLord@lemm.ee 6 points 4 hours ago

Have they tried forcing people to upgrade to AI PCs in order to receive security updates, by checking to see if your PC is an AI PC? You know, just to prove people really want AI PCs?

/s

[–] JohnDClay@sh.itjust.works 11 points 5 hours ago (3 children)

And it's hard to tell what the difference is. Apple's 'built from the ground up for AI' chips just have more RAM. What's the difference with these CPUs? Do they just have more onboard graphics processing that can also be used for matrix multiplication?

[–] MudMan@fedia.io 7 points 3 hours ago (1 children)

The stupid difference is supposed to be that they have some tensor math accelerators like the ones that have been on GPUs for three generations now. Except they're small and slow and can barely run anything locally, so if you care about "AI" you're probably using a dedicated GPU instead of an "NPU".

And because local AI features have been largely useless, so far there is no software that will, say, take advantage of NPU processing for stuff like image upscaling while using the GPU tensor calculations for in-game raytracing or whatever. You're not even offloading any workload to the NPU when you're using your GPU, regardless of what you're using it for.

For Apple stuff where it's all integrated it's probably closer to what you describe, just using the integrated GPU acceleration. I think there are some specific optimizations for the kind of tensor math used in AI as opposed to graphics, but it's mostly the same thing.

[–] JohnDClay@sh.itjust.works -3 points 3 hours ago (1 children)

Seems silly to try to get the CPU to do GPU stuff; just upgrade the GPU.

[–] MudMan@fedia.io 4 points 3 hours ago (1 children)

The idea is having tensor acceleration built into SoCs for portable devices so they can run models locally on laptops, tablets and phones.

Because, you know, server-side ML model calculations are expensive, so offloading compute to the client makes them cheaper.

But this gen can't really run anything useful locally so far, as far as I can tell. Most of the demos during the ramp-up to these were thoroughly underwhelming and nowhere near what you get from server-side services.

Of course they could have just called the "NPU" a new GPU feature and make it work closer to how this is run on dedicated GPUs, but I suppose somebody thought that branding this as a separate device was more marketable.

[–] TheBat@lemmy.world 0 points 1 hour ago (1 children)

EU should introduce regulation that prohibits client-side AI/ML processing for applications that require internet access. Show the cost upfront. Let's see how many people pay for that.

[–] MudMan@fedia.io 1 points 35 minutes ago

That is a weird proposal.

It's definitely weird that everyone is panicking about data center processing costs but not about the exact same hardware powering high-end gaming devices, whose power draw has skyrocketed from 100 W to 450 W in a few years. But ultimately, if you want to run a model locally, you can run a model locally. I'm not sure how you'd regulate that; it's just software.

Hell, I don't even think distributing the load is a terrible idea, it's just that the models you can run locally in 40 TOPS kinda suck compared to the order of magnitude more processing you get on modern GPUs.

[–] hendrik@palaver.p3x.de 2 points 3 hours ago (1 children)

The Apple chips also have a wide interface to the RAM. That means you can run chatbots (LLMs) and other AI workloads that are memory-bound at crazy speeds compared to an Intel (or AMD) computer.

[–] JohnDClay@sh.itjust.works 3 points 2 hours ago (1 children)

Really? How fast is the memory bus compared to x86? And did they just double the bus bandwidth by doubling the memory?

I'm dubious, because they only now went to 16 GB of RAM as the base configuration, which has been standard on x86 for almost a decade.

[–] hendrik@palaver.p3x.de 2 points 2 hours ago* (last edited 2 hours ago)

Depending on the chip, they have somewhere from 100 to 400 GB/s. I'm not sure of the numbers on Intel processors; I think the consumer processors have about 50-80 GB/s (~Alder Lake, dual-channel DDR5). Mine seems to have way less. And a recent GPU will be somewhere in the range of 400 to 1000 GB/s. But consumer graphics cards stop at 24 GB of VRAM, and those flagship models are super expensive, even compared to Apple products.

The people from the llama.cpp project did some measurements, and I believe Apple's "Metal" framework seems to outperform the x86 computers by an order of magnitude or so. I'm not sure; it's been some time since I skimmed the discussions on their GitHub page.
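Those bandwidth figures follow directly from the memory interface: peak bandwidth is the transfer rate times the bus width. A quick sketch (the configurations below are illustrative, not exact product specs):

```python
# Peak memory bandwidth = transfer rate (MT/s) * bus width (bytes).
# Configurations are illustrative examples, not exact product specs.

def peak_bandwidth_gb_s(transfers_mt_s: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return transfers_mt_s * 1e6 * bytes_per_transfer / 1e9

# Typical dual-channel DDR5-5600 desktop: 128-bit bus -> ~90 GB/s
print(peak_bandwidth_gb_s(5600, 128))
# A wide 512-bit LPDDR5-6400 interface (Apple-style SoC) -> ~410 GB/s
print(peak_bandwidth_gb_s(6400, 512))
```

Same memory technology, quadruple the bus width: that is essentially where the gap between a typical x86 desktop and a wide-interface SoC comes from.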

[–] synapse1278@lemmy.world 3 points 4 hours ago (1 children)

Basically yes. They come with an NPU (Neural processing unit) which is hardware acceleration for matrix multiplications. It cannot do graphics. Slap whatever NPU into the chip, boom: AI laptop!
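Concretely, the workload an NPU accelerates is dense matrix math: a neural-net layer is essentially a matrix multiply plus a simple nonlinearity. A minimal sketch (shapes and values are arbitrary, just to show the operation):

```python
import numpy as np

# One fully connected layer: y = relu(W @ x + b).
# The W @ x matrix multiply is the operation NPUs (and GPU tensor
# cores) are built to accelerate. Shapes here are illustrative.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)        # input activations
W = rng.standard_normal((256, 512)) # layer weights
b = rng.standard_normal(256)        # bias
y = np.maximum(W @ x + b, 0.0)      # ReLU activation
print(y.shape)  # (256,)
```

Running billions of these multiply-accumulates per inference is the whole job, which is why a dedicated matmul block with no graphics hardware attached can still be sold as an "AI" accelerator.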

[–] JohnDClay@sh.itjust.works 2 points 4 hours ago (1 children)

Matrix multiplication is also largely what graphics cards do, I wonder how the npus are different.

[–] synapse1278@lemmy.world 1 points 3 hours ago (1 children)

Modern graphics cards pack a lot of functionality: shading units, ray tracing, video encoding/decoding. An NPU is just the part needed to accelerate neural nets.

[–] JohnDClay@sh.itjust.works 2 points 3 hours ago

But you can accelerate neural nets better with a GPU, right? They've got a lot more parallel matrix multiplication compute than any NPU you can slap on a CPU.

[–] chaosCruiser@futurology.today 6 points 5 hours ago* (last edited 5 hours ago) (1 children)

"However, if it is performance you are concerned about, 'it's important to note that GPUs still far outperform NPUs in terms of raw performance,' Jessop said, while NPUs are more power-efficient and better suited for running perpetually."

Ok, so if you want to run your local LLM on your desktop, use your GPU. If you’re doing that on a laptop in a cafe, get a laptop with an NPU. If you don’t care about either, you don’t need to think about these AI PCs.

[–] JohnDClay@sh.itjust.works 1 points 2 hours ago

Or use a laptop with a GPU? An NPU seems to just be slightly upgraded onboard graphics.

[–] Lugh@futurology.today 9 points 6 hours ago

It's a no from me. I suspect as the US gets more deregulated for AI, it will be more no's from people around the world.