technocrit@lemmy.dbzer0.com 69 points 1 month ago (last edited 1 month ago)

This is an extremely misleading headline.

From the abstract:

> ... applying the L-Mul operation in tensor processing hardware can potentially reduce 95% energy cost by element-wise floating point tensor multiplications and 80% energy cost of dot products.

In other words: this method could save 95% of the energy spent on element-wise floating-point multiplications (and 80% of the energy spent on dot products), not 95% of total energy.

It's potentially an improvement, but I don't see any analysis of how it would affect total energy. If floating-point multiplications were, hypothetically, a third of a model's total energy budget, a 95% cut there would trim the total by roughly 32%.
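
For anyone curious what the operation actually does: as I read the paper, L-Mul replaces the mantissa multiplication inside a floating-point multiply with additions plus a small constant correction term. Below is a rough Python sketch of that idea; the float decomposition, the fixed offset l = 4, and the skipped rounding/overflow handling are my simplifications, not the paper's hardware formulation.

```python
import math

def l_mul(x: float, y: float, l: int = 4) -> float:
    """Approximate x * y without multiplying mantissas, in the
    spirit of the paper's L-Mul: add mantissa fractions and
    exponents, plus a correction term 2**-l. Simplified sketch;
    the paper works on raw bit fields and picks l from the
    mantissa width."""
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    x, y = abs(x), abs(y)
    # Decompose each value as (1 + f) * 2**e with f in [0, 1).
    mx, ex = math.frexp(x)  # x = mx * 2**ex, with mx in [0.5, 1)
    my, ey = math.frexp(y)
    fx, fy = 2.0 * mx - 1.0, 2.0 * my - 1.0
    e = (ex - 1) + (ey - 1)
    # The exact mantissa product would be (1 + fx) * (1 + fy)
    # = 1 + fx + fy + fx*fy; L-Mul drops the fx*fy cross term
    # and substitutes the constant 2**-l, so no mantissa
    # multiplication is ever performed.
    return sign * (1.0 + fx + fy + 2.0 ** -l) * 2.0 ** e

for a, b in [(3.14, 2.72), (0.11, 8.5), (-1.5, 40.0)]:
    print(f"exact {a * b:.4f}  l_mul {l_mul(a, b):.4f}")
```

On these inputs the approximation lands within about 1-7% of the true product, which is the paper's pitch: additions are far cheaper in silicon than mantissa multiplications.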

AndrasKrigare@beehaw.org 11 points 1 month ago

I'd say it's not just misleading but incorrect if the headline says "integer" when the operands are actually floats.
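
For what it's worth, my guess is the "integer" in the headline refers to the mechanism rather than the operands: the float multiply gets replaced by integer-style additions on the floats' bit fields. The classic bit-pattern trick below illustrates that mechanism; it is not the paper's exact L-Mul (which adds a correction term), and the bias constant and test values here are just for illustration.

```python
import struct

BIAS = 127 << 23  # IEEE-754 float32 exponent bias, shifted into
                  # its position in the bit pattern

def f32_bits(x: float) -> int:
    """Raw IEEE-754 float32 bit pattern as an unsigned int."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_f32(b: int) -> float:
    return struct.unpack("<f", struct.pack("<I", b))[0]

def mul_by_int_add(x: float, y: float) -> float:
    """Approximate x * y (for x, y > 0) with one integer addition:
    the exponents add exactly, and the mantissa fractions add as a
    first-order approximation of (1 + fx) * (1 + fy)."""
    return bits_to_f32(f32_bits(x) + f32_bits(y) - BIAS)

print(2.0 * 2.0, mul_by_int_add(2.0, 2.0))  # 4.0 vs 4.0 (exact here)
print(3.0 * 5.0, mul_by_int_add(3.0, 5.0))  # 15.0 vs 14.0
```

So the inputs and outputs are floats, as you say; the addition just happens on their bit patterns reinterpreted as integers.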

IrritableOcelot@beehaw.org 5 points 1 month ago

Good point. Though the vast majority of ML training and inference is floating-point tensor math: largely dot products and matrix multiplications, among other tensor operations.
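
To make that concrete, here's a toy example (hypothetical shapes, not from the paper) showing that every element of a matrix product is a dot product: a run of element-wise float multiplications followed by additions, which are exactly the operations the 95% and 80% figures refer to.

```python
import numpy as np

A = np.random.rand(4, 3).astype(np.float32)
B = np.random.rand(3, 5).astype(np.float32)

# Naive matmul, spelled out: each output element is one dot product.
C = np.empty((4, 5), dtype=np.float32)
for i in range(4):
    for j in range(5):
        # One float multiplication per element pair, then additions;
        # the A[i, k] * B[k, j] products are what L-Mul would replace.
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(3))

assert np.allclose(C, A @ B, atol=1e-5)
```

Cheapening that inner multiplication therefore touches roughly half the FLOPs in a transformer's linear layers and attention scores, which is why the per-operation savings are interesting even before anyone measures the whole-system effect.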