this post was submitted on 02 Aug 2024
1484 points (98.4% liked)

Science Memes

[–] CheeseNoodle@lemmy.world 20 points 3 months ago (3 children)

IIRC it recently turned out that the whole "black box" thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

[–] Johanno@feddit.org 6 points 3 months ago

Well, in theory you can explain how the model comes to its conclusion. However, I'd guess that only 0.1% of "AI engineers" are actually capable of that. And those probably cost 100k per month.
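For what "explaining the conclusion" can look like in the simplest case: one common family of techniques is gradient-based attribution, where you ask how sensitive the model's score is to each input feature. A minimal sketch with NumPy, using a hypothetical one-layer model (the weights and input here are made up for illustration):

```python
import numpy as np

# Hypothetical one-layer "model": y = sigmoid(w . x + b).
# For a model this simple, the gradient of the score with respect to
# each input feature is one basic form of explanation: it says how
# strongly each feature pushed the output.
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # made-up weights
b = 0.1
x = rng.normal(size=4)   # made-up input

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = sigmoid(w @ x + b)

# dy/dx_i = sigmoid'(z) * w_i = y * (1 - y) * w_i
saliency = y * (1.0 - y) * w

# Features with the largest |saliency| contributed most to the score.
ranking = np.argsort(-np.abs(saliency))
print(ranking)
```

Real networks stack many nonlinear layers, which is exactly why this gets hard and expensive: the gradient story stops being this clean, and you need heavier attribution methods and people who actually understand them.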

[–] Atrichum@lemmy.world 6 points 3 months ago (1 children)
[–] CheeseNoodle@lemmy.world 13 points 3 months ago

This one's from 2019: Link
I was a bit off the mark: it's not that the models they use aren't black boxes, it's just that they could have been made interpretable from the beginning and they chose not to, likely due to liability.

[–] Tryptaminev@lemm.ee 4 points 3 months ago

It depends on the algorithms used. The lazy approach these days is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that solve many problems well and give you interpretable decisions.
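As a toy illustration of what "interpretable" means here: a decision stump (a one-split decision tree) learns a rule a human can read directly, unlike a network's weight soup. A minimal sketch on made-up 1-D data:

```python
import numpy as np

# Toy data: small values are class 0, large values are class 1.
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])

# Fit a decision stump: try thresholds between adjacent points and
# keep the one with the fewest misclassifications.
best_t, best_err = None, np.inf
for t in (x[:-1] + x[1:]) / 2:
    err = np.sum((x > t).astype(int) != y)
    if err < best_err:
        best_t, best_err = t, err

# The entire model is a single human-readable rule.
print(f"rule: predict 1 if x > {best_t}")  # prints: rule: predict 1 if x > 6.5
```

The point isn't that a stump replaces a neural network; it's that for many tabular problems, tree- or rule-based models are both cheaper to run and give you a decision path you can actually audit.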