this post was submitted on 04 Aug 2024
752 points (97.8% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] Eheran@lemmy.world 0 points 3 months ago (1 children)

But it is like magic...? I copy in a bunch of tables etc. from a datasheet and get out code to read and write the EEPROM. I use that to read out the content of the old BMS and flash a new chip with it. The battery is now working again, after the BMS had a hardware fault in its ADC that destroyed the previous pack of cells.
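
(For illustration, a minimal sketch of the kind of EEPROM-dump code being described, assuming a generic 24xx-style I2C EEPROM at address 0x50 with 16-bit addressing; the real BMS chip, address, and register map would differ.)

```cpp
// Dump the contents of an I2C EEPROM over serial so it can be backed up
// and later written to a fresh chip. Address 0x50 and 16-bit addressing
// are assumptions for a generic 24xx-style part; a real BMS EEPROM may
// use a different address, size, or protocol.
#include <Wire.h>

const uint8_t EEPROM_ADDR = 0x50;   // hypothetical 7-bit I2C address
const uint16_t EEPROM_SIZE = 4096;  // hypothetical capacity in bytes

uint8_t readByte(uint16_t addr) {
  Wire.beginTransmission(EEPROM_ADDR);
  Wire.write((uint8_t)(addr >> 8));    // high address byte
  Wire.write((uint8_t)(addr & 0xFF));  // low address byte
  Wire.endTransmission(false);         // repeated start, keep the bus
  Wire.requestFrom(EEPROM_ADDR, (uint8_t)1);
  return Wire.available() ? Wire.read() : 0xFF;
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  for (uint16_t i = 0; i < EEPROM_SIZE; i++) {
    uint8_t b = readByte(i);
    if (b < 0x10) Serial.print('0');   // keep two hex digits per byte
    Serial.print(b, HEX);
    Serial.print(' ');
    if ((i + 1) % 16 == 0) Serial.println();
  }
}

void loop() {}
```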

I ask for a simple frontend and I get exactly that. Now I can program ESP32s and control them perfectly via a browser. No more shit interface with some touch pins.
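
(A rough sketch of what "control via a browser" can look like on an ESP32, assuming the Arduino-ESP32 core; the WiFi credentials are placeholders and an LED on GPIO 2 stands in for the real frontend.)

```cpp
// Serve a tiny web page from the ESP32 and toggle a pin from the browser.
// SSID, password, and the pin choice are placeholders for illustration.
#include <WiFi.h>
#include <WebServer.h>

const char* SSID = "your-ssid";      // placeholder
const char* PASS = "your-password";  // placeholder
const int LED_PIN = 2;

WebServer server(80);

void handleRoot() {
  // Tiny inline frontend: two links that toggle the pin.
  server.send(200, "text/html",
              "<a href=\"/on\">ON</a> <a href=\"/off\">OFF</a>");
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  WiFi.begin(SSID, PASS);
  while (WiFi.status() != WL_CONNECTED) delay(250);  // wait for WiFi

  server.on("/", handleRoot);
  server.on("/on", []() { digitalWrite(LED_PIN, HIGH); handleRoot(); });
  server.on("/off", []() { digitalWrite(LED_PIN, LOW); handleRoot(); });
  server.begin();
}

void loop() {
  server.handleClient();  // serve requests from the browser
}
```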

I ask for code to run a simulation of heat transfer... and after some back and forth, that is exactly what I get.
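
(A rough sketch of what such a simulation could look like: the 1D heat equation solved with an explicit finite-difference scheme. The material properties and grid values are made-up illustration numbers.)

```cpp
// Explicit finite-difference solution of dT/dt = alpha * d2T/dx2 on a
// 1D rod, with one hot boundary. All numbers are illustrative only.
#include <cstdio>
#include <vector>

int main() {
  const int N = 50;            // grid points
  const double alpha = 1e-4;   // thermal diffusivity [m^2/s] (assumed)
  const double dx = 0.01;      // grid spacing [m]
  const double dt = 0.2;       // time step [s]; keeps alpha*dt/dx^2 < 0.5
  const double r = alpha * dt / (dx * dx);

  std::vector<double> T(N, 20.0), Tn(N);  // start at 20 C everywhere
  T[0] = 100.0;                           // hot boundary on the left

  for (int step = 0; step < 1000; ++step) {
    Tn = T;  // boundaries stay fixed because only interior points update
    for (int i = 1; i < N - 1; ++i)
      Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
    T = Tn;
  }

  for (int i = 0; i < N; ++i)
    std::printf("x=%.2f m  T=%.1f C\n", i * dx, T[i]);
  return 0;
}
```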

What will it be able to give me in 5 years when it is already like magic now?

[–] vrighter@discuss.tchncs.de 8 points 3 months ago (1 children)

Nothing. It can only do that because someone already did it and it was in the training set.

[–] Eheran@lemmy.world -3 points 3 months ago (4 children)

Stop spreading this. It clearly comes up with original things and does not just copy and paste existing stuff. As my examples should have told you, there is no such stuff on the Internet about programming some specific BMS from >10 years ago with a non-standard I2C interface.

Amazing how anti people are, even downvoting my clearly positive applications of this tool. What is your problem with me using it to save time? Some things are only possible because of the time savings to begin with. I am not going to freaking learn HTML for a one-off sunrise alarm interface.

[–] conciselyverbose@sh.itjust.works 3 points 3 months ago (1 children)

It "comes up with original things" unless working matters in any way.

If the information weren't in its training set, there would be no possibility of it giving you anything useful.

[–] Eheran@lemmy.world -3 points 3 months ago (1 children)

Ah, there is "no possibility". So unlike everyone else so far, you completely understand how LLMs work, and as such an expert you can say this. Amazing that I find such a world-leading expert here on Lemmy, but this expert does not want a Nobel prize and instead just corrects random people on the Internet. Thank you so much.

[–] conciselyverbose@sh.itjust.works 3 points 3 months ago

For it to do something both novel and correct, it would require comprehension.

Nothing resembling comprehension in any way is part of any LLM.

[–] vrighter@discuss.tchncs.de 1 points 3 months ago* (last edited 3 months ago) (1 children)

When LLMs were asked simple maths questions from around the time they were trained, they got them all right. When they were specifically prompted with questions that provably were published a year later, at the same difficulty, they got 100% wrong answers.

You'd be amazed at what one can find on the internet and just how much scraping they did to gather it all. 10 years ago is quite recent. Why wouldn't there be documentation (regardless of whether you managed to find it)? If it's non-standard, then I would expect something specifically about it somewhere in the training set, whereas the standard-compliant stuff wouldn't need a specific make and model to be mentioned.

[–] Eheran@lemmy.world 2 points 3 months ago (1 children)

GPT scores in the top 1% on creativity tests. There is no need to discuss this. Anyone can try it. It is super easy to come up with a unique question, be it about stacking items or anything else. It is not just copying existing info.

[–] vrighter@discuss.tchncs.de 1 points 3 months ago

Only if you deviate just slightly from the training set.

[–] alienanimals@lemmy.world -2 points 3 months ago* (last edited 3 months ago) (1 children)

The only thing the Luddites want is an echo chamber to feed their confirmation bias. They'll downvote you or completely ignore you if you bring up any positives regarding machine learning.

LLMs and machine learning are still in their infancy, yet they're doing amazing things. It's a tool in its early stages, not a boogeyman. Look to the billionaires if you want someone to blame. This toolset is advancing so fast that people in this thread are relying on goalposts that were moved long ago. Look at the guy claiming AI can't generate images with proper fingers: this is no longer true, and Midjourney continues to make an insane amount of progress in a short amount of time.

Just look at how many people (50+) upvoted the image claiming AI can't be used to wash dishes, when a simple Google search would prove them wrong.

Not to mention... AI is helping physicists speed up experiments into supernovae to better understand the universe.

AI is helping doctors to expedite cancer screening rates.

AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.

Edit: The laymen who responded couldn't even provide a single source for their unsubstantiated beliefs. It's fortunate that they're so inept.

[–] conciselyverbose@sh.itjust.works 2 points 3 months ago (1 children)

LLMs are absolutely not in their infancy.

They're already many orders of magnitude past diminishing returns and don't scale up for shit.

[–] vrighter@discuss.tchncs.de 1 points 3 months ago

There is literally a loaded die at the end of all generators. It is not part of the LLM; it comes later down the pipeline. So not only diminishing returns, but the hallucinations are literally impossible to fix.
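
(To illustrate the "loaded die": the model itself only outputs scores per token, and a separate weighted random draw, sketched below with made-up logits and temperature, picks the next token.)

```cpp
// The model produces logits; a softmax turns them into probabilities and
// a weighted random draw (the "loaded die") selects the next token.
// The logits and temperature here are made-up illustration values.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
  std::vector<double> logits = {2.1, 0.3, -1.0, 1.5};  // pretend model output
  const double temperature = 0.8;

  // Softmax with temperature -> probabilities (the weights of the die).
  std::vector<double> probs;
  double sum = 0.0;
  for (double l : logits) sum += std::exp(l / temperature);
  for (double l : logits) probs.push_back(std::exp(l / temperature) / sum);

  // The loaded die: a weighted random choice of the next token.
  std::mt19937 rng(std::random_device{}());
  std::discrete_distribution<int> die(probs.begin(), probs.end());
  std::printf("sampled token index: %d\n", die(rng));
  return 0;
}
```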