this post was submitted on 07 Jul 2023
26 points (100.0% liked)

Technology


Robots presented at an AI forum said on Friday they expected to increase in number and help solve global problems, and would not steal humans' jobs or rebel against us.

[–] Pons_Aelius@kbin.social 1 points 1 year ago (1 children)

Exactly, these LLMs (Large Language Models) are not AI.

Calling them that is about as accurate as Tesla's claim to offer "full self-driving". Both are nothing but marketing terms meant to drive sales and adoption.

[–] takeda@kbin.social 1 points 1 year ago (1 children)

Well, technically they are AI. But speech recognition, face recognition, and even route-finding in GPS navigation technically fall under AI too. We've just come to accept them as normal things.

This text generation is, IMO, similar to AI generating pictures or music; it's just done with text.

It is not the kind of AI most people think of, like from sci-fi movies.

It's basically trained to generate text that could fool a casual observer into thinking it was written by a human.

That complicates things, because until now the test for AGI was being able to fool the tester into thinking they were talking with another human.

[–] PCChipsM922U@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

Put that thing in the wild with no other info at its disposal except what it has learned thus far, with the real world as input (hearing, smell, touch, sight), let it roam, and see if it can draw conclusions and learn from its mistakes. If it can, the next step would be to better itself, or at least make a plan for what needs to be removed or added to, let's say, get a more precise hand grip on things, and if code changes are needed, what they might be.

These things are the real AI test; to hell with Turing tests, they don't prove anything (talk is cheap 😂). Tell it to think, learn, and adapt, that's the real test 👍.

These language models serve one purpose and one purpose only: to fill social networks with bots that act like humans. Why? People just love to talk. That keeps them glued to the platform, with more ad revenue coming in.

EDIT: I did a joke test with ChatGPT; it can't identify the punchline. It thinks it does, but what it points to isn't actually the punchline of the joke.

https://chat.openai.com/share/965a7b71-b1ad-434c-94ef-12ed1ff65028