this post was submitted on 21 Nov 2023
They probably won't advance much, because current AI faces two opposite but equally difficult problems. On one hand, it still hasn't achieved sensor integration: building an ontologically sound world model that incorporates more than one sensor data stream at a time. Right now a model can be built on one sensor, or on one multidimensional array of sensors, but it can't integrate across models. So you can't have a single model that hears, sees light, and sees radar at the same time the way animal intelligence can, self-correcting its world model when one sensor says A but another disagrees and says B. Current models just hallucinate and go off the deep end catastrophically.
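The cross-sensor self-correction described above can be sketched as a toy fusion rule. Everything here is an illustrative assumption, not anything from the thread: two hypothetical sensors each report a position estimate with a known variance, and the system flags a conflict instead of blindly averaging when they disagree too strongly:

```python
import numpy as np

def fuse_estimates(est_a, var_a, est_b, var_b, max_sigma=3.0):
    """Fuse two noisy scalar estimates by inverse-variance weighting,
    but flag a conflict when they disagree by more than max_sigma
    combined standard deviations."""
    combined_sigma = np.sqrt(var_a + var_b)
    if abs(est_a - est_b) > max_sigma * combined_sigma:
        # Sensors contradict each other: a robust system should
        # fall back or re-weight rather than invent a value.
        return None, True
    # Standard inverse-variance (Kalman-style) weighting.
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, False

# Agreeing sensors fuse normally:
fused, conflict = fuse_estimates(10.0, 1.0, 10.5, 1.0)
# Wildly disagreeing sensors raise a flag instead:
_, conflict2 = fuse_estimates(10.0, 1.0, 50.0, 1.0)
```

This is the easy, hand-engineered version of the problem; the point of the comment is that end-to-end learned models don't yet do this kind of cross-modal sanity checking on their own.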
On the opposite end, if we want them to be products, as seems to be Microsoft's and Altman's fixation, then they cannot be black boxes, at least not to the implementers. Only in the past year have there been serious efforts to see WTF is actually going on inside models after they've been trained, and to interpret and manipulate that inner world to achieve effective, intentional results. Even then, progress is difficult, because it's all abstract mathematics and we haven't found a translation layer that parses the model's internal world into something humans can easily interpret.
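One concrete flavor of that interpretability work is the linear probe: fit a simple classifier on a network's hidden activations and check whether some concept is linearly readable from them. A minimal sketch with synthetic data (the activations, the "concept" hidden in dimension 3, and all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for hidden activations of a trained network: 200 samples of a
# 16-dim internal state, where dimension 3 secretly encodes a binary concept.
acts = rng.normal(size=(200, 16))
labels = (acts[:, 3] > 0).astype(float)

# Linear probe: logistic regression trained by plain gradient descent.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))  # predicted probabilities
    w -= lr * (acts.T @ (p - labels)) / len(labels)
    b -= lr * np.mean(p - labels)

accuracy = np.mean((p > 0.5) == (labels > 0.5))
# High probe accuracy suggests the concept is represented linearly;
# the learned direction w is one crude way to "read" the black box.
```

A probe only shows that information is present, not that the model uses it; that gap is part of why the translation-layer problem mentioned above remains hard.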
I don't disagree with you, there are certainly some major hurdles to overcome in many areas. That's why I caveated my comment to say it's overrated for many purposes; however, there are certain use cases where current AI is truly an amazing tool.
Regardless, OpenAI has made it clear that they never intended to relegate themselves purely to specific use cases for AI; they want AGI. I would assume this is Microsoft's desire too, though I'm sure they'd be okay making numerous specialized models for each of their products. But yes, unless they can overcome all of the issues you point out, its generalized usefulness will be severely stunted. Whether they can accomplish this in the short term (<10 years), time will tell.