this post was submitted on 08 Jul 2024
34 points (92.5% liked)
Apple
you are viewing a single comment's thread
I do see complaints about Siri being dumb. If Apple’s super clever about this, they could hone the experience without subjecting us to the bulk of the usual hallucination/confabulation irks.
It all boils down to training data and context data. I bet Apple has enough good "anonymous" user data to help train Siri on only the relevant data. 🤷🏻‍♀️ I guess we'll see.
Making an LLM that doesn't hallucinate is probably literally impossible, and I'd be very surprised if Apple's AI bullshit isn't built on one.
LLMs are only the first step… not the end game by far.
You don’t need to wonder: Apple has said as much, that their AI is built on LLMs, just like everybody else’s. While hallucinations are still a major unsolved problem, that doesn’t mean they can’t be reduced in frequency and severity. A ChatGPT-style chatbot is going to hallucinate because you’re asking it to give extremely open-ended responses to literally any query. The more data you feed it in the prompt, and the more you constrain its output, the less likely it is to hallucinate. For that reason, it will likely be extremely rare for the grammar-check or rephrasing tools in Apple’s AI to be affected by hallucinations.

Siri is closer to ChatGPT with regard to open-ended questions, but it’s likely that Apple will use LLMs primarily for transforming inputs and outputs rather than handling the whole process. For example, the LLM could be prompted to pick a function to call based on the user’s query. That function then finds a reliable result, either through existing APIs for real-time information like weather, or through another LLM paired with a search engine. The output of this truth-finding step is fed back into an LLM to generate the final response (see the sketch at the end of this comment). The LLM’s role is heavily constrained at every step of the way, which is known to minimize hallucinations.
Arguing that this is an unsolvable problem is defeatist and doesn’t help actually mitigate the real issue.
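To make that pipeline concrete, here’s a minimal sketch in Swift. Everything in it is hypothetical — the tool names, the canned weather result, and the keyword matching that stands in for a real model call — since Apple hasn’t published how Siri’s tool selection works. It only illustrates the idea: the LLM picks a tool and phrases the answer, while the facts come from a conventional API.

```swift
import Foundation

// Step 1: the model is only allowed to choose one of a few known tools
// (hypothetical tool set, not a real Apple API).
enum Tool {
    case weather
    case webSearch
    case none
}

// Stand-in for an LLM call that selects a tool from the user's query.
// A real system would prompt the model with the tool list and parse a
// structured response; here keyword matching fakes that decision.
func chooseTool(for query: String) -> Tool {
    let q = query.lowercased()
    if q.contains("weather") { return .weather }
    if q.contains("search") { return .webSearch }
    return .none
}

// Step 2: the chosen tool fetches a reliable result from a conventional
// source, so the facts never originate inside the model.
func runTool(_ tool: Tool, query: String) -> String {
    switch tool {
    case .weather:
        return "Cupertino: 22°C, clear"            // would come from a weather API
    case .webSearch:
        return "Top result snippet for: \(query)"  // would come from a search backend
    case .none:
        return ""
    }
}

// Step 3: the model only rephrases the grounded result; it is never asked
// to produce facts of its own, which is what keeps hallucination low.
func composeAnswer(query: String, grounded: String) -> String {
    grounded.isEmpty
        ? "Sorry, I can't help with that yet."
        : "Here's what I found: \(grounded)"
}

let query = "What's the weather like today?"
let facts = runTool(chooseTool(for: query), query: query)
print(composeAnswer(query: query, grounded: facts))
```

The point of structuring it this way is that the only free-form text the model ever produces is the final phrasing; everything factual is retrieved, not generated.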