this post was submitted on 06 Jul 2024
142 points (90.3% liked)
you are viewing a single comment's thread
I can't tell whether you're suggesting that foundation models (the technology underpinning LLMs) aren't being used for the things I said they're being used for, but I can assure you they are, either in commercial R&D or in live commercial products.
The fact that they shouldn't be used for these things is something we can certainly agree on, but the fact remains that they are.
Sources:
Wayve is using foundation models for driving, and my understanding is that their neural net extends all the way from sensor input to motor control: https://wayve.ai/thinking/introducing-gaia1/
Research recommending the use of LLMs for giving financial advice: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4850039
LLMs for therapy: https://blog.langchain.dev/mental-health-therapy-as-an-llm-state-machine/
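For context on that last link: "therapy as an LLM state machine" roughly means wrapping the model in an explicit finite-state machine, where each state fixes the prompt and the allowed transitions, so the model can't wander arbitrarily. A minimal sketch of the idea (the state names, transition table, and stubbed respond() call are my own illustration, not the blog's actual code):

```python
# Hedged sketch: an LLM conversation gated by an explicit state machine.
# Each state fixes the system prompt and the next state; a real system
# would replace respond() with an actual LLM API call.

STATES = {
    "intake": {
        "prompt": "Ask the user what brings them in today.",
        "next": "explore",
    },
    "explore": {
        "prompt": "Ask open-ended questions about the user's last message.",
        "next": "summarize",
    },
    "summarize": {
        "prompt": "Summarize the session and suggest a next step.",
        "next": None,  # terminal state: session ends here
    },
}

def respond(prompt: str, user_msg: str) -> str:
    # Stub standing in for an LLM call; `prompt` would be the system
    # message and `user_msg` the user turn in a real implementation.
    return f"[{prompt}] (in reply to: {user_msg})"

def run_session(user_msgs: list[str]) -> list[str]:
    # Walk the state machine, emitting one constrained reply per turn.
    state, transcript = "intake", []
    for msg in user_msgs:
        if state is None:
            break  # conversation reached the terminal state
        spec = STATES[state]
        transcript.append(respond(spec["prompt"], msg))
        state = spec["next"]
    return transcript
```

The point of the pattern is accountability: the application, not the model, decides which conversational moves are even possible at each step.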
So this all goes back to my point that some form of accountability is needed for how these tools get used. I haven't examined the proposed legislation closely enough to give a firm opinion on it, but I think it's a good thing that the conversation is happening in a serious way.