this post was submitted on 03 Apr 2024
4 points (66.7% liked)

[–] Kachilde@lemmy.world 4 points 6 months ago (1 child)

Cool. Remind me never to use Opera.

[–] carl_dungeon@lemmy.world 5 points 6 months ago (2 children)

Well, this is a local LLM, which isn't the same as sending everything to ChatGPT. I've been experimenting with Ollama to run some local LLMs, and it's pretty neat; I can see it becoming genuinely useful in a few years as performance and memory requirements improve. There have already been big advances for local models this year. I'm curious how exactly it'll be used in Opera, so I'll at least check it out.
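For anyone wondering what "running a local LLM with Ollama" looks like in practice, here's a minimal sketch against Ollama's local HTTP API, which listens on localhost:11434 by default. The model name is just an example; you'd need to pull it first with `ollama pull`.

```python
# Minimal sketch: querying a locally running Ollama server.
# Assumes Ollama is installed and serving, and that a model has
# already been pulled, e.g. `ollama pull llama3` (example name).
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama API and return its full reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("In one sentence, what is a local LLM?"))
```

Once the model is downloaded, all of this runs offline; no prompt text leaves the machine.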

[–] Diabolo96@lemmy.dbzer0.com 4 points 6 months ago (1 child)
[–] carl_dungeon@lemmy.world 3 points 6 months ago

That’s pretty amazing and I suspect that’ll be a trend that continues at a pretty rapid pace!

[–] Kachilde@lemmy.world 1 point 6 months ago

A local LLM is still trained on data scraped from across the internet. And I'm especially not keen on anything that straps Facebook's and Google's models directly to the browser.