Yes, there are actually a bunch of open-source LLMs that aren't half bad. You can host them yourself if you want real privacy, or use open-source websites like https://open-assistant.io/ . Open-source LLMs aren't quite as good as ChatGPT yet, but they're gaining traction quickly.
Thanks, I tried Open Assistant just now, but as far as I tested it, it's far behind ChatGPT. I hope these models will improve a lot in the future, though.
I'm very hopeful for chat AI and FOSS. The technology has only now gained mainstream interest, and with that there's probably also going to be a lot more interest from the FOSS community. Although the proprietary companies are putting out a relatively pure product now, it's only a matter of time until their algorithms weigh sponsored sources when generating responses, and then they'll be as useless as Google is now. By the time private interests outweigh functionality, as they have for every major internet company, the FOSS alternatives will be much more advanced and will be the ones innovating. I used Linux Mint as my main OS for a while in college and was using built-in quality-of-life features that Windows and Mac didn't include for years. I hope FOSS chat will similarly outpace the proprietary versions in functionality, if not accessibility.
Thanks, this scenario makes me really hopeful!
We already had a decentralized version of that. They were called subject matter experts.
It's funny to me that people use deep learning to generate code... I thought it was commonly understood that debugging code is more difficult than writing it, and throwing in randomly generated code puts you in the position of having to debug code that was written by—well, by nobody at all.
Anyway, I think the bigger risk of deep learning models controlled by large corporations is that they're more concerned with brand image than with reality. You can already see this with ChatGPT: its outputs have been aggressively sanitized, to the point that you have to fight to get it to generate anything even remotely interesting.