Just because your system is automated doesn't mean it's free of bias. LLMs are trained on human-generated content, human-generated content has biases, and the model will reflect those biases. There's also GPT's tendency to be confidently incorrect, like when it made up completely bogus court cases and a credulous lawyer cited them in an actual case. I wouldn't want to get my news from a source that may be lying to my face.
I wouldn't want to get my news from a source that may be lying to my face.
I have bad news for you lol.
But that's exactly the thing. I don't get my news from companies that outright lie. With LLMs you don't really know, so they're not exactly trustworthy either.
And even then, biases are not entirely bad. I often like to say that I'm biased towards both truth and science.
I subscribed to this and it signed me up for six other mailing lists. Not recommended.
Hey Drop, you are not signed up for other mailing lists when you sign up for Neural Times. We recommend other newsletters through a widget after someone signs up, and you are only subscribed if you press subscribe on the widget.