AI chatbots were tasked to run a tech company. They built software in under seven minutes — for less than $1.
(www.businessinsider.com)
This isn't what makes them invent facts, or at least it's not the only (or main?) reason. Fake references, for example, arise because the model encounters references in its training text, so it knows what they look like and where they should appear. It just doesn't know what a reference actually is, or that it's supposed to point to something real that says what the text implies it says.
Right, and if it's set to a "strict" setting where it only ever picks the single most likely next word, then when the words leading up to a reference match one it has seen before, it will spit out that specific reference from its training data. But when it's set to be "creative", and picks words that are a good but not perfect match, it will spit out references that are plausible but don't exist.
So if you want it to only use real references, you have to set it up to not be creative at all and always pick the most likely next word. But that setting isn't very interesting, because it just spits out whatever was in its training data word for word. If you want it to be creative, it will "daydream" references that don't exist. The same knob controls both behaviours.
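To make that "knob" concrete, here's a rough sketch of greedy decoding versus temperature sampling over a model's next-token scores. Everything here is made up for illustration (the candidate tokens and their scores are invented, not from a real model); real LLM APIs expose the same choice through a temperature parameter.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick the next token from a dict of {token: score}.

    temperature == 0  -> "strict": always take the single highest-scoring token.
    temperature  > 0  -> "creative": sample, with higher temperatures flattening
                         the distribution so lower-scoring tokens get picked more often.
    """
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy decoding

    # Softmax over temperature-scaled scores.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical next-token scores after the text '...as shown in (Smith et al., '
candidates = {"2019)": 3.0, "2021)": 2.5, "2017)": 2.4, "1994)": 0.5}

print(sample_next_token(candidates, temperature=0))    # always "2019)", the top-scoring continuation
print(sample_next_token(candidates, temperature=1.0))  # sometimes a plausible but wrong year
```

At temperature 0 the function is deterministic; as the temperature rises, the lower-scoring candidates come up more and more often, which is the "daydreaming" described above.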
That's not how it works at all. That's not even how references work.