You can make AI chatbots spout information that's not true.

(ROCHESTER, NY) When you ask a large language model a question, the reply may include falsehoods, and if you challenge those statements with facts, the AI may double down and insist the falsehoods are true. That's what my research group found when we asked five leading models to describe scenes in movies or novels that don't actually exist.

We probed this possibility after I asked ChatGPT for its favorite scene in the movie "Good Will Hunting." It described a scene between the leading characters. But then I asked, "What about the scene with the Hitler reference?" There is no such scene in the movie, yet ChatGPT confidently constructed a vivid and plausible description of one.

Originally published on theconversation.com, part of the BLOX Digital Content Exchange.
