Cool piece from the Financial Times comparing hallucinations in LLMs to hallucinations in humans! People often complain about how LLMs frequently hallucinate, but it’s easy to forget that humans hallucinate a lot as well. For example, if you read some article and then later tell your friend about it, it’s likely that you’ll misquote the article and add some embellishments that weren’t actually there. Not to say that LLM hallucinations aren’t important (indeed we should strive to make AI that’s better than humans), but perhaps hallucinations are learned from how we humans try to recall our prior knowledge. Also mentions our recent paper on long-form factuality 🙂 ft.com/content/741f90…
Nice share! A deeper understanding of the two might lead to better ideas for improving LLMs. I have seen several papers (even simple prompting ones) borrow ideas from cognitive science. I really enjoy this type of research. And of course, your paper on long-form factuality was quite interesting. Looking forward to follow-up research on the topic.
@JerryWeiAI Sorry, but while LLMs share the non-veridical memory aspect with humans, they don't share the System 2 abilities that let humans correct themselves. tl;dr: humans sometimes hallucinate and self-correct; all LLMs do is hallucinate. thehill.com/opinion/techno…
@JerryWeiAI Great point, Jerry! It's interesting how both LLMs and humans experience hallucinations, highlighting common challenges in perception and recall across humans and machines.
@JerryWeiAI That's confabulation, not hallucination. Physicians Smith et al. took a stance on this here: journals.plos.org/digitalhealth/…