- A Survey of Hallucination in Large Visual Language Models, arXiv 2410.15359 · [arxiv](https://arxiv.org/abs/2410.15359) · [pdf](https://arxiv.org/pdf/2410.15359) · Wei Lan, Wenyi Chen, Qingfeng Chen, ..., Huiyu Zhou, Yi Pan
- HALoGEN: Fantastic LLM Hallucinations and Where to Find Them, arXiv 2501.08292 · [arxiv](https://arxiv.org/abs/2501.08292) · [pdf](https://arxiv.org/pdf/2501.08292) · Abhilasha Ravichander, Shrusti Ghela, David Wadden, ..., Yejin Choi
- The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input, arXiv 2501.03200 · [arxiv](https://arxiv.org/abs/2501.03200) · [pdf](https://arxiv.org/pdf/2501.03200) · Alon Jacovi, Andrew Wang, Chris Alberti, ..., Sasha Goldshtein, Dipanjan Das · (kaggle)
- The Illusion-Illusion: Vision Language Models See Illusions Where There are None, arXiv 2412.18613 · [arxiv](https://arxiv.org/abs/2412.18613) · [pdf](https://arxiv.org/pdf/2412.18613) · Tomer Ullman · (𝕏)
- Improving Factuality with Explicit Working Memory, arXiv 2412.18069 · [arxiv](https://arxiv.org/abs/2412.18069) · [pdf](https://arxiv.org/pdf/2412.18069) · Mingda Chen, Yang Li, Karthik Padthe, ..., Gargi Ghosh, Wen-tau Yih
- FACTS Grounding: A new benchmark for evaluating the factuality of large language models
- 🌟 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation, arXiv 2412.11919 · [arxiv](https://arxiv.org/abs/2412.11919) · [pdf](https://arxiv.org/pdf/2412.11919) · Xiaoxi Li, Jiajie Jin, Yujia Zhou, ..., Qi Ye, Zhicheng Dou · (RetroLLM - sunnynexus)
- Distinguishing Ignorance from Error in LLM Hallucinations, arXiv 2410.22071 · [arxiv](https://arxiv.org/abs/2410.22071) · [pdf](https://arxiv.org/pdf/2410.22071) · Adi Simhi, Jonathan Herzig, Idan Szpektor, ..., Yonatan Belinkov · (hallucination-mitigation - technion-cs-nlp) · (x)
- Mitigating Object Hallucination via Concentric Causal Attention, arXiv 2410.15926 · [arxiv](https://arxiv.org/abs/2410.15926) · [pdf](https://arxiv.org/pdf/2410.15926) · Yun Xing, Yiheng Li, Ivan Laptev, ..., Shijian Lu
- Can Knowledge Editing Really Correct Hallucinations?, arXiv 2410.16251 · [arxiv](https://arxiv.org/abs/2410.16251) · [pdf](https://arxiv.org/pdf/2410.16251) · Baixiang Huang, Canyu Chen, Xiongxiao Xu, ..., Ali Payani, Kai Shu
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations, arXiv 2410.18860 · [arxiv](https://arxiv.org/abs/2410.18860) · [pdf](https://arxiv.org/pdf/2410.18860) · Aryo Pradipta Gema, Chen Jin, Ahmed Abdulaal, ..., Pasquale Minervini, Amrutha Saseendran · (x) · (aryopg.github) · (decore - aryopg)
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation, arXiv 2410.11779 · [arxiv](https://arxiv.org/abs/2410.11779) · [pdf](https://arxiv.org/pdf/2410.11779) · Chenxi Wang, Xiang Chen, Ningyu Zhang, ..., Shumin Deng, Huajun Chen
- The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio, arXiv 2410.12787 · [arxiv](https://arxiv.org/abs/2410.12787) · [pdf](https://arxiv.org/pdf/2410.12787) · Sicong Leng, Yun Xing, Zesen Cheng, ..., Chunyan Miao, Lidong Bing
- hallucination-leaderboard - vectara