LLM Hallucination

Survey

  • A Survey of Hallucination in Large Visual Language Models, arXiv, 2410.15359, arxiv, pdf, cication: -1

    Wei Lan, Wenyi Chen, Qingfeng Chen, ..., Huiyu Zhou, Yi Pan

Hallucination

  • HALoGEN: Fantastic LLM Hallucinations and Where to Find Them, arXiv, 2501.08292, arxiv, pdf, cication: -1

    Abhilasha Ravichander, Shrusti Ghela, David Wadden, ..., Yejin Choi

  • The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input, arXiv, 2501.03200, arxiv, pdf, cication: -1

    Alon Jacovi, Andrew Wang, Chris Alberti, ..., Sasha Goldshtein, Dipanjan Das · (kaggle)

  • The Illusion-Illusion: Vision Language Models See Illusions Where There are None, arXiv, 2412.18613, arxiv, pdf, cication: -1

    Tomer Ullman

    · (𝕏)

  • Improving Factuality with Explicit Working Memory, arXiv, 2412.18069, arxiv, pdf, cication: -1

    Mingda Chen, Yang Li, Karthik Padthe, ..., Gargi Ghosh, Wen-tau Yih

  • FACTS Grounding: A new benchmark for evaluating the factuality of large language models

  • 🌟 RetroLLM: Empowering Large Language Models to Retrieve Fine-grained Evidence within Generation, arXiv, 2412.11919, arxiv, pdf, cication: -1

    Xiaoxi Li, Jiajie Jin, Yujia Zhou, ..., Qi Ye, Zhicheng Dou · (RetroLLM - sunnynexus)

  • Distinguishing Ignorance from Error in LLM Hallucinations, arXiv, 2410.22071, arxiv, pdf, cication: -1

    Adi Simhi, Jonathan Herzig, Idan Szpektor, ..., Yonatan Belinkov · (hallucination-mitigation - technion-cs-nlp) · (x)

  • Mitigating Object Hallucination via Concentric Causal Attention, arXiv, 2410.15926, arxiv, pdf, cication: -1

    Yun Xing, Yiheng Li, Ivan Laptev, ..., Shijian Lu

    · (cca-llava - xing0047) · (arxiv)

  • Can Knowledge Editing Really Correct Hallucinations?, arXiv, 2410.16251, arxiv, pdf, cication: -1

    Baixiang Huang, Canyu Chen, Xiongxiao Xu, ..., Ali Payani, Kai Shu

    · (llm-editing.github)

  • DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations, arXiv, 2410.18860, arxiv, pdf, cication: -1

    Aryo Pradipta Gema, Chen Jin, Ahmed Abdulaal, ..., Pasquale Minervini, Amrutha Saseendran

    · (x) · (aryopg.github) · (decore - aryopg)

Evaluation

Multi Modal

  • MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation, arXiv, 2410.11779, arxiv, pdf, cication: -1

    Chenxi Wang, Xiang Chen, Ningyu Zhang, ..., Shumin Deng, Huajun Chen

  • The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio, arXiv, 2410.12787, arxiv, pdf, cication: -1

    Sicong Leng, Yun Xing, Zesen Cheng, ..., Chunyan Miao, Lidong Bing

Projects

Misc
