TY - JOUR
AU - Lee, Nayeon
AU - Li, Belinda Z.
AU - Wang, Sinong
AU - Yih, Wen-tau
AU - Ma, Hao
AU - Khabsa, Madian
TI - Language Models as Fact Checkers?
AB - Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data. In this paper, we leverage this implicit knowledge to create an effective end-to-end fact checker using solely a language model, without any external knowledge or explicit retrieval components. While previous work on extracting knowledge from LMs has focused on the task of open-domain question answering, to the best of our knowledge, this is the first work to examine the use of language models as fact checkers. In a closed-book setting, we show that our zero-shot LM approach outperforms a random baseline on the standard FEVER task, and that our finetuned LM compares favorably with standard baselines. Though we do not ultimately outperform methods which use explicit knowledge bases, we believe our exploration shows that this method is viable and has much room for exploration.
JF - Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER)
DO - 10.18653/v1/2020.fever-1.5
DA - 2020-01-01
UR - https://www.deepdyve.com/lp/unpaywall/language-models-as-fact-checkers-Bz06vE385p
DP - DeepDyve
ER -