TY - JOUR
AU - De Cao, Nicola
AU - Aziz, Wilker
AU - Titov, Ivan
AB - The factual knowledge acquired during pre-training and stored in the parameters of Language Models (LMs) can be useful in downstream tasks (e.g., question answering or textual inference). However, some facts can be incorrectly induced or become obsolete over time. We present KnowledgeEditor, a method which can be used to edit this knowledge and, thus, fix ‘bugs’ or unexpected predictions without the need for expensive re-training or fine-tuning. Besides being computationally efficient, KnowledgeEditor does not require any modifications in LM pre-training (e.g., the use of meta-learning). In our approach, we train a hyper-network with constrained optimization to modify a fact without affecting the rest of the knowledge.
N1 - Figure 1: Left: a model f with parameters θ prefers a prediction y for input x (e.g., y is the mode/argmax of a discrete distribution parameterized by f(x; θ)). Right: our method uses a hyper-network g to update the parameters of f to θ′ such that f(x; θ′) prefers an alternative prediction a without affecting the prediction y′ of any other input x′ ≠ x. Our model edits the knowledge about x stored in the parameters of f. (Diagram labels: ‘Regular predictions’, ‘Retain previous knowledge’.)
TI - Editing Factual Knowledge in Language Models
JF - Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DO - 10.18653/v1/2021.emnlp-main.522
DA - 2021-01-01
UR - https://www.deepdyve.com/lp/unpaywall/editing-factual-knowledge-in-language-models-8i4D9M1Osj
DP - DeepDyve
ER -
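
Annotation: the toy sketch below is not from the paper; it only illustrates the idea the abstract and Figure 1 describe. A hyper-network g reads a gradient signal for the fact to be edited and predicts an additive update θ′ = θ + δ that flips the prediction on x, while a soft KL penalty (an assumed stand-in for the paper's constrained optimization) keeps predictions on other inputs close to the original. All names, shapes, and hyperparameters here are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for an LM: a linear classifier f(x; W) = x @ W.T.
W = torch.randn(3, 8, requires_grad=True)   # theta: 3 classes, 8 features

# Hypothetical hyper-network g: maps the flattened gradient of the
# editing loss (24 values) to an additive parameter update delta.
g = nn.Sequential(nn.Linear(24, 64), nn.Tanh(), nn.Linear(64, 24))

x = torch.randn(1, 8)           # input whose prediction we want to change
a = torch.tensor([2])           # desired alternative prediction
x_other = torch.randn(16, 8)    # other inputs whose predictions must hold
ref = (x_other @ W.T).log_softmax(-1).detach()   # behaviour to retain

# g is conditioned on the gradient of the "prefer a" loss at theta.
grad = torch.autograd.grad(F.cross_entropy(x @ W.T, a), W)[0].detach()

opt = torch.optim.Adam(g.parameters(), lr=1e-3)
for step in range(200):
    W_edit = W.detach() + g(grad.flatten()).view_as(W)   # theta' = theta + delta
    # Soft version of a constrained objective: reach a on x while keeping
    # a small KL to the original predictions on every other input.
    loss_edit = F.cross_entropy(x @ W_edit.T, a)
    loss_keep = F.kl_div((x_other @ W_edit.T).log_softmax(-1), ref,
                         log_target=True, reduction="batchmean")
    loss = loss_edit + 1.0 * loss_keep
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, a single hyper-network call performs the edit.
W_edit = W.detach() + g(grad.flatten()).view_as(W)
print("edited prediction:", (x @ W_edit.T).argmax(-1).item())  # expected: 2

Note that, unlike fine-tuning on the single corrected example, the retention term is what keeps the edit from disturbing predictions on unrelated inputs, which is the behaviour Figure 1 depicts.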