TY - JOUR
AU - Raff, Edward
AB - Generating and editing images from open domain text prompts is a challenging task that heretofore has required expensive and specially trained models. We demonstrate a novel methodology for both tasks which is capable of producing images of high visual quality from text prompts of significant semantic complexity without any training by using a multimodal encoder to guide image generations. We demonstrate on a variety of tasks how using CLIP [37] to guide VQGAN [11] produces higher visual quality outputs than prior, less flexible approaches like DALL-E [38], GLIDE [33] and Open-Edit [24], despite not being trained for the tasks presented. Our code is available in a public repository.
TI - VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance
JF - Computing Research Repository
DO - 10.48550/arXiv.2204.08583
DA - 2022-04-18
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/vqgan-clip-open-domain-image-generation-and-editing-with-natural-ZFyZBlfkAV
VL - 2023
IS - 2204
DP - DeepDyve
ER -