TY - JOUR
AU - Gao, Jianfeng
TI - Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V
AB - We present Set-of-Mark (SoM), a new visual prompting method that unleashes the visual grounding abilities of large multimodal models (LMMs) such as GPT-4V. As illustrated in Fig. 1 (right), we employ off-the-shelf interactive segmentation models, such as SEEM/SAM, to partition an image into regions at different levels of granularity, and overlay these regions with a set of marks, e.g., alphanumerics, masks, and boxes. Using the marked image as input, GPT-4V can answer questions that require visual grounding. We perform a comprehensive empirical study to validate the effectiveness of SoM on a wide range of fine-grained vision and multimodal tasks. For example, our experiments show that GPT-4V with SoM in a zero-shot setting outperforms the state-of-the-art fully-finetuned referring expression comprehension and segmentation model on RefCOCOg. Code for SoM prompting is made public at: this https URL.
JF - Computing Research Repository
DO - 10.48550/arxiv.2310.11441
DA - 2023-10-17
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/set-of-mark-prompting-unleashes-extraordinary-visual-grounding-in-gpt-6v02uILaz0
VL - 2024
IS - 2310
DP - DeepDyve
ER -