TY - JOUR
AU - Qiu, Xipeng
TI - SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
AB - Multi-modal large language models are regarded as a crucial step towards Artificial General Intelligence (AGI) and have garnered significant interest with the emergence of ChatGPT. However, current speech-language models typically adopt the cascade paradigm, preventing inter-modal knowledge transfer. In this paper, we propose SpeechGPT, a large language model with intrinsic cross-modal conversational abilities, capable of perceiving and generating multi-modal content. With discrete speech representations, we first construct SpeechInstruct, a large-scale cross-modal speech instruction dataset. Additionally, we employ a three-stage training strategy that includes modality-adaptation pre-training, cross-modal instruction fine-tuning, and chain-of-modality instruction fine-tuning. The experimental results demonstrate that SpeechGPT has an impressive capacity to follow multi-modal human instructions and highlight the potential of handling multiple modalities with one model. Demos are shown at this https URL.
JF - Computing Research Repository
DO - 10.48550/arXiv.2305.11000
DA - 2023-05-18
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/speechgpt-empowering-large-language-models-with-intrinsic-cross-modal-7mAwuirBew
VL - 2023
IS - 2305
DP - DeepDyve
ER -