TY - JOUR
AU - Kumar, Praveen
TI - SignEdgeLVM transformer model for enhanced sign language translation on edge devices
AB - Transformer architectures have accelerated research in Continuous Sign Language Recognition and Translation (CSLRT), which involves predicting sign gloss patterns from video and converting them into spoken language. This process is challenging due to the lack of direct alignment between sign glosses and spoken words. While Transformers are effective because they process inputs in parallel, their high memory consumption makes them less suitable for edge devices. To address this issue, we propose SignEdgeLVM, a model that uses a Global Relative Attention Matrix (GRAM) and a Dynamic Point Frame Sampling (DPFS) module. In our implementation, SignEdgeLVM lowers the attention mechanism's memory consumption by 78.22 MB (99.93%) per head in the attention layer. Evaluated on the PHOENIX14T dataset, this optimization makes SignEdgeLVM suitable for edge devices.
JF - Discover Computing
DO - 10.1007/s10791-025-09509-1
DA - 2025-03-10
UR - https://www.deepdyve.com/lp/springer-journals/signedgelvm-transformer-model-for-enhanced-sign-language-translation-M8a8NXj1Lj
VL - 28
IS - 1
DP - DeepDyve
ER -