Sign2GPT: Leveraging Large Language Models for Gloss-Free Sign Language Translation
Automatic Sign Language Translation requires the integration of both computer vision and
natural language processing to effectively bridge the communication gap between sign and …
Artificial intelligence in sign language recognition: A comprehensive bibliometric and visual analysis
Sign language recognition (SLR) plays a crucial role in bridging the communication gap
between individuals with hearing impairments and the hearing community. This study …
Quantifying inconsistencies in the Hamburg Sign Language Notation System
M Ferlin, S Majchrowska, M Plantykow… - Expert Systems with …, 2024 - Elsevier
The advent of machine learning (ML) has significantly advanced the recognition and
translation of sign languages, bridging communication gaps for hearing-impaired …
Unsupervised Sign Language Translation and Generation
Motivated by the success of unsupervised neural machine translation (UNMT), we introduce
an unsupervised sign language translation and generation network (USLNet), which learns …
Using an LLM to Turn Sign Spottings into Spoken Language Sentences
Sign Language Translation (SLT) is a challenging task that aims to generate spoken
language sentences from sign language videos. In this paper, we introduce a hybrid SLT …
Reconsidering Sentence-Level Sign Language Translation
G Tanzer, M Shengelia, K Harrenstien… - arXiv preprint arXiv …, 2024 - arxiv.org
Historically, sign language machine translation has been posed as a sentence-level task:
datasets consisting of continuous narratives are chopped up and presented to the model as …
2M-BELEBELE: Highly Multilingual Speech and American Sign Language Comprehension Dataset
We introduce the first highly multilingual speech and American Sign Language (ASL)
comprehension dataset by extending BELEBELE. Our dataset covers 74 spoken languages …