Quantitative survey of the state of the art in sign language recognition
O Koller - arXiv preprint arXiv:2008.09918, 2020 - arxiv.org
This work presents a meta study covering around 300 published sign language recognition
papers with over 400 experimental results. It includes most papers between the start of the …
The fate landscape of sign language ai datasets: An interdisciplinary perspective
Sign language datasets are essential to developing many sign language technologies. In
particular, datasets are required for training artificial intelligence (AI) and machine learning …
Sign language video retrieval with free-form textual queries
A Duarte, S Albanie, X Giró-i-Nieto… - Proceedings of the …, 2022 - openaccess.thecvf.com
Systems that can efficiently search collections of sign language videos have been
highlighted as a useful application of sign language technology. However, the problem of …
ArabSign: a multi-modality dataset and benchmark for continuous Arabic Sign Language recognition
H Luqman - 2023 IEEE 17th International Conference on …, 2023 - ieeexplore.ieee.org
Sign language recognition has attracted the interest of researchers in recent years. While
numerous approaches have been proposed for European and Asian sign languages …
Enhancing Brazilian Sign Language Recognition through Skeleton Image Representation
CEGR Alves, FDA Boldt… - 2024 37th SIBGRAPI …, 2024 - ieeexplore.ieee.org
Effective communication is paramount for the inclusion of deaf individuals in society.
However, persistent communication barriers due to limited Sign Language (SL) knowledge …
Towards visually prompted keyword localisation for zero-resource spoken languages
Imagine being able to show a system a visual depiction of a keyword and finding spoken
utterances that contain this keyword from a zero-resource speech corpus. We formalise this …
Attention-based keyword localisation in speech using visual grounding
Visually grounded speech models learn from images paired with spoken captions. By
tagging images with soft text labels using a trained visual classifier with a fixed vocabulary …
YFACC: A Yorùbá Speech–Image Dataset for Cross-Lingual Keyword Localisation Through Visual Grounding
Visually grounded speech (VGS) models are trained on images paired with unlabelled
spoken captions. Such models could be used to build speech systems in settings where it is …
Keyword localisation in untranscribed speech using visually grounded speech models
Keyword localisation is the task of finding where in a speech utterance a given query
keyword occurs. We investigate to what extent keyword localisation is possible using a …
Human gesture recognition of dynamic skeleton using graph convolutional networks
W Liang, X Xu, F Xiao - Journal of Electronic Imaging, 2023 - spiedigitallibrary.org
In this era, intelligent vision computing has always been a fascinating field. With the rapid
development in computer vision, dynamic gesture-based recognition systems have attracted …