EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing

R Zhang, K Li, Y Hao, Y Wang, Z Lai… - Proceedings of the …, 2023 - dl.acm.org
We present EchoSpeech, a minimally-obtrusive silent speech interface (SSI) powered by
low-power active acoustic sensing. EchoSpeech uses speakers and microphones mounted …

LipLearner: Customizable silent speech interactions on mobile devices

Z Su, S Fang, J Rekimoto - Proceedings of the 2023 CHI Conference on …, 2023 - dl.acm.org
A silent speech interface is a promising technology that enables private communication in
natural language. However, previous approaches only support a small and inflexible …

mSilent: Towards general corpus silent speech recognition using COTS mmWave radar

S Zeng, H Wan, S Shi, W Wang - Proceedings of the ACM on Interactive …, 2023 - dl.acm.org
Silent speech recognition (SSR) allows users to speak to the device without making a
sound, avoiding being overheard or disturbing others. Compared to the video-based …

HPSpeech: Silent Speech Interface for Commodity Headphones

R Zhang, H Chen, D Agarwal, R Jin, K Li… - Proceedings of the …, 2023 - dl.acm.org
We present HPSpeech, a silent speech interface for commodity headphones. HPSpeech
utilizes the existing speakers of the headphones to emit inaudible acoustic signals. The …

EarIO: A low-power acoustic sensing earable for continuously tracking detailed facial movements

K Li, R Zhang, B Liang, F Guimbretière… - Proceedings of the ACM …, 2022 - dl.acm.org
This paper presents EarIO, an AI-powered acoustic sensing technology that allows an
earable (e.g., an earphone) to continuously track facial expressions using two pairs of …

LaserShoes: Low-cost ground surface detection using laser speckle imaging

Z Yan, Y Lin, G Wang, Y Cai, P Cao, H Mi… - Proceedings of the 2023 …, 2023 - dl.acm.org
Ground surfaces are often carefully designed and engineered with various textures to fit the
functionalities of human environments and thus could contain rich context information for …

Music theory-inspired acoustic representation for speech emotion recognition

X Li, X Shi, D Hu, Y Li, Q Zhang, Z Wang… - … on Audio, Speech …, 2023 - ieeexplore.ieee.org
This research presents a music theory-inspired acoustic representation (hereafter, MTAR) to
improve speech emotion recognition. The recognition of emotion in speech and …

Lipwatch: Enabling Silent Speech Recognition on Smartwatches using Acoustic Sensing

Q Zhang, Y Lan, K Guo, D Wang - Proceedings of the ACM on Interactive …, 2024 - dl.acm.org
Silent Speech Interfaces (SSI) on mobile devices offer a privacy-friendly alternative to
conventional voice input methods. Previous research has primarily focused on smartphones …

EchoNose: Sensing Mouth, Breathing and Tongue Gestures inside Oral Cavity using a Non-contact Nose Interface

R Sun, X Zhou, B Steeper, R Zhang, S Yin, K Li… - Proceedings of the …, 2023 - dl.acm.org
Sensing movements and gestures inside the oral cavity has been a long-standing challenge
for the wearable research community. This paper introduces EchoNose, a novel nose …

Headar: Sensing Head Gestures for Confirmation Dialogs on Smartwatches with Wearable Millimeter-Wave Radar

X Yang, X Wang, G Dong, Z Yan, M Srivastava… - Proceedings of the …, 2023 - dl.acm.org
Nod and shake of one's head are intuitive and universal gestures in communication. As
smartwatches become increasingly intelligent through advances in user activity sensing …