A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations

Z Zada, A Goldstein, S Michelmann, E Simony, A Price… - Neuron, 2024 - cell.com
Effective communication hinges on a mutual understanding of word meaning in different
contexts. We recorded brain activity using electrocorticography during spontaneous, face-to …

Neural dynamics of predictive timing and motor engagement in music listening

A Zalta, EW Large, D Schön, B Morillon - Science Advances, 2024 - science.org
Why do humans spontaneously dance to music? To test the hypothesis that motor dynamics
reflect predictive timing during music listening, we created melodies with varying degrees of …

Timing and location of speech errors induced by direct cortical stimulation

H Kabakoff, L Yu, D Friedman, P Dugan… - Brain …, 2024 - academic.oup.com
Cortical regions supporting speech production are commonly established using
neuroimaging techniques in both research and clinical settings. However, for neurosurgical …

Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals

J Chen, X Chen, R Wang, C Le, A Khalilian-Gourtani… - bioRxiv, 2024 - ncbi.nlm.nih.gov
Objective: This study investigates speech decoding from neural signals captured by
intracranial electrodes. Most prior works can only work with electrodes on a 2D grid (i.e., …

Continuous synthesis of artificial speech sounds from human cortical surface recordings during silent speech production

K Meng, F Goodarzy, EY Kim, YJ Park… - Journal of Neural …, 2023 - iopscience.iop.org
Objective. Brain–computer interfaces can restore various forms of communication in
paralyzed patients who have lost their ability to articulate intelligible speech. This study …

Pars opercularis underlies efferent predictions and successful auditory feedback processing in speech: Evidence from left-hemisphere stroke

SD Beach, D Tang, S Kiran, CA Niziolek - Neurobiology of Language, 2024 - direct.mit.edu
Hearing one's own speech allows for acoustic self-monitoring in real time. Left-hemisphere
motor planning regions are thought to give rise to efferent predictions that can be compared …

Machine Learning Application to Study Human Brain: The Investigation of Brain Microstructure and Speech Decoding Based on Cortical Neural Activity

J Chen - 2024 - search.proquest.com
Abstract Machine learning and deep neural networks have succeeded in various computer
vision tasks involving modalities ranging from natural to medical images. The advancement …