Haocheng Wang
Verified email at princeton.edu
Title
Cited by
Year
Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain
A Goldstein, A Dabush, B Aubrey, M Schain, SA Nastase, Z Zada, E Ham, ...
bioRxiv, 2022.03.01.482586, 2022
Cited by 7 · 2022
Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
A Goldstein, A Grinstein-Dabush, M Schain, H Wang, Z Hong, B Aubrey, ...
Nature communications 15 (1), 2768, 2024
Cited by 5 · 2024
Deep speech-to-text models capture the neural basis of spontaneous speech in everyday conversations
A Goldstein, H Wang, L Niekerken, Z Zada, B Aubrey, T Sheffer, ...
bioRxiv, 2023.06.26.546557, 2023
Cited by 5 · 2023
Information-making processes in the speaker's brain drive human conversations forward
A Goldstein, H Wang, T Sheffer, M Schain, Z Zada, L Niekerken, B Aubrey, ...
bioRxiv, 2024.08.27.609946, 2024
2024
Scale matters: Large language models with billions (rather than millions) of parameters better match neural representations of natural language
Z Hong, H Wang, Z Zada, H Gazula, D Turner, B Aubrey, L Niekerken, ...
bioRxiv, 2024.06.12.598513, 2024
2024
Aligning brains into a shared space improves their alignment to large language models
A Bhattacharjee, Z Zada, H Wang, B Aubrey, W Doyle, P Dugan, ...
bioRxiv, 2024.06.04.597448, 2024
2024
Larger Language Models Better Predict Neural Activity During Natural Language Processing
Z Hong, H Wang, Z Zada, H Gazula, B Aubrey, W Doyle, S Devore, ...
Articles 1–7