Extended Abstract

Developing a real-time translator from neural signals to text: An articulatory phonetics approach

Authors
  • Lindy Comstock (University of California, Los Angeles)
  • Ariel Tankus (Tel Aviv University)
  • Michelle Tran (University of California, Los Angeles)
  • Nader Pouratian (University of California, Los Angeles)
  • Itzhak Fried (University of California, Los Angeles)
  • William Speier (University of California, Los Angeles)

Abstract

New developments in brain-computer interfaces (BCI) harness machine learning to decode spoken language from electrocorticographic (ECoG) and local field potential (LFP) signals. Focusing on signals associated with the motor movements that produce articulatory features improves the quality of phoneme detection: individual phonemes share features with one another, yet each possesses a unique feature set, so classifying by feature set allows a finer distinction between neural signals. The data indicate that vowels are more detectable, consonants have greater detection accuracy, place of articulation informs precision, and manner of articulation affects recall. These findings have implications for the multisensory integration of speech and the role of motor imagery in phonemic neural representations.
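To make the feature-set idea concrete, below is a minimal sketch in Python of how a phoneme can be recovered from per-feature classifier outputs rather than classified directly. The phoneme inventory, the feature labels, and the phoneme_from_features helper are illustrative assumptions for this sketch, not the authors' actual decoding pipeline.

```python
# Minimal sketch (not the authors' pipeline): phonemes share individual
# articulatory features but differ in their full feature set, so predicting
# features and then matching feature sets can disambiguate phonemes whose
# neural signatures overlap on any single feature dimension.

PHONEME_FEATURES = {
    # phoneme: (voicing, place of articulation, manner of articulation)
    "p": ("voiceless", "bilabial", "stop"),
    "b": ("voiced",    "bilabial", "stop"),
    "t": ("voiceless", "alveolar", "stop"),
    "d": ("voiced",    "alveolar", "stop"),
    "s": ("voiceless", "alveolar", "fricative"),
    "z": ("voiced",    "alveolar", "fricative"),
    "m": ("voiced",    "bilabial", "nasal"),
    "n": ("voiced",    "alveolar", "nasal"),
}

def phoneme_from_features(predicted):
    """Return the phoneme whose feature set agrees with the predicted
    feature tuple on the most dimensions (ties broken arbitrarily)."""
    def overlap(features):
        return sum(p == f for p, f in zip(predicted, features))
    return max(PHONEME_FEATURES, key=lambda ph: overlap(PHONEME_FEATURES[ph]))

# Example: suppose per-feature classifiers (e.g., trained on ECoG/LFP
# signals) emit these labels for one analysis window.
print(phoneme_from_features(("voiced", "bilabial", "stop")))          # -> b
print(phoneme_from_features(("voiceless", "alveolar", "fricative")))  # -> s
```

One design point this sketch illustrates: because each feature dimension is scored independently, errors can be attributed to place or manner separately, which is how per-feature effects on precision and recall, as reported in the abstract, become measurable.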

Keywords: neural speech decoding; brain-machine interface; phoneme likelihood estimation; local field potentials; pattern recognition

How to Cite:

Comstock, L., Tankus, A., Tran, M., Pouratian, N., Fried, I. & Speier, W. (2019) “Developing a real-time translator from neural signals to text: An articulatory phonetics approach”, Society for Computation in Linguistics 2(1), 322-325. doi: https://doi.org/10.7275/z2k5-r779


Published on 01 Jan 2019