Optimizing BCI-FIT: Brain-Computer Interface - Functional Implementation Toolkit
This project advances non-invasive brain-computer interfaces (BCIs) for communication by adults with severe speech and physical impairments (SSPI) due to neurodegenerative disease. Researchers will optimize and adapt BCI signal acquisition, signal processing, natural language processing, and clinical implementation. BCI-FIT relies on active inference and transfer learning to customize a fully adaptive intent-estimation classifier to each user's multi-modal signals simultaneously. The three specific aims are: (1) develop and evaluate methods for online, robust adaptation of multi-modal signal models to infer user intent; (2) develop and evaluate methods for efficient user-intent inference through active querying; and (3) integrate partner- and environment-supported language interaction and letter/word supplementation as input modalities.

The same four dependent variables are measured in each specific aim (SA): typing speed, typing accuracy, information transfer rate (ITR), and user experience (UX) feedback. Four alternating-treatments single-case experimental designs will test hypotheses about optimizing user performance and technology performance for each aim. Tasks include copy-spelling with BCI-FIT to examine the effects of multi-modal access-method configurations (SA1.4a), adaptive signal modeling (SA1.4b), and active querying (SA2.2), and story retell to examine the effects of language-model enhancements (SA3.4). Five people with SSPI will be recruited for each study; healthy control participants will also be recruited for the experiments in SA2.2 and SA3.4.

Study hypotheses are:
(SA1.4a) A customized BCI-FIT configuration based on multi-modal input and personal metadata will improve typing accuracy on a copy-spelling task compared to a user's existing augmentative and alternative communication (AAC) access method.
(SA1.4b) Adaptive signal modeling will mitigate the effects of changes in user state on typing accuracy during a copy-spelling task with BCI-FIT.
(SA2.2) Either of two adaptive-querying methods will improve BCI-FIT typing accuracy for users with mediocre AUC (area under the ROC curve) scores.
(SA3.4) Language-model enhancements, including a combination of partner and environmental input and word completion during typing, will improve typing performance with BCI-FIT, as measured by ITR during a story-retell task.

Optimized recommendations for a multi-modal BCI for each end user will be established, based on an innovative combination of clinical expertise, user feedback, customized multi-modal sensor fusion, and reinforcement learning.
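One of the four dependent variables, information transfer rate, is commonly computed in the BCI literature with the Wolpaw formula from the number of selectable targets, the selection accuracy, and the selection rate. A minimal sketch follows; the 28-item speller, the 90% accuracy figure, and the function names are illustrative assumptions, not values from this project:

```python
import math

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw bits conveyed per selection for an n_targets-choice interface."""
    if accuracy >= 1.0:
        return math.log2(n_targets)  # perfect accuracy conveys the full log2(N) bits
    if accuracy <= 0.0:
        raise ValueError("accuracy must be positive")
    # log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

def itr_bits_per_min(n_targets: int, accuracy: float,
                     selections_per_min: float) -> float:
    """Information transfer rate in bits per minute."""
    return bits_per_selection(n_targets, accuracy) * selections_per_min

# Example: a hypothetical 28-item speller at 90% accuracy, 5 selections/min
print(round(itr_bits_per_min(28, 0.90, 5.0), 2))  # → 19.31
```

Note that Wolpaw ITR assumes equiprobable, independent selections; language-model enhancements such as word completion (SA3.4) violate that assumption, which is why typing-task ITR is measured empirically rather than derived from accuracy alone.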
Start: July 2021