EEG is widely used in intellectual and developmental disabilities (IDD) research due to its low cost, minimal exclusionary criteria for data acquisition, and high temporal resolution. A critical challenge in maximizing the utility of EEG data sets is the high variability in hardware across labs and idiosyncratic data-processing approaches, which impede effective data sharing and replication of findings.
To increase the overall reproducibility of EEG results, we aim to develop an automated environment for high-throughput EEG data processing using the latest data analysis techniques for between-group and between-condition statistical inference. Our goal is to provide (1) standardized EEG processing modules representing the most frequently used signal extraction (e.g., filtering, artifact detection and correction) and analysis procedures (e.g., frequency, time-frequency, coherence measures), (2) batch processing of large data sets that yields significant time savings compared to manual processing and reduces human error, (3) identification of EEG features and processing outputs that are optimal for integration with other imaging modalities (e.g., fMRI) and standardized behavioral assessments, and (4) support for novel nonparametric statistical inferential tools for condition- and group-level hypothesis tests.
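As a minimal sketch of the kind of nonparametric inference named in aim (4), the following illustrates a two-sample permutation test on a per-participant EEG feature (e.g., a band-power value). The function name, the data values, and the feature itself are hypothetical illustrations, not the pipeline's actual code.

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of group means.

    Returns a two-sided p-value: the fraction of random label shufflings
    whose absolute mean difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    # add-one correction keeps the Monte Carlo p-value strictly positive
    return (count + 1) / (n_perm + 1)

# Hypothetical per-participant band-power values (arbitrary units)
patients = [4.1, 3.8, 4.5, 4.9, 5.2, 4.4]
controls = [3.2, 2.9, 3.5, 3.1, 3.8, 3.0]
p = permutation_test(patients, controls)
```

Because the null distribution is built from the data themselves, no distributional assumptions are needed, which suits heterogeneous EEG features across IDD cohorts.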
In this presentation, we will highlight the flow of the EEG pre-processing component, demonstrate the reproducibility of the pipeline through its application to a dataset of participants with Angelman syndrome, and discuss broader applications in IDD research.
Last Updated: 12/14/2021 4:22:41 PM