This paper was accepted at the Foundation Models for the Brain and Body workshop at NeurIPS 2025.
Self-supervised learning (SSL) offers a promising approach for learning electroencephalography (EEG) representations from unlabeled data, reducing the need for costly annotations in clinical applications such as sleep staging and seizure detection. While existing EEG SSL methods predominantly use masked reconstruction strategies such as masked autoencoders (MAE), which capture local temporal patterns, position-prediction pretraining remains underexplored despite its potential to learn long-range dependencies in neural signals. We introduce PAirwise Relative Shift (PARS) pretraining, a novel pretext task that predicts relative temporal shifts between randomly sampled EEG window pairs. Unlike reconstruction-based methods that focus on local pattern recovery, PARS encourages encoders to capture the relative temporal composition and long-range dependencies inherent in neural signals. Through comprehensive evaluation on diverse EEG decoding tasks, we demonstrate that PARS-pretrained transformers consistently outperform existing pretraining approaches in label-efficient and transfer-learning settings, establishing a new paradigm for self-supervised EEG representation learning.
**Work done during an Apple internship
†Stanford University
‡California Institute of Technology
§University of Amsterdam
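
As a rough illustration of the pretext task described in the abstract (the paper's exact sampling scheme, normalization, and loss are not given here), the sketch below shows one plausible way to generate PARS training pairs: draw two windows at random offsets from the same EEG recording and use their signed relative start-time difference as the prediction target. The function name `sample_pars_pair`, the normalization by the maximum possible shift, and the regression framing are assumptions, not the authors' implementation.

```python
import numpy as np

def sample_pars_pair(eeg, window_len, rng):
    """Sample one hypothetical PARS training example from an EEG recording.

    eeg:        array of shape (n_channels, n_samples)
    window_len: window length in samples
    rng:        numpy random Generator

    Returns two windows and the relative temporal shift between their start
    positions, normalized to [-1, 1] by the maximum possible shift.
    """
    n_samples = eeg.shape[-1]
    max_start = n_samples - window_len
    # Place both windows at random positions within the same recording.
    start_a = rng.integers(0, max_start + 1)
    start_b = rng.integers(0, max_start + 1)
    window_a = eeg[..., start_a:start_a + window_len]
    window_b = eeg[..., start_b:start_b + window_len]
    # Target: signed relative shift of window B with respect to window A.
    shift = (start_b - start_a) / max_start
    return window_a, window_b, shift


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a recording: 19 channels, 30 s at 256 Hz.
    fake_eeg = rng.standard_normal((19, 30 * 256))
    wa, wb, shift = sample_pars_pair(fake_eeg, window_len=2 * 256, rng=rng)
    print(wa.shape, wb.shape, shift)  # (19, 512) (19, 512) shift in [-1, 1]
```

A pretraining step would then encode both windows with a shared transformer encoder and regress (or classify, if shifts are binned) the relative shift from the pair of embeddings; that head design is likewise an assumption based on the abstract's description.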
