An Unsupervised Learning Algorithm for Multiscale Neural Activity

Hamidreza Abbaspourazad, Maryam Shanechi

  • University of Southern California

Details

14:35 - 14:50 | Wed 12 Jul | Schwan Room | WeBT8.2

Session: Motor Neuroprostheses

Abstract

Technological advances have enabled the simultaneous recording of multiscale neural activity consisting of spikes, local field potential (LFP), and electrocorticogram (ECoG). Developing models that describe the encoding of behavior within multiscale activity is essential both for understanding neural mechanisms and for various neurotechnologies such as brain-machine interfaces (BMIs). Multiscale recordings consist of signals with different statistical profiles and time-scales. While encoding models have been developed for each scale of activity alone, developing statistical models that simultaneously characterize discrete spiking and continuous LFP/ECoG recordings and their various time-scales is a major challenge. To address this challenge, we have recently proposed a multiscale state-space encoding model for combined spike/LFP/ECoG recordings. However, methods to learn these state-space models from data are still lacking. Here, we develop an unsupervised learning algorithm for multiscale state-space models. Given a multiscale dataset, our algorithm finds the maximum-likelihood estimate of the state-space model parameters using a new expectation-maximization (EM) technique. We show that the new algorithm can learn the encoding model accurately from simulated multiscale data. We also show that the learned model can be used to decode arm movement trajectories from simulated multiscale activity. These multiscale models have the potential to improve the performance and robustness of various neurotechnologies.
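
To make the setup concrete, the sketch below illustrates one plausible form of a multiscale state-space model: a shared latent state with linear-Gaussian dynamics, Gaussian observations for continuous LFP/ECoG features, and a log-linear point-process model for spike counts. All dimensions, parameter values, and variable names are illustrative assumptions; the abstract does not specify the model equations or the details of the new EM technique, so the learning step is indicated only as a generic EM outline in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multiscale state-space model (illustrative assumptions):
#   Latent state (e.g., arm kinematics):  x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
#   Continuous LFP/ECoG features:         y_t = C x_t + v_t,      v_t ~ N(0, R)
#   Spike counts (point process, log-linear rate, per neuron):
#       n_t ~ Poisson(dt * exp(alpha + beta @ x_t))

dt = 0.01                          # spike time bin (s); field features may use coarser bins
T, nx, ny, nn = 1000, 2, 4, 10     # time steps, latent dim, LFP channels, neurons

A = 0.98 * np.eye(nx)              # stable latent dynamics
Q = 0.01 * np.eye(nx)
C = rng.standard_normal((ny, nx))  # field observation matrix
R = 0.1 * np.eye(ny)
alpha = np.full(nn, 2.0)           # baseline log firing rates
beta = rng.standard_normal((nn, nx))

x = np.zeros((T, nx))
y = np.zeros((T, ny))
n = np.zeros((T, nn), dtype=int)
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(nx), Q)
    y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(ny), R)
    rate = np.exp(alpha + beta @ x[t])   # conditional intensity per neuron
    n[t] = rng.poisson(rate * dt)        # binned spike counts

# Generic EM outline for estimating (A, Q, C, R, alpha, beta) from (y, n) alone;
# the paper's specific E-step, a multiscale filter/smoother fusing the Gaussian
# and point-process likelihoods, is not described in the abstract:
# for iteration in range(max_iters):
#     E-step: infer the posterior over x_{1:T} given y, n and current parameters
#     M-step: update parameters to maximize the expected complete-data log-likelihood
```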