vLGP -- variational Latent Gaussian Process
We propose a practical and efficient inference method, called the variational latent
Gaussian process (vLGP), that recovers low-dimensional latent dynamics from high-dimensional
time series. Our method performs dimensionality reduction on a single-trial basis,
decomposing neural signals into a small number of smooth temporal signals together with
each neuron's relative contribution to the population signal. By inferring latent
neural trajectories on each trial, it provides a flexible framework for studying
internal neural processes that are not time-locked. Higher-order processes such
as decision-making, attention, and memory recall are well suited for latent trajectory
analysis because of the intrinsically low-dimensional nature of their computation.
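As a schematic of this decomposition (in our own notation; the symbols below are illustrative, not necessarily the paper's), each neuron's instantaneous firing rate is driven by a few shared smooth latent trajectories weighted by a per-neuron loading vector:

\[
\lambda_n(t) = \exp\!\bigl(\mathbf{c}_n^\top \mathbf{x}(t) + b_n\bigr),
\qquad \mathbf{x}(t) \in \mathbb{R}^L,\ L \ll N,
\]

where the $L$ latent signals $\mathbf{x}(t)$ are shared across all $N$ neurons and $\mathbf{c}_n$ gives neuron $n$'s relative contribution to the population signal.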
vLGP combines a generative model that has a history-dependent point-process observation
with a smoothness (Gaussian process) prior on the latent trajectories, and it improves
upon earlier methods for recovering latent trajectories, which assume either observation
models inappropriate for point processes or linear dynamics. On real electrophysiological
recordings from the primary visual cortex, we find that vLGP achieves substantially
higher performance than previous methods at predicting omitted spike trains, and that it
captures both the toroidal topology of the visual stimulus space and the noise correlations.
These results show that vLGP is a robust method with the potential to reveal hidden
neural dynamics from large-scale neural recordings.
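To make the generative model concrete, here is a minimal NumPy sketch that samples spikes from a model of this shape: latent trajectories drawn from a Gaussian process smoothness prior, per-neuron loadings, a self-history filter, and a discretized point-process observation. This is our own illustration, not the authors' code; the kernel choice, bin size, rates, and filter shape are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative, not the paper's settings)
T, dt = 500, 0.01      # number of time bins, bin width in seconds
L, N, H = 2, 30, 5     # latent dimension, neurons, history-filter length

# Smoothness prior: draw each latent trajectory from a GP with a
# squared-exponential kernel over time (an assumed kernel choice).
t = np.arange(T) * dt
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(T)
x = np.linalg.cholesky(K) @ rng.standard_normal((T, L))   # T x L latents

C = rng.normal(scale=0.5, size=(L, N))   # loadings: relative contributions
b = np.full(N, 3.0)                      # baseline log-rates (~20 spikes/s)
h = -np.exp(-np.arange(1, H + 1))        # history filter (refractory-like)

# Discretized point-process observation: each neuron's spiking depends on
# the shared latents and on its own recent spiking history.
y = np.zeros((T, N))
for i in range(T):
    past = y[max(0, i - H):i][::-1]       # most recent bin first
    hist = past.T @ h[:past.shape[0]]     # per-neuron history drive
    lam = np.exp(x[i] @ C + b + hist)     # conditional intensity per neuron
    y[i] = rng.poisson(lam * dt)          # spike counts in this bin
```

Inference in vLGP runs in the opposite direction: given only the spike trains, it recovers the posterior over the latent trajectories and the weights, with the variational approximation making this tractable under a point-process likelihood.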
Other relevant information
arXiv preprint: https://arxiv.org/abs/1604.03053
video: https://youtu.be/CrY5AfNH1ik
IACS RESEARCHERS
Yuan Zhao (major contributor)
Il Memming Park (PI)