
Publication

Design and Validation of Low-complexity Methods for Resolving Spike Overlap in Neuronal Spike Sorting

Book - Dissertation

Despite many neuroscientific breakthroughs, it remains largely unknown how brain activity supports cognition. Obtaining such a fundamental understanding of the brain has the potential to foster new clinical applications aimed at alleviating functional deficits that result from a 'malfunctioning' brain. The identification of causal relationships between brain activity and cognition depends on the real-time characterization of neural activity. By implanting extracellular electrodes into the brain, changes in electrical potential can be recorded that reflect the ongoing neural activity. Such recorded voltage traces contain, among other neural activity signals and patterns, information about the action potentials of the neurons close to the implanted electrodes, which is also referred to as the multi-unit activity. Although the multi-unit activity holds information about the activity of individual neurons, this information is not readily available. Moreover, the multi-unit activity often contains a mixture of action potentials, also referred to as spikes, from several neurons. In this work, we focus on the development and validation of signal processing algorithms that extract individual spike times from multi-unit activity recordings and group those spike times according to their putative neurons. The algorithms that perform this extraction task are generally known as spike sorting algorithms. The resulting single-neuron spike trains are usually processed further to decode the information that is encoded within them.

Although spike sorting and decoding hold great promise for a better understanding of the brain, recent research in spike sorting has resulted in computationally intensive approaches that are not suitable for real-time use. On the other hand, available spike sorting algorithms intended for online use are often limited in their ability to cope with realistic experimental conditions, e.g., the occurrence of overlapping spikes, under which they fail. The first point of focus of this work is the development of a spike sorting methodology with low computational complexity that explicitly accounts for overlapping spikes. We take a common threshold-based approach in which a single-neuron spike train is obtained by applying a linear finite impulse response (FIR) filter to the multi-unit activity and comparing the filter output against a threshold value. Such a low-complexity sorting architecture is useful in the context of online and/or embedded sorting. We propose a novel class of filter design methods for the threshold-based sorting architecture that are based on signal-to-peak-interference ratio (SPIR) optimality. This new class of filters enables threshold-based spike sorting, in contrast to earlier approaches based on signal-to-noise ratio (SNR) optimality or signal-to-interference-plus-noise ratio (SINR) optimality, which are either insufficiently discriminative or face practical limitations. We show that the proposed methodology outperforms existing methods on an extensive set of validation data. Furthermore, we show that SPIR optimality is related to the theory of support vector machines, such that the SPIR-optimal filter can be interpreted as a maximum-margin matched filter, which can be useful in other pattern recognition tasks where the computational complexity is restricted.
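To make the threshold-based architecture concrete, the sketch below shows only the run-time part of such a pipeline: a linear FIR filter applied to the multi-unit activity, followed by a comparison against a threshold. This is a minimal illustration, not the implementation from the dissertation; the filter coefficients w are assumed to come from an offline design step (e.g., the SPIR-optimal design described above), which is not reproduced here, and the refractory handling is a simplifying assumption.

```python
import numpy as np

def detect_spikes(mua, w, threshold, refractory=30):
    """Threshold-based single-unit spike detection (illustrative sketch).

    mua        : 1-D array, multi-unit activity recording (samples)
    w          : 1-D array, FIR filter coefficients (e.g., from an offline SPIR-optimal design)
    threshold  : scalar decision threshold applied to the filter output
    refractory : minimum number of samples between consecutive detections (assumed here)
    """
    # Linear FIR filtering of the multi-unit activity.
    y = np.convolve(mua, w, mode="same")

    # Candidate spike times: samples where the filter output exceeds the threshold.
    candidates = np.flatnonzero(y > threshold)

    # Enforce a refractory period so each spike is reported only once.
    spike_times = []
    last = -refractory
    for t in candidates:
        if t - last >= refractory:
            spike_times.append(t)
            last = t
    return np.array(spike_times)
```

At run time only the convolution and the comparison are required, which is what makes this kind of architecture attractive for online and embedded use.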
Related to the design of optimal linear filters for use in spike sorting, we present a study aimed at better understanding the need for regularization in linear filter design. We propose a data-driven regularization technique that is shown to outperform the state of the art. Furthermore, the proposed methodology makes use of an interpretable hyperparameter, which aids the regularization tuning process. The proposed regularization technique is then integrated with SPIR-optimal filter design, where it is shown to have the additional property of transforming the filter design into an unconstrained optimization problem.

The second point of focus in this work is the validation of spike sorting algorithms. Such validation requires the availability of ground truth multi-unit activity recordings, i.e., recordings for which the individual spike times of one or more spike trains are completely known. State-of-the-art approaches for the generation of such ground truth data are either expensive, require expert knowledge, or result in unrealistic recordings. In this work, we present a user-friendly tool for the creation of ground truth multi-unit activity recordings. The graphical tool implements a hybrid ground truth model, which enables the transformation of real extracellular recordings into ground truth recordings. As such, realistic ground truth data can be obtained easily, without the need for costly procedures or complex computational modelling. Therefore, besides its usefulness for validating spike sorting algorithms during development, such data can also be leveraged by spike sorting users for algorithm selection and parameter tuning, to improve spike sorting performance for their specific recording setting. The transformation is based on the availability of manually verified initial spike sorting results. The tool comes with additional routines that aid the quantification of spike sorting performance on such ground truth data. Furthermore, the tool has been integrated within a wider spike sorting ecosystem to accelerate its adoption. A case study is presented that demonstrates its usefulness for spike sorting practice.

As a third point of focus, we propose an innovative method for resolving spike overlap directly in the feature space, as opposed to other strategies that rely on the design of linear filters, which in turn depend on the availability of an initial clustering. The methodology consists of the design of a specialized spike embedding in which overlap behaves as a linear operation. The proposed methodology holds great promise for a future clustering-based spike sorting pipeline that can handle various signal characteristics that are often encountered in practice, such as spike overlap, bursting, and drift. By eliminating the filtering-related complexities from the pipeline, this approach can be considered an alternative pipeline that enables low-complexity spike sorting capable of resolving spike overlap.
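The abstract does not detail the proposed embedding, so the toy sketch below only illustrates the property such an embedding is meant to have: when overlap acts as a linear operation in the feature space, the embedding of an overlapping snippet is approximately the sum of the single-spike embeddings, and the overlap can be resolved with simple linear algebra. The Gaussian templates, noise level, and least-squares projection used here are illustrative assumptions, and temporal misalignment between overlapping spikes, which the proposed method is designed to cope with, is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy spike templates (hand-crafted here purely for illustration).
n_samples = 40
t = np.arange(n_samples)
template_a = -np.exp(-((t - 15) ** 2) / 12.0)        # narrow negative peak
template_b = -0.7 * np.exp(-((t - 22) ** 2) / 40.0)  # broader, smaller peak
T = np.stack([template_a, template_b], axis=1)       # basis, shape (n_samples, 2)

def embed(snippet, basis):
    """Linear embedding: least-squares projection onto the template basis."""
    coeffs, *_ = np.linalg.lstsq(basis, snippet, rcond=None)
    return coeffs

# An overlapping snippet: both units fire within the same window (plus noise).
overlap = template_a + template_b + 0.02 * rng.standard_normal(n_samples)

# Because the embedding is linear, the embedding of the overlap is close to
# the sum of the single-spike embeddings, so the overlap can be resolved by
# reading off the per-unit coefficients directly in the feature space.
print(embed(template_a, T))  # ~ [1, 0]
print(embed(template_b, T))  # ~ [0, 1]
print(embed(overlap, T))     # ~ [1, 1]
```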
Publication year: 2020
Accessibility: Open