Title Promoter Affiliations Abstract "Design and Validation of Low-complexity Methods for Resolving Spike Overlap in Neuronal Spike Sorting" "Alexander Bertrand" "Laboratory for Biological Psychology, ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics (STADIUS)" "Despite many neuroscientific breakthroughs, it remains largely unknown how brain activity supports cognition. Obtaining such a fundamental understanding of the brain has the potential to foster new clinical applications aimed at improving functional deficits resulting from a 'malfunctioning' brain. The identification of causal relationships between brain activity and cognition depends on the real-time characterization of neural activity. By implanting extracellular electrodes into the brain, potential changes can be recorded, which are a reflection of the ongoing neural activity. Such recorded voltage traces contain, among other neural activity signals and patterns, information about the action potentials of the neurons that are close to the implanted electrodes, which is also referred to as the multi-unit activity. Although the multi-unit activity holds information about the activity of individual neurons, this information is not readily available. Moreover, the multi-unit activity often contains a mixture of action potentials, also referred to as spikes, from several neurons. In this work, we focus on the development and validation of signal processing algorithms to extract individual spike times from multi-unit activity recordings and group those spike times according to their putative neurons. The algorithms that perform this specific extraction task are generally known as spike sorting algorithms. The resulting single-neuron spike trains are usually further processed to decode the information that is encoded within the spike train. 
Although spike sorting and decoding hold great promise towards a better understanding of the brain, recent research in spike sorting has resulted in computationally intensive approaches that are not suitable for real-time use. On the other hand, available spike sorting algorithms intended for online use are often limited in their ability to cope with realistic experimental conditions, e.g., the occurrence of overlapping spikes, under which they fail. The first point of focus of this work is the development of a spike sorting methodology with low computational complexity, which explicitly accounts for overlapping spikes. We take a common threshold-based approach in which a single-neuron spike train is obtained by applying a linear finite impulse response filter to the multi-unit activity, followed by a comparison of the filter output against a threshold value. Such a low-complexity sorting architecture is useful in the context of online and/or embedded sorting. We propose a novel class of filter design methods for the threshold-based sorting architecture that are based on signal-to-peak-interference ratio (SPIR) optimality. This new class of filters enables threshold-based spike sorting, unlike earlier approaches based on signal-to-noise ratio (SNR) optimality or signal-to-interference-plus-noise ratio (SINR) optimality, which are insufficiently discriminative or face practical limitations. We show that the proposed methodology outperforms existing methods on an extensive set of validation data. 
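The filter-plus-threshold architecture described above can be sketched in a few lines. The spike template, noise level, filter (a plain matched filter standing in for the SPIR-optimal design of the thesis) and threshold value below are all illustrative assumptions, not the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative biphasic spike template (not a real recording).
t = np.arange(30)
template = np.exp(-0.5 * ((t - 10) / 2.0) ** 2) \
    - 0.6 * np.exp(-0.5 * ((t - 16) / 3.0) ** 2)

# Synthetic multi-unit trace: the template embedded at known times plus noise.
x = 0.2 * rng.standard_normal(2000)
true_times = [200, 650, 1200, 1700]
for s in true_times:
    x[s:s + len(template)] += template

# Linear FIR filtering: here a plain matched filter (time-reversed template)
# stands in for the SPIR-optimal filter designed in the thesis.
y = np.convolve(x, template[::-1], mode="same")

# Compare the filter output against a threshold and keep local maxima.
thresh = 0.5 * y.max()
peaks = [i for i in range(1, len(y) - 1)
         if y[i] > thresh and y[i] >= y[i - 1] and y[i] > y[i + 1]]
print(peaks)  # detections near the embedded spike times (offset by ~half the template length)
```

The whole detector is one convolution plus one comparison per sample, which is what makes this architecture attractive for online and embedded use.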
Furthermore, we show that SPIR optimality is related to the theory of support vector machines, such that the SPIR-optimal filter can be interpreted as a maximum-margin matched filter, which can be useful in other pattern recognition tasks where the computational complexity is restricted. Related to the design of optimal linear filters for use in spike sorting, we present a study aimed at better understanding the need for regularization in linear filter design. We propose a data-driven regularization technique that is shown to outperform the state-of-the-art. Furthermore, the proposed methodology makes use of an interpretable hyperparameter, which aids in the regularization tuning process. The proposed regularization technique is then integrated with SPIR-optimal filter design, where it is shown to have the additional property of transforming the filter design into an unconstrained optimization problem. The second point of focus in this work is the validation of spike sorting algorithms. Such validation requires the availability of ground truth multi-unit activity recordings, i.e., recordings for which the individual spike times of one or more spike trains are completely known. State-of-the-art approaches for the generation of such ground truth data are expensive, require expert knowledge, and/or result in unrealistic recordings. In this work we present a user-friendly tool for the creation of ground truth multi-unit activity recordings. The graphical tool implements a hybrid ground truth model, which enables the transformation of real extracellular recordings into ground truth recordings. As such, realistic ground truth data can be easily obtained without the need for costly procedures or complex computational modelling. 
Therefore, besides its usefulness for validating spike sorting algorithms during development, such data can also be leveraged by spike sorting users for algorithm selection and parameter tuning to improve spike sorting performance for their specific recording setting. The transformation is based on the availability of manually verified initial spike sorting results. The tool comes with additional routines to aid the quantification of spike sorting performance on such ground truth data. Furthermore, the tool has been integrated within a wider spike sorting ecosystem to accelerate its adoption. A case study is presented in which its usefulness for spike sorting practice is demonstrated. As a third point of focus, we propose an innovative method for resolving spike overlap directly in the feature space, as opposed to other strategies that are based on the design of linear filters that depend on the availability of an initial clustering. The methodology consists of the design of a specialized spike embedding in which overlap behaves as a linear operation. The proposed methodology holds great promise for a future clustering-based spike sorting pipeline that can handle various signal characteristics that are often encountered in practice, such as spike overlap, bursting, and drift. Through the elimination of the filtering-related complexities from the pipeline, this approach could be considered as an alternative pipeline that enables low-complexity spike sorting capable of resolving spike overlap." "Hardware-efficient spike sorting based on hybrid analog/digital computation" "Georges Gielen" "Electronic Circuits and Systems (ECS)" "The brain is the most complex organ in the human body and, to be able to understand how it works, large-scale in-vivo sensing of neuron populations has emerged as a key research technique. 
Microfabricated silicon neural probes have become established as the dominant technology in this field and have achieved ever increasing densities and numbers of simultaneous recording electrodes. Imec is the leader in the design and development of CMOS neural probes that achieve minimum probe-shank dimensions, high electrode density and a large number of simultaneous recording channels with low-noise and low-power performance. With these probes, it is currently possible to record from many neurons spanning multiple brain regions. However, future neural probes will require even higher numbers of simultaneous recording sites to enable the study of much larger neuron populations in the brain. In addition, wireless data transmission is becoming an important requirement to allow experiments with freely moving animals without tethering cables. Thus, on-chip neural signal processing is compulsory in such a system to compress the enormous amount of raw data, which would require far too much power to transmit wirelessly. The ideal solution would be to sort the recorded neural spikes before transmission so that only the identifiers of the recorded neurons and their spike timings need to be transmitted. The essential challenge in realizing such a system is to find ultra-low-power solutions to implement spike sorting algorithms on chip. The power consumed by the spike sorting circuits needs to be less than the power saved in wireless data transmission. This PhD research aims to tackle this challenge based on two primary hypotheses: Hypothesis 1: The power efficiency of spike sorting circuits can be improved by taking a hybrid analog/digital computation approach. The conventional system approach consists of a low-noise analog front-end and a high-precision analog-to-digital converter on the recording ASIC, with the spike sorting algorithm implemented on an external digital signal processor. This approach is optimal for spike sorting accuracy and algorithm flexibility, but not for power efficiency. 
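To ground the discussion, the spike-detection stage that such an on-chip pipeline must implement can be as simple as the nonlinear energy operator (NEO), psi[x(n)] = x(n)^2 - x(n-1)*x(n+1), which responds to simultaneously high amplitude and frequency. A minimal digital reference sketch follows; the synthetic trace, smoothing window and threshold factor are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative trace: Gaussian noise plus three sharp biphasic transients.
x = 0.1 * rng.standard_normal(5000)
spike = 1.5 * np.array([0.0, 0.5, 1.2, -0.3, -1.0, -0.3, 0.0])
spike_times = [1000, 2500, 4000]
for s in spike_times:
    x[s:s + len(spike)] += spike

# Nonlinear energy operator: psi[x(n)] = x(n)^2 - x(n-1) * x(n+1).
neo = x[1:-1] ** 2 - x[:-2] * x[2:]

# Smooth the NEO output with a short window and threshold it at a
# multiple of its mean; the window and the factor 10 are illustrative.
win = np.bartlett(7)
neo_s = np.convolve(neo, win / win.sum(), mode="same")
thr = 10.0 * neo_s.mean()

# Group supra-threshold samples into detection events.
events, prev = [], -10 ** 9
for i in np.flatnonzero(neo_s > thr):
    if i - prev > 20:
        events.append(int(i))
    prev = int(i)
print(events)  # one event near each embedded transient
```

Because the NEO uses only a square and a product of neighboring samples, it is a natural candidate for the kind of reduced-precision analog implementation hypothesized below.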
In particular, we hypothesize the following improvements: Hypothesis 1.1: A significant part of the signal processing and computation in spike sorting algorithms can be implemented using reduced-precision low-power analog circuits (e.g. a log-domain NEO-based neural detector, a switched-capacitor-based matrix multiplier for feature extraction, etc.). Hypothesis 1.2: Analog preprocessing could improve system power efficiency by reducing the data redundancy before spike sorting (e.g. spike-triggered sampling, adaptive ADC resolution, compressed sensing). Hypothesis 2: A significant reduction in power consumption can be achieved by implementing application-specific instead of general-purpose spike sorting algorithms on chip. Many brain-machine-interface applications may not require the recorded neural signals to be sorted. For example, multi-unit activity (the sample-by-sample RMS of AP signals recorded from multiple unsorted neurons) has been demonstrated to be sufficient for arm movement prediction. There are also many algorithms that demonstrate comparable neural decoding performance while omitting several intermediate spike sorting steps. Such application-specific algorithms are less power-hungry and are suitable for on-chip integration. This doctoral research will focus on hardware-efficient spike sorting based on hybrid analog/digital computation. The student will explore new low-power analog circuits to implement neural feature extraction. The student will then design and implement these architectures in an ASIC, and test the functionality and performance in a real-case scenario." "Distributed signal processing algorithms for spike sorting in next-generation high-density neuroprobes" "Alexander Bertrand" "Dynamical Systems, Signal Processing and Data Analytics (STADIUS), Department of Electrical Engineering (ESAT), Laboratory for Biological Psychology" "Neurons communicate with each other through action potentials or so-called ‘spikes’. 
When an electrode is inserted into the brain, it records spikes of all the neurons in its close vicinity. To decode brain processes, all these spikes have to be sorted according to their underlying neuronal source, aided by so-called 'spike sorting' (SS) algorithms. Recent advances in silicon technology have paved the way for neuroprobes with high-density (HD) electrode grids. These HD grids provide more spatial information, but it is unclear how -and to what extent- this can be exploited by SS algorithms. Furthermore, the full exploitation of HD electrode grids is hampered by several fundamental hardware (HW) and software (SW) limitations. On the HW side, bandwidth and wiring constraints make it impossible to extract all electrode signals. On the SW side, the standard machine-learning algorithms for SS are not designed to (optimally) exploit spatial information, and their computational complexity scales poorly with the number of channels. Therefore, this project aims to 1) explore the added value and fundamental limits of HD electrode grids based on physiological models, 2) develop novel SS algorithms that optimally exploit this spatial information, and 3) overcome the current HW/SW limitations based on distributed signal processing and distributed probe architectures. If successful, the project could elicit a fundamental paradigm shift in the design of HD neurorecording technology." "Brain-machine interfacing with micro-electrode arrays in the visual cortex." "Marc Van Hulle" "Laboratory for Neuro- and Psychophysiology, ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics" "Brain-Machine interfaces (BMIs, also called neuromotor prostheses) provide an outlook for immediately improving the quality of life of neurologically impaired persons. 
Nowadays, in these BMIs, a micro-electrode array consisting of tens of electrodes is implanted in the brain in the (pre)motor frontal areas and in the parietal cortex, regions involved in motor intention, motor planning and motor execution. There it captures brain signals from which the information is extracted by further processing, but a human operator is still needed to run the system. The aim of this project is to develop a fully autonomous BMI, a system able to work without human assistance. The system will provide: 1) automatic on-line spike detection; 2) automatic spike sorting (spike discrimination) and 3) array decoding (spike decoding). Instead of recording in the (pre)motor regions, we will measure the responses of neurons in visual cortical area V4 to a set of visual stimuli that the monkey has learned to discriminate. Since recording in this area is much harder than in the (pre)motor cortex, due to the smaller number of pyramidal neurons, the spike detection and spike discrimination problem becomes much tougher, and success in this matter will ensure success in most other cortical areas. Recording in the visual area instead of a motor region is necessary for the development of a functional autonomous prosthetic device that will produce the same output as the stimulus-driven behavioural response of the animal." "Distributed signal processing algorithms for wireless EEG sensor networks with applications in auditory-based brain-computer interfaces" "Alexander Bertrand" "Dynamical Systems, Signal Processing and Data Analytics (STADIUS)" "Electroencephalography (EEG) is a cheap and non-invasive neuromonitoring technique to measure electrical potentials generated by the brain. Recently, the concept of a wireless EEG sensor network (WESN) has been proposed, in which the head is covered with a multitude of EEG nodes with facilities for local signal processing (SP) and wireless communication. 
Since they are amenable to extreme miniaturization and low-power system design, it is believed that such WESNs are an enabling technology for long-term wearable EEG monitoring. Since the wireless transmission is the most power-hungry component, it is crucial to minimize the amount of data that is to be transmitted. To this end, we will develop novel distributed algorithms to solve multi-channel EEG SP tasks, while avoiding energy-inefficient data centralization. We will focus on algorithms for artifact removal and extraction of brain responses, in particular for two different auditory-based brain-computer interfaces (BCIs)." "Micro Bubble-jet Cell Sorter" "Liesbet Lagae" "Quantum Solid State Physics (QSP), Soft Matter and Biophysics, ESAT - MICAS, Microelectronics and Sensors, Department of Cardiovascular Sciences" "FACS enables the analysis and sorting of cells from a heterogeneous cell population at single cell level, hence representing an indispensable tool for biological research and clinical diagnostics and therapeutics. A traditional FACS consists of a flow cell in which biological cells pass an optical detection point one by one. There they are discriminated based on optical scatter signals and fluorescent labels. The cells are then encapsulated in droplets and electrostatically deflected into collection vials depending on set optical signal gates. Selection of cells of interest is typically done based on cell surface antigens, viability, DNA and RNA content or fluorescent proteins. Novel applications, such as rare cell isolation and cell therapy are demanding extreme high throughput and sterile sorting with multi-marker selection. The formation of aerosols in a FACS during sorting raises issues regarding biosafety and cross-contamination, which is especially problematic for these applications. Several microfluidic sorters on chip have been developed to overcome the limitations of FACS. 
The actuation method in these devices either relies on physical cell properties or is independent of cell properties, with selection based on the optical scatter and fluorescence signature of the cell, i.e., a µFACS. For many applications, the physical properties of the target cells do not differ enough from the heterogeneous cell mixture to be sorted out, and only a µFACS is capable of isolating the cells of interest. Many actuation methods for µFACS devices have been developed, such as a MEMS valve, piezoelectric-induced jet flow or vapor-bubble-induced jet flow. The latter method can be divided into bubble generation by laser-induced optical heating and by Joule heating of a thin-film resistor. In this dissertation, a strategy has been developed to create a high-throughput, biosafe and disposable µFACS, the micro bubble-jet sorter. It leverages lithographical fabrication techniques to fabricate hundreds of small thermal hotspots in a thin-film microheater. Pulsed resistive heating at the hotspots produces numerous micro vapor bubbles with ultra-short lifetimes compared to the single large vapor bubble in other jet-flow sorters. On top of that, all actuators are integrated on chip, which removes costly external components such as the laser required for laser-induced bubble generation. The fabrication process and design of two versions of the micro bubble-jet sorter chip are introduced. Two new photo-patternable adhesive polymers were characterized for microfluidic cytometry applications. Powerful cavitation forces from collapsing vapor bubbles at the end of the bubble cycle deteriorate the passivation layer on top of the microheater. A second-generation sorter chip was developed with an improved material stack to increase the heater lifetime. The vapor bubble cycle was studied with thermal simulations of the heater stack and stroboscopic imaging of the vapor bubbles during heating pulse application. The jet flow for cell sorting was characterized with finite element analysis and dye trace imaging. 
High-rate sorting experiments with beads showed that the sorting performance was limited by coincidence events, in which cells arrive simultaneously, and by the jet-flow period. Moreover, a significant enrichment of spiked cells demonstrated that this sorting chip is particularly suitable for CTC isolation from PBMCs and prenatal cell isolation from cervical samples. To check the influence of the sorting process on cell well-being, cell viability tests were performed. In a preliminary test, the expression levels of stress-related genes in sorted cells were measured to further investigate the influence of the sorting process. Affordable solutions for massive high-throughput sorting can only be achieved with parallel sorting on chip. Therefore, thermal simulations and measurements were performed to investigate the limitations of parallel sorting. The parallel sorting capabilities were demonstrated with bead and cell isolations. Cell sorting based on an array of micro-sized vapor bubbles provides a powerful tool to increase the throughput of µFACS devices required for next-generation cell therapy applications. To this end, a microfluidic sorting chip has been demonstrated that will enable contamination-free, enclosed, and affordable cell sorting that is compatible with mass manufacturing." "Syndetic and asyndetic complementation in Spanish. A diachronic probabilistic account" "Bert Cornillie" "Functional and Cognitive Linguistics: Grammar and Typology (FunC), Leuven" "My dissertation focuses on the alternation between syndetic and asyndetic finite complement clauses in Spanish. Syndetic complements, introduced by an explicit complementizer que ‘that’, as in (1a), are the most frequent patterns of complementation in Present-day Spanish. Alternatively, a complement clause can also be introduced asyndetically, i.e. without the complementizer que, as shown in (1b), where the absence of the complementizer is indicated by “Ø”. (1) a. Le ruego que me deje pasar b. 
Le ruego Ø me deje pasar ‘I beg you to let me pass’ (Lit. ‘I beg (that) you let me pass’) Inmaculada Alvear, El sonido de tu boca. Spain, 2005, theatre play. Asyndetic complementation has been considered a “syntactic fashion” (Pountain 2015) which spread starting from the 15th century and eventually lost popularity, becoming a marginal construction in Present-day Spanish. However, the grammatical, stylistic and social motivations that might have caused its spread and decline remain understudied. In this work, I rely on the concept of probabilistic grammar (Bod, Hay & Jannedy 2003; Bresnan 2007), assuming that the constraints that regulate the syndetic/asyndetic alternation are probabilistic rather than categorical. Together with the analysis of language-internal predictors, I consider the role of language-external factors, such as social power (Brown and Gilman 1960), style and Discourse Traditions. With this aim, I use descriptive and inferential statistics (mixed-effects logistic regression) to investigate the diachronic changes of the language-internal and language-external probabilistic constraints (Szmrecsanyi 2013a), in order to understand how they have affected the distribution of the variants. The dissertation is articulated in three case studies: two on historical data (15th to 18th centuries) from the corpus CODEA+2015 (GITHE 2015) and one on present-day data (21st century), based on a manually compiled written corpus and on the spoken corpus PRESEEA (2014). The main results show that between the 15th and the 18th century the asyndetic variant was mainly used to mark a higher level of integration between main and subordinate clause (Mazzola, Cornillie & Rosemeyer 2022, cf. Givón 2001), and that the alternation is sensitive to processing constraints such as the Complexity Principle (Rohdenburg 1996) and the Domain Minimisation principle (Hawkins 1999; 2004). 
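The idea of a probabilistic (rather than categorical) constraint can be illustrated with a toy logistic-regression fit on synthetic data. The "integration" predictor and the generating coefficients below are invented for illustration and are not the dissertation's corpus data; a true mixed-effects model would further add random effects (e.g. per author or text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: the odds of the asyndetic variant (1 = Ø) rise gradually
# with an invented clause-integration score -- a probabilistic constraint.
n = 2000
integration = rng.uniform(-2, 2, n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * integration)))
y = (rng.uniform(0, 1, n) < p_true).astype(float)

# Plain logistic regression fitted by gradient descent.
X = np.column_stack([np.ones(n), integration])
w = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n
print(w)  # coefficients land near the generating values (0.5, 1.5)
```

The fitted model assigns every context a probability strictly between 0 and 1, which is exactly what distinguishes a probabilistic constraint from a categorical rule.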
From the socio-stylistic perspective, the alternation is influenced by external factors, such as the type of audience addressed (Bell 2001) and the Discourse Tradition of the document (Kabatek 2005). Asyndetic complementation was typically employed in speech acts which entailed a deferential request addressed to someone with higher social power (Mazzola, Rosemeyer & Cornillie 2022). Finally, the analysis of the 21st-century data confirms that the asyndetic construction is extremely infrequent and that the morphosyntactic and semantic constraints are affected by the decline to a larger extent, whereas processing constraints are conserved. From the constructional point of view, this indicates that the asyndetic construction underwent deschematisation: from an abstract subschema of complementation to a less schematic, lower-level construction (Croft 2003; Barðdal & Gildea 2015). I argue that this grammatical change can be attributed to external and “environmental” factors (Szmrecsanyi 2013b), such as changes in its typical Discourse Traditions and environments, affected by the historical evolution of social conventions. Overall, this dissertation has changed the view of Spanish asyndetic complements in terms of their chronology, probabilistic grammar and socio-stylistic distribution, and has contributed to the investigation of the intricate relationship between language change and social conventions. More specifically, this study improves our general understanding of syntactic alternations and their diachronic development by stressing the important role played by all sorts of predictors in the development of language variation and change, including grammatical and processing constraints, momentary fancies, cultural contacts, and social and ideological transformations. References Barðdal, Jóhanna & Spike Gildea. 2015. Diachronic Construction Grammar. Epistemological context, basic assumptions and historical implications. 
In Jóhanna Barðdal, Elena Smirnova, Lotte Sommerer & Spike Gildea (eds.), Diachronic construction grammar (Constructional Approaches to Language 18), 1–49. Amsterdam: John Benjamins; Bell, Allan. 2001. Back in style: Reworking audience design. In Penelope Eckert & John R. Rickford (eds.), Style and sociolinguistic variation, 139–169. Cambridge: Cambridge University Press; Bod, Rens, Jennifer Hay & Stefanie Jannedy. 2003. Probabilistic Linguistics. Cambridge, Massachusetts: MIT Press; Bresnan, Joan. 2007. Is Syntactic Knowledge Probabilistic? Experiments with the English Dative Alternation. In Sam Featherston & Wolfgang Sternefeld (eds.), Roots. Linguistics in Search of its Evidential Base, 75–96. Berlin: De Gruyter; Brown, Roger & Albert Gilman. 1960. The Pronouns of Power and Solidarity. In Thomas Sebeok (ed.), Style in Language, 253–276. Cambridge, Massachusetts: MIT Press; Croft, William A. 2003. A false dichotomy: Lexical rules vs. constructions. In Hubert Cuyckens, Thomas Berg, René Dirven & Klaus-Uwe Panther (eds.), Motivation in Language. Studies in honor of Günter Radden, 49–68. Amsterdam: John Benjamins; GITHE. 2015. Codea+2015. Corpus de documentos españoles anteriores a 1800. https://doi.org/10.37536/CODEA.2015. (10 January, 2022); Givón, Talmy. 2001. Syntax: an introduction. Vol. 2. Amsterdam: John Benjamins; Hawkins, John A. 1999. Processing Complexity and Filler-Gap Dependencies across Grammars. Language 75(2). 244–285; Hawkins, John A. 2004. Efficiency and Complexity in Grammars. Oxford: Oxford University Press; Kabatek, Johannes. 2005. Tradiciones discursivas y cambio lingüístico. Lexis: Revista de lingüística y literatura 29(2). 151–177; Mazzola, Giulia, Bert Cornillie & Malte Rosemeyer. 2022. Asyndetic complementation and referential integration in Spanish: A diachronic probabilistic grammar account. Journal of Historical Linguistics 12(2). 194–240; Mazzola, Giulia, Malte Rosemeyer & Bert Cornillie. 2022. 
Syntactic alternations and socio-stylistic constraints: the case of asyndetic complementation in the history of Spanish. Journal of Historical Sociolinguistics 8(2). 197–235; Pountain, Christopher. 2015. Que-deletion: the rise and fall of a syntactic fashion. In Francisco Dubert García, Gabriel Rei-Doval & Xulio Sousa (eds.), En memoria de tanto miragre. Estudos dedicados ó profesor David Mackenzie, 143–159. Santiago de Compostela: Universidad de Santiago de Compostela - Servicio de Publicaciones e Intercambio Científico; PRESEEA. 2014. Corpus del Proyecto para el estudio sociolingüístico del español de España y de América. https://preseea.linguas.net/. (16 March, 2022); Rohdenburg, Günter. 1996. Cognitive complexity and increased grammatical explicitness in English. Cognitive Linguistics 7(2); Szmrecsanyi, Benedikt. 2013a. Diachronic Probabilistic Grammar. English Language and Linguistics 19(3). 41–68; Szmrecsanyi, Benedikt. 2013b. The great regression. Genitive variability in Late Modern English news texts. In Kersti Börjars, David Denison & Alan K. Scott (eds.), Morphosyntactic Categories and the Expression of Possession. Amsterdam: John Benjamins." "Newton-type operator splitting methods for real-time optimization of cyberphysical systems" "Panos Patrinos" "Dynamical Systems, Signal Processing and Data Analytics (STADIUS)" "Operator splitting techniques, introduced in the 1950s for solving PDEs and optimal control problems, have been successfully used to reduce complex problems to a series of simpler subproblems. They have recently received an enormous renewed interest due to their ability to handle large-scale and embedded convex optimization problems, and have thus found numerous applications in real-time control, machine learning, data mining and signal processing. 
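As a concrete instance of such a splitting, the forward-backward (proximal gradient) method handles a composite problem like the lasso by alternating a gradient step on the smooth term with a cheap proximal step (soft thresholding) on the nonsmooth term. A minimal sketch on illustrative problem data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative lasso instance: min_x 0.5*||A x - b||^2 + lam*||x||_1.
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
lam = 0.5

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
x = np.zeros(20)
for _ in range(2000):
    g = x - step * A.T @ (A @ x - b)                          # forward (gradient) step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # backward (prox) step

print(np.flatnonzero(np.abs(x) > 0.1))  # recovers the support of x_true: indices 2, 7, 15
```

Each iteration is cheap and separable, which is exactly the property the project wants to preserve while accelerating the underlying fixed-point iteration with Newton-type updates.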
The main reasoning behind operator splitting techniques, from their very birth, is that they can be seen as relaxed fixed-point iterations for finding a fixed point of a nonexpansive mapping. As fixed-point iterations can be inherently slow, the same holds for all operator splitting techniques. The key idea of the methodological part of the present project is to give new interpretations that will allow the construction of much more efficient algorithms that nevertheless preserve all the favorable properties of operator splitting techniques (separability, distributability, cheap iterations). This paves the way for developing Newton-type techniques with favorable global convergence properties and fast asymptotic convergence rates. The new algorithms can potentially have a huge impact in embedded model predictive control (MPC) (much faster convergence means applicability of embedded MPC to a wider range of industries such as automotive, aerospace and robotics), in control of interconnected cyberphysical systems (faster convergence means fewer communication rounds, hence smaller delays and lower power consumption) as well as in big data analytics (faster convergence means the ability to process even larger data sets)." "Explicit and Implicit Tensor Decomposition-based Algorithms and Applications" "Lieven De Lathauwer" "ESAT - STADIUS, Stadius Centre for Dynamical Systems, Signal Processing and Data Analytics, Electrical Engineering (ESAT), Kulak Kortrijk Campus" "Various real-life data such as time series and multi-sensor recordings can be represented by vectors and matrices, which are one-way and two-way arrays of numerical values, respectively. Valuable information can be extracted from these measured data matrices by means of matrix factorizations in a broad range of applications within signal processing, data mining, and machine learning. 
While matrix-based methods are powerful and well-known tools for various applications, they are limited to single-mode variations, making them ill-suited to tackle multi-way data without loss of information. Higher-order tensors are a natural extension of vectors (first-order) and matrices (second-order), enabling us to represent multi-way arrays of numerical values, which have become ubiquitous in signal processing and data mining applications. By leveraging the powerful utilities offered by tensor decompositions, such as compression and uniqueness properties, we can extract more information from multi-way data than is possible using only matrix tools. While higher-order tensors allow us to properly accommodate multiple modes of variation in data, tensor problems are often large-scale because the number of entries in a tensor increases exponentially with the tensor order. This curse of dimensionality can, however, be alleviated or even broken by various techniques such as representing the tensor by an approximate but compact tensor model. While a pessimist only sees the curse, an optimist sees a significant opportunity for the compact representation of large-scale data vectors: by representing a large-scale vector (first-order) using a compact (higher-order) tensor model, the number of parameters needed to represent the underlying vector decreases exponentially in the order of the tensor representation. The key assumption for employing this blessing of dimensionality is that the data can be described by far fewer parameters than the actual number of samples, which is often true in large-scale applications. By leveraging the blessing of dimensionality in this thesis for blind source separation and (blind) system identification, we can tackle large-scale applications through explicit and implicit tensor decomposition-based methods. 
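The blessing-of-dimensionality argument can be made concrete with a small numerical sketch: a length-1024 vector with (assumed, illustrative) Kronecker structure becomes, after tensorization, an exactly rank-1 tensor described by 20 parameters instead of 1024:

```python
import numpy as np

rng = np.random.default_rng(4)

# A length-1024 vector with hidden Kronecker (rank-1) structure:
# v = a ⊗ b ⊗ c ⊗ d ⊗ e with five factors of length 4 each.
# Storing v takes 4**5 = 1024 numbers; the factors take only 5 * 4 = 20.
factors = [rng.standard_normal(4) for _ in range(5)]
v = factors[0]
for f in factors[1:]:
    v = np.kron(v, f)

# Tensorization: reshape the vector into a 4x4x4x4x4 tensor. For this
# structured v the tensor is exactly rank 1 (an outer product of the factors).
T = v.reshape(4, 4, 4, 4, 4)

# Sanity check: the mode-0 unfolding of a rank-1 tensor has matrix rank 1.
M = T.reshape(4, -1)
print(v.size, np.linalg.matrix_rank(M))  # 1024 1
```

Real data are rarely exactly rank 1, but the same reshaping combined with a low-rank tensor model yields the exponential parameter reduction described above.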
While explicit decompositions decompose a tensor that is known a priori, implicit decompositions decompose a tensor that is only known implicitly. In this thesis, we present a single-step framework for a particular type of implicit tensor decomposition, consisting of optimization-based and algebraic algorithms as well as generic uniqueness results. By properly exploiting additional structure in specific applications, we can significantly reduce the computational complexity of our optimization-based method. Our approach for large-scale instantaneous blind source separation and (blind) system identification enables various applications such as direction-of-arrival estimation in large-scale arrays and neural spike sorting in high-density recordings. Furthermore, we link implicit tensor decompositions to multilinear systems of equations, which are a generalization of linear systems, allowing us to propose a novel tensor-based classification scheme that we use for face recognition and irregular heartbeat classification with excellent performance. "