Exploiting efficient representations in large-scale tensor decompositions
Journal contribution - Journal article
© 2019 Society for Industrial and Applied Mathematics
Decomposing tensors into simple terms is often an essential step toward discovering and understanding underlying processes or toward compressing data. However, storing the tensor and computing its decomposition is challenging in a large-scale setting. In many cases, though, a tensor is structured and can therefore be represented using only a few parameters: a sparse tensor is determined by the positions and values of its nonzeros, a polyadic decomposition by its factor matrices, a Tensor Train by its core tensors, a Hankel tensor by its generating vector, etc. The time and memory complexity of tensor decomposition algorithms can be reduced significantly if these efficient representations are exploited directly. Only a few core operations, such as norms and inner products, need to be specialized to achieve this, thereby avoiding the explicit construction of multiway arrays. To improve the interpretability of tensor models, constraints are often imposed or multiple datasets are fused through joint factorizations. While imposing these constraints prohibits the use of traditional compression techniques, our framework allows constraints and compression, as well as other efficient representations, to be handled trivially, as the underlying optimization variables do not change. To illustrate this, large-scale nonnegative tensor factorization is performed using MLSVD and Tensor Train compression. We also show how vector and matrix data can be analyzed using tensorization while keeping a vector or matrix complexity through the concept of implicit tensorization, as illustrated for Hankelization and Löwnerization. The concepts and numerical properties are extensively investigated through experiments.
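As a small illustration of the idea that only a few core operations need to be specialized, the sketch below (not the paper's code; names and shapes are chosen for the example) computes the Frobenius norm of a third-order tensor given in polyadic form directly from its factor matrices, using the identity that the squared norm equals the sum of the Hadamard product of the factors' Gram matrices. The full multiway array is never required.

```python
import numpy as np

def cpd_norm(A, B, C):
    """Frobenius norm of T = sum_r A[:, r] o B[:, r] o C[:, r],
    computed from the factor matrices only:
        ||T||^2 = sum of elementwise product of Gram matrices
                  (A^T A) * (B^T B) * (C^T C)."""
    G = (A.T @ A) * (B.T @ B) * (C.T @ C)  # R x R, cheap in the tensor size
    return np.sqrt(G.sum())

# Sanity check against the explicitly constructed tensor on a small example.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
B = rng.standard_normal((6, 3))
C = rng.standard_normal((7, 3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)  # explicit tensor, for verification only
print(np.isclose(cpd_norm(A, B, C), np.linalg.norm(T)))
```

For an I x J x K tensor of rank R, this costs O((I + J + K)R^2) operations instead of O(IJK), which is what makes such specialized kernels attractive at large scale.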
Journal: SIAM Journal on Scientific Computing
Pages: A789–A815
Year of publication: 2019