### Lieven Vandenberghe

University of California, Los Angeles

#### Semidefinite programming methods for continuous sparse optimization

We discuss extensions of semidefinite programming methods for 1-norm minimization over infinite dictionaries of complex exponentials, which have recently been proposed for superresolution and gridless compressed sensing.

We show that results related to the generalized Kalman-Yakubovich-Popov lemma in linear system theory provide simple constructive proofs for the semidefinite representations of the penalties used in these problems. The connection leads to extensions to more general dictionaries associated with linear state-space models and matrix pencils.
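For orientation, the best-known instance of such a semidefinite representation, from the atomic-norm line of work on gridless compressed sensing (stated here as background; the talk derives such representations via the generalized KYP lemma), expresses the atomic norm of $x \in \mathbb{C}^n$ over the dictionary of complex exponentials as a semidefinite program:

$$
\|x\|_{\mathcal{A}} \;=\; \min_{u \in \mathbb{C}^n,\; t \in \mathbb{R}} \left\{ \frac{1}{2n}\operatorname{tr} T(u) + \frac{t}{2} \;:\; \begin{bmatrix} T(u) & x \\ x^{*} & t \end{bmatrix} \succeq 0 \right\},
$$

where $T(u)$ denotes the Hermitian Toeplitz matrix whose first column is $u$.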

The results will be illustrated with applications in spectral estimation, array signal processing, and numerical analysis.

*Joint work with Hsiao-Han Chao.*

### Karin Schnass

University of Innsbruck

#### Sparsity, Co-sparsity and Learning

While (synthesis) sparsity is by now a well-studied low complexity model for signal processing, the dual concept of (analysis) co-sparsity is much less investigated but equally promising. We will first give a quick overview of both models and then turn to optimisation formulations for learning sparsifying dictionaries as well as co-sparsifying (analysis) operators. Finally, we will discuss the resulting learning algorithms and ongoing research directions.

*Joint work with Michael Sandbichler.*

### Phil Schniter

Ohio State University

#### Robust approximate message passing

Approximate message passing (AMP) has recently become popular for inference in linear and generalized linear models. AMP can be viewed as an approximation of loopy belief propagation that requires only two matrix multiplies and a (typically simple) denoising step per iteration, and relatively few iterations, making it computationally efficient. When the measurement matrix "A" is large and well modeled as i.i.d. sub-Gaussian, AMP's behavior is closely predicted by a state evolution. Furthermore, when this state evolution has unique fixed points, the AMP estimates are Bayes optimal. For general measurement matrices, however, AMP may produce highly suboptimal estimates or not even converge. Thus, there has been great interest in making AMP robust to the choice of measurement matrix.
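To fix ideas, here is a minimal numerical sketch of the generic AMP recipe for sparse recovery with a soft-thresholding denoiser (our illustration, not the speaker's code; the threshold rule `alpha * ||z|| / sqrt(m)` is one common heuristic):

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding denoiser (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, n_iter=50, alpha=1.5):
    """AMP for y = A x + w with i.i.d. sub-Gaussian A: per iteration,
    two matrix multiplies (A.T @ z and A @ x), one scalar denoising
    step, and the Onsager correction term that distinguishes AMP from
    plain iterative soft thresholding."""
    m = A.shape[0]
    x = np.zeros(A.shape[1])
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)  # effective noise level
        x = soft_threshold(x + A.T @ z, tau)
        # Onsager term: previous residual times the denoiser's average slope
        z = y - A @ x + (z / m) * np.count_nonzero(x)
    return x

# toy problem: 20-sparse signal, i.i.d. Gaussian measurements
rng = np.random.default_rng(0)
m, n, k = 250, 500, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0
x_hat = amp(A, y)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))  # small relative error
```

Dropping the Onsager term turns this into plain iterative soft thresholding, which converges much more slowly and is not tracked by the state evolution.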

In this talk, we describe some recent progress on robust AMP. In particular, we describe a method based on an approximation of non-loopy expectation propagation that, like AMP, requires only two matrix multiplies and a simple denoising step per iteration. Unlike AMP, however, it leverages knowledge of the SVD of the measurement matrix to yield excellent performance over a larger class of measurement matrices. Specifically, when the Gramian A'A is large and unitarily invariant, its behavior is closely predicted by a state evolution whose fixed points match the replica prediction. Moreover, convergence has been proven in certain cases, with empirical results showing robust convergence even with severely ill-conditioned matrices. Like AMP, this robust AMP can be successfully used with non-scalar denoisers to accomplish sophisticated inference tasks, such as simultaneously learning and exploiting i.i.d. signal priors, or leveraging black-box denoisers such as BM3D. We look forward to describing these preliminary results, as well as ongoing research, on robust AMP.

*Joint work with Alyson Fletcher and Sundeep Rangan.*

### Florent Krzakala

ENS Paris

#### Approximate Message Passing and Low Rank Matrix Factorization Problems

A large number of interesting problems in machine learning and statistics can be expressed as low rank structured matrix factorization problems, such as sparse PCA, planted clique, sub-matrix localization, clustering of mixtures of Gaussians, or community detection in a graph.

I will discuss how recent ideas from statistical physics and information theory have led, on the one hand, to new mathematical insights into these problems, including a characterization of the best achievable performance, and, on the other, to the development of powerful new algorithms, called approximate message passing, which turn out to be optimal for a large set of problems and parameters.
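The simplest instance of such a problem is the rank-one spiked (Wigner) matrix model, which already exhibits the sharp phase transition that these analyses characterize. The sketch below (our illustration) uses the plain spectral estimator rather than AMP; above the transition (signal-to-noise ratio greater than 1) the top eigenvector correlates with the hidden spike:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 2.0                      # size and signal-to-noise ratio
x = rng.choice([-1.0, 1.0], size=n)     # hidden spike
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)          # Wigner noise, entries of variance 1/n
Y = (lam / n) * np.outer(x, x) + W      # observed low-rank-plus-noise matrix

# spectral estimate: top eigenvector of Y
v = np.linalg.eigh(Y)[1][:, -1]
overlap = abs(v @ x) / np.sqrt(n)       # |correlation| with the hidden spike
print(overlap)  # random matrix theory predicts sqrt(1 - 1/lam**2) ~ 0.87
```

For lam below 1 the overlap vanishes asymptotically; AMP-type algorithms improve on this spectral baseline and, for many such models, reach the information-theoretically optimal overlap.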

### Holger Rauhut

RWTH Aachen

#### Low rank tensor recovery

An extension of compressive sensing predicts that matrices of low rank can be recovered from incomplete linear information via efficient algorithms, for instance nuclear norm minimization. Low rank representations become much more efficient when passing from matrices to tensors of higher order, and it is of interest to extend algorithms and theory to the recovery of low rank tensors from incomplete information. Unfortunately, many problems related to matrix decompositions become computationally hard and/or hard to analyze when passing to higher order tensors. This talk presents two approaches to low rank tensor recovery together with (partial) results. The first one extends iterative hard thresholding algorithms to the tensor case and gives a partial recovery result based on a variant of the restricted isometry property. The second one considers relaxations of the tensor nuclear norm (which itself is NP-hard to compute) and corresponding semidefinite optimization problems. These relaxations are based on so-called theta bodies, a concept from convex algebraic geometry. For both approaches numerical experiments are promising but a number of open problems remain.
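The first approach is easiest to see in the matrix case, where the hard-thresholding step is an exact truncated SVD (for higher-order tensors it must be replaced by a quasi-optimal low-rank truncation, e.g. via HOSVD). A minimal completion-style sketch (our illustration, not the algorithm analyzed in the talk):

```python
import numpy as np

def svd_project(X, r):
    """Hard-thresholding step: best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht_complete(Y, mask, r, n_iter=200):
    """Iterative hard thresholding for low-rank matrix completion:
    gradient step on the observed entries, then rank-r projection."""
    p = mask.mean()                       # fraction of observed entries
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        X = svd_project(X + mask * (Y - X) / p, r)
    return X

# toy example: rank-2 ground truth, ~60% of entries observed
rng = np.random.default_rng(1)
n, r = 30, 2
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6
X_hat = iht_complete(mask * X0, mask, r)
print(np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))  # small relative error
```

The recovery guarantees in the talk rest on a restricted-isometry-type condition on the measurement map; the step size `1/p` above simply rescales the subsampled gradient to have the right expectation.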

*Joint work with Reinhold Schneider and Zeljka Stojanac.*

### Gerhard Wunder

FU Berlin

#### Compressive Coded Random Access for 5G Massive Machine-type Communication

Massive Machine-type Communication (MMC) within the Internet of Things (IoT) is an important future market segment in 5G, but not yet efficiently supported in cellular systems. A major challenge in MMC is the very unfavorable ratio of payload to control overhead, due to small messages and oversized Medium Access Control (MAC) procedures. In this talk we follow up on a recent concept called Compressive Coded Random Access (CCRA) combining advanced MAC protocols with Compressed Sensing (CS) based multiuser detection. Specifically, we introduce a "one shot" random access procedure where users can send a message without a priori synchronizing with the network. In this procedure a common overloaded control channel is used to jointly detect sparse user activity and sparse channel profiles. In the same slot, data is detected based on the already available information. In the talk we show how CS algorithms and in particular the concept of hierarchical sparsity can be used to design efficient and scalable access protocols. The CCRA concept is introduced in full detail and further generalizations are discussed. We present algorithms and analysis that prove the additional benefit of the concept.
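The hierarchical-sparsity idea can be made concrete through its thresholding operator: a signal split into blocks (e.g. one block per potential user) is hierarchically (s, k)-sparse if at most s blocks are active and each active block has at most k nonzero entries. A minimal sketch of the corresponding projection (our illustration of the generic operator used in HiHTP-type recovery algorithms):

```python
import numpy as np

def hierarchical_threshold(x, blocks, s, k):
    """Projection onto hierarchically (s, k)-sparse vectors: within each
    of `blocks` equal-size sub-blocks keep the k largest-magnitude
    entries, then keep only the s blocks with the largest remaining
    energy and zero out the rest."""
    X = x.reshape(blocks, -1).copy()
    # within each block, zero all but the k largest-magnitude entries
    idx = np.argsort(np.abs(X), axis=1)[:, :-k]
    np.put_along_axis(X, idx, 0.0, axis=1)
    # keep only the s blocks with the largest energy
    energy = np.sum(X**2, axis=1)
    X[np.argsort(energy)[:-s]] = 0.0
    return X.ravel()

x = np.array([0.1, 3.0, -0.2, 0.0,     # block 0: one dominant entry
              2.0, -2.5, 0.3, 0.1,     # block 1: two dominant entries
              0.2, 0.1, -0.1, 0.0])    # block 2: essentially inactive
print(hierarchical_threshold(x, blocks=3, s=2, k=2))
```

Replacing the plain sparse projection in a hard-thresholding recovery loop with this operator is what lets joint activity and channel detection exploit the two-level structure (few active users, each with a sparse channel profile).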

### Petros Boufounos

Mitsubishi Electric Research Labs

#### On representing the right information: embeddings, quantization, and distributed coding

Low-dimensional embeddings have recently emerged as a key component in modern signal processing theory and practice. This talk will explore the information-preserving properties of such embeddings, which make them suitable for a number of signal representation applications. In particular we will demonstrate that embeddings can be designed to preserve different ranges of distances with different accuracy. Such designs make embeddings a very convenient coding mechanism for signal geometries, enabling rate-distortion tradeoffs in which the distortion is measured on the signal geometry and not on the signal itself. In addition, the same embeddings can be used as a distributed coding mechanism, i.e., when information about the signal being coded is available in the decoder. Since embeddings can be used to represent only a limited amount of information, the same embedding design, when properly implemented, can also provide information-theoretic privacy guarantees. In summary, low-dimensional embeddings provide a fairly general framework for information representation, with wide applicability and relatively untapped potential.
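The baseline property such designs build on is that a generic random embedding preserves pairwise distances up to a small multiplicative distortion; the distance-range-adaptive designs of the talk refine this. A quick Johnson–Lindenstrauss-style numerical check (our illustration):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, m, k = 1000, 200, 50                 # ambient dim, embedding dim, #points
X = rng.standard_normal((k, n))         # a generic point cloud
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random linear embedding
Y = X @ Phi.T                           # embedded points

# pairwise distance ratios concentrate around 1
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(k), 2)]
print(min(ratios), max(ratios))
```

Since only a coarse sketch of the geometry survives the projection, the same mechanism that enables rate-distortion tradeoffs also limits what an adversary can recover, which is the source of the privacy guarantees mentioned above.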

### Bogdan Roman

University of Cambridge

#### Resolution enhancement and targeted sampling in physical imaging

Signal recovery from undersampled data is of great importance to physical imaging systems. We identify and model two asymptotic phenomena that manifest in imaging systems. The first, asymptotic sparsity, is associated with models of signals, and the second, asymptotic incoherence, is associated with models of measurement and representation. We show that compressed sensing behaves differently at different resolutions and that this can be exploited via sampling principles aligned with the asymptotic phenomena. We apply these principles to four different physical imaging systems: Magnetic Resonance Imaging, Fluorescence Microscopy, Nuclear Magnetic Resonance, and Helium Atom Scattering. In these systems we show that it is possible to highlight higher length scales of interest in order to enhance image resolution and reveal more detail.
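A minimal sketch of the kind of sampling pattern these principles suggest for Fourier-based modalities (our illustration; the multilevel schemes used in practice are more refined): sample low frequencies densely, where coherence is highest, and let the sampling probability decay with frequency:

```python
import numpy as np

def variable_density_mask(n, full_radius=16, decay=2.0, rng=None):
    """2-D mask for undersampled Fourier (k-space) measurements: keep
    every sample inside a central disc of radius `full_radius`, then
    sample with probability decaying polynomially in the distance from
    the center -- a simple variable-density scheme."""
    rng = rng or np.random.default_rng(0)
    ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                         indexing="ij")
    radius = np.hypot(kx, ky)
    prob = np.minimum(1.0, (full_radius / np.maximum(radius, 1.0)) ** decay)
    return rng.random((n, n)) < prob

mask = variable_density_mask(256)
print(mask.mean())   # overall fraction of k-space sampled
```

Shifting density toward a chosen frequency band is how the targeted-sampling idea highlights a particular length scale in the reconstructed image.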

*Joint work with Ben Adcock, Robert Calderbank, Martin Graves, Daniel Nietlispach, Mark Bostock, Irene Calvo-Almazan, Anders Hansen.*