Workshop Summary

There is a long history of algorithmic development for solving inverse problems arising in sensing and imaging systems and beyond. Examples include medical and computational imaging, compressive sensing, and community detection in networks. Until recently, most algorithms for solving inverse problems in the imaging and network sciences were based on static signal models derived from physics or intuition, such as wavelets or sparse representations.

Today, the best-performing approaches for the aforementioned image reconstruction and sensing problems are based on deep learning. These approaches learn various elements of the method, including i) signal representations, ii) stepsizes and parameters of iterative algorithms, iii) regularizers, and iv) entire inverse functions. For example, it has recently been shown that transforming an iterative, physics-based algorithm into a deep network whose parameters are learned from training data offers faster convergence and/or better-quality solutions for a variety of inverse problems. Moreover, even with very little or no learning, deep neural networks enable superior performance for classical linear inverse problems such as denoising and compressive sensing. Motivated by these success stories, researchers are redesigning traditional imaging and sensing systems.
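To make the unrolling idea concrete, here is a minimal NumPy sketch of ISTA unrolled into a fixed number of "layers" for a sparse linear inverse problem y = Ax. The per-layer step sizes and shrinkage thresholds are the parameters that would be learned from training data; in this illustration they are simply hand-fixed, and all names (`unrolled_ista`, `soft_threshold`) are hypothetical, not from any particular paper.

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of the l1 norm: shrink toward zero by theta.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(y, A, step_sizes, thresholds):
    """One 'layer' per ISTA iteration. In a learned variant, step_sizes
    and thresholds (and possibly the matrices) are trained end to end."""
    x = np.zeros(A.shape[1])
    for eta, theta in zip(step_sizes, thresholds):
        # Gradient step on the data-fidelity term 0.5 * ||Ax - y||^2 ...
        x = x - eta * A.T @ (A @ x - y)
        # ... followed by the shrinkage (regularization) step.
        x = soft_threshold(x, theta)
    return x

# Toy example: recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true

# Hand-fixed parameters stand in for learned ones here.
x_hat = unrolled_ista(y, A, step_sizes=[0.1] * 30, thresholds=[0.05] * 30)
```

Learned variants replace the shared, fixed `step_sizes` and `thresholds` with trainable per-layer parameters optimized over a dataset of (y, x) pairs, which is what yields the faster convergence noted above.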

However, the field remains largely wide open, with a range of theoretical and practical questions unanswered. In particular, deep-neural-network-based approaches often lack the guarantees of traditional physics-based methods and, while typically superior, can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction.

This workshop aims to bring together theoreticians and practitioners in order to chart recent advances and discuss new directions in deep-neural-network-based approaches for solving inverse problems in the imaging and network sciences.

Schedule

Time Event
  Morning session
8:30 - 8:40 Opening Remarks
8:40 - 9:10 Lenka Zdeborova: The spiked matrix model with generative priors
9:10 - 9:40 Shuang Qiu, Xiaohan Wei, Zhuoran Yang: Robust One-Bit Recovery via ReLU Generative Networks: Improved Statistical Rate and Global Landscape Analysis
9:40 - 10:30 Coffee Break
  Pre-lunch session
10:30 - 11:00 Laura Waller: Computational microscopy in scattering media
11:00 - 11:30 Mahdi Soltanolkotabi: Denoising via Early Stopping
11:30 - 12:00 Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus: Neural Reparameterization Improves Structural Optimization
12:00 - 2:00 Lunch Break
  Post-lunch session
2:00 - 2:30 Piotr Indyk: Learning-Based Low-Rank Approximations
2:30 - 3:00 Josh Batson: Blind Denoising, Self-Supervision, and Implicit Inverse Problems
3:00 - 3:30 Venkat Chandrasekaran: Learning Regularizers from Data
3:30 - 4:15 Break and Posters
  Afternoon session
4:15 - 6:00 Poster Session

Accepted Papers

Call for Papers and Submission Instructions

Submission is closed!

We invite researchers to submit anonymous extended abstracts of up to 4 pages (excluding references) which will be considered for contributed talks and posters. No specific formatting is required. Authors may use the NeurIPS style file, or any other style as long as it has standard font size (11pt) and margins (1in).

Submissions at OpenReview were open until the submission deadline on September 13 (extended from September 9).

We invite works on inverse problems in the imaging sciences and new developments in non-Euclidean domains such as graphs, including contributions on the development of new architectures for natural signal priors (for example GANs, non-adversarially trained generators, unlearned neural networks, and combinations thereof), theoretical foundations (including rigorous recovery guarantees, provable convergence, and bounds on representation errors), and applications in imaging and beyond. We especially encourage submissions in the following areas:

Important Dates

Organizers

Please email neurips2019inverse@gmail.com with any questions.