Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and image compression to image reconstruction from few and noisy measurements. They are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the newest generation of the iPhone.
The field has a range of theoretical and practical questions that remain unanswered. In particular, learning- and neural network-based approaches often lack the guarantees of traditional physics-based methods. Further, while superior on average, learning-based methods can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction or turning a pixelated picture of Obama into that of a white male.
This virtual workshop aims at bringing together theoreticians and practitioners in order to chart out recent advances and discuss new directions in deep neural network-based approaches for solving inverse problems in the imaging sciences and beyond.
The workshop schedule runs from 7:30 AM to 4 PM PT; please see this converter for conversion to your specific time zone.
The workshop is livestreamed at the NeurIPS workshop webpage. To access this website you need to register for the conference at https://neurips.cc/ and be logged in.
The videos of the talks can also be previewed as of now through the NeurIPS workshop webpage. The livestream will play the videos, and each talk is followed by a live discussion. You can attend the live discussion either through the livestream or by joining the Zoom session. Questions should be asked through Rocket.Chat, and a co-organizer will moderate.
During the designated discussion sessions you can talk to the respective speakers in gather.town. To get the gather.town link, visit the official NeurIPS schedule at the NeurIPS workshop webpage.
The poster session is also on gather.town. The gather.town link is accessible through the NeurIPS workshop webpage.
| Time (PT) | Session |
|---|---|
| 8:00 | Victor Lempitsky (Skoltech): Generative Models for Landscapes and Avatars |
| 8:30 | Thomas Pock (TU Graz): Variational Networks |
| 9:00 | Contributed talk 1: Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, John M. Pauly: Risk Quantification in Deep MRI Reconstruction |
| 9:15 | Contributed talk 2: Sungmin Cha, Taeeon Park, Byeongjoon Kim, Jongduk Baek, Taesup Moon: GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images |
| 9:30 | Break and discussion with Victor Lempitsky, Thomas Pock, and Erich Kobler |
| 10:00 | Rebecca Willett (University of Chicago): Model Adaptation for Inverse Problems in Imaging |
| 10:30 | Stefano Ermon (Stanford): Generative Modeling via Denoising |
| 11:00 | Contributed talk 3: Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price: Compressed Sensing with Approximate Priors via Conditional Resampling |
| 11:15 | Chris Metzler: Approximate Message Passing (AMP) Algorithms for Computational Imaging |
| 11:30 | Discussion with Rebecca Willett and Stefano Ermon |
| 2:00 | Peyman Milanfar (Google): Denoising as a Building Block: Theory and Applications |
| 2:30 | Rachel Ward (UT Austin) |
| 3:00 | Larry Zitnick (Facebook AI Research): fastMRI |
| 3:30 | Discussion with Peyman Milanfar, Rachel Ward, and Larry Zitnick |
| 4:00 | End of official program |
All accepted papers are available on OpenReview:
Approximate Probabilistic Inference with Composed Flows
Jay Whang, Erik Lindgren, Alex Dimakis
Bayesian Inference in Physics-Driven Problems with Adversarial Priors
Dhruv V Patel, Deep Ray, Harisankar Ramaswamy, Assad Oberai
Compressed Sensing with Approximate Priors via Conditional Resampling
Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price
Compressed Sensing with Invertible Generative Models and Dependent Noise
Jay Whang, Qi Lei, Alex Dimakis
Deep Learning for Plasma Tomography in Nuclear Fusion
Diogo R. Ferreira, Pedro J. Carvalho
Deep Learning Initialized Phase Retrieval
Raunak Manekar, Zhong Zhuang, Kshitij Tayal, Vipin Kumar, Ju Sun
Denoising Score-Matching for Uncertainty Quantification in Inverse Problems
Zaccharie Ramzi, Benjamin Remy, Francois Lanusse, Jean-Luc Starck, Philippe Ciuciu
GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images
Sungmin Cha, Taeeon Park, Byeongjoon Kim, Jongduk Baek, Taesup Moon
Generative Tomography Reconstruction
Matteo Ronchetti, Davide Bacciu
Generator Surgery for Compressed Sensing
Jung Yeon Park, Niklas Smedemark-Margulies, Max Daniels, Rose Yu, Jan-Willem van de Meent, Paul Hand
Intermediate Layer Optimization for Inverse Problems using Deep Generative Models
Joseph Dean, Giannis Daras, Alex Dimakis
Learning Spectral Regularizations for Linear Inverse Problems
Hartmut Bauermeister, Martin Burger, Michael Moeller
Learning to Sample MRI via Variational Information Maximization
Cagan Alkan, Morteza Mardani, Shreyas Vasanawala, John M. Pauly
Likelihood-Free Inference with Deep Gaussian Processes
Alexander Aushev, Henri Pesonen, Markus Heinonen, Jukka Corander, Samuel Kaski
Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction
Riccardo Barbano, Zeljko Kereta, Chen Zhang, Andreas Hauptmann, Simon Arridge, Bangti Jin
Risk Quantification in Deep MRI Reconstruction
Vineet Edupuganti, Morteza Mardani, Shreyas Vasanawala, John M. Pauly
Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser
Zahra Kadkhodaie, Eero Peter Simoncelli
Towards Neurally Augmented ALISTA
Freya Behrens, Jonathan Sauder, Peter Jung
Uncertainty-Driven Adaptive Sampling via GANs
Thomas Sanchez, Igor Krawczuk, Zhaodong Sun, Volkan Cevher
Unlocking Inverse Problems Using Deep Learning: Breaking Symmetries in Phase Retrieval
Kshitij Tayal, Chieh-Hsin Lai, Raunak Manekar, Zhong Zhuang, Vipin Kumar, Ju Sun
Call for Papers and Submission Instructions
We invite researchers to submit anonymous papers of up to 4 pages (excluding references and appendices), which will be considered for contributed workshop papers. No specific formatting is required. Authors are encouraged to use the workshop edition of the NeurIPS style file, but they may use any other style as long as it has standard font size (11pt) and margins (1in). The paper can have an appendix of unlimited length.
Submission at OpenReview will be open from Sep. 1 until the submission deadline on October 9, 2020.
We welcome all submissions in the intersection of inverse problems and deep learning, including contributions related to robustness and biases, neural network architectures, regularization, optimization methods, datasets, theoretical foundations (including rigorous recovery guarantees, provable convergence, and bounds on representation errors), untrained methods, generative models, end-to-end methods, and applications in imaging, time series, and beyond. We especially encourage submissions related to the following questions:
- Deep learning-based approaches can make drastic reconstruction errors and may introduce biases. How common are such issues, can such reconstruction difficulties be alleviated, and if so, how?
- Deep learning-based approaches often lack the guarantees of traditional physics-based methods. What theoretical results are necessary and possible?
- Untrained neural networks such as the deep image prior have shown that neural networks alone, without any learning, can give excellent reconstruction performance. How important is training on the target distribution for imaging performance, and is it possible to achieve state-of-the-art performance without training?
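The untrained-network question can be illustrated with a toy sketch of the underlying idea: fit a small untrained network to a noisy signal by gradient descent with a fixed step budget (early stopping), so that only the network's structure and the optimization act as the regularizer. Everything below (the two-layer fully connected network, the 1D test signal, and the step budget) is an illustrative assumption, not the method of any particular paper.

```python
# Toy sketch of the deep-image-prior idea (assumed setup, not the original
# convolutional method): fit an untrained network G(theta, z) to a noisy
# observation by gradient descent and stop after a fixed number of steps.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth smooth signal and its noisy observation y = x + noise.
n = 64
t = np.linspace(0, 2 * np.pi, n)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(n)

# Untrained two-layer network with a fixed random input z; the weights
# (theta) are the only thing optimized.
z = rng.standard_normal(16)
W1 = 0.1 * rng.standard_normal((32, 16))
W2 = 0.1 * rng.standard_normal((n, 32))

def forward(W1, W2, z):
    h = np.maximum(W1 @ z, 0.0)  # ReLU hidden layer
    return W2 @ h, h

out, h = forward(W1, W2, z)
initial_loss = np.mean((out - noisy) ** 2)

lr, steps = 0.1, 500  # the fixed step budget plays the role of early stopping
for _ in range(steps):
    out, h = forward(W1, W2, z)
    grad_out = 2.0 * (out - noisy) / n  # gradient of the mean squared error
    gW2 = np.outer(grad_out, h)         # backprop through the output layer
    gh = W2.T @ grad_out
    gh[h <= 0.0] = 0.0                  # ReLU gradient
    gW1 = np.outer(gh, z)
    W1 -= lr * gW1
    W2 -= lr * gW2

final_loss = np.mean((forward(W1, W2, z)[0] - noisy) ** 2)
```

This is only a schematic of fitting an untrained network to a single observation; the deep image prior itself uses a convolutional architecture on 2D images, where the choice of architecture supplies the implicit prior.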
- Submit the final version and poster by December 1, 2020 at OpenReview
- Submission Deadline: Extended to October 12, 2020 (this is the final deadline).
- Notification: October 23, 2020.
- Workshop: Friday, December 11, 2020
- Reinhard Heckel (TUM)
- Paul Hand (Northeastern)
- Soheil Feizi (UMD)
- Lenka Zdeborova (CEA/SACLAY)
- Richard Baraniuk (Rice University)
Please email firstname.lastname@example.org with any questions.