
compsensing_dip


Compressed Sensing with Deep Image Prior

This repository provides code to reproduce results from the paper: Compressed Sensing with Deep Image Prior and Learned Regularization.

Here are a few example results (example reconstruction images; see the repository).

Preliminaries


  1. Clone the repository

    $ git clone https://github.com/davevanveen/compsensing_dip.git
    $ cd compsensing_dip

    Please run all commands from the root directory of the repository, i.e. from compsensing_dip/.

  2. Install requirements

    $ pip install -r requirements.txt

Plotting reconstructions with existing data


  1. Open the Jupyter notebook of plots

    $ jupyter notebook plot.ipynb
  2. Set the variables in the second cell according to your interest, e.g. DATASET, NUM_MEASUREMENTS_LIST, ALG_LIST. The supported data is described in the notebook comments; a sketch of such a cell is shown after this list.

  3. Execute cells to view output.
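
For orientation, a minimal sketch of what that configuration cell might contain is below. The variable names are taken from the notebook as described above; the specific values (and exact dataset/algorithm identifiers) are assumptions for illustration only.

    # Hypothetical configuration cell for plot.ipynb (values are illustrative).
    DATASET = 'xray'                             # which dataset's reconstructions to plot
    NUM_MEASUREMENTS_LIST = [2000, 4000, 8000]   # numbers of measurements to compare
    ALG_LIST = ['csdip', 'dct']                  # reconstruction algorithms to compare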

Generating new reconstructions on the MNIST, xray, or retinopathy datasets


  1. Execute the baseline command

    $ python comp_sensing.py

    which will run experiments with the default parameters specified in configs.json (a sketch of how these defaults interact with command-line overrides is given after this list).

  2. To generate reconstruction data with user-specified parameters, add command-line arguments from those available in parser.py. Example:

    $ python comp_sensing.py --DATASET xray --NUM_MEASUREMENTS 2000 4000 8000 --ALG csdip dct
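
For reference, here is a minimal sketch of how such defaults and command-line overrides typically fit together. This is not the repository's actual parser.py; the flag names come from the example above, and the structure of configs.json is assumed.

    # Illustrative sketch only: defaults loaded from configs.json, overridden
    # by command-line flags as in the example above. Key names are assumed.
    import argparse
    import json

    with open('configs.json') as f:
        defaults = json.load(f)

    parser = argparse.ArgumentParser()
    parser.add_argument('--DATASET', default=defaults.get('DATASET'))
    parser.add_argument('--NUM_MEASUREMENTS', type=int, nargs='+',
                        default=defaults.get('NUM_MEASUREMENTS'))
    parser.add_argument('--ALG', nargs='+', default=defaults.get('ALG'))
    args = parser.parse_args()  # flags take precedence over the JSON defaults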

Running CS-DIP on a new dataset


  1. Create a new directory /data/dataset_name/sub/ which contains your images

  2. In utils.py, create a new DCGAN architecture. This will be similar to the pre-defined architectures, e.g. DCGAN_XRAY, but must have an output dimension equal to the size of your new images. The output dimension can be changed by adjusting kernel_size, stride, and padding as discussed in the torch.nn documentation (a hedged sketch is given after the note below).

  3. Update configs.json to set parameters for your dataset. Update utils.init_dcgan to import/initiate the corresponding DCGAN.

  4. Generate and plot reconstructions according to instructions above.

Note: We recommend experimenting with the DCGAN architecture and dataset parameters to obtain the best possible reconstructions.
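
As a starting point for step 2, here is a minimal PyTorch sketch of a DCGAN-style generator. The class name DCGAN_CUSTOM and the exact layer sizes are hypothetical; the repository's architectures such as DCGAN_XRAY may use different depths, channel counts, and normalization. This sketch produces 32x32 single-channel images; adjust kernel_size, stride, padding, or the number of layers until the output matches your image size.

    # Hypothetical DCGAN-style generator for utils.py. The repository's
    # pre-defined architectures (e.g. DCGAN_XRAY) may differ in depth,
    # channel counts, and normalization; treat this as a template only.
    import torch
    import torch.nn as nn

    class DCGAN_CUSTOM(nn.Module):
        def __init__(self, nz=128, ngf=64, nc=1):
            # nz: latent dimension, ngf: base channel count, nc: image channels
            super().__init__()
            self.net = nn.Sequential(
                # 1x1 latent code -> 4x4 feature map
                nn.ConvTranspose2d(nz, ngf * 4, kernel_size=4, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(ngf * 4),
                nn.ReLU(True),
                # 4x4 -> 8x8
                nn.ConvTranspose2d(ngf * 4, ngf * 2, kernel_size=4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(ngf * 2),
                nn.ReLU(True),
                # 8x8 -> 16x16
                nn.ConvTranspose2d(ngf * 2, ngf, kernel_size=4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(ngf),
                nn.ReLU(True),
                # 16x16 -> 32x32; adjust kernel_size/stride/padding (or add
                # layers) until the output matches your image size
                nn.ConvTranspose2d(ngf, nc, kernel_size=4, stride=2, padding=1, bias=False),
                nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z)

For example, DCGAN_CUSTOM()(torch.randn(1, 128, 1, 1)) returns a tensor of shape (1, 1, 32, 32). After defining the class, remember to hook it up in utils.init_dcgan and configs.json as described in step 3.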

Generating learned regularization parameters for a new dataset


The purpose of this section is to generate a new (mu, Sigma) based on layer-wise weights of the DCGAN. This functionality will be added soon.
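
The repository does not yet ship this code, but purely as an illustration of the idea described above, a (mu, Sigma) could be estimated from layer-wise weight statistics collected over several trained DCGAN generators. The helper names and the choice of statistic below are assumptions, not the authors' method.

    # Illustrative sketch only: fit a Gaussian (mu, Sigma) to layer-wise weight
    # statistics of trained DCGAN generators. The statistic used here (mean
    # squared weight per layer) is an assumption for illustration.
    import numpy as np

    def layerwise_stats(model):
        # one scalar summary per weight tensor (skip biases / 1-D parameters)
        return np.array([p.detach().pow(2).mean().item()
                         for p in model.parameters() if p.dim() > 1])

    def fit_gaussian(models):
        # stack statistics from many trained generators and estimate (mu, Sigma)
        stats = np.stack([layerwise_stats(m) for m in models])
        mu = stats.mean(axis=0)
        sigma = np.cov(stats, rowvar=False)
        return mu, sigma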

