Video Super Resolution

A collection of state-of-the-art video or single-image super-resolution architectures, reimplemented in tensorflow.

The project is now uploaded to PyPI. Try installing it from PyPI:

pip install VSR

Pretrained weights are being uploaded now.

Several referenced PyTorch implementations are also included now.

Quick Link:

Network list and reference (Updating)

Each hyperlink points to the paper's site; the Code column links to the official implementation if the authors have open-sourced it.

All these models are implemented in ONE framework.

| Model | Published | Code* | VSR (TF)** | VSR (Torch) | Keywords | Pretrained |
|---|---|---|---|---|---|---|
| SRCNN | ECCV14 | -, Keras | Y | Y | Kaiming | |
| RAISR | arXiv | - | - | - | Google, Pixel 3 | |
| ESPCN | CVPR16 | -, Keras | Y | Y | Real time | |
| VDSR | CVPR16 | - | Y | Y | Deep, Residual | |
| DRCN | CVPR16 | - | Y | Y | Recurrent | |
| DRRN | CVPR17 | Caffe, PyTorch | Y | Y | Recurrent | |
| LapSRN | CVPR17 | Matlab | Y | - | Huber loss | |
| EDSR | CVPR17 | - | Y | Y | NTIRE17 Champion | |
| SRGAN | CVPR17 | - | Y | - | 1st proposed GAN | |
| VESPCN | CVPR17 | - | Y | Y | VideoSR | |
| MemNet | ICCV17 | Caffe | Y | - | | |
| SRDenseNet | ICCV17 | -, PyTorch | Y | - | Dense | |
| SPMC | ICCV17 | Tensorflow | T | Y | VideoSR | |
| DnCNN | TIP17 | Matlab | Y | Y | Denoise | |
| DCSCN | arXiv | Tensorflow | Y | - | | |
| IDN | CVPR18 | Caffe | Y | - | Fast | |
| RDN | CVPR18 | Torch | Y | - | Deep, BI-BD-DN | |
| SRMD | CVPR18 | Matlab | - | Y | Denoise/Deblur/SR | |
| DBPN | CVPR18 | PyTorch | Y | Y | NTIRE18 Champion | |
| ZSSR | CVPR18 | Tensorflow | - | - | Zero-shot | |
| FRVSR | CVPR18 | PDF | T | Y | VideoSR | |
| DUF | CVPR18 | Tensorflow | T | - | VideoSR | |
| CARN | ECCV18 | PyTorch | Y | Y | Fast | |
| RCAN | ECCV18 | PyTorch | Y | Y | Deep, BI-BD-DN | |
| MSRN | ECCV18 | PyTorch | Y | Y | | |
| SRFeat | ECCV18 | Tensorflow | Y | Y | GAN | |
| NLRN | NIPS18 | Tensorflow | T | - | Non-local, Recurrent | |
| SRCliqueNet | NIPS18 | - | - | - | Wavelet | |
| FFDNet | TIP18 | Matlab | Y | Y | Conditional denoise | |
| CBDNet | CVPR19 | Matlab | T | - | Blind-denoise | |
| SOFVSR | ACCV18 | PyTorch | - | Y | VideoSR | |
| ESRGAN | ECCVW18 | PyTorch | - | Y | 1st place PIRM 2018 | |
| TecoGAN | arXiv | Tensorflow | - | T | VideoSR GAN | |
| RBPN | CVPR19 | PyTorch | - | Y | VideoSR | |
| DPSR | CVPR19 | PyTorch | - | - | | |
| SRFBN | CVPR19 | PyTorch | - | - | | |
| SRNTT | CVPR19 | Tensorflow | - | - | Adobe | |
| SAN | CVPR19 | empty | - | - | AliDAMO SOTA | |
| AdaFM | CVPR19 | PyTorch | - | - | SenseTime Oral | |

*The first repository listed is the paper authors' official code.

**Y: included; -: not included; T: under-testing.

You can download pre-trained weights through prepare_data, or visit the hyperlink at .

Links to datasets

(please contact me if any of these links infringes your rights or is broken)

| Name | Usage | # | Site | Comments |
|---|---|---|---|---|
| SET5 | Test | 5 | download | jbhuang0604 |
| SET14 | Test | 14 | download | jbhuang0604 |
| SunHay80 | Test | 80 | download | jbhuang0604 |
| Urban100 | Test | 100 | download | jbhuang0604 |
| VID4 | Test | 4 | download | 4 videos |
| BSD100 | Train | 300 | download | jbhuang0604 |
| BSD300 | Train/Val | 300 | download | - |
| BSD500 | Train/Val | 500 | download | - |
| 91-Image | Train | 91 | download | Yang |
| DIV2K | Train/Val | 900 | website | NTIRE17 |
| Waterloo | Train | 4741 | website | - |
| MCL-V | Train | 12 | website | 12 videos |
| GOPRO | Train/Val | 33 | website | 33 videos, deblur |
| CelebA | Train | 202599 | website | Human faces |
| Sintel | Train/Val | 35 | website | Optical flow |
| FlyingChairs | Train | 22872 | website | Optical flow |
| DND | Test | 50 | website | Real noisy photos |
| RENOIR | Train | 120 | website | Real noisy photos |
| NC | Test | 60 | website | Noisy photos |
| SIDD(M) | Train/Val | 200 | website | NTIRE 2019 Real Denoise |
| RSR | Train/Val | 80 | download | NTIRE 2019 Real SR |
| Vimeo-90k | Train/Test | 89800 | website | 90k HQ videos |

Other open datasets: Kaggle, ImageNet, COCO

VSR package

This package offers a training and data-processing framework based on TensorFlow. What I made is a simple, easy-to-use framework without lots of encapsulation and abstraction. Moreover, VSR can handle raw NV12/YUV frames as well as sequences of images as inputs.
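
For readers unfamiliar with semi-planar raw video, the sketch below illustrates how a single NV12 frame is laid out: a full-resolution Y (luma) plane followed by an interleaved, half-resolution UV plane. This is only an illustration of the format, not VSR's internal loader; the function name read_nv12_frame, the file name, and the 1920x1080 resolution are made up for the example.

    # Illustration of the NV12 memory layout (not part of the VSR API).
    import numpy as np

    def read_nv12_frame(f, width, height):
        """Read one NV12 frame (YUV 4:2:0, semi-planar) from an open binary file."""
        y_size = width * height            # full-resolution luma plane
        uv_size = width * height // 2      # interleaved U/V plane, half vertical resolution
        y = np.frombuffer(f.read(y_size), dtype=np.uint8).reshape(height, width)
        uv = np.frombuffer(f.read(uv_size), dtype=np.uint8).reshape(height // 2, width)
        return y, uv

    # Hypothetical usage: read the first frame of a 1080p raw clip.
    with open("video_1920x1080.nv12", "rb") as f:
        y, uv = read_nv12_frame(f, 1920, 1080)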

Install

  1. Prepare a proper TensorFlow and, optionally, PyTorch. For example, for a GPU with CUDA 10.0 (conda is recommended):

    conda install tensorflow-gpu==1.15.0
    # optional
    # conda install pytorch
  2. Install VSR package

    # For anyone reading this doc online
    # git clone https://github.com/loseall/VideoSuperResolution && cd VideoSuperResolution
    pip install -e .
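
To verify the install, a minimal sanity check is sketched below. It assumes the package installs an importable top-level module named VSR, which is not stated above and may differ on your setup.

    # Optional sanity check after `pip install -e .`
    import importlib.util

    if importlib.util.find_spec("VSR") is None:
        raise SystemExit("VSR is not importable; check the install step above.")
    print("VSR package found.")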

Getting Started

  1. Download pre-trained weights and (optional) training datasets. For instance, let's begin with VESPCN and the vid4 test data:

    python prepare_data.py --filter vespcn vid4
  2. Customize the backend: create ~/.vsr/config.yml (cd ~/.vsr/ && touch config.yml) with the following content:

    backend: tensorflow  # (tensorflow, pytorch)
    verbose: info        # (debug, info, warning, error)
  3. Evaluate

    cd Train
    python eval.py srcnn -t vid4 --pretrain=/path/srcnn.pth
  4. Train

    python prepare_data.py --filter mcl-v
    cd Train
    python train.py vespcn --dataset mcl-v --memory_limit 1GB --epochs 100

That's all you need. For more details, use --help on each script.
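
If you prefer to script the steps above, the documented commands can be driven from Python with subprocess. The sketch below only shells out to the prepare_data.py and eval.py invocations shown earlier; the weight path /path/srcnn.pth is a placeholder, exactly as above.

    # Convenience sketch: run the documented CLI steps from Python.
    import subprocess

    def run(cmd, cwd=None):
        """Run a documented CLI command and fail loudly if it errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)

    # Fetch pre-trained weights and the vid4 test data (same command as step 1).
    run(["python", "prepare_data.py", "--filter", "vespcn", "vid4"])

    # Evaluate SRCNN on vid4 (same command as step 3, run from the Train/ directory).
    run(["python", "eval.py", "srcnn", "-t", "vid4", "--pretrain=/path/srcnn.pth"],
        cwd="Train")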


More documentation can be found in Docs.