Image Super-resolution via Patch-wise Sparse Recovery

Jianchao Yang et al.

Introduction

Research on image statistics suggests that image patches can be well represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that, under mild conditions, the sparse representation can be correctly recovered from the downsampled signal. By requiring that the sparse representation of a low-resolution image patch also reconstruct its high-resolution counterpart well, we jointly train two dictionaries, one for low-resolution and one for high-resolution image patches. The learned dictionary pair is a compact representation adapted to the natural images of interest, and it leads to state-of-the-art single-image super-resolution performance, both quantitatively and qualitatively.
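
For concreteness, the sketch below illustrates the recovery step described above: each low-resolution patch is sparsely coded over the low-resolution dictionary, and the same coefficients are then applied to the coupled high-resolution dictionary to synthesize the high-resolution patch. This is a minimal Python/NumPy sketch under assumed shapes, not the released MATLAB implementation; the dictionaries D_l and D_h, the Lasso penalty lam, and the toy data at the bottom are illustrative placeholders, and the actual feature extraction and post-processing steps are part of the downloadable code.

import numpy as np
from sklearn.linear_model import Lasso


def super_resolve_patches(lr_feats, D_l, D_h, lam=0.1):
    """Recover high-resolution patches from low-resolution patch features.

    lr_feats : (N, d_l) array, one feature vector per low-resolution patch
    D_l      : (d_l, K) low-resolution dictionary
    D_h      : (d_h, K) high-resolution dictionary coupled with D_l
    lam      : sparsity penalty (note: scikit-learn's Lasso objective scales
               the data term by 1/(2*n_samples), so lam is not directly the
               lambda of the paper's formulation)
    """
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=2000)
    hr_patches = np.empty((lr_feats.shape[0], D_h.shape[0]))
    for i, y in enumerate(lr_feats):
        solver.fit(D_l, y)        # sparse code of the low-resolution patch over D_l
        a = solver.coef_          # shared sparse representation
        hr_patches[i] = D_h @ a   # the same code reconstructs the high-resolution patch
    return hr_patches


if __name__ == "__main__":
    # Toy example with random data; real dictionaries come from coupled training.
    rng = np.random.default_rng(0)
    K = 512                                  # assumed dictionary size
    D_l = rng.standard_normal((36, K))       # e.g. 3x3 patches with 4 feature channels
    D_h = rng.standard_normal((81, K))       # e.g. 9x9 high-resolution patches
    lr_feats = rng.standard_normal((10, 36))
    print(super_resolve_patches(lr_feats, D_l, D_h).shape)  # (10, 81)

In the full pipeline, the high-resolution patches recovered this way overlap, and they are typically blended (for example, by averaging the overlapping pixels) to form the final high-resolution image.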

Software

Download

MATLAB code

Feedback

Email me if you have any questions.

References

[1] Jianchao Yang, Zhaowen Wang, Zhe Lin, and Thomas Huang. Coupled dictionary training for image super-resolution. To appear in IEEE Transactions on Image Processing (TIP), 2011.

[2] Jianchao Yang, John Wright, Thomas Huang, and Yi Ma. Image super-resolution via sparse representation. IEEE Transactions on Image Processing (TIP), vol. 19, no. 11, 2010.

[3] Jianchao Yang, John Wright, Thomas Huang, and Yi Ma. Image super-resolution as sparse representation of raw image patches. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.


