2021

Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit

T. Zhou, X. Lin, J. Wu, Y. Chen, H. Xie, Y. Li, J. Fan, H. Wu, L. Fang and Q. Dai,

Nature Photonics, 2021: 1-7. (cover article)

Abstract:

There is an ever-growing demand for artificial intelligence. Optical processors, which compute with photons instead of electrons, can fundamentally accelerate the development of artificial intelligence by offering substantially improved computing performance. There has been long-term interest in optically constructing the most widely used artificial-intelligence architecture, that is, artificial neural networks, to achieve brain-inspired information processing at the speed of light. However, owing to restrictions in design flexibility and the accumulation of system errors, existing processor architectures are not reconfigurable and have limited model complexity and experimental performance. Here, we propose the reconfigurable diffractive processing unit, an optoelectronic fused computing architecture based on the diffraction of light, which can support different neural networks and achieve a high model complexity with millions of neurons. Along with the developed adaptive training approach to circumvent system errors, we achieved excellent experimental accuracies for high-speed image and video recognition over benchmark datasets and a computing performance superior to that of cutting-edge electronic computing platforms.
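For readers unfamiliar with the underlying primitive, the optoelectronic computation described above composes programmable phase modulation with free-space diffraction. The sketch below is a minimal numerical illustration of such a diffractive layer using the standard angular-spectrum propagation method; the wavelength, pixel pitch, distances and random phases are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Free-space diffraction of a complex field (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Unit-modulus transfer function; evanescent components are suppressed.
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * distance * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase, wavelength, pixel_pitch, distance):
    """One 'layer' of a diffractive network: programmable phase modulation
    followed by diffraction to the next modulation plane."""
    return angular_spectrum_propagate(field * np.exp(1j * phase),
                                      wavelength, pixel_pitch, distance)

# Toy forward pass: a uniform 64x64 wavefront through two phase layers.
rng = np.random.default_rng(0)
field = np.ones((64, 64), dtype=complex)
for _ in range(2):
    phase = rng.uniform(0, 2 * np.pi, (64, 64))  # trainable weights in practice
    field = diffractive_layer(field, phase, 532e-9, 8e-6, 0.01)
intensity = np.abs(field) ** 2                   # what the photodetector measures
```

Because the phase masks and the propagation step are both unitary here, total intensity is conserved across layers; training adjusts the phase arrays so the output intensity concentrates on detector regions encoding the desired class.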





Latex Bibtex Citation:

@article{zhou2021large,

  title={Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit},

  author={Zhou, Tiankuang and Lin, Xing and Wu, Jiamin and Chen, Yitong and Xie, Hao and Li, Yipeng and Fan, Jingtao and Wu, Huaqiang and Fang, Lu and Dai, Qionghai},

  journal={Nature Photonics},

  pages={1--7},

  year={2021},

  publisher={Nature Publishing Group}

}


A modular hierarchical array camera

X. Yuan, M. Ji, J. Wu, D. Brady, Q. Dai and L. Fang,

Light: Science & Applications, volume 10, Article number: 37 (2021). (cover article)

Abstract:

Array cameras remove the optical limitations of a single camera and pave the way for high-performance imaging via the combination of micro-cameras and computation to fuse multiple aperture images. However, existing solutions use dense arrays of cameras that require laborious calibration and lack flexibility and practicality. Inspired by the cognition function principle of the human brain, we develop an unstructured array camera system that adopts a hierarchical modular design, with multiscale hybrid cameras composing different modules. Intelligent computations are designed to operate collaboratively along both intra- and inter-module pathways. This system can adaptively allocate imagery resources to dramatically reduce the hardware cost, and it possesses unprecedented flexibility, robustness, and versatility. Large scenes of real-world data were acquired to perform human-centric studies for the assessment of human behaviours at the individual level and crowd behaviours at the population level, which require high-resolution, long-term monitoring of dynamic wide-area scenes.


Latex Bibtex Citation:

@article{yuan2021modular,

  title={A modular hierarchical array camera},

  author={Yuan, Xiaoyun and Ji, Mengqi and Wu, Jiamin and Brady, David J and Dai, Qionghai and Fang, Lu},

  journal={Light: Science \& Applications},

  volume={10},

  number={1},

  pages={1--9},

  year={2021},

  publisher={Nature Publishing Group}

}

Boosting Single Image Super-Resolution Learnt From Implicit Multi-Image Prior

D. Jin, M. Ji, L. Xu, G. Wu, L. Wang, and L. Fang,

IEEE Trans. on Image Processing (TIP), Feb. 2021

Abstract:

Learning-based single image super-resolution (SISR) aims to learn a versatile mapping from a low-resolution (LR) image to its high-resolution (HR) version. The critical challenge is to bias the network training towards continuous and sharp edges. For the first time, we propose an implicit boundary prior learnt from multi-view observations to significantly mitigate this challenge in SISR. Specifically, the multi-image prior, which encodes both the disparity information and the boundary structure of the scene, supervises a SISR network for edge preservation. For simplicity, in the training procedure of our framework, a light field (LF) serves as an effective multi-image prior, and a hybrid loss function jointly considers the content, structure, variance and disparity information from 4D LF data. Consequently, at inference, such a general training scheme boosts the performance of various SISR networks, especially for the regions along edges. Extensive experiments on representative backbone SISR architectures consistently show the effectiveness of the proposed method, leading to around 0.6 dB gain without modifying the network architecture.



Latex Bibtex Citation:

@article{RN455,

   author = {Jin, Dingjian and Ji, Mengqi and Xu, Lan and Wu, Gaochang and Wang, Liejun and Fang, Lu},

   title = {Boosting Single Image Super-Resolution Learnt From Implicit Multi-Image Prior},

   journal = {IEEE Transactions on Image Processing (TIP)},

   volume = {30},

   pages = {3240-3251},

   ISSN = {1941-0042 (Electronic), 1057-7149 (Linking)},

   DOI = {10.1109/TIP.2021.3059507},

   url = {https://www.ncbi.nlm.nih.gov/pubmed/33621177},

   year = {2021},


}


A^2-FPN: Attention Aggregation based Feature Pyramid Network for Instance Segmentation

M. Hu, Y. Li, L. Fang and S. Wang, 

Proc. of Computer Vision and Pattern Recognition (CVPR), 2021

Abstract:

Learning pyramidal feature representations is crucial for recognizing object instances at different scales. Feature Pyramid Network (FPN) is the classic architecture to build a feature pyramid with high-level semantics throughout. However, intrinsic defects in feature extraction and fusion inhibit FPN from further aggregating more discriminative features. In this work, we propose Attention Aggregation based Feature Pyramid Network (A^2-FPN) to improve multi-scale feature learning through attention-guided feature aggregation. In feature extraction, it extracts discriminative features by collecting and distributing multi-level global context features, and mitigates the semantic information loss due to drastically reduced channels. In feature fusion, it aggregates complementary information from adjacent features to generate location-wise reassembly kernels for content-aware sampling, and employs channel-wise reweighting to enhance the semantic consistency before element-wise addition. A^2-FPN shows consistent gains on different instance segmentation frameworks. By replacing FPN with A^2-FPN in Mask R-CNN, our model boosts the performance by 2.1% and 1.6% mask AP when using ResNet-50 and ResNet-101 as the backbone, respectively. Moreover, A^2-FPN achieves an improvement of 2.0% and 1.4% mask AP when integrated into strong baselines such as Cascade Mask R-CNN and Hybrid Task Cascade.


Latex Bibtex Citation:

@inproceedings{hu2021a2fpn,

  title={A$^2$-FPN: Attention Aggregation based Feature Pyramid Network for Instance Segmentation},

  author={Hu, Miao and Li, Yali and Fang, Lu and Wang, Shengjin},

  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},

  year={2021}

}

Data-Uncertainty Guided Multi-Phase Learning for Semi-Supervised Object Detection

Z. Wang, Y. Li, Y. Guo, L. Fang and S. Wang,

Proc. of Computer Vision and Pattern Recognition (CVPR), 2021

Abstract:

In this paper, we delve into semi-supervised object detection, where unlabeled images are leveraged to break through the upper bound of fully-supervised object detection. Previous semi-supervised methods based on pseudo labels are severely degenerated by noise and prone to overfit to noisy labels, and are thus deficient in learning different unlabeled knowledge well. To address this issue, we propose a data-uncertainty guided multi-phase learning method for semi-supervised object detection. We comprehensively consider divergent types of unlabeled images according to their difficulty levels, utilize them in different phases, and ensemble models from different phases together to generate ultimate results. Image uncertainty guided easy data selection and region uncertainty guided RoI re-weighting are involved in multi-phase learning and enable the detector to concentrate on more certain knowledge. Through extensive experiments on PASCAL VOC and MS COCO, we demonstrate that our method behaves extraordinarily compared to baseline approaches and outperforms them by a large margin, more than 3% on VOC and 2% on COCO.


Latex Bibtex Citation:

@article{wang2021data, 

title={Data-Uncertainty Guided Multi-Phase Learning for Semi-Supervised Object Detection}, 

author={Wang, Zhenyu and Li, Yali and Guo, Ye and Fang, Lu and Wang, Shengjin}, 

journal={arXiv preprint arXiv:2103.16368}, 

year={2021}

}

2020

Augmenting Vascular Disease Diagnosis by Vasculature-aware Unsupervised Learning

Y. Wang, M. Ji, S. Jiang, X. Wang, J. Wu, F. Duan, J. Fan, L. Huang, S. Ma, L. Fang and Q. Dai, 

Nature Machine Intelligence, 2020.

Abstract:

Vascular disease is one of the leading causes of death and threatens human health worldwide. Imaging examination of vascular pathology with reduced invasiveness is challenging due to the intrinsic vasculature complexity and non-uniform scattering from bio-tissues. Here, we report VasNet, a vasculature-aware unsupervised learning algorithm that augments pathovascular recognition from small sets of unlabelled fluorescence and digital subtraction angiography images. VasNet adopts a multi-scale fusion strategy with a domain adversarial neural network loss function that induces biased pattern reconstruction by strengthening features relevant to the retinal vasculature reference while weakening irrelevant features. VasNet delivers the outputs ‘Structure + X’ (where X refers to multi-dimensional features such as blood flows, the distinguishment of blood dilation and its suspicious counterparts, and the dependence of new pattern emergence on disease progression). Therefore, explainable imaging output from VasNet and other algorithm extensions holds the promise to augment medical diagnosis, as it improves performance while reducing the cost of human expertise, equipment and time consumption.


Latex Bibtex Citation:

@article{wang2020augmenting,

  title={Augmenting vascular disease diagnosis by vasculature-aware unsupervised learning},

  author={Wang, Yong and Ji, Mengqi and Jiang, Shengwei and Wang, Xukang and Wu, Jiamin and Duan, Feng and Fan, Jingtao and Huang, Laiqiang and Ma, Shaohua and Fang, Lu and others},

  journal={Nature Machine Intelligence},

  volume={2},

  number={6},

  pages={337--346},

  year={2020},

  publisher={Nature Publishing Group}

}

BuildingFusion: Semantic-aware Structural Building-scale 3D Reconstruction

T. Zheng, G. Zhang, L. Han, L. Xu and L. Fang

IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2020.

Abstract:

Scalable geometry reconstruction and understanding is an important yet unsolved task. Current methods often suffer from false loop closures when there are similar-looking rooms in the scene, and often lack online scene understanding. We propose BuildingFusion, a semantic-aware structural building-scale reconstruction system, which not only allows building-scale dense reconstruction collaboratively, but also provides semantic and structural information on-the-fly. Technically, the robustness to similar places is enabled by a novel semantic-aware room-level loop closure detection (LCD) method. The insight is that even though local views may look similar in different rooms, the objects inside and their locations are usually different, implying that the semantic information forms a unique and compact representation for place recognition. To achieve that, a 3D convolutional network is used to learn instance-level embeddings for similarity measurement and candidate selection, followed by a graph matching module for geometry verification. On the system side, we adopt a centralized architecture to enable collaborative scanning. Each agent reconstructs a part of the scene, and the combination is activated when overlaps are found using room-level LCD, which is performed on the server. Extensive comparisons demonstrate the superiority of the semantic-aware room-level LCD over traditional image-based LCD. A live demo on real-world building-scale scenes shows the feasibility of our method with robust, collaborative, and real-time performance.



Latex Bibtex Citation:

@ARTICLE{9286413,

  author={T. {Zheng} and G. {Zhang} and L. {Han} and L. {Xu} and L. {Fang}},

  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 

  title={BuildingFusion: Semantic-aware Structural Building-scale 3D Reconstruction}, 

  year={2020},

  volume={},

  number={},

  pages={1-1},

  doi={10.1109/TPAMI.2020.3042881}}


Real-time Globally Consistent Dense 3D Reconstruction with Online Texturing

L. Han, S. Gu, D. Zhong, S. Quan and L. Fang

IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2020.

Abstract:

High-quality reconstruction of 3D geometry and texture plays a vital role in providing an immersive perception of the real world. Additionally, online computation enables the practical use of 3D reconstruction for interaction. We present an RGBD-based globally-consistent dense 3D reconstruction approach, providing high-resolution (< 1 cm) geometric reconstruction and high-quality (the spatial resolution of the RGB image) texture mapping, both of which run online using only the CPU of a portable device. For geometric reconstruction, we introduce a sparse voxel sampling scheme that exploits the continuous nature of surfaces in 3D space, reducing more than 95% of the computational burden compared with conventional volumetric fusion approaches. For online texture mapping, we propose a simplified incremental MRF solver, which utilizes previous optimization results for faster convergence, and an efficient reference-based color adjustment scheme for texture optimization. Quantitative and qualitative experiments demonstrate that our online scheme achieves a more realistic visualization of the environment with more abundant details, while requiring more compact memory consumption and much lower computational complexity than existing solutions.


Latex Bibtex Citation:

@ARTICLE{9184935,

author={L. {Han} and S. {Gu} and D. {Zhong} and S. {Quan} and L. {Fang}},

journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},

title={Real-time Globally Consistent Dense 3D Reconstruction with Online Texturing},

year={2020},

volume={},

number={},

pages={1-1},

doi={10.1109/TPAMI.2020.3021023}}


SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis

M. Ji, J. Zhang, Q. Dai and L. Fang

IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2020.

Abstract:

Multi-view stereopsis (MVS) tries to recover the 3D model from 2D images. As the observations become sparser, the significant 3D information loss makes the MVS problem more challenging. Instead of only focusing on densely sampled conditions, we investigate sparse-MVS with large baseline angles, since sparser sampling is always more favorable in practice. By investigating various observation sparsities, we show that the classical depth-fusion pipeline becomes powerless for the case with a larger baseline angle that worsens the photo-consistency check. As another line of solution, we present SurfaceNet+, a volumetric method to handle the 'incompleteness' and 'inaccuracy' problems induced by a very sparse MVS setup. Specifically, the former problem is handled by a novel volume-wise view selection approach. It owns superiority in selecting valid views while discarding invalid occluded views by considering the geometric prior. Furthermore, the latter problem is handled via a multi-scale strategy that consequently refines the recovered geometry around regions with repeating patterns. The experiments demonstrate the tremendous performance gap between SurfaceNet+ and the state-of-the-art methods in terms of precision and recall. Under the extreme sparse-MVS settings in two datasets, where existing methods can only return very few points, SurfaceNet+ still works as well as in the dense MVS setting.


Latex Bibtex Citation:

@ARTICLE{ji2020surfacenet_plus,

title={SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis},

author={M. {Ji} and J. {Zhang} and Q. {Dai} and L. {Fang}},

journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},

year={2020},

volume={},

number={},

pages={1-1},

}


CrossNet++: Cross-scale Large-parallax Warping for Reference-based Super-resolution

Y. Tang, H. Zheng, Y. Zhu, X. Yuan, X. Lin, D. Brady and L. Fang

IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2020.

Abstract:

The ability of camera arrays to efficiently capture a higher space-bandwidth product than single cameras has led to various multiscale and hybrid systems. These systems play vital roles in computational photography, including light field imaging, 360° VR cameras, gigapixel videography, etc. One of the critical tasks in multiscale hybrid imaging is matching and fusing cross-resolution images from different cameras under perspective parallax. In this paper, we investigate the reference-based super-resolution (RefSR) problem associated with dual-camera or multi-camera systems, with a significant resolution gap (8x) and large parallax (10% pixel displacement). We present CrossNet++, an end-to-end network containing novel two-stage cross-scale warping modules. Stage I learns to narrow down the parallax distinctively with the strong guidance of landmarks and intensity distribution consensus. Stage II then performs more fine-grained alignment and aggregation in the feature domain to synthesize the final super-resolved image. To further address the large parallax, new hybrid loss functions comprising warping loss, landmark loss and super-resolution loss are proposed to regularize training and enable better convergence. CrossNet++ significantly outperforms the state-of-the-art on light field datasets as well as real dual-camera data. We further demonstrate the generalization of our framework by transferring it to video super-resolution and video denoising.


Latex Bibtex Citation:

@ARTICLE{9099445,

  author={Y. {Tang} and H. {Zheng} and Y. {Zhu} and X. {Yuan} and X. {Lin} and D. {Brady} and L. {Fang}},

  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 

  title={CrossNet++: Cross-scale Large-parallax Warping for Reference-based Super-resolution}, 

  year={2020},

  volume={},

  number={},

  pages={1-1},

  doi={10.1109/TPAMI.2020.2997007}}


Live Semantic 3D Perception for Immersive Augmented Reality

L. Han, T. Zheng, Y. Zhu, L. Xu and L. Fang

IEEE VR & IEEE Trans. on Visualization and Computer Graphics (TVCG), 2020. (Oral)

Abstract:

Semantic understanding of 3D environments is critical for both unmanned systems and the human-involved virtual/augmented reality (VR/AR) immersive experience. Spatially-sparse convolution, taking advantage of the intrinsic sparsity of 3D point cloud data, makes high-resolution 3D convolutional neural networks tractable with state-of-the-art results on 3D semantic segmentation problems. However, the exhaustive computation limits the practical usage of semantic 3D perception for VR/AR applications on portable devices. In this paper, we identify that the efficiency bottleneck lies in the unorganized memory access of the sparse convolution steps, i.e., the points are stored independently based on a predefined dictionary, which is inefficient due to the limited memory bandwidth of parallel computing devices (GPUs). With the insight that points are continuous as 2D surfaces in 3D space, a chunk-based sparse convolution scheme is proposed to reuse the neighboring points within each spatially organized chunk. An efficient multi-layer adaptive fusion module is further proposed to employ the spatial consistency cue of 3D data to further reduce the computational burden. Quantitative experiments on public datasets demonstrate that our approach works 11× faster than previous approaches with competitive accuracy. By implementing both semantic and geometric 3D reconstruction simultaneously on a portable tablet device, we demonstrate a foundation platform for immersive AR applications.
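As a toy illustration of the memory-organization idea (a hypothetical helper, not the paper's implementation), sparse voxel coordinates can be reordered so that points falling in the same spatial chunk sit contiguously in memory, which is what lets a convolution reuse neighbours within a chunk:

```python
import numpy as np

def sort_into_chunks(coords, chunk_size=8):
    """Reorder integer voxel coordinates so points in the same spatial
    chunk are stored contiguously (friendlier to limited GPU memory
    bandwidth than a point-wise dictionary layout)."""
    keys = coords // chunk_size                      # chunk index of each point
    order = np.lexsort((keys[:, 2], keys[:, 1], keys[:, 0]))
    return coords[order], keys[order]

# Toy sparse point cloud on a 64^3 grid.
rng = np.random.default_rng(0)
coords = rng.integers(0, 64, size=(1000, 3))
sorted_coords, sorted_keys = sort_into_chunks(coords)
```

After the sort, a kernel can process one chunk at a time and fetch each chunk's points with a single contiguous read instead of scattered dictionary lookups.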


Latex Bibtex Citation:

@ARTICLE{8998140,

author={L. {Han} and T. {Zheng} and Y. {Zhu} and L. {Xu} and L. {Fang}},

journal={IEEE Transactions on Visualization and Computer Graphics},

title={Live Semantic 3D Perception for Immersive Augmented Reality},

year={2020},

volume={26},

number={5},

pages={2012-2022},

doi={10.1109/TVCG.2020.2973477}}


Deep Learning for Camera Data Acquisition, Control, and Image Estimation

D. Brady, L. Fang and Z. Ma, 

Advances in Optics and Photonics, 2020.

Abstract:

We review the impact of deep-learning technologies on camera architecture. The function of a camera is first to capture visual information and second to form an image. Conventionally, both functions are implemented in physical optics. Throughout the digital age, however, joint design of physical sampling and electronic processing, e.g., computational imaging, has been increasingly applied to improve these functions. Over the past five years, deep learning has radically improved the capacity of computational imaging. Here we briefly review the development of artificial neural networks and their recent intersection with computational imaging. We then consider in more detail how deep learning impacts the primary strategies of computational photography: focal plane modulation, lens design, and robotic control. With focal plane modulation, we show that deep learning improves signal inference to enable faster hyperspectral, polarization, and video capture while reducing the power per pixel by 10–100×. With lens design, deep learning improves multiple aperture image fusion to enable task-specific array cameras. With control, deep learning enables dynamic scene-specific control that may ultimately enable cameras that capture the entire optical data cube (the "light field"), rather than just a focal slice. Finally, we discuss how these three strategies impact the physical camera design as we seek to balance physical compactness and simplicity, information capacity, computational complexity, and visual fidelity.


Latex Bibtex Citation:

@article{Brady:20,

author = {David J. Brady and Lu Fang and Zhan Ma},

journal = {Adv. Opt. Photon.},

keywords = {Extended depth of field; Image processing; Image quality; Imaging systems; Optical imaging; Signal processing},

number = {4},

pages = {787--846},

publisher = {OSA},

title = {Deep learning for camera data acquisition, control, and image estimation},

volume = {12},

month = {Dec},

year = {2020},

url = {http://aop.osa.org/abstract.cfm?URI=aop-12-4-787},

doi = {10.1364/AOP.398263}

}


In Situ Optical Backpropagation Training of Diffractive Optical Neural Networks

T. Zhou, L. Fang, T. Yan, J. Wu, Y. Li, J. Fan, H. Wu, X. Lin, and Q. Dai, 

Photonics Research, Vol. 8, No. 6, pp.940-953, 2020.

Abstract:

Training an artificial neural network with backpropagation algorithms to perform advanced machine learning tasks requires an extensive computational process. This paper proposes to implement the backpropagation algorithm optically for in situ training of both linear and nonlinear diffractive optical neural networks, which enables the acceleration of training speed and improvement in energy efficiency on core computing modules. We demonstrate that the gradient of a loss function with respect to the weights of diffractive layers can be accurately calculated by measuring the forward and backward propagated optical fields based on light reciprocity and phase conjunction principles. The diffractive modulation weights are updated by programming a high-speed spatial light modulator to minimize the error between prediction and target output and perform inference tasks at the speed of light. We numerically validate the effectiveness of our approach on simulated networks for various applications. The proposed in situ optical learning architecture achieves accuracy comparable to in silico training with an electronic computer on the tasks of object classification and matrix-vector multiplication, which further allows the diffractive optical neural network to adapt to system imperfections. Also, the self-adaptive property of our approach facilitates the novel application of the network for all-optical imaging through scattering media. The proposed approach paves the way for robust implementation of large-scale diffractive neural networks to perform distinctive tasks all-optically.
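A minimal numerical analogue of this measurement scheme (a single phase layer and a generic unitary "propagation"; all names and values are illustrative assumptions, not the authors' optical setup) shows the key identity: the gradient of the output error with respect to the layer's phases follows from the forward field at the layer combined with the error field propagated backwards through the adjoint of the optics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# A stand-in unitary propagator: unit-modulus transfer function in the
# Fourier domain (same structure as angular-spectrum propagation).
H = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
prop     = lambda f: np.fft.ifft2(np.fft.fft2(f) * H)          # forward pass
prop_adj = lambda f: np.fft.ifft2(np.fft.fft2(f) * H.conj())   # backward pass

E_in   = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
phi    = rng.uniform(0, 2 * np.pi, (n, n))        # diffractive layer weights
target = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def loss(phi):
    return np.sum(np.abs(prop(E_in * np.exp(1j * phi)) - target) ** 2)

# "In situ" gradient: forward field at the layer combined with the
# output error propagated backwards (adjoint / reciprocal optics).
u = E_in * np.exp(1j * phi)          # field just after the phase layer
r = prop(u) - target                 # error at the output plane
grad = 2 * np.imag(np.conj(u) * prop_adj(r))

# Finite-difference check of one phase pixel.
eps = 1e-5
d = np.zeros((n, n)); d[2, 3] = eps
num = (loss(phi + d) - loss(phi - d)) / (2 * eps)
```

`grad[2, 3]` matches `num` to finite-difference accuracy; this identity is what allows a physical system to measure its own gradients from two light passes instead of simulating them electronically.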


Latex Bibtex Citation:

@article{Zhou:20,

author = {Tiankuang Zhou and Lu Fang and Tao Yan and Jiamin Wu and Yipeng Li and Jingtao Fan and Huaqiang Wu and Xing Lin and Qionghai Dai},

journal = {Photon. Res.},

keywords = {Diffractive optical elements; Light fields; Optical fields; Optical networks; Optical neural systems; Phase shifting digital holography},

number = {6},

pages = {940--953},

publisher = {OSA},

title = {In situ optical backpropagation training of diffractive optical neural networks},

volume = {8},

month = {Jun},

year = {2020},

url = {http://www.osapublishing.org/prj/abstract.cfm?URI=prj-8-6-940},

doi = {10.1364/PRJ.389553}

}


All-in-depth via Cross-baseline Light Field Camera

D. Jin, A. Zhang, J. Wu, G. Wu, H. Wang and L. Fang,

Proc. of The 28th ACM International Conference on Multimedia (MM '20)

Abstract:

The light-field (LF) camera holds great promise for passive, general-purpose depth estimation, benefiting from its high angular resolution, yet it suffers from a small baseline for distant regions. While a stereo solution with a large baseline is superior for distant scenarios, its limited angular resolution becomes problematic for near objects. Aiming for an all-in-depth solution, we propose a cross-baseline LF camera using a commercial LF camera and a monocular camera, which naturally form a 'stereo camera' that compensates the baseline of the LF camera. The idea is simple yet non-trivial, due to the significant angular resolution gap and baseline gap between LF and stereo cameras. Fusing two depth maps from the LF and stereo modules in the spatial domain is unreliable, as it relies on the imprecisely predicted depth to distinguish close and distant ranges and determine the weights for fusion. Alternatively, taking the unified representation of both LF and monocular sub-aperture views in the epipolar plane image (EPI) domain, we show that for each pixel, the minimum variance along different shearing degrees in the EPI domain estimates its depth with the highest fidelity. By minimizing this minimum variance, the depth error is minimized accordingly. The insight is that the calculated minimum variance in the EPI domain has higher fidelity than the predicted depth in the spatial domain. Extensive experiments demonstrate the superiority of our cross-baseline LF camera in providing a high-quality all-in-depth map from 0.2 m to 100 m.
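The EPI shearing principle can be sketched in a simplified 1D form (synthetic data and hypothetical helper names, not the paper's pipeline): shear the angular views of an epipolar-plane image by each candidate disparity and keep, per pixel, the disparity whose sheared views have the smallest variance.

```python
import numpy as np

def epi_depth(epi, candidates):
    """Per-pixel disparity from an epipolar-plane image.

    epi: (V, W) array, V angular views of one image scanline.
    For each candidate disparity d, shift view v by (v - center) * d;
    the shear that aligns the views minimizes the variance across them.
    """
    V, W = epi.shape
    x = np.arange(W)
    center = V // 2
    best_var = np.full(W, np.inf)
    best_disp = np.zeros(W)
    for d in candidates:
        sheared = np.stack([np.interp(x + (v - center) * d, x, epi[v])
                            for v in range(V)])
        var = sheared.var(axis=0)
        better = var < best_var
        best_var[better] = var[better]
        best_disp[better] = d
    return best_disp

# Synthetic EPI: a sinusoidal scanline shifting by 1 pixel per view.
V, W, true_d = 7, 64, 1.0
x = np.arange(W)
line = np.sin(0.4 * x)
epi = np.stack([np.interp(x - (v - V // 2) * true_d, x, line)
                for v in range(V)])
disp = epi_depth(epi, np.linspace(0.0, 2.0, 21))
```

Away from the image borders, the recovered disparity equals the true shift of 1 pixel per view; the minimum variance itself serves as the per-pixel fidelity measure the abstract describes.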


Latex Bibtex Citation:

@inproceedings{Jin2020AllindepthVC,

  title={All-in-depth via Cross-baseline Light Field Camera},

  author={Dingjian Jin and A. Zhang and Jiamin Wu and Gaochang Wu and Haoqian Wang and Lu Fang},

  booktitle={Proceedings of the 28th ACM International Conference on Multimedia},

  year={2020}

}



EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera

L. Xu, W. Xu, V. Golyanik, M. Habermann, L. Fang and C. Theobalt,

Proc. of Computer Vision and Pattern Recognition (CVPR), 2020. (Oral)

Abstract:

The high frame rate is a critical requirement for capturing fast human motions. In this setting, existing markerless image-based methods are constrained by the lighting requirement, the high data bandwidth and the consequent high computation overhead. In this paper, we propose EventCap - the first approach for 3D capturing of high-speed human motions using a single event camera. Our method combines model-based optimization and CNN-based human pose detection to capture high frequency motion details and to reduce the drifting in the tracking. As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than using high frame rate videos. Experiments on our new event-based fast human motion dataset demonstrate the effectiveness and accuracy of our method, as well as its robustness to challenging lighting conditions.


Latex Bibtex Citation:

@INPROCEEDINGS{9157340,

author={L. {Xu} and W. {Xu} and V. {Golyanik} and M. {Habermann} and L. {Fang} and C. {Theobalt}},

booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},

title={EventCap: Monocular 3D Capture of High-Speed Human Motions Using an Event Camera},

year={2020},

volume={},

number={},

pages={4967-4977},

doi={10.1109/CVPR42600.2020.00502}}


PANDA: A Gigapixel-level Human-centric Video Dataset

X. Wang, X. Zhang, Y. Zhu, Y. Guo, X. Yuan, L. Xiang, Z. Wang, G. Ding, Q. Dai, D. Brady and L. Fang

Proc. of Computer Vision and Pattern Recognition (CVPR), 2020.

Abstract:

We present PANDA, the first gigaPixel-level humAN-centric viDeo dAtaset, for large-scale, long-term, and multi-object visual analysis. The videos in PANDA were captured by a gigapixel camera and cover real-world scenes with both wide field-of-view (~1 square kilometer area) and high-resolution details (~gigapixel-level/frame). The scenes may contain 4k head counts with over 100× scale variation. PANDA provides enriched and hierarchical ground-truth annotations, including 15,974.6k bounding boxes, 111.8k fine-grained attribute labels, 12.7k trajectories, 2.2k groups and 2.9k interactions. We benchmark the human detection and tracking tasks. Due to the vast variance of pedestrian pose, scale, occlusion and trajectory, existing approaches are challenged by both accuracy and efficiency. Given the uniqueness of PANDA with both wide FoV and high resolution, a new task of interaction-aware group detection is introduced. We design a `global-to-local zoom-in' framework, where global trajectories and local interactions are simultaneously encoded, yielding promising results. We believe PANDA will contribute to the community of artificial intelligence and praxeology by understanding human behaviors and interactions in large-scale real-world scenes. PANDA Website: http://www.panda-dataset.com.


Latex Bibtex Citation:

@INPROCEEDINGS{9156646,

author={X. {Wang} and X. {Zhang} and Y. {Zhu} and Y. {Guo} and X. {Yuan} and L. {Xiang} and Z. {Wang} and G. {Ding} and D. {Brady} and Q. {Dai} and L. {Fang}},

booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},

title={PANDA: A Gigapixel-Level Human-Centric Video Dataset},

year={2020},

volume={},

number={},

pages={3265-3275},

doi={10.1109/CVPR42600.2020.00333}}



OccuSeg: Occupancy-aware 3D Instance Segmentation

L. Han, T. Zheng, L. Xu and L. Fang

Proc. of Computer Vision and Pattern Recognition (CVPR), 2020.

Abstract:

3D instance segmentation, with a variety of applications in robotics and augmented reality, is in high demand these days. Unlike 2D images, which are projective observations of the environment, 3D models provide a metric reconstruction of the scene without occlusion or scale ambiguity. In this paper, we define the “3D occupancy size” as the number of voxels occupied by each instance. This quantity can be robustly predicted, and on this basis we propose OccuSeg, an occupancy-aware 3D instance segmentation scheme. Our multi-task learning produces both an occupancy signal and embedding representations, where the training of spatial and feature embeddings varies according to their difference in scale awareness. Our clustering scheme benefits from the reliable comparison between the predicted occupancy size and the clustered occupancy size, which encourages hard samples to be correctly clustered and avoids over-segmentation. The proposed approach achieves state-of-the-art performance on three real-world datasets, i.e. ScanNetV2, S3DIS and SceneNN, while maintaining high efficiency.
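The occupancy-guided clustering rule can be illustrated with a toy numpy check (hypothetical helper names and made-up numbers; the real system learns these predictions per voxel): a candidate cluster is trusted when its actual point count agrees with the occupancy size its member points predict, while a small ratio flags an over-segmented fragment that should be merged.

```python
import numpy as np

def occupancy_ratio(member_ids, predicted_occupancy):
    """Ratio of a candidate cluster's actual size to the occupancy size
    predicted (on average) by its member points. Ratios well below 1
    suggest over-segmentation; ratios well above 1 suggest
    under-segmentation."""
    return len(member_ids) / predicted_occupancy[member_ids].mean()

# Toy instance of 100 points, each noisily predicting that its
# instance occupies ~100 voxels.
rng = np.random.default_rng(0)
predicted = rng.normal(100.0, 5.0, size=100)
whole = np.arange(100)       # a correctly clustered instance
fragment = np.arange(40)     # an over-segmented piece of the same instance
r_whole = occupancy_ratio(whole, predicted)
r_frag = occupancy_ratio(fragment, predicted)
```

Here `r_whole` is close to 1 (accept the cluster) while `r_frag` is near 0.4 (keep merging), mirroring the comparison between predicted and clustered occupancy sizes described above.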



Latex Bibtex Citation:

@INPROCEEDINGS{9157103,

author={L. {Han} and T. {Zheng} and L. {Xu} and L. {Fang}},

booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},

title={OccuSeg: Occupancy-Aware 3D Instance Segmentation},

year={2020},

volume={},

number={},

pages={2937-2946},

doi={10.1109/CVPR42600.2020.00301}}


Multiscale-VR: Multiscale Gigapixel 3D Panoramic Videography for Virtual Reality

J. Zhang, T. Zhu, A. Zhang, Z. Wang, X. Yuan, Q. Dai and L. Fang

Proc. of Int. Conf. on Computational Photography (ICCP), 2020. (Oral)

Abstract:

Creating virtual reality (VR) content with effective imaging systems has attracted significant attention worldwide following the broad applications of VR in various fields, including entertainment, surveillance, sports, etc. However, due to the inherent trade-off between field-of-view and resolution of the imaging system as well as the prohibitive computational cost, live capturing and generating multiscale 360° 3D video content at an eye-limited resolution to provide immersive VR experiences confront significant challenges. In this work, we propose Multiscale-VR, a multiscale unstructured camera array computational imaging system for high-quality gigapixel 3D panoramic videography that creates the six-degree-of-freedom multiscale interactive VR content. The Multiscale-VR imaging system comprises scalable cylindrical-distributed global and local cameras, where global stereo cameras are stitched to cover 360° field-of-view, and unstructured local monocular cameras are adapted to the global camera for flexible high-resolution video streaming arrangement. We demonstrate that a high-quality gigapixel depth video can be faithfully reconstructed by our deep neural network-based algorithm pipeline where the global depth via stereo matching and the local depth via high-resolution RGB-guided refinement are associated. To generate the immersive 3D VR content, we present a three-layer rendering framework that includes an original layer for scene rendering, a diffusion layer for handling occlusion regions, and a dynamic layer for efficient dynamic foreground rendering. Our multiscale reconstruction architecture enables the proposed prototype system for rendering highly effective 3D, 360° gigapixel live VR video at 30 fps from the captured high-throughput multiscale video sequences. 
The proposed multiscale interactive VR content generation approach by using a heterogeneous camera system design, in contrast to the existing single-scale VR imaging systems with structured homogeneous cameras, will open up new avenues of research in VR and provide an unprecedented immersive experience benefiting various novel applications.


Latex Bibtex Citation:

@INPROCEEDINGS{9105244,

author={J. {Zhang} and T. {Zhu} and A. {Zhang} and X. {Yuan} and Z. {Wang} and S. {Beetschen} and L. {Xu} and X. {Lin} and Q. {Dai} and L. {Fang}},

booktitle={2020 IEEE International Conference on Computational Photography (ICCP)},

title={Multiscale-VR: Multiscale Gigapixel 3D Panoramic Videography for Virtual Reality},

year={2020},

volume={},

number={},

pages={1-12},

doi={10.1109/ICCP48838.2020.9105244}}


2019

UnstructuredFusion: Realtime 4D Geometry and Texture Reconstruction using Commercial RGBD Cameras

L. Xu, Z. Su, L. Han, T. Yu, Y. Liu and L. Fang

IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), special issue on RGB-D Vision, April 2019.

Abstract:

A high-quality 4D geometry and texture reconstruction of human activities usually requires multiview perception via a highly structured multi-camera setup, where both the specifically designed cameras and the tedious pre-calibration restrict the popularity of professional multi-camera systems for daily applications. In this paper, we propose UnstructuredFusion, a practicable realtime markerless human performance capture method using unstructured commercial RGBD cameras. Along with a flexible hardware setup using simply three unstructured RGBD cameras without any careful pre-calibration, the challenging 4D reconstruction through multiple asynchronous videos is solved by three novel technical contributions, i.e., online multi-camera calibration, skeleton-warping-based non-rigid tracking, and temporal-blending-based atlas texturing. The overall insight behind them lies in the solid global constraints of the human body and human motion, which are modeled by the skeleton and the skeleton warping, respectively. Extensive experiments, such as allocating three cameras flexibly in a handheld way, demonstrate that the proposed UnstructuredFusion achieves high-quality 4D geometry and texture reconstruction without tiresome pre-calibration, liberating the cumbersome hardware and software restrictions of conventional structured multi-camera systems, while eliminating the inherent occlusion issues of a single-camera setup.


Latex Bibtex Citation:

@ARTICLE{8708933,
 author={L. {Xu} and Z. {Su} and L. {Han} and T. {Yu} and Y. {Liu} and L. {Fang}},
 journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
 title={UnstructuredFusion: Realtime 4D Geometry and Texture Reconstruction Using Commercial RGBD Cameras},
 year={2020},
 volume={42},
 number={10},
 pages={2508-2522},
 doi={10.1109/TPAMI.2019.2915229}}

Real-time Global Registration for Globally Consistent RGBD SLAM

L. Han, L. Xu, D. Bobkov, E. Steinbach and L. Fang

IEEE Trans. on Robotics (TRO), July 2018.

Abstract:

Real-time globally consistent camera localization is critical for visual simultaneous localization and mapping (SLAM) applications. Despite the popularity of highly efficient pose graph optimization as a backend in SLAM, its deficiency in accuracy can hardly benefit reconstruction applications. An alternative solution for the sake of high accuracy would be global registration, which minimizes the alignment error of all the corresponding observations, yet suffers from high complexity due to the tremendous number of observations that need to be considered. In this paper, we start by analyzing the complexity bottleneck of the global point cloud registration problem, i.e., each observation (three-dimensional point feature) has to be linearized based on its local coordinates (camera poses), which however are nonlinear and dynamically changing, resulting in extensive computation during optimization. We further prove that such nonlinearity can be decoupled into a linear component (feature positions) and nonlinear components (camera poses), where the former can be effectively represented by its compact second-order statistics, while the latter merely requires six degrees of freedom for each camera pose. Benefiting from the decoupled representation, the complexity can be significantly reduced without sacrificing accuracy. Experiments show that the proposed algorithm achieves globally consistent pose estimation in real time via CPU computing, and offers accuracy comparable to that of state-of-the-art methods that use GPU computing, enabling the practical usage of globally consistent RGB-D SLAM on highly computationally constrained devices.


Latex Bibtex Citation:

@ARTICLE{8606275,  author={L. {Han} and L. {Xu} and D. {Bobkov} and E. {Steinbach} and L. {Fang}},  journal={IEEE Transactions on Robotics},   title={Real-Time Global Registration for Globally Consistent RGB-D SLAM},   year={2019},  volume={35},  number={2},  pages={498-508},  doi={10.1109/TRO.2018.2882730}}

Fourier-space Diffractive Deep Neural Network

T. Yan, J. Wu, T. Zhou, H. Xie, J. Fan, L. Fang, X. Lin and Q. Dai, 

Physical Review Letters (PRL), July 2019.

Abstract:

In this Letter we propose the Fourier-space diffractive deep neural network (FD2NN) for all-optical image processing that performs advanced computer vision tasks at the speed of light. The FD2NN is achieved by placing the extremely compact diffractive modulation layers at the Fourier plane or both Fourier and imaging planes of an optical system, where the optical nonlinearity is introduced from ferroelectric thin films. We demonstrated that FD2NN can be trained with deep learning algorithms for all-optical saliency detection and high-accuracy object classification.


Latex Bibtex Citation:

@article{yan2019fourier,

author = {Yan, Tao and Wu, Jiamin and Zhou, Tiankuang and Xie, Hao and Xu, Feng and Fan, Jingtao and Fang, Lu and Lin, Xing and Dai, Qionghai},

year = {2019},

month = {07},

pages = {023901},

title = {Fourier-space Diffractive Deep Neural Network},

volume = {123},

journal = {Physical Review Letters},

doi = {10.1103/PhysRevLett.123.023901}

}

iDFusion: Globally Consistent Dense 3D Reconstruction from RGB-D and Inertial Measurements

D. Zhong, L. Han and L. Fang

Proc. of The 27th ACM International Conference on Multimedia (MM '19).

Abstract:

We present a practical, fast, globally consistent and robust dense 3D reconstruction system, iDFusion, by exploring the joint benefit of both the visual (RGB-D) solution and an inertial measurement unit (IMU). A global optimization considering all the previous states is adopted to maintain high localization accuracy and global consistency, yet its complexity, being linear in the number of all previous camera/IMU observations, seriously impedes real-time implementation. We show that the global optimization can be solved efficiently at a complexity linear in the number of keyframes, and further realize a real-time dense 3D reconstruction system given the estimated camera states. Meanwhile, for the sake of robustness, we propose a novel loop-validity detector based on the estimated bias of the IMU state: by checking the consistency of camera movements, a false loop closure constraint introduces manifest inconsistency between the camera movements and the IMU measurements. Experiments reveal that iDFusion achieves superior reconstruction performance running at 25 fps on the CPU of portable devices, under challenging yet practical scenarios including texture-less regions, motion blur, and repetitive contents.


Latex Bibtex Citation:

@inproceedings{10.1145/3343031.3351085, author = {Zhong, Dawei and Han, Lei and Fang, Lu}, title = {IDFusion: Globally Consistent Dense 3D Reconstruction from RGB-D and Inertial Measurements}, year = {2019}, isbn = {9781450368896}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3343031.3351085}, doi = {10.1145/3343031.3351085}, booktitle = {Proceedings of the 27th ACM International Conference on Multimedia}, pages = {962–970}, numpages = {9}, keywords = {loop closure, visual-imu global optimization, 3d reconstruction, real-time slam}, location = {Nice, France}, series = {MM '19} }


Overcoming Tissue Scattering in Wide-field Two-photon Imaging by Extended Detection and Computational Reconstruction

Y. Zhang, T. Zhou, X. Hu, X. Li, L. Fang, L. Kong, and Q. Dai, 

Optics Express, vol. 27, pp. 20117-20132, July 2019.

Abstract:

Compared to point-scanning multiphoton microscopy, line-scanning temporal focusing microscopy (LTFM) is competitive in high imaging speed while maintaining tight axial confinement. However, considering its wide-field detection mode, LTFM suffers from shallow penetration depth as a result of the crosstalk induced by tissue scattering. In contrast to the spatial filtering based on confocal slit detection, here we propose the extended detection LTFM (ED-LTFM), the first wide-field two-photon imaging technique to extract signals from scattered photons and thus effectively extend the imaging depth. By recording a succession of line-shape excited signals in 2D and reconstructing signals under Hessian regularization, we can push the depth limitation of wide-field imaging in scattering tissues. We validate the concept with numerical simulations, and demonstrate the performance of enhanced imaging depth in in vivo imaging of mouse brains.


Latex Bibtex Citation:

@article{Zhang:19,

author = {Yuanlong Zhang and Tiankuang Zhou and Xuemei Hu and Xinyang Li and Hao Xie and Lu Fang and Lingjie Kong and Qionghai Dai},

journal = {Opt. Express},

keywords = {High speed imaging; In vivo imaging; Laser materials processing; Multiphoton microscopy; Spatial filtering; Two photon imaging},

number = {15},

pages = {20117--20132},

publisher = {OSA},

title = {Overcoming tissue scattering in wide-field two-photon imaging by extended detection and computational reconstruction},

volume = {27},

month = {Jul},

year = {2019},

url = {http://www.opticsexpress.org/abstract.cfm?URI=oe-27-15-20117},

doi = {10.1364/OE.27.020117}

}


2018

iHuman3D: Intelligent Human Body 3D Reconstruction using a Single Flying Camera

W. Cheng, L. Xu, L. Han and L. Fang

Proc. of The 26th ACM International Conference on Multimedia (MM '18). (Oral)

Abstract:

Aiming at an autonomous, adaptive and real-time human body reconstruction technique, this paper presents iHuman3D: an intelligent human body 3D reconstruction system using a single aerial robot integrated with an RGB-D camera. Specifically, we propose a real-time and active view planning strategy based on a highly efficient ray casting algorithm in GPU and a novel information gain formulation directly in TSDF. We also propose a human body reconstruction module that revises the traditional volumetric fusion pipeline with a compactly designed non-rigid deformation for slight motion of the human target. We unify both the active view planning and the human body reconstruction in the same TSDF volume-based representation. Quantitative and qualitative experiments are conducted to validate that the proposed iHuman3D system effectively removes the constraint of extra manual labor, enabling real-time and autonomous reconstruction of the human body.


Latex Bibtex Citation:

@inproceedings{10.1145/3240508.3240600,

author = {Cheng, Wei and Xu, Lan and Han, Lei and Guo, Yuanfang and Fang, Lu},

title = {IHuman3D: Intelligent Human Body 3D Reconstruction Using a Single Flying Camera},

year = {2018},

isbn = {9781450356657},

publisher = {Association for Computing Machinery},

address = {New York, NY, USA},

url = {https://doi.org/10.1145/3240508.3240600},

doi = {10.1145/3240508.3240600},

booktitle = {Proceedings of the 26th ACM International Conference on Multimedia},

pages = {1733–1741},

numpages = {9},

keywords = {tsdf, human 3d reconstruction, next best view, flying camera},

location = {Seoul, Republic of Korea},

series = {MM '18}

}


FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing

L. Han and L. Fang

Proc. of Robotics: Science and Systems (RSS), June 2018. (Oral)

Abstract:

Aiming at the practical usage of dense 3D reconstruction on portable devices, we propose FlashFusion, a Fast LArge-Scale High-resolution (sub-centimeter level) 3D reconstruction system without the use of GPU computing. It enables globally consistent localization through a robust yet fast global bundle adjustment scheme, and realizes spatial-hashing-based volumetric fusion running at 300 Hz and rendering at 25 Hz via highly efficient valid chunk selection and mesh extraction schemes. Extensive experiments on both real-world and synthetic datasets demonstrate that FlashFusion succeeds in enabling real-time, globally consistent, high-resolution (5 mm), and large-scale dense 3D reconstruction using highly constrained computation, i.e., CPU computing on portable devices.


Latex Bibtex Citation:

@inproceedings{han2018flashfusion,

  title={FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing.},

  author={Han, Lei and Fang, Lu},

  booktitle={Robotics: Science and Systems},

  volume={1},

  number={6},

  pages={7},

  year={2018}

}


Halftone Image Watermarking by Content Aware Double-sided Embedding Error Diffusion

Y. Guo, R. Wang, L. Fang and X. Cao, 

IEEE Trans. on Image Processing (TIP), May 2018.

Abstract:

In this paper, we carry out a performance analysis from a probabilistic perspective to introduce the expected performances and limitations of error diffusion-based halftone visual watermarking (EDHVW) methods. Then, we propose a new general EDHVW method, content aware double-sided embedding error diffusion (CaDEED), by considering the expected watermark decoding performance for the specific content of the cover images and watermark, the different noise tolerance abilities of various cover image content, and the different importance levels of every pixel (when being perceived) in the secret pattern (watermark). To demonstrate the effectiveness of CaDEED, we propose CaDEED with expectation constraint (CaDEED-EC) and CaDEED with noise visibility function (NVF) and importance factor (IF) (CaDEED-N&I). Specifically, we build CaDEED-EC by only considering the expected performances for specific cover images and watermarks. By adopting the NVF and proposing the IF to assign weights to every embedding location and watermark pixel, respectively, we build the specific method CaDEED-N&I. In the experiments, we select the optimal parameters for the NVF and IF via extensive experiments. In both the numerical and visual comparisons, the experimental results demonstrate the superiority of our proposed work.


Latex Bibtex Citation:

@ARTICLE{8314752,  author={Y. {Guo} and O. C. {Au} and R. {Wang} and L. {Fang} and X. {Cao}},  journal={IEEE Transactions on Image Processing},   title={Halftone Image Watermarking by Content Aware Double-Sided Embedding Error Diffusion},   year={2018},  volume={27},  number={7},  pages={3387-3402},  doi={10.1109/TIP.2018.2815181}}

CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping

H. Zheng, M. Ji, H. Wang, Y. Liu and L. Fang

Proc. of European conference on computer vision (ECCV), 2018.

Abstract:

Reference-based super-resolution (RefSR) super-resolves a low-resolution (LR) image given an external high-resolution (HR) reference image, where the reference image and the LR image share a similar viewpoint but with a significant (8×) resolution gap. Existing RefSR methods work in a cascaded way, such as patch matching followed by a synthesis pipeline, with two independently defined objective functions, leading to inter-patch misalignment, grid effects and inefficient optimization. To resolve these issues, we present CrossNet, an end-to-end and fully-convolutional deep neural network using cross-scale warping. Our network contains image encoders, cross-scale warping layers, and a fusion decoder: the encoders serve to extract multi-scale features from both the LR and the reference images; the cross-scale warping layers spatially align the reference feature map with the LR feature map; the decoder finally aggregates feature maps from both domains to synthesize the HR output. Using cross-scale warping, our network is able to perform spatial alignment at the pixel level in an end-to-end fashion, which improves upon existing schemes both in precision (by around 2-4 dB) and efficiency (more than 100 times faster).


Latex Bibtex Citation:

@inproceedings{zheng2018crossnet,
title={CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping},
author={Zheng, Haitian and Ji, Mengqi and Wang, Haoqian and Liu, Yebin and Fang, Lu},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
pages={88--104},
year={2018}
}

Magnify-Net for Multi-person 2D Pose Estimation

H. Wang, W. P. An, X. Wang, and L. Fang

Proc. of IEEE International Conference on Multimedia and Expo (ICME), 2018. (Oral)

Abstract:

We propose a novel method for multi-person 2D pose estimation. Our model, which we refer to as Magnify-Net, zooms in on the image gradually to solve the bottleneck problem of mean average precision (mAP) versus pixel error. Moreover, we squeeze the network efficiently with an inspired design that increases the mAP while saving processing time. It is a simple, yet robust, bottom-up approach consisting of one stage. The architecture is designed to detect the part positions and their associations jointly via two branches of the same sequential prediction process, resulting in a remarkable rise in performance and efficiency. Our method outperforms the previous state-of-the-art results on the challenging COCO keypoints task and the MPII Multi-Person Dataset.


Latex Bibtex Citation:

@INPROCEEDINGS{8486591,  author={H. {Wang} and W. P. {An} and X. {Wang} and L. {Fang} and J. {Yuan}},  booktitle={2018 IEEE International Conference on Multimedia and Expo (ICME)},   title={Magnify-Net for Multi-Person 2D Pose Estimation},   year={2018},  volume={},  number={},  pages={1-6},  doi={10.1109/ICME.2018.8486591}}

A Natural Shape-Preserving Stereoscopic Image Stitching

H. Wang, Y. Zhou, X. Wang, and L. Fang

Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.

Abstract:

This paper presents a method for stereoscopic image stitching, which makes stereoscopic images look as natural as possible. Our method combines a constrained projective warp and a shape-preserving warp to reduce the projective distortion and the vertical disparity of the stitched image. In addition to providing good alignment accuracy and maintaining the consistency of the input stereoscopic images, we add a specific restriction to the projective warp, which establishes the connection between the target left and right images. To optimize the whole warp, an energy term is designed that constrains the shape of straight lines and the vertical disparity. Experimental results on a variety of stereoscopic images confirm the effectiveness of the proposed method.


Latex Bibtex Citation:

@INPROCEEDINGS{8461411,  author={H. {Wang} and Y. {Zhou} and X. {Wang} and L. {Fang}},  booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},   title={A Natural Shape-Preserving Stereoscopic Image Stitching},   year={2018},  volume={},  number={},  pages={1812-1816},  doi={10.1109/ICASSP.2018.8461411}}

2017

Adaptive Multispectral Demosaicking Based on Frequency Domain Analysis of Spectral Correlation

S. Jaiswal and L. Fang

IEEE Trans. on Image Processing (TIP), Vol. 26, No. 2, pp. 953-968, Feb. 2017.

Abstract:

Color filter array (CFA) interpolation, or three-band demosaicking, is a process of interpolating the missing color samples in each band to reconstruct a full color image. In this paper, we are concerned with the challenging problem of multispectral demosaicking, where each band is significantly undersampled due to the increase in the number of bands. Specifically, we present a frequency-domain analysis of the subsampled color-difference signal and observe that the conventional assumption of highly correlated spectral bands for estimating undersampled components is not precise. Instead, such a spectral correlation assumption is image dependent and rests on the aliasing interferences among the various color-difference spectra. To address this problem, we propose an adaptive spectral-correlation-based demosaicking (ASCD) algorithm that uses a novel anti-aliasing filter to suppress these interferences, and we then integrate it with an intra-prediction scheme to generate a more accurate prediction of the reconstructed image. Our ASCD is computationally very simple, and exploits the spectral correlation property much more effectively than the existing algorithms. Experimental results conducted on two datasets for multispectral demosaicking and one dataset for CFA demosaicking demonstrate that the proposed ASCD outperforms the state-of-the-art algorithms.


Latex Bibtex Citation:

@ARTICLE{7762719,  author={S. P. {Jaiswal} and L. {Fang} and V. {Jakhetiya} and J. {Pang} and K. {Mueller} and O. C. {Au}},  journal={IEEE Transactions on Image Processing},   title={Adaptive Multispectral Demosaicking Based on Frequency-Domain Analysis of Spectral Correlation},   year={2017},  volume={26},  number={2},  pages={953-968},  doi={10.1109/TIP.2016.2634120}}

SurfaceNet: an End-to-end 3D Neural Network for Multiview Stereopsis

M. Ji, J. Gall, H. Zheng, Y. Liu and L. Fang

Proc. of IEEE International Conference on Computer Vision (ICCV), Oct. 2017.

Abstract:

This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency and geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network, which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark.


Latex Bibtex Citation:

@INPROCEEDINGS{8237515,  author={M. {Ji} and J. {Gall} and H. {Zheng} and Y. {Liu} and L. {Fang}},  booktitle={2017 IEEE International Conference on Computer Vision (ICCV)},   title={SurfaceNet: An End-to-End 3D Neural Network for Multiview Stereopsis},   year={2017},  volume={},  number={},  pages={2326-2334},  doi={10.1109/ICCV.2017.253}}

Beyond SIFT using Binary Features in Loop Closure Detection

L. Han and L. Fang

IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Sept. 2017. (Oral)

Abstract:

In this paper, a binary feature based Loop Closure Detection (LCD) method is proposed, which for the first time achieves higher precision-recall (PR) performance than state-of-the-art SIFT feature based approaches. The proposed system originates from our previous work, Multi-Index hashing for Loop closure Detection (MILD), which employs Multi-Index Hashing (MIH) [1] for Approximate Nearest Neighbor (ANN) search of binary features. As the accuracy of MILD is limited by repeating textures and inaccurate image similarity measurement, burstiness handling is introduced to solve this problem and achieves considerable accuracy improvement. Additionally, a comprehensive theoretical analysis of the MIH used in MILD is conducted to further explore the potential of hashing methods for ANN search of binary features from a probabilistic perspective. This analysis provides more freedom in choosing the best parameters in MIH for different application scenarios. Experiments on popular public datasets show that the proposed approach achieves the highest accuracy compared with the state of the art while running at 30 Hz for databases containing thousands of images.


Latex Bibtex Citation:

@INPROCEEDINGS{8206261,  author={L. {Han} and G. {Zhou} and L. {Xu} and L. {Fang}},  booktitle={2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},   title={Beyond SIFT using binary features in Loop Closure Detection},   year={2017},  volume={},  number={},  pages={4057-4063},  doi={10.1109/IROS.2017.8206261}}

MILD: Multi-index hashing for appearance based loop closure detection

L. Han and L. Fang

Proc. of IEEE International Conference on Multimedia and Expo (ICME), July 2017. (Best Student Paper Award)

Abstract:

Loop Closure Detection (LCD) has been proved to be extremely useful in globally consistent visual Simultaneous Localization and Mapping (SLAM) and appearance-based robot relocalization. Methods exploiting binary features in a bag-of-words representation have recently gained a lot of popularity for their efficiency, but suffer from low recall due to the inherent drawback that high dimensional binary feature descriptors lack well-defined centroids. In this paper, we propose a realtime LCD approach called MILD (Multi-Index hashing for Loop closure Detection), in which image similarity is measured by feature matching directly to achieve high recall, without introducing extra computational complexity, with the aid of Multi-Index Hashing (MIH). A theoretical analysis of the approximate image similarity measurement using MIH is presented, which reveals the trade-off between efficiency and accuracy from a probabilistic perspective. Extensive comparisons with state-of-the-art LCD methods demonstrate the superiority of MILD in both efficiency and accuracy.


Latex Bibtex Citation:

@INPROCEEDINGS{8019479,  author={L. {Han} and L. {Fang}},  booktitle={2017 IEEE International Conference on Multimedia and Expo (ICME)},   title={MILD: Multi-index hashing for appearance based loop closure detection},   year={2017},  volume={},  number={},  pages={139-144},  doi={10.1109/ICME.2017.8019479}}

Multiscale Gigapixel Video: A Cross Resolution Image Matching and Warping Approach

X. Yuan, L. Fang, D. Brady, Y. Liu and Q. Dai, 

Proc. of IEEE International Conference on Computational Photography (ICCP), May 2017. (Oral)

Abstract:

We present a multi-scale camera array to capture and synthesize gigapixel videos in an efficient way. Our acquisition setup contains a reference camera with a short-focus lens to get a large field-of-view video and a number of unstructured long-focus cameras to capture local-view details. Based on this new design, we propose an iterative feature matching and image warping method to independently warp each local-view video to the reference video. The key feature of the proposed algorithm is its robustness to, and high accuracy under, the huge resolution gap (more than 8× between the reference and the local-view videos), camera parallaxes, complex scene appearances and color inconsistency among cameras. Experimental results show that the proposed multi-scale camera array and cross-resolution video warping scheme is capable of generating seamless gigapixel videos without the need for camera calibration and large overlapping area constraints between the local-view cameras.


Latex Bibtex Citation:

@INPROCEEDINGS{7951481,  author={X. {Yuan} and L. {Fang} and Q. {Dai} and D. J. {Brady} and Y. {Liu}},  booktitle={2017 IEEE International Conference on Computational Photography (ICCP)},   title={Multiscale gigapixel video: A cross resolution image matching and warping approach},   year={2017},  volume={},  number={},  pages={1-9},  doi={10.1109/ICCPHOT.2017.7951481}}

2016

Subpixel Image Quality Assessment Syncretizing Local Subpixel and Global Pixel Features

J. Zeng, L. Fang, J. Pang, H. Li and F. Wu, 

IEEE Trans. on Image Processing (TIP), Vol. 25, No. 12, pp. 5841-5856, Dec. 2016.

Abstract:

The subpixel rendering technology increases the apparent resolution of an LCD/OLED screen by exploiting the physical property that a pixel is composed of RGB individually addressable subpixels. Due to the intrinsic intercoordination between apparent luminance resolution and color fringing artifact, a common method of subpixel image assessment is subjective evaluation. In this paper, we propose a unified subpixel image quality assessment metric called subpixel image assessment (SPA), which syncretizes local subpixel and global pixel features. Specifically, comprehensive subjective studies are conducted to acquire data of user preferences. Accordingly, a collection of low-level features is designed under extensive perceptual validation, capturing subpixel and pixel features, which reflect local details and global distance from the original image. With the features and their measurements as the basis, the SPA is obtained, which leads to a good representation of the subpixel image characteristics. The experimental results justify the effectiveness and the superiority of the SPA. The SPA is also successfully adopted in a variety of applications, including content adaptive sampling and metric-guided image compression.


Latex Bibtex Citation:

@ARTICLE{7583723,  author={J. {Zeng} and L. {Fang} and J. {Pang} and H. {Li} and F. {Wu}},  journal={IEEE Transactions on Image Processing},   title={Subpixel Image Quality Assessment Syncretizing Local Subpixel and Global Pixel Features},   year={2016},  volume={25},  number={12},  pages={5841-5856},  doi={10.1109/TIP.2016.2615429}}

Estimation of Virtual View Synthesis Distortion Towards Virtual View Position

L. Fang, Y. Xiang, N.M. Cheung and F. Wu, 

IEEE Trans. on Image Processing (TIP), Vol. 25, No. 5, pp. 1961-1976, May 2016.

Abstract:

We propose an analytical model to estimate the depth-error-induced virtual view synthesis distortion (VVSD) in 3D video, taking the distance between the reference and virtual views (the virtual view position) into account. In particular, we start with a comprehensive pre-analysis and discussion of several possible VVSD scenarios. Taking the intrinsic characteristics of each scenario into consideration, we classify them into four clusters: 1) overlapping region; 2) disocclusion and boundary region; 3) edge region; and 4) infrequent region. We propose to model VVSD as the linear combination of the distortion under different scenarios (DDS) weighted by the probability under different scenarios (PDS). We show analytically that DDS and PDS can be related to the virtual view position using quadratic/biquadratic models and linear models, respectively. Experimental results verify that the proposed model is capable of estimating the relationship between VVSD and the distance between the reference and virtual views. Therefore, our model can be used to inform the reference view setup for capturing, or to estimate the distortion at certain virtual view positions, when the depth information is compressed.
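The linear-combination model described above can be sketched directly: per-cluster distortion (here quadratic in the view distance) weighted by a per-cluster probability (linear in the view distance). The coefficients are placeholders, not fitted values from the paper.

```python
def vvsd_estimate(d, clusters):
    """Estimate virtual view synthesis distortion at reference-to-virtual
    view distance d as a sum over scenario clusters of a distortion model
    (quadratic in d) weighted by a scenario probability (linear in d).
    Each cluster is ((a2, a1, a0), (b1, b0)); coefficients are illustrative."""
    total = 0.0
    for (a2, a1, a0), (b1, b0) in clusters:
        dds = a2 * d * d + a1 * d + a0  # distortion under this scenario (DDS)
        pds = b1 * d + b0               # probability of this scenario (PDS)
        total += pds * dds
    return total
```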


Latex Bibtex Citation:

@ARTICLE{7430323,  author={L. {Fang} and Y. {Xiang} and N. {Cheung} and F. {Wu}},  journal={IEEE Transactions on Image Processing},   title={Estimation of Virtual View Synthesis Distortion Toward Virtual View Position},   year={2016},  volume={25},  number={5},  pages={1961-1976},  doi={10.1109/TIP.2016.2535345}}

Robust Blur Kernel Estimation for License Plate Images from Fast Moving Vehicles

Q. Lu, W. Zhou, L. Fang and H. Li, 

IEEE Trans. on Image Processing (TIP), Vol. 25, No. 5, pp. 2311-2323, May 2016.


Abstract:

As the unique identification of a vehicle, the license plate is a key clue to uncovering over-speed vehicles or those involved in hit-and-run accidents. However, the snapshot of an over-speed vehicle captured by a surveillance camera is frequently blurred due to fast motion, and may even be unrecognizable by humans. The observed plate images are usually of low resolution and suffer severe loss of edge information, which poses a great challenge to existing blind deblurring methods. For license plate image blurring caused by fast motion, the blur kernel can be viewed as a linear uniform convolution and parametrically modeled by its angle and length. In this paper, we propose a novel scheme based on sparse representation to identify the blur kernel. By analyzing the sparse representation coefficients of the recovered image, we determine the angle of the kernel based on the observation that the recovered image has the sparsest representation when the kernel angle corresponds to the genuine motion angle. Then, we estimate the length of the motion kernel with the Radon transform in the Fourier domain. Our scheme can handle large motion blur well, even when the license plate is unrecognizable by humans. We evaluate our approach on real-world images and compare it with several popular state-of-the-art blind image deblurring algorithms. Experimental results demonstrate the superiority of our proposed approach in terms of effectiveness and robustness.
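The angle-selection step can be sketched as a minimum-sparsity search over candidate angles. The `sparsity_of` callback is a hypothetical stand-in for the full pipeline of deblurring with a kernel at a given angle and measuring the L1 norm of the sparse coefficients:

```python
def estimate_kernel_angle(candidate_angles, sparsity_of):
    """Return the candidate motion angle whose deblurred image has the
    sparsest representation (smallest L1 coefficient norm).
    `sparsity_of` maps an angle to that L1 norm; it stands in for
    deblurring at that angle and sparse-coding the recovered image."""
    return min(candidate_angles, key=sparsity_of)
```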


Latex Bibtex Citation:

@ARTICLE{7422106,  author={Q. {Lu} and W. {Zhou} and L. {Fang} and H. {Li}},  journal={IEEE Transactions on Image Processing},   title={Robust Blur Kernel Estimation for License Plate Images From Fast Moving Vehicles},   year={2016},  volume={25},  number={5},  pages={2311-2323},  doi={10.1109/TIP.2016.2535375}}

Computation and Memory Efficient Image Segmentation

Y. Zhou, T. Do, H. Zheng, N. Cheung and L. Fang

IEEE Trans. on Circuits and Systems for Video Technology (TCSVT), Vol. 28, No. 1, pp. 46-61, Jan. 2018 (online Aug. 2016).

Abstract:

In this paper, we address the segmentation problem under limited computation and memory resources. Given a segmentation algorithm, we propose a framework that can reduce its computation time and memory requirement simultaneously, while preserving its accuracy. The proposed framework uses standard pixel-domain downsampling and includes two main steps. Coarse segmentation is first performed on the downsampled image. Refinement is then applied to the coarse segmentation results. We make two novel contributions to enable competitive accuracy using this simple framework. First, we rigorously examine the effect of downsampling on segmentation using a signal processing analysis. The analysis helps to determine the uncertain regions, which are small image regions where pixel labels are uncertain after the coarse segmentation. Second, we propose an efficient minimum spanning tree-based algorithm to propagate the labels into the uncertain regions. We perform extensive experiments using several standard data sets. The experimental results show that our segmentation accuracy is comparable to state-of-the-art methods, while requiring much less computation time and memory than those methods.
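A minimal sketch of the first two stages of this framework, assuming nearest-neighbor upsampling of the coarse labels and a simple label-boundary test for the uncertain regions (the MST-based label propagation itself is omitted):

```python
def upsample_labels(coarse, factor):
    """Nearest-neighbor upsampling of a coarse label map (the coarse
    segmentation is computed on the downsampled image, then brought
    back to full resolution)."""
    return [[coarse[i // factor][j // factor]
             for j in range(len(coarse[0]) * factor)]
            for i in range(len(coarse) * factor)]

def uncertain_mask(labels):
    """Mark pixels whose 4-neighborhood contains a different label — a
    simple stand-in for the paper's signal-processing uncertainty analysis.
    These are the pixels the refinement step would relabel."""
    h, w = len(labels), len(labels[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != labels[i][j]:
                    mask[i][j] = True
    return mask
```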


Latex Bibtex Citation:

@ARTICLE{7543518,  author={Y. {Zhou} and T. {Do} and H. {Zheng} and N. {Cheung} and L. {Fang}},  journal={IEEE Transactions on Circuits and Systems for Video Technology},   title={Computation and Memory Efficient Image Segmentation},   year={2018},  volume={28},  number={1},  pages={46-61},  doi={10.1109/TCSVT.2016.2600261}}

Deep Learning for Surface Material Classification Using Haptic and Visual Information

H. Zheng, L. Fang, M. Ji, M. Strese, Y. Özer and E. Steinbach, 

IEEE Trans. on Multimedia (TMM), Vol. 18, No. 12, pp. 2407-2416, Dec. 2016 (online July 2016).

Abstract:

When a user scratches a hand-held rigid tool across an object surface, an acceleration signal can be captured that carries relevant information about the surface material properties. More importantly, such haptic acceleration signals can be used together with surface images to jointly recognize the surface material. In this paper, we present a novel deep learning method for the surface material classification problem, based on a fully convolutional network that takes the aforementioned acceleration signal and a corresponding image of the surface texture as inputs. Compared with existing surface material classification solutions, which rely on the careful design of hand-crafted features, our method automatically extracts discriminative features using advanced deep learning methodologies. Experiments performed on the TUM surface material database demonstrate that our method achieves state-of-the-art classification accuracy robustly and efficiently.


Latex Bibtex Citation:

@ARTICLE{7530831,  author={H. {Zheng} and L. {Fang} and M. {Ji} and M. {Strese} and Y. {Özer} and E. {Steinbach}},  journal={IEEE Transactions on Multimedia},   title={Deep Learning for Surface Material Classification Using Haptic and Visual Information},   year={2016},  volume={18},  number={12},  pages={2407-2416},  doi={10.1109/TMM.2016.2598140}}

2015 and before

Subpixel-based Image Scaling for Grid-like Subpixel Arrangements: A Generalized Continuous-domain Analysis Model

J. Pang, L. Fang, J. Zeng, Y. Guo and K. Tang, 

IEEE Trans. on Image Processing (TIP), Vol. 25, No. 3, pp. 1017-1032, Mar. 2016 (online Dec. 2015).

Abstract:

Subpixel-based image scaling can improve the apparent resolution of displayed images by controlling individual subpixels rather than whole pixels. However, improved luminance resolution brings chrominance distortion, making it crucial to suppress color error while maintaining sharpness. Moreover, it is challenging to develop a scheme that is applicable to various subpixel arrangements and to arbitrary scaling factors. In this paper, we address the aforementioned issues by proposing a generalized continuous-domain analysis model, which considers the low-pass nature of the human visual system (HVS). Specifically, given a discrete image and a grid-like subpixel arrangement, the signal perceived by the HVS is modeled as a 2D continuous image. Minimizing the difference between the perceived image and the continuous target image leads to the proposed scheme, which we call continuous-domain analysis for subpixel-based scaling (CASS). To eliminate the ringing artifacts caused by the ideal low-pass filtering in CASS, we propose an improved scheme, which we call CASS with Laplacian-of-Gaussian filtering. Experiments show that the proposed methods provide sharp images with negligible color fringing artifacts. Our methods are comparable with the state-of-the-art methods when applied to the RGB stripe arrangement, and outperform existing methods when applied to other subpixel arrangements.


Latex Bibtex Citation:

@ARTICLE{7365454,  author={J. {Pang} and L. {Fang} and J. {Zeng} and Y. {Guo} and K. {Tang}},  journal={IEEE Transactions on Image Processing},   title={Subpixel-Based Image Scaling for Grid-like Subpixel Arrangements: A Generalized Continuous-Domain Analysis Model},   year={2016},  volume={25},  number={3},  pages={1017-1032},  doi={10.1109/TIP.2015.2512381}}

Deblurring Saturated Night Image with Function-Form Kernel

H. Liu, X. Sun, F. Wu and L. Fang

IEEE Trans. on Image Processing (TIP), Vol. 24, No. 11, pp. 4637-4650, Nov. 2015 (online July 2015).

Abstract:

Deblurring saturated night images is a challenging problem because such images have low contrast combined with heavy noise and saturated regions. Unlike deblurring schemes that discard saturated regions when estimating blur kernels, this paper proposes a novel scheme to deduce blur kernels from saturated regions via a novel kernel representation and advanced algorithms. Our key technical contribution is the proposed function-form representation of blur kernels, which regularizes existing matrix-form kernels using three functional components: 1) trajectory; 2) intensity; and 3) expansion. From automatically detected saturated regions, their skeleton, brightness, and width are fitted into the corresponding three functional components of blur kernels. Such regularization significantly improves the quality of kernels deduced from saturated regions. Second, we propose an energy-minimizing algorithm to select and assign the deduced function-form kernels to partitioned image regions as the initialization for non-uniform deblurring. Finally, we convert the assigned function-form kernels into matrix form for more detailed estimation in a multi-scale deconvolution. Experimental results show that our scheme outperforms existing schemes on challenging real examples.


Latex Bibtex Citation:

@ARTICLE{7169560,  author={H. {Liu} and X. {Sun} and L. {Fang} and F. {Wu}},  journal={IEEE Transactions on Image Processing},   title={Deblurring Saturated Night Image With Function-Form Kernel},   year={2015},  volume={24},  number={11},  pages={4637-4650},  doi={10.1109/TIP.2015.2461445}}

An Analytical Model for Synthesis Distortion Estimation in 3D Video

L. Fang, N. Cheung, D. Tian, A. Vetro, H. Sun and O. C. Au, 

IEEE Trans. on Image Processing (TIP), Vol. 23, No. 1, pp.185 - 199, Jan. 2014.

Abstract:

We propose an analytical model to estimate the synthesized view quality in 3D video. The model relates errors in the depth images to the synthesis quality, taking into account texture image characteristics, texture image quality, and the rendering process. In particular, we decompose the synthesis distortion into texture-error-induced distortion and depth-error-induced distortion. We analyze the depth-error-induced distortion using an approach combining frequency- and spatial-domain techniques. Experimental results with video sequences and coding/rendering tools used in MPEG 3DV activities show that our analytical model can accurately estimate the synthesis noise power. Thus, the model can be used to estimate the rendering quality for different system designs.


Latex Bibtex Citation:

@ARTICLE{6648645,  author={L. {Fang} and N. {Cheung} and D. {Tian} and A. {Vetro} and H. {Sun} and O. C. {Au}},  journal={IEEE Transactions on Image Processing},   title={An Analytical Model for Synthesis Distortion Estimation in 3D Video},   year={2014},  volume={23},  number={1},  pages={185-199},  doi={10.1109/TIP.2013.2287608}}

Luma-chroma Space Filter Design for Subpixel-based Monochrome Image Downsampling

L. Fang, O. C. Au, N. Cheung, A. K. Katsaggelos, H. Li and F. Zou, 

IEEE Trans. on Image Processing (TIP), Vol. 22, No. 10, pp. 3818-3829, Oct. 2013.

Abstract:

In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.


Latex Bibtex Citation:

@ARTICLE{6514934,  author={L. {Fang} and O. C. {Au} and N. {Cheung} and A. K. {Katsaggelos} and H. {Li} and F. {Zou}},  journal={IEEE Transactions on Image Processing},   title={Luma-Chroma Space Filter Design for Subpixel-Based Monochrome Image Downsampling},   year={2013},  volume={22},  number={10},  pages={3818-3829},  doi={10.1109/TIP.2013.2262288}}

Subpixel Rendering: From Font Rendering to Image Subsampling

L. Fang, O. C. Au and N. Cheung, 

IEEE Signal Processing Magazine (SPM), Vol. 30, No. 3, pp.177 - 189, May 2013.

Abstract:

Subpixel rendering technologies take advantage of the subpixel structure of a display to increase the apparent resolution and to improve the display quality of text, graphics, or images. These techniques can potentially improve the apparent resolution because a single pixel on color liquid crystal display (LCD) or organic light-emitting diode (OLED) displays consists of several independently controllable colored subpixels. Applications of subpixel rendering include font rendering and image/video subsampling. By controlling the individual subpixel values of neighboring pixels, it is possible to microshift the apparent position of a line to render finer text detail. Similarly, since the number of individually selectable components is increased threefold by controlling subpixels rather than pixels, subpixel-based subsampling can potentially improve the apparent resolution of a down-scaled image. However, the increased apparent luminance resolution often comes at the price of color fringing artifacts. A major challenge is to suppress chrominance distortion while maintaining apparent luminance sharpness. This column introduces subpixel arrangements in color displays, how subpixel rendering works, and several practical subpixel rendering applications in font rendering and image subsampling.


Latex Bibtex Citation:

@ARTICLE{fang2013subpixelrendering,  author={Fang, Lu and Au, Oscar C. and Cheung, Ngai-Man},  journal={IEEE Signal Processing Magazine},   title={Subpixel Rendering: From Font Rendering to Image Subsampling},   year={2013},  volume={30},  number={3},  pages={177-189},  doi={10.1109/MSP.2013.2241311}}

Multichannel Non-Local Means Fusion for Color Image Denoising

J. Dai, O. C. Au, L. Fang, C. Pang and F. Zou, 

IEEE Trans. on Circuits and Systems for Video Technology (TCSVT), Vol. 23, No. 11, pp. 1873-1886, Nov. 2013.

Abstract:

In this paper, we propose an advanced color image denoising scheme called multichannel nonlocal means fusion (MNLF), where noise reduction is formulated as the minimization of a penalty function. An inherent feature of color images is the strong interchannel correlation, which is introduced into the penalty function as additional prior constraints to improve performance. The optimal solution of the minimization problem is derived, consisting of constructing and fusing multiple nonlocal means (NLM) estimates spanning all three channels. The weights in the fusion are optimized to minimize the overall mean squared denoising error, with the help of the extended and adapted Stein's unbiased risk estimator (SURE). Simulations on representative test images under various noise levels verify the improvement brought by the multichannel NLM compared with the traditional single-channel NLM. Meanwhile, MNLF provides competitive performance both in terms of the color peak signal-to-noise ratio and in perceptual quality when compared with other state-of-the-art benchmarks.
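The fusion step can be sketched as a convex combination of per-channel NLM estimates for each pixel. Here the weights are assumed to be given, whereas the paper derives them by minimizing a SURE-based estimate of the MSE:

```python
def fuse_estimates(estimates, weights):
    """Fuse several NLM estimates of one pixel (e.g. one per color channel)
    with precomputed weights summing to one. In MNLF the weights are chosen
    to minimize a SURE estimate of the denoising MSE; here they are given."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * e for w, e in zip(weights, estimates))
```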


Latex Bibtex Citation:

@ARTICLE{dai2013multichannel,  author={Dai, Jingjing and Au, Oscar C. and Fang, Lu and Pang, Chao and Zou, Feng and Li, Jiali},  journal={IEEE Transactions on Circuits and Systems for Video Technology},   title={Multichannel Nonlocal Means Fusion for Color Image Denoising},   year={2013},  volume={23},  number={11},  pages={1873-1886},  doi={10.1109/TCSVT.2013.2269020}}


Novel 2-D MMSE Subpixel-based Image Down-sampling

L. Fang, O. C. Au, K. Tang, H. Wang and X. Wen, 

IEEE Trans. on Circuits and Systems for Video Technology (TCSVT), Vol. 22, No. 5, pp. 740-753, May 2012.

Abstract:

Subpixel-based down-sampling is a method that can potentially improve the apparent resolution of a down-scaled image on an LCD by controlling individual subpixels rather than pixels. However, the increased luminance resolution comes at the price of chrominance distortion. A major challenge is to suppress color fringing artifacts while maintaining sharpness. We propose a new subpixel-based down-sampling pattern called diagonal direct subpixel-based down-sampling (DDSD), for which we design a 2-D image reconstruction model. We then formulate subpixel-based down-sampling as an MMSE problem and derive the optimal solution, called minimum mean square error for subpixel-based down-sampling (MMSE-SD). Unfortunately, a straightforward implementation of MMSE-SD is computationally intensive. We thus prove that the solution is equivalent to a 2-D linear filter followed by DDSD, which is much simpler. We further reduce the computational complexity by using a small k × k filter to approximate the much larger MMSE-SD filter. To compare the performance of pixel- and subpixel-based down-sampling methods, we propose two novel objective measures: normalized l1 high-frequency energy for apparent luminance sharpness, and PSNR U(V) for chrominance distortion. Simulation results show that both MMSE-SD and MMSE-SD(k) give sharper images than conventional down-sampling methods, with few color fringing artifacts.
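The equivalence result — the MMSE solution as a 2-D linear filter followed by subsampling — can be sketched as below. The kernel and sampling step are illustrative; the actual DDSD pattern samples subpixels diagonally rather than simply keeping every step-th pixel.

```python
def filter_then_downsample(img, kern, step):
    """'Same'-size 2-D linear filtering (zero padding at the borders)
    followed by keeping every `step`-th sample in each dimension — the
    structure of the filter-then-DDSD form of MMSE-SD, with a plain
    pixel subsampler standing in for the diagonal subpixel pattern."""
    h, w = len(img), len(img[0])
    kh, kw = len(kern), len(kern[0])
    oy, ox = kh // 2, kw // 2
    out = [[sum(kern[a][b] * img[i + a - oy][j + b - ox]
                for a in range(kh) for b in range(kw)
                if 0 <= i + a - oy < h and 0 <= j + b - ox < w)
            for j in range(w)] for i in range(h)]
    return [row[::step] for row in out[::step]]
```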


Latex Bibtex Citation:

@ARTICLE{6101564,  author={L. {Fang} and O. C. {Au} and K. {Tang} and X. {Wen} and H. {Wang}},  journal={IEEE Transactions on Circuits and Systems for Video Technology},   title={Novel 2-D MMSE Subpixel-Based Image Down-Sampling},   year={2012},  volume={22},  number={5},  pages={740-753},  doi={10.1109/TCSVT.2011.2179458}}

Novel RD-optimized VBSME with Matching Highly Data Re-usable Hardware Architecture

X. Wen, O. C. Au, J. Xu, L. Fang and R. Cha, 

IEEE Trans. on Circuits and Systems for Video Technology (TCSVT), Vol. 21, No. 2, pp. 206-219, Feb. 2011.

Abstract:

To achieve superior performance, rate-distortion-optimized motion estimation for variable block sizes (RDO-VBSME) is often used in state-of-the-art video coding systems such as the H.264 JM software. However, the complexity of RDO-VBSME is very high for both software and hardware implementations. In this paper, we propose a hardware-friendly ME algorithm called RDOMFS with a novel hardware-friendly rate-distortion (RD)-like cost function and a hardware-friendly modified motion vector predictor. Simulation results suggest that the proposed RDOMFS achieves essentially the same RD performance as RDO-VBSME in JM. We also propose a matching hardware architecture with a novel Smart Snake Scanning order, which achieves a very high data re-use ratio and data throughput. It is also reconfigurable, as it supports variable data re-use ratios and variable frame sizes. The design is implemented with TSMC 0.18 μm CMOS technology and costs 103 k gates. At a clock frequency of 63 MHz, the architecture achieves real-time 1920 × 1080 RDO-VBSME at 30 frames/s. At a maximum clock frequency of 250 MHz, it can process 4096 × 2160 at 30 frames/s.
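A hardware-friendly RD-like motion cost typically adds a cheap rate proxy to the distortion (SAD). The sketch below uses |dx| + |dy| of the motion-vector difference as that proxy; this is an assumption for illustration, not the paper's exact cost function.

```python
def rd_cost(sad, mv, pred_mv, lam):
    """RD-like motion estimation cost: distortion (SAD of the candidate
    block) plus lambda times an approximate rate for coding the motion
    vector difference. The |dx| + |dy| rate proxy is illustrative."""
    rate = abs(mv[0] - pred_mv[0]) + abs(mv[1] - pred_mv[1])
    return sad + lam * rate
```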


Latex Bibtex Citation:

@ARTICLE{5688307,  author={X. {Wen} and O. C. {Au} and J. {Xu} and L. {Fang} and R. {Cha} and J. {Li}},  journal={IEEE Transactions on Circuits and Systems for Video Technology},   title={Novel RD-Optimized VBSME With Matching Highly Data Re-Usable Hardware Architecture},   year={2011},  volume={21},  number={2},  pages={206-219},  doi={10.1109/TCSVT.2011.2106274}}

Joint Demosaicing and Subpixel-Based Down-Sampling for Bayer Images: A Fast Frequency-Domain Analysis Approach

L. Fang, O. C. Au, Y. Chen, A. K. Katsaggelos, H. Wang and X. Wen, 

IEEE Trans. on Multimedia (TMM), Vol. 14, No. 4, pp. 1359-1369, Aug. 2012.

Abstract:

A portable device such as a digital camera with a single sensor and Bayer color filter array (CFA) requires demosaicing to reconstruct a full color image. To display a high resolution image on a low resolution LCD screen of the portable device, it must be down-sampled. The two steps, demosaicing and down-sampling, influence each other. On one hand, the color artifacts introduced in demosaicing may be magnified when followed by down-sampling; on the other hand, the detail removed in the down-sampling cannot be recovered in the demosaicing. Therefore, it is very important to consider simultaneous demosaicing and down-sampling.


Latex Bibtex Citation:

@ARTICLE{6171859,  author={L. {Fang} and O. C. {Au} and Y. {Chen} and A. K. {Katsaggelos} and H. {Wang} and X. {Wen}},  journal={IEEE Transactions on Multimedia},   title={Joint Demosaicing and Subpixel-Based Down-Sampling for Bayer Images: A Fast Frequency-Domain Analysis Approach},   year={2012},  volume={14},  number={4},  pages={1359-1369},  doi={10.1109/TMM.2012.2191269}}

Increasing Image Resolution on Portable Displays by Subpixel Rendering - a Systematic Overview

L. Fang, O. C. Au, X. Wen and K. Tang,

APSIPA Trans. on Signal and Information Processing (TSIP), Vol. 1, No. 1, pp. 1-10, Jan. 2012. (Open-access; most-viewed paper in 2012)

Abstract:

Many portable devices, such as smart phones, portable multimedia players (PMPs), and digital single-lens reflex (DSLR) cameras, are capable of capturing high-resolution images (e.g. 10 megapixels in a DSLR) or even video. The limited battery power supply in portable devices often prevents these systems from using high-power, large liquid crystal displays (LCDs). Instead, portable devices often have an LCD screen with a small physical size (e.g. 3 cm × 2 cm for a smartphone or DSLR) and with a much lower pixel resolution (e.g. 0.15 megapixels for a 480 × 320 display) than the actual image/video resolution. Thus, high-resolution images and video are down-sampled before being displayed. Unfortunately, the anti-aliasing filter often leads to rather severe blurring. Although the blurring may be minor when the viewing distance is large, it can be rather disturbing in portable applications due to the short viewing distance. To cope with the blurring problem, one possible solution is to use an LCD screen with a higher resolution, but such a hardware solution tends to be expensive and is often not welcomed by consumer electronics companies. Another possible solution is to continue using the low-resolution LCD screen, but to enhance the apparent image/video resolution with a software technique. In this paper, we discuss a novel way to improve the apparent resolution of down-sampled images/video using a technique called subpixel rendering, which controls each subpixel, smaller than a pixel, in a high-precision manner.


Latex Bibtex Citation:

@article{fang_au_tang_wen_2012, title={Increasing image resolution on portable displays by subpixel rendering – a systematic overview}, volume={1}, DOI={10.1017/ATSIP.2012.3}, journal={APSIPA Transactions on Signal and Information Processing}, publisher={Cambridge University Press}, author={Fang, Lu and Au, Oscar C. and Tang, Ketan and Wen, Xing}, year={2012}, pages={e1}}

Separable Kernel for Image Deblurring

L. Fang, H. Liu, F. Wu, X. Sun and H. Li, 

Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.

Abstract:

In this paper, we approach the image deblurring problem from a completely new perspective by proposing a separable kernel to represent the inherent properties of the camera and scene system. Specifically, we decompose a blur kernel into three individual descriptors (trajectory, intensity, and point spread function) so that they can be optimized separately. To demonstrate the advantages, we extract one-pixel-width trajectories of blur kernels and propose a random perturbation algorithm that optimizes them while preserving their continuity. In many cases where current deblurring approaches fall into local minima, excellent deblurred results and correct blur kernels can be obtained by individually optimizing the kernel trajectories. Our work strongly suggests that more constraints and priors should be introduced on blur kernels when solving the deblurring problem, because blur kernels have lower dimensions than images.
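One proposal step of a random perturbation search over kernel trajectories might look like the sketch below. The endpoint-fixing rule is an assumed simplification of the paper's continuity constraint, not its exact algorithm.

```python
import random

def perturb_trajectory(traj, scale, rng=None):
    """One proposal step over a one-pixel-width kernel trajectory (a
    polyline of (x, y) points): jitter one interior point, keeping the
    endpoints fixed so the curve stays continuous with its neighbors.
    A sketch; the paper's perturbation and acceptance rules differ."""
    rng = rng or random.Random()
    if len(traj) <= 2:
        return list(traj)
    k = rng.randrange(1, len(traj) - 1)   # pick an interior point
    x, y = traj[k]
    new = list(traj)
    new[k] = (x + rng.uniform(-scale, scale), y + rng.uniform(-scale, scale))
    return new
```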


Latex Bibtex Citation:

@INPROCEEDINGS{6909765,  author={L. {Fang} and H. {Liu} and F. {Wu} and X. {Sun} and H. {Li}},  booktitle={2014 IEEE Conference on Computer Vision and Pattern Recognition},   title={Separable Kernel for Image Deblurring},   year={2014},  volume={},  number={},  pages={2885-2892},  doi={10.1109/CVPR.2014.369}}