2024 New
Exploiting Optical Flow Guidance for Transformer-Based Video Inpainting
Kaidong Zhang, Jialun Peng, Jingjing Fu, Dong Liu
IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2024
Paper | Code | Abstract
Transformers have been widely used for video processing owing to the multi-head self-attention (MHSA) mechanism. However, the MHSA mechanism encounters an intrinsic difficulty for video inpainting, since the features associated with the corrupted regions are degraded and incur inaccurate self-attention. This problem, termed query degradation, may be mitigated by first completing optical flows and then using the flows to guide the self-attention, which was verified in our previous work, the flow-guided transformer (FGT). We further exploit the flow guidance and propose FGT++ to pursue more effective and efficient video inpainting. First, we design a lightweight flow completion network that uses local aggregation and an edge loss. Second, to address the query degradation, we propose a flow guidance feature integration module, which uses the motion discrepancy to enhance the features, together with a flow-guided feature propagation module that warps the features according to the flows. Third, we decouple the transformer along the temporal and spatial dimensions, where flows are used to select the tokens through a temporally deformable MHSA mechanism, and global tokens are combined with the inner-window local tokens through a dual-perspective MHSA mechanism. Experiments show that FGT++ outperforms existing video inpainting networks both qualitatively and quantitatively.
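As a rough illustration of the flow-guided feature propagation idea above, here is a minimal PyTorch sketch (not the released FGT++ code) of warping a neighboring frame's features toward the current frame according to a completed optical flow; the tensor shapes and the pixel-unit flow convention are assumptions.

```python
import torch
import torch.nn.functional as F


def flow_warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `feat` (B, C, H, W) with `flow` (B, 2, H, W) given in pixels.
    Illustrative sketch only, not the official FGT++ implementation."""
    _, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    coords = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # displaced positions
    # Normalize to [-1, 1] as required by grid_sample.
    norm_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    norm_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((norm_x, norm_y), dim=-1)               # (B, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


# Example: propagate a reference frame's features into the current frame.
feat_ref = torch.randn(1, 64, 60, 108)
flow_cur_to_ref = torch.randn(1, 2, 60, 108)
feat_warped = flow_warp(feat_ref, flow_cur_to_ref)
```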
Towards Interactive Image Inpainting via Robust Sketch Refinement
Chang Liu, Shunxin Xu, Jialun Peng, Kaidong Zhang, Dong Liu
IEEE Transactions on Multimedia (T-MM), 2024
Paper | Code | Abstract
One tough problem of image inpainting is restoring complex structures in the corrupted regions. This motivates interactive image inpainting, which leverages additional hints, e.g., sketches, to assist the inpainting process. A sketch is simple and intuitive for end users to provide, but has free forms with much randomness. Such randomness may confuse the inpainting models and incur severe artifacts in completed images. To better facilitate image inpainting with sketch guidance, we propose a two-stage image inpainting system, termed SketchRefiner. The first stage of our approach serves as a data provider that simulates real sketches and derives the capability of sketch calibration from the simulated data. In the second stage, our approach aligns the sketch guidance with the inpainting process so as to elevate image inpainting with sketches. We also propose a real-world test protocol to address the evaluation of inpainting methods in practical applications with user sketches. Experimental results on three prevailing benchmark datasets, i.e., CelebA-HQ, Places2, and ImageNet, and the proposed test protocol demonstrate the state-of-the-art performance of our approach and its great potential in real-world applications. Further analyses illustrate that our approach effectively utilizes sketch information as guidance and eliminates the artifacts due to the free-form sketches.
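For illustration only, the following is a minimal PyTorch sketch of the two-stage interface described above, with hypothetical module names and layer choices (not the released SketchRefiner code): a refinement network first calibrates the free-form user sketch, and an inpainting network then completes the image under the refined sketch guidance.

```python
import torch
import torch.nn as nn


class SketchRefinementNet(nn.Module):
    """Stage 1 (hypothetical layers): calibrate a noisy user sketch."""
    def __init__(self, ch: int = 32):
        super().__init__()
        # Input channels: masked image (3) + mask (1) + raw sketch (1).
        self.net = nn.Sequential(
            nn.Conv2d(5, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_image, mask, sketch):
        return self.net(torch.cat([masked_image, mask, sketch], dim=1))


class SketchGuidedInpaintingNet(nn.Module):
    """Stage 2 (hypothetical layers): inpaint under refined-sketch guidance."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masked_image, mask, refined_sketch):
        return self.net(torch.cat([masked_image, mask, refined_sketch], dim=1))


image = torch.rand(1, 3, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()    # 1 = corrupted region
sketch = (torch.rand(1, 1, 256, 256) > 0.9).float()  # free-form user strokes
refined = SketchRefinementNet()(image * (1 - mask), mask, sketch)
completed = SketchGuidedInpaintingNet()(image * (1 - mask), mask, refined)
```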
Disparity-guided Multi-view Interaction Network for Light Field Reflection Removal
Yutong Liu, Wenming Weng, Ruisheng Gao, Zeyu Xiao, Yueyi Zhang, Zhiwei Xiong
IEEE Transactions on Computational Imaging (T-CI), 2024
Paper | Code | Abstract
Light field (LF) imaging presents a promising avenue for reflection removal, owing to its capability of reliable depth perception and its utilization of complementary texture details from multiple sub-aperture images (SAIs). However, the domain shifts between real-world and synthetic scenes, as well as the challenge of embedding transmission information across SAIs, pose the main obstacles in this task. In this paper, we address the above challenges from the perspectives of data and network, respectively. To mitigate domain shifts, we propose an efficient data synthesis strategy for simulating realistic reflection scenes and build the largest LF reflection dataset to date, containing 420 synthetic scenes and 70 real-world scenes. To enable transmission information embedding across SAIs, we propose a novel Disparity-guided Multi-view Interaction Network (DMINet) for LF reflection removal. DMINet mainly consists of a transmission disparity estimation (TDE) module and a center-side interaction (CSI) module. The TDE module predicts the transmission disparity by filtering out reflection disturbances, while the CSI module is responsible for transmission integration, adopting the central view as the bridge for propagation between different SAIs. Compared with existing reflection removal methods for LF input, DMINet achieves a distinct performance boost with the merits of efficiency and robustness, especially for scenes with complex depth variations.
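A minimal sketch, under assumed shapes and an assumed angular-offset convention (not the released DMINet code), of the disparity-guided warping that a center-side interaction relies on: a side SAI's features are aligned to the central view by shifting sampling positions in proportion to the transmission disparity and the angular offset.

```python
import torch
import torch.nn.functional as F


def warp_side_to_center(side_feat, disparity, du, dv):
    """Align a side SAI's features to the central view (illustrative only).
    side_feat: (B, C, H, W); disparity: (B, 1, H, W) transmission disparity;
    (du, dv): assumed angular offset of the side view from the center."""
    _, _, h, w = side_feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=side_feat.dtype, device=side_feat.device),
        torch.arange(w, dtype=side_feat.dtype, device=side_feat.device),
        indexing="ij",
    )
    # Shift sampling positions in proportion to disparity and angular offset.
    x_src = xs.unsqueeze(0) + du * disparity[:, 0]
    y_src = ys.unsqueeze(0) + dv * disparity[:, 0]
    grid = torch.stack(
        (2.0 * x_src / max(w - 1, 1) - 1.0, 2.0 * y_src / max(h - 1, 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(side_feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


# The aligned side features can then be fused at the central view, which acts
# as the bridge before information is propagated back to the other SAIs.
side = torch.randn(2, 32, 64, 64)
disp = torch.randn(2, 1, 64, 64)
aligned = warp_side_to_center(side, disp, du=1.0, dv=-1.0)
```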
EdiTor: Edge-guided Transformer for Ghost-free High Dynamic Range Imaging
Yuanshen Guan, Ruikang Xu, Mingde Yao, Jie Huang, Zhiwei Xiong
ACM Transactions on Multimedia Computing, Communications and Applications (TOMM), 2024
Paper | Abstract
Synthesizing a high dynamic range (HDR) image from multi-exposure images has recently been studied extensively with convolutional neural networks (CNNs). Despite the remarkable progress, existing CNN-based methods have the intrinsic limitation of a local receptive field, which hinders the model’s capability of capturing long-range correspondence and large motions across under- and over-exposed images, resulting in ghosting artifacts in dynamic scenes. To address this challenge, we propose a novel Edge-guided Transformer framework (EdiTor) customized for ghost-free HDR reconstruction, where the long-range motions across different exposures are delicately modeled by incorporating the edge prior. Specifically, EdiTor calculates patch-wise correlation maps on both the image and edge domains, enabling the network to effectively model the global movements and the fine-grained shifts across multiple exposures. Based on this framework, we further propose an exposure-masked loss to adaptively compensate for the severely distorted regions (e.g., highlights and shadows). Experiments demonstrate that EdiTor outperforms state-of-the-art methods both quantitatively and qualitatively, achieving appealing HDR visualization with unified textures and colors.
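A minimal sketch of an exposure-masked reconstruction loss in the spirit of the description above (the thresholds, weighting scheme, and function name are illustrative assumptions, not the paper's exact formulation): pixels that are badly clipped in the reference LDR frame receive a larger weight, so the network is pushed to compensate for those distorted regions.

```python
import torch


def exposure_masked_loss(pred, gt, ref_ldr, low=0.05, high=0.95, boost=2.0):
    """pred, gt: (B, 3, H, W) HDR estimate and target (e.g., tonemapped);
    ref_ldr: (B, 3, H, W) reference LDR frame in [0, 1].
    Thresholds and the weighting scheme are illustrative assumptions."""
    luminance = ref_ldr.mean(dim=1, keepdim=True)
    # 1 where the reference exposure is badly clipped (shadows or highlights).
    distorted = ((luminance < low) | (luminance > high)).float()
    weight = 1.0 + (boost - 1.0) * distorted
    return (weight * (pred - gt).abs()).mean()


pred = torch.rand(1, 3, 128, 128, requires_grad=True)
gt = torch.rand(1, 3, 128, 128)
ref = torch.rand(1, 3, 128, 128)
loss = exposure_masked_loss(pred, gt, ref)
loss.backward()
```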
Decomposition Betters Tracking Everything Everywhere
Rui Li, Dong Liu
European Conference on Computer Vision (ECCV), 2024
Coming soon!
High-Resolution and Few-shot View Synthesis from Asymmetric Dual-lens Inputs
Ruikang Xu, Mingde Yao, Yue Li, Yueyi Zhang, Zhiwei Xiong
European Conference on Computer Vision (ECCV), 2024
Coming soon!
Mask-Based Modeling for Neural Radiance Fields
Ganlin Yang, Guoqiang Wei, Zhizheng Zhang, Yan Lu, Dong Liu
International Conference on Learning Representations (ICLR), 2024
Paper | Code | Abstract
Most Neural Radiance Fields (NeRFs) exhibit limited generalization capabilities, which restrict their applicability in representing multiple scenes with a single model. To address this problem, existing generalizable NeRF methods simply condition the model on image features. These methods still struggle to learn precise global representations over diverse scenes, since they lack an effective mechanism for interaction among different points and views. In this work, we unveil that 3D implicit representation learning can be significantly improved by mask-based modeling. Specifically, we propose masked ray and view modeling for generalizable NeRF (MRVM-NeRF), a self-supervised pretraining target that predicts complete scene representations from partially masked features along each ray. With this pretraining target, MRVM-NeRF makes better use of correlations across different rays and views as geometry priors, which strengthens its capability to capture intricate details within scenes and boosts its generalization across different scenes. Extensive experiments demonstrate the effectiveness of our proposed MRVM-NeRF on both synthetic and real-world datasets, qualitatively and quantitatively. Besides, we also conduct experiments to show the compatibility of our proposed method with various backbones and its superiority in few-shot cases.
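A minimal sketch of a masked-ray pretraining objective in the spirit of MRVM-NeRF (the predictor structure, mask ratio, and loss are illustrative assumptions): per-sample features along each ray are randomly replaced by a learned mask token, and a lightweight predictor reconstructs the complete target features, with the loss computed on the masked positions only.

```python
import torch
import torch.nn as nn


class MaskedRayModeling(nn.Module):
    """Illustrative masked-ray pretraining head, not the official MRVM-NeRF code."""
    def __init__(self, dim: int = 64, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(dim))
        # Lightweight predictor over the samples of one ray (assumed design).
        self.predictor = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, ray_feats: torch.Tensor, target_feats: torch.Tensor):
        """ray_feats, target_feats: (num_rays, samples_per_ray, dim)."""
        mask = torch.rand(ray_feats.shape[:2], device=ray_feats.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(ray_feats), ray_feats)
        pred = self.predictor(masked)
        # Reconstruction loss only on the masked samples.
        return ((pred - target_feats) ** 2)[mask].mean()


mrm = MaskedRayModeling()
loss = mrm(torch.randn(8, 32, 64), torch.randn(8, 32, 64))
loss.backward()
```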
TSA2: Temporal Segment Adaptation and Aggregation for Video Harmonization
Zeyu Xiao, Yurui Zhu, Xueyang Fu, Zhiwei Xiong
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Paper | Code | Abstract
Video composition merges the foreground and background of different videos, presenting challenges due to variations in capture conditions (e.g., saturation, brightness, and contrast). Video harmonization is a vital process in achieving a realistic composite by seamlessly adjusting the foreground's appearance to match the background. In this paper, we propose TSA2, a novel method for video harmonization that incorporates temporal segment adaptation and aggregation. TSA2 divides the inharmonious input sequence into temporal segments, each corresponding to a different frame rate, allowing effective utilization of complementary information within each segment. The method includes the Temporal Segment Adaptation module, which learns and remaps the distribution difference between background and foreground regions, and the Temporal Segment Aggregation module, which emphasizes and aggregates cross-segment information through element-wise correlations. Experimental results demonstrate that TSA2 outperforms advanced image and video harmonization methods quantitatively and qualitatively.
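A minimal sketch of the temporal-segment partition described above (strides and shapes are illustrative assumptions, not the official TSA2 code): the input sequence is subsampled at several strides, so each segment covers the same clip at a different frame rate.

```python
import torch


def temporal_segments(frames: torch.Tensor, strides=(1, 2, 4)):
    """frames: (B, T, C, H, W). Returns one temporally subsampled segment per
    stride; a larger stride yields a lower frame rate over the same clip."""
    return [frames[:, ::s] for s in strides]


video = torch.randn(1, 16, 3, 128, 128)
segments = temporal_segments(video)
for stride, seg in zip((1, 2, 4), segments):
    print(stride, seg.shape)  # T = 16, 8, 4 frames respectively
```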