2024 New
Exploiting Dual-Correlation for Multi-frame Time-of-Flight Denoising
Guanting Dong, Yueyi Zhang, Xiaoyan Sun, Zhiwei Xiong
European Conference on Computer Vision (ECCV), 2024
Coming soon!
Event-Adapted Video Super-Resolution
Zeyu Xiao, Dachun Kai, Yueyi Zhang, Zheng-Jun Zha, Xiaoyan Sun, Zhiwei Xiong
European Conference on Computer Vision (ECCV), 2024
Coming soon!
Joint Flow Estimation from Point Clouds and Event Streams
Hanlin Li, Yueyi Zhang, Guanting Dong, Shida Sun, Zhiwei Xiong
IEEE International Conference on Multimedia and Expo (ICME), 2024
Coming soon!
Depth From Asymmetric Frame-Event Stereo: A Divide-and-Conquer Approach
Xihao Chen, Wenming Weng, Yueyi Zhang, Zhiwei Xiong
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Paper | Code | Abstract
Event cameras asynchronously measure brightness changes in a scene without motion blur or saturation, while frame cameras capture images with dense intensity and fine details at a fixed rate. The complementary advantages of the two modalities make depth estimation from Stereo Asymmetric Frame-Event (SAFE) systems appealing. However, due to the inevitable absence of information from one modality in certain challenging regions, existing stereo matching methods lose efficacy for asymmetric inputs from SAFE systems. In this paper, we propose a divide-and-conquer approach that decomposes depth estimation from SAFE systems into three sub-tasks, i.e., frame-event stereo matching, frame-based Structure-from-Motion (SfM), and event-based SfM. In this way, the above challenging regions are addressed by monocular SfM, which estimates robust depth with two views belonging to the same functioning modality. Moreover, we propose a dual sampling strategy to construct cost volumes with identical spatial locations and depth hypotheses for different sub-tasks, which enables sub-task fusion at the cost volume level. To tackle the occlusion issue raised by the sampling strategy, we further introduce a temporal fusion scheme to utilize long-term sequential inputs with multi-view information. Experimental results validate the superior performance of our method over existing solutions.
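To illustrate the cost-volume-level fusion described in the abstract, below is a minimal sketch: the three sub-tasks (frame-event stereo matching, frame-based SfM, event-based SfM) are assumed to produce cost volumes over identical spatial locations and a shared set of depth hypotheses, which are then combined with per-pixel weights and regressed to depth via soft-argmin. The function name, the weighted-sum fusion, and the depth range are illustrative assumptions, not the paper's actual module.
```python
import torch


def fuse_cost_volumes(cv_stereo, cv_frame_sfm, cv_event_sfm, fusion_weights,
                      min_depth=0.5, max_depth=10.0):
    """Fuse sub-task cost volumes sharing spatial locations and depth hypotheses.

    cv_*: (B, D, H, W) cost volumes, D = number of shared depth hypotheses.
    fusion_weights: (B, 3, H, W) per-pixel confidences for the three sub-tasks
    (hypothetical weighting; the paper's fusion scheme may differ).
    Returns: (B, H, W) depth map regressed by soft-argmin.
    """
    w = torch.softmax(fusion_weights, dim=1)            # normalize weights per pixel
    fused = (w[:, 0:1] * cv_stereo
             + w[:, 1:2] * cv_frame_sfm
             + w[:, 2:3] * cv_event_sfm)                # (B, D, H, W)
    prob = torch.softmax(-fused, dim=1)                 # lower cost -> higher probability
    depth_hyps = torch.linspace(min_depth, max_depth, fused.shape[1],
                                device=fused.device).view(1, -1, 1, 1)
    return (prob * depth_hyps).sum(dim=1)               # expectation over hypotheses


if __name__ == "__main__":
    B, D, H, W = 1, 48, 64, 80
    cvs = [torch.rand(B, D, H, W) for _ in range(3)]
    weights = torch.rand(B, 3, H, W)
    print(fuse_cost_volumes(*cvs, weights).shape)       # torch.Size([1, 64, 80])
```
Because all three cost volumes are indexed by the same depth hypotheses, the fusion reduces to a per-pixel, per-hypothesis weighted sum, which is what makes combining the sub-tasks at the cost volume level straightforward.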