Event-based Depth Completion Datasets

NUS, UCL, NJUST

Overview

Tab. 1 provides an overview of the sensors used in the datasets, together with their respective specifications.

EventDC-Real is a real-world dataset in which color images and event frames are captured by a FLIR BFS-U3-31S4C camera and a DAVIS346 sensor, respectively. The ground-truth (GT) depth is acquired from a 128-line Ouster LiDAR, and the sparse depth is derived from 16 of its sub-lines.

EventDC-SemiSyn is a semi-synthetic dataset built on KITTI; its sparse depth and GT depth come from the raw KITTI data. For the color images, we apply radial motion blur by progressively scaling and transforming each image around its center, simulating motion blur with adjustable strength and step count. In addition, VID2E is used to generate the event data from frames captured within 15 ms before and after the current timestamp.

EventDC-FullSyn is a fully synthetic dataset generated with the CARLA simulator; its color images are processed with the same radial motion blur.

Finally, to facilitate model training, all datasets are cropped so that their resolutions are multiples of 32.
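The radial motion blur described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (function name, nearest-neighbor resampling, and a linear scale schedule are ours), not the exact implementation used to build the datasets:

```python
import numpy as np

def radial_motion_blur(img, strength=0.05, steps=10):
    """Zoom-style radial blur: average `steps` copies of `img`, each
    scaled around the image center by a progressively larger factor.
    `strength` is the maximum extra zoom (e.g. 0.05 = up to 5%)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w))
    acc = np.zeros(img.shape, dtype=np.float64)
    for i in range(steps):
        s = 1.0 + strength * i / max(steps - 1, 1)
        # Inverse mapping: each output pixel samples a source pixel
        # pulled toward the center by the current scale factor.
        src_y = np.clip(((ys - cy) / s + cy).round().astype(int), 0, h - 1)
        src_x = np.clip(((xs - cx) / s + cx).round().astype(int), 0, w - 1)
        acc += img[src_y, src_x]
    return (acc / steps).astype(img.dtype)
```

Averaging progressively zoomed copies smears intensities along rays through the image center, which is why the effect reads as radial (zoom) motion blur; `strength` and `steps` control its extent and smoothness.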

Download


Baidu Disk (a286)
Hugging Face

If you use these datasets, please cite the related works, for example:

Citation

@article{yan2025event,
  title={Event-Driven Dynamic Scene Depth Completion},
  author={Yan, Zhiqiang and Jiao, Jianhao and Wang, Zhengxue and Lee, Gim Hee},
  journal={arXiv preprint arXiv:2505.13279},
  year={2025}
}

Contact

For any questions, please contact yanzq@nus.edu.sg.

© Zhiqiang Yan | Last updated: Nov. 2025