Object detection and self-localization are fundamental tasks in autonomous driving as well as mobile robotics. In recent years, thanks to the rapid development of deep learning, vision-based perception and localization have improved greatly in both accuracy and reliability. Deep learning based approaches also generalize well, given deep neural network architectures and large amounts of training data. Facing the complex environments of practical applications, a growing body of work focuses on edge cases, including adverse weather conditions. Raindrops falling on vehicle windows (in the case of a built-in camera) or on camera lenses (in the case of an external camera) on a rainy day are one such case. They typically cause vision sensors to produce blurry images, which in turn interfere with high-level environmental perception tasks.
We propose a method that effectively removes raindrops from images captured by a handheld light field camera using image inpainting. The depth map generated from the light field image is used to detect raindrop regions, which are then expressed as a binary mask. In parallel, the original image is enhanced by refocusing on the far regions. Image inpainting is finally applied to eliminate raindrops using the binary mask and the enhanced image. Image quality analysis, object detection, and vision-based self-localization experiments demonstrate the improvement gained by raindrop removal with light field images.
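The pipeline above can be sketched in simplified form: raindrops sit on the lens, so they appear at implausibly small depths and can be thresholded into a binary mask, after which masked pixels are filled from their surroundings. The sketch below is a minimal, hypothetical illustration (the `near_thresh` value, the toy data, and the naive diffusion fill are assumptions, not the paper's actual detection or inpainting algorithms).

```python
import numpy as np

def raindrop_mask(depth, near_thresh=0.2):
    """Mark pixels whose estimated depth is implausibly close to the
    camera; raindrops on the lens lie far nearer than the scene.
    The threshold value here is a hypothetical placeholder."""
    return (depth < near_thresh).astype(np.uint8)

def inpaint_diffusion(image, mask, iters=50):
    """Naive diffusion inpainting stand-in: initialize masked pixels
    with the mean of the unmasked region, then repeatedly replace
    them with the average of their 4-neighbours."""
    out = image.astype(float).copy()
    m = mask.astype(bool)
    out[m] = out[~m].mean()  # rough initial fill
    for _ in range(iters):
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        out[m] = (up + down + left + right)[m] / 4.0
    return out

# Toy example: a flat scene at depth 1.0 with a 2x2 "raindrop" blob
# at depth 0.05 that shows up as a blurry bright spot in the image.
depth = np.ones((8, 8)); depth[3:5, 3:5] = 0.05
image = np.full((8, 8), 100.0); image[3:5, 3:5] = 255.0
mask = raindrop_mask(depth)
restored = inpaint_diffusion(image, mask)
```

In the toy case the masked bright spot is replaced by values consistent with the surrounding scene intensity; a real implementation would use the light-field depth map and a proper inpainting method, as described above.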
This work has been published in the open-access journal IEEE Access.