Surround-view fisheye camera perception for automated driving: Overview, survey & challenges
Surround-view fisheye cameras are commonly used for near-field sensing in automated
driving. Four fisheye cameras on four sides of the vehicle are sufficient to cover 360° around …
Deep reinforcement learning for autonomous driving: A survey
With the development of deep representation learning, the domain of reinforcement learning
(RL) has become a powerful learning framework now capable of learning complex policies …
OmniDet: Surround view cameras based multi-task visual perception network for autonomous driving
Surround-view fisheye cameras are commonly deployed in automated driving for 360° near-
field sensing around the vehicle. This work presents a multi-task visual perception network …
SynDistNet: Self-supervised monocular fisheye camera distance estimation synergized with semantic segmentation for autonomous driving
State-of-the-art self-supervised learning approaches for monocular depth estimation usually
suffer from scale ambiguity. They do not generalize well when applied on distance …
Near-field perception for low-speed vehicle automation using surround-view fisheye cameras
Cameras are the primary sensor in automated driving systems. They provide high
information density and are optimal for detecting road infrastructure cues laid out for human …
UnRectDepthNet: Self-supervised monocular depth estimation using a generic framework for handling common camera distortion models
VR Kumar, S Yogamani, M Bach, C Witt… - 2020 IEEE/RSJ …, 2020 - ieeexplore.ieee.org
In classical computer vision, rectification is an integral part of multi-view depth estimation. It
typically includes epipolar rectification and lens distortion correction. This process simplifies …
Dynamic task weighting methods for multi-task networks in autonomous driving systems
Deep multi-task networks are of particular interest for autonomous driving systems. They can
potentially strike an excellent trade-off between predictive performance, hardware …
Monocular instance motion segmentation for autonomous driving: KITTI InstanceMotSeg dataset and multi-task baseline
Moving object segmentation is a crucial task for autonomous vehicles as it can be used to
segment objects in a class agnostic manner based on their motion cues. It enables the …
Adversarial attacks on multi-task visual perception for autonomous driving
Deep neural networks (DNNs) have achieved impressive success in various
applications, including autonomous driving perception tasks, in recent years. On the other …
Surround-view fisheye BEV-perception for valet parking: Dataset, baseline and distortion-insensitive multi-task framework
Surround-view fisheye perception in valet parking scenes is fundamental and crucial to
autonomous driving. Environmental conditions in parking lots differ from the …