Camera Radar Fusion
My previous work has focused on computer vision with multiple cameras to obtain geometric information such as pose. More recent trends, particularly in the self-driving car industry, have focused on incorporating data from multiple complementary sensors. This is called sensor fusion, and while it is more complex, it has many advantages over relying on a single kind of sensor. Cameras and radars complement each other's information quite well, but research on fusing the two has only recently started to gain interest. Using graph neural networks, we hope to apply data-driven methods that fuse these two sensors more robustly and in a wider range of settings.
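As a rough illustration of what graph-based camera-radar fusion could look like (not a description of our actual approach), the sketch below treats camera detections and radar returns as nodes of one graph, connects nearby cross-modal pairs, and mixes their features with a single round of message passing. The feature sizes, the 2 m association radius, and the one-layer network are placeholder assumptions for illustration only.

```python
# Minimal sketch of graph-based camera-radar fusion (illustrative assumptions only).
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)     # message from (receiver, sender) features
        self.update = nn.Linear(2 * dim, dim)  # node update from (own, aggregated) features

    def forward(self, feats, edges):
        # feats: (N, dim) node features; edges: (E, 2) index pairs (receiver, sender)
        recv, send = edges[:, 0], edges[:, 1]
        messages = torch.relu(self.msg(torch.cat([feats[recv], feats[send]], dim=-1)))
        agg = torch.zeros_like(feats).index_add_(0, recv, messages)  # sum messages per receiver
        return torch.relu(self.update(torch.cat([feats, agg], dim=-1)))

# Toy inputs: 3 camera detections and 4 radar returns, each with a 2-D ground-plane
# position and an 8-D appearance/Doppler feature (all values made up here).
cam_pos, cam_feat = torch.rand(3, 2) * 10, torch.rand(3, 8)
rad_pos, rad_feat = torch.rand(4, 2) * 10, torch.rand(4, 8)
feats = torch.cat([cam_feat, rad_feat])

# Connect every camera-radar pair closer than an assumed 2 m association radius.
dists = torch.cdist(cam_pos, rad_pos)
cam_idx, rad_idx = torch.nonzero(dists < 2.0, as_tuple=True)
rad_idx = rad_idx + cam_pos.shape[0]  # offset radar indices into the joint node list
edges = torch.cat([torch.stack([cam_idx, rad_idx], dim=1),
                   torch.stack([rad_idx, cam_idx], dim=1)])  # bidirectional edges

fused = FusionLayer(dim=8)(feats, edges)  # (7, 8) fused node features
print(fused.shape)
```

The appeal of the graph formulation is that the association between modalities is learned from data rather than hard-coded, which is exactly where hand-tuned fusion pipelines tend to break down across different settings.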