
Multi-Camera Visual-Inertial-GPS Fusion
In collaboration with the Toyota Research Institute (TRI)
In this work, we've developed a nonlinear-optimization-based system that tightly integrates GPS data,
inertial measurements, and visual input from a multi-camera setup.
Building on our Generic Visual SLAM framework for multi-camera setups, we use IMU preintegration
to summarize hundreds of inertial measurements into a single relative motion constraint.
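As an illustration, below is a minimal sketch of this preintegration step using GTSAM's Python bindings; the project text does not name an optimization library, so GTSAM, the noise values, and the synthetic IMU samples here are assumptions for illustration only.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X  # bias, velocity, and pose keys

# Preintegration parameters: gravity along -Z (Z-up convention) plus
# assumed accelerometer/gyroscope noise covariances.
params = gtsam.PreintegrationParams.MakeSharedU(9.81)
params.setAccelerometerCovariance(1e-3 * np.eye(3))
params.setGyroscopeCovariance(1e-4 * np.eye(3))
params.setIntegrationCovariance(1e-8 * np.eye(3))

pim = gtsam.PreintegratedImuMeasurements(params, gtsam.imuBias.ConstantBias())

# Synthetic stand-in: 200 samples at 200 Hz from a stationary IMU
# (accelerometer reads the gravity reaction, gyroscope reads zero).
imu_samples = [(np.array([0.0, 0.0, 9.81]), np.zeros(3), 0.005)] * 200

# Accumulate every IMU sample between keyframes i and j into `pim`;
# hundreds of raw measurements collapse into one summarized motion.
for acc, gyro, dt in imu_samples:
    pim.integrateMeasurement(acc, gyro, dt)

# A single factor then constrains the relative motion between the poses,
# velocities, and bias at the two keyframes.
i, j = 0, 1
imu_factor = gtsam.ImuFactor(X(i), V(i), X(j), V(j), B(i), pim)
```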
To achieve precise global localization, we've developed a custom GPS factor.
This factor enables real-time estimation of the transformation between the Visual-Inertial Odometry (VIO) frame
and the GPS East-North-Up (ENU) frame while accounting for the time offset between the two sensors.
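To make the roles of the estimated VIO-to-ENU transform and the time offset concrete, here is a minimal numpy sketch of what such a GPS residual could look like; the function, the linear interpolation between bracketing VIO positions, and all names are our assumptions rather than the actual implementation.

```python
import numpy as np

def gps_residual(T_enu_vio, p_vio_t0, p_vio_t1, t0, t1,
                 t_gps, dt_offset, p_gps_enu):
    """Residual of one (hypothetical) GPS factor.

    T_enu_vio   : 4x4 homogeneous transform from the VIO frame to the ENU
                  frame, estimated online as part of the optimization.
    p_vio_t0/t1 : VIO positions at the two estimate times bracketing the
                  offset-corrected GPS timestamp.
    dt_offset   : estimated time offset between the GPS and VIO clocks.
    p_gps_enu   : GPS position measurement expressed in the ENU frame.
    """
    # Shift the GPS timestamp by the estimated inter-sensor time offset,
    # then linearly interpolate the VIO position at that instant.
    t = t_gps + dt_offset
    alpha = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    p_vio = (1.0 - alpha) * p_vio_t0 + alpha * p_vio_t1

    # Map the interpolated VIO position into the ENU frame and compare
    # against the measured GPS position.
    p_pred = T_enu_vio[:3, :3] @ p_vio + T_enu_vio[:3, 3]
    return p_pred - p_gps_enu

# Example: identity alignment, constant-velocity motion, exact GPS fix.
r = gps_residual(np.eye(4), np.zeros(3), np.array([1.0, 0.0, 0.0]),
                 t0=0.0, t1=1.0, t_gps=0.45, dt_offset=0.05,
                 p_gps_enu=np.array([0.5, 0.0, 0.0]))
# r is ~zero: the interpolated VIO position at t = 0.5 matches the fix.
```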