Visual Place Recognition

Keywords: robotics, navigation, place recognition, whole image matching, patch matching, calibration
Spring 2013 - Fall 2019
[Teaser image]

Description

Visual sensors offer many advantages over traditional robotic mapping sensors, including low cost, small size, passive sensing, and low power consumption. A large number of vision-based mapping systems have been developed over the past ten years, including FAB-MAP, MonoSLAM, FrameSLAM, V-GPS, Mini-SLAM, and SeqSLAM, amongst many others. Yet as robots are tested over longer and longer periods in real-world environments, it is becoming clear that perceptual change, caused by factors such as day-night cycles, varying weather conditions, and seasonal change, remains a significant challenge for vision-based methods. Current vision-based approaches to the problem are limited by one or more significant restrictions, such as requiring hand-picked training data, camera motion information, or long image sequences.

To address these limitations, we present a novel multi-step vision-based place recognition system inspired by the human visual processing pathway, and specifically by the simultaneous increase in both matching selectivity and tolerance (or invariance) along that pathway. We extend this concept to the domain of place recognition by implementing an initial low-resolution, low-tolerance whole-image matcher followed by a higher-resolution, highly tolerant patch-matching stage. We demonstrate that the method achieves recall rates of up to 51% at 100% precision on the sunny day-rainy night Alderley dataset, establishing a new benchmark in condition-invariant place recognition. The approach is able to match very perceptually different images of the same place (above image, left) while rejecting proposed matches between highly aliased images of different places (above image, right). Finally, we present a pilot human study showing that the algorithm's performance is comparable to human performance.
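To make the two-stage idea concrete, the sketch below shows a simplified coarse-to-fine matching pipeline in this spirit: a low-resolution whole-image comparison shortlists candidate places, and a higher-resolution patch-wise comparison re-ranks the shortlist. This is an illustrative sketch only, assuming grayscale NumPy images; the function names and parameters (e.g., coarse_size, patch_size, n_candidates) are our own placeholders, not the published implementation.

```python
# Illustrative coarse-to-fine place matching sketch (not the published method).
import cv2
import numpy as np

def normalise(img):
    """Zero-mean, unit-variance normalisation to reduce illumination effects."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-6)

def coarse_score(query, reference, coarse_size=(32, 24)):
    """Stage 1: low-resolution whole-image comparison (mean absolute difference)."""
    q = normalise(cv2.resize(query, coarse_size))
    r = normalise(cv2.resize(reference, coarse_size))
    return float(np.abs(q - r).mean())

def patch_score(query, reference, patch_size=48, stride=48):
    """Stage 2: higher-resolution, patch-wise comparison of a candidate pair."""
    q, r = normalise(query), normalise(reference)
    h, w = q.shape
    scores = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            qp = q[y:y + patch_size, x:x + patch_size]
            rp = r[y:y + patch_size, x:x + patch_size]
            scores.append(np.abs(qp - rp).mean())
    return float(np.mean(scores))

def match_place(query, references, n_candidates=5):
    """Shortlist references by the coarse score, then re-rank with patch matching."""
    coarse = sorted((coarse_score(query, ref), i) for i, ref in enumerate(references))
    shortlist = [i for _, i in coarse[:n_candidates]]
    return min(shortlist, key=lambda i: patch_score(query, references[i]))
```

In practice the published system adds further tolerance mechanisms at the patch stage; the sketch only captures the coarse-then-fine structure described above.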

Publications

  • "Self-Driving Vehicles: Key Technical Challenges and Progress Off the Road,"
    Michael Milford, Samuel E. Anthony, Walter J. Scheirer,
    IEEE Potentials,
    January-February 2020.
  • "Deja vu: Scalable Place Recognition Using Mutually Supportive Feature Frequencies,"
    Adam Jacobson, Walter J. Scheirer, Michael Milford,
    Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
    September 2017.
  • "Vision-based Simultaneous Localization and Mapping in Changing Outdoor Environments,"
    Michael Milford, Eleonora Vig, Walter J. Scheirer, David D. Cox,
    Journal of Field Robotics (JFR),
September/October 2014.
  • "Condition-Invariant, Top-Down Visual Place Recognition,"
    Michael Milford, Walter J. Scheirer, Eleonora Vig, Arren Glover, Oliver Baumann, Jason Mattingley, David D. Cox,
    Proceedings of the IEEE International Conference on Robotics and Automation (ICRA),
    June 2014.
  • "Towards Condition-Invariant, Top-Down Visual Place Recognition,"
Michael Milford, Eleonora Vig, Walter J. Scheirer, David D. Cox,
    Proceedings of the Australasian Conference on Robotics and Automation (ACRA),
    December 2013.