Algorithm helps autonomous vehicles find themselves visually

June 28, 2021 // By Jean-Pierre Joosting
Deep learning makes visual terrain-relative navigation more practical: for the first time, the technology works regardless of seasonal changes to the terrain.

A new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them — and for the first time, the technology works regardless of seasonal changes to that terrain.

Details of the process were published on June 23 in Science Robotics, a journal of the American Association for the Advancement of Science (AAAS).

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
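
In essence, VTRN boils down to sliding the vehicle's current view over a georeferenced map and taking the best match. Below is a minimal sketch of that idea using OpenCV's normalized cross-correlation template matching; the file names, grayscale setup, and function name are assumptions for illustration, not details of the Caltech system.

import cv2

def locate_in_map(camera_view, satellite_map):
    """Slide the onboard camera view across the georeferenced satellite
    mosaic and return the pixel offset of the best match."""
    # TM_CCOEFF_NORMED: higher score means a better match; somewhat robust
    # to uniform brightness changes, but not to snow or fallen leaves.
    scores = cv2.matchTemplate(satellite_map, camera_view, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    return best_xy, best_score  # (x, y) of the match's top-left corner

# Hypothetical inputs: the map must be larger than the camera view,
# and both must share the same scale and orientation.
sat = cv2.imread("satellite_mosaic.png", cv2.IMREAD_GRAYSCALE)
view = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)
(x, y), score = locate_in_map(view, sat)
print(f"Best match at map pixel ({x}, {y}), score {score:.2f}")

A real system would also have to handle rotation, scale, and perspective differences between the camera and the map; the point here is only the map-matching core.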

The problem is that, in order for it to work, the current generation of VTRN requires that the terrain it is looking at closely match the images in its database. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, prevents the images from matching and fouls up the system. So, unless there is a database of landscape images under every conceivable condition, VTRN systems can be easily confused.

To overcome this challenge, a team from the lab of Soon-Jo Chung, Caltech's Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence (AI) to remove the seasonal content that hinders current VTRN systems.
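
The article does not describe the network itself, but the idea of removing seasonal content can be illustrated by comparing images in a learned feature space rather than pixel by pixel. The sketch below substitutes an off-the-shelf ImageNet-pretrained ResNet-18 from torchvision for the team's trained network, so treat it as a conceptual stand-in, not the published method.

import torch
import torchvision.models as models
import torchvision.transforms as T

# Stand-in encoder: dropping the classifier head makes the network
# output a 512-dimensional feature vector per image.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(pil_image):
    # Map a PIL image to its feature vector.
    return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0)

def match_score(live_view, map_tile):
    # Cosine similarity in feature space: with a season-invariant encoder,
    # this ideally stays high even when one image is snow-covered and the
    # other is not, unlike raw-pixel correlation.
    a, b = embed(live_view), embed(map_tile)
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

To localize, a vehicle could embed each satellite-map tile once, cache the vectors, and pick the tile whose embedding best matches the live camera view.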

