Visual information holds great potential for localization and map creation in unknown environments. The project NAVVIS (Navigation anhand visueller Informationen zur erweiterten Wahrnehmung der Umgebung; in English: navigation based on visual information for extended perception of the environment) aims to investigate and exploit this potential by using visual sensors and by developing suitable methods and algorithms.
Localization and mapping of the environment are essential capabilities for determining a user’s position and orientation. While self-localization is in many cases realized by satellite-based positioning, the exploration of unknown environments and the interiors of larger buildings cannot rely on satellite navigation. The project NAVVIS aims to close this gap by using visual sensors.
A central challenge is simultaneous localization and mapping in previously unknown environments, so that the location of an image can be determined without additional infrastructure and without a pre-existing database. This can be achieved by combining relative position information with the ability to recognize previously observed objects. The redundancy in the combined information allows precise visual maps of the environment to be generated. By matching a user’s image recordings to these visual reference maps, not only the position but also the orientation of the user can be determined. This forms the basis for novel location-based services.
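To illustrate the image-matching step described above, the following is a minimal, hedged sketch: a query image, represented by packed binary feature descriptors (e.g., 256-bit signatures stored as 32 bytes each), is compared against a set of reference views via Hamming distance, and the view with the most close matches wins. All names and data here are synthetic illustrations, not NAVVIS project code.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming(a, b):
    """Pairwise Hamming distances between two sets of packed binary descriptors."""
    # XOR the byte arrays, then count the differing bits per descriptor pair.
    return np.unpackbits(a[:, None, :] ^ b[None, :, :], axis=2).sum(axis=2)

def best_reference_view(query_desc, reference_views, max_dist=64):
    """Return the index of the reference view with the most close matches."""
    scores = []
    for desc in reference_views:
        d = hamming(query_desc, desc)
        # A query descriptor "matches" a view if its nearest neighbor is close enough.
        scores.append(int((d.min(axis=1) < max_dist).sum()))
    return int(np.argmax(scores)), scores

# Three synthetic reference views, 100 descriptors of 256 bits each.
refs = [rng.integers(0, 256, size=(100, 32), dtype=np.uint8) for _ in range(3)]
# Query: a noisy copy of view 1 (a few bits flipped per descriptor).
query = refs[1] ^ rng.integers(0, 2, size=(100, 32), dtype=np.uint8)
idx, scores = best_reference_view(query, refs)
print(idx)  # view 1 should win
```

In a real system the reference views would carry known poses, so the best match directly yields a position and orientation estimate for the query image.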
The methods and techniques developed in the project NAVVIS are thus a key technology enabling navigation and mapping without the need for complex infrastructure. Besides their use in the autonomous exploration of unknown environments, they also have high potential to complement satellite-based navigation in commercial mass-market applications, such as location-aware services in airports, malls, museums, and more.
To achieve these long-term goals, novel concepts and algorithms are to be developed. By reducing complexity, they overcome existing limitations on the size of the environment and on hardware requirements. Wide loops are closed efficiently using content-based image retrieval. These fundamental technologies are made usable for personal navigation through an interface that adapts to the current context and to user requirements.
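The loop-closure idea mentioned above can be sketched with a simple content-based image retrieval structure: each image is reduced to a set of visual-word IDs (as produced by a bag-of-visual-words quantizer, not shown), and an inverted index retrieves previously seen images sharing words with the current one, weighted by IDF. This is an illustrative assumption of how such retrieval can work, not the project's actual implementation.

```python
from collections import defaultdict
import math

class LoopDetector:
    """Toy loop-closure detector via inverted-index image retrieval."""

    def __init__(self):
        self.index = defaultdict(set)   # visual word -> ids of images containing it
        self.num_images = 0

    def query_and_add(self, words, min_score=1.0):
        """Score past images against `words`, then index the new image.

        Returns (best_matching_image_id, score), or (None, 0.0) if no
        past image scores above `min_score`."""
        words = set(words)
        scores = defaultdict(float)
        for w in words:
            postings = self.index[w]
            if postings:
                # Rare words are more discriminative: weight by IDF.
                idf = math.log((self.num_images + 1) / len(postings))
                for img in postings:
                    scores[img] += idf
        best_id, best_score = max(
            scores.items(), key=lambda kv: kv[1], default=(None, 0.0))
        img_id = self.num_images
        self.num_images += 1
        for w in words:
            self.index[w].add(img_id)
        if best_score >= min_score:
            return best_id, best_score
        return None, 0.0

det = LoopDetector()
det.query_and_add({1, 2, 3})                     # image 0: nothing to match yet
det.query_and_add({10, 11, 12})                  # image 1: disjoint words, no match
match, score = det.query_and_add({1, 2, 3, 4})   # revisit of the first place
print(match)  # image 0 is recognized
```

Because only shared words are touched, a query scales with the vocabulary overlap rather than the number of stored images, which is what makes retrieval-based loop closure efficient for wide loops.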
The NAVVIS Indoor Viewer and the corresponding dataset presented on this website are designed to help other research groups work in the challenging field of mobile visual indoor localization.
In case of any questions, please do not hesitate to contact us via email: email@example.com
Project Title: Navigation anhand visueller Informationen zur erweiterten Wahrnehmung der Umgebung (NAVVIS II)
Project Coordinator: Eckehard Steinbach, Georg Schroth, Lehrstuhl für Medientechnik, Arcisstraße 21, 80333 München, Tel.: 089-289-23500
Project Runtime: 01.04.2013 – 31.03.2015
Cooperation Partners: Lehrstuhl für Medientechnik
- Prof. Dr.-Ing. Eckehard Steinbach
- M.Sc. Dmytro Bobkov
- Dipl.-Ing. Adrian Garcea
- Dipl.-Ing. Sebastian Hilsenbeck
- Dipl.-Ing. Robert Huitl
- Dipl.-Medieninf. Andreas Möller
- M.Sc. Dominik van Opdenbosch
- Dr.-Ing. Georg Schroth
Sebastian Hilsenbeck, Dmytro Bobkov, Georg Schroth, Robert Huitl, Eckehard Steinbach, “Graph-based Data Fusion of Pedometer and WiFi Measurements for Mobile Indoor Positioning“, In ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2014), Seattle, WA, USA, September 2014
Dominik van Opdenbosch, Georg Schroth, Robert Huitl, Sebastian Hilsenbeck, Adrian Garcea, Eckehard Steinbach, “Camera-Based Indoor Positioning Using Scalable Streaming Of Compressed Binary Image Signatures“, In IEEE International Conference on Image Processing (ICIP 2014), Paris, France, October 2014
Julian Straub, Sebastian Hilsenbeck, Georg Schroth, Robert Huitl, Andreas Möller, Eckehard Steinbach, “Fast Relocalization For Visual Odometry Using Binary Features“, In IEEE International Conference on Image Processing (ICIP 2013), Melbourne, Australia, September 2013
Andreas Möller, Matthias Kranz, Robert Huitl, Stefan Diewald, Luis Roalter, “A Mobile Indoor Navigation System Interface Adapted to Vision-Based Localization“, In 11th International Conference on Mobile and Ubiquitous Multimedia (MUM2012), Ulm, Germany, December 2012
Robert Huitl, Georg Schroth, Sebastian Hilsenbeck, Florian Schweiger, Eckehard Steinbach, “Virtual Reference View Generation for CBIR-based Visual Pose Estimation“, In ACM Multimedia 2012, Nara, Japan, November 2012
Sebastian Hilsenbeck, Andreas Möller, Robert Huitl, Georg Schroth, Matthias Kranz, Eckehard Steinbach, “Scale-Preserving Long-Term Visual Odometry for Indoor Navigation“, In International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012), Sydney, Australia, November 2012
Robert Huitl, Georg Schroth, Sebastian Hilsenbeck, Florian Schweiger, Eckehard Steinbach, “TUMindoor: an extensive image and point cloud dataset for visual indoor localization and mapping“, In IEEE International Conference on Image Processing (ICIP 2012), Orlando, FL, USA, September 2012
Andreas Möller, Christian Kray, Luis Roalter, Stefan Diewald, Robert Huitl, Matthias Kranz, “Tool Support for Prototyping Interfaces for Vision-Based Indoor Navigation“, In Workshop on Mobile Vision and HCI (MobiVis) on MobileHCI 2012, San Francisco, USA, September 2012
Georg Schroth, Robert Huitl, Mohammad Abu-Alqumsan, Florian Schweiger, Eckehard Steinbach, “Exploiting prior knowledge in mobile visual location recognition“, In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), Kyoto, March 2012
Georg Schroth, Robert Huitl, David Chen, Mohammad Abu-Alqumsan, Anas Al-Nuaimi, Eckehard Steinbach, “Mobile Visual Location Recognition“, In IEEE Signal Processing Magazine, Special Issue on Mobile Media Search, Pages: 77-89, Volume: 28, Number: 4, July 2011
Georg Schroth, Sebastian Hilsenbeck, Robert Huitl, Florian Schweiger, Eckehard Steinbach, “Exploiting text-related features for content-based image retrieval“, In IEEE International Symposium on Multimedia (ISM 2011), Dana Point, CA, USA, December 2011
Huizhong Chen, Sam S. Tsai, Georg Schroth, David M. Chen, Radek Grzeszczuk, Bernd Girod, “Robust Text Detection in Natural Images with Edge-Enhanced Maximally Stable Extremal Regions“, In IEEE International Conference on Image Processing (ICIP 2011), Brussels, September 2011
Sam S. Tsai, Huizhong Chen, David M. Chen, Georg Schroth, Radek Grzeszczuk, Bernd Girod, “Mobile Visual Search on Printed Documents using Text and Low Bit-Rate Features“, In IEEE International Conference on Image Processing (ICIP 2011), Brussels, September 2011
Georg Schroth, Anas Al-Nuaimi, Robert Huitl, Florian Schweiger, Eckehard Steinbach, “Rapid Image Retrieval for Mobile Location Recognition“, In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2011), Prague, Czech Republic, May 2011