|Team||Feature Extraction and Identification (50%)|
|Institution||Lumière University of Lyon 2 (ICOM)|
|Location||Bron (Université Lyon2)|
|Contact||mehdi.ayadi at liris.cnrs.fr|
|Subject||Augmented Reality on mobile devices in an urban context|
|Abstract||The objective of this thesis is to propose a novel approach that allows a user moving through an urban area to view, in augmented reality on a smartphone or tablet, the impact of construction projects on the urban landscape once the buildings are completed. We do not necessarily aim for a photo-realistic rendering, but rather for the most accurate scene geometry possible throughout the user's stroll.
In fact, most mobile augmented reality applications today simply use the smartphone's embedded sensors to position the observer via GPS and to generate a very rough approximation of what they would see if perfectly positioned: the magnetic compass estimates the viewing direction, while the gyroscope and accelerometer provide a rough estimate of the motion parameters (three degrees of freedom in translation and three in rotation). Using these sensors alone to insert a synthetic object into the video stream gives the user a very unrealistic impression of the viewed scene: inserted objects appear to "float" or "hover" as the user moves.
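To illustrate the sensor-only pose described above, the six degrees of freedom can be assembled from GPS (three translations, once converted to a local metric frame) and compass/IMU angles (three rotations). The sketch below is only an assumption of how such a pose could be composed; the function names and the yaw/pitch/roll convention are ours, not from the thesis.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Rotation matrix from the three angular degrees of freedom
    (compass heading = yaw, accelerometer-derived pitch and roll).
    Angles in radians; convention Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def camera_pose(gps_position, yaw, pitch, roll):
    """6-DoF pose: three translations from GPS (already projected to a
    local metric frame) plus three rotations from compass/gyro/accelerometer."""
    R = rotation_from_euler(yaw, pitch, roll)
    t = np.asarray(gps_position, dtype=float)
    return R, t

# Hypothetical reading: observer 10 m east, 5 m north, eye height 1.6 m,
# facing 90 degrees from the reference heading.
R, t = camera_pose([10.0, 5.0, 1.6], yaw=np.pi / 2, pitch=0.0, roll=0.0)
```

The noise in each of these sensor readings propagates directly into the pose, which is precisely why purely instrument-based insertion makes synthetic objects drift.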
We therefore propose an approach in which points of interest (visual landmarks) are automatically extracted from the images and tracked throughout the sequence, allowing the synthetic objects to be anchored to them. The main challenge is thus to exploit the images of the live video stream captured by the smartphone's camera to improve the positioning of the user in the city.
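The abstract does not name a specific interest-point detector; as one plausible instance of "extracting visual landmarks," a classic Harris corner detector can be sketched in a few lines of numpy. All names and parameter values below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def harris_corners(img, k=0.05, window=3, thresh=0.01):
    """Detect corner points (candidate visual landmarks) via the Harris
    response. img: 2-D grayscale array. Returns (row, col) pairs whose
    response exceeds `thresh` times the maximum response."""
    # Image gradients (np.gradient returns d/drow, d/dcol)
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Sum each product over a local window (structure tensor entries)
        pad = window // 2
        ap = np.pad(a, pad, mode='edge')
        out = np.zeros_like(a)
        for dy in range(window):
            for dx in range(window):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    # Harris response: det(M) - k * trace(M)^2
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    ys, xs = np.where(R > thresh * R.max())
    return np.column_stack([ys, xs])

# Synthetic frame: a bright square on a dark background; its four
# corners are the expected landmarks.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
pts = harris_corners(img)
```

In a real pipeline these landmarks would then be tracked from frame to frame (e.g. by optical flow or descriptor matching) so that the synthetic buildings stay anchored to the scene rather than to the noisy sensor pose.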
Last update: 2017-03-24 09:45:05