A New Dataset for Geospatial Visual Localisation: egenioussBench

Determining a camera's pose from images – known as visual localisation – is fundamental to applications ranging from autonomous driving and robotics to augmented reality, yet existing datasets face two key issues. First, they lack the scale needed for large scenes, limiting progress towards truly scalable methods. Second, when they do cover large scenes, they often provide imprecise ground-truth poses for the query images. egenioussBench overcomes these limitations by pairing a high-resolution aerial 3D mesh and a CityGML LoD2 model, serving as geospatial reference data, with map-independent ground-level smartphone imagery as query data, whose centimetre-accurate poses were obtained via PPK and GCP/CP-aided adjustment.

The benchmark offers:

- A high-resolution aerial 3D mesh and a CityGML LoD2 model as geospatial reference data
- A test split of 42 non-co-visible query images with withheld ground truth
- A validation split of 412 sequential query images with released poses
- A public leaderboard, evaluated with multi-threshold binning metrics and comprehensive global statistics
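Multi-threshold binning metrics of the kind used on the leaderboard are commonly computed as the fraction of query images localised within paired translation/rotation error bounds. The sketch below illustrates this idea; the benchmark's exact metric definitions and threshold values are not specified here, so the `(metres, degrees)` bins shown are illustrative assumptions, not egenioussBench's official settings.

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (metres) and rotation error (degrees) for one pose."""
    t_err = float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))
    # Rotation error is the angle of the relative rotation R_est^T @ R_gt.
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    r_err = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return t_err, r_err

def recall_at_thresholds(errors, bins):
    """Fraction of queries whose errors fall within each (metres, degrees) bin.

    errors: iterable of (t_err, r_err) pairs, one per query image.
    bins:   list of (max_metres, max_degrees) thresholds (illustrative here).
    """
    e = np.asarray(list(errors), dtype=float)  # shape (N, 2)
    return {
        (tm, td): float(np.mean((e[:, 0] <= tm) & (e[:, 1] <= td)))
        for tm, td in bins
    }

# Example with hypothetical per-query errors and assumed threshold bins.
errs = [(0.08, 0.5), (0.4, 1.2), (3.0, 8.0)]
print(recall_at_thresholds(errs, [(0.25, 2.0), (0.5, 5.0), (5.0, 10.0)]))
```

Reporting recall over several bins at once shows how accuracy degrades with looser tolerances, which a single-threshold score would hide.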

More information, including a link to a paper, can be found at https://www.isprs.org/resources/datasets/benchmarks/egenioussBench/Default.aspx
