Dataset Overview


System Overview: an end-to-end pipeline that renders an RGB-D-inertial benchmark for large-scale interior scene understanding and mapping. Our dataset contains 20M images created by the following pipeline:
(A) We collect around 1 million CAD models provided by world-leading furniture manufacturers; these models have been used in real-world production.
(B) Based on these models, around 1,100 professional designers create around 22 million interior layouts; most of these layouts have been used in real-world decoration.
(C) For each layout, we generate a number of configurations to represent different random lightings and to simulate scene changes over time in daily life.
(D) We provide an interactive simulator (ViSim) to help create ground-truth IMU and event data, as well as monocular or stereo camera trajectories, including hand-drawn, random-walk, and neural-network-generated realistic trajectories.
(E) We render all supported image sequences and ground truth.
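
To illustrate how the rendered RGB-D frames and ground-truth trajectories might be consumed, here is a minimal Python sketch. The directory layout, file names, depth encoding, and CSV columns below are assumptions made for illustration only; the actual layout is specified in the [FORMAT] document.

    # Minimal sketch of reading one RGB-D frame and its ground-truth pose.
    # All paths, file names, and column layouts are hypothetical; consult
    # the official [FORMAT] document for the real dataset structure.
    import csv
    from pathlib import Path

    import numpy as np
    from PIL import Image


    def load_frame(sequence_dir: Path, timestamp: str):
        """Load an RGB image, a depth map, and the matching ground-truth pose."""
        rgb = np.array(Image.open(sequence_dir / "cam0" / "data" / f"{timestamp}.png"))

        # Depth assumed to be a 16-bit PNG in millimetres (assumption).
        depth_raw = np.array(Image.open(sequence_dir / "depth0" / "data" / f"{timestamp}.png"))
        depth_m = depth_raw.astype(np.float32) / 1000.0

        # Ground truth assumed as CSV rows: timestamp, tx, ty, tz, qw, qx, qy, qz (assumption).
        pose = None
        with open(sequence_dir / "groundtruth.csv", newline="") as f:
            for row in csv.reader(f):
                if row and row[0] == timestamp:
                    pose = {
                        "translation": np.array([float(v) for v in row[1:4]]),
                        "quaternion_wxyz": np.array([float(v) for v in row[4:8]]),
                    }
                    break
        return rgb, depth_m, pose


    if __name__ == "__main__":
        rgb, depth_m, pose = load_frame(Path("path/to/sequence"), "0000000000000000000")
        print(rgb.shape, depth_m.shape, pose)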


Citation

@inproceedings{InteriorNet18,
  author    = {Wenbin Li and Sajad Saeedi and John McCormac and Ronald Clark and Dimos Tzoumanikas and Qing Ye and Yuzhong Huang and Rui Tang and Stefan Leutenegger},
  booktitle = {British Machine Vision Conference (BMVC)},
  title     = {InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset},
  year      = {2018}
}

Download


For access to our full image dataset, please agree to the terms of use [HERE]. After we receive your agreement form, we will send you the links to the full image dataset. Many thanks for your patience. If you use any part of our dataset or code, please cite our paper. For more details, please refer to our [PAPER] and [FORMAT]. Any requests or queries should go to interiornetdataset[AT]gmail.com.


UPDATE: Kujiale.com reserves all rights to the assets used in this dataset, including the furniture models, layouts, and scenes. Please contact Dr. Rui Tang (ati[AT]qunhemail.com) for permission to use and download these assets.


Acknowledgements

We would like to thank Kujiale.com for providing their database of production furniture models and layouts, as well as access to their GPU/CPU clusters. We also thank the Kujiale artists and other professionals for their great efforts in editing and labelling millions of models and scenes. We highly appreciate the comments and technical support from the Kujiale Rendering Group, as well as helpful discussions and comments from Prof. Andrew Davison and other members of the Robot Vision Group at Imperial College London. This research is supported by the EPSRC grants PAMELA EP/K008730/1 and Aerial ABM EP/N018494/1, and by Imperial College London.