Datasets


Stefan Heber and Thomas Pock
2016

Synthetic Light Field (LF) dataset rendered using POV-Ray [1]
Figure: 3D view
Figure: sub-aperture image

POV-Ray makes it possible to compute floating-point accurate ground-truth depth maps without discretization artifacts. To be able to grow the dataset as needed, we implemented a random scene generator. This generator divides each scene into foreground, midground, and background. The foreground and midground regions are randomly filled with comparatively small and large objects, respectively, which heavily occlude each other. The resulting occlusion and disocclusion effects lead to a high degree of hyperplane intersections in the LF domain. The 3D objects used for the foreground and midground are taken from the Stanford 3D Scanning Repository [2] and from the Oyonale dataset [3]. We use around 20 different 3D objects, about half of which come with random textures from categories such as stone, wood, or metal. We also apply random finish properties, which, among other things, define the non-Lambertian reflectance characteristics of the different surfaces. The backgrounds of the scenes are images of various resolutions from the categories city, landscape, mountain, and street, downloaded from Google image search and labeled for reuse.
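For illustration, the following Python sketch writes a POV-Ray scene in this foreground/midground/background layout. The mesh files, depth ranges, texture names, and finish parameters are assumptions made for the sketch, not the values used to build the dataset.

    import random

    # Illustrative layout: the camera looks down the +z axis.
    # All depth ranges, mesh files, and texture names below are
    # assumptions for this sketch, not the dataset's actual values.
    FOREGROUND_Z = (2.0, 4.0)   # comparatively small objects, close to the camera
    MIDGROUND_Z  = (4.0, 8.0)   # comparatively large objects behind them
    BACKGROUND_Z = 12.0         # plane carrying the background image

    MESHES   = ["bunny.inc", "dragon.inc", "buddha.inc"]    # e.g. Stanford meshes
    TEXTURES = ["T_Stone8", "T_Wood10", "Polished_Chrome"]  # stone, wood, metal

    def random_finish():
        # Random finish properties -> non-Lambertian surface behaviour.
        return (f"finish {{ diffuse {random.uniform(0.4, 0.9):.2f} "
                f"specular {random.uniform(0.0, 0.8):.2f} "
                f"reflection {random.uniform(0.0, 0.3):.2f} }}")

    def random_object(z_range, scale):
        x, y = random.uniform(-2.0, 2.0), random.uniform(-1.5, 1.5)
        z = random.uniform(*z_range)
        return (f'object {{ #include "{random.choice(MESHES)}"\n'
                f'  scale {scale} translate <{x:.2f}, {y:.2f}, {z:.2f}>\n'
                f'  texture {{ {random.choice(TEXTURES)} }} {random_finish()} }}')

    def random_scene(n_fore=6, n_mid=4, background="background_0001.png"):
        parts = ['#include "stones.inc"', '#include "woods.inc"',
                 '#include "metals.inc"',
                 'light_source { <5, 10, -10> rgb 1 }',
                 # Background: an image-mapped plane far behind everything else.
                 f'plane {{ z, {BACKGROUND_Z} '
                 f'pigment {{ image_map {{ png "{background}" }} }} }}']
        # Foreground and midground objects heavily occlude each other.
        parts += [random_object(FOREGROUND_Z, scale=0.5) for _ in range(n_fore)]
        parts += [random_object(MIDGROUND_Z, scale=1.5) for _ in range(n_mid)]
        return "\n".join(parts)

    if __name__ == "__main__":
        random.seed(0)
        with open("scene.pov", "w") as f:
            f.write(random_scene())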
After creating a random scene, we render it from multiple viewpoints placed on a regular grid. All rendered images share the same image plane, and the optical axes converge at a predefined point that is chosen at random somewhere between the image plane and the background. Note that, due to the non-parallel viewing directions, the camera vectors are not perpendicular to the shared image plane; this is intended.
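The viewpoint placement can be sketched along the same lines. This assumes a square grid with uniform spacing and uses a plain look_at camera, which only approximates the shared image plane described above (the actual renders would need a correspondingly sheared camera); grid size, spacing, and field of view are again illustrative.

    import random

    def camera_grid(n=9, spacing=0.1, conv_z=(4.0, 12.0)):
        # One convergence point per scene, chosen at random between the
        # image plane (z = 0 here) and the background.
        target = random.uniform(*conv_z)
        blocks = []
        for i in range(n):
            for j in range(n):
                # Regular grid of viewpoints, centred on the origin.
                x = (i - (n - 1) / 2) * spacing
                y = (j - (n - 1) / 2) * spacing
                # look_at makes every optical axis pass through the
                # convergence point, so off-centre cameras are toed in
                # (non-parallel viewing directions).
                blocks.append(
                    f"camera {{ location <{x:.3f}, {y:.3f}, 0> "
                    f"look_at <0, 0, {target:.3f}> angle 40 }}")
        return blocks

    if __name__ == "__main__":
        random.seed(0)
        for k, cam in enumerate(camera_grid(n=3)):
            print(f"// view {k}\n{cam}")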
We are making this data publicly available to researchers. The light fields may be used for academic and research purposes, but are not to be used for commercial purposes, nor should they appear in a product for sale without our permission.


[1] POV-Ray. http://www.povray.org.
[2] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), pages 303-312, New York, NY, USA, 1996. ACM.
[3] Oyonale. http://www.oyonale.com.

If you use this dataset, please cite:

@INPROCEEDINGS{7780776,
  author    = {S. Heber and T. Pock},
  booktitle = {2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  title     = {Convolutional Networks for Shape from Light Field},
  year      = {2016},
  pages     = {3746-3754},
  doi       = {10.1109/CVPR.2016.407},
  ISSN      = {1063-6919},
  month     = {June},
}

CONTACT US

Drop us a line if you are interested in our research. Please get in touch via email at info(at)etaargus.com.