Non-Rigid Structure from Motion Challenge

The evaluation is still open for submissions!

Please feel free to submit using the instructions given here.

If you use the dataset or benchmark, please cite our paper:

@article{jensen2018benchmark,
  title={A Benchmark and Evaluation of Non-Rigid Structure from Motion},
  author={Jensen, Sebastian Hoppe Nesgaard and Del Bue, Alessio and Doest, Mads Emil Brix and Aan{\ae}s, Henrik},
  journal={arXiv preprint arXiv:1801.08388},
  year={2018}
}

The Challenge

Non-rigid Structure from Motion (NRSfM) has been a very active research topic for the last 18 years. Given its relevance, it is surprising that methods have been tested on a rather limited range of object deformations, involving very few materials (mainly motion capture data of the human body). This limitation has likely biased research towards certain types of methods, leading to a slowdown, and possible misdirection, of progress in this field. By combining advanced robotics with dense 3D scanning and non-rigid animatronics, we have produced a rich and varied dataset with accurate ground truth for evaluating the state of the art in NRSfM. In addition to this variety, we also supply realistic missing-data patterns based on the densely captured geometry.

We invite the computer vision community to take part in the NRSfM challenge, use the dataset, develop new methods, and share results with the rest of the community.


The NRSfM Challenge provides 2D image tracks of matched points from different types of shapes with different material properties (paper, rubber, articulated objects, balloons), under both orthographic and projective camera models. Realistic missing data arise from deformation-based self-occlusions and from changes in camera viewpoint.
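To make the data format concrete, the sketch below builds a toy NRSfM observation matrix W (2F x P, NaN marking missing entries) by projecting random 3D points with the two camera models mentioned above. All names (`project_orthographic`, `project_perspective`, the occlusion rate) are illustrative assumptions, not the challenge's actual file format or API.

```python
import numpy as np

def project_orthographic(X, R, t):
    # Orthographic model: rigid transform, then drop the depth coordinate.
    return (R @ X.T).T[:, :2] + t[:2]

def project_perspective(X, R, t, f=1.0):
    # Projective (pinhole) model: rigid transform, then divide by depth.
    Xc = (R @ X.T).T + t
    return f * Xc[:, :2] / Xc[:, 2:3]

# Toy scene: P tracked points over F frames (hypothetical sizes).
rng = np.random.default_rng(0)
P, F = 5, 3
W = np.full((2 * F, P), np.nan)  # observation matrix; NaN = missing track
for i in range(F):
    X = rng.standard_normal((P, 3)) + [0.0, 0.0, 5.0]  # keep points in front of the camera
    uv = project_perspective(X, np.eye(3), np.zeros(3))
    visible = rng.random(P) > 0.2  # crude stand-in for self-occlusion
    W[2 * i:2 * i + 2, visible] = uv[visible].T
```

The NaN mask plays the role of the challenge's realistic missing data: an NRSfM method must recover time-varying 3D shape from the observed entries of W alone.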


Evaluation against accurate ground truth makes it possible, for the first time, to compare each method's performance across different real deformations, ratios of missing data, and camera distortions.
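As a rough illustration of how such a comparison can work, the sketch below computes a per-frame 3D RMSE after similarity (Procrustes) alignment, a common error measure in NRSfM evaluation. This is an assumed metric for illustration only; the challenge's official evaluation protocol may differ.

```python
import numpy as np

def aligned_rmse(X_est, X_gt):
    """RMSE between an estimated shape and ground truth (both N x 3),
    after removing translation, rotation, and scale via orthogonal Procrustes."""
    # Remove translation by centering both shapes.
    A = X_est - X_est.mean(axis=0)
    B = X_gt - X_gt.mean(axis=0)
    # Optimal rotation: SVD of the cross-covariance, with a determinant
    # correction to exclude reflections.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))
    Omega = U @ np.diag([1.0, 1.0, d]) @ Vt
    A_rot = A @ Omega
    # Optimal scale by least squares, then the residual RMSE.
    s = np.sum(B * A_rot) / np.sum(A_rot ** 2)
    return np.sqrt(np.mean(np.sum((s * A_rot - B) ** 2, axis=1)))
```

Alignment is needed because NRSfM reconstructions are only defined up to a similarity transform, so raw coordinate differences would penalize an arbitrary gauge choice rather than reconstruction quality.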