18th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS), Antwerp, Belgium, 18 - 21 September 2017, vol.10617, pp.180-190
We propose a new method for human pose estimation that leverages information from multiple views to impose a strong prior on the articulated pose. The novelty of the method lies in the types of coherence modeled. Consistency across views is maximized through terms that model classical geometric information (coherence of the resulting poses) as well as appearance information, which enters the global energy function as latent variables. Experiments on the HumanEva dataset show that the proposed method significantly reduces the estimation error compared to single-view results and attains a 3D PCP score of 86%.
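The shape of such a multi-view energy can be sketched as below. This is a minimal illustrative sketch, not the paper's actual formulation: the specific term forms (a reprojection error for the geometric cross-view term, joint-detector confidences for the appearance term) and the weighting parameter `lam` are assumptions introduced here for illustration.

```python
import numpy as np

def geometric_term(points_3d, poses_2d, projections):
    """Cross-view coherence: squared reprojection error of the
    candidate 3D joints against the 2D pose in each view."""
    err = 0.0
    homo = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    for P, pose_2d in zip(projections, poses_2d):
        proj = (P @ homo.T).T                    # project to image plane
        proj = proj[:, :2] / proj[:, 2:3]        # perspective divide
        err += np.sum((proj - pose_2d) ** 2)
    return err

def appearance_term(joint_scores):
    """Appearance evidence for one view: negative sum of per-joint
    detector confidences (lower energy = better-supported pose)."""
    return -np.sum(joint_scores)

def total_energy(points_3d, poses_2d, projections, scores, lam=1.0):
    """Global energy: geometric cross-view consistency plus a
    weighted appearance term per view (lam is a hypothetical weight)."""
    geom = geometric_term(points_3d, poses_2d, projections)
    app = sum(appearance_term(s) for s in scores)
    return geom + lam * app
```

Minimizing `total_energy` over candidate 3D poses (and, in the paper's setting, over the latent appearance variables) would couple the per-view estimates into a single coherent articulated pose.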