High-Level Bottom-Up Cues for Top-Down Parsing of Facade Images - École des Ponts ParisTech
Conference Paper, Year: 2012

High-Level Bottom-Up Cues for Top-Down Parsing of Facade Images

Abstract

We address the problem of parsing images of building facades. The goal is to segment an image, assigning to the resulting regions semantic labels that correspond to basic architectural elements. We assume a top-down parsing framework has been developed beforehand, based on a 2D shape grammar that encodes prior knowledge of the possible composition of facades. The algorithm explores the space of feasible solutions by generating possible configurations of the facade and comparing them to the input data by means of a local, pixel- or patch-based classifier. We propose new bottom-up cues for this algorithm, both for evaluating a candidate parse and for guiding the exploration of the space of feasible solutions. The proposed method benefits from detection-based information and leverages the similar appearance of elements that repeat within a given facade. Experiments on standard datasets show that these more discriminative bottom-up cues improve convergence compared to state-of-the-art algorithms, and give better results in terms of precision and recall, as well as computation time and deviation.
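To make the abstract's generate-and-score procedure concrete, the sketch below is a minimal Python illustration, not the authors' implementation: it only shows the kind of loop described, in which candidate facade layouts are drawn top-down from the shape grammar and ranked by a local classifier score combined with additional bottom-up cues. All names (`sample_parse`, `pixel_cue`, `detection_cue`, `repetition_cue`) and the weights are hypothetical placeholders.

```python
from typing import Any, Callable


def parse_facade(
    image: Any,
    sample_parse: Callable[[], Any],             # hypothetical: draws a candidate layout from the 2D shape grammar
    pixel_cue: Callable[[Any, Any], float],      # hypothetical: local pixel- or patch-based classifier score
    detection_cue: Callable[[Any, Any], float],  # hypothetical: detection-based cue
    repetition_cue: Callable[[Any, Any], float], # hypothetical: rewards similar appearance of repeated elements
    n_iter: int = 1000,
    w_det: float = 1.0,
    w_rep: float = 1.0,
) -> Any:
    """Explore candidate parses and keep the best-scoring one (illustrative only)."""
    best_parse, best_score = None, float("-inf")
    for _ in range(n_iter):
        parse = sample_parse()  # top-down: generate one feasible configuration
        score = (
            pixel_cue(parse, image)
            + w_det * detection_cue(parse, image)
            + w_rep * repetition_cue(parse, image)
        )
        if score > best_score:
            best_parse, best_score = parse, score
    return best_parse
```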
Main file
PID2507871.pdf (3.42 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-00743043, version 1 (18-10-2012)

Identifiers

  • HAL Id: hal-00743043, version 1

Cite

David Ok, Mateusz Koziński, Renaud Marlet, Nikos Paragios. High-Level Bottom-Up Cues for Top-Down Parsing of Facade Images. 3DIMPVT, Oct 2012, Zürich, Switzerland. pp.N/A. ⟨hal-00743043⟩