## Complexity of Representation and Inference in Compositional Models with Part Sharing

*Alan Yuille, Roozbeh Mottaghi*; 17(11):1−28, 2016.

### Abstract

This paper performs a complexity analysis of a class of serial
and parallel compositional models of multiple objects and shows
that they enable efficient representation and rapid inference.
Compositional models are generative and represent objects in a
hierarchically distributed manner in terms of parts and
subparts, which are constructed recursively by part-subpart
compositions. Parts are represented more coarsely at higher
levels of the hierarchy, so that the upper levels give coarse
summary descriptions (e.g., there is a horse in the image) while
the lower levels represent the details (e.g., the positions of
the legs of the horse). This hierarchically distributed
representation obeys the *executive summary* principle, meaning
that a high-level executive requires only a coarse summary
description and can, if necessary, obtain more details by
consulting lower-level executives. The parts and subparts are
organized into hierarchical dictionaries, which enable part
sharing between different objects and thus efficient
representation of many objects. The first main
contribution of this paper is to show that compositional models
can be mapped onto a parallel visual architecture similar to
that used by bio-inspired visual models such as deep
convolutional networks but more explicit in terms of
representation, hence enabling part detection as well as object
detection, and suitable for complexity analysis. Inference
algorithms can be run on this architecture to exploit the gains
caused by part sharing and executive summary. Effectively, this
compositional architecture enables us to perform exact inference
simultaneously over a large class of generative models of
objects. The second contribution is an analysis of the
complexity of compositional models in terms of computation time
(for serial computers) and numbers of nodes (e.g., "neurons")
for parallel computers. In particular, we compute the complexity
gains obtained from part sharing and the executive summary
principle, and their dependence on how the dictionary size
scales with the level of the hierarchy. We
explore three regimes of scaling behavior where the dictionary
size (i) increases exponentially with the level of the
hierarchy, (ii) is determined by an unsupervised compositional
learning algorithm applied to real data, (iii) decreases
exponentially with scale. This analysis shows that in some
regimes the use of shared parts enables algorithms which can
perform inference in time linear in the number of levels for an
exponential number of objects. In other regimes part sharing has
little advantage for serial computers but can enable linear
processing on parallel computers.
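The representational gain from part sharing can be illustrated with a toy calculation. The sketch below is a minimal illustration under assumed parameters (constant dictionary size, binary compositions), not the paper's formal analysis: without sharing, each object stores its own full part tree, while with shared hierarchical dictionaries the total cost is just the number of distinct parts across levels.

```python
# Toy comparison of representation cost with and without part
# sharing in a compositional hierarchy. All quantities here are
# illustrative assumptions, not figures from the paper.

def nodes_without_sharing(num_objects, levels, branching=2):
    """Each object stores its own complete part tree with
    `branching` subparts per part, down to `levels` levels."""
    tree_size = sum(branching ** h for h in range(levels + 1))
    return num_objects * tree_size

def nodes_with_sharing(dict_sizes):
    """With shared hierarchical dictionaries, the cost is the
    total number of distinct parts across all levels."""
    return sum(dict_sizes)

levels = 5
branching = 2
dict_size = 10  # assumed constant dictionary size at each level

# With binary compositions drawn from a size-D dictionary, up to
# D**2 composite parts can be formed at the next level, so the
# number of representable objects can grow exponentially with
# depth; here we take 100 composite objects as an example.
num_objects = dict_size ** branching

print(nodes_without_sharing(num_objects, levels, branching))  # -> 6300
print(nodes_with_sharing([dict_size] * (levels + 1)))         # -> 60
```

Even in this small example, sharing reduces the node count by two orders of magnitude; the abstract's stronger claim is that in some scaling regimes inference remains linear in the number of levels while the number of representable objects is exponential.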
