The Dependent Dirichlet Process Mixture of Objects for Detection-free Tracking and Object Modeling

Willie Neiswanger, Frank Wood, Eric Xing
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:660-668, 2014.

Abstract

This paper explores how to find, track, and learn models of arbitrary objects in a video without a predefined method for object detection. We present a model that localizes objects via unsupervised tracking while learning a representation of each object, avoiding the need for pre-built detectors. Our model uses a dependent Dirichlet process mixture to capture the uncertainty in the number and appearance of objects and requires only spatial and color video data that can be efficiently extracted via frame differencing. We give two inference algorithms for use in both online and offline settings, and use them to perform accurate detection-free tracking on multiple real videos. We demonstrate our method in difficult detection scenarios involving occlusions and appearance shifts, on videos containing a large number of objects, and on a recent human-tracking benchmark where we show performance comparable to state-of-the-art detector-based methods.
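To make the abstract's two technical ingredients concrete, here is a generic dependent Dirichlet process (DDP) mixture form; this is a standard sketch of the model class, not necessarily the paper's exact construction:

G_t = \sum_{k=1}^{\infty} \pi_{t,k}\, \delta_{\theta_{t,k}}, \qquad
\theta_{t,k} \sim T(\,\cdot \mid \theta_{t-1,k}), \qquad
x_{t,i} \mid z_{t,i} \sim F(\theta_{t,\,z_{t,i}})

Here x_{t,i} is the i-th (position, color) observation in frame t, z_{t,i} assigns it to an object, F is a per-object likelihood over location and appearance, and T is a transition kernel that lets each object's parameters evolve across frames; the Dirichlet process construction leaves the number of mixture components, and hence objects, unbounded.

The abstract also notes that the only inputs are spatial and color measurements obtained by frame differencing. Below is a minimal sketch of that extraction step in Python, assuming OpenCV (cv2) and NumPy; the function name and threshold value are illustrative choices, not the authors' code.

import cv2
import numpy as np

def frame_difference_features(video_path, thresh=30):
    # Yield, per frame, an array of (x, y, r, g, b) rows for pixels that
    # changed between consecutive frames -- the "spatial and color" data
    # the abstract describes. 'thresh' is an illustrative choice.
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        gray_curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray_curr, gray_prev)
        ys, xs = np.nonzero(diff > thresh)   # spatial coordinates of changed pixels
        rgb = frame[ys, xs][:, ::-1]         # OpenCV stores BGR; flip to RGB
        yield np.column_stack([xs, ys, rgb])
        prev = frame
    cap.release()

Each yielded row (x, y, r, g, b) would then serve as one observation x_{t,i} for a mixture model of the form above.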

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-neiswanger14,
  title     = {{The Dependent Dirichlet Process Mixture of Objects for Detection-free Tracking and Object Modeling}},
  author    = {Neiswanger, Willie and Wood, Frank and Xing, Eric},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {660--668},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/neiswanger14.pdf},
  url       = {https://proceedings.mlr.press/v33/neiswanger14.html},
  abstract  = {This paper explores how to find, track, and learn models of arbitrary objects in a video without a predefined method for object detection. We present a model that localizes objects via unsupervised tracking while learning a representation of each object, avoiding the need for pre-built detectors. Our model uses a dependent Dirichlet process mixture to capture the uncertainty in the number and appearance of objects and requires only spatial and color video data that can be efficiently extracted via frame differencing. We give two inference algorithms for use in both online and offline settings, and use them to perform accurate detection-free tracking on multiple real videos. We demonstrate our method in difficult detection scenarios involving occlusions and appearance shifts, on videos containing a large number of objects, and on a recent human-tracking benchmark where we show performance comparable to state-of-the-art detector-based methods.}
}
Endnote
%0 Conference Paper
%T The Dependent Dirichlet Process Mixture of Objects for Detection-free Tracking and Object Modeling
%A Willie Neiswanger
%A Frank Wood
%A Eric Xing
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-neiswanger14
%I PMLR
%P 660--668
%U https://proceedings.mlr.press/v33/neiswanger14.html
%V 33
%X This paper explores how to find, track, and learn models of arbitrary objects in a video without a predefined method for object detection. We present a model that localizes objects via unsupervised tracking while learning a representation of each object, avoiding the need for pre-built detectors. Our model uses a dependent Dirichlet process mixture to capture the uncertainty in the number and appearance of objects and requires only spatial and color video data that can be efficiently extracted via frame differencing. We give two inference algorithms for use in both online and offline settings, and use them to perform accurate detection-free tracking on multiple real videos. We demonstrate our method in difficult detection scenarios involving occlusions and appearance shifts, on videos containing a large number of objects, and on a recent human-tracking benchmark where we show performance comparable to state-of-the-art detector-based methods.
RIS
TY  - CPAPER
TI  - The Dependent Dirichlet Process Mixture of Objects for Detection-free Tracking and Object Modeling
AU  - Willie Neiswanger
AU  - Frank Wood
AU  - Eric Xing
BT  - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
DA  - 2014/04/02
ED  - Samuel Kaski
ED  - Jukka Corander
ID  - pmlr-v33-neiswanger14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 33
SP  - 660
EP  - 668
L1  - http://proceedings.mlr.press/v33/neiswanger14.pdf
UR  - https://proceedings.mlr.press/v33/neiswanger14.html
AB  - This paper explores how to find, track, and learn models of arbitrary objects in a video without a predefined method for object detection. We present a model that localizes objects via unsupervised tracking while learning a representation of each object, avoiding the need for pre-built detectors. Our model uses a dependent Dirichlet process mixture to capture the uncertainty in the number and appearance of objects and requires only spatial and color video data that can be efficiently extracted via frame differencing. We give two inference algorithms for use in both online and offline settings, and use them to perform accurate detection-free tracking on multiple real videos. We demonstrate our method in difficult detection scenarios involving occlusions and appearance shifts, on videos containing a large number of objects, and on a recent human-tracking benchmark where we show performance comparable to state-of-the-art detector-based methods.
ER  -
APA
Neiswanger, W., Wood, F. & Xing, E. (2014). The Dependent Dirichlet Process Mixture of Objects for Detection-free Tracking and Object Modeling. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:660-668. Available from https://proceedings.mlr.press/v33/neiswanger14.html.
