Scale Invariant Conditional Dependence Measures

Sashank J Reddi, Barnabas Poczos
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1355-1363, 2013.

Abstract

In this paper we develop new dependence and conditional dependence measures and provide their estimators. An attractive property of these measures and estimators is that they are invariant to any monotone increasing transformation of the random variables, which is important in many applications including feature selection. Under certain conditions we show the consistency of these estimators, derive upper bounds on their convergence rates, and show that the estimators do not suffer from the curse of dimensionality. However, when the conditions are less restrictive, we derive a lower bound which proves that in the worst case the convergence can be arbitrarily slow, similarly to some other estimators. Numerical illustrations demonstrate the applicability of our method.
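
The invariance claim has a simple intuition: a monotone increasing transformation of a variable leaves the ranks of its samples unchanged, so any statistic computed purely from ranks is unaffected. The sketch below illustrates this with Spearman's rank correlation as a stand-in dependence measure; it is not the copula-based estimator developed in the paper, only a minimal demonstration of monotone-transformation invariance.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + 0.5 * rng.normal(size=1000)   # y is dependent on x

# Dependence measured on the raw samples.
rho_raw, _ = spearmanr(x, y)

# Apply arbitrary monotone increasing transformations to each variable:
# exp(.) and t -> t**3 + 2t are both strictly increasing, so the sample
# ranks, and hence any rank-based statistic, are unchanged.
rho_transformed, _ = spearmanr(np.exp(x), y**3 + 2.0 * y)

print(rho_raw, rho_transformed)   # identical up to floating-point error

A Pearson correlation, by contrast, would change under these transformations, which is exactly the failure mode that monotone-invariant measures avoid.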

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-jreddi13,
  title     = {Scale Invariant Conditional Dependence Measures},
  author    = {J Reddi, Sashank and Poczos, Barnabas},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {1355--1363},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/jreddi13.pdf},
  url       = {https://proceedings.mlr.press/v28/jreddi13.html},
  abstract  = {In this paper we develop new dependence and conditional dependence measures and provide their estimators. An attractive property of these measures and estimators is that they are invariant to any monotone increasing transformations of the random variables, which is important in many applications including feature selection. Under certain conditions we show the consistency of these estimators, derive upper bounds on their convergence rates, and show that the estimators do not suffer from the curse of dimensionality. However, when the conditions are less restrictive, we derive a lower bound which proves that in the worst case the convergence can be arbitrarily slow similarly to some other estimators. Numerical illustrations demonstrate the applicability of our method.}
}
Endnote
%0 Conference Paper
%T Scale Invariant Conditional Dependence Measures
%A Sashank J Reddi
%A Barnabas Poczos
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-jreddi13
%I PMLR
%P 1355--1363
%U https://proceedings.mlr.press/v28/jreddi13.html
%V 28
%N 3
%X In this paper we develop new dependence and conditional dependence measures and provide their estimators. An attractive property of these measures and estimators is that they are invariant to any monotone increasing transformations of the random variables, which is important in many applications including feature selection. Under certain conditions we show the consistency of these estimators, derive upper bounds on their convergence rates, and show that the estimators do not suffer from the curse of dimensionality. However, when the conditions are less restrictive, we derive a lower bound which proves that in the worst case the convergence can be arbitrarily slow similarly to some other estimators. Numerical illustrations demonstrate the applicability of our method.
RIS
TY - CPAPER
TI - Scale Invariant Conditional Dependence Measures
AU - Sashank J Reddi
AU - Barnabas Poczos
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-jreddi13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 1355
EP - 1363
L1 - http://proceedings.mlr.press/v28/jreddi13.pdf
UR - https://proceedings.mlr.press/v28/jreddi13.html
AB - In this paper we develop new dependence and conditional dependence measures and provide their estimators. An attractive property of these measures and estimators is that they are invariant to any monotone increasing transformations of the random variables, which is important in many applications including feature selection. Under certain conditions we show the consistency of these estimators, derive upper bounds on their convergence rates, and show that the estimators do not suffer from the curse of dimensionality. However, when the conditions are less restrictive, we derive a lower bound which proves that in the worst case the convergence can be arbitrarily slow similarly to some other estimators. Numerical illustrations demonstrate the applicability of our method.
ER -
APA
J Reddi, S. & Poczos, B. (2013). Scale Invariant Conditional Dependence Measures. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1355-1363. Available from https://proceedings.mlr.press/v28/jreddi13.html.