Convex Adversarial Collective Classification

MohamadAli Torkamani, Daniel Lowd
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):642-650, 2013.

Abstract

In this paper, we present a novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes. Our method is formulated as a convex quadratic program that guarantees optimal weights against a worst-case adversary in polynomial time. In addition to increased robustness against active adversaries, this kind of adversarial regularization can also lead to improved generalization even when no adversary is present. In experiments on real and simulated data, our method consistently outperforms both non-adversarial and non-relational baselines.
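For orientation only, the following is a minimal sketch of the kind of worst-case (min-max) training objective the abstract alludes to; the notation here (joint feature map f, label loss Δ, attack budget D) is illustrative and is not taken from the paper, which derives its own convex quadratic program for the collective, relational setting.

\[
\min_{w}\ \tfrac{1}{2}\|w\|^{2} \;+\; C \max_{\substack{\tilde{x}\,:\,\|\tilde{x}-x\|_{1}\le D,\\ \tilde{x}\ \text{binary}}}\ \max_{\tilde{y}}\ \Big( w^{\top} f(\tilde{x},\tilde{y}) \;-\; w^{\top} f(\tilde{x},y) \;+\; \Delta(y,\tilde{y}) \Big)
\]

Here the inner maximizations range over attribute vectors the adversary can reach by flipping at most D binary attributes and over candidate labelings. Dualizing the inner maximization over a suitable linear relaxation is the standard route by which such a min-max problem collapses into a single convex quadratic program solvable in polynomial time.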

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-torkamani13,
  title     = {Convex Adversarial Collective Classification},
  author    = {Torkamani, MohamadAli and Lowd, Daniel},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {642--650},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/torkamani13.pdf},
  url       = {https://proceedings.mlr.press/v28/torkamani13.html},
  abstract  = {In this paper, we present a novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes. Our method is formulated as a convex quadratic program that guarantees optimal weights against a worst-case adversary in polynomial time. In addition to increased robustness against active adversaries, this kind of adversarial regularization can also lead to improved generalization even when no adversary is present. In experiments on real and simulated data, our method consistently outperforms both non-adversarial and non-relational baselines.}
}
Endnote
%0 Conference Paper
%T Convex Adversarial Collective Classification
%A MohamadAli Torkamani
%A Daniel Lowd
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-torkamani13
%I PMLR
%P 642--650
%U https://proceedings.mlr.press/v28/torkamani13.html
%V 28
%N 1
%X In this paper, we present a novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes. Our method is formulated as a convex quadratic program that guarantees optimal weights against a worst-case adversary in polynomial time. In addition to increased robustness against active adversaries, this kind of adversarial regularization can also lead to improved generalization even when no adversary is present. In experiments on real and simulated data, our method consistently outperforms both non-adversarial and non-relational baselines.
RIS
TY - CPAPER
TI - Convex Adversarial Collective Classification
AU - MohamadAli Torkamani
AU - Daniel Lowd
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-torkamani13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 1
SP - 642
EP - 650
L1 - http://proceedings.mlr.press/v28/torkamani13.pdf
UR - https://proceedings.mlr.press/v28/torkamani13.html
AB - In this paper, we present a novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes. Our method is formulated as a convex quadratic program that guarantees optimal weights against a worst-case adversary in polynomial time. In addition to increased robustness against active adversaries, this kind of adversarial regularization can also lead to improved generalization even when no adversary is present. In experiments on real and simulated data, our method consistently outperforms both non-adversarial and non-relational baselines.
ER -
APA
Torkamani, M. & Lowd, D. (2013). Convex Adversarial Collective Classification. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):642-650. Available from https://proceedings.mlr.press/v28/torkamani13.html.

Related Material