Online Anomaly Detection under Adversarial Impact

Marius Kloft, Pavel Laskov
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:405-412, 2010.

Abstract

Security analysis of learning algorithms is gaining increasing importance, especially since they have become a target of deliberate obstruction in certain applications. Some security-hardened algorithms have been previously proposed for supervised learning; however, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method—online centroid anomaly detection—in the presence of adversarial noise. Our analysis addresses three key security-related issues: the derivation of an optimal attack, an analysis of its efficiency, and an analysis of its constraints. Experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds.
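The method analyzed in the paper, online centroid anomaly detection, keeps a running mean (centroid) of points considered normal and flags any point whose distance from the centroid exceeds a threshold. The sketch below is a minimal illustration of that idea, not the paper's exact formulation; the class name, the fixed radius, and the incremental-mean update rule are assumptions made for clarity.

```python
import numpy as np

class OnlineCentroidDetector:
    """Minimal sketch of online centroid anomaly detection:
    maintain the mean (centroid) of points accepted as normal
    and flag any point farther than `radius` from it."""

    def __init__(self, init_points, radius):
        self.center = np.mean(init_points, axis=0)
        self.n = len(init_points)
        self.radius = radius

    def score(self, x):
        # Anomaly score: Euclidean distance to the current centroid.
        return float(np.linalg.norm(x - self.center))

    def observe(self, x):
        """Return True if x is flagged as anomalous; otherwise
        accept x and shift the centroid toward it (incremental mean)."""
        if self.score(x) > self.radius:
            return True
        self.n += 1
        self.center = self.center + (x - self.center) / self.n
        return False
```

The online update is exactly what an adversary can exploit: by submitting points that stay just inside the acceptance radius, an attacker gradually drags the centroid toward a desired attack point, which is the poisoning strategy whose optimality, efficiency, and constraints the paper analyzes.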

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-kloft10a,
  title     = {Online Anomaly Detection under Adversarial Impact},
  author    = {Kloft, Marius and Laskov, Pavel},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {405--412},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/kloft10a/kloft10a.pdf},
  url       = {https://proceedings.mlr.press/v9/kloft10a.html},
  abstract  = {Security analysis of learning algorithms is gaining increasing importance, especially since they have become target of deliberate obstruction in certain applications. Some security-hardened algorithms have been previously proposed for supervised learning; however, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method—online centroid anomaly detection—in the presence of adversarial noise. Our analysis addresses three key security-related issues: derivation of an optimal attack, analysis of its efficiency and constraints. Experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds.}
}
Endnote
%0 Conference Paper
%T Online Anomaly Detection under Adversarial Impact
%A Marius Kloft
%A Pavel Laskov
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-kloft10a
%I PMLR
%P 405--412
%U https://proceedings.mlr.press/v9/kloft10a.html
%V 9
%X Security analysis of learning algorithms is gaining increasing importance, especially since they have become target of deliberate obstruction in certain applications. Some security-hardened algorithms have been previously proposed for supervised learning; however, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method—online centroid anomaly detection—in the presence of adversarial noise. Our analysis addresses three key security-related issues: derivation of an optimal attack, analysis of its efficiency and constraints. Experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds.
RIS
TY - CPAPER
TI - Online Anomaly Detection under Adversarial Impact
AU - Marius Kloft
AU - Pavel Laskov
BT - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA - 2010/03/31
ED - Yee Whye Teh
ED - Mike Titterington
ID - pmlr-v9-kloft10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 9
SP - 405
EP - 412
L1 - http://proceedings.mlr.press/v9/kloft10a/kloft10a.pdf
UR - https://proceedings.mlr.press/v9/kloft10a.html
AB - Security analysis of learning algorithms is gaining increasing importance, especially since they have become target of deliberate obstruction in certain applications. Some security-hardened algorithms have been previously proposed for supervised learning; however, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method—online centroid anomaly detection—in the presence of adversarial noise. Our analysis addresses three key security-related issues: derivation of an optimal attack, analysis of its efficiency and constraints. Experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds.
ER -
APA
Kloft, M. & Laskov, P. (2010). Online Anomaly Detection under Adversarial Impact. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:405-412. Available from https://proceedings.mlr.press/v9/kloft10a.html.