Geometry-aware stationary subspace analysis

Inbal Horev, Florian Yger, Masashi Sugiyama
Proceedings of The 8th Asian Conference on Machine Learning, PMLR 63:430-444, 2016.

Abstract

In many real-world applications, data exhibits non-stationarity, i.e., its distribution changes over time. One approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain-computer interface (BCI) data analysis, this is sometimes achieved using stationary subspace analysis (SSA). The classic SSA method finds a matrix that projects the data onto a stationary subspace by optimizing a cost function based on a matrix divergence. In this work we present an alternative method for SSA based on a symmetrized version of this matrix divergence. We show that this frames the problem in terms of distances between symmetric positive definite (SPD) matrices, suggesting a geometric interpretation of the problem. Building on this geometric viewpoint, we introduce and analyze a method that exploits the geometry of the SPD matrix manifold and the invariance properties of its metrics. Most notably, we show that these invariances remove the need to whiten the input matrices, a common step in many SSA methods that often introduces error. We demonstrate the usefulness of our technique in experiments on both synthetic and real-world data.
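
The invariance the abstract appeals to can be made concrete. Under the affine-invariant Riemannian metric on the SPD manifold, the distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F satisfies d(W A W^T, W B W^T) = d(A, B) for any invertible W. Since whitening is exactly such a congruence transform, it cannot change distances under this metric and can therefore be skipped. Below is a minimal numerical sketch of this property, not the paper's algorithm; the helper names airm_distance and random_spd are ours, for illustration only.

import numpy as np

def airm_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    #   d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F = sqrt(sum_i log(lambda_i)^2),
    # where lambda_i are the generalized eigenvalues of (B, A).
    w, V = np.linalg.eigh(A)
    A_isqrt = (V * w ** -0.5) @ V.T      # A^{-1/2} via eigendecomposition
    M = A_isqrt @ B @ A_isqrt            # SPD whenever A and B are SPD
    return np.linalg.norm(np.log(np.linalg.eigvalsh(M)))

rng = np.random.default_rng(0)

def random_spd(n):
    # A random, well-conditioned SPD matrix.
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

A, B = random_spd(4), random_spd(4)
W = rng.standard_normal((4, 4))          # invertible almost surely; think "whitening matrix"

# Congruence invariance: transforming both inputs by W leaves the distance
# unchanged, so whitening the data beforehand is unnecessary under this metric.
print(airm_distance(A, B))
print(airm_distance(W @ A @ W.T, W @ B @ W.T))   # agrees up to numerical error

The invariance follows because W A W^T and W B W^T have the same generalized eigenvalues as the pair (A, B): (W A W^T)^{-1}(W B W^T) is similar to A^{-1} B.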

Cite this Paper


BibTeX
@InProceedings{pmlr-v63-Horev84,
  title     = {Geometry-aware stationary subspace analysis},
  author    = {Horev, Inbal and Yger, Florian and Sugiyama, Masashi},
  booktitle = {Proceedings of The 8th Asian Conference on Machine Learning},
  pages     = {430--444},
  year      = {2016},
  editor    = {Durrant, Robert J. and Kim, Kee-Eung},
  volume    = {63},
  series    = {Proceedings of Machine Learning Research},
  address   = {The University of Waikato, Hamilton, New Zealand},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v63/Horev84.pdf},
  url       = {https://proceedings.mlr.press/v63/Horev84.html},
  abstract  = {In many real-world applications data exhibits non-stationarity, i.e., its distribution changes over time. One approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain computer interface (BCI) data analysis this is sometimes achieved using stationary subspace analysis (SSA). The classic SSA method finds a matrix that projects the data onto a stationary subspace by optimizing a cost function based on a matrix divergence. In this work we present an alternative method for SSA based on a symmetrized version of this matrix divergence. We show that this frames the problem in terms of distances between symmetric positive definite (SPD) matrices, suggesting a geometric interpretation of the problem. Stemming from this geometric viewpoint, we introduce and analyze a method which utilizes the geometry of the SPD matrix manifold and the invariance properties of its metrics. Most notably we show that these invariances alleviate the need to whiten the input matrices, a common step in many SSA methods which often introduces error. We demonstrate the usefulness of our technique in experiments on both synthetic and real-world data.}
}
Endnote
%0 Conference Paper
%T Geometry-aware stationary subspace analysis
%A Inbal Horev
%A Florian Yger
%A Masashi Sugiyama
%B Proceedings of The 8th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Robert J. Durrant
%E Kee-Eung Kim
%F pmlr-v63-Horev84
%I PMLR
%P 430--444
%U https://proceedings.mlr.press/v63/Horev84.html
%V 63
%X In many real-world applications data exhibits non-stationarity, i.e., its distribution changes over time. One approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain computer interface (BCI) data analysis this is sometimes achieved using stationary subspace analysis (SSA). The classic SSA method finds a matrix that projects the data onto a stationary subspace by optimizing a cost function based on a matrix divergence. In this work we present an alternative method for SSA based on a symmetrized version of this matrix divergence. We show that this frames the problem in terms of distances between symmetric positive definite (SPD) matrices, suggesting a geometric interpretation of the problem. Stemming from this geometric viewpoint, we introduce and analyze a method which utilizes the geometry of the SPD matrix manifold and the invariance properties of its metrics. Most notably we show that these invariances alleviate the need to whiten the input matrices, a common step in many SSA methods which often introduces error. We demonstrate the usefulness of our technique in experiments on both synthetic and real-world data.
RIS
TY - CPAPER
TI - Geometry-aware stationary subspace analysis
AU - Inbal Horev
AU - Florian Yger
AU - Masashi Sugiyama
BT - Proceedings of The 8th Asian Conference on Machine Learning
DA - 2016/11/20
ED - Robert J. Durrant
ED - Kee-Eung Kim
ID - pmlr-v63-Horev84
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 63
SP - 430
EP - 444
L1 - http://proceedings.mlr.press/v63/Horev84.pdf
UR - https://proceedings.mlr.press/v63/Horev84.html
AB - In many real-world applications data exhibits non-stationarity, i.e., its distribution changes over time. One approach to handling non-stationarity is to remove or minimize it before attempting to analyze the data. In the context of brain computer interface (BCI) data analysis this is sometimes achieved using stationary subspace analysis (SSA). The classic SSA method finds a matrix that projects the data onto a stationary subspace by optimizing a cost function based on a matrix divergence. In this work we present an alternative method for SSA based on a symmetrized version of this matrix divergence. We show that this frames the problem in terms of distances between symmetric positive definite (SPD) matrices, suggesting a geometric interpretation of the problem. Stemming from this geometric viewpoint, we introduce and analyze a method which utilizes the geometry of the SPD matrix manifold and the invariance properties of its metrics. Most notably we show that these invariances alleviate the need to whiten the input matrices, a common step in many SSA methods which often introduces error. We demonstrate the usefulness of our technique in experiments on both synthetic and real-world data.
ER -
APA
Horev, I., Yger, F. & Sugiyama, M. (2016). Geometry-aware stationary subspace analysis. Proceedings of The 8th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 63:430-444. Available from https://proceedings.mlr.press/v63/Horev84.html.