Co-Training with Insufficient Views

Wei Wang, Zhi-Hua Zhou
Proceedings of the 5th Asian Conference on Machine Learning, PMLR 29:467-482, 2013.

Abstract

Co-training is a well-known semi-supervised learning paradigm that exploits unlabeled data with two views. Most previous theoretical analyses of co-training assume that each view is sufficient to correctly predict the label. However, this assumption is rarely met in real applications due to feature corruption or feature noise. In this paper, we present a theoretical analysis of co-training when neither view is sufficient. We define the diversity between the two views with respect to the confidence of prediction and prove that if the two views have large diversity, co-training can improve learning performance by exploiting unlabeled data even with insufficient views. We also discuss the relationship between view insufficiency and diversity, and give some implications for understanding the difference between co-training and co-regularization.
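
For readers unfamiliar with the paradigm, the sketch below shows a standard co-training loop in the style of Blum and Mitchell (1998). It is illustrative only: the classifier choice, the confidence threshold, and all names (e.g. co_train, confidence_threshold) are assumptions, and it is not the specific procedure or diversity measure analyzed in this paper.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab,
             rounds=10, confidence_threshold=0.9):
    """Illustrative co-training sketch: train one classifier per view;
    each round, pseudo-label the unlabeled examples on which either
    view is confident and add them to the shared training set."""
    h1, h2 = GaussianNB(), GaussianNB()
    X1, X2, y = X1_lab.copy(), X2_lab.copy(), y_lab.copy()
    unlab = np.arange(len(X1_unlab))  # indices of still-unlabeled examples
    for _ in range(rounds):
        if len(unlab) == 0:
            break
        h1.fit(X1, y)
        h2.fit(X2, y)
        # Confidence of each view's classifier on the remaining pool.
        p1 = h1.predict_proba(X1_unlab[unlab]).max(axis=1)
        p2 = h2.predict_proba(X2_unlab[unlab]).max(axis=1)
        picked = (p1 >= confidence_threshold) | (p2 >= confidence_threshold)
        if not picked.any():
            break
        # Label each picked example with the more confident view's prediction.
        idx = unlab[picked]
        use_h1 = p1[picked] >= p2[picked]
        pseudo = np.where(use_h1,
                          h1.predict(X1_unlab[idx]),
                          h2.predict(X2_unlab[idx]))
        X1 = np.vstack([X1, X1_unlab[idx]])
        X2 = np.vstack([X2, X2_unlab[idx]])
        y = np.concatenate([y, pseudo])
        unlab = unlab[~picked]
    return h1, h2

Each view's classifier teaches the other through its most confident pseudo-labels. The paper's contribution is to show that this kind of exchange can still help when neither view alone suffices to predict the label, provided the two views have large diversity in the confidence of their predictions.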

Cite this Paper


BibTeX
@InProceedings{pmlr-v29-Wang13b,
  title     = {Co-Training with Insufficient Views},
  author    = {Wang, Wei and Zhou, Zhi-Hua},
  booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
  pages     = {467--482},
  year      = {2013},
  editor    = {Ong, Cheng Soon and Ho, Tu Bao},
  volume    = {29},
  series    = {Proceedings of Machine Learning Research},
  address   = {Australian National University, Canberra, Australia},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v29/Wang13b.pdf},
  url       = {https://proceedings.mlr.press/v29/Wang13b.html}
}
