Statistical Unfolded Logic Learning

Wang-Zhou Dai, Zhi-Hua Zhou
Asian Conference on Machine Learning, PMLR 45:349-361, 2016.

Abstract

During the past decade, Statistical Relational Learning (SRL) and Probabilistic Inductive Logic Programming (PILP), owing to their strength in capturing structure information, have attracted much attention for learning relational models such as weighted logic rules. Typically, a generative model is assumed for the structured joint distribution, and the learning process is carried out in an enormous relational space. In this paper, we propose a new framework, Statistical Unfolded Logic (SUL) learning. In contrast to learning rules directly in the relational space, SUL propositionalizes the structure information into an attribute-value data set, so that statistical discriminative learning, which is much more efficient than generative relational learning, can be applied. In addition to achieving better generalization performance, SUL is able to perform predicate invention, which is hard to realize with traditional SRL and PILP approaches. Experiments on real tasks show that our proposed approach is superior to state-of-the-art weighted rule learning approaches.
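To make the idea of propositionalization concrete, the following is a minimal, hypothetical sketch (not the authors' SUL procedure): relational facts are flattened into boolean attribute-value features, and an ordinary discriminative learner is then trained on the resulting table. The facts, feature patterns, and labels below are invented purely for illustration, and scikit-learn's LogisticRegression stands in for whichever statistical learner one might plug in.

# Illustrative sketch of propositionalization followed by discriminative learning.
# This is NOT the SUL algorithm from the paper; it only shows the general idea of
# turning relational background facts into an attribute-value table on which a
# standard statistical learner can be trained. All facts and labels are made up.
from sklearn.linear_model import LogisticRegression

# Background knowledge: ground facts such as atom(m1, a1, carbon), bond(m1, a1, a2).
facts = {
    ("atom", "m1", "a1", "carbon"), ("atom", "m1", "a2", "oxygen"),
    ("bond", "m1", "a1", "a2"),
    ("atom", "m2", "b1", "carbon"), ("atom", "m2", "b2", "carbon"),
    ("bond", "m2", "b1", "b2"),
}

# Hand-written relational features: each checks whether a simple pattern holds
# for a given example (here, a molecule identifier).
def has_atom(kind):
    return lambda mol: any(f[0] == "atom" and f[1] == mol and f[3] == kind for f in facts)

def has_bond_between(kind1, kind2):
    def check(mol):
        atoms = {f[2]: f[3] for f in facts if f[0] == "atom" and f[1] == mol}
        return any(
            f[0] == "bond" and f[1] == mol
            and {atoms.get(f[2]), atoms.get(f[3])} == {kind1, kind2}
            for f in facts
        )
    return check

features = [has_atom("carbon"), has_atom("oxygen"), has_bond_between("carbon", "oxygen")]

# Propositionalization: each relational example becomes a flat boolean vector.
examples = ["m1", "m2"]
labels = [1, 0]                                          # hypothetical class labels
X = [[int(feat(e)) for feat in features] for e in examples]

# Any statistical discriminative learner can now be applied to the flat table.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))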

Cite this Paper


BibTeX
@InProceedings{pmlr-v45-Dai15,
  title     = {Statistical Unfolded Logic Learning},
  author    = {Dai, Wang-Zhou and Zhou, Zhi-Hua},
  booktitle = {Asian Conference on Machine Learning},
  pages     = {349--361},
  year      = {2016},
  editor    = {Holmes, Geoffrey and Liu, Tie-Yan},
  volume    = {45},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hong Kong},
  month     = {20--22 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v45/Dai15.pdf},
  url       = {https://proceedings.mlr.press/v45/Dai15.html}
}
EndNote
%0 Conference Paper
%T Statistical Unfolded Logic Learning
%A Wang-Zhou Dai
%A Zhi-Hua Zhou
%B Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Geoffrey Holmes
%E Tie-Yan Liu
%F pmlr-v45-Dai15
%I PMLR
%P 349--361
%U https://proceedings.mlr.press/v45/Dai15.html
%V 45
RIS
TY  - CPAPER
TI  - Statistical Unfolded Logic Learning
AU  - Wang-Zhou Dai
AU  - Zhi-Hua Zhou
BT  - Asian Conference on Machine Learning
DA  - 2016/02/25
ED  - Geoffrey Holmes
ED  - Tie-Yan Liu
ID  - pmlr-v45-Dai15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 45
SP  - 349
EP  - 361
L1  - http://proceedings.mlr.press/v45/Dai15.pdf
UR  - https://proceedings.mlr.press/v45/Dai15.html
ER  -
APA
Dai, W. & Zhou, Z. (2016). Statistical Unfolded Logic Learning. Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 45:349-361. Available from https://proceedings.mlr.press/v45/Dai15.html.
