FPGASVM: A Framework for Accelerating Kernelized Support Vector Machine

Mudhar Bin Rabieah, Christos-Savvas Bouganis
Proceedings of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016, PMLR 53:68-84, 2016.

Abstract

Support Vector Machines (SVMs) are a powerful supervised learning method in machine learning. However, their applicability to large problems, where frequent retraining of the system is required, has been limited by the time-consuming training stage, whose computational cost scales quadratically with the number of examples. In this work, a complete FPGA-based system for kernelized SVM training using ensemble learning is presented. The proposed framework builds on the FPGA architecture, utilises a cascaded multiprecision training flow, exploits the heterogeneity within the training problem by tuning the number representation used, and supports ensemble training tuned to the internal memory structure so as to address very large datasets. Its performance evaluation shows that the proposed system achieves more than an order of magnitude better results than state-of-the-art CPU- and GPU-based implementations, providing a stepping stone for researchers and practitioners tackling large-scale SVM problems that require frequent retraining.
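The ensemble idea in the abstract can be sketched in plain Python: rather than training one kernel machine on the full dataset (where the kernel matrix alone grows quadratically with the number of examples), the data is split into partitions sized to a fixed memory budget, one learner is trained per partition, and predictions are combined by voting. This is an illustrative CPU sketch only, not the paper's FPGA implementation; a kernel perceptron stands in for the SVM solver, and all function names and parameters here are invented for the example.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF kernel. Materialising the full n-by-n kernel matrix
    # is what makes training cost scale quadratically with n.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kernel_perceptron(X, y, gamma=1.0, epochs=20):
    # Simple kernelized learner used here as a stand-in for an SVM solver.
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign(K[i] @ (alpha * y)) != y[i]:
                alpha[i] += 1.0
    return alpha

def predict(models, Xq, gamma=1.0):
    # Majority vote over the ensemble members.
    votes = np.zeros(len(Xq))
    for X, y, alpha in models:
        votes += np.sign(rbf_kernel(Xq, X, gamma) @ (alpha * y))
    return np.sign(votes)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)  # XOR-like, nonlinear labels

# Split into partitions small enough for a fixed memory budget and
# train one ensemble member per partition.
models = []
for idx in np.array_split(rng.permutation(len(X)), 3):
    Xi, yi = X[idx], y[idx]
    models.append((Xi, yi, train_kernel_perceptron(Xi, yi)))

acc = (predict(models, X) == y).mean()
```

Each member only ever touches its own partition's kernel matrix, which is the property the paper exploits to fit training into an FPGA's on-chip memory.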

Cite this Paper


BibTeX
@InProceedings{pmlr-v53-rabieah16,
  title = {FPGASVM: A Framework for Accelerating Kernelized Support Vector Machine},
  author = {Bin Rabieah, Mudhar and Bouganis, Christos-Savvas},
  booktitle = {Proceedings of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016},
  pages = {68--84},
  year = {2016},
  editor = {Fan, Wei and Bifet, Albert and Read, Jesse and Yang, Qiang and Yu, Philip S.},
  volume = {53},
  series = {Proceedings of Machine Learning Research},
  address = {San Francisco, California, USA},
  month = {14 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v53/rabieah16.pdf},
  url = {https://proceedings.mlr.press/v53/rabieah16.html},
  abstract = {Support Vector Machines (SVMs) are a powerful supervised learning method in machine learning. However, their applicability to large problems, where frequent retraining of the system is required, has been limited by the time-consuming training stage, whose computational cost scales quadratically with the number of examples. In this work, a complete FPGA-based system for kernelized SVM training using ensemble learning is presented. The proposed framework builds on the FPGA architecture, utilises a cascaded multiprecision training flow, exploits the heterogeneity within the training problem by tuning the number representation used, and supports ensemble training tuned to the internal memory structure so as to address very large datasets. Its performance evaluation shows that the proposed system achieves more than an order of magnitude better results than state-of-the-art CPU- and GPU-based implementations, providing a stepping stone for researchers and practitioners tackling large-scale SVM problems that require frequent retraining.}
}
Endnote
%0 Conference Paper
%T FPGASVM: A Framework for Accelerating Kernelized Support Vector Machine
%A Mudhar Bin Rabieah
%A Christos-Savvas Bouganis
%B Proceedings of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016
%C Proceedings of Machine Learning Research
%D 2016
%E Wei Fan
%E Albert Bifet
%E Jesse Read
%E Qiang Yang
%E Philip S. Yu
%F pmlr-v53-rabieah16
%I PMLR
%P 68--84
%U https://proceedings.mlr.press/v53/rabieah16.html
%V 53
%X Support Vector Machines (SVMs) are a powerful supervised learning method in machine learning. However, their applicability to large problems, where frequent retraining of the system is required, has been limited by the time-consuming training stage, whose computational cost scales quadratically with the number of examples. In this work, a complete FPGA-based system for kernelized SVM training using ensemble learning is presented. The proposed framework builds on the FPGA architecture, utilises a cascaded multiprecision training flow, exploits the heterogeneity within the training problem by tuning the number representation used, and supports ensemble training tuned to the internal memory structure so as to address very large datasets. Its performance evaluation shows that the proposed system achieves more than an order of magnitude better results than state-of-the-art CPU- and GPU-based implementations, providing a stepping stone for researchers and practitioners tackling large-scale SVM problems that require frequent retraining.
RIS
TY - CPAPER
TI - FPGASVM: A Framework for Accelerating Kernelized Support Vector Machine
AU - Mudhar Bin Rabieah
AU - Christos-Savvas Bouganis
BT - Proceedings of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016
DA - 2016/12/06
ED - Wei Fan
ED - Albert Bifet
ED - Jesse Read
ED - Qiang Yang
ED - Philip S. Yu
ID - pmlr-v53-rabieah16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 53
SP - 68
EP - 84
L1 - http://proceedings.mlr.press/v53/rabieah16.pdf
UR - https://proceedings.mlr.press/v53/rabieah16.html
AB - Support Vector Machines (SVMs) are a powerful supervised learning method in machine learning. However, their applicability to large problems, where frequent retraining of the system is required, has been limited by the time-consuming training stage, whose computational cost scales quadratically with the number of examples. In this work, a complete FPGA-based system for kernelized SVM training using ensemble learning is presented. The proposed framework builds on the FPGA architecture, utilises a cascaded multiprecision training flow, exploits the heterogeneity within the training problem by tuning the number representation used, and supports ensemble training tuned to the internal memory structure so as to address very large datasets. Its performance evaluation shows that the proposed system achieves more than an order of magnitude better results than state-of-the-art CPU- and GPU-based implementations, providing a stepping stone for researchers and practitioners tackling large-scale SVM problems that require frequent retraining.
ER -
APA
Bin Rabieah, M. & Bouganis, C.-S. (2016). FPGASVM: A Framework for Accelerating Kernelized Support Vector Machine. Proceedings of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016, in Proceedings of Machine Learning Research 53:68-84. Available from https://proceedings.mlr.press/v53/rabieah16.html.

Related Material