An approximate approach for training polynomial kernel SVMs in linear time

Yu Chieh Wu, Jie Chi Yang, Yue Shi Lee

Research output: Contribution to journal › Conference article › peer-review


Abstract

Kernel methods such as support vector machines (SVMs) have attracted a great deal of attention in the machine learning and natural language processing (NLP) communities. Polynomial kernel SVMs have shown very competitive accuracy on many NLP problems, such as part-of-speech tagging and chunking. However, these methods are usually too inefficient to be applied to large datasets or used in real-time settings. In this paper, we propose an approximate method that emulates the polynomial kernel using efficient data mining approaches. To avoid testing-time complexity that scales exponentially with the polynomial degree d, we also present a new method for speeding up SVM classification that is independent of d. The experimental results show that our method is 16.94 times faster in training and 450 times faster in testing than the traditional polynomial kernel.
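The general idea behind this line of work can be illustrated with a small sketch: map the input into explicit low-degree conjunction features so that a linear SVM, for which linear-time solvers exist, reproduces the behavior of a polynomial kernel SVM, and prediction collapses to a single dot product with a precomputed weight vector. The snippet below uses scikit-learn's PolynomialFeatures and LinearSVC purely as illustrative stand-ins, not the authors' implementation; the paper's actual method additionally prunes the expanded feature space with data mining techniques, since a full degree-d expansion over n features grows as O(n^d).

```python
# Minimal sketch (assumed setup, not the paper's code): linearize a
# degree-2 polynomial kernel by expanding explicit feature conjunctions,
# then train a linear SVM on the expanded representation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Explicit degree-2 mapping: phi(x) contains all single features, squares,
# and pairwise products, so a linear model over phi(x) behaves like a
# degree-2 polynomial kernel SVM.
phi = PolynomialFeatures(degree=2, include_bias=False)
X_poly = phi.fit_transform(X)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_poly, y)

# Testing cost is one dot product with the precomputed weight vector,
# independent of the number of support vectors; the kernel degree only
# affects how phi(x) is built, not the final classification step.
w, b = clf.coef_.ravel(), clf.intercept_[0]
scores = X_poly @ w + b
preds = (scores > 0).astype(int)
assert np.array_equal(preds, clf.predict(X_poly))
```

The design choice this sketch highlights is the trade-off the paper targets: explicit expansion makes training and testing linear in the number of active features, at the price of a feature space that must be kept tractable, which is where the approximate, data-mining-based selection comes in.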

Original language: English
Pages (from-to): 65-68
Number of pages: 4
Journal: Proceedings of the Annual Meeting of the Association for Computational Linguistics
State: Published - 2007
Event: 45th Annual Meeting of the Association for Computational Linguistics, ACL 2007 - Prague, Czech Republic
Duration: 25 Jun 2007 - 27 Jun 2007
