Title
Optimal and Non-Optimal Parallel Implementations of the Sequential Minimal Optimization Algorithm for Support Vector Machine Training
Date Issued
01 January 2004
Resource Type
Conference Proceeding
Author(s)
Abstract
Support Vector Machines (SVMs) are supervised learning systems that have gained wide acceptance within the pattern recognition community. Learning is based on structural risk minimization over a training set and leads to a quadratic programming problem. Due to the sample size, these optimization problems are very large, and training remains one of the most computationally expensive stages in Support Vector Machine design. This paper addresses this problem by exploring different approaches to parallel training. Several algorithms are developed and evaluated, including a non-optimal approach for parallel training based on the unbiased version of Platt's Sequential Minimal Optimization (SMO) algorithm, an improvement to a previous biased non-optimal parallel SMO, and an optimal solution combining SMO with the Chunking approach. Experimental results show that non-optimal solutions can achieve a speed-up of O(N²), according to the number of processors used, at the cost of an increase in the number of Support Vectors and a decrease in accuracy. The SMO-Chunking optimal solution achieves a much smaller speed-up, which depends on the ratio of support vectors to the total number of samples.
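The sequential baseline behind all the variants above is Platt's SMO, which repeatedly optimizes two Lagrange multipliers at a time in closed form while all others are held fixed. A minimal single-threaded sketch of that core pairwise update (with a linear kernel and a deliberately naive second-multiplier choice; this is an illustrative simplification, not the paper's parallel or chunked variants):

```python
import numpy as np

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=20, max_iter=500):
    """Simplified SMO for a linear SVM. X: (n, d) samples, y: labels in {-1, +1}.
    Returns the primal weight vector w and bias b."""
    n = X.shape[0]
    alpha = np.zeros(n)
    b = 0.0
    K = X @ X.T  # linear kernel matrix, precomputed
    passes, it = 0, 0
    while passes < max_passes and it < max_iter:
        it += 1
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]  # prediction error on sample i
            # only optimize pairs that violate the KKT conditions beyond tol
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = (i + 1) % n  # naive second choice (Platt uses a heuristic here)
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # box constraints keep both multipliers in [0, C]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]  # second derivative along the pair
                if eta >= 0:
                    continue
                # closed-form update of alpha_j, clipped to the feasible segment
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # threshold update so KKT conditions hold for the changed pair
                b1 = b - Ei - y[i]*(alpha[i]-ai_old)*K[i, i] - y[j]*(alpha[j]-aj_old)*K[i, j]
                b2 = b - Ej - y[i]*(alpha[i]-ai_old)*K[i, j] - y[j]*(alpha[j]-aj_old)*K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X  # recover primal weights for the linear kernel
    return w, b

# toy linearly separable data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = smo_train(X, y)
pred = np.sign(X @ w + b)
```

Because each pairwise step touches only two multipliers, the parallel schemes the abstract compares amount to different ways of partitioning the training set (or the multiplier pairs) across processors and reconciling the resulting local solutions.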
Start page
21
End page
26
Scopus EID
2-s2.0-85133012259
ISBN
9781618398185
Source
17th ISCA International Conference on Parallel and Distributed Computing Systems 2004, PDCS 2004
Resource of which it is part
17th ISCA International Conference on Parallel and Distributed Computing Systems 2004, PDCS 2004
Sources of information:
Directorio de Producción Científica
Scopus