A Practical Guide to Support Vector Classification
Abstract
The support vector machine (SVM) is a popular technique for classification.
However, beginners who are not familiar with SVM often get unsatisfactory
results because they miss some easy but significant steps. In this guide, we
propose a simple procedure which usually gives reasonable results.
1 Introduction
SVM (support vector machine) is a relatively new technique for data classification.
Although it is considered easier to use than neural networks, users who are not
familiar with SVM often get unsatisfactory results at first. Here we propose a
"cookbook" approach which usually gives reasonable results.
Note that this guide is not for SVM researchers nor do we guarantee the best
accuracy. We also do not intend to solve challenging or difficult problems. Our
purpose is to give SVM novices a recipe to obtain acceptable results fast and easily.
Although users do not need to understand the underlying theory of SVM, we briefly
introduce the basics necessary for explaining our procedure. A classification task
usually involves training and testing data, each consisting of some data instances.
Each instance in the training set contains one "target value" (the class label) and
several "attributes" (features). The goal of SVM is to produce a model which
predicts the target values of instances in the testing set, given only their
attributes.
Given a training set of instance-label pairs (xi, yi), i = 1, . . . , l, where xi ∈ R^n and
y ∈ {1, −1}^l, the support vector machines (SVM) (Boser, Guyon, and Vapnik 1992;
Cortes and Vapnik 1995) require the solution of the following optimization problem:
$$
\begin{aligned}
\min_{w,\,b,\,\xi} \quad & \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \\
\text{subject to} \quad & y_i \left( w^T \phi(x_i) + b \right) \ge 1 - \xi_i, \qquad (1) \\
& \xi_i \ge 0.
\end{aligned}
$$
Table 1: Problem characteristics and performance comparisons.
Here training vectors xi are mapped into a higher (maybe infinite) dimensional space
by the function φ. Then SVM finds a linear separating hyperplane with the maximal
margin in this higher dimensional space. C > 0 is the penalty parameter of the error
term. Furthermore, K(xi, xj) ≡ φ(xi)^T φ(xj) is called the kernel function. Though
new kernels are being proposed by researchers, beginners may find the following
four basic kernels in SVM books:

• linear: K(xi, xj) = xi^T xj.
• polynomial: K(xi, xj) = (γ xi^T xj + r)^d, γ > 0.
• radial basis function (RBF): K(xi, xj) = exp(−γ‖xi − xj‖²), γ > 0.
• sigmoid: K(xi, xj) = tanh(γ xi^T xj + r).

Here, γ, r, and d are kernel parameters.
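For concreteness, here is a minimal NumPy sketch of these four kernels for a pair
of feature vectors; the parameter defaults below are illustrative assumptions, not
recommendations.

import numpy as np

def linear(x, y):
    # K(x, y) = x^T y
    return x @ y

def polynomial(x, y, gamma=1.0, r=0.0, d=3):
    # K(x, y) = (gamma * x^T y + r)^d, gamma > 0
    return (gamma * (x @ y) + r) ** d

def rbf(x, y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2), gamma > 0
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid(x, y, gamma=1.0, r=0.0):
    # K(x, y) = tanh(gamma * x^T y + r)
    return np.tanh(gamma * (x @ y) + r)

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
print(rbf(x, y, gamma=0.5))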
1.2 Proposed Procedure
Many beginners use the following procedure now:

• Transform data to the format of an SVM package
• Randomly try a few kernels and parameters
• Test

We propose that beginners try the following procedure first (a rough code sketch
follows the list):

• Transform data to the format of an SVM package
• Conduct simple scaling on the data
• Consider the RBF kernel K(x, y) = e^(−γ‖x−y‖²)
• Use cross-validation to find the best parameters C and γ
• Use the best parameters C and γ to train the whole training set5
• Test
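As a rough, hypothetical illustration of this procedure in code (the guide itself
uses LIBSVM's command-line tools svm-scale, grid.py, svm-train, and svm-predict),
a scikit-learn sketch might look as follows; the data variables are assumed to be
loaded already.

from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# X_train, y_train, X_test, y_test are assumed to be loaded already.
model = make_pipeline(
    MinMaxScaler(feature_range=(-1, 1)),  # simple scaling
    SVC(kernel="rbf"),                    # RBF kernel
)
grid = {
    "svc__C": [2.0 ** k for k in range(-5, 16, 2)],      # C = 2^-5, ..., 2^15
    "svc__gamma": [2.0 ** k for k in range(-15, 4, 2)],  # gamma = 2^-15, ..., 2^3
}
search = GridSearchCV(model, grid, cv=5)  # 5-fold cross-validation
search.fit(X_train, y_train)              # refits the best (C, gamma) on all training data
print(search.score(X_test, y_test))       # test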
2 Data Preprocessing
2.1 Categorical Feature
SVM requires that each data instance is represented as a vector of real numbers.
Hence, if there are categorical attributes, we first have to convert them into numeric
data. We recommend using m numbers to represent an m-category attribute: only one
of the m numbers is one, and the others are zero. For example, a three-category
attribute such as {red, green, blue} can be represented as (0,0,1), (0,1,0), and (1,0,0).
Our experience indicates that if the number of values in an attribute is not too large,
this coding may be more stable than using a single number to represent a categorical
attribute.
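A minimal sketch of this coding in Python might look as follows (the helper name
one_hot and the NumPy usage are our own illustration):

import numpy as np

def one_hot(values, categories):
    # Represent an m-category attribute with m indicator numbers:
    # exactly one entry is 1 and the rest are 0.
    index = {c: i for i, c in enumerate(categories)}
    coded = np.zeros((len(values), len(categories)))
    for row, v in enumerate(values):
        coded[row, index[v]] = 1.0
    return coded

# e.g. the three-category attribute {red, green, blue}
print(one_hot(["blue", "green", "red"], ["red", "green", "blue"]))
# [[0. 0. 1.]
#  [0. 1. 0.]
#  [1. 0. 0.]]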
5 The best parameter might be affected by the size of the data set, but in practice the one
obtained from cross-validation is already suitable for the whole training set.
2.2 Scaling
Scaling the data before applying SVM is very important. Part 2 of Sarle's Neural
Networks FAQ (Sarle 1997) explains why we scale data when using neural networks,
and most of those considerations also apply to SVM.
The main advantage of scaling is to avoid attributes in greater numeric ranges dominating
those in smaller numeric ranges. Another advantage is to avoid numerical difficulties
during the calculation. Because kernel values usually depend on the inner products of
feature vectors, e.g. the linear kernel and the polynomial kernel, large attribute values
might cause numerical problems. We recommend linearly scaling each attribute to
the range [−1, +1] or [0, 1].
Of course we have to use the same method to scale testing data before testing.
For example, suppose that we scaled the first attribute of training data from [-10,
+10] to [-1, +1]. If the first attribute of testing data is lying in the range [-11, +8],
we must scale the testing data to [-1.1, +0.8].
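A minimal sketch of this two-step scaling, mirroring what svm-scale does with its
-s (save ranges) and -r (restore ranges) options; the helper names are our own:

import numpy as np

def fit_ranges(X_train):
    # Record per-attribute min and max on the training data only.
    return X_train.min(axis=0), X_train.max(axis=0)

def apply_ranges(X, lo, hi, lower=-1.0, upper=1.0):
    # Linearly map each attribute from [lo, hi] to [lower, upper].
    return lower + (upper - lower) * (X - lo) / (hi - lo)

X_train = np.array([[-10.0], [10.0]])
X_test = np.array([[-11.0], [8.0]])
lo, hi = fit_ranges(X_train)
print(apply_ranges(X_train, lo, hi))  # [[-1.], [1.]]
print(apply_ranges(X_test, lo, hi))   # [[-1.1], [0.8]] -- may exceed [-1, 1]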
3 Model Selection
Though there are only four common kernels mentioned in Section 1, we must decide
which one to try first. Then the penalty parameter C and kernel parameters are
chosen.
4 Note that the sigmoid kernel is not valid (i.e. not the inner product of two vectors)
under some parameters (Vapnik 1995).
Figure 1: An overfitting classifier and a better classifier. (a) Training data and an
overfitting classifier; (b) applying an overfitting classifier on testing data; (c) training
data and a better classifier; (d) applying a better classifier on testing data.
We recommend a "grid-search" on C and γ using cross-validation: various pairs of
(C, γ) values are tried, and the one with the best cross-validation accuracy is picked.
There are two reasons to prefer this simple approach. One is that, psychologically,
we may not feel safe using methods which avoid doing
an exhaustive parameter search by approximations or heuristics. The other reason is
that the computational time to find good parameters by grid-search is not much more
than that by advanced methods since there are only two parameters. Furthermore,
the grid-search can be easily parallelized because each (C, γ) pair is evaluated
independently, as the sketch below illustrates. Many advanced methods are iterative
processes, e.g. walking along a path, which can be difficult to parallelize.
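A minimal sketch of the loose grid of Figure 2, using scikit-learn's
cross-validation as a stand-in for grid.py; since every (C, γ) cell is independent,
the double loop could be farmed out to separate processes or machines:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def loose_grid_search(X, y, cv=5):
    # Try exponentially growing (C, gamma) pairs; keep the pair with the
    # best cross-validation accuracy. Each cell is independent of the rest.
    best_acc, best_C, best_gamma = -np.inf, None, None
    for log2c in range(-5, 16, 2):       # C = 2^-5, 2^-3, ..., 2^15
        for log2g in range(-15, 4, 2):   # gamma = 2^-15, 2^-13, ..., 2^3
            C, gamma = 2.0 ** log2c, 2.0 ** log2g
            acc = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=cv).mean()
            if acc > best_acc:
                best_acc, best_C, best_gamma = acc, C, gamma
    return best_C, best_gamma, best_acc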
Figure 2: Loose grid search on C = 2^−5, 2^−3, . . . , 2^15 and γ = 2^−15, 2^−13, . . . , 2^3.
Figure 3: Fine grid search on C = 2^1, 2^1.25, . . . , 2^5 and γ = 2^−7, 2^−6.75, . . . , 2^−3.
4 Discussion
In some situations, the proposed procedure is not good enough, so other techniques
such as feature selection may be needed. Such issues are beyond our consideration
here. Our experience indicates that the procedure works well for data which do not
have many features. If there are thousands of attributes, there may be a need to
choose a subset of them before giving the data to SVM.
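As one simple possibility (our illustration; this guide does not prescribe a
method), a univariate screen can keep only the highest-scoring attributes before
training:

from sklearn.feature_selection import SelectKBest, f_classif

# X_train, y_train, X_test are assumed to be loaded already.
selector = SelectKBest(f_classif, k=100)    # keep the 100 best-scoring attributes
X_train_small = selector.fit_transform(X_train, y_train)
X_test_small = selector.transform(X_test)   # apply the same selection to test data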
Acknowledgement
We thank all users of our SVM software LIBSVM and BSVM, who have helped us
identify possible difficulties encountered by beginners.
A Examples of the Proposed Procedure
In this appendix, we compare the accuracy obtained by the proposed procedure with
that obtained by typical beginner usage. Experiments are on the three problems
mentioned in Table 1, using the software LIBSVM (Chang and Lin 2001). For each
problem, we first list
the accuracy by direct training and testing. Secondly, we show the difference in
accuracy with and without scaling. From what has been discussed in Section 2.2,
the range of training set attributes must be saved so that we are able to restore
them while scaling the testing set. Thirdly, the accuracy by the proposed procedure
(scaling and then model selection) is presented. Finally, we demonstrate the use
of a tool in LIBSVM which does the whole procedure automatically. Note that a
parameter selection tool similar to the grid.py script presented below is available
in the R interface to LIBSVM (see the function tune).
• Astroparticle Physics
– Original sets with default parameters
$./svm-train train.1
$./svm-predict test.1 train.1.model test.1.predict
→ Accuracy = 66.925%
– Scaled sets with parameter selection
$./svm-train -c 2 -g 2 train.1.scale
$./svm-predict test.1.scale train.1.scale.model test.1.predict
→ Accuracy = 96.875%
– Using an automatic script
• Bioinformatics
– Original sets with default parameters
$./svm-train -v 5 train.2
→ Cross Validation Accuracy = 56.5217%
• Vehicle
– Original sets with default parameters
$./svm-train train.3
$./svm-predict test.3 train.3.model test.3.predict
→ Accuracy = 2.43902%
– Scaled sets with default parameters
$./svm-scale -l -1 -u 1 -s range3 train.3 > train.3.scale
$./svm-scale -r range3 test.3 > test.3.scale
$./svm-train train.3.scale
$./svm-predict test.3.scale train.3.scale.model test.3.predict
→ Accuracy = 12.1951%
– Scaled sets with parameter selection
$python grid.py train.3.scale
···
128.0 0.125 84.8753
(Best C=128.0, γ=0.125 with five-fold cross-validation rate=84.8753%)
$./svm-train -c 128 -g 0.125 train.3.scale
$./svm-predict test.3.scale train.3.scale.model test.3.predict
→ Accuracy = 87.8049%
– Using an automatic script
$python easy.py train.3 test.3
Scaling training data...
Cross validation...
Best c=128.0, g=0.125
Training...
Scaling testing data...
Testing...
Accuracy = 87.8049% (36/41) (classification)
References
Boser, B., I. Guyon, and V. Vapnik (1992). A training algorithm for optimal mar-
gin classifiers. In Proceedings of the Fifth Annual Workshop on Computational
Learning Theory.
Chang, C.-C. and C.-J. Lin (2001). LIBSVM: a library for support vector machines.
Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Cortes, C. and V. Vapnik (1995). Support-vector networks. Machine Learning 20,
273–297.
Gardy, J. L., C. Spencer, K. Wang, M. Ester, G. E. Tusnady, I. Simon, S. Hua,
K. deFays, C. Lambert, K. Nakai, and F. S. Brinkman (2003). PSORT-B: im-
proving protein subcellular localization prediction for gram-negative bacteria.
Nucleic Acids Research 31 (13), 3613–3617.
Keerthi, S. S. and C.-J. Lin (2003). Asymptotic behaviors of support vector ma-
chines with Gaussian kernel. Neural Computation 15 (7), 1667–1689.
Lin, H.-T. and C.-J. Lin (2003). A study on sigmoid kernels for SVM and the train-
ing of non-PSD kernels by SMO-type methods. Technical report, Department
of Computer Science and Information Engineering, National Taiwan University.
Available at http://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf.
Michie, D., D. J. Spiegelhalter, and C. C. Taylor (1994). Machine Learning, Neu-
ral and Statistical Classification. Englewood Cliffs, N.J.: Prentice Hall. Data
available at http://www.ncc.up.pt/liacc/ML/statlog/datasets.html.
Sarle, W. S. (1997). Neural Network FAQ. Periodic posting to the Usenet news-
group comp.ai.neural-nets.
Vapnik, V. (1995). The Nature of Statistical Learning Theory. New York, NY:
Springer-Verlag.