I will try to describe the steps I took to make the algorithm work in practice.
When you want to classify data into two categories, few algorithms are better than SVM.
It usually divides the data into two sets by finding a "line" that best separates the points. It can perform linear classification (separating the sets with a straight line) or nonlinear classification (separating them with a curve). This "separator" is called a hyperplane.
Picture 1 - Linear hyperplane separator
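To make the idea concrete, here is a minimal sketch using scikit-learn (an assumption on my part; the original does not name a library): a linear SVM finding the straight hyperplane between two small clusters.

```python
from sklearn.svm import SVC

# Four points in two linearly separable clusters.
X = [[0, 0], [0, 1], [2, 2], [2, 3]]
y = [0, 0, 1, 1]

# A linear kernel finds the straight "line" (hyperplane) that
# best separates the two sets.
clf = SVC(kernel="linear").fit(X, y)

# New points are classified by which side of the hyperplane they fall on.
print(clf.predict([[0, 0.5], [2, 2.5]]))  # → [0 1]
```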
Before you even start running the algorithm, the first thing needed is to normalize the features. SVM classifies data based on features, and these should be chosen by analyzing the dataset and finding what best represents it (as is done with SIFT and SURF for images). When features are not normalized, the ones with larger absolute values have a greater effect on the hyperplane margin. This means that some features are going to influence your model more than others. If that is not what you want, make sure all data features share the same range.
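A small sketch of the normalization step, again assuming scikit-learn: rescaling every feature to the same [0, 1] range so that no feature dominates the margin just because of its magnitude.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two features on very different scales: e.g. income (tens of
# thousands) and age (tens). Unscaled, the first column would
# dominate the hyperplane margin.
X = np.array([[50_000.0, 25.0],
              [82_000.0, 40.0],
              [61_000.0, 33.0]])

# MinMaxScaler maps each feature independently to [0, 1].
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)

print(X_scaled.min(axis=0))  # each column now starts at 0
print(X_scaled.max(axis=0))  # and ends at 1
```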
Another important point to check is the parameters of the SVM algorithm itself. One parameter that has to be tuned in practice to better fit the hyperplane to the data is γ (gamma), which controls how nonlinear the hyperplane can be. The smaller γ is, the more the hyperplane is going to look like a straight line. If γ is too large, the hyperplane becomes very curvy: it might delineate the training data too well and lead to overfitting.
Picture 2 - Large value of γ
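The effect of γ can be seen directly on training accuracy. A sketch assuming scikit-learn's RBF-kernel SVC (the dataset and γ values here are my own choices for illustration): a large γ chases individual training points and fits the noise almost perfectly, which is exactly the overfitting risk described above.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Noisy two-class data; noise makes a perfectly clean
# separation impossible without overfitting.
X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

# Small gamma: the boundary stays smooth, close to a straight line.
smooth = SVC(kernel="rbf", gamma=0.1).fit(X, y)
# Large gamma: the boundary curves around individual points.
wiggly = SVC(kernel="rbf", gamma=100.0).fit(X, y)

# The large-gamma model scores higher on the data it trained on,
# a typical sign it is memorizing noise rather than generalizing.
print(smooth.score(X, y))
print(wiggly.score(X, y))
```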
Another parameter to be tuned to help improve accuracy is C. It controls the size of the "soft margin" of SVM, a "gray" area around the hyperplane. Points are allowed to fall inside this soft margin, or even on the wrong side of it, at a penalty whose weight is set by C. The smaller the value of C, the larger the soft margin.
Picture 3 - Large values of C
Picture 4 - Small values of C
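One way to observe the soft margin growing as C shrinks, again sketched with scikit-learn (my assumption) on the same kind of noisy data: points inside the margin become support vectors, so a small C produces many more of them than a large C. In practice γ and C are usually tuned together, for example with scikit-learn's GridSearchCV.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

# Small C -> wide soft margin: many points end up inside the
# margin, so many points become support vectors.
loose = SVC(kernel="rbf", C=0.01).fit(X, y)

# Large C -> narrow soft margin: margin violations are punished
# hard, leaving far fewer support vectors.
strict = SVC(kernel="rbf", C=100.0).fit(X, y)

print(loose.n_support_.sum(), strict.n_support_.sum())
```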
More about SVM