"Unexpected Regression Performance Using Cross-Validation"

ikhwan · Member · Posts: 5 · Contributor II
edited May 2019 in Help
Hi all,

I tried to do SVM regression using LibSVM. When I measure the performance of this learner without cross-validation (using the whole dataset as the training set), it gives the following results:

absolute_error: 8618.717 +/- 19520.661
relative_error: 102.25% +/- 631.07%
correlation: 0.873
prediction_average: 35706.987 +/- 42654.440


However, when I add 10-fold cross-validation to the workflow, I get really different results:

absolute_error: 28596.955 +/- 3938.106 (mikro: 28591.849 +/- 30064.573)
relative_error: 395.80% +/- 192.38% (mikro: 395.36% +/- 1,329.27%)
correlation: 0.320 +/- 0.126 (mikro: 0.303)
prediction_average: 35707.687 +/- 5282.379 (mikro: 35706.987 +/- 42654.440)


Is it normal to face this kind of situation, especially when using SVM regression?
Is there any way to improve this performance?

FYI, the dataset consists of around 500 instances with 80 attributes. Originally it had only 6 attributes: two of them are textual, which I converted to word vectors (TF-IDF), and the rest are nominal, which I converted into binary attributes.
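For reference, the preprocessing described above (text columns to TF-IDF word vectors, nominal columns to binary dummies) can be sketched outside RapidMiner with scikit-learn. The column names and toy rows below are made up purely for illustration; they are not from the actual dataset:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in rows; the real dataset has two textual and several nominal attributes.
df = pd.DataFrame({
    "title": ["cheap flat downtown", "large house garden", "cheap house"],
    "city": ["Jakarta", "Bandung", "Jakarta"],
})

pre = ColumnTransformer([
    ("tfidf", TfidfVectorizer(), "title"),   # text column -> TF-IDF word vector
    ("onehot", OneHotEncoder(), ["city"]),   # nominal column -> binary dummies
])

X = pre.fit_transform(df)
print(X.shape)  # 6 distinct words + 2 city dummies = 8 columns
```

This mirrors how the 6 original attributes can blow up to ~80 numeric columns once text and nominals are expanded.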

For the learner, I use epsilon-SVR with gamma = 1.0 and C = 100000.0. Those parameters are the result of an optimization process.
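A setting like gamma = 1.0 with C = 100000.0 on ~80 standardized attributes is exactly the kind of combination that can memorize the training set while generalizing poorly, which would explain the gap between the two performance measurements. The effect can be reproduced on synthetic data of roughly the same shape; this is a scikit-learn sketch, not the actual RapidMiner process or dataset:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the ~500 x 80 dataset described above.
X, y = make_regression(n_samples=500, n_features=80, noise=10.0, random_state=0)

# epsilon-SVR with the hyperparameters from the post: gamma=1.0, C=100000.0.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", gamma=1.0, C=100000.0))

# Resubstitution: fit and score on the full data (what "no cross-validation" measures).
resub_score = model.fit(X, y).score(X, y)

# 10-fold cross-validation: score only on held-out folds.
cv_scores = cross_val_score(model, X, y, cv=10, scoring="r2")

print(f"resubstitution R^2: {resub_score:.3f}")
print(f"10-fold CV R^2:     {cv_scores.mean():.3f}")
```

With gamma = 1.0 in 80 dimensions the RBF kernel matrix is close to the identity, so a huge C lets the model fit every training point almost exactly while predicting nearly a constant for unseen points, which is precisely a near-perfect resubstitution score next to a near-zero cross-validated one.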

Thanks in advance.

Cheers,
Ikhwan

This is the XML file for the cross-validation:

Answers

  • haddock · Member · Posts: 849 · Maven
    Hi there,

    A question, when you say...
    For the learner, I use epsilon-SVR with gamma = 1.0 and C = 100000.0. Those parameters are the result of an optimization process.
    How did you do the optimisation, and on what data? The reason I ask is that overtraining with SVMs is a well-known pitfall, and this issue keeps popping up.
  • ikhwan · Member · Posts: 5 · Contributor II
    Many thanks for your reply. Yeah, since I only have limited data, I used the same data for the optimization process.
    Do you have any suggestions for this situation? Should I split my data, and if so, how much should I set aside for optimization?

    For optimization, I just follow one workflow discussed previously in the forum. This is the XML file:

    <description> <p>Usually different operators have many parameters and it is not clear which parameter values are best for the learning task at hand. The parameter optimization operator helps to find an optimal parameter set for the used operators.</p> <p>The inner cross-validation estimates the performance for each parameter set. In this process two parameters of the SVM are tuned. The result can be plotted in 3D (using gnuplot) or in color mode.</p> <p>Try the following: <ul> <li>Start the process. The result is the best parameter set and the performance which was achieved with this parameter set.</li> <li>Edit the parameter list of the ParameterOptimization operator to find another parameter set.</li> </ul> </p> </description>

    <过程扩展= " true "高度= " 390 "宽度= " 585 " >

    <连接from_op = " XValidation”摇来摇去m_port="averagable 1" to_port="result 1"/>
