"Optimization of SVM"
Hi,
I have been spending (or maybe wasting) a while on SVM optimization. The question is more or less the same as http://rapid-i.com/rapidforum/index.php/topic,1573.0.html from the Rapid-i forum.
The idea is (a Python sketch of the same steps follows the XML below):
1. generate data (binomial label)
2. transform the data
3. optimize the parameters with X-Validation, with AUC as the main criterion
4. draw the ROC curve for the best parameters and show the AUC
But when I run the process, the output is always negative (the classifier assigns everything to the negative class). This really confuses me... Thanks in advance!
<operator activated="true" class="process" expanded="true" name="Root">
  <parameter key="process_duration_for_mail" value="30"/>
  <process expanded="true">
    <operator activated="true" class="generate_data" expanded="true" height="60" name="TrainData" width="90" x="45" y="30"/>
    <operator activated="true" class="normalize" expanded="true" height="94" name="Ztransformation" width="90" x="180" y="30"/>
    <operator activated="true" class="remember" expanded="true" height="60" name="IOStorer" width="90" x="313" y="30"/>
    <operator activated="true" class="optimize_parameters_grid" expanded="true" height="148" name="ParameterOptimization" width="90" x="313" y="120">
      <process expanded="true">
        <operator activated="true" class="x_validation" expanded="true" height="112" name="Validation" width="90" x="112" y="30">
          <process expanded="true">
            <operator activated="true" class="support_vector_machine_libsvm" expanded="true" height="76" name="Training" width="90" x="82" y="30"/>
          </process>
          <process expanded="true">
            <operator activated="true" class="apply_model" expanded="true" height="76" name="Test" width="90" x="45" y="30"/>
            <operator activated="true" class="performance_binominal_classification" expanded="true" height="76" name="Performance (2)" width="90" x="112" y="210"/>
          </process>
        </operator>
        <operator activated="true" class="log" expanded="true" height="112" name="Log" width="90" x="313" y="120"/>
      </process>
    </operator>
    <connect from_op="ParameterOptimization" from_port="performance" to_port="result 1"/>
    <connect from_op="ParameterOptimization" from_port="parameter" to_port="result 2"/>
    <connect from_op="ParameterOptimization" from_port="result 1" to_port="result 3"/>
    <connect from_op="ParameterOptimization" from_port="result 2" to_port="result 4"/>
    <connect from_op="ParameterOptimization" from_port="result 3" to_port="result 5"/>
  </process>
</operator>
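For readers without RapidMiner at hand, here is a minimal sketch of the same four steps in Python with scikit-learn; the library, the parameter grid, and the data sizes are assumptions for illustration, not part of the original process:

import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 1. generate data with a binomial label (here: pure noise, like the
#    "random" target function of RapidMiner's data generator)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

# 2. transform the data (z-transformation, as in the "Ztransformation" operator)
# 3. grid-search the SVM parameters with cross-validation, scored by AUC
pipe = Pipeline([("scale", StandardScaler()),
                 ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe,
                    param_grid={"svm__C": [0.1, 1, 10, 100],
                                "svm__gamma": [0.001, 0.01, 0.1, 1]},
                    scoring="roc_auc",
                    cv=StratifiedKFold(n_splits=10))
grid.fit(X, y)

# 4. report the best parameters and their cross-validated AUC
print(grid.best_params_)
print(grid.best_score_)  # on random data this stays near 0.5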
Answers
Please switch to the XML view of RapidMiner to see the process as text. Then copy it to the clipboard and paste it here, since the Internet Explorer representation contains these "-" signs, which are not allowed in XML. I will then try to reproduce the behavior and check whether there's a bug.
Greetings,
Sebastian
Always predicting the majority class seems to me to deliver the best performance anyway? Or what do you want the algorithm to do if you feed it completely random data? Learn the random seed? Possible, but more complex...
Greetings,
Sebastian
I cannot upload the result pictures here, but the accuracy table is:

accuracy: 58.00% +- 16.61%

                  true negative   true positive
pred. negative         58              42
pred. positive          0               0
class recall        100.00%          0.00%

I am really confused... Thanks
It's really simple: the data you are trying to learn from is completely random. There is no statistical dependency between the attribute values and the label. Without such a dependency you cannot predict the label from the attribute values, because they are completely independent of it. So the best thing you can do is to always predict the most frequent class.
And that is exactly what the SVM does.
To get sexier results, change the parameter of the data generator to something that does not contain "random" in its name.
Greetings,
Sebastian
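A minimal sketch of this effect, using scikit-learn instead of RapidMiner (an assumption for illustration; the 58/42 class split mirrors the table above): cross-validated predictions on label-independent data tend to collapse onto the majority class, and accuracy lands near the base rate.

import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))              # attributes: pure noise
y = rng.permutation([0] * 58 + [1] * 42)   # 58 negatives, 42 positives, no signal

pred = cross_val_predict(SVC(kernel="rbf"), X, y, cv=10)
print((pred == y).mean())   # accuracy stays near the 58% base rate
print(np.bincount(pred))    # predictions tend to pile up on the majority class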
What exactly is the problem with the current 5.0.006 version?
Greetings,
Sebastian
Just quickly jumping in: I think it is exactly the strength of the SVM not to fit the model to random data; this reduces the risk of overfitting. A neural net, for example, can easily be tuned to learn the random data (or "to memorize" it...), but this is exactly the reason why I prefer SVM over NN ;D
Just my 2c. Cheers,
Ingo
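Ingo's point can be sketched in the same scikit-learn setting (again an assumption for illustration; the hidden-layer size and the conservative C value are arbitrary choices, not from the thread): a high-capacity neural net can drive its training accuracy on random labels toward 100%, while a regularized SVM stays close to the base rate.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)   # random labels: nothing real to learn

# a large MLP has enough capacity to memorize 100 random points
nn = MLPClassifier(hidden_layer_sizes=(200,), max_iter=5000,
                   random_state=0).fit(X, y)
# a conservatively regularized SVM refuses to chase the noise
svm = SVC(kernel="rbf", C=0.1).fit(X, y)

print("NN  training accuracy:", nn.score(X, y))   # typically close to 1.0
print("SVM training accuracy:", svm.score(X, y))  # typically near the base rate

Note that this contrasts training accuracy only: memorizing noise buys nothing on held-out data, which is exactly the overfitting risk Ingo describes.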