One-class SVM performance problem

Legend
Dear all,

I have been playing with the RapidMiner one-class LibSVM, but I couldn't get any negative prediction result, only 100% confidence_TRUE no matter which SVM parameters I use.

Does somebody know how to get correct results for a one-class SVM in RM?

I would appreciate your response.
Kind regards,
Danny Seo.

Answers

  • TobiasMalbrecht (Moderator, RM Product Management)
    Hi Danny,

    it is quite puzzling that you get 100% confidence for the class with every parameter setting. I was able to get more reasonable results quite easily using generated data, so maybe there is something wrong in your process setup or your parameters. Here is the RM5 code for the process I just set up; maybe you can use this as a guide ...
    <connect from_op="Role" from_port="example set output" to_op="SVM" to_port="training set"/>
    Kind regards,
    Tobias
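    For reference, here is a minimal sketch of the same one-class setup using the libsvm Java API directly, outside of RapidMiner. The generated data and the gamma/nu values are illustrative assumptions, not values taken from the process above.

        import libsvm.*;

        import java.util.Random;

        public class OneClassDemo {

            // Build a two-dimensional libsvm example from plain coordinates.
            static svm_node[] point(double x, double y) {
                svm_node a = new svm_node(); a.index = 1; a.value = x;
                svm_node b = new svm_node(); b.index = 2; b.value = y;
                return new svm_node[]{a, b};
            }

            public static void main(String[] args) {
                Random rnd = new Random(0);
                int n = 200;

                // Training data: only the "normal" class, clustered around the origin.
                svm_problem prob = new svm_problem();
                prob.l = n;
                prob.x = new svm_node[n][];
                prob.y = new double[n];          // labels are ignored by one-class training
                for (int i = 0; i < n; i++) {
                    prob.x[i] = point(rnd.nextGaussian(), rnd.nextGaussian());
                    prob.y[i] = 1.0;
                }

                // One-class parameters: nu upper-bounds the fraction of training
                // points allowed to fall outside the learned region.
                svm_parameter param = new svm_parameter();
                param.svm_type = svm_parameter.ONE_CLASS;
                param.kernel_type = svm_parameter.RBF;
                param.gamma = 0.5;
                param.nu = 0.05;
                param.cache_size = 100;
                param.eps = 1e-3;

                svm_model model = svm.svm_train(prob, param);

                // svm_predict returns +1 for points inside the learned region, -1 outside.
                System.out.println(svm.svm_predict(model, point(0.1, -0.2)));     // expect  1.0
                System.out.println(svm.svm_predict(model, point(150.0, 180.0)));  // expect -1.0
            }
        }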
  • Legend
    Dear Tobias Malbrecht,

    Thank you for your response.
    I have tested your code as follows:
    (I just added some test data generation.)

    However, it always results in "true" predictions, even when the test data is generated between the bounds 100 and 200.
    How can I classify outliers?

    (Is it possible using the confidence(true) attribute?)

    Thanks.
    Kind regards,
    Danny.
    <connect from_op="Role" from_port="example set output" to_op="SVM" to_port="training set"/>
    <connect from_op="Role (2)" from_port="example set output" to_op="Apply Model" to_port="unlabelled data"/>
  • TobiasMalbrecht (Moderator, RM Product Management)
    Hi Danny,
    Legend wrote:

    However, it always results in "true" predictions, even when the test data is generated between the bounds 100 and 200.
    How can I classify outliers?

    (Is it possible using the confidence(true) attribute?)
    Of course it predicts class "true" - what else should the model do if it only describes one class? Nevertheless, the confidence attribute indicates to what extent a data point belongs to that class. You may define a threshold yourself and classify all instances below that threshold as outliers. Alternatively, you may use an outlier detection scheme directly.

    Kind regards,
    Tobias
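    In raw libsvm terms, the confidence Tobias refers to roughly corresponds to the signed decision value of the one-class model, so a custom cutoff can be applied directly. A minimal sketch, assuming the model and svm_node data come from a training step like the one sketched above; the cutoff is whatever threshold you choose:

        import libsvm.*;

        // Threshold the one-class decision value instead of relying on libsvm's
        // default decision boundary at 0.
        class OneClassThreshold {
            static String classify(svm_model model, svm_node[] x, double cutoff) {
                double[] dec = new double[1];          // one-class models produce a single decision value
                svm.svm_predict_values(model, x, dec);
                return dec[0] >= cutoff ? "inside" : "outlier";
            }
        }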
  • dragoljub
    I have extensively used the C++ version of LibSVM. The one-class SVM in RM does not seem to perform the same type of analysis; namely, it does not allow taking multiple class labels.

    For example, a one-class SVM can be used to train an outlier model using two classes of labeled data. Although training does not use the labels when generating the model, the model should still be able to differentiate (predict) between the inside and outside of the one-class region. Therefore RM should be able to take a binomial class label and perform prediction for both classes.

    -Gagi
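    A rough sketch of the evaluation Gagi describes, using the libsvm Java API: the model is trained on the normal class only, and a separately labelled test set is used for scoring, with +1 = inside/normal and -1 = outside/outlier following libsvm's one-class encoding. The class and variable names are assumptions for illustration:

        import libsvm.*;

        // Score a one-class model against test data carrying two labels:
        // +1 = normal/inside, -1 = outlier/outside.
        class OneClassEvaluation {
            static double accuracy(svm_model model, svm_node[][] testX, double[] testY) {
                int correct = 0;
                for (int i = 0; i < testX.length; i++) {
                    if (svm.svm_predict(model, testX[i]) == testY[i]) {
                        correct++;
                    }
                }
                return (double) correct / testX.length;
            }
        }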
  • harri678
    I am also working with LibSVM's one-class SVM in RM, and I miss the classification part (controlled by the nu parameter).
    In the log I can find lots of the following entries when using one-class in RM 5.0.3:

    ...
    Feb 24, 2010 11:58:33 AM WARNING: SimpleCriterion: NaN was generated!
    ...

    I remember the "NaN" values from java-libsvm, where they indicate the classification result of an outlier, so it is definitely processed within RM. Would it be very difficult to add some kind of binominal prediction functionality where the model result "NaN" is mapped to a prediction label like "out" and the result "1" is mapped to a prediction "in"? I can offer to contribute some code in this case if you give me a hint as to which RM class these changes are required in, and if it's not too time-consuming ;).

    Greetings,
    Harald
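    Outside of RapidMiner, the mapping Harald proposes could look roughly like the following; treating NaN the same way as a negative prediction is an assumption based on the log message he quotes:

        // Map the raw one-class result onto a binominal prediction label: libsvm
        // returns +1 for points inside the model and -1 outside; NaN is treated
        // as "out" here (assumption).
        class OneClassLabelMapper {
            static String toBinominalLabel(double rawPrediction) {
                if (Double.isNaN(rawPrediction) || rawPrediction < 0.0) {
                    return "out";
                }
                return "in";
            }
        }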

  • land (RapidMiner Certified Analyst, RapidMiner Certified Expert)
    Hi Harald,
    you might write a feature request on our bug tracker, but our schedule is quite full. So if you need it really fast, you could very well contribute the code. I would start searching in the LibSVMModel class in the com/rapidminer/operator/learner/functions/kernel package.

    Greetings,
    Sebastian
  • harri678
    Hi Sebastian,

    I think I will give it a try and implement it myself! My dev environment is already up and running, and I've located the proper part in the code (thanks for the tip). When using the one-class classification mode, the results are either "-1" or "1" as expected, but the probabilities aren't calculated by this function. So I'm thinking about adding an optional parameter for the libsvmtype one-class to switch between the current and a new classification behavior, to maintain backwards compatibility. Currently I still need a bit more understanding of how the data structures (esp. Attribute and Example) work together. Also, the NaN log message is not directly generated by libsvm; I'm sure it has something to do with the label attribute, and I'll research this too.

    If it works and isn't too dirty, I'll gladly contribute the code.

    Greetings, Harald


    UPDATE: patch available in http://rapid-i.com/rapidforum/index.php/topic,1746.0.html