"Remove correlated attributes delivers strange result"

qwertz Member Posts: 130 Maven
edited June 2019 in Help
Hi there,

I am (unfortunately) not an expert in correlations calculation but the result of this sample process seems strange to me.


First I run the code as is. --> Result includes all attributes
Then I change the parameter "filter relation" to "less". --> Result still includes att1

To my understanding, att1 can either have a correlation greater than 0.9 OR less than 0.9, but it cannot appear in both results...
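For reference, this is roughly the check I had in mind (a quick Python sketch with made-up data, not the operator itself; att1/att2 are just placeholder names):

import numpy as np

# Toy stand-in for the generated data: att2 is strongly correlated with att1
rng = np.random.default_rng(0)
att1 = rng.normal(size=100)
att2 = 0.95 * att1 + rng.normal(scale=0.1, size=100)

corr = np.corrcoef(att1, att2)[0, 1]

# My expectation: for a given pair exactly one of the two filter relations applies,
# so an attribute should not show up in both results.
print("greater 0.9:", corr > 0.9)
print("less    0.9:", corr < 0.9)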








Best regards
Sachs

Answers

  • haddock Member Posts: 849 Maven
    First I run the code as is. --> Result includes all attributes
    Really? Returns nothing on my machine.
  • qwertz Member Posts: 130 Maven

    I just double checked on another machine. Works perfectly fine... except that I have no clue what's happening in the background (as described above)...
  • haddock Member Posts: 849 Maven
    Hi,

    Odd, I ran twice before posting, but now it works as you say; you'd expect random data to get cleaned out, but it doesn't. The reason for this is noted in the help...
    Please note that this operator might fail in some cases when the attributes should be filtered out, e.g. it might not be able to remove for example all negative correlated features. The reason for this behaviour seems to be that for the complete m x m - matrix of correlations (for m attributes) the correlations will not be recalculated and hence not checked if one of the attributes of the current pair was already marked for removal. That means for three attributes a1, a2, and a3 that it might be that a2 was already ruled out by the negative correlation with a1 and is now not able to rule out a3 any longer.
    Err, yes, well ;)
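
    If it helps, the help text above seems to describe a single greedy pass over the attribute pairs, roughly like this (a Python sketch of my reading of that description, not the actual RapidMiner implementation):

    import numpy as np

    def remove_correlated(data, names, threshold=0.9, relation="greater"):
        # Correlations are computed once for the full m x m matrix and never recalculated.
        corr = np.corrcoef(data, rowvar=False)
        removed = set()
        m = len(names)
        for i in range(m):
            if names[i] in removed:
                continue  # an attribute marked for removal can no longer rule out others
            for j in range(i + 1, m):
                if names[j] in removed:
                    continue
                c = corr[i, j]
                hit = c > threshold if relation == "greater" else c < threshold
                if hit:
                    removed.add(names[j])  # marked once, never re-checked later
        return [n for n in names if n not in removed]

    # With a1/a2 and a2/a3 correlated beyond the threshold but a1/a3 not,
    # a2 is removed via a1, and a3 then survives because the a2/a3 pair is skipped.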
  • qwertz Member Posts: 130 Maven


    Being able to read helps a lot... stupid me...

    Though, I have to admit that I don't fully understand the content of the explanation.

    The reason for this behaviour seems to be that for the complete m x m - matrix of correlations (for m attributes) the correlations will not be recalculated and hence not checked if one of the attributes of the current pair was already marked for removal. That means for three attributes a1, a2, and a3 that it might be that a2 was already ruled out by the negative correlation with a1 and is now not able to rule out a3 any longer.
    So in the end I am not able to use this operator, as I don't know in which cases attributes are removed correctly?
    I am wondering then what the intended use scenario is like?


    Anyway, thanks a lot :)
    Sachs
  • haddock Member Posts: 849 Maven
    Hi,

    Agreed, the explanation is a bit obscure, but at least there is one. On the other hand, bear in mind...

    1. It doesn't remove falsely, it may not remove completely, in that some may remain.
    2. The use scenario may be different from mining random data for 90% correlation!
    3. There are alternative dimension reducers (see the sketch after this list).
    4. Some learners, like SVMs, handle high dimensionality rather well.
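
    For point 3, something like PCA is a common alternative; here is a minimal sketch using scikit-learn, purely as an illustration (names and numbers are made up):

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.rand(200, 50)        # stand-in for a table with many, partly redundant columns
    pca = PCA(n_components=0.95)       # keep enough components to explain 95% of the variance
    X_reduced = pca.fit_transform(X)
    print(X_reduced.shape)             # typically far fewer columns when attributes are correlated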

    Best

    H
  • mosiomohsen Member Posts: 1 Contributor I

    I have a problem with the 'Remove Correlated Attributes' operator.

    I have 91 attributes. This operator only removes 5 attributes from my dataset, even when I set the threshold parameter to 0.01.

    What should I do?

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hello @mosiomohsen - so I guess my first question is whether or not you are certain that you have truly independent variables? Perhaps they truly are correlated?

    If not, I'd recommend posting your XML process here (see "Read Before Posting" on the right when you reply) and attaching your dataset. This way we can replicate what you're doing and help you better.

    Scott
