All supervised models should, if possible, return attribute weights

yzan Member Posts: 66 Unicorn
edited December 2018 in Product Feedback - Resolved

All supervised operators should, where meaningful, return attribute weights representing feature importance. At a minimum, Decision Tree and Perceptron could support it.

0 votes

Fixed and Released · Last Updated

Comments

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hello @yzan - can you please give us an example to replicate?

    Scott

  • yzan Member Posts: 66 Unicorn

    An example of a supervised operator that returns attribute weights is "Generalized Linear Model".

    The calculation of weights for a decision tree:

    1. It is possible to simply return a binary vector with values in {0, 1}, indicating whether or not the attribute is used in the tree (see the sketch after this list).
    2. http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
    3. http://support.sas.com/documentation/cdl/en/stathpug/68163/HTML/default/viewer.htm#stathpug_hpsplit_details30.htm
    4. http://support.sas.com/documentation/onlinedoc/miner/em43/allproc.pdf (pages 54-56).

    For a perceptron, the returned attribute weights could correspond to the weights of the perceptron (they are already visible in the "model" output, but they cannot be passed directly to operators like "Select by Weights").
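
    Again only as a sketch, and assuming scikit-learn's Perceptron behaves analogously to the RapidMiner operator (the dataset and scaling step are just illustrative), the learned coefficients could be exposed directly as attribute weights, e.g. as absolute values so they can drive a "Select by Weights"-style filter:

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import Perceptron
        from sklearn.preprocessing import StandardScaler

        X, y = load_breast_cancer(return_X_y=True)
        X = StandardScaler().fit_transform(X)          # scale so weights are comparable
        clf = Perceptron(random_state=0).fit(X, y)

        # One learned coefficient per attribute; use magnitudes as attribute weights.
        attribute_weights = np.abs(clf.coef_).ravel()
        top_k = np.argsort(attribute_weights)[::-1][:5]
        print(top_k, attribute_weights[top_k])         # indices of the 5 heaviest attributes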

  • sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    thanks for that, @yzan. Just heard back from the dev team that this is coming soon. :)


    Scott

  • yzan Member Posts: 66 Unicorn

    Possibly even "Deep Learning" could return attribute weights, since the H2O backend provides this information and other H2O-based operators, like GLM and GBT, already output attribute weights.
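
    As a rough sketch of what the H2O backend already exposes, assuming the h2o Python package is installed and a local H2O instance can be started (the file name "iris.csv" is just a placeholder), the per-attribute importances are available after training and are exactly what the operator could pass through as weights:

        import h2o
        from h2o.estimators import H2ODeepLearningEstimator

        h2o.init()
        frame = h2o.import_file("iris.csv")            # placeholder path to a labeled CSV
        predictors = frame.columns[:-1]
        response = frame.columns[-1]
        frame[response] = frame[response].asfactor()   # treat the label as categorical

        # Ask H2O to compute variable importances alongside the deep learning model.
        model = H2ODeepLearningEstimator(variable_importances=True, epochs=10)
        model.train(x=predictors, y=response, training_frame=frame)

        print(model.varimp(use_pandas=True))           # per-attribute relative importance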

  • CraigBostonUSA Administrator, Employee, Member Posts: 34 RM Team Member

    Update: As of version 8.0, Decision Tree and Random Forest now provide a new port that outputs feature weights.

    https://docs.rapidminer.com/latest/studio/releases/changes-8.0.0.html


    @yzan wrote:

    All supervised operators should, where meaningful, return attribute weights representing feature importance. At a minimum, Decision Tree and Perceptron could support it.




