All supervised models should, if possible, return attribute weights
yzan
All supervised operators should, where meaningful, return attribute weights representing feature importance. If nothing else, at least Decision Tree and Perceptron could provide this.
Comments
Hello @yzan - can you please give us an example to replicate?
Scott
An example of a supervised operator that already returns attribute weights is "Generalized Linear Model".
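To illustrate the idea outside of any particular tool: in a linear model the fitted coefficients can act as attribute weights directly. A minimal sketch using scikit-learn's LinearRegression as a stand-in (the data and names here are made up for illustration; assumes roughly standardized features):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: target depends strongly on feature 0 and only weakly on feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# Absolute coefficients serve as attribute weights when features share a scale.
weights = np.abs(model.coef_)
print(weights)
```

The first weight comes out much larger than the second, matching the true influence of each attribute.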
The calculation of weights for a decision tree could, for example, be based on how much each attribute contributes to the tree's splits (e.g. the accumulated impurity reduction per attribute).
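As a sketch of how such tree-based weights typically look, scikit-learn's DecisionTreeClassifier exposes impurity-based importances that sum to 1 (again a toy example, not the operator discussed in this thread):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: the label depends only on feature 0; features 1 and 2 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Impurity-based attribute weights, normalized to sum to 1.
importances = tree.feature_importances_
print(importances)
```

Feature 0 receives (nearly) all of the weight, since the noise features are only used, if at all, to clean up a handful of borderline samples.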
For a perceptron, the returned attribute weights could correspond to the weights of the perceptron itself (they are already visible in the "model" output, but they are not immediately passable to operators like "Select by Weights").
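The point above can be sketched in code: a trained perceptron already holds one weight per attribute, so exposing them as attribute weights is just a matter of reading them out. A minimal illustration with scikit-learn's Perceptron (toy data, hypothetical setup):

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Toy data: the class depends only on feature 0; feature 1 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

clf = Perceptron(random_state=0).fit(X, y)

# The learned weight vector, one entry per attribute; absolute values
# can be handed on as attribute weights.
weights = np.abs(clf.coef_[0])
print(weights)
```

This is exactly the information that is visible in a perceptron's model output; the request in this thread is merely to deliver it on a weights port as well.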
Thanks for that, @yzan. Just heard back from the dev team that this is coming soon.
Scott
Possibly even "Deep Learning" could return attribute weights as the backend H2O implementation provides this information and other algorithms from H2O, like GLM and GBT, already output attribute weights.
Update: As of version 8.0, Decision Tree and Random Forest now provide a new port that outputs feature weights.
https://docs.www.turtlecreekpls.com/latest/studio/releases/changes-8.0.0.html