Random Forest (Deprecated)
Synopsis
This operator generates a specified number of random trees, i.e. it generates a random forest. The resulting model is a voting model of all the trees.
Description
The Random Forest operator generates a set of random trees. The random trees are generated in exactly the same way as the Random Tree operator generates a tree. The resulting forest model contains a specified number of random tree models. The number of trees parameter specifies the required number of trees. The resulting model is a voting model of all the random trees. For more information about random trees please study the Random Tree operator.
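As a rough illustration of the voting scheme (not the operator's actual implementation; the stand-in trees and attribute names below are hypothetical), a forest prediction can be obtained by letting every tree vote and taking the majority label:

```python
from collections import Counter

def forest_predict(trees, example):
    """Majority vote: each random tree casts one vote for a label value."""
    votes = Counter(tree(example) for tree in trees)
    return votes.most_common(1)[0][0]

# Toy stand-ins for trained random trees (each is just a callable here).
trees = [
    lambda ex: "yes" if ex["age"] > 30 else "no",
    lambda ex: "yes" if ex["income"] > 50000 else "no",
    lambda ex: "no",
]
print(forest_predict(trees, {"age": 42, "income": 60000}))  # -> "yes"
```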
The representation of the data in the form of a tree has the advantage, compared with other approaches, of being meaningful and easy to interpret. The goal is to create a classification model that predicts the value of a target attribute (often called class or label) based on several input attributes of the ExampleSet. Each interior node of the tree corresponds to one of the input attributes. The number of edges of a nominal interior node is equal to the number of possible values of the corresponding input attribute. Outgoing edges of numerical attributes are labeled with disjoint ranges. Each leaf node represents a value of the label attribute given the values of the input attributes represented by the path from the root to the leaf. For a better understanding of the structure of a tree please study the Decision Tree operator.
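Purely for illustration (the attribute names, values, and labels below are made up), such a tree can be pictured as a nested structure in which a nominal interior node has one outgoing edge per value, a numerical interior node has outgoing edges labeled with disjoint ranges, and each leaf carries a label value:

```python
# Hypothetical tree: "outlook" is a nominal attribute, "humidity" a numerical one.
tree = {
    "attribute": "outlook",                     # nominal interior node
    "edges": {
        "sunny": {
            "attribute": "humidity",            # numerical interior node
            "edges": {
                "<= 75": {"label": "play"},     # disjoint ranges on the edges
                "> 75":  {"label": "don't play"},
            },
        },
        "overcast": {"label": "play"},          # leaf node
        "rain":     {"label": "don't play"},    # leaf node
    },
}
```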
Pruning is a technique in which leaf nodes that do not add to the discriminative power of the tree are removed. This is done to convert an over-specific or over-fitted tree into a more general form in order to enhance its predictive power on unseen datasets. Pre-pruning is a type of pruning performed in parallel with the tree creation process. Post-pruning, on the other hand, is done after the tree creation process is complete.
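The following sketch illustrates the idea of post-pruning on a toy tree over a single numerical attribute. It uses a simplified error-based criterion for brevity; it is not the operator's pessimistic-error procedure. Subtrees are replaced by leaves bottom-up whenever the leaf does not classify the examples worse than the subtree.

```python
def error(node, examples):
    """Number of examples in 'examples' misclassified by the (sub)tree 'node'."""
    if "label" in node:
        return sum(1 for x, y in examples if y != node["label"])
    left = [(x, y) for x, y in examples if x <= node["threshold"]]
    right = [(x, y) for x, y in examples if x > node["threshold"]]
    return error(node["left"], left) + error(node["right"], right)

def majority_label(examples):
    labels = [y for _, y in examples]
    return max(set(labels), key=labels.count)

def post_prune(node, examples):
    """Bottom-up post-pruning of a finished tree: replace a subtree by a leaf
    whenever the leaf does not perform worse on the given examples."""
    if "label" in node or not examples:
        return node
    left = [(x, y) for x, y in examples if x <= node["threshold"]]
    right = [(x, y) for x, y in examples if x > node["threshold"]]
    node["left"] = post_prune(node["left"], left)
    node["right"] = post_prune(node["right"], right)
    leaf = {"label": majority_label(examples)}
    return leaf if error(leaf, examples) <= error(node, examples) else node

# Example: the over-specific subtree on the right branch gets collapsed to a leaf.
tree = {"threshold": 3,
        "left": {"label": "a"},
        "right": {"threshold": 5, "left": {"label": "a"}, "right": {"label": "b"}}}
data = [(1, "a"), (2, "a"), (4, "b"), (6, "b")]
print(post_prune(tree, data))
```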
Input
training set
This input port expects an ExampleSet. It is the output of the Retrieve operator in the attached Example Process. The output of other operators can also be used as input.
Output
model
The Random Forest model is delivered from this output port. This model can be applied on unseen data sets for the prediction of the label attribute. This model is a voting model of all the random trees.
example set
The ExampleSet that was given as input is passed through this port without any changes. This is usually used to reuse the same ExampleSet in further operators or to view the ExampleSet in the Results Workspace.
Parameters
Number of trees
This parameter specifies the number of random trees to generate.
Criterion
This parameter specifies the criterion on which attributes will be selected for splitting (the measures behind the first three options are sketched in the code after this list). It can have one of the following values:
- information_gain: The entropy of all the attributes is calculated. The attribute with minimum entropy is selected for the split. This method has a bias towards selecting attributes with a large number of values.
- gain_ratio: It is a variant of information gain. It adjusts the information gain for each attribute to allow for the breadth and uniformity of the attribute values.
- gini_index: This is a measure of impurity of an ExampleSet. Splitting on a chosen attribute gives a reduction in the average gini index of the resulting subsets.
- accuracy: Such an attribute is selected for the split that maximizes the accuracy of the whole tree.
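The impurity measures behind the first three criteria can be sketched in a few lines of Python (textbook formulas for illustration, not the operator's exact implementation):

```python
from math import log2

def entropy(labels):
    """Entropy of a list of class labels: -sum(p_i * log2(p_i))."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def gini(labels):
    """Gini index (impurity): 1 - sum(p_i ** 2)."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def information_gain(parent, children):
    """Parent entropy minus the size-weighted entropy of the child subsets."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

def gain_ratio(parent, children):
    """Information gain normalized by the split information (degenerate splits
    with a single non-empty child are not handled in this sketch)."""
    n = len(parent)
    split_info = -sum(len(c) / n * log2(len(c) / n) for c in children if c)
    return information_gain(parent, children) / split_info

# A split that separates the two classes perfectly:
parent = ["yes", "yes", "no", "no"]
children = [["yes", "yes"], ["no", "no"]]
print(information_gain(parent, children))  # 1.0
print(gain_ratio(parent, children))        # 1.0
print(gini(parent))                        # 0.5
```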
Minimal size for split
The size of a node is the number of examples in its subset. The size of the root node is equal to the total number of examples in the ExampleSet. Only those nodes are split whose size is greater than or equal to the minimal size for split parameter.
Minimal leaf size
The size of a leaf node is the number of examples in its subset. The tree is generated in such a way that every leaf node subset has at least the minimal leaf size number of instances.
Minimal gain
The gain of a node is calculated before splitting it. The node is split if its gain is greater than the minimal gain. A higher value of minimal gain results in fewer splits and thus a smaller tree. A value that is too high will completely prevent splitting, and a tree with a single node is generated.
Maximal depth
The depth of a tree varies depending upon the size and nature of the ExampleSet. This parameter is used to restrict the size of the trees. The tree generation process is not continued when the tree depth is equal to the maximal depth. If its value is set to '-1', the maximal depth parameter puts no bound on the depth of the tree; a tree of maximum depth is generated. If its value is set to '1', a tree with a single node is generated.
Confidence
This parameter specifies the confidence level used for the pessimistic error calculation of pruning.
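One common formulation of a pessimistic error estimate is the C4.5-style upper confidence bound on the observed error rate at a node; it is assumed here, not confirmed, that this operator's calculation is of this general form. A lower confidence value yields a larger (more pessimistic) estimate and therefore more aggressive pruning.

```python
from statistics import NormalDist

def pessimistic_error(errors, n, confidence=0.25):
    """Upper confidence bound on the error rate at a node, given 'errors'
    misclassified examples out of 'n' (normal approximation to the binomial,
    as in C4.5-style pessimistic pruning)."""
    f = errors / n                              # observed error rate
    z = NormalDist().inv_cdf(1 - confidence)    # one-sided z-score
    numerator = (f + z * z / (2 * n)
                 + z * (f / n - f * f / n + z * z / (4 * n * n)) ** 0.5)
    return numerator / (1 + z * z / n)

# 2 errors out of 14 examples: the pessimistic estimate (about 0.22) is well
# above the observed error rate (about 0.14).
print(pessimistic_error(2, 14))
```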
Number of prepruning alternatives
As prepruning runs in parallel with the tree generation process, it may prevent splitting at a certain node when splitting at that node does not add to the discriminative power of the entire tree. In such a case alternative nodes are tried for splitting. This parameter adjusts the number of alternative nodes tried for splitting when a split is prevented by prepruning at a certain node.
No prepruning
By default the trees are generated with prepruning. Setting this parameter to true disables the prepruning and generates trees without any prepruning.
No pruning
By default the tree is generated with pruning. Setting this parameter to true disables the pruning and generates unpruned trees.
Guess subset ratio
If this parameter is set to true then log(m) + 1 attributes are used, otherwise a ratio should be specified by the subset ratio parameter.
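As a small sketch of how this parameter and the subset ratio parameter could determine the number of attributes tested per split (illustrative only; the base of the logarithm is not specified here, and the natural logarithm is assumed):

```python
from math import floor, log

def attributes_per_split(m, guess_subset_ratio=True, subset_ratio=0.2):
    """Number of randomly chosen attributes to test at each split, given
    'm' regular attributes in the ExampleSet."""
    if guess_subset_ratio:
        return floor(log(m)) + 1             # 'guess subset ratio' behaviour
    return max(1, round(subset_ratio * m))   # explicit 'subset ratio'

print(attributes_per_split(20))                            # floor(ln 20) + 1 = 3
print(attributes_per_split(20, guess_subset_ratio=False))  # 0.2 * 20 = 4
```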
Subset ratio
This parameter specifies the ratio of randomly chosen attributes to test.
Use local random seed
This parameter indicates if a local random seed should be used for randomization. Using the same value of local random seed will produce the same randomization.
Local random seed
This parameter specifies the local random seed. This parameter is only available if the use local random seed parameter is set to true.