Data Mining - Pruning (a decision tree, decision rules)
Table of Contents
1 - About
2 - Articles Related
3 - Decision tree
A decision tree is pruned to obtain (perhaps) a tree that generalizes better to independent test data. The pruned tree may perform worse on the training data, but generalization is the goal.
See Information gain and Overfitting for an example.
Sometimes simplifying a decision tree gives better results.
How to prune:
- Don’t continue splitting if the nodes get very small: impose a minimum number of cases that reach a leaf
- Build full tree and then work back from the leaves, applying a statistical test at each stage (Weka: confidenceFactor)
- Sometimes it’s good to prune an interior node, raising the subtree beneath it up one level (subtreeRaising, default true)
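The second strategy above (build the full tree, then work back from the leaves) can be illustrated with a small sketch. J48 uses error-based pruning with a confidence factor; the code below instead shows the simpler reduced-error idea, purely for illustration: collapse a subtree into a majority-class leaf whenever the leaf does no worse on a held-out pruning set. The tree encoding and all function names here are made up for the example.

```python
from collections import Counter

# Toy tree encoding: a leaf is a class label; an internal node is
# {"feature": name, "branches": {value: subtree, ...}}

def classify(tree, example):
    while isinstance(tree, dict):
        tree = tree["branches"][example[tree["feature"]]]
    return tree

def majority_class(examples):
    return Counter(label for _, label in examples).most_common(1)[0][0]

def errors(tree, examples):
    return sum(1 for x, label in examples if classify(tree, x) != label)

def prune(tree, examples):
    """Reduced-error pruning: bottom-up, replace a subtree by a leaf
    (the majority class) if that does not increase error on `examples`."""
    if not isinstance(tree, dict):
        return tree
    # First prune the children, routing each example down its branch.
    for value, subtree in tree["branches"].items():
        subset = [(x, y) for x, y in examples if x[tree["feature"]] == value]
        tree["branches"][value] = prune(subtree, subset)
    if not examples:
        return tree
    leaf = majority_class(examples)
    if errors(leaf, examples) <= errors(tree, examples):
        return leaf  # collapsing cannot hurt on the pruning set
    return tree

# The "windy" subtree is noise with respect to the pruning set,
# so it collapses to the leaf "no"; the "outlook" split survives.
tree = {"feature": "outlook", "branches": {
    "sunny": "yes",
    "rainy": {"feature": "windy", "branches": {"yes": "no", "no": "yes"}},
}}
pruning_set = [
    ({"outlook": "sunny", "windy": "no"}, "yes"),
    ({"outlook": "sunny", "windy": "yes"}, "yes"),
    ({"outlook": "rainy", "windy": "yes"}, "no"),
    ({"outlook": "rainy", "windy": "no"}, "no"),
    ({"outlook": "rainy", "windy": "no"}, "no"),
]
pruned = prune(tree, pruning_set)
```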
3.1 - Minimum number cases that reach a leaf
One simple way of pruning a decision tree is to impose a minimum on the number of training examples that reach a leaf.
Weka: This is done with J48's minNumObj parameter (default value 2), with the unpruned switch set to True. (The terminology is a little confusing: if unpruned is deselected, J48 uses its other pruning mechanisms.)
(Plot: minNumObj vs. number of leaves and size of the tree.)
Both the number of leaves and the overall size of the tree decrease very rapidly as the minimum leaf size is allowed to grow.
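This effect can be reproduced with a toy tree builder. The sketch below pre-prunes by refusing to split a node when a branch would receive fewer than `min_num_obj` training cases; unlike J48, it naively splits on features in a fixed order rather than by gain ratio, and all names are invented for the example. Raising the minimum shrinks the tree down to a single leaf.

```python
from collections import Counter

def build_tree(examples, features, min_num_obj=2):
    """Build a categorical decision tree, stopping (pre-pruning) when a
    node is pure, out of features, or too small to split."""
    labels = [y for _, y in examples]
    majority = Counter(labels).most_common(1)[0][0]
    if len(set(labels)) == 1 or not features or len(examples) < 2 * min_num_obj:
        return majority
    feature = features[0]  # simplistic: J48 would pick by gain ratio
    branches = {}
    for value in {x[feature] for x, _ in examples}:
        subset = [(x, y) for x, y in examples if x[feature] == value]
        if len(subset) < min_num_obj:  # a branch would make a tiny leaf
            return majority
        branches[value] = build_tree(subset, features[1:], min_num_obj)
    return {"feature": feature, "branches": branches}

def count_leaves(tree):
    if not isinstance(tree, dict):
        return 1
    return sum(count_leaves(t) for t in tree["branches"].values())

# 8 toy examples over two binary features.
data = (
    [({"a": 0, "b": 0}, "no")] * 3
    + [({"a": 0, "b": 1}, "yes")] * 2
    + [({"a": 1, "b": 0}, "yes")] * 2
    + [({"a": 1, "b": 1}, "yes")] * 1
)
# min_num_obj = 1 -> 3 leaves; 3 -> 2 leaves; 4 -> a single leaf.
sizes = [count_leaves(build_tree(data, ["a", "b"], m)) for m in (1, 3, 4)]
```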
3.2 - Confidence Factor
(Plot: J48 confidenceFactor vs. accuracy.)
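J48's confidenceFactor (default 0.25) controls C4.5-style error-based pruning: the observed error rate at a node is replaced by a pessimistic upper confidence bound, and a subtree is pruned when the parent's bound is no worse than the combined bounds of its children. A smaller confidence factor gives a more pessimistic estimate and therefore more pruning. The sketch below computes the standard C4.5 bound from the normal approximation; `pessimistic_error` is an invented helper name, not J48's API.

```python
from statistics import NormalDist

def pessimistic_error(errors, n, confidence=0.25):
    """Upper confidence bound on the true error rate of a leaf that
    misclassifies `errors` of its `n` training cases (C4.5-style):
    e = (f + z^2/2N + z*sqrt(f/N - f^2/N + z^2/4N^2)) / (1 + z^2/N)."""
    f = errors / n
    # One-sided z-score; confidence 0.25 gives z ~ 0.674, J48's default.
    z = NormalDist().inv_cdf(1 - confidence)
    num = f + z * z / (2 * n) + z * (f / n - f * f / n + z * z / (4 * n * n)) ** 0.5
    return num / (1 + z * z / n)

# Even a leaf with zero observed errors gets a nonzero estimate,
# and lowering the confidence factor makes the estimate more pessimistic.
e_clean = pessimistic_error(0, 10)
e_default = pessimistic_error(2, 10)            # observed rate 0.2
e_strict = pessimistic_error(2, 10, confidence=0.1)
```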