---
title: Intro to Data Science (Non-Technical Background, R) - Session 20
author:
- name: Goran S. Milovanović, PhD
  affiliation: DataKolektiv, Chief Scientist & Owner; Data Scientist for Wikidata, WMDE
abstract: 
output:
  html_notebook:
    code_folding: show
    theme: spacelab
    toc: yes
    toc_float: yes
    toc_depth: 5
  html_document:
    toc: yes
    toc_depth: 5
---

![](../_img/DK_Logo_100.png)

***
# Session 20. Classification and Regression Trees (CART) with {rpart}. Elements of Information Theory for Classification Trees. Pre-pruning and post-pruning (revisited) of Decision Trees.  

**Feedback** should be sent to `goran.milovanovic@datakolektiv.com`. 
These notebooks accompany the Intro to Data Science: Non-Technical Background course 2020/21.

***

### What do we want to do today?

We now take a deep dive into Decision Trees in R. We will rely on the [{rpart}](https://cran.r-project.org/web/packages/rpart/index.html) package, which encompasses some very well studied procedures to grow **CART** (*Classification and Regression Trees*). We begin by solving a regression problem with {rpart} and introduce the theory of CART along the way. We then proceed to solve a classification problem with a Decision Tree model and introduce the elements of Information Theory needed to understand the criteria used to grow such models. Enter **entropy**: the fundamental concept of Information Theory and among the cornerstones of contemporary statistical learning. Finally, we introduce tree pre-pruning procedures - essentially cross-validation across the model control parameters - and contrast them with tree post-pruning based on the *complexity parameter*.


### 0. Setup

```{r echo = T, eval = F}
install.packages('rpart')
install.packages('rpart.plot')
```

Grab the `HR_comma_sep.csv` dataset from [Kaggle](https://www.kaggle.com/liujiaqi/hr-comma-sepcsv) and place it in your `_data` directory for this session. We will also use the [Boston Housing Dataset: BostonHousing.csv](https://raw.githubusercontent.com/selva86/datasets/master/BostonHousing.csv).

```{r echo = T, message = F, warning = F}
dataDir <- paste0(getwd(), "/_data/")
library(tidyverse)
library(data.table)
library(rpart)
library(rpart.plot)
```


### 1. CART: The regression problem

We begin by loading and inspecting the Boston Housing dataset.

```{r echo = T, message = F, warning = F}
dataSet <- read.csv(paste0('_data/', 'BostonHousing.csv'), 
                    header = T, 
                    check.names = F,
                    stringsAsFactors = F)
head(dataSet)
```

```{r echo = T, message = F, warning = F}
glimpse(dataSet)
```
Here are the variables:

+ **crim**: per capita crime rate by town
+ **zn**: proportion of residential land zoned for lots over 25,000 sq.ft.
+ **indus**: proportion of non-retail business acres per town.
+ **chas**: Charles River dummy variable (1 if tract bounds river; 0 otherwise)
+ **nox**: nitric oxides concentration (parts per 10 million)
+ **rm**: average number of rooms per dwelling
+ **age**: proportion of owner-occupied units built prior to 1940
+ **dis**: weighted distances to five Boston employment centers
+ **rad**: index of accessibility to radial highways
+ **tax**: full-value property-tax rate per $10,000
+ **ptratio**: pupil-teacher ratio by town
+ **b**: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
+ **lstat**: % lower status of the population
+ **medv**: Median value of owner-occupied homes in $1000's

The `medv` variable is the *outcome*.

```{r echo = T, message = F, warning = F}
ggplot(dataSet, 
       aes(x = medv)) + 
  geom_histogram(fill = "blue", 
                 color = "blue") + 
  theme_bw() + 
  theme(panel.border = element_blank())
```
Now the {rpart} model:

```{r echo = T, message = F, warning = F}
regTree1 <- rpart(
  formula = medv ~ .,
  data = dataSet,
  method  = 'anova',
  control = rpart.control(cp = 0)
  )
```

**N.B.** Use the `anova` method to instruct {rpart} to fit a regression tree. The package will try to make an intelligent guess based on the nature of the outcome variable; however, it is probably wiser to instruct it explicitly. Never mind the `control = rpart.control(cp = 0)` part for now. 

Let's see: use `rpart.plot()` to visualize the Decision Tree:

```{r echo = T, message = F, warning = F}
rpart.plot(regTree1,
           type = 4,
           extra = "auto")
```
Nothing can be seen? I know. Give me a chance: I will fix this later in this session.

The visualization works like this: wherever a decision (a split) is made, we go **left** for `TRUE` on the respective condition, and **right** for `FALSE`. 

In the tree representation above, each node shows the predicted value and the percentage of cases falling under it. We will not post-prune this tree model now (as we did in the previous session); that can wait. First we need a bit of theory, just to understand how trees like this are grown.

To put it in a nutshell: a Regression Decision Tree model partitions the dataset into smaller subgroups, and then fits a **constant** for every case found in each subgroup. The partitioning is achieved by successive binary splits (aka *recursive partitioning*) based on the different predictors. The constant is nothing more than *the average value* of the outcome across all observations that fall in that subgroup.
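As a quick sanity check of this claim - a sketch relying on the `where` field of the fitted {rpart} object, which records the terminal node that each observation falls into - we can verify that the fitted values of `regTree1` are exactly the leaf means:

```{r echo = T, message = F, warning = F}
# - Sanity check (sketch): a regression tree predicts the mean of the
# - outcome in the terminal node (leaf) that an observation falls into.
leaf <- regTree1$where
leafMeans <- tapply(dataSet$medv, leaf, mean)
all.equal(as.numeric(leafMeans[as.character(leaf)]),
          as.numeric(predict(regTree1)))
```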

The fundamental model equation is surprisingly simple then. Let's index the subgroups (regions) as $m = 1, 2,.., M$. Imagine we deal with $N$ predictors in $X$. The predicted value for any observation, $\hat{f}(X)$, is the sum of the $c_m$ values - the region (subgroup) means - multiplied by an indicator function $I()$ which simply tells us whether a particular observation falls into region $R_m$ or not (taking values of `1` and `0`, respectively): 

$$\hat{f}(X)=\sum_{m=1}^{M}c_mI\{(X_1, X_2,.., X_N)\in{R_m}\}$$
Everything is easy if we already *know* the regions defined by splits across the predictors in the tree. But how do we learn about them? How do we grow trees?

The tree grows by a method known as **binary recursive partitioning**. The method is *greedy*: it begins by taking the whole dataset and splitting it into two regions so as to minimize the sum of squared errors as much as possible; once a split is made, it is never revisited in light of the later splits. The process continues recursively across the newly obtained regions, each being split into the two regions that minimize the sum of squared errors:

$$SSE=\Bigg\{\sum_{i\in{R_1}}(y_i-c_1)^2+\sum_{i\in{R_2}}(y_i-c_2)^2\Bigg\}$$

The recursive partitioning continues until some stopping criterion is met (for example, the minimum allowed number of observations in a terminal node). And that is the critical part in growing a Decision Tree, **because** complex models like these can *easily overfit* the data by finding smaller and smaller regions in the space spanned by the model predictors, theoretically isolating each particular value of the outcome - or very, very small subsets of values - in highly idiosyncratic regions!
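To make the greedy search concrete, here is a minimal sketch of how a single split could be chosen for one predictor: scan the candidate thresholds and keep the one that minimizes the pooled SSE. N.B. `best_split()` is our own hypothetical helper, for illustration only; `rpart()` performs this search (far more efficiently) across all predictors at every node.

```{r echo = T, message = F, warning = F}
# - best_split() is a hypothetical helper: it scans candidate thresholds
# - for a single predictor x and picks the split of y that minimizes the
# - pooled sum of squared errors across the two resulting regions.
best_split <- function(x, y) {
  thresholds <- head(sort(unique(x)), -1)  # keep both regions non-empty
  sse <- sapply(thresholds, function(t) {
    left <- y[x <= t]
    right <- y[x > t]
    sum((left - mean(left))^2) + sum((right - mean(right))^2)
  })
  c(threshold = thresholds[which.min(sse)], SSE = min(sse))
}
best_split(dataSet$rm, dataSet$medv)
```

If `rm` is indeed the strongest predictor at the root, the threshold found here will be close to the first split in the tree above.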

Now, how do we avoid overfitting in the Decision Tree model? There are two ways to do it. The first is called **pre-pruning** and essentially means performing a cross-validation across the model control parameters interpreted as stopping rules. We will focus on pre-pruning later on in this session. Now we want to understand the **post-pruning** approach - already exemplified in Session 19 - which (just imagine) also has to do with cross-validation in the end. It is based on the value of the *cost complexity criterion* which is calculated on the fly, as the tree grows in recursive partitioning. The post-pruning solution to prevent the tree from overfitting the data is the following one (to be explained):

Instead of minimizing

$$SSE=\Bigg\{\sum_{i\in{R_1}}(y_i-c_1)^2+\sum_{i\in{R_2}}(y_i-c_2)^2\Bigg\}$$

we choose to minimize

$$SSE_{penalized}=\Bigg\{\sum_{i\in{R_1}}(y_i-c_1)^2+\sum_{i\in{R_2}}(y_i-c_2)^2\Bigg\} + \alpha|T|$$

where $\alpha$ is the *cost complexity parameter* and $|T|$ is the *number of terminal nodes in the tree*. Post-pruning works in the following way:

- fit a large tree - a complex model - to the data;
- across a range of values for $\alpha$, look for the *subtree* of that tree with the minimal $SSE_{penalized}$;
- perform a k-fold cross-validation across $\alpha$ and determine the *cross-validation (i.e. prediction) error*;
- choose the subtree with the minimal cross-validation error, found at a particular value of $\alpha$.

Let's perform post-pruning in R. We need to take a look at the *complexity parameter table*, which is found in the `regTree1$cptable` field of the model object:

```{r echo = T, message = F, warning = F}
print(regTree1$cptable)
```
We can also visualize it with `plotcp()`:

```{r echo = T, message = F, warning = F}
plotcp(regTree1)
```
Can you see how the value of `xerror` stabilizes at some point and maybe even begins to grow slightly? Remember how we used `control = rpart.control(cp = 0)` in our model call: we instructed `rpart()` to set $\alpha$ to `0` and produced a tree that overfits the data!

Now, post-pruning: look at the *complexity parameter table* and find the value of `CP` for the lowest `xerror`:

```{r echo = T, message = F, warning = F}
cptable <- as.data.frame(regTree1$cptable)
optimal_cp <- cptable$CP[which.min(cptable$xerror)]
optimalregTree1 <- prune(regTree1,
                         cp = optimal_cp)
rpart.plot(optimalregTree1,
           type = 4,
           extra = "auto")
```
Let's check the situation now:

```{r echo = T, message = F, warning = F}
plotcp(optimalregTree1)
```
Well, this is way better. From the `rpart()` [documentation](https://www.rdocumentation.org/packages/rpart/versions/4.1-15/topics/rpart.control):

> complexity parameter. Any split that does not decrease the overall lack of fit by a factor of cp is not attempted. For instance, with anova splitting, this means that the overall R-squared must increase by cp at each step. The main role of this parameter is to save computing time by pruning off splits that are obviously not worthwhile. Essentially, the user informs the program that any split which does not improve the fit by cp will likely be pruned off by cross-validation, and that hence the program need not pursue it.

However, let's take a look at an even simpler model. Let's not use the `cp` control parameter and just let {rpart} grow the tree on its own:

```{r echo = T, message = F, warning = F}
regTree2 <- rpart(
  formula = medv ~ .,
  data = dataSet,
  method  = 'anova')
rpart.plot(regTree2,
           type = 4,
           extra = "auto")
```
Now the visualization finally helps, at least.

```{r echo = T, message = F, warning = F}
plotcp(regTree2)
```
What criterion has `rpart()` used to grow this tree?

```{r echo = T, message = F, warning = F}
regTree2$control$cp
```
Let's compare:

```{r echo = T, message = F, warning = F}
optimal_cp <- .01
optimalregTree2 <- prune(regTree2,
                         cp = optimal_cp)
rpart.plot(optimalregTree2,
           type = 4,
           extra = "auto")
```
So please be aware of the default values of the control parameters in large and complex statistical learning models!

The $R^2$ of the model, if you are interested to learn, is (computed in-sample, on the very data used to grow the tree):

```{r echo = T, message = F, warning = F}
predictions <- predict(regTree2, 
                       newdata = dataSet)
round(
  cor(predictions, dataSet$medv)^2, 
  2)
```
### 2. CART: The classification problem

Growing a Classification Tree is a little bit different from growing a Regression Tree, for one simple, obvious reason that we have already encountered in our discussions of Generalized Linear Models: namely, minimizing $SSE$ is not an ideal goal anymore. We need new criteria for our algorithms to decide when a new split is useful and when it is not. Let's see what can be done about it.

#### 2.1 Elements of Information Theory: Enter Entropy

Imagine a tree-growing algorithm running for a two-class classification problem, ending up with a set of regions in the space spanned by the predictors. Each region potentially contains both correctly and incorrectly classified instances of the outcome. I want to measure how "pure" a region is: how well it delineates truth from falsehood in classification.

Consider a coin toss statistical experiment with $p = .23$ and 100 observations:

```{r echo = T, message = F, warning = F}
exp <- rbinom(n = 100, size = 1, prob = .23)
table(exp)
```
As expected, there are way more `0`s, given the low probability of just `.23` of observing `1`. Of course, the probability $p = .50$ of a fair coin would produce a different distribution of results:

```{r echo = T, message = F, warning = F}
exp <- rbinom(n = 100, size = 1, prob = .5)
table(exp)
```
**Q.** Which of these two distributions is more **informative**? What brings more knowledge to us: an outcome of a fair coin toss which brings about `1` and `0` with an approximately 50:50 distribution, or a toss of a coin with $p = .23$? Which outcome brings more **surprise** to us? 

Here is what we can tell about the two coins: which one is more **predictable**. Obviously, the coin with $p = .23$ is way more predictable than the fair coin, which carries complete uncertainty about its outcome. There is somehow more structure in the uneven distribution of outcomes with $p = .23$ than in the distribution of outcomes of a fair coin toss.

Consider the following measure:

$$H(X)=-\sum_{i=1}^{N}P(x_i)\log{P(x_i)}$$
It is called the **Shannon entropy**, or the **Shannon information entropy**, and it was introduced by [Claude Shannon](https://en.wikipedia.org/wiki/Claude_Shannon) in his famous 1948 paper **A Mathematical Theory of Communication**. Let's compute the entropy of a coin tossing statistical experiment over a range of values of $p$: 

```{r echo = T, message = F, warning = F}
p <- seq(.001, .999, by = .001)
ent <- -(p*log(p) + (1-p)*log(1-p))
entFrame <- data.frame(p = p, 
                       entropy = ent)
ggplot(data = entFrame, 
       aes(x = p, y = entropy)) + 
  geom_path(size = .25, color = "blue") + 
  ggtitle("Entropy of a Coin Toss") +
  theme_bw() + 
  theme(panel.border = element_blank()) + 
  theme(plot.title = element_text(hjust = .5))
```
Look how entropy nicely describes the uncertainty of a statistical experiment: it reaches its maximum for a fair coin with $p=.5$, and then falls symmetrically towards $p=0$ and $p=1$ where the outcomes of the experiment are **certain**. Entropy easily generalizes to more than two outcomes and is used as a fundamental measure of **uncertainty** in Information and Probability Theory. It is a conceptual cornerstone of many contemporary models of statistical learning.
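To connect this back to the two coin-toss experiments above, here is a small helper - `entropy_bits()` is our own hypothetical function, not a part of base R - that computes the empirical entropy of an observed vector of outcomes, in bits:

```{r echo = T, message = F, warning = F}
# - entropy_bits() is a hypothetical helper: the empirical entropy (in bits)
# - of the observed distribution of outcomes in x.
entropy_bits <- function(x) {
  p <- prop.table(table(x))
  p <- p[p > 0]
  -sum(p * log2(p))
}
entropy_bits(rbinom(n = 100, size = 1, prob = .23))  # lower: more structure
entropy_bits(rbinom(n = 100, size = 1, prob = .5))   # near 1 bit: maximal uncertainty
```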

Ask yourself: could we use entropy to measure the "purity" of a binary classification? Should our Decision Tree models maybe tend to grow trees with terminal nodes that are minimally entropic?

#### 2.2 Growing a Classification Tree

Let's consider the following two measures of *impurity of a node in a tree* that can help guide the recursive partitioning process in classification:

**Gini Impurity**

Define

$$\hat{p}_{mk} = \frac{1}{N_m}\sum_{y_i\in{R_m}}I(y_i=k)$$
where $N_m$ is the number of observations in region $R_m$, $k$ is a particular class that should be predicted correctly, $I()$ is again just an indicator function, and $y_i$ is an observation of the outcome. This is nothing else than the proportion (probability) of class $k$ in region $R_m$. The Gini Impurity index is then just

$$Gini\ Impurity = \sum_{k\neq{k}'}\hat{p}_{mk}\hat{p}_{mk'}=\sum_{k=1}^{K}\hat{p}_{mk}(1-\hat{p}_{mk})$$

The Gini Impurity is a measure of misclassification: it is the probability that an instance in the node would be incorrectly classified if it were labeled at random, following the distribution of classes in the node. Gini Impurity reaches its minimum of zero when all cases in the node fall into a single target category, so it is a badness-of-fit measure (the higher it is, the worse). Gini Impurity has a deep theoretical connection to entropy which we will not go into now. Essentially, we want to guide the process of recursive partitioning of a classification tree by looking for splits that minimize the Gini Impurity relative to its value in the parent nodes.
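A few lines of R make the behavior of this index transparent; `gini_impurity()` is a hypothetical helper that takes the vector of class proportions in a node:

```{r echo = T, message = F, warning = F}
# - gini_impurity() is a hypothetical helper: Gini Impurity of a node,
# - given the vector of class proportions p in that node.
gini_impurity <- function(p) sum(p * (1 - p))
gini_impurity(c(.5, .5))  # maximally impure two-class node: 0.5
gini_impurity(c(.9, .1))  # a much purer node
gini_impurity(c(1, 0))    # a perfectly pure node: 0
```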

**Information Gain**

Define

$$IG(T,a) = H(T)-H(T|a)$$

where $IG(T,a)$ is the **Information Gain** obtained from a split on attribute $a$ in a tree, $H(T)$ is the entropy of the parent node, and $H(T|a)$ is the weighted sum of the entropies of the children nodes - *the entropy following the split*. Essentially, we want to build classification trees that maximize the Information Gain in their splits. 
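Here is a sketch of that computation, again with hypothetical helpers: `node_entropy()` computes the entropy of an outcome vector, and `information_gain()` subtracts the size-weighted entropies of the children from the entropy of the parent:

```{r echo = T, message = F, warning = F}
# - node_entropy() and information_gain() are hypothetical helpers.
node_entropy <- function(y) {
  p <- prop.table(table(y))
  p <- p[p > 0]
  -sum(p * log2(p))
}
information_gain <- function(y, grp) {
  w <- prop.table(table(grp))  # relative sizes of the children nodes
  node_entropy(y) - sum(w * tapply(y, grp, node_entropy))
}
# - A perfect split recovers the full parent entropy as gain:
y <- c(rep(1, 5), rep(0, 15))
information_gain(y, grp = (y == 1))
```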

#### 2.3 Classification with {rpart}, again

Just to remind ourselves of how to use {rpart} for classification problems, we will repeat the steps already taken in Session 19. 

Consider the `HR_comma_sep.csv` dataset:

```{r echo = T, message = F, warning = F}
dataSet <- read.csv(paste0('_data/', 'HR_comma_sep.csv'), 
                    header = T, 
                    check.names = F,
                    stringsAsFactors = F)
head(dataSet)
```


```{r echo = T, message = F, warning = F}
table(dataSet$left)
```

The task is to predict the value of `left` - whether the employee has left the company or not - from a set of predictors encompassing the following:

```{r echo = T, message = F, warning = F}
glimpse(dataSet)
```

- **satisfaction_level**: a measure of the employee's level of satisfaction
- **last_evaluation**: the result of the last evaluation
- **number_project**: in how many projects did the employee take part
- **average_montly_hours**: average monthly working hours (note the variable name's spelling in the dataset)
- **time_spend_company**: for how long has the employee been with the company
- **Work_accident**: any work accidents?
- **promotion_last_5years**: was the employee promoted in the last five years?
- **sales**: department (sales, accounting, hr, technical, support, management, IT, product_mng, marketing, RandD)
- **salary**: salary class (low, medium, high)


Train one Decision Tree on the dataset (**N.B.** `method` is now `"class"`):

```{r echo = T, message = F, warning = F}
# - Base Model
classTree <- rpart(left ~ ., 
                   data = dataSet, 
                   method = "class")
```

Visualize the model with `rpart.plot()`:

```{r echo = T, message = F, warning = F}
rpart.plot(classTree)
```

#### 2.4 Pre-pruning: additional ways to avoid overfitting the tree, again 

Let's take a look at the *control parameters* used by {rpart} to fit this classification tree:

```{r echo = T, message = F, warning = F}
classTree$control
```
Pre-pruning a Decision Tree model essentially means nothing else than tweaking a subset of important control parameters until their optimal values for the problem at hand are found. Essentially, all of the following can be used to prevent the tree from becoming too complex and beginning to overfit the data (as concisely explained by [Sibanjan Das in his Decision Trees and Pruning in R post on DZone](https://dzone.com/articles/decision-trees-and-pruning-in-r)):

> - **maxdepth**: This parameter is used to set the maximum depth of a tree. Depth is the length of the longest path from a Root node to a Leaf node. Setting this parameter will stop growing the tree when the depth is equal the value set for maxdepth.
> - **minsplit:** It is the minimum number of records that must exist in a node for a split to happen or be attempted. For example, we set minimum records in a split to be 5; then, a node can be further split for achieving purity when the number of records in each split node is more than 5.
> - **minbucket:** It is the minimum number of records that can be present in a Terminal node. For example, we set the minimum records in a node to 5, meaning that every Terminal/Leaf node should have at least five records. We should also take care of not overfitting the model by specifying this parameter. If it is set to a too-small value, like 1, we may run the risk of overfitting our model.

And how is this *tweaking* of the control parameters performed? You already know the answer: by cross-validating the model across a range of their values, each time performing an ROC analysis for classification as we did in Session 19!
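A minimal sketch of this idea follows, under a simplifying assumption: instead of a full manual cross-validation with ROC analysis, we reuse the 10-fold cross-validated error (`xerror`) that {rpart} computes internally and reports in the `cptable`:

```{r echo = T, message = F, warning = F}
# - Pre-pruning as a grid search over stopping rules (sketch).
# - Criterion: the minimal internal 10-fold CV error (xerror) per model;
# - a complete treatment would cross-validate manually, with an ROC
# - analysis per candidate model, as in Session 19.
grid <- expand.grid(minsplit = c(10, 20, 40),
                    maxdepth = c(3, 5, 10))
grid$xerror <- apply(grid[, c('minsplit', 'maxdepth')], 1, function(params) {
  tree <- rpart(left ~ .,
                data = dataSet,
                method = "class",
                control = rpart.control(minsplit = params['minsplit'],
                                        maxdepth = params['maxdepth']))
  min(tree$cptable[, 'xerror'])
})
grid[which.min(grid$xerror), ]
```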

#### 2.5 Variable Importance

The importance of each variable in a Decision Tree model can be obtained from the `variable.importance` field of the model object in the following way:

```{r echo = T, message = F, warning = F}
importance <- 
  as.data.frame(
    sort(classTree$variable.importance, decreasing = T)
    )
colnames(importance) <- 'Importance'
print(importance)
```

The docs say:

> An overall measure of variable importance is the sum of the goodness of split measures for each split for which it was the primary variable.
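A quick way to inspect the importance table visually - a sketch, in the style of the plots used throughout these sessions - is to turn it into a bar chart:

```{r echo = T, message = F, warning = F}
# - Visualize variable importance (sketch).
importance$Variable <- rownames(importance)
ggplot(importance, 
       aes(x = reorder(Variable, Importance), 
           y = Importance)) + 
  geom_col(fill = "blue") + 
  coord_flip() + 
  xlab("Variable") + 
  theme_bw() + 
  theme(panel.border = element_blank())
```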

***

### Further Readings

+ [The Elements of Statistical Learning, Hastie, T., Tibshirani, R. & Friedman, J., 12th printing with corrections and table of contents, Jan 2017, Chapter 9.2 Tree-Based Methods](https://web.stanford.edu/~hastie/ElemStatLearn/printings/ESLII_print12_toc.pdf)

+ [An Introduction to Recursive Partitioning Using the RPART Routines, Terry M. Therneau, Elizabeth J. Atkinson, Mayo Foundation, April 11, 2019](https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf)

+ [A nice, intuitive, and precise explanation of Information Entropy and Information Gain with examples: Brian Ambielli's blog](https://bambielli.com/til/2017-10-22-information-gain/)

+ [A nice, intuitive, and precise explanation of Gini Impurity with examples: Brian Ambielli's blog](https://bambielli.com/til/2017-10-29-gini-impurity/)

+ [A step-by-step calculation of Information Gain for Decision Trees from English Wikipedia](https://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain)


***
Goran S. Milovanović

DataKolektiv, 2020/21

contact: goran.milovanovic@datakolektiv.com

![](../_img/DK_Logo_100.png)

***
License: [GPLv3](http://www.gnu.org/licenses/gpl-3.0.txt)
This Notebook is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This Notebook is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this Notebook. If not, see <http://www.gnu.org/licenses/>.

***

