Feedback should be sent to goran.milovanovic@datakolektiv.com. These notebooks accompany the Intro to Data Science: Non-Technical Background course 2020/21.
We will dive deep into theory today in an effort to understand the logic of statistical modeling by error minimization. We will talk a bit about the different types of mathematical models that are developed and used in contemporary Data Science, Machine Learning, and AI. We will distinguish between supervised and unsupervised learning on the one hand, and reinforcement learning on the other. Then we focus on supervised learning and the simple regression model again in order to understand how mathematical models are estimated from empirical data. We will demonstrate several different approaches to estimating a given model’s parameters from data and discuss why they all converge towards one single, important insight: statistics is learning.
Setup:
dataDir <- paste0(getwd(), "/_data/")
library(tidyverse)
library(data.table)
library(plotly)
Once again: Sepal.Length predicts Petal.Length in iris:
ggplot(data = iris,
aes(x = Sepal.Length,
y = Petal.Length)) +
geom_point(aes(x = Sepal.Length, y = Petal.Length), color = "black", size = 2) +
geom_point(aes(x = Sepal.Length, y = Petal.Length), color = "white", size = 1.5) +
geom_smooth(method='lm', size = .25, color = "red") +
ggtitle("Sepal Length vs Petal Length") +
xlab("Sepal Length") + ylab("Petal Length") +
theme_bw() +
theme(panel.border = element_blank()) +
theme(plot.title = element_text(hjust = .5))
`geom_smooth()` using formula 'y ~ x'
Let’s recall the form of the simple linear regression model:
\[Y = \beta_0 + \beta_1X_1 + \epsilon\]
or
\[outcome_i = model + error_i\]
where \(i\) is an index across the whole dataset (i.e. each row, each pair of Sepal.Length and Petal.Length, each observation). So, statistical models make errors in their predictions. Then what model works best for a given dataset? The one that minimizes the error, of course.
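For the data at hand, with Petal.Length as the outcome and Sepal.Length as the predictor, the same general form spells out as:
\[Petal.Length_i = \beta_0 + \beta_1Sepal.Length_i + \epsilon_i\]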
Take a look at the following chart:
ggplot(data = iris,
aes(x = Sepal.Length,
y = Petal.Length)) +
geom_point(aes(x = Sepal.Length, y = Petal.Length), color = "black", size = 2) +
geom_point(aes(x = Sepal.Length, y = Petal.Length), color = "white", size = 1.5) +
geom_hline(yintercept = mean(iris$`Petal.Length`),
linetype = "dashed",
color = "red") +
ggtitle("Sepal Length vs Petal Length") +
xlab("Sepal Length") + ylab("Petal Length") +
theme_bw() +
theme(panel.border = element_blank()) +
theme(plot.title = element_text(hjust = .5))
The horizontal line in this chart has an intercept of 3.758, which is the mean of iris$Petal.Length, the outcome variable in our simple linear regression model. If there were no predictor iris$Sepal.Length, the mean of \(Y\) would be our best possible guess about any new value observed on that variable. Let’s take a look at the residuals of the outcome variable from its own mean:
linFit <- iris
linFit$Petal.Length.AtMean <- linFit$Petal.Length - mean(linFit$Petal.Length)
ggplot(data = linFit,
aes(x = Sepal.Length,
y = Petal.Length)) +
geom_hline(yintercept = mean(iris$`Petal.Length`),
linetype = "dashed",
color = "red") +
geom_segment(aes(x = Sepal.Length,
y = Petal.Length,
xend = Sepal.Length,
yend = Petal.Length - Petal.Length.AtMean),
color = "black", size = .2, linetype = "dashed") +
geom_point(aes(x = Sepal.Length, y = Petal.Length), color = "black", size = 2) +
geom_point(aes(x = Sepal.Length, y = Petal.Length), color = "white", size = 1.5) +
geom_point(aes(x = Sepal.Length, y = mean(iris$Petal.Length)), color = "red", size = 1) +
xlab("Sepal.Length") + ylab("Petal.Length") +
theme_classic() +
theme_bw() +
theme(panel.border = element_blank()) +
theme(strip.background = element_blank()) +
theme(axis.text.x = element_text(size = 6)) +
theme(axis.text.y = element_text(size = 6))
What is our error, in total, if we predict the values of a variable from its mean alone, without any other predictor at hand? Enter the Total Sum of Squares, \(SS_T\):
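In symbols, with \(\bar{y}\) denoting the mean of the outcome and the sum running over all \(n\) observations:
\[SS_T = \sum_{i=1}^{n}(y_i - \bar{y})^2\]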
ss_total <- sum((iris$Petal.Length - mean(iris$Petal.Length))^2)
print(ss_total)
[1] 464.3254
Ok, now back to the simple linear regression model Petal.Length ~ Sepal.Length:
linFit <- lm(data = iris,
Petal.Length ~ Sepal.Length)
linFit <- data.frame(
x = iris$Sepal.Length,
y = iris$Petal.Length,
predicted = linFit$fitted.values,
residuals = linFit$residuals
)
ggplot(data = linFit,
aes(x = x, y = y)) +
geom_smooth(method = lm, se = F, color = "red", size = .25) +
geom_segment(aes(x = x,
y = predicted,
xend = x,
yend = predicted + residuals),
color = "black", size = .2, linetype = "dashed") +
geom_point(aes(x = x, y = y), color = "black", size = 2) +
geom_point(aes(x = x, y = y), color = "white", size = 1.5) +
geom_point(aes(x = x, y = predicted), color = "red", size = 1) +
xlab("Sepal.Length") + ylab("Petal.Length") +
theme_classic() +
theme_bw() +
theme(panel.border = element_blank()) +
theme(strip.background = element_blank()) +
theme(axis.text.x = element_text(size = 6)) +
theme(axis.text.y = element_text(size = 6))
`geom_smooth()` using formula 'y ~ x'
The Residual Sum of Squares, \(SS_R\), is the sum of squared distances from the observed data points to the model’s respective predictions:
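In symbols, with \(\hat{y}_i\) denoting the model’s prediction for observation \(i\):
\[SS_R = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2\]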
slr_model <- lm(data = iris,
Petal.Length ~ Sepal.Length)
ss_residual <- sum(residuals(slr_model)^2)
print(ss_residual)
[1] 111.4592
Finally, the Model Sum of Squares, \(SS_M\).
linFit <- lm(data = iris,
Petal.Length ~ Sepal.Length)
linFit <- data.frame(
x = iris$Sepal.Length,
y = iris$Petal.Length,
predicted = linFit$fitted.values,
meanY = mean(iris$Petal.Length)
)
ggplot(data = linFit,
aes(x = x, y = y)) +
geom_smooth(method = lm, se = F, color = "red", size = .25) +
geom_hline(yintercept = mean(iris$`Petal.Length`),
linetype = "dashed",
color = "red") +
geom_segment(aes(x = x,
y = predicted,
xend = x,
yend = meanY),
color = "red", size = .2, linetype = "dashed") +
geom_point(aes(x = x, y = y), color = "black", size = 2) +
geom_point(aes(x = x, y = y), color = "white", size = 1.5) +
geom_point(aes(x = x, y = predicted), color = "red", size = 1) +
xlab("Sepal.Length") + ylab("Petal.Length") +
theme_classic() +
theme_bw() +
theme(panel.border = element_blank()) +
theme(strip.background = element_blank()) +
theme(axis.text.x = element_text(size = 6)) +
theme(axis.text.y = element_text(size = 6))
`geom_smooth()` using formula 'y ~ x'
The Model Sum of Squares, \(SS_M\), is based on the distances between the mean of the outcome and the model predictions:
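In symbols:
\[SS_M = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2\]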
ss_model <- sum((linFit$predicted - linFit$meanY)^2)
print(ss_model)
[1] 352.8662
Now:
print(ss_total - ss_residual)
[1] 352.8662
They match! The complete error present in an attempt to predict Petal.Length from its mean alone, \(SS_T\), can thus be decomposed into \(SS_R\) and \(SS_M\):
\[SS_{total} = SS_{model} + SS_{residual} = SS_T = SS_M + SS_R\]
#### 1.2 \(F\)-test, \(t\)-test, and \(R^2\) in Simple Linear Regression
But there are more tricks waiting to be pulled, see:
slr_model <- lm(data = iris,
Petal.Length ~ Sepal.Length)
summary(slr_model)
Call:
lm(formula = Petal.Length ~ Sepal.Length, data = iris)
Residuals:
Min 1Q Median 3Q Max
-2.47747 -0.59072 -0.00668 0.60484 2.49512
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -7.10144 0.50666 -14.02 <2e-16 ***
Sepal.Length 1.85843 0.08586 21.65 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8678 on 148 degrees of freedom
Multiple R-squared: 0.76, Adjusted R-squared: 0.7583
F-statistic: 468.6 on 1 and 148 DF, p-value: < 2.2e-16
What is \(SS_M/SS_T\)?
print(ss_model/ss_total)
[1] 0.7599546
So we have: \(R^2 = SS_M/SS_T\)!
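Since \(SS_T = SS_M + SS_R\), this is the same as \(R^2 = 1 - SS_R/SS_T\); as a quick check, using the sums of squares computed above, the following should print the same number:
print(1 - ss_residual/ss_total)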
Now, remember the \(F\)-test in the output that was mentioned earlier?
Let’s divide \(SS_M\) and \(SS_R\) by their respective degrees of freedom. For \(df_R\) in the simple linear regression (as in other models as well) we take the number of observations minus the number of parameters in the model (two, in our case: the intercept and the slope), while for \(df_M\) we take only the number of predictors in the model (one, in our case):
df_residual <- dim(iris)[1] - 2 # - num.obs - num.parameters
print(paste0("df_residual is: ", df_residual))
[1] "df_residual is: 148"
df_model <- 1 # - how many variables?
print(paste0("df_model is: ", df_model))
[1] "df_model is: 1"
ms_model <- ss_model/df_model
ms_residual <- ss_residual/df_residual
print(paste0("MS_model is: ", ms_model))
[1] "MS_model is: 352.866244880181"
print(paste0("MS_residual is: ", ms_residual))
[1] "MS_residual is: 0.753102399458234"
They are now called mean squares, our \(MS_M\) and \(MS_R\). Ok, now:
f_test <- ms_model/ms_residual
print(f_test)
[1] 468.5502
And now we know that
\(F = MS_{Model}/MS_{Residual} = MS_M/MS_R\). The \(F\)-test follows the \(F\)-distribution with \(df_M\) and \(df_R\) degrees of freedom:
# - simulate from the F-distribution with df_M and df_R degrees of freedom
x <- rf(100000, df1 = df_model, df2 = df_residual)
hist(x,
breaks = 100,
freq = FALSE,
xlim = c(0, 4),
xlab = '',
main = ('F-distribution with df_1 = df_M and df_2 = df_R degrees of freedom'),
cex.main = 0.9)
# - overlay the theoretical F density with the same degrees of freedom
curve(df(x, df1 = df_model, df2 = df_residual), from = 0, to = 4, n = 5000, col = 'pink', lwd = 2, add = T)
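To connect this \(F\)-statistic with the p-value reported in the model summary, we can (as a small sketch, using base R’s pf()) compute the probability of observing a value at least this large under the \(F\)-distribution with \(df_M\) and \(df_R\) degrees of freedom:
pf(f_test, df1 = df_model, df2 = df_residual, lower.tail = FALSE)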
And another thing to observe:
print(f_test)
[1] 468.5502
slr_summary <- summary(slr_model)
slr_coeffs <- data.frame(slr_summary$coefficients)
print(slr_coeffs)
slr_ttest <- slr_coeffs$t.value[2]
print(slr_ttest^2)
[1] 468.5502
print(f_test)
[1] 468.5502
N.B. Complicated things have many correct interpretations, and many connections exist among them: here, for example, the squared \(t\)-value of the slope equals the \(F\)-statistic, and \(R^2 = SS_M/SS_T\).
I want to write a function now. The function takes the following as its inputs:
- a dataframe d with the predictor x and the outcome y,
- an intercept \(\beta_0\), and
- a slope \(\beta_1\),
and it returns the sum of squared errors (i.e. residuals) from the simple linear regression of the well known \(Y = \beta_0 + \beta_1X + \epsilon\) form.
Here is what I want essentially:
d <- data.frame(x = iris$Sepal.Length,
y = iris$Petal.Length)
residuals <- summary(lm(y ~ x, data = d))$residuals
print(sum(residuals^2))
[1] 111.4592
But I do not want to use lm(): instead, I want to be able to specify \(\beta_0\) and \(\beta_1\) myself:
sse <- function(d, beta_0, beta_1) {
predictions <- beta_0 + beta_1 * d$x
residuals <- d$y - predictions
return(sum(residuals^2))
}
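Just to see it in action, we can now evaluate the model error at an arbitrary (and surely not optimal) point of the parameter space, say an intercept of 0 and a slope of 1:
sse(d = d, beta_0 = 0, beta_1 = 1)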
Ok. Now, the lm() function in R can find the optimal values of the parameters \(\beta_0\) and \(\beta_1\), right?
slr_model <- lm(y ~ x, data = d)
coefficients(slr_model)
(Intercept) x
-7.101443 1.858433
Let’s test our sse() function:
beta_0_test <- coefficients(slr_model)[1]
beta_1_test <- coefficients(slr_model)[2]
sse(d = d,
beta_0 = beta_0_test,
beta_1 = beta_1_test)
[1] 111.4592
And from lm() again:
d <- data.frame(x = iris$Sepal.Length,
y = iris$Petal.Length)
residuals <- summary(lm(y ~ x, data = d))$residuals
print(sum(residuals^2))
[1] 111.4592
So we know that our sse() works just fine.
Now: how could we have determined the optimal values - the error minimizing values - of \(\beta_0\) and \(\beta_1\) without relying on lm()?
One idea is to move across the space of the parameter values in small steps and compute the model error at each point in that space, for example:
test_beta_0 <- seq(-10, 10, by = .1)
test_beta_1 <- seq(-10, 10, by = .1)
model_errors <- lapply(test_beta_0, function(x) {
return(
rbindlist(
lapply(test_beta_1, function(y) {
s <- sse(d = d, beta_0 = x, beta_1 = y)
return(data.frame(sse = s,
beta_0 = x,
beta_1 = y))
}))
)
})
model_errors <- rbindlist(model_errors)
head(model_errors)
What would be the optimal estimates of \(\beta_0\) and \(\beta_1\) then?
model_errors[which.min(model_errors$sse), ]
Compare:
coefficients(slr_model)
(Intercept) x
-7.101443 1.858433
Hm, not bad?
Another idea that comes to mind is the following one: why not take a uniform random sample from the Parameter Space and check out the sse() at various points defined by their \(\beta_0\) and \(\beta_1\) coordinates?
sample_parameters <- data.frame(beta_0 = runif(100000, -10, 10),
beta_1 = runif(100000, -10, 10))
head(sample_parameters)
Now let’s find the model errors at each randomly sampled combination of parameters:
sample_parameters$sse <- apply(sample_parameters, 1, function(x) {
sse(d, x[1], x[2])
})
head(sample_parameters)
And what would be the optimal estimates of \(\beta_0\) and \(\beta_1\) in this case?
sample_parameters[which.min(sample_parameters$sse), ]
Compare:
coefficients(slr_model)
(Intercept) x
-7.101443 1.858433
Hm?
N.B. Finding the optimal values of the model’s parameters implies some sort of search through the Parameter Space, picking the values that minimize the model error as much as possible.
But there are better ways to do it than Grid Search or Random Sampling. And whenever that is possible, this is what we do to our statistical learning models: we optimize them.
Please pay close attention to what exactly is happening in these procedures:
- d is a constant: it does not change in any of the sse() function’s runs;
- the sse() function does not estimate anything - it is not lm()! - but computes the model error instead.
So what are we really looking for? We are looking for a way to find the minimum of our sse() function. First we need a slight rewrite of sse() only:
sse <- function(params) {
beta_0 <- params[1]
beta_1 <- params[2]
# - MODEL IS HERE:
predictions <- beta_0 + beta_1 * d$x
# - MODEL ENDS HERE ^^
residuals <- d$y - predictions
return(sum(residuals^2))
}
N.B. As the dataset d is a constant, it no longer plays the role of an sse() function parameter.
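As a quick sanity check (assuming beta_0_test and beta_1_test from above are still in the workspace), the rewritten function should reproduce the familiar \(SS_R\) value:
sse(c(beta_0_test, beta_1_test))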
Pick some random, initial values for \(\beta_0\) and \(\beta_1\):
beta_0_start <- runif(1, -10, 10)
beta_1_start <- runif(1, -10, 10)
print(beta_0_start)
[1] 1.237548
print(beta_1_start)
[1] -2.486866
solution <- optim(par = c(beta_0_start, beta_1_start),
fn = sse,
method = "Nelder-Mead",
lower = -Inf, upper = Inf)
print(solution$par)
[1] -7.086737 1.855748
Compare:
coefficients(slr_model)
(Intercept) x
-7.101443 1.858433
Now that looks great!
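We can also inspect the minimized error itself, which optim() returns as solution$value, and compare it to the \(SS_R\) of 111.4592 that lm() gave us earlier; the two should be very close:
print(solution$value)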
What we are really looking at here is this (first reducing the range of the sample_parameters dataframe a bit, for plotting purposes… :-)
# - back to the old version of sse():
sse <- function(d, beta_0, beta_1) {
predictions <- beta_0 + beta_1 * d$x
residuals <- d$y - predictions
return(sum(residuals^2))
}
# - sample parameters on a different range for plotting purposes
sample_parameters <- data.frame(beta_0 = runif(100000, -7.5, 7.5),
beta_1 = runif(100000, -2, 2))
sample_parameters$sse <- apply(sample_parameters, 1, function(x) {
sse(d, x[1], x[2])
})
head(sample_parameters)
plot_ly() %>%
add_trace(data = sample_parameters,
x = sample_parameters$beta_0,
y = sample_parameters$beta_1,
z = sample_parameters$sse,
type = "mesh3d",
intensity = sample_parameters$sse,
colorscale = "Viridis") %>%
layout(
modebar = list(orientation = "v"),
title = "The Simple Linear Regression Error Landscape",
scene = list(
xaxis = list(title = "Beta 0", range = c(-7.5, 7.5)),
yaxis = list(title = "Beta 1", range = c(-2, 2)),
zaxis = list(title = "SSE")
))