General Linear Model 1
Here are the notes for general linear regression.
General Linear Model 1: Linear Regression
1. What is Linear Regression?
It is one of the most well-known and well-understood algorithms in statistics and machine learning.
Linear regression is a linear model, i.e., it assumes a linear relationship between the input X and the output Y. More technically, Y can be modeled as a linear combination of X.
Quote: our goal is to find a line such that the total distance from all of the input points to that line is minimized.
The simplest linear regression can be written as $Y = \beta_0 + \beta_1 X + \epsilon$.
In statistics, this is a parametric model, i.e., it has the parameters $\beta_0$ and $\beta_1$.
(Ideally we would have $Y = \beta_0 + \beta_1 X$ exactly, but there is no way around the noise term $\epsilon$; we do, however, know that it is normal.)
So, what is our goal?
To find the relationship between X and Y → find the best $\beta_0$ and $\beta_1$ → minimize the total error (the loss function) → least squares.
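As a quick illustration, here is a minimal sketch of that goal in code (the simulated data, seed, and use of `numpy.polyfit` are my own choices for illustration, not from the notes):

```python
# A minimal sketch: simulate noisy linear data and fit a line by
# least squares. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)           # input X
eps = rng.normal(0, 1.0, size=50)         # noise, assumed N(0, sigma^2)
Y = 2.0 + 0.5 * X + eps                   # true line: beta0=2.0, beta1=0.5

beta1_hat, beta0_hat = np.polyfit(X, Y, deg=1)  # least-squares fit
print(f"beta0_hat={beta0_hat:.3f}, beta1_hat={beta1_hat:.3f}")
```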
2. The Key Questions
2.1 How do we determine this model? (parameter estimation)
2.1.1 Loss function (we can link this to the more general case of loss functions)
How do we know whether the fit is good? We need to look at the error.
$\text{Error} = |y_i - \hat{y}_i|$
Can the error take a different form? Yes, but we use the square because it is easier to compute with.
loss function = total error
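A tiny worked example of the two error choices (the numbers are made up for illustration):

```python
# One prediction, two ways to measure its error.
y_true, y_pred = 3.0, 2.5
abs_err = abs(y_true - y_pred)      # absolute error |y - y_hat| = 0.5
sq_err = (y_true - y_pred) ** 2     # squared error (y - y_hat)^2 = 0.25
```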
2.1.2 Deriving the loss function via least squares
Squaring is not the only choice here, but the sum of squares yields a best value in a certain sense.
The value of y that minimizes the total squared error is taken as the true value; this assumption is optimal when the errors are random fluctuations.
So we seek $\text{Loss} = \min_{\beta_0, \beta_1} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 X_i)^2$.
Since we are looking for the $\beta_0$ and $\beta_1$ that satisfy $\arg\min_{\beta_0, \beta_1} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 X_i)^2$,
we are looking for the $\beta_0, \beta_1$ that minimize $\sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 X_i)^2$, i.e., the minimum of the loss.
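For reference, a sketch of the closed-form solution to this minimization (obtained by setting the partial derivatives of the loss to zero; the function name is mine):

```python
# Closed-form least-squares estimates for simple linear regression.
# X, Y are 1-D numpy arrays of equal length.
import numpy as np

def ols_simple(X, Y):
    x_bar, y_bar = X.mean(), Y.mean()
    beta1 = np.sum((X - x_bar) * (Y - y_bar)) / np.sum((X - x_bar) ** 2)
    beta0 = y_bar - beta1 * x_bar
    return beta0, beta1
```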
Least squares is not always optimal.
A few reminders:
Least squares builds the objective function from the cost-function perspective, using a definition of distance. (Note that least squares is a method; the construction of the whole loss-function framework can be linked to statistical learning theory.)
Classical parameter estimation builds the objective from a probabilistic perspective instead, e.g., maximum likelihood estimation (MLE).
2.1.3 Deriving the loss function via maximum likelihood estimation (MLE)
What is MLE?
MLE is a method of estimating the parameters of a statistical model given observations, by finding the parameter values that maximize the likelihood of making those observations given the parameters. In other words, it is a parameter-estimation method: once we have some observed values, we look for the parameters under which our observations are most probable.
For linear regression, this is equivalent to finding the line that passes through the points of greatest likelihood (greatest density, i.e., as much probability as possible).
At the same time, for each x this line lies where the conditional distribution of Y is most likely (cf. the normality assumption / CLT).
(See the figure in the original notes.)

Don't forget to use the model assumptions: $p(y \mid x)$ is a normal distribution with mean $\mu = f(x)$ (depends on x) and variance $\sigma^2$ (independent of x); equivalently, $\epsilon \sim N(0, \sigma^2)$.
Derivation **(a classic interview question)**
We know that $Y \mid X \sim N(\beta_0 + \beta_1 X, \sigma^2)$.
- $p(Y_i \mid X_i) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2\sigma^2}(Y_i - \beta_0 - \beta_1 X_i)^2}$
- $L(\beta_0, \beta_1, \sigma^2) = p(Y_1, \cdots, Y_n \mid X_1, \cdots, X_n) = \frac{1}{\sigma^n (2\pi)^{n/2}} e^{-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (Y_i - \beta_0 - \beta_1 X_i)^2}$
under the assumption that $(X_1, Y_1), \cdots, (X_n, Y_n)$ are independent,
where $L(\beta_0, \beta_1, \sigma^2) = p(Y_1, \cdots, Y_n \mid X_1, \cdots, X_n) = p(Y_1 \mid X_1, \cdots, X_n) \cdots p(Y_n \mid X_1, \cdots, X_n) = p(Y_1 \mid X_1) p(Y_2 \mid X_2) \cdots p(Y_n \mid X_n)$
The corresponding log-likelihood (we only care about the parameters, and taking the log does not affect monotonicity or the other relevant mathematical properties):
$\log L(\beta_0, \beta_1, \sigma^2) = -n \log(\sqrt{2\pi}\,\sigma) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (Y_i - \beta_0 - \beta_1 X_i)^2$
So what we are looking for is
$\arg\max_{\beta_0, \beta_1} -\sum_{i=1}^{n} (Y_i - \beta_0 - \beta_1 X_i)^2$, i.e.,
$\arg\min_{\beta_0, \beta_1} \sum_{i=1}^{n} (Y_i - \beta_0 - \beta_1 X_i)^2$
What is the relationship between the two?
Under the assumptions of linear regression, these two methods arrive at the same result;
one is from statistics, and the other is from optimization.
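A sketch that checks this equivalence numerically (assuming scipy is available; the data and names are illustrative):

```python
# Maximize the Gaussian log-likelihood numerically and compare the
# resulting betas with the least-squares fit; they should agree.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, 100)
Y = 1.0 + 2.0 * X + rng.normal(0, 1.5, 100)

def neg_log_lik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)                  # keeps sigma positive
    resid = Y - b0 - b1 * X
    # negative log-likelihood, up to an additive constant
    return len(Y) * np.log(sigma) + np.sum(resid ** 2) / (2 * sigma ** 2)

b0_mle, b1_mle, _ = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0]).x
b1_ols, b0_ols = np.polyfit(X, Y, 1)
print((b0_mle, b1_mle), (b0_ols, b1_ols))      # nearly identical
```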
A reminder:
Noise comes from the data and is inherent; error comes from the model and is man-made. They are two different concepts.
2.2 How well does this model perform? (This looks at the quality of the model itself, i.e., evaluates its own parameter estimates.)
We can only guarantee systematically that the estimate is not biased ($E(Y)=\mu$).
To consider this question, we need to link it to statistical hypothesis testing.
Null hypothesis: $H_0: \beta_1 = 0$
The goal is to judge statistically how far this sample deviates from the population, assessing the accuracy of the coefficient estimates; we can use a p-value or the confidence interval $\hat{\beta}_1 \pm 2\,SE(\hat{\beta}_1)$.
The chosen test statistic is $t = \frac{\hat{\beta}_1 - 0}{SE(\hat{\beta}_1)}$, which follows a t distribution with $n - 2$ degrees of freedom assuming $\beta_1 = 0$.
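A sketch of this test in code (the helper name is mine; scipy assumed):

```python
# t statistic and two-sided p-value for H0: beta1 = 0 in simple regression.
import numpy as np
from scipy import stats

def slope_t_test(X, Y):
    n = len(X)
    b1, b0 = np.polyfit(X, Y, 1)
    resid = Y - (b0 + b1 * X)
    sigma2_hat = np.sum(resid ** 2) / (n - 2)           # RSS / (n - 2)
    se_b1 = np.sqrt(sigma2_hat / np.sum((X - X.mean()) ** 2))
    t = (b1 - 0) / se_b1
    p = 2 * stats.t.sf(abs(t), df=n - 2)                # two-sided
    return t, p
```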
2.3 How can we compare this model with other models?
(This judges the model from the outside, i.e., the extent to which the model fits the data.)
assessing the overall accuracy
$RSE = \sqrt{\frac{1}{n-2} RSS} = \sqrt{\frac{1}{n-2} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$
$R^2 = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS}$,
where $TSS = \sum_{i=1}^{n} (y_i - \bar{y})^2$ is the total sum of squares and $RSS = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$ is the residual sum of squares (a measure of the amount of error).
In simple regression, $R^2$ equals the squared correlation between X and Y.
$R^2$ is the proportion of variability in Y that can be explained using X.
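A sketch computing both quantities (the function name and the `p` argument are mine; `p=1` recovers the simple-regression case):

```python
# RSE and R^2 from observed y and fitted y_hat (1-D numpy arrays).
import numpy as np

def rse_r2(y, y_hat, p=1):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)             # residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)          # total sum of squares
    rse = np.sqrt(rss / (n - p - 1))           # n - 2 when p = 1
    r2 = 1 - rss / tss
    return rse, r2
```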
2.4 Extra study of GLM assumptions
https://zhuanlan.zhihu.com/p/22876460
Multiple Linear Regression
The corresponding notes from GoodNotes are also collected here.
Interpreting regression coefficients: ideally the inputs are uncorrelated; correlation among inputs affects interpretation; each input can be compared with the output separately.
Use RSS to judge the goodness of fit.
Is at least one of the predictors $X_1, \cdots, X_p$ useful in predicting the response? Use the F-statistic: $F = \frac{(TSS - RSS)/p}{RSS/(n - p - 1)}$.
Do all the predictors help to explain Y, or is only a subset of the predictors useful? (We cannot try every combination of inputs; so, forward selection: choose one $X_i$ by minimizing $RSS$, then choose a second $X_j$ by minimizing $RSS$, and so on, until the selected p-values qualify. Alternatively, backward selection: put all predictors in and delete them one by one based on p-values.) See the sketch after this list of questions.
How well does the model fit the data?
systematic criteria for choosing an 'optimal' member in the path of models produced by forward or backward stepwise selection;
Other criteria: Mallow's $C_p$, Akaike information criterion (AIC), Bayesian information criterion (BIC), adjusted $R^2$, cross-validation (CV).
Given a set of predictor values, what response value should we predict, and how accurate is our prediction?
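A sketch of the forward-selection idea from the subset question above (all names are mine; X is an n×p design matrix without the intercept column):

```python
# Greedy forward selection: repeatedly add the predictor that
# lowers RSS the most, k times.
import numpy as np

def forward_select(X, y, k):
    n, p = X.shape
    chosen = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])  # add intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return chosen
```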
Be careful with qualitative data; a qualitative predictor can be encoded as a binary variable $x_1 \in \{0, 1\}$ for the different cases, and of course additional dummies such as $x_2$ can be added.
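A sketch of such an encoding (the levels are made up):

```python
# Dummy-encode a three-level qualitative predictor, using "east"
# as the baseline level.
import numpy as np

region = np.array(["east", "west", "north", "west"])
x1 = (region == "west").astype(float)       # 1 if west, else 0
x2 = (region == "north").astype(float)      # 1 if north, else 0
```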
Removing the additive assumption: interactions and nonlinearity
Interaction: mutual influence, e.g., in a market, increasing $x_1$ also affects $x_2$; in this case, add an $x_1 x_2$ term.
Hierarchy principle: if we include an interaction in a model, we should also include the main effects, even if the $p$-values associated with their coefficients are not significant.
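A sketch of a design matrix with an interaction term, keeping both main effects per the hierarchy principle (the helper is mine):

```python
# Columns: intercept, x1, x2, and the interaction x1*x2.
import numpy as np

def design_with_interaction(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
```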
Outliers, non-constant variance of error terms, high leverage points, and collinearity: see Section 3.3 of An Introduction to Statistical Learning.
A brief explanation of gradient descent
Gradient descent is a commonly used optimization technique for other models as well, like neural networks, which we'll explore later in this track. Here's an overview of the gradient descent algorithm for a single-parameter linear regression model:
select an initial value for the parameter: $a_1$
repeat until convergence (usually implemented with a max number of iterations):
calculate the error (MSE) of the model that uses the current parameter value: $MSE(a_1) = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}^{(i)} - y^{(i)})^2$
calculate the derivative of the error (MSE) at the current parameter value: $\frac{d}{da_1}MSE(a_1)$
update the parameter value by subtracting the derivative times a constant ($\alpha$, called the learning rate): $a_1 := a_1 - \alpha \frac{d}{da_1} MSE(a_1)$
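A runnable sketch of the loop above (assuming, as the single-parameter setup suggests, the model $\hat{y} = a_1 x$ with no intercept; the learning rate and stopping rule are illustrative):

```python
# Gradient descent for a one-parameter linear model y_hat = a1 * x.
import numpy as np

def gradient_descent(x, y, alpha=0.01, max_iter=1000, tol=1e-8):
    a1 = 0.0                                   # initial parameter value
    for _ in range(max_iter):
        y_hat = a1 * x
        # d/da1 MSE = (2/n) * sum((y_hat - y) * x)
        grad = 2.0 * np.mean((y_hat - y) * x)
        new_a1 = a1 - alpha * grad             # learning-rate step
        if abs(new_a1 - a1) < tol:             # convergence check
            break
        a1 = new_a1
    return a1
```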
Reference:
books: An Introduction to Statistical Learning
notes in GoodNotes
lai notes