Note: the algorithm used is gradient descent, and the cost function comes from least squares: we minimize the sum of squared residuals.
```python
import numpy as np

# Hypothesis function: X is a matrix, W is a column vector
def hyFunction(X, W):
    return X.dot(W)  # computes the predictions for all samples at once

# Gradient function: X is the sample matrix, W the coefficient vector, y the actual values
def gradientFunction(X, W, y):
    return (X.dot(W) - y).dot(X) / X.shape[0]  # gradient for every component of W at once

# Cost function: X is a matrix, W and y are vectors
def costFunction(X, W, y):
    # least-squares cost: (half the mean of the) sum of squared residuals
    return 0.5 * np.sum((X.dot(W) - y) ** 2) / X.shape[0]

# Gradient descent: X samples, w initial coefficients, costFunc cost function,
# gFunc gradient function, lamb learning rate, tolance convergence threshold,
# times maximum number of iterations
def gradientDescent(X, w, y, costFunc, gFunc, lamb=0.001, tolance=1.0e-12, times=2000000):
    t = 0
    result = costFunc(X, w, y)        # previous cost value
    g = gFunc(X, w, y)                # previous gradient
    newW = w - lamb * g               # new point from the gradient and learning rate
    newResult = costFunc(X, newW, y)  # cost at the new point
    # keep iterating while the change between two cost values exceeds the threshold
    while np.abs(result - newResult) > tolance:
        w = newW                      # move to the new point
        result = newResult
        g = gFunc(X, w, y)
        newW = w - lamb * g
        newResult = costFunc(X, newW, y)
        t += 1
        if t > times:                 # guard against failure to converge
            break
    return w
```

Validation with test data:
```python
# Build test data to validate the algorithm.
# Target function: f(x1, x2) = w0 + w1*x1 + w2*x2 with w1 = 2, w2 = 1, w0 = 3
X = np.array([[3, 0], [4, 1], [5, 2], [7, 3]])
y = np.array([9, 12, 15, 20])
row = X.shape[0]
one = np.ones(row)
one = one[:, np.newaxis]
X = np.hstack((X, one))  # append a column of ones for the bias term w0
w = gradientDescent(X, np.array([2, 1, 4]), y, costFunction, gradientFunction)
print(w)

# Feature expansion for a higher-order fit: x and y below were computed
# (approximately) from y = 0.5*x**3 + 1.2*x**2 - 2.3*x + 10
x = np.array([-3, -2.3, -1.6, -1, -0.3, 0.3, 1, 1.6, 2.3, 3])
y = np.array([14.2, 15.5, 14.8, 13., 10.8, 9.4, 9.4, 11.8, 17.5, 27.4])

def make_x_ext(X):
    x_ext = np.c_[X[:, np.newaxis], np.ones(len(X))]     # append a column of ones
    x_ext = np.insert(x_ext, 0, np.power(X, 2), axis=1)  # insert the x**2 column
    x_ext = np.insert(x_ext, 0, np.power(X, 3), axis=1)  # insert the x**3 column
    return x_ext

w_init = np.array([2.5, 3.0, 1.2, 0.7])  # an initial guess at the coefficients
x_ext = make_x_ext(x)
w = gradientDescent(x_ext, w_init, y, costFunction, gradientFunction)
print(w)
```

Low-order and high-order fitting results:
Coefficients fitted to the first data set, matching the target:
[1.99858201 1.00181951 3.00403465]
Coefficients fitted to the second data set, matching the target:
[ 0.49012808 1.20100303 -2.19801389 10.07079948]
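As a cross-check (not part of the original notes), the first set of coefficients can also be obtained in closed form with NumPy's least-squares solver, assuming the same data layout as above (feature columns first, then a trailing column of ones):

```python
import numpy as np

# Same samples as in the first test: columns are x1, x2, and a column of ones.
X = np.array([[3., 0., 1.],
              [4., 1., 1.],
              [5., 2., 1.],
              [7., 3., 1.]])
y = np.array([9., 12., 15., 20.])

# Solve min ||X w - y||^2 directly (SVD-based least squares).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # → close to [2, 1, 3], matching the gradient-descent result
```

Because these samples were generated exactly from w1 = 2, w2 = 1, w0 = 3, the residual here is zero and the solver recovers the coefficients exactly (up to floating-point error).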
f(W, X) = [[w11, w12], [w21, w22]] dot [[x11, x12, x13], [x21, x22, x23]]
Cost function (objective function): Σ(i=1..n) (h(x(i)) - y(i))^2
Least squares is the computational method for linear-regression problems.
Supervised learning model: predicted value vs. actual value.
Residual: predicted value - actual value.
Hypothesis function: an equation we define ourselves, h(x) = w0 + w1*x
The cost function is a function of the coefficients w0 and w1:
F(w0, w1) = Σ(i=1..n) (h(x(i)) - y(i))^2
          = Σ(i=1..n) (w0 + w1*x(i) - y(i))^2
          = (w0 + w1*x1 - y1)^2 + (w0 + w1*x2 - y2)^2 + ... + (w0 + w1*xn - yn)^2
To find the extremum of F(w0, w1), set its partial derivatives to zero:
F'(w0) = 2*(w0 + w1*x1 - y1) + 2*(w0 + w1*x2 - y2) + ... + 2*(w0 + w1*xn - yn) = 0
F'(w1) = 2*x1*(w0 + w1*x1 - y1) + 2*x2*(w0 + w1*x2 - y2) + ... + 2*xn*(w0 + w1*xn - yn) = 0
From the first equation:
2*(w0 + w1*x1 - y1) + 2*(w0 + w1*x2 - y2) + ... + 2*(w0 + w1*xn - yn) = 0
n*w0 + w1*(x1 + x2 + ... + xn) - (y1 + y2 + ... + yn) = 0
n*w0 = (y1 + y2 + ... + yn) - w1*(x1 + x2 + ... + xn)
w0 = ((y1 + y2 + ... + yn) - w1*(x1 + x2 + ... + xn)) / n
w0 = mean(y) - w1*mean(x)
From the second equation:
2*x1*(w0 + w1*x1 - y1) + 2*x2*(w0 + w1*x2 - y2) + ... + 2*xn*(w0 + w1*xn - yn) = 0
w0*(x1 + x2 + ... + xn) + w1*(x1^2 + x2^2 + ... + xn^2) - (x1*y1 + x2*y2 + ... + xn*yn) = 0
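Substituting w0 = mean(y) - w1*mean(x) into the second equation gives the standard closed-form solution w1 = (Σx*y - n*mean(x)*mean(y)) / (Σx^2 - n*mean(x)^2). A minimal sketch on illustrative data (not from the original notes), generated from y = 3 + 2*x so the expected answer is w0 = 3, w1 = 2:

```python
import numpy as np

# Illustrative 1-D data generated from y = 3 + 2*x (so w0 = 3, w1 = 2).
x = np.array([0., 1., 2., 3., 4.])
y = 3.0 + 2.0 * x

n = len(x)
# Solve the two normal equations derived above:
#   w1 = (sum(x*y) - n*mean(x)*mean(y)) / (sum(x**2) - n*mean(x)**2)
#   w0 = mean(y) - w1*mean(x)
w1 = (np.sum(x * y) - n * x.mean() * y.mean()) / (np.sum(x**2) - n * x.mean()**2)
w0 = y.mean() - w1 * x.mean()
print(w0, w1)  # → 3.0 2.0
```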
# Pizza example: x1 = size (3, 4, 5, 7), x2 = distance (0, 1, 2, 3), ..., up to feature xm
h(x1, x2) = w1*x1 + w2*x2 + w0 # w1 = 2, w2 = 1, w0 = 3
Samples: X = [[3, 0], [4, 1], [5, 2], [7, 3]]
y = [9, 12, 15, 20]
Sum of squared residuals: cost = Σ (h(x(i)1, x(i)2) - y(i))^2
h(x1, x2, ..., xm) = w0 + w1*x1 + w2*x2 + ... + wm*xm
Extended to m dimensions: cost = Σ (h(x(i)1, x(i)2, ..., x(i)m) - y(i))^2
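With the true coefficients of the pizza example (w1 = 2, w2 = 1, w0 = 3), the sum of squared residuals over the four samples is exactly 0; a quick numeric check:

```python
import numpy as np

X = np.array([[3, 0], [4, 1], [5, 2], [7, 3]])  # columns: x1 (size), x2 (distance)
y = np.array([9, 12, 15, 20])

w1, w2, w0 = 2, 1, 3
h = w1 * X[:, 0] + w2 * X[:, 1] + w0  # h(x1, x2) = w1*x1 + w2*x2 + w0
cost = np.sum((h - y) ** 2)           # sum of squared residuals
print(h, cost)                        # predictions equal y exactly, so cost = 0
```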
Higher order: f(x1, x2) = x1^2 + x2 + 2*x2^2 + x1 + w0
Feature expansion to reduce the order: let x3 = x1^2 and x4 = x2^2, then
f(x1, x2) = x1^2 + x2 + 2*x2^2 + x1 + 10
becomes the linear form f(x1, x2) = x3 + x2 + 2*x4 + x1 + 10
X = [[2, 3], [4, 6], [7, 8]]
y = [10, 20, 40]
XE = [[2, 3, 4, 9, 1], [4, 6, 16, 36, 1], [7, 8, 49, 64, 1]]
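The expansion from X to XE can be done mechanically; a minimal sketch that appends the squared columns and a column of ones, with the column order chosen to match XE above ([x1, x2, x1^2, x2^2, 1]):

```python
import numpy as np

X = np.array([[2, 3], [4, 6], [7, 8]])

# Append x1^2, x2^2, and a constant-1 column: [x1, x2, x1^2, x2^2, 1]
XE = np.hstack((X, X**2, np.ones((X.shape[0], 1))))
print(XE)
```

The resulting matrix can then be fed to the same linear-regression routine, which fits the higher-order terms as if they were ordinary features.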
General gradient-descent solution for an n-dimensional linear-regression problem:
Hypothesis function: f(x1, x2, ..., xn) = w0 + w1*x1 + ... + wn*xn
X = [[x11, x12, ..., x1n],
     [x21, x22, ..., x2n],
     ...
     [xm1, xm2, ..., xmn]]
y = [y1,y2,...,ym]
F(w0, w1, ..., wn) = Σ(i=1..m) (f(x(i)1, x(i)2, ..., x(i)n) - y(i))^2
F(w0, w1, ..., wn) = Σ(i=1..m) (w0 + w1*x(i)1 + ... + wn*x(i)n - y(i))^2
Find the minimum of the cost function (gradient descent):
F'(w1) = 2 * Σ(i=1..m) ((w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * x(i)1)
F'(w2) = 2 * Σ(i=1..m) ((w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * x(i)2)
...
F'(wn) = 2 * Σ(i=1..m) ((w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * x(i)n)
F'(w0) = 2 * Σ(i=1..m) (w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * 1
Candidate vectorized forms (scratch notes): X.T.dot(X.dot(W)) and X.dot(W).T.dot(X) both equal (X^T X) W when W is a 1-D array, but the residual term -y still has to be subtracted, as the expansion below shows.
Expanded form:
F'(w1) = 2 * Σ(i=1..m) ((w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * x(i)1)
       = 2 * ((w0 + w1*x11 + ... + wn*x1n - y1) * x11
            + (w0 + w1*x21 + ... + wn*x2n - y2) * x21
            + ...
            + (w0 + w1*xm1 + ... + wn*xmn - ym) * xm1)
       = 2 * (X.dot(w) - y).dot(X[:, 0])
F'(w2) = 2 * Σ(i=1..m) ((w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * x(i)2) = 2 * (X.dot(w) - y).dot(X[:, 1])
...
F'(wn) = 2 * Σ(i=1..m) ((w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * x(i)n) = 2 * (X.dot(w) - y).dot(X[:, n-1])
F'(w0) = 2 * Σ(i=1..m) (w0 + w1*x(i)1 + ... + wn*x(i)n - y(i)) * 1 = 2 * (X.dot(w) - y).dot(X[:, n])
All components at once: F'(w) = 2 * (X.dot(w) - y).dot(X)
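The vectorized gradient can be checked component by component against the expanded sums; a small sketch on random data (hypothetical, for verification only). Note that the gradientFunction defined earlier divides by the sample count and drops the factor 2, matching the 0.5/m scaling in its costFunction:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
# Feature columns plus a trailing column of ones, as in the notes.
X = np.hstack((rng.normal(size=(m, n)), np.ones((m, 1))))
y = rng.normal(size=m)
w = rng.normal(size=n + 1)

# Vectorized gradient of F(w) = sum((X w - y)^2):
grad_vec = 2 * (X.dot(w) - y).dot(X)

# Per-component sums, exactly as in the expanded derivatives:
grad_sum = np.array([2 * np.sum((X.dot(w) - y) * X[:, j]) for j in range(n + 1)])

print(np.allclose(grad_vec, grad_sum))  # → True
```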