Regression Algorithms: Linear Regression Analysis, Overfitting, Underfitting, and Ridge Regression
Source: CSDN. Original link: https://blog.csdn.net/qq_36853469/article/details/103572440
Generalization in machine learning describes how well the concepts a model has learned carry over to samples it never saw during training. When we discuss how well a model learns and generalizes, we usually use two terms: overfitting and underfitting. Training and testing use two separate datasets, the training set and the test set. When fitting the training data, a model that tries to account for every point will also fit the noise in it; a model that learns the details and noise of the training data so thoroughly that it performs poorly on new data has become overly complex and fits the training set too closely, which is overfitting. Conversely, a model that captures only part of the structure in the data is too simple and performs poorly on both training and prediction; this is underfitting.
An underfitting example:
After training, the machine knows that swans have wings and long beaks, and it simply assumes that anything with these features is a swan. Because it has learned too few swan features, its criterion is too coarse to identify swans accurately.
An overfitting example:
The machine learns the features of swans from a set of pictures. After training, it knows that a swan has wings, a long curved beak, a long and slightly curved neck, and a body shaped like the digit "2" and slightly larger than a duck's; at this point it can basically tell swans apart from other animals. Unfortunately, every available swan picture happened to show a white swan, so the machine also concludes that a swan's feathers must be white, and when it later sees a black swan it decides that it is not a swan.
Overfitting: a hypothesis fits the training data better than other hypotheses, but fails to fit data outside the training set well; the hypothesis is said to be overfitting. (The model is too complex.)
Underfitting: a hypothesis cannot fit the training data well, and also fails to fit data outside the training set; the hypothesis is said to be underfitting. (The model is too simple.)
How training can turn a linear model into a complex one:
In linear regression, too small a feature set tends to cause underfitting, while too large a feature set tends to cause overfitting. Each of these two situations has a remedy, described below; the sketch after this paragraph illustrates both failure modes.
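As a minimal illustration (an addition, not part of the original post), the sketch below fits polynomials of increasing degree to noisy samples of a cosine curve. With degree 1 the model underfits; with degree 15 it chases the noise and the cross-validated error blows up:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
x = np.sort(rng.rand(30))
y = np.cos(1.5 * np.pi * x) + rng.randn(30) * 0.1  # noisy cosine samples

for degree in (1, 4, 15):
    # A linear model on polynomial features: the degree controls complexity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, x.reshape(-1, 1), y,
                             scoring="neg_mean_squared_error", cv=10)
    print(degree, -scores.mean())  # degree 1 underfits, degree 15 overfits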
Underfitting
Cause: too few features of the data were learned.
Solution: increase the number of features.
Overfitting
Cause: too many original features, some of them noisy; the model becomes overly complex because it tries to accommodate every training data point.
Solutions:
Feature selection: remove strongly correlated features (hard to do well)
Cross-validation (so that all of the data is used for training)
Regularization
Effect: makes every element of the weight vector W small, close to 0.
Benefit: the smaller the parameters, the simpler the model, and a simpler model is less likely to overfit. The sketch below shows this shrinkage directly.
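As a concrete check (again an addition to the original article), the following sketch fits plain least squares and ridge regression to the same noisy data and compares the size of the learned weights; for alpha > 0 the L2 penalty yields a weight vector with a smaller norm:

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
# The target depends only on the first feature; the other nine are noise
y = 3 * X[:, 0] + rng.randn(50) * 0.5

lr = LinearRegression().fit(X, y)
rd = Ridge(alpha=10.0).fit(X, y)

# The L2 penalty pulls every weight toward 0, so the ridge norm is smaller
print("OLS weight norm:  ", np.linalg.norm(lr.coef_))
print("Ridge weight norm:", np.linalg.norm(rd.coef_))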
The Ridge class implements the ridge regression model. Its prototype is:
class sklearn.linear_model.Ridge(alpha=1.0)
alpha: controls the strength of the regularization applied to the model; larger values mean stronger shrinkage.
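Since alpha trades goodness of fit against weight shrinkage, in practice it is usually chosen by cross-validation rather than fixed by hand. A minimal sketch using scikit-learn's RidgeCV on synthetic data (an addition; the original article simply sets alpha=1.0):

from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# Synthetic regression data stands in for a real dataset
X, y = make_regression(n_samples=200, n_features=13, noise=10.0, random_state=0)

# RidgeCV evaluates each candidate alpha by (generalized) cross-validation
reg = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0])
reg.fit(X, y)
print("best alpha:", reg.alpha_)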
Applying ridge regression to the earlier Boston housing price prediction:
Python implementation:
# -*- coding: UTF-8 -*-
'''
@Author: Jason
'''
from sklearn.datasets import load_boston  # note: removed in scikit-learn >= 1.2
from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error


def mylinear():
    '''
    Boston housing price prediction
    :return:
    '''
    # Load the data
    lb = load_boston()

    # Split the data into a training set and a test set
    x_train, x_test, y_train, y_test = train_test_split(lb.data, lb.target, test_size=0.25)

    # Feature engineering: standardize the features
    std_x = StandardScaler()
    x_train = std_x.fit_transform(x_train)
    x_test = std_x.transform(x_test)

    # Standardize the target values
    std_y = StandardScaler()
    y_train = std_y.fit_transform(y_train.reshape(-1, 1))
    y_test = std_y.transform(y_test.reshape(-1, 1))

    # Solve with the normal equation
    lr = LinearRegression()
    lr.fit(x_train, y_train)
    print(lr.coef_)
    y_lr_predict = std_y.inverse_transform(lr.predict(x_test))
    print("Normal-equation predicted prices:", y_lr_predict)
    print("Normal-equation MSE:", mean_squared_error(std_y.inverse_transform(y_test), y_lr_predict))

    # Predict with stochastic gradient descent
    sgd = SGDRegressor()
    sgd.fit(x_train, y_train.ravel())  # SGDRegressor expects a 1-d target
    print(sgd.coef_)
    y_sgd_predict = std_y.inverse_transform(sgd.predict(x_test).reshape(-1, 1))
    print("Gradient-descent predicted prices:", y_sgd_predict)
    print("Gradient-descent MSE:", mean_squared_error(std_y.inverse_transform(y_test), y_sgd_predict))

    # Predict with ridge regression
    rd = Ridge(alpha=1.0)
    rd.fit(x_train, y_train)
    print(rd.coef_)
    y_rd_predict = std_y.inverse_transform(rd.predict(x_test))
    print("Ridge predicted prices:", y_rd_predict)
    print("Ridge MSE:", mean_squared_error(std_y.inverse_transform(y_test), y_rd_predict))


if __name__ == "__main__":
    mylinear()
Abridged results (the long prediction arrays are truncated, and scikit-learn's FutureWarning about SGDRegressor's default max_iter and DataConversionWarning about a column-vector y are omitted):

[[-0.11463352  0.10198367  0.02231006  0.08456446 -0.1723333   0.31915769
   0.00298214 -0.31259924  0.27053024 -0.20229054 -0.22157962  0.09269368
  -0.41564546]]
Normal-equation predicted prices: [[13.37072047] [26.93738739] [28.19545157] ... [30.35945986]]
Normal-equation MSE: 23.706588866665594
Gradient-descent predicted prices: [13.9675434 26.25395517 28.82785138 ... 29.76536528]
[[-0.11362487  0.09998654  0.01888465  0.08518396 -0.16902613  0.319941
   0.00234247 -0.30921649  0.26045584 -0.1925429  -0.22055136  0.09263432
  -0.41393404]]
Ridge predicted prices: [[13.37415761] [26.90504784] [28.21563998] ... [30.32535077]]

Note that the original run printed the same mean squared error (23.7066) for all three models because every mean_squared_error call reused y_lr_predict; the corrected script above reports a separate error for each model.