Multi-linear regression
Linear models with one or more independent variables and a single response can be built with the LinearRegression class of the sklearn.linear_model module. The construction and evaluation of such a model are the same as for the simple linear model in the previous section.
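The only difference from the simple case is that the design matrix has several columns; a minimal sketch with synthetic data (made-up coefficients, not the stock data used below):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy data: three made-up predictors and a response built from known coefficients
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 2.0 + X @ np.array([1.0, -0.5, 0.3])
model = LinearRegression().fit(X, y)   # identical API to the one-variable case
print(model.coef_, model.intercept_)   # recovers approximately [1.0, -0.5, 0.3] and 2.0
```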
Example 1)
Let's build a regression model that estimates the closing price of Google (go) using the closing prices of the Dow Jones (dj), NASDAQ (na), S&P 500 (snp), VIX (vix), and the dollar index (dol) as independent variables.
The data for creating the regression model is prepared as follows:

- Use the FinanceDataReader package to fetch the data for the target instruments.
- Combine the closing prices of all series into one object.
- Handle missing values such as inf and NaN by applying the numpy.where() function and the DataFrame.replace() and DataFrame.dropna() methods.
- Align the data so that each day's independent variables are paired with the next day's response (see the toy sketch after this list). The last row of the independent variables is kept as a separate variable (new) for prediction from the fitted model.
- Standardize the independent and response variables.
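The one-day shift in the fourth step can be previewed on a toy array (hypothetical numbers); the same slicing is applied to the real data later in this section:

```python
import numpy as np

# toy matrix: first column a predictor, last column the response (hypothetical values)
vals = np.array([[10., 100.], [11., 101.], [12., 102.], [13., 103.]])
X = vals[:-1, :-1]   # predictors for days 0..n-2
y = vals[1:, -1]     # responses for days 1..n-1, one day after each predictor row
new = vals[-1, :-1]  # the last predictor row, kept to forecast the next, unseen day
print(X.ravel(), y, new)  # [10. 11. 12.] [101. 102. 103.] [13.]
```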
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
import FinanceDataReader as fdr
```
```python
st = pd.Timestamp(2020, 5, 1)
et = pd.Timestamp(2021, 12, 24)
code = ["DJI", "IXIC", "INX", "VIX", "DX", "GOOGL"]
nme = ["dj", "na", "snp", "vix", "dol", "go"]
da = pd.DataFrame()
for i in code:
    da = pd.concat([da, fdr.DataReader(i, st, et)["Close"]], axis=1)
da.columns = nme
da.head(2)
```
| | dj | na | snp | vix | dol | go |
|---|---|---|---|---|---|---|
| 2020-05-01 | 23723.69 | 8605.0 | NaN | 37.19 | 99.100 | 1317.32 |
| 2020-05-04 | 23749.76 | 8710.7 | NaN | 35.97 | 99.567 | 1322.90 |
```python
np.where(da.isna())  # row and column indices of the missing entries
```
(array([ 0, 1, 2, 4, 5, 16, 16, 16, 16, 16, 45, 45, 45, …, 429, 429, 429]), array([2, 2, 2, 2, 2, 0, 1, 2, 3, 5, 0, 1, 5, 2, 2, 2, 2, 2, 2, 2, 0, 1, …, 2, 2, 0, 1, 4, 5]))
The fetched data contains missing values; the two arrays above are their row and column indices. Apply the replace method to fill each missing value with the immediately preceding value (forward fill).
```python
da1 = da.replace(np.nan, method='ffill')
np.where(da1.isna())
```
(array([0, 1, 2]), array([2, 2, 2]))
Missing values are replaced with the previous values. However, as the result above shows, the missing values in the first rows remain because there is no earlier value to carry forward. In this case, use the pandas dropna() method to drop them.
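A toy illustration of this two-step cleaning on a synthetic frame (DataFrame.ffill() is the forward-fill equivalent of the replace call above):

```python
import numpy as np
import pandas as pd

s = pd.DataFrame({'x': [np.nan, 1.0, np.nan, 3.0]})
filled = s.ffill()          # forward fill: row 2 takes 1.0, but row 0 has no predecessor
cleaned = filled.dropna()   # the leading NaN is dropped entirely
print(cleaned)              # rows 1..3 with values 1.0, 1.0, 3.0
```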
```python
da1 = da1.dropna()
np.where(da1.isna())
```
(array([], dtype=int64), array([], dtype=int64))
Separate the data into the independent variables (ind), the response variable (de), and the final row used for prediction (new).
```python
ind = da1.values[:-1, :-1]                  # predictors for days 0..n-2
de = (da1.values[1:, -1]).reshape(-1, 1)    # response shifted one day ahead
new = (da1.values[-1, :-1]).reshape(1, -1)  # last predictor row, for the final forecast
new
```
array([[3.595063e+04, 1.565340e+04, 5.500000e+00, 1.796000e+01, 9.598500e+01]])
```python
ind.shape, de.shape, new.shape
```
((426, 5), (426, 1), (1, 5))
Each variable has a different scale. In such cases, standardization with the mean and standard deviation of each variable is required. Normalization, which rescales data to [0, 1] or [-1, 1], serves the same purpose but is mainly used in machine learning and differs slightly in procedure; regression typically applies standardization rather than normalization. This section applies the StandardScaler() class of sklearn.preprocessing.
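For reference, StandardScaler() implements the usual z = (x − mean)/std transform; a quick check on a toy column (made-up values):

```python
import numpy as np
from sklearn import preprocessing

x = np.array([[1.0], [2.0], [3.0], [4.0]])
scaler = preprocessing.StandardScaler().fit(x)   # stores mean_ and scale_
manual = (x - x.mean(axis=0)) / x.std(axis=0)    # z-score with the population std (ddof=0)
print(np.allclose(scaler.transform(x), manual))  # True: identical results
```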
```python
# for the independent variables
indScaler = preprocessing.StandardScaler().fit(da1.values[:, :-1])
# for the response variable
deScaler = preprocessing.StandardScaler().fit(da1.values[:, -1].reshape(-1, 1))
indNor = indScaler.transform(ind)
indNor[-1, :]
```
array([ 1.26786583, 1.40270917, -0.28910167, -0.87460412, 1.15241682])
```python
indScaler.inverse_transform(indNor[-1, :])
```
array([3.595063e+04, 1.565340e+04, 5.500000e+00, 1.796000e+01, 9.598500e+01])
```python
newNor = indScaler.transform(new)
newNor
```
array([[ 1.26786583, 1.40270917, -0.28910167, -0.87460412, 1.15241682]])
```python
deNor = deScaler.transform(de)
deNor[-1, :]
```
array([1.55000732])
```python
deScaler.inverse_transform(deNor[-1, :])
```
array([2938.33])
Apply the sklearn.linear_model.LinearRegression class to create the linear regression model.
```python
model = LinearRegression().fit(indNor, deNor)
R2 = model.score(indNor, deNor)
print(f'R2: {round(R2, 3)}')
```
R2: 0.972
```python
a = model.coef_
print(f'reg.coeff.: {np.around(a, 3)}')
```
reg.coeff.: [[0.514 0.558 0.053 0.024 0.284]]
```python
a0 = model.intercept_
print(f'bias: {np.around(a0, 3)}')
```
bias: [0.007]
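These two attributes fully determine the model: predictions are X @ coef_.T + intercept_. A sketch of that relationship on synthetic data (hypothetical coefficients, not the fitted values above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([[0.5, 0.6, 0.05, 0.02, 0.3]]).T  # (50, 1) response
m = LinearRegression().fit(X, y)
manual = X @ m.coef_.T + m.intercept_              # coef_ has shape (1, 5) here
print(np.allclose(manual, m.predict(X)))           # True
```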
The results above show a high coefficient of determination, so the model is appropriate. This can also be checked with the F-test between each independent variable and the response variable:
```python
from sklearn.feature_selection import f_regression
Ftest = f_regression(indNor, deNor.ravel())
re = pd.DataFrame([Ftest[0], Ftest[1]], index=['statistics', 'p-value'], columns=nme[:-1]).T
np.around(re, 3)
```
| | statistics | p-value |
|---|---|---|
| dj | 3721.670 | 0.000 |
| na | 3699.516 | 0.000 |
| snp | 61.258 | 0.000 |
| vix | 459.769 | 0.000 |
| dol | 11.403 | 0.001 |
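Note that f_regression tests each feature separately: it computes the Pearson correlation r between the feature and the response and converts it to F = r²(n − 2)/(1 − r²). A quick check of this equivalence on synthetic data:

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(2)
x = rng.normal(size=(100, 1))
y = 0.8 * x.ravel() + rng.normal(size=100)
F, p = f_regression(x, y)
r = np.corrcoef(x.ravel(), y)[0, 1]
F_manual = r**2 * (len(y) - 2) / (1 - r**2)  # F-statistic from the correlation
print(np.isclose(F[0], F_manual))            # True
```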
```python
plt.figure(figsize=(6, 4))
pre = model.predict(indNor)
plt.plot(deNor, color="blue", label="observed")
plt.plot(pre, color="red", label="estimated")
plt.xlabel('Day', size="13")
plt.ylabel('Value(Standardized)', size="13")
plt.legend(loc='best')
plt.show()
```
The estimate for the variable new is obtained as follows:
```python
pre = model.predict(newNor)
print(f'estimates for New(standardized): {np.round(pre, 3)}')
```
estimates for New(standardized): [[1.733]]
```python
preV = deScaler.inverse_transform(pre)  # back-transform to the original price scale
print(f'estimates for New: {np.round(preV, 3)}')
```
estimates for New: [[3036.656]]
If the errors are normally distributed, the estimate above can be bracketed by lower and upper bounds derived from a confidence interval of the error. The normality of the error can be checked with the scipy.stats.probplot() function.
```python
error = de - deScaler.inverse_transform(model.predict(indNor))
errorRe = stats.probplot(error.ravel(), plot=plt)
print("slope: {:.04}, bias: {:.04}, correl.coeff.: {:.04}".format(errorRe[1][0], errorRe[1][1], errorRe[1][2]))
```
slope: 90.16, bias: 2.911e-14, correl.coeff.: 0.9974
In the result above the correlation coefficient is close to 1, meaning the ordered errors closely follow the theoretical normal quantiles; the error can therefore be assumed to be a random variable that follows a normal distribution. Based on this assumption, the confidence interval can be calculated at a confidence level of 0.95. Because the errors are a sample, the standard error is applied.
```python
se = pd.DataFrame(error).sem()
se
```
0 4.359057 dtype: float64
```python
ci = stats.norm.interval(0.95, error.mean(), se)
print(f'Lower : {np.around(ci[0], 3)}, Upper: {np.around(ci[1], 3)}')
```
Lower : [-8.544], Upper: [8.544]
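The half-width of this interval is simply the 0.975 standard normal quantile (about 1.96) times the standard error printed above; a quick check:

```python
from scipy import stats

z = stats.norm.ppf(0.975)      # two-sided 95% quantile of the standard normal
print(round(z * 4.359057, 3))  # 8.544, the half-width of the interval above
```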
Applying this range to the estimate gives:
```python
preCI = [float(preV + i) for i in [ci[0], ci[1]]]
print(f'Lower: {round(preCI[0], 0)}, Upper: {round(preCI[1], 0)}')
```
Lower: 3028.0, Upper: 3045.0