Multi-linear regression

A linear model with two or more independent variables and one response can be built by applying the sklearn.linear_model.LinearRegression class. The construction and evaluation of the model are the same as for the simple linear model in the previous section.
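For example, a minimal sketch on synthetic data (all names hypothetical) shows that the call pattern is identical to the single-variable case; the only difference is that X has one column per independent variable:

import numpy as np
from sklearn.linear_model import LinearRegression
rng=np.random.default_rng(0)
X=rng.normal(size=(100, 3))                                  # three independent variables
y=X @ np.array([1.5, -2.0, 0.7]) + 0.1*rng.normal(size=100)  # known linear relation plus noise
model=LinearRegression().fit(X, y)                           # same call as with one variable
model.coef_.shape                                            # one coefficient per column of X
(3,)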

Example 1)
  Let's build a regression model that estimates the closing price of Google (go) using the closing values of the Dow Jones (dj), NASDAQ (na), S&P 500 (snp), VIX (vix), and the dollar index (dol) as independent variables.

The data for building the regression model are prepared as follows:

  • Use the FinanceDataReader package to retrieve the data for each target ticker.
  • Combine the closing prices of all series into one object.
  • Handle missing values such as inf and NaN in the data.
    Apply the numpy.where() function and the DataFrame.replace() and DataFrame.dropna() methods.
  • Shift the independent variables so that they lead the response by one day.
    The last row of the independent variables is kept in a separate variable (new) for prediction with the fitted model.
  • Standardize the independent variables and the response variable.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
import FinanceDataReader as fdr
st=pd.Timestamp(2020, 5, 1)
et=pd.Timestamp(2021, 12, 24)
code=["DJI", "IXIC", "INX", "VIX", "DX", "GOOGL"]
nme=["dj", "na", "snp", "vix", "dol", "go"]
da=pd.DataFrame()
for i in code:
    # collect the closing prices of every ticker into one frame
    da=pd.concat([da, fdr.DataReader(i, st, et)["Close"]], axis=1)
da.columns=nme
da.head(2)
                           dj      na  snp    vix     dol       go
2020-05-01 00:00:00  23723.69  8605.0  NaN  37.19  99.100  1317.32
2020-05-04 00:00:00  23749.76  8710.7  NaN  35.97  99.567  1322.90
np.where(da.isna())   # row and column indices of the missing values
(array([  0,   1,   2,   4,   5,  16,  16,  16,  16,  16,  45,  45,  45,
         …,
        429, 429, 429]),
 array([2, 2, 2, 2, 2, 0, 1, 2, 3, 5, 0, 1, 5, 2, 2, 2, 2, 2, 2, 2, 0, 1,
        …,
        2, 2, 0, 1, 4, 5]))

The retrieved data contain missing values. Apply the replace() method to fill each missing value with the immediately preceding value (forward fill).

da1=da.replace(np.nan, method='ffill')   # forward fill; equivalent to da.ffill()
np.where(da1.isna())
(array([0, 1, 2]), array([2, 2, 2]))

The missing values are replaced with the previous value. However, as the result above shows, missing values in the first rows remain because there is no preceding value to carry forward. In this case, use the pandas dropna() method to drop them.

da1=da1.dropna()
np.where(da1.isna())
(array([], dtype=int64), array([], dtype=int64))

Separate the data into the independent variables (ind), the response variable (de), and the final row reserved for prediction (new).

ind=da1.values[:-1,:-1]                   # independent variables, one day ahead of the response
de=(da1.values[1:,-1]).reshape(-1,1)      # next-day closing price of Google
new=(da1.values[-1,:-1]).reshape(1,-1)    # last row, reserved for prediction
new
array([[3.595063e+04, 1.565340e+04, 5.500000e+00, 1.796000e+01,
        9.598500e+01]])
ind.shape, de.shape, new.shape
((426, 5), (426, 1), (1, 5))

The variables differ considerably in scale. In such cases, standardization or normalization of each variable is required. Normalization converts the data to [0, 1] or [-1, 1] for a similar purpose as standardization, but it is mainly used in machine learning and differs slightly in procedure; regression usually applies standardization rather than normalization. This section applies the StandardScaler() class of sklearn.preprocessing.
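The difference between the two approaches can be seen in a minimal sketch with illustrative values: StandardScaler centers each column to mean 0 and standard deviation 1, while MinMaxScaler rescales it to [0, 1].

x=np.array([[1.0], [2.0], [3.0], [10.0]])
preprocessing.StandardScaler().fit_transform(x).ravel()   # mean 0, standard deviation 1
array([-0.84852814, -0.56568542, -0.28284271,  1.69705627])
preprocessing.MinMaxScaler().fit_transform(x).ravel()     # rescaled to [0, 1]
array([0.        , 0.11111111, 0.22222222, 1.        ])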

#for independent 
indScaler=preprocessing.StandardScaler().fit(da1.values[:,:-1])
#for response
deScaler=preprocessing.StandardScaler().fit(da1.values[:,-1].reshape(-1,1))
indNor=indScaler.transform(ind)
indNor[-1, :]
array([ 1.26786583,  1.40270917, -0.28910167, -0.87460412,  1.15241682])
indScaler.inverse_transform(indNor[-1,:])
array([3.595063e+04, 1.565340e+04, 5.500000e+00, 1.796000e+01,
       9.598500e+01])
newNor=indScaler.transform(new)
newNor
array([[ 1.26786583,  1.40270917, -0.28910167, -0.87460412,  1.15241682]])
deNor=deScaler.transform(de)
deNor[-1,:]
array([1.55000732])
deScaler.inverse_transform(deNor[-1,:])
array([2938.33])

Apply the sklearn.linear_model.LinearRegression class to create the linear regression model.

model=LinearRegression().fit(indNor, deNor)
R2=model.score(indNor, deNor)
print(f'R2: {round(R2, 3)}')
R2: 0.972
a=model.coef_
print(f'reg.coeff.: {np.around(a, 3)}')
reg.coeff.: [[0.514 0.558 0.053 0.024 0.284]]
a0=model.intercept_ 
print(f'bias: {np.around(a0, 3)}')
bias: [0.007]
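
Internally the prediction is simply the linear combination of the standardized inputs with these coefficients and the bias. As a check, a short sketch reproduces model.predict() from coef_ and intercept_, using the objects defined above:

# manual prediction on the standardized scale: X·aᵀ + a0
manual=indNor @ a.T + a0
np.allclose(manual, model.predict(indNor))
True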

The results above show a high coefficient of determination, so the model fits the data well. This can also be confirmed by an F test between each independent variable and the response variable:

from sklearn.feature_selection import f_regression
Ftest=f_regression(indNor, deNor.ravel())
re=pd.DataFrame([Ftest[0], Ftest[1]], index=['statistics','p-value'], columns=nme[:-1]).T
np.around(re, 3)

            statistics  p-value
dj            3721.670    0.000
na            3699.516    0.000
snp             61.258    0.000
vix            459.769    0.000
dol             11.403    0.001

Each independent variable shows a statistically significant relationship with the response variable.
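As a check, the univariate F statistic returned by f_regression can be reproduced by hand from the correlation coefficient, F = r²/(1−r²)·(n−2); a sketch for the first variable (dj), using the objects defined above:

# reproduce the F statistic for dj from its correlation with the response
r=np.corrcoef(indNor[:,0], deNor.ravel())[0,1]
np.around(r**2/(1-r**2)*(len(deNor)-2), 3)
3721.67

The estimated and observed values from the fitted model can be visualized as shown in Figure 1.
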
plt.figure(figsize=(6, 4))
pre=model.predict(indNor)
plt.plot(deNor, color="blue", label="observed")
plt.plot(pre, color="red", label="estimated")
plt.xlabel('Day', size=13)
plt.ylabel('Value(Standardized)', size=13)
plt.legend(loc='best')
plt.show()
Figure 1. Observations and estimates.

The estimate for the new data point (new) is as follows:

pre=model.predict(newNor)
print(f'estimates for New(standardized): {np.round(pre, 3)}')
estimates for New(standardized): [[1.733]]
preV=deScaler.inverse_transform(pre)   # pre is already 2D, so no extra nesting is needed
print(f'estimates for New: {np.round(preV, 3)}')
estimates for New: [[3036.656]]

If the errors are normally distributed, the estimate above can be given lower and upper bounds based on a confidence interval for the error. The normality of the errors can be checked with the scipy.stats.probplot() function.

error=de-deScaler.inverse_transform(model.predict(indNor))   # residuals on the original price scale
errorRe=stats.probplot(error.ravel(), plot=plt)
print("slope: {:.04}, bias: {:.04}, correl.coeff.: {:.04}".format(errorRe[1][0], errorRe[1][1], errorRe[1][2]))
slope: 90.16, bias: 2.911e-14, correl.coeff.: 0.9974

The correlation coefficient above is close to 1, so the errors can be assumed to follow a normal distribution. Under this assumption, a 95% confidence interval for the error can be calculated. Because the errors are a sample, the standard error is applied.
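
With mean error $\bar{e}$, sample standard deviation $s$, and sample size $n$, the interval computed below is $$\bar{e} \pm z_{0.975}\frac{s}{\sqrt{n}}, \qquad z_{0.975} \approx 1.96$$ which is exactly what stats.norm.interval(0.95, mean, sem) returns.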

se=pd.DataFrame(error).sem()
se
0    4.359057
dtype: float64
ci=stats.norm.interval(0.95, error.mean(), se)
print(f'Lower : {np.around(ci[0], 3)}, Upper: {np.around(ci[1], 3)}')
Lower : [-8.544], Upper: [8.544]

Applying this range to the estimate is as follows:

preCI=[(preV+i).item() for i in [ci[0], ci[1]]]   # lower and upper bounds of the estimate
print(f'Lower: {round(preCI[0], 0)}, Upper: {round(preCI[1], 0)}')
Lower: 3028.0, Upper: 3045.0
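
For reuse, the whole chain (standardize the inputs, predict, back-transform, attach the interval) can be wrapped in a single function. The following is a minimal sketch assuming the indScaler, deScaler, model, and ci objects defined above; predict_close is a hypothetical name:

def predict_close(raw_row):
    # raw_row: closing values of dj, na, snp, vix and dol with shape (1, 5)
    z=indScaler.transform(raw_row)                            # standardize the inputs
    est=deScaler.inverse_transform(model.predict(z)).item()   # back to the price scale
    return est+ci[0].item(), est, est+ci[1].item()
low, est, up=predict_close(new)
print(f'Lower: {low:.0f}, estimate: {est:.0f}, Upper: {up:.0f}')
Lower: 3028, estimate: 3037, Upper: 3045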
