
Multi-linear regression

Linear models with one or more independent variables and a single response can also be built with the sklearn.linear_model class. The construction and evaluation of this model are the same as for the simple linear model in the previous section.
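As a minimal sketch of that construction on synthetic data (the arrays Xs, ys and the coefficients are made up for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression
rng=np.random.default_rng(0)
Xs=rng.normal(size=(100, 3))          #three independent variables
ys=Xs @ np.array([1.5, -2.0, 0.5])+rng.normal(scale=0.1, size=100)
m=LinearRegression().fit(Xs, ys)      #same call as for the simple model
print(np.round(m.coef_, 2))           #one coefficient per column of Xs
print(round(m.score(Xs, ys), 3))      #coefficient of determination R²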

Example 1)
  Let's build a regression model that estimates the closing price of Google (go) using the closing prices of the Dow Jones (dj), NASDAQ (na), S&P 500 (snp), VIX (vix), and the dollar index (dol) as independent variables.

The data for creating a regression model is prepared as follows:

  • Use the FinanceDataReader package to fetch the data for each target ticker.
  • Combine the closing prices of all series into one object.
  • Handle missing values such as inf and NaN in the data.
    Apply the numpy.where() function and the DataFrame.replace() and DataFrame.dropna() methods.
  • Shift the independent variables so that they lead the response by one day (see the sketch after this list).
    The last row of the independent variables is kept as a separate variable (new) for prediction from the fitted model.
  • Standardize the independent and response variables.
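The one-day lead in the fourth step can be illustrated on a toy series (a sketch with made-up numbers; the names ind_toy, de_toy, and new_toy are illustrative):

import numpy as np
#toy series (made-up numbers): x today is paired with y one day later
x=np.array([10, 11, 12, 13])
y=np.array([1, 2, 3, 4])
ind_toy=x[:-1]   #x on days 0..2
de_toy=y[1:]     #y on days 1..3, one day after each row of ind_toy
new_toy=x[-1:]   #the last x, kept to predict the still-unknown next y
print(ind_toy, de_toy, new_toy)
[10 11 12] [2 3 4] [13]

With that structure in mind, the actual data are prepared below.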
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing
import FinanceDataReader as fdr
st=pd.Timestamp(2020, 5, 1)
et=pd.Timestamp(2021, 12, 24)
code=["DJI", "IXIC", "INX", "VIX", "DX", "GOOGL"]
nme=["dj", "na", "snp", "vix", "dol", "go"]
da=pd.DataFrame()
for i in code:   #collect the Close column of each ticker into one frame
    da=pd.concat([da, fdr.DataReader(i, st, et)["Close"]], axis=1)
da.columns=nme
da.head(2)
                  dj      na   snp    vix     dol       go
2020-05-01  23723.69  8605.0   NaN  37.19  99.100  1317.32
2020-05-04  23749.76  8710.7   NaN  35.97  99.567  1322.90
np.where(da.isna()==True)
(array([  0,   1,   2,   4,   5,  16,  16,  16,  16,  16,  45,  45,  45,
         …,
        429, 429, 429]),
 array([2, 2, 2, 2, 2, 0, 1, 2, 3, 5, 0, 1, 5, 2, 2, 2, 2, 2, 2, 2, 0, 1,
        …,
        2, 2, 0, 1, 4, 5]))

The fetched data contains missing values. Apply the replace method to fill each missing value with the immediately preceding value (forward fill).
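For reference, the same forward fill can be written with fillna() or ffill() (equivalent alternatives; recent pandas versions may warn that the method argument of replace() and fillna() is deprecated):

#equivalent ways to carry the previous row's value into each NaN
da_alt=da.fillna(method='ffill')
da_alt2=da.ffill()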

da1=da.replace(np.nan, method='ffill')
np.where(da1.isna())
(array([0, 1, 2]), array([2, 2, 2]))

The missing values are replaced with the previous value. However, as the result above shows, the missing values in the first row remain because there is no preceding value to carry forward. In this case, use the pandas dropna() method to drop them.

da1=da1.dropna()
np.where(da1.isna())
(array([], dtype=int64), array([], dtype=int64))

Separate the data into independent variables (ind), a response variable (de), and the final prediction input (new).

ind=da1.values[:-1, :-1]                 #independent variables, days 0..n-1
de=(da1.values[1:, -1]).reshape(-1,1)    #response, shifted one day ahead
new=(da1.values[-1, :-1]).reshape(1,-1)  #last row, input for prediction
new
array([[3.595063e+04, 1.565340e+04, 5.500000e+00, 1.796000e+01,
        9.598500e+01]])
ind.shape, de.shape, new.shape
((426, 5), (426, 1), (1, 5))

Each variable has a different scale. In such cases, standardization with the mean and standard deviation of each variable, or normalization, is required. Normalization converts the data to [0, 1] or [-1, 1] for the same purpose as standardization, but is mainly used in machine learning and differs slightly in procedure. Regression applies standardization rather than normalization. This course applies the StandardScaler() class of sklearn.preprocessing.
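What StandardScaler() computes can be verified by hand; a small sketch (the array X below is made up):

import numpy as np
from sklearn import preprocessing
X=np.array([[1.0, 100.0],
            [2.0, 200.0],
            [3.0, 300.0]])
sc=preprocessing.StandardScaler().fit(X)
#z=(x-mean)/std per column; StandardScaler uses the population std (ddof=0)
manual=(X-X.mean(axis=0))/X.std(axis=0)
print(np.allclose(sc.transform(X), manual))
True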

#for the independent variables
indScaler=preprocessing.StandardScaler().fit(da1.values[:, :-1])
#for the response
deScaler=preprocessing.StandardScaler().fit(da1.values[:, -1].reshape(-1,1))
indNor=indScaler.transform(ind)
indNor[-1, :]
array([ 1.26786583,  1.40270917, -0.28910167, -0.87460412,  1.15241682])
indScaler.inverse_transform(indNor[-1,:])
array([3.595063e+04, 1.565340e+04, 5.500000e+00, 1.796000e+01,
       9.598500e+01])
newNor=indScaler.transform(new)
newNor
array([[ 1.26786583,  1.40270917, -0.28910167, -0.87460412,  1.15241682]])
deNor=deScaler.transform(de)
deNor[-1,:]
array([1.55000732])
deScaler.inverse_transform(deNor[-1,:])
array([2938.33])

Apply the sklearn.linear_model.LinearRegression class to create the linear regression model.

model=LinearRegression().fit(indNor, deNor)
R2=model.score(indNor, deNor)
print(f'R2: {round(R2, 3)}')
R2: 0.972
a=model.coef_
print(f'reg.coeff.: {np.around(a, 3)}')
reg.coeff.: [[0.514 0.558 0.053 0.024 0.284]]
a0=model.intercept_ 
print(f'bias: {np.around(a0, 3)}')
bias: [0.007]
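For reference, the R² reported by score() is simply 1 − SSE/SST and can be recomputed from the objects above (a sketch reusing model, indNor, and deNor):

res=deNor-model.predict(indNor)        #residuals
SSE=(res**2).sum()                     #residual sum of squares
SST=((deNor-deNor.mean())**2).sum()    #total sum of squares
print(round(1-SSE/SST, 3))             #matches model.score(indNor, deNor)
0.972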

The above results show a high coefficient of determination, so the model fits the data well. This can also be examined with an F test between each independent variable and the response:

from sklearn.feature_selection import f_regression
Ftest=f_regression(indNor, deNor.ravel())
re=pd.DataFrame([Ftest[0], Ftest[1]], index=['statistics','p-value'], columns=nme[:-1]).T
np.around(re, 3)
      statistics  p-value
dj      3721.670    0.000
na      3699.516    0.000
snp       61.258    0.000
vix      459.769    0.000
dol       11.403    0.001

Each independent variable shows a statistically significant relationship with the response. The observed values and the estimates from the fitted model can be visualized as in Figure 1.
plt.figure(figsize=(6, 4))
pre=model.predict(indNor)
plt.plot(deNor, color="blue", label="observed")
plt.plot(pre, color="red", label="estimated")
plt.xlabel('Day', size=13)
plt.ylabel('Value (standardized)', size=13)
plt.legend(loc='best')
plt.show()
Figure 1. Observations and estimates.

The estimate for the variable new is as follows:

pre=model.predict(newNor)
print(f'estimate for new (standardized): {np.round(pre, 3)}')
estimate for new (standardized): [[1.733]]
preV=deScaler.inverse_transform(pre)
print(f'estimate for new: {np.round(preV, 3)}')
estimate for new: [[3036.656]]

If the errors are normally distributed, the estimate above can be given lower and upper bounds from a confidence interval for the error. The normality of the error can be assessed with the scipy.stats.probplot() function.

error=de-deScaler.inverse_transform(model.predict(indNor))
errorRe=stats.probplot(error.ravel(),plot=plt)
print("slope: {:.04}, bias: {:.04}, correl.coeff.: {:.04}".format(errorRe[1][0], errorRe[1][1], errorRe[1][2]))
slope: 90.16, bias: 2.911e-14, correl.coeff.: 0.9974
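The visual check can be supplemented with a formal test, for example the Shapiro–Wilk test from scipy (a sketch; the 0.05 threshold is the usual convention, not a result from the text above):

#Shapiro-Wilk test: H0 is that the errors are normally distributed
stat, p=stats.shapiro(error.ravel())
print(f'W: {stat:.4f}, p-value: {p:.4f}')
#a p-value above 0.05 gives no evidence against normality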

From the results above, the error can be assumed to be a random variable that follows a normal distribution. Under this assumption, the confidence interval can be calculated at the 0.95 confidence level. Because the data are a sample, the standard error is applied.
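The interval computed below is the usual normal-theory interval around the mean error, where $\bar{e}$ is the mean error, SE its standard error, and $z_{0.975} \approx 1.96$: $$\bar{e} \pm z_{0.975}\cdot \text{SE}$$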

se=pd.DataFrame(error).sem()
se
0    4.359057
dtype: float64
ci=stats.norm.interval(0.95, error.mean(), se)
print(f'Lower : {np.around(ci[0], 3)}, Upper: {np.around(ci[1], 3)}')
Lower : [-8.544], Upper: [8.544]

Applying this range to the estimate is as follows:

preCI=[float(preV+i) for i in [ci[0], ci[1]]]
print(f'Lower: {round(preCI[0], 0)}, Upper: {round(preCI[1], 0)}')
Lower: 3028.0, Upper: 3045.0
