
Statistical Index Indicators with pandas_ta

Covariance and correlation coefficient


Covariance

For continuous variables, you cannot construct the kind of cross tabulation that a χ2 test requires. Instead, you can apply correlation analysis, an analysis method that measures the relationship between two or more continuous variables.

Use a scatter plot to visually represent the correlation between two variables. Figure 1(a) shows a clear direct proportion between y1 and y2, while (c) shows an inverse relationship; in (b), no proportional relationship between y1 and y2 can be identified. These relationships can be represented quantitatively by a statistic called the correlation coefficient, which relates the covariance of the two variables to their respective standard deviations.

Figure 1. (a) direct relationship, (b) unrelated, and (c) inverse relationship of the two variables.

In Figure 1(a), consider the deviations $(y_1 - \mu_1)$ and $(y_2 - \mu_2)$ between each point $(y_1, y_2)$ and the means of the two variables. Since $y_2$ increases as $y_1$ increases, the two deviations tend to share the same sign, so their product $(y_1 - \mu_1)(y_2 - \mu_2)$ is positive. Applying the same reasoning to Figure 1(c), the product of the two deviations is negative, while in Figure 1(b) the sign of the product cannot be determined. The deviation product is therefore an indicator of the linear dependence of the two variables $y_1$ and $y_2$, and the expected value of this product, $E[(Y_1-\mu_1)(Y_2-\mu_2)]$, is called the covariance (Equation 1).

$$\begin{align}\tag{1} \text{Cov}(Y_1, Y_2)&=E[(Y_1-\mu_1)(Y_2-\mu_2)]\\ &=E(Y_1Y_2-Y_1\mu_2-\mu_1 Y_2+\mu_1 \mu_2)\\&= E(Y_1Y_2)-E(Y_1)\mu_2-\mu_1E(Y_2)+\mu_1 \mu_2\\&=E(Y_1Y_2)-\mu_1 \mu_2\\\because\; E(Y_1)=\mu_1, & E(Y_2)=\mu_2\end{align}$$
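As a quick numerical check of Equation 1, the covariance of two short samples can be computed directly as the mean of the deviation products and compared with np.cov. This is a minimal sketch; the arrays y1 and y2 are made-up values chosen only for illustration.

```python
import numpy as np

y1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y2 = np.array([2.0, 4.5, 6.0, 8.5, 10.0])

# Equation 1: Cov(Y1, Y2) = E[(Y1 - mu1)(Y2 - mu2)]
cov_manual = np.mean((y1 - y1.mean()) * (y2 - y2.mean()))

# np.cov with ddof=0 uses the same population divisor n
cov_np = np.cov(y1, y2, ddof=0)[0, 1]

print(cov_manual, cov_np)
```

Both values agree, confirming that the covariance is simply the average of the deviation products.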

As the absolute value of the covariance between two variables increases, so does their linear dependence: positive covariance means a direct proportion, and a negative value means an inverse relationship. If the covariance is zero, there is no linear dependence between the two variables. However, using covariance as an absolute measure of dependence is difficult because its value depends on the measurement scale, so it is hard to tell at a glance whether a covariance is large or small. This problem can be solved by standardizing the values and using the Pearson correlation coefficient (ρ), a quantity related to the covariance (Equation 2).

$$\begin{equation}\tag{2} \begin{aligned}&\rho = \frac{\text{Cov}(Y_1, Y_2)}{\sigma_1 \sigma_2}\\ & -1 \le \rho \le 1\\ &\sigma_1, \sigma_2: \text{standard deviation of}\,Y_1, Y_2 \end{aligned} \end{equation}$$
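Equation 2 can be checked the same way: dividing the covariance by the two standard deviations reproduces np.corrcoef. A sketch with made-up sample arrays; note that the same ddof must be used in the numerator and the denominator so that the divisors cancel.

```python
import numpy as np

y1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y2 = np.array([2.0, 4.0, 5.0, 4.0, 9.0])

# population covariance and standard deviations (ddof=0 throughout)
cov = np.mean((y1 - y1.mean()) * (y2 - y2.mean()))
rho = cov / (y1.std(ddof=0) * y2.std(ddof=0))  # Equation 2

print(rho, np.corrcoef(y1, y2)[0, 1])
```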

The sign of the correlation coefficient is the same as the sign of the covariance, as summarized in Table 1:

Table 1. Correlation coefficient

| correlation coefficient | meaning |
|---|---|
| ρ = 1 | perfect direct relationship |
| 0 < ρ < 1 | direct relationship |
| ρ = 0 | no correlation |
| -1 < ρ < 0 | inverse relationship |
| ρ = -1 | perfect inverse relationship |
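The boundary rows of Table 1 can be illustrated with synthetic data (hypothetical arrays chosen only to show the sign of ρ): an exact linear relationship gives ρ = 1, and an exact inverse linear relationship gives ρ = -1.

```python
import numpy as np

x = np.arange(10.0)

r_direct = np.corrcoef(x, 2 * x + 1)[0, 1]    # perfect direct relationship
r_inverse = np.corrcoef(x, -3 * x + 5)[0, 1]  # perfect inverse relationship

print(r_direct, r_inverse)
```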

As shown in Table 1, a lack of correlation between the two variables corresponds to covariance = 0. If the two variables are independent of each other, their covariance is 0 (though the converse does not hold in general). That is, if the two variables are independent, the following holds:

$$E(Y_1Y_2)=E(Y_1)E(Y_2)$$

This product equals μ1μ2, so the covariance, which is the difference E(Y1Y2) − μ1μ2, is zero.
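The identity above can be demonstrated numerically: for two series drawn independently, E(Y1Y2) is close to E(Y1)E(Y2), so the sample covariance is close to zero. A sketch using randomly generated data (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
y1 = rng.normal(size=100_000)
y2 = rng.normal(size=100_000)  # generated independently of y1

lhs = np.mean(y1 * y2)       # estimate of E(Y1 Y2)
rhs = y1.mean() * y2.mean()  # estimate of E(Y1) E(Y2)
cov = lhs - rhs              # sample covariance, per Equation 1

print(cov)  # close to 0
```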

Example 1)
  Determine the covariance and correlation coefficient from the daily rates of change between the opening and closing prices of Apple and Google stock.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import FinanceDataReader as fdr
st=pd.Timestamp(2020,1,1)
et=pd.Timestamp(2021, 11, 29)
apO=fdr.DataReader('AAPL',st, et)
goO=fdr.DataReader('GOOGL',st, et)
ap=(apO["Close"]-apO["Open"])/apO["Open"]*100
go=(goO["Close"]-goO["Open"])/goO["Open"]*100

The pd.concat() function was applied to combine the two series into a single object.

y=pd.concat([ap, go], axis=1)
y.columns=[i+'Change' for i in ['ap', 'go']]
y.head(3)
apChange goChange
Date
2020-01-02 1.390764 1.505488
2020-01-03 0.094225 1.001484
2020-01-06 2.042206 3.418171

Figure 2 shows the scatter plot of the two series.

plt.scatter(y.values[:,0], y.values[:,1])
plt.xlabel("ap(%)", size=13, weight='bold')
plt.ylabel("go(%)", size=13, weight='bold')
plt.show()
Figure 2. Distribution of the two stock data (direct proportion).

Calculate the covariance from the difference between each value and the mean of its column. In the following code, the product across the columns of the object is computed with object.product(axis), which returns the product of the values along the specified axis.

mean=y.mean(axis=0)
mean
apChange    0.108257
    goChange    0.098101
    dtype: float64
cov=(y-mean).product(axis=1).mean()
print(f'covariance: {np.round(cov, 4)}')
covariance: 1.617

The covariance above was calculated by multiplying each value in the first column of the object by the corresponding value in the second column. The matrix product performs this calculation more efficiently.

$$\begin{align} &\begin{bmatrix}x_{1} & y_{1}\\x_2&y_2\\x_3&y_3 \end{bmatrix}\rightarrow\begin{bmatrix}x_{1} \cdot y_{1}\\x_2 \cdot y_2\\x_3 \cdot y_3 \end{bmatrix}\\ \\ &\begin{bmatrix}x_{1} & x_2& x_3 \\y_{1} & y_2&y_3 \end{bmatrix} \begin{bmatrix}x_{1} & y_{1}\\x_2&y_2\\x_3&y_3 \end{bmatrix} \\ &\rightarrow \begin{bmatrix} x_1x_1+x_2x_2+x_3x_3& x_1y_1+x_2y_2+x_3y_3\\ x_1y_1+x_2y_2+x_3y_3 & y_1y_1+y_2y_2+y_3y_3\end{bmatrix} \end{align}$$

The result of this operation is shown in the following expression, and this matrix is called the **covariance matrix**. As the result shows, the diagonal elements of the matrix are the variances of the individual columns (variables), and the off-diagonal elements represent the covariance between the two variables.

$$\begin{bmatrix} \text{Variance of row 1} & \text{Covariance of row 1 and 2} \\ \text{Covariance of row 1 and 2} & \text{Variance of row 2} \end{bmatrix}$$

The covariance matrix in this example has dimension 2 × 2, so the matrices above must be shaped accordingly. In other words, since the object y has dimension 482 × 2, its transpose must be applied as shown in Equation 3 so that the matrix product yields a 2 × 2 result.

$$\begin{align}\tag{3} &\text{cov Matrix} = \frac{Y^T \cdot Y}{n}\\&Y^T: \text{transposed matrix of Y}\\& n: \text{sample size} \end{align}$$
y1=y-y.mean()
print(f'covariance Matrix: {np.around(np.dot(y1.T,y1)/len(y1), 3)}')
covariance Matrix: [[2.85  1.617]
     [1.617 2.013]]
y.cov(ddof=0)
apChange goChange
apChange 2.849691 1.616999
goChange 1.616999 2.013030

The covariance matrix can also be calculated by applying the pandas object.cov(ddof) method. The matrix-product calculation above is for a population, that is, ddof=0. However, the data in this example are a sample, so the degrees of freedom should be taken into account, that is, ddof=1 (the pandas default).

covMat=y.cov()
covMat
apChange goChange
apChange 2.855615 1.620361
goChange 1.620361 2.017215

The correlation coefficient is the covariance divided by the product of the standard deviations.

#standard deviation
ysd=y.std(axis=0, ddof=1)
ysd=ysd.values.reshape(2,1)
np.around(ysd, 4)
array([[1.6899],
           [1.4203]])
#Multiplication matrix of each standard deviation
ysdMat=np.dot(ysd, ysd.T)
np.around(ysdMat, 4)
array([[2.8556, 2.4001],
           [2.4001, 2.0172]])
creCoef=covMat/ysdMat
creCoef
apChange goChange
apChange 1.000000 0.675127
goChange 0.675127 1.000000

Apply the pandas ``object.corr(method='pearson')`` function to return the results directly from the raw data.

y.corr()
apChange goChange
apChange 1.000000 0.675127
goChange 0.675127 1.000000

The example above uses two series; the covariances of these data are as follows:

$$\begin{align} \text{Cov}(x,x)&=E[(X-E(X))(X-E(X))]\\&=\frac{\sum^n_{i=1}(x_i - \mu_x)^2}{n-1}\\&= \sigma_x^2\\ \text{Cov}(y,y)&=E[(Y-E(Y))(Y-E(Y))]\\&=\frac{\sum^n_{i=1}(y_i - \mu_y)^2}{n-1}\\&= \sigma_y^2\\ \text{Cov}(x,y)&=E[(X-E(X))(Y-E(Y))]\\&=\frac{\sum^n_{i=1}(x_i-\mu_x)(y_i - \mu_y)}{n-1}\\&= \sigma_{xy} \end{align}$$
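These three identities can be verified on any sample: the diagonal of the sample covariance matrix equals the sample variances, and the off-diagonal entry is Cov(x, y), which is symmetric. A sketch with randomly generated data (the names and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(size=500)  # correlated with x by construction

c = np.cov(x, y, ddof=1)  # 2x2 sample covariance matrix (divisor n-1)

# Cov(x,x) = var(x), Cov(y,y) = var(y), and Cov(x,y) = Cov(y,x)
print(c[0, 0] - x.var(ddof=1), c[1, 1] - y.var(ddof=1), c[0, 1] - c[1, 0])
```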

The above expressions can be visualized as shown in Figure 3, scatter plots for ap and go.

plt.figure(figsize=(10, 7))
plt.subplots_adjust(wspace=0.4)
ax1=plt.subplot(2,3,1)
ax1.scatter(ap, ap)
ax1.set_xlabel('ap(%)', size=13, weight='bold')
ax1.set_ylabel('ap(%)', size=13, weight='bold')
ax1.text(-5, 5, '(a)', size=13, weight='bold')
ax2=plt.subplot(2,3,2)
ax2.scatter(go, go)
ax2.set_xlabel('go(%)', size=13, weight='bold')
ax2.set_ylabel('go(%)', size=13, weight='bold')
ax2.text(-5, 4, '(b)', size=13, weight='bold')
ax3=plt.subplot(2,3,3)
ax3.scatter(ap, go)
ax3.set_xlabel('ap(%)', size=13, weight='bold')
ax3.set_ylabel('go(%)', size=13, weight='bold')
ax3.text(-6, 4, '(c)', size=13, weight='bold')
plt.show()
Figure 3. Covariance of the same data ((a) and (b)) and covariance of two different series (c).

Correlation analysis

Correlation analysis is the analysis of the relationship between two or more data sets, and the parameter of the analysis is the correlation coefficient. The null hypothesis of the analysis is ρ = 0; in other words, it tests that there is no correlation between the data being compared.

H0: ρ =0, H1: ρ ≠ 0

Because the distribution of the correlation coefficient (r) has mean 0 and range [-1, 1], the variance of the distribution can be expressed as 1 − r². This random variable follows a t distribution with standard error $\displaystyle \sqrt{\frac{1-r^2}{n-2}}$ and n − 2 degrees of freedom.

The test statistic obtained by standardizing the variable according to the characteristics of this distribution is shown in Equation 4.

$$\begin{align}\tag{4} t&= \frac{r-\rho_0}{\sqrt{\frac{1-r^2}{n-2}}}\\&=\frac{r}{\sqrt{\frac{1-r^2}{n-2}}} \end{align}$$
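The statistic above can be wrapped in a small helper function (the function name and the sample values r = 0.5, n = 30 are hypothetical, chosen only for illustration):

```python
import numpy as np
from scipy import stats

def corr_t_test(r, n):
    """t statistic and two-sided p-value for H0: rho = 0, with df = n - 2."""
    df = n - 2
    t = r * np.sqrt(df / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

t, p = corr_t_test(0.5, 30)  # hypothetical r and n
print(round(t, 3), round(p, 4))
```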

Example 2)
  Perform a correlation analysis between ap (y1) and go (y2) from the example above.

The correlation coefficient of the two series is r ≈ 0.68. It can be calculated using the np.corrcoef() function. This function returns the same result as the pandas object.corr() applied above, but the data must be passed to the function as separate arguments.

r=np.corrcoef(y.values[:,0], y.values[:,1])
r
array([[1., 0.67512745],
           [0.67512745, 1.]])
print(f'correlation coeff. Mat.:  {np.around(r,3) }')
 correlation coeff. Mat.:  [[1.    0.675]
     [0.675 1.   ]]
r12=r[0,1]
print(f'corr.coeff:{np.round(r12, 3)}')
corr.coeff:0.675

Calculate the test statistic and determine the confidence interval at significance level α = 0.05.

df=y.shape[0]-2
print(f'df: {df}')
df: 480
t=r12*np.sqrt(df/(1-r12**2))
print(f'statistics t: {round(t, 3)}')
statistics t: 20.051
from scipy import stats
ci=stats.t.interval(0.95, df)
print(f"Lower : {round(ci[0], 4)}, Upper : {round(ci[1], 4)}")
Lower : -1.9649, Upper : 1.9649
pVal=2*stats.t.sf(t, df)
print(f'p-value: {round(pVal, 4)}')
p-value: 0.0

The test statistic lies outside the confidence interval, and the p-value of 0 is far below the significance level. Therefore, the null hypothesis can be rejected; in other words, you can conclude that the two groups are correlated. This analysis can also be performed with scipy.stats.pearsonr(x, y).

corcoef, pval=stats.pearsonr(y.values[:,0], y.values[:,1])
print(f'corr.coef.: {round(corcoef, 3)},  p-value: {round(pval, 3)}')
corr.coef.: 0.675,  p-value: 0.0
