Variance

As introduced in descriptive statistics, **variance** measures data variability and is calculated as in Equation 1; the square root of the variance is the standard deviation (σ).

$$\begin{equation}\tag{1} \begin{aligned}\sigma^2&=E[(X-\mu)^2]\\&=(x_1-\mu)^2P(X=x_1)+ \cdots+(x_k-\mu)^2P(X=x_k)\\&=\sum^k_{i=1} (x_i-\mu)^2P(X=x_i) \end{aligned} \end{equation}$$

Variance, a measure of the spread of a data distribution, is the weighted average of the squared deviations between each data point and the mean. Equation 1 simplifies to:

$$\begin{aligned}&\begin{aligned}\sigma^2&=\sum (x-\mu)^2P(X=x)\\&=\sum(x^2-2x\mu+\mu^2)f(x)\\&=\sum x^2f(x) -2\mu \sum xf(x)+ \mu^2\sum f(x)\\&=\sum x^2f(x)-2\mu^2+\mu^2\\&=\sum x^2f(x)-\mu^2\\&=E(X^2)-(E(X))^2 \end{aligned}\\ & \because \; \sum xf(x)=\mu, \quad \sum f(x)=1 \end{aligned}$$

As the expression above shows, the variance is built from the expected value of the square of the variable and the square of the mean. The expected value of the k-th power of a variable is called the k-th moment: E(X) = μ is the first moment and E(X²) is the second moment. The variance is therefore the second moment minus the square of the first moment. Since moments are expected values, the linear combination in Equation 2 holds.
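The moment identity Var(X) = E(X²) − (E(X))² can be checked numerically. The sketch below uses an arbitrary, illustrative discrete distribution (the values and probabilities are not from the examples in this post) and compares the definition-based variance with the moment form.

```python
import numpy as np

# an illustrative discrete distribution (values and probabilities are assumptions)
x = np.array([1, 3, 7, 10])
p = np.array([0.2, 0.3, 0.4, 0.1])

mu = np.sum(x * p)                     # first moment E(X)
m2 = np.sum(x**2 * p)                  # second moment E(X^2)

var_def    = np.sum((x - mu)**2 * p)   # weighted average of squared deviations
var_moment = m2 - mu**2                # E(X^2) - (E(X))^2

print(np.isclose(var_def, var_moment))  # True
```

Both routes give the same value (8.49 for this distribution), which is exactly what the derivation above guarantees.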

$$\begin{equation}\tag{2} \begin{aligned} Var(aX+b)&=\sigma^2_{aX+b}\\&=E[((aX+b)-\mu_{aX+b})^2]\\ &=E[((aX+b)-E(aX+b))^2]\\&=E[((aX+b)-(aE(X)+b))^2]\\&=E[(a(X-\mu))^2]\\&=a^2E[(X-\mu)^2]\\&=a^2\sigma^2_X \end{aligned} \end{equation}$$

As Equation 2 shows, a constant added to a variable does not affect its variance, while multiplying by a constant a scales the variance by a².
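Equation 2 can be verified directly: shift and scale a discrete variable, recompute the variance with the moment formula, and compare with a²·Var(X). The sketch below reuses the distribution f(x) = x/8 from Example 1; the scale a = 3 and shift b = 10 are arbitrary choices for illustration.

```python
import numpy as np

x = np.array([1, 2, 5])        # values of X (Example 1 distribution)
p = np.array([1/8, 2/8, 5/8])  # P(X = x)

a, b = 3, 10                   # arbitrary scale and shift
y = a * x + b                  # transformed variable aX + b

var_x = np.sum(x**2 * p) - np.sum(x * p)**2
var_y = np.sum(y**2 * p) - np.sum(y * p)**2

print(np.isclose(var_y, a**2 * var_x))  # True: the shift b drops out
```

Here Var(X) = 2.6875 and Var(3X + 10) = 9 × 2.6875 = 24.1875, as Equation 2 predicts.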

Example 1)
  The probability mass function of the random variable X is: $$f(x)=\frac{x}{8}, \quad x=1,2,5$$. Determine E(X) and Var(X).

import numpy as np
import pandas as pd
from sympy import *

x = np.array([1, 2, 5])
f = x/8
f      # array([0.125, 0.25 , 0.625])
Ex = np.sum(x*f)
Ex     # 3.75
Var = np.sum(x**2*f) - Ex**2
Var    # 2.6875

Example 2)
 The probability density function of a continuous random variable X is: $$f(x)=\frac{x+1}{8}, \quad 2 < x < 4$$ Determine E(X) and Var(X).

The mean and variance are calculated by integrating the PDF. The integration applies the integrate() function of the sympy module.

x = symbols("x")
f = (x + 1)/8
Ex = integrate(x*f, (x, 2, 4))
Ex     # 37/12
Var = integrate(x**2*f, (x, 2, 4)) - Ex**2
Var    # 47/144

Example 3)
 Calculate the variance of a random variable X with the probability density function

$$f(x)=\begin{cases} 1-|x|& \quad |x|<1\\0& \quad \text{otherwise} \end{cases}$$
x = symbols("x")
f = 1 - abs(x)
Ex = integrate(x*f, (x, -1, 1))
Ex     # 0
Var = integrate(x**2*f, (x, -1, 1)) - Ex**2
Var    # 1/6

Example 4)
  Two games are played by rolling a single die and scoring points according to the face shown.

Point        1    2    3    4    5    6
Game 1 (X)   1    2    3    4    5    6
Game 2 (Y)   3    0    6    0    0   12
P(X or Y)   1/6  1/6  1/6  1/6  1/6  1/6

Determine the expected value and variance for each game.

game = pd.DataFrame([np.arange(1, 7), np.arange(1, 7), [3, 0, 6, 0, 0, 12],
                     np.repeat(Rational(1, 6), 6)],
                    index=["Dice Eye", "game1(X)", "game2(Y)", "P(X or Y)"])
X = game.iloc[1, :]
Y = game.iloc[2, :]
# game 1
EX = (X*game.iloc[3, :]).sum()
EX      # 7/2
VarX = (X**2*game.iloc[3, :]).sum() - EX**2
VarX    # 35/12
# game 2
EY = (Y*game.iloc[3, :]).sum()
EY      # 7/2
VarY = (Y**2*game.iloc[3, :]).sum() - EY**2
VarY    # 77/4

Combine the two dice games in this example to create a new random variable Z and calculate the mean and variance of the probability distribution.

Z = X + Y
Z
# 0     4
# 1     2
# 2     9
# 3     4
# 4     5
# 5    18
# dtype: object
EZ = np.sum(Z*game.iloc[3, :])
EZ         # 7
EX + EY    # 7

As the result above shows, the expected value of the combined variable equals the sum of the individual expected values. However, the variance of the combined variable does not equal the sum of the individual variances. Because every outcome here has the same probability 1/6, Var(Z) can also be computed directly as the population variance of the six values with ``np.var()``.

VarZ = np.sum(Z**2*game.iloc[3, :]) - EZ**2
VarZ           # 86/3
np.var(Z)      # 28.666666666666668
VarX + VarY    # 133/6
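A caution on conventions: ``np.var`` worked above only because every outcome has the same probability, so the population variance of the six values coincides with Var(Z). pandas' ``Series.var()``, by contrast, defaults to the *sample* variance (ddof=1), which divides by n−1 instead of n. A minimal sketch of the difference, using the values of Z from the example:

```python
import pandas as pd

z = pd.Series([4, 2, 9, 4, 5, 18])  # values of Z, each with probability 1/6

pop  = z.var(ddof=0)  # population variance: 172/6 = 86/3 ≈ 28.667
samp = z.var()        # sample variance (default ddof=1): 172/5 = 34.4

print(pop, samp)
```

Only the ddof=0 form matches the probability-weighted calculation of Var(Z) above.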

As the results above show, the variance of the combined variable does not match the sum of the individual variances. The difference is explained by deriving the variance of the combined variable as in Equation 3.

$$\begin{equation}\tag{3} \begin{aligned} Var(aX+bY)&=E[((aX+bY)-(a\mu_X+b\mu_Y))^2]\\&=E[(a(X-\mu_X)+b(Y-\mu_Y))^2]\\ &=E[a^2(X-\mu_X)^2+2ab(X-\mu_X)(Y-\mu_Y)+b^2(Y-\mu_Y)^2] \\ &=a^2E[(X-\mu_X)^2]+2abE[(X-\mu_X)(Y-\mu_Y)]+b^2E[(Y-\mu_Y)^2] \\ &=a^2Var(X)+b^2Var(Y)\\ & \because \; E[(X-\mu_X)(Y-\mu_Y)]=0 \;\text{ when } X \text{ and } Y \text{ are independent} \end{aligned} \end{equation}$$

In Equation 3, E[(X−μ_X)(Y−μ_Y)] is the covariance of the two variables, which measures their interaction. If the two variables are independent, the covariance is zero and the cross term vanishes. Therefore, the difference between Var(X)+Var(Y) and the variance of the combined variable Z in the example tells us that the two variables are not independent.
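By Equation 3 with a = b = 1, the gap Var(Z) − (Var(X) + Var(Y)) should equal exactly 2·Cov(X, Y). This can be checked with exact arithmetic via sympy's Rational, reusing the game values from the example:

```python
from sympy import Rational

p = Rational(1, 6)                       # each die face has probability 1/6
X = [1, 2, 3, 4, 5, 6]                   # game 1 scores
Y = [3, 0, 6, 0, 0, 12]                  # game 2 scores

EX = sum(x * p for x in X)               # 7/2
EY = sum(y * p for y in Y)               # 7/2
cov = sum((x - EX) * (y - EY) * p for x, y in zip(X, Y))

VarX = sum((x - EX)**2 * p for x in X)   # 35/12
VarY = sum((y - EY)**2 * p for y in Y)   # 77/4
VarZ = VarX + VarY + 2 * cov             # should reproduce 86/3

print(cov, VarZ)                         # 13/4 86/3
```

The covariance 13/4 is nonzero, confirming that X and Y are not independent, and adding 2·(13/4) to 133/6 recovers Var(Z) = 86/3 exactly.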
