
Probability Inequalities & Moment Generating Functions

Contents

  1. Probability Inequalities
    1. Markov's inequality
    2. Chebyshev's inequality
  2. Moment generating function
    1. Combination of random variables

In statistics it is often necessary to know an interval that contains the value of interest in a distribution with a stated probability, for example when establishing a confidence interval that indicates the degree of confidence in the results of an analysis. The Markov and Chebyshev inequalities are the mathematical basis for establishing such probability intervals or confidence intervals.

Probability Inequalities

Markov's inequality

If X is a random variable and g(x) is a nonnegative real-valued function, then Equation 1 holds for any positive real c.

$$P[g(X) \ge c] \le \frac{E[g(X)]}{c}\tag{1}$$

If we define the event $A=\{x \mid g(x) \ge c\}$, the above expression is proved as follows.

$$\begin{aligned}E[g(x)]&=\int g(x)f(x)\,dx=\int_{A} g(x)f(x)\,dx+\int_{A^{c}} g(x)f(x)\,dx\\&\ge \int_{A} g(x)f(x)\,dx \ge \int_{A} c\,f(x)\,dx=c\,P[x \in A]=c\,P[g(x) \ge c]\end{aligned}$$

Example 1)
The random variable X follows a binomial distribution with mean np and variance np(1-p). Apply the Markov inequality to determine the upper bound of the probability that satisfies the expression below.

If $p=\frac{1}{2}$ and $\alpha=\frac{3}{4}$,
$$P(X \ge \alpha n) \le \frac{E(X)}{\alpha n}=\frac{pn}{\alpha n}=\frac{p}{\alpha}=\frac{2}{3}$$
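As a quick numerical check (added here; the sample size n = 20 is an arbitrary choice), the exact binomial tail can be compared with the Markov bound using scipy.stats:

from scipy import stats
# exact P(X >= 0.75n) for X ~ B(n, 1/2); the Markov bound is p/α = 2/3
n=20
round(stats.binom.sf(0.75*n-1, n, 0.5), 4)
0.0207

The exact tail is far below the bound, as expected: Markov's inequality uses only the mean, so it is loose.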

Chebyshev's inequality

From the random variable X, define another non-negative random variable $Y=(X-E(X))^2$. Markov's inequality can be applied to this variable.

$$\begin{aligned}&P(Y \ge b^2) \le \frac{E(Y)}{b^2}\\&E(Y)=E\left[(X-E(X))^2\right]=\operatorname{Var}(X)\\&P(Y \ge b^2)=P\left((X-E(X))^2 \ge b^2\right)=P(|X-E(X)| \ge b)\\&\therefore\ P(|X-E(X)| \ge b) \le \frac{\operatorname{Var}(X)}{b^2},\quad b:\ \text{positive real number}\end{aligned}$$

The above result is called Chebyshev's inequality and is generalized as Equation 2.

$$P(|X-E(X)| \ge b) \le \frac{\operatorname{Var}(X)}{b^2}\tag{2}$$

This inequality means that the probability that the random variable X deviates from its mean E(X) by b or more is bounded by $\operatorname{Var}(X)/b^2$. Even when the probability distribution of X is unknown, the spread around the mean can therefore be estimated from Equation 2.
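For instance (a sketch added here using scipy.stats.norm), for a standard normal variable the exact probability of deviating from the mean by at least two standard deviations is about 0.0455, well below the Chebyshev bound $1/2^2=1/4$:

from scipy import stats
# exact P(|X| >= 2) for X ~ N(0, 1) versus the Chebyshev bound 1/4
round(2*stats.norm.sf(2), 4)
0.0455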

Example 2)
The random variable X follows a binomial distribution with mean np and variance np(1-p). Apply Chebyshev's inequality to determine an upper bound on $P(X \ge \alpha n)$ when $p=\frac{1}{2}$ and $\alpha=\frac{3}{4}$.

$$P(X \ge \alpha n)=P(X-np \ge \alpha n-np) \le P(|X-np| \ge \alpha n-np) \le \frac{\operatorname{Var}(X)}{(\alpha n-np)^2}=\frac{p(1-p)}{n(\alpha-p)^2}=\frac{4}{n}$$
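For the same n = 20 used in the Markov check above (an arbitrary choice added here), the Chebyshev bound 4/n = 0.2 is much tighter than the Markov bound 2/3, though still far above the exact tail:

from scipy import stats
# exact tail versus the Chebyshev bound 4/n, n = 20
n=20
round(stats.binom.sf(0.75*n-1, n, 0.5), 4), 4/n
(0.0207, 0.2)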

Example 3)
If the PDF of the random variable X is $f(x)=e^{-x}$, $x>0$, what are E(X) and Var(X)?
Also, apply the Chebyshev inequality to bound the probability that the variable deviates from its mean by at least twice the standard deviation.

  1. The mean and variance are:
import numpy as np
import pandas as pd
from sympy import *
x=symbols("x")
f=exp(-x)
E=integrate(x*f, (x, 0, oo))   # E(X)
E
1
var=integrate(x**2*f, (x, 0, oo))-E**2   # Var(X) = E(X²) − E(X)²
var
1
  2. Since E(X) = 1 and the standard deviation is 1, determine the probability of the event $|X-1| \ge 2$ (note that $P(X \le -1)=0$ because $x>0$):
$$P(|X-1| \ge 2)=P(X \ge 3)+P(X \le -1)=P(X \ge 3)=1-P(X<3)$$
N(1-integrate(f, (x, 0, 3)), 3)
0.0498

Applying Chebyshev's inequality to this case with b = 2, the upper bound is:

$$P(|X-1| \ge 2) \le \frac{\operatorname{Var}(X)}{2^2}=\frac{1}{4}$$

Therefore, the probability of the specified event is less than the upper bound of the Chebyshev inequality.

Moment generating function

A moment is a characteristic quantity from which other statistics are computed, and the expected value of the function that generates the moments is called the moment generating function. The statistics that characterize a probability distribution can be calculated from the moments defined as follows.

For a random variable X, the nth moment about the origin is defined as $E[X^n]$, and the nth central moment about the mean is defined as $E[(X-E(X))^n]$.

The first moment is the expected value E[X], and the second central moment is the variance of X, $E[(X-\mu)^2]$.

If $E(e^{tX})$ exists for all t in an interval (-h, h) with h > 0, the moment generating function (MGF) of the random variable X is defined as Equation 3.
$$M_X(t)=E(e^{tX})=\begin{cases}\displaystyle\sum_{x} e^{tx}f(x)&\text{discrete variable}\\[4pt]\displaystyle\int_{-\infty}^{\infty} e^{tx}f(x)\,dx&\text{continuous variable}\end{cases}\tag{3}$$

Example 4)
If the PMF of the discrete random variable X is as follows, the moment generating function (MGF) is calculated.

$$P_X(x)=\begin{cases}\frac{1}{3}&x=1\\ \frac{2}{3}&x=2\end{cases}\qquad M_X(t)=E(e^{tX})=\sum_{x} e^{tx}f(x)=\frac{1}{3}e^{t}+\frac{2}{3}e^{2t}$$

The MGF encodes all moments of a random variable, so the distribution can be determined by this function: if two random variables have the same MGF, they have the same distribution.

The function $e^{tX}$ in the MGF defined above can be expanded as the Taylor series in Equation 4.

Taylor series
$$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots=\sum_{k=0}^{\infty}\frac{x^k}{k!}\tag{4}$$

A Taylor series can be computed with the series() method or the fps() function of the sympy module. The O() in the following output (the letter O) is Big O notation, denoting the omitted terms of order x⁶ and higher. Use removeO() to delete this part.

exp(x).series(x)
$1+x+\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{24}+\frac{x^5}{120}+O(x^6)$
x=symbols("x")
fps(exp(x))
$\left(\sum_{k=1}^{\infty}\begin{cases}\frac{x^k}{k!}&\text{for } k \bmod 1 = 0\\0&\text{otherwise}\end{cases}\right)+1$

Apply the above Taylor equation to the expansion of the moment generating function (MGF).

$$M_X(t)=E(e^{tX})=E\left(1+tX+\frac{t^2X^2}{2!}+\frac{t^3X^3}{3!}+\cdots\right)=1+tE(X)+\frac{t^2}{2!}E(X^2)+\frac{t^3}{3!}E(X^3)+\cdots$$

Differentiate the expanded MGF once with respect to t and substitute t = 0.

$$\frac{d\,M_X(t)}{dt}=E(X)+tE(X^2)+\frac{t^2}{2!}E(X^3)+\cdots \qquad \left.\frac{d\,M_X(t)}{dt}\right|_{t=0}=E(X)$$

The result of first differentiating MGF and substituting 0 is the first moment of the random variable, that is, the expected value. Try the second derivative in the same way.

$$\frac{d^{2}M_X(t)}{dt^{2}}=E(X^2)+tE(X^3)+\cdots \qquad \left.\frac{d^{2}M_X(t)}{dt^{2}}\right|_{t=0}=E(X^2)$$

The result is the second moment. Generalizing this process gives Equation 5: the nth derivative of the moment generating function, evaluated at t = 0, produces the nth moment. Therefore, statistics such as the expected value and variance can be calculated from the MGF.

$$\left.\frac{d^{n}M_X(t)}{dt^{n}}\right|_{t=0}=E(X^n)\tag{5}$$
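As a quick check (added here, using the MGF from Example 4), differentiating at t = 0 reproduces the first two moments $E(X)=\frac{5}{3}$ and $E(X^2)=3$:

from sympy import *
t=symbols("t")
M=Rational(1, 3)*exp(t)+Rational(2, 3)*exp(2*t)   # MGF of Example 4
M.diff(t).subs(t, 0), M.diff(t, 2).subs(t, 0)
(5/3, 3)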

Example 5)
A distribution whose density is constant over the whole range of the random variable is called a uniform distribution. The uniform distribution on the interval $a \le x \le b$ has the probability density function (pdf):

$$f(x)=\frac{1}{b-a}$$

Determine the moment generating function and calculate the expected value (the first moment) from it.
For this calculation, the series() method of the sympy module is used to expand the moment generating function as a Taylor series, and integrate() and diff() are used for integration and differentiation, respectively.

a, b, x, t=symbols("a, b, x, t", real=True)
f=1/(b-a)
M=integrate(exp(t*x)*f, (x, a, b))   # MGF as a piecewise expression
M
$$\begin{cases}\dfrac{e^{at}}{at-bt}-\dfrac{e^{bt}}{at-bt}&\text{for } at-bt \ne 0\\[4pt]\dfrac{a}{a-b}-\dfrac{b}{a-b}&\text{otherwise}\end{cases}$$
M1=M.args[0][0]   # branch for t ≠ 0
M1
$$\frac{e^{at}}{at-bt}-\frac{e^{bt}}{at-bt}$$
M2=M1.series(t, 0, 4).removeO()   # Taylor expansion up to t³
M2
$$\frac{a}{a-b}-\frac{b}{a-b}+t^{3}\left(\frac{a^{4}}{24(a-b)}-\frac{b^{4}}{24(a-b)}\right)+t^{2}\left(\frac{a^{3}}{6(a-b)}-\frac{b^{3}}{6(a-b)}\right)+t\left(\frac{a^{2}}{2(a-b)}-\frac{b^{2}}{2(a-b)}\right)$$
E1=(M2.diff(t)).subs(t, 0)   # first moment E(X)
simplify(E1)
$$\frac{a}{2}+\frac{b}{2}$$
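The second moment is obtained the same way; combining it with E1 (a short check added here, continuing the session above) yields the familiar uniform variance $\frac{(b-a)^2}{12}$:

E2=(M2.diff(t, 2)).subs(t, 0)   # second moment E(X²)
factor(E2-E1**2)                # Var(X) = E(X²) − E(X)²
$$\frac{\left(a-b\right)^{2}}{12}$$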

Combination of random variables

In actual data analysis, the relationship between two or more variables is often the subject of analysis, for example the relationship between cancer and tobacco, or between stock prices and interest rates. In this multivariate situation, the process of calculating probabilities and various statistics is similar to the univariate process introduced in Section 3.3, Probability and Statistics.

Example 6)
Of the 12 students in class A, 3 are soccer players, 4 are baseball players, and the remaining 5 play no sport. If three students are selected to play a match against another class, what is the probability of each combination of players among the selected?

If X is the number of soccer players selected, Y the number of baseball players, and Z the number of the remaining students, the joint probability of this distribution is calculated as follows.

$$p(X=x, Y=y, Z=z)=\frac{\binom{3}{x}\binom{4}{y}\binom{5}{z}}{\binom{12}{3}}, \qquad x+y+z=3$$
from scipy import special 
total=special.comb(12, 3)
total
220.0
p=pd.DataFrame()
for i in range(4):
    for j in range(5):
        for k in range(5):
            if i+j+k==3:
                x=pd.DataFrame([[i, j, k, special.comb(3,i)*special.comb(4,j)*special.comb(5,k)/total]])
                p=pd.concat([p, x])   # append inside the if so each combination is added exactly once
p=np.around(p, 3)
p.columns=['x','y','z','P']
p
   x  y  z      P
0  0  0  3  0.045
0  0  1  2  0.182
0  0  2  1  0.136
...
0  3  0  0  0.005

Since z is determined by x and y (z = 3 - x - y), a cross table considering only the variables x and y is helpful in understanding the probabilities. This cross table (pivot table) can be created using the pandas DataFrame.pivot_table() method.

p.pivot_table('P', 'x', 'y', aggfunc="sum", margins=True)
y	0	1	2	3	All
x					
0	0.045	0.182	0.136	0.018	0.381
1	0.136	0.273	0.082	NaN	0.491
2	0.068	0.055	NaN	NaN	0.123
3	0.005	NaN	NaN	NaN	0.005
All	0.254	0.510	0.218	0.018	1.000

As in the above process, each probability of combined random variables is the probability of the intersection of the corresponding events. Consider the cell of the cross table where both x and y are 0, that is, P(0, 0, 3). If the random variables were independent, this probability would be the product of the marginal probabilities:

px0=special.comb(3, 0)*special.comb(9, 3)/total   # P(X=0): all 3 chosen from the 9 non-soccer students
round(px0, 3)
0.382
py0=special.comb(4, 0)*special.comb(8, 3)/total   # P(Y=0)
round(py0, 3)
0.255
pz3=special.comb(5, 3)*special.comb(7, 0)/total   # P(Z=3)
round(pz3, 3)
0.045
p003=px0*py0*pz3
round(p003, 3)
0.004

This product (0.004) differs from the value 0.045 shown in the cross table. Therefore, the random variables X, Y, and Z are not independent, and the joint probability must instead be computed directly with the counting formula below.

$$f(x,y,z)=\frac{\binom{X}{x}\binom{Y}{y}\binom{Z}{z}}{\binom{X+Y+Z}{x+y+z}} \qquad \begin{aligned}&X, Y, Z: \text{total number in each group}\\&x, y, z: \text{number selected from each group}\end{aligned}$$

The above probability mass function is the (multivariate) hypergeometric distribution, which will be discussed later. Applying this function, P(X=0, Y=0, Z=3) is calculated as

p003=special.comb(3, 0)*special.comb(4, 0)*special.comb(5, 3)/special.comb(12, 3)
round(p003, 3)
0.045
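scipy also provides this distribution directly as scipy.stats.multivariate_hypergeom (available in scipy ≥ 1.4), which can confirm the hand calculation:

from scipy.stats import multivariate_hypergeom
# P(X=0, Y=0, Z=3): group sizes m=[3, 4, 5], sample size n=3
round(multivariate_hypergeom.pmf(x=[0, 0, 3], m=[3, 4, 5], n=3), 3)
0.045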

For x = 0 in this example, the marginal probability is calculated by summing the joint pmf over all (j, k) with j + k = 3:

$$P(X=0)=\binom{X}{0}\sum_{\substack{0 \le j \le Y,\ 0 \le k \le Z\\ j+k=3}}\binom{Y}{j}\binom{Z}{k}\frac{1}{\binom{X+Y+Z}{3}}$$
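Evaluating this double sum numerically (a check added here) reproduces the marginal P(X=0) = 0.382 computed above, which also equals $\binom{9}{3}/\binom{12}{3}$:

# marginal P(X=0): sum C(4, j)·C(5, 3−j) over j = 0..3, divided by C(12, 3)
round(sum(special.comb(4, j)*special.comb(5, 3-j) for j in range(4))/special.comb(12, 3), 3)
0.382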

The above calculations are made on discrete random variables, and integration is used for continuous variables.

Example 7)
If the probability density function of two continuous random variables X and Y is

$$f(x,y)=2e^{-x}e^{-2y}, \quad 0<x<\infty,\ 0<y<\infty$$

Let's calculate:

  1. P{X > 1, Y < 1}
$$P(X>1,\ Y<1)=\int_0^1\!\int_1^{\infty} f(x,y)\,dx\,dy$$
x,y=symbols('x y')
f=2*exp(-x)*exp(-2*y)
p=integrate(f,(x, 1, oo), (y, 0, 1))
p
$-e^{-3}+e^{-1}$
N(p, 3)
0.318
  2. P{X < a}, a > 0
a=symbols("a")
p=integrate(f, (y, 0, oo), (x, 0, a)).evalf(3)   # integrate y out first, then x from 0 to a
p
$1.0-e^{-a}$

Example 8)
If two random variables X and Y are independent and each probability density function is

$$f(x)=e^{-x},\ x>0 \qquad f(y)=e^{-y},\ y>0$$

what is the probability density function of the random variable X/Y?

Independence means f(x, y) = f(x)f(y). Therefore, the joint probability density function is

$$f(x,y)=e^{-x}e^{-y}, \quad x, y>0$$

If the value of the random variable X/Y is denoted a, the probability density function for a can be derived using the joint probability density of the two variables X and Y.

$$\frac{X}{Y}=a \;\Rightarrow\; X=aY,\ a>0 \qquad \frac{X}{Y} \le a \;\Leftrightarrow\; 0<X<aY,\ 0<Y$$

The probability density function is the derivative of the cumulative probability function.

In order to determine the probability density function expressed only in the new variable a, the cumulative distribution function of f(x, y) is calculated first and then differentiated. The cumulative distribution function is:

$$F_{X/Y}(a)=\int_0^{\infty}\!\int_0^{ay} e^{-x}e^{-y}\,dx\,dy$$
a=symbols("a", positive=True)
x=symbols("x", positive=True)
y=symbols("y", positive=True)
f=exp(-x)*exp(-y)
F=integrate(f, (x, 0, a*y), (y, 0, oo))   # CDF F(a) = P(X/Y ≤ a)
F
$1-\frac{1}{a+1}$
F.diff(a)   # density of X/Y
$\frac{1}{\left(a+1\right)^{2}}$
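As a sanity check (added here), the resulting density integrates to 1 over a > 0:

integrate(F.diff(a), (a, 0, oo))
1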

Example 9)
Families with children in the village are as follows:

# of children in the family	0	1	2	3
Rate	15%	20%	35%	30%

Each child has an equal chance of being a boy or a girl, and the sexes of different children are independent. If a family is selected at random from this village, what is the probability of each combination of the numbers of boys (B) and girls (G) in that family?

PMF = P(selected family) × P(boy/girl configuration)

Here, each birth can be treated as a trial with success probability $\frac{1}{2}$. Expressing each pair as (boys, girls), the total sample space (S) is:

S={(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0)}

Above, (1,1) means (B=1, G=1 | C=2). Therefore, it is calculated as:

$$\begin{aligned}&P(B=1,G=1 \mid C=2)=\frac{P(B=1,G=1)}{P(C=2)}\\&P(B=1,G=1)=P(B=1,G=1 \mid C=2)\,P(C=2)=\binom{2}{1}\left(\frac{1}{2}\right)^{1}\left(\frac{1}{2}\right)^{2-1}P(C=2)\end{aligned} \qquad \begin{cases}C: \text{Child}\\ G: \text{Girl}\\ B: \text{Boy}\end{cases}$$

In the above case, both (B, G) and (G, B) orderings are possible, so the number of arrangements must be counted, as in the calculation above. The probability mass function that generalizes this case to the problem is as follows.

$$f(g)=\binom{C}{g}\left(\frac{1}{2}\right)^{C-g}\left(\frac{1}{2}\right)^{g}p(C) \qquad C: \text{Child}=0,1,2,3;\quad g: \text{Girl}=0,1,\dots,C$$

Of course, replacing the number of girls (g) by the number of boys (b) gives the same result.

pc={0:0.15, 1:0.2, 2:0.35, 3:0.3}
re=pd.DataFrame()
for i in pc.keys():
    for j in range(i+1):
        re1=pd.DataFrame([[i, j, i-j, special.comb(i, j)*(1/2)**j*(1/2)**(i-j)*pc[i]]])
        re=pd.concat([re, re1])
re.columns=['C', 'G', 'B', 'P']
re
   C  G  B       P
0  0  0  0  0.1500
0  1  0  1  0.1000
...
0  3  2  1  0.1125
0  3  3  0  0.0375
rePivot=re.pivot_table('P','G','B', aggfunc="sum", margins=True)
rePivot
B	0	1	2	3	All
G					
0	0.1500	0.1000	0.0875	0.0375	0.3750
1	0.1000	0.1750	0.1125	NaN	0.3875
2	0.0875	0.1125	NaN	NaN	0.2000
3	0.0375	NaN	NaN	NaN	0.0375
All	0.3750	0.3875	0.2000	0.0375	1.0000
P(G=1)=P(G=1,B=0)+P(G=1,B=1)+P(G=1,B=2)+P(G=1,B=3)
rePivot.iloc[1, :4].sum()
0.3875
$$P(B=1 \mid G=1)=\frac{P(B=1,G=1)}{P(G=1)}$$
rePivot.iloc[1,1]/rePivot.iloc[1, :4].sum()
0.4516129032258064

Example 10)
The joint probability density function of random variables X and Y is as follows.

$$f(x, y)=\frac{12}{5}x(2-x-y), \quad 0<x<1,\ 0<y<1$$

Calculate the probability density function of X under the condition Y = y.

$$P(X=x \mid Y=y)=\frac{P(x,y)}{P(y)}$$

P(y) is the integral of the joint density with respect to x with y held fixed. In other words,

$$\int_0^1 f(x,y)\,dx=\int_0^1 \frac{12}{5}x(2-x-y)\,dx$$
x, y=symbols('x y')
f=12/5*x*(2-x-y)
py=f.integrate((x, 0, 1))
py
$1.6-1.2y$
px_y=f/py
px_y
$\frac{2.4x\left(-x-y+2\right)}{1.6-1.2y}$
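As a check (added here), the conditional density integrates to 1 over x for any fixed y:

simplify(px_y.integrate((x, 0, 1)))
1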
