
The Mathematical and Statistical Foundations of Econometrics

Theorem 6.26: Convergence in distribution implies stochastic boundedness.

Proof: Let $X_n$ and $X$ be random variables with corresponding distribution functions $F_n$ and $F$, respectively, and assume that $X_n \to_d X$.

Given an $\varepsilon \in (0, 1)$, we can choose continuity points $M_1$ and $-M_1$ of $F$ such that $F(M_1) > 1 - \varepsilon/4$ and $F(-M_1) < \varepsilon/4$. Because $\lim_{n\to\infty} F_n(M_1) = F(M_1)$, there exists an index $n_1$ such that $|F_n(M_1) - F(M_1)| < \varepsilon/4$ if $n \geq n_1$; hence, $F_n(M_1) > 1 - \varepsilon/2$ if $n \geq n_1$. Similarly, there exists an index $n_2$ such that $F_n(-M_1) < \varepsilon/2$ if $n \geq n_2$. Let $m = \max(n_1, n_2)$.

Then $\inf_{n \geq m} P[|X_n| \leq M_1] > 1 - \varepsilon$. Finally, we can always choose an $M_2$ so large that $\min_{1 \leq n \leq m-1} P[|X_n| \leq M_2] > 1 - \varepsilon$. If we take $M = \max(M_1, M_2)$, the theorem follows. The proof of the multivariate case is almost the same.

Q.E.D.
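To make the choice of the bound concrete, the following minimal Python sketch (my own illustration, not from the text) picks $M_1$ from the limit distribution $F = N(0,1)$ exactly as in the proof and checks by simulation that standardized sums of Uniform(0,1) draws, which converge in distribution to $N(0,1)$, satisfy $P[|X_n| \leq M_1] > 1 - \varepsilon$; the uniform summands, sample sizes, and replication count are arbitrary assumptions.

```python
# Own illustration: choose M_1 from the limit distribution F = N(0,1) with
# F(M_1) > 1 - eps/4 and F(-M_1) < eps/4, then verify empirically that the
# standardized sums X_n (converging in distribution to F) satisfy
# P[|X_n| <= M_1] > 1 - eps for large n.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
eps = 0.05
M1 = norm.ppf(1 - eps / 4)  # continuity point of F with F(M1) > 1 - eps/4

for n in (10, 100, 1000):
    u = rng.uniform(size=(10_000, n))
    # X_n = (S_n - n*mu) / (sigma*sqrt(n)), with mu = 1/2, sigma^2 = 1/12 for Uniform(0,1)
    x_n = (u.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)
    print(f"n={n:5d}  P[|X_n| <= {M1:.3f}] ~= {np.mean(np.abs(x_n) <= M1):.4f}  (target > {1 - eps})")
```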

Note that, because convergence in probability implies convergence in distribution, it follows trivially from Theorem 6.26 that convergence in probability implies stochastic boundedness. For example, let $S_n = \sum_{j=1}^{n} X_j$, where the $X_j$'s are i.i.d. random variables with expectation $\mu$ and variance $\sigma^2 < \infty$.

If $\mu = 0$, then $S_n = O_p(\sqrt{n})$ because, by the central limit theorem, $S_n/\sqrt{n}$ converges in distribution to $N(0, \sigma^2)$. However, if $\mu \neq 0$, then only $S_n = O_p(n)$ because then $S_n/\sqrt{n} - \mu\sqrt{n} \to_d N(0, \sigma^2)$; hence, $S_n/\sqrt{n} = O_p(1) + O_p(\sqrt{n})$ and thus $S_n = O_p(\sqrt{n}) + O_p(n) = O_p(n)$.
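The two rates can be seen directly in a small simulation. The sketch below is my own illustration (assuming $X_j \sim N(\mu, 1)$ and a handful of arbitrary sample sizes), not part of the text: when $\mu = 0$ the 95% quantile of $|S_n|/\sqrt{n}$ stabilizes, whereas for $\mu \neq 0$ only $|S_n|/n$ stabilizes.

```python
# Own illustration (assumes X_j ~ N(mu, 1)): for mu = 0 the scaled sum S_n/sqrt(n)
# stays stochastically bounded, while for mu != 0 only S_n/n does.
import numpy as np

rng = np.random.default_rng(1)
reps = 1_000

for mu in (0.0, 0.5):
    for n in (100, 1_000, 10_000):
        s_n = rng.normal(loc=mu, scale=1.0, size=(reps, n)).sum(axis=1)
        q_sqrt = np.quantile(np.abs(s_n) / np.sqrt(n), 0.95)  # bounded only if mu = 0
        q_lin = np.quantile(np.abs(s_n) / n, 0.95)            # bounded in both cases
        print(f"mu={mu}  n={n:6d}  q95(|S_n|/sqrt(n))={q_sqrt:8.2f}  q95(|S_n|/n)={q_lin:.3f}")
```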

In Definition 6.2 I have introduced the concept of uniform integrability. It is left as an exercise to prove that

Theorem 6.27: Uniform integrability implies stochastic boundedness.

Tightness is the version of stochastic boundedness for probability measures:

Definition 6.9: A sequence of probability measures $\mu_n$ on the Borel sets in $\mathbb{R}^k$ is called tight if, for an arbitrary $\varepsilon \in (0, 1)$, there exists a compact subset $K$ of $\mathbb{R}^k$ such that $\inf_{n \geq 1} \mu_n(K) > 1 - \varepsilon$.

Clearly, if $X_n = O_p(1)$, then the sequence of corresponding induced probability measures $\mu_n$ is tight because the sets of the type $K = \{x \in \mathbb{R}^k : \|x\| \leq M\}$ are closed and bounded for $M < \infty$ and therefore compact.
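A standard counterexample (my own addition, not from the text) shows what tightness rules out: the probability mass is not allowed to escape to infinity.

```latex
% Own example: the point masses \mu_n = \delta_n on \mathbb{R} are not tight, since
% for any compact (hence bounded) set K,
\mu_n(K) = \mathbf{1}\{n \in K\} = 0 \quad \text{for all } n > \sup K,
% so \inf_{n \ge 1} \mu_n(K) = 0, and the requirement
\inf_{n \ge 1} \mu_n(K) > 1 - \varepsilon
% fails for every compact K and every \varepsilon \in (0,1).
```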

For sequences of random variables and vectors the tightness concept does not add much over the stochastic boundedness concept, but the tightness concept is fundamental in proving so-called functional central limit theorems. If $X_n = O_p(1)$, then obviously, for any $\delta > 0$, $X_n = O_p(n^{\delta})$. But $X_n/n^{\delta}$ is now more than stochastically bounded because then we also have that $X_n/n^{\delta} \to_p 0$.

The latter is denoted by $X_n = o_p(n^{\delta})$:

Definition 6.10: Let $a_n$ be a sequence of positive nonrandom variables. Then $X_n = o_p(a_n)$ means that $X_n/a_n$ converges in probability to zero (or a zero vector if $X_n$ is a vector), and $o_p(a_n)$ by itself represents a generic random variable or vector $X_n$ such that $X_n = o_p(a_n)$. Moreover, the sequence $1/a_n$ represents the rate of convergence of $X_n$. Thus, $X_n \to_p X$ can also be denoted by $X_n = X + o_p(1)$.
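To tie the notation to the earlier example with $S_n$, the following worked lines (my own restatement using the weak law of large numbers and the central limit theorem, not from the text) express the same facts in $o_p$/$O_p$ form.

```latex
% Own restatement of the S_n example in o_p / O_p notation:
% the weak law of large numbers gives S_n/n \to_p \mu, i.e.,
S_n/n = \mu + o_p(1), \qquad\text{equivalently}\qquad S_n = n\mu + o_p(n),
% and the central limit theorem sharpens the remainder:
S_n = n\mu + O_p(\sqrt{n}), \qquad\text{where } O_p(\sqrt{n}) = o_p(n).
```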

This notation is handy if the difference of $X_n$ and $X$ is a complicated expression. For example, the result of Theorem 6.25 is obtained because, by the mean value theorem, $\sqrt{n}(\Phi(X_n) - \Phi(\mu)) = \Delta_n(\mu)\sqrt{n}(X_n - \mu) = \Delta(\mu)\sqrt{n}(X_n - \mu) + o_p(1)$, where

$$\Delta_n(\mu) = \begin{pmatrix} \partial\Phi_1(x)/\partial x' \,\big|_{x = \mu + \lambda_{1,n}(X_n - \mu)} \\ \vdots \\ \partial\Phi_m(x)/\partial x' \,\big|_{x = \mu + \lambda_{m,n}(X_n - \mu)} \end{pmatrix},$$

with $\lambda_{j,n} \in [0, 1]$, $j = 1, \ldots, m$. The remainder term $(\Delta_n(\mu) - \Delta(\mu))\sqrt{n}(X_n - \mu)$ can now be represented by $o_p(1)$ because $\Delta_n(\mu) \to_p \Delta(\mu)$ and $\sqrt{n}(X_n - \mu) \to_d N_k[0, \Sigma]$; hence, by Theorem 6.21 this remainder term converges in distribution to the zero vector and thus also in probability to the zero vector.
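As a concrete check of the expansion above, here is a small simulation sketch (my own example, not from the text) with $k = m = 1$, $\Phi(x) = e^{x}$, and $X_n$ the sample mean of $N(\mu, \sigma^2)$ data, for which $\sqrt{n}(\Phi(X_n) - \Phi(\mu))$ should be approximately $N(0, e^{2\mu}\sigma^2)$; the parameter values and replication count are arbitrary.

```python
# Own example: delta-method check with k = m = 1 and Phi(x) = exp(x).
# For normal data the sample mean is exactly N(mu, sigma^2/n), so it is drawn directly.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.3, 2.0, 5_000, 100_000

xbar = rng.normal(mu, sigma / np.sqrt(n), size=reps)  # sample mean of n observations
lhs = np.sqrt(n) * (np.exp(xbar) - np.exp(mu))        # sqrt(n)(Phi(X_n) - Phi(mu))

print("simulated variance   :", lhs.var())
print("delta-method variance:", np.exp(2 * mu) * sigma ** 2)
```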

6.9. Asymptotic Normality of M-Estimators

This section sets forth conditions for the asymptotic normality of M-estimators in addition to the conditions for consistency. An estimator $\hat{\theta}$ of a parameter $\theta_0 \in \mathbb{R}^m$ is asymptotically normally distributed if an increasing sequence of positive numbers $a_n$ and a positive semidefinite $m \times m$ matrix $\Sigma$ exist such that $a_n(\hat{\theta} - \theta_0) \to_d N_m[0, \Sigma]$.

Usually, $a_n = \sqrt{n}$, but there are exceptions to this rule. Asymptotic normality is fundamental for econometrics; most econometric tests rely on it.

Moreover, the proof of the asymptotic normality theorem in this section also nicely illustrates the usefulness of the main results in this chapter. Given that the data are a random sample, we only need a few additional conditions over those of Theorems 6.10 and 6.11:

Theorem 6.28: Let, in addition to the conditions of Theorems 6.10 and 6.11, the following conditions be satisfied:

(a) $\Theta$ is convex.
(b) $\theta_0$ is an interior point of $\Theta$.
(c) For each $x \in \mathbb{R}^k$, $g(x, \theta)$ is twice continuously differentiable on $\Theta$.
(d) For each pair $i_1, i_2$ of components of $\theta$, $E[\sup_{\theta \in \Theta} |\partial^2 g(X_1, \theta)/(\partial\theta_{i_1}\partial\theta_{i_2})|] < \infty$.
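As a hypothetical illustration of the conditions listed so far (my own example, not part of the theorem), take the sample mean viewed as an M-estimator, with $g(x, \theta) = -(x - \theta)^2$ and $\Theta = [a, b]$ a bounded interval containing $\theta_0 = E[X_1]$ in its interior.

```latex
% Own illustration: g(x,\theta) = -(x-\theta)^2 on \Theta = [a,b].
% (a) [a,b] is convex; (b) \theta_0 = E[X_1] is assumed interior; (c) holds because
\frac{\partial g(x,\theta)}{\partial\theta} = 2(x-\theta), \qquad
\frac{\partial^2 g(x,\theta)}{\partial\theta^2} = -2,
% which are continuous in \theta; and (d) holds because
E\Big[\sup_{\theta\in\Theta}\Big|\frac{\partial^2 g(X_1,\theta)}{\partial\theta^2}\Big|\Big] = 2 < \infty .
```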