
With the symmetric cost function $C_{ij} = 1 - \delta_{ij}$, the conditional risk of deciding $\omega_i$ is

$$R(\omega_i \mid x) = \sum_{j \neq i} C_{ij} P(\omega_j \mid x) = 1 - P(\omega_i \mid x). \quad (14.40)$$

Thus, to minimize the average probability of error, we select $i$ as the $i$ which maximizes the a posteriori probability $P(\omega_i \mid x)$. That is, for minimum cost, we decide $\omega_i$ if $P(\omega_i \mid x) > P(\omega_j \mid x)$ for all $j \neq i$, which we have already seen is the simple maximum likelihood classifier. Thus, we see that the maximum likelihood classifier minimizes the Bayes risk associated with a symmetric cost function.
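The equivalence of the minimum-risk and maximum a posteriori rules under a symmetric cost can be checked numerically. Below is a minimal numpy sketch; the posterior values are illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical posteriors P(w_i | x) for three classes at one measurement x.
posteriors = np.array([0.2, 0.5, 0.3])

# Symmetric (0-1) cost: C_ij = 1 for i != j, 0 for i == j.
C = 1.0 - np.eye(3)

# Risk of deciding class i: sum_j C_ij P(w_j | x) = 1 - P(w_i | x), Eq. (14.40).
risks = C @ posteriors

decision_min_risk = int(np.argmin(risks))      # minimize the Bayes risk
decision_map = int(np.argmax(posteriors))      # maximum a posteriori

assert decision_min_risk == decision_map       # the two rules coincide
```

Because each risk is just one minus the corresponding posterior, minimizing the risk and maximizing the posterior always pick the same class.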

14.5 The quadratic classifier

Consider the general multivariate Gaussian classifier, with two classes. As in Assignment 14.1, if we take logs, we can work out a decision rule based on a likelihood

ratio: Decide class 1 if

$$\ln\frac{|K_1|}{|K_2|} + (x - \mu_1)^T K_1^{-1} (x - \mu_1) - (x - \mu_2)^T K_2^{-1} (x - \mu_2) < Threshold; \quad (14.41)$$

else decide class 2; where

$$Threshold = 2 \ln \frac{P(\omega_2)(C_{12} - C_{22})}{P(\omega_1)(C_{21} - C_{11})}.$$

If we define

$$A = K_1^{-1} - K_2^{-1}, \quad b = 2(K_2^{-1}\mu_2 - K_1^{-1}\mu_1), \quad c = \mu_1^T K_1^{-1} \mu_1 - \mu_2^T K_2^{-1} \mu_2 + \ln\frac{|K_1|}{|K_2|}, \quad (14.42)$$

we can rewrite Eq. (14.41) using

$$g(x) \equiv x^T A x + b^T x + c. \quad (14.43)$$
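As a sanity check, the quadratic form of Eq. (14.43), with $A$, $b$, and $c$ as in Eq. (14.42), should agree term by term with the expanded log-likelihood ratio of Eq. (14.41). A minimal numpy sketch (the class means and covariances are illustrative assumptions):

```python
import numpy as np

# Illustrative two-class parameters (assumed for this sketch).
mu1 = np.array([0.0, 0.0]); K1 = np.array([[2.0, 0.3], [0.3, 1.0]])
mu2 = np.array([3.0, 1.0]); K2 = np.array([[1.0, -0.2], [-0.2, 1.5]])

K1i, K2i = np.linalg.inv(K1), np.linalg.inv(K2)

# Eq. (14.42): coefficients of the quadratic discriminant.
A = K1i - K2i
b = 2.0 * (K2i @ mu2 - K1i @ mu1)
c = mu1 @ K1i @ mu1 - mu2 @ K2i @ mu2 + np.log(np.linalg.det(K1) / np.linalg.det(K2))

def g(x):
    """Quadratic discriminant, Eq. (14.43)."""
    return x @ A @ x + b @ x + c

def log_likelihood_ratio(x):
    """Left-hand side of Eq. (14.41), written out directly."""
    d1 = (x - mu1) @ K1i @ (x - mu1)
    d2 = (x - mu2) @ K2i @ (x - mu2)
    return np.log(np.linalg.det(K1) / np.linalg.det(K2)) + d1 - d2

x = np.array([1.0, 0.5])
assert np.isclose(g(x), log_likelihood_ratio(x))
```

Expanding the two quadratic forms in Eq. (14.41) and collecting the $x^T(\cdot)x$, linear, and constant terms reproduces exactly the $A$, $b$, and $c$ of Eq. (14.42), which is what the assertion verifies at a sample point.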

And the decision rule becomes: Decide class 1 if $g(x) < T$. In this formulation, we see clearly why the Gaussian parametric classifier is known as a quadratic classifier.

(The Mahalanobis distance, discussed below, has the properties of a metric. Can you prove that? Do you recall the definition of a metric?)

Let's examine the implications of this rule. Consider the quantity $(x - \mu_1)^T K_1^{-1} (x - \mu_1)$. This is some sort of measure involving a measurement, $x$, and a class parameterized by a mean vector and a covariance matrix.

This quantity is known as the Mahalanobis distance. First, let's look at the case in which the covariance is the identity. Then, the Mahalanobis distance simplifies to $(x - \mu_1)^T (x - \mu_1)$.
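The Mahalanobis distance is easy to compute directly, and the identity-covariance special case falls out immediately. A minimal numpy sketch (the mean, covariance, and measurement are illustrative assumptions):

```python
import numpy as np

mu = np.array([1.0, 2.0])                  # class mean (illustrative)
K = np.array([[4.0, 0.0], [0.0, 1.0]])     # class covariance (illustrative)
x = np.array([3.0, 2.0])                   # a measurement

d = x - mu
mahalanobis_sq = d @ np.linalg.inv(K) @ d  # (x - mu)^T K^{-1} (x - mu)

# With K = I, the same expression collapses to the squared Euclidean distance.
euclidean_sq = d @ d
assert np.isclose(d @ np.linalg.inv(np.eye(2)) @ d, euclidean_sq)

# Here the variance along the first axis is 4, so a deviation of 2 in that
# direction contributes only 2 * (1/4) * 2 = 1 to the Mahalanobis distance.
assert np.isclose(mahalanobis_sq, 1.0)
```

Directions of large variance are "cheap" under this distance: the covariance matrix rescales each deviation by how much spread the class exhibits along it.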

That is, take the difference between the measurement and the mean. That is a vector. Then take the inner product of that vector with itself, which is, of course, the squared magnitude of that vector.

What is this quantity? Of course! It is just the (squared) Euclidean distance between the measurement and the mean of the class. If the prior probabilities are the same and we use symmetric costs, the Threshold works out to be zero, and the decision rule simplifies to: Decide class 1 if

$$(x - \mu_1)^T (x - \mu_1) - (x - \mu_2)^T (x - \mu_2) < 0; \quad (14.44)$$

else decide class 2. If the measurement is closer to the mean of class 1 than to the mean of class 2, this quantity is less than zero. Therefore, we refer to this (very simplified) classifier as a nearest mean classifier, or nearest mean decision rule.
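The nearest mean rule of Eq. (14.44) is a one-liner in practice. A minimal numpy sketch (the two class means and the test points are illustrative assumptions):

```python
import numpy as np

mu1 = np.array([0.0, 0.0])   # illustrative class means
mu2 = np.array([4.0, 0.0])

def nearest_mean_decision(x):
    """Eq. (14.44): decide class 1 if the quantity is negative, else class 2."""
    lhs = (x - mu1) @ (x - mu1) - (x - mu2) @ (x - mu2)
    return 1 if lhs < 0 else 2

assert nearest_mean_decision(np.array([1.0, 1.0])) == 1   # closer to mu1
assert nearest_mean_decision(np.array([3.0, -1.0])) == 2  # closer to mu2
```

Note that only the difference of squared distances matters, so no square roots are needed to make the decision.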

Now, let's complicate the rule a bit. We no longer assume the covariances are equal to the identity, but we do assume they are equal to each other ($K_1 = K_2 \equiv K$). In this case, look at Eq.

(14.42) and notice that the $A$ matrix becomes zero. Now, the operations are no longer quadratic.

We have a linear classifier. We could choose to ignore the ratio of the determinants of the covariance matrices, or, more appropriately, to include that number in the threshold $T$. Then we have a minimum distance decision rule, but now the distance used is not the Euclidean distance.
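The collapse from quadratic to linear when $K_1 = K_2$ can be seen by plugging a shared covariance into Eq. (14.42). A minimal numpy sketch (the means and the shared covariance are illustrative assumptions):

```python
import numpy as np

mu1 = np.array([0.0, 0.0]); mu2 = np.array([2.0, 1.0])
K = np.array([[2.0, 0.5], [0.5, 1.0]])     # shared covariance (illustrative)
Ki = np.linalg.inv(K)

# Eq. (14.42) with K1 = K2 = K: the quadratic term vanishes...
A = Ki - Ki
assert np.allclose(A, 0.0)

# ...and ln(|K1|/|K2|) = 0, leaving a purely linear (affine) discriminant.
b = 2.0 * (Ki @ mu2 - Ki @ mu1)
c = mu1 @ Ki @ mu1 - mu2 @ Ki @ mu2

def g(x):
    """With A = 0, the discriminant of Eq. (14.43) reduces to b^T x + c."""
    return b @ x + c

# With threshold T = 0, the decision boundary g(x) = 0 is a hyperplane;
# by symmetry it passes through the midpoint of the two means.
assert np.isclose(g(0.5 * (mu1 + mu2)), 0.0)
```

So equal covariances turn the quadric decision surface into a hyperplane, which is why this case is called a linear classifier.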

We refer to this as a minimum Mahalanobis distance classifier.

Here is another special case: What if the covariance matrices are not only equal, but diagonal? Now, the Mahalanobis distance takes on a special form. We illustrate this by using a three-dimensional measurement vector, and letting the mean be zero:

$$[x_1 \;\; x_2 \;\; x_3] \begin{bmatrix} \frac{1}{\sigma_{11}} & 0 & 0 \\ 0 & \frac{1}{\sigma_{22}} & 0 \\ 0 & 0 & \frac{1}{\sigma_{33}} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
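With a diagonal covariance, the matrix product above reduces to a sum of per-axis terms, each squared component divided by its own variance. A minimal numpy sketch (the variances and the measurement are illustrative assumptions):

```python
import numpy as np

sigma = np.array([4.0, 1.0, 0.25])       # diagonal entries of K (illustrative)
K = np.diag(sigma)
x = np.array([2.0, 1.0, 0.5])            # measurement; mean taken as zero

# Full matrix form of the Mahalanobis distance...
full = x @ np.linalg.inv(K) @ x

# ...equals the component-wise sum x_1^2/sigma_11 + x_2^2/sigma_22 + x_3^2/sigma_33.
componentwise = np.sum(x**2 / sigma)

assert np.isclose(full, componentwise)
```

That is, a diagonal covariance simply weights each measurement axis by the reciprocal of its variance, with no cross terms.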