# 9781441998866-c1

## Chapter 2: Projection Matrices

*From H. Yanai et al., Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition, Statistics for Social and Behavioral Sciences, © Springer Science+Business Media, LLC 2011 (DOI 10.1007/978-1-4419-9887-3_2).*

### 2.1 Definition

**Definition 2.1** Let $x \in E^n = V \oplus W$. Then $x$ can be uniquely decomposed into $x = x_1 + x_2$ (where $x_1 \in V$ and $x_2 \in W$). The transformation that maps $x$ into $x_1$ is called the projection matrix (or simply projector) onto $V$ along $W$ and is denoted as $\varphi$. This is a linear transformation; that is,

$$\varphi(a_1 y_1 + a_2 y_2) = a_1 \varphi(y_1) + a_2 \varphi(y_2) \tag{2.1}$$

for any $y_1, y_2 \in E^n$. This implies that it can be represented by a matrix. This matrix is called a projection matrix and is denoted by $P_{V \cdot W}$. The vector transformed by $P_{V \cdot W}$ (that is, $x_1 = P_{V \cdot W}\,x$) is called the projection (or the projection vector) of $x$ onto $V$ along $W$.

**Theorem 2.1** The necessary and sufficient condition for a square matrix $P$ of order $n$ to be the projection matrix onto $V = \mathrm{Sp}(P)$ along $W = \mathrm{Ker}(P)$ is given by

$$P^2 = P. \tag{2.2}$$

We need the following lemma to prove the theorem above.

**Lemma 2.1** Let $P$ be a square matrix of order $n$, and assume that (2.2) holds. Then

$$E^n = \mathrm{Sp}(P) \oplus \mathrm{Ker}(P) \tag{2.3}$$

and

$$\mathrm{Ker}(P) = \mathrm{Sp}(I_n - P). \tag{2.4}$$

**Proof of Lemma 2.1.** (2.3): Let $x \in \mathrm{Sp}(P)$ and $y \in \mathrm{Ker}(P)$. From $x = Pa$, we have $Px = P^2 a = Pa = x$, and $Py = 0$. Hence, from $x + y = 0 \Rightarrow Px + Py = 0$, we obtain $Px = x = 0 \Rightarrow y = 0$. Thus, $\mathrm{Sp}(P) \cap \mathrm{Ker}(P) = \{0\}$. On the other hand, from $\dim(\mathrm{Sp}(P)) + \dim(\mathrm{Ker}(P)) = \mathrm{rank}(P) + (n - \mathrm{rank}(P)) = n$, we have $E^n = \mathrm{Sp}(P) \oplus \mathrm{Ker}(P)$.

(2.4): We have $Px = 0 \Rightarrow x = (I_n - P)x \Rightarrow \mathrm{Ker}(P) \subset \mathrm{Sp}(I_n - P)$ on the one hand, and $P(I_n - P) = O \Rightarrow \mathrm{Sp}(I_n - P) \subset \mathrm{Ker}(P)$ on the other. Thus, $\mathrm{Ker}(P) = \mathrm{Sp}(I_n - P)$. Q.E.D.

**Note** When (2.4) holds, $P(I_n - P) = O \Rightarrow P^2 = P$.
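The conditions of Theorem 2.1 and Lemma 2.1 are easy to verify numerically. A minimal sketch (the matrix $P$ below is our own illustrative choice, not an example from the text) using an idempotent but non-symmetric matrix, i.e., an oblique projector:

```python
import numpy as np

# An idempotent P (our own example): projects onto Sp(P) = span{(1, 0)}
# along Ker(P) = span{(1, -1)}.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
I = np.eye(2)

assert np.allclose(P @ P, P)            # (2.2): P^2 = P

# (2.4): the columns of I - P lie in Ker(P), since P(I - P) = O.
assert np.allclose(P @ (I - P), np.zeros((2, 2)))

# (2.3): any x decomposes uniquely as x = Px + (I - P)x.
x = np.array([3.0, 4.0])
x1, x2 = P @ x, (I - P) @ x
assert np.allclose(x1 + x2, x)
assert np.allclose(P @ x1, x1)          # x1 is fixed by P (x1 in Sp(P))
assert np.allclose(P @ x2, 0.0)         # x2 is annihilated by P (x2 in Ker(P))
```

Note that this $P$ is not symmetric, so the decomposition is oblique rather than orthogonal; idempotency alone is what Theorem 2.1 requires.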
Thus, (2.2) is the necessary and sufficient condition for (2.4).

**Proof of Theorem 2.1.** (Necessity) For $\forall x \in E^n$, $y = Px \in V$. Noting that $y = y + 0$, we obtain

$$P(Px) = Py = y = Px \;\Longrightarrow\; P^2 x = Px \;\Longrightarrow\; P^2 = P.$$

(Sufficiency) Let $V = \{y \mid y = Px,\ x \in E^n\}$ and $W = \{y \mid y = (I_n - P)x,\ x \in E^n\}$. From Lemma 2.1, $V$ and $W$ are disjoint. Then an arbitrary $x \in E^n$ can be uniquely decomposed into $x = Px + (I_n - P)x = x_1 + x_2$ (where $x_1 \in V$ and $x_2 \in W$). From Definition 2.1, $P$ is the projection matrix onto $V = \mathrm{Sp}(P)$ along $W = \mathrm{Ker}(P)$. Q.E.D.

Let $E^n = V \oplus W$, and let $x = x_1 + x_2$, where $x_1 \in V$ and $x_2 \in W$. Let $P_{W \cdot V}$ denote the projector that transforms $x$ into $x_2$. Then,

$$P_{V \cdot W}\,x + P_{W \cdot V}\,x = (P_{V \cdot W} + P_{W \cdot V})\,x. \tag{2.5}$$

Because the equation above has to hold for any $x \in E^n$, it must hold that

$$I_n = P_{V \cdot W} + P_{W \cdot V}.$$

Let a square matrix $P$ be the projection matrix onto $V$ along $W$. Then $Q = I_n - P$ satisfies $Q^2 = (I_n - P)^2 = I_n - 2P + P^2 = I_n - P = Q$, indicating that $Q$ is the projection matrix onto $W$ along $V$. We also have

$$PQ = P(I_n - P) = P - P^2 = O, \tag{2.6}$$

implying that $\mathrm{Sp}(Q)$ constitutes the null space of $P$ (i.e., $\mathrm{Sp}(Q) = \mathrm{Ker}(P)$). Similarly, $QP = O$, implying that $\mathrm{Sp}(P)$ constitutes the null space of $Q$ (i.e., $\mathrm{Sp}(P) = \mathrm{Ker}(Q)$).

**Theorem 2.2** Let $E^n = V \oplus W$. The necessary and sufficient conditions for a square matrix $P$ of order $n$ to be the projection matrix onto $V$ along $W$ are:

$$\text{(i)}\ Px = x \ \text{for}\ \forall x \in V, \qquad \text{(ii)}\ Px = 0 \ \text{for}\ \forall x \in W. \tag{2.7}$$

**Proof.** (Sufficiency) Let $P_{V \cdot W}$ and $P_{W \cdot V}$ denote the projection matrices onto $V$ along $W$ and onto $W$ along $V$, respectively. Premultiplying (2.5) by $P$, we obtain $P(P_{V \cdot W}\,x) = P_{V \cdot W}\,x$ and $P P_{W \cdot V}\,x = 0$ because of (i) and (ii) above, since $P_{V \cdot W}\,x \in V$ and $P_{W \cdot V}\,x \in W$.
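The identities around (2.5) and (2.6) can be checked directly. A short sketch (reusing an idempotent matrix of our own choosing as $P_{V \cdot W}$):

```python
import numpy as np

# P projects onto V = span{(1, 0)} along W = span{(1, -1)} (our own example).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
Q = np.eye(2) - P          # the complementary projector P_{W·V} = I_n - P

assert np.allclose(Q @ Q, Q)              # Q^2 = Q: Q projects onto W along V
assert np.allclose(P + Q, np.eye(2))      # I_n = P_{V·W} + P_{W·V}
assert np.allclose(P @ Q, np.zeros((2, 2)))   # (2.6): PQ = O, so Sp(Q) = Ker(P)
assert np.allclose(Q @ P, np.zeros((2, 2)))   # QP = O, so Sp(P) = Ker(Q)
```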
Since $Px = P_{V \cdot W}\,x$ holds for any $x$, it must hold that $P = P_{V \cdot W}$.

(Necessity) For any $x \in V$, we have $x = x + 0$. Thus, $Px = x$. Similarly, for any $y \in W$, we have $y = 0 + y$, so that $Py = 0$. Q.E.D.

**Example 2.1** In Figure 2.1, $\overrightarrow{OA}$ indicates the projection of $z$ onto $\mathrm{Sp}(x)$ along $\mathrm{Sp}(y)$ (that is, $\overrightarrow{OA} = P_{\mathrm{Sp}(x) \cdot \mathrm{Sp}(y)}\,z$), where $P_{\mathrm{Sp}(x) \cdot \mathrm{Sp}(y)}$ indicates the projection matrix onto $\mathrm{Sp}(x)$ along $\mathrm{Sp}(y)$. Clearly, $\overrightarrow{OB} = (I_2 - P_{\mathrm{Sp}(x) \cdot \mathrm{Sp}(y)})\,z$.

*Figure 2.1: Projection onto $\mathrm{Sp}(x) = \{x\}$ along $\mathrm{Sp}(y) = \{y\}$.* (Figure not reproduced.)

**Example 2.2** In Figure 2.2, $\overrightarrow{OA}$ indicates the projection of $z$ onto $V = \{x \mid x = \alpha_1 x_1 + \alpha_2 x_2\}$ along $\mathrm{Sp}(y)$ (that is, $\overrightarrow{OA} = P_{V \cdot \mathrm{Sp}(y)}\,z$), where $P_{V \cdot \mathrm{Sp}(y)}$ indicates the projection matrix onto $V$ along $\mathrm{Sp}(y)$.

*Figure 2.2: Projection onto a two-dimensional space $V$ along $\mathrm{Sp}(y) = \{y\}$.* (Figure not reproduced.)

**Theorem 2.3** The necessary and sufficient condition for a square matrix $P$ of order $n$ to be a projector onto $V$ of dimensionality $r$ ($\dim(V) = r$) is given by

$$P = T \Delta_r T^{-1}, \tag{2.8}$$

where $T$ is a square nonsingular matrix of order $n$ and

$$\Delta_r = \begin{bmatrix} I_r & O \\ O & O \end{bmatrix}.$$

(There are $r$ unities on the leading diagonal, $1 \le r \le n$.)

**Proof.** (Necessity) Let $E^n = V \oplus W$, and let $A = [a_1, a_2, \cdots, a_r]$ and $B = [b_1, b_2, \cdots, b_{n-r}]$ be matrices of linearly independent basis vectors spanning $V$ and $W$, respectively. Let $T = [A, B]$. Then $T$ is nonsingular, since $\mathrm{rank}(A) + \mathrm{rank}(B) = \mathrm{rank}(T) = n$. Hence, $\forall x \in V$ and $\forall y \in W$ can be expressed as

$$x = A\alpha = [A, B]\begin{bmatrix} \alpha \\ 0 \end{bmatrix} = T\begin{bmatrix} \alpha \\ 0 \end{bmatrix}, \qquad y = B\beta = [A, B]\begin{bmatrix} 0 \\ \beta \end{bmatrix} = T\begin{bmatrix} 0 \\ \beta \end{bmatrix}.$$
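The construction in Theorem 2.3 translates directly into code. A sketch under our own choice of bases (the vectors in $A$ and $B$ below are illustrative, not from the text): stack bases of $V$ and $W$ into $T = [A, B]$, then $P = T \Delta_r T^{-1}$ fixes $V$ and annihilates $W$.

```python
import numpy as np

# Our own example bases in E^3: V = Sp(A) with r = 1, W = Sp(B) with n - r = 2.
A = np.array([[1.0], [1.0], [0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
T = np.hstack([A, B])                 # nonsingular since E^3 = V ⊕ W
Delta_r = np.diag([1.0, 0.0, 0.0])    # r unities on the leading diagonal
P = T @ Delta_r @ np.linalg.inv(T)    # (2.8): P = T Δ_r T^{-1}

assert np.allclose(P @ P, P)          # P is a projector (Theorem 2.1)
assert np.allclose(P @ A, A)          # Px = x for x in V  (Theorem 2.2 (i))
assert np.allclose(P @ B, np.zeros_like(B))  # Px = 0 for x in W (Theorem 2.2 (ii))
```

The design mirrors the necessity proof: $T^{-1}$ maps a vector to its coordinates in the basis $[A, B]$, $\Delta_r$ keeps only the $V$-coordinates, and $T$ maps back.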