Tensor-based derivation of standard vector identities

arXiv:0904.1814v1 [physics.gen-ph] 11 Apr 2009

Miguel Ángel Rodríguez-Valverde and María Tirado-Miranda
Grupo de Física de Fluidos y Biocoloides, Departamento de Física Aplicada,
Facultad de Ciencias, Universidad de Granada. E-18071 Granada
E-mail: marodri@ugr.es

Abstract. Vector algebra is a powerful and needful tool for Physics but, unfortunately, due to gaps in mathematical skills, it can become misleading in first undergraduate courses of science and engineering studies. Standard vector identities are usually proved using Cartesian components or geometrical arguments, accordingly. Instead, this work presents a new teaching strategy for deriving vector identities symbolically, without analytical expansion in components, either explicitly or in indicial notation. This strategy is mainly based on the correspondence between three-dimensional vectors and skew-symmetric second-rank tensors. Hence, the derivations are performed from skew tensors and dyadic products, rather than cross products. Some examples of skew-symmetric tensors in Physics are illustrated.

PACS numbers: 01.40.-d, 01.40.gb, 02.00.00, 45.10.Na

1. Introduction

Vector analysis [1] plays a key role in many branches of Physics (Mechanics, Fluid dynamics, Electromagnetism theory, and so on) because it is a powerful mathematical tool that can express physical laws in invariant form. Hence, learning vector skills must be a priority goal for science and engineering students in undergraduate courses [2]. However, the understanding of vectors often becomes intricate [3] due to the underlying mathematics, which can even hide the meaning of the involved physical quantities [4].
Common pitfalls originate from the lack of mathematical resources for deriving vector identities. In undergraduate physics courses, the standard identities of introductory vector algebra are mostly proved either from geometrical arguments [5] or analytically using rectangular Cartesian components, and at best with indicial notation [1, 6]. Unlike the analytical proofs, geometrical derivations are performed regardless of the coordinate system. The demonstrations based on indicial notation are more elegant and compact, although they require handling complex symbolic expressions, without any physical insight into the problem at hand.

We present an alternate approach to deriving vector identities, based on the use of tensors and dyadic products rather than cross products. Tensor algebra in matrix format [7] is less cumbersome than indicial notation and, further, operations involving second-order tensors are readily understood as transformations of vectors. Hereafter, only for illustrative purposes, just first- and second-rank Cartesian tensors are considered, i.e. the three-dimensional space is Euclidean. Hence, the contravariant and covariant components are identical to one another because the metric tensor and the conjugate metric tensor are both equal to the identity matrix. Nevertheless, the derivations compiled in this text remain valid for other metrics with minor modifications.

2. Dyadics

Aside from the well-known dot product (a particular case of the inner product), a dyadic is formed by the outer or direct product of two vectors. The dyadic of the vectors $\vec{a}$ and $\vec{b}$ produces the following second-order tensor [7] of nine components:

$$(\vec{a}\,\vec{b})_{ij} \equiv a_i b_j \qquad (1)$$

with $i, j = 1, 2, 3$, and where $a_i$ and $b_j$ are the respective Cartesian components of the two operating vectors.
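The component rule of Eq. (1) is easy to check numerically. A minimal sketch with NumPy (the array library is an assumption for illustration, not part of the paper):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dyadic (outer) product: (ab)_ij = a_i * b_j, a 3x3 second-order tensor
ab = np.outer(a, b)

# Each component follows Eq. (1)
assert ab.shape == (3, 3)
assert ab[0, 2] == a[0] * b[2]
```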
Unlike the inner product or contraction, symbolized by a dot, and the double inner product, symbolized by a colon, no specific symbol is employed for the dyadic product.

Since an arbitrary vector can be expressed as a linear combination of the unit vector basis $\{\hat{e}_i\}_{i=1,2,3}$, an arbitrary dyadic can be written in components from the corresponding unit dyads $\{\hat{e}_i \hat{e}_j\}_{i,j=1,2,3}$ as follows:

$$\vec{a}\,\vec{b} = (\vec{a}\,\vec{b})_{ij}\, \hat{e}_i \hat{e}_j \qquad (2)$$

where the summation convention is in effect for repeated indices [1]. If the unit vectors $\hat{e}_i$ are mutually orthogonal, a special dyadic called the identical dyadic arises:

$$\mathbf{1} = \hat{e}_i \hat{e}_i \qquad (3)$$

where the summation convention is again invoked. This quantity is the second-order identity tensor of three-dimensional space.

The inner product can be applied between vectors and second-order tensors as well, like a matrix product, keeping its own properties. Thus, dyadics hold the following properties (derivation not shown):

- $\vec{a}\,\vec{b} = (\vec{b}\,\vec{a})^t$
- $(\vec{c}\,\vec{a}) \cdot \vec{b} = (\vec{a} \cdot \vec{b})\,\vec{c}$
- $\vec{c} \cdot (\vec{a}\,\vec{b}) = (\vec{c} \cdot \vec{a})\,\vec{b}$
- $(\vec{a}\,\vec{b}) \cdot (\vec{c}\,\vec{d}) = (\vec{b} \cdot \vec{c})\,\vec{a}\,\vec{d}$

where the superscript $t$ stands for the matrix transpose. Note that even though the vector transpose is represented by a 1×3 matrix instead of the conventional 3×1 matrix, the vector after transposition remains identical, i.e. $\vec{a} \equiv (\vec{a})^t$. By default, vectors on the left-hand side of an inner product are transposed.

Although it is not used in this paper, the trace of $\vec{a}\,\vec{b}$, i.e. the sum of its diagonal components, is indeed the corresponding dot product:

$$\mathrm{trace}(\vec{a}\,\vec{b}) = \vec{a} \cdot \vec{b}$$

In fact, the trace of $\vec{a}\,\vec{b}$ can be expressed in terms of the double inner product as $\vec{a}\,\vec{b} : \mathbf{1}$.
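The dyadic properties listed above can be spot-checked numerically. A sketch assuming NumPy and randomly chosen vectors (the library and the random test vectors are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))  # four random 3-vectors

ab = np.outer(a, b)

# ab = (ba)^t
assert np.allclose(ab, np.outer(b, a).T)
# c . (ab) = (c . a) b
assert np.allclose(c @ ab, (c @ a) * b)
# (ab) . (cd) = (b . c) ad
assert np.allclose(ab @ np.outer(c, d), (b @ c) * np.outer(a, d))
# trace(ab) = a . b
assert np.isclose(np.trace(ab), a @ b)
```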
3. Skew-symmetric tensor associated to a vector

In vector algebra [8], the skew-symmetric tensor $\Omega_a$ of rank two associated to a vector $\vec{a}$ is defined by:

$$(\Omega_a)_{ij} \equiv -\varepsilon_{ijk}\, a_k \qquad (4)$$

where $\varepsilon_{ijk}$ stands for the Levi-Civita symbol [7], also referred to as the $\varepsilon$-permutation symbol, and where all indices have the range 1, 2, 3. The index $k$ is the dummy summation index according to the summation convention. The epsilon symbol $\varepsilon_{ijk}$ obeys the following rules:

- $\varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} = 1$ (cyclic rule)
- $\varepsilon_{123} = -\varepsilon_{213} = -\varepsilon_{132}$ (anti-cyclic rule)
- $\varepsilon_{ijk} = 0$ otherwise (any repeated index)

There is an additional relation known as the epsilon-delta identity:

$$\varepsilon_{mni}\,\varepsilon_{ijk} = \delta_{mj}\delta_{nk} - \delta_{mk}\delta_{nj} \qquad (5)$$

where $\delta_{ij}$ is the Kronecker delta (the $ij$-component of the second-order identity tensor) and the summation is performed over the index $i$. Indeed, the epsilon symbol and the Kronecker delta are both numerical tensors, which have fixed components in every coordinate system. Just as the identity tensor $\mathbf{1}$ can be generated from the summation of the unit dyads (3) built from any orthonormal vector basis $\{\hat{e}_i\}_{i=1,2,3}$, the epsilon symbol can accordingly be found from the following scalar triple product:

$$\varepsilon_{ijk} = (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k$$

where the cross product is symbolized by $\times$. From the anti-cyclic rule of $\varepsilon_{ijk}$ and the definition (4), it is straightforwardly shown that the tensor $\Omega_a$ is anti-symmetric:

$$\Omega_a^t = -\Omega_a \qquad (6)$$

and this can be readily illustrated from the matrix form of $\Omega_a$:

$$\Omega_a = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}$$

Also, $\vec{a}$ is called the (Hodge) dual vector of the skew-symmetric tensor $\Omega_a$.
Hence, for instance, the magnetic field tensor in Electrodynamics [9] is indeed the skew-symmetric tensor associated to the magnetic field vector.

The Levi-Civita symbol also appears in the definition of the cross product of $\vec{a}$ and $\vec{b}$ [1]:

$$(\vec{a} \times \vec{b})_i \equiv \varepsilon_{ijk}\, a_j b_k \qquad (7)$$

Then, from the definition (4) and the anti-cyclic rule of the epsilon symbol, the cross product can be rewritten in terms of the corresponding skew-symmetric tensor (4) as:

$$(\vec{a} \times \vec{b})_i = (\Omega_a)_{ik}\, b_k \qquad (8)$$

or, in vector notation, as:

$$\vec{a} \times \vec{b} = \Omega_a \cdot \vec{b} \qquad (9)$$

A cross product typically returns a (true) vector or polar vector. More exactly, the cross product (9) is a polar vector if either $\vec{a}$ or $\vec{b}$ (but not both) is a pseudovector; otherwise, $\vec{a} \times \vec{b}$ is a pseudovector [10]. It is worth mentioning that the tensor $\Omega_a$ will be a relative tensor or pseudotensor if the vector $\vec{a}$ is axial, and it will be an absolute tensor if the vector $\vec{a}$ is polar.

In addition to the skew-symmetry (6), the tensor $\Omega_a$ holds the following properties (derivation not shown):

- $\Omega_{\alpha \vec{a}} = \alpha\, \Omega_a$
- $\Omega_{\vec{a}+\vec{b}} = \Omega_a + \Omega_b$
- $\Omega_a \cdot \vec{a} = \vec{0}$
- $\Omega_b \cdot \vec{a} = \vec{b} \cdot \Omega_a$
- $\Omega_a \cdot \Omega_b = \vec{b}\,\vec{a} - (\vec{a} \cdot \vec{b})\,\mathbf{1}$
- $\Omega_{\vec{a} \times \vec{b}} = \vec{b}\,\vec{a} - \vec{a}\,\vec{b} = \Omega_a \cdot \Omega_b - \Omega_b \cdot \Omega_a$

where $\alpha$ is a scalar. These properties can be straightforwardly proved using index notation and the above-mentioned rules of the Levi-Civita symbol. In particular, the epsilon-delta identity (5) leads to the last two properties, which are very helpful for the derivations compiled in section 4. These other properties are also very useful:

- $\Omega_{-\vec{a}} = \Omega_a^t$
- $\Omega_a \cdot \Omega_b = (\Omega_b \cdot \Omega_a)^t$
- $\Omega_a \cdot \vec{b} = -\vec{b} \cdot \Omega_a$
- $\Omega_{\hat{e}}^2 = \hat{e}\,\hat{e} - \mathbf{1}$
- $\Omega_{\hat{e}}^3 = -\Omega_{\hat{e}}$

where $\hat{e}$ is a vector of unit length. Due to Eq. (3), $-\Omega_{\hat{e}}^2 = \mathbf{1} - \hat{e}\,\hat{e}$ is the second-order identity tensor of the two-dimensional space (plane) with unit normal $\hat{e}$.
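The definition (4) and the key products of skew tensors can be verified numerically. A sketch assuming NumPy, where `skew` is a hypothetical helper implementing the matrix form of $\Omega_a$ given above:

```python
import numpy as np

def skew(a):
    """Skew-symmetric tensor with (Omega_a)_ij = -eps_ijk a_k."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(1)
a, b = rng.standard_normal((2, 3))
Oa, Ob = skew(a), skew(b)

# Anti-symmetry, Eq. (6)
assert np.allclose(Oa.T, -Oa)
# a x b = Omega_a . b, Eq. (9)
assert np.allclose(np.cross(a, b), Oa @ b)
# Omega_a . Omega_b = ba - (a . b) 1
assert np.allclose(Oa @ Ob, np.outer(b, a) - (a @ b) * np.eye(3))
# Omega_{a x b} = Omega_a . Omega_b - Omega_b . Omega_a
assert np.allclose(skew(np.cross(a, b)), Oa @ Ob - Ob @ Oa)
```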
4. Standard vector identities

Next, the most useful vector identities are demonstrated from the corresponding dyadics (1) and skew-symmetric tensors (4). The above-listed properties, the associative rule of the matrix product, and the matrix transposition rules are used accordingly.

- Cyclic permutation of the scalar triple product:

$$(\vec{a} \times \vec{b}) \cdot \vec{c} = (\vec{a} \cdot \Omega_b) \cdot \vec{c} = \vec{a} \cdot (\Omega_b \cdot \vec{c}) = \vec{a} \cdot (\vec{b} \times \vec{c}) = (\vec{b} \cdot \Omega_{-a}) \cdot \vec{c} = \vec{b} \cdot (\Omega_{-a} \cdot \vec{c}) = \vec{b} \cdot (\vec{c} \times \vec{a}) \qquad (10)$$

From these identities, the orthogonality between $\vec{a} \times \vec{b}$ and each operating vector is readily illustrated:

$$(\vec{a} \times \vec{b}) \cdot \vec{a} = 0 \qquad (11)$$

- Vector triple product expansion (or Lagrange's formula):

$$\vec{a} \times (\vec{b} \times \vec{c}) = \vec{a} \cdot \Omega_{\vec{b} \times \vec{c}} = \vec{a} \cdot (\vec{c}\,\vec{b} - \vec{b}\,\vec{c}) = (\vec{a} \cdot \vec{c})\,\vec{b} - (\vec{a} \cdot \vec{b})\,\vec{c} \qquad (12)$$

also known as the BAC-CAB rule, since $\vec{a} \times (\vec{b} \times \vec{c}) = \vec{b}\,(\vec{a} \cdot \vec{c}) - \vec{c}\,(\vec{a} \cdot \vec{b})$. Using this identity, any vector can be expressed as a linear combination of two mutually perpendicular vectors according to an arbitrary direction $\hat{e}$:

$$\vec{a} = (\vec{a} \cdot \hat{e})\,\hat{e} + \hat{e} \times (\vec{a} \times \hat{e}) \qquad (13)$$