The Generalized Chain Rule

Some vector algebra and the generalised chain rule
Data Assimilation Research Centre, University of Reading, UK

As we shall see in these notes, the chain rule can be applied to vector as well as scalar derivatives. We will derive the relevant expressions useful in the theory of variational data assimilation and inverse modelling. The results lead us to the concept of adjoint variables and adjoint operators. In section 1 we review the standard notation used in linear algebra.

1. Vectors and vector derivatives

As is usual notation, scalars and vectors are distinguished from each other by writing vectors in bold. A vector is, by convention, a column vector (here with $n$ elements),

$\mathbf{x} = (x_1, x_2, \ldots, x_n)^{\mathrm{T}}$,   (1.1)

and a vector derivative is a row operator,

$\dfrac{\partial}{\partial \mathbf{x}} = \left( \dfrac{\partial}{\partial x_1}, \dfrac{\partial}{\partial x_2}, \ldots, \dfrac{\partial}{\partial x_n} \right)$.   (1.2)

The "nabla" version of the derivative, $\nabla$, contains the same elements as Eq. (1.2), but is a column vector by convention.

Matrices and the transpose instruction

A matrix, $\mathbf{A}$, contains element $a_{ij}$ in row $i$ and column $j$. The transpose symbol, written as a superscript $\mathrm{T}$ after an object, makes rows into columns and vice-versa. Since a vector is a special case of a matrix with either just one row or one column (depending on whether the vector is a row or column vector), the transpose instruction makes row vectors into column vectors, and vice-versa.

The combination $\mathbf{x}^{\mathrm{T}} \mathbf{y}$ (an inner product) is a scalar. It is found by the summation $\mathbf{x}^{\mathrm{T}} \mathbf{y} = \sum_{i=1}^{n} x_i y_i$ ($\mathbf{x}$ and $\mathbf{y}$ must be vectors of the same number of elements, $n$).

The outer product is written $\mathbf{x} \mathbf{y}^{\mathrm{T}}$ and yields a matrix, with element $x_i y_j$ in row $i$ and column $j$. The number of elements of $\mathbf{x}$ and $\mathbf{y}$ need not be the same for the outer product. For $\mathbf{x}$ with $n$ elements and $\mathbf{y}$ with $m$ elements, the outer product as defined above will be an $n \times m$ matrix.

Matrix operators

A matrix acts on one vector to give another vector. The operation $\mathbf{y} = \mathbf{A} \mathbf{x}$ is valid if the number of rows of $\mathbf{A}$ is the same as the number of elements in $\mathbf{y}$, and the number of columns of $\mathbf{A}$ is the same as the number of elements in $\mathbf{x}$. This action is like performing many inner products, one for each row of $\mathbf{A}$. In this respect, the matrix operator is sometimes used as a transformation (or change of basis) where each row of $\mathbf{A}$ represents the row vector for a member of the new basis. Generally, the matrix elements can be thought of as the partial derivatives, $a_{ij} = \partial y_i / \partial x_j$.

The inner and outer products and matrix operators applied with vectors and vector derivatives can be used in innovative ways to write compact multi-variable expressions. Such expressions are used in data assimilation.
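As a quick numerical illustration of the inner product, the outer product, and a matrix acting on a vector (the arrays `x`, `y`, and `A` below are made-up examples, not values from these notes):

```python
import numpy as np

# Illustrative vectors (hypothetical values, chosen only for this sketch)
x = np.array([1.0, 2.0, 3.0])            # column vector with n = 3 elements
y = np.array([4.0, 5.0, 6.0])

# Inner product x^T y: a scalar, sum_i x_i * y_i
inner = x @ y                            # 1*4 + 2*5 + 3*6 = 32

# Outer product x y^T: an n x m matrix with element x_i * y_j in row i, col j
outer = np.outer(x, y)                   # shape (3, 3)

# A matrix acting on a vector: one inner product per row of A
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])          # 2 x 3: maps 3-vectors to 2-vectors
z = A @ x                                # z_i = sum_j a_ij * x_j

print(inner)        # 32.0
print(outer.shape)  # (3, 3)
print(z)            # [7. 5.]
```

Note how the shape rules from the text appear directly: the inner product requires equal lengths, the outer product does not, and `A @ x` requires the column count of `A` to match the length of `x`.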
2. Chain rule for scalar functions (first derivative)

Consider a scalar $f$ that is a function of the elements of $\mathbf{x}$. Its derivative with respect to the vector $\mathbf{x}$ is the row vector,

$\dfrac{\partial f}{\partial \mathbf{x}} = \left( \dfrac{\partial f}{\partial x_1}, \dfrac{\partial f}{\partial x_2}, \ldots, \dfrac{\partial f}{\partial x_n} \right)$.   (2.1)

An important question is: what is $\partial f / \partial \mathbf{x}'$ in the case that the two sets of variables $\mathbf{x}$ and $\mathbf{x}'$ are related via the transformation

$\mathbf{x} = \mathbf{A} \mathbf{x}'$?   (2.2)

The chain rule gives expressions for the derivative of $f$ with respect to each component $x'_i$ explicitly,

$\dfrac{\partial f}{\partial x'_i} = \sum_{j=1}^{n} \dfrac{\partial f}{\partial x_j} \dfrac{\partial x_j}{\partial x'_i}$.   (2.3)

Expressions for derivatives with respect to each component of $\mathbf{x}'$ can be assembled into a vector. The matrix $\partial \mathbf{x} / \partial \mathbf{x}'$ is sometimes referred to as a Jacobian, and has matrix elements $\partial x_j / \partial x'_i$ (as in Eq. (2.3)),

$\left[ \dfrac{\partial \mathbf{x}}{\partial \mathbf{x}'} \right]_{ji} = \dfrac{\partial x_j}{\partial x'_i} = a_{ji}$.   (2.4)

It can be checked that the following, when expanded using Eqs. (2.3) and (2.4), reproduces the component-wise result,

$\dfrac{\partial f}{\partial \mathbf{x}'} = \dfrac{\partial f}{\partial \mathbf{x}} \dfrac{\partial \mathbf{x}}{\partial \mathbf{x}'} = \dfrac{\partial f}{\partial \mathbf{x}} \mathbf{A}$.   (2.5)

This is the generalised chain rule for vector derivatives. It has the same appearance as the chain rule for single variable functions (now with vectors in the place of scalars) and is a convenient way of remembering the multi-variable result.

A column derivative with respect to a vector, such as $\nabla f$, is often called an adjoint variable. Transposing Eq. (2.5) gives $\nabla' f = \mathbf{A}^{\mathrm{T}} \nabla f$, and the operator $\mathbf{A}^{\mathrm{T}}$ (as distinct from the forward operator $\mathbf{A}$, as defined in Eq. (2.2)) is similarly called the adjoint operator. It is important to note that the adjoint of an operator is not generally its inverse: while $\mathbf{A}$ transmits information from $\mathbf{x}'$ to $\mathbf{x}$ (Eq. (2.2)), $\mathbf{A}^{\mathrm{T}}$ transmits information in the reverse direction, but for adjoint variables.
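The generalised chain rule and the adjoint relation can be checked numerically. The sketch below assumes the concrete scalar function $f(\mathbf{x}) = \mathbf{x}^{\mathrm{T}}\mathbf{x}$ (so its gradient is $2\mathbf{x}^{\mathrm{T}}$) and a random linear transformation; both choices are illustrative, not from the notes:

```python
import numpy as np

# Assumed example: f(x) = x^T x, so df/dx = 2 x^T (a row vector),
# with a linear transformation x = A x' as in Eq. (2.2).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # forward operator (illustrative)
xp = rng.standard_normal(3)              # the primed variables x'
x = A @ xp                               # forward operator: x' -> x

grad_x = 2.0 * x                         # df/dx, treated as a row vector
grad_xp_chain = grad_x @ A               # generalised chain rule, Eq. (2.5)

# Direct differentiation of f(x') = (A x')^T (A x') gives 2 A^T A x'
grad_xp_direct = 2.0 * A.T @ A @ xp
print(np.allclose(grad_xp_chain, grad_xp_direct))   # True

# Adjoint form: the column (nabla) gradient transforms with A^T,
# moving information in the reverse direction to A.
nabla_xp = A.T @ grad_x                  # nabla' f = A^T nabla f
print(np.allclose(nabla_xp, grad_xp_direct))        # True
```

The same row vector multiplied on the right by $\mathbf{A}$, or the same column vector multiplied on the left by $\mathbf{A}^{\mathrm{T}}$, gives the primed-space gradient, which is exactly the row/column distinction made in section 1.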
3. Chain rule for scalar functions (second derivative)

The second derivative with respect to the original variable, $\mathbf{x}$, can be written in matrix form as

$\dfrac{\partial^2 f}{\partial \mathbf{x}^2} = \nabla \dfrac{\partial f}{\partial \mathbf{x}}$,   (3.1)

a matrix with element $\partial^2 f / \partial x_i \partial x_j$ in row $i$ and column $j$. Although the right hand side of Eq. (3.1) resembles an inner product (scalar), the "row" property of derivative vectors (mentioned in section 1) means that this is actually an outer product.

Again imposing the transformation Eq. (2.2), the chain rule result Eq. (2.5) can be used to rewrite the second derivative matrix in terms of the new, primed variables,

$\dfrac{\partial^2 f}{\partial \mathbf{x}'^2} = \mathbf{A}^{\mathrm{T}} \dfrac{\partial^2 f}{\partial \mathbf{x}^2} \mathbf{A}$.   (3.5)

4. Chain rule for vector functions (first derivative)

If the function itself is a vector, $\mathbf{f}$, then the derivative is a matrix,

$\dfrac{\partial \mathbf{f}}{\partial \mathbf{x}}$, with element $\partial f_i / \partial x_j$ in row $i$ and column $j$,   (4.1)

where the number of components of $\mathbf{f}$ ($m$) is not necessarily the same as the number of components of $\mathbf{x}$ ($n$). Making the same transformation of the independent variable as in section 2, Eq. (2.2), allows one to write the derivative in terms of the primed variables as

$\dfrac{\partial \mathbf{f}}{\partial \mathbf{x}'} = \dfrac{\partial \mathbf{f}}{\partial \mathbf{x}} \mathbf{A}$.   (4.2)

All of the results, Eqs. (2.5), (3.5) and (4.2), follow from only one explicit use of the chain rule (in section 2).
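The second-derivative and vector-function results can also be verified numerically. The sketch below assumes $f(\mathbf{x}) = \mathbf{x}^{\mathrm{T}}\mathbf{B}\mathbf{x}$ with $\mathbf{B}$ symmetric (whose Hessian is $2\mathbf{B}$) and the linear vector function $\mathbf{g}(\mathbf{x}) = \mathbf{C}\mathbf{x}$ (whose derivative matrix is $\mathbf{C}$); the matrices `A`, `B`, and `C` are illustrative, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # transformation x = A x' (assumed)
B = rng.standard_normal((3, 3))
B = B + B.T                              # symmetrise B
hess_x = 2.0 * B                         # d2f/dx2 for f(x) = x^T B x

# Eq. (3.5): the second derivative matrix in the primed variables
hess_xp = A.T @ hess_x @ A

# Direct: f(x') = x'^T (A^T B A) x', whose Hessian is 2 A^T B A
print(np.allclose(hess_xp, 2.0 * A.T @ B @ A))   # True

# Eq. (4.2): for the vector function g(x) = C x, dg/dx' = (dg/dx) A = C A,
# an m x n matrix (here m = 2 components, n = 3 variables)
C = rng.standard_normal((2, 3))
jac_xp = C @ A
print(jac_xp.shape)                               # (2, 3)
```

Both checks use only Eq. (2.5) applied row by row, mirroring the closing remark that all three results follow from a single explicit use of the chain rule.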