Term
Augmented Matrix |
Definition
the coefficient matrix with an added column containing the constants from the right sides of the equations |
|
|
Term
Elementary Row Operations |
|
Definition
1. (Replacement) Replace one row by the sum of itself and a multiple of another row
2. (Interchange) Interchange two rows
3. (Scaling) Multiply all entries in a row by a nonzero constant |
|
|
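A quick numeric sketch of the three row operations (illustrative values, not from the original cards), using NumPy:

```python
import numpy as np

# Augmented matrix for a small made-up system
M = np.array([[1., 2., 3.],
              [4., 5., 6.]])

# Replacement: R2 <- R2 + (-4)*R1
M[1] = M[1] + (-4) * M[0]

# Interchange: swap R1 and R2 (fancy indexing copies, so this is safe)
M[[0, 1]] = M[[1, 0]]

# Scaling: multiply a row by a nonzero constant
M[1] = 2 * M[1]
```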
Term
Matrices are row equivalent if... |
|
Definition
there is a sequence of elementary row operations that transforms one matrix into the other |
|
|
Term
If the augmented matrices of two linear systems are row equivalent, then the two systems have the same solution set. |
|
Definition
|
|
Term
A matrix is in echelon form if... |
Definition
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zero. |
|
|
Term
A matrix is in reduced echelon form if... |
Definition
1. The leading entry in each nonzero row is 1.
2. Each leading 1 is the only nonzero entry in its column.
(These are in addition to the echelon form conditions)
Each matrix is row equivalent to one and only one reduced echelon matrix |
|
|
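The unique reduced echelon form can be checked numerically with SymPy's `rref` (a sketch with made-up entries):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 7]])

# rref() returns the unique reduced echelon form of A
# together with a tuple of the pivot column indices.
R, pivots = A.rref()
```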
Term
Pivot Position |
Definition
A location in Matrix A that corresponds to a leading 1 in the reduced echelon form of A. |
|
|
Term
Pivot Column |
Definition
A column of matrix A that contains a pivot position |
|
|
Term
Pivot |
Definition
a nonzero number in a pivot position that is used as needed to create zeros via row operations |
|
|
Term
Existence and Uniqueness Theorem
A linear system is consistent iff the rightmost column of the augmented matrix is not a pivot column (iff an echelon form of the augmented matrix has no row of the form [0 0 ... 0 b] with b nonzero) |
|
Definition
|
|
Term
R^n |
Definition
the set of all vectors with n entries |
|
|
Term
Asking whether a vector b can be written as a linear combo of the columns of a matrix A, or whether b is in Span {v1,..., vp}, is equivalent to asking whether the linear system w/ augmented matrix [A b] has a solution |
|
Definition
|
|
Term
Two vectors in R^n are equal iff their corresponding entries are equal |
|
Definition
|
|
Term
Because we can identify a geometric point (a,b) with the column vector [a b] we may regard R^2 as the set of all points in the plane |
|
Definition
|
|
Term
Parallelogram Rule for Addition
If u and v in R^2 are represented as points in the plane, then u + v corresponds to the fourth vertex of the parallelogram whose other vertices are u, 0, and v |
|
Definition
|
|
Term
The set of all scalar multiples of one fixed nonzero vector is a line through the origin |
|
Definition
|
|
Term
If n is a positive integer, then R^n is the collection of all lists of n real numbers, usually written in the form of n x 1 column matrices (All vectors with n entries) |
|
Definition
|
|
Term
Algebraic Properties of R^n
For all u, v, w in R^n and all scalars c and d
i. u + v = v + u
ii. (u +v) + w = u + (v + w)
iii. u + 0 = 0 + u = u
iv. u + (-u) = -u + u = 0, where -u denotes (-1)u
v. c(u + v) = cu + cv
vi. (c + d)u = cu + du
vii. c(du) = (cd)u
viii. 1u = u |
|
Definition
|
|
Term
Given vectors v1, v2,...,vp in R^n and scalars c1, c2,...,cp, the vector y defined as y = c1v1 + c2v2 + ... + cpvp is called a linear combination of v1,...,vp with weights c1,...,cp |
|
Definition
|
|
Term
A vector equation x1a1 + x2a2 + ... + xnan = b has the same solution set as the linear system whose augmented matrix is [a1 a2 ... an b]
b can be generated by a linear combo of a1...an iff there exists a solution to the linear system corresponding to [a1 a2 ...an b] |
|
Definition
|
|
Term
If v1,...,vp are in R^n, then the set of all linear combos of v1,...,vp is denoted by Span {v1,...,vp} and is called the subset of R^n spanned by v1,...,vp
Span {v1,...,vp} is the collection of all vectors that can be written in the form c1v1 + c2v2 + ... + cpvp with c1...cp scalars |
|
Definition
|
|
Term
Asking whether a vector b is in Span {v1...vp} is the same as asking whether x1v1 + x2v2 + ... + xpvp = b has a solution, or whether the linear system w/ augmented matrix [v1...vp b] has a solution |
|
Definition
|
|
Term
If A is a mxn matrix w/ columns a1,...,an and if x is in R^n, then the product of A and x, denoted by Ax, is the linear combo of the columns of A using the corresponding entries in x as weights
ie. Ax = [a1 a2 ... an][x1 ... xn]^T = x1a1 + x2a2 + ... + xnan (where [x1 ... xn]^T is a column vector)
Ax is only defined if the number of columns of A equals the number of entries in x |
|
Definition
|
|
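The column-combination view of Ax can be verified directly (made-up 3x2 example):

```python
import numpy as np

A = np.array([[1., 0.],
              [2., 3.],
              [0., 4.]])   # 3x2 matrix with columns a1, a2
x = np.array([2., 5.])     # x must have as many entries as A has columns

# Ax computed directly...
direct = A @ x
# ...equals the linear combination x1*a1 + x2*a2 of the columns
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
```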
Term
Ax = b is called the matrix equation, as opposed to the vector equation x1[vector] + x2 [vector] = [vector] |
|
Definition
|
|
Term
Theorem 4 (1.4)
If A is an mxn matrix, with columns a1...an, and if b is in R^m, the matrix equation Ax = b has the same solution set as the vector equation x1a1 + x2a2 + ... + xnan = b which also has the same solution set of the system of linear equations whose augmented matrix is [a1 a2 ... an b] |
|
Definition
|
|
Term
The equation Ax = b has a solution iff b is a linear combo of the columns of A |
|
Definition
|
|
Term
Theorem 4 (1.4)
Let A be an mxn coefficient matrix. The following statements are either all true or all false.
1. For each b in R^m, the equation Ax = b has a solution.
2. Each b in R^m is a linear combo of the columns of A.
3. The columns of A span R^m.
4. A has a pivot position in every row. |
|
Definition
|
|
Term
A set of vectors {v1,...,vp} in R^m spans R^m if every vector in R^m is a linear combo of v1,...,vp (If Span {v1,...,vp} = R^m) |
|
Definition
|
|
Term
Identity Matrix
a matrix with 1's on the diagonal and 0's elsewhere |
|
Definition
|
|
Term
Theorem 5 (1.4)
If A is a mxn matrix, u and v are vectors in R^n, and c is a scalar, then:
a. A(u + v) = Au + Av
b. A(cu) = c(Au) |
|
Definition
|
|
Term
A system of linear equations is said to be homogeneous if...
it can be written in the form Ax = 0, where A is a mxn matrix and 0 is the zero vector in R^m. |
|
Definition
|
|
Term
Ax = 0 always has at least one solution -> x = 0.
The zero vector is called the trivial solution |
|
Definition
|
|
Term
Nontrivial Solution |
Definition
A nonzero vector that satisfies Ax = 0 |
|
|
Term
The homogeneous system Ax = 0 has a nontrivial solution iff the equation has at least one free variable. |
|
Definition
|
|
Term
The solution set of a homogeneous equation Ax = 0 can always be expressed as Span {v1,...,vp} for suitable vectors v1,...,vp (v1,...,vp are the vectors from the parametric vector form of the solution)
If the only solution is the zero vector, then the solution set is Span {0}
If the equation Ax = 0 has only one free variable, the solution set is a line through the origin |
|
Definition
|
|
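SymPy's `nullspace` returns exactly such spanning vectors; any combo of them solves Ax = 0 (made-up rank-1 matrix, so two free variables):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # rank 1, so two free variables

# nullspace() returns basis vectors v1, ..., vp with
# Nul A = Span {v1, ..., vp}
basis = A.nullspace()

# any linear combination of the basis vectors solves Ax = 0
v = 3 * basis[0] - 2 * basis[1]
```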
Term
Parametric Vector Equation
x = su + tv (s,t in R and u and v as vectors) |
|
Definition
|
|
Term
Theorem 6 (1.5)
Suppose the equation Ax = b is consistent for some given b, and let p be a solution. Then the solution set of Ax = b is the set of all vectors of the form w = p + vh, where vh is any solution of the homogeneous equation Ax = 0.
Only applicable if Ax = b is consistent, i.e. has at least one solution p. If not, the solution set is empty |
|
Definition
|
|
Term
An indexed set of vectors {v1,...,vp} in R^n is said to be linearly independent if the vector equation x1v1 + x2v2 + ... + xpvp = 0 has only the trivial solution
linearly independent = only the trivial solution |
|
Definition
|
|
Term
The set {v1,...,vp} is said to be linearly dependent if there exist weights c1,...,cp, not all zero, such that c1v1 + c2v2 + ...cpvp = 0
linearly dependent = nontrivial solution/free variable
c1v1 + c2v2 + ... + cpvp = 0 is called a linear dependence relation among v1,...,vp when the weights are not all zero |
|
Definition
|
|
Term
The columns of the matrix A are linearly independent iff the equation Ax = 0 has ONLY the trivial solution
|
|
Definition
|
|
Term
A set containing only one vector v is linearly independent iff v is not the zero vector |
|
Definition
|
|
Term
A set of two vectors {v1, v2} is linearly dependent if at least one of the vectors is a multiple of the other. (In geometric terms, if the vectors lie on the same line through the origin)
The set is linearly independent iff neither of the vectors is a multiple of the other |
|
Definition
|
|
Term
Theorem 7: Characterization of Linearly Dependent Sets
An indexed set (A set) S = {v1,...,vp} of two or more vectors is linearly dependent iff at least one of the vectors in S is a linear combination of the others.
This does not say that every vector in a linearly dependent set is a linear combo of the others. |
|
Definition
|
|
Term
Theorem 8 (1.7)
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1,...,vp} in R^n is linearly dependent if p>n
ie. if the matrix [v1 ... vp] has more columns than rows |
|
Definition
|
|
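Theorem 8 can be checked with a rank computation (made-up vectors): with more columns than rows, the rank must fall short of the number of vectors, forcing a nontrivial solution of Ax = 0.

```python
import numpy as np

# Three vectors in R^2: p = 3 vectors but only n = 2 entries each,
# so the set must be linearly dependent (Theorem 8).
V = np.array([[1., 0., 2.],
              [0., 1., 3.]])   # vectors are the columns

# rank < number of columns means Ax = 0 has a free variable
rank = np.linalg.matrix_rank(V)
dependent = rank < V.shape[1]
```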
Term
Theorem 9 (1.7)
If a set S = {v1,...,vp} in R^n contains the zero vector, then the set is linearly dependent |
|
Definition
|
|
Term
In terms of linear transformations:
Solving Ax = b is the same as finding all the vectors x in R^n that are transformed into the vector b in R^m under the "action" of multiplication by A |
|
Definition
|
|
Term
A transformation/function/mapping T from R^n to R^m is a rule that assigns to each vector x in R^n a vector T(x) in R^m. The set R^n is called the domain of T, and R^m is called the codomain of T.
T : R^n -> R^m indicates that the domain is R^n and the codomain is R^m
For all x in R^n, the vector T(x) in R^m is called the image of x (under the action of T)
The set of all images T(x) is called the range of T. |
|
Definition
|
|
Term
The matrix transformation x |-> Ax is for each x in R^n, T(x) = Ax, where A is a mxn matrix.
The domain of T is R^n because A has n columns, and the codomain of T is R^m because each column of A has m entries.
The range of T is the set of all linear combinations of the columns of A b/c each T(x) is of the form Ax. |
|
Definition
|
|
Term
A transformation (or mapping) of T is linear if:
1. T(u + v) = T(u) + T(v) for all u, v in the domain of T
2. T(cu) = cT(u) for all u and all scalars c
Every matrix transformation is a linear transformation |
|
Definition
|
|
Term
If T is a linear transformation, then...
1. T(0) = 0
2. T(cu + dv) = cT(u) + dT(v) for all vectors u,v in the domain of T and all scalars c,d. |
|
Definition
|
|
Term
Let T: R^n -> R^m be a linear transformation. Then there exists a unique matrix A such that
T(x) = Ax for all x in R^n
In fact, A is the m x n matrix whose jth column is the vector T(ej), where ej is the jth column of the identity matrix in R^n
A = [T(e1) T(e2) ... T(en)] : standard matrix for the linear transformation T |
|
Definition
|
|
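The standard matrix construction A = [T(e1) T(e2) ... T(en)] can be carried out concretely. As a sketch, take a hypothetical linear T (rotation of R^2 by 90 degrees, not from the cards):

```python
import numpy as np

# A hypothetical linear transformation T : R^2 -> R^2
def T(x):
    return np.array([-x[1], x[0]])   # rotate 90 degrees counterclockwise

# Build the standard matrix column by column: A = [T(e1) T(e2)]
e1, e2 = np.array([1., 0.]), np.array([0., 1.])
A = np.column_stack([T(e1), T(e2)])

# Now T(x) = Ax for every x
x = np.array([3., 4.])
```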
Term
A mapping T : R^n -> R^m is said to be onto R^m if each b in R^m is the image of at least one x in R^n |
|
Definition
|
|
Term
A mapping T : R^n -> R^m is said to be one-to-one if each b in R^m is the image of at most one x in R^n |
|
Definition
|
|
Term
Theorem 11 (1.9)
Let T : R^n -> R^m be a linear transformation. Then T is one-to-one iff the equation T(x) = 0 has only the trivial solution. |
|
Definition
|
|
Term
Theorem 12 (1.9)
Let T : R^n -> R^m be a linear transformation and let A be the standard matrix for T. Then:
a. T maps R^n onto R^m iff the columns of A span R^m
b. T is one-to-one iff the columns of A are linearly independent. |
|
Definition
|
|
Term
A diagonal matrix is a square matrix whose non-diagonal entries are zero. |
|
Definition
|
|
Term
Two matrices are equal if they have the same size and if their corresponding columns are equal. |
|
Definition
|
|
Term
Theorem 1 (2.1)
a. A + B = B + A -> only defined if A and B are the same size
b. (A + B) + C = A + (B + C)
c. A + 0 = A
d. r(A + B) = rA + rB
e. (r + s)A = rA + sA
f. r(sA) = (rs)A
All verified by showing that the matrix on the left side has the same size as the matrix on the right and that corresponding columns are equal. |
|
Definition
|
|
Term
If A is a mxn matrix, and B is an nxp matrix with columns b1...bp, then the product AB is the mxp matrix whose columns are Ab1,...,Abp. That is,
AB = A[b1 b2 ... bp] = [Ab1 Ab2 ...Abp]
This makes A(Bx) = (AB)x for all x in R^p |
|
Definition
|
|
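The column definition of AB (each column of AB is A times the corresponding column of B) checks out numerically (made-up 2x2 and 2x3 matrices):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[5., 6., 7.],
              [8., 9., 0.]])   # columns b1, b2, b3

AB = A @ B
# rebuild AB one column at a time as A @ b_j
cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
```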
Term
Each column of AB is a linear combo of the columns of A using weights from the corresponding column of B. |
|
Definition
|
|
Term
Theorem 2 (2.1)
Let A be a mxn matrix and let B and C have sizes for which the indicated sums and products are defined.
a. A(BC) = (AB)C
b. A(B + C) = AB + AC
c. (B + C)A = BA + CA
d. r(AB) = (rA)B = A(rB) for any scalar r
e. ImA = A = AIn
In general,
1. AB does not equal BA
2. AB = AC does not imply B = C
3. AB = 0 does not imply A = 0 or B = 0. |
|
Definition
|
|
Term
Given a mxn matrix A, the transpose of A is the nxm matrix, denoted by A^T, whose columns are formed from the corresponding rows of A |
|
Definition
|
|
Term
Theorem 3 (2.1)
Let A and B denote matrices whose sizes are appropriate for the following sums and products.
a. (A^T)^T = A
b. (A + B)^T = A^T + B^T
c. For any scalar r, (rA)^T = rA^T
d. (AB)^T = B^T x A^T |
|
Definition
|
|
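The reversal in (AB)^T = B^T A^T is easy to confirm on random matrices (sizes are made up; note A^T B^T would not even be defined here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

lhs = (A @ B).T      # transpose of the product
rhs = B.T @ A.T      # product of transposes in reverse order
```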
Term
An nxn matrix A is said to be invertible if there is an nxn matrix C such that CA = I and AC = I, where I = In, the nxn identity matrix, in which case C is an inverse of A.
A matrix that is invertible is sometimes called a nonsingular matrix, and a matrix that is not invertible is sometimes called a singular matrix |
|
Definition
|
|
Term
A^(-1) x A = I and A x A^(-1) = I |
|
Definition
|
|
Term
Theorem 4 (2.2)
Let A = [a b; c d] (rows separated by semicolons). If ad - bc does not equal 0, then A is invertible and
A^(-1) = (1/(ad - bc)) [d -b; -c a]
If ad - bc = 0, then A is not invertible. |
|
Definition
|
|
Term
det A = ad - bc for a 2x2 matrix |
|
Definition
|
|
Term
Theorem 5 (2.2)
If A is an invertible nxn matrix, then for each b in R^n, the equation Ax = b has the unique solution x = A^(-1)b |
|
Definition
|
|
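A numeric check of x = A^(-1)b (made-up invertible system; in practice `np.linalg.solve` is preferred over forming the inverse):

```python
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])       # invertible: det = 1
b = np.array([4., 11.])

# the unique solution per the theorem...
x = np.linalg.inv(A) @ b
# ...matches the direct solver
x2 = np.linalg.solve(A, b)
```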
Term
Theorem 6 (2.2)
a. If A is an invertible matrix, then A^(-1) is invertible and (A^(-1))^(-1) = A
b. If A and B are nxn invertible matrices, then so is AB and the inverse of AB is the product of the inverses of A and B in the reverse order, or (AB)^(-1) = B^(-1) x A^(-1)
c. If A is an invertible matrix, then so is A^T, and the inverse of A^T is the transpose of A^(-1), or (A^T)^(-1) = (A^(-1))^T |
|
Definition
|
|
Term
The product of nxn invertible matrices is invertible, and the inverse is the product of their inverses in the reverse order. |
|
Definition
|
|
Term
An elementary matrix is...
a matrix that is obtained by performing a single elementary row operation on the identity matrix |
|
Definition
|
|
Term
If an elementary row operation is performed on an mxn matrix A, the resulting matrix can be written as EA, where the mxm matrix E is created by performing the same row operation on Im. |
|
Definition
|
|
Term
Each elementary matrix E is invertible. The inverse of E is the elementary matrix of the same type that transforms E back into I. |
|
Definition
|
|
Term
Theorem 7 (2.2)
An nxn matrix A is invertible iff A is row equivalent to In, and in this case, any sequence of elementary row operations that reduces A to In, also transforms In into A^(-1). |
|
Definition
|
|
Term
The Invertible Matrix Theorem
Let A be a square nxn matrix. Then the following statements are either all true or all false.
a. A is an invertible matrix
b. A is row equivalent to the nxn identity matrix
c. A has n pivot positions
d. The equation Ax = 0 has only the trivial solution
e. The columns of A form a linearly independent set.
f. The linear transformation x |-> Ax is one-to-one
g. The equation Ax = b has at least one solution for each b in R^n.
h. The columns of A span R^n.
i. The linear transformation x |-> Ax maps R^n onto R^n
j. There is an nxn matrix C such that CA = I.
k. There is a nxn matrix D such that AD = I
l. A^T is an invertible matrix
m. The columns of A form a basis of R^n
n. Col A = R^n
o. dim Col A = n
p. rank A = n
q. Nul A = {0}
r. dim Nul A = 0
s. The number 0 is not an eigenvalue of A.
t. The determinant of A is not zero. |
|
Definition
|
|
Term
Let A and B be square matrices. If AB = I, then A and B are both invertible, with B = A^(-1) and A = B^(-1) |
|
Definition
|
|
Term
Theorem 9 (2.3)
Let T : R^n -> R^n be a linear transformation and let A be the standard matrix for T. Then T is invertible iff A is an invertible matrix. In that case, the linear transformation S given by S(x) = A^(-1)x is the unique function that satisfies S(T(x)) = x and T(S(x)) = x for all x in R^n |
|
Definition
|
|
Term
A linear transformation T: R^n -> R^n is said to be invertible if there exists a function S : R^n -> R^n such that
S(T(x)) = x for all x in R^n
T(S(x)) = x for all x in R^n
If such an S exists, it is unique and is a linear transformation.
S is the inverse of T and can be written as T^(-1) |
|
Definition
|
|
Term
A subspace of R^n is any set H in R^n that has three properties:
a. The zero vector is in H.
b. For each u and v in H, the sum u + v is in H.
c. For each u in H, and each scalar c, the vector cu is in H. |
|
Definition
|
|
Term
The column space of a matrix A is the set Col A of all linear combinations of the columns of A
The column space of a mxn matrix is a subspace of R^m
When a system of linear equations is written in the form Ax = b, the column space of A is the set of all b for which the system has a solution. |
|
Definition
|
|
Term
The null space of a matrix is the set Nul A of all solutions to the homogeneous equation Ax = 0 |
|
Definition
|
|
Term
Theorem 12 (2.8)
The null space of an mxn matrix is a subspace of R^n. Equivalently, the set of all solutions to a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of R^n.
To test whether a given vector v is in Nul A, compute Av to see if Av is the zero vector.
To create an explicit description of Nul A, solve the equation Ax = 0 and write the solution in parametric vector form. |
|
Definition
|
|
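The membership test for Nul A described above is one matrix-vector product (made-up A and candidate v):

```python
import numpy as np

A = np.array([[1., -2., 3.],
              [2., -4., 6.]])
v = np.array([2., 1., 0.])     # candidate vector

# v is in Nul A exactly when Av is the zero vector
in_null_space = np.allclose(A @ v, 0)
```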
Term
A basis for a subspace H is a linearly independent set in H that spans H.
|
|
Definition
|
|
Term
The set {e1...en} is called the standard basis for R^n. |
|
Definition
|
|
Term
To find a basis for Nul A, write the solution to Ax = 0 in parametric vector form; the vectors that appear are the basis. |
|
Definition
|
|
Term
The pivot columns of a matrix A form a basis for the column space of A. |
|
Definition
|
|
Term
The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {0} is defined to be zero. |
|
Definition
|
|
Term
The rank of a matrix A, denoted by rank A, is the dimension of the column space of A. |
|
Definition
|
|
Term
The Rank Theorem
If a matrix A has n columns, then rank A + dim Nul A = n |
|
Definition
|
|
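The Rank Theorem (rank A + dim Nul A = n) can be verified exactly with SymPy (made-up matrix with one redundant row):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8],    # = 2 * row 1, so it adds nothing to the rank
            [0, 1, 1, 1]])   # n = 4 columns

rank = A.rank()
dim_nul = len(A.nullspace())   # dimension of the null space
```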
Term
The Basis Theorem
Let H be a p-dimensional subspace of R^n. Any linearly independent set of exactly p elements in H is automatically a basis for H. Also, any set of p elements of H that spans H is automatically a basis for H. |
|
Definition
|
|
Term
Theorem 2 (3.1)
If A is a triangular matrix, then det A is the product of the entries on the main diagonal of A. |
|
Definition
|
|
Term
Theorem 3 (3.2)
Row Operations
Let A be a square matrix.
a. If a multiple of one row of A is added to another row to produce a matrix B, then det B = det A
b. If two rows are interchanged to produce B, then det B = - det A
c. If one row of A is multiplied by k to produce B, then det B = k x det A |
|
Definition
|
|
Term
det A = (-1)^r x the product of the pivots of U, where r is the number of row interchanges and U is an echelon form of A, when A is invertible
det A = 0 when A is not invertible |
|
Definition
|
|
Term
Theorem 4 (3.2)
A square matrix A is invertible iff det A is not equal to 0. |
|
Definition
|
|
Term
Theorem 5 (3.2)
If A is an nxn matrix, then det A^T = det A |
|
Definition
|
|
Term
Theorem 6 (3.2)
If A and B are nxn matrices, then det (AB) = det A x det B
Note: det (A + B) does not generally equal det A + det B |
|
Definition
|
|
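Both halves of the card, det(AB) = det A x det B and the failure of the sum rule, show up on a tiny made-up example:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])   # det A = -2
B = np.array([[0., 1.],
              [1., 1.]])   # det B = -1

lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)

# the additive analogue generally fails
sum_det = np.linalg.det(A + B)
det_sum = np.linalg.det(A) + np.linalg.det(B)
```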
Term
If A is an nxn matrix and E is an nxn elementary matrix, then det EA = det E x det A
where
det E = 1 if E is a row replacement, -1 if E is an interchange, or r if E is a scale by r |
|
Definition
|
|