Term
Linear Combination |
Definition
$a_1x_1 + a_2x_2 + \cdots + a_nx_n$
where $a_1, a_2, \ldots, a_n$ are coefficients |
|
|
Term
Linear Equation |
Definition
$a_1x_1 + a_2x_2 + \cdots + a_nx_n = d$
where $d$ is a constant |
|
|
Term
System of Linear Equations |
|
Definition
$a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n = d_1$
$a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,n}x_n = d_2$
$\vdots$
$a_{m,1}x_1 + a_{m,2}x_2 + \cdots + a_{m,n}x_n = d_m$
|
|
|
Term
What operations are allowed in Gaussian Elimination? |
|
Definition
- Row Swapping - one row may be swapped with another
- Scaling - both sides of an equation may be multiplied by a non-zero constant
- Row Combination - an equation may be replaced by the sum of itself and a multiple of another equation
|
|
|
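A minimal Python sketch of the three allowed row operations on an augmented matrix; the matrix and the NumPy representation are illustrative assumptions of mine, not part of the deck.

```python
import numpy as np

# An arbitrary 3x4 augmented matrix [coefficients | constants].
A = np.array([[ 2.,  1., -1.,   8.],
              [-3., -1.,  2., -11.],
              [-2.,  1.,  2.,  -3.]])

# Row swapping: exchange rows 0 and 1.
A[[0, 1]] = A[[1, 0]]

# Scaling: multiply row 0 by a non-zero constant.
A[0] *= -1.0 / 3.0

# Row combination: replace row 2 with itself plus a multiple of row 0.
A[2] += 2.0 * A[0]
```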
Term
What operations are not allowed in Gaussian Elimination? |
|
Definition
- no equation may be multiplied by zero
- a multiple of a row may not be added to that same row
- a row may not be swapped with itself
|
|
|
Term
When is a system in echelon form? |
|
Definition
A system is in echelon form if each leading variable is to the right of the one above it and all-zero rows are at the bottom.
Example:
[image]
|
|
|
Term
When can we tell that a system has no solution? |
|
Definition
When any step in Gaussian Elimination produces a contradictory equation.
For example,
[image]
0 = 2 is a contradictory equation |
|
|
Term
When do we know a system has one unique solution? |
|
Definition
When we reach echelon form without contradictory equations and each variable is a leading variable in its row.
For example,
[image] |
|
|
Term
When do we know a system has many solutions? |
|
Definition
When we reach echelon form without any contradictory equations and at least one variable is not a leading variable in its row.
For example,
[image] |
|
|
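Tying the last three cards together, here is a rough Python sketch (my own, not from the deck) that runs forward elimination on an augmented matrix [A | b] and reports whether the system has no solution, a unique solution, or many solutions.

```python
import numpy as np

def classify(aug, tol=1e-12):
    """Forward-eliminate an augmented matrix [A | b] and classify it as
    having 'no', 'unique', or 'many' solutions."""
    M = np.array(aug, dtype=float)
    rows, cols = M.shape
    n = cols - 1                              # number of unknowns
    r = 0                                     # next pivot row
    for c in range(n):
        if r == rows:
            break
        p = r + np.argmax(np.abs(M[r:, c]))   # best pivot candidate in column c
        if abs(M[p, c]) < tol:
            continue                          # no pivot here: c is a free column
        M[[r, p]] = M[[p, r]]                 # row swap
        M[r + 1:] -= np.outer(M[r + 1:, c] / M[r, c], M[r])  # zero out below pivot
        r += 1
    for row in M:                             # a row 0 ... 0 | d with d != 0 is contradictory
        if np.all(np.abs(row[:n]) < tol) and abs(row[n]) > tol:
            return 'no'
    return 'unique' if r == n else 'many'

print(classify([[1, 1, 2], [1, -1, 0]]))      # unique (x = y = 1)
print(classify([[1, 1, 2], [2, 2, 4]]))       # many (second row is redundant)
print(classify([[1, 1, 2], [1, 1, 3]]))       # no (reduces to 0 = 1)
```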
Term
Free Variable |
Definition
In an echelon form linear system, non-leading variables are free.
For example, in a system with three unknowns, x, y, z
[image]
the coefficient a, which is not in a leading position, indicates that z is a free variable.
|
|
|
Term
How are leading variables described in the solution set of a linear system with free variables? |
|
Definition
The leading variables are described in terms of the free variables.
For example, the solution set of a linear system with 4 unknowns (x,y,z,w) and 2 free variables (z,w) would describe x and y in terms of z and w as in this solution set:
[image]
|
|
|
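As a hedged illustration (the system here is made up), SymPy's linsolve expresses the leading variables in terms of whichever variables end up free:

```python
from sympy import symbols, linsolve

x, y, z, w = symbols('x y z w')
# Two equations in four unknowns, written as expressions equal to zero:
#   x + y + z + w = 1  and  y - z + 2w = 3
system = [x + y + z + w - 1, y - z + 2*w - 3]
print(linsolve(system, x, y, z, w))
# x and y come back expressed in terms of the free variables z and w:
# {(w - 2*z - 2, z - 2*w + 3, z, w)}  (up to SymPy's term ordering)
```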
Term
What are parameters? |
Definition
Parameters are the variables needed to describe a family of solutions.
For example, only two variables (z, w) may be needed to describe the solutions of a linear system with 4 unknowns; so the solution set would have two parameters, z and w. |
|
|
Term
Matrix |
Definition
A matrix is an m x n rectangular array of numbers with m rows and n columns.
|
|
|
Term
What term is used to describe a number in a matrix? |
|
Definition
Each number in a matrix is called an entry. |
|
|
Term
What is the notation for a matrix and an entry in that same matrix? |
|
Definition
An uppercase letter is used to denote a matrix.
The lowercase of the same letter, with row and column indices, is used to denote an entry in the same matrix.
For example, a matrix is denoted by A while an entry in the matrix is denoted by $a_{i,j}$ |
|
|
Term
Vector |
Definition
A vector (or column vector) is a matrix with a single column. A matrix with a single row is a row vector. (may also be called linear arrays) |
|
|
Term
What are the entries in a vector called? |
|
Definition
The entries in a vector are called its components (may also be called coordinates, entries or elements). |
|
|
Term
What notation is used to define a vector? |
|
Definition
A lowercase letter with an arrow over it.
For example,
$\vec{u}, \vec{v}$
where u and v are both vectors.
|
|
|
Term
Zero Vector |
Definition
A vector whose components are all zero. |
|
|
Term
Vector Addition |
Definition
To add two vectors, both must have the same number of components.
$\vec{u} + \vec{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n)$
Basic properties of vector addition:
- $\vec{u} + \vec{v} = \vec{v} + \vec{u}$ (commutativity)
- $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$ (associativity)
- $\vec{u} + \vec{0} = \vec{u}$ (additive identity)
- $\vec{u} + (-\vec{u}) = \vec{0}$ (additive inverse)
|
|
|
Term
Vector Scalar Multiplication |
|
Definition
Multiply each vector component by the scalar.
$k\vec{v} = (kv_1, kv_2, \ldots, kv_n)$
Basic properties of vector scalar multiplication (where $k, m \in \mathbb{R}$):
- $k(\vec{u} + \vec{v}) = k\vec{u} + k\vec{v}$
- $(k + m)\vec{v} = k\vec{v} + m\vec{v}$
- $(km)\vec{v} = k(m\vec{v})$
- $1\vec{v} = \vec{v}$
If k < 0, the vector is scaled in the opposite direction. |
|
|
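The addition and scaling properties on the two cards above are easy to spot-check numerically; a small NumPy sketch with arbitrary vectors of my choosing:

```python
import numpy as np

u, v = np.array([1., 2., 3.]), np.array([4., -1., 0.])
k, m = 2.5, -3.0

assert np.allclose(u + v, v + u)                # commutativity
assert np.allclose(u + np.zeros(3), u)          # zero vector is the identity
assert np.allclose(k * (u + v), k * u + k * v)  # scalar distributes over vector addition
assert np.allclose((k + m) * v, k * v + m * v)  # scalar addition distributes
assert np.allclose((k * m) * v, k * (m * v))    # compatible with scalar multiplication
```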
Term
Form of a Linear System's Solution Set |
|
Definition
$\{\vec{p} + c_1\vec{h}_1 + \cdots + c_k\vec{h}_k \mid c_1, \ldots, c_k \in \mathbb{R}\}$
where $\vec{p}$ is any particular solution and
where the number of vectors $\vec{h}_1, \ldots, \vec{h}_k$ equals the number of free variables in the system after Gaussian Elimination |
|
|
Term
When is a linear system homogeneous? |
|
Definition
A linear system is homogeneous when all of its constant terms are zero, so each equation can be written as:
$a_1x_1 + a_2x_2 + \cdots + a_nx_n = 0$ |
|
|
Term
Does a homogeneous system always have a solution? |
|
Definition
Yes. A homogeneous system always has at least one solution, the zero vector. Some may have many solutions. |
|
|
Term
Can a homogeneous system, after Gaussian Elimination, have contradictory equations? |
|
Definition
No. All of the constant terms are zero and the row operations keep them zero, so no equation can ever reduce to a contradiction such as 0 = 2. |
|
Term
What is the general solution to a linear system? |
|
Definition
The general solution to a linear system includes both the particular ( $\vec{p}$ ) and the homogeneous ( $\vec{h}$ ) solutions of the system. |
|
|
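A short NumPy sketch of this card (the system, p, and h are my own illustrative choices): any particular solution plus any scaled homogeneous solution still solves the system.

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [0., 1., 2.]])
b = np.array([3., 2.])

p = np.array([1., 2., 0.])   # a particular solution: A @ p == b
h = np.array([1., -2., 1.])  # a homogeneous solution: A @ h == 0

for c in (-1.0, 0.0, 2.0):   # every p + c*h also solves A x = b
    assert np.allclose(A @ (p + c * h), b)
```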
Term
Square Matrix |
Definition
A matrix with the same number of rows and columns. |
|
|
Term
What is a non-singular matrix? |
|
Definition
A square matrix whose entries are the coefficients of a homogeneous linear system with a unique solution (the system reduces to echelon form with no free variables or contradictory equations). |
|
|
Term
What is a singular matrix? |
|
Definition
A square matrix whose associated homogeneous linear system has many solutions.
Note: we normally expect a square system to have a unique solution; when it doesn't, the matrix is singular (i.e. unusual, not what we expected) |
|
|
Term
Vector Length (Norm) |
Definition
The length of a vector $\vec{v}$ is the square root of the sum of the squares of its components:
$\|\vec{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$
For any non-zero vector $\vec{v}$, the vector $\vec{v}/\|\vec{v}\|$ has a length of one. We say that $1/\|\vec{v}\|$ normalizes $\vec{v}$.
|
|
|
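In NumPy terms (the vector is an illustrative assumption), the length is np.linalg.norm and dividing by it normalizes:

```python
import numpy as np

v = np.array([3., 4.])
length = np.linalg.norm(v)    # sqrt(3**2 + 4**2) == 5.0
unit = v / length             # the normalized vector (1/||v||) v
assert np.isclose(np.linalg.norm(unit), 1.0)
```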
Term
|
Definition
The dot product of two n-component real vectors is the scalar obtained by multiplying corresponding components and summing:
$\vec{u} \cdot \vec{v} = u_1v_1 + u_2v_2 + \cdots + u_nv_n$
Basic properties of dot product:
- $\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u}$ (commutativity)
- $\vec{u} \cdot (\vec{v} + \vec{w}) = \vec{u} \cdot \vec{v} + \vec{u} \cdot \vec{w}$ (distributivity)
- $(k\vec{u}) \cdot \vec{v} = k(\vec{u} \cdot \vec{v})$
- $\vec{v} \cdot \vec{v} \geq 0$, with equality only when $\vec{v} = \vec{0}$
Note: the dot product of a vector with itself is the length of the vector squared: $\vec{v} \cdot \vec{v} = \|\vec{v}\|^2$.
If the dot product of two vectors is 0 then the vectors are orthogonal (perpendicular to each other).
|
|
|
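A quick NumPy check of the squared-length identity and the orthogonality test, on example vectors of my choosing:

```python
import numpy as np

u, v = np.array([1., 2., -1.]), np.array([3., 0., 3.])
assert np.isclose(np.dot(u, u), np.linalg.norm(u) ** 2)  # u.u == ||u||^2
print(np.dot(u, v))  # 0.0, so u and v are orthogonal
```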
Term
Minkowski (Triangle) Inequality |
|
Definition
For any $\vec{u}, \vec{v} \in \mathbb{R}^n$, $\|\vec{u} + \vec{v}\| \leq \|\vec{u}\| + \|\vec{v}\|$, with equality if and only if one of the vectors is a non-negative multiple of the other. |
|
|
Term
Cauchy-Schwarz Inequality |
|
Definition
For any $\vec{u}, \vec{v} \in \mathbb{R}^n$, $|\vec{u} \cdot \vec{v}| \leq \|\vec{u}\|\,\|\vec{v}\|$, with equality if and only if one vector is a scalar multiple of the other. |
|
|
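Both inequalities on the two cards above can be spot-checked numerically; this sketch only illustrates them on random vectors, it proves nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)

# Minkowski: ||u + v|| <= ||u|| + ||v||
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)
# Cauchy-Schwarz: |u . v| <= ||u|| * ||v||
assert abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v)
```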
Term
Gauss-Jordan Reduction (backward elimination) |
|
Definition
Gaussian elimination transforms a matrix into echelon form; Gauss-Jordan reduces echelon form to row canonical form.
Gaussian elimination places zeros in the columns below the leading entries (pivots); Gauss-Jordan places zeros in the columns above them.
Gauss-Jordan reduction follows the same rules as Gaussian elimination but begins from the bottom-right corner of the matrix. Zeros are placed only above pivots (i.e. columns holding free variables are not zeroed out). |
|
|
Term
What is reduced echelon form? |
|
Definition
A matrix or linear system is in reduced echelon form if, in addition to being in echelon form, each leading entry is a one and is the only non-zero entry in its column.
Example of a matrix in reduced row echelon form:
[image] |
|
|
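SymPy's Matrix.rref() computes exactly this form; the matrix below is an arbitrary example of mine:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])
R, pivots = A.rref()   # reduced echelon form and the pivot-column indices
print(R)               # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivots)          # (0, 2)
```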
Term
Row Equivalence |
Definition
- Any matrix reduces to itself (reflexivity)
- If A reduces to B then B reduces to A (symmetry)
- If A reduces to B and B reduces to C then A reduces to C (transitivity)
Two matrices that are interreducible by row operations are row equivalent.
Notation: A ~ B |
|
|
Term
How can you determine whether one matrix can be derived from another using row reductions? |
|
Definition
Apply Gauss-Jordan reduction to both matrices; if they produce the same reduced echelon form then they are row equivalent and each can be derived from the other.
Note: matrices of different sizes cannot be row equivalent. |
|
|
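A sketch of that test with SymPy (the matrices are examples assumed by me):

```python
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])
B = Matrix([[2, 4], [1, 3]])   # also nonsingular, so it reduces to the identity
C = Matrix([[1, 2], [2, 4]])   # second row is a multiple of the first

print(A.rref()[0] == B.rref()[0])  # True: A ~ B
print(A.rref()[0] == C.rref()[0])  # False: C reduces to [[1, 2], [0, 0]]
```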
Term
Linear Combination Lemma |
Definition
A linear combination of linear combinations is always a linear combination.
|
|
|
Term
Echelon Form and Linear Combinations |
|
Definition
Lemma:
In an echelon form matrix, no non-zero row is a linear combination of the other non-zero rows. |
|
|
Term
Unit Vector |
Definition
A vector $\vec{u}$ is a unit vector if $\|\vec{u}\| = 1$. For any non-zero vector $\vec{v}$, $\vec{v}/\|\vec{v}\|$ is a unit vector. |
|
|
Term
Euclidean Distance |
Definition
For vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, the Euclidean distance is
$d(\vec{u}, \vec{v}) = \|\vec{u} - \vec{v}\|$
|
|
|
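Numerically this is just the norm of the difference (the points are example values):

```python
import numpy as np

u, v = np.array([1., 2.]), np.array([4., 6.])
print(np.linalg.norm(u - v))   # 5.0, since u - v = (-3, -4)
```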
Term
How do you determine whether a square matrix is invertible, and how do you find its inverse? |
Definition
To determine if a square matrix is invertible, append the identity matrix and reduce to echelon form. If the left-hand side of the resulting matrix is upper triangular with no zeros on its diagonal, the matrix is invertible. To find the inverse, continue with Gauss-Jordan reduction to reduced echelon form; the left-hand side will then be the identity matrix and the right-hand side will be the inverse of the original.
Example:
A = [ 1  1  2 ]
    [ 3  0  3 ]
    [-2  3  0 ]
Append the identity:
[ 1  1  2 | 1 0 0 ]
[ 3  0  3 | 0 1 0 ]
[-2  3  0 | 0 0 1 ]
becomes, after reduction,
[ 1  0  0 | -3    2   1 ]
[ 0  1  0 | -2  4/3   1 ]
[ 0  0  1 |  3 -5/3  -1 ]
|
|
|
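The worked example above can be reproduced with SymPy by appending the identity and reducing; the library calls are my assumption, but the numbers come from the card.

```python
from sympy import Matrix, Rational, eye

A = Matrix([[1, 1, 2],
            [3, 0, 3],
            [-2, 3, 0]])
aug = A.row_join(eye(3))   # build the augmented matrix [A | I]
R = aug.rref()[0]          # Gauss-Jordan to reduced echelon form
A_inv = R[:, 3:]           # the right-hand block is the inverse
assert A_inv == Matrix([[-3, 2, 1],
                        [-2, Rational(4, 3), 1],
                        [3, Rational(-5, 3), -1]])
assert A * A_inv == eye(3)
```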