Lecture 02. Linear Systems
Solution of Linear Systems
A linear system can have:
- no solution (inconsistent)
- exactly one solution (consistent)
- infinitely many solutions (consistent)
The basic method for solving a system of linear equations is to replace the given system by a new system that has the same solution set but which is easier to solve.
Since the rows of an augmented matrix correspond to the equations in the associated system, the new system is generally obtained in a series of steps by applying the following three types of operations, which eliminate unknowns systematically. These are called elementary row operations:
- Multiply an equation through by a nonzero constant
- Interchange two equations
- Add a constant multiple of one equation to another
Algorithms to Solve Linear Systems (Direct Methods)
Gauss-Jordan Elimination
I don't see the need to write out the detailed procedure here.
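Still, a minimal Python sketch of the idea may be a useful reference (my own illustration, assuming numpy; it row-reduces an augmented matrix [A | b] with partial pivoting):

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] of shape (n, n+1) to reduced
    row echelon form and return the solution x. A sketch, not robust code."""
    aug = aug.astype(float)
    n = aug.shape[0]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest pivot candidate.
        pivot = np.argmax(np.abs(aug[col:, col])) + col
        aug[[col, pivot]] = aug[[pivot, col]]
        # Scale the pivot row so the pivot entry becomes 1.
        aug[col] /= aug[col, col]
        # Eliminate the pivot column from every other row.
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, -1]

# Example: x + y = 3, 2x - y = 0  ->  x = 1, y = 2
print(gauss_jordan(np.array([[1.0, 1.0, 3.0], [2.0, -1.0, 0.0]])))
```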
Cramer's Rule
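For a square system Ax = b with \det(A) \ne 0, Cramer's rule gives each unknown as a ratio of determinants:

x_i = \frac{\det(A_i)}{\det(A)}, \quad i = 1, \dots, n

where A_i is A with its i-th column replaced by b. It is mainly of theoretical interest, since it requires computing n + 1 determinants.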
Find Inverse (A^{-1})
It is natural to write the solution to Ax = b as x = A^{-1}b.
The problem is, how do we obtain the inverse A^{-1} ?
Simple Way
\begin{aligned}
A &=
\begin{bmatrix}
a_{11} &\cdots &a_{1n}\\
\vdots &\ddots &\vdots\\
a_{n1} &\cdots &a_{nn}\\
\end{bmatrix}\\
&\longrightarrow \begin{bmatrix}
a_{11} &\cdots &a_{1n} &| &1 &\cdots &0\\
\vdots &\ddots &\vdots &| &\vdots &\ddots &\vdots\\
a_{n1} &\cdots &a_{nn} &| &0 &\cdots &1\\
\end{bmatrix}
\longrightarrow \begin{bmatrix}
& &| & & &\\
&A &| & &I &\\
& &| & & &\\
\end{bmatrix}\\
&\longrightarrow \begin{bmatrix}
& &| & & &\\
&I &| & &A^{-1} &\\
& &| & & &\\
\end{bmatrix}
\end{aligned}
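In other words: augment the (square) matrix A with the identity, row-reduce until the left block becomes I, and read A^{-1} off the right block. A minimal numpy sketch of this procedure (an illustration under the assumption that A is nonsingular):

```python
import numpy as np

def inverse_by_elimination(A):
    """Invert A by row-reducing [A | I] to [I | A^{-1}]."""
    A = A.astype(float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])  # form the block [A | I]
    for col in range(n):
        pivot = np.argmax(np.abs(aug[col:, col])) + col  # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]  # the right block is now A^{-1}

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(inverse_by_elimination(A) @ A)  # ~ identity
```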
LU Decomposition
If all n leading principal minors of the n\times n matrix A are nonzero, then A has exactly one LU-decomposition (normalizing L to have a unit diagonal):
A = LU
where L is a lower triangular matrix and U is an upper triangular matrix.
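A minimal numpy sketch of the Doolittle variant (unit diagonal on L), assuming the minor condition above holds so that no pivoting is needed:

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L U."""
    A = A.astype(float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        # Solve A = L U for row k of U, then for column k of L.
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[2.0, 3.0], [4.0, 7.0]])
L, U = lu_doolittle(A)
print(L @ U)  # reproduces A
```

Once A = LU is known, Ax = b is solved by forward substitution (Ly = b) followed by back substitution (Ux = y).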
Algorithms to Solve Linear Systems (Iterative Methods)
\begin{aligned}
A = D - L - U, \quad \begin{cases}
d_{ii} = a_{ii} &\text{($D$: the diagonal part)}\\
l_{ij} = -a_{ij}, \ i > j &\text{($L$: negative of the strictly lower part)}\\
u_{ij} = -a_{ij}, \ i < j &\text{($U$: negative of the strictly upper part)}
\end{cases}
\end{aligned}
with all other entries of D, L, U equal to zero.
Jacobi Method
T = D^{-1}(L+U), \quad c= D^{-1}b
Gauss-Seidel Method
T = (D-L)^{-1}U, \quad c= (D-L)^{-1}b
Both methods then iterate
x^{(k)} = Tx^{(k-1)} + c, \quad \text{where } k \in \mathbb{N}^+
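A minimal numpy sketch of both iterations, built directly from the splittings above (an illustration; convergence is assumed here, e.g. for strictly diagonally dominant A):

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Jacobi: T = D^{-1}(L+U), c = D^{-1}b."""
    D = np.diag(np.diag(A))
    T = np.linalg.solve(D, D - A)   # D^{-1}(L+U), since L + U = D - A
    c = np.linalg.solve(D, b)       # D^{-1} b
    x = x0
    for _ in range(iters):
        x = T @ x + c               # x^(k) = T x^(k-1) + c
    return x

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel: T = (D-L)^{-1}U, c = (D-L)^{-1}b."""
    DL = np.tril(A)                 # D - L is the lower triangle of A
    U = DL - A                      # (D - L) - A = U
    x = x0
    for _ in range(iters):
        # One step x = (D-L)^{-1}(U x + b), without forming T explicitly.
        x = np.linalg.solve(DL, U @ x + b)
    return x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 3.0])
print(jacobi(A, b, np.zeros(2)))        # -> [1. 1.]
print(gauss_seidel(A, b, np.zeros(2)))  # -> [1. 1.]
```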
Vector and Matrix Norms
Vector Norms
On a vector space V, a norm is a function ||\cdot|| from V to \R_{\ge 0} that obeys these three postulates:
- x \ne 0 \rightarrow ||x|| > 0
- ||\lambda x|| = |\lambda|\times ||x||, \quad \lambda \in \R
- ||x + y|| \le ||x|| + ||y||
The general p-norm is
||x||_p = \bigg(\sum_{i=1}^{n} |x_i|^p\bigg)^{1/p}
Thus,
\begin{aligned}
||x||_1 &= \sum_{i = 1}^{n} |x_i|\\
||x||_2 &= \sqrt{\sum_{i = 1}^{n} |x_i|^2}\\
||x||_\infty &= \max_{i \in [1, n]} |x_i|\\
\end{aligned}
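These are straightforward to check numerically; a small numpy sketch comparing the definitions above against np.linalg.norm:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

p1   = np.sum(np.abs(x))               # ||x||_1   = 7
p2   = np.sqrt(np.sum(np.abs(x)**2))   # ||x||_2   = 5
pinf = np.max(np.abs(x))               # ||x||_inf = 4

print(p1, np.linalg.norm(x, 1))
print(p2, np.linalg.norm(x, 2))
print(pinf, np.linalg.norm(x, np.inf))
```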
Matrix Norms
For an m \times n matrix A, the induced norms are:
\begin{aligned}
||A||_1 &= \max_{j \in [1, n]}\sum_{i = 1}^{m} |a_{ij}| \quad (\text{largest column abs. sum})\\
||A||_2 &= \sqrt{\lambda_{\max}(A^{T}A)}\\
||A||_\infty &= \max_{i \in [1, m]}\sum_{j = 1}^{n} |a_{ij}| \quad (\text{largest row abs. sum})
\end{aligned}