# 2.3 Elimination Using Matrices

## Abstract

The main purpose of linear algebra is to solve linear equations. In the last post we learned the systematic method, elimination. Today we carry out elimination through matrices: the goal is to express all the steps of elimination (and the final result) in a clearer way. Along the way we will develop an important operation, matrix multiplication.

**keywords**: *Elimination, Matrix, n-Dimensional Space, Matrix Multiplication, Row Exchange, The Augmented Matrix*

## The Map of Concepts

## Review: Elimination, Matrix $\cdot$ Vector and Matrix Notation

There are many steps in solving a $3\times 3$ system of linear equations (3 equations in 3 unknowns), and when the system grows larger, say to $100\times 100$, a common size in engineering, there are far too many steps to carry out by hand. So we want a more powerful tool that can record and solve the problem simply and efficiently.

Let's review elimination with a set of linear equations:

$$

\begin{eqnarray}

2x_1+4x_2-2x_3&=2\tag{2.3.1a}\newline

4x_1+9x_2-3x_3&=8\tag{2.3.1b}\newline

-2x_1-3x_2+7x_3&=10\tag{2.3.1c}

\end{eqnarray}

$$

and rewrite them into matrix form:

$$

A\vec{x}=

\begin{bmatrix}

2&4&-2\newline

4&9&-3\newline

-2&-3&7

\end{bmatrix}\vec{x}=

\begin{bmatrix}

2\newline

8\newline

10

\end{bmatrix}=\vec{b}\tag{2.3.2}

$$

This matrix is square, which means we have $n$ equations in $n$ unknowns. Now look at the matrix equation $A\vec{x}=\vec{b}$: $\vec{x}$ and $\vec{b}$ are vectors of the same dimension. We already met the difference matrix in 1.3 Introduction to Matrices, and in both cases multiplication creates a new vector. **So we can think of a matrix as a machine, or a system, that changes the "input" ($\vec{x}$) into the "output" ($\vec{b}$) when we multiply by it.**

There are three components (unknowns) in the vector $\vec{x}$, so we say $\vec{x}$ lives in "$3$-dimensional space"; likewise, vectors with $n$ components live in "$n$-dimensional space". $A\vec{x}$ is a linear combination of the columns of $A$:

$$

A\vec{x}=x_1\cdot \text{column 1}+x_2\cdot \text{column 2}+x_3\cdot \text{column 3}\tag{2.3.3a}

$$

And the row form here may be more useful:

$$

A\vec{x}=\begin{bmatrix}

\text{(row 1)}\cdot \vec{x}\newline

\text{(row 2)}\cdot \vec{x}\newline

\text{(row 3)}\cdot \vec{x}

\end{bmatrix}=\begin{bmatrix}

b_1\newline

b_2\newline

b_3

\end{bmatrix}\tag{2.3.3b}

$$

So the result of $A\vec{x}$ is $\vec{b}$, a vector with the same dimension as $\vec{x}$. The first component of $\vec{b}$ is the dot product of the first row of $A$ with $\vec{x}$. Recall that the entries of a matrix are written $a_{ij}$ (or $A(i,j)$, the entry in row $i$, column $j$), so a $3\times 3$ matrix is:

$$

A=\begin{bmatrix}a_{11}&a_{12}&a_{13}\newline a_{21}&a_{22}&a_{23}\newline a_{31}&a_{32}&a_{33}\end{bmatrix}\tag{2.3.4}

$$

And $b_1$, the first component of $\vec{b}$, can be written as:

$$

b_1=a_{11}x_1+a_{12}x_2+a_{13}x_3\tag{2.3.5}

$$

and in more general form (for an $n\times n$ matrix and an $n$-dimensional vector):

$$

b_i=a_{i1}x_1+a_{i2}x_2+\dots+a_{in}x_n\tag{2.3.6}

$$

or in sigma notation:

$$

b_i=\sum_{j=1}^{n}a_{ij}x_j\tag{2.3.7}

$$
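As a quick numerical check, here is a small NumPy sketch using the matrix and right side from equation (2.3.2). The solution $\vec{x}=(-1,2,2)$ is stated here only to illustrate the formulas; the post derives it later by elimination.

```python
import numpy as np

# A and b from equation (2.3.2); x = (-1, 2, 2) solves the system
A = np.array([[2, 4, -2],
              [4, 9, -3],
              [-2, -3, 7]])
x = np.array([-1, 2, 2])

# Row picture (2.3.7): each b_i is the dot product of row i with x
b = np.array([A[i, :] @ x for i in range(3)])
print(b)  # [ 2  8 10]

# Column picture (2.3.3a): Ax is a linear combination of the columns of A
b_cols = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
assert np.array_equal(b, b_cols)
```

Both pictures, rows and columns, produce the same $\vec{b}$.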

## The Matrix Form of One Elimination Step

We reviewed some background above. Now let's continue with elimination. The first step for equations (2.3.1) is subtracting 2 times (2.3.1a) from (2.3.1b), which gives:

$$

\begin{eqnarray}

2x_1+4x_2-2x_3&=2\tag{2.3.1a}\newline

0x_1+1x_2+1x_3&=4\tag{2.3.1d}\newline

-2x_1-3x_2+7x_3&=10\tag{2.3.1c}

\end{eqnarray}

$$

The right side, $\vec{b}=\begin{bmatrix}2\newline 8\newline 10\end{bmatrix}$, changes to $\vec{b_{new}}=\begin{bmatrix}2\newline 4\newline 10\end{bmatrix}$, transformed by the same step as the left side: $8$ (the component from 2.3.1b) minus $2$ (the multiplier $\ell$) $\times$ $2$ (the component from 2.3.1a).

What we want is a matrix that expresses this elimination step. The result $\vec{b_{new}}=E\vec{b}$ shows that a matrix $E$ fulfills this purpose. When we multiply this "elimination matrix" $E$ by $\vec{b}$, it subtracts $2\times b_1$ from $b_2$:


$$

\text{The elimination matrix is } E=

\begin{bmatrix}

1&0&0\newline

-2&1&0\newline

0&0&1

\end{bmatrix}\tag{2.3.8}

$$

Multiply the matrix by vector $\vec{b}$:

$$

E\vec{b}=\begin{bmatrix}

1&0&0\newline

-2&1&0\newline

0&0&1

\end{bmatrix}

\begin{bmatrix}

2\newline

8\newline

10

\end{bmatrix}=

\begin{bmatrix}

2\newline

4\newline

10

\end{bmatrix}\tag{2.3.9}

$$

The elimination matrix works perfectly in this example, and in general:

$$

E\vec{b}=\begin{bmatrix}

1&0&0\newline

-2&1&0\newline

0&0&1

\end{bmatrix}

\begin{bmatrix}

b_1\newline

b_2\newline

b_3

\end{bmatrix}=

\begin{bmatrix}

b_1\newline

b_2-2b_1\newline

b_3

\end{bmatrix}\tag{2.3.10}

$$

The elimination matrix looks a lot like the identity matrix $I$. If $\ell$ denotes a multiplier, then to get an "elementary matrix" or "elimination matrix" we replace one zero of $I$ with $-\ell$ at a specific position.

The elementary matrix or elimination matrix $E_{ij}$ that subtracts a multiple $\ell$ of row $j$ from row $i$ has the extra nonzero entry $-\ell$ in the $(i,j)$ position and $1$'s on the diagonal.
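This recipe translates directly into code. Below is a minimal NumPy sketch of a helper (the name `elimination_matrix` is my own) that builds $E_{ij}$ from the identity, checked against equation (2.3.10):

```python
import numpy as np

def elimination_matrix(n, i, j, ell):
    """Return the n x n identity with an extra -ell in position (i, j):
    multiplying by it subtracts ell times row j from row i.
    Rows are 1-indexed to match the text's notation."""
    E = np.eye(n)
    E[i - 1, j - 1] = -ell
    return E

E21 = elimination_matrix(3, 2, 1, 2)
b = np.array([2.0, 8.0, 10.0])
print(E21 @ b)  # [ 2.  4. 10.]  -- b2 loses 2*b1, as in (2.3.10)
```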

An example of the elimination matrix $E_{31}$, with $\ell=4$:

$$

E_{31}=

\begin{bmatrix}

1&0&0\newline

0&1&0\newline

-\ell&0&1

\end{bmatrix}=

\begin{bmatrix}

1&0&0\newline

0&1&0\newline

-4&0&1

\end{bmatrix}\tag{2.3.11}

$$

and this elimination matrix subtracts 4 times the first component from the third one:

$$

E_{31}\vec{b}=

\begin{bmatrix}

1&0&0\newline

0&1&0\newline

-4&0&1

\end{bmatrix}

\begin{bmatrix}

1\newline

3\newline

9

\end{bmatrix}=

\begin{bmatrix}

1\newline

3\newline

9-4\times 1

\end{bmatrix}=

\begin{bmatrix}

1\newline

3\newline

5

\end{bmatrix}\tag{2.3.12}

$$

This is the multiplication on the right side; what about the left side of $A\vec{x}=\vec{b}$? To keep the balance, the left side must also be multiplied by the elimination matrix:

$$

E_{31}A\vec{x}=E_{31}\vec{b}\tag{2.3.13}

$$

To eliminate, the purpose of $E_{31}$ is to produce a zero in the $(3,1)$ position of the matrix. This is only one step of elimination, so we continue, step by step, transforming the matrix $A$ into a triangular matrix $U$. The details will be shown below, but note that the matrix of coefficients changes after being multiplied by an elimination matrix. Even though the solution does not change during elimination, the system of equations does. This is an important point to keep in mind.
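As a sketch of that step-by-step process, the NumPy snippet below reduces this particular $A$ to a triangular $U$ with three elimination matrices; the multipliers $\ell_{21}=2$, $\ell_{31}=-1$, $\ell_{32}=1$ are read off from this specific system:

```python
import numpy as np

A = np.array([[2., 4., -2.],
              [4., 9., -3.],
              [-2., -3., 7.]])

def E(i, j, ell, n=3):
    """Elimination matrix: subtracts ell * row j from row i (1-indexed)."""
    M = np.eye(n)
    M[i - 1, j - 1] = -ell
    return M

# Apply E21, then E31, then E32 (leftmost matrix acts last)
U = E(3, 2, 1) @ E(3, 1, -1) @ E(2, 1, 2) @ A
print(U)
# [[ 2.  4. -2.]
#  [ 0.  1.  1.]
#  [ 0.  0.  4.]]
```

All three entries below the diagonal are now zero, so $U$ is upper triangular.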

The elimination matrices $E_{ij}$ work well, but we will not see them much later. They show us how a matrix acts on rows. If each step of elimination needs its own matrix, the entire process needs many matrices and becomes too complicated. What we want is one matrix that contains all the steps, or an operation that combines all those matrices together.

## Matrix Multiplication

Everything we discussed above subtracts a multiple of one row from another row, and the matrix $E_{ij}$ fulfills that purpose. After the first step we get a new form of the equations:

$$

E_{21}A\vec{x}=E_{21}\vec{b}\tag{2.3.14}

$$

This is the big question: **How do we multiply two matrices?** When the first matrix is $E_{21}$, we expect $E_{21}A$ to eliminate $a_{21}$. If $A$ is the matrix from equation (2.3.1), then to cancel the entry in row 2, column 1 we set $\ell=2$, and $E_{21}$ is:

$$

\begin{bmatrix}

1&0&0\newline

-2&1&0\newline

0&0&1

\end{bmatrix}\tag{2.3.15}

$$

and the product $E_{21}A$ should be:

$$

E_{21}A=

\begin{bmatrix}

1&0&0\newline

-2&1&0\newline

0&0&1

\end{bmatrix}

\begin{bmatrix}

2&4&-2\newline

4&9&-3\newline

-2&-3&7

\end{bmatrix}=

\begin{bmatrix}

2&4&-2\newline

\textbf{0}& \textbf{1}& \textbf{1}\newline

-2&-3&7

\end{bmatrix}\tag{2.3.16}

$$

where rows 1 and 3 do not change, but the first entry of row 2 becomes zero. We did not discover "matrix multiplication", we designed it; we created it according to our purpose. And a subtle idea is included in this process:

$$

(E_{21}A)\vec{x}=\begin{bmatrix}
2x_1+4x_2-2x_3\newline
0x_1+1x_2+1x_3\newline
-2x_1-3x_2+7x_3
\end{bmatrix}\tag{2.3.17}

$$

and

$$

E_{21}(A\vec{x})=\begin{bmatrix}
2x_1+4x_2-2x_3\newline
0x_1+1x_2+1x_3\newline
-2x_1-3x_2+7x_3
\end{bmatrix}\tag{2.3.18}

$$

So, we get a conclusion:

$$

(E_{21}A)\vec{x}=E_{21}(A\vec{x})\tag{2.3.19}

$$

then we can just write it as $E_{21}A\vec{x}$. The rule can be extended to a matrix $C$:

$$

(EA)C=E(AC)\tag{2.3.20}

$$


$$

\text{Associative law is true: }A(BC)=(AB)C\newline
\text{Commutative law is false: often }AB\neq BA

$$
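A quick NumPy check of both laws, using the $E_{21}$ and $A$ from this section (a sketch for this specific pair of matrices):

```python
import numpy as np

E21 = np.array([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])
A = np.array([[2, 4, -2], [4, 9, -3], [-2, -3, 7]])
x = np.array([-1, 2, 2])

# Associative law (2.3.19): the grouping does not matter
assert np.array_equal((E21 @ A) @ x, E21 @ (A @ x))

# Commutative law fails here: E21 acts on rows of A,
# but A @ E21 combines the columns of A instead
print(np.array_equal(E21 @ A, A @ E21))  # False
```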

### Calculate Matrix Multiplication

We know how to calculate $A\vec{x}$: it is a linear combination of the columns of $A$. If we extend $\vec{x}$ into a matrix with two columns, the multiplication becomes two linear combinations:

$$

AX=A\begin{bmatrix}\vec{x}& \vec{y}\end{bmatrix}=\begin{bmatrix}A\vec{x}& A\vec{y}\end{bmatrix}\tag{2.3.21}

$$

This can also be supported by the elimination point of view: when we eliminate an entry in the equations, we work on each column separately. We can extend to more than two columns; in the general form we replace $X$ with a matrix $B$ of $n$ columns:

$$

AB=\begin{bmatrix}A\vec{b_1}& A\vec{b_2}&\dots&A\vec{b_n}\end{bmatrix}

\tag{2.3.22}

$$
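The column rule (2.3.22) can be verified directly in NumPy; here $B$ is an arbitrary matrix chosen only for illustration:

```python
import numpy as np

A = np.array([[2, 4, -2], [4, 9, -3], [-2, -3, 7]])
B = np.array([[1, 0, 2], [0, 3, 1], [2, 1, 0]])  # any 3x3 matrix works

# Column picture of AB: column j of AB is A times column j of B
AB = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
assert np.array_equal(AB, A @ B)
```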

The beauty of matrix multiplication is that all three approaches, rows, columns and whole matrices, come out right.

## Permutation Matrix

In elimination we need multiplication and subtraction, but sometimes, to get a pivot, we have to exchange two rows. So we need a matrix that can exchange rows: the permutation matrix.

$I=\begin{bmatrix}1&0&0\newline 0&1&0\newline 0&0&1\end{bmatrix}$ is the $3\times 3$ identity matrix. By exchanging rows of $I$, we get a "permutation matrix":

$$

P_{23}=\begin{bmatrix}1&0&0\newline 0&0&1\newline 0&1&0\end{bmatrix}

\tag{2.3.23}

$$

This is a row-exchange matrix. Multiplying by $P_{23}$ exchanges rows 2 and 3 of a vector:

$$

\begin{bmatrix}1&0&0\newline 0&0&1\newline 0&1&0\end{bmatrix}

\begin{bmatrix}1\newline 5\newline 3\end{bmatrix}=

\begin{bmatrix}1\newline 3\newline 5\end{bmatrix}\tag{2.3.24}

$$

And it also works on matrices:

$$

\begin{bmatrix}1&0&0\newline 0&0&1\newline 0&1&0\end{bmatrix}

\begin{bmatrix}2&4&1\newline \textbf{0}&\textbf{0}&\textbf{3}\newline 0&6&5\end{bmatrix}=

\begin{bmatrix}2&4&1\newline 0&6&5\newline \textbf{0}&\textbf{0}&\textbf{3}\end{bmatrix}\tag{2.3.25}

$$

More generally:

Row Exchange Matrix: $P_{ij}$ is the identity matrix with rows $i$ and $j$ exchanged. When this "permutation matrix" $P_{ij}$ multiplies a matrix, it exchanges rows $i$ and $j$.

Sometimes we need to rearrange three rows in one operation; fortunately, that can be done by a product of several permutation matrices.

**We exchange rows to move a pivot up to the diagonal.**
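A small NumPy sketch of $P_{ij}$ (the helper name `P` is my own), including a product of two exchanges that cycles three rows at once:

```python
import numpy as np

def P(i, j, n=3):
    """Identity matrix with rows i and j exchanged (1-indexed)."""
    M = np.eye(n, dtype=int)
    M[[i - 1, j - 1]] = M[[j - 1, i - 1]]  # swap the two rows
    return M

b = np.array([1, 5, 3])
print(P(2, 3) @ b)  # [1 3 5] -- rows 2 and 3 exchanged, as in (2.3.24)

# A 3-cycle of rows is the product of two single exchanges:
cycle = P(1, 2) @ P(2, 3)
print(cycle @ np.array([1, 2, 3]))  # [3 1 2]
```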

## The Augmented Matrix

Elimination applies the same row operations to $A$ and $\vec{b}$. So here is an idea: include $\vec{b}$ as an extra column of $A$, and eliminate the larger matrix to solve the equations:

$$

A_{augmented}=\begin{bmatrix}A&\vec{b}\end{bmatrix}

=\begin{bmatrix}2&4&-2&2\newline 4&9&-3&8\newline -2&-3&7&10\end{bmatrix}\tag{2.3.26}

$$

And we use the elimination matrix $E_{21}$ to eliminate entry $(2,1)$ :

$$

\begin{bmatrix}1&0&0\newline -2&1&0\newline 0&0&1\end{bmatrix}

\begin{bmatrix}2&4&-2&2\newline 4&9&-3&8\newline -2&-3&7&10\end{bmatrix}=

\begin{bmatrix}

2&4&-2&2\newline

0&1&1&4\newline

-2&-3&7&10\end{bmatrix}\tag{2.3.27}

$$

It works, because matrix multiplication **acts** on each column separately:

$$

E\begin{bmatrix}A&\vec{b}\end{bmatrix}=\begin{bmatrix}EA&E\vec{b}\end{bmatrix}\tag{2.3.28}

$$

Notice again the word "acts". It is essential! Matrices are acting: $E$ acts on $A$ to produce a new matrix $A_{new}$, and $E_{21},E_{31},E_{32}$ act on $A$ to produce an upper triangular matrix $U$.
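Putting it all together, a NumPy sketch that eliminates the augmented matrix of (2.3.26) with the same three elimination matrices as before (multipliers specific to this system):

```python
import numpy as np

# The augmented matrix [A | b] from (2.3.26)
Ab = np.array([[2., 4., -2., 2.],
               [4., 9., -3., 8.],
               [-2., -3., 7., 10.]])

def E(i, j, ell, n=3):
    """Elimination matrix: subtracts ell * row j from row i (1-indexed)."""
    M = np.eye(n)
    M[i - 1, j - 1] = -ell
    return M

# E acts on A and on b at once: E [A | b] = [EA | Eb], equation (2.3.28)
U_aug = E(3, 2, 1) @ E(3, 1, -1) @ E(2, 1, 2) @ Ab
print(U_aug)
# [[ 2.  4. -2.  2.]
#  [ 0.  1.  1.  4.]
#  [ 0.  0.  4.  8.]]
# The last row reads 4*x3 = 8, so back substitution starts with x3 = 2
```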

## Conclusion

- $A\vec{x}=x_1\cdot \text{column 1}+\dots+x_n\cdot\text{column n}$ and $(A\vec{x})_{i}=\sum_{j=1}^{n}a_{ij}x_{j}$
- Identity matrix $I$, multiplier $\ell$, elimination matrix $E$ (a combination of $I$ and $\ell$), and exchange matrix $P_{ij}$
- $E_{21}$ subtracts a multiple $\ell_{21}$ of equation 1 from equation 2. $-\ell_{21}$ is the $(2,1)$ entry of the elimination matrix $E_{21}$
- The augmented matrix $\begin{bmatrix}A&\vec{b}\end{bmatrix}$ can be eliminated by $E$ as $\begin{bmatrix}EA&E\vec{b}\end{bmatrix}$
- In $EA$, matrix $E$ acts on each column of $A$ separately.

## Reference

1. Strang G. Introduction to Linear Algebra. Wellesley, MA: Wellesley-Cambridge Press, 1993.
