Topic 14 Inverse matrix method and Cramer’s rule

14.1 Inverse matrices

We can multiply matrices. Can we also “divide by matrices”?

Recall that for numbers, \(\frac{b}{a} = b \cdot a^{-1}\) (where \(a^{-1}\) is the number which “solves” the equation \(a\cdot x = 1\), if such exists).

We can try to do a similar thing for matrices: Can we “solve” the equation \(A\cdot X=I_n\) for a given fixed matrix \(A\)?

Answer: well, sometimes we can, sometimes we can’t.

When we can, we call such a matrix \(A\) invertible, and we call the solution to \(A\cdot X=I_n\) the inverse of \(A\).

A note about \(I_n\): we use this in place of “\(1\)”, because it is the matrix that satisfies \(B\cdot I_n=B\) and \(I_n\cdot B=B\) whenever the multiplication makes sense (i.e. when \(B\) has the correct size).

Definition 14.1 A square \(n\times n\) matrix \(A\) is called invertible if there exists an \(n\times n\) matrix \(B\) such that \(AB=BA=I_n\). (If such a matrix \(B\) exists, then it is automatically unique.) We denote this \(B\) by \(A^{-1}\). So: \[ AA^{-1}=I_n=A^{-1}A. \]
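To see the definition in action numerically (an optional NumPy sketch; the pair of matrices comes from Example 14.1 below), one can check both products against \(I_2\):

```python
import numpy as np

A = np.array([[5.0, -3.0],
              [2.0,  1.0]])
B = np.array([[ 1/11, 3/11],
              [-2/11, 5/11]])    # candidate inverse of A (see Example 14.1)

I2 = np.eye(2)
# A is invertible with A^{-1} = B exactly when both products equal I_2.
print(np.allclose(A @ B, I2) and np.allclose(B @ A, I2))   # True
```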

Theorem 14.1 A square \(n\times n\) matrix \(A\) is invertible if and only if \(\det(A)\not=0\). In this case \[ \boxed{A^{-1} = \frac{1}{\det(A)}\mathrm{adj}(A)}, \] where \(\mathrm{adj}(A)\) is called the adjoint matrix (the transpose of the matrix of cofactors): \[ \mathrm{adj}(A) = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n}\\ A_{21} & A_{22} & \cdots & A_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix}^T = \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1}\\ A_{12} & A_{22} & \cdots & A_{n2}\\ \vdots & \vdots & \ddots & \vdots\\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix}. \]

This Theorem is somewhat tricky to prove; you can learn the proof in the MATH1048 Linear Algebra I module.
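Even without the proof, the formula in Theorem 14.1 can be used as a recipe: form the matrix of cofactors, transpose it, and divide by the determinant. Here is a minimal NumPy sketch of that recipe (the function names `adjugate` and `inverse_via_adjugate` are ours, not library functions):

```python
import numpy as np

def adjugate(A):
    """adj(A): the transpose of the matrix of cofactors of a square matrix A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take its determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def inverse_via_adjugate(A):
    """A^{-1} = adj(A) / det(A), provided det(A) != 0."""
    d = np.linalg.det(A)
    if np.isclose(d, 0):
        raise ValueError("det(A) = 0, so A is not invertible")
    return adjugate(A) / d
```

Applied to the matrices of Examples 14.1 and 14.2 below, this reproduces the inverses computed by hand (up to rounding).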

Example 14.1 Is \(A=\begin{pmatrix}5&-3\\2&1\end{pmatrix}\) invertible? If so, determine \(A^{-1}\).

Solution. \(\det(A)=\begin{vmatrix}5&-3\\2&1\end{vmatrix}=5\cdot1-(-3)\cdot2=11\neq0\)
\(\implies\) \(A\) is invertible, and \[ A^{-1} = \frac{1}{11}\begin{pmatrix}1&-2\\3&5\end{pmatrix}^T = \begin{pmatrix}\frac{1}{11}&\frac{3}{11}\\-\frac{2}{11}&\frac{5}{11}\end{pmatrix}. \]

We can write the general formula for \(2\times 2\) matrices, which you may want to remember: \[ \begin{pmatrix}a&b\\c&d\end{pmatrix}^{-1} = \frac{1}{ad-bc} \begin{pmatrix}d&-b\\-c&a\end{pmatrix}\quad\quad \text{if}\quad ad-bc\neq0. \]
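If you want to automate this \(2\times 2\) formula, a short sketch might look as follows (the function name `inverse_2x2` is ours; the matrix of Example 14.1 is reused as a test case):

```python
import numpy as np

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the (1/(ad-bc)) [[d,-b],[-c,a]] formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if np.isclose(det, 0):
        raise ValueError("ad - bc = 0: the matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

print(inverse_2x2(np.array([[5.0, -3.0], [2.0, 1.0]])))
# (1/11) * [[1, 3], [-2, 5]], as in Example 14.1
```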

Example 14.2 Determine the inverse \(A^{-1}\) of the matrix \(A=\begin{pmatrix}1&1&1\\1&2&1\\3&-1&2\end{pmatrix}\), if it exists.

Solution. \[\begin{align*} \det(A) &= \begin{vmatrix}1&1&1\\1&2&1\\3&-1&2\end{vmatrix}\\ &= 1\cdot\begin{vmatrix}2&1\\-1&2\end{vmatrix} -1\cdot\begin{vmatrix}1&1\\3&2\end{vmatrix} +1\cdot\begin{vmatrix}1&2\\3&-1\end{vmatrix}\\ &= 5+1-7=-1. \end{align*}\] \(\implies\) \(A^{-1}\) exists and \[\begin{align*} A^{-1} &= \frac{1}{\det(A)}\mathrm{adj}(A) \\ &= \frac{1}{-1} \begin{pmatrix} \begin{vmatrix}2&1\\-1&2\end{vmatrix}&-\begin{vmatrix}1&1\\3&2\end{vmatrix}&\begin{vmatrix}1&2\\3&-1\end{vmatrix}\\ -\begin{vmatrix}1&1\\-1&2\end{vmatrix}&\begin{vmatrix}1&1\\3&2\end{vmatrix}&-\begin{vmatrix}1&1\\3&-1\end{vmatrix}\\ \begin{vmatrix}1&1\\2&1\end{vmatrix}&-\begin{vmatrix}1&1\\1&1\end{vmatrix}&\begin{vmatrix}1&1\\1&2\end{vmatrix} \end{pmatrix}^T\\ &=-\begin{pmatrix} 5&1&-7\\-3&-1&4\\-1&0&1 \end{pmatrix}^T = \begin{pmatrix} -5&-1&7\\3&1&-4\\1&0&-1 \end{pmatrix}^T \\ &= \begin{pmatrix} -5&3&1\\-1&1&0\\7&-4&-1 \end{pmatrix}. \end{align*}\]
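An optional numerical check of this answer (not part of the worked solution; NumPy is used only for verification):

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 2, 1],
              [3, -1, 2]])
A_inv = np.array([[-5, 3, 1],
                  [-1, 1, 0],
                  [ 7, -4, -1]])

print(np.linalg.det(A))                    # approximately -1
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```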

14.2 Inverse matrix method for simultaneous equations

Given a system of linear equations \[\begin{equation} \begin{matrix} a_{11}x_1 & + & \cdots & + & a_{1n}x_n & = & b_1\\ \vdots & & \vdots & & \vdots & & \vdots\\ a_{n1}x_1 & + & \cdots & + & a_{nn}x_n & = & b_n \end{matrix},\tag{\(\ast\)} \end{equation}\] let \[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \] Then (*) is equivalent to the following matrix equation: \[ A\cdot \begin{pmatrix}x_1\\\vdots\\ x_n\end{pmatrix} = \begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix}. \]

Theorem 14.2 Consider a system of linear equations \[ A\cdot \begin{pmatrix}x_1\\\vdots\\ x_n\end{pmatrix} = \begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix}. \] If \(\det(A)\not=0\), then it has exactly one solution given by: \[ \begin{pmatrix}x_1\\\vdots\\ x_n\end{pmatrix} = A^{-1}\cdot \begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix}. \]

Sketch of Proof (non–examinable): substituting \(A^{-1}\begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix}\) for \(\begin{pmatrix}x_1\\\vdots\\ x_n\end{pmatrix}\) gives \[ A\cdot A^{-1}\cdot \begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix} = I_n \begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix} = \begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix}, \] so it is indeed a solution. Conversely, multiplying both sides of the system on the left by \(A^{-1}\) shows that any solution must equal \(A^{-1}\begin{pmatrix}b_1\\\vdots\\ b_n\end{pmatrix}\), so the solution is unique.
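In computational terms, Theorem 14.2 says that solving the system amounts to one matrix-vector product once \(A^{-1}\) is known. A minimal NumPy sketch (the helper name `solve_by_inverse` is ours):

```python
import numpy as np

def solve_by_inverse(A, b):
    """Solve A x = b as x = A^{-1} b, valid when det(A) != 0 (Theorem 14.2)."""
    if np.isclose(np.linalg.det(A), 0):
        raise ValueError("det(A) = 0: the inverse matrix method does not apply")
    return np.linalg.inv(A) @ b
```

In numerical practice one would normally call `np.linalg.solve(A, b)` rather than forming the inverse explicitly, but the version above mirrors the statement of the theorem.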

Example 14.3 Solve     \(\begin{matrix}4x+2y &=& 17\\ 5x-y &=& 7.25\end{matrix}\)     using the inverse matrix method.

Solution. Let \(A=\begin{pmatrix}4&2\\5&-1 \end{pmatrix}\)
\(\implies\) \(\det(A)=-4-10=-14\)   and   \(A^{-1}=\dfrac{1}{-14}\begin{pmatrix}-1&-2\\-5&4\end{pmatrix}\) \[\begin{align*} \implies\quad\quad \begin{pmatrix}x\\y\end{pmatrix} &= A^{-1}\cdot \begin{pmatrix}17\\7.25\end{pmatrix}\\ &=-\dfrac{1}{14}\cdot\begin{pmatrix}-1&-2\\-5&4\end{pmatrix}\cdot \begin{pmatrix}17\\7.25\end{pmatrix}\\ &=-\dfrac{1}{14}\cdot\begin{pmatrix}-17-14.5\\-85+ 29\end{pmatrix} = -\dfrac{1}{14}\cdot\begin{pmatrix}-31.5\\-56\end{pmatrix}\\ &=\begin{pmatrix}2.25\\4\end{pmatrix}. \end{align*}\] \(\implies\) \(x=2.25\) and \(y=4\).
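A quick NumPy check of this solution (optional):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [5.0, -1.0]])
b = np.array([17.0, 7.25])
print(np.linalg.inv(A) @ b)   # expect x = 2.25, y = 4
```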

Example 14.4 Using the inverse matrix method, solve the following system of equations for \(a,b,c\): \[ \begin{matrix} 4a&+&2b&+&c&=&12\\ a&-& b&+&c&=&-6\\ a&+& b&+&c&=&4. \end{matrix} \]

Solution. Let \(A=\begin{pmatrix}4&2&1\\1&-1&1\\1&1&1\end{pmatrix}\) \[\begin{align*} \implies\quad\quad\quad \det(A) &= 4\cdot\begin{vmatrix}-1&1\\1&1\end{vmatrix} -2\cdot\begin{vmatrix}1&1\\1&1\end{vmatrix} +1\cdot\begin{vmatrix}1&-1\\1&1\end{vmatrix}\\ &= 4\cdot(-2)-2\cdot0+2=-6 \end{align*}\] \[\begin{align*} \text{and}\quad\quad\quad A^{-1} &= \frac{1}{\det(A)}\mathrm{adj}(A) \\ &= \frac{1}{-6} \begin{pmatrix} \begin{vmatrix}-1&1\\1&1\end{vmatrix}&-\begin{vmatrix}1&1\\1&1\end{vmatrix}&\begin{vmatrix}1&-1\\1&1\end{vmatrix}\\ -\begin{vmatrix}2&1\\1&1\end{vmatrix}&\begin{vmatrix}4&1\\1&1\end{vmatrix}&-\begin{vmatrix}4&2\\1&1\end{vmatrix}\\ \begin{vmatrix}2&1\\-1&1\end{vmatrix}&-\begin{vmatrix}4&1\\1&1\end{vmatrix}&\begin{vmatrix}4&2\\1&-1\end{vmatrix} \end{pmatrix}^T\\ &=-\frac{1}{6}\begin{pmatrix} -2&0&2\\-1&3&-2\\3&-3&-6 \end{pmatrix}^T = -\frac{1}{6}\begin{pmatrix} -2&-1&3\\ 0&3&-3\\ 2&-2&-6 \end{pmatrix} \end{align*}\] \[\begin{align*} \implies\quad\quad\quad \begin{pmatrix}a\\b\\c\end{pmatrix} &= A^{-1}\cdot \begin{pmatrix}12\\-6\\4\end{pmatrix} = -\frac{1}{6}\begin{pmatrix} -2&-1&3\\ 0&3&-3\\ 2&-2&-6 \end{pmatrix}\cdot \begin{pmatrix}12\\-6\\4\end{pmatrix}\\ &=-\frac{1}{6}\begin{pmatrix}-24+6+12\\-18-12\\24+12-24\end{pmatrix} =-\frac{1}{6}\begin{pmatrix}-6\\-30\\12\end{pmatrix} = \begin{pmatrix}1\\5\\-2\end{pmatrix} \end{align*}\] \(\implies\) \(a=1\),   \(b=5\),   \(c=-2\).
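Again, an optional NumPy check of the answer:

```python
import numpy as np

A = np.array([[4.0, 2.0, 1.0],
              [1.0, -1.0, 1.0],
              [1.0, 1.0, 1.0]])
b = np.array([12.0, -6.0, 4.0])
print(np.linalg.inv(A) @ b)   # expect a = 1, b = 5, c = -2
```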

14.3 Cramer’s rule (explained for \(2\times 2\) only)

Say we want to solve the following simultaneous equations (assuming that \(ad-bc\not=0\)): \[\begin{equation} \begin{matrix} ax&+&by&=&e\\ cx&+&dy&=&f\end{matrix} \tag{\(\ast\ast\)} \end{equation}\] Denote:

  • \(A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\)
  • \(D_x := \begin{vmatrix}e&b\\f&d\end{vmatrix}\)     (the \(1^\text{st}\) column of \(A\) is replaced by the RHS of (**))
  • \(D_y := \begin{vmatrix}a&e\\c&f\end{vmatrix}\)     (the \(2^\text{nd}\) column of \(A\) is replaced by the RHS of (**))

Then \(x=\dfrac{D_x}{\det(A)}\) and \(y=\dfrac{D_y}{\det(A)}\) solve (**).
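Cramer's rule for the \(2\times 2\) case translates directly into code. A minimal sketch (the function name `cramer_2x2` is ours), assuming \(ad-bc\neq 0\):

```python
import numpy as np

def cramer_2x2(A, rhs):
    """Solve the 2x2 system A (x, y)^T = rhs by Cramer's rule."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    Dx = A.astype(float)   # copy of A ...
    Dx[:, 0] = rhs         # ... with the 1st column replaced by the RHS
    Dy = A.astype(float)
    Dy[:, 1] = rhs         # 2nd column replaced by the RHS
    return np.linalg.det(Dx) / det_A, np.linalg.det(Dy) / det_A
```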

Example 14.5 Using Cramer’s rule, solve:    \(\begin{matrix}2x+4y &=& 16\\ x+3y &=& 11\end{matrix}\).

Solution. Let \(A=\begin{pmatrix}2&4\\1&3\end{pmatrix}\) \(\implies\) \(\det(A)=6-4=2\)
\(\implies\) \(D_x=\begin{vmatrix}16&4\\11&3\end{vmatrix}=4\) and \(D_y=\begin{vmatrix}2&16\\1&11\end{vmatrix}=6\)
\(\implies\) \(x=\frac{4}{2}=2\) and \(y=\frac{6}{2}=3\).
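An optional NumPy check of Example 14.5, computing \(D_x\) and \(D_y\) exactly as above:

```python
import numpy as np

A  = np.array([[2.0, 4.0], [1.0, 3.0]])
Dx = np.array([[16.0, 4.0], [11.0, 3.0]])   # 1st column of A replaced by the RHS
Dy = np.array([[2.0, 16.0], [1.0, 11.0]])   # 2nd column of A replaced by the RHS

x = np.linalg.det(Dx) / np.linalg.det(A)
y = np.linalg.det(Dy) / np.linalg.det(A)
print(x, y)   # approximately 2.0 and 3.0
```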