Inverse Matrix

Edited By Komal Miglani | Updated on Jul 02, 2025 06:35 PM IST

A matrix (plural: matrices) is a rectangular arrangement of numbers or symbols, which may be real or complex, along rows and columns. A system of m x n symbols arranged in m rows and n columns is called an m by n matrix (written as an m x n matrix). Linear systems of equations can be solved using the inverse of a matrix; fields such as environmental science model real-world problems with such systems.

This Story also Contains
  1. The inverse of a Matrix
  2. Formula to Calculate Inverse of a Matrix $\mathrm{A}^{-1}$
  3. Methods to find the inverse of the matrix
  4. Properties of the inverse of a matrix:
  5. Solved Examples Based on the Inverse of a Matrix

In this article, we will cover the concept of the inverse of a matrix. This topic falls under the broader chapter of Matrices, a crucial chapter in Class 12 Mathematics. It is essential not only for board exams but also for competitive exams like the Joint Entrance Examination (JEE Main) and other entrance exams such as SRMJEE, BITSAT, WBJEE, BCECE, and more. Over the last ten years of the JEE Main exam (2013 to 2023), a total of 16 questions have been asked on this concept: one each in 2014, 2016, 2017, 2018, and 2019, three in 2020, four in 2021, two in 2022, and two in 2023.

The inverse of a Matrix

A non-singular square matrix A is said to be invertible if there exists a square matrix B of the same order such that

AB = I = BA

The matrix B is called the inverse of matrix A. Clearly, B must have the same order as A.

Hence, $\mathrm{A}^{-1}=\mathrm{B} \Leftrightarrow \mathrm{AB}=\mathbb{I}_{\mathrm{n}}=\mathrm{BA}$
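As a quick numerical illustration of this definition, the NumPy sketch below (with an example matrix of our own choosing, not from the text) checks that a matrix and its inverse multiply to the identity from both sides:

```python
import numpy as np

# A small non-singular matrix, chosen here just for illustration
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

B = np.linalg.inv(A)  # candidate inverse of A

# AB = I = BA: both products give the identity matrix
print(np.allclose(A @ B, np.eye(2)))  # True
print(np.allclose(B @ A, np.eye(2)))  # True
```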

Formula to Calculate Inverse of a Matrix $\mathrm{A}^{-1}$

We know

$
\mathrm{A}(\operatorname{adj} \mathrm{A})=|\mathrm{A}| \mathbb{I}_{\mathrm{n}}
$

Multiplying both sides by $\mathrm{A}^{-1}$
$
\begin{aligned}
& \Rightarrow \mathrm{A}^{-1} \mathrm{A}(\operatorname{adj} \mathrm{A})=\mathrm{A}^{-1}|\mathrm{A}| \mathbb{I}_{\mathrm{n}} \\
& \Rightarrow \mathbb{I}_{\mathrm{n}}(\operatorname{adj} \mathrm{A})=|\mathrm{A}| \mathrm{A}^{-1} \quad\left(\text { as } \mathrm{A}^{-1} \mathrm{A}=\mathbb{I}_{\mathrm{n}}\right) \\
& \Rightarrow \mathrm{A}^{-1}=\frac{\operatorname{adj} \mathrm{A}}{|\mathrm{A}|}
\end{aligned}
$
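The identity $\mathrm{A}(\operatorname{adj} \mathrm{A})=|\mathrm{A}| \mathbb{I}_{\mathrm{n}}$ that this derivation starts from can be checked numerically. The sketch below builds the adjugate from cofactors (the helper name `adjugate` and the example matrix are ours, not from the text):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix (helper name is ours)."""
    n = A.shape[0]
    cof = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # minor: determinant after deleting row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 1.0]])
detA = np.linalg.det(A)

# The identity A (adj A) = |A| I_n:
print(np.allclose(A @ adjugate(A), detA * np.eye(3)))      # True
# Hence A^{-1} = adj(A) / |A|:
print(np.allclose(np.linalg.inv(A), adjugate(A) / detA))   # True
```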

The inverse of a 2 x 2 Matrix

Let $\mathrm{A}$ be a square matrix of order 2,
$
\mathrm{A}=\left[\begin{array}{ll}
a & b \\
c & d
\end{array}\right]
$

Then, provided $a d-b c \neq 0$,
$
\mathrm{A}^{-1}=\left[\begin{array}{ll}
a & b \\
c & d
\end{array}\right]^{-1}=\frac{1}{\mathrm{ad}-\mathrm{bc}}\left[\begin{array}{cc}
d & -b \\
-c & a
\end{array}\right]
$
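This 2 x 2 formula translates directly into code. A small sketch (the function name `inv2x2` and the example entries are our own):

```python
import numpy as np

def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: swap a and d, negate b and c, divide by ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # ad - bc = 24 - 14 = 10
print(inv2x2(4, 7, 2, 6))
print(np.allclose(inv2x2(4, 7, 2, 6) @ A, np.eye(2)))  # True
```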

The inverse of a 3 x 3 Matrix

To compute the inverse of matrix A of order 3, first check whether the matrix is singular or non-singular.

If the matrix is singular, then its inverse does not exist.

If the matrix is non-singular, then follow these steps to find the inverse:

We use the formula $A^{-1}=\frac{1}{|A|} \cdot \operatorname{adj}(A)$

  1. Calculate the Matrix of Minors,
  2. then turn that into the Matrix of Cofactors,
  3. then take the transpose (These 3 steps give us the adjoint of matrix A)
  4. multiply that by 1/|A|.
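The four steps above can be sketched in NumPy (the helper name `inverse_by_adjoint` is ours; it assumes a square input):

```python
import numpy as np

def inverse_by_adjoint(A):
    """Follow the four steps: minors -> cofactors -> transpose -> multiply by 1/|A|."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("singular matrix: inverse does not exist")
    minors = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            minors[i, j] = np.linalg.det(sub)            # step 1: matrix of minors
    signs = (-1.0) ** np.add.outer(np.arange(n), np.arange(n))
    cofactors = minors * signs                           # step 2: checkerboard signs
    adj = cofactors.T                                    # step 3: adjoint = transpose
    return adj / detA                                    # step 4: multiply by 1/|A|

A = [[1, 1, 2], [1, 2, 3], [3, 1, 1]]
print(np.allclose(inverse_by_adjoint(A), np.linalg.inv(A)))  # True
```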

Methods to find the inverse of the matrix

Method 1: Directly apply the formula

We use the formula $A^{-1}=\frac{1}{|A|} \cdot \operatorname{adj}(A)$

For example,

Let's compute the inverse of matrix $A$,
$
A=\left[\begin{array}{lll}
1 & 1 & 2 \\
1 & 2 & 3 \\
3 & 1 & 1
\end{array}\right]
$

First, find the determinant of $\mathrm{A}$
$
\begin{aligned}
& |\mathrm{A}|=\left|\begin{array}{lll}
1 & 1 & 2 \\
1 & 2 & 3 \\
3 & 1 & 1
\end{array}\right|=1 \cdot(2 \times 1-3 \times 1)-1 \cdot(1 \times 1-3 \times 3)+2 \cdot(1 \times 1-3 \times 2) \\
& |\mathrm{A}|=-3 \neq 0 \\
& \therefore \mathrm{A}^{-1} \text { exists }
\end{aligned}
$

Now, find the minor of each element
$
\begin{aligned}
& \mathrm{M}_{11}=\left|\begin{array}{ll}
2 & 3 \\
1 & 1
\end{array}\right|=2 \times 1-3 \times 1=-1 \\
& \mathrm{M}_{12}=\left|\begin{array}{ll}
1 & 3 \\
3 & 1
\end{array}\right|=1 \times 1-3 \times 3=-8 \\
& \mathrm{M}_{13}=\left|\begin{array}{ll}
1 & 2 \\
3 & 1
\end{array}\right|=1 \times 1-2 \times 3=-5
\end{aligned}
$

Here is the calculation for the whole matrix:

Minor matrix

$
M=\left[\begin{array}{ccc}
2 \times 1-3 \times 1 & 1 \times 1-3 \times 3 & 1 \times 1-2 \times 3 \\
1 \times 1-2 \times 1 & 1 \times 1-2 \times 3 & 1 \times 1-3 \times 1 \\
1 \times 3-2 \times 2 & 1 \times 3-2 \times 1 & 1 \times 2-1 \times 1
\end{array}\right]=\left[\begin{array}{ccc}
-1 & -8 & -5 \\
-1 & -5 & -2 \\
-1 & 1 & 1
\end{array}\right]
$

Now find the cofactor matrix of the given matrix.
We need to change the sign of alternate cells, like this $\left[\begin{array}{lll}+ & - & + \\ - & + & - \\ + & - & +\end{array}\right]$
So, Cofactor matrix C $=\left[\begin{array}{ccc}+(-1) & -(-8) & +(-5) \\ -(-1) & +(-5) & -(-2) \\ +(-1) & -(1) & +(1)\end{array}\right]=\left[\begin{array}{ccc}-1 & 8 & -5 \\ 1 & -5 & 2 \\ -1 & -1 & 1\end{array}\right]$
Now to find the $\operatorname{adj} \mathrm{A}$, take the transpose of matrix $\mathrm{C}$
Adj $A=C^{\prime}=\left[\begin{array}{ccc}-1 & 1 & -1 \\ 8 & -5 & -1 \\ -5 & 2 & 1\end{array}\right]$
Hence, $A^{-1}=\frac{\operatorname{adj} A}{|A|}$
$A^{-1}=\frac{1}{-3}\left[\begin{array}{ccc}-1 & 1 & -1 \\ 8 & -5 & -1 \\ -5 & 2 & 1\end{array}\right]=\left[\begin{array}{ccc}-\frac{1}{-3} & \frac{1}{-3} & -\frac{1}{-3} \\ \frac{8}{-3} & -\frac{5}{-3} & -\frac{1}{-3} \\ -\frac{5}{-3} & \frac{2}{-3} & \frac{1}{-3}\end{array}\right]$
$
\mathrm{A}^{-1}=\left[\begin{array}{ccc}
\frac{1}{3} & -\frac{1}{3} & \frac{1}{3} \\
-\frac{8}{3} & \frac{5}{3} & \frac{1}{3} \\
\frac{5}{3} & -\frac{2}{3} & -\frac{1}{3}
\end{array}\right]
$
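The result above can be sanity-checked numerically, e.g. with NumPy:

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 3],
              [3, 1, 1]], dtype=float)

# The answer computed above: A^{-1} = (1/3) * [[1, -1, 1], [-8, 5, 1], [5, -2, -1]]
A_inv = np.array([[1, -1, 1],
                  [-8, 5, 1],
                  [5, -2, -1]], dtype=float) / 3

print(np.linalg.det(A))                      # -3 (up to floating-point rounding)
print(np.allclose(A @ A_inv, np.eye(3)))     # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```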

Method 2: Using Elementary Row Transformation

Steps for finding the inverse of a matrix of order 2 by elementary row operations

Step I: Write $A=I_n A$
Step II: Perform a sequence of elementary row operations successively on A on the LHS and the prefactor $I_n$ on the RHS till we obtain the result $I_n=B A$
Step III: Write $A^{-1}=B$

For example:

Given matrix $\mathrm{A}=\left[\begin{array}{cc}a & b \\ c & \left(\frac{1+b c}{a}\right)\end{array}\right]$, then to find the inverse of matrix $\mathrm{A}$
We write,
$
\begin{aligned}
& {\left[\begin{array}{cc}
a & b \\
c & \left(\frac{1+b c}{a}\right)
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right] \mathrm{A}} \\
& \mathrm{R}_1 \rightarrow \frac{1}{\mathrm{a}} \mathrm{R}_1 \\
& {\left[\begin{array}{cc}
1 & \frac{b}{a} \\
c & \left(\frac{1+b c}{a}\right)
\end{array}\right]=\left[\begin{array}{ll}
\frac{1}{a} & 0 \\
0 & 1
\end{array}\right] \mathrm{A}} \\
& \mathrm{R}_2 \rightarrow \mathrm{R}_2-\mathrm{cR}_1 \\
& {\left[\begin{array}{ll}
1 & \frac{b}{a} \\
0 & \frac{1}{a}
\end{array}\right]=\left[\begin{array}{cc}
\frac{1}{a} & 0 \\
-\frac{c}{a} & 1
\end{array}\right] \mathrm{A}} \\
& \mathrm{R}_2 \rightarrow a \mathrm{R}_2 \\
& {\left[\begin{array}{ll}
1 & \frac{b}{a} \\
0 & 1
\end{array}\right]=\left[\begin{array}{cc}
\frac{1}{a} & 0 \\
-c & a
\end{array}\right] \mathrm{A}} \\
& \mathrm{R}_1 \rightarrow \mathrm{R}_1-\frac{\mathrm{b}}{\mathrm{a}} \mathrm{R}_2 \\
& {\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=\left[\begin{array}{cc}
\frac{1+b c}{a} & -b \\
-c & a
\end{array}\right] \mathrm{A}} \\
& \mathrm{A}^{-1}=\left[\begin{array}{cc}
\frac{1+b c}{a} & -b \\
-c & a
\end{array}\right]
\end{aligned}
$

Finding the inverse of a Nonsingular 3 x 3 Matrix by Elementary Row Transformations

  1. Introduce unity at the intersection of the first row and first column either by interchanging two rows or by adding a constant multiple of elements of some other row to the first row.
  2. After introducing unity at (1,1) place introduce zeros at all other places in the first column.
  3. Introduce unity at the intersection of the 2nd row and 2nd column with the help of the 2nd and 3rd row.
  4. Introduce zeros at all other places in the second column except at the intersection of 2nd row and 2nd column.
  5. Introduce unity at the intersection of 3rd row and third column.
  6. Finally, introduce zeros at all other places in the third column except at the intersection of third row and third column.
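The procedure above is essentially Gauss-Jordan elimination on the augmented matrix [A | I]. A sketch (we add a row swap to pick a nonzero pivot, whereas the steps above instead allow adding a multiple of another row; the function name is ours):

```python
import numpy as np

def inverse_by_row_ops(A):
    """Apply elementary row operations to [A | I] until the left half becomes I."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])          # augmented matrix [A | I]
    for col in range(n):
        # choose the largest available pivot in this column (row interchange)
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("singular matrix: inverse does not exist")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]            # introduce unity on the diagonal
        for r in range(n):
            if r != col:                     # introduce zeros elsewhere in the column
                aug[r] -= aug[r, col] * aug[col]
    return aug[:, n:]                        # the right half is now A^{-1}

A = [[1, 2, 3], [0, 1, 2], [3, 1, 1]]
print(np.allclose(inverse_by_row_ops(A), np.linalg.inv(A)))  # True
```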

For example, to find the inverse of matrix A

$
A=\left[\begin{array}{lll}
1 & 2 & 3 \\
0 & 1 & 2 \\
3 & 1 & 1
\end{array}\right]
$

First, write $A=I A$
$
\Rightarrow\left[\begin{array}{lll}
1 & 2 & 3 \\
0 & 1 & 2 \\
3 & 1 & 1
\end{array}\right]=\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \mathrm{A}
$

Apply, $R_3 \rightarrow R_3-3 R_1$
$
\Rightarrow\left[\begin{array}{ccc}
1 & 2 & 3 \\
0 & 1 & 2 \\
0 & -5 & -8
\end{array}\right]=\left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
-3 & 0 & 1
\end{array}\right] \mathrm{A}
$

Apply, $\mathrm{R}_1 \rightarrow \mathrm{R}_1-2 \mathrm{R}_2$

$
\Rightarrow\left[\begin{array}{ccc}
1 & 0 & -1 \\
0 & 1 & 2 \\
0 & -5 & -8
\end{array}\right]=\left[\begin{array}{ccc}
1 & -2 & 0 \\
0 & 1 & 0 \\
-3 & 0 & 1
\end{array}\right] \mathrm{A}
$

Apply, $\mathrm{R}_3 \rightarrow \mathrm{R}_3+5 \mathrm{R}_2$
$
\Rightarrow\left[\begin{array}{ccc}
1 & 0 & -1 \\
0 & 1 & 2 \\
0 & 0 & 2
\end{array}\right]=\left[\begin{array}{ccc}
1 & -2 & 0 \\
0 & 1 & 0 \\
-3 & 5 & 1
\end{array}\right] \mathrm{A}
$

Apply, $\mathrm{R}_3 \rightarrow \frac{1}{2} \mathrm{R}_3$
$
\Rightarrow\left[\begin{array}{ccc}
1 & 0 & -1 \\
0 & 1 & 2 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{ccc}
1 & -2 & 0 \\
0 & 1 & 0 \\
\frac{-3}{2} & \frac{5}{2} & \frac{1}{2}
\end{array}\right] \mathrm{A}
$

Apply, $\mathrm{R}_1 \rightarrow \mathrm{R}_1+\mathrm{R}_3$
$
\Rightarrow\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 2 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{ccc}
-\frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
0 & 1 & 0 \\
\frac{-3}{2} & \frac{5}{2} & \frac{1}{2}
\end{array}\right] \mathrm{A}
$

Apply, $\mathrm{R}_2 \rightarrow \mathrm{R}_2-2 \mathrm{R}_3$
$
\Rightarrow\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{ccc}
-\frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
3 & -4 & -1 \\
\frac{-3}{2} & \frac{5}{2} & \frac{1}{2}
\end{array}\right] \mathrm{A}
$

Hence,
$
\mathrm{A}^{-1}=\left[\begin{array}{ccc}
-\frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
3 & -4 & -1 \\
\frac{-3}{2} & \frac{5}{2} & \frac{1}{2}
\end{array}\right]
$
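A quick NumPy check of this result:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 2],
              [3, 1, 1]], dtype=float)

# The inverse obtained above by row operations
A_inv = np.array([[-0.5,  0.5,  0.5],
                  [ 3.0, -4.0, -1.0],
                  [-1.5,  2.5,  0.5]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(A_inv @ A, np.eye(3)))   # True
```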

Properties of the inverse of a matrix:

1. The inverse of a matrix is unique

Proof:

Let A be a square and non-singular matrix and let B and C be two inverses of matrix A

$\begin{aligned} & \mathrm{AB}=\mathrm{BA}=\mathbb{I}_{\mathrm{n}} \quad(\text {since } \mathrm{B} \text { is an inverse of } \mathrm{A}) \\ & \mathrm{AC}=\mathrm{CA}=\mathbb{I}_{\mathrm{n}} \quad(\text {since } \mathrm{C} \text { is an inverse of } \mathrm{A}) \\ & \text {Now, } \mathrm{AB}=\mathbb{I}_{\mathrm{n}} \\ & \mathrm{C}(\mathrm{AB})=\mathrm{C} \mathbb{I}_{\mathrm{n}} \quad[\text {multiplying by } \mathrm{C}] \\ & (\mathrm{CA}) \mathrm{B}=\mathrm{C} \mathbb{I}_{\mathrm{n}} \quad[\text {by associativity}] \\ & \mathbb{I}_{\mathrm{n}} \mathrm{B}=\mathrm{C} \mathbb{I}_{\mathrm{n}} \Rightarrow \mathrm{B}=\mathrm{C} \end{aligned}$

Hence an invertible matrix has a unique inverse.

2. If A and B are invertible matrices of order n, then AB is also invertible, and $(\mathrm{AB})^{-1}=\mathrm{B}^{-1} \mathrm{A}^{-1}$.

Proof :

$\mathrm{A}$ and $\mathrm{B}$ are invertible matrices, so $|A| \neq 0$ and $|B| \neq 0$
Hence, $|A||B| \neq 0 \Rightarrow|A B| \neq 0$
now, $(A B)\left(B^{-1} A^{-1}\right)=A\left(B B^{-1}\right) A^{-1}$ [by associative law]
$=A\left(I_n\right) A^{-1}$ $\left[\because \mathrm{BB}^{-1}=\mathrm{I}_{\mathrm{n}}\right]$
$=A A^{-1}=I_n$
also, $\left(B^{-1} A^{-1}\right)(A B)=B^{-1}\left(A^{-1} A\right) B$ [by associative law]
$
\begin{aligned}
& =B^{-1}\left(I_n B\right) \\
& =B^{-1} B=I_n
\end{aligned}
$

Thus, $(A B)\left(B^{-1} A^{-1}\right)=I_n=\left(B^{-1} A^{-1}\right)(A B)$
Hence, $(A B)^{-1}=B^{-1} A^{-1}$
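A numerical spot-check of the reverse-order rule (the random, diagonally dominant matrices are our own construction to guarantee invertibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 3 * np.eye(3)   # diagonally dominant => invertible
B = rng.random((3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order
print(np.allclose(lhs, rhs))                # True
# inv(A) @ inv(B) in the original order generally does NOT equal inv(AB)
```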

3. If A is an invertible matrix, then

$\left(\mathrm{A}^{\prime}\right)^{-1}=\left(\mathrm{A}^{-1}\right)^{\prime}$

Proof: As A is an invertible matrix, so |A| ≠ 0 ⇒ |A' | ≠ 0. Hence, A' is also invertible.

Now, $\mathrm{AA}^{-1}=\mathbb{I}_{\mathrm{n}}=\mathrm{A}^{-1} \mathrm{~A}$
Taking transpose of all three sides
$
\begin{aligned}
& \Rightarrow\left(\mathrm{AA}^{-1}\right)^{\prime}=\left(\mathbb{I}_{\mathrm{n}}\right)^{\prime}=\left(\mathrm{A}^{-1} \mathrm{~A}\right)^{\prime} \\
& \Rightarrow\left(\mathrm{A}^{-1}\right)^{\prime} \mathrm{A}^{\prime}=\mathbb{I}=\mathrm{A}^{\prime}\left(\mathrm{A}^{-1}\right)^{\prime} \\
& \left(\mathrm{A}^{\prime}\right)^{-1}=\left(\mathrm{A}^{-1}\right)^{\prime}
\end{aligned}
$

4. Let A be an invertible matrix; then $\left(\mathrm{A}^{-1}\right)^{-1}=\mathrm{A}$

Proof:

Let A be an invertible matrix of order n.

As $\mathrm{A} \cdot \mathrm{A}^{-1}=\mathrm{I}=\mathrm{A}^{-1} \cdot \mathrm{A}$, these equations say exactly that $\mathrm{A}$ is an inverse of $\mathrm{A}^{-1}$. Since the inverse of a matrix is unique (Property 1), $\left(\mathrm{A}^{-1}\right)^{-1}=\mathrm{A}$.

5. Let A be an invertible matrix of order n and k a natural number; then $\left(\mathrm{A}^{\mathrm{k}}\right)^{-1}=\left(\mathrm{A}^{-1}\right)^{\mathrm{k}}=\mathrm{A}^{-\mathrm{k}}$

Proof:

$\begin{aligned}\left(\mathrm{A}^{\mathrm{k}}\right)^{-1} & =(\mathrm{A} \times \mathrm{A} \times \mathrm{A} \times \ldots \times \mathrm{A})^{-1} \\ & =\left(\mathrm{A}^{-1} \times \mathrm{A}^{-1} \times \mathrm{A}^{-1} \times \ldots \times \mathrm{A}^{-1}\right) \\ & =\left(\mathrm{A}^{-1}\right)^k\end{aligned}$

6. Let A be an invertible matrix of order n, then

$
\left|\mathrm{A}^{-1}\right|=\frac{1}{|\mathrm{~A}|}
$

Proof: $\because A$ is invertible, then $|A| \neq 0$.
now, $\mathrm{AA}^{-1}=\mathbb{I}_{\mathrm{n}}=\mathrm{A}^{-1} \mathrm{~A}$
$
\begin{aligned}
& \Rightarrow\left|\mathrm{AA}^{-1}\right|=\left|\mathbb{I}_{\mathrm{n}}\right| \\
& \Rightarrow|\mathrm{A}|\left|\mathrm{A}^{-1}\right|=1 \\
& \Rightarrow\left|\mathrm{A}^{-1}\right|=\frac{1}{|\mathrm{~A}|}
\end{aligned}
$
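A numerical spot-check of this determinant property (example matrix is ours):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

detA = np.linalg.det(A)
det_inv = np.linalg.det(np.linalg.inv(A))
print(np.isclose(det_inv, 1 / detA))   # True: |A^{-1}| = 1/|A|
```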

7. The inverse of a non-singular diagonal matrix is a diagonal matrix

if $A=\left[\begin{array}{lll}a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{array}\right]$ and $|\mathrm{A}| \neq 0$
then
$
\mathrm{A}^{-1}=\left[\begin{array}{ccc}
\frac{1}{a} & 0 & 0 \\
0 & \frac{1}{b} & 0 \\
0 & 0 & \frac{1}{c}
\end{array}\right]
$
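A quick check of this property with NumPy (diagonal entries chosen by us):

```python
import numpy as np

d = np.array([2.0, 5.0, 10.0])   # nonzero diagonal entries a, b, c
A = np.diag(d)

A_inv = np.linalg.inv(A)
# the inverse is the diagonal matrix diag(1/a, 1/b, 1/c)
print(np.allclose(A_inv, np.diag(1 / d)))   # True
```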


Solved Examples Based on the Inverse of a Matrix

Example 1: The set of all values of $t \in \mathbb{R}$, for which the matrix $\left[\begin{array}{ccc}e^t & e^{-t}(\sin t-2 \cos t) & \mathrm{e}^{-t}(-2 \sin t-\cos t) \\ \mathrm{e}^t & \mathrm{e}^{-t}(2 \sin t+\cos t) & e^{-t}(\sin t-2 \cos t) \\ \mathrm{e}^t & e^{-t} \cos t & e^{-t} \sin t\end{array}\right]$ is invertible, is [JEE MAINS 2023]

Solution:

Writing $s=\sin t$ and $c=\cos t$,
$
\begin{aligned}
|\mathrm{A}| & =\left|\begin{array}{ccc}
e^t & e^{-t}(s-2 c) & e^{-t}(-2 s-c) \\
e^t & e^{-t}(2 s+c) & e^{-t}(s-2 c) \\
e^t & e^{-t} c & e^{-t} s
\end{array}\right| \\
& =e^t \cdot e^{-t} \cdot e^{-t}\left|\begin{array}{ccc}
1 & s-2 c & -2 s-c \\
1 & 2 s+c & s-2 c \\
1 & c & s
\end{array}\right|
\end{aligned}
$

Applying $R_1 \rightarrow R_1-R_2$ and $R_2 \rightarrow R_2-R_3$ :
$
\begin{aligned}
|\mathrm{A}| & =e^{-t}\left|\begin{array}{ccc}
0 & -s-3 c & -3 s+c \\
0 & 2 s & -2 c \\
1 & c & s
\end{array}\right| \\
& =e^{-t}[(-s-3 c)(-2 c)-(-3 s+c)(2 s)] \\
& =e^{-t}\left(2 s c+6 c^2+6 s^2-2 s c\right)=6 e^{-t}
\end{aligned}
$

Since $6 e^{-t}>0$ for all $t$, $|\mathrm{A}| \neq 0$ and the matrix is invertible for every $t$.

Hence, the set of all values of $t$ is $\mathbb{R}$, the set of all real numbers.
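The invertibility claim can be spot-checked numerically by sampling values of $t$ and confirming the determinant never vanishes (a sampling check, not a proof; the sketch is ours):

```python
import numpy as np

def M(t):
    """Build the matrix from the problem statement at a given t."""
    s, c, e = np.sin(t), np.cos(t), np.exp(t)
    return np.array([
        [e, (s - 2 * c) / e, (-2 * s - c) / e],
        [e, (2 * s + c) / e, (s - 2 * c) / e],
        [e, c / e, s / e],
    ])

# the determinant is nonzero at every sampled t, so M(t) is invertible there
for t in np.linspace(-3, 3, 25):
    assert abs(np.linalg.det(M(t))) > 1e-9
print("nonzero determinant at all sampled t")
```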

Example 2: Let
$
\mathrm{X}=\left[\begin{array}{lll}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{array}\right], \mathrm{Y}=\alpha \mathrm{I}+\beta \mathrm{X}+\gamma \mathrm{X}^2
$ and $\mathrm{Z}=\alpha^2 \mathrm{I}-\alpha \beta \mathrm{X}+\left(\beta^2-\alpha \gamma\right) \mathrm{X}^2$, where $\alpha, \beta, \gamma \in \mathbb{R}$. If $\mathrm{Y}^{-1}=\left[\begin{array}{ccc}
1 / 5 & -2 / 5 & 1 / 5 \\
0 & 1 / 5 & -2 / 5 \\
0 & 0 & 1 / 5
\end{array}\right]$, then $(\alpha-\beta+\gamma)^2$ is equal to

[JEE MAINS 2022]

Solution:

$
\begin{aligned}
& \mathrm{X}=\left[\begin{array}{lll}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{array}\right] \\
& \mathrm{X}^2=\left[\begin{array}{lll}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right] \\
& \therefore \mathrm{Y}=\alpha \mathrm{I}+\beta \mathrm{X}+\gamma \mathrm{X}^2=\left[\begin{array}{lll}
\alpha & \beta & \gamma \\
0 & \alpha & \beta \\
0 & 0 & \alpha
\end{array}\right] \\
& \mathrm{Z}=\left[\begin{array}{ccc}
\alpha^2 & -\alpha \beta & \beta^2-\alpha \gamma \\
0 & \alpha^2 & -\alpha \beta \\
0 & 0 & \alpha^2
\end{array}\right]
\end{aligned}
$

As $\mathrm{YY}^{-1}=\mathrm{I}$, compare the entries of $\mathrm{YY}^{-1}$ with those of the identity matrix:
$
\begin{aligned}
& \frac{\alpha}{5}=1 \Rightarrow \alpha=5 \\
& -\frac{2 \alpha}{5}+\frac{\beta}{5}=0 \Rightarrow \beta=2 \alpha=10 \\
& \frac{\alpha}{5}-\frac{2 \beta}{5}+\frac{\gamma}{5}=0 \Rightarrow \gamma=2 \beta-\alpha=15 \\
& \therefore(\alpha-\beta+\gamma)^2=(5-10+15)^2=100
\end{aligned}
$

Hence, the answer is 100.

Example 3: Let $A$ and $B$ be two $3 \times 3$ real matrices such that $\left(A^2-B^2\right)$ is an invertible matrix. If $A^5=B^5$ and $\mathrm{A}^3 \mathrm{~B}^2=\mathrm{A}^2 \mathrm{~B}^3$, then the value of the determinant of the matrix $\mathrm{A}^3+\mathrm{B}^3$ is equal to: [JEE MAINS 2021]

Solution:

$
A^5=B^5 \quad \& \quad A^3 B^2=A^2 B^3
$

Subtracting the second equation from the first:
$
\begin{aligned}
& A^5-A^3 B^2=B^5-A^2 B^3 \\
\Rightarrow & A^3\left(A^2-B^2\right)=-B^3\left(A^2-B^2\right) \\
\Rightarrow & \left(A^3+B^3\right)\left(A^2-B^2\right)=0 \\
\Rightarrow & \left|A^3+B^3\right| \cdot\left|A^2-B^2\right|=0 \\
\Rightarrow & \left|A^3+B^3\right|=0\left(\text { As }\left|A^2-B^2\right| \neq 0\right) .
\end{aligned}
$

Hence, $\left|\mathrm{A}^3+\mathrm{B}^3\right|=0$, i.e., the determinant of $\mathrm{A}^3+\mathrm{B}^3$ is 0.

Example 4: The number of matrices $\mathrm{A}=\left(\begin{array}{ll}a & b \\ c & d\end{array}\right)$, where $\mathrm{a}, \mathrm{b}, \mathrm{c}, \mathrm{d} \in\{-1,0,1,2,3, \ldots \ldots, 10\}$, such that $\mathrm{A}=\mathrm{A}^{-1}$, is
[JEE MAINS 2022]

Solution:

$\begin{aligned} & \mathrm{A}=\left[\begin{array}{ll}a & b \\ c & d\end{array}\right] \\ & \mathrm{A}=\mathrm{A}^{-1} \\ & \Rightarrow \mathrm{AA}=\mathrm{A}^{-1} \mathrm{~A} \\ & \Rightarrow \mathrm{A}^2=\mathrm{I} \\ & \Rightarrow\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]=\left[\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right] \\ & \Rightarrow\left[\begin{array}{ll}a^2+b c & a b+b d \\ a c+c d & b c+d^2\end{array}\right]=\left[\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right] \\ & \therefore \mathrm{a}^2+\mathrm{bc}=1, \mathrm{~d}^2+\mathrm{bc}=1, \mathrm{~b}(\mathrm{a}+\mathrm{d})=0, \mathrm{c}(\mathrm{a}+\mathrm{d})=0\end{aligned}$

From the first two equations,

$
\mathrm{a}^2=\mathrm{d}^2 \Rightarrow \mathrm{a}=\mathrm{d}, \mathrm{a}=-\mathrm{d}
$

Case I: $\mathrm{a}=-\mathrm{d}$
Here $(\mathrm{a}, \mathrm{d})$ can be $(0,0),(1,-1)$ or $(-1,1)$. For $(0,0)$ we need $\mathrm{bc}=1$, so $(\mathrm{b}, \mathrm{c})$ is $(1,1)$ or $(-1,-1)$: 2 choices. For $(1,-1)$ and $(-1,1)$ we need $\mathrm{bc}=0$, so $\mathrm{b}=0$ or $\mathrm{c}=0$: $12+12-1=23$ choices each. This gives $2+23+23=48$ matrices.
Case II: $\mathrm{a}=\mathrm{d}$
Then $\mathrm{b}=\mathrm{c}=0$ and $\mathrm{a}^2=1$, so $(\mathrm{a}, \mathrm{d})$ can be $(1,1)$ or $(-1,-1)$, with $(\mathrm{b}, \mathrm{c})=(0,0)$ only: 2 matrices.
$\therefore$ Total $=48+2=50$

Hence, the answer is 50.
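The count can be confirmed by brute force, since the entry set has only 12 values (a sketch, using the fact that $\mathrm{A}=\mathrm{A}^{-1}$ is equivalent to $\mathrm{A}^2=\mathrm{I}$):

```python
import numpy as np
from itertools import product

vals = range(-1, 11)   # the set {-1, 0, 1, ..., 10}
count = 0
for a, b, c, d in product(vals, repeat=4):
    A = np.array([[a, b], [c, d]])
    # A = A^{-1} is equivalent to A^2 = I (such an A is automatically invertible)
    if np.array_equal(A @ A, np.eye(2, dtype=int)):
        count += 1
print(count)   # 50
```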

Example 5: Let $\mathrm{A}=\left[\begin{array}{cc}1 & 2 \\ -1 & 4\end{array}\right]$. If $\mathrm{A}^{-1}=\alpha \mathrm{I}+\beta \mathrm{A}, \alpha, \beta \in \mathbf{R}$, where $\mathrm{I}$ is a $2 \times 2$ identity matrix, then $4(\alpha-\beta)$ is equal to:
[JEE MAINS 2021]

Solution:

$
\begin{aligned}
& \text {Given } A=\left[\begin{array}{rr}
1 & 2 \\
-1 & 4
\end{array}\right] \\
& \Rightarrow|A|=4+2=6, \text { so the inverse exists } \\
& A^{-1}=\frac{\operatorname{adj}(A)}{|A|}=\frac{1}{6}\left[\begin{array}{cc}
4 & -2 \\
1 & 1
\end{array}\right]=\left[\begin{array}{cc}
\frac{2}{3} & -\frac{1}{3} \\
\frac{1}{6} & \frac{1}{6}
\end{array}\right]
\end{aligned}
$

As $A^{-1}=\alpha I+\beta A$

$\begin{aligned} & \Rightarrow\left[\begin{array}{cc}\frac{2}{3} & -\frac{1}{3} \\ \frac{1}{6} & \frac{1}{6}\end{array}\right]=\left[\begin{array}{ll}\alpha & 0 \\ 0 & \alpha\end{array}\right]+\left[\begin{array}{cc}\beta & 2 \beta \\ -\beta & 4 \beta\end{array}\right] \\ & \Rightarrow\left[\begin{array}{cc}\frac{2}{3} & -\frac{1}{3} \\ \frac{1}{6} & \frac{1}{6}\end{array}\right]=\left[\begin{array}{cc}\alpha+\beta & 2 \beta \\ -\beta & \alpha+4 \beta\end{array}\right] \\ & \therefore \quad-\beta=\frac{1}{6} \Rightarrow \beta=-\frac{1}{6} \\ & \Rightarrow \alpha+\beta=\frac{2}{3} \Rightarrow \alpha=\frac{5}{6} \\ & \therefore \quad 4(\alpha-\beta)=4\left(\frac{5}{6}+\frac{1}{6}\right) \\ & =4\end{aligned}$

Hence, the answer is 4.
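A quick numerical check of the values of $\alpha$ and $\beta$ found above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0]])

alpha, beta = 5 / 6, -1 / 6
# A^{-1} = alpha*I + beta*A with the values found above
print(np.allclose(np.linalg.inv(A), alpha * np.eye(2) + beta * A))  # True
print(np.isclose(4 * (alpha - beta), 4))                            # True
```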


Frequently Asked Questions (FAQs)

1. Define the inverse of a matrix.

A non-singular square matrix A is said to be invertible if there exists a non-singular square matrix B such that AB = I = BA and the matrix B is called the inverse of matrix A.

2. How to calculate the inverse of a matrix?

We use the formula $A^{-1}=\frac{1}{|A|} \cdot \operatorname{adj}(A)$, which is valid when $|A| \neq 0$.

3. Are elementary row transformations and the inverse of a matrix the same?

No, elementary row transformations and the inverse of a matrix are not the same. Elementary row transformation is a method used to find the inverse of a matrix.

4. If a matrix is singular, can we find its inverse?

No, if a matrix is singular we cannot find its inverse, because the determinant of a singular matrix is zero and the formula $A^{-1}=\frac{\operatorname{adj}(A)}{|A|}$ would require dividing by zero.

5. Can we find the inverse of a rectangular matrix?

No, the inverse of a rectangular matrix does not exist, because its determinant is not defined. We can only find the inverse of a square matrix.

6. What is the significance of the identity matrix in matrix inversion?
The identity matrix (I) plays a crucial role in matrix inversion. By definition, AA^(-1) = A^(-1)A = I. The identity matrix acts like the number 1 in regular multiplication, leaving any matrix unchanged when multiplied by it. It serves as a benchmark for defining and verifying matrix inverses.
7. How does matrix inversion relate to solving homogeneous systems of equations?
In a homogeneous system Ax = 0, where A is a square matrix, non-trivial solutions exist if and only if A is not invertible. If A is invertible, the only solution is the trivial solution (x = 0). This is because if A^(-1) exists, we can multiply both sides by A^(-1) to get x = A^(-1)0 = 0.
8. How does the inverse of a matrix change when the matrix is scaled?
If a matrix A is scaled by a factor k to become kA, the inverse of kA is (1/k)A^(-1). This relationship shows that scaling a matrix by k scales its inverse by 1/k. This property is useful in understanding how transformations scale and how their inverses behave.
9. How does the condition number of a matrix relate to the stability of its inverse?
The condition number of a matrix measures how sensitive its inverse is to small changes in the original matrix. A high condition number indicates that small errors in the input can lead to large errors in the inverse, making the matrix ill-conditioned. Well-conditioned matrices (with low condition numbers) have more stable inverses.
10. What is the relationship between matrix inversion and matrix adjoint?
For an invertible matrix A, its inverse can be expressed as A^(-1) = (1/det(A)) * adj(A), where adj(A) is the adjoint (or adjugate) of A. The adjoint is the transpose of the cofactor matrix. This relationship provides an alternative method for calculating inverses, especially useful for smaller matrices.
11. How does the inverse of a 2x2 matrix differ from larger matrices?
For a 2x2 matrix, there's a simple formula to find the inverse: for matrix [[a, b], [c, d]], the inverse is (1/(ad-bc)) * [[d, -b], [-c, a]]. For larger matrices, more complex methods like Gaussian elimination or adjoint method are typically used. The 2x2 case is often used to introduce the concept before moving to more general methods.
12. How does matrix inversion relate to solving systems of linear equations?
Matrix inversion provides a method for solving systems of linear equations. If Ax = b represents a system of equations, where A is an invertible matrix, x is the vector of unknowns, and b is the constant vector, then the solution is given by x = A^(-1)b. This demonstrates how the inverse "undoes" the effect of A on b to find x.
13. Why is the inverse of a product of matrices equal to the product of their inverses in reverse order?
For matrices A and B, (AB)^(-1) = B^(-1)A^(-1). This is because matrix multiplication is not commutative, so the order matters. The inverse of the product undoes the transformations in reverse order: first B^(-1) undoes B, then A^(-1) undoes A.
14. How does the determinant of an inverse matrix relate to the original matrix?
The determinant of an inverse matrix is the reciprocal of the determinant of the original matrix. Mathematically, det(A^(-1)) = 1 / det(A). This relationship holds because det(AA^(-1)) = det(A) * det(A^(-1)) = det(I) = 1.
15. What is the connection between eigenvalues of a matrix and its inverse?
If λ is an eigenvalue of matrix A, then 1/λ is an eigenvalue of A^(-1). This relationship exists because if Av = λv for eigenvector v, then A^(-1)v = (1/λ)v. This connection helps in understanding the behavior of inverse matrices in terms of their effect on eigenvectors.
16. What is an inverse matrix?
An inverse matrix is a matrix that, when multiplied by the original matrix, results in the identity matrix. It essentially "undoes" the effect of the original matrix. For a matrix A, its inverse is denoted as A^(-1), and A * A^(-1) = A^(-1) * A = I, where I is the identity matrix.
17. Can a non-square matrix have an inverse?
No, only square matrices can have inverses. Non-square matrices do not have inverses in the traditional sense. However, they may have left or right inverses, which are different concepts used in specific contexts.
18. How do you know if a matrix has an inverse?
A matrix has an inverse if and only if it is square (has the same number of rows and columns) and its determinant is not zero. Such matrices are called non-singular or invertible matrices. If the determinant is zero, the matrix is singular and does not have an inverse.
19. What is the relationship between matrix invertibility and linear independence?
A matrix is invertible if and only if its columns (or rows) are linearly independent. This means that no column can be expressed as a linear combination of the other columns. Linear independence ensures that the matrix transformation is one-to-one and onto, which is necessary for invertibility.
20. What is the difference between the inverse and the transpose of a matrix?
The inverse of a matrix A (denoted A^(-1)) is a matrix that, when multiplied by A, gives the identity matrix. The transpose of A (denoted A^T) is obtained by interchanging its rows and columns. While every invertible matrix has both an inverse and a transpose, these are generally different matrices.
21. How does the concept of matrix inversion extend to infinite-dimensional spaces?
In infinite-dimensional spaces, such as function spaces, the concept of matrix inversion extends to operator theory. Bounded linear operators on Hilbert spaces can have inverses, but the conditions for invertibility become more complex. Concepts like compact operators and spectral theory play important roles in this generalization.
22. How does the concept of matrix inversion apply to systems of nonlinear equations?
While matrix inversion directly applies to linear systems, it's also used in solving nonlinear systems through iterative methods like Newton's method. Here, the inverse of the Jacobian matrix is used to approximate the solution in each iteration, connecting matrix inversion to nonlinear problem-solving.
23. How does the concept of matrix inversion extend to infinite matrices?
Infinite matrices, used in some areas of mathematical analysis, can have inverses defined in certain cases. However, the conditions for invertibility become more complex, involving concepts from functional analysis. Invertibility often depends on the specific space in which the matrix operates.
24. What is the geometric interpretation of a matrix inverse?
Geometrically, a matrix represents a linear transformation. The inverse matrix represents the reverse transformation that "undoes" the original transformation. For example, if a matrix represents a rotation, its inverse represents the opposite rotation that brings points back to their original positions.
25. How does matrix inversion relate to linear transformations?
In the context of linear transformations, a matrix A represents a specific transformation, while its inverse A^(-1) represents the inverse transformation. If A transforms a vector v to Av, then A^(-1) will transform Av back to v. This concept is crucial in understanding how to reverse linear transformations.
26. What is the relationship between matrix inversion and matrix decomposition?
Matrix decomposition methods, such as LU decomposition or QR decomposition, can be used to efficiently compute matrix inverses. These methods break down a matrix into simpler components, making the inversion process more manageable and computationally efficient, especially for large matrices.
27. Can a matrix with zero entries have an inverse?
Yes, a matrix with zero entries can have an inverse, as long as it's non-singular (its determinant is not zero). For example, a diagonal matrix with non-zero diagonal elements has an inverse, even if all other entries are zero. The key is not the presence of zeros, but whether the matrix is singular or non-singular.
28. What is the relationship between matrix inversion and matrix rank?
A matrix is invertible if and only if it has full rank. For an n×n matrix, full rank means its rank is n. This implies that all columns (or rows) are linearly independent. If a matrix is not full rank, it is singular and does not have an inverse.
29. What is the pseudo-inverse and when is it used?
The pseudo-inverse, or Moore-Penrose inverse, is a generalization of the inverse for non-square or singular matrices. It's used when a traditional inverse doesn't exist, such as in overdetermined or underdetermined systems of equations. The pseudo-inverse provides a "best fit" solution in these cases.
30. How does the inverse of a matrix relate to its eigenvalues and eigenvectors?
The eigenvectors of a matrix A are the same as the eigenvectors of its inverse A^(-1). However, if λ is an eigenvalue of A, then 1/λ is the corresponding eigenvalue of A^(-1). This relationship helps in understanding how inverting a matrix affects its spectral properties.
31. What is the significance of the inverse in matrix similarity transformations?
In similarity transformations, a matrix A is transformed to B = P^(-1)AP, where P is an invertible matrix. The inverse P^(-1) is crucial in this process. Similarity transformations preserve eigenvalues and are used to diagonalize matrices or put them in simpler forms while maintaining their fundamental properties.
32. What is the connection between matrix inversion and solving differential equations?
Matrix inversion is crucial in solving systems of linear differential equations. When such systems are expressed in matrix form, finding the inverse of the coefficient matrix is often a key step in obtaining the general solution. This applies particularly to systems of first-order linear differential equations with constant coefficients.
33. How does the inverse of a matrix change under elementary row operations?
Elementary row operations (scaling a row, swapping two rows, or adding a multiple of one row to another) performed on a matrix A correspond to left multiplication by elementary matrices. If a sequence of such operations reduces A to the identity matrix, then applying the same sequence, in the same order, to the identity matrix produces A^(-1). This is the basis of the Gauss-Jordan method for finding inverses.
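The Gauss-Jordan idea can be sketched in a few lines: augment A with the identity and row-reduce until the left half becomes I, at which point the right half is A^(-1). This is a teaching sketch with partial pivoting, not a production routine:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^(-1)]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: swap the largest entry in this column into the pivot row.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                      # scale the pivot to 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]   # clear the rest of the column
    return aug[:, n:]

A = np.array([[4.0, 7.0], [2.0, 6.0]])
A_inv = gauss_jordan_inverse(A)
```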
34. What is the role of matrix inversion in least squares regression?
In least squares regression, finding the best-fit parameters often involves solving the normal equations (X^T X)β = X^T y, where X is the design matrix. If (X^T X) is invertible, the solution is β = (X^T X)^(-1)X^T y. The inverse (X^T X)^(-1) plays a crucial role in determining the regression coefficients.
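The normal equations can be solved directly with this inverse; here is a minimal sketch on data that lies exactly on the line y = 1 + 2x (the data points are illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])            # exactly y = 1 + 2x

X = np.column_stack([np.ones_like(x), x])     # design matrix: [1, x] per row
beta = np.linalg.inv(X.T @ X) @ X.T @ y       # solution of the normal equations
```

In practice `np.linalg.lstsq` is preferred over forming the explicit inverse, which can be numerically unstable, but the formula above is the textbook derivation.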
35. How does the concept of matrix inversion apply to Markov chains?
In the analysis of absorbing Markov chains, the inverse of (I - Q), where Q is the submatrix of the transition matrix restricted to the transient states and I is the identity matrix, is the fundamental matrix. Its entries give the expected number of visits to each transient state, and it underlies many other long-term properties of the chain.
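A small sketch for an absorbing chain, where Q below is an illustrative transient-to-transient block of a transition matrix:

```python
import numpy as np

# Transition probabilities among the transient states only (illustrative values;
# the remaining probability mass in each row leads to absorbing states).
Q = np.array([[0.5, 0.2],
              [0.3, 0.4]])

# Fundamental matrix: N[i, j] = expected number of visits to transient
# state j when starting from transient state i.
N = np.linalg.inv(np.eye(2) - Q)
```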
36. What is the significance of the inverse in the context of change of basis?
When changing the basis of a vector space, the change of basis matrix P and its inverse P^(-1) are used. If v is a vector in the original basis and w is the same vector in the new basis, then w = P^(-1)v. The inverse is essential for converting coordinates between different bases.
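A minimal numerical sketch of a change of basis (the basis matrix P and vector v are illustrative):

```python
import numpy as np

# Columns of P are the new basis vectors written in the standard basis.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])       # coordinates in the standard basis
w = np.linalg.inv(P) @ v       # the same vector in the new basis
v_back = P @ w                 # converting back recovers v
```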
37. How does matrix inversion relate to the concept of duality in linear algebra?
Matrix inversion is closely related to duality in linear algebra. The transpose of the inverse, (A^(-1))^T, is equal to the inverse of the transpose, (A^T)^(-1). This relationship highlights the dual nature of row and column operations and is important in understanding dual spaces and linear functionals.
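The identity (A^(-1))^T = (A^T)^(-1) is easy to confirm numerically (example matrix chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
lhs = np.linalg.inv(A).T     # transpose of the inverse
rhs = np.linalg.inv(A.T)     # inverse of the transpose
```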
38. What is the role of matrix inversion in computer graphics transformations?
In computer graphics, matrix inversion is crucial for implementing inverse transformations. For example, if a matrix M represents a composite transformation (like rotation, scaling, and translation), M^(-1) is used to undo this transformation. This is important in operations like camera positioning and object manipulation in 3D space.
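As a 2D sketch of "undoing" a transformation (the rotation angle and point are illustrative):

```python
import numpy as np

theta = np.pi / 4                              # rotate 45 degrees about the origin
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.0, 0.0])
p_rotated = R @ p                              # apply the transformation
p_restored = np.linalg.inv(R) @ p_rotated      # R^(-1) undoes it
```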
39. How does the inverse of a matrix relate to its nullspace and range?
For an invertible matrix A, the nullspace contains only the zero vector, and the range is the entire codomain. This is because Ax = 0 has only the trivial solution, and for any b, there exists an x such that Ax = b. Understanding this helps in grasping the concept of bijective linear transformations.
40. What is the significance of the inverse in the context of matrix exponentials?
In the study of matrix exponentials, if A is invertible, then e^A is also invertible, and (e^A)^(-1) = e^(-A). This relationship is important in solving systems of differential equations and understanding continuous-time Markov chains, where matrix exponentials often appear.
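For a diagonal matrix the matrix exponential is just the entrywise exponential of the diagonal, which makes the identity (e^A)^(-1) = e^(-A) easy to check without any special libraries (the matrix is an illustrative diagonal example; for general matrices one would use a routine such as SciPy's `expm`):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])                   # diagonal, so e^A is easy to form

expA = np.diag(np.exp(np.diag(A)))           # e^A  = diag(e^1, e^2)
exp_negA = np.diag(np.exp(-np.diag(A)))      # e^-A = diag(e^-1, e^-2)
```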
41. How does the concept of matrix inversion extend to block matrices?
For a block matrix [[A, B], [C, D]], where A and D are square, if both A and the Schur complement S = D - CA^(-1)B are invertible, the inverse can be expressed as: [[A^(-1) + A^(-1)B S^(-1) CA^(-1), -A^(-1)B S^(-1)], [-S^(-1) CA^(-1), S^(-1)]]. This extends inversion to more complex matrix structures.
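The block formula can be verified numerically; the blocks below are small illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[4.0]])

M = np.block([[A, B], [C, D]])           # the full 3x3 block matrix

Ai = np.linalg.inv(A)
S = D - C @ Ai @ B                       # Schur complement of A
Si = np.linalg.inv(S)

# Block formula for the inverse of M.
M_inv = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                  [-Si @ C @ Ai,               Si]])
```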
42. What is the relationship between matrix inversion and matrix factorization?
Matrix factorization methods like LU, QR, or Cholesky decomposition can be used to efficiently compute matrix inverses. For example, if A = LU, then A^(-1) = U^(-1)L^(-1). These factorizations often provide more stable and efficient ways to compute inverses, especially for large matrices.
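As one concrete instance, a Cholesky factorization A = LL^T (available for symmetric positive definite matrices) gives A^(-1) = (L^T)^(-1) L^(-1); the matrix below is an illustrative example:

```python
import numpy as np

# A symmetric positive definite matrix (required for Cholesky).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)        # A = L @ L.T, with L lower triangular
L_inv = np.linalg.inv(L)         # triangular matrices are cheap to invert
A_inv = L_inv.T @ L_inv          # A^(-1) = (L.T)^(-1) @ L^(-1)
```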
43. How does the inverse of a matrix change when a row or column is added or removed?
Adding or removing a row or column changes the dimensions of the matrix, potentially affecting its invertibility. For bordered matrices (where a row and column are added), there are formulas relating the inverse of the original matrix to the inverse of the bordered matrix. This concept is important in updating matrix inverses efficiently.
44. What is the significance of the inverse in the context of matrix norms?
The condition number of a matrix A, defined as ||A|| * ||A^(-1)|| for some matrix norm, measures how close A is to being singular. A large condition number indicates that A is nearly singular, and its inverse may be numerically unstable. This concept is crucial in numerical linear algebra and error analysis.
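NumPy computes the condition number directly; the matrices below (one well-conditioned, one nearly singular) are illustrative:

```python
import numpy as np

well = np.array([[2.0, 0.0],
                 [0.0, 1.0]])          # condition number 2 (in the 2-norm)
ill = np.array([[1.0, 1.0],
                [1.0, 1.0001]])        # rows nearly parallel -> nearly singular

cond_well = np.linalg.cond(well)
cond_ill = np.linalg.cond(ill)         # very large: inversion is unstable
```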
45. What is the role of matrix inversion in principal component analysis (PCA)?
In PCA, the inverse of the covariance matrix is used to compute the Mahalanobis distance, which is important for understanding the spread of data in multiple dimensions. Additionally, when working with the correlation matrix instead of the covariance matrix, matrix inversion is involved in standardizing the variables.
46. How does the inverse of a matrix relate to its characteristic polynomial?
The characteristic polynomial of A^(-1) is closely related to that of A. If p(λ) is the characteristic polynomial of A, then the characteristic polynomial of A^(-1) is proportional to λ^n p(1/λ), where n is the size of the matrix; the two agree once the coefficients are rescaled to make the polynomial monic. This relationship helps in understanding how inverting a matrix affects its eigenvalues.
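This coefficient-reversal relationship can be seen numerically with `np.poly`, which returns characteristic polynomial coefficients (the matrix is an illustrative diagonal example):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

p_A = np.poly(A)                       # [1, -5, 6]  :  lambda^2 - 5*lambda + 6
p_Ainv = np.poly(np.linalg.inv(A))     # [1, -5/6, 1/6]

# Reversing p_A's coefficients and rescaling to be monic reproduces p_Ainv.
reversed_scaled = p_A[::-1] / p_A[-1]
```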
47. What is the significance of the inverse in the context of matrix powers?
For an invertible matrix A, negative powers are defined using the inverse: A^(-n) = (A^(-1))^n. This extends the concept of matrix powers to negative integers, analogous to how we define negative powers for real numbers. It's crucial in understanding matrix series and matrix functions.
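NumPy's `matrix_power` supports negative exponents for invertible matrices, which makes the definition easy to check (example matrix chosen for illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

A_inv = np.linalg.inv(A)
A_neg2 = np.linalg.matrix_power(A, -2)   # defined as (A^(-1))^2
```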
48. How does matrix inversion relate to the concept of orthogonality?
For an orthogonal matrix Q (where Q^T Q = QQ^T = I), the inverse is equal to its transpose: Q^(-1) = Q^T. This property makes orthogonal matrices particularly useful in many applications, as their inverses are easy to compute and numerically stable.
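Rotation matrices are a standard example of orthogonal matrices, so the property Q^(-1) = Q^T can be checked directly (the angle is illustrative):

```python
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotations are orthogonal

# For orthogonal Q, the inverse is simply the transpose.
Q_inv = np.linalg.inv(Q)
```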
49. What is the role of matrix inversion in solving linear programming problems?
In the simplex method for solving linear programming problems, matrix inversion is used when performing pivot operations. The inverse of the basis matrix is maintained and updated throughout the algorithm, allowing for efficient computation of the optimal solution.
50. What is the significance of the inverse in the context of matrix decompositions like SVD?
In Singular Value Decomposition (SVD), A = UΣV^T, the inverse of an invertible square matrix A can be expressed as A^(-1) = VΣ^(-1)U^T, where Σ^(-1) is obtained simply by taking the reciprocal of each non-zero singular value. This form is numerically convenient, and replacing the reciprocals of zero singular values with zeros yields the pseudo-inverse when A is singular or non-square.
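The SVD-based inverse V Σ^(-1) U^T can be checked against NumPy's direct inverse (the matrix is an illustrative example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)                 # A = U @ diag(s) @ Vt
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T       # A^(-1) = V @ Sigma^(-1) @ U^T
```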
