Multiplication Theorem on Probability: Formulas and Proof

Komal Miglani | Updated on 02 Jul 2025, 07:54 PM IST

Probability is defined as the ratio of the number of favorable outcomes to the total number of outcomes. The multiplication rule of probability is an important concept used to find the likelihood of two events occurring together, that is, their joint probability. It is also known as the multiplication theorem of probability. In it, the probability of one event is multiplied by the conditional probability of the other to obtain the likelihood of both events occurring together.

Multiplication Rule of Probability


The multiplication rule is closely linked to conditional probability. Conditional probability measures the probability of one event occurring given that another event has already occurred. The conditional probability of event $A$ given that $B$ has already occurred is written as $P(A \mid B)$, $P(A / B)$ or $P\left(\frac{A}{B}\right)$.

The formula to calculate $P(A \mid B)$ is
$P(A \mid B)=\frac{P(A \cap B)}{P(B)}$, where $P(B)$ is greater than zero. The event $A \cap B$ (both $A$ and $B$ occurring) is also written as $A B$.

The probability of event $A B$ or $A \cap B$ can be obtained by using the conditional probability.
The conditional probability of event $A$ given that $B$ has occurred is denoted by $P(A \mid B)$ and is given by

$
\mathrm{P}(\mathrm{A} \mid \mathrm{B})=\frac{\mathrm{P}(\mathrm{A} \cap \mathrm{B})}{\mathrm{P}(\mathrm{B})}, \mathrm{P}(\mathrm{B}) \neq 0
$

Using this result, we can write

$
\mathrm{P}(\mathrm{A} \cap \mathrm{B})=\mathrm{P}(\mathrm{B}) \cdot \mathrm{P}(\mathrm{A} \mid \mathrm{B})
$

Also, we know that

$
\begin{aligned}
& P(B \mid A)=\frac{P(B \cap A)}{P(A)}, P(A) \neq 0 \\
\text { or } \quad & P(B \mid A)=\frac{P(A \cap B)}{P(A)}, P(A) \neq 0 \quad(\because A \cap B=B \cap A)
\end{aligned}
$

Thus, $\quad \mathrm{P}(\mathrm{A} \cap \mathrm{B})=\mathrm{P}(\mathrm{A}) \cdot \mathrm{P}(\mathrm{B} \mid \mathrm{A})$
Combining these two expressions for $\mathrm{P}(\mathrm{A} \cap \mathrm{B})$, we get

$
\begin{aligned}
\mathrm{P}(\mathrm{A} \cap \mathrm{B}) & =\mathrm{P}(\mathrm{A}) \cdot \mathrm{P}(\mathrm{B} \mid \mathrm{A}) \\
& =\mathrm{P}(\mathrm{B}) \cdot \mathrm{P}(\mathrm{A} \mid \mathrm{B}) \quad(\text { provided } \mathrm{P}(\mathrm{A}) \neq 0 \text { and } \mathrm{P}(\mathrm{B}) \neq 0)
\end{aligned}
$

The above result is known as the multiplication rule of probability.
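
The rule can be sanity-checked numerically. Below is a minimal Python sketch (an illustration added here, not part of the original derivation; the deck setup and trial count are assumptions) comparing the exact value of $P(A \cap B)=P(A) \cdot P(B \mid A)$ with a simulation, where $A$ is "the first of two cards drawn without replacement is an ace" and $B$ is "the second card is an ace".

```python
import random
from fractions import Fraction

# Exact value from the multiplication rule: two cards drawn without replacement,
# A = "first card is an ace", B = "second card is an ace".
# P(A ∩ B) = P(A) * P(B | A) = 4/52 * 3/51
exact = Fraction(4, 52) * Fraction(3, 51)

# Monte Carlo check of the same probability (trial count is an arbitrary choice)
trials = 200_000
deck = ["ace"] * 4 + ["other"] * 48
hits = 0
for _ in range(trials):
    first, second = random.sample(deck, 2)   # two draws without replacement
    if first == "ace" and second == "ace":
        hits += 1

print("exact P(A and B)     =", float(exact))   # about 0.00452
print("simulated P(A and B) =", hits / trials)
```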

Multiplication rule of probability for more than two events

If $A$, $B$, and $C$ are three events associated with a sample space, then we have

$
\begin{aligned}
P(A \cap B \cap C) & =P(A) P(B \mid A) P(C \mid A \cap B) \\
& =P(A) P(B \mid A) P(C \mid A B)
\end{aligned}
$

Similarly, the multiplication rule of probability can be extended for four or more events.
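
To make the three-event form concrete, here is a short Python sketch (a hedged illustration, not part of the original article; the card-drawing scenario and variable names are assumptions) that chains the conditional probabilities for drawing three aces in a row without replacement.

```python
from fractions import Fraction

# Chain rule for three events: P(A ∩ B ∩ C) = P(A) * P(B | A) * P(C | A ∩ B).
# A, B, C = "1st / 2nd / 3rd card drawn (without replacement) is an ace".
p_A = Fraction(4, 52)            # P(A)
p_B_given_A = Fraction(3, 51)    # P(B | A): 3 aces left among 51 cards
p_C_given_AB = Fraction(2, 50)   # P(C | A ∩ B): 2 aces left among 50 cards

p_all_three = p_A * p_B_given_A * p_C_given_AB
print(p_all_three)               # 1/5525
```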

Solved Examples Based on the Multiplication Rule:

Example 1: In a game, a man wins Rs. 100 if he gets a $5$ or $6$ on a throw of a fair die and loses Rs. 50 for getting any other number on the die. If he decides to throw the die either till he gets a five or a six or up to a maximum of three throws, then his expected gain/loss (in rupees) is:

1) $0$
2) $\frac{400}{3}$ gain
3) $\frac{400}{9}$ loss
4) $\frac{400}{3}$ loss

Solution
Let $A=$ getting $5$ or $6$
$B=$ not getting $5$ or $6$
So $P(A)=2 / 6=1 / 3$ and $P(B)=4 / 6=2 / 3$
Now as per the question, the following events are possible
$A$ or $BA$ or $BBA$ or $BBB$
Let us first find their probabilities

$
\begin{aligned}
& P(A)=1 / 3 \\
& P(B A)=P(B) \cdot P(A)=2 / 3 \cdot 1 / 3=2 / 9 \quad(\text {successive throws are independent })
\end{aligned}
$

Similarly $P(B B A)=4 / 27$ and $P(B B B)=8 / 27$
To get the expected value, let us first make a probability distribution

Event                        A      BA     BBA    BBB
Xi (money from the event)    100    50     0      -150
Pi                           1/3    2/9    4/27   8/27


Expectation $=100 \times 1 / 3+50 \times 2 / 9+0 \times 4 / 27+(-150) \times 8 / 27=0$
Hence, the answer is the option 1.
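
For readers who want to verify the expectation empirically, the following Monte Carlo sketch in Python (the function name, trial count, and structure are illustrative assumptions; the payoffs follow the problem statement) should print an average close to $0$.

```python
import random

# The player throws a fair die up to three times, stopping on a 5 or 6.
# A 5 or 6 wins Rs. 100; every other throw loses Rs. 50.
def play_once() -> int:
    gain = 0
    for _ in range(3):
        if random.randint(1, 6) >= 5:   # got a 5 or 6: win and stop
            return gain + 100
        gain -= 50                      # any other number: lose 50 and throw again
    return gain                         # three failures in a row: -150

trials = 200_000
average = sum(play_once() for _ in range(trials)) / trials
print(average)   # close to 0, matching the expected gain/loss of 0
```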

Example 2: Let $S=\left\{w_1, w_2, \ldots\right\}$ be the sample space associated with a random experiment. Let $P\left(w_n\right)=\frac{P\left(w_{n-1}\right)}{2}, n \geq 2$. Let $A=\{2 k+3 l: k, l \in \mathbb{N}\}$ and $B=\left\{w_n: n \in A\right\}$. Then $P(B)$ is equal to
1) $\frac{3}{64}$
2) $\frac{1}{16}$
3) $\frac{1}{32}$
4) $\frac{3}{32}$

Solution
$
\begin{aligned}
& \mathrm{A}=\{5,7,8,9,10,11, \ldots\} \\
& \mathrm{P}\left(\mathrm{W}_1\right)+\mathrm{P}\left(\mathrm{W}_2\right)+\mathrm{P}\left(\mathrm{W}_3\right)+\ldots=1 \\
& \mathrm{P}\left(\mathrm{W}_1\right)+\frac{\mathrm{P}\left(\mathrm{W}_1\right)}{2}+\frac{\mathrm{P}\left(\mathrm{W}_1\right)}{2^2}+\ldots=1 \\
& \Rightarrow \mathrm{P}\left(\mathrm{W}_1\right) \cdot\left(\frac{1}{1-1 / 2}\right)=1 \\
& \mathrm{P}\left(\mathrm{W}_1\right)=\frac{1}{2}, \quad \mathrm{P}\left(\mathrm{W}_{\mathrm{n}}\right)=\frac{1}{2} \cdot\left(\frac{1}{2}\right)^{\mathrm{n}-1}=\frac{1}{2^{\mathrm{n}}} \\
& \because \mathrm{B}=\left\{\mathrm{W}_{\mathrm{n}}: \mathrm{n} \in \mathrm{A}\right\}=\left\{\mathrm{W}_5, \mathrm{~W}_7, \mathrm{~W}_8, \ldots\right\} \\
& \mathrm{P}(\mathrm{B})=\mathrm{P}\left(\mathrm{W}_5\right)+\mathrm{P}\left(\mathrm{W}_7\right)+\mathrm{P}\left(\mathrm{W}_8\right)+\mathrm{P}\left(\mathrm{W}_9\right)+\ldots \\
& =\frac{1}{2^5}+\frac{1}{2^7}+\frac{1}{2^8}+\ldots \\
& =\frac{1}{32}+\frac{\frac{1}{2^7}}{1-\frac{1}{2}} \\
& =\frac{1}{32}+\frac{1}{2^7} \times 2 \\
& =\frac{1}{32}+\frac{1}{64}=\frac{2+1}{64}=\frac{3}{64}
\end{aligned}
$

Hence, the answer is option (1).
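
A quick numerical check of this series is sketched below (an illustrative Python snippet, not part of the original solution; the truncation point N is an assumption chosen so the neglected tail is negligible).

```python
# P(w_n) = 1/2^n and B = {w_n : n in A} with A = {2k + 3l : k, l >= 1} = {5, 7, 8, 9, 10, ...}.
N = 60
A = {2 * k + 3 * l for k in range(1, N) for l in range(1, N) if 2 * k + 3 * l <= N}
p_B = sum(0.5 ** n for n in A)
print(p_B, 3 / 64)   # both values are approximately 0.046875
```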

Example 3: Let $P(E)$ denote the probability of an event $E$. Given $P(A)=1$ and $P(B)=\frac{1}{2}$, the values of $P(A / B)$ and $P(B / A)$, respectively, are
1) $\frac{1}{4}, \frac{1}{2}$
2) $\frac{1}{2}, \frac{1}{4}$
3) $\frac{1}{2}, 1$
4) $1, \frac{1}{2}$

Solution
$\mathrm{P}(\mathrm{A})=1$
$\Rightarrow A$ is a sure event
$\Rightarrow A$ will definitely occur
So $A$ is independent of $B$ (an event of probability $1$ is independent of every other event)

Hence,

$
\mathrm{P}(\mathrm{A} \cap \mathrm{B})=\mathrm{P}(\mathrm{A}) \cdot \mathrm{P}(\mathrm{B})=1 \times \frac{1}{2}=\frac{1}{2}
$

So,

$
\left.\begin{array}{l}
P\left(\frac{A}{B}\right)=\frac{P(A \cap B)}{P(B)}=1 \\
P\left(\frac{B}{A}\right)=\frac{P(B \cap A)}{P(A)}=\frac{1}{2}
\end{array}\right\}
$

Hence, the answer is the option (4).
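
A concrete finite-sample-space check of this idea is sketched below (an illustrative Python snippet, not from the original solution; the die-roll events are assumptions chosen so that $P(A)=1$ and $P(B)=\frac{1}{2}$).

```python
from fractions import Fraction

# A = "result is at most 6" (the sure event, P(A) = 1), B = "result is even" (P(B) = 1/2).
outcomes = range(1, 7)
A = {x for x in outcomes if x <= 6}          # sure event
B = {x for x in outcomes if x % 2 == 0}

def prob(event):
    return Fraction(len(event), 6)           # equally likely outcomes on a fair die

p_A_given_B = prob(A & B) / prob(B)          # = 1
p_B_given_A = prob(B & A) / prob(A)          # = 1/2
print(p_A_given_B, p_B_given_A)
```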

Example 4: An examination consists of two papers, Paper $1$ and Paper $2$. The probability of failing in Paper $1$ is $0.3$ and that in Paper $2$ is $0.2$. Given that a student has failed in Paper $2$, the probability of failing in Paper $1$ is $0.6$. The probability of a student failing in both the papers is
1) $0.5$
2) $0.18$
3) $ 0.12$
4) $0.06$

Solution
Let $A$ and $B$ be the events of failing in Paper $1$ and Paper $2$, respectively.

$
\begin{aligned}
& \mathrm{P}(\mathrm{A})=0.3 \\
& \mathrm{P}(\mathrm{B})=0.2 \\
& \mathrm{P}\left(\frac{\mathrm{A}}{\mathrm{B}}\right)=0.6
\end{aligned}
$

Required Probability:

$
\begin{aligned}
& =\mathrm{P}(\mathrm{A} \cap \mathrm{B}) \\
& =\mathrm{P}(\mathrm{A} \mid \mathrm{B}) \cdot \mathrm{P}(\mathrm{B}) \\
& =0.6 \times 0.2=0.12
\end{aligned}
$

Hence, the answer is the option (3).

Example 5: If $\mathrm{P}(\mathrm{X})=1 / 4, \mathrm{P}(\mathrm{Y})=1 / 3$ and $\mathrm{P}(\mathrm{X} \cap \mathrm{Y})=1 / 12$, the value of $\mathrm{P}(\mathrm{Y} / \mathrm{X})$ is
1) $\frac{1}{4}$
2) $\frac{4}{25}$
3) $\frac{1}{3}$
4) $\frac{29}{50}$

Solution

$
\begin{aligned}
& \mathrm{P}(\mathrm{X})=1 / 4 \\
& \mathrm{P}(\mathrm{Y})=1 / 3 \\
& \mathrm{P}(\mathrm{X} \cap \mathrm{Y})=1 / 12 \\
& \mathrm{P}(\mathrm{Y} / \mathrm{X})=\frac{\mathrm{P}(\mathrm{X} \cap \mathrm{Y})}{\mathrm{P}(\mathrm{X})} \\
& \quad=\frac{1 / 12}{1 / 4}=1 / 3
\end{aligned}
$

Hence, the answer is the option (3).

Frequently Asked Questions (FAQs)

Q: How does the Multiplication Theorem apply to problems involving mixture distributions?
A:
In mixture distributions, the Multiplication Theorem is used to calculate joint probabilities of component selection and value generation, especially when the component selection and value generation are independent processes.
Q: What's the connection between the Multiplication Theorem and the concept of probabilistic graphical models?
A:
In probabilistic graphical models, the Multiplication Theorem is used to factorize joint probability distributions based on the independence relationships encoded in the graph structure.
Q: How does the Multiplication Theorem relate to the concept of statistical sufficiency?
A:
While not directly related, both concepts are fundamental in probability theory. The Multiplication Theorem can be used in calculations involving sufficient statistics, especially when dealing with independent observations.
Q: Can the Multiplication Theorem be used in problems involving stochastic processes?
A:
Yes, in stochastic processes, the Multiplication Theorem is often used to calculate joint probabilities of events at different time points, especially in Markov processes where future states depend only on the current state.
Q: How does the Multiplication Theorem apply to problems involving conditional variance?
A:
While not directly used to calculate conditional variance, the Multiplication Theorem is often involved in deriving properties of conditional variance, especially for independent random variables.
Q: What's the role of the Multiplication Theorem in calculating the probability of rare events in extreme value theory?
A:
In extreme value theory, the Multiplication Theorem can be used to calculate the probability of multiple rare events occurring together, which is often of interest in risk analysis.
Q: How does the Multiplication Theorem relate to the concept of entropy in information theory?
A:
While not directly related, both concepts deal with probabilities of events. The Multiplication Theorem can be used in calculations involving joint entropy of multiple random variables.
Q: Can the Multiplication Theorem be used in problems involving copulas in probability theory?
A:
Copulas are functions that describe the dependence between random variables. The Multiplication Theorem for independent variables is a special case where the copula is the product copula.
Q: How does the Multiplication Theorem apply to problems involving conditional risk in decision theory?
A:
In decision theory, the Multiplication Theorem is used to calculate joint probabilities of decisions and outcomes, which are then used to compute expected utilities and conditional risks.
Q: Can the Multiplication Theorem be used in problems involving conditional expectation?
A:
While not directly used in calculating conditional expectations, the Multiplication Theorem is often involved in deriving properties of conditional expectation, especially for independent events.